NAB Show 2019

This year the NAB Show seemed to emphasize its roots – the “B” in National Association of Broadcasters. Gone or barely visible were the fads of past years, such as stereoscopic 3D, 360-degree video, virtual/augmented reality, drones, etc. Not that these technologies are gone – merely that they have refocused on the smaller segment of market share that reflects reality. There’s not much point in promoting stereo 3D at NAB if most of the industry goes ‘meh’.

Big exhibitors of the past, like Quantel, RED, Apple, and Autodesk, are gone from the floor. Quantel products remain as part of Grass Valley (now owned by Belden), which is the consolidation of Grass Valley Group, Quantel, Snell & Wilcox, and Philips. RED decided last year that small, camera-centric shows were better venues. Apple – well, they haven’t been on the main floor for years, but even this year there was no off-site Final Cut Pro X stealth presence in a hotel suite somewhere. Autodesk, which shifted to a subscription model a couple of years ago, had a demo suite in the nearby Renaissance Hotel, focusing on its hero product, Flame 2020. Smoke for Mac users – tough luck. It’s been over for years.

This was a nuts-and-bolts year, with many exhibits showing new infrastructure products. These appeal to larger customers, such as broadcasters and network facilities. Specifically, the world is shifting to an IP-based infrastructure for signal routing, control, and transmission. This replaces the dedicated copper and fiber wiring of the past, along with the devices (routers, video switchers, etc.) at either end of the wire. Companies that might have appeared less relevant, like Grass Valley, are back in a strong sales position. Other companies, like Blackmagic Design, are being encouraged by their larger clients to fulfill those needs. And as ever, consolidation continues – this year VizRT acquired NewTek, an early player in video-over-IP with its proprietary NDI protocol.

Adobe

The NAB season unofficially started with Adobe’s pre-NAB release of the CC2019 update. For editors and designers, the hallmarks of this update include a new freeform bin view and adjustable guides in Premiere Pro, and content-aware video fill in After Effects. These are solid additions in response to customer requests, which is something Adobe has focused on. A smaller, but no less important, feature is Adobe’s ongoing effort to improve media performance on the Mac platform.

As in past years, their NAB booth was an opportunity to present these new features in-depth, as well as showcase speakers who use Adobe products for editing, sound, and design. Part of the editing team from the series Atlanta was on hand to discuss the team’s use of Premiere Pro and After Effects in their ‘editing crash pad’.

Avid

For many attendees, NAB actually kicked off on the weekend with Avid Connect, a gathering of Avid users (through the Avid Customer Association), featuring meet-and-greets, workshops, presentations, and ACA leadership committee meetings. While past product announcements at Connect have been subdued from the vantage of Media Composer editors, this year was a major surprise. Avid revealed its Media Composer 2019.5 update (scheduled for release at the end of May). This came as part of a host of other updates. Most of these apply to companies that have invested in the full Avid ecosystem, including Nexis storage and Media Central asset management. While those are superb, they only apply to a small percentage of the market. Let’s not forget Avid’s huge presence in the audio world, thanks to the dominance of Pro Tools – now with Dolby Atmos support. With the acquisition of Euphonix years back, Avid has become a significant player in the live and studio sound arena. Various examples of its S-series consoles in action were presented.

Since I focus on editing, let me discuss Media Composer a bit more. The 2019.5 refresh is the first major Media Composer overhaul in years. It started in secret last year. 2019.5 is the first iteration of the new UI, with more to be updated in coming releases. In short, the interface has been modernized and streamlined in ways to attract newer, younger users, without alienating established editors. Its panel design is similar to Adobe’s approach – i.e. interface panels can be docked, floated, stacked, or tabbed. Panels that you don’t want to see may be closed or simply slid to the side and hidden. Need to see a hidden panel again? Simply slide it back open from the edge of the screen.

This isn’t just a new skin. Avid has overhauled the internal video pipeline, with 32-bit floating point color and an uncompressed DNx codec. Project formats now support up to 16K. Avid is also compliant with the specs of the Netflix Post Alliance and the ACES logo program.

I found the new version very easy to use and a welcome change; however, it will require some adaptation if you’ve been using Media Composer for a long time. In a nod to the Media Composer heritage, the weightlifter (aka ‘liftman’) and scissors icons (for lift and extract edits) are back. Even though Media Composer 2019.5 is just in early beta testing, Avid felt good enough about it to use this version in its workshops, presentations, and stage demos.

One of the reasons to go to NAB is for the in-person presentations by top editors about their real-world experiences. No one can top Avid at this game; they can easily tap a host of Oscar, Emmy, BAFTA, and Eddie award winners. The highlight for many this year was the presentation at Avid Connect and/or at the show by the Oscar-winning picture and sound editing/mixing team for Bohemian Rhapsody. It’s hard not to gather a standing-room-only crowd when you close your talk with the Live Aid finale sequence played in kick-ass surround!

Blackmagic Design

Attendees and worldwide observers have come to expect a surprise NAB product announcement out of Grant Petty each year, and he certainly didn’t disappoint this time. Before I get into that, there were quite a few products released, including tools for IP infrastructures, 8K production and post, and more. Blackmagic is a full-spectrum video and audio manufacturer that long ago moved into the ‘big leagues’. This means that, just like Avid or Grass Valley, they have to respond to pressure from large users to develop products designed around their specific workflow needs. In the BMD booth, many of those development fruits were on display, like the new HyperDeck Extreme 8K HDR recorder and the ATEM Constellation 8K switcher.

The big reveal for editors was DaVinci Resolve 16. Blackmagic has steadily been moving into the editorial space with this all-in-one, edit/color/mix/effects/finishing application. If you have no business requirement for – or emotional attachment to – one of the other NLE brands, then Resolve (free) or Resolve Studio (paid) is an absolute no-brainer. Nothing can touch the combined power of Resolve’s feature set.

New for Resolve 16 is an additional editorial module called the Cut Page. At first blush, the design, layout, and operation are amazingly similar to Apple’s Final Cut Pro X. Blackmagic’s intent is to make a fast editor where you can start and end your project for a time-sensitive turnaround without the complexities of the Edit Page. However, it’s just another tool, so you could work entirely in the Cut Page, or start in the Cut Page and refine your timeline in the Edit Page, or skip the Cut Page altogether. Resolve offers a buffet of post tools that are at your disposal.

While Resolve 16’s Cut Page does elicit a chuckle from experienced FCPX users, it offers some new twists. For example, there’s a two-level timeline view – the top section is the full-length timeline and the bottom section is the zoomed-in detail view. The intent is quick navigation without the need to constantly zoom in and out of long timelines. There’s also an automatic sync detection function. Let’s say you are cutting a two-camera show. Drop the A-camera clips onto the timeline and then go through your B-camera footage. Find a cut-away shot, mark in/out on the source, and edit. It will ‘automagically’ edit to the in-sync location on the timeline. I presume this is matched by either common sound or timecode. I’ll have to see how this works in practice, but it demos nicely. Changes to other aspects of Resolve were minor and evolutionary, except for one other notable feature. The Color Page added its own version of content-aware, video fill.

Another editorial product addition – tied to the theme of faster, more-efficient editing – was a new edit keyboard. Anyone who’s ever cut in the linear days – especially those who ran Sony BVE9000/9100 controllers – will feel very nostalgic. It’s a robust keyboard with a high-quality, integrated jog/shuttle knob. The feel is very much like controlling a tape deck in a linear system, with fast shuttle response and precise jogging. The precision is far better than any of the USB controllers, like a Contour Shuttle. Whether enough people will have interest in shelling out $1,025 for it remains to be seen. It’s a great tool, but are you really faster with one than with FCPX’s skimming and a standard keyboard and mouse?

Ironically, if you look around the Blackmagic Design booth, there does seem to be a nostalgic homage to Sony hardware of the past. As I said, the edit keyboard is very close to a BVE9100 keyboard. Even the style of the control panel on the Hyperdecks – and the look of the name badges on those panels – is very much Sony’s style. As humans, this appeals to our desire for something other than the glass interfaces we’ve been dealing with for the past few years. Michael Cioni (Panavision, Light Iron) coined the term ‘tactile attraction’ for this in his excellent Faster Together Stage talk. It manifests itself not only in these types of control surfaces, but also in skeuomorphic designs applied to audio filter interfaces. Or in the emotion created in the viewer when a colorist adds film grain to digital footage.

Maybe Grant is right and these methods are really faster in a pressure-filled production environment. Or maybe this is simply an effort to appeal to emotion and nostalgia by Blackmagic’s designers. (Check out Grant Petty’s two-hour 2019 Product Overview for more in-depth information on Blackmagic Design’s new products.)

8K

I won’t spill a lot of words on 8K. Seems kind of silly when most delivery is HD and even SD in some places. A lot of today’s production is in 4K, but really only for future-proofing. But the industry has to sell newer and flashier items, so they’ve moved on to 8K pixel resolution (7680 x 4320). Much of this is driven by Japanese broadcasters and manufacturers, who are pushing into 8K. You can laugh or roll your eyes, but NAB had many examples of 8K production tools (cameras and recorders) and display systems. Of course, it’s NAB, making it hard to tell how many of these are only prototypes and not yet ready for actual production and delivery.

For now, it’s still a 4K game, with plenty of mainstream product. Not only cameras and NLEs, but items like AJA’s KiPro family. The KiPro Ultra Plus records up to four channels of HD or one channel of 4K in ProRes or DNx. The newest member of the family is the KiPro GO, which records up to four channels of HD (25Mbps H.264) onto removable USB media.

Of course, the industry never stops, so while we are working with HD and 4K, and looking at 8K, the developers are planning ahead for 16K. As I mentioned, Avid already has project presets built-in for 16K projects. Yikes!

HDR

HDR – or high dynamic range – is about where it was last year. There are basically four formats vying to become the final standard used in all production, post, and display systems. While there are several frontrunners and edicts from distributors to deliver HDR-compatible masters, there still is no clear path. If you shoot in log or camera raw with nearly any professional camera produced within the past decade, you have originated footage that is HDR-compatible. But none of the low-cost post solutions make this easy. Without the right monitoring environment, you are wasting your time. If anything, those waters are muddier this year. There were a number of HDR displays throughout the show, but there were also a few labelled as using HDR simulation. I saw a couple of those at TV Logic. Yes, they looked gorgeous and yes, they were receiving an HDR signal. I found out that the ‘simulation’ part of the description meant that the display was bright (up to 350 nits), but not bright enough to qualify as ‘true’ HDR (1,000 nits or higher).

As in past transitions, we are certainly going to have to rely on some ‘glue’ products. For me, that’s AJA again. Through their relationship with Colorfront, AJA offers two HDR products: the HDR Image Analyzer and the FS-HDR converter. The latter was introduced last year as a real-time frame synchronizer and color converter to go between SDR and HDR display standards. The new Analyzer is designed to evaluate color space and gamut compliance. Just remember, no computer display can properly show you HDR, so if you need to post and deliver HDR, proper monitoring and analysis tools are essential.

Cameras

I’m not a cinematographer, but I do keep up with cameras. Nearly all of this year’s camera developments were evolutionary: new LF (large format sensor) cameras (ARRI), 4K camcorders (Sharp, JVC), a full-frame mirrorless camera from Nikon (with ProRes RAW recording coming in a future firmware update). Most of the developments were targeted towards live broadcast production, like sports and megachurches. Ikegami had an 8K camera to show, but their real focus was on 4K and IP camera control.

RED, a big player in the cinema space, was only there in a smaller demo room, so you couldn’t easily compare their 8K imagery against others on the floor, but let’s not forget Sony and Panasonic. While ARRI has been a favorite, due to the ‘look’ of the Alexa, Sony (Venice) and Panasonic (Varicam and now EVA-1) are also well-respected digital cinema tools that create outstanding images. For example, Sony’s booth featured an amazing, theater-sized, LED 8K micro-pixel display system. Some of the sample material shown was of the Rio Carnival, shot with anamorphic lenses on a 6K full-frame Sony Venice camera. Simply stunning.

Finally, let’s not forget Canon’s line-up of cinema cameras, from the C100 to the C700FF. To complement these, Canon introduced their new line of Sumire Prime lenses at the show. The C300 has been a staple of documentary films, including the Oscar-winning film, Free Solo, which I had the pleasure of watching on the flight to Las Vegas. Sweaty palms the whole way. It must have looked awesome in IMAX!

(For more on RED, cameras, and lenses at NAB, check out this thread from DP Phil Holland.)

It’s a wrap

In short, NAB 2019 had plenty for everyone. This also included smaller markets, like products for education seminars. One of these that I ran across was Cinamaker. They were demonstrating a complete multi-camera set-up using four iPhones and an iPad. The iPhones are the cameras (additional iPhones can be used as isolated sound recorders) and the iPad is the ‘switcher/control room’. The set-up can be wired or wireless, but camera control, video switching, and recording are done at the iPad. This can generate the final product, or be transferred to a Mac (with the line cut and camera iso media, plus edit list) for re-editing/refinement in Final Cut Pro X. Not too shabby, given the market that Cinamaker is striving to address.

For those of us who like to use the NAB Show exhibit floor as a miniature yardstick for the industry, one of the trends to watch is what type of gear is used in the booths and press areas. Specifically, one NLE over another, or one hardware platform versus the other. On that front, I saw plenty of Premiere Pro, along with some Final Cut Pro X. Hardware-wise, it looked like Apple versus HP. Granted, PC vendors, like HP, often supply gear to use in the booths as a form of sponsorship, so take this with a grain of salt. Nevertheless, I would guess that I saw more iMac Pros than any other single computer. For PCs, it was a mix of HP Z4, Z6, and Z8 workstations. HP and AMD were partner-sponsors of Avid Connect and they demoed very compelling set-ups with these Z-series units configured with AMD Radeon cards. These are very powerful workstations for editing, grading, mixing, and graphics.

©2019 Oliver Peters

The Nuances of Overcranking

The concept of overcranking and undercranking in the world of film and video production goes back to the origins of motion picture technology. The earliest film cameras required the camera operator to manually crank the film mechanism – they didn’t have internal motors. A good camera operator was partially judged by how constant a frame rate they could maintain while cranking the film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then the playback speed during projection was in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that the projection of old B&W films gained that fast, jerky motion we often incorrectly attribute to “old time movies” today. That characteristic motion is because they are no longer played at their intended speeds.

While manual film cranking seems anachronistic in modern times, it had the benefit of in-camera, variable-speed capture – aka speed ramps. Some modern film cameras include controlled mechanisms that still make this possible today – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slow-motion and fast-speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general, it was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording. Post shifted to digital, computer-based, rather than electro-mechanical methods. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms enable viewers to view videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common, possible system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smart phone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but the post is designed to be at 23.98, then the editor has four options. 1) Play the files in real time (29.97 in a 23.98 sequence), which will cause frames to be dropped, resulting in some stuttering on motion. 2) Play the footage at the slowed speed, so that there is a one-to-one relationship of frames, which doesn’t work for sync sound. 3) Go through a frame rate conversion before editing starts, which will result in blended and/or dropped frames. 4) Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set the system or target frame rate in addition to the actual recording rate. These may be called different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which this file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking. That is, you are recording slow motion in-camera.

(Note: from here on I will use simplified instead of integer numbers in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in playback that is fast motion at 200%. That’s the basis for hyperlapse/timelapse footage.
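
Put another way, the playback speed is simply the target rate divided by the record rate. Here is a minimal Python sketch of that relationship (illustrative only, using the simplified rates from above, not tied to any particular camera or NLE):

```python
def playback_speed(record_fps, target_fps):
    """Playback speed (as a percentage of real time) when every recorded
    frame maps one-to-one to a frame in the target-rate timeline."""
    return target_fps / record_fps * 100

print(playback_speed(48, 24))    # 50.0  -> overcranked, 50% slow motion
print(playback_speed(12, 24))    # 200.0 -> undercranked, 200% fast motion
print(playback_speed(120, 24))   # 20.0  -> high-speed capture, 5x slow motion
```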

The good news is that professional production cameras embed the pertinent metadata into the file so that editing and player software automatically knows what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically playback in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the use is for non-sound-sync B-roll footage, then altering the frame rate (making it play slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice. The slower footage helps to smooth out the vibration introduced by using such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recordings, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed. It wobbles around that rate – sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.
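
To see why the audio sample rate changes, it helps to run the numbers on the 30-over-24 example. This is just back-of-the-envelope math (not the behavior of any specific transcoder), assuming 48kHz production audio:

```python
# Conforming 29.97fps material to play one-to-one at 23.98 slows everything,
# including the embedded audio, by the same ratio.
record_fps = 30000 / 1001            # 29.97
target_fps = 24000 / 1001            # 23.976
speed = target_fps / record_fps      # 0.8 -> 80% playback speed

original_sample_rate = 48000
effective_sample_rate = original_sample_rate * speed   # 38400 Hz

print(round(speed, 3), round(effective_sample_rate))   # 0.8 38400
```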

Interpret footage

This may have a different name in any given application, but interpret footage is a function to make the application think that the file should be played at a different rate than it was recorded at. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can re-write the QuickTime header information without transcoding the file. Then that file shows up at the desired rate inside of the NLE. In the case of FCPX, the same potential audio issues can arise as described above if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow. The proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking in Resolve will be incorrect. It may result in not being able to relink these files at all, because the timecode that Resolve looks for falls outside of the boundaries of the file. So use this function with caution.
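
The relinking failure is easier to see with numbers. Here is an illustrative calculation (assumed clip length, not from any real project) showing how interpreting a clip at a slower rate stretches its duration – and therefore its timecode range – past what was actually recorded:

```python
# A roughly 60-second clip recorded at 29.97fps contains about 1798 frames.
recorded_frames = 1798
native_fps = 30000 / 1001        # 29.97 (as recorded)
interpreted_fps = 24000 / 1001   # 23.976 (after "interpret footage")

native_duration = recorded_frames / native_fps            # ~60 seconds
interpreted_duration = recorded_frames / interpreted_fps   # ~75 seconds

print(round(native_duration, 2), round(interpreted_duration, 2))
# Timecode references beyond the original ~60 seconds now point past the
# physical end of the file, which is why relinking can fail.
```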

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately this introduces issues like skipping frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of that rate, much like traditional film post. A 200% speed increase is a proper multiple. 150% is not. The former means you are playing every other frame from a clip for smooth action. The latter means every third source frame gets skipped in playback, leaving you with some unevenness in the movement.
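
A quick way to convince yourself of this is to look at which source frames actually get used at each speed. The sketch below (a simplification of what an NLE does when it drops or duplicates frames) just rounds the source frame index for each output frame:

```python
def frames_used(speed_percent, output_frames=12):
    """Which source frame lands on each output frame at a given speed."""
    step = speed_percent / 100
    return [round(i * step) for i in range(output_frames)]

print(frames_used(200))  # [0, 2, 4, 6, 8, 10, ...] every other frame, even motion
print(frames_used(150))  # [0, 2, 3, 4, 6, 8, 9, ...] an uneven skip pattern
```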

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle this better than others. Optical flow “creates” new  in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and may often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changes in motion interpolation in Resolve are pointless, unless the final export of the timeline is from Resolve. If clips go back to your NLE for finishing, then it will be that software which determines the quality of motion effects. Twixtor is a plug-in that many editors use when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and the ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate for frames to play one-to-one (no blended, duplicate, or skipped frames). Final Cut Pro X handles this in the best manner, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually. 

The easiest way to get a value that yields clean frames (one-to-one frame rate) is to simply divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. Simple numbers yield the same math results as integer numbers. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion speed. A 59.94fps clip is 40%. A 25fps clip is 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want this to be one-to-one, it will have to be sped up. But the calculation is the same. 30 divided by 24 results in a 125% speed adjustment. And so on.
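
If you would rather not do the division in your head, the whole rule fits in a couple of lines. A minimal sketch using the simplified rates discussed above:

```python
def conform_speed(timeline_fps, clip_fps):
    """Speed (%) to apply to a clip so its frames play one-to-one
    in a timeline running at a different base rate."""
    return timeline_fps / clip_fps * 100

print(conform_speed(24, 30))  # 80.0  -> 29.97 clip in a 23.98 timeline
print(conform_speed(24, 60))  # 40.0  -> 59.94 clip in a 23.98 timeline
print(conform_speed(24, 25))  # 96.0  -> 25fps clip in a 23.98 timeline
print(conform_speed(30, 24))  # 125.0 -> 23.98 clip in a 29.97 timeline
```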

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

© 2019 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front, if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out their REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with all other applications. The exception is Motion, which will read and play the files, but with incorrect default – albeit correctable – video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Shogun Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But, ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos’ firmware is taking in the camera’s form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports 12-bit color depth, but it depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that’s the bit-depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered to register the data for light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range capable for the exposure range of the camera and that sensor. It’s just an electrical signal being turned into data, but without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can also be converted directly into a full-color video signal and then recorded – again, with or without compression.
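
To make the “twice as many green receptors” point concrete, here is a toy sketch of a repeating RGGB Bayer tile and the resulting photosite counts. This is purely illustrative – not any manufacturer’s actual sensor layout:

```python
# One repeating 2x2 Bayer tile: R G / G B.
tile = [["R", "G"],
        ["G", "B"]]

width, height = 8, 8  # a toy 8x8 "sensor"
mosaic = [[tile[y % 2][x % 2] for x in range(width)] for y in range(height)]

counts = {c: sum(row.count(c) for row in mosaic) for c in "RGB"}
print(counts)  # {'R': 16, 'G': 32, 'B': 16} -> half the photosites are green
```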

If the RGGB photosite data (camera raw) is converted into RGB pixels, then sensor color information is said to be “baked” into the file. However, if the data is stored in its raw form and only converted to RGB later in post, the sensor data is preserved intact until much later in the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that a wider dynamic range is being lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That’s part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec709 color space, not HDR or wide gamut. If you set the inspector’s LUT tab to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control of the image, prior to tweaking with FCPX’s controls. However, the amount of image control for the raw file is significantly more for a REDCODE file in Premiere Pro, than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether or not the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file.  When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.

(Images courtesy of Apple, FilmPlusGear, and OffHollywood.)

©2018 Oliver Peters

HDR and RAW Demystified, Part 2

(Part 1 of this series is linked here.) One of the surprises of NAB 2018 was the announcement of Apple ProRes RAW. This brought camera raw video to the forefront for many who had previously discounted it. To understand the ‘what’ and ‘why’ about raw, we first have to understand camera sensors.

For quite some years now, cameras have been engineered with a single CMOS sensor. Most of these sensors use a Bayer-pattern array of photosites – named for Bryce Bayer, a Kodak color scientist who developed the system. Photosites are the light-receiving elements of a sensor. The Bayer pattern is a checkerboard filter that separates light according to red/blue/green wavelengths. Each photosite captures light as monochrome data that has been separated according to color components. In doing so, the camera captures a wide exposure latitude as linear data. This is greater than what can be squeezed into standard video in this native form. There is a correlation between physical photosite size and resolution. With smaller photosites, more can fit on the sensor, yielding greater native resolution. But, with fewer, larger photosites, the sensor has better low-light capabilities. In short, resolution and exposure latitude are a trade-off in sensor design.

Log encoding

Typically, raw data is converted into RGB video by the internal electronics of the camera. It is then subsequently converted into component digital video and recorded using a compressed or uncompressed codec and one of the various color sampling schemes (4:4:4, 4:2:2, 4:1:1, 4:2:0). These numbers express a ratio that represents YCrCb – where Y = luminance (the first number) and CrCb = two color difference signals (the second two numbers) used to derive color information. You may also see this written as YUV, Y/R-Y/B-Y, or other forms. In the conversion, sampling, and compression process, some information is lost. For instance, a 4:4:4 codec preserves twice as much color information as a 4:2:2 codec. Two methods are used to preserve wide color gamuts and extended dynamic range: log encoding and camera raw capture.
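
The sampling ratios translate directly into how many chroma samples are stored per block of pixels. A simplified bit of bookkeeping (ignoring bit depth and compression) shows why 4:4:4 carries twice the color information of 4:2:2:

```python
# (luma, Cb, Cr) samples per 4x2 block of pixels, the classic way
# these ratios are defined.
schemes = {
    "4:4:4": (8, 8, 8),  # full chroma on every pixel
    "4:2:2": (8, 4, 4),  # chroma halved horizontally
    "4:1:1": (8, 2, 2),  # chroma on every fourth pixel horizontally
    "4:2:0": (8, 2, 2),  # chroma halved horizontally and vertically
}

for name, (y, cb, cr) in schemes.items():
    print(name, f"{y + cb + cr} samples per 8 pixels,",
          f"{(cb + cr) / 16:.0%} of the chroma data of 4:4:4")
```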

Most camera manufacturers offer some form of logarithmic video encoding, but the best-known is ARRI’s Log-C. Log encoding applies a logarithm to linear sensor data in order to compress that data into a “curve”, which will fit into the available video signal “bucket”. Log-C video, when left uncorrected and viewed in Rec. 709, will appear to lack contrast and saturation. To correct the image, a LUT (color look-up table) must be applied, which is the mathematic inverse of the process used to encode the Log-C signal. Once restored, the image can be graded to use and/or discard as much of the data as needed, depending on whether you are working in an SDR or HDR mode.
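
The “mathematical inverse” idea is easy to demonstrate with a generic curve. To be clear, this is NOT ARRI’s actual Log-C math – just a made-up logarithmic encode and its exact inverse, showing how a viewing LUT can undo the encoding:

```python
import math

A = 5.0  # arbitrary curve strength for this illustration

def log_encode(linear):
    # Compress linear light into a 0-1 "curve"
    return math.log(1 + A * linear) / math.log(1 + A)

def log_decode(encoded):
    # The exact inverse, as a viewing LUT would apply
    return ((1 + A) ** encoded - 1) / A

x = 0.18  # 18% gray as a linear value
encoded = log_encode(x)
print(round(encoded, 4), round(log_decode(encoded), 4))  # ~0.3582 0.18
```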

Remember that the conversion from a flat, log image to full color will only look good when you have bit-depth precision. This means that if you are working with log material in an 8-bit system, you only have 256 steps between black and white. That may not be enough and the grade from log to full color may result in banding. If you work in a 10-bit system, then you have 1024 steps instead of only 256 between the same black and white points. This greater precision yields a smoother transition in gradients and, therefore, no banding. If you work with ProRes recordings, then according to Apple, “Apple ProRes 4444 XQ and Apple ProRes 4444 support image sources up to 12 bits and preserve alpha sample depths up to 16 bits. All Apple ProRes 422 codecs support up to 10-bit image sources, though the best 10-bit quality is obtained with the higher-bit-rate family members – Apple ProRes 422 and Apple ProRes 422 HQ.”
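
The banding argument is easy to put numbers on. A small worked example (pure math, not any particular codec’s quantization) of how many code values separate black from white at each bit depth:

```python
# Steps between black and white at common bit depths, and how large a
# step that is across a 0-100% gradient.
for bits in (8, 10, 12):
    steps = 2 ** bits                 # 256, 1024, 4096
    step_size = 100 / (steps - 1)     # percent of the range per step
    print(f"{bits}-bit: {steps} steps, ~{step_size:.3f}% of the range per step")
```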

Camera raw

RAW is not an acronym. It’s simply shorthand for camera raw information. Before video, camera raw was first used in photography, typified by Canon raw (.cr2) and Adobe’s Digital Negative (.dng) formats. The latter was released as an open standard and is widely used in video as Cinema DNG.

Camera raw in video cameras made its first practical introduction when RED Digital Cinema introduced their RED ONE cameras equipped with REDCODE RAW. While not the first with raw, RED’s innovation was to record a compressed data stream as a movie file (.r3d), which made post-production significantly easier. The key difference between raw workflows and non-raw workflows is that with raw, the conversion into video no longer takes place in the camera or an external recorder. This conversion happens in post. Since the final color and dynamic range data is not “baked” into the file, the post-production process used can be improved in future years, making an even better result possible with an updated software version.

Camera raw data is usually proprietary to each manufacturer. In order for any photographic or video application to properly decode a camera raw signal, it must have a plug-in from that particular manufacturer. Some of these are included with a host application and some require that you download and install a camera-specific add-on. Such add-ons or plug-ins are considered to be a software “black box”. The decoding process is hidden from the host application, but the camera supplier will enable certain control points that an editor or colorist can adjust. For example, with RED’s raw module, you have access to exposure, the demosaicing (de-Bayering) resolution, RED’s color science method, and color temperature/tint. Other camera manufacturers will offer less.

Apple ProRes RAW

The release of ProRes RAW gives Apple a raw codec that is optimized for multi-stream playback performance in Final Cut Pro X and on the newest Apple hardware. This is an acquisition codec, so don’t expect to see the ability to export a timeline from your NLE and record it into ProRes RAW. Although I wouldn’t count out a transcode from another raw format into ProRes RAW, or possibly an export from FCPX when your timeline only consists of ProRes RAW content. In any case, that’s not possible today. In fact, you can only play ProRes RAW files in Final Cut Pro X or Apple Motion, but only FCPX displays the correct color information at default settings.

Currently ProRes RAW has only been licensed by Apple to Atomos and DJI. The Atomos Shogun Inferno and Sumo 19 units are equipped with ProRes RAW. This is only active with certain Canon, Panasonic, and Sony camera models that can send their raw signal out over an SDI cable. Then the Atomos unit will remap the camera’s raw values to ProRes RAW and encode the file. DJI’s Zenmuse X7 gimbal camera has also been updated to support ProRes RAW. With DJI, the acquisition occurs in-camera, rather than via an external recorder.

Like RED’s REDCODE, Apple ProRes RAW is a variable bit-rate, compressed codec with different quality settings. ProRes RAW and ProRes RAW HQ fall roughly in line with the data rates of ProRes and ProRes HQ. Unlike RED, no controls are exposed within Final Cut Pro X to access specific raw settings. Therefore, Final Cut Pro X’s color processing controls may or may not take effect prior to the conversion from raw to video. At this point that’s an unknown.

(Read more about ProRes RAW here.)

Conclusion

The main advantage of the shift to using movie file formats for camera raw – instead of image sequence files – is that processing is faster and the formats are conducive to working natively in most editing applications.

It can be argued whether or not there is really much difference in starting with a log-encoded versus a camera raw file. Leading feature films presented at the highest resolutions have originated both ways. Nevertheless, both methods empower you with extensive creative control in post when grading the image. Both accommodate a move into HDR and wider color gamuts. Clearly log and raw workflows future-proof your productions for little or no additional investment.

Originally written for RedShark News.

©2018 Oliver Peters

Viva Las Vegas – NAB 2018

As more and more folks get all of their information through internet sources, the running question is whether or not trade shows still have value. A show like the annual NAB (National Association of Broadcasters) Show in Las Vegas is both fun and grueling, typified by sensory overload and folks in business attire with sneakers. Although some announcements are made before the exhibits officially open – and nearly all are pretty widely known before the week ends – there still is nothing quite like being there in person.

For some, other shows have taken the place of NAB. The annual HPA Tech Retreat in the Palm Springs area is a gathering of technical specialists, researchers, and creatives that many consider the TED Talks for our industry. For others, the Cine Gear Expo in LA is the prime showcase for grip, lighting, and camera offerings. RED Camera has focused on Cine Gear instead of NAB for the last couple of years. And then, of course, there’s IBC in Amsterdam – the more humane version of NAB in a more pleasant setting. But for me, NAB is still the main event.

First of all, the NAB Show isn’t merely about the exhibit floor at the sprawling Las Vegas Convention Center. Actual NAB members can attend various sessions and workshops related to broadcasting and regulations. There are countless sidebar events specific to various parts of the industry. For editors that includes Avid Connect – a two-day series of Avid presentations in the weekend leading into NAB; Post Production World – a series of workshops, training sessions, and presentations managed by Future Media Concepts; as well as a number of keynote presentations and artist gatherings, including SuperMeet, FCPexchange, and the FCPX Guru Gathering. These are places where you’ll rub shoulders with some well-known editors, colorists, artists, and mixers, learn about new technologies like HDR (high dynamic range imagery), and occasionally see some new product features from vendors who might not officially be on the show floor with a booth, like Apple.

One of the biggest benefits I find in going to NAB is simply walking the floor, checking out the companies and products that might not get a lot of attention. These newcomers often have the most innovative technologies, and they are the discoveries that were never on your radar prior to that week.

The second benefit is connection. I meet up again in person with friends that I’ve made over the years – both other users, as well as vendors. Often it’s a chance to meet people that you might only know through the internet (forums, blogs, etc.) and to get to know them just a bit better. A bit more of that might make the internet more friendly, too!

Here are some of my random thoughts and observations from Las Vegas.

__________________________________

Editing hardware and software – four As and a B

Apple uncharacteristically pre-announced their new features just prior to the show, culminating with App Store availability on Monday when the NAB exhibits opened. This includes new Final Cut Pro X/Motion/Compressor updates and the official number of 2.5 million FCPX users. That’s a growth of 500,000 users in 2017, the biggest year to date for Final Cut. The key new feature in FCPX is a captioning function to author, edit, and export both closed and embedded (open) captions. There aren’t many great solutions for captioning and the best to date have been expensive. I found that the Apple approach was now the best and easiest to use that I’ve seen. It’s well-designed and should save time and money for those who need to create captions for their productions – even if you are using another brand of NLE. Best of all, if you own FCPX, you already have that feature. When you don’t have a script to start out, then manual or automatic transcription is required as a starting point. There is now a tie-in between Speedscriber (also updated this week) and FCPX that will expedite the speech-to-text function.

The second part of Apple’s announcement was the introduction of a new camera raw codec family – ProRes RAW and ProRes RAW HQ. These are acquisition codecs designed to record the raw sensor data from Bayer-pattern sensors (prior to debayering the signal into RGB information) and make that available in post, just like RED’s REDCODE RAW or CinemaDNG. Since this is an acquisition codec and NOT a post or intermediate codec, it requires a partnership on the production side of the equation. Initially this includes Atomos and DJI. Atomos supplies an external recorder, which can record the raw output from various cameras that offer the ability to record raw data externally. This currently includes their Shogun Inferno and Sumo 19 models. As this is camera-specific, Atomos must then create the correct profile by camera to remap that sensor data into ProRes RAW. At the show, this included several Canon, Sony, and Panasonic cameras. DJI does this in-camera on the Inspire 2.

The advantage with FCPX is that ProRes RAW is optimized for post, thus allowing for more streams in real time. ProRes RAW data rates (variable) fall between those of ProRes and ProRes HQ, while the less compressed ProRes RAW HQ rates are between ProRes HQ and ProRes 4444. It’s very early with this new codec, so additional camera and post vendors will likely add ProRes RAW support over the coming year. It is currently unknown whether any other NLEs can support ProRes RAW decode and playback yet.

As always, the Avid booth was quite crowded and, from what I heard, Avid Connect was well attended with enthused Avid users. The Avid offerings are quite broad and hard to encapsulate into any single blog post. Most, these days, are very enterprise-centric. But this year, with a new CEO at the helm, Avid’s creative tools have been reorganized into three strata – First, standard, and Ultimate. This applies to Sibelius, Pro Tools, and Media Composer. In the case of Media Composer, there’s Media Composer | First – a fully functioning free version, with minimal restrictions; Media Composer; and Media Composer | Ultimate – includes all options, such as PhraseFind, ScriptSync, NewsCutter, and Symphony. The big difference is that project sharing has been decoupled from Media Composer. This means that if you get the “standard” version (just named Media Composer) it will not be enabled for collaboration on a shared storage network. That will require Media Composer | Ultimate. So Media Composer (standard) is designed for the individual editor. There is also a new subscription pricing structure, which places Media Composer at about the same annual cost as Adobe Premiere Pro CC (single app license). The push is clearly towards subscription, however, you can still purchase and/or maintain support for perpetual licenses, but it’s a little harder to find that info on Avid’s store website.

Though not as big news, Avid is also launching the Avid DNxID capture/export unit. It is custom-designed by Blackmagic Design for Avid and uses a small form factor. It was created for file-based acquisition, supports 4K, and includes embedded DNx codecs for onboard encoding. Connections are via component analog, HDMI, as well as an SD card slot.

The traffic around Adobe’s booth was thick the entire week. The booth featured interesting demos that were front and center in the middle of one of the South Hall’s main thoroughfares, generally creating a bit of a bottleneck. The newest Creative Cloud updates had preceded the show, but were certainly new to anyone not already using the Adobe apps. Big news for Premiere Pro users was the addition of automatic ducking that was brought over from Audition, and a new shot matching function within the Lumetri color panel. Both are examples of Adobe’s use of their Sensei AI technology. Not to be left out, Audition can now also directly open sequences from Premiere Pro. Character Animator had been in beta form, but is now a full-fledged CC product. And for puppet control Adobe also introduced the Advanced Puppet Engine for After Effects. This is a deformation tool to better bend, twist, and control elements.

Of course when it comes to NLEs, the biggest buzz has been over Blackmagic Design’s DaVinci Resolve 15. The company has an extensive track record of buying up older products whose companies weren’t doing so well, reinvigorating the design, reducing the cost, and breathing new life into them – often to a new, wider customer base. This is no more evident than with Resolve, which has now grown from a leading color correction system to a powerful, all-in-one edit/mix/effects/color solution. We had previously seen the integration of the Fairlight audio mixing engine. This year Fusion visual effects were added. As before, each one of these disparate tools appears on its own page with a specific UI optimized for that task.

A number of folks have quipped that someone had finally resurrected Avid DS. Although all-in-ones like DS and Smoke haven’t been hugely successful in the past, Resolve’s price point is considerably more attractive. The Fusion integration means that you now have a subset of Fusion running inside of Resolve. This is a node-based compositor, which makes it easy for a Resolve user to understand, since it, too, already uses nodes in the color page. At least for now, Blackmagic Design intends to also maintain a standalone version of Fusion, which will offer more functions for visual effects compositing. Resolve also gained new editorial features, including tabbed sequences, a pancake timeline view, captioning, and improvements in the Fairlight audio page.

Other Blackmagic Design news includes updates to their various mini-converters, updates to the Cintel Scanner, and the announcement of a 4K Pocket Cinema Camera (due in September). They have also redesigned and modularized the Fairlight console mixing panels. These are now more cost-effective to manufacture and can be combined in various configurations.

This was the year for a number of milestone anniversaries, such as the 100th for Panasonic and the 25th for AJA. There were a lot of new product announcements at the AJA booth, but a big one was the push for more OpenGear-compatible cards. OpenGear is an open source hardware rack standard that was developed by Ross and embraced by many manufacturers. You can purchase any OpenGear version of a manufacturer’s product and then mix and match a variety of OpenGear cards into any OpenGear rack enclosure. AJA’s cards also offer Dashboard support, which is a software tool to configure and control the cards. There are new KONA SDI and HDMI cards, HDR support in the IO 4K Plus, and HDR capture and playback with the KiPro Ultra Plus.

HDR

It’s fair to say that we are all learning about HDR, but from what I observed on the floor, AJA is one of the few companies with a number of hardware products that will allow you to handle HDR. This is thanks to their partnership with ColorFront, who is handling the color science in these products. This includes the FS | HDR – an up/down/cross, SDR/HDR synchronizer/converter. It also includes support for the Tangent Element Kb panel. The FS | HDR was a tech preview last year, but is a shipping product now. This year the tech preview product is the HDR Image Analyzer, which offers waveform and histogram monitoring at up to 4K/60fps.

Speaking of HDR (high dynamic range) and SDR (standard dynamic range), I had a chance to sit in on Robbie Carman’s (colorist at DC Color, Mixing Light) Post Production World HDR overview. Carman has graded numerous HDR projects and from his presentation – coupled with exhibits on the floor – it’s quite clear that HDR is the wild, wild west right now. There is much confusion about color space and dynamic range, not to mention what current hardware is capable of versus the maximums expressed in the tech standards. For example, the BT.2020 spec doesn’t inherently mean that the image is HDR. Likewise, there’s the fact that you must be working in 4K to also have HDR and that the set must accept the HDMI 2.0 standard.

High dynamic range grading absolutely requires HDR-compatible hardware, such as the proper i/o device and a display with the ability to receive metadata that turns on and sets its target HDR values. This means investing in a device like AJA’s IO 4K Plus or Blackmagic’s UltraStudio 4K Extreme 3. It also means purchasing a true grading monitor costing tens of thousands of dollars, like one from Sony, Canon, or Flanders. You CANNOT properly grade HDR based on the image of ANY computer display. So while the latest version of FCPX can handle HDR, and an iMac Pro screen features a high nits rating, you cannot rely on this screen to see proper HDR.

LG was a sponsor of the show and LG displays were visible in many of the exhibits. Many of their newest products qualify at the minimum HDR spec, but for the most part, the images shown on the floor were simply bright and not HDR – no matter what the sales reps in the booths were saying.

One interesting fact that Carman pointed out was that HDR displays cannot be driven across the full screen at the highest value. You cannot display a full screen of white at 1,000 nits on a 1,000 nits display without causing damage. Therefore, automatic gain adjustments are used in the set’s electronics to dim the screen. Only a smaller percentage of the image (20% maybe?) can be driven at full value before dimming occurs. Another point Carman made was that standard lift/gamma/gain controls may be too coarse to grade HDR images with finesse. His preference is to use Resolve’s log grading controls, because you can make more precise adjustments to highlight and shadow values.
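To put a rough number on that, here’s a toy power-budget sketch (my own illustration, not something from Carman’s talk), which simply assumes the panel’s sustained light output equals its peak brightness times the largest window it can drive at that peak.

```python
# Toy automatic-brightness-limiting (ABL) model -- illustrative only.
# Assumption: the panel's power budget roughly equals peak nits multiplied by
# the largest window fraction it can drive at that peak. Real sets are more
# complicated, but this shows why a full-field white frame gets dimmed.

def full_field_limit(peak_nits: float, max_window_fraction: float) -> float:
    """Approximate the full-screen white level once ABL dims the panel."""
    return peak_nits * max_window_fraction

# A nominal 1,000-nit display that can only sustain ~20% of the screen at peak:
print(full_field_limit(1000, 0.20))  # -> 200.0 nits for a 100% white frame
```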

Cameras

I’m not a camera guy, but there was notable camera news at the show. Many folks really like the Panasonic colorimetry for which the Varicam products are known. For people who want a full-featured camera in a small form factor, look no further than the Panasonic AU-EVA1. It’s a 4K, Super35, handheld cinema camera featuring dual ISOs. Panasonic claims 14 stops of latitude. It will take EF lenses and can output camera raw data. When paired with an Atomos recorder, it will be able to record ProResRAW.

Another new camera is Canon’s EOS C700 FF. This is a new full-frame model in both EF and PL lens mount versions. As with the standard C700, it records ProRes or XF-AVC onboard to CFast cards at up to 4K resolution. The full-frame sensor offers higher resolution and a shallower depth of field.

Storage

Storage is of interest to many. As costs come down, collaboration is easier than ever. The direct-attached vendors, like G-Tech, LaCie, OWC, Promise, and others were all there with new products. So were the traditional shared storage vendors like Avid, Facilis, Tiger, 1 Beyond, and EditShare. But three of the newer companies had my interest.

In my editing day job, I work extensively with QNAP, which currently offers the best price/performance ratio of any system. It’s reliable, cost-effective, and provides reasonable JKL response cutting HD media with Premiere Pro in a shared editing installation. But it’s not the most responsive and it struggles with 4K media, in spite of plenty of bandwidth – especially when the editors are all banging away. This has me looking at both Lumaforge and OpenDrives.

Lumaforge is known to many of the Final Cut Pro X editors, because the developers have optimized the system for FCPX and have had early successes with many key installations. Since then they have also pushed into more Premiere-based installations. Because these units are engineered for video-centric facilities, as opposed to data-centric, they promise a better shared storage, video editing experience.

Likewise, OpenDrives made its name as the provider for high-profile film and TV projects cut on Premiere Pro. Last year they came to the show with their highest performance, all-SSD systems. These units are pricey and, therefore, don’t have a broad appeal. This year they brought a few of the systems that are more applicable to a broader user base. These include spinning disk and hybrid products. All are truly optimized for Premiere Pro.

The cloud

In other storage news, “the cloud” garners a ton of interest. The biggest vendors are Microsoft, Google, IBM, and Amazon. While each of these offers relatively easy ways to use cloud-based services for back-up and archiving, if you want a full cloud-based installation for all of your media needs, then actual off-the-shelf solutions are not readily available. The truth of the matter is that each of these companies offers APIs, which are then handed off to other vendors – often for totally custom solutions.

Avid and Sony seem to have the most complete offerings, with Sony Ci being the best one-size-fits-all answer for customer-facing services. Of course, if review-and-approval is your only need, then Frame.io leads and will have new features rolled out during the year. IBM/Aspera is a great option for standard archiving, because fast Aspera up and down transfers are included. You get your choice of IBM or other (Google, Amazon, etc.) cloud storage. They even offer a trial period using IBM storage for 30 days at up to 100GB free. Backblaze is a competing archive solution with many partnering applications. For example, you can tie it in with Archiware’s P5 Suite of tools for back-up, archiving, and server synchronization to the cloud.

Naturally, when you talk of the “cloud”, many people interpret that to mean software that runs in the cloud – SaaS (software as a service). In most cases, that is nowhere close to happening. However, the exception is The Foundry, which was showing Athera, a suite of its virtualized applications, like Nuke, running on the Google Cloud Platform. They demo’ed it running inside the Chrome browser, thanks to this partnership with Google. The Foundry had a pod in the Google partners pavilion.

In short, you can connect to the internet with a laptop, activate a license of the tool or tools that you need, and then all media, processing, and rendering is handled in the cloud, using Google’s services and hardware. Since all of this happens on Google’s servers, only an updated UI image needs to be pushed back to the connected computer’s display. This concept is ideal for the visual effects world, where the work is generally done on an individual shot basis without a lot of media being moved in real-time. The target is the Nuke-centric shop that may need to add on a few freelancers quickly, and who may or may not be able to work on-premises.

Interesting newcomers

As I mentioned at the beginning, part of the joy of NAB is discovering the small vendors who seek out NAB to make their mark. One example this year is Lumberjack Systems, a venture by Philip Hodgetts and Greg Clarke of Intelligent Assistance. They were in the Lumaforge suite demonstrating Lumberjack Builder, which is a text-based NLE. In the simplest of explanations, your transcription or scripted text is connected to media. As you re-arrange or trim the text, the associated picture is edited accordingly. Newly written text for voiceovers is turned into spoken word media courtesy of the computer’s internal audio system and system voice. Once your text-based rough cut is complete, an FCPXML is sent to Final Cut Pro X for further finesse and final editing.

Another new vendor I encountered was Quine, co-founded by Norwegian DoP Grunleik Groven. Their QuineBox IoT device attaches to the back of a camera, where it can record and upload “conformable” dailies (ProRes, DNxHD) to your SAN, as well as proxies to the cloud via its internal wi-fi system. Script notes can also be incorporated. The unit has already been battle-tested on the Netflix/NRK production of “Norsemen”.

Closing thoughts

It’s always interesting to see, year over year, which companies are not at the show. This isn’t necessarily indicative of a company’s health, but can signal a change in their direction or that of the industry. Sometimes companies opt for smaller suites at an area hotel in lieu of the show floor (Autodesk). Or they are a smaller part of a reseller or partner’s booth (RED). But often, they are simply gone. For instance, in past years drones were all the rage, with a lot of different manufacturers exhibiting. DJI has largely captured that market for both vehicles and camera systems. While there were a few other drone vendors besides DJI, GoPro and Freefly weren’t at the show at all.

Another surprise change for me was the absence of SAM (Snell Advanced Media) – the hybrid company formed out of Snell & Wilcox and Quantel. SAM products are now part of Grass Valley, which, in turn, is owned by Belden (the cable manufacturer). Separate Snell products appear to have been absorbed into the broader Grass Valley product line. Quantel’s Go and Rio editors continue in Grass Valley’s editing line, alongside Edius – as simple, middle, and advanced NLE products. A bit sad actually. And very ironic. Here we are in the world of software and file-based video, but the company that still has money to make acquisitions is the one with a heavy investment in copper (I know, not just copper, but you get the point).

Speaking of “putting a fork in it”, I would have to say that stereo 3D and 360 VR are pretty much dead in the film and video space. I understand that there is a market – potentially quite large – in gaming, education, simulation, engineering, training, etc. But for more traditional entertainment projects, it’s just not there. Vendors were down to a few, and even though the leading NLEs have ways of working with 360 VR projects, the image quality still looks awful. When you view a 4K image within even the best goggles, the qualitative experience is like watching a 1970s-era TV set from a few inches away. For now, it continues to be a novelty looking for a reason to exist.

A few final points… It’s always fun to see which computers are being used in the booths. Apple is again a clear winner, with plenty of MacBook Pros and iMac Pros all over the LVCC wherever creative products or demos were shown. eGPUs are of interest, with Sonnet being the main vendor. However, eGPUs are not a solution that solves every problem. For example, you will see more benefit by adding an eGPU to a lesser-powered machine, like a 13” MacBook Pro, than to one with more horsepower, like an iMac Pro. Each eGPU takes one Thunderbolt 3 bus, so realistically, you are likely to only add one additional eGPU to a computer. None of the NLE vendors could really tell me how much of a boost their application would get with an eGPU. Finally, if you are looking for some great-looking, large OLED displays that are pretty darned accurate and won’t break the bank, then LG is the place to look.

©2018 Oliver Peters

Telestream Switch 4

Once Apple pulled the plug on QuickTime Player Pro 7, the industry started to look elsewhere for an all-purpose media tool that could facilitate the proper playback, inspection, and encoding of media files. For many, that new multipurpose application has become Telestream’s Switch, now in version 4. Telestream offers a range of desktop and enterprise media solutions, including Vantage, ScreenFlow, Flip4Mac, Episode, and others. Switch fills the role of a media player with added post-production capabilities, going far beyond other players, such as QuickTime Player or VLC.

Switch is offered in three versions: the basic Switch Player ($9.99), Switch Plus ($199) and Switch Pro ($499). Pricing for Plus and Pro covers the first year of support, which includes upgrades and assistance. There is also a free demo version with watermarking. All versions are available for both macOS (10.11-13) and Windows (7-10).

Playback support

The first attraction to Switch is its wide support of “consumer”, broadcast, and professional media formats and codecs. For Mac users, some of these are supported in QuickTime Player, too, but require a conversion step before you can play them. Not so with Switch. Of particular importance to editors will be the MPEG-2 and MXF variations. Some formats do require an upgrade to at least the Plus version, so check Telestream’s tech specs for specifics.

One area where Switch shines is file inspection. This has made it the go-to quality assurance tool at many facilities. File metadata is exposed, along with proper display and reporting of interlaced video. It supports JKL transport control and frame advance using the arrow keys. Since closed captioning is important for all terrestrial and set-top channel broadcasters, you must have a way to check embedded captions. QuickTime Player will only display a single track of embedded captions – and then only the lower track. So, for example, if you have a file with both English and Spanish captions on CC1 and CC3, QuickTime Player will only display the English captions and won’t even let you verify that more captions are present. With Switch Plus and Pro, the full range of embedded channels is presented and you can check any of the caption tracks.

Switch Plus likely covers the needs of most users, but Pro adds functionality such as metering for multi-channel audio and loudness compliance. Pro also lets you open up to sixteen different files for comparison. It is the only version that supports external monitoring through Blackmagic Design or AJA i/o hardware. Finally, Pro lets you QC DPP (Digital Production Partnership) files from the desktop and display AS-11 MXF metadata.

Content encoding

Beyond these powerful player and inspection functions, Switch Plus and Pro are also full-fledged media encoders. You can change metadata, reorder audio channels, and export a new media file in various formats. Files can be trimmed, cropped, and/or resized in the export. Do you have a ProRes master file and need to generate an MPEG-2 Transport Stream file for broadcast? No problem.

I had a situation where I received a closed caption master file of a commercial from the captioning facility. It needed to have the ends of the file (slate and black) trimmed to meet the delivery specs. Normally when you edit or convert a file with embedded captioning, it will break the captions on the new file. Not so with Switch. I simply set the in and out points, set my encode specs to video pass-through, and generated the new file. The encode (essentially a file copy in this case) was lightning fast and the captions stayed intact.

Switch Plus and Pro include publishing presets for Vimeo, YouTube, and Facebook. In addition, the Pro version lets you create an iTunes Store package, necessary for compliant distribution via the iTunes Store. Switch is a cross-platform application, but ProRes encoding support is otherwise limited to the Mac version. The iTunes Store package feature is the exception: ProRes asset creation is available to Windows users when creating the .itms files used by the iTunes Store.

Switch Plus and Pro might seem pricey to some when compared to Apple Compressor or Adobe Media Encoder; however, those encoders can’t do the precision media functions that Switch offers. Telestream has built Switch to be an industrial-grade media tool that covers a host of needs in a package that’s easy for anyone to understand. If you liked QuickTime Player Pro 7, then Switch has become its 21st century successor.

Originally written for RedShark News.

©2018 Oliver Peters

What’s up with Final Cut’s Color Wheels?

NOTE: The information presented here has been superseded by the release of FCPX 10.4.1 in April 2018. With that release the color wheels model has been changed. Please read the linked blog post for updated information.

Apple Final Cut Pro X 10.4 introduced new, advanced color correction tools to this editing application, including color wheels, curves, and hue vs. saturation curves. These are tools that users of other NLEs have enjoyed for some time – and, which were part of Final Cut Studio (FCP 7, Color). Like others, my first reaction was, “Super! They’ve added some nice advanced tools, which will improve the use of FCPX for higher-end users.” But, as I started to primarily use the Color Wheels with real correction work, I quickly realized that something wasn’t quite right in how they operated. Or at least, they didn’t work in a way that we’ve come to understand.

In trying to figure it out, I reached out to other industry pros and developers for their thoughts. Naturally this led to some spirited discussions at forums like those at Creative COW. However, other editors have noticed the same problems, so you can also find threads in the Facebook FCPX group and at FCP.co. It is certainly easy to characterize this as just another internet kerfuffle, surrounding Apple’s “think different” approaches to FCPX. But those arguments fall flat when you actually try to use the tools as intended.

The FCPX Color Wheels panel includes four wheels – Master, Shadows, Midtones, and Highlights. The puck in the center of each wheel is a hue offset control to push hues in the direction that you move the puck. The slider to the right of the wheel controls the brightness of that range. The left slider controls the saturation. One of the main issues is that when you adjust luminance using one of these controls, the affected range is too broad. Specifically, in the case of the Midtones control, as you adjust the luminance slider up or down, you are affecting most of the image and not just the midrange levels. This is not the way this type of control normally works in other tools, and in fact, it’s not how FCPX’s Color Board controls work either.

“What’s the big deal?” you might ask. Fair enough. I see two operational issues. The first is that to properly grade the image using the Color Wheels, you end up having to go back-and-forth a lot between wheels, to counteract the changes made by one control with another. The second is that using the Midtones slider tends to drive highlights above 100 IRE, where they will be clipped if any broadcast limiting is used. This doesn’t happen with other color tools, notably Apple’s own Color Board.

A lot of the discussion focuses on luma levels and specifically the Midtones slider, since it’s easy to see the issue there. However, other controls are also affected, but that’s too much to dissect in a single post. Throughout this post, be sure to click on the images to see the full view. I have presented various samples against each other and you will only get the full understanding if you open the thumbnail (which is small but also cropped) to the full image. I have compared the effect using five different tools – the Color Board, the Color Wheels, a color corrector plug-in that I built as a Motion template using Motion effects, Rubber Monkey Software FilmConvert (the wheels portion only), and finally, the Adobe Lumetri controls in Premiere Pro.

I am using three different test images – a black-to-white ramp, a test pattern, and a demo video image. The ramp without correction will appear as a diagonal line (0-100 IRE) on the scope, which makes it easy to analyze what’s happening. The video image has definite shadow and highlight areas, which lets us see how these controls work in the real world. For example, if you want to brighten the area of the shot where the man is in the shadows, but don’t want to make the highlights any brighter, this would normally be done using a Midtones control. Be aware that these various tools certainly aren’t calibrated the same way and some have a greater range of control than others. The weakest of these is FilmConvert’s wheels, since this plug-in has additional level controls in other parts of its interface.

Color science models

In the various forum threads, the argument is made that Apple is simply using a different color science method or a different weighting of some existing models. That’s certainly possible, since not all color correctors are built the same way. The most common approaches are Lift/Gamma/Gain and Shadows/Mids/Highlights. Be careful with naming. Just because something uses the terminology of Shadows, Midtones, and Highlights does not mean that it also uses the SMH color science model. Many tools use the Lift/Gamma/Gain model, but in fact call the controls shadows (Lift), mids (Gamma), and highlights (Gain). Another term you may run across in some correction tools is Set-up. This is typically used for control of shadows (equal to Lift), but can also function as an offset control that raises the level of the entire image. Avid Symphony employs this approach. Finally, both Symphony and Adobe SpeedGrade use what has been dubbed a 12-way color corrector. Each range is further subdivided into its own subset of shadows, mids, and highlights controls.

An LGG model provides broad control of shadows and highlights, with the midtones control working like a curve that covers the whole range, but with the largest effect in the middle. An SMH model normally divides the levels into three distinct, overlapping ranges. This is much like a three-band audio equalizing filter. A number of the color correctors add a luma range control, which gives the user the ability to change how much of the image a specific range will affect. In other words, how broad is the control of the shadows, mids, or highlights control? This is like a Q control in an audio equalizer, where you change the shape of the envelope at a certain frequency.
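To make that equalizer analogy concrete, here is a minimal sketch (my own illustration, not any vendor’s actual color science) of SMH-style weighting: three overlapping windows across the luma range, where a width parameter plays the role of an EQ’s Q control.

```python
# Illustrative SMH (shadows/mids/highlights) weighting -- a sketch, not any
# vendor's actual math. Each band is a smooth window over normalized luma
# (0.0 = black, 1.0 = white); "width" plays the role of an EQ's Q control.
import math

def band_weight(luma: float, center: float, width: float) -> float:
    """Raised-cosine window centered on a tonal range."""
    d = abs(luma - center) / width
    return 0.5 * (1.0 + math.cos(math.pi * min(d, 1.0)))

def smh_weights(luma: float, width: float = 0.5):
    """Return (shadows, mids, highlights) weights for one pixel's luma."""
    s = band_weight(luma, 0.0, width)
    m = band_weight(luma, 0.5, width)
    h = band_weight(luma, 1.0, width)
    total = (s + m + h) or 1.0
    return (s / total, m / total, h / total)   # normalize so the weights sum to 1

# A mids-only adjustment falls off toward black and white, which is why it
# produces the gentle S-curve behavior described above.
for luma in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(luma, smh_weights(luma))
```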

Red Giant’s Magic Bullet Looks offers both color correction models with two different tools – the 4-way color corrector (SMH) and the Colorista color corrector (LGG). When you adjust the midrange control of their 4-way, the result is a graceful S-shaped curve to the levels on the waveform.

To study the effect of an LGG-based corrector, test the ramp. The shadows control (Lift) will raise or lower the dark areas of the image without changing the absolute highlights. The diagonal line of the ramp on the waveform essentially pivots, hinged at the 100 IRE point. Conversely, changing the highlights control (Gain) pivots the line, pinned at 0 IRE (black). When you adjust the midtones control (Gamma), you create a curve to the line, which stays pinned at 0 and 100 IRE at either end. In this way you are effectively “expanding” or “compressing” the levels in the middle portion of your image without changing the position of your black or white points.
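Here is a small numeric version of that ramp test, assuming one common lift/gamma/gain formulation – out = gain · (in + lift · (1 − in))^(1/gamma) – rather than any particular NLE’s internal math. The point is the pinning behavior: lift leaves white alone, gain leaves black alone, and gamma leaves both ends alone.

```python
# Minimal lift/gamma/gain sketch -- one common formulation, not a specific
# NLE's implementation. Values are normalized: 0.0 = 0 IRE, 1.0 = 100 IRE.

def lgg(value: float, lift: float = 0.0, gamma: float = 1.0, gain: float = 1.0) -> float:
    v = value + lift * (1.0 - value)   # lift: raises black, leaves white pinned
    v = max(v, 0.0) ** (1.0 / gamma)   # gamma: bends the middle, both ends stay pinned
    return v * gain                    # gain: scales up from black, moves the white point

ramp = [i / 10 for i in range(11)]     # the 0-100 IRE diagonal ramp

print([round(lgg(v, lift=0.2), 2) for v in ramp])   # black rises, white stays at 1.0
print([round(lgg(v, gain=1.2), 2) for v in ramp])   # white rises, black stays at 0.0
print([round(lgg(v, gamma=1.5), 2) for v in ramp])  # middle lifts, 0.0 and 1.0 unchanged
```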

How the various color correction tools react

Looking at the luma control for the Midtones, two things are clear. First, all of these tools – except the Color Wheels – are using the LGG color science model. It’s not clear what the Color Wheels are using, but it isn’t SMH, as there is no bulge or S-curve visible in the scope. Second, the Color Wheels quickly drive the image levels into clipping, while the other tools generally keep black and white levels in place. In essence, the Midtones control affects the image more like a master or offset control would, than a typical mids or Gamma control. Yet, clearly Apple’s Color Board controls adhere to the standard LGG model. The concern, of course, is clipping. In the test image of the man walking on the village street, the sunlit building walls on the opposite side of the street will become overexposed and risk being clipped when the Color Wheels are used.

What about color? As a simple test, I next shifted the Midtones puck to the yellow. Bear in mind that the range of each of these controls is different, so you will see varying degrees of yellow intensity. Nevertheless, the way the control should work is that some pure black and white should be preserved at the top and bottom of the video levels. All of these tools maintain that, except for the Color Wheels. There, the entire image is yellow, effectively making the hue offset puck function more like a tint control.

One other issue to note is that the Color Wheels offer an extraordinarily wide control range. The hue offset control’s RGB intensity values go from 0 (center of the wheel) to 1023. However, the puck icon can only go to the rim of the wheel, which it hits at about 200. With a mouse (or numerical entry), you can keep going well past the stop of the wheel icon – five times farther, in fact. The image not only becomes very yellow in this case, but you can easily lose track of your control, since the GUI position is no longer relevant.

The working theory

The big question is why don’t the Color Wheels conform to established principles, when in fact the Color Board controls do? Until there is some further clarification from Apple, one possible explanation involves HDR. FCPX 10.4 introduced high dynamic range (HDR) features. One of the various HDR standards is Rec. 2020 PQ. In that color space, the 0-100 IRE limitations of Rec. 709 are expanded to a 0-10,000 nits scale, where 0-100 nits is roughly the same brightness as we are used to with Rec. 709.

Looking at this image of the man walking along the street – where I’ve attempted to get a pleasing look with all of the tools – you’ll see that the Color Wheels in Rec. 709 don’t react correctly and will drive the highlights into a range to be clipped. However, in the bottom pane, which is the same image in Rec. 2020 PQ color space, the grade looks pretty normal. And, in practice, the Color Wheels controls work more or less the way I would have expected them to work. Yes, the same controls work differently in the different color spaces – properly in 2020 PQ and not in 709.

But why is that the case? I have no answer, but I do have a wild guess. Maybe, just maybe, the Color Wheels were designed for – or intended to only be used for – HDR work. Or maybe there’s conversion or recalibration of the controls that hasn’t taken place yet in this version. If the tool is only calibrated for HDR, then its range and weighting will be completely wrong for Rec. 709 video. If you increase the Midtones luma of the ramp in both Rec. 709 and Rec. 2020 PQ, you’ll see a similar curve. In fact, if you overlay a screen shot of each waveform, placing the full Rec. 709 scope image over the bottom portion of the Rec. 2020 PQ scale, you’ll notice that these roughly align up to about 100 IRE/nits. It’s as if one is simply a slice out of the other.

Regardless of why, this is something where I would hope Apple will provide a white paper or other demonstration of what the best practices will be for using this tool effectively. If it isn’t intentional, and actually is a mistake, then I presume a fix will be forthcoming. In either case, put in your feedback comments to Apple.

A word about HDR

Over the course of testing this tool and this theory, I’ve done a bit of testing with the HDR color spaces in FCPX. If you want to know more about HDR, I would encourage you to check out these contrary blog posts by Stu Maschwitz and Alexis Van Hurkman. I tend to side with Stu’s point-of-view and am not a big fan of HDR.

The way Apple has implemented these features in Final Cut Pro X 10.4 is to allow the user to set and override color spaces. If you set up your project to be Rec. 2020 PQ (and set preferences to “show HDR as raw values”), then the viewer and a/v output (direct from the Mac, not through a hardware i/o device) are effectively dimmed through the Mac’s color profile system. When you grade the image based on the 0-10,000 nits scale, you’ll end up seeing an image that looks pleasing and essentially the same as if you were working in Rec. 709. However – and I cannot over-emphasize this – you are not going to be able to produce an image that’s truly compatible with Dolby Vision and actually look correct as HDR, unless you have the correct AJA i/o hardware and a proper display. And by display, I mean a top-end Dolby, Canon, or Sony unit, costing tens of thousands of dollars.

As I understand the PQ specs, the bulk of the higher range is for the highlights that are normally constrained or clipped in our current video systems. However, that 10,000 nits scale is weighted, so that about 50% of the signal value falls within the first 100 nits, making it of comparable brightness to the current 100 IRE. The rest of the range is for brighter information, like specular highlights. You don’t necessarily get more brightness in the shadow detail. Therefore, if you are grading a shot in FCPX in a 2020 PQ color space and you only have the computer display to go by, you’ll grade by eye as much as by scope. This means that to get a pleasing image, you will end up making the average appearance of the image brighter than it really should be. When this is viewed on a real HDR monitor, it will be painfully bright. Having a higher-nits computer display, like on the iMac Pro (up to 500 nits), won’t make much difference, unless maybe you crank the display brightness to its maximum (ouch!). “Mine goes to 11!”
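That weighting is easy to check against the published SMPTE ST 2084 (PQ) inverse EOTF. The sketch below uses the standard constants from the spec; it’s simply my own quick sanity check, showing that 100 nits encodes to roughly half of the signal range.

```python
# SMPTE ST 2084 (PQ) inverse EOTF: maps absolute luminance in nits to a
# normalized 0.0-1.0 signal value, where 1.0 corresponds to 10,000 nits.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0
    ym = y ** m1
    return ((c1 + c2 * ym) / (1.0 + c3 * ym)) ** m2

for nits in (1, 100, 1000, 10000):
    print(f"{nits:>6} nits -> PQ signal {pq_encode(nits):.3f}")
# 100 nits encodes to roughly 0.51 -- about half the code values cover 0-100 nits,
# leaving the upper half of the scale for highlights above typical SDR levels.
```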

Right now, HDR is the wild, wild west. If you are smart, you’ll realize that you don’t know what you don’t know. While it’s nice to have these new features in FCPX, they can be very dangerous in the wrong hands.

But that’s another matter. Right now, I just hope Apple (or one of the usual suspects, like Ripple Training, LumaForge, or Larry Jordan) will come out with more elaboration on the Color Wheels.

©2018 Oliver Peters