Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its KiPro products. From an editor’s point of view, I would much rather be handed Alexa or KiPro media files than those from any other camera product, simply because they are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management before ingesting the files for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of that content is useful. (RED and GoPro 360 are the only formats for which I don’t do this.) If a camera doesn’t generate unique file names, I run a batch renaming application to create them. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post become smooth sailing.
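
If you want to script that flattening step, here is a minimal sketch of the idea in Python. It is my own illustration, not a tool from any camera vendor: the file extensions, destination layout, and roll-naming convention are assumptions, so adapt it to your cards and always run it on a copy of the media.

```python
# Minimal sketch: copy every video file out of a card's nested folders into one
# flat, roll-named folder. Extensions and naming are assumptions - adjust as needed.
from pathlib import Path
import shutil

VIDEO_EXTENSIONS = {".mov", ".mp4", ".mxf", ".mts"}

def flatten_card(card_root: str, dest_root: str, roll_name: str) -> None:
    dest = Path(dest_root) / roll_name
    dest.mkdir(parents=True, exist_ok=True)
    for src in Path(card_root).rglob("*"):
        if src.is_file() and src.suffix.lower() in VIDEO_EXTENSIONS:
            # Prefix with the roll name so file names stay unique across cards
            shutil.copy2(src, dest / f"{roll_name}_{src.name}")

# Example (paths are placeholders):
# flatten_card("/Volumes/CARD_01", "/Volumes/Media/MyProject", "A001_190408")
```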

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take such resolution requirements with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display, with plenty of room to crop left and right, or to extract a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm and why creatives love to shoot that way in the first place.

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to end up with unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good ISO/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been CinemaDNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras used it until it was replaced by Blackmagic RAW. Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Did you pick the right camera? Part 1

There are tons of great cameras and lenses on the market. While I am not a camera operator, I have been a videographer on some shoots in the past. Relevant production and camera logistical issues are not foreign to me. However, my main concern in evaluating cameras is how they impact me in post – workflow, editing, and color correction. First – biases on the table. Let me say from the start that I have had the good fortune to work on many productions shot with ARRI Alexas and that is my favorite camera system with regard to the three concerns offered in the introductory post. I love the image, adopting ProRes for recording was a brilliant move, and the workflow couldn’t be easier. But I also recognize that ARRI makes an expensive albeit robust product. It’s not for everyone. Let’s explore.

More camera choices – more considerations

If you are going to only shoot with a single camera system, then that simplifies the equation. As an editor, I long for the days when directors would only shoot single-camera. Productions were more organized and there was less footage to wade through. And most of that footage was useful – not cutting room fodder. But cameras have become cheaper and production timetables condensed, so I get that having more than one angle for every recording can make up for this. What you will often see is one expensive ‘hero’ camera as the A-camera for a shoot and then cheaper/lighter/smaller cameras as the B and C-cameras. That can work, but the success comes down to the ingredients that the chef puts into the stew. Some cameras go well together and others don’t. That’s because all cameras use different color science.

Lenses are often forgotten in this discussion. If the various cameras being used don’t have a matched set of lenses, the images from even the exact same model cameras – set to the same settings – will not match perfectly. That’s because lenses have coloration to them, which will affect the recorded image. This is even more extreme with re-housed vintage glass. As we move into the era of HDR, it should be noted that various lens specialists are warning that images made with vintage glass – and which look great in SDR – might not deliver predictable results when that same recording is graded for HDR.

Find the right pairing

If you want the best match, use identical camera models and matched glass. But that’s not practical or affordable for every company or every production. The next best thing is to stay within the same brand. For example, Canon is a favorite among documentary producers. Projects using cameras from the EOS Cinema line (C300, C300 MkII, C500, C700) will end up with looks that match better in post between cameras. Generally the same holds true for Sony or Panasonic.

It’s when you start going between brands that matching looks becomes harder, because each manufacturer uses its own ‘secret sauce’ for color science. I’m currently color grading travelogue episodes recorded in Cuba with a mix of cameras. A and B-cameras were ARRI Alexa Minis, while the C and D-cameras were Panasonic EVA1s. A Panasonic GH5, a Sony A7SII, and various drone cameras were also used. Panasonic appears to use a similar color science to ARRI’s, although its log color space is not as aggressive (flat). With all cameras set to shoot with a log profile and the appropriate Rec 709 LUT applied to each in post (LogC and V-Log respectively), I was able to get a decent match between the ARRI and Panasonic cameras, including the GH5. Not so close with the Sony or drone cameras, however.

Likewise, I’ve graded a lot of Canon C300 MkII/C500 footage and it looks great. However, trying to match Canon to ARRI shots just doesn’t come out right. There is too much difference in how blues are rendered.

The hardest matches are when professional production cameras are married with prosumer DSLRs, such as a Sony FS5 and a Fujifilm camera. Not even close. And smartphone cameras – yikes! But as I said above, the GH5 does seem to provide passable results when used with other Panasonic cameras and, in our case, the ARRIs. However, my experience there is limited, so I wouldn’t guarantee that in every case.

Unfortunately, there’s no way to really know when different brands will or won’t create a compatible A/B-camera combination until you start a production. Or rather, when you start color correcting the final. Then it’s too late. If you have the luxury of renting or borrowing cameras and doing a test first, that’s the best course of action. But as always, try to get the best you can afford. It may be better to get a more advanced camera, but only one. Then restructure your production to work with a single-camera methodology. At least then, all of your footage should be consistent.

Click here for the Introduction.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Intro

My first facility job after college at a hybrid production/post company included more than just editing. Our largest production effort was to produce, post, and dub weekly price-and-item retail TV commercials for a large, regional grocery chain. This included two to three days a week of studio production for product photography (product displays, as well as prepared food shots).

Early on, part of my shift included being the video shader for the studio camera being used. The video shader in a TV station operation is the engineering operator who makes sure the cameras are set up and adjusts video levels during the actual production. However, in our operation (as would be the case in any teleproduction facility of that time) this was a more creative role – more akin to a modern DIT (digital imaging technician) than a video engineer. It didn’t involve simply adjusting levels, but also ‘painting’ the image to get the best-looking product shots on screen. Under the direction of the agency producer and our lighting DP/camera operator, I would use the camera’s RGB color balance controls, along with a built-in 6-way secondary color correction circuit, to make each shot look as stylistic – and the food as appetizing – as possible. Then I rolled tape and recorded the shot.

This was the mid-1970s when RCA dominated the broadcast camera market. Production and gear options were either NTSC, PAL, or film. We owned an RCA TK-45 studio camera and a TKP-45 ‘portable’ camera that was tethered to a motor home/mobile unit. This early RCA color correction system of RGB balance/level controls for lift/gamma/gain ranges, coupled with a 6-way secondary color correction circuit (sat/hue trim pots for RGBCMY), was used in RCA cameras and telecines. It became the basis for nearly all post-production color correction technology to follow. I still apply those early fundamentals that I learned back then in my work today as a colorist.

Options = Complexity

In the intervening decades, the number of camera vendors has blossomed far beyond RCA, Philips, and the few other companies of the 1970s. Naturally, we are well past the simple concerns of NTSC or PAL, and film-based production is an oddity, not the norm. This has introduced a number of challenges:

1. More and cheaper options mean that multi-camera production is a given.

2. Camera raw and log recording, along with modern color correction methods, give you seemingly infinite possibilities – often making it even harder to dial in the right look.

3. There is no agreement on file format/container standards, so file-based recording adds workflow complexity that never existed in the past.

In the next three blog posts, I will explore each of these items in greater depth.

©2019 Oliver Peters

NAB Show 2019

This year the NAB Show seemed to emphasize its roots – the “B” in National Association of Broadcasters. Gone or barely visible were the fads of past years, such as stereoscopic 3D, 360-degree video, virtual/augmented reality, drones, etc. Not that these are gone – merely that they have refocused on the smaller segment of market share that reflects reality. There’s not much point in promoting stereo 3D at NAB if most of the industry goes ‘meh’.

Big exhibitors of the past, like Quantel, RED, Apple, and Autodesk, are gone from the floor. Quantel products remain as part of Grass Valley (now owned by Belden), which is the consolidation of Grass Valley Group, Quantel, Snell & Wilcox, and Philips. RED decided last year that small, camera-centric shows were better venues. Apple – well, they haven’t been on the main floor for years, but even this year, there was no off-site, Final Cut Pro X stealth presence in a hotel suite somewhere. Autodesk, which shifted to a subscription model a couple of years ago, had a demo suite in the nearby Renaissance Hotel, focusing on its hero product, Flame 2020. Smoke for Mac users – tough luck. It’s been over for years.

This was a nuts-and-bolts year, with many exhibits showing new infrastructure products. These appeal to larger customers, such as broadcasters and network facilities. Specifically, the world is shifting to an IP-based infrastructure for signal routing, control, and transmission. This replaces the copper and fiber wiring of the past, along with the devices (routers, video switchers, etc) at either end of the wire. Companies that might have appeared less relevant, like Grass Valley, are back in a strong sales position. Other companies, like Blackmagic Design, are being encouraged by their larger clients to fulfill those needs. And as ever, consolidation continues – this year VizRT acquired NewTek, an early player in video-over-IP with its proprietary NDI protocol.

Adobe

The NAB season unofficially started with Adobe’s pre-NAB release of the CC2019 update. For editors and designers, the hallmarks of this update include a new freeform bin window view and adjustable guides in Premiere Pro, and content-aware video fill in After Effects. These are solid additions in response to customer requests, which is something Adobe has focused on. A smaller, but no less important feature is Adobe’s ongoing effort to improve media performance on the Mac platform.

As in past years, their NAB booth was an opportunity to present these new features in-depth, as well as showcase speakers who use Adobe products for editing, sound, and design. Part of the editing team from the series Atlanta was on hand to discuss the team’s use of Premiere Pro and After Effects in their ‘editing crash pad’.

Avid

For many attendees, NAB actually kicked off on the weekend with Avid Connect, a gathering of Avid users (through the Avid Customer Association), featuring meet-and-greets, workshops, presentations, and ACA leadership committee meetings. While past product announcements at Connect have been subdued from the vantage of Media Composer editors, this year was a major surprise. Avid revealed its Media Composer 2019.5 update (scheduled for release at the end of May). This came as part of a host of updates. Most of these apply to companies that have invested in the full Avid ecosystem, including Nexis storage and Media Central asset management. While those are superb, they only apply to a small percentage of the market. Let’s not forget Avid’s huge presence in the audio world, thanks to the dominance of Pro Tools – now with Dolby ATMOS support. With the acquisition of Euphonix years back, Avid has become a significant player in the live and studio sound arena. Various examples of its S-series consoles in action were presented.

Since I focus on editing, let me discuss Media Composer a bit more. The 2019.5 refresh is the first major Media Composer overhaul in years. It started in secret last year. 2019.5 is the first iteration of the new UI, with more to be updated in coming releases. In short, the interface has been modernized and streamlined in ways to attract newer, younger users, without alienating established editors. Its panel design is similar to Adobe’s approach – i.e. interface panels can be docked, floated, stacked, or tabbed. Panels that you don’t want to see may be closed or simply slid to the side and hidden. Need to see a hidden panel again? Simply slide it back open from the edge of the screen.

This isn’t just a new skin. Avid has overhauled the internal video pipeline, with 32-bit floating color and an uncompressed DNx codec. Project formats now support up to 16K. Avid is also compliant with the specs of the Netflix Post Alliance and the ACES logo program.

I found the new version very easy to use and a welcome change; however, it will require some adaptation if you’ve been using Media Composer for a long time. In a nod to the Media Composer heritage, the weightlifter (aka ‘liftman’) and scissors icons (for lift and extract edits) are back. Even though Media Composer 2019.5 is just in early beta testing, Avid felt good enough about it to use this version in its workshops, presentations, and stage demos.

One of the reasons to go to NAB is for the in-person presentations by top editors about their real-world experiences. No one can top Avid at this game, since the company can easily tap a host of Oscar, Emmy, BAFTA, and Eddie award winners. The hallmark for many this year was the presentation at Avid Connect and/or at the show by the Oscar-winning picture and sound editing/mixing team for Bohemian Rhapsody. It’s hard not to gather a standing-room-only crowd when you close your talk with the Live Aid finale sequence played in kick-ass surround!

Blackmagic Design

Attendees and worldwide observers have come to expect a surprise NAB product announcement out of Grant Petty each year and he certainly didn’t disappoint this time. Before I get into that, there were quite a few products released, including for IP infrastructures, 8K production and post, and more. Blackmagic is a full spectrum video and audio manufacturer that long ago moved into the ‘big leagues’. This means that just like Avid or Grass Valley, they have to respond to pressure from large users to develop products designed around their specific workflow needs. In the BMD booth, many of those development fruits were on display, like the new Hyperdeck Extreme 8K HDR recorder and the ATEM Constellation 8K switcher.

The big reveal for editors was DaVinci Resolve 16. Blackmagic has steadily been moving into the editorial space with this all-in-one, edit/color/mix/effects/finishing application. If you have no business requirement for – or emotional attachment to – one of the other NLE brands, then Resolve (free) or Resolve Studio (paid) is an absolute no-brainer. Nothing can touch the combined power of Resolve’s feature set.

New for Resolve 16 is an additional editorial module called the Cut Page. At first blush, the design, layout, and operation are amazingly similar to Apple’s Final Cut Pro X. Blackmagic’s intent is to make a fast editor where you can start and end your project for a time-sensitive turnaround without the complexities of the Edit Page. However, it’s just another tool, so you could work entirely in the Cut Page, or start in the Cut Page and refine your timeline in the Edit Page, or skip the Cut Page all together. Resolve offers a buffet of post tools that are at your disposal.

While Resolve 16’s Cut Page does elicit a chuckle from experienced FCPX users, it offers some new twists. For example, there’s a two-level timeline view – the top section is the full-length timeline and the bottom section is the zoomed-in detail view. The intent is quick navigation without the need to constantly zoom in and out of long timelines. There’s also an automatic sync detection function. Let’s say you are cutting a two-camera show. Drop the A-camera clips onto the timeline and then go through your B-camera footage. Find a cut-away shot, mark in/out on the source, and edit. It will ‘automagically’ edit to the in-sync location on the timeline. I presume this is matched by either common sound or timecode. I’ll have to see how this works in practice, but it demos nicely. Changes to other aspects of Resolve were minor and evolutionary, except for one other notable feature. The Color Page added its own version of content-aware video fill.

Another editorial product addition – tied to the theme of faster, more-efficient editing – was a new edit keyboard. Anyone who’s ever cut in the linear days – especially those who ran Sony BVE9000/9100 controllers – will feel very nostalgic. It’s a robust keyboard with a high-quality, integrated jog/shuttle knob. The feel is very much like controlling a tape deck in a linear system, with fast shuttle response and precise jogging. The precision is far better than any of the USB controllers, like a Contour Shuttle. Whether or not enough people will be interested in shelling out $1,025 for it remains to be seen. It’s a great tool, but are you really faster with one than with FCPX’s skimming and a standard keyboard and mouse?

Ironically, if you look around the Blackmagic Design booth there does seem to be a nostalgic homage to Sony hardware of the past. As I said, the edit keyboard is very close to a BVE9100 keyboard. Even the style of the control panel on the Hyperdecks – and the look of the name badges on those panels – is very much in Sony’s style. As humans, this appeals to our desire for something other than the glass interfaces we’ve been dealing with for the past few years. Michael Cioni (Panavision, Light Iron) coined this as ‘tactile attraction’ in his excellent Faster Together Stage talk. It manifests itself not only in these types of control surfaces, but also in skeuomorphic designs applied to audio filter interfaces. Or in the emotion created in the viewer when a colorist adds film grain to digital footage.

Maybe Grant is right and these methods are really faster in a pressure-filled production environment. Or maybe this is simply an effort to appeal to emotion and nostalgia by Blackmagic’s designers. (Check out Grant Petty’s two-hour 2019 Product Overview for more in-depth information on Blackmagic Design’s new products.)

8K

I won’t spill a lot of words on 8K. Seems kind of silly when most delivery is HD and even SD in some places. A lot of today’s production is in 4K, but really only for future-proofing. But the industry has to sell newer and flashier items, so they’ve moved on to 8K pixel resolution (7680 x 4320). Much of this is driven by Japanese broadcasters and manufacturers, who are pushing into 8K. You can laugh or roll your eyes, but NAB had many examples of 8K production tools (cameras and recorders) and display systems. Of course, it’s NAB, making it hard to tell how many of these are only prototypes and not yet ready for actual production and delivery.

For now, it’s still a 4K game, with plenty of mainstream product. Not only cameras and NLEs, but items like AJA’s KiPro family. The KiPro Ultra Plus records up to four channels of HD or one channel of 4K in ProRes or DNx. The newest member of the family is the KiPro GO, which records up to four channels of HD (25Mbps H.264) onto removable USB media.

Of course, the industry never stops, so while we are working with HD and 4K, and looking at 8K, the developers are planning ahead for 16K. As I mentioned, Avid already has project presets built-in for 16K projects. Yikes!

HDR

HDR – or high dynamic range – is about where it was last year. There are basically four formats vying to become the final standard used in all production, post, and display systems. While there are several frontrunners and edicts from distributors to deliver HDR-compatible masters, there still is no clear path. If you shoot in log or camera raw with nearly any professional camera produced within the past decade, you have originated footage that is HDR-compatible. But none of the low-cost post solutions make this easy. Without the right monitoring environment, you are wasting your time. If anything, those waters are muddier this year. There were a number of HDR displays throughout the show, but there were also a few labelled as using HDR simulation. I saw a couple of those at TV Logic. Yes, they looked gorgeous and yes, they were receiving an HDR signal. I found out that the ‘simulation’ part of the description meant that the display was bright (up to 350 nits), but not bright enough to qualify as ‘true’ HDR (1,000 nits or higher).

As in past transitions, we are certainly going to have to rely on some ‘glue’ products. For me, that’s AJA again. Through their relationship with Colorfront, AJA offers two HDR products: the HDR Image Analyzer and the FS-HDR converter. The latter was introduced last year as a real-time frame synchronizer and color converter to go between SDR and HDR display standards. The new Analyzer is designed to evaluate color space and gamut compliance. Just remember, no computer display can properly show you HDR, so if you need to post and deliver HDR, proper monitoring and analysis tools are essential.

Cameras

I’m not a cinematographer, but I do keep up with cameras. Nearly all of this year’s camera developments were evolutionary: new LF (large format sensor) cameras (ARRI), 4K camcorders (Sharp, JVC), a full-frame mirrorless camera from Nikon (with ProRes RAW recording coming in a future firmware update). Most of the developments were targeted towards live broadcast production, like sports and megachurches. Ikegami had an 8K camera to show, but their real focus was on 4K and IP camera control.

RED, a big player in the cinema space, was only there in a smaller demo room, so you couldn’t easily compare their 8K imagery against others on the floor, but let’s not forget Sony and Panasonic. While ARRI has been a favorite, due to the ‘look’ of the Alexa, Sony (Venice) and Panasonic (Varicam and now EVA-1) are also well-respected digital cinema tools that create outstanding images. For example, Sony’s booth featured an amazing, theater-sized, LED 8K micro-pixel display system. Some of the sample material shown was of the Rio Carnival, shot with anamorphic lenses on a 6K full-frame Sony Venice camera. Simply stunning.

Finally, let’s not forget Canon’s line-up of cinema cameras, from the C100 to the C700FF. To complement these, Canon introduced their new line of Sumire Prime lenses at the show. The C300 has been a staple of documentary films, including the Oscar-winning film, Free Solo, which I had the pleasure of watching on the flight to Las Vegas. Sweaty palms the whole way. It must have looked awesome in IMAX!

(For more on RED, cameras, and lenses at NAB, check out this thread from DP Phil Holland.)

It’s a wrap

In short, NAB 2019 had plenty for everyone. This also included smaller markets, like products for education seminars. One of these that I ran across was Cinamaker. They were demonstrating a complete multi-camera set-up using four iPhones and an iPad. The iPhones are the cameras (additional iPhones can be used as isolated sound recorders) and the iPad is the ‘switcher/control room’. The set-up can be wired or wireless, but camera control, video switching, and recording is done at the iPad. This can generate the final product, or be transferred to a Mac (with the line cut and camera iso media, plus edit list) for re-editing/refinement in Final Cut Pro X. Not too shabby, given the market that Cinamaker is striving to address.

For those of us who like to use the NAB Show exhibit floor as a miniature yardstick for the industry, one of the trends to watch is what type of gear is used in the booths and press areas. Specifically, one NLE over another, or one hardware platform versus the other. On that front, I saw plenty of Premiere Pro, along with some Final Cut Pro X. Hardware-wise, it looked like Apple versus HP. Granted, PC vendors, like HP, often supply gear to use in the booths as a form of sponsorship, so take this with a grain of salt. Nevertheless, I would guess that I saw more iMac Pros than any other single computer. For PCs, it was a mix of HP Z4, Z6, and Z8 workstations. HP and AMD were partner-sponsors of Avid Connect and they demoed very compelling set-ups with these Z-series units configured with AMD Radeon cards. These are very powerful workstations for editing, grading, mixing, and graphics.

©2019 Oliver Peters

The Nuances of Overcranking

The concept of overcranking and undercranking in the world of film and video production goes back to the origins of motion picture technology. The earliest film cameras required the camera operator to manually crank the film mechanism – they didn’t have internal motors. A good camera operator was partially judged by how constant a frame rate they could maintain while cranking the film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then the playback speed during projection was in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that the projection of old B&W films gained that fast, jerky motion we often incorrectly attribute to “old time movies” today. That characteristic motion is because they are no longer played at their intended speeds.

While manual film cranking seems anachronistic in modern times, it had the benefit of in-camera, variable-speed capture – aka speed ramps. There are modern film cameras that include controlled mechanisms to still be able to do that today – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slomo and fast speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general, it was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording. Post shifted to digital, computer-based, rather than electro-mechanical methods. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms enable viewers to view videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common, possible system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smart phone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but the post is designed to be at 23.98, then the editor has four options:

1. Play the files in real time (29.97 in a 23.98 sequence), which will cause frames to be dropped, resulting in some stuttering on motion.

2. Play the footage at the slowed speed, so that there is a one-to-one relationship of frames, which doesn’t work for sync sound.

3. Go through a frame rate conversion before editing starts, which will result in blended and/or dropped frames.

4. Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set both the system or target frame rate, in addition to the actual recording rate. These may be called different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which this file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking. That is, you are recording slow motion in-camera.

(Note: from here on I will use simplified whole numbers instead of the exact fractional rates in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in playback that is fast motion at 200%. That’s the basis for hyperlapse/timelapse footage.
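
Put as a formula, the resulting playback speed is simply the target (system) rate divided by the record rate. Here is a quick Python sketch of my own showing that arithmetic (the values are just the examples above, plus a 120fps case):

```python
# Playback speed in post = target (system) rate / record rate
def playback_speed(record_fps: float, target_fps: float) -> float:
    return target_fps / record_fps * 100  # as a percentage

print(playback_speed(48, 24))   # 50.0  -> 50% slow motion
print(playback_speed(120, 24))  # 20.0  -> 20% slow motion
print(playback_speed(12, 24))   # 200.0 -> 200% fast motion (timelapse territory)
```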

The good news is that professional production cameras embed the pertinent metadata into the file so that editing and player software automatically knows what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically playback in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the use is for non-sound-sync B-roll footage, then altering the frame rate (making it play slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice. The slower footage helps to smooth out the vibration introduced by using such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recordings, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed. It wobbles around that rate – sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.
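
As one possible way to handle that transcode, a command-line tool like ffmpeg can force such footage to a constant frame rate while converting it to ProRes. This is just a sketch of the idea, not a prescribed workflow: it assumes ffmpeg with the prores_ks encoder is installed, and the file names and rate are placeholders. Dedicated batch transcoding apps will do the same job.

```python
# Sketch: conform a variable frame rate phone clip to constant frame rate ProRes.
# Assumes ffmpeg (with the prores_ks encoder) is installed and on the PATH.
import subprocess

def conform_to_prores(src: str, dst: str, fps: str = "30000/1001") -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-r", fps,                               # force a constant output frame rate
        "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed PCM audio
        dst,
    ], check=True)

# conform_to_prores("IMG_1234.MOV", "IMG_1234_prores.mov")
```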

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.

Interpret footage

This may have a different name in any given application, but interpret footage is a function to make the application think that the file should be played at a different rate than it was recorded at. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can re-write the QuickTime header information without transcoding the file. Then that file shows up at the desired rate inside of the NLE. In the case of FCPX, the same potential audio issues can arise as described above if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow. The proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking in Resolve will be incorrect. It may result in not being able to relink these files at all, because the timecode that Resolve looks for falls outside of the boundaries of the file. So use this function with caution.

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately, this introduces issues like skipped frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of that rate, much like traditional film post. A 200% speed increase is a proper multiple. 150% is not. The former means you are playing every other frame from a clip for smooth action. The latter eliminates one third of the frames in playback, but in an uneven pattern, leaving you with some stutter in the movement.
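
To make that unevenness concrete, here is a tiny sketch of my own (with arbitrary frame counts) showing which source frames a simple frame-skipping retime would select at each speed:

```python
# Which source frame index does each output frame use for a given speed?
def frames_played(speed_percent: float, num_output_frames: int = 8) -> list:
    step = speed_percent / 100.0
    return [int(n * step) for n in range(num_output_frames)]

print(frames_played(200))  # [0, 2, 4, 6, 8, 10, 12, 14] -> every other frame, even motion
print(frames_played(150))  # [0, 1, 3, 4, 6, 7, 9, 10]   -> skips fall in an uneven 1-2-1-2 pattern
```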

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle this better than others. Optical flow “creates” new  in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and may often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changes in motion interpolation in Resolve are pointless, unless the final export of the timeline is from Resolve. If clips go back to your NLE for finishing, then it will be that software which determines the quality of motion effects. Twixtor is a plug-in that many editors use when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and the ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate for frames to play one-to-one (no blended, duplicate, or skipped frames). Final Cut Pro X handles this in the best manner, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually. 

The easiest way to get a value that yields clean frames (one-to-one frame rate) is to simply divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. The simplified whole numbers yield the same results as the exact fractional rates. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion speed. A 59.94fps clip is 40%. A 25fps clip is 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want this to be one-to-one, it will have to be sped up. But the calculation is the same. 30 divided by 24 results in a 125% speed adjustment. And so on.
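
For reference, here is the same division written out as a short Python sketch of my own, using the exact fractional rates to reproduce the numbers above:

```python
# Speed % to apply to a clip so its frames play one-to-one in the timeline
def conform_speed(timeline_fps: float, clip_fps: float) -> float:
    return timeline_fps / clip_fps * 100

print(round(conform_speed(23.976, 29.97), 2))   # 80.0   -> 80% slow motion
print(round(conform_speed(23.976, 59.94), 2))   # 40.0
print(round(conform_speed(23.976, 25.0), 2))    # 95.9   -> roughly the 96% cited above
print(round(conform_speed(29.97, 23.976), 2))   # 125.0  -> the 125% speed-up
```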

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

© 2019 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front, if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out their REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with all other applications. The exception is Motion, which will read and play the files, but with incorrect default – albeit correctable – video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But, ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos’ firmware is taking in the camera’s form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports 12-bit color depth, but it depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that’s the bit-depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered to register the data for light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range capable for the exposure range of the camera and that sensor. It’s just an electrical signal being turned into data, but without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can also be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels, then the sensor color information is said to be “baked” into the file. However, if the raw data is stored in that form and only converted to RGB later in post, the sensor data is preserved intact until much further into the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that a wider dynamic range is being lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That’s part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec 709 color space, not HDR or wide gamut. If you set the inspector’s LUT tab to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control of the image, prior to tweaking with FCPX’s controls. However, the amount of image control for the raw file is significantly more for a REDCODE file in Premiere Pro, than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether or not the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file.  When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.

©2018 Oliver Peters