Analogue Wayback, Ep. 10

Color correction all stems from a slab of beef.

My start as an online editor at a production and post facility included working on a regional grocery chain account. The production company had a well-oiled “assembly line” process worked out with the agency in order to crank out 40-80 weekly TV commercials, plus several hundred station dubs. Tuesday started with shooting product in the studio and recording/mixing tracks. Editing began at the end of the day and ran overnight, in time for agency review Wednesday morning. Changes were made Wednesday afternoon and station dubs were copied overnight. The process repeated on Thursday for the second round of the week.

The studio product photography involved tabletop recording of packaged product, as well as prepared spreads, such as a holiday turkey, a cooked steak, or an ice cream sundae. There was a chef on contract, so everything was real and edible – no fake stylist food there! Everything was set up on black or white sweep tables or on large, flat rolling tables that could be dressed in whatever fashion was needed.

The camera was an RCA TK-45 with a short zoom lens and was mounted on a TV studio camera pedestal. This was prior to the invention of truly portable, self-contained video cameras. For location production, the two-piece TKP-45 was also used. It was tethered to our remote production RV.

This was a collaborative production, where our DP/camera operator handled lighting and the agency producers handled props and styling. The videotape operator handled the recording, camera set-up, and would insert retail price graphics (from art cards and a copy stand camera) during the recording of each take. Agency producers would review, pick takes, and note the timecode on the script. This allowed editors to assemble the spots unsupervised overnight.

Since studio recording was not a daily affair, there was no dedicated VTR operator at first. This duty was shared between the editors and the chief engineer. When I started as an editor, I would also spend one or two days supporting the studio operation. A big task for the VTR operator was camera set-up, aka camera shading. This is the TV studio equivalent to what a DIT might do today. The camera control electronics were located in my room with the videotape recorder, copy stand camera, and small video switcher.

Television cameras feature several video controls. The pedestal and iris knobs (or joysticks) control black level (pedestal) and brightness/exposure (iris). The TK-45 also included a gain switch, which increased sensitivity (0, +3dB, or +6dB), and a knob called black stretch. The latter would stretch the shadow area, much like a software shadows slider or a gamma control today. Finally, there were RGB color balance controls for both black and white. In normal operation, you would point the camera at a “chip chart” (grayscale chart) and balance RGB so that the image was truly black and white as measured on a waveform scope. The VTR operator/camera shader would set up the camera to the chart and then adjust only pedestal and iris throughout the day.
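
As a rough modern analogy (a minimal sketch, not RCA’s actual circuitry), balancing to the chart amounts to deriving per-channel gains that force the gray patches to measure equal RGB on the scope:

    # Hypothetical sketch: derive RGB gains from a grayscale (chip) chart.
    # 'patches' holds measured RGB values sampled from the chart's gray steps.
    def chart_balance_gains(patches):
        # Average each channel across all gray patches.
        r = sum(p[0] for p in patches) / len(patches)
        g = sum(p[1] for p in patches) / len(patches)
        b = sum(p[2] for p in patches) / len(patches)
        # Use green as the reference; scale red and blue to match it,
        # so a neutral gray measures R = G = B on the waveform.
        return (g / r, 1.0, g / b)

    gains = chart_balance_gains([(0.18, 0.20, 0.22), (0.70, 0.72, 0.75)])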

Unfortunately, not all food – especially raw ham, beef, or a rare steak – looks great under studio lighting or in the world of NTSC color. Thankfully, RCA had also developed a camera module called the Chromaproc (chroma processor). This was a small module on the camera control unit that allowed you to adjust RGBCMY – the six vectors of the color spectrum. The exact details are hard to find now, but if I remember correctly, there were switches to enable each of the six vector controls. Below those were six accompanying hue control pots, which required a tweaker (small screwdriver) to adjust. When a producer became picky about the exact appearance of a rare steak and whether or not it looked appetizing, you could flick on the Chromaproc and slightly shift the hues with the tweaker to get a better result. Thus you were “painting” the image.
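
In today’s software terms, the Chromaproc behaved something like a crude six-vector secondary correction. Here’s a minimal, purely illustrative sketch of the concept (the hypothetical paint function stands in for those hue pots):

    import colorsys

    # Hypothetical sketch of six-vector "painting": nudge the hue of one
    # sector (R, Y, G, C, B, M) without touching the other five.
    SECTORS = ["R", "Y", "G", "C", "B", "M"]  # centered at 0, 60, ... 300 degrees

    def paint(rgb, offsets):
        """rgb: (r, g, b) in 0-1. offsets: per-sector hue shifts in degrees."""
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        sector = SECTORS[int(((h * 360) + 30) % 360 // 60)]  # nearest vector
        h = (h + offsets.get(sector, 0.0) / 360.0) % 1.0
        return colorsys.hsv_to_rgb(h, s, v)

    # Shift reds slightly toward orange to make a rare steak more appetizing.
    better_steak = paint((0.55, 0.20, 0.18), {"R": +8.0})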

RCA used this chroma processing technology in their cameras and telecine controls. The company eventually developed a separate product that predated any of the early color correctors, like the Wiz (the original device from which DaVinci was spawned). In addition to RGB color balance of the lift/gamma/gain ranges, you could further tweak the saturation and hue of these six vectors, which we now refer to as secondary color correction. The missing ingredients were memory, recall, and list management, which were added by the subsequent developers in their own products. This latter augmentation led to high-profile patent lawsuits, which have now largely been forgotten.
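
For reference, those lift/gamma/gain primaries are still the backbone of every modern grading tool. One common textbook form of the math follows (tools vary in the exact formulation, so treat this as illustrative):

    # One common textbook form of the lift/gamma/gain primary correction,
    # applied per channel to a normalized 0-1 value. Real tools vary in the
    # exact math, so treat this as illustrative rather than definitive.
    def lgg(x, lift=0.0, gamma=1.0, gain=1.0):
        y = gain * x + lift * (1.0 - x)   # gain scales highlights, lift raises blacks
        y = min(max(y, 0.0), 1.0)         # clamp before the power function
        return y ** (1.0 / gamma)         # gamma bends the midtones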

And so when I talk about color correction to folks, I’ll often tell them that everything I know about it was learned by shading product shots for grocery commercials!

©2022 Oliver Peters

CineMatch for FCP

Last year FilmConvert, developer of the Nitrate film emulation plug-in, released CineMatch. It’s a camera-matching plug-in designed for multiple platforms – spanning both operating systems and different editing/grading applications. The initial 2020 release worked with DaVinci Resolve and Premiere Pro. Recently, FilmConvert added Final Cut Pro support. You can purchase the plug-in for individual hosts or as a bundle for multiple hosts. If you bought the bundled version last year, that license key also applies to the new Final Cut Pro plug-in, so there’s nothing extra to purchase for bundle owners.

CineMatch is designed to work with log and raw formats and a wide range of camera packs is included within the installer. To date, 70 combinations of brands and models are supported, including iPhones. FilmConvert has created these profiles based on the color science of the sensor used in each of the specific cameras.

CineMatch for FCP works the same way as the Resolve and Premiere Pro versions. First, select the source profile for the camera used. Next, apply the desired target camera profile. Finally, make additional color adjustments as needed.

If you shoot with one predominant A camera that is augmented by B and C cameras of different makes/models, then you can apply CineMatch to the B and C camera clips in order to better match them to the A camera’s look.

You can also use it to shift the look of a camera to that of a different camera. Let’s say that you want a Canon C300 to look more like an ARRI Alexa or even an iPhone. Simply use CineMatch to do that. In my example images, I’ve adjusted Blackmagic and Alexa clips so that they both emulate the color science of a Sony Venice camera.

When working in Final Cut Pro, remember that it will automatically apply Rec 709 LUTs to some log formats, like ARRI Alexa Log-C. When you plan to use CineMatch, be sure to also set the Camera LUT pulldown selector in the inspector pane to “none.” Otherwise, you will be stacking two LUT conversions, resulting in a very ugly look.
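
To see why, here’s a toy numeric illustration, with a simple power function standing in for a real log-to-Rec 709 LUT (so the numbers are only indicative):

    # Toy illustration of stacked conversions, with a pure power function
    # standing in for a real log-to-Rec 709 LUT (numbers are indicative only).
    x = 0.5                    # a midtone code value from the camera
    once = x ** (1 / 2.4)      # ~0.75 - converted once, as intended
    twice = once ** (1 / 2.4)  # ~0.89 - converted twice: lifted, washed-out image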

Once camera settings have been established, you can further adjust exposure, color balance, lift/gamma/gain color wheels, saturation, and the luma curve. There is also an HSL curves panel to further refine hue, saturation, and luma for individual color ranges. This is helpful when trying to match two cameras or shots to each other with greater accuracy. FCP’s comparison viewer is a great aid in making these tweaks.

As a side note, it’s also possible to use CineMatch in conjunction with FilmConvert Nitrate (if you have it) not only to adjust color science, but to then emulate different film stocks and grain characteristics.

CineMatch is a useful tool when you work with different camera types and want to achieve a cohesive look. It’s quick and easy to use, with little performance impact. CineMatch now also supports M1 Macs.

©2021 Oliver Peters

My Kingdom for Some Color Bars

In a former life, video deliverables were on videotape and no one seriously used the internet for any mission-critical media projects. TVs and high-quality video monitors used essentially the same display technology and standards. Every videotape started with SMPTE color bars used as a reference to set up the playback of the tape deck. Monitors were calibrated to bars and gray scale charts to assure proper balance, contrast, saturation, and hue. If the hardware was adjusted to this recognized standard, then what you saw in an edit suite would also be what the network or broadcaster would see going out over the air.

Fast forward to the present when nearly all deliverables are sent as files. Aesthetic judgements – especially by clients and off-site producers – are commonly made viewing MOV or MP4 files on some type of computer or device screen. As an editor who also does color correction, making sure that I’m sending the client a file that matches what I saw when it was created is very important.

Color management and your editing software

In researching and writing several articles and posts about trusting displays and color management, I’ve come to realize the following. If you expect the NLE viewer to be a perfect match with the output to a video display or an exported file playing in every media player, then good luck! The chances are slim.

There are several reasons for this. First, Macs and PCs use different gamma standards when displaying media files. Second, not all computer screens work in the same color space. For instance, some use P3-D65 while others use sRGB. Third, these color space and gamma standards differ from the standards used by televisions and also projection systems.

I’ll stick to standard dynamic range (SDR) in this discussion. HDR is yet another minefield, best left for another day. The television display standard for SDR video is Rec. 709 with a 2.4 gamma value. Computers do not use this; however, NLEs use it as the working color space for the timeline. Some NLEs will also emulate this appearance within the source and record viewers in order to match the Rec. 709, 2.4 gamma feed going out through the i/o hardware to a video monitor.

As with still photos, a color profile is assigned when you export a video file, regardless of file wrapper or codec. This color profile is metadata that any media player software can use to interpret how a file should be displayed to the screen. For example, if you edit in Premiere Pro, Adobe uses a working SDR color space of Rec. 709 with 2.4 gamma. Exported files are assigned a color profile of 1-1-1. They will appear slightly lighter and less saturated in QuickTime Player as compared with the Premiere Pro viewer. That’s because computer screens default to a different gamma value – usually 1.96 on Macs. However, if you re-import that file back into Premiere, it will be properly interpreted and will match the original within Premiere. There’s nothing wrong with the exported file. It’s merely a difference based on differing display targets.
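
To put rough numbers on that difference (treating each display target as a simple power function, which is an approximation), the same mid-gray code value lands at noticeably different brightness:

    # Rough illustration of the gamma mismatch (real display pipelines are
    # more complex than a pure power function, so take this as a sketch).
    code_value = 0.5                     # normalized mid-gray code value
    rec709_display = code_value ** 2.4   # ~0.189 relative luminance
    mac_default    = code_value ** 1.96  # ~0.257 relative luminance

    # The same pixel is displayed roughly 35% brighter at 1.96 gamma,
    # which is why the export looks lighter and less contrasty.
    print(rec709_display, mac_default)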

The developer’s conundrum

A developer of editing software has several options when designing their color management system. The first is to assume that the viewer should match Rec. 709, 2.4 gamma, since that’s the television standard and is consistent with legacy workflows. This is the approach taken by Adobe, Avid, and Blackmagic, but with some variations. Premiere Pro offers no alternate SDR timeline options, but After Effects does. Media Composer editors can set the viewer based on several standards and different video levels for Rec. 709: legal range (8-bit levels of 16-235) versus full range (8-bit levels of 0-255). Blackmagic enables different gamma options even when the Rec. 709 color space is selected.

Apple has taken a different route with Final Cut Pro by utilizing ColorSync. The same image in an FCP viewer will appear somewhat brighter than in the viewer of other NLEs; however, it will match the playback of an exported file in QuickTime Player. In addition, the output through AJA or Blackmagic i/o hardware to a video display will also match. Not only does the image look great on Apple screens, but it looks consistent across all apps on any Apple device that uses the ColorSync technology.

You have to look at it this way. A ton of content is delivered only over the internet via sites like Instagram, Facebook, and YouTube rather than through traditional broadcast. A file submitted to a large streamer like Netflix will be properly interpreted within their pipeline, so no real concerns there. That raises the question: should the app’s viewer really be designed to emulate Rec. 709, 2.4 gamma, or should it look correct for the computer’s display technology?

The rubber meets the road

Here’s what happens in actual practice. When you export from Premiere Pro, Final Cut Pro, or Media Composer, the result is a media file tagged with the 1-1-1 color profile. For Premiere and Media Composer, exports will appear with somewhat less contrast and saturation than the image in the viewer.

In Resolve, you can opt to work in Rec. 709 with different gamma settings, including 2.4 or 709-A (“A” for Apple, I presume). These two different output settings would look the same until you start to apply a color grade (so don’t switch midstream). If you are set to 2.4 (or automatic), then the exported file has a color profile of 1-2-1. But with 709-A the exported file has a color profile of 1-1-1. These Resolve files will match the viewer and each other, but will also look darker than the comparable Premiere Pro and FCP exports.

All of the major browsers use the color profile, as do most media players, except VLC. These differences are also apparent on a PC, so it’s not an Apple issue per se. More importantly, the profile determines how a file is interpreted. For instance, the two Resolve ProRes exports (one at 1-1-1, the other at 1-2-1) look the same in this first-generation export. But let’s say you use Adobe Media Encoder to generate H.264 MP4 viewing copies from those ProRes files. The transcoded MP4 of the 709-A export (1-1-1 color profile) will match its ProRes original. However, the transcoded MP4 of the 2.4 export (1-2-1 color profile) will now look a bit brighter than its ProRes original. That’s because the color profile of the MP4 has been changed to 1-1-1.
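
If you’re curious which tags a given file carries, you can inspect them with ffprobe from the free FFmpeg toolset. In the triplet, the three numbers stand for the color primaries, transfer function, and matrix (1-1-1 meaning BT.709 for all three). A minimal sketch, with a hypothetical file name:

    import json
    import subprocess

    # Minimal sketch: read a file's color tags with ffprobe (requires FFmpeg).
    # 1-1-1 corresponds to bt709 primaries, bt709 transfer, bt709 matrix.
    def color_tags(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=color_primaries,color_transfer,color_space",
             "-of", "json", path],
            capture_output=True, text=True, check=True)
        return json.loads(out.stdout)["streams"][0]

    print(color_tags("export.mov"))  # e.g. {'color_transfer': 'bt709', ...}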

Gamma changes mostly affect the midrange and shadow portion of a video signal. Therefore, differences are also more or less apparent depending on content. The more extreme your grading, the more apparent (and to some, obnoxious) these differences become. If these really bother you, then there are several ways to create files that are “enhanced” for computer viewing. This will make them a bit darker and more saturated.

  1. You can tweak the color correction by using an adjustment layer to export a file with a bit more contrast and saturation. In Premiere Pro, you can use a Lumetri effect in the adjustment layer to add a slight s-curve along with a 10% bump in saturation (see the sketch after this list).
  2. You can use a QT Gamma Correction LUT (such as from Adobe) as part of the export. However, in my experience, it’s a bit too dark in the shadows for my taste.
  3. You can pass the exported file through After Effects and create a separate sRGB version.
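
As a rough illustration of what the first option’s adjustment-layer tweak does mathematically (the real adjustment is made with Lumetri’s GUI controls, so this is only a sketch):

    # Rough sketch of a "viewing copy" bump: a gentle s-curve on luma plus a
    # saturation lift. Purely illustrative of the adjustment-layer approach;
    # the real tweak is done with Lumetri's GUI controls.
    def viewing_bump(y, sat, curve_strength=0.15, sat_boost=1.10):
        # Blend toward a smoothstep curve for slightly deeper shadows
        # and brighter highlights (a mild s-curve around mid-gray).
        s_curve = y * y * (3.0 - 2.0 * y)
        y_out = (1.0 - curve_strength) * y + curve_strength * s_curve
        return y_out, min(sat * sat_boost, 1.0)   # ~10% saturation bump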

These approaches are not transparent. In other words, you cannot re-import these files and expect them to match the original. Be very careful about your intentions when using any of these hacks, because you are creating misadjusted files simply for viewing purposes. 

In the end, is it really right to use Rec. 709 2.4 gamma as the standard for an NLE viewer? Personally, I think Apple used the better and more modern approach. Should you do any of these hacks? Well, that’s up to you. More and more people are reviewing content on smart phones and tablets – especially iPhones and iPads – all of which show good-looking images. So maybe these concerns are simply much ado about nothing.

Or paraphrasing Dr. Strangelove – How I Learned to Stop Worrying and Love Color Profiles.

©2021 Oliver Peters

Easy Resolve Grading with 6 Nodes

Spend any time watching Resolve tutorials and you’ll see many different ways in which colorists approach the creation of the same looks. Some create a look with just a few simple nodes. Others build a seemingly convoluted node tree designed to achieve the same goal. Neither approach is right or wrong.

Often, what could all be done in a single node is spread across several, so that it’s easy to trace back through your steps when changes are needed. It also makes it easy to compare the impact of a correction by enabling and disabling a node. A series of nodes applied to a clip can be saved as a PowerGrade, which is a node preset. PowerGrades can be set up for a certain look or can be populated with blank (unaltered) nodes that are organized for how you like to work. Individual nodes can also be labeled, so that it’s easy to remember what operation you will perform in each node.

The following is a simple PowerGrade (node sequence) that can be used as a starting point for most color grading work. It’s based on using log footage, but can also be modified for camera RAW or recordings in non-log color spaces, like Rec 709. These nodes are designed as a simple operational sequence to follow and each step can be used in a manner that works best with your footage. The sample ARRI clip was recorded with an ALEXA camera using the Log-C color profile.

Node 2 (LUT) – This is the starting point, because the first thing I want to do is apply the proper camera LUT to transform the image out of log. You could also do this with manual grading (no LUT); in that case, the first three nodes would be rolled into one. Alternatively, you may use a Color Space Transform effect or even a Dehaze effect in some cases. But for the projects I grade, which largely use ARRI, Panasonic, Canon, and Sony cameras, adding the proper LUT seems to be the best starting point.

Node 1 (Contrast/Saturation) – With the LUT added to Node 2, I will go back to Node 1 to adjust contrast, pivot, and saturation. This changes the image going into the LUT and is a bit like adjusting the volume gain stage prior to applying an effect or filter when mixing sound. Since LUTs affect how color is treated, I will rarely adjust color balance or hue offsets (color wheels) in Node 1, as it may skew what the LUT is doing to the image in Node 2. The objective is to make subtle adjustments in Node 1 that improve the natural result coming out of Node 2.
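
If it helps to think of it as math, contrast around a pivot is simply a linear scale anchored at the pivot point. A simplified sketch (Resolve’s internal math may differ in detail):

    # Simplified sketch of contrast around a pivot: values above the pivot
    # are pushed up, values below are pushed down, and the pivot itself
    # stays put. (Resolve's internal math may differ in detail.)
    def contrast(x, amount=1.2, pivot=0.435):
        return (x - pivot) * amount + pivot

    contrast(0.435)  # -> 0.435 (unchanged at the pivot)
    contrast(0.80)   # -> brighter; contrast(0.10) -> darker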

Node 3 (Primary Correction) – This node is where you’ll want to correct color temperature/tint and use the color wheels, RGB curves, and other controls to achieve a nice primary color correction. For example, you may need to shift color temperature warmer or cooler, lower black levels, apply a slight s-curve in the RGB curves, or adjust the overall level up or down.

Node 4 (Secondary Correction) – This node is for enhancement and the tools you’ll generally use are hue/sat curves. Let’s say you want to enhance skin tones, or the blue in the sky. Adjust the proper hue/sat curve in this node.

Node 5 (Windows) – You can add one or more “power windows” within the node (or use multiple nodes). Windows can be tracked to follow objects, but the main objective is to relight the scene. In most projects, I find that one window per shot is typically all I need, if any at all. Often this is to brighten the lighting on the main talent in the shot. The use of windows is also a way to direct the viewer’s attention, and a simple soft-edged oval is often all you’ll need to achieve a dramatic result.

Node 6 (Vignette) – The last node in this basic structure is to add a vignette, which I generally apply just to subtly darken the corners. This adds a bit of character to most shots. I’ll build the vignette manually with a circular window rather than apply a stock effect. The window is inverted so that the correction impacts the shot outside of the windowed area.
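
Conceptually, the inverted circular window amounts to a soft radial mask that darkens everything outside the oval. A brief numpy sketch of the idea (not Resolve’s actual window math):

    import numpy as np

    # Conceptual sketch of a soft, inverted circular vignette (not Resolve's
    # actual window math). 'img' is an H x W x 3 float array in 0-1.
    def vignette(img, strength=0.25, softness=0.6):
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        # Normalized distance from frame center: 0 at center, ~1 at corners.
        d = np.hypot((xx - w / 2) / (w / 2), (yy - h / 2) / (h / 2)) / np.sqrt(2)
        # Soft-edged mask: 0 inside the window, ramping to 1 at the corners.
        mask = np.clip((d - (1.0 - softness)) / softness, 0.0, 1.0)
        return img * (1.0 - strength * mask)[..., None]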

So there’s a simple node tree that works for many jobs. If you need to adjust parameters such as noise reduction, that’s best done in Node 1 or 2. Remember that Resolve grading works on two levels – clip and timeline. These are all clip-based nodes. If you want to apply a global effect, like adding film grain to the whole timeline, then you can change the grading mode from clip to timeline. In the timeline mode, any nodes you apply impact the whole timeline and are added on top of any clip-by-clip correction, so it works a bit like an adjustment layer.

©2021 Oliver Peters

Application Color Management

I’ve previously written about the challenge of consistent gamma and saturation across multiple monitoring points. Getting an app’s viewer, QuickTime playback, and the SDI output to all look the same can be a fool’s errand. If you work on a Mac, then there are pros and cons to using Mac displays like an iMac’s. In general, Apple’s “secret sauce” works quite well for Final Cut Pro. However, if you edit or grade in Resolve, Premiere Pro, or Media Composer, then you aren’t quite as lucky. I’ve opined that you might actually need to generate separate files for broadcast and web deliverables.

The extra step of optimized file creation isn’t practical for most. In my case, the deliverables I create go to multiple platforms; however, few are actually destined for traditional broadcast or to be played in a theater. In most cases, my clients are creating content for the web or to be streamed in various venues. I predominantly edit in Premiere Pro and grade with Resolve. I’ve been tinkering with color management settings in each. The goal is a reasonably close match across both app viewers, the output I see to a Rec 709 display, and the look of the exported file when I view it in QuickTime Player on the computer.

Some of this advice might be a bit contrary to what I previously wrote. Both situations are still valid, depending on the projects you edit or grade. Granted, this is based on what I see on iMac and iMac Pro displays, so it may or may not be consistent with other display brands or when using PCs. And this applies to SDR, Rec 709, or sRGB outputs and not HDR grading. As a starting point, leave the Mac display profile alone. Don’t change its default profile. Yes, I know an iMac is P3, but that’s simply something you’ll have to live with.

Adobe Premiere Pro

Premiere Pro’s Rec 709 timelines are based on 2.4 gamma, which is the broadcast standard. However, an exported file is displayed with a QuickTime color profile of 1-1-1 (1.96 gamma). The challenge is to work with the Premiere Pro viewer and see an image that matches the exported file. I have switched to disabling (unchecking) Display Color Management in General Preferences. This might seem counter-intuitive, but it results in a setting where the viewer, a Rec 709 output to a monitor, and the exported image all largely look the same.

If you enable Display Color Management, you’ll get an image in the viewer that is a somewhat closer match for saturation, but gamma will be darker than in QuickTime Player or on the video monitor. If you disable this setting, the gamma will be a better match (shadows aren’t crushed); however, the saturation of reds will be somewhat enhanced in the Premiere Pro viewer. It’s a bit of a trade-off, but I prefer the setting to be off.

Blackmagic Design DaVinci Resolve

Resolve has multiple places that can trip you up. But I’ve found that once you set them up, the viewer image will be a closer match to the exported file and to the Rec 709 image than is the case for Premiere Pro. There are three sections to change. The first is in the Project Settings pane (gear menu). This is the first place to start with every new Resolve project. Under Color Management, set the Timeline color space to Rec. 709 (Scene). I’ve experimented with various options, including ACES. Unfortunately, the ongoing ACES issue with fluorescent color burned me on a project, so I’ll wait until I really have a need to use ACES again. Hopefully it will be less of a work-in-progress then. I’ve gone back to working in Rec. 709, but new for me is the use of the Scene variant. I also turn on Broadcast Safe, but use the gentler restricted range of -10 to 110.

The next adjustment is in Resolve Preferences. Go to the General section and turn on: Use 10-bit precision in viewers, Use Mac display color profiles, and Automatically tag Rec. 709 Scene clips as Rec. 709-A. What this last setting does is make sure the exports are tagged with the 1-1-1 QuickTime color profile. If this is not checked, the file will be exported with a profile of 1-2-1 (2.4 gamma) and look darker when you play it to the desktop using QuickTime Player.

The last setting is on the Deliver page. Data levels can be set to Auto or Video. The important thing is to set the Color Space Tag and Gamma Tag to Same as Project. By doing so, the exported files will adhere to the settings described above.

Making these changes in Premiere Pro and Resolve gives me more faith in what I see in the viewer of each application. My exports are a closer match with fewer surprises. Is it a perfect match? Absolutely not. But it’s enough in the ballpark for most footage to be functional for editing purposes. Obviously you should still make critical image and color adjustments using your scopes and a calibrated reference display, but that’s not always an option. Going with these settings should mean that if you have to go by the computer screen alone, then what you see will be close to what you get!

©2021 Oliver Peters