Color Grading Strategies

A common mistake made by editors new to color correction is trying to nail a “look” in a single application of a filter or color correction layer. Subjective grading is an art. Just as a photographer dodges and burns areas of a photo in the lab or in Photoshop to “relight” a scene, so it goes with digital color correction. It requires several steps, so a single, one-pass solution will never give you the best result. I follow this concept regardless of the NLE or grading application I’m using at the time. Whether stacked filters in Premiere Pro, several color corrections in FCP X, rooms in Color, nodes in Resolve or layers in SpeedGrade – the process is the same. The standard grade for me is often a “stack” of four or more grading levels, layers or nodes to achieve the desired results.

The first step for me is always to balance the image and to make that balance consistent from shot to shot. How you get there varies with the type of media and the application. For example, most current software supports RED camera raw footage and gives you control over the raw decode settings. In FCP X or Premiere Pro, you reach these through separate controls that modify the raw source metadata settings. In Resolve, I would usually make this the first node. Typically I will adjust ISO, temperature and tint here and then set the gamma to REDlogFilm for easy grading downstream. In a tool like FCP X, you are changing the settings for the media file itself, so any change to the RED settings for a clip will alter those settings for all instances of that clip throughout all of your projects. In other words, you are not changing the raw settings for only the timeline clips. Depending on the application, this type of change is made in the first step of color correction or before you enter color correction at all.

I’ll continue this discussion based on FCP X for the sake of simplicity, but remember that the concepts apply generally to all grading tools. In FCP X, all effects are applied to clips before the color board stage. If you are using a LUT filter or some other type of grading plug-in, like Nattress Curves, Hawaiki Color or AutoGrade, remember that this is applied first and that result is then processed by the color board controls, which are downstream in the signal flow. If you want to apply an effect after the color board correction, then you must add an adjustment layer (a title generator) above your clip and apply that effect within the adjustment layer.

In the example of RED footage, I set the gamma to REDlogFilm for a flatter profile that preserves dynamic range. In FCP X color board correction 1, I’ll make the necessary adjustments to saturation and contrast to restore this to a neutral, but pleasing, image. I will do this for all clips in the timeline, being careful to keep the shots consistent. I am not applying a “look” at this level.

The next step, color board correction 2, is for establishing the “look”. Here’s where I add a subjective grade on top of color board correction 1. This can be built from scratch or from a preset. FCP X supplies a number of default color presets that you access from the pull-down menu. Others are available to be installed, including a free set of presets that I created for FCP X. If you have a client who likes to experiment with different looks, you might add several color board correction layers here. For instance, if I’m previewing a “cool look” versus a “warm look”, I might do one in color correction 2 and another in color correction 3. Each correction level can be toggled on and off, so it’s easy to preview the warm and cool looks for the client.

Assuming that color board correction 2 holds the subjective look, correction 3 in my hierarchy is usually reserved for a mask to key faces. Sometimes I’ll do this as a key mask and other times as a shape mask. FCP X is pretty good here, but if you really need finesse, then Resolve would be the tool of choice. The objective is to isolate faces – usually in a close shot of your principal talent – and bring skin tones out against the background. The mask needs to be very soft so as not to draw attention to itself. Like most tools, FCP X allows you to make changes inside and outside of the mask. If I isolate a face, I can brighten the face slightly (inside the mask), as well as slightly darken everything else (outside the mask).

Depending on the shot, I might have additional correction levels above this, but all placed before the next step. For instance, if I want to darken specific bright areas, like the sun reflecting off a car hood, I will add separate layers with key or shape masks for each of these adjustments. This goes back to the photographic dodging and burning analogy.

I like adding vignettes to subtly darken the outer edge of the frame. This goes on correction level 4 in our simplest set-up. The bottom line is that it should be the top correction level. The shape mask should be feathered to keep it subtle, and then you darken the outside of the mask by lowering brightness and, possibly, pulling saturation down slightly. You have to adjust this by feel, and one vignette style will not work for all shots. In fact, some shots don’t look right with a vignette, so you have to apply this to taste on a shot-by-shot basis. At this stage it may be necessary to go back to color correction level 2 and adjust its settings to get the optimal look, after you’ve done facial correction and vignetting in the higher levels.
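
To make the vignette idea concrete, here is a minimal sketch (in Python with NumPy) of what a feathered vignette amounts to: a soft elliptical mask whose outside region is darkened and slightly desaturated. This is only an illustration of the math – the function name and parameters are hypothetical and it is not how FCP X implements its shape masks.

```python
import numpy as np

def apply_vignette(frame, feather=0.35, darken=0.85, desaturate=0.9):
    """Darken and slightly desaturate the area outside a soft, centered mask.

    frame is a float RGB array (height x width x 3) with values in 0-1.
    """
    h, w = frame.shape[:2]
    # Normalized distance from the frame center (roughly 1.0 at the edges)
    y, x = np.mgrid[0:h, 0:w]
    dist = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2)
    # Feathered mask: 1.0 in the center, easing down to 0.0 toward the corners
    mask = np.clip((1.0 - dist) / feather, 0.0, 1.0)[..., None]
    # Outside the mask: lower brightness and pull saturation back a touch
    luma = frame.mean(axis=2, keepdims=True)
    outside = (luma + (frame - luma) * desaturate) * darken
    return frame * mask + outside * (1.0 - mask)
```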

If I want any global changes applied after the color correction, then I need to do this using an adjustment layer. One example is a film emulation filter like LUT Utility or FilmConvert. Technically, if the effect should look like film negative, it should be a filter applied before the color board. If the look should be like it’s part of a release print (positive film stock), then it should go after. For the most part, I stick to after (using an adjustment layer), because it’s easier to control, as well as to remove, if the client decides against it. Remember that most film emulation LUTs are based on print stock and therefore should go on the higher layer by definition. Of course, other global changes, like additional color correction filters, grain or a combination of the two, can be added. These should all be done as adjustment layers or track-based effects, for consistent application across your entire timeline.

©2014 Oliver Peters

The FCP X – RED – Resolve Dance

I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that has been done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when the RED Rocket card was installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled; this continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files, and slates were used for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, and this information was also added to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and optimized/original media is seamless, and FCP X takes care of properly changing all sizing information. For example, this project used 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but when you toggle to proxy, it has to make the corresponding adjustments for media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.
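
As a rough illustration of why that sizing bookkeeping matters, here is the fit-scaling arithmetic for each mode. The exact pixel dimensions are assumptions for this example (RED “4K” 16×9 taken as 4096 pixels wide, with the ProRes Proxy at half that), not values quoted from FCP X.

```python
TIMELINE_W = 1920          # 1080p timeline width
ORIGINAL_W = 4096          # assumed width of the 4K .r3d source
PROXY_W = ORIGINAL_W // 2  # half-sized ProRes Proxy transcode

# Scale factor spatial conform must apply to "fit" each version of the media
fit_original = TIMELINE_W / ORIGINAL_W   # ~0.47
fit_proxy = TIMELINE_W / PROXY_W         # ~0.94

# A 120% editorial blow-up is relative to the timeline, so the framing stays
# identical even though the underlying media scale differs per mode.
print(round(fit_original * 1.2, 4), round(fit_proxy * 1.2, 4))  # 0.5625 1.125
```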

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system – proxies for fast and efficient editing, original media or high-resolution transcodes for finishing. To keep the process fast and initially true to the color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. Whatever the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We considered these the new camera masters for those clips. Even there, REDCINE-X Pro crashed on a few of the clips, but I still had enough good takes to cut the scene.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc., and set the event browser to “hide rejected”. Next I review the footage based on the script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I mark these as Favorites. As I do this, I’ll select the whole take and not just a portion, since I want access to the entire take while editing.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not the beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X rather than use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this back to my initial “open in timeline” syncing, which I’ll explain in a bit. In any case, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
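
The arithmetic behind that resize translation looks roughly like this (an illustration only – the 4096-pixel source width is an assumption for “4K”, and Resolve’s internal math is certainly more involved):

```python
SOURCE_W = 4096     # assumed 4K source width
TIMELINE_W = 1920   # 1080p timeline
blowup = 1.2        # 120% scale applied in the FCP X timeline

# A 120% blow-up shows 1/1.2 of the frame, so the visible slice of the
# original 4K frame is about 3413 pixels wide.
visible_w = SOURCE_W / blowup

# Cropping that slice straight out of the 4K frame and scaling it once to
# 1920 is still a downscale, so no resolution is lost by the "blow-up".
effective_scale = TIMELINE_W / visible_w
print(round(visible_w), round(effective_scale, 4))   # 3413 0.5625
```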

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the five or six layers that made up this composite underneath the primary storyline. All fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and the audio was equally shifted in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, or using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy setup. Select the handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProResHQ with an additional 3 seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it were trying to show a native 4K image at 1:1 – except that this was now 1080p media and NOT 4K. Apparently this resizing metadata is incorrectly carried in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now to an explanation of the audio issue. FCP X master clips are NOT like master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools rig, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export collapsed all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or is unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project and then, using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

24p HD Restoration

There’s a lot of good film content that only lives on 4×3 SD 29.97 interlaced videotape masters. Certainly in many cases you can go back and retransfer the film to give it new life, but for many small filmmakers, the associated costs put that out of reach. In general, I’m referring to projects with $0 budgets. Is there a way to get an acceptable HD product from an old Digibeta master without breaking the bank? A recent project of mine would say yes.

How we got here

I had a rather storied history with this film. It was originally shot on 35mm negative, framed for 1.85:1, with the intent to end up with a cut negative and release prints for theatrical distribution. It was being posted around 2001 at a facility where I worked, and I was involved with some of the post production, although not the original edit. At the time, synced dailies were transferred to Beta-SP with burn-in data on the top and bottom of the frame for offline editing purposes. As was common practice back then, the 24fps film negative was transferred to the interlaced video standard of 29.97fps with added 2:3 pulldown – a process that duplicates additional fields from the film frames, such that 24 film frames add up evenly to 60 video fields in the NTSC world. This footage is loaded into an Avid, where – depending on the system – the redundant fields are removed, or the list that goes to the negative cutter compensates for the adjustments back to a frame-accurate 24fps film cut.
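
For anyone who hasn’t worked with pulldown, this small Python sketch illustrates the cadence being described – how four progressive film frames (the classic A-B-C-D group) are spread across ten fields, so that 24 film frames fill 60 NTSC fields, and which resulting video frames end up split-field. It is purely an illustration of the cadence, not any vendor’s telecine code.

```python
def pulldown_fields(film_frames):
    """Expand progressive film frames into a 2:3 pulldown field sequence."""
    cadence = [2, 3, 2, 3]   # fields contributed by the A, B, C, D frames
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

group = ["A", "B", "C", "D"]              # four film frames
fields = pulldown_fields(group)           # ten fields = five video frames
video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(video_frames)
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
# The BC and CD frames are the split-field frames that reverse telecine
# has to detect and discard to get back to whole 24p frames.
```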

For the purpose of festival screenings, the project file was loaded into our Avid Symphony, where I conformed the film at uncompressed SD resolution from the Beta-SP dailies and handled color correction. I applied a mask to hide the burn-in and ended up with a letterboxed sequence, which was then output to Digibeta for previews and sales pitches to potential distributors. The negative went off to the negative cutter but, for a variety of reasons, that cut was never fully completed. In the two years before a distribution deal was secured, additional minor video changes were made throughout the film, resulting in a revised cut that no longer matched the negative cut.

Ultimately, the distribution deal that was struck was only for international video release and nothing theatrical, which meant that rather than finishing/revising the negative cut, the most cost-effective process was to deliver a clean video master. Except that all video source material had burn-in and the distributor required a full-height 4×3 master, so letterboxing was out. To meet the delivery requirements, the filmmaker would have to go back to the original negative and retransfer it in a 4×3 SD format and master that to Digital Betacam. Since the negative was only partially cut and additional shots had been added or changed, I supervised the color-corrected transfer of all required 35mm film footage. Then I rebuilt the new edit timeline, largely by eye-matching the new, clean footage to the old sequence. Once done and synced with the mix, a Digibeta master was created and off it went for distribution.

What goes around comes around

After a few years in distribution, the filmmaker retrieved his master and rights to the film, with the hope of breathing a little life into it through self-distribution – DVDs, Blu-rays, Internet, etc. With the masters back in-hand, it was now a question of how best to create a new product. One thought was simply to letter-box the film (to be in the director’s desired aspect) and call it a day. Of course, that still wouldn’t be in HD, which is where I stepped back in to create a restored master that would work for HD distribution.

Obviously, if there was any budget to retransfer the film negative to HD and repeat the same conforming operation that I’d done a few years ago – except now in HD – that would have been preferable. Naturally, if you have some budget, that path will give you better results, so shop around. Unfortunately, while desktop tools for editors and color correction have become dirt-cheap in the intervening years, film-to-tape transfer and film scanning services have not – and these retain a high price tag. So if I was to create a new HD master, it had to be from the existing 4×3 NTSC interlaced Digibeta master as the starting point.

In my experience, if you are going to blow up SD to HD frame sizes, it’s best to start with a progressive, not interlaced, source. That’s even more true when working with software, rather than hardware up-converters like a Teranex. Step one was to reconstruct a correct 23.98p SD master from the 29.97i source. To do this, I captured the Digibeta master as a ProResHQ file.

Avid Media Composer to the rescue

When you talk about software tools that are commonly available to most producers, there are a number of applications that can correctly apply a “reverse telecine” process. There are, of course, hardware solutions from Snell and Teranex (Blackmagic Design) that do an excellent job, but I’m focusing on a DIY solution in this post. That involves deconstructing the 2:3 pulldown (also called “3:2 pulldown”) cadence of whole and split-field frames back into only whole frames, without any interlaced tearing (split-field frames). After Effects and Cinema Tools offer this feature, but they really only work well when the entire source clip has a consistent, unbroken cadence. This film had been completed in NTSC 29.97 TV-land, so the cadence would frequently change at cuts. In addition, some digital noise reduction had been applied to the final master after the Avid output to tape, which further altered the cadence at some cuts. Therefore, to reconstruct the proper cadence, changes had to be made every few cuts and, in some scenes, at every shot change. This meant slicing the master file at every required point and applying a different setting to each clip. The only software I know of that can do this effectively is Avid Media Composer.

Start in Media Composer by creating a 29.97 NTSC 4×3 project for the original source. Import the film file there. Next, create a second 23.98 NTSC 4×3 project. Open the bin from the 29.97 project into the 23.98 project and edit the 29.97 film clip to a new 23.98 sequence. Media Composer will apply a default motion adapter to the clip (which is the entire film) in order to reconcile the 29.97 interlaced frame rate into a 23.98 progressive timeline.

Now comes the hard part. Open the Motion Effect Editor window and “promote” the effect to gain access to the advanced controls. Set the Type to “Both Fields”, Source to “Film with 2:3 Pulldown” and Output to “Progressive”. Although you can hit “Detect” and let Media Composer try to decide the right cadence, it will likely guess incorrectly on a complex file like this. Instead, under the 2:3 Pulldown tab, toggle through the cadence options until you only see whole frames when you step through the shot frame-by-frame. Move forward to the next shot(s) until you see the cadence change and you see split-field frames again. Split the video track (place an “add edit”) at that cut and step through the cadence choices again to find the right combination. Rinse and repeat for the whole film.

Due to the nature of the process, you might have a cut that itself occurs within a split-field frame. That’s usually because this was a cut in the negative and was transferred as a split-field video frame. In that situation, you will have to remove the entire frame across both audio and video. These tiny 1-frame adjustments throughout the film will slightly shorten the duration, but usually it’s not a big deal. However, the audio edit may or may not be noticeable. If it can’t simply be fixed by a short 2-frame dissolve, then usually it’s possible to shift the audio edit a little into a pause between words, where it will sound fine.

Once the entire film is done, export a new self-contained master file. Depending on codecs and options, this might require a mixdown within Avid, especially if AMA linking was used. That was the case for this project, because I started out in ProResHQ. After export, you’ll have a clean, reconstructed 23.98p 4×3 NTSC-sized (720×486) master file. Now for the blow-up to HD.

DaVinci Resolve

There are many applications and filters that can blow up SD to HD footage, but often the results end up soft. I’ve found DaVinci Resolve to offer some of the cleanest resizing, along with very fast rendering for the final output. Resolve offers three scaling algorithms, with “Sharper” providing the crispest blow-up. The second issue is that I wanted to restore the wider aspect ratio, which is inherent in going from 4×3 to 16×9. This meant blowing up more than normal – enough to fit the image width and crop the top and bottom of the frame. Since Resolve has the editing tools to split clips at cuts, you have the option to change the vertical position of a frame using the tilt control. Plus, you can do this creatively on a shot-by-shot basis if you want to. This way you can optimize each shot to best fit the 16×9 frame, rather than arbitrarily lopping off a preset amount from the top and bottom.
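
Some back-of-the-envelope numbers for this blow-up (hedged – the exact figures depend on how the NTSC non-square pixels are handled, and a 0.9 pixel aspect ratio is assumed here):

```python
SD_W, SD_H = 720, 486        # NTSC frame dimensions (non-square pixels)
PAR = 0.9                    # assumed 4:3 NTSC pixel aspect ratio
HD_W, HD_H = 1280, 720       # 720p target

square_pixel_w = SD_W * PAR      # ~648: square-pixel width of the SD frame
scale = HD_W / square_pixel_w    # ~1.98x blow-up to fill the 1280 width
scaled_h = SD_H * scale          # ~960 lines after scaling
crop_total = scaled_h - HD_H     # ~240 lines to crop, top plus bottom
print(round(scale, 2), round(scaled_h), round(crop_total))   # 1.98 960 240
```

The tilt control then determines how those roughly 240 lines are split between the top and bottom of each shot, which is why reframing shot-by-shot is worth the effort.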

You actually have two options. The first is to blow up the film to a large 4×3 frame out of Resolve and then do the slicing and vertical reframing in yet another application, like FCP 7. That’s what I did originally with this project, because back then the available version of Resolve did not offer what I felt were solid editing tools. Today, I would use the second option, which is to do all of the reframing strictly within Resolve 11.

As always, there are some uncontrollable issues in this process. The original transfer of the film to Digibeta was done on a Rank Cintel Mark III, which is a telecine unit that used a CRT (literally an oscilloscope tube) as a light source. The images from these tubes get softer as they age and, therefore, they require periodic scheduled replacement. During the course of the transfer of the film, the lab replaced the tube, which resulted in a noticeable difference in crispness between shots done before and after the replacement. In the SD world, this didn’t appear to be a huge deal. Once I started blowing up that footage, however, it really made a difference. The crisper footage (after the tube replacement) held up to more of a blow-up than the earlier footage. In the end, I opted to only take the film to 720p (1280×720) rather than a full 1080p (1920×1080), just because I didn’t feel that the majority of the film held up well enough at 1080. Not just for the softness, but also in the level of film grain. Not ideal, but the best that can be expected under the circumstances. At 720p, it’s still quite good on Blu-ray, standard DVD or for HD over the web.

To finish the process, I dust-busted the film to fix places with obvious negative dirt (white specks in the frame) caused by the initial handling of the film negative. I used FCP X and CoreMelt’s SliceX to hide and cover negative dirt, but other options include the built-in functions within Avid Media Composer. While 35mm film still holds a certain intangible visual charm – even in such a “manipulated” state – the process certainly makes you appreciate modern digital cameras like the ARRI ALEXA!

As an aside, I’ve done two other complete films this way, but in those cases, I was fortunate to work from 1080i masters, so no blow-up was required. One was a film transferred in its entirety from a low-contrast print, broken into reels. The second was assembled digitally and output to intermediate HDCAM-SR 23.98 masters for each reel. These were then assembled to a 1080i composite master. Aside from being in HD to start with, cadence changes only occurred at the edits between reels. This meant that it only required 5 or 6 cadence corrections to fix the entire film.

©2014 Oliver Peters

New NLE Color Features

As someone who does color correction as often within an NLE as in a dedicated grading application, it’s nice to see that Apple and Adobe are not treating their color tools as an afterthought. (No snide Apple Color comments, please.) Both the Final Cut Pro 10.1.2 and Creative Cloud 2014 updates include new tools specifically designed to improve color correction.

Apple Final Cut Pro 10.1.2

This FCP X update includes a new, built-in LUT (look-up table) feature designed to correct log-encoded camera files into Rec 709 color space. This type of LUT is camera-specific, and FCP X now comes with preset LUTs for ARRI, Sony, Canon and Blackmagic Design cameras. This correction is applied as part of the media file’s color profile and, as such, takes effect before any filters or color correction are applied.

These LUTs can be enabled for master clips in the event, or after a clip has been edited to a sequence (FCP X project). The log processing can be applied to a single clip or a batch of clips in the event browser. Simply highlight one or more clips, open the inspector and choose the “settings” selection. In that pane, access the “log processing” pulldown menu and choose one of the camera options. This will apply that camera LUT to all selected clips and will stay with a clip when it’s edited to the sequence. Individual clips in the sequence can later be enabled or disabled as needed. This LUT information does not pass through as part of an FCPXML roundtrip, such as sending a sequence to Resolve for color grading.

Although camera LUTs are specific to the color science used for each camera model’s type of log encoding, this doesn’t mean you can’t use a different LUT. Naturally some will be too extreme and not desirable. Some, however, are close and using a different LUT might give you a desirable creative result, somewhat like cross-processing in a film lab.

Adobe CC 2014 – Premiere Pro CC and SpeedGrade CC

In this CC 2014 release, Adobe added master clip effects that travel back and forth between Premiere Pro CC and SpeedGrade CC via Direct Link. Master clip effects are relational, meaning that the color correction is applied to the master clip and, therefore, every instance of this clip that is edited to the sequence will have the same correction applied to it automatically. When you send the Premiere Pro CC sequence to SpeedGrade CC, you’ll see that the 2014 version now has two correction tabs: master clip and clip. If you want to apply a master clip effect, choose that tab and do your grade. If other sections of the same clip appear on the timeline, they have automatically been graded.

Of course, with a lot of run-and-gun footage, iris levels and lighting change during a shot, so one setting might not work for the entire clip. In that case, you can add a second level of grading by tweaking the shot in the clip tab. Effectively you now have two levels of grading. Depending on the show, you can grade in the master clip tab, the clip tab or both. When the sequence goes back to Premiere Pro CC, SpeedGrade CC corrections are applied as Lumetri effects added to each sequence clip. Any master clip effects also “ripple back” to the master clip in the bin. This way, if you cut a new section from an already-graded master clip into that or any other sequence, color correction has already been applied to it.

In the example I created, the shot was graded with a master clip effect. Then I added more primary correction and a filter effect, using the clip tab for the first instance of the clip in the sequence. This was used to create a cartoon look for that segment of the timeline. Comparing the two versions of these shots, one has only a master clip effect (the shots match), while the other has a separate clip effect added on top of the first (the shots are different).

Since master clip effects apply globally to source clips within a project, editors should be careful about changing them or copy-and-pasting them, as you may inadvertently alter another sequence within the same project.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage into Rec 709 color space, either as a straight conversion or with a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec 709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec 709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to the color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both the QuickTime and DPX files that I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F-3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use it with images that were already encoded with vibrant Rec 709 colors.
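
For reference, the published ASC CDL math that slope/offset/power grading is built on looks like the sketch below (a generic illustration, not ARRI’s implementation – the function and example values are my own):

```python
def cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1), sat=1.0):
    """Apply slope (gain), offset (lift) and power (gamma) per channel."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = v * s + o
        out.append(max(v, 0.0) ** p)      # clamp negatives before the power
    # Saturation is applied against Rec 709 luma weights
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return tuple(luma + sat * (v - luma) for v in out)

# Example: warm up an 18% gray patch slightly and lift the shadows a touch
print(cdl((0.18, 0.18, 0.18), slope=(1.05, 1.0, 0.95), offset=(0.02,) * 3))
```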

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT as a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C, and FCP X automatically corrects these to Rec 709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
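
If you are curious what a .cube file actually contains, here is a hedged sketch of reading one and sampling it – nearest-neighbor lookup for brevity, whereas a real plug-in such as LUT Utility will interpolate. It is not based on any vendor’s code.

```python
def load_cube(path):
    """Parse a Resolve-style .cube 3D LUT: a size plus size**3 RGB rows."""
    size, table = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            if line.upper().startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "+-.":
                table.append(tuple(float(x) for x in line.split()))
    return size, table                        # red index varies fastest

def apply_lut(rgb, size, table):
    """Nearest-neighbor lookup of a 0-1 RGB value in the 3D LUT grid."""
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r + g * size + b * size * size]
```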

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

SpeedGrade Looks

In a previous post, I discussed how to use Final Cut Pro X Color Board presets. For that post, I created a set of presets and made them available as a free download. That remains one of the most viewed blog posts I’ve written and literally thousands of readers have downloaded the presets. In this post, I’m doing much the same with Adobe SpeedGrade CC Looks.

Adobe SpeedGrade CC uses the Lumetri deep color engine, and presets may be shared between Premiere Pro CC and SpeedGrade CC via the Direct Link protocol. Grades, LUTs and presets applied in SpeedGrade are combined into a single Lumetri filter effect that gets applied to the clip in Premiere Pro. When SpeedGrade CC is installed, it includes a number of preset Look examples developed by Adobe and Looks Labs. These include stylized grades, film emulations and camera log conversions, among others. When you work in SpeedGrade, it is possible to save user-created Looks as well. These are a combination of any set of layers and grades that you have applied to a single clip. They may include color correction adjustments, but also LUTs and special visual effects filters. User files are saved as .look files with a corresponding .jpg thumbnail of the shot that the grade was originally applied to. These .look files are saved by SpeedGrade in a number of possible folder locations, so you have to be careful as to which folder is open and selected when you save a file.

I have created a variety of custom Looks covering color treatments, effects, film styles and more. These Looks were built around an image I’ve used for many of my color correction blog posts, because it has a nice spectrum of colors. For example, it’s hard to set up a characteristic “orange & teal” look when the image has no blues, greens or skin tones. To start, download the file from the link below and unzip the archive. Inside, you’ll find a folder called “op_sgrades”. Let me point out that my testing and instructions are based on a Mac. I have not tested this on a Windows PC, so I am not sure where the proper default installation folder lives.

On a Mac, the supplied Looks styles (Lumetri and SpeedLooks) are inside the closed application bundle. To install this new folder, you need to open the SpeedGrade CC package contents (right-click the application icon and choose “show package contents”). This will expose the application’s Contents folder. From there, navigate to the MacOS subfolder and then the Look Examples subfolder. Drag the “op_sgrades” folder into the Look Examples folder. When you next open SpeedGrade CC, you will be able to access this new set of Looks in the Looks Management pane. On a PC, right-click the application program icon and select “open file location”. This will expose a set of files, including the Look Examples folder.

Another caveat to this procedure: what happens with the next Adobe update to SpeedGrade CC? I’m not sure what happens to any folders inside the application contents package during an update. It may be that you have to install this custom folder into the Look Examples folder again after a SpeedGrade CC version update. We’ll see when the next SpeedGrade CC update happens.

Since each of these presets was built on the same log-encoded (flat) image, you will need to adjust the grade according to the image you apply it to. In all of these, the first Primary layer (bottom of the stack) is the same and is used to neutralize the image. The sliders I adjusted include input saturation, pivot, contrast, temperature and magenta. Only the global settings were adjusted in this layer. You can tweak it, hide/disable it or replace it with a LUT adjustment instead. I have stayed away from camera LUTs as a way of neutralizing the image, because these will drastically affect the other corrections in the stack – often in unpredictable ways.

If you look back at my FCP X Color Board Presets article, you may notice that those looks were more extreme. In this set, I stayed more subtle, but the presets will be more complex, since SpeedGrade CC permits built-in effects. Some of these may be slow to display and update. This is especially true of any that include blurs.

Click here to download a .zip archive file of the Looks presets.

©2014 Oliver Peters

Comparing Color, Resolve, SpeedGrade and Symphony

It’s time to talk about color correctors. In this post, I’ll compare Color, Resolve, SpeedGrade and Symphony. These are the popular desktop color correction systems in use today. Certainly there are other options, like Filmlight’s Baselight Editions plug-in, as well as other NLEs with their own powerful color correction tools, including Autodesk Smoke and Quantel Rio. Some of these fall outside of the budget range of small shops or don’t really provide a correction workflow. For the sake of simplicity, in this post I’ll stick with the four I see the most.

Avid Technology Media Composer + Symphony

Although it started as a separate NLE product with dedicated hardware, today’s Symphony is really an add-on option for Media Composer. The main feature that differentiates Symphony from Media Composer in file-based workflows is an enhanced color correction toolset. Symphony used to be the “gold standard” for color correction within an NLE, combining controls “borrowed” from many other applications and systems, like Photoshop, hardware proc amps and the hardware versions of the DaVinci correctors. It was the first to use the color wheel control model for balance/hue offsets. A subset of the Symphony tools has been migrated into Media Composer. Basic correction features in Symphony include channel mixing, hue offsets (color balance), levels, curves and more.

Many perceive Symphony correction as a single level or layer of correction, but that’s not exactly true. Color correction occurs on two levels – segment and program track. Most of your correction is on individual clips and Symphony offers a relational grading system. This means you can apply grades based on single clips or all instances of a master clip, tape ID, camera, etc. All clips used from a common source can be automatically graded once the first instance of that clip is graded on the timeline. The program track grade allows the colorist to apply an additional layer of grading to a clip, a section of the timeline or the entire timeline. So, when the client asks for everything to be darker, a global adjustment can be made using the program track.

Symphony also offers secondary grading based on isolating colors via an HSL key and adjusting that range. Although Symphony doesn’t offer nodes or correction layers like other software, you can use Avid’s video track timeline hierarchy to add additional correction to blank tracks above those tracks containing the video clips. In this way you are using the tracks as de facto adjustment layers. The biggest weakness is the lack of built-in masking tools to create what is commonly referred to as “power windows” (a term originated by DaVinci). The workaround is to use Avid’s built-in Intraframe/Animatte effects tools to create masks. Then you can apply additional spot correction within the mask area. It takes a bit more work than other tools, but it’s definitely possible. Finally, many plug-in packages, like GenArts Sapphire, Boris Continuum Complete and Magic Bullet Looks include vignette filters that will work with Symphony.

The bottom line is that Symphony started it all, though by today’s standards it is “long in the tooth”. Nevertheless, the relational grading model – and the fact that you are working within the NLE and can freely move between color correction and editing/trimming – makes Symphony a fast unit to operate, especially in time-sensitive, long-form productions, like TV shows.

Adobe SpeedGrade CC

If you are current as a Creative Cloud subscriber, then you have access to the most recent version of Adobe Premiere Pro CC and SpeedGrade CC. With the updates introduced late last year, Adobe added Direct Link interaction between Premiere Pro and SpeedGrade. When you use Direct Link to send your Premiere Pro timeline to SpeedGrade, the actual Premiere Pro sequence becomes the SpeedGrade sequence. This means codec decoding, transitions and Premiere Pro effects are handled by Premiere Pro’s effects engine, even though you are working inside SpeedGrade. As such, a project created via Direct Link supports features and codecs that would not be possible within a standalone SpeedGrade project.

Another unique aspect is that native and third-party transitions and effects used in Premiere Pro are visible (though not adjustable) when you are working inside SpeedGrade. This is an important distinction, because other correction workflows that rely on roundtrips don’t include NLE-based filters. You can’t see how the correction will be affected by a filter used in the NLE timeline. Naturally, in the case of SpeedGrade, this only works if you are working on a machine with the same third-party filters installed. When you return to Premiere Pro from SpeedGrade, the color corrections on clips are collapsed into a Lumetri filter effect that is applied to the clip or adjustment layer within the Premiere Pro sequence. Essentially this Lumetri effect is similar to a LUT that encapsulates all of the grading layers applied in SpeedGrade into a single effect in Premiere Pro. This is possible because the two applications share the same color science. The result is a render-free workflow with the easy ability to go back and forth between Premiere Pro and SpeedGrade for changes and adjustments. Unlike a standard LUT, Lumetri filters can carry masks and keyframes, and they are 100% precise.

As a color corrector, SpeedGrade is designed with a layer-based interface, much like Photoshop. Layers can be primary (fullscreen), secondary (keys and masks) or filters. A healthy selection of effects filters and LUTs are included. The correction model splits the signal into what amounts to a 12-way color wheel arrangement. There are lift/gamma/gain controls for the overall image, as well as for each of the shadow, middle and highlight ranges. Controls can be configured as wheels or sliders, with additional sliders for contrast, pivot, temperature (red vs. blue bias), magenta (red/blue vs. green bias) and saturation. There are no curves controls.

Overall, I like the looks I get with SpeedGrade, but I find it lacking in some ways. There are definite plusses and minuses. I miss the curves. It currently does not work with Blackmagic Design hardware; Matrox, Bluefish and AJA are OK. It’s got a tracker, but I find both tracking and masking to be mediocre. The biggest workflow shortcoming is the lack of a temporary memory register feature. You can save a whole grade, which saves the entire stack of grading layers applied to a clip as a Lumetri filter. You can apply grades from earlier timeline clips quite simply, and SpeedGrade lets you open multiple playheads for comparison/correction between multiple shots on the timeline. You can access the nine grades before and the nine grades after the current playhead position. You can also copy the grade from the clip below the mouse position to the clip under the playhead by pressing the C key. What you cannot do is store a random set of grades or just a single layer in a temporary buffer and then apply it from that buffer somewhere else in the timeline. Adding these two items would greatly speed up the SpeedGrade workflow.

Blackmagic Design DaVinci Resolve

The DaVinci name is legendary among color correction products, but that reputation was earned with its hardware products, like the DaVinci 2K. Resolve was the software-based product built around a Linux cluster. When Blackmagic bought the assets and technology of DaVinci, all of the legacy hardware products were dropped in favor of concentrating on Resolve as the software with the most life for the future. There are now four versions, including Resolve Lite (free), Resolve (paid – software only), Resolve with a Blackmagic control surface and Resolve for Linux. The first three work on Mac and PC. You may download the free Lite version from the Blackmagic website or Apple’s Mac App Store. The Lite version has nearly all of the power of the paid software, but with these limitations: noise reduction, stereoscopic tools and the ability to output at a resolution above UltraHD require a paid version.

I’m writing this based on Resolve 10, which has rudimentary editing features. It is designed as a standalone color corrector that can be used for some editing. Blackmagic Design doubled down on the editing side with Resolve 11 (shown at NAB 2014). When that’s finally released this summer, you’ll have a powerful NLE built into the application. The demos at NAB were certainly impressive. If that turns out to be the case, Resolve 11 would function as an Avid Symphony or Quantel Rio type of system. That means you could freely move between creative editing and color correction, simply by changing tabs in the interface. For now, Resolve 10 is mainly a color corrector, with some very good roundtrip and conforming support for other NLEs. Specifically, there is very good support for Avid and FCP X workflows.

As a color corrector, Resolve offers the widest set of correction tools of any of these systems. In the work I’ve done, Resolve allows for more extreme grading and is more precise when trying to correct problem shots. I’ve done corrections with it that would have been impossible with any other tool. The correction controls include curves, wheels, primary sliders, channel mixers and more. Corrections are node-based and can be applied to clips or an entire track. Nodes can be applied in a serial or parallel fashion, with special splitter/combiner and layer mixing nodes. The latter includes Photoshop-style blend modes. Unlike SpeedGrade, you can store the value of a single node in a buffer (using the keyboard copy function) and then paste the value of just that node somewhere else. This makes it pretty fast when working up and down a timeline. Finally, the tracker is amazing.

A few things bother me about Resolve, in spite of its powerful toolset. The interface almost presents too many tools and it becomes very easy to lose track of what was done and where. There is no large viewer or fullscreen mode that doesn’t hide the node tree. This forces a lot of toggling between workspace configurations. If you have two displays, you cannot use the second display for anything other than the scopes and audio mixer. (This will change with Resolve 11.) Finally, you can only use Blackmagic Design hardware to view the video output on a grading monitor.

Apple Color

Some of you are saying, “Why talk about that? It was killed off a few years ago! Who uses that anymore?” Yes, I know. What people so quickly forget is that when the software was FinalTouch (before Apple’s purchase), it was very expensive and considered to be very innovative. Apple bought it, added some features and cleaned up some of the workflow. As part of Final Cut Studio, it set the standard for round-tripping with an NLE. Unfortunately for many Mac users, it retained its less glossy, “Unixy” interface and thus didn’t really catch on with many editors. However, it still works just fine on the newest machines and OS versions and remains a fast, high-quality color corrector.

Nearly all of the long-form jobs I’ve done – including feature films and TV shows up to even a few months ago – have been done with Color. There are two reasons that I prefer it. The first is that most of these jobs were cut using FCP 7, so it’s still the most integrated software for these projects. More importantly, there are several key features that make it faster than SpeedGrade and Resolve for projects that fall within a standard range of grading. In other words, the in-camera look was good and there were no huge problem areas, plus the desired grade didn’t swing into extreme looks.

Color is designed with 10 levels of grading per clip – primary in, eight secondaries and primary out. Since secondaries can be fullscreen or a portion of the image qualified by an HSL key or mask, each secondary layer can actually have two corrections – inside and outside of the mask. In addition to these, there’s a ColorFX layer for node-based filter effects, which can also include color adjustments. In reality, the maximum number of corrections to a single clip can be up to 19. The primary corrections can include value changes for RGB lift/gamma/gain and saturation levels, as well as printer lights. On top of this are lift/gamma/gain color wheels and luma controls. Lastly, there are curves. The secondaries include custom mask shapes and hue/sat/luma curves. There’s a tracker, too, but it’s not that great.

Where Color still shines for me is in workflow. Each layer is represented by a labelled bar on the timeline under the clip. This makes it easy to apply only a single secondary adjustment to other clips on the timeline simply by sliding the corresponding secondary bar from one timeline clip to one or more of the others. For example, I used Secondary 3 to qualify a person’s face and brighten it. I could then simply drag the bar for S3 that appears under the first clip on the timeline over to every other clip with the same person and similar set-up. All without selecting each of these clips prior to applying the adjustment.

Color works with all cards that work with Final Cut Pro, so there’s no AJA versus Blackmagic issue as mentioned above. Dual monitors work well. You can have scopes and the viewer (or a fullscreen viewer) on one display and the full control interface on the other. Realistically, Color works best with up to 2K video and one of the standard Apple codecs (uncompressed or ProRes work best). A lot of the footage I’ve graded with it was ProResHQ or ProRes 4444 that came native from an ARRI Alexa or transcoded from a C300, RED or a Canon 5D/7D. But I’ve also done a film that was all native EX rewrapped as .mov from a Sony camera and Color had no issues. Log-profile footage grades very nicely in Color, so Alexa ProRes 4444 encoded as Log-C forms a real sweet spot for Apple Color.

©2014 Oliver Peters