Final Cut Pro X Organizing Tips


Every nonlinear editing application is a database; however, Apple took this concept to a higher level when it launched Final Cut Pro X. One of its true strengths is making the organization of a mountain of media easier than ever. To get the best experience, an editor should approach the session holistically and not simply rely on FCP X to do all the heavy lifting.

At the start of every new production, I set up and work within a specific folder structure. You can use an application like Post Haste to create a folder layout, pick up some templates online, like those from FDPTraining, or simply create your own template. No matter how you get there, the main folder for that production should include subfolders for camera files, audio, graphics, other media, production documents, and projects. This last folder would include your FCP X library, as well as others, like any After Effects or Motion projects. The objective is to end up with everything that you accrue for this production in a single master folder that you can archive upon completion.
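
If you’d rather script the template than buy an app, a few lines of Python will do. Here’s a minimal sketch; the folder names are just illustrations of the categories above, not a fixed standard.

```python
from pathlib import Path

# Illustrative subfolder names for one production's master folder. These
# mirror the categories above; adjust to taste or to your studio's template.
SUBFOLDERS = [
    "Camera_Files",
    "Audio",
    "Graphics",
    "Other_Media",
    "Production_Documents",
    "Projects",  # FCP X Library, After Effects / Motion projects, etc.
]

def create_production_folders(root: str, production: str) -> Path:
    """Create the master folder and its subfolders for a new production."""
    master = Path(root) / production
    for name in SUBFOLDERS:
        (master / name).mkdir(parents=True, exist_ok=True)
    return master

if __name__ == "__main__":
    create_production_folders("/Volumes/Media", "Acme_Spot_2015")
```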

FCP X Libraries

It helps to understand the Final Cut Pro X Library structure. The Library is what would otherwise be called a project file, but in FCP X terminology an edited sequence is referred to as a Project, while the main session/production file is the Library. Unlike previous versions and other NLEs, the Library is not a closed, binary data file. It is a package file that you can open and peruse by right-clicking the Library icon and using the “show package contents” command. In there you will find various binary files (labeled CurrentVersion.fcpevent) along with a number of media folders. This structure is similar to the way Avid Media Composer project folders are organized on a hard drive. Since FCP X allows you to store imported, proxy, transcoded, and rendered media within the Library package, the media folders can be filled with the actual media used for the production. When you pick this option your Library will grow quite large, but it remains completely under the application’s control, which makes the media management quite robust.

Another option is to leave media files in their place. When this is selected, the Library package’s media folders will contain aliases or shortcut links to the actual media files, which live in one or more folders on your hard drive. In this case, your Library file will stay small and is easier to transfer between systems, since the actual audio and video files are externally located. I suggest spreading things out. For example, I’ll create my Library on one drive, put the autosaved back-up files on another, and keep my media on a third. The advantage is that there is no single point of failure. If the Library files are located on a drive that is backed up via Time Machine or some other system-wide “cloud” back-up utility, you have even more redundancy and protection.

Following this practice, I typically do not place the Library file in the projects folder for the production, unless it lives on a RAID-5 (or better) drive array. If I don’t save it there during actual editing, then it is imperative to copy the Library into the project folder for archiving. The rub is that the package contains aliases, which certain software – particularly LTO back-up software – does not like. My recommendation is to create a compressed archive (.zip) file for every project file (FCP X Library, AE project, Premiere Pro project, etc.) prior to the final archiving of that production. This will prevent conflicts caused by these aliases.
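
Scripting that archiving pass is straightforward. Below is a minimal sketch in Python; the project-file extensions listed in the code are my assumptions. FCP X Libraries are package folders, so they get walked like any directory, and the zip container hides the aliases inside from the back-up software.

```python
import zipfile
from pathlib import Path

# Illustrative project-file extensions; an FCP X Library is a package
# folder (.fcpbundle), so it gets walked like any directory.
PROJECT_SUFFIXES = {".fcpbundle", ".aep", ".prproj", ".motn"}

def zip_item(item: Path) -> Path:
    """Compress one project file or package into a .zip next to the original."""
    archive = item.with_name(item.name + ".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        if item.is_dir():
            for f in item.rglob("*"):
                zf.write(f, f.relative_to(item.parent))
        else:
            zf.write(item, item.name)
    return archive

def zip_projects_folder(projects_folder: str) -> None:
    """Zip every project file found at the top level of the projects folder."""
    for item in Path(projects_folder).iterdir():
        if item.suffix.lower() in PROJECT_SUFFIXES:
            zip_item(item)

zip_projects_folder("/Volumes/Media/Acme_Spot_2015/Projects")
```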

If your method of organization saves Libraries into different folders for each production, it is still possible to have a single folder that shows you all the Libraries on your drives. To do this, create a Smart Folder in the Finder and set up its criteria to filter for FCP X Libraries. Any Library will automatically be filtered into this folder as a shortcut. Clicking on any of these files will launch FCP X and open that Library.
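
The same search can be run from a script. This sketch shells out to macOS’s Spotlight tool, mdfind, with a query that mirrors a “file extension is fcpbundle” criterion – an assumption on my part about how you would phrase the Smart Folder rule.

```python
import subprocess

def find_fcpx_libraries() -> list:
    """Ask Spotlight for every FCP X Library bundle on indexed volumes."""
    result = subprocess.run(
        ["mdfind", 'kMDItemFSName == "*.fcpbundle"'],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

if __name__ == "__main__":
    for library in find_fcpx_libraries():
        print(library)
```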

Getting started

The first level of organization is getting everything into the appropriate folders on the hard drive. Camera files are usually organized by shoot date/location, camera, and card/reel/roll. Mac OS lets you label files with color-coded Finder tags, which enables another level of organization for the editor. As an example, you might have three different on-camera speakers in a production. You could label the clips for each with a colored tag. Another example might be to tag all “circle takes” picked by the director in the field.
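
Finder tags are stored as Spotlight metadata, so they are scriptable too. A quick sketch, using the stock mdls tool to read the kMDItemUserTags attribute; the file path is hypothetical.

```python
import subprocess

def finder_tags(path: str) -> str:
    """Return a file's Finder tags as Spotlight metadata (kMDItemUserTags)."""
    return subprocess.run(
        ["mdls", "-name", "kMDItemUserTags", path],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    # Typical output looks like: kMDItemUserTags = ( Red, "Circle Take" )
    print(finder_tags("/Volumes/Media/Acme_Spot_2015/Camera_Files/A001_C002.mov"))
```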

The next step is to create a new FCP X Library. This is the equivalent of the FCP 7 project file. Typically you would use a single Library for an entire production; however, FCP X permits you to work with multiple open Libraries, just like you could have multiple projects open in FCP 7. To set up all external folder locations within FCP X, highlight the Library name and then, in the Inspector, choose “modify settings” for the storage locations listed in the Library Properties panel. Here you can designate whether media goes directly into the internal folders of the Library package or to other target folders that you assign. This step is similar to setting the Capture Scratch locations in FCP 7.

How to organize clips within FCP X

Final Cut Pro X organizes master source clips on three levels – Events, Keyword Collections, and Smart Collections. These are the equivalent of Bins in other NLEs, but they don’t work in quite the same fashion. When clips are imported, they go into a specific Event, which is closest in function to a Bin. It’s best to keep the number of Events low, since Keyword Collections work within an Event and not across multiple Events. I normally create individual Events for edited sequences, camera footage, audio, graphics, and a few more categories. Clips within an Event can be grouped in the browser display in different ways, such as by import date. This can be useful when you want to quickly find the last few files imported in a production that spans many days. Most of the time I set grouping and sorting to “none”.

To organize clips within an Event, use Keywords. Setting a Keyword for a clip – or a range within a clip – is comparable to creating subclips in FCP 7. When you add a Keyword, that clip or range will automatically be sorted into a Keyword Collection with a matching name. Keywords can be assigned to keyboard hot keys, which creates a very quick way to go through every clip and assign it into a variety of Keyword Collections. Clips can be assigned to more than one Collection. Again, this is equivalent to creating subclips and placing them into separate Bins.

On one set of commercials featuring company employees, I created Keyword Collections for each person, as well as for department, shoot date, store location, employees, managers, and general b-roll footage. This made it easy to derive spots that featured a diverse range of speakers. It also made it easy to locate a specific clip that the director or client might ask for, based on “I think Mary said that” or “It was shot at the Kansas City store”. Keyword Collections can be placed into folders. Collections for people by name went into one folder, Collections by location into another, and so on.

The beauty of Final Cut Pro X is that it works in tandem with any organization you’ve done in the Finder. If you spent the time to move clips into specific folders or you assigned color-coded Finder tags, then this information can be used when importing clips into FCP X. The import dialogue gives you the option to “leave files in place” and to use Finder folders and tags to automatically create corresponding Keyword Collections. Camera files that were organized into camera/date/card folders will automatically be placed into Keyword Collections that are organized in the same fashion. If you tagged clips of Mary, John, and Joe with red, blue, and green tags, then you’ll end up with those clips automatically placed into Keyword Collections named red, blue, and green. Once imported, simply rename the red, blue, and green Collections to Mary, John, and Joe.


The third level of clip organization is Smart Collections. Use these to automatically filter clips based on the criteria that you set. With the release of FCP X version 10.2, Smart Collections have been moved from the Event level (10.1.4 or earlier) to the Library level – meaning that filtering can occur across multiple Events within the Library. By default, new Libraries are created with several preset Smart Collections that can be used, deleted, or modified. Here’s an example of how to use these. When you sync double-system sound clips or multiple cameras, new grouped clips are created – Synchronized Clips and Multicam Clips. These will appear in the Event along with all the other source files, which can be unwieldy. To focus strictly on these new grouped clips, create a Smart Collection with the criteria set by type to include these two categories. Then, as new grouped clips are created, they will automatically be filtered into this Smart Collection, thus reducing clutter for the editor.

Playing nice with others

Final Cut Pro X was designed around a new paradigm, so it tends to live in its own world. Most professional editors need a higher level of interoperability with other applications and with outside vendors. For these functions, you’ll need to turn to third-party applications from a handful of vendors that have focused on workflow productivity utilities for FCP X. These include Intelligent Assistance/Assisted Editing, XMiL, Spherico, Marquis Broadcast, and Thomas Szabo. Their utilities make it possible to go between FCP X and the outside world through list formats like AAF, EDL, and XML.

Final Cut’s only form of decision list exchange is FCPXML, which is a distinctly different data format from other forms of XML. Apple Logic Pro X, Blackmagic Design DaVinci Resolve, and Autodesk Smoke can read it. Everything else requires a translated file, and that’s where these independent developers come in. Once you use an application like XtoCC (formerly Xto7) from Intelligent Assistance to convert FCPXML to XML for an edited sequence, other possibilities open up. The translated XML file can now be brought into Adobe Premiere Pro or FCP 7. Or you can use other tools designed for FCP 7. For instance, I needed to generate a print-out of markers with comments and thumbnail images from a film, in order to hand off notes to the visual effects company. By bringing a converted XML file into Digital Heaven’s Final Print – originally designed with only the older Final Cut in mind – this became a doable task.
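
Since FCPXML is plain XML, simple jobs like pulling a marker list don’t even require a translation step. Here’s a minimal Python sketch; it assumes the markers appear as <marker> elements carrying start and value attributes, which is how FCPXML stores them.

```python
import xml.etree.ElementTree as ET

def list_markers(fcpxml_path: str):
    """Yield (start, comment) pairs for every marker in an FCPXML file.

    Markers are <marker> elements nested inside clips, with a 'start'
    attribute (a rational time such as '3600/25s') and a 'value'
    attribute holding the marker's name/comment.
    """
    tree = ET.parse(fcpxml_path)
    for marker in tree.getroot().iter("marker"):
        yield marker.get("start", "?"), marker.get("value", "")

if __name__ == "__main__":
    for start, comment in list_markers("sequence.fcpxml"):
        print(f"{start}\t{comment}")
```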

Thomas Szabo has concentrated on some of the media functions that are still lacking in FCP X. Need to get to After Effects or Nuke? The ClipExporter and ClipExporter2 applications fit the bill. His newest tool is PrimariesExporter. This utility uses FCPXML to enable batch exports of clips from a timeline, a series of single-frame exports based on markers, or a list of clip metadata. Intelligent Assistance offers the widest set of tools for FCP X, including Producer’s Best Friend, which lets editors create a range of reports needed on most major jobs and delivers them in spreadsheet format.

Understanding the thought processes behind FCP X and learning to use its powerful relational database will get you through complex projects in record time. Better yet, it gives you the confidence to know that no editorial stone was left unturned. For more information on advanced workflows and organization with Final Cut Pro X, check out FCPworks, MacBreak Studio (hosted by Pixel Corps), Larry Jordan, and Ripple Training.

For those who want to know more about the nuts and bolts of the post production workflow for feature films, check out Mike Matzdorff’s “Final Cut Pro X: Pro Workflow”, an iBook that’s a step-by-step advanced guide based on the lessons learned on Focus.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

DaVinci Resolve – 10 Tips to Improve Your Skills


Blackmagic Design’s DaVinci Resolve is one of the pre-eminent color correction applications – all the more amazing considering that it’s so accessible to any user. Even the free Lite version does nearly everything you’d want from color grading software. If you understand how to use a 3-way color correction filter and you comprehend procedural nodes as a method of stacking corrections, then it’s easy to get proficient with Resolve, given a bit of serious seat time. The following tips are designed to help you get a little more comfortable with the nuances of Resolve.

Primary sliders. Resolve gives you two ways to adjust primary color correction – color wheels and sliders. Most people gravitate to the wheels panel, but the sliders panel is often faster and more precise. Adjustments made in either control will show up in the other. If you adjust color balance using the sliders, while monitoring the RGB parade display and/or the histogram on the video scopes, then it’s very easy to dial in perfect black and white balance for a shot. If the blue shadow portion looks too high on the RGB parade display, it means that the shadows of the image will look bluish. Simply move the blue lift slider lower to push the shadows closer to a true black. An added benefit of this panel is that the controls respond to a mouse scroll wheel, which is great if you don’t have access to a control surface. Hover the mouse over the slider that you want to adjust and twirl the wheel up or down to make your correction.

Gang/ungang curves. Given the propensity of cameras to record with log gamma profiles, you often need to apply an s-shaped luma curve during color correction. This shifts the low and high ranges of the image to expand the signal back to full levels, while retaining a “filmic” quality. In the custom curves panel you’ll encounter a typical layout of four curves for luma and RGB. The default is for these to be ganged together: adjust one and they all change. However, this means you are jacking around chroma levels when you might simply want to alter luma. Therefore, make sure to disable ganging before you start. Then adjust the luma curve, and only adjust the R, G, or B curves if it’s beneficial to your look.
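
Numerically, an s-curve is nothing exotic. The sketch below uses the classic smoothstep function purely as an illustration of the shape – shadows pushed down, highlights pushed up, a steeper midrange – not Resolve’s actual curve math.

```python
def s_curve(x: float) -> float:
    """Classic smoothstep: pins 0 and 1 in place, darkens shadows,
    brightens highlights, and steepens the midrange."""
    x = min(max(x, 0.0), 1.0)
    return x * x * (3.0 - 2.0 * x)

# A flat, log-like value of 0.25 drops to ~0.156, while 0.75 rises
# to ~0.844 -- the signal expands back toward full contrast.
for v in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(v, round(s_curve(v), 3))
```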

Hue/sat curves. If you toggle the curves pulldown menu, you’ll notice a number of other options, like hue vs. hue, hue vs. sat, and so on. These curve options let you grab a specific color and adjust its hue, saturation, or brightness without changing the tone of the entire image. When you sample a color, you end up with three points along the curve – the pin for the selected color and a range boundary pin on either side of it. These boundary points determine the envelope of your selection – in other words, how broad a range of hues you want to affect for the selected color. Think of it as comparable to an audio EQ.

It is possible to select multiple points along the curve. Let’s say you want to lower the saturation of both bright yellows and bright blues within the frame. Choose the hue vs. sat curve and select points for both yellow and blue. Pulling these points down will lower the saturation of each of these colors using a single panel.

The hue vs. hue curve is beneficial for skin tones. A film that I’m currently grading features a Korean lead actress. Her skin tones normally skew towards yellow or green in many shots. The Caucasian and African American actors in the same shots appear with “normal” skin tones. By selecting the color that matches her flesh tones on the curve, I am able to shift the hues towards a value that is more in keeping with pleasing flesh tone colors. When used in combination with a mask, it’s possible to isolate this correction to just her part of the frame, so as not to affect the coloration of the other actors within the same shot.

Tracking/stabilization. Most folks know that Resolve has one of the best and fastest trackers of any application. Say you add an oval mask to someone’s face, so that you can brighten up just that isolated area. As the person moves within the shot, you have to adjust the mask to follow their face. This is where Resolve’s cloud-point tracker is a lifesaver. It’s fast and most of the time stays locked to the subject. The tracking window also enables stabilization. Use the pulldown menu to toggle from tracking to stabilization. This is a two-step process – first analyze, then stabilize. You can dial in an amount of smoothness if you want to retain some of the camera drift for a more natural appearance.

Blurs/masks/tracking. Resolve (including the free version) enables blurring of the image. This can be used in conjunction with a mask and with tracking if you need to blur and track an object, like logos that need to be obscured in non-scripted TV shows. Using a blur with a vignette mask lets you create a dreamy effect. This is all possible without resorting to third-party filters or plug-ins.

Scene detection/slicing. There are three ways to get a show into Resolve: a) edit from scratch in Resolve; b) roundtrip from another NLE using FCPXML, XML, AAF, or an EDL; or c) export a flattened media file of your timeline from another NLE and import that master file into Resolve. This last process is similar to the days when masters were output to tape, which in turn were graded in a DaVinci “tape-to-tape” color correction session. Resolve has the ability to analyze the file and determine edit points with reasonable accuracy. It will break up the file into individual master clips within your media pool. Unfortunately, these are viewed in the timeline as individual media clips with boundaries, which makes trimming difficult.

My preference is to place the clip onto a new timeline and then manually add splices at all edit points and dissolves. Since Resolve includes editing capabilities, you can trim, alter or add points in case of error or missed edits. This can be aided by importing a matching, blank XML or EDL and placing it onto a higher track, which then lets you quickly identify all edit points that you’ll need to create.

Add dissolves. In the example above, how do you handle video dissolves that exist in the master file? The solution (in the Resolve timeline) is to add an edit point at the midpoint of the dissolve that’s embedded within the media file. Next, add a new dissolve equal to the length of the existing dissolve in the video. This way, the color correction for one shot will naturally dissolve into the color correction of the second shot. In effect, you aren’t dissolving video sources – only color correction values. This technique may also be used within a single shot if you have correction changes inside that shot, although in that case adding correction keyframes in the Color page is normally a better solution – for example, when you are trying to counteract level changes within the shot, such as an in-camera iris change.

Node strategy. Resolve allows you to store complex grades for shots – including as many nodes as required to build the look – in a single memory register. You can build up each adjustment in multiple nodes to create the look you desire, store it, and then apply that grade to other shots in a single step. This is very useful; however, I tend to work a bit differently when going through a scene in a dramatic project.

I generally go through the scene in multiple “passes”. For instance, I’ll quickly go through each shot with a single node to properly balance the color and make the shots reasonably consistent with each other. Next, I’ll go back through and add a second node (no adjustment yet) for each shot. Once that’s done, I’ll go back to the head of the scene and in that second node make the correction to establish a look. I can now use a standard copy command (cmd-C on the Mac) to store those values for that single node. When I go to the next shot, the second node is already selected, so then I simply paste (cmd-V on the Mac) those values. Let’s say the scene is a two-person dialogue scene using two singles. Angle A is a slightly different color than Angle B. Set the second node adjustment for Angle A, copy, and then paste to each Angle A shot (leapfrogging the Angle B shots). Then repeat for the Angle B shots.

Lastly, I might want to add a vignette. Go back through the scene and add a third, blank node for each shot. Create the vignette in node three of the first shot, then copy and paste into each of the others. I can still adjust the darkness, softness and position of the vignette at each shot, as needed. It’s a bit of an assembly line process, but I find it’s a quick way to go through a scene and build up adjustments without getting fixated on a single shot. At any point, I can review the whole scene and get a better feel for the result of my corrections in the context of the entire scene.

LUTs. Resolve enables the application of technical and creative LUTs (color look-up tables). While I find their use limited and feel they should be applied selectively, it’s possible to add your own to the palette. Any .cube LUT file – whether you found it, bought it, or created your own – can be added to Resolve’s library of LUTs. On the Mac, the Resolve LUT folder is found in Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT.
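
The .cube format itself is plain text, which makes it easy to generate or inspect LUTs yourself. Here’s a minimal sketch that writes an identity 3D LUT, assuming the common .cube convention where the red index varies fastest.

```python
def write_identity_cube(path: str, size: int = 17) -> None:
    """Write an identity 3D LUT in .cube format.

    Each data line is one 'R G B' output triple in the 0.0-1.0 range,
    with the red index varying fastest (the common .cube convention).
    """
    with open(path, "w") as f:
        f.write('TITLE "Identity"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("identity.cube")
```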

Export with audio. You can export a single finished timeline or individual clips using the Deliver page. At the time of this post, Resolve 12 has yet to be released, but hopefully the audio export issues I’ve encountered will have been completely fixed. In my experience using Resolve 11 with RED camera files, it has not been possible to accurately export a complete timeline and have the audio stay in sync. I haven’t found this to be the case with other camera formats, though. So if you are exporting a single master file, expect the potential need to bring the picture into another application or NLE in order to marry it with your final mix. Resolve 11 and earlier are not really geared for audio – something which Resolve 12 promises to fix. I’ll have a review of Resolve 12 at some point in the future.

Hopefully these tips will give you a deeper dive into Resolve. For serious training, here are some resources to check out:

Color Grading Central

Mixing Light

Ripple Training

Tao of Color

©2015 Oliver Peters

The FCP X – RED – Resolve Dance II


Last October I wrote about the roundtrip workflow surrounding Final Cut Pro X and Resolve, particularly as it relates to working with RED camera files. This month I’ve been color grading a small, indie feature film shot with RED One cameras at 4K resolution. The timeline is 1080p. During the course of grading the film in DaVinci Resolve 11, I’ve encountered a number of issues in the roundtrip process. Here are some workflow steps that I’ve found to be successful.

Step 1 – For the edit, transcode the RED files into 1080p Apple ProRes Proxy QuickTime movies, baking in camera color metadata and adding burn-in data for clip name and timecode. Use either REDCINE-X Pro or DaVinci Resolve for the transcode.

Step 2 – Import the proxies and double-system audio (if used) into FCP X and sync within the application or use Sync-N-Link X. Ideally all cameras should record reference audio and timecode should match between the cameras and the sound recorder. Slates should also be used as a fall-back measure.

Step 3 – Edit in FCP X until you lock the cut. Prepare a duplicate sequence (Project) for grading. In that sequence, strip off (detach and remove) all audio. As an option, you can create a mix-down track for reference and attach it as a connected clip. Flatten the timeline down to the Primary Storyline wherever possible, so that Resolve only sees this as one track of video. Compound clips should be broken apart, effects filters removed, and titles removed. Audition clips should be finalized, but multicam clips are OK. Export an FCPXML (version 1.4 “previous”) list. You should also export a self-contained reference version of the sequence, which can be used to check the conform in Resolve.

Step 4 – Launch Resolve and make sure that the master project settings match those of your FCP X sequence. If it’s supposed to be 1920×1080 at 23.976 (23.98) fps, then make sure that’s set correctly. Resolve defaults to a frame rate of 24.0fps and that won’t work. Locate all of your camera original source media (RED camera files in this example) and add them to your media bin in the Media page. Import the FCPXML (1.4), but disable the setting to automatically load the media files in the import dialogue box. The FCPXML file will load and will relink to the RED files without issue if everything has gone correctly. The timeline may have a few clip conflicts, so look for the little indicator on the clip corner in the Edit page timeline. If there’s a clip conflict, you’ll be presented with several choices. Pick the correct one and that will correct the conflict.

Step 5 – At this point, you should verify that the files have conformed correctly by comparing against the self-contained reference file. Compound clips can still be altered in Resolve by using the Decompose function in the timeline. This will break apart the nested compound clips onto separate video tracks. In general, reframing done in the edit will translate, as will image rotation; however, flips and flops won’t. Flipping or flopping an image in FCP X requires a negative X or Y scale value (unless you used a filter), which Resolve cannot match. When you run across these in Resolve, reset the scale value to normal for that clip in the Edit page inspector. Then, in the Color page, use the horizontal or vertical flip functions that are part of the resizing controls. Once this is all straight, you can grade.

Step 6 option A – When grading is done, shift to the Deliver page. If your project is largely cuts-and-dissolves and you don’t anticipate further trimming or slipping of edit points in your NLE, then I would recommend exporting the timeline as a self-contained master file. You should do a complete quality check of the exported media file to make sure there were no hiccups in the render. This file can then be brought back into any NLE and combined with the final mixed track to create the actual show master. In this case, there is no roundtrip procedure needed to get back into the NLE.

Step 6 option B – If you anticipate additional editing of the graded files – or you used transitions or other effects that are unique to your NLE – then you’ll need to use the roundtrip “return” solution. In the Deliver page, select the Final Cut Pro easy set-up roundtrip. This will render each clip as an individual file at the source or timeline resolution with a user-selected handle length added to the head and tail of each clip. Resolve will also write a corresponding FCPXML file (version 1.4). This file will retain the original transitions. For example, if you used FCP X’s light noise transition, it will show up as a dissolve in Resolve’s timeline. When you go back to FCP X, it will retain the proper transition information in the list, so you’ll get back the light noise transition effect.

Resolve generates this list with the assumption that the media files were rendered at source resolution and not timeline resolution. Therefore, even if your clips are now 1920×1080, the FCPXML represents them as 4K. When you import this new FCPXML back into FCP X, a spatial conform will be applied to “fit” the files into the 1920×1080 raster space of the timeline. Change this to “none” and the 1080 media files will be blown up to 4K. You can choose to simply live with this – leave it set to “fit” and render the files again on FCP X’s output – or follow the next step for a workaround.

Step 7 – Create a new Resolve project, making sure the frame rate and timeline format are correct, such as 1920×1080 at 23.976fps. Load the new media files that were exported from Resolve into the media pool. Now import the FCPXML that Resolve has generated (uncheck the selection to automatically import media files and uncheck sizing information). The media will now be conformed to the timeline. From the Edit page, export another FCPXML 1.4 for that timeline (no additional rendering is required). This FCPXML will be updated to match the media file info for the new files – namely size, track configuration, and frame rate.

At this stage, you will encounter a second serious flaw in the FCP X/Resolve/FCP X roundtrip process. Resolve 11 does not write a proper FCPXML file and leaves out certain critical asset information. You will encounter this if you move the media and lists between different machines, but not if all of the work is being done on a single workstation. The result will be a timeline that loads into FCP X with black clips (not the red “missing” icon). When you attempt to reconnect the media, FCP X will fail to relink and will issue an “incompatible files” error message. To work around the problem, either the colorist must have FCP X installed on the Resolve system or the editor must have Resolve 11 installed on the FCP X system, as described in the two options below.

Step 8 option A – If FCP X is installed on the Resolve machine, import the FCPXML into FCP X and reconnect the media generated by Resolve. Then re-export a new FCPXML from FCP X. This new list and media can be moved to any other system. You can move the FCP X Library successfully, as well.

Step 8 option B – If Resolve is installed on the FCP X machine, then follow Step 7. The new FCPXML that you create there will load into FCP X, since you are on the same system.

That’s the state of things right now. Maybe some of these flaws will be fixed with Resolve 12, but I don’t know at this point. The FCPXML list format involves a bit of voodoo at times and this is one of those cases. The good news is that Resolve is very solid when it comes to relinking, which will save you. Good luck!

©2015 Oliver Peters

Understanding SpeedGrade

How you handle color correction depends on your temperament and level of expertise. Some editors want to stay within the NLE, so that editorial adjustments are easily made after grading. Others prefer the roundtrip to a powerful external application. When Adobe added the Direct Link conduit between Premiere Pro CC and SpeedGrade CC, they gave Premiere Pro editors the best of both worlds.


SpeedGrade is a standalone grading application that was initially designed around an SDI feed from the GPU to a second monitor for your external video. After the Adobe acquisition, Mercury Transmit was eventually added, so you can run SpeedGrade with one display, two computer displays, or a computer display plus a broadcast monitor. With a single display, the video viewer is integrated into the interface. At home, I use two computer displays, so by enabling a dual display layout, I get the SpeedGrade interface on one screen and the full-screen video viewer on the other. To do this you have to correctly offset the pixel dimensions and position for the secondary display in order to see it. Otherwise the image is hidden behind the interface.

Using Mercury Transmit, the viewer image is sent to an external monitor, but you’ll need an appropriate capture/monitoring card or device. AJA products seem to work fine. Some Blackmagic devices work and others don’t. When this works, you will lose the viewer from the interface, so it’s best to have the external display close – as in next to your interface monitor.


When you use Direct Link, you are actually sending the Premiere Pro timeline to SpeedGrade. This means that edits and timeline video layers are determined by Premiere Pro and those editing functions are disabled in SpeedGrade. It IS the Premiere Pro timeline. As a result, certain formats that might not be natively supported by a standalone SpeedGrade project will be supported via the Direct Link path – as long as Premiere Pro natively supports them.

There is a symbiotic relationship between Premiere Pro and SpeedGrade. For example, I worked on a music video that was edited natively using RED camera media. The editor had done a lot of reframing from the native 4K media in the 1080 timeline. All of this geometry was correctly interpreted by SpeedGrade. When I compared the same sequence in Resolve (using an XML roundtrip), the geometry was all wrong. SpeedGrade doesn’t give you access to the camera raw settings for the .r3d media, but Premiere Pro does. So in this case, I adjusted the camera raw values by using the source settings control in Premiere Pro, which then carried those adjustments over to SpeedGrade.

Since the Premiere Pro timeline is the SpeedGrade timeline when you use Direct Link, you can add elements into the sequence from Premiere in order to make them available in SpeedGrade. Let’s say you want to add a common edge vignette across all the clips of your sequence. Simply add an adjustment layer to a top track while in Premiere. This appears in your SpeedGrade timeline, enabling you to add a mask and correction within the adjustment layer clip. In addition, any video effects filters that you’ve applied in Premiere will show up in SpeedGrade. You don’t have access to their controls, but you will see the results interactively as you make color correction adjustments.

All SpeedGrade color correction values are applied to the clip as a single Lumetri effect when you send the timeline back to Premiere Pro. All grading layers are collapsed into a single composite effect per clip, which appears in the clip’s effect stack (in Premiere Pro) along with all other filters. In this way you can easily trim edit points without regard to the color correction. Traditional roundtrips render new media with baked-in color correction values, so you can only work within the boundaries of the handles that you’ve added to the file upon rendering. Not so with Direct Link, since color correction is like any other effect applied to the original media. Any editorial changes you’ve made in Premiere Pro are reflected in SpeedGrade should you go back for tweaks, as long as you continue to use Direct Link.

12-way and more

Most editors are familiar with 3-way color correctors that have level and balance controls for shadows, midrange, and highlights. Many refer to SpeedGrade’s color correction model as a 12-way color corrector. The grading interface features a 3-way (lift/gamma/gain) control for four ranges of correction: overall, shadows, midrange, and highlights. Each tab also adds control of contrast, pivot, color temperature, magenta (tint), and saturation. Since the shadow, midrange, and highlight ranges overlap, you also have sliders that adjust the overlap thresholds between shadows and midrange and between midrange and highlights.
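
For those who like to see the math, the sketch below applies the textbook lift/gamma/gain model to a single channel value. SpeedGrade’s internal implementation may differ in detail; this is only meant to illustrate what the three controls do.

```python
def lift_gamma_gain(x: float, lift: float = 0.0, gamma: float = 1.0,
                    gain: float = 1.0) -> float:
    """Apply the common lift/gamma/gain model to one 0-1 channel value.

    Lift raises the blacks while pinning white, gain scales the top end,
    and gamma bends the midtones between them.
    """
    x = min(max(x, 0.0), 1.0)
    y = gain * (x + lift * (1.0 - x))
    return min(max(y, 0.0), 1.0) ** (1.0 / gamma)

# Lifting the blacks: 0.0 comes up to 0.1, while white stays at 1.0.
print(lift_gamma_gain(0.0, lift=0.1), lift_gamma_gain(1.0, lift=0.1))
```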

Color correction is layer based – similar to Photoshop or After Effects. SpeedGrade features primary (“P”), secondary (“S”), and filter (“+”) layers. When you add layers, they are stacked from bottom to top and each layer includes an opacity control. As such, layers work much the same as rooms in Apple Color or nodes in DaVinci Resolve. You can create a multi-layered adjustment by using a series of stacked primary layers. Shape masks, like that for a vignette, should be applied to a primary layer. The mask may be normal or inverted, so that the correction is applied either to the inside or the outside of the mask. Secondaries should be reserved for HSL keys – for instance, highlighting the skin tones of a face to adjust its color separately from the rest of the image. The filter layer (“+”) is where you’ll find a number of useful tools, including Photoshop-style creative effect filters, LUTs, and curves.

Working with grades

Color correction can be applied to a clip as either a master clip correction or just a clip correction (or both). When you grade using the default clip tab, that color correction is only applied to that single clip. If you grade in the master clip tab, then any color correction that you apply to that clip will also be applied to every other instance of that same media file elsewhere on the timeline. Theoretically, in a multicam edit – made up of four cameras with a single media file per camera – you could grade the entire timeline by simply color correcting the first clip from each of the four cameras as a master clip correction. All other clips would automatically inherit the same settings. Of course, that almost never works out quite so perfectly; therefore, you can grade a clip using both the master clip and the regular clip tabs. Use the master for a general setting and the regular clip tab to tweak each shot as needed.

Grades can be saved and recalled as Lumetri Looks, but typically these aren’t as useful in actual grading as standard copy-and-paste functions – a recent addition to SpeedGrade CC. Simply highlight one or more layers of a graded clip and press copy (cmd+c on a Mac). Then paste (cmd+v on a Mac) those to the target clip. These will be pasted in a stack on top of the default, blank primary correction that’s there on every clip. You can choose to use, ignore, or delete this extra primary layer.

SpeedGrade features a cool trick to facilitate shot matching. The timeline playhead can be broken out into multiple playheads, which will enable you to compare two or more shots in real-time on the viewer. This quick comparison lets you make adjustments to each to get a closer match in context with the surrounding shots.

A grading workflow

Everyone has their own approach to grading and these days there’s a lot of focus on camera and creative LUTs. My suggestions for prepping a Premiere Pro CC sequence for SpeedGrade CC go something like this.

Once you are largely done with the editing, collapse all multicam clips and flatten the timeline as much as possible down to the bottom video layer. Add one or two video tracks with adjustment layers, depending on what you want to do in the grade. These should be above the last video layer. All graphics – like lower thirds – should be on tracks above the adjustment layer tracks. This assumes that you don’t want to include these in the color correction. Now duplicate the sequence and delete the graphics tracks from the dupe. Send the dupe to SpeedGrade CC via Direct Link.

In SpeedGrade, ignore the first primary layer and add a filter layer (“+”) above it. Select a camera patch LUT – for example, an ARRI Log-C-to-Rec-709 LUT for Log-C gamma-encoded Alexa footage. Repeat this for every clip from the same camera type. If you intend to use a creative LUT, like one of the SpeedLooks from LookLabs, you’ll need one of their camera patches, which shifts the camera video into a unified gamma profile optimized for their creative LUTs. If all of the footage used in the timeline came from the same camera and used the same gamma profile, then in the case of SpeedLooks, you could apply the creative LUT to one of the adjustment layer clips. This will apply that LUT to everything in the sequence.
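
Conceptually, each pixel passes through the layer stack bottom to top: camera patch first, grade in the middle, creative look last. The sketch below models that ordering with placeholder functions – the names and the toy math are mine, not SpeedGrade’s.

```python
def camera_patch_lut(x: float) -> float:
    """Placeholder: expand a log-encoded value toward Rec. 709 contrast."""
    return min(max(x, 0.0), 1.0) ** 0.7

def shot_grade(x: float) -> float:
    """Placeholder: a small per-clip exposure trim set in a primary layer."""
    return min(x * 1.05, 1.0)

def creative_look(x: float) -> float:
    """Placeholder: the sequence-wide look LUT on the adjustment layer."""
    return x * x * (3.0 - 2.0 * x)

def graded_pixel(x: float) -> float:
    """Order matters: camera patch first, grade in the middle, look last."""
    return creative_look(shot_grade(camera_patch_lut(x)))

print(round(graded_pixel(0.18), 3))
```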

Once you’ve applied input and output LUTs you can grade each clip as you’d like, using primary and secondary layers. Use filter layers for curves. Any order and any number of layers per clip is fine. Using this methodology, all grading happens between the camera patch LUT and the creative LUT added to the adjustment layer track. Finally, if you want a soft edge vignette on all clips, apply an edge mask to the default primary layer of the topmost adjustment layer clip. Adjust the size, shape, and softness of the mask. Darken the outside of the mask area. Done.

(Note that not every camera uses logarithmic gamma encoding, nor do you want to use LUTs on every project. These are the “icing on the cake”, NOT the “meat and potatoes” of grading. If your sequence is a standard correction without any stylized creative looks, then ignore the LUT procedures I described above.)

Now simply send your timeline back to Premiere Pro (the “Pr” button). Back in Premiere Pro CC, duplicate that sequence. Copy-and-paste the graphics tracks from the original sequence to the available blank tracks of the copy. When done, you’ll have three sequences: 1) non-color corrected with graphics, 2) color corrected without graphics, and 3) final with color correction and graphics. The beauty of the Direct Link path between Premiere Pro CC and SpeedGrade CC is that you can easily go back and forth for changes without ever being locked in at any point in the process.

©2015 Oliver Peters

Adobe Anywhere and Divine Access


Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. This sort of approach is applied to collaborative workflows with Adobe Anywhere, which permits multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, who installed it as their post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. No science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption, culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1Gbps upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible.  Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.
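
For illustration, the plain-S3 leg of such a pipeline can be only a few lines of Python with boto3. This is a sketch, not G-Men’s actual code – the bucket and prefix names are hypothetical, and Aspera’s accelerated transfer is left out entirely.

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")

def upload_card(card_folder: str, bucket: str = "divine-access-raw",
                prefix: str = "dailies") -> None:
    """Upload every file from one camera card folder to S3,
    preserving the card's internal folder structure in the key."""
    root = Path(card_folder)
    for f in root.rglob("*"):
        if f.is_file():
            key = f"{prefix}/{root.name}/{f.relative_to(root)}"
            s3.upload_file(str(f), bucket, key)

upload_card("/Volumes/SHUTTLE/A001_0514")
```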

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT in a unique way.
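
A rough sketch of what that kind of automated sort might look like – the ‘video’ and ‘audio’ folder names follow Glendenning’s description, but the extension lists, paths, and everything else here are assumptions.

```python
import shutil
from datetime import date
from pathlib import Path

VIDEO_EXT = {".mov", ".mxf", ".r3d"}   # illustrative
AUDIO_EXT = {".wav", ".aif", ".bwf"}   # illustrative

def ingest_day(incoming: str, storage: str) -> Path:
    """Sort a day's downloaded assets into dated 'video'/'audio' subfolders."""
    day_folder = Path(storage) / date.today().isoformat()
    for sub in ("video", "audio"):
        (day_folder / sub).mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(incoming).rglob("*")):
        if f.suffix.lower() in VIDEO_EXT:
            shutil.move(str(f), str(day_folder / "video" / f.name))
        elif f.suffix.lower() in AUDIO_EXT:
            shutil.move(str(f), str(day_folder / "audio" / f.name))
    return day_folder

ingest_day("/SAN/incoming", "/SAN/divine_access")
```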

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor who shared our director’s creative vision was the best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid has added subscription pricing, so G-Men installed the Divine Access cutting room using a Mac Pro and “renting” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film in Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double-check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit including principal photography and post finishing through to delivery. This gives us capabilities through entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals, including director of photography Jeff Cronenweth, editor Kirk Baxter, and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage – the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two best editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising the application’s profile among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode edit media, view dailies, archive the media to LTO data tape, and transfer to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we also uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
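
Those numbers all reduce to a single 0.375 scale factor, which is why the center extraction lines up between the offline proxies and the camera originals. A quick arithmetic check:

```python
# Quick check of the proxy math: the offline ProRes frame and the timeline
# are both 37.5% of their full-resolution counterparts, so the 5K center
# extraction lines up exactly between offline and online.
FULL_6K = (6144, 3072)       # camera frame
EXTRACT_5K = (5120, 2133)    # framed center extraction
PROXY = (2304, 1152)         # offline ProRes of the full 6K frame

scale = PROXY[0] / FULL_6K[0]
print(scale)                              # 0.375
print(PROXY[1] / FULL_6K[1])              # 0.375
print(EXTRACT_5K[0] * scale)              # 1920.0 -> timeline width
print(round(EXTRACT_5K[1] * scale))       # 800    -> timeline height
```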

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors that may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut and with some directors, you are almost starting from scratch once the director arrives. With David, most of that ten week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day; but, it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers lots of question marks that come before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and over the course of three minutes it sours into something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor in choosing Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front-end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID built with standard HDDs was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from six to eight minutes at the start down to as little as 90 seconds towards the end.

The final conform and color grading were handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything I used a custom FileMaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.
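
That EDL-plus-codebook lookup is simple to picture in code. The sketch below is a hypothetical stand-in for the FileMaker Pro database – the reel name, record fields and file path are invented for illustration, not the production’s actual schema:

```python
# Hypothetical stand-in for the FileMaker Pro codebook: reel name -> attributes.
codebook = {
    "A004_C012": {
        "source": "/server/r3d/A004_C012.R3D",   # invented path
        "color": ("RedColor3", "RedGamma3"),
        "exposure_index": 800,
    },
}

def lookup(edl_event: str) -> dict:
    """Resolve a CMX3600-style EDL event line to its codebook record.

    In a CMX3600 EDL the second field of an event line is the reel name,
    which is what keys the codebook here.
    """
    reel = edl_event.split()[1]
    return codebook[reel]

event = "001  A004_C012  V  C  01:02:03:04 01:02:08:00 01:00:00:00 01:00:04:20"
print(lookup(event)["source"])  # /server/r3d/A004_C012.R3D
```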

As with his previous films, director David Fincher has not only told a great story in Gone Girl, but also set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools, with Premiere Pro CC as the hub, bring the power of this suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight-knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.


Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

IndieWire blog

ICG Magazine

Tony Zhou’s Vimeo take on Fincher

©2014 Oliver Peters

The FCP X – RED – Resolve Dance


I recently worked on a short 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that’s done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.
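
Verification is the part worth automating. Below is a minimal sketch of a checksum-verified copy in Python – the paths are hypothetical, and dedicated offload tools do the same job with more polish:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large camera files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src_dir: Path, dst_dir: Path) -> None:
    """Copy every file from a camera card and confirm each copy bit-for-bit."""
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_dir / src.relative_to(src_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if sha256(src) != sha256(dst):
            raise IOError(f"Checksum mismatch: {src}")

# Example (hypothetical paths):
# verified_copy(Path("/Volumes/RED_MAG_01"), Path("/Volumes/Backup_A/Day01/RED"))
```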

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when the RED Rocket card was installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled; this transcoding continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, and this information was also entered into the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.


Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and original/optimized media is seamless and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but when you toggle to proxy, it has to make the corresponding adjustments for media that is now half-sized. Likewise, any blow-up or reframing that you do also has to match in both modes.
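
The arithmetic behind that seamless toggle is simple. A sketch, assuming this project’s 4K 16×9 .r3d frame width of 4096 pixels (the function and constant names are mine):

```python
def fit_scale(source_width: int, timeline_width: int) -> float:
    """Scale factor that spatial conform's 'fit' applies to a clip."""
    return timeline_width / source_width

ORIGINAL_4K = 4096        # 4K 16x9 .r3d width
PROXY = ORIGINAL_4K // 2  # FCP X proxies are half-sized: 2048
TIMELINE = 1920           # 1080p timeline width

print(fit_scale(ORIGINAL_4K, TIMELINE))  # 0.46875
print(fit_scale(PROXY, TIMELINE))        # 0.9375
# Different factors, identical framing on screen - which is why the toggle
# between proxy and original media looks seamless.
```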

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system. Proxies for fast and efficient editing. Original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. No matter the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X, and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We treated these as the new camera masters for those clips. Even there, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.


The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc. Set the event browser to “hide rejected”. Next I review the footage based on script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I mark these as Favorites. As I do this, I’ll favorite the whole take and not just a portion, since I want the entire take available while editing.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.
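
Since FCPXML is plain XML, the exported cut is easy to inspect with a script. A minimal sketch – the file name is hypothetical, and the element names follow the published FCPXML format of this era, so verify them against your own export:

```python
import xml.etree.ElementTree as ET

# List every clip in the exported sequence with its timeline offset.
tree = ET.parse("picture_lock.fcpxml")  # hypothetical export name
for clip in tree.iter("asset-clip"):
    print(clip.get("offset"), clip.get("name"), clip.get("duration"))
```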


I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this down to my initial “open in timeline” syncing, which I’ll explain in a bit. Anyway, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
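
The crop math works out like this. A short sketch, again assuming a 4096 x 2304 source frame (the function name is mine):

```python
def source_crop(source_w: int, source_h: int, zoom: float) -> tuple[int, int]:
    """Source-pixel region equivalent to a timeline blow-up of `zoom`."""
    return round(source_w / zoom), round(source_h / zoom)

# A 120% blow-up in the 1080p offline timeline...
print(source_crop(4096, 2304, 1.2))  # (3413, 1920)
# ...maps to a 3413 x 1920 crop of the 4K original in Resolve - still more
# than 1920 x 1080 pixels, so the shot loses no resolution at delivery size.
```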

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the 5 or 6 layers that made up this composite underneath the primary storyline. All fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, thus shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and audio was equally shifted in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, nor using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select the handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProRes HQ with an additional 3 seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it were trying to show a native 4K image at 1:1 – except that this was now 1080p media and NOT 4K. Apparently this resizing metadata is incorrectly held in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.


Now to an explanation of the audio issue. FCP X master clips are NOT like master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete the audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly, it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools unit, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export altered all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or is unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file. Then, using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters