Understanding SpeedGrade

How you handle color correction depends on your temperament and level of expertise. Some editors want to stay within the NLE, so that editorial adjustments are easily made after grading. Others prefer the roundtrip to a powerful external application. When Adobe added the Direct Link conduit between Premiere Pro CC and SpeedGrade CC, they gave Premiere Pro editors the best of both worlds.

Displays

SpeedGrade is a standalone grading application that was initially designed around an SDI feed from the GPU to a second monitor for your external video. After the Adobe acquisition, Mercury Transmit support was eventually added, so you can run SpeedGrade with one display, two computer displays, or a computer display plus a broadcast monitor. With a single display, the video viewer is integrated into the interface. At home, I use two computer displays, so by enabling a dual-display layout, I get the SpeedGrade interface on one screen and the full-screen video viewer on the other. To make this work, you have to enter the correct pixel dimensions and position offset for the secondary display; otherwise, the image stays hidden behind the interface.

Using Mercury Transmit, the viewer image is sent to an external monitor, but you’ll need an appropriate capture/monitoring card or device. AJA products seem to work fine; some Blackmagic devices work and others don’t. When Mercury Transmit is active, you lose the viewer from the interface, so it’s best to have the external display close – as in right next to your interface monitor.

Timeline

When you use Direct Link, you are actually sending the Premiere Pro timeline to SpeedGrade. This means that edits and timeline video layers are determined by Premiere Pro and those editing functions are disabled in SpeedGrade. It IS the Premiere Pro timeline. As a result, certain formats that might not be natively supported by a standalone SpeedGrade project will be supported via the Direct Link path – as long as Premiere Pro natively supports them.

There is a symbiotic relationship between Premiere Pro and SpeedGrade. For example, I worked on a music video that was edited natively using RED camera media. The editor had done a lot of reframing from the native 4K media in the 1080 timeline. All of this geometry was correctly interpreted by SpeedGrade. When I compared the same sequence in Resolve (using an XML roundtrip), the geometry was all wrong. SpeedGrade doesn’t give you access to the camera raw settings for the .r3d media, but Premiere Pro does. So in this case, I adjusted the camera raw values by using the source settings control in Premiere Pro, which then carried those adjustments over to SpeedGrade.

Since the Premiere Pro timeline is the SpeedGrade timeline when you use Direct Link, you can add elements into the sequence from Premiere, in order to make them available in SpeedGrade. Let’s say you want to add a common edge vignette across all the clips of your sequence. Simply add an adjustment layer to a top track while in Premiere. This appears in your SpeedGrade timeline, enabling you to add a mask and correction within the adjustment layer clip. In addition, any video effects filters that you’ve applied in Premiere will show up in SpeedGrade. You don’t have access to the controls, but you will see the results interactively as you make color correction adjustments.

All SpeedGrade color correction values are applied to the clip as a single Lumetri effect when you send the timeline back to Premiere Pro. All grading layers are collapsed into a single composite effect per clip, which appears in the clip’s effect stack (in Premiere Pro) along with all other filters. In this way you can easily trim edit points without regard to the color correction. Traditional roundtrips render new media with baked-in color correction values, so you can only work within the boundaries of the handles that you’ve added to the file upon rendering. Not so with Direct Link, since color correction is like any other effect applied to the original media. Any editorial changes you’ve made in Premiere Pro are reflected in SpeedGrade should you go back for tweaks, as long as you continue to use Direct Link.

12-way and more

Most editors are familiar with 3-way color correctors that have level and balance controls for shadows, midrange and highlights. Many refer to SpeedGrade’s color correction model as a 12-way color corrector. The grading interface features a 3-way (lift/gamma/gain) control for four ranges of correction: overall, shadows, midrange, and highlights. Each tab also adds control of contrast, pivot, color temperature, magenta (tint), and saturation. Since shadow, midrange, and highlight ranges overlap, you also have sliders that adjust the overlap thresholds between shadow and midrange and between the midrange and highlight areas.
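To make that model concrete, here is a minimal Python sketch of how a lift/gamma/gain adjustment might act on normalized pixel values, with a soft weight standing in for the shadow range and its overlap slider. The formula and the weighting function are illustrative conventions, not SpeedGrade’s actual internal math.

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """One 3-way adjustment on normalized (0-1) values; the commonly
    cited LGG form, not SpeedGrade's exact implementation."""
    y = x * gain + lift * (1.0 - x)            # gain scales, lift raises the blacks
    return np.clip(y, 0, 1) ** (1.0 / gamma)   # gamma bends the midtones

def shadow_weight(x, overlap=0.3):
    """Illustrative soft mask for the shadow range; the overlap slider
    would shift where this weight falls off toward the midrange."""
    return np.clip(1.0 - x / overlap, 0, 1)

img = np.linspace(0, 1, 5)          # a toy grayscale ramp
w = shadow_weight(img)
# A shadows-only correction blends the graded and original values by the weight.
graded = w * lift_gamma_gain(img, lift=0.05) + (1 - w) * img
print(graded)
```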

Color correction is layer based – similar to Photoshop or After Effects. SpeedGrade features primary (“P”), secondary (“S”) and filter layers (the “+” symbol). When you add layers, they are stacked from bottom to top and each layer includes an opacity control. As such, layers work much the same as rooms in Apple Color or nodes in DaVinci Resolve. You can create a multi-layered adjustment by using a series of stacked primary layers. Shape masks, like that for a vignette, should be applied to a primary layer. The mask may be normal or inverted, so that the correction is applied either to the inside or the outside of the mask. Secondaries should be reserved for HSL keys – for instance, isolating the skin tones of a face to adjust its color separately from the rest of the image. The filter layer (“+”) is where you’ll find a number of useful tools, including Photoshop-style creative effect filters, LUTs, and curves.

Working with grades

Color correction can be applied to a clip as either a master clip correction or just a clip correction (or both). When you grade using the default clip tab, that color correction is only applied to that single clip. If you grade in the master clip tab, then any color correction that you apply to that clip will also be applied to every other instance of that same media file elsewhere on the timeline. Theoretically, in a multicam edit – made up of four cameras with a single media file per camera – you could grade the entire timeline by simply color correcting the first clip for each of the four cameras as a master clip correction. All other clips would automatically inherit the same settings. Of course, that almost never works out quite so perfectly; therefore, you can grade a clip using both the master clip and the regular clip tabs. Use the master for a general setting and the regular clip tab to tweak each shot as needed.

Grades can be saved and recalled as Lumetri Looks, but typically these aren’t as useful in actual grading as standard copy-and-paste functions – a recent addition to SpeedGrade CC. Simply highlight one or more layers of a graded clip and press copy (cmd+c on a Mac). Then paste (cmd+v on a Mac) those to the target clip. These will be pasted in a stack on top of the default, blank primary correction that’s there on every clip. You can choose to use, ignore, or delete this extra primary layer.

SpeedGrade features a cool trick to facilitate shot matching. The timeline playhead can be broken out into multiple playheads, which will enable you to compare two or more shots in real-time on the viewer. This quick comparison lets you make adjustments to each to get a closer match in context with the surrounding shots.

A grading workflow

Everyone has their own approach to grading and these days there’s a lot of focus on camera and creative LUTs. My suggestions for prepping a Premiere Pro CC sequence for SpeedGrade CC go something like this.

Once you are largely done with the editing, collapse all multicam clips and flatten the timeline as much as possible down to the bottom video layer. Add one or two video tracks with adjustment layers, depending on what you want to do in the grade. These should be above the last video layer. All graphics – like lower thirds – should be on tracks above the adjustment layer tracks, assuming that you don’t want to include them in the color correction. Now duplicate the sequence and delete the graphics tracks from the dupe. Send the dupe to SpeedGrade CC via Direct Link.

In SpeedGrade, ignore the first primary layer and add a filter layer (“+”) above it. Select a camera patch LUT – for example, an ARRI Log-C-to-Rec-709 LUT for Log-C gamma-encoded Alexa footage. Repeat this for every clip from the same camera type. If you intend to use a creative LUT, like one of the SpeedLooks from LookLabs, you’ll need one of their camera patches, which shifts the camera video into a unified gamma profile optimized for their creative LUTs. If all of the footage used in the timeline came from the same camera and used the same gamma profile, then in the case of SpeedLooks, you could apply the creative LUT to one of the adjustment layer clips. This will apply that LUT to everything in the sequence.
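For a sense of what a camera patch LUT encapsulates, here is a small Python sketch of the first step such a LUT performs: decoding Log-C back to scene-linear light, using ARRI’s published LogC (v3) parameters for EI 800. Treat the constants as assumptions to verify against ARRI’s white paper; a real Log-C-to-Rec-709 LUT also bakes in a display tone curve and color space conversion.

```python
import numpy as np

# ARRI LogC (v3) decode parameters for EI 800, per ARRI's published curve;
# verify against ARRI's LogC white paper before relying on them.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def logc_to_linear(t):
    """Map LogC-encoded code values (0-1) back to scene-linear light."""
    t = np.asarray(t, dtype=np.float64)
    return np.where(t > E * CUT + F,
                    (10.0 ** ((t - D) / C) - B) / A,  # log segment
                    (t - F) / E)                      # linear toe

# 18% gray encodes near code value 0.391 at EI 800, so this prints ~0.18.
print(logc_to_linear([0.0, 0.391, 1.0]))
```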

Once you’ve applied input and output LUTs, you can grade each clip as you’d like, using primary and secondary layers. Use filter layers for curves. Any order and any number of layers per clip is fine. Using this methodology, all grading happens between the camera patch LUT and the creative LUT added to the adjustment layer track. Finally, if you want a soft edge vignette on all clips, apply an edge mask to the default primary layer of the topmost adjustment layer clip. Adjust the size, shape, and softness of the mask. Darken the outside of the mask area. Done.

(Note that not every camera uses logarithmic gamma encoding, nor do you want to use LUTs on every project. These are the “icing on the cake”, NOT the “meat and potatoes” of grading. If your sequence is a standard correction without any stylized creative looks, then ignore the LUT procedures I described above.)

Now simply send your timeline back to Premiere Pro (the “Pr” button). Back in Premiere Pro CC, duplicate that sequence. Copy-and-paste the graphics tracks from the original sequence to the available blank tracks of the copy. When done, you’ll have three sequences: 1) non-color corrected with graphics, 2) color corrected without graphics, and 3) final with color correction and graphics. The beauty of the Direct Link path between Premiere Pro CC and SpeedGrade CC is that you can easily go back and forth for changes without ever being locked in at any point in the process.

©2015 Oliver Peters

Preparing Digital Camera Files


The modern direction in file-based post production workflows is to keep your camera files native throughout the entire pipeline. While this might work within a closed loop, like a self-contained Avid, Adobe or Apple workflow, it breaks down when you have to move your project across multiple applications. It’s common for an editor to send files to a Pro Tools studio for the final mix and to a colorist running Resolve, Baselight, etc. for the final grade. In doing so, you have to ensure that editorial decisions aren’t incorrectly translated in the process, because the NLE might handle a native camera format differently than the mixer’s or colorist’s tool. To keep the process solid, I’ve developed some disciplines in how I like to handle media. The applications I mention are for Mac OS, but most of these companies offer Windows versions, too. If not, you can easily find equivalents.

Copying media

The first step is to get the media from the camera cards to a reliable hard drive. It’s preferable to have at least two copies (from the location) and to make the copies using software that verifies the back-up. This is a process often done on location by the lowly “data wrangler” under less than ideal conditions. A number of applications, such as Imagine Products’ ShotPut Pro and Adobe Prelude, let you do this task, but my current favorite is Red Giant’s Offload. It uses a dirt-simple interface permitting one source and two target locations. Its sole purpose is to safely transfer media, with no other frills.
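As an illustration, here is a minimal Python sketch of what a verified offload does under the hood: copy to two targets, then compare checksums against the source. It is a toy version of what tools like Offload automate; the paths are hypothetical, and a real tool would hash large files in chunks rather than reading them whole.

```python
import hashlib
import shutil
from pathlib import Path

def offload(card: Path, targets: list[Path]) -> None:
    """Copy every file from a camera card to multiple drives and verify
    each copy against the source checksum before trusting it."""
    for src in card.rglob("*"):
        if not src.is_file():
            continue
        # Hashing the whole file at once is fine for a sketch; production
        # tools hash in chunks to handle very large media files.
        src_hash = hashlib.md5(src.read_bytes()).hexdigest()
        for target in targets:
            dst = target / src.relative_to(card)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if hashlib.md5(dst.read_bytes()).hexdigest() != src_hash:
                raise IOError(f"verification failed: {dst}")

# Hypothetical volumes: one card, two destination drives.
# offload(Path("/Volumes/A001"), [Path("/Volumes/RAID1"), Path("/Volumes/Shuttle")])
```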

Processing media on location

With the practice of shooting footage with a flat-looking log gamma profile, many productions like to also see the final, adjusted look on location. This often involves some on-site color grading to create either a temporary look or even the final look. Usually this task falls to a DIT (digital imaging technician). Several applications are available, including DaVinci Resolve, Pomfort Silverstack and Redcine-X Pro. Some new applications, specifically designed for field use, include Red Giant’s BulletProof and Catalyst Browse/Prepare from Sony Creative Software. Catalyst Browse is free and designed for all Sony cameras, whereas Catalyst Prepare is a paid application that covers Sony cameras, but also other brands, including Canon and GoPro. Depending on the application, these tools may be used to add color correction, organize the media, transcode file formats, and even prepare simple rough assemblies of selected footage.

All of these tools add a lot of power, but frankly, I’d prefer that the production company leave these tasks up to the editorial team and allow more time in post. In my testing, most of the aforementioned apps work as advertised; however, BulletProof continues to have issues with the proper handling of timecode.

Transcoding media

I’m not a big believer in always using native media for the edit, unless you are in a fast turnaround situation. To get the maximum advantage for interchanging files between applications, it is ideal to end up in one of several common media formats, if that isn’t how the original footage was recorded. You also want every file to have unique and consistent metadata, including file names, reel IDs and timecode. The easiest common media format is QuickTime using the .mov wrapper and encoded using either Apple ProRes, Panasonic AVC-Intra, Sony XDCAM, or Avid DNxHD codecs. These are generally readable in most applications running on Mac or PC. My preference is to first convert all files into QuickTime using one of these codecs, if they originated as something else. That’s because the file is relatively malleable at that point and doesn’t require a rigid external folder structure.

Applications like BulletProof and Catalyst can transcode camera files into another format. Of course, there are dedicated batch encoders like Sorenson Squeeze, Apple Compressor, Adobe Media Encoder and Telestream Episode. My personal choice for a tool to transcode camera media is either MPEG Streamclip (free) or Divergent Media’s EditReady. Both feature easy-to-use batch processing interfaces, but EditReady adds the ability to apply LUTs, change file names and export to multiple targets. It also reads formats that MPEG Streamclip doesn’t, such as C300 files (Canon XF codec wrapped as .mxf). If you want to generate a clean master copy preserving the log gamma profile, as well as a second lower resolution editorial file with a LUT applied, then EditReady is the right application.
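If you are comfortable with the command line, a DIY batch transcode is also possible with ffmpeg. Here is a hedged Python sketch, assuming ffmpeg is installed and the sources are .mxf camera files; the folder names and the ProRes profile choice are illustrative, not a recommendation for every project.

```python
import subprocess
from pathlib import Path

SOURCE, DEST = Path("camera_originals"), Path("editorial")
DEST.mkdir(exist_ok=True)

for clip in sorted(SOURCE.glob("*.mxf")):       # e.g. Canon XF .mxf files
    out = DEST / (clip.stem + ".mov")           # rewrap as QuickTime
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "1", # profile 1 = ProRes 422 (LT)
        "-c:a", "pcm_s16le",                    # uncompressed audio
        str(out),
    ], check=True)
```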

Altering your media

I will go to extra lengths to make sure that files have proper names, timecode and source/tape/reel ID metadata. Most professional video cameras will correctly embed that information. Others, like the Canon 5D Mark III, might encode a non-standard timecode format, allow duplicated file names, and not add reel IDs.

Once the media has been transcoded, I will use two applications to adjust the file metadata. For timecode, I rely on VideoToolShed’s QtChange. This application lets you alter QuickTime files in a number of ways, but I primarily use it to strip off unnecessary audio tracks and bad timecode. Then I use it to embed proper reel IDs and timecode. Because it does this by altering header information, processing a lot of files happens quickly. The second tool in this mix is Better Rename, which is a batch renaming utility. I use it frequently for adding, deleting or changing all or part of the file name for a batch of files. For instance, I might append a production job number to the front of a set of Canon 5D files. The point in doing all of this is so that you can easily locate the exact same point within any file using any application, even several years apart.
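The renaming step itself is trivial to script. A minimal Python sketch, with a hypothetical job number and the default Canon 5D file naming pattern:

```python
from pathlib import Path

JOB = "1504"  # hypothetical production job number

# Prefix every 5D clip: MVI_0001.mov -> 1504_MVI_0001.mov
for clip in sorted(Path("5D_cards").glob("MVI_*.mov")):
    clip.rename(clip.with_name(f"{JOB}_{clip.name}"))
```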

Speed is a special condition. Most NLEs handle files with mixed frame rates within the same project and sequences, but often such timelines do not correctly translate from one piece of software to the next. Edit lists are interchanged using EDL, XML, FCPXML and AAF formats, and each company has its own variation of the format it uses. Some formats, like FCPXML, require third-party utilities to translate the list, adding another variable. Round-tripping, such as going from NLE “A” (for offline) to color correction system “B” (for grading) and then to NLE “C” (for finishing), often involves several translations. Apart from effects, speed differences in native camera files can be a huge problem.

A common mixed frame rate situation in the edit is combining 23.98fps and 29.97fps footage. If both of these were intended to run in real-time, then it’s usually OK. However, if the footage was recorded with the intent to overcrank for slomo (59.94 or 29.97 native for a timebase of 23.98) then you start to run into issues. As long as the camera properly flags the file, so that every application plays it at the proper timebase (slowed), then things are fine. This isn’t true of DSLRs, where you might shoot 720p/59.94 for use as slomo in a 1080p/29.97 or 23.98 sequence. With these files, my recommendation is to alter the speed of the file first, before using it inside the NLE. One way to do this is to use Apple Cinema Tools (part of the defunct Final Cut Studio package, but can still be found). You can batch-conform a set of 59.94fps files to play natively at 23.98fps in very short order. This should be done BEFORE adding any timecode with QtChange. Remember that any audio will have its sample rate shifted, which I’ve found to be a problem with FCP X. Therefore, when you do this, also strip off the audio tracks using QtChange. They play slow anyway and so are useless in most cases where you want overcranked, slow motion files.
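The arithmetic behind the conform is straightforward, as this short Python sketch shows; the frame count is hypothetical.

```python
# Conforming 59.94fps media to a 23.98 timebase plays every frame, just slower:
# the same frame count now spans more time.
shot_frames = 600                  # a 10-second burst recorded at 59.94fps
native, conformed = 59.94, 23.976

slowdown = native / conformed      # exactly 2.5x slow motion
print(slowdown)                    # 2.5
print(shot_frames / conformed)     # new duration: ~25 seconds

# Audio conformed the same way would effectively play at 48000 / 2.5 = 19200 Hz,
# which is why stripping the audio tracks first is the safe move.
print(48000 / slowdown)
```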

Audio in your NLE

The last point to understand is that not all NLEs deal with audio tracks in the same fashion. Often camera files are recorded with multiple mono audio sources, such as a boom and a lav mic on channels 1 and 2. These may be interpreted either as stereo or as dual mono, depending on the NLE. Premiere Pro CC in particular sees these as stereo when imported. If you edit them to the timeline as a single stereo track, you will not be able to correct this in the sequence afterwards by panning. Therefore, it’s important to remember to first set up your camera files with a dual mono channel assignment before making the first edit. This same issue crops up when round-tripping files through Resolve. It may not properly handle audio, depending on how it interprets these files, so be careful.

These steps add a bit more time at the front end of any given edit, but are guaranteed to give you a better editing experience on complex projects. The results will be easier interchange between applications and more reliable relinking. Finally, when you revisit a project a year or more down the road, everything should pop back up, right where you left it.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Adobe Anywhere and Divine Access


Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. This sort of approach is applied to collaborative workflows with Adobe Anywhere, which permits multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, who installed it as their post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. No science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1Gbps upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible.  Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT in a unique way.

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor that shared our director’s creative vision was the best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close-by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid has added subscription pricing, so G-Men installed the Divine Access cutting room using a Mac Pro and “renting” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film on Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double-check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit including principal photography and post finishing through to delivery. This gives us capabilities through entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals including director of photography Jeff Cronenweth, editor Kirk Baxter and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage. That’s the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two best editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising the application’s awareness among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode editorial media, view dailies, archive the media to LTO data tape and transfer to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we also uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
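The numbers line up neatly: the offline ProRes frame is the full 6K frame scaled by the same factor everywhere, so the 5K extraction maps exactly onto the 1920 x 800 timeline. A quick Python check of the published dimensions:

```python
# Verify that one uniform downsample relates all three frame sizes.
full_6k = (6144, 3072)      # camera frame
extract_5k = (5120, 2133)   # intended center extraction
prores = (2304, 1152)       # offline editorial media

scale = prores[0] / full_6k[0]
print(scale)                        # 0.375 downsample
print(full_6k[1] * scale)           # 1152.0 -> heights agree
print(extract_5k[0] * scale)        # 1920.0 -> timeline width
print(extract_5k[0] / extract_5k[1])  # ~2.40:1 extraction aspect
```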

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors that may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut and with some directors, you are almost starting from scratch once the director arrives. With David, most of that ten week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day; but, it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers lots of question marks that come before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and throughout three minutes it sours to something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor to using Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants who are using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front-end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID built with standard HDDs was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have potentially been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from about six to eight minutes at the start down to 90 seconds at best towards the end.

The final conform and color grading was handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything I used a custom Filemaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.

As with his previous films, director David Fincher has not only told a great story in Gone Girl, but set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools with Premiere Pro CC as the hub, bring the power of this suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

_________________________________

Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

PostPerspective

FxGuide

HDVideoPro

IndieWire

IndieWire blog

ICG Magazine

RedUser

Tony Zhou’s Vimeo take on Fincher 

©2014 Oliver Peters

Color Grading Strategies


A common mistake made by editors new to color correction is to try to nail a “look” all in a single application of a filter or color correction layer. Subjective grading is an art. Just like a photographer who dodges and burns areas of a photo in the lab or in Photoshop to “relight” a scene, so it is with the art of digital color correction. This requires several steps, so a single solution will never give you the best result. I follow this concept, regardless of the NLE or grading application I’m using at the time. Whether stacked filters in Premiere Pro, several color corrections in FCP X, rooms in Color, nodes in Resolve or layers in SpeedGrade – the process is the same. The standard grade for me is often a “stack” of four or more grading levels, layers or nodes to achieve the desired results.
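Expressed as code, the strategy is just function composition: each level does one job, and the order matters. A toy Python sketch operating on a single normalized “pixel” value, where the adjustments are placeholders rather than real grading math:

```python
# Each stage mirrors one level of the grading stack, bottom to top.
def balance(p):   return min(max((p - 0.02) * 1.1, 0.0), 1.0)  # level 1: neutralize and match shots
def look(p):      return p ** 0.95                             # level 2: the subjective grade
def face_key(p):  return p                                     # level 3: masked correction (no-op here)
def vignette(p):  return p * 0.9                               # level 4: edge darkening (masked in practice)

def grade(p):
    for stage in (balance, look, face_key, vignette):  # order matters: each sees the one below it
        p = stage(p)
    return p

print(grade(0.5))
```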

The first step for me is always to balance the image and to make that balance consistent from shot to shot. Achieving this varies with the type of media and application. For example, RED camera raw footage is compatible with most updated software, allowing you to have control over the raw decoding settings. In FCP X or Premiere Pro, you get there through separate controls to modify the raw source metadata settings. In Resolve, I would usually make this the first node. Typically I will adjust ISO, temperature and tint here and then set the gamma to REDlogFilm for easy grading downstream. In a tool like FCP X, you are changing the settings for the media file itself, so any change to the RED settings for a clip will alter those settings for all instances of that clip throughout all of your projects. In other words, you are not changing the raw settings for only the timeline clips. Depending on the application, this type of change is made in the first step of color correction or it is made before you enter color correction.

I’ll continue this discussion based on FCP X for the sake of simplicity, but just remember that the concepts apply generally to all grading tools. In FCP X, all effects are applied to clips before the color board stage. If you are using a LUT filter or some other type of grading plug-in like Nattress Curves, Hawaiki Color or AutoGrade, remember that this is applied first and then that result is effected by the color board controls, which are downstream in the signal flow. If you want to apply an effect after the color board correction, then you must add an adjustment layer title generator above your clip and apply that effect within the adjustment layer.

In the example of RED footage, I set the gamma to REDlogFilm for a flatter profile to preserve dynamic range. In FCP X color board correction 1, I’ll make the necessary adjustments to saturation and contrast to restore this to a neutral, but pleasing image. I will do this for all clips in the timeline, being careful to make the shots consistent. I am not applying a “look” at this level.

The next step, color board correction 2, is for establishing the “look”. Here’s where I add a subjective grade on top of color board correction 1. This could be new from scratch or from a preset. FCP X supplies a number of default color presets that you access from the pull-down menu. Others are available to be installed, including a free set of presets that I created for FCP X. If you have a client that likes to experiment with different looks, you might add several color board correction layers here. For instance, if I’m previewing a “cool look” versus a “warm look”, I might do one in color correction 2 and another in color correction 3. Each correction level can be toggled on and off, so it’s easy to preview the warm versus cool looks for the client.

Assuming that color board correction 2 is for the subjective look, then usually in my hierarchy, correction 3 tends to be reserved for a mask to key faces. Sometimes I’ll do this as a key mask and other times as a shape mask. FCP X is pretty good here, but if you really need finesse, then Resolve would be the tool of choice. The objective is to isolate faces – usually in a close shot of your principal talent – and bring skin tones out against the background. The mask needs to be very soft so as not to draw attention to itself. Like most tools, FCP X allows you to make changes inside and outside of the mask. If I isolate a face, then I could brighten the face slightly (inside mask), as well as slightly darken everything else (outside mask).

Depending on the shot, I might have additional correction levels above this, but all placed before the next step. For instance, if I want to darken specific bright areas, like the sun reflecting off of a car hood, I will add separate layers with key or shape masks for each of these adjustments. This goes back to the photographic dodging and burning analogy.

I like adding vignettes to subtly darken the outer edge of the frame. This goes on correction level 4 in our simplest set-up. The bottom line is that it should be the top correction level. The shape mask should be feathered to be subtle, and then you would darken the outside of the mask by lowering brightness levels and possibly reducing saturation a little. You have to adjust this by feel, and one vignette style will not work for all shots. In fact, some shots don’t look right with a vignette, so you have to use this to taste on a shot-by-shot basis. At this stage it may be necessary to go back to color correction level 2 and adjust the settings in order to get the optimal look, after you’ve done facial correction and vignetting in the higher levels.

If I want any global changes applied after the color correction, then I need to do this using an adjustment layer. One example is a film emulation filter like LUT Utility or FilmConvert. Technically, if the effect should look like film negative, it should be a filter that’s applied before the color board. If the look should be like it’s part of a release print (positive film stock), then it should go after. For the most part, I stick to after (using an adjustment layer), because it’s easier to control, as well as remove, if the client decides against it. Remember that most film emulation LUTs are based on print stock and therefore should go on the higher layer by definition. Of course, other global changes – like additional color correction filters, grain, or a combination of the two – can be added. These should all be done as adjustment layers or track-based effects, for consistent application across your entire timeline.

©2014 Oliver Peters

24p HD Restoration


There’s a lot of good film content that only lives on 4×3 SD 29.97 interlaced videotape masters. Certainly in many cases you can go back and retransfer the film to give it new life, but for many small filmmakers, the associated costs put that out of reach. In general, I’m referring to projects with $0 budgets. Is there a way to get an acceptable HD product from an old Digibeta master without breaking the bank? A recent project of mine would say, yes.

How we got here

I had a rather storied history with this film. It was originally shot on 35mm negative, framed for 1.85:1, with the intent to end up with a cut negative and release prints for theatrical distribution. It was being posted around 2001 at a facility where I worked and I was involved with some of the post production, although not the original edit. At the time, synced dailies were transferred to Beta-SP with burn-in data on the top and bottom of the frame for offline editing purposes. As was common practice back then, the 24fps film negative was transferred to the interlaced video standard of 29.97fps with added 2:3 pulldown – a process that duplicates additional fields from the film frames, such that 24 film frames evenly add up to 60 video fields in the NTSC world. This is loaded into an Avid, where – depending on the system – the redundant fields are removed, or the list that goes to the negative cutter compensates for the adjustments back to a frame-accurate 24fps film cut.
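A short Python sketch makes the cadence concrete and shows why split-field frames appear at certain points (the frame labels are illustrative):

```python
# 2:3 pulldown: 4 film frames (A B C D) become 10 video fields, i.e. 5
# interlaced frames, so 24 film frames fill 60 NTSC fields exactly.
film = ["A", "B", "C", "D"]
cadence = [2, 3, 2, 3]  # fields contributed per film frame

fields = [f for frame, n in zip(film, cadence) for f in [frame] * n]
frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(frames)
# [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
# ('B','C') and ('C','D') are the split-field frames a reverse telecine must
# discard; when the cadence shifts at a cut, detection has to start over.
```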

For the purpose of festival screenings, the project file was loaded into our Avid Symphony and I conformed the film at uncompressed SD resolution from the Beta-SP dailies and handled color correction. I applied a mask to hide the burn-in and ended up with a letter-boxed sequence, which was then output to Digibeta for previews and sales pitches to potential distributors. The negative went off to the negative cutter, but for a variety of reasons, that cut was never fully completed. In the two years before a distribution deal was secured, additional minor video changes were made throughout the film to end up with a revised cut, which no longer matched the negative cut.

Ultimately the distribution deal that was struck was only for international video release and nothing theatrical, which meant that rather than finishing/revising the negative cut, the most cost-effective process was to deliver a clean video master. Except that all video source material had burn-in and the distributor required a full-height 4×3 master. Therefore, letter-boxing was out. To meet the delivery requirements, the filmmaker would have to go back to the original negative and retransfer it in a 4×3 SD format and master that to Digital Betacam. Since the negative was only partially cut and additional shots were added or changed, I went through a process of supervising the color-corrected transfer of all required 35mm film footage. Then I rebuilt the new edit timeline largely by eye-matching the new, clean footage to the old sequence. Once done and synced with the mix, a Digibeta master was created and off it went for distribution.

What goes around comes around

After a few years in distribution, the filmmaker retrieved his master and rights to the film, with the hope of breathing a little life into it through self-distribution – DVDs, Blu-rays, Internet, etc. With the masters back in-hand, it was now a question of how best to create a new product. One thought was simply to letter-box the film (to be in the director’s desired aspect) and call it a day. Of course, that still wouldn’t be in HD, which is where I stepped back in to create a restored master that would work for HD distribution.

Obviously, if there was any budget to retransfer the film negative to HD and repeat the same conforming operation that I’d done a few years ago – except now in HD – that would have been preferable. Naturally, if you have some budget, that path will give you better results, so shop around. Unfortunately, while desktop tools for editors and color correction have become dirt-cheap in the intervening years, film-to-tape transfer and film scanning services have not – and these retain a high price tag. So if I was to create a new HD master, it had to be from the existing 4×3 NTSC interlaced Digibeta master as the starting point.

In my experience, I know that if you are going to blow-up SD to HD frame sizes, it’s best to start with a progressive and not interlaced source. That’s even more true when working with software, rather than hardware up-convertors, like Teranex. Step one was to reconstruct a correct 23.98p SD master from the 29.97i source. To do this, I captured the Digibeta master as a ProResHQ file.

Avid Media Composer to the rescue


When you talk about software tools that are commonly available to most producers, then there are a number of applications that can correctly apply a “reverse telecine” process. There are, of course, hardware solutions from Snell and Teranex (Blackmagic Design) that do an excellent job, but I’m focusing on a DIY solution in this post. That involves deconstructing the 2:3 pulldown (also called “3:2 pulldown”) cadence of whole and split-field frames back into only whole frames, without any interlaced tearing (split-field frames). After Effects and Cinema Tools offer this feature, but they really only work well when the entire source clip is of a consistent and unbroken cadence. This film had been completed in NTSC 29.97 TV-land, so frequently at cuts, the cadence would change. In addition, there had been some digital noise reduction applied to the final master after the Avid output to tape, which further altered the cadence at some cuts. Therefore, to reconstruct the proper cadence, changes had to be made at every few cuts and, in some scenes, at every shot change. This meant slicing the master file at every required point and applying a different setting to each clip. The only software that I know of to effectively do this with is Avid Media Composer.

Start in Media Composer by creating a 29.97 NTSC 4×3 project for the original source. Import the film file there. Next, create a second 23.98 NTSC 4×3 project. Open the bin from the 29.97 project into the 23.98 project and edit the 29.97 film clip to a new 23.98 sequence. Media Composer will apply a default motion adapter to the clip (which is the entire film) in order to reconcile the 29.97 interlaced frame rate into a 23.98 progressive timeline.

Now comes the hard part. Open the Motion Effect Editor window and “promote” the effect to gain access to the advanced controls. Set the Type to “Both Fields”, Source to “Film with 2:3 Pulldown” and Output to “Progressive”. Although you can hit “Detect” and let Media Composer try to determine the right cadence, it will likely guess incorrectly on a complex file like this. Instead, under the 2:3 Pulldown tab, toggle through the cadence options until you see only whole frames when you step through the shot frame-by-frame. Move forward to the next shot(s) until the cadence changes and split-field frames appear again. Split the video track (place an “add edit”) at that cut and step through the cadence choices again to find the right combination. Rinse and repeat for the whole film.
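
For what it’s worth, what you’re spotting by eye is a comb artifact: the two fields of a split-field frame disagree wherever there’s motion. This is not what Media Composer does internally – just a hedged sketch of the same test in code, assuming each frame arrives as a numpy luma array and using an arbitrary threshold:

```python
import numpy as np

def comb_energy(frame):
    """Mean absolute difference between a frame's two fields."""
    top, bottom = frame[0::2], frame[1::2]          # even vs. odd scanlines
    rows = min(len(top), len(bottom))
    return float(np.abs(top[:rows].astype(np.int32)
                        - bottom[:rows].astype(np.int32)).mean())

def find_split_field_frames(frames, threshold=6.0):
    """Indices of frames that look split-field -- candidate add-edit points."""
    return [i for i, f in enumerate(frames) if comb_energy(f) > threshold]
```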

Due to the nature of the process, you might have a cut that falls within a split-field frame. That’s usually because it was a cut in the negative that got transferred as a split-field video frame. In that situation, you will have to remove the entire frame across both audio and video. These tiny one-frame adjustments throughout the film will slightly shorten the duration, which usually isn’t a big deal. The audio edit, however, may or may not be noticeable. If it can’t simply be fixed with a short two-frame dissolve, it’s usually possible to shift the audio edit slightly, into a pause between words, where it will sound fine.

Once the entire film is done, export a new self-contained master file. Depending on codecs and options, this might require a mixdown within Avid, especially if AMA linking was used. That was the case for this project, because I started out in ProResHQ. After export, you’ll have a clean, reconstructed 23.98p 4×3 NTSC-sized (720×486) master file. Now for the blow-up to HD.

DaVinci Resolve

There are many applications and filters that can blow up SD footage to HD, but the results often end up soft. I’ve found DaVinci Resolve to offer some of the cleanest resizing, along with very fast rendering for the final output. Resolve offers three scaling algorithms, with “Sharper” providing the crispest blow-up. The second issue was aspect ratio: restoring the wider frame, which is inherent in going from 4×3 to 16×9, meant blowing up more than normal – enough to fill the image width and crop the top and bottom of the frame. Since Resolve has the editing tools to split clips at cuts, you have the option to change the vertical position of a frame using the tilt control – creatively, on a shot-by-shot basis if you want to. This way you can optimize each shot to best fit the 16×9 frame, rather than arbitrarily lopping off a preset amount from the top and bottom.
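
The arithmetic behind that crop is simple: a 4×3 picture scaled to fill a 1280-pixel width lands at 960 lines, so 240 of them have to go, and the only creative decision is how to split the crop between top and bottom. Here’s a back-of-the-envelope helper of my own – the tilt parameter just mimics a vertical reposition; this isn’t Resolve’s API:

```python
def blowup_crop(target_w=1280, target_h=720, tilt=0.0):
    """4x3 picture scaled to fill the target width; tilt -1.0 keeps the
    top of the frame, +1.0 keeps the bottom, 0.0 crops evenly."""
    scaled_h = round(target_w * 3 / 4)      # 1280 wide -> 960 tall
    excess = scaled_h - target_h            # 240 lines must be cropped
    top = round(excess * (1 + tilt) / 2)    # split the crop per the tilt
    return {"scaled": (target_w, scaled_h),
            "crop_top": top, "crop_bottom": excess - top}

print(blowup_crop())            # even crop: 120 lines off top and bottom
print(blowup_crop(tilt=-0.5))   # keep more headroom: 60 top / 180 bottom
```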

You actually have two options. The first is to blow up the film to a large 4×3 frame out of Resolve and then do the slicing and vertical reframing in yet another application, like FCP 7. That’s what I did originally with this project, because back then the available version of Resolve did not offer what I felt were solid editing tools. Today, I would use the second option: do all of the reframing strictly within Resolve 11.

As always, there are some uncontrollable issues in this process. The original transfer of the film to Digibeta was done on a Rank Cintel Mark III, a telecine unit that used a CRT (literally an oscilloscope tube) as its light source. The images from these tubes get softer as they age, so the tubes require periodic scheduled replacement. During the course of the transfer, the lab replaced the tube, which resulted in a noticeable difference in crispness between shots transferred before and after the replacement. In the SD world, this didn’t appear to be a huge deal. Once I started blowing up that footage, however, it really made a difference: the crisper footage (after the tube replacement) held up to more of a blow-up than the earlier footage. In the end, I opted to take the film only to 720p (1280×720) rather than full 1080p (1920×1080), because I didn’t feel that the majority of the film held up well enough at 1080 – not just for softness, but also for the level of film grain. Not ideal, but the best that can be expected under the circumstances. At 720p, it’s still quite good on Blu-ray, standard DVD or for HD over the web.

To finish the process, I dust-busted the film to fix places with obvious negative dirt (white specks in the frame) caused by the initial handling of the film negative. I used FCP X and CoreMelt’s SliceX to hide and cover the dirt, but other options include built-in functions within Avid Media Composer. While 35mm film still holds a certain intangible visual charm – even in such a “manipulated” state – the process certainly makes you appreciate modern digital cameras like the ARRI ALEXA!

As an aside, I’ve done two other complete films this way, but in those cases I was fortunate to work from 1080i masters, so no blow-up was required. One was a film transferred in its entirety from a low-contrast print, broken into reels. The second was assembled digitally and output to intermediate 23.98 HDCAM-SR masters for each reel, which were then assembled into a 1080i composite master. Since that material started out in HD, cadence changes occurred only at the edits between reels, which meant just five or six cadence corrections fixed the entire film.

©2014 Oliver Peters

Final Cut Pro X Batch Export


One of the “legacy” items that editors miss when switching to Final Cut Pro X is the batch export function. For instance, you might want to encode H.264 versions of numerous ProRes files from your production, in order to upload raw footage for client review. While FCP X can’t do it directly, there is a simple workaround that will give you the same results. It just takes a few steps.

Step one. The first thing to do is find the clips that you want to batch export. In my example images, I selected all the bread shots from a grocery store commercial, which had been grouped into a keyword collection called “bread”. Next, edit these into a new sequence (an FCP X project) in order to export them. They can be in a random order, but should be the full, untrimmed clips. Once the clips are in the project, export an FCPXML from that project.

Step two. I’m going to use the free application ClipExporter to work the magic. Launch it and open the FCPXML for the sequence of bread shots. ClipExporter can be used for a number of different tasks, like creating After Effects scripts, but in this case we are using it to create QuickTime movies. Make sure that none of the other icons are lit. If you toggle the Q (QuickTime) icon once, you will generate new self-contained files, but these might not be in the format you want. Toggle the Q twice and it displays as QR, which means you are now ready to export QuickTime reference files – also something useful from the past. ClipExporter will generate a new QuickTime file (self-contained or reference) for each clip in the FCP X project and copy them into the target folder location that you designate.

Step three. ClipExporter places each new QuickTime clip into its own subfolder, which is a bit cumbersome. Here’s a neat trick that will help. Use the Finder window’s search bar to locate all files that end with the .mov extension, making sure to limit the search to your target folder and not the entire hard drive. Once the clips have been found, select them and copy-and-paste them to a new location, or drag them directly into your encoding application. If you created reference files, copying them will go quickly and won’t take up additional hard drive space.
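
If you’d rather script that gather step than use the Finder, a few lines of Python do the same thing – walk the target folder, find every .mov in the per-clip subfolders and copy them into one flat folder. The paths here are hypothetical; substitute your own:

```python
import shutil
from pathlib import Path

source = Path("~/Exports/bread_clips").expanduser()   # ClipExporter's target folder
dest = Path("~/Exports/for_encoding").expanduser()    # flat folder for the encoder
dest.mkdir(parents=True, exist_ok=True)

for mov in sorted(source.rglob("*.mov")):             # searches subfolders recursively
    shutil.copy2(mov, dest / mov.name)                # reference movies copy quickly
```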

Step four. Drop your selected clips into Compressor or whatever other encoding application you choose. (It will need to be able to read QuickTime reference movies.) Apply your settings and target destination, then encode.

Step five. Since encoding presets typically append a suffix to the file name, you may want to alter or remove it on the newly encoded files. I use Better Rename, a batch utility for file name manipulation, to do this.
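
Any batch rename tool will do; as an illustration, the same cleanup can be scripted. The suffix string and folder below are hypothetical – substitute whatever your preset actually appends:

```python
from pathlib import Path

SUFFIX = "-Apple Devices HD"    # hypothetical; whatever your encoder appends
folder = Path("~/Exports/encoded").expanduser()

for clip in folder.glob("*.mp4"):
    if clip.stem.endswith(SUFFIX):
        # strip the suffix, keep the extension
        clip.rename(clip.with_name(clip.stem[:-len(SUFFIX)] + clip.suffix))
```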

There you go – five easy steps (fewer if you skip some of the optional tasks) to restore batch exports to FCP X.

©2014 Oliver Peters