Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals including director of photography Jeff Cronenweth, editor Kirk Baxter and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new trails in digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage. That’s the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two best editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising the application’s profile among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode editorial media, view dailies, archive the media to LTO data tape and transfer it to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
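
For those who want to see how those numbers line up, here is a minimal sketch (in Python, using only the frame sizes quoted above; the variable names are mine, not anything from the actual pipeline) of the proxy-to-timeline relationship and the repositioning headroom it leaves:

```python
# Frame sizes quoted above (width, height in pixels)
full_6k       = (6144, 3072)   # full RED Dragon frame
extraction_5k = (5120, 2133)   # 5K 2.40:1 center extraction
proxy         = (2304, 1152)   # ProRes 422 (LT) proxy of the full 6K frame
timeline      = (1920, 800)    # Premiere Pro sequence, 2.40:1

# The proxy is the full 6K frame downscaled by one uniform factor...
scale = full_6k[0] / proxy[0]                            # 6144 / 2304 = 2.666...
assert round(extraction_5k[0] / scale) == timeline[0]    # 5120 -> 1920
assert round(extraction_5k[1] / scale) == timeline[1]    # 2133 -> 800

# ...so a 1920 x 800 sequence shows exactly the 5K center extraction, and the
# overshoot left for repositioning (per side) works out to:
pad_x = (proxy[0] - timeline[0]) // 2      # 192 proxy px left/right
pad_y = (proxy[1] - timeline[1]) // 2      # 176 proxy px up/down
print(pad_x, pad_y)                        # roughly 512 x 469 px in 6K terms
```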

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors that may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut and with some directors, you are almost starting from scratch once the director arrives. With David, most of that ten-week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day; but, it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers lots of question marks that come before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and throughout three minutes it sours to something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor in using Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants who are using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID built with standard HDDs was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have potentially been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from about six to eight minutes at the start down to 90 seconds at best towards the end.

The final conform and color grading were handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything I used a custom FileMaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.
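
To illustrate the EDL-plus-codebook idea Nelson describes (purely a sketch; the file names, field names and CSV format below are hypothetical stand-ins for their FileMaker Pro database, not their actual schema), a lookup like this is all that is conceptually involved:

```python
import csv, re

# Hypothetical stand-in for the FileMaker codebook: one row per shot,
# keyed by clip name, carrying its attributes and server path.
def load_codebook(path):
    with open(path, newline="") as f:
        return {row["clip_name"]: row for row in csv.DictReader(f)}

# Pull source clip names out of a CMX3600-style EDL. Real EDLs vary;
# this only looks at the "FROM CLIP NAME" comment lines.
def clips_in_edl(path):
    names = []
    with open(path) as f:
        for line in f:
            m = re.match(r"\*\s*FROM CLIP NAME:\s*(.+)", line.strip())
            if m:
                names.append(m.group(1).strip())
    return names

codebook = load_codebook("codebook.csv")        # hypothetical database export
for name in clips_in_edl("reel_01.edl"):        # hypothetical per-reel EDL
    shot = codebook.get(name)
    if shot:
        print(name, shot["r3d_path"], shot["color_space"], shot["gamma"])
    else:
        print(name, "-- not found in codebook")
```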

As with his previous films, director David Fincher has not only told a great story in Gone Girl, but also set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools, with Premiere Pro CC as the hub, bring the power of this suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

_________________________________

Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

PostPerspective

FxGuide

HDVideoPro

IndieWire

IndieWire blog

ICG Magazine

RedUser

Tony Zhou’s Vimeo take on Fincher 

©2014 Oliver Peters

The FCP X – RED – Resolve Dance


I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. After this has been done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when the RED Rocket card was installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import setting, the option to transcode proxy media was enabled, which continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, as well as adding this info to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and/or optimized-original media is seamless and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but then when you toggle to proxy, it has to make the corresponding adjustments to media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.
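
As a concrete illustration of the compensation FCP X is doing behind the scenes (my own numbers, assuming the 4K 16:9 frame of 4096 x 2304 mentioned earlier and its half-sized proxy), here is a quick sketch:

```python
# Assumed sizes: the 4K 16:9 RED frame mentioned earlier and its half-sized proxy
original = (4096, 2304)
proxy    = (2048, 1152)            # half-sized ProRes Proxy
timeline = (1920, 1080)

# Spatial conform set to "fit": the scale needed to fill the timeline width
scale_original = timeline[0] / original[0]    # ~46.9% of the 4K frame
scale_proxy    = timeline[0] / proxy[0]       # ~93.8% of the proxy

# Both produce the same on-screen framing, which is why toggling between
# proxy and original media is seamless; any manual blow-up or reframe has
# to be compensated by the same factor of two.
print(f"{scale_original:.1%} vs {scale_proxy:.1%}")
```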

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system. Proxies for fast and efficient editing. Original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. No matter what the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We considered these as the new camera masters for those clips. Even there, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc. Set the event browser to “hide rejected”. Next I review the footage based on script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I will mark these as Favorites. As I do this, I’ll select the whole take and not just a portion, since I want to see the whole take.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this down to my initial “open in timeline” syncing, which I’ll explain in a bit. Anyway, my focus in Resolve was only grading and so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
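
To put numbers on that resizing translation (a back-of-the-envelope sketch; it assumes the 4K 16:9 RED frame of 4096 x 2304 described earlier and ignores any repositioning):

```python
# Assumed frame sizes: the 4K 16:9 RED source and the 1080p timeline
src_w, src_h = 4096, 2304
tl_w,  tl_h  = 1920, 1080

blow_up = 1.20    # the 120% scale applied in the FCP X offline

# FCP X first conforms ("fits") the 4K clip to the 1080p timeline, then
# scales it 120%, so the net scale from the 4K source is:
net_scale = (tl_w / src_w) * blow_up          # ~0.5625

# Resolve reproduces that framing by cropping this window out of the full
# 4K frame and scaling it to fill the timeline:
crop_w = round(tl_w / net_scale)              # ~3413 px
crop_h = round(tl_h / net_scale)              # 1920 px
print(crop_w, crop_h)   # still larger than 1920 x 1080, so no resolution is lost
```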

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the 5 or 6 layers that made up this composite underneath the primary storyline. That’s all fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, thus shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and audio was equally shifted in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, or using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProRes HQ with an additional 3 seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it was trying to show a native 4K image at 1:1. Except that this was now 1080p media and NOT 4K. Apparently this resizing metadata is incorrectly held in the FCPXML file and there doesn’t appear to be any way to correct this. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now to an explanation of the audio issue. FCP X master clips are NOT like any other master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function. You have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools unit, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export altered all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or is unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file. Then using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached when their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.
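
Putting quick numbers on that last point (a sketch only, comparing the suggested 4096 x 2304 acquisition frame against the two delivery specs):

```python
acquisition = (4096, 2304)             # suggested 16:9 acquisition frame
deliverables = {
    "DCI 4K (cinema)":   (4096, 2160),
    "UHD (television)":  (3840, 2160),
}

for name, (w, h) in deliverables.items():
    margin_x = (acquisition[0] - w) // 2   # repositioning headroom per side
    margin_y = (acquisition[1] - h) // 2
    print(f"{name}: {margin_x} px horizontal, {margin_y} px vertical headroom")

# DCI 4K:   0 px horizontal, 72 px vertical headroom
# UHD:    128 px horizontal, 72 px vertical headroom
```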

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a factor of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

That brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras work with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding their own color interpretation. It also keeps the quality highest by avoiding further decompression/recompression cycles, as well as variations among debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted this to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the ProRes HQ example above, 30x that 3-minute clip would give us the count for a typical feature. That figure comes out to 480GB.
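
The arithmetic behind those figures is easy to verify. Here is a small sketch (in Python, payload sizes only, ignoring file headers) that reproduces the per-frame DPX/TIFF numbers and the feature-length estimate:

```python
w, h = 4096, 2304                       # the test clip's frame size

# Uncompressed per-frame sizes
dpx_10bit = w * h * 4                   # 10-bit RGB packs into 32 bits (4 bytes) per pixel
tiff_8bit = w * h * 3                   # 8-bit RGB is 24 bits (3 bytes) per pixel
print(dpx_10bit / 1e6)                  # ~37.7 MB, the ~38MB quoted above
print(tiff_8bit / 1e6)                  # ~28.3 MB, the ~28MB quoted above

# Feature-length estimate from the measured UHD ProRes HQ clip
clip_minutes, clip_gb = 3, 16
feature_minutes = 90
print(feature_minutes / clip_minutes * clip_gb)   # 480 GB

# Frame rate scales this linearly: 60fps needs 60/24 = 2.5x the storage of 24p.
```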

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all as 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x more storage demands than 24p.

The other element is system performance. Compressed codecs work when the computer is optimized for these. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ even at 4K will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.

Naturally there are ways to make 4K post efficient and less painful than it might otherwise be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Pablo Rios and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

The Hobbit

Peter Jackson’s The Hobbit: An Unexpected Journey was one of the most anticipated films of 2012. It broke new technological boundaries and presented many creative challenges to its editor. After working as a television editor, Jabez Olssen started his own odyssey with Jackson in 2000 as an assistant editor and operator on The Lord of the Rings trilogy. After assisting again on King Kong, he next cut Jackson’s The Lovely Bones as the first feature film on which he was the sole editor. The director tapped Olssen again for The Hobbit trilogy, where unlike the Rings trilogy, he will be the sole editor on all three films.

Much like the Rings films, all production for the three Hobbit films was shot in a single eighteen-month stretch. Jackson employed as many as 60 RED Digital Cinema EPIC cameras rigged for stereoscopic acquisition at 48fps – double the standard rate of traditional feature photography. Olssen was editing the first film in parallel with the principal photography phase. He had a very tight schedule that only allowed about five months after the production wrapped to lock the cut and get the film ready for release.

To get The Hobbit out on such an aggressive schedule, Olssen leaned hard on a post production infrastructure built around Avid’s technology, including 13 Media Composers (10 with Nitris DX hardware) and an ISIS 7000 with 128TB of storage. Peter Jackson’s production facilities are located in Wellington, New Zealand, where active fibre channel connections tie Stone Street Studio, Weta Digital, Park Road Post Production and the cutting rooms to the Avid ISIS storage. The three films combined total 2,200 hours (1,100 x two eyes) of footage, which is the equivalent of 24 million feet of film. In addition, an Apace active backup solution with 72TB of storage was also installed, which could immediately switch over if the ISIS failed.

The editorial team – headed up by first assistant editor Dan Best – consisted of eight assistant editors, including three visual effects editors. According to Olssen, “We mimicked a similar pipeline to a film project. Think of the RED camera .r3d media files as a digital negative. Peter’s facility, Park Road Post Production, functioned as the digital lab. They took the RED media from the set and generated one-light, color-corrected dailies for the editors. 24fps 2D DNxHD36 files were created by dropping every second frame from the files of one ‘eye’ of a stereo recording. For example, we used 24fps timecode with the difference between the 48fps frames being a period instead of a colon. Frame A would be 11.22.21.13 and frame B would be 11:22:21:13. This was a very natural solution for editing and a lot like working with single-field media files on interlaced television projects. The DNxHD files were then delivered to the assistant editors, who synced, subclipped and organized clips into the Avid projects. Since we were all on ISIS shared storage, once they were done, I could access the bins and the footage was ready to edit, even if I were on set. For me, working with RED files was no different than a standard film production.”
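
To illustrate the A/B-frame timecode scheme Olssen describes (a sketch only; the function below is mine, not part of the Park Road dailies pipeline):

```python
def label_48fps_frame(frame_index):
    """Label a 48fps frame using the 24fps-style timecode scheme described
    above: each pair of 48fps frames shares one 24fps timecode; the first
    ('A') frame is written with periods and the second ('B') with colons."""
    frame_24 = frame_index // 2                 # the shared 24fps frame count
    is_a_frame = (frame_index % 2 == 0)

    ff = frame_24 % 24
    ss = (frame_24 // 24) % 60
    mm = (frame_24 // (24 * 60)) % 60
    hh = frame_24 // (24 * 60 * 60)

    sep = "." if is_a_frame else ":"
    return sep.join(f"{v:02d}" for v in (hh, mm, ss, ff))

# Each pair of 48fps frames produces the same numbers with different separators:
print(label_48fps_frame(100))   # 00.00.02.02  (A frame)
print(label_48fps_frame(101))   # 00:00:02:02  (B frame)
```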

Olssen continued, “A big change for the team since the Rings movies is that the Avid systems have become more portable. Plus the fibre channel connection to ISIS allows us to run much longer distances. This enabled me to have a mobile cart on the set with a portable Media Composer system connected to the ISIS storage in the main editing building. In addition, we also had a camper van outfitted as a more comfortable mobile editing room with its own Media Composer; we called it the EMC – ‘Editorial Mobile Command’. So, I could cut on set while Peter was shooting, using the cart and, as needed, use the EMC for some quick screening of edits during a break in production. I was also on location around New Zealand for three months and during that time I cut on a laptop with mirrored media on external drives.”

The main editing room was set up with a full-blown Nitris DX system connected to a 103” plasma screen for Jackson. The original plan was to cut in 2D and then periodically consolidate scenes to conform a stereo version for screening in the Media Composer suite. Instead they took a different approach. Olssen explained, “We didn’t have enough storage to have all three films’ worth of footage loaded as stereo media, but Peter was comfortable cutting the film in 2D. This was equally important, since more theaters displayed this version of the film. Every few weeks, Park Road Post Production would conform a 48fps stereo version so we could screen the cut. They used an SGO Mistika system for the DI, because it could handle the frame rate and had very good stereo adjustment tools. Although you often have to tweak the cuts after you see the film in a stereo screening, I found we had to do far less of that than I’d expected. We were cognizant of stereo-related concerns during editing. It also helped that we could judge a cut straight from the Avid on the 103” plasma, instead of relying on a small TV screen.”

The editorial team was working with what amounted to 24fps high-definition proxy files for stereo 48fps RED .r3d camera masters. Edit decision lists were shared with Weta Digital and Park Road Post Production for visual effects, conform and digital intermediate color correction/finishing at a 2K resolution. Based on these EDLs, each unit would retrieve the specific footage needed from the camera masters, which had been archived onto LTO data tape.

The Hobbit trilogy is a heavy visual effects production, which had Olssen tapping into the Media Composer toolkit. Olssen said, “We started with a lot of low resolution, pre-visualization animations as placeholders for the effects shots. As the real effects started coming in, we would replace the pre-vis footage with the correct effects shots. With the Gollum scenes we were lucky enough to have Andy Serkis in the actual live action footage from set, so it was easy to visualize how the scene would look. But other CG characters, like Azog, were captured separately on a Performance Capture stage. That meant we had to layer separately-shot material into a single shot. We were cutting vertically in the timeline, as well as horizontally. In the early stages, many of the scenes were a patchwork of live action and pre-vis, so I used PIP effects to overlay elements to determine the scene timing. Naturally, I had to do a lot of temp green-screen composites. The dwarves are full-size actors and for many of the scenes, we had to scale them down and reposition them in the shot so we could see how the shots were coming together.”

As with most feature film editors, Jabez Olssen likes to fill out his cut with temporary sound effects and music, so that in-progress screenings feel like a complete film. He continued, “We were lucky to use some of Howard Shore’s music from the Rings films for character themes that tie The Hobbit back into The Lord of the Rings. He wrote some nice ‘Hobbity’ music for those. We couldn’t use too much of it, though, because it was so familiar to us! The sound department at Park Road Post Production uses Avid Pro Tools systems. They also have a Media Composer connected to the same ISIS storage, which enabled the sound editors to screen the cut there. From it, they generated QuickTime files for picture reference and audio files so the sound editors could work locally on their own Pro Tools workstations.”

Audiences are looking forward to the next two films in the series, which means the adventure continues for Jabez Olssen. On such a long term production many editors would be reluctant to update software, but not this time. Olssen concluded, “I actually like to upgrade, because I look forward to the new features. Although, I usually wait a few weeks until everyone knows it’s safe. We ended up on version 6.0 at the end of the first film and are on 6.5 now. Other nonlinear editing software packages are more designed for one-man bands, but Media Composer is really the only software that works for a huge visual effects film. You can’t underestimate how valuable it is to have all of the assistant editors be able to open the same projects and bins. The stability and reliability is the best. It means that we can deliver challenging films like The Hobbit trilogy on a tight post production schedule and know the system won’t let us down.”

Originally written for Avid Technology, Inc.

©2013 Oliver Peters

Offline to online with 4K


The 4K buzz seems to be steam-rolling the industry just like stereo3D before it. It’s too early to tell whether it will be an immediate issue for editors or not, since 4K delivery requirements are few and far between. Nevertheless, camera and TV-set manufacturers are building important parts of the pipeline. RED Digital Cinema is leading the way with a post workflow that’s both proven and relatively accessible on any budget. A number of NLEs support editing and effects in 4K, including Avid DS, Autodesk Smoke, Adobe Premiere Pro, Apple Final Cut Pro X, Grass Valley EDIUS and Sony Vegas Pro.

Although many of these support native cutting with RED 4K media, I’m still a strong believer in the traditional offline-to-online editing workflow. In this post I will briefly outline how to use Avid Media Composer and Apple FCP X for a cost-effective 4K post pipeline. One can certainly start and finish a RED-originated project in FCP X or Premiere Pro for that matter, but Media Composer is still the preferred creative tool for many editing pros. Likewise, FCP X is a viable finishing tool. I realize that statement will raise a few eyebrows, but hear me out. Video passing through Final Cut is very pristine, it supports the various flavors of 2K and 4K formats and there’s a huge and developing ecosystem of highly-inventive effects and transitions. This combination is a great opportunity to think outside of the box.

Offline editing with Avid Media Composer

Avid has supported native RED files for several versions, but Media Composer is not resolution independent. This means RED’s 4K (or 5K) images are downsampled to 1080p and reformatted (cropped or letterboxed) to fit into the 16:9 frame. When you shoot with a RED camera, you should ideally record in one of their 4K 16:9 sizes. The native .r3d files can be brought into Media Composer using the “Link to AMA File(s)” function. Although you can edit directly with AMA-linked files, the preferred method is to use this as a “first step”. That means, you should use AMA to cull your footage down to the selected takes and then transcode the remainder when you start to fine-tune your cut.

Avid’s media creation settings are the place to adjust the RED debayer parameters. Media Composer supports the RED Rocket card for accelerated rendering, but without it, Media Composer can still provide reasonable speed in software-only transcoding. Set the debayer quality to 1/4 or 1/8, and transcoding 4K clips to Avid DNxHD36 for offline editing will be closer to real-time on a fast machine, like an 8-core Mac Pro. This resolution is adequate for making your creative decisions.

When the cut is locked, export an AAF file for the edited sequence. Media should be linked (not embedded) and the AAF Edit Protocol setting should be enabled. In this workflow, I will assume that audio post is being handled by an audio editor/mixer running a DAW, such as Pro Tools, so I’ll skip any discussion of audio. That would be exported using standard AAF or OMF workflows for audio post. Note that all effects should be removed from your sequence before generating the AAF file, since they won’t be translated in the next steps. This includes any nested clips, collapsed tracks and speed ramps, which are notorious culprits in any timeline translation.

Color grading with DaVinci Resolve

Blackmagic Design’s DaVinci Resolve 9 is our next step. You’ll need the full, paid version (software-only) for bigger-than-HD output. After launching Resolve, import the Avid AAF file from Resolve’s conform tab. Make sure you check “link to camera files” so that Resolve connects to the original .r3d media and not the Avid DNxHD transcodes. Resolve will import the sequence, connect to the media and generate a new timeline that matches the sequence exported from Media Composer. Make sure the project is set for the desired 4K format.

Next, open the Resolve project settings and adjust the camera raw values to the proper RED settings. Then make sure the individual clips are set to “project” in their camera settings tab. You can either use the original camera metadata or adjust all clips to a new value in the project settings pane. Once this is done, you are ready to grade the timeline as with any other production. Resolve uses a very good scaling algorithm, so if the RED files were framed with the intent of resizing and repositioning (for example, 5K files that are to be cropped for the ideal framing within a 4K timeline), then it’s best to make that adjustment within the Resolve timeline.

Once you’ve completed the grade, set up the render. Choose the FCP XML easy set-up and alter the output frame size to the 4K format you are using. Start the render job. Resolve 9 renders quite quickly, so even without a RED Rocket card, I found that 4K ProRes HQ or 4444 rendering, using full-resolution debayering, was completed in about a 6:1 ratio to running time on my Mac Pro. When the renders are done, export the FCP XML (for FCP X) from the conform tab. I found I had to use an older version of this new XML format, even though I was running FCP X 10.0.7. It was unable to read the newest version that Resolve had exported.

Online with Apple Final Cut Pro X

The last step is finishing. Import the Resolve-generated XML file, which will in turn create the necessary FCP Event (media linked to the 4K ProRes files rendered from Resolve) and a timeline for the edited sequence. Make sure the sequence (Project) settings match your desired 4K format. Import and sync the stereo or surround audio mix (generated by the audio editor/mixer) and rebuild any effects, titles, transitions and fast/slo-mo speed effects. Once everything is completed, use FCP X’s share menu to export your deliverables.

©2013 Oliver Peters

A RED post production workflow

When you work with RED Digital Cinema’s cameras, part of the post production workflow is a “processing” step, not unlike the lab and transfer phase of film post. The RED One, EPIC and SCARLET cameras record raw images using Bayer-pattern light filtering to the sensor. The resulting sensor data is compressed with the proprietary REDCODE codec and stored to CF cards or hard drives. In post, these files have to be decompressed and converted into RGB picture information, much the same as if you had shot camera raw still photography with a Nikon or Canon DSLR.

RED has been pushing the concept of working natively with the .r3d media (skipping any interim conversion steps) and has made an SDK (software development kit) available to NLE manufacturers. This permits REDCODE raw images to be converted and adjusted right inside the editing interface. Although each vendor’s implementation varies, the raw module enables control over the metadata for color temperature, tint, color space, gamma space, ISO and other settings. You also have access to the various quality increments available to “de-Bayer” the image (data-to-RGB interpolation). The downside to working natively is that, even with a fast machine, performance can be sluggish. This is magnified when dealing with a large quantity of footage, such as a feature film or other long-form projects. The native clips in your editing project are encumbered by the overhead of 4K compressed camera files.

For these and other reasons, I still advocate an offline-online procedure, rather than native editing, when working on complex RED projects. You could convert to a high-quality format like ProRes 4444 or 10-bit uncompressed at the beginning and never touch the RED files again, but the following workflow is one designed to give you the best of all worlds – easy editing, plus grading to get the best out of the raw files. There are many possible RED workflows, but I’ve used a variation of these steps quite successfully on a recent indie feature film – cut on Final Cut Pro 7 and graded in Apple Color. My intent here is to describe an easy workflow for projects mastering at 2K and HD sizes, which are destined for film festivals, TV and Blu-ray.

Conversion for offline editing

When you receive media from the studio or location, start by backing up and verifying all files. Make sure your camera-original media is safe. Then move on to RED’s REDCINE-X PRO. There is no need yet to change color metadata. Simply accept what was shot and set up a batch to convert the .r3d files into editing media, such as Avid DNxHD36 or Apple ProRes LT or ProRes Proxy. 1920×1080 or 1280×720 are the preferred sizes for lightweight editing media.

With a RED ROCKET accelerator card installed, conversion time will be about real-time. Without it, adjust the de-Bayer resolution settings to 1/2, 1/4 or 1/8 for faster rendering. The quality of these dailies only needs to be sufficient for making effective editing decisions. The advantage to using REDCINE-X PRO and not the internal conversion tools of the NLE (like FCP 7’s Log and Transfer) is faster conversion, which can be done on any machine and isn’t dependent on the specific requirements of a given editing application.

Creative (offline) editing

Import the media into your NLE. In the case of Final Cut Pro 7, simply drag the converted QuickTime files into a bin. Import any double-system audio and merge the clips. Edit until the picture cut is locked. Break the final sequence into reels of approximately ten minutes in length each. Export audio as OMF files for your sound designer/mixer. Duplicate the reels as video-only timelines, remove any effects, extend the length of shots with dissolves and restore all shots with speed changes to full length. Export an XML file for each of these reels.

REDCINE-X PRO primary grading pass

This is a two-step color grading process: Step 1 in REDCINE-X PRO and Step 2 in Apple Color. The advantage of REDCINE-X PRO is direct access to the raw files without the abstraction layer of an SDK. By adjusting the source settings panel within Color, Resolve, Media Composer, Premiere Pro and others, you are adjusting the raw controls; but, any further color adjustments (like curves and lift/gamma/gain “color wheels”) are made downstream of the internally-converted RGB image. This is functionally no different than rendering a high-quality, raw-adjusted RGB file from one application and then doing further corrections to it in another. That’s the philosophy here.

Import the XML file for each reel as a timeline into REDCINE-X PRO. This conforms the .r3d files into an edited sequence corresponding to your cut in FCP. Adjust the raw settings for all shots in the timeline. First, set color space to RedColor2. (You may temporarily set gamma space to RedGamma2 and increase saturation to better see the effect of your adjustments.) Remember, this is a primary grading pass, so match all shots and get the most consistent look to the entire timeline.

You can definitely do very extensive color correction in REDCINE-X PRO and never need another grading tool. That’s not the process here, though, so a neutral, plain look tends to be better for the next stage. The point is to create an evenly matched timeline that is within boundaries for more subjective and aggressive grading once you move to Color. When you are ready to export, return saturation to normal, set color/gamma space to RedColor2/RedLogFilm and the de-Bayer quality to full resolution. Export (render) the timeline using Apple ProRes 4444 at either a 2K or 1920×1080 size. Make sure the export preset is configured to create unique file names and an accompanying FCP XML. Repeat this process for each reel.

Sending to Color and FCP completion

Import the REDCINE-X PRO-generated XML for each reel into Final Cut. Reconnect media if needed. Remove any filters that REDCINE-X PRO may have inadvertently added. Double-check the sequence against your rough cut to verify accuracy and then send the new timeline to Color. Each reel becomes a separate Color project file. Grade for your desired look and render the final result as ProRes HQ or ProRes 4444. Lastly, send the project back to Final Cut Pro to complete the roundtrip.

Once the graded timelines are back in FCP, rebuild any visual effects, speed effects and transitions, including dissolves. Combine the video-only sequences with the mixed audio and add any finishing touches necessary to complete your master file and deliverables.

Written for DV Magazine (NewBay Media LLC)

©2012 Oliver Peters

The Girl with the Dragon Tattoo

The director who brought us Se7en has tapped into the dark side again with the Christmas-time release of The Girl with the Dragon Tattoo. Hot off the success of The Social Network, director David Fincher dove straight into this cinematic adaptation of Swedish writer Stieg Larsson’s worldwide publishing phenomenon. Even though a Swedish film from the book had been released in 2009, Fincher took on the project, bringing his own special touch.

The Girl with the Dragon Tattoo is part of Larsson’s Millennium trilogy. The plot revolves around the disappearance of Harriet Vanger, a member of one of Sweden’s wealthiest families, forty years earlier. All these years later, her uncle hires Mikael Blomkvist (Daniel Craig), a disgraced financial reporter, to investigate the disappearance. Blomkvist teams up with punk computer hacker Lisbeth Salander (Rooney Mara). Together they begin to unravel the truth that links Harriet’s disappearance to a string of grotesque murders committed forty years before.

For this production, Fincher once again assembled the production and post team that proved successful on The Social Network, including director of photography Jeff Cronenweth, editors Kirk Baxter and Angus Wall and the music scoring team of Trent Reznor and Atticus Ross. Production started in August of last year and proceeded for 167 shooting days on location and in studios in Sweden and Los Angeles.

Like the previous film, The Girl with the Dragon Tattoo was shot completely with RED cameras – about three-quarters using the RED One with the M-X sensor and the remaining quarter with the RED EPIC, which was finally being released around that time. Since the EPIC cameras were in their very early stages, the decision was made to not use them on location in Sweden, because of the extreme cold. After the first phase in Sweden, the crew moved to soundstages in Los Angeles and continued with the RED Ones. The production started using the EPIC cameras during their second phase of photography in Sweden and during reshoots back in Los Angeles.

The editing team

I recently spoke with Kirk Baxter and Angus Wall, who as a team have cut Fincher’s last three films, earning them a best editing Oscar for The Social Network as well as a nomination for The Curious Case of Benjamin Button. I was curious about tackling a story that had already been filmed a couple of years before. Kirk Baxter replied, “We were really reacting to David’s material above all, so the fact that there was another film about the same book didn’t really affect me. I hadn’t seen the film before and I purposefully waited until we were about halfway through the fine cut before I sat down and watched it. Then it was interesting to see how they had approached certain story elements, but only as a curiosity.”

As in the past, both Wall and Baxter split up editorial duties based on the workload at any given time. Baxter started cutting at the beginning of production, with Wall joining the project in April of this year. Baxter explained, “I was cutting during the production to keep up with camera, but sometimes priorities would shift. For example, if an actor had to leave the country or a set needed to be struck, David would need to see a cut quickly to be sure that he had the coverage he needed. So in these cases, we’d jump on those scenes to make sure he knew they were OK.” Wall continued, “This was a very labor-intensive film. David shot 95% to 98% of everything with two cameras. On The Social Network they recorded 324 hours of footage and selected 281 hours for the edit. On Dragon Tattoo that count went up to 483 hours recorded and 443 hours selected!”

The Girl with the Dragon Tattoo has many invisible effects. According to Wall, “At last count there were over 1,000 visual effects shots throughout the film. Most of these are shot stabilizations or visual enhancements, such as adding matte painting elements, lens flares or re-creating split screens from the offline. Snow and other seasonal elements were added to a number of shots, helping the overall tone, as well as reinforcing the chronology of the film. I think viewers will be hard-pressed to tell which shots are real and which are enhanced.” Baxter added, “In a lot of cases the exterior locations were shot in Sweden and elaborate sets were built on sound stages in LA for the interiors. There’s one sequence that takes place in a cabin. All of the exteriors seen through the windows and doors are green screen shots. And those were bright green! I’ve been seeing the composited shots come back and it’s amazing how perfect they are. The door is opened and there’s a bright exterior there now.”

A winning workflow solution

The key to efficient post on a RED project is the workflow. Assistant editor Tyler Nelson explained the process to me. “We used essentially the same procedures as for The Social Network. Of course, we learned things on that, which we refined for this film. Since they used both the RED M-X and the EPIC cameras, there were two different frame sizes to deal with – 4352 x 2176 for the RED One and 5120 x 2560 for the EPIC. Plus each of these cameras uses a different color science to process the data from the sensor. The file handling was done through Datalab, a company that Angus owns. A custom piece of software called Wrangler automates the handling of the RED files. It takes care of copying, verifying and archiving the .r3d files to LTO and transcoding the media for the editors, as well as for review on the secured PIX system. The larger RED files were scaled down to 1920 x 1080 ProRes LT with a center-cut extraction for the editors, as well as 720p H.264 for PIX. The ‘look’ was established on set, so none of the RED color metadata was changed during this process.”
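Wrangler itself is proprietary, but the copy-and-verify portion of that kind of tool is easy to picture. The following Python sketch is a conceptual stand-in only – the paths are invented, the checksum choice is mine, and the real system also handles the LTO archiving, transcoding and PIX uploads that are not shown here.

# Minimal stand-in for the copy-and-verify step of a dailies tool.
# Not the actual Wrangler software; paths and layout are assumptions.
import hashlib
import shutil
from pathlib import Path

CARD = Path("/Volumes/RED_CARD_001")          # camera card (example path)
DEST = Path("/Volumes/RAID/dailies/day_01")   # destination (example path)

def sha256(path, chunk=8 * 1024 * 1024):
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def copy_and_verify(card, dest):
    dest.mkdir(parents=True, exist_ok=True)
    for src in sorted(card.rglob("*.R3D")):
        target = dest / src.name
        shutil.copy2(src, target)
        if sha256(src) != sha256(target):     # verify the copy bit-for-bit
            raise IOError(f"Checksum mismatch on {src.name}")
        print(f"Verified {src.name}")

if __name__ == "__main__":
    copy_and_verify(CARD, DEST)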

Nelson continued, “When the cut was locked, I used an EDL [edit decision list] and my own database to conform the .r3d files back into reels of DPX image sequences. This part was done in After Effects, which also allowed me to reposition and stabilize shots as needed. Most of the repositioning was a north-south adjustment to move a shot up or down for better head room. The final output frame size was 3600 x 1500 pixels. Since I was using After Effects, I could make any last-minute fixes if needed. For instance, I saw one shot that had a monitor reflection within the shot. It was easy to quickly paint that out in After Effects. The RED files were set to the RedColor2 / RedLogFilm color space and gamma settings. Then I rendered out extracted DPX image sequences of the edited reels to be sent to Light Iron Digital, which again handled the DI on this film.”
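For readers unfamiliar with the format, a CMX3600-style EDL carries exactly the data a conform like this needs: an event number, a source clip or reel name, and source/record in and out timecodes. The sketch below is a simplified Python illustration of reading those fields – it is not the database tooling Nelson describes, and real EDLs also carry reel notes, comments and effects that need more careful handling.

import re

# Simplified illustration of pulling conform data from a CMX3600-style EDL.
# Not the tooling used on the film; the sample event line is invented.
EVENT = re.compile(
    r"^(\d{3,})\s+(\S+)\s+V\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_edl(text):
    """Yield (event, source, src_in, src_out, rec_in, rec_out) tuples."""
    for line in text.splitlines():
        match = EVENT.match(line.strip())
        if match:
            yield match.groups()

sample = "001  A004_C012  V  C  03:12:10:05 03:12:14:17 01:00:00:00 01:00:04:12"
for event in parse_edl(sample):
    print(event)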

On the musical trail

The Girl with the Dragon Tattoo leans heavily on a score by Trent Reznor and Atticus Ross. An early taste came from a teaser for the film, cut by Kirk Baxter to a driving Reznor cover of Led Zeppelin’s “Immigrant Song”. Unlike the typical editor-composer interaction – where library temp tracks are used for the edit and a new score is delivered at the end of the line – Reznor and Ross fed tracks to the editors throughout the edit.

Baxter explained, “At first Trent and Atticus score to the story rather than to specific scenes. The main difference with their approach to scoring a picture is that they first provide us with a library of original score, removing the need for needledrops. It’s then a collaborative process of finding homes for the tracks. Ren Klyce [sound designer/re-recording mixer] also plays an integral part in this.” Wall added, “David initially reviewed the tracks and made suggestions as to which scenes they might work best in. We started with these suggestions and refined placement as the edit evolved. The huge benefit of working this way was that we had a very refined temp score very early in the process.” Baxter concluded, “Then Trent’s and Atticus’s second phase is scoring to picture. They re-sculpt their existing tracks to perfectly fit picture and the needs of the movie. Trent’s got a great work ethic. He’s very precise and a real perfectionist.”

The cutting experience

I definitely enjoyed the Oscar-winning treatment these two editors applied to intercutting dialogue scenes in The Social Network, but Baxter was quick to interject, “I’d have to say Dragon Tattoo was more complicated than The Social Network. It was a more complex narrative, so there were more opportunities to play with scene order. In the first act you are following the two main characters on separate paths. We played with how their scenes were intercut so that their stories were as interconnected as possible, giving promise to the audience of their inevitable union.”

“The first assembly was about three hours long. That hovered at around 2:50 for a while and got a bit longer as additional material was shot, but then shorter again as we trimmed. Eventually some scenes were lost to bring the locked cut in at two-and-a-half hours. Even the scenes that were lost still had to be fine cut. You don’t know what can be lost unless you finish everything out and consider the film in its full form. A lot of work was put into the back half of the film to speed it up. Most of those changes were a matter of tightening the pace by losing the lead-ins and lead-outs of scenes and often losing some detail within the scenes.”

Wall expanded on this, “Fans of any popular book series want a filmed adaptation to be faithful to the original story. In this case, we’re really dealing with a ‘five-act’ structure. [laughs] Obviously, not everything in the book can make it into the movie. Some of the investigative dead ends have to be excised, but you can’t remove every red herring. So it was a challenging film to cut. Not only was it very labor-intensive, with many disturbing scenes to put together, it was also a tricky storytelling exercise. But when you’re done and it’s all put together, it’s very rewarding to see. The teaser calls it the ‘feel-bad film of Christmas’ but it’s a really engaging story about these characters’ human experience. We hope audiences will find it entertaining.”

Some additional coverage from Post magazine.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters