The FCP X – RED – Resolve Dance


I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that’s done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts between FCP X and RED transcodes when the RED Rocket card is installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled, which continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files, with slates used for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, and this info was also added to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once they’re done, you can switch FCP X to work with proxies and all the media will be there. The toggle between proxy and optimized/original media is seamless and FCP X takes care of properly changing all sizing information. For example, this project used 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but when you toggle to proxy, it has to make the corresponding adjustments for media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.
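
As an illustration of the bookkeeping FCP X is doing behind that toggle, here’s a small sketch (hypothetical numbers matching this project – a 4096-wide source, half-sized proxies, a 1920-wide timeline; this is just the arithmetic, not anything from Apple’s code):

```python
# Illustration of the spatial conform bookkeeping FCP X does when toggling
# between original and proxy media. Hypothetical numbers for this project:
# 4K 16x9 source, half-sized proxies, 1080p timeline. Not Apple code.
SOURCE_W = 4096                 # .r3d frame width
PROXY_W = SOURCE_W // 2         # FCP X proxies are half-sized: 2048
TIMELINE_W = 1920               # 1080p timeline

scale_original = TIMELINE_W / SOURCE_W   # "fit" conform from the original
scale_proxy = TIMELINE_W / PROXY_W       # same framing from the proxy

# Any editorial blow-up (e.g. 120%) multiplies both factors equally, so the
# on-screen framing is identical in either mode - the two scales always
# differ by exactly the proxy downscale factor of 2.
for label, s in (("original", scale_original), ("proxy", scale_proxy)):
    print(f"{label:8s}: conform {s:.3f}, with 120% blow-up {1.2 * s:.3f}")
```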

The built-in proxy/optimized-original workflow gives you offline and online editing phases right within the same system: proxies for fast, efficient editing; original media or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time for color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to a bad RED drive or possibly excessive humidity at that location. Whatever the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X, and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete those clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We treated these as the new camera masters for those clips. Even then, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots – short clips of the floor, a bad slate and so on – and set the event browser to “hide rejected”. Next I review the footage based on the script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I mark these as Favorites. When doing this, I select the whole take and not just a portion, since I want the entire take available for review.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not the beta) of DaVinci Resolve 11 Lite for this grade. My intention was to roundtrip back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this back to my initial “open in timeline” syncing, which I’ll explain in a bit. In any case, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
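
Here’s that mapping as a short sketch (illustrative numbers only – Resolve performs this internally when it parses the FCPXML):

```python
# The resize translation for the 120% blow-up example (illustrative only -
# Resolve performs this mapping internally when it reads the FCPXML).
TL_W, TL_H = 1920, 1080          # timeline raster
SRC_W, SRC_H = 4096, 2304        # 4K 16x9 .r3d source
ZOOM = 1.2                       # 120% blow-up applied in FCP X

# At 120%, only 1/1.2 of the 1080p frame is visible:
vis_w, vis_h = TL_W / ZOOM, TL_H / ZOOM               # 1600 x 900

# Map that window back into 4K source pixels:
px_ratio = SRC_W / TL_W                               # ~2.133
crop_w, crop_h = vis_w * px_ratio, vis_h * px_ratio   # ~3413 x 1920

# Resolve crops ~3413x1920 from the 4K original and scales it DOWN to
# 1920x1080 - no upscaling, so the blow-up costs no resolution.
print(round(crop_w), round(crop_h))
```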

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider the V1 track – the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted into a TV set hanging on a wall in the primary scene. I decided to place the five or six layers that made up this composite underneath the primary storyline. That’s all fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, shifting everything else up accordingly. As a result, the bulk of the video ended up on V6 or V7 and the audio was shifted equally far in the other direction. This means a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node two, I do most of my primary grading. After that, I add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards and don’t have a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select the handle length, browse to a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProResHQ with an additional three seconds of handles on either side of each timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it were trying to show a native 4K image at 1:1 – except that this was now 1080 media, NOT 4K. Apparently this resizing metadata is incorrectly carried in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now for an explanation of the audio issue. FCP X master clips are NOT like master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In both the offline edited sequence (the locked rough cut) and the sequence from Resolve, detach all audio. Delete the audio from the Resolve sequence. Copy and paste the audio from the rough cut into the Resolve sequence. If you’ve done this correctly, it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command – the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools rig, you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not usable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This produced an accurate translation, except that the Resolve export collapsed all stereo and multi-channel audio tracks into single mono tracks. Therefore, the Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option – or unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file and then, using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

The NLE that wouldn’t die II


With echoes of Monty Python in the background, two years on, Final Cut Pro 7 and Final Cut Studio are still widely in use. As I noted in my post from last November, I still see facilities with firmly entrenched and mature FCP “legacy” workflows that haven’t moved to another NLE yet. Some were ready to move to Adobe until they learned subscription was the only choice going forward. Others maintain a fanboy’s faith in Apple that the next version will somehow fix all the things they dislike about Final Cut Pro X. Others simply haven’t found the alternative solutions compelling enough to shift.

I’ve been cutting all manner of projects in FCP X since the beginning and am currently using it on a feature film. I augment it in lots of ways with plug-ins and utilities, so I’m about as deep into FCP X workflows as anyone out there. Yet, there are very few projects in which I don’t touch some aspect of Final Cut Studio to help get the job done. Some fueled by need, some by personal preference. Here are some ways that Studio can still work for you as a suite of applications to fill in the gaps.

DVD creation

There are no more version updates to Apple’s (or Adobe’s) DVD creation tools. FCP X and Compressor can author simple “one-off” discs using their export/share/batch functions. However, if you need a more advanced, authored DVD with branched menus and assets, DVD Studio Pro (as is Adobe Encore CS6) is still a very viable tool, assuming you already own Final Cut Studio. For me, the need to do this has been reduced, but it hasn’t completely gone away.

Batch export

Final Cut Pro X has no batch export function for source clips. This is something I find immensely helpful. For example, many editorial houses specify that their production company client supply edit-friendly “dailies” – especially when final color correction and finishing will be done by another facility or artist/editor/colorist. This is a throwback to film workflows and is most often the case with RED and ALEXA productions. Certainly a lot of the same processes can be done with DaVinci Resolve, but it’s simply faster and easier with FCP 7.

In the case of ALEXA, a lot of editors prefer to do their offline edit with LUT-corrected, Rec 709 images, instead of the flat, Log-C ProRes 4444 files that come straight from the camera. With FCP 7, simply import the camera files, add a LUT filter like the one from Nick Shaw (Antler Post), enable TC burn-in if you like and run a batch export in the codec of your choice. When I do this, I usually end up with a set of Rec 709 color, ProResLT files with burn-in that I can use to edit with. Since the file name, reel ID and timecode are identical to the camera masters, I can easily edit with the “dailies” and then relink to the camera masters for color correction and finishing. This works well in Adobe Premiere Pro CC, Apple FCP 7 and even FCP X.
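
If FCP 7 isn’t available, the same kind of batch pass can be scripted with other tools. Below is a minimal sketch driving ffmpeg from Python – a stand-in for the FCP 7 batch export, not Nick Shaw’s plug-in. The LUT file name, font path and fixed start timecode are assumptions for illustration; a real dailies tool would read each clip’s embedded timecode track instead:

```python
"""Scripted stand-in for the FCP 7 batch export described above: apply a
Log-C -> Rec 709 LUT and a timecode burn-in, encode ProRes LT. Assumptions
(not from the original post): a .cube LUT file, a font path and a fixed
start timecode - real dailies tools read each clip's embedded TC track."""
import subprocess
from pathlib import Path

LUT = "logc_to_rec709.cube"            # hypothetical LUT file
FONT = "/Library/Fonts/Arial.ttf"      # any TrueType font will do

Path("dailies").mkdir(exist_ok=True)
for src in sorted(Path("camera_masters").glob("*.mov")):
    vf = (
        f"lut3d=file={LUT},"
        f"drawtext=fontfile={FONT}:fontsize=36:fontcolor=white:"
        "box=1:boxcolor=black@0.5:x=(w-tw)/2:y=h-th-20:"
        "timecode='01\\:00\\:00\\:00':rate=24000/1001"
    )
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-vf", vf,
         "-c:v", "prores_ks", "-profile:v", "1",   # profile 1 = ProRes LT
         "-c:a", "copy", str(Path("dailies") / src.name)],
        check=True,
    )
```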

Timecode and reel IDs

When I work with files from the various HDSLRs, I prefer to convert them to ProRes (or DNxHD) and add timecode and reel ID info. In my eyes, this turns the file into professional video media that’s much more easily dealt with throughout the rest of the post pipeline. I have a specific routine for doing this, but when some of these steps fail due to a file error, I find that FCP 7 is a good back-up utility. From inside FCP 7, you can easily add reel IDs and also modify or add timecode. This metadata is embedded into the actual media file and readable by other applications.

Log and Transfer

Yes, I know that you can import and optimize (transcode) camera files in FCP X. I just don’t like the way it does it. The FCP 7 Log and Transfer module allows the editor to set several naming preferences upon ingest, including custom names and reel IDs. That metadata is then embedded directly into the QuickTime movie created by the Log and Transfer module. FCP X doesn’t embed name and ID changes into the media file, but rather into its own database. Consequently, this information is not transportable by simply reading the media file within another application. As a result, when I work with media from a C300, for example, my first step is still Log and Transfer in FCP 7, before I start editing in FCP X.

Conform and reverse telecine

A lot of cameras offer the ability to shoot at higher frame rates with the intent of playing the footage back at a slower frame rate for a slow motion effect – “overcranking” in film terms. Advanced cameras like the ALEXA, RED One, EPIC and Canon C300 write a timebase reference into the file that tells the NLE that a file recorded at 60fps is to be played at 23.98fps. This is not true of HDSLRs, like a Canon 5D, 7D or a GoPro. You have to tell the NLE what to do. FCP X only does this through its Retime effect, which means you are telling the file to be played as slow motion, thus requiring a render.

I prefer to use Cinema Tools to “conform” the file. This alters the file header information of the QuickTime file, so that any application will play it at the conformed, rather than recorded frame rate. The process is nearly instant and when imported into FCP X, the application simply plays it at the slower speed – no rendering required. Just like with an ALEXA or RED.
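
The math behind the conform is trivial, which is why it’s nearly instant – only the playback timebase in the file header changes. A quick illustration:

```python
# The arithmetic behind the conform (illustration only - Cinema Tools
# rewrites the QuickTime header; no frames are touched or rendered).
recorded_fps = 59.94
conformed_fps = 23.976

slowdown = recorded_fps / conformed_fps        # exactly 2.5x slow motion
burst = 10.0                                   # seconds as recorded
print(f"{slowdown:.1f}x slowmo: a {burst:.0f}s burst plays for "
      f"{burst * slowdown:.0f}s at {conformed_fps}fps")
```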

Another function of Cinema Tools is reverse telecine. If a camera file was recorded with built-in “pulldown” – sometimes called 24-over-60 – redundant video fields are added to the file. With standard 3:2 pulldown, each group of four 24p frames is spread across ten fields (five 30fps frames), so two of every five recorded frames mix fields from two different source frames. You want to remove these if you are editing in a native 24p project. Cinema Tools will let you do this and, in the process, render a new 24p-native file.

Color correction

I really like the built-in and third-party color correction tools for Final Cut Pro X. I also like Blackmagic Design’s DaVinci Resolve, but there are times when Apple Color is still the best tool for the job. I prefer its user interface to Resolve, especially when working with dual displays and if you use an AJA capture/monitoring product, Resolve is a non-starter. For me, Color is the best choice when I get a color correction project from outside where the editor used FCP 7 to cut. I’ve also done some jobs in X and then gone to Color via Xto7 and then FCP 7. It may sound a little convoluted, but is pretty painless and the results speak for themselves.

Audio mixing

I do minimal mixing in X. It’s fine for simple mixes, but for me, a track-based application is the only way to go. I do have X2Pro Audio Convert, but many of the out-of-house Pro Tools mixers I work with prefer to receive OMFs rather than AAFs. This means going to FCP 7 first and generating the OMF from there. This has the added advantage that I can proof the timeline for errors first – something you can’t do when generating an AAF, since there’s no way to open and inspect it. FCP X has a tendency to include many clips that are muted and usually out of your way inside X. By going to FCP 7 first, you have a chance to clean up the timeline before the mixer gets it.

Any complex projects that I mix myself are done in Adobe Audition or Soundtrack Pro. I can get to Audition via the XML route – or I can go to Soundtrack Pro through XML and FCP 7 with its “send to” function. Either application works for me and most of my third-party plug-ins show up in each. Plus they both have a healthy set of their own built-in filters. When I’m done, simply export the mix (and/or stems) and import the track back into FCP X to marry it to the picture.

Project trimming

Final Cut Pro X has no media management function. You can copy/move/aggregate all of the media from a single Project (timeline) into a new Event, but these files are the source clips at full length. There is no ability to create a new project with trimmed or consolidated media – that is, where the source files from a timeline are shortened to include only the portions actually cut into the sequence, plus user-defined “handles” (an extra few frames or seconds at the beginning and end of each clip). Trimmed, media-managed projects are often required when sending your edited sequence to an outside color correction facility. It’s also a great way to archive the “unflattened” final sequence of your production, while still leaving some wiggle room for future trimming adjustments. The sequence remains editable and you still have the ability to slip, slide or change cuts by a few frames.
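
Under the hood, trimming with handles is simple interval math. Here’s a sketch of the logic (hypothetical frame numbers – not Apple’s media manager):

```python
# Sketch of "trimming with handles": collapse the ranges of a source clip
# actually used in a sequence into the minimal spans a media manager would
# keep. Hypothetical frame numbers - not Apple's implementation.
def trimmed_spans(used_ranges, handle=24, clip_len=None):
    """used_ranges: (in, out) frame pairs cut into the sequence."""
    padded = []
    for start, end in sorted(used_ranges):
        hi = end + handle if clip_len is None else min(clip_len, end + handle)
        padded.append((max(0, start - handle), hi))
    merged = [padded[0]]
    for lo, hi in padded[1:]:
        if lo <= merged[-1][1]:          # spans overlap: extend the last one
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

# Two uses of one 5000-frame take, with one-second (24-frame) handles:
print(trimmed_spans([(1000, 1100), (1090, 1300)], clip_len=5000))
# -> [(976, 1324)] : one short trimmed clip instead of the whole take
```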

I ran into this problem the other day, where I needed to take a production home for further work. It was a series of commercials cut in FCP X, from which I had recut four spots as director’s cuts. The edit was locked, but I wanted to finish the mix and grade at home. No problem, I thought. Simply duplicate the project with “used media”, create the new Event and “organize” (copies media into the new Event folder). I could live with the fact that the media was full length, but there was one rub. Since I had originally edited the series of commercials using Compound Clips for selected takes, the duping process brought over all of these Compounds – even though none was actually used in the edit of the four director’s cuts. This would have resulted in copying nearly two-thirds of the total source media. I could not remove the Compounds from the copied Event, without also removing them from the original, which I didn’t want to do.

The solution was to send the sequence of four spots to FCP 7 and then media manage that timeline into a trimmed project. The difference was 12GB of trimmed source clips instead of HUNDREDS of GB. At home, I then sent the audio to Soundtrack Pro for a mix and the picture back to FCP X for color correction. Connect the mix back to the primary storyline in FCP X and call it done!

I realize that some of this may sound a bit complex to some readers, but professional workflows are all about having a good toolkit and knowing how to use it. FCP X is a great tool for productions that can work within its walls, but if you still own Final Cut Studio, there are a lot more options at your disposal. Why not continue to use them?

©2013 Oliver Peters

A RED post production workflow

When you work with RED Digital Cinema’s cameras, part of the post production workflow is a “processing” step, not unlike the lab and transfer phase of film post. The RED One, EPIC and SCARLET cameras record raw images using Bayer-pattern light filtering to the sensor. The resulting sensor data is compressed with the proprietary REDCODE codec and stored to CF cards or hard drives. In post, these files have to be decompressed and converted into RGB picture information, much the same as if you had shot camera raw still photography with a Nikon or Canon DSLR.

RED has been pushing the concept of working natively with the .r3d media (skipping any interim conversion steps) and has made an SDK (software development kit) available to NLE manufacturers. This permits REDCODE raw images to be converted and adjusted right inside the editing interface. Although each vendor’s implementation varies, the raw module enables control over the metadata for color temperature, tint, color space, gamma space, ISO and other settings. You also have access to the various quality increments available to “de-Bayer” the image (data-to-RGB interpolation). The downside to working natively is that even with a fast machine, performance can be sluggish. This is magnified when dealing with a large quantity of footage, such as a feature film or other long-form project. The native clips in your editing project are encumbered by the overhead of compressed 4K camera files.
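
For intuition about what that de-Bayer step involves, here’s a toy bilinear demosaic of an RGGB mosaic – a textbook sketch only; RED’s actual interpolation is proprietary and far more sophisticated:

```python
# Toy bilinear de-Bayer of an RGGB mosaic - illustrates the data-to-RGB
# interpolation described above; RED's real demosaic is far more advanced.
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # R on even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # B on odd rows/cols
    g_mask = 1.0 - r_mask - b_mask                      # G at the other sites

    # Classic bilinear kernels: average the nearest samples of each color.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    interp = lambda chan, k: convolve2d(chan, k, mode="same", boundary="symm")
    return np.dstack([
        interp(mosaic * r_mask, k_rb),   # fill in missing R samples
        interp(mosaic * g_mask, k_g),    # fill in missing G samples
        interp(mosaic * b_mask, k_rb),   # fill in missing B samples
    ])

sensor = np.random.rand(8, 8)            # stand-in for raw photosite data
print(demosaic_bilinear(sensor).shape)   # (8, 8, 3): full RGB per pixel
```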

For these and other reasons, I still advocate an offline-online procedure, rather than native editing, when working on complex RED projects. You could convert to a high-quality format like ProRes 4444 or 10-bit uncompressed at the beginning and never touch the RED files again, but the following workflow is one designed to give you the best of all worlds – easy editing, plus grading to get the best out of the raw files. There are many possible RED workflows, but I’ve used a variation of these steps quite successfully on a recent indie feature film – cut on Final Cut Pro 7 and graded in Apple Color. My intent here is to describe an easy workflow for projects mastering at 2K and HD sizes, which are destined for film festivals, TV and Blu-ray.

Conversion for offline editing

When you receive media from the studio or location, start by backing up and verifying all files. Make sure your camera-original media is safe. Then move on to RED’s REDCINE-X PRO. There is no need yet to change color metadata. Simply accept what was shot and set up a batch to convert the .r3d files into editing media, such as Avid DNxHD36 or Apple ProRes LT or ProRes Proxy. 1920×1080 or 1280×720 are the preferred sizes for lightweight editing media.

With a RED ROCKET accelerator card installed, conversion time will be about real-time. Without it, adjust the de-Bayer resolution settings to 1/2, 1/4 or 1/8 for faster rendering. The quality of these dailies only needs to be sufficient for making effective editing decisions. The advantage to using REDCINE-X PRO and not the internal conversion tools of the NLE (like FCP 7’s Log and Transfer) is faster conversion, which can be done on any machine and isn’t dependent on the specific requirements of a given editing application.

Creative (offline) editing

Import the media into your NLE. In the case of Final Cut Pro 7, simply drag the converted QuickTime files into a bin. Import any double-system audio and merge the clips. Edit until the picture cut is locked. Break the final sequence into reels of approximately ten minutes in length each. Export audio as OMF files for your sound designer/mixer. Duplicate the reels as video-only timelines, remove any effects, extend the length of shots with dissolves and restore all shots with speed changes to full length. Export an XML file for each of these reels.

REDCINE-X PRO primary grading pass

This is a two-step color grading process: Step 1 in REDCINE-X PRO and Step 2 in Apple Color. The advantage of REDCINE-X PRO is direct access to the raw files without the abstraction layer of an SDK. By adjusting the source settings panel within Color, Resolve, Media Composer, Premiere Pro and others, you are adjusting the raw controls, but any further color adjustments (like curves and lift/gamma/gain “color wheels”) are made downstream of the internally-converted RGB image. This is functionally no different than rendering a high-quality, raw-adjusted RGB file from one application and then doing further corrections to it in another. That’s the philosophy here.

Import the XML file for each reel as a timeline into REDCINE-X PRO. This conforms the .r3d files into an edited sequence corresponding to your cut in FCP. Adjust the raw settings for all shots in the timeline. First, set the color space to RedColor2. (You may temporarily set the gamma space to RedGamma2 and increase saturation to better see the effect of your adjustments.) Remember, this is a primary grading pass, so match all shots and aim for the most consistent look across the entire timeline.

You can definitely do very extensive color correction in REDCINE-X PRO and never need another grading tool. That’s not the process here, though, so a neutral, plain look tends to be better for the next stage. The point is to create an evenly matched timeline that is within boundaries for more subjective and aggressive grading once you move to Color. When you are ready to export, return saturation to normal, set color/gamma space to RedColor2/RedLogFilm and the de-Bayer quality to full resolution. Export (render) the timeline using Apple ProRes 4444 at either a 2K or 1920×1080 size. Make sure the export preset is configured to create unique file names and an accompanying FCP XML. Repeat this process for each reel.

Sending to Color and FCP completion

Import the REDCINE-X PRO-generated XML for each reel into Final Cut. Reconnect media if needed. Remove any filters that REDCINE-X PRO may have inadvertently added. Double-check the sequence against your rough cut to verify accuracy and then send the new timeline to Color. Each reel becomes a separate Color project file. Grade for your desired look and render the final result as ProRes HQ or ProRes 4444. Lastly, send the project back to Final Cut Pro to complete the roundtrip.

Once the graded timelines are back in FCP, rebuild any visual effects, speed effects and transitions, including dissolves. Combine the video-only sequences with the mixed audio and add any finishing touches necessary to complete your master file and deliverables.

Written for DV Magazine (NewBay Media LLC)

©2012 Oliver Peters

The Girl with the Dragon Tattoo

The director who brought us Se7en has tapped into the dark side again with the Christmas-time release of The Girl with the Dragon Tattoo. Hot off the success of The Social Network, director David Fincher dove straight into this cinematic adaptation of Swedish writer Stieg Larsson’s worldwide publishing phenomenon. Even though a Swedish film of the book had been released in 2009, Fincher took on the project, bringing his own special touch.

The Girl with the Dragon Tattoo is part of Larsson’s Millennium trilogy. The plot revolves around the disappearance of Harriet Vanger, a member of one of Sweden’s wealthiest families, forty years earlier. After these many years her uncle hires Mikael Blomkvist (Daniel Craig), a disgraced financial reporter, to investigate the disappearance. Blomkvist teams with punk computer hacker Lisbeth Salander (Rooney Mara). Together they start to unravel the truth that links Harriet’s disappearance to a string of grotesque murders that happened forty years before.

For this production, Fincher once again assembled the production and post team that proved successful on The Social Network, including director of photography Jeff Cronenweth, editors Kirk Baxter and Angus Wall and the music scoring team of Trent Reznor and Atticus Ross. Production started in August of last year and proceeded for 167 shooting days on location and in studios in Sweden and Los Angeles.

Like the previous film, The Girl with the Dragon Tattoo was shot completely with RED cameras – about three-quarters using the RED One with the M-X sensor and the remaining quarter with the RED EPIC, which was finally being released around that time. Since the EPIC cameras were in their very early stages, the decision was made to not use them on location in Sweden, because of the extreme cold. After the first phase in Sweden, the crew moved to soundstages in Los Angeles and continued with the RED Ones. The production started using the EPIC cameras during their second phase of photography in Sweden and during reshoots back in Los Angeles.

The editing team

I recently spoke with Kirk Baxter and Angus Wall, who as a team have cut Fincher’s last three films, earning them a best editing Oscar for The Social Network, as well as a nomination for The Curious Case of Benjamin Button. I was curious about tackling a film that had already been done a couple of years before. Kirk Baxter replied, “We were really reacting to David’s material above all, so the fact that there was another film about the same book didn’t really affect me. I hadn’t seen the film before and I purposefully waited until we were about halfway through the fine cut before I sat down and watched the film. Then it was interesting to see how they had approached certain story elements, but only as a curiosity.”

As in the past, both Wall and Baxter split up editorial duties based on the workload at any given time. Baxter started cutting at the beginning of production, with Wall joining the project in April of this year. Baxter explained, “I was cutting during the production to keep up with camera, but sometimes priorities would shift. For example, if an actor had to leave the country or a set needed to be struck, David would need to see a cut quickly to be sure that he had the coverage he needed. So in these cases, we’d jump on those scenes to make sure he knew they were OK.” Wall continued, “This was a very labor intensive film. David shot 95% to 98% of everything with two cameras. On The Social Network they recorded 324 hours of footage and selected 281 hours for the edit. On Dragon Tattoo that count went up to 483 hours recorded and 443 hours selected!”

The Girl with the Dragon Tattoo has many invisible effects. According to Wall, “At last count there were over 1,000 visual effects shots throughout the film. Most of these are shot stabilizations or visual enhancements, such as adding matte painting elements, lens flares or re-creating split screens from the offline.  Snow and other seasonal elements were added to a number of shots, helping the overall tone, as well as reinforcing the chronology of the film. I think viewers will be hard pressed to tell which shots are real and which are enhanced.” Baxter added, “In a lot of cases the exterior locations were shot in Sweden and elaborate sets were built on sound stages in LA for the interiors. There’s one sequence that takes place in a cabin. All of the exteriors seen through the windows and doors are green screen shots. And those were bright green! I’ve been seeing the composited shots come back and it’s amazing how perfect they are. The door is opened and there’s a bright exterior there now.”

A winning workflow solution

The key to efficient post on a RED project is the workflow. Assistant editor Tyler Nelson explained the process to me. “We used essentially the same procedures as for The Social Network. Of course, we learned things on that, which we refined for this film. Since they used both the RED M-X and the EPIC cameras, there were two different frame sizes to deal with – 4352 x 2176 for the RED One and 5120 x 2560 for the EPIC. Plus each of these cameras uses a different color science to process the data from the sensor. The file handling was done through Datalab, a company that Angus owns. A custom piece of software called Wrangler automates the handling of the RED files. It takes care of copying, verifying and archiving the .r3d files to LTO and transcoding the media for the editors, as well as for review on the secured PIX system. The larger RED files were scaled down to 1920 x 1080 ProRes LT with a center-cut extraction for the editors, as well as 720p H.264 for PIX. The ‘look’ was established on set, so none of the RED color metadata was changed during this process.”

“When the cut was locked, I used an EDL [edit decision list] and my own database to conform the .r3d files back into reels of DPX image sequences. This part was done in After Effects, which also allowed me to reposition and stabilize shots as needed. Most of the repositioning was generally a north-south adjustment to move a shot up or down for better headroom. The final output frame size was 3600 x 1500 pixels. Since I was using After Effects, I could make any last-minute fixes if needed. For instance, I saw one shot that had a monitor reflection within the shot. It was easy to quickly paint that out in After Effects. The RED files were set to the RedColor2 / RedLogFilm color space and gamma settings. Then I rendered out the extracted DPX image sequences of the edited reels to be sent to Light Iron Digital, who again handled the DI on this film.”

On the musical trail

The Girl with the Dragon Tattoo leans heavily on a score by Trent Reznor and Atticus Ross. An early peek came from a teaser cut for the film by Kirk Baxter to a driving Reznor cover of Led Zeppelin’s “Immigrant Song”. Unlike the typical editor and composer interaction – where library temp tracks are used for the edit and then a new score is done at the end of the line – Reznor and Ross were feeding tracks to the editors during the edit.

Baxter explained, “At first Trent and Atticus score to the story rather than to specific scenes. The main difference with their approach to scoring a picture is that they first provide us with a library of original score, removing the need for needledrops. It’s then a collaborative process of finding homes for the tracks. Ren Klyce [sound designer/re-recording mixer] also plays an integral part in this.” Wall added, “David initially reviewed the tracks and made suggestions as to which scenes they might work best in.  We started with these suggestions and refined placement as the edit evolved.  The huge benefit of working this way was that we had a very refined temp score very early in the process.” Baxter concluded, “Then Trent’s and Atticus’s second phase is scoring to picture. They re-sculpt their existing tracks to perfectly fit picture and the needs of the movie. Trent’s got a great work ethic. He’s very precise and a real perfectionist.”

The cutting experience

I definitely enjoyed the Oscar-winning treatment these two editors applied to intercutting dialogue scenes in The Social Network, but Baxter was quick to interject, “I’d have to say Dragon Tattoo was more complicated than The Social Network. It was a more complex narrative, so there were more opportunities to play with scene order. In the first act you are following the two main characters on separate paths. We played with how their scenes were intercut so that their stories were as interconnected as possible, giving promise to the audience of their inevitable union.”

“The first assembly was about three hours long. That hovered at around 2:50 for a while and got a bit longer as additional material was shot, but then shorter again as we trimmed. Eventually some scenes were lost to bring the locked cut in at two-and-a-half hours. Even though scenes were lost, those still have to be fine cut. You don’t know what can be lost unless you finish everything out and consider the film in its full form. A lot of work was put into the back half of the film to speed it up. Most of those changes were a matter of tightening the pace by losing the lead-in and lead-outs of scenes and often losing some detail within the scenes.”

Wall expanded on this, “Fans of any popular book series want a filmed adaptation to be faithful to the original story. In this case, we’re really dealing with a ‘five act’ structure. [laughs]. Obviously, not everything in the book can make it into the movie. Some of the investigative dead ends have to be excised, but you can’t remove every red herring.  So it was a challenging film to cut. Not only was it very labor intensive, with many disturbing scenes to put together, it was also a tricky storytelling exercise. But when you’re done and it’s all put together, it’s very rewarding to see. The teaser calls it the ‘feel-bad film of Christmas’ but it’s a really engaging story about these characters’ human experience. We hope audiences will find it entertaining.”

Some additional coverage from Post magazine.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where a digital scan of a film frame at 4K is considered full resolution and a 2K scan half resolution. Properly used, the term 4K refers only to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference is padded with black pixels.
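
That fit-and-pad arithmetic is simple. A quick sketch (the computed active areas land within rounding of the published DCI image sizes):

```python
# Fitting a film aspect ratio into the DCI 4K container (4096 x 2160):
# scale until one axis touches the container, pad the rest with black.
CONTAINER_W, CONTAINER_H = 4096, 2160

def fit_to_container(aspect):
    w, h = CONTAINER_W, round(CONTAINER_W / aspect)
    if h > CONTAINER_H:                  # too tall at full width: pillarbox
        w, h = round(CONTAINER_H * aspect), CONTAINER_H
    return w, h, CONTAINER_W - w, CONTAINER_H - h

for name, aspect in (("2.39:1 scope", 2.39), ("1.85:1 flat", 1.85)):
    w, h, pad_x, pad_y = fit_to_container(aspect)
    print(f"{name}: {w} x {h} active image, {pad_x + pad_y} px of padding")
# -> scope: 4096 x 1714 letterboxed; flat: 3996 x 2160 pillarboxed
# (within rounding of the published DCI sizes, 4096x1716 / 3996x2160)
```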

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture, focal length, subject distance and the acceptable circle of confusion. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. But because larger sensors require longer focal lengths to cover an equivalent field of view compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras a preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you still gain the benefits of this engineering. If your target format is HD, you will get similar results – as far as these optical characteristics are concerned – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.
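
For reference, the standard thin-lens formulas behind those depth of field claims (textbook optics, not from this article): with focal length f, f-number N, circle of confusion c and subject distance s, the hyperfocal distance and the near/far limits of acceptable focus are:

```latex
H \approx \frac{f^{2}}{N c} + f, \qquad
D_{\mathrm{near}} = \frac{s\,(H - f)}{H + s - 2f}, \qquad
D_{\mathrm{far}} = \frac{s\,(H - f)}{H - s} \quad (s < H)
```

Matching a given field of view on a larger sensor calls for a longer focal length, and because f enters the hyperfocal term squared, the depth of field shrinks quickly – the practical source of the large-sensor look.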

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius, shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

RED post for My Fair Lidy

I’ve worked on various RED projects, but a recent interesting example is My Fair Lidy, an independent film produced through the Valencia College Film Production Technology program. This was a full-blown feature shot entirely with RED One cameras. In this program, professional filmmakers with real projects in hand partner with a class of eager students seeking to learn the craft of film production. I’ve edited two of the films produced through the program and assisted in various aspects of post on many others. My Fair Lidy – a quirky comedy directed by program director Ralph Clemente – was shot in 17 days this summer at various central Florida locations. Two RED Ones were used – one handled by director of photography Ricardo Galé and the second by student cinematographers. My Fair Lidy was produced by SandWoman Films and stars Christopher Backus and Leigh Shannon.

There are many ways to handle the post production of native RED media and I’ve covered a number of them in these earlier posts. There is no single “best way” to handle these files, because each production is often best served by a custom solution. Originally, I felt the way to tackle the dailies was to convert the .r3d camera files into ProRes 4444 files using the RedLogFilm profile. This gives you a very flat look and a starting point very similar to ARRI ALEXA files shot with the Log-C profile. My intention would have been to finish and grade straight from the QuickTimes and never return to the .r3d files, unless I needed to fix some problems. Neutral images with a RedLogFilm gamma setting are very easy to grade and they let the colorist swing the image toward different looks with ease. However, after my initial discussions with Ricardo, it was decided to do the final grade from the native camera raw files, so that we had the most control over the image, plus the ability to zoom in and reframe using the native 4K files as a source.

The dailies and editorial flow

My Fair Lidy was lensed with a 16 x 9 aspect ratio, with the REDs set to record 4096 x 2304 (at 23.98fps). In addition to a RED One and a healthy complement of grip, lighting and electrical gear, Valencia College owns several Final Cut Pro post systems and a Red Rocket accelerator card. With two REDs rolling most of the time, the latter was a godsend on this production. We had two workstations set up – one as the editor’s station with a large Maxx Digital storage array and the other as the assistant’s station, which housed the Red Rocket card. My two assistants (Kyle Prince and Frank Gould) handled all data back-up and conversion of the 4K RED files to 1920 x 1080 ProResHQ for editorial media. Using ProResHQ was probably overkill for cutting the film (any of the lower ProRes codecs would have been fine for editorial decisions), but it gave us the best possible image for any potential screenings, trailers, etc.

Redcine-X was our tool for .r3d media organization and conversion. All in-camera settings were left alone, except the gamma adjustment. The Red Rocket card handles the full-resolution debayering of the raw files, so conversion time is close to real time. The two stations were networked via AFP (Apple’s file-sharing protocol), which permitted the assistant to handle his tasks without slowing down the editor. In addition, the assistant would sync and merge the double-system, multi-track audio recordings and enter basic scene/take descriptions. Each shoot day had its own FCP project, so when done, project files and media (.r3d, ProRes and audio) were copied over to the editor’s Maxx array. Master clips from these daily FCP projects were then copied-and-pasted (and media relinked) into a single “master edit” FCP project.

For reasons of schedule and availability, I split the editing responsibilities with a second film editor, Patrick Tyler. My initial role was to bring the film to its first cut and then Patrick handled revisions with the producer and director. Once the picture was locked, I rejoined the project to cover final finishing and color grading. My Fair Lidy was on a very accelerated schedule, with sound design and music scoring running on a parallel track. In total, post took about 15 weeks from start to finish.

Finishing and grading

Since we didn’t use FCP’s Log and Transfer function nor the in-camera QuickTime reference files as edit proxies, there was no easy way to get Apple Color to automatically relink clips to the original .r3d files. You can manually redirect Color to link to RED files, but this must be done one shot at a time – not exactly desirable for the 1300 or so shots in the film.

The recommended workflow is to export an XML from FCP 7, which is then opened in Redcine-X. It will correctly reconnect to the .r3d files in place of the QuickTime movies. From there you export a new XML, which can be imported into Color. Voila! A Color timeline that matches the edit using the native camera files. Unfortunately for us, this is where reality came crashing in – literally. No matter what we did, using both XMLs and EDLs, everything that we attempted to import into Color crashed the application. We also tried ClipFinder, another free application designed for RED media. It didn’t crash Color, but a significant number of shots were incorrectly linked. I suspect some internal confusion because of the A and B camera situation.

On to Plan B. Since Redcine-X correctly links to the media and includes not only controls for the raw settings, but also a healthy toolset for primary color correction, then why not use it for part of the grading process? Follow that up with a pass through Color to establish the stylistic “look”. This ended up working extremely well for us. Here are the basic steps I followed.

Step 1. We broke the film into ten reels and exported an XML file for each reel from FCP 7.

Step 2. Each reel’s XML was imported into Redcine-X as a timeline. I changed all the camera color metadata for each shot to create a neutral look and to match shots to each other. I used RedColor (slightly more saturated than RedColor2) and RedGamma2 (not quite as flat as RedLogFilm), plus adjusted the color temp, tint and ISO values to get a neutral white balance and match the A and B camera angles. The intent was to bring the image “within the goalposts” of the histogram. Occasionally I would make minor exposure and contrast adjustments, but for the most part, I didn’t touch any of the other color controls.

My objective was to end up with a timeline that looked consistent but preserved dynamic range. Essentially that’s the same thing I would do as the first step using the primary tab within Color. The nice part about this is that once I matched the settings of the shots, the A and B cameras looked very consistent.

Step 3. Each timeline was exported from Redcine-X as a single ProResHQ file with these new settings baked in. We had moved the Red Rocket card into the primary workstation, so these 1920 x 1080 clips were rendered with full resolution debayering. As with the dailies, rendering time was largely real-time or somewhat slower. In this case, approximately 10-20 minutes per reel.

Step 4. I imported each rendered clip back into FCP and placed it onto video track two over the corresponding clips for that reel to check the conforming accuracy and sync. Using the “next edit” keystroke, I quickly stepped through the timeline and “razored” each edit point on the clip from Redcine-X. This may sound cumbersome, but only took a couple of minutes for each reel. Now I had an FCP sequence from a single media clip, but with each cut split as an edit point. Doing this creates “notches” that are used by the color correction software for cuts between corrections. That’s been the basis for all “tape-to-tape” color correction since DaVinci started doing it and the new Resolve software still includes a similar automatic scene detection function today.

Step 5. I sent my newly “notched” timeline to Color and graded as I normally would. By using the Redcine-X step as a “pre-grade”, I had done the same thing to the image as I would have done using the RED tab within Color, thus keeping with the plan to grade from the native camera raw files. I do believe the approach I took was faster and better than trying to do it all inside Color, because of the inefficiency of bouncing in and out of the RED tab in Color for each clip. Not to mention that Color really bogs down when working with 4K files, even with a Red Rocket card in place.

Step 6. The exception to this process was any shot that required a blow-up or repositioning. For these, I sent the ProRes file from dailies in place of the rendered shot from Redcine-X. In Color, I would then manually reconnect to the .r3d file and resize the shot in Color’s geometry room, using the file’s full 4K frame to preserve resolution in the 1080 blow-up.
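
The headroom gained by resizing from the camera original is easy to quantify. A quick back-of-the-envelope check, using the RED One’s 4K 16×9 frame size:

```python
# Back-of-the-envelope check of blow-up headroom when resizing from the
# .r3d instead of the 1080 dailies. Frame size is RED One 4K 16x9.
src_w, src_h = 4096, 2304
timeline_w, timeline_h = 1920, 1080

max_blowup = min(src_w / timeline_w, src_h / timeline_h)
print(f"Max blow-up before upsampling: {max_blowup:.2f}x")  # ~2.13x
```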

Step 7. The last step was to render in Color and then “Send to FCP” to complete the roundtrip. In FCP, the reels were assembled into the full movie and then married to the mixed soundtrack for a finished film.

© 2011 Oliver Peters

Higher Ground

Timing is often everything when it comes to indie filmmaking. That’s certainly the case with Higher Ground, the directorial debut of Academy Award-nominated actress Vera Farmiga (Up In The Air, Source Code, Nothing But The Truth). The film about the struggle and coexistence between faith and doubt is inspired by Carolyn S. Briggs’ memoir, This Dark World. It features Farmiga in the lead role of Corinne Walker and follows her through three phases of her life. The film has appeared at the 2011 Sundance, Tribeca and Los Angeles Film Festivals and is currently in distribution through Sony Pictures Classics.

Successfully pulling off a highly-regarded, low-budget feature is a challenge for anyone, but even more so if you are the director, the lead actress and pregnant on top of that. Living in upstate New York, Farmiga happened to be ten minutes away from BCDF Pictures, a production company and facility built with the intent of facilitating indie feature film production. She decided to check them out as a possible production resource and quickly discovered a synergy that was ideal for Higher Ground. Although BCDF was prepping another film at the time, the decision was made to fast-track Higher Ground, in part to be able to film before Farmiga was too far along in her pregnancy. Within a couple of weeks, the film was in full production for a 28-day filming schedule during June 2010.

BCDF Pictures, situated in the upper Hudson River valley, is a mash-up between summer camp and the old Hollywood studio system. The founders also created a film fund, Strategic Motion Ventures, to finance the pictures produced by BCDF. They own RED One MX camera packages and the farmhouse-style facility is home to several edit suites and screening theaters, which makes it ideal for a filmmaking home base. For Higher Ground, BCDF supplied two RED packages to director of photography Michael McDonough. They also worked out various tests prior to the production that let the DoP establish a number of in-camera looks for the three time periods in the story.

Hitting the ground running

Higher Ground editor Colleen Sharp wasn’t hired until three weeks after the start of production. So, BCDF proceeded down a post production workflow path based on the assumption that the film would be edited using Apple Final Cut Pro, their primary in-house NLE platform. Head of post production Jeremy Newmark handled the one-light color correction for the RED camera dailies, transcoding them into ProRes QuickTime movies. By the time Sharp was on board, BCDF had already accumulated two-and-a-half weeks of dailies in the ProRes format.

According to Sharp, “I’ve cut one other film using Final Cut, but I feel more comfortable with [Avid] Media Composer. I suggested, if possible, it would be better if I could cut Higher Ground on an Avid, because I had to hit the ground running. Since I was starting three weeks after filming had begun, I needed to be as efficient as possible and that would be on a system that I was most comfortable with.” Of course, this added the dilemma of whether or not to re-transcode the RED files into a format native to Avid.

Good timing once again played a role. Avid had just released Media Composer version 5.0, which enabled the direct use of ProRes files through AMA (Avid Media Access), as well as limited third-party hardware support for monitoring. In addition to Final Cut systems, BCDF also owned an older Media Composer license. They were able to cost-effectively set up the Avid suite for Sharp by upgrading their older Avid software license and adding the Matrox MXO2 Mini for video output to the large screen in the edit suite.

Newmark explained, “I was concerned about whether I’d need to take the existing dailies and convert them again to DNxHD media for Colleen. I talked it over with a friend at PostWorks in New York and it seemed like using AMA would be viable. We proceeded down the road of using the ProRes files in the Avid and Colleen was able to cut the film entirely using linked AMA files. We never transcoded them into DNxHD and it worked well. Of course, at the beginning I still had the Plan B of converting everything again if the AMA idea didn’t work; but, I wanted to avoid this as it would have cost us extra time. Even though we own a Red Rocket card for fast transcoding, the crew was using two cameras the entire time and often recording very long performance takes. So, in two-and-a-half weeks, they’d already accumulated quite a large amount of footage.”

In the end, it worked better than expected for what was at that time a new software release. Higher Ground is likely the first feature film edited using strictly AMA-linked ProRes files. Thanks in part to the weak economy, the film company was able to secure off-hours packages for DI finishing in Los Angeles and sound editing and mixing at Sound One in New York. Newmark continued, “I was able to send the colorist [Adam Hawkey] an EDL and the trimmed .r3d RED camera files, as well as the looks that I’d established with the DoP. These were imported into a Nucoda system, which read the files perfectly, including the looks presets. Adam told us this worked seamlessly and gave him a great starting point to work from in grading the film. Michael [McDonough] supervised the grading over a five-day stretch.”
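
Generating that trimmed media pull is the sort of task a short script can help with. As a sketch, assuming a CMX3600-style EDL whose reel names correspond to the RED clip names (the file name is made up), this collects the unique sources that need to ship:

```python
# Sketch: build a media pull list from a CMX3600-style EDL, so only the
# .r3d clips actually used in the cut ship to the DI house. Assumes the
# EDL reel names correspond to RED clip names; the file name is made up.
import re

def pull_list(edl_path):
    reels = set()
    with open(edl_path) as f:
        for line in f:
            m = re.match(r"^\d+\s+(\S+)\s+", line)
            if m and m.group(1) not in ("AX", "BL"):  # skip aux/black
                reels.add(m.group(1))
    return sorted(reels)

print("\n".join(pull_list("higher_ground.edl")))
```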

Anticipating the big challenges

I asked Colleen Sharp about editing challenges on the film. She replied, “The biggest challenge I’d anticipated turned out not to be an issue at all. That was working with a first-time director, who was also the lead actor. Vera was great to work with. She was new to the entire editing process and very intrigued by the possibilities. She was hands-on during the edit and very helpful. I normally work on a film during the shooting and complete an editor’s cut before I start working with the director. In this case, I wasn’t completely done with my cut before the production wrapped, so the last portion of this first cut was worked out with Vera’s involvement. They finished shooting just after the 4th of July weekend, but I didn’t have my first cut together until the third week in July. It was just under three hours long! We continued working at it until mid-October and ended up at the final length of 107 minutes. Naturally, with that much trimming, you have to lose some scenes that are painful to cut, but that’s all part of the process.”

“I’m glad to say that none of Vera’s decisions were ever based on vanity. They were only about the best performance, and with this cast, the performances were always good. One editing challenge was dealing with the number of children in the scenes. For instance, Vera’s sister Taissa plays Corinne in the younger scenes. She had never acted before. So, you had Vera directing her sister, and she got a great performance out of her. Of course, as the editor, it’s my job to help get that performance on screen in a way that best represents the story.”

Naturally, whenever you have a lot of footage, the biggest challenge for the editor is wrestling with the sheer volume of material. Higher Ground shot about 14TB of RED footage, which translates into nearly 100 hours of raw material. Fortunately, the story progressed in a linear fashion through the three periods of Corinne’s life, with no parallel storylines or intercutting between different eras. To help manage the content, assistant editor Peter Saguto organized the ProRes files at the Finder level into folders based on scenes. This made sense for a Final Cut edit, but when it came time to move to Media Composer, most of this structure could be carried into Avid via AMA. As a result, Saguto didn’t have to start his logging from scratch after the change of platforms.
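
Those two figures square with each other. As a rough sanity check of the quoted numbers:

```python
# Sanity check: 14 TB across ~100 hours of material works out to an
# average data rate in line with REDCODE raw recording (roughly 28-36 MB/s).
footage_tb = 14
hours = 100

mb_per_sec = footage_tb * 1_000_000 / (hours * 3600)
print(f"Average data rate: {mb_per_sec:.1f} MB/s")  # ~38.9 MB/s
```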

In the end, the post production workflow proved to be very viable. Newmark said, “When we started this, a lot of the advice we received ended with ‘good luck – no one has ever done this before.’ I was impressed with the stability of the Avid system, compared with the Final Cut system that was being used at the same time on the other film going through BCDF.” In the future, BCDF intends to handle more films on the Avid system. Newmark continued, “We always want to let the decision be made by the cinematographers and editors whenever possible. We own RED camera packages, but we’ve also shot films with ARRI ALEXA and 35mm film depending on what’s the right approach for that film. I really think Avid is the best tool for feature film editing and I’m glad this experience worked so well. Of course, now when we have a RED show that we know will be cut on Media Composer, we transcode the RED media to DNxHD. Nevertheless, going ProRes on Higher Ground proved to be far more seamless than I would have expected.”

In its first year, BCDF Pictures produced four films: Higher Ground; Peace, Love & Misunderstanding; The Last Keepers (formerly known as The Art of Love) and Rhymes with Bananas. They are currently in post production on Predisposed and Liberal Arts and in production on Bachelorette.

Written for DV Magazine (NewBay Media LLC)

©2011 Oliver Peters