The FCP X – RED – Resolve Dance


I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, which made it a great test of the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that's done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day's media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn't have a RED Rocket accelerator card installed either, as I've seen conflicts between FCP X and RED transcodes when that card is present. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled, which continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X's synchronized clips, I had them alter each master clip using the "open in timeline" command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, and this info was also added to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor's reports.

Transcodes

Since the post schedule wasn't super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and optimized/original media is seamless, and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X's spatial conform downscales the 4K media, but when you toggle to proxy, it has to make the corresponding adjustments for media that is now half-sized. Likewise, any blow-up or reframing you do has to match in both modes.

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system. Proxies for fast and efficient editing. Original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there's no real reason to do this for offline editing files. That can be deferred until it's time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to a bad RED drive or possibly excessive humidity at that location. Whatever the reason, the result was a set of corrupt RED clips. We didn't initially realize this in FCP X and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution, debayered 1080p ProRes 4444 files. We considered these the new camera masters for those clips. Even then, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc., and set the event browser to "hide rejected". Next I review the footage based on script notes, looking at the "circle takes" first, plus picking a few alternates if I have a different opinion. I mark these as favorites, selecting the whole take and not just a portion, since I want the entire take available while editing.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this back to my initial "open in timeline" syncing, which I'll explain in a bit. Anyway, my focus in Resolve was only grading, so audio wasn't important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
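The resizing math behind this is straightforward to sketch. The following is an illustrative calculation only, using round UHD/HD numbers rather than the exact RED frame size, and `source_crop` is a hypothetical helper, not anything from FCP X or Resolve:

```python
def source_crop(src_w, src_h, tl_w, tl_h, blowup):
    """Return the window of the original source that stays visible
    after a 'fit' downscale plus an editor's blow-up."""
    fit = tl_w / src_w              # spatial conform "fit" scale
    eff = fit * blowup              # effective scale after the blow-up
    return tl_w / eff, tl_h / eff   # visible region of the source

# A 120% blow-up of a UHD source in a 1080p timeline corresponds to
# a 3200 x 1800 window of the 4K original -- still larger than
# 1080p, so Resolve can deliver the framing without upscaling.
w, h = source_crop(3840, 2160, 1920, 1080, 1.2)
```

Because the visible window (3200 pixels wide) is still wider than the 1920-pixel timeline, the shot is a downscale rather than a blow-up at the finishing stage.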

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that's the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the five or six layers that made up this composite underneath the primary storyline. All fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7, and audio was equally shifted in the other direction. This means a lot of vertical timeline scrolling, since Resolve's smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I'll do most of my primary grading. After that, I'll add nodes for selective color adjustments, masks, vignettes and so on. Resolve's playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren't running the fastest drives or GPU cards, nor using a RED Rocket card.

To output the result, I switched over to Resolve's Deliver tab and selected the FCP X easy set-up. Select a handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn't take long for the 130 clips that made up this short. The resulting media was 1080p ProRes HQ with an additional three seconds of handles per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to "fit". That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to "none", the render bar over it went away, but the clip blew up much larger, as if it was trying to show a native 4K image at 1:1 – except that this was now 1080p media and NOT 4K. Apparently this resizing metadata is incorrectly carried in the FCPXML file, and there doesn't appear to be any way to correct it. The workaround is to simply let it render, which didn't seem to hurt the image quality as far as I could tell.

Audio

Now for an explanation of the audio issue. FCP X master clips are NOT like master clips in other NLEs, including FCP 7. X's master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the "open in timeline" function, so you have to be careful. That appears to be the root of the audio problem in the XML translation. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools rig, you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I've done this tons of times before, but for whatever reason on this project, the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This produced an accurate translation, except that the Resolve export collapsed all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point, I have to say that a proper OMF export from FCP X-edited material is no longer an option – or unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That's the approach I took. To create an AAF, export an FCPXML from the project file and then, using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached – when their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a factor of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.
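The area-versus-resolution distinction is simple arithmetic. A quick sketch using the consumer frame sizes quoted above:

```python
# Frame dimensions: consumer 4K (UHD) vs. 1080p HD
uhd, hd = (3840, 2160), (1920, 1080)

# Pixel count grows with the frame area ...
area_ratio = (uhd[0] * uhd[1]) / (hd[0] * hd[1])   # 4x the pixels

# ... but measured resolution tracks the vertical dimension,
# so the theoretical gain in resolving power is only half that
linear_ratio = uhd[1] / hd[1]                       # 2x
```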

This brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs like Adobe Premiere Pro CC and Apple Final Cut Pro X make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately, it's an experience that doesn't extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs, and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that's not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline, without each unit along the way adding its own color interpretation. It also keeps quality at its highest by avoiding further decompression/recompression cycles, as well as the variability of different debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here's a quick example. I took a short RED clip that was a little over three minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted it to these formats, with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It's worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
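Those per-frame figures can be checked with back-of-the-envelope math. The sketch below assumes decimal megabytes and ignores file header overhead:

```python
def dpx10_frame_bytes(w, h):
    # 10-bit DPX packs three 10-bit channels into one 32-bit word
    # per pixel, i.e. 4 bytes per pixel
    return w * h * 4

def tiff8_frame_bytes(w, h):
    # 8-bit TIFF stores three channels at one byte each
    return w * h * 3

MB = 1_000_000
dpx = dpx10_frame_bytes(4096, 2304) / MB    # ~37.7 MB per frame
tiff = tiff8_frame_bytes(4096, 2304) / MB   # ~28.3 MB per frame
```

Rounded, those match the 38MB and 28MB figures quoted above.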

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is 4x the frame area, so using the ProRes HQ example above, 30x that three-minute clip approximates a typical feature. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in camera-original raw form. If this same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all as 24p media. As we start talking about 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.
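The scaling here is easy to model. A sketch using the ratios from the three-minute test clip above, with the assumption that those ratios hold roughly constant across a whole shoot:

```python
# Compression ratio observed in the test clip (7GB raw -> 27GB ProRes 4444)
raw_gb, prores4444_gb = 7, 27
ratio = prores4444_gb / raw_gb                        # ~3.9x growth

# A 5-10TB raw shoot balloons accordingly when transcoded
extra_tb = [round(tb * ratio, 1) for tb in (5, 10)]   # roughly 19-39 TB

# Frame rate scales storage linearly: 60p vs. 24p
fps_factor = 60 / 24                                  # 2.5x
```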

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near-ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
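Working backwards from those record-time figures gives a feel for the data rates involved. This is an illustrative calculation only, assuming decimal gigabytes and a constant data rate:

```python
capacity_gb = 2 * 512     # two 512GB SSDs in the Odyssey 7Q
minutes = 62              # quoted record time for Canon 4K raw at 24fps

# Sustained write rate implied by the quoted figures
rate_mb_s = capacity_gb * 1000 / (minutes * 60)   # ~275 MB/s

# Offloading a full ~1TB load at a typical on-set transfer speed
# (say 100 MB/s) is correspondingly slow
offload_minutes = capacity_gb * 1000 / 100 / 60   # ~171 minutes
```

That nearly-three-hour offload per terabyte is exactly why copying and reusing SSDs mid-shoot is painful.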

Naturally there are ways to make 4K post efficient and not as painful as it might otherwise be. But it requires a commitment to hardware resources. It's not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That's why you still see Autodesk Smokes, Quantel Rio Pablos and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

The Hobbit

Peter Jackson's The Hobbit: An Unexpected Journey was one of the most anticipated films of 2012. It broke new technological boundaries and presented many creative challenges to its editor. After working as a television editor, Jabez Olssen started his own odyssey with Jackson in 2000 as an assistant editor and operator on The Lord of the Rings trilogy. After assisting again on King Kong, he next cut Jackson's The Lovely Bones as the first feature film on which he was the sole editor. The director tapped Olssen again for The Hobbit trilogy, where, unlike the Rings trilogy, he will be the sole editor on all three films.

Much like the Rings films, all production for the three Hobbit films was shot in a single eighteen-month stretch. Jackson employed as many as 60 RED Digital Cinema EPIC cameras rigged for stereoscopic acquisition at 48fps – double the standard rate of traditional feature photography. Olssen was editing the first film in parallel with the principal photography phase. He had a very tight schedule that only allowed about five months after the production wrapped to lock the cut and get the film ready for release.

To get The Hobbit out on such an aggressive schedule, Olssen leaned hard on a post production infrastructure built around Avid's technology, including 13 Media Composers (10 with Nitris DX hardware) and an ISIS 7000 with 128TB of storage. Peter Jackson's production facilities are located in Wellington, New Zealand, where active fibre channel connections tie Stone Street Studio, Weta Digital, Park Road Post Production and the cutting rooms to the Avid ISIS storage. The three films combined total 2,200 hours (1,100 x two eyes) of footage, the equivalent of 24 million feet of film. In addition, an Apace active backup solution with 72TB of storage was also installed, which could immediately take over if the ISIS failed.

The editorial team – headed up by first assistant editor Dan Best – consisted of eight assistant editors, including three visual effects editors. According to Olssen, "We mimicked a pipeline similar to a film project. Think of the RED camera .r3d media files as a digital negative. Peter's facility, Park Road Post Production, functioned as the digital lab. They took the RED media from the set and generated one-light, color-corrected dailies for the editors. 24fps 2D DNxHD36 files were created by dropping every second frame from the files of one 'eye' of a stereo recording. We used 24fps timecode, with the difference between the two 48fps frames being a period instead of a colon. For example, frame A would be 11.22.21.13 and frame B would be 11:22:21:13. This was a very natural solution for editing and a lot like working with single-field media files on interlaced television projects. The DNxHD files were then delivered to the assistant editors, who synced, subclipped and organized clips into the Avid projects. Since we were all on ISIS shared storage, once they were done, I could access the bins and the footage was ready to edit, even if I were on set. For me, working with RED files was no different than a standard film production."
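The A/B timecode convention Olssen describes can be sketched as a small function. This is a hypothetical helper for illustration only; the actual Park Road tooling isn't documented here:

```python
def ab_timecode(frame48):
    """Map a 48fps frame count to the 24fps A/B notation described
    above: both frames of a 48fps pair share one 24fps timecode
    value, and the separator marks the A (period) or B (colon) frame."""
    frame24, b_frame = divmod(frame48, 2)
    sep = ':' if b_frame else '.'
    secs, ff = divmod(frame24, 24)
    hh, rem = divmod(secs, 3600)
    mm, ss = divmod(rem, 60)
    return sep.join(f"{v:02d}" for v in (hh, mm, ss, ff))

# ab_timecode(0) -> "00.00.00.00"  (A frame)
# ab_timecode(1) -> "00:00:00:00"  (B frame of the same pair)
```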

Olssen continued, "A big change for the team since the Rings movies is that the Avid systems have become more portable. Plus the fibre channel connection to ISIS allows us to run much longer distances. This enabled me to have a mobile cart on the set with a portable Media Composer system connected to the ISIS storage in the main editing building. In addition, we also had a camper van outfitted as a more comfortable mobile editing room with its own Media Composer; we called it the EMC – 'Editorial Mobile Command'. So, I could cut on set while Peter was shooting, using the cart and, as needed, use the EMC for some quick screening of edits during a break in production. I was also on location around New Zealand for three months and during that time I cut on a laptop with mirrored media on external drives."

The main editing room was set up with a full-blown Nitris DX system connected to a 103” plasma screen for Jackson. The original plan was to cut in 2D and then periodically consolidate scenes to conform a stereo version for screening in the Media Composer suite. Instead they took a different approach. Olssen explained, “We didn’t have enough storage to have all three films’ worth of footage loaded as stereo media, but Peter was comfortable cutting the film in 2D. This was equally important, since more theaters displayed this version of the film. Every few weeks, Park Road Post Production would conform a 48fps stereo version so we could screen the cut. They used an SGO Mistika system for the DI, because it could handle the frame rate and had very good stereo adjustment tools. Although you often have to tweak the cuts after you see the film in a stereo screening, I found we had to do far less of that than I’d expected. We were cognizant of stereo-related concerns during editing. It also helped that we could judge a cut straight from the Avid on the 103” plasma, instead of relying on a small TV screen.”

The editorial team was working with what amounted to 24fps high-definition proxy files for the stereo 48fps RED .r3d camera masters. Edit decision lists were shared with Weta Digital and Park Road Post Production for visual effects, conform and digital intermediate color correction/finishing at a 2K resolution. Based on these EDLs, each unit would retrieve the specific footage needed from the camera masters, which had been archived onto LTO data tape.

The Hobbit trilogy is a heavy visual effects production, which had Olssen tapping into the Media Composer toolkit. Olssen said, "We started with a lot of low resolution, pre-visualization animations as placeholders for the effects shots. As the real effects started coming in, we would replace the pre-vis footage with the correct effects shots. With the Gollum scenes we were lucky enough to have Andy Serkis in the actual live action footage from set, so it was easy to visualize how the scene would look. But other CG characters, like Azog, were captured separately on a performance capture stage. That meant we had to layer separately-shot material into a single shot. We were cutting vertically in the timeline, as well as horizontally. In the early stages, many of the scenes were a patchwork of live action and pre-vis, so I used PIP effects to overlay elements to determine the scene timing. Naturally, I had to do a lot of temp green-screen composites. The dwarves are full-size actors and for many of the scenes, we had to scale them down and reposition them in the shot so we could see how the shots were coming together."

As with most feature film editors, Jabez Olssen likes to fill out his cut with temporary sound effects and music, so that in-progress screenings feel like a complete film. He continued, “We were lucky to use some of Howard Shore’s music from the Rings films for character themes that tie The Hobbit back into The Lord of the Rings. He wrote some nice ‘Hobbity’ music for those. We couldn’t use too much of it, though, because it was so familiar to us! The sound department at Park Road Post Production uses Avid Pro Tools systems. They also have a Media Composer connected to the same ISIS storage, which enabled the sound editors to screen the cut there. From it, they generated QuickTime files for picture reference and audio files so the sound editors could work locally on their own Pro Tools workstations.”

Audiences are looking forward to the next two films in the series, which means the adventure continues for Jabez Olssen. On such a long-term production many editors would be reluctant to update software, but not this time. Olssen concluded, "I actually like to upgrade, because I look forward to the new features. Although, I usually wait a few weeks until everyone knows it's safe. We ended up on version 6.0 at the end of the first film and are on 6.5 now. Other nonlinear editing software packages are more designed for one-man bands, but Media Composer is really the only software that works for a huge visual effects film. You can't underestimate how valuable it is to have all of the assistant editors be able to open the same projects and bins. The stability and reliability is the best. It means that we can deliver challenging films like The Hobbit trilogy on a tight post production schedule and know the system won't let us down."

Originally written for Avid Technology, Inc.

©2013 Oliver Peters

Offline to online with 4K


The 4K buzz seems to be steam-rolling the industry just like stereo3D before it. It's too early to tell whether it will be an immediate issue for editors or not, since 4K delivery requirements are few and far between. Nevertheless, camera and TV-set manufacturers are building important parts of the pipeline. RED Digital Cinema is leading the way with a post workflow that's both proven and relatively accessible on any budget. A number of NLEs support editing and effects in 4K, including Avid DS, Autodesk Smoke, Adobe Premiere Pro, Apple Final Cut Pro X, Grass Valley EDIUS and Sony Vegas Pro.

Although many of these support native cutting with RED 4K media, I'm still a strong believer in the traditional offline-to-online editing workflow. In this post I will briefly outline how to use Avid Media Composer and Apple FCP X for a cost-effective 4K post pipeline. One can certainly start and finish a RED-originated project in FCP X or Premiere Pro for that matter, but Media Composer is still the preferred creative tool for many editing pros. Likewise, FCP X is a viable finishing tool. I realize that statement will raise a few eyebrows, but hear me out. Video passing through Final Cut is very pristine, it supports the various flavors of 2K and 4K formats and there's a huge and developing ecosystem of highly-inventive effects and transitions. This combination is a great opportunity to think outside of the box.

Offline editing with Avid Media Composer

Avid has supported native RED files for several versions, but Media Composer is not resolution-independent. This means RED's 4K (or 5K) images are downsampled to 1080p and reformatted (cropped or letterboxed) to fit the 16:9 frame. When you shoot with a RED camera, you should ideally record in one of its 4K 16:9 sizes. The native .r3d files can be brought into Media Composer using the "Link to AMA File(s)" function. Although you can edit directly with AMA-linked files, the preferred method is to use this as a first step. That means you should use AMA to cull your footage down to the selected takes and then transcode what remains when you start to fine-tune your cut.

Avid's media creation settings are the place to adjust the RED debayer parameters. Media Composer supports the RED Rocket card for accelerated rendering, but without it, Media Composer can still provide reasonable speed in software-only transcoding. Set the debayer quality to 1/4 or 1/8, and transcoding 4K clips to Avid DNxHD36 for offline editing will be closer to real-time on a fast machine, like an 8-core Mac Pro. This resolution is adequate for making your creative decisions.

When the cut is locked, export an AAF file for the edited sequence. Media should be linked (not embedded) and the AAF Edit Protocol setting should be enabled. In this workflow, I will assume that audio post is being handled by an audio editor/mixer running a DAW, such as Pro Tools, so I'll skip any discussion of audio. That would be exported using standard AAF or OMF workflows for audio post. Note that all effects should be removed from your sequence before generating the AAF file, since they won't be translated in the next steps. This includes any nested clips, collapsed tracks and speed ramps, which are notorious culprits in any timeline translation.

Color grading with DaVinci Resolve

Blackmagic Design’s DaVinci Resolve 9 is our next step. You’ll need the full, paid version (software-only) for bigger-than-HD output. After launching Resolve, import the Avid AAF file from Resolve’s conform tab. Make sure you check “link to camera files” so that Resolve connects to the original .r3d media and not the Avid DNxHD transcodes. Resolve will import the sequence, connect to the media and generate a new timeline that matches the sequence exported from Media Composer. Make sure the project is set for the desired 4K format.

Next, open the Resolve project settings and adjust the camera raw values to the proper RED settings. Then make sure the individual clips are set to “project” in their camera settings tab. You can either use the original camera metadata or adjust all clips to a new value in the project settings pane. Once this is done, you are ready to grade the timeline as with any other production. Resolve uses a very good scaling algorithm, so if the RED files were framed with the intent of resizing and repositioning (for example, 5K files that are to be cropped for the ideal framing within a 4K timeline), then it’s best to make that adjustment within the Resolve timeline.

Once you’ve completed the grade, set up the render. Choose the FCP XML easy set-up and alter the output frame size to the 4K format you are using. Start the render job. Resolve 9 renders quite quickly, so even without a RED Rocket card, I found that 4K ProRes HQ or 4444 rendering, using full-resolution debayering, was completed in about a 6:1 ratio to running time on my Mac Pro. When the renders are done, export the FCP XML (for FCP X) from the conform tab. I found I had to use an older version of this new XML format, even though I was running FCP X 10.0.7. It was unable to read the newest version that Resolve had exported.

Online with Apple Final Cut Pro X

The last step is finishing. Import the Resolve-generated XML file, which will in turn create the necessary FCP Event (media linked to the 4K ProRes files rendered from Resolve) and a timeline for the edited sequence. Make sure the sequence (Project) settings match your desired 4K format. Import and sync the stereo or surround audio mix (generated by the audio editor/mixer) and rebuild any effects, titles, transitions and fast/slo-mo speed effects. Once everything is completed, use FCP X’s share menu to export your deliverables.

©2013 Oliver Peters

A RED post production workflow

When you work with RED Digital Cinema’s cameras, part of the post production workflow is a “processing” step, not unlike the lab and transfer phase of film post. The RED One, EPIC and SCARLET cameras record raw images using Bayer-pattern light filtering to the sensor. The resulting sensor data is compressed with the proprietary REDCODE codec and stored to CF cards or hard drives. In post, these files have to be decompressed and converted into RGB picture information, much the same as if you had shot camera raw still photography with a Nikon or Canon DSLR.

RED has been pushing the concept of working natively with the .r3d media (skipping any interim conversion steps) and has made an SDK (software development kit) available to NLE manufacturers. This permits REDCODE raw images to be converted and adjusted right inside the editing interface. Although each vendor’s implementation varies, the raw module enables control over the metadata for color temperature, tint, color space, gamma space, ISO and other settings. You also have access to the various quality increments available to “de-Bayer” the image (data-to-RGB interpolation). The downside to working natively is that even with a fast machine, performance can be sluggish. This is magnified when dealing with a large quantity of footage, such as a feature film or other long-form projects. The native clips in your editing project are encumbered by the overhead of 4K compressed camera files.
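The data-to-RGB interpolation step is easy to illustrate with a toy example. The sketch below is an assumption for illustration only – RED’s actual SDK debayer is far more sophisticated – but it shows the basic idea: each pixel records only one color, and the missing two channels are interpolated from same-color neighbors.

```python
import numpy as np

def _conv3(img, k):
    """'Same'-size 3x3 convolution with zero padding (avoids a SciPy dependency)."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(raw):
    """Naive bilinear de-Bayer of an RGGB mosaic: keep each channel's real
    photosite samples and fill the gaps with a weighted average of
    same-channel neighbors."""
    h, w = raw.shape
    raw = raw.astype(np.float64)
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # red photosites
    mask[0::2, 1::2, 1] = True   # green (on red rows)
    mask[1::2, 0::2, 1] = True   # green (on blue rows)
    mask[1::2, 1::2, 2] = True   # blue photosites
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    rgb = np.zeros((h, w, 3))
    for ch in range(3):
        samples = np.where(mask[:, :, ch], raw, 0.0)
        weights = mask[:, :, ch].astype(np.float64)
        num, den = _conv3(samples, kernel), _conv3(weights, kernel)
        interp = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        rgb[:, :, ch] = np.where(mask[:, :, ch], raw, interp)
    return rgb
```

Doing this interpolation at 4K or 5K for every frame, on top of wavelet decompression, is exactly the overhead that makes native editing sluggish.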

For these and other reasons, I still advocate an offline-online procedure, rather than native editing, when working on complex RED projects. You could convert to a high-quality format like ProRes 4444 or 10-bit uncompressed at the beginning and never touch the RED files again, but the following workflow is one designed to give you the best of all worlds – easy editing, plus grading to get the best out of the raw files. There are many possible RED workflows, but I’ve used a variation of these steps quite successfully on a recent indie feature film – cut on Final Cut Pro 7 and graded in Apple Color. My intent here is to describe an easy workflow for projects mastering at 2K and HD sizes, which are destined for film festivals, TV and Blu-ray.

Conversion for offline editing

When you receive media from the studio or location, start by backing up and verifying all files. Make sure your camera-original media is safe. Then move on to RED’s REDCINE-X PRO. There is no need yet to change color metadata. Simply accept what was shot and set up a batch to convert the .r3d files into editing media, such as Avid DNxHD36 or Apple ProRes LT or ProRes Proxy. 1920×1080 or 1280×720 are the preferred sizes for lightweight editing media.

With a RED ROCKET accelerator card installed, conversion time will be about real-time. Without it, adjust the de-Bayer resolution settings to 1/2, 1/4 or 1/8 for faster rendering. The quality of these dailies only needs to be sufficient for making effective editing decisions. The advantage to using REDCINE-X PRO and not the internal conversion tools of the NLE (like FCP 7’s Log and Transfer) is faster conversion, which can be done on any machine and isn’t dependent on the specific requirements of a given editing application.

Creative (offline) editing

Import the media into your NLE. In the case of Final Cut Pro 7, simply drag the converted QuickTime files into a bin. Import any double-system audio and merge the clips. Edit until the picture cut is locked. Break the final sequence into reels of approximately ten minutes in length each. Export audio as OMF files for your sound designer/mixer. Duplicate the reels as video-only timelines, remove any effects, extend the length of shots with dissolves and restore all shots with speed changes to full length. Export an XML file for each of these reels.

REDCINE-X PRO primary grading pass

This is a two-step color grading process: Step 1 in REDCINE-X PRO and Step 2 in Apple Color. The advantage of REDCINE-X PRO is direct access to the raw files without the abstraction layer of an SDK. By adjusting the source settings panel within Color, Resolve, Media Composer, Premiere Pro and others, you are adjusting the raw controls, but any further color adjustments (like curves and lift/gamma/gain “color wheels”) are made downstream of the internally-converted RGB image. This is functionally no different than rendering a high-quality, raw-adjusted RGB file from one application and then doing further corrections to it in another. That’s the philosophy here.

Import the XML file for each reel as a timeline into REDCINE-X PRO. This conforms the .r3d files into an edited sequence corresponding to your cut in FCP. Adjust the raw settings for all shots in the timeline. First, set color space to RedColor2. (You may temporarily set gamma space to RedGamma2 and increase saturation to better see the effect of your adjustments.) Remember, this is a primary grading pass, so match all shots and get the most consistent look to the entire timeline.

You can definitely do very extensive color correction in REDCINE-X PRO and never need another grading tool. That’s not the process here, though, so a neutral, plain look tends to be better for the next stage. The point is to create an evenly matched timeline that is within boundaries for more subjective and aggressive grading once you move to Color. When you are ready to export, return saturation to normal, set color/gamma space to RedColor2/RedLogFilm and the de-Bayer quality to full resolution. Export (render) the timeline using Apple ProRes 4444 at either a 2K or 1920×1080 size. Make sure the export preset is configured to create unique file names and an accompanying FCP XML. Repeat this process for each reel.

Sending to Color and FCP completion

Import the REDCINE-X PRO-generated XML for each reel into Final Cut. Reconnect media if needed. Remove any filters that REDCINE-X PRO may have inadvertently added. Double-check the sequence against your rough cut to verify accuracy and then send the new timeline to Color. Each reel becomes a separate Color project file. Grade for your desired look and render the final result as ProRes HQ or ProRes 4444. Lastly, send the project back to Final Cut Pro to complete the roundtrip.

Once the graded timelines are back in FCP, rebuild any visual effects, speed effects and transitions, including dissolves. Combine the video-only sequences with the mixed audio and add any finishing touches necessary to complete your master file and deliverables.

Written for DV Magazine (NewBay Media LLC)

©2012 Oliver Peters

The Girl with the Dragon Tattoo

The director who brought us Se7en has tapped into the dark side again with the Christmas-time release of The Girl with the Dragon Tattoo. Hot off the success of The Social Network, director David Fincher dove straight into this cinematic adaptation of Swedish writer Stieg Larsson’s worldwide publishing phenomenon. Even though a Swedish film from the book had been released in 2009, Fincher took on the project, bringing his own special touch.

The Girl with the Dragon Tattoo is part of Larsson’s Millennium trilogy. The plot revolves around the disappearance of Harriet Vanger, a member of one of Sweden’s wealthiest families, forty years earlier. After these many years her uncle hires Mikael Blomkvist (Daniel Craig), a disgraced financial reporter, to investigate the disappearance. Blomkvist teams with punk computer hacker Lisbeth Salander (Rooney Mara). Together they start to unravel the truth that links Harriet’s disappearance to a string of grotesque murders that happened forty years before.

For this production, Fincher once again assembled the production and post team that proved successful on The Social Network, including director of photography Jeff Cronenweth, editors Kirk Baxter and Angus Wall and the music scoring team of Trent Reznor and Atticus Ross. Production started in August of last year and proceeded for 167 shooting days on location and in studios in Sweden and Los Angeles.

Like the previous film, The Girl with the Dragon Tattoo was shot completely with RED cameras – about three-quarters using the RED One with the M-X sensor and the remaining quarter with the RED EPIC, which was finally being released around that time. Since the EPIC cameras were in their very early stages, the decision was made to not use them on location in Sweden, because of the extreme cold. After the first phase in Sweden, the crew moved to soundstages in Los Angeles and continued with the RED Ones. The production started using the EPIC cameras during their second phase of photography in Sweden and during reshoots back in Los Angeles.

The editing team

I recently spoke with Kirk Baxter and Angus Wall, who as a team have cut Fincher’s last three films, earning them a best editing Oscar for The Social Network as well as a nomination for The Curious Case of Benjamin Button. I was curious about tackling a film that had already been done a couple of years before. Kirk Baxter replied, “We were really reacting to David’s material above all, so the fact that there was another film about the same book didn’t really affect me. I hadn’t seen the film before and I purposefully waited until we were about halfway through the fine cut, before I sat down and watched the film. Then it was interesting to see how they had approached certain story elements, but only as a curiosity.”

As in the past, both Wall and Baxter split up editorial duties based on the workload at any given time. Baxter started cutting at the beginning of production, with Wall joining the project in April of this year. Baxter explained, “I was cutting during the production to keep up with camera, but sometimes priorities would shift. For example, if an actor had to leave the country or a set needed to be struck, David would need to see a cut quickly to be sure that he had the coverage he needed. So in these cases, we’d jump on those scenes to make sure he knew they were OK.” Wall continued, “This was a very labor intensive film. David shot 95% to 98% of everything with two cameras. On The Social Network they recorded 324 hours of footage and selected 281 hours for the edit. On Dragon Tattoo that count went up to 483 hours recorded and 443 hours selected!”

The Girl with the Dragon Tattoo has many invisible effects. According to Wall, “At last count there were over 1,000 visual effects shots throughout the film. Most of these are shot stabilizations or visual enhancements, such as adding matte painting elements, lens flares or re-creating split screens from the offline.  Snow and other seasonal elements were added to a number of shots, helping the overall tone, as well as reinforcing the chronology of the film. I think viewers will be hard pressed to tell which shots are real and which are enhanced.” Baxter added, “In a lot of cases the exterior locations were shot in Sweden and elaborate sets were built on sound stages in LA for the interiors. There’s one sequence that takes place in a cabin. All of the exteriors seen through the windows and doors are green screen shots. And those were bright green! I’ve been seeing the composited shots come back and it’s amazing how perfect they are. The door is opened and there’s a bright exterior there now.”

A winning workflow solution

The key to efficient post on a RED project is the workflow. Assistant editor Tyler Nelson explained the process to me. “We used essentially the same procedures as for The Social Network. Of course, we learned things on that, which we refined for this film. Since they used both the RED M-X and the EPIC cameras, there were two different frame sizes to deal with – 4352 x 2176 for the RED One and 5120 x 2560 for the EPIC. Plus each of these cameras uses a different color science to process the data from the sensor. The file handling was done through Datalab, a company that Angus owns. A custom piece of software called Wrangler automates the handling of the RED files. It takes care of copying, verifying and archiving the .r3d files to LTO and transcoding the media for the editors, as well as for review on the secured PIX system. The larger RED files were scaled down to 1920 x 1080 ProRes LT with a center-cut extraction for the editors, as well as 720p H.264 for PIX. The ‘look’ was established on set, so none of the RED color metadata was changed during this process.”
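The copy-and-verify stage that Wrangler automates can be sketched in a few lines. This is a hypothetical illustration – Wrangler’s internals aren’t public, and `sha256_of` and `verify_copy` are names invented here – but checksumming every camera file against its copy is the core of any trustworthy dailies pipeline:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so multi-gigabyte .r3d files
    never have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(src_dir, dst_dir, suffix=".R3D"):
    """Compare the checksum of every camera file under src_dir against its
    copy in dst_dir. Returns a list of files that failed verification."""
    failures = []
    for src in sorted(Path(src_dir).rglob(f"*{suffix}")):
        dst = Path(dst_dir) / src.relative_to(src_dir)
        if not dst.exists() or sha256_of(src) != sha256_of(dst):
            failures.append(str(src))
    return failures
```

Only after a copy verifies cleanly would an automated system proceed to LTO archiving and proxy transcodes.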

“When the cut was locked, I used an EDL [edit decision list] and my own database to conform the .r3d files back into reels of conformed DPX image sequences. This part was done in After Effects, which also allowed me to reposition and stabilize shots as needed. Most of the repositioning was generally a north-south adjustment to move a shot up or down for better head room. The final output frame size was 3600 x 1500 pixels. Since I was using After Effects, I could make any last minute fixes if needed. For instance, I saw one shot that had a monitor reflection within the shot. It was easy to quickly paint that out in After Effects. The RED files were set to the RedColor2 / RedLogFilm color space and gamma settings. Then I rendered out extracted DPX image sequences of the edited reels to be sent to Light Iron Digital, which again handled the DI on this film.”
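An EDL-driven conform boils down to parsing CMX3600 event lines into reel names and source frame ranges. A minimal sketch – the regex and `parse_edl` helper are illustrative, not Nelson’s actual database code:

```python
import re

# A CMX3600 event line looks like:
# 001  A001C002 V     C        01:00:10:00 01:00:20:00 01:00:00:00 01:00:10:00
# (event, reel, channel, transition, src in/out, record in/out)
EVENT = re.compile(
    r"^(\d{3})\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def tc_to_frames(tc, fps=24):
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def parse_edl(text, fps=24):
    """Return (reel, source-in, source-out) in frames for each cut event --
    enough information to pull frame ranges from the .r3d or DPX sources."""
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            events.append((m.group(2),
                           tc_to_frames(m.group(5), fps),
                           tc_to_frames(m.group(6), fps)))
    return events
```

A real conform also has to map reel names back to camera files and handle dissolves, but the frame math is this simple at its core.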

On the musical trail

The Girl with the Dragon Tattoo leans heavily on a score by Trent Reznor and Atticus Ross. An early peek came from a teaser cut for the film by Kirk Baxter to a driving Reznor cover of Led Zeppelin’s “Immigrant Song”. Unlike the typical editor and composer interaction – where library temp tracks are used for the edit and then a new score is done at the end of the line – Reznor and Ross were feeding tracks to the editors during the edit.

Baxter explained, “At first Trent and Atticus score to the story rather than to specific scenes. The main difference with their approach to scoring a picture is that they first provide us with a library of original score, removing the need for needledrops. It’s then a collaborative process of finding homes for the tracks. Ren Klyce [sound designer/re-recording mixer] also plays an integral part in this.” Wall added, “David initially reviewed the tracks and made suggestions as to which scenes they might work best in.  We started with these suggestions and refined placement as the edit evolved.  The huge benefit of working this way was that we had a very refined temp score very early in the process.” Baxter concluded, “Then Trent’s and Atticus’s second phase is scoring to picture. They re-sculpt their existing tracks to perfectly fit picture and the needs of the movie. Trent’s got a great work ethic. He’s very precise and a real perfectionist.”

The cutting experience

I definitely enjoyed the Oscar-winning treatment these two editors applied to intercutting dialogue scenes in The Social Network, but Baxter was quick to interject, “I’d have to say Dragon Tattoo was more complicated than The Social Network. It was a more complex narrative, so there were more opportunities to play with scene order. In the first act you are following the two main characters on separate paths. We played with how their scenes were intercut so that their stories were as interconnected as possible, giving promise to the audience of their inevitable union.”

“The first assembly was about three hours long. That hovered at around 2:50 for a while and got a bit longer as additional material was shot, but then shorter again as we trimmed. Eventually some scenes were lost to bring the locked cut in at two-and-a-half hours. Even though scenes were lost, those still have to be fine cut. You don’t know what can be lost unless you finish everything out and consider the film in its full form. A lot of work was put into the back half of the film to speed it up. Most of those changes were a matter of tightening the pace by losing the lead-in and lead-outs of scenes and often losing some detail within the scenes.”

Wall expanded on this, “Fans of any popular book series want a filmed adaptation to be faithful to the original story. In this case, we’re really dealing with a ‘five act’ structure. [laughs]. Obviously, not everything in the book can make it into the movie. Some of the investigative dead ends have to be excised, but you can’t remove every red herring.  So it was a challenging film to cut. Not only was it very labor intensive, with many disturbing scenes to put together, it was also a tricky storytelling exercise. But when you’re done and it’s all put together, it’s very rewarding to see. The teaser calls it the ‘feel-bad film of Christmas’ but it’s a really engaging story about these characters’ human experience. We hope audiences will find it entertaining.”

Some additional coverage from Post magazine.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where the digital scan of a film frame at 4K is considered full resolution and a 2K scan to be half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080) while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference padded with black pixels.
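The “container” idea is easy to demonstrate with a little arithmetic: fit the film aspect inside the DCP raster and pad the rest with black. The sketch below is a generic best-fit, not the official DCI table (the spec fixes the exact image sizes by convention):

```python
def fit_in_container(aspect, cont_w=4096, cont_h=2160):
    """Fit an image of the given aspect ratio inside a DCP-style container,
    padding the remainder with black. Returns the active (width, height)."""
    if aspect >= cont_w / cont_h:
        # Wider than the container: full width, black bars top and bottom.
        return cont_w, round(cont_w / aspect)
    # Narrower than the container: full height, black bars left and right.
    return round(cont_h * aspect), cont_h
```

For 1.85:1 this yields 3996 x 2160, which is exactly the DCI “flat” image size; for 2.39:1 it computes 4096 x 1714, a close approximation of the conventional 4096 x 1716 “scope” raster.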

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.
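A rough feel for photosite size comes from dividing sensor width by horizontal pixel count. The numbers below are approximate published specs, used here purely for illustration:

```python
def photosite_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate photosite pitch in microns: sensor width / pixel count.
    Ignores gaps between photosites, so treat the result as a ballpark."""
    return sensor_width_mm / horizontal_pixels * 1000

# Approximate sensor specs (assumed values for illustration):
pitch_7d = photosite_pitch_um(22.3, 5184)    # Canon 7D: ~4.3 um pitch
pitch_epic = photosite_pitch_um(27.7, 5120)  # RED EPIC (Mysterium-X): ~5.4 um pitch
```

Two similarly sized Super 35-class sensors can thus carry meaningfully different photosite pitches, which is the trade space between sensitivity and resolution described above.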

The biggest visual attraction to large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture, focal length and subject distance. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. Because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras a preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.
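The optics here can be quantified with the standard hyperfocal and depth-of-field formulas. A sketch, assuming a circle of confusion of about 0.025 mm for a Super 35-sized frame (an assumed convention, not a universal constant):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: focus here and everything from half this
    distance to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.025):
    """Near and far limits of acceptable focus for a given subject distance.
    coc_mm (circle of confusion) is how sensor size enters the math."""
    H = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = H * subject_mm / (H + (subject_mm - focal_mm))
    if subject_mm - focal_mm >= H:
        return near, float("inf")  # beyond hyperfocal, the far limit is infinity
    far = H * subject_mm / (H - (subject_mm - focal_mm))
    return near, far
```

At 50mm and f/2.8 with a subject at 3 m, the zone of acceptable focus is only about half a meter deep; stop down to f/8 and it widens considerably, which is the aperture/focal length/distance relationship in action.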

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius, shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.
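Some quick arithmetic shows why four links are needed. The active-picture payload of one 2048 x 1080 quadrant already approaches the capacity of a single HD-SDI link (this sketch counts only active 4:2:2 picture data and ignores blanking and ancillary data, so real link rates are higher):

```python
def active_payload_gbps(width, height, fps, bit_depth=10, samples_per_pixel=2):
    """Active-picture payload of a 4:2:2 video signal in Gbps.
    4:2:2 carries Y plus alternating Cb/Cr, i.e. 2 samples per pixel."""
    return width * height * samples_per_pixel * bit_depth * fps / 1e9

quad = active_payload_gbps(2048, 1080, 30)  # one quadrant of a 4096 x 2160 frame
full = active_payload_gbps(4096, 2160, 30)  # the whole 4K frame
```

One quadrant works out to roughly 1.3 Gbps of picture data at 30 fps, so the full 4K frame is around 5.3 Gbps – clearly more than any single HD-SDI connection carries, hence the four-link quadrant approach.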

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters