Mindhunter

The investigation of crime is a film topic with which David Fincher is very familiar. He returns to this genre in the new Netflix series, Mindhunter, which is executive produced by Fincher and Charlize Theron. The series is the story of the FBI’s Behavioral Science Unit and how it became an elite profiling team, known for investigating serial criminals. The TV series is based on the nonfiction book Mind Hunter: Inside the FBI’s Elite Serial Crime Unit, co-written by Mark Olshaker and John Douglas, a former agent in the unit who spent 25 years with the FBI. Agent Douglas interviewed scores of serial killers, including Charles Manson, Ted Bundy, and Ed Gein, who dressed himself in his victims’ skin. The lead character in the series, Holden Ford (played by Jonathan Groff), is based on Douglas. The series takes place in 1979 and centers on two FBI agents who were among the first to interview imprisoned serial killers in order to learn how they think and apply that knowledge to other crimes. Mindhunter is about the origins of modern-day criminal profiling.

As with other Fincher projects, he brought in much of the team that’s been with him through the various feature films, like Gone Girl, The Girl with the Dragon Tattoo, and Zodiac. It has also given a number of that team the opportunity to move up in their careers. I recently spoke with Tyler Nelson, one of the four series editors, who was given the opportunity to move from the assistant chair to that of a primary editor. Nelson explains, “I’ve been working with David Fincher for nearly 11 years, starting with The Curious Case of Benjamin Button. I started on that as an apprentice, but was bumped up to an assistant editor midway through. There was actually another series in the works for HBO called Videosyncrasy, which I was going to edit on. But that didn’t make it to air. So I’m glad that everyone had the faith in me to let me edit on this series. I cut the four episodes directed by Andrew Douglas and Asif Kapadia, while Kirk Baxter [editor on Gone Girl, The Girl with the Dragon Tattoo, The Social Network] cut the four shows that David directed.”

Pushing the technology envelope

The Fincher post operation has a long history of trying new and innovative techniques, including their selection of editing tools. The editors cut this series using Adobe Premiere Pro CC. Nelson and the other editors are no strangers to Premiere Pro, since Baxter had cut Gone Girl with it. Nelson says, “Of course, Kirk and I have been using it for years. One of the editors, Byron Smith, came over from House of Cards, which was being cut on [Apple] Final Cut Pro 7. So that was an easy transition for him. We are all fans of Adobe’s approach to the entertainment industry and were onboard with using it. In fact, we were running on beta software, which gave us the ability to offer feedback to Adobe on features that will hopefully make it into released products and benefit all Premiere users.”

Pushing the envelope is also a factor on the production side. The series was shot with custom versions of the RED Weapon camera. Shots were recorded at 6K resolution, but framed for a 5K extraction, leaving a lot of “padding” around the edges. This allowed room for repositioning and stabilization, which happens a lot on Fincher’s projects. In fact, nearly all of the moving footage is stabilized. All camera footage is processed into EXR image sequences, in addition to ProRes files for “offline” editing. These ProRes files also get a camera LUT applied, so everyone sees a good representation of the intended color correction during the editing process. One change from past projects was to bring color correction in-house. The final grade was handled by Eric Weidt on a FilmLight Baselight X unit, working from the EXR files. The final Netflix deliverables are 4K/HDR masters. Pushing a lot of data through a facility requires robust hardware systems. The editors used 2013 (“trash can”) Mac Pros connected to an Open Drives shared storage system. This high-end storage system was initially developed as part of the Gone Girl workflow and uses storage modules populated entirely with SSDs.
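The article doesn’t give the exact Mindhunter frame dimensions, so here is a quick back-of-the-envelope sketch using representative RED widths (6144 pixels for 6K, 5120 for the 5K extraction) to show roughly how much slack that padding provides:

capture_w, extract_w = 6144, 5120         # assumed 6K capture and 5K extraction widths

padding_total    = capture_w - extract_w  # 1024 px of slack across the width
padding_per_side = padding_total / 2      # 512 px per side with a centered extraction
slack_pct = padding_total / extract_w * 100

print(f"{padding_per_side:.0f} px per side, ~{slack_pct:.0f}% of the delivered width")
# Roughly 10% of extra picture is available for repositioning and stabilization
# before a reframed shot runs out of image.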

The feature film approach

Unlike most TV series, where there’s a definite schedule to deliver a new episode each week, Netflix releases a full season at once, which changes the dynamic of how episodes are handled in post. Nelson continues, “We were able to treat this like one long feature film. In essence, each episode is like a reel of a film. There are 10 episodes and each is 45 minutes to an hour long. We worked it as if it were an eight-and-a-half to nine-hour-long movie.” Skywalker Sound did all the sound post after a cut was locked. Nelson adds, “Most of the time we handed off locked cuts, but sometimes when you hear the cleaned-up sound, it can highlight issues with the edit that you didn’t notice before. In some cases, we were able to go back into the edit and make some minor tweaks to make it flow better.”

As Adobe moves more into the world of dialogue-driven entertainment, a number of developers are coming up with speech-to-text solutions that are compatible with Premiere Pro. This potentially provides editors with a function similar to Avid’s ScriptSync. Would something like this have been beneficial on Mindhunter, a series based on extended interviews? Nelson replies, “I like to work with the application the way it is. I try not to get too dependent on any feature that’s very specific or unique to only one piece of software. I don’t even customize my keyboard settings too much, just so it’s easier to move from one workstation to another. I like to work from sequences, so I don’t need a special layout for the bins or anything like that.”

“On Mindhunter we used the same ‘KEM roll’ system as on the films, which is a process that Kirk Baxter and Angus Wall [editor on Zodiac, The Curious Case of Benjamin Button, The Social Network] prefer to work in,” Nelson continues. “All of the coverage for each scene set-up is broken up into ‘story beats’. In a 10-minute take for an interview, there might be 40 ‘beats’. These are all edited in the order of last take to first take, with any ‘starred’ takes at the head of the sequence. This way you will see all of the coverage, takes, and angles for a ‘beat’ before moving on to the group for the next ‘beat’. As you review the sequence, the really good sections of clips are moved up to video track two in the sequence. Then I create a new sequence organized in story order from these selected clips and start building the scene. At any given time you can go back to the earlier sequences if the director asks to see something different than what’s in your scene cut. This method works with any NLE, so you don’t become locked into one and only one software tool.”
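Nobody on the team is scripting any of this, of course, but the ordering logic Nelson describes can be sketched as a toy example. The take data below is hypothetical; the point is simply the sort order: group coverage by story beat, put starred takes at the head of each group, then run the rest from last take to first.

takes = [
    {"beat": 1, "take": 1, "starred": False},
    {"beat": 1, "take": 2, "starred": True},
    {"beat": 1, "take": 3, "starred": False},
    {"beat": 2, "take": 1, "starred": False},
    {"beat": 2, "take": 2, "starred": False},
]

def kem_roll_order(takes):
    """Beat by beat: starred takes first, remaining takes from last to first."""
    ordered = []
    for beat in sorted({t["beat"] for t in takes}):
        group = [t for t in takes if t["beat"] == beat]
        starred = [t for t in group if t["starred"]]
        rest = sorted((t for t in group if not t["starred"]),
                      key=lambda t: t["take"], reverse=True)
        ordered.extend(starred + rest)
    return ordered

for t in kem_roll_order(takes):
    print(f"beat {t['beat']}  take {t['take']}{' *' if t['starred'] else ''}")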

“Where Adobe’s approach is very helpful to us is with linked After Effects compositions,” explains Nelson. “We do a lot of invisible split screen effects and shot stabilization. Those clips are all put into After Effects comps using Dynamic Link, so that an assistant can go into After Effects and do the work. When it’s done, the completed comp just pops back into the timeline. Then ‘render and replace’ for smooth playback.”

The challenge

Certainly a series like this can be challenging for any editor, but how did Nelson take to it? He answers, “I found every interview scene to be challenging. You have an eight to 10 minute interview that needs to be interesting and compelling. Sometimes it takes two days to just get through looking at the footage for a scene like that. You start with ‘How am I going to do this?’ Somewhere along the line you get to the point where ‘This is totally working.’ And you don’t always know how you got to that point. It takes a long time approaching the footage in different ways until you can flesh it out. I really hope people enjoy the series. These are dramatizations, but real people actually did these terrible things. Certainly that creeps me out, but I really love this show and I hope people will see the craftsmanship that’s gone into Mindhunter and enjoy the series.”

In closing, Nelson offered these additional thoughts. “I got an education each and every day. Many editors don’t get that kind of education until well into a long career. I’ve learned a lot being closer to the creative process. I’ve worked with David Fincher for almost 11 years. You think you are ready to edit, but it’s still a challenge. Many folks don’t get an opportunity like this and I don’t take that lightly. Everything that I’ve learned working with David has given me the tools and I feel fortunate that the producers had the confidence in me to let me cut on this amazing show.”

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters


6 Below

From IMAX to stereo 3D, theaters have invested in various technologies to entice viewers and increase ticket sales. With a tip of the hat to the past, Barco has developed a new ultrawide, three-screen digital projection system, similar in concept to the Cinerama film theaters of the 1950s. But modern 6K-capable digital cinema cameras make the new approach possible with stunning clarity. There are currently 40 Barco Escape theaters worldwide, with the company looking for opportunities to run films designed for this format.

Enter Scott Waugh, director (Act of Valor, Need for Speed) and co-founder of the LA production company Bandito Brothers. Waugh, who is always on the lookout for new technologies, was interested in developing the first full-length feature film to take advantage of this three-screen, 7:1 aspect ratio for its entire running time. But Waugh didn’t want to change how he intended to shoot the film strictly for these theaters, since the film would also be distributed to conventional theaters. This effectively meant that two films needed to come out of the post-production process – one formatted for the Barco Escape format and one for standard 4K theaters.

6 Below (written by Madison Turner) became the right vehicle. This is the true-life survival story of Eric LeMarque (played by Josh Hartnett), an ex-pro hockey player turned snowboarder with an addiction problem, who finds himself lost in the ice and snow of California’s Sierra Nevada mountains for a week. To best tell this story, Waugh and company trekked an hour or more into the mountains above Sundance, Utah for the production.

To handle the post workflow and co-edit the film with Waugh, editor Vashi Nedomansky (That Which I Love Destroys Me, Sharknado 2, An American Carol) joined the team. Nedomansky, another veteran of Bandito Brothers who uses Adobe Premiere Pro as his axe of choice, has also helped set up Adobe-based editorial workflows for Deadpool and Gone Girl. Coincidentally, in earlier years Nedomansky had been a pro hockey player himself, before shifting to a career in film and video. In fact, he played against the real Eric LeMarque on the circuit.

Pushing the boundaries

The Barco Escape format projects three 2K DCPs to cover the total 6K width. To accommodate this, RED 6K cameras were used and post was done with native 6K media in Adobe Premiere Pro CC. My first question to Nedomansky was simple: why stay native? Nedomansky says, “We had always been pushing the boundaries at Bandito Brothers. What can we get away with? It’s always a question of time, storage, money, and working with a small team. We had a small four-person post team for 6 Below, located near Sundance. So there was interest in not losing time to transcoding.

After some testing, we settled on decked-out Dell workstations, because these could tackle the 6K RED raw files natively.” Two Dell Precision 7910 towers (20-core, 128GB RAM) with Nvidia Quadro M6000 GPUs were set up for editing, along with a third, less powerful HP quad-core computer for the assistant editor and visual effects. All three were connected to shared storage over a 10GigE network. Mike McCarthy, post production supervisor for 6 Below, set up the system. To keep things stable, they ran Windows 7 and stayed on the same Adobe Creative Cloud version throughout the life of the production. Nedomansky continues, “We kept waiting for the 6K to not play, but it never stopped in the six weeks that we were up there. My first assembly was almost three hours long – all in a single timeline – and I was able to play it straight through without any skips or stuttering.”

There were other challenges along the way. Nedomansky explains, “Almost all of the film was done as single-camera and Josh has to carry it with his performance as the sole person on screen for much of the film. He has to go through a range of emotions and you can’t just turn that on and off between takes. So there were lots of long 10-minute takes to convey his deterioration within the hostile environmental conditions. The story is about a man lost in the wild, without much dialogue. The challenge is how to cut down these long takes without taking away from his performance. One solution was to go against the grain – using jump cuts to shorten long takes. But I wanted to look for the emotional changes or a physical act to motivate a jump cut in a way that would make it more organic. In one case, I took a 10-minute take down to 45 seconds.”

When you have a film where weather is a character, you hope that the weather will cooperate. Nedomansky adds, “One of our biggest concerns going in was the weather. Production started in March – a time when there isn’t a lot of snow in Utah. Fortunately for us, a day before we were supposed to start shooting, they had the biggest ‘blizzard’ of the winter for four days. This saved us a lot of VFX time, because we didn’t have to create atmospherics, like snow in front of the lens. It was there naturally.”

Using the Creative Cloud tools to their fullest

6 Below features a high percentage of visual effects shots. Nedomansky says, “The film has 1500 shots, with 205 of them as VFX shots. John Carr was the assistant editor and visual effects artist on the film and he did all of the work in After Effects at 6K resolution, which is unusual for films. Some of the shots included ‘day for night’, where John had to add star plates for the sky. This meant rotoscoping behind Josh and the trees to add the plates. He also had to paint out crew footprints in the snow, along with the occasional dolly track or crew member in a shot. There were also some split screens done at 6K right in Premiere Pro.”

The post schedule involved six weeks on-set and then fourteen more weeks back in LA, for a 20-week total. After that came sound post and grading (done at Technicolor). The process of correctly formatting the film for both Barco and regular theaters almost constituted posting two films. The RED camera image is 6144 x 2592 pixels, Barco Escape is 6144 x 864, and a 4K extraction is 4096 x 2160. Nedomansky explains, “The Barco frame is thin and wide. It could use the full width, but not the full height, of the 6K RED image. So, I had to do a lot of ‘animation’ to reposition the frame within the Barco format. For the 4K version, the framing would be adjusted accordingly. The film has about 1500 shots, but we didn’t use different takes for the two versions. I was able to do this all through reframing.”
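A little arithmetic on the numbers quoted above makes the relationship between the formats clear (this is purely illustrative):

red_w, red_h     = 6144, 2592   # full RED capture
barco_w, barco_h = 6144, 864    # Barco Escape strip
dci4k_w, dci4k_h = 4096, 2160   # conventional 4K extraction

print(round(barco_w / barco_h, 1))  # 7.1  - the "7:1" ultrawide canvas
print(barco_w / 2048)               # 3.0  - three 2K-wide DCPs cover the full width
print(red_h - barco_h)              # 1728 - pixels of vertical travel available when
                                    #        repositioning the thin strip within the tall frame
print(round(dci4k_w / dci4k_h, 2))  # 1.9  - the conventional-theater framing by comparison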

In wrapping up our conversation, Nedomansky adds, “I played hockey against Eric and this added an extra layer of responsibility. He’s very much still alive today. Like any film of this type, it’s ‘based on’ the true story, but liberties are taken. I wanted to make sure that Eric would respect the result. Scott and I have done films that were heavy on action, but this film shows another directorial style – more personal and emotional with beautiful visuals. That’s also a departure for me and it’s very important for editors to have that option.”

6 Below was released on October 13 in cinemas.

Read Vashi’s own write-up of his post production workflow.

Images are courtesy of Vashi Visuals.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

The FCP X – RED – Resolve Dance II


Last October I wrote about the roundtrip workflow surrounding Final Cut Pro X and Resolve, particularly as it relates to working with RED camera files. This month I’ve been color grading a small, indie feature film shot with RED One cameras at 4K resolution. The timeline is 1080p. During the course of grading the film in DaVinci Resolve 11, I’ve encountered a number of issues in the roundtrip process. Here are some workflow steps that I’ve found to be successful.

Step 1 – For the edit, transcode the RED files into 1080p Apple ProRes Proxy QuickTime movies, baking in the camera color metadata and adding burn-in data for clip name and timecode. Use either REDCINE-X Pro or DaVinci Resolve for the transcode.

Step 2 – Import the proxies and double-system audio (if used) into FCP X and sync within the application or use Sync-N-Link X. Ideally all cameras should record reference audio and timecode should match between the cameras and the sound recorder. Slates should also be used as a fall-back measure.

Step 3 – Edit in FCP X until you lock the cut. Prepare a duplicate sequence (Project) for grading. In that sequence, strip off (detach and remove) all audio. As an option, you can create a mix-down track for reference and attach it as a connected clip. Flatten the timeline down to the Primary Storyline wherever possible, so that Resolve only sees this as one track of video. Compound clips should be broken apart, and effects filters and titles removed. Audition clips should be finalized, but multicam clips are OK. Export an FCPXML (version 1.4 “previous”) list. You should also export a self-contained reference version of the sequence, which can be used to check the conform in Resolve.

Step 4 – Launch Resolve and make sure that the master project settings match those of your FCP X sequence. If it’s supposed to be 1920×1080 at 23.976 (23.98) fps, then make sure that’s set correctly. Resolve defaults to a frame rate of 24.0fps and that won’t work. Locate all of your camera original source media (RED camera files in this example) and add them to your media bin in the Media page. Import the FCPXML (1.4), but disable the setting to automatically load the media files in the import dialog box. The FCPXML file will load and will relink to the RED files without issue if everything has gone correctly. The timeline may have a few clip conflicts, so look for the little indicator on the clip corner in the Edit window timeline. If there’s a clip conflict, you’ll be presented with several choices. Pick the right one and that will clear the conflict.

Step 5 – At this point, you should verify that the files have conformed correctly by comparing against a self-contained reference file. Compound clips can still be altered in Resolve by using the Decompose function in the timeline. This will break apart the nested compound clips onto separate video tracks. In general, reframing done in the edit will translate, as will image rotation; however, flips and flops won’t. To flip and flop an image in FCP X requires a negative X or Y scale value (unless you used a filter), which Resolve cannot achieve. When you run across these in Resolve, reset the scale value for that clip to normal in the Edit page inspector. Then in the Color page use the horizontal or vertical flip functions that are part of the resizing controls. Once this is all straight, you can grade.

Step 6 option A – When grading is done, shift to the Deliver page. If your project is largely cuts-and-dissolves and you don’t anticipate further trimming or slipping of edit points in your NLE, then I would recommend exporting the timeline as a self-contained master file. You should do a complete quality check of the exported media file to make sure there were no hiccups in the render. This file can then be brought back into any NLE and combined with the final mixed track to create the actual show master. In this case, there is no roundtrip procedure needed to get back into the NLE.

Step 6 option B – If you anticipate additional editing of the graded files – or you used transitions or other effects that are unique to your NLE – then you’ll need to use the roundtrip “return” solution. In the Deliver page, select the Final Cut Pro easy set-up roundtrip. This will render each clip as an individual file at the source or timeline resolution with a user-selected handle length added to the head and tail of each clip. Resolve will also write a corresponding FCPXML file (version 1.4). This file will retain the original transitions. For example, if you used FCP X’s light noise transition, it will show up as a dissolve in Resolve’s timeline. When you go back to FCP X, it will retain the proper transition information in the list, so you’ll get back the light noise transition effect.

Resolve generates this list with the assumption that the media files were rendered at source resolution and not timeline resolution. Therefore, even if your clips are now 1920×1080, the FCPXML represents these as 4K. When you import this new FCPXML back into FCP X, a spatial conform will be applied to “fit” the files into the 1920×1080 raster space of the timeline. Change this to “none” and the 1080 media files will be blown up to 4K. You can choose to simply live with this – leave it set to “fit” and render the files again on FCP X’s output – or follow the next step for a workaround.

Step 7 – Create a new Resolve project, making sure the frame rate and timeline format are correct, such as 1920×1080 at 23.976fps. Load the new media files that were exported from Resolve into the media pool. Now import the FCPXML that Resolve has generated (uncheck the selection to automatically import media files and uncheck sizing information). The media will now be conformed to the timeline. From the Edit page, export another FCPXML 1.4 for that timeline (no additional rendering is required). This FCPXML will be updated to match the media file info for the new files – namely size, track configuration, and frame rate.

At this stage, you will encounter a second serious flaw in the FCP X/Resolve/FCP X roundtrip process. Resolve 11 does not write a proper FCPXML file and leaves out certain critical asset information. You will encounter this if you move the media and lists between different machines, but not if all of the work is being done on a single workstation. The result will be a timeline that loads into FCP X with black clips (not the red “missing” icon). When you attempt to reconnect the media, FCP X will fail to relink and will issue an “incompatible files” error message. To fix the problem, either the colorist must have FCP X installed on the Resolve system or the editor must have Resolve 11 installed on the FCP X system. This last step is the one remaining workaround.

Step 8 option A – If FCP X is installed on the Resolve machine, import the FCPXML into FCP X and reconnect the media generated by Resolve. Then re-export a new FCPXML from FCP X. This new list and media can be moved to any other system. You can move the FCP X Library successfully, as well.

Step 8 option B – If Resolve is installed on the FCP X machine, then follow Step 7. The new FCPXML that you create there will load into FCP X, since you are on the same system.

That’s the state of things right now. Maybe some of these flaws will be fixed with Resolve 12, but I don’t know at this point. The FCPXML list format involves a bit of voodoo at times and this is one of those cases. The good news is that Resolve is very solid when it comes to relinking, which will save you. Good luck!

©2015 Oliver Peters

Camerama 2015


The design of a modern digital video camera comes down to the physics of the sensor and shutter, the software to control colorimetry and smart industrial design to optimize the ergonomics for the operator. Couple that with a powerful internal processor and recording mechanism and you are on your way. Although not exactly easy, these traits no longer require skills that are limited to the traditional camera manufacturers. As a result, innovative new cameras have been popping up from many unlikely sources.

The newest of these is AJA, which delivered the biggest surprise of NAB 2014 in the form of their CION 4K/UltraHD/2K/HD digital camera. Capitalizing on a trend started by ARRI, the CION records directly to the edit-ready Apple ProRes format, using AJA Pak solid state media. The CION features a 4K APS-C sized CMOS sensor with a global shutter to eliminate rolling-shutter artifacts. AJA claims 12 stops of dynamic range and uses a PL mount for lenses designed for Super 35mm. The CION is also capable of outputting AJA camera raw at frame rates up to 120fps. It can send out 4K or UHD video from its four 3G-SDI outputs to the AJA Corvid Ultra for replay and center extraction during live events.

The darling of the film and high-end television world continues to be ARRI Digital with its line of ALEXA cameras. These now include the Classic, XT, XT Plus, XT M and XT Studio configurations. They vary based on features and sensor size. The Classic cameras have a maximum active sensor area of 2880 x 2160 photosites, while the XT models go as high as 3414 x 2198. Another difference is that the XT models allow in-camera recording of ARRIRAW media. The ALEXA introduced ProRes recording and all current XT models permit Apple ProRes and Avid DNxHD recording.

The ALEXA has been joined by the newer, lighter AMIRA, which is targeted at documentary-style shooting with smaller crews. The AMIRA is tiered into three versions, with the Premium model offering 2K recording in all ProRes flavors at up to 200fps. ARRI has added 4K capabilities to both the ALEXA and AMIRA lines by utilizing the full sensor size in their Open Gate mode. In the AMIRA, this 3.4K image is internally scaled by a factor of 1.2 to record a UHD file at up to 60fps to its in-camera CFast 2.0 cards. The ALEXA uses a similar technique, but only records the 3.4K signal in-camera, with scaling to be done later in post.

To leapfrog the competition, ARRI also introduced its ALEXA 65, which is available through the ARRI Rental division. This camera is a scaled up version of the ALEXA XT and uses a sensor that is larger than a 5-perf 65mm film frame. That’s an Open Gate resolution of 6560 x 3102 photosites. The signal is captured as uncompressed ARRIRAW. Currently the media is recorded on ALEXA XR Capture drives at a maximum frame rate of 27fps.

Blackmagic Design had been the most unexpected camera developer a few years ago, but has since grown its DSLR-style camera line into four models: Studio, Production 4K, Cinema and Pocket Cinema. These vary in cosmetic style and size, which formats they are able to record and the lens mounts they use. The Pocket Cinema Camera is essentially a digital equivalent of a Super 16mm film camera, but in a point-and-shoot, small camera form factor. The Cinema and Production 4K cameras feature a larger, Super 35mm sensor. Each of these three incorporates ProRes and/or CinemaDNG raw recording. The Studio Camera is designed as a live production camera. It features a larger viewfinder, housing, accessories and connections designed to integrate this camera into a television studio or remote truck environment. There is an HD and a 4K version.

The biggest Blackmagic news was the introduction of the URSA. Compared to the smaller form factors of the other Blackmagic Design cameras, the URSA is literally a “bear” of a camera. It is a rugged 4K camera built around the idea of user-interchangeable parts. You can get EF, PL and broadcast lens mounts, but you can also operate it without a lens as a standalone recording device. It’s designed for UltraHD (3840 x 2160), but can record up to 4,000 pixels wide in raw. Recording formats include CinemaDNG raw (uncompressed and 3:1 compressed), as well as Apple ProRes, with speeds up to 80fps. There are two large displays on both sides of the camera, which can be used for monitoring and operating controls. It has a 10” fold-out viewfinder and a built-in liquid cooling system. As part of the modular design, users can replace mounts and even the sensor in the field.

Canon was the most successful company out of the gate when the industry adopted HD-video-capable DSLR cameras as serious production tools. Canon has expanded these offerings with its Cinema EOS line of small production cameras, including the C100, C100 Mark II, C300 and C500, which all share a similar form factor. Also included in this line-up is the EOS-1D C, a 4K camera that retains its DSLR body. The C300 and C500 cameras both use a Super 35mm sized sensor and come in EF or PL mount configurations. The C300 is limited to HD recording using the Canon XF codec. The C500 adds 2K and 4K (4096 cinema and 3840 UHD) recording capabilities, but this signal must be externally recorded using a device like the Convergent Design Odyssey 7Q+. HD signals are recorded internally as Canon XF, just like the C300. The Canon EOS C100 and C100 Mark II share the design of the C300, except that they record to AVCHD instead of Canon XF. In addition, the Mark II can also record MP4 files. Both C100 models record to SD cards, whereas the C300/C500 cameras use CF cards. The Mark II features improved ergonomics over the base C100 model.

The Canon EOS-1D C is included because it can record 4K video. Since it is also a still photography camera, the sensor is an 18MP full-frame sensor. When recording 4K video, it uses a Motion JPEG codec, but for HD, can also use the AVCHD codec. The big plus over the C500 is that the 1D C records 4K onboard to CF cards, so it is better suited to hand-held work. The DSLR cameras that started the craze for Canon continue to be popular, including the EOS 5D Mark III and the new EOS 7D Mark II, plus the consumer-oriented Rebel versions. All are outstanding still cameras. The 5D features a 22.3MP CMOS sensor and records HD video as H.264 MOV files to onboard CF cards. Thanks to the sensor size, the 5D is still popular for videographers who want extremely shallow depth-of-field shots from a handheld camera.

Digital Bolex has become a Kickstarter success story. These out-of-the-box thinkers coupled the magic of a venerable name from the film era with innovative design and marketing to produce the D16 Cinema Camera. Its form factor mimics older, smaller, handheld film camera designs, making it ideal for run-and-gun documentary production. It features a Super 16mm sized CCD sensor with a global shutter and claims 12 stops of dynamic range. The D16 records in 12-bit CinemaDNG raw to internal SSDs, but media is offloaded to CF cards or via USB 3.0 for media interchange. The camera comes with a C-mount, but EF, MFT and PL lens mounts are available. Currently the resolutions include 2048 x 1152 (“S16mm mode”), 2048 x 1080 (“S16 EU”) and HD (“16mm mode”). The D16 records 23.98, 24 and 25fps frame rates, but variable rates up to 32fps in the S16mm mode are coming soon. To expand on the camera’s attractiveness, Digital Bolex also offers a line of accessories, including Kish/Bolex 16mm prime lens sets. These fixed-aperture F4 lenses are C-mount for native use with the D16 camera. Digital Bolex also offers the D16 in an MFT mount configuration and in a monochrome version.

The sheer versatility and disposable quality of GoPro cameras has made the HERO line a staple of many productions. The company continues to advance this product with the HERO4 Black and Silver models as their latest. These are both 4K cameras and have similar features, but if you want full video frame rates in 4K, then the HERO4 Black is the correct model. It will record up to 30fps in 4K, 50fps in 2.7K and 120fps in 1080p. As a photo camera, it uses a 12MP sensor and is capable of 30 frames in one second in burst mode and time-lapse intervals from .5 to 60 seconds. The video signal is recorded as an H264 file with a high-quality mode that’s up to 60 Mb/s. MicroSD card media is used. HERO cameras have been popular for extreme point-of-view shots and the waterproof housing is good to 40 meters. This new HERO4 series offers more manual control, new nighttime and low-light settings, and improved audio recording.

Nikon actually beat Canon to market with HD-capable DSLRs, but lost the momentum when Canon capitalized on the popularity of the 5D. Nevertheless, Nikon has its share of supportive videographers, thanks in part to the quantity of Nikon lenses in general use. The Nikon range of high-quality still photo and video-enabled cameras falls under Nikon’s D-series product family. The Nikon D800/800E camera has been updated to the D810. This is the camera of most interest to professional videographers. It’s a 36.3MP still photo camera that can also record 1920 x 1080 video in 24/30p modes internally and 60p externally. It can also record up to 9,999 images in a time-lapse sequence. A big plus for many is its optical viewfinder. It records H.264/MPEG-4 media to onboard CF cards. Other Nikon video cameras include the D4S, D610, D7100, D5300 and D3300.

Panasonic used to own the commercial HD camera market with the original VariCam HD camera. They’ve now reimagined that brand in the new VariCam 35 and VariCam HS versions. The new VariCam uses a modular configuration with each of these two cameras using the same docking electronics back. In fact, a customer can purchase one camera head and back and then only need to purchase the other head, thus owning both the 35 and the HS models for less than the total cost of two cameras. The VariCam 35 is a 4K camera with wide color gamut and wide dynamic range (14+ stops are claimed). It features a PL lens mount, records from 1 to 120fps and supports dual-recording. For example, you can simultaneously record a 4K log AVC-Intra master to the main recorder (expressP2 card) and 2K/HD Rec 709 AVC-Intra/AVC-Proxy/Apple ProRes to a second internal recorder (microP2 card) for offline editing. VariCam V-Raw camera raw media can be recorded to a separate Codex V-RAW recorder, which can be piggybacked onto the camera. The Panasonic VariCam HS is a 2/3” 3MOS broadcast/EFP camera capable of up to 240fps of continuous recording. It supports the same dual-recording options as the VariCam 35 using AVC-Intra and/or Apple ProRes codecs, but is limited to HD recordings.

With interest in DSLRs still in full swing, many users’ interest in Panasonic veers to the Lumix GH4. This camera records 4K cinema (4096) and 4K UHD (3840) sized images, as well as HD. It uses SD memory cards to record in MOV, MP4 or AVCHD formats. It features variable frame rates (up to 96fps), HDMI monitoring and a professional 4K audio/video interface unit. The latter is a dock that fits to the bottom of the camera. It includes XLR audio and SDI video connections with embedded audio and timecode.

RED Digital Cinema started the push for 4K cameras and camera raw video recording with the original RED One. That camera is now only available in refurbished models, as RED has advanced the technology with the EPIC and SCARLET. Both are modular camera designs that are offered with either the Dragon or the Mysterium-X sensor. The Dragon is a 6K, 19MP sensor with 16.5+ stops of claimed dynamic range. The Mysterium-X is a 5K, 14MP sensor that claims 13.5 stops, but up to 18 stops using RED’s HDRx (high dynamic range) technology. The basic difference between the EPIC and the SCARLET, other than cost, is that the EPIC features more advanced internal processing and this computing power enables a wider range of features. For example, the EPIC can record up to 300fps at 2K, while the SCARLET tops out at 120fps at 1K. The EPIC is also sold in two configurations: EPIC-M, which is hand-assembled using machined parts, and the EPIC-X, which is a production-run camera. With the interest in 4K live production, RED has introduced its 4K Broadcast Module. Coupled with an EPIC camera, you could record a 6K file for archive, while simultaneously feeding a 4K and/or HD live signal for broadcast. RED is selling studio broadcast configurations complete with camera, modules and support accessories as broadcast-ready packages.

Sony has been quickly gaining ground in the 4K market. Its CineAlta line includes the F65, PMW-F55, PMW-F5, PMW-F3, NEX-FS700R and NEX-FS100. All are HD-capable and use Super 35mm sized image sensors, with the lower-end FS700R able to record 4K raw to an external recorder. At the highest end is the 20MP F65, which is designed for feature film production. The camera is capable of 8K raw recording, as well as 4K, 2K and HD variations. Recordings must be made on a separate SR-R4 SR MASTER field recorder. For most users, the F55 is going to be the high-end camera for them if they purchase from Sony. It permits onboard recording in four formats: MPEG-2 HD, XAVC HD, SR File and XAVC 4K. With an external recorder, 4K and 2K raw recording is also available. High speeds up to 240fps (2K raw with the optional, external recorder) are possible. The F5 is the F55’s smaller sibling. It’s designed for onboard HD recording (MPEG-2 HD, XAVC HD, SR File). 4K and 2K recordings require an external recorder.

The Sony camera that has caught everyone’s attention is the PXW-FS7. It’s designed as a lightweight, documentary-style camera with a form factor and rig that’s reminiscent of an Aaton 16mm film camera. It uses a Super 35mm sized sensor and delivers 4K resolution using onboard XAVC recording to XQD memory cards. XDCAM MPEG-2 HD recording is available now, with ProRes support to come in a future upgrade. Raw output to an outboard recorder will also be possible.

Sony has also not been left behind by the DSLR revolution. The A7s is a full-frame, mirrorless 12.2MP camera that’s optimized for 4K and low light. It can record up to 1080p/60 (or 720p/120) onboard (50Mbps XAVC S) or feed uncompressed HD and/or 4K (UHD) out via its HDMI port. It will record onboard audio and sports such pro features as Sony’s S-Log2 gamma profile.

With any overview, there’s plenty that we can’t cover. If you are in the market for a camera, remember many of these companies offer a slew of other cameras ranging from consumer to ENG/EFP offerings. I’ve only touched on the highlights. Plus there are others, like Grass Valley, Hitachi, Samsung and Ikegami that make great products in use around the world every day. Finally, with all the video-enabled smart phones and tablets, don’t be surprised if you are recording your next production with an iPhone or iPad!

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals, including director of photography Jeff Cronenweth, editor Kirk Baxter and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage. That’s the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two best editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising awareness of the application among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode edit media, view dailies, archive the media to LTO data tape and transfer it to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we also uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
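Working through Nelson’s numbers shows how neatly the pieces line up. This is just illustrative arithmetic based on the figures he quotes, not part of the actual pipeline:

full_6k    = (6144, 3072)   # RED EPIC Dragon capture
extract_5k = (5120, 2133)   # framed 5K center extraction
proxy      = (2304, 1152)   # down-sampled ProRes 422 (LT) offline media (full 6K frame)
timeline   = (1920, 800)    # Premiere Pro sequence raster, 2.40:1

scale = proxy[0] / full_6k[0]   # 0.375 - the proxy downsample factor

# The 5K extraction, scaled by the proxy factor, lands exactly on the 1920 x 800
# timeline raster, so the overshoot sits just outside the visible frame.
assert (round(extract_5k[0] * scale), round(extract_5k[1] * scale)) == timeline

print(round(extract_5k[0] / extract_5k[1], 2))   # 2.4 - matches the sequence aspect

In other words, a proxy clip dropped into the sequence shows exactly the intended 5K framing, and the shot can be slid in any direction within the extra 6K picture area without revealing an edge.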

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors that may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut, and with some directors you are almost starting from scratch once the director arrives. With David, most of that ten-week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day, but it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers a lot of the questions raised before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and over three minutes it sours into something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor in using Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants who are using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front-end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID built with standard HDDs was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have potentially been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from about six to eight minutes at the start down to 90 seconds at best towards the end.

The final conform and color grading was handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything I used a custom Filemaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.
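Nelson doesn’t detail the database itself, so the following is only a hedged, minimal sketch of the idea: an EDL event names a reel, and a codebook (here a plain dictionary standing in for the FileMaker Pro database) maps that reel back to its attributes and location on the server. The reel names, fields and paths are hypothetical.

import re

# Hypothetical codebook entries standing in for the FileMaker Pro database.
codebook = {
    "A023_C011": {"path": "/server/r3d/A023_C011.RDC", "lens": "35mm", "ei": 800},
    "A024_C004": {"path": "/server/r3d/A024_C004.RDC", "lens": "50mm", "ei": 800},
}

# Two made-up CMX3600-style EDL events.
edl = """\
001  A023_C011 V     C        12:01:10:00 12:01:14:12 01:00:00:00 01:00:04:12
002  A024_C004 V     C        13:22:03:08 13:22:09:00 01:00:04:12 01:00:10:04
"""

event = re.compile(r"^\d+\s+(\S+)\s+V\s+C\s+(\S+)\s+(\S+)", re.MULTILINE)

for reel, src_in, src_out in event.findall(edl):
    shot = codebook.get(reel)
    if shot:
        print(reel, shot["path"], src_in, "-", src_out, f"({shot['lens']}, EI {shot['ei']})")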

As with his previous films, director David Fincher has not only told a great story in Gone Girl, but set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools, with Premiere Pro CC as the hub, bring the power of this suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

_________________________________

Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

PostPerspective

FxGuide

HDVideoPro

IndieWire

IndieWire blog

ICG Magazine

RedUser

Tony Zhou’s Vimeo take on Fincher 

©2014 Oliver Peters

The FCP X – RED – Resolve Dance


I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test of the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. After this has been done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when the RED Rocket card was installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled, which lets transcoding continue in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, and this info was also added to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and/or optimized-original media is seamless and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but then when you toggle to proxy, it has to make the corresponding adjustments to media that is now half-sized. Likewise any blow-ups or reframing that you do also have to match in both modes.
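To make the sizing adjustment concrete (assuming a 4096-pixel-wide RED One 4K frame and half-sized proxies, which is how I read the setup above), the two conform factors work out like this:

timeline_w = 1920
original_w = 4096             # assumed RED One 4K frame width
proxy_w    = original_w // 2  # 2048 - the half-sized ProRes Proxy

fit_original = timeline_w / original_w   # ~0.47 - conform scale for the 4K originals
fit_proxy    = timeline_w / proxy_w      # ~0.94 - conform scale for the proxies

print(round(fit_original, 3), round(fit_proxy, 3))
# FCP X swaps between these two factors automatically when you toggle proxy mode,
# so a blow-up or reframe dialed in by the editor looks the same either way.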

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system – proxies for fast and efficient editing; original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. Whatever the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We treated these as the new camera masters for those clips. Even then, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc., and set the event browser to “hide rejected”. Next, I review the footage based on script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I mark these as Favorites. As I do this, I select the whole take and not just a portion, since I want the entire take available while editing.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this down to my initial “open in timeline” syncing, which I’ll explain in a bit. In any case, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
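
Here is a quick sketch of the math behind that translation. It assumes a 4096-pixel-wide source fit into a 1920-pixel-wide timeline and ignores the 16:9 extraction framing, so the numbers are approximate.

```python
TIMELINE_W = 1920   # 1080p timeline
SOURCE_W = 4096     # full 4K source width
blow_up = 1.20      # the 120% blow-up applied in the offline cut

# Width of the source region that remains visible after the blow-up,
# measured in original 4K pixels rather than timeline pixels.
visible_source_px = SOURCE_W / blow_up          # ~3413 px
needs_upscale = visible_source_px < TIMELINE_W  # False - still well above 1920

print(f"Visible source width: {visible_source_px:.0f} px")
print(f"Upscaling required: {needs_upscale}")
```

Because the visible crop of the 4K original is still wider than the 1920-pixel output, Resolve can fill the reframed shot entirely by downscaling, which is why no resolution is lost.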

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the 5 or 6 layers that made up this composite underneath the primary storyline. That’s all fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and the audio was shifted equally far in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, or using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select the handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProRes HQ with an additional 3 seconds of handles on either side of each clip’s timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it was trying to show a native 4K image at 1:1 – except that this was now 1080p media and NOT 4K. Apparently this resizing metadata is incorrectly held in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now to an explanation of the audio issue. FCP X master clips are NOT like master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools unit, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not usable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export altered all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file. Then, using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached and clients will start to demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured along the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.
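
A quick arithmetic check of those figures, using the UHD and HD dimensions quoted above:

```python
dci_4k = (4096, 2160)  # cinema projection spec
uhd = (3840, 2160)     # television/consumer spec
hd = (1920, 1080)

print(f"DCI 4K aspect: {dci_4k[0] / dci_4k[1]:.2f}:1")  # ~1.90:1
print(f"UHD aspect:    {uhd[0] / uhd[1]:.2f}:1")        # ~1.78:1 (16:9)

area_ratio = (uhd[0] * uhd[1]) / (hd[0] * hd[1])  # 4x the pixels of HD
linear_ratio = uhd[1] / hd[1]                     # but only 2x the lines
print(f"Area ratio: {area_ratio:.0f}x, linear resolution ratio: {linear_ratio:.0f}x")
```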

This brings us to post. The push for 4K post comes from a number of sources, but many of the strongest voices are in the independent owner-operator camp. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs, and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps the quality highest by avoiding further decompression/recompression cycles, as well as varying debayering methods along the way.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted it to these formats, with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so using the UHD ProRes HQ example above, roughly 30x that 3-minute clip gives us the total for a typical feature. That figure comes out to about 480GB.
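
Those figures hold up to a rough arithmetic check. The sketch below assumes the clip runs about 3.2 minutes at 23.976fps, that 10-bit DPX packs each RGB pixel into 4 bytes and that 8-bit TIFF uses 3 bytes per pixel, so the results are approximate.

```python
W, H, FPS = 4096, 2304, 23.976
clip_minutes = 3.2                # "a little over 3 minutes"
frames = clip_minutes * 60 * FPS  # ~4,600 frames

dpx_frame = W * H * 4   # 10-bit RGB packed into 4 bytes/pixel -> ~37.7 MB
tiff_frame = W * H * 3  # 8-bit RGB, 3 bytes/pixel -> ~28.3 MB

GB = 1e9
print(f"DPX:  {dpx_frame / 1e6:.1f} MB/frame, clip total ~{frames * dpx_frame / GB:.0f} GB")
print(f"TIFF: {tiff_frame / 1e6:.1f} MB/frame, clip total ~{frames * tiff_frame / GB:.0f} GB")

# Extrapolating the 16GB UHD ProRes HQ version of the clip to a feature:
feature_minutes = 95
print(f"Feature at that rate: ~{feature_minutes / clip_minutes * 16:.0f} GB")
```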

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes 4444 – never mind uncompressed – your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.
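
To put a number on that, here is a rough data-rate estimate based on the DPX frame size from the storage sketch above; real-world throughput needs would vary with resolution, bit depth and codec.

```python
dpx_frame_mb = 37.7  # 4096 x 2304, 10-bit RGB DPX frame (see the sketch above)
fps = 23.976

single_stream = dpx_frame_mb * fps  # ~900 MB/s for one 4K DPX stream
print(f"One 4K DPX stream:        ~{single_stream:.0f} MB/s")
print(f"Two simultaneous streams: ~{2 * single_stream / 1000:.1f} GB/s")
# Sustained rates like these are why uncompressed 4K playback demands
# a serious drive array, and why offline/online editing with proxies
# remains the practical approach.
```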

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.

Naturally there are ways to make 4K post efficient and less painful than it otherwise would be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smoke, Quantel Pablo Rio and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters