More 4K

I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people. In terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That difference matters little in practice, because the two are close enough to be treated as the same. There’s so much hype around 4K, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
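To make the area-versus-linear distinction concrete, here’s the arithmetic as a quick Python sketch (standard frame sizes only; nothing here is specific to any camera):

```python
# "4K is four times HD" compares total pixel counts (area).
# Measured linearly, the way resolution is actually assessed,
# the frame only doubles in each direction.

hd = (1920, 1080)    # HD frame, width x height
uhd = (3840, 2160)   # UltraHD "4K" frame

pixel_ratio = (uhd[0] * uhd[1]) / (hd[0] * hd[1])
linear_ratio = uhd[1] / hd[1]   # vertical, where resolution is usually measured

print(pixel_ratio)    # 4.0 -> the marketing number (area)
print(linear_ratio)   # 2.0 -> resolving power, at best
```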

A lot of arguments have been made that 4K cameras built around a single CMOS sensor with a Bayer-style color filter array don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and why Sony uses an 8K sensor (F65) to deliver a 4K image.
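The photosite arithmetic behind that argument is easy to sketch. This is a simplified illustration of the standard Bayer layout and ignores the demosaic math itself:

```python
# In a Bayer-pattern sensor, each photosite samples only one color:
# every 2x2 block contains two green sites, one red and one blue.

width, height = 3840, 2160        # a nominal "4K" single-sensor camera
photosites = width * height

green = photosites // 2           # 50% of the sites
red = blue = photosites // 4      # 25% each

# A true 4K RGB frame needs full color at every pixel, so roughly half
# of the luminance-carrying (green) detail must be interpolated by the
# demosaic rather than directly sampled.
print(green)         # 4147200 green samples actually captured
print(photosites)    # 8294400 pixels implied by the frame size
```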

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the trade-off isn’t nearly this simple, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow becomes an issue. Do you shoot everything in 4K or just the items you know you’ll want to reframe? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot: the NLE scaled the 4K to HD first and then expanded that downscaled HD image. It never cropped into the actual 4K native resolution. So you have to be careful. And if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image scaled to fit an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot at HD size. When you crop into the native image instead, you lose some of that oversampling effect. Cropping the 4K frame at a 1:1 pixel relationship gives you the same effective framing as a 200% blow-up, but not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. If you shoot a wide and then an actual close-up instead, that result will usually look better.
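You can see the scale-first trap in miniature with a few lines of numpy. A one-pixel checkerboard stands in for the finest detail the sensor recorded; this is a deliberately extreme toy case, not a real scaler:

```python
import numpy as np

# Sketch of why blowing up a 4K clip that was already scaled to HD
# is not the same as cropping into the native 4K frame.

# A one-pixel checkerboard: the finest detail a 4K frame can hold.
uhd = np.indices((2160, 3840)).sum(axis=0) % 2

# Path A: the NLE downscales to HD first (2x2 box average) ...
hd = uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))
# ... and the editor then "zooms in" 200% (nearest-neighbor repeat).
blown_up = hd[:540, :960].repeat(2, axis=0).repeat(2, axis=1)

# Path B: crop into the native 4K frame at 1:1 pixels.
cropped = uhd[:1080, :1920]

print(blown_up.std())  # 0.0 -> the fine detail averaged away to flat gray
print(cropped.std())   # 0.5 -> the native crop keeps the detail
```

The blow-up and the crop have identical HD dimensions and framing, but only the native crop retains the single-pixel detail; the other path threw it away in the first downscale.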

On the other hand, if you blow up the 4K-to-HD downscale or a native HD shot, you’ll typically get a result that looks pretty good. That’s because there’s often more information there than monitors or the eye can resolve. In my experience, you can commonly get away with a blow-up of around 120% of the original image size and, in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with a screen that size. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two identical images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference becomes obvious, and then only when looking at fine detail within the shot.

Not all scaling is equal. These 4K-to-HD comparisons really depend on the scaler you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three versions of the same shot (not counting the raw files) – the converted 4K ProRes4444 file, a converted 1080 ProRes4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all three matched in framing. When I compared the native 4K shot to the expanded 1080 ProRes4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better in the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you results superior to the original (crisper hair), but a blow-up suffers when you start from a weaker codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet much of this looks pretty good, doesn’t it?

Everything I said about blowing up HD by 120% or more still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a factor of 1.2, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than doing it in-camera.
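The numbers in ARRI’s trick line up, as a quick sketch shows (the exact open-gate width below is my assumption, since ARRI only says “about 3.4K”):

```python
# The AMIRA's in-camera path, as described: read the full sensor
# ("open gate"), crop lightly to 3.2K, then upscale by a factor of 1.2.
open_gate_width = 3414   # assumed photosite width; "about 3.4K"
cropped_width = 3200     # the lightly-cropped 3.2K frame
scale = 1.2

print(round(cropped_width * scale))   # 3840 -> the UltraHD frame width
```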

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, don’t sweat it. You won’t be left behind for a while yet, and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.


©2014 Oliver Peters

New NLE Color Features

As someone who does color correction as often within an NLE as in a dedicated grading application, it’s nice to see that Apple and Adobe are not treating their color tools as an afterthought. (No snide Apple Color comments, please.) Both the Final Cut Pro 10.1.2 and Creative Cloud 2014 updates include new tools specifically designed to improve color correction.

Apple Final Cut Pro 10.1.2

This FCP X update includes a new, built-in LUT (look-up table) feature designed to correct log-encoded camera files into Rec 709 color space. This type of LUT is camera-specific and FCP X now comes with preset LUTs for ARRI, Sony, Canon and Blackmagic Design cameras. This correction is applied as part of the media file’s color profile and, as such, takes effect before any filters or color correction is applied.

These LUTs can be enabled for master clips in the event, or after a clip has been edited to a sequence (FCP X project). The log processing can be applied to a single clip or a batch of clips in the event browser. Simply highlight one or more clips, open the inspector and choose the “settings” selection. In that pane, access the “log processing” pulldown menu and choose one of the camera options. This applies that camera LUT to all selected clips, and it stays with a clip when it’s edited to the sequence. Individual clips in the sequence can later be enabled or disabled as needed. This LUT information does not pass through as part of an FCPXML roundtrip, such as sending a sequence to Resolve for color grading.

Although camera LUTs are specific to the color science used for each camera model’s type of log encoding, this doesn’t mean you can’t use a different LUT. Naturally some will be too extreme and not desirable. Some, however, are close and using a different LUT might give you a desirable creative result, somewhat like cross-processing in a film lab.

Adobe CC 2014 – Premiere Pro CC and SpeedGrade CC

In this CC 2014 release, Adobe added master clip effects that travel back and forth between Premiere Pro CC and SpeedGrade CC via Direct Link. Master clip effects are relational, meaning that the color correction is applied to the master clip and, therefore, every instance of this clip that is edited to the sequence will have the same correction applied to it automatically. When you send the Premiere Pro CC sequence to SpeedGrade CC, you’ll see that the 2014 version now has two correction tabs: master clip and clip. If you want to apply a master clip effect, choose that tab and do your grade. If other sections of the same clip appear on the timeline, they have automatically been graded.

Of course, with a lot of run-and-gun footage, iris levels and lighting change mid-clip, so one setting might not work for the entire clip. In that case, you can add a second level of grading by tweaking the shot in the clip tab. Effectively you now have two levels of grading. Depending on the show, you can grade in the master clip tab, the clip tab or both. When the sequence goes back to Premiere Pro CC, SpeedGrade CC corrections are applied as Lumetri effects added to each sequence clip. Any master clip effects also “ripple back” to the master clip in the bin. This way, if you cut a new section from an already-graded master clip into that or any other sequence, color correction has already been applied to it.

In the example I created for the image above, the shot was graded as a master clip effect. Then I added more primary correction and a filter effect in the clip tab for the first instance of the clip in the sequence. This was used to create a cartoon look for that segment of the timeline. Compare the two versions of these shots – one with only a master clip effect (shots match) and the other with a separate clip effect added to the first (shots are different).

Since master clip effects apply globally to source clips within a project, editors should be careful about changing them or copy-and-pasting them, as you may inadvertently alter another sequence within the same project.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage into a straight Rec709 offset or with a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab using the CDL color correction tools. Therefore, it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the image is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use it with images that were already encoded with vibrant Rec709 colors.
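For reference, the slope/offset/power model follows the ASC CDL transfer function, which can be sketched per channel in a few lines (my own illustration, not ARRI’s code; values normalized to 0.0–1.0):

```python
# ASC CDL per-channel math: out = clamp(in * slope + offset) ** power.
# Slope behaves like gain, offset like lift, power like gamma.

def cdl_sop(value, slope=1.0, offset=0.0, power=1.0):
    """Apply slope/offset/power to one normalized channel value."""
    v = value * slope + offset
    v = min(max(v, 0.0), 1.0)   # clamp before the power function
    return v ** power

print(cdl_sop(0.5))                                 # 0.5 -> identity settings
print(round(cdl_sop(0.5, slope=1.2), 4))            # 0.6 -> slope acts as gain
print(round(cdl_sop(0.5, offset=0.1), 4))           # 0.6 -> offset acts as lift
print(round(cdl_sop(0.5, power=2.0), 4))            # 0.25 -> power acts as gamma
```

The clamp before the power step is also why a clipped Rec709 conversion can still be pulled back in a floating-point pipeline: the correction is applied upstream of the clamp there, not after it as in this single-stage sketch.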

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
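For the curious, a LUT utility is doing something conceptually simple with that .cube file. Here’s a minimal sketch, assuming the common Resolve/IRIDAS layout (a “LUT_3D_SIZE N” line followed by N³ “r g b” rows with the red index varying fastest) and using nearest-neighbor lookup in place of the trilinear interpolation real tools perform:

```python
# Minimal .cube 3D LUT reader and lookup (illustration only).

def parse_cube(text):
    """Return (size, table) from .cube text; table rows are (r, g, b)."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("TITLE"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] in "+-.":
            table.append(tuple(float(x) for x in line.split()))
    return size, table

def apply_lut(rgb, size, table):
    """Nearest-neighbor lookup; red index varies fastest in the table."""
    r, g, b = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
    return table[r + g * size + b * size * size]

# A trivial 2x2x2 identity LUT, written out by hand:
cube = """LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
size, table = parse_cube(cube)
print(apply_lut((1.0, 0.0, 1.0), size, table))  # (1.0, 0.0, 1.0)
```

A real implementation would also honor DOMAIN_MIN/DOMAIN_MAX lines and interpolate among the eight surrounding table entries instead of snapping to the nearest one.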

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

The East

Director Zal Batmanglij’s The East caught the buzz at Sundance and SXSW. It was produced by Scott Free Productions with Fox Searchlight Pictures. Not bad for the young director’s sophomore outing. The film takes its name from The East, a group of eco-terrorists and anarchists led by Benji, who is played by Alexander Skarsgard (True Blood). The group engages in “jams” – their term for activist attacks on corporations, which they tape and put out on the web. Sarah, played by Brit Marling (Arbitrage), is a corporate espionage specialist who is hired to infiltrate the group. In that process, she comes to sympathize with the group’s ideals, if not its violent tactics. She finds herself both questioning her allegiances and falling in love with Benji. Marling also co-wrote the screenplay with Batmanglij.

In addition to a thriller plot, the film’s production also had some interesting twists along the way to completion. First, it was shot with an ARRI ALEXA, but unlike most films that use the ALEXA, the recording was done as ProRes4444 to the onboard SxS cards, instead of ARRIRAW to an external recorder. That will make it one of the few films to date in mainstream release to do so. ProRes dailies were converted into color-adjusted Avid DNxHD media for editing.

Second, the film went through a change of editors due to prior commitments. After the production wrapped and a first assembly of the film was completed, Andrew Weisblum (Moonrise Kingdom, Black Swan) joined the team to cut the film. Weisblum’s availability was limited to four months, though, since he was already committed to editing Darren Aronofsky’s Noah. At that stage, Bill Pankow (The Black Dahlia, Carlito’s Way) picked up for Weisblum and carried the film through to completion.

Andrew Weisblum explained, “When I saw the assembly of The East, I really felt like there was a good story, but I had already committed to cut Noah. I wasn’t quite sure how much could be done in the four months that I had, but left the film at what we all thought was a cut that was close to completion. It was about 80% done and we’d had an initial preview. Bill [Pankow] was a friend, so I asked if he would pick it up for me there, assuming that the rest would be mainly just a matter of tightening up the film. But it turned out to be more involved than that.”

Bill Pankow continued, “I came on board June of last year and took the picture through to the locked cut and the mix in November. After that first screening, everyone felt that the ending needed some work. The final scene between the main characters wasn’t working in the way Zal and Brit had originally expected. They decided to change some things to serve the drama better and to resolve the relationship of the main characters. This required shooting additional footage, as well as reworking some of the other scenes. At that point we took a short hiatus while Zal and Brit rewrote and reshot the last scene. Then another preview and we were able to lock the cut.”

Like nearly all films, The East took on a life of its own in the cutting room. According to Weisblum, “The film changed in the edit from the script. Some of what I did in the cut was to bring in more tension and mystery in the beginning to get us to the group [The East] more quickly. We also simplified a number of story points. Nothing really radical – although it might have felt like that at the time – but just removing tangents that distracted from the main story.” Pankow added, “We didn’t have any length constraints from Fox, so we were able to optimize each scene. Towards the end of the film, there were places that needed extra ‘moments’ to accentuate some of the emotion of what the Sarah character was feeling. In a few cases, this meant re-using shots that might have appeared earlier. In addition to changing the last scene, a few other areas were adjusted. One or two scenes were extended, which in some cases replaced other scenes.”

Since the activists document their activities with video cameras, The East incorporates a number of point-of-view shots taken with low-res cameras. Rather than create these as visual effects shots, low-res cameras were used for the actual photography of that footage. Some video effects were added in the edit and some through the visual effects company. Weisblum has worked as a VFX editor (The Fountain, Chicago), so creating temporary visual effects is second nature. He said, “I usually do a number of things either in the Avid or using [Adobe] After Effects. These are the typical ‘split screen’ effects where takes are mixed to offset the timing of the performances. In this film, there was one scene where two characters [Tim and Sarah] are having a conversation on the bed. I wanted to use a take where Tim is sitting up, but of course, he’s partially covered by Sarah. This took a bit more effort, because I had to rotoscope part of one shot into the other, since the actors were overlapping each other. I’ll do these things whenever I can, so that the film plays in as finished a manner as possible during screening. It also gives the visual effects team a really good roadmap to follow.”

Bill Pankow has worked as an editor or assistant on over forty features and brings some perspective to modern editing. He said, “I started editing digitally on Lightworks, but then moved to Avid. At the time, Lightworks didn’t keep up and Avid gave you more effects and titling tools, which let editors produce a more polished cut. On this film the set-up included two Avid Media Composer systems connected to shared storage. I typically like to work with two assistants when I can. My first assistant will add temporary sound effects and clean up the dialogue, while the second assistant handles the daily business and paperwork of the cutting room. Because assistants tend to have their own specialties these days, it’s harder for assistants to learn how to edit. I try to make a point of involving my assistants in watching the dailies, reviewing a scene when it’s cut and so on. This way they have a chance to learn and can someday move into the editor’s chair themselves.”

Both editors agree that working on The East was a very positive experience. Weisblum said, “Before starting, I had a little concern for how it would be working with Zal and Brit, especially since Brit was the lead actress, but also co-writer and producer. However, it was very helpful to have her involved, as she really helped me to understand the intentions of the character. It turned out to be a great collaboration.” Pankow concluded, “I enjoyed the team, but more so, I liked the fact that this film resonates emotionally, as well as politically, with the current times. I was very happy to be able to work on it.”

Originally written for Digital Video magazine

©2013 Oliver Peters

Phil Spector

Phil Spector became famous as a music industry icon. The legendary producer, who originated the “wall of sound” production technique of densely-layered arrangements, worked with a wide range of acts, including the Ronettes, the Righteous Brothers and the Beatles. Unfortunately, fame can also have its infamous side. Spector abruptly came back into public notice through the circumstances of the 2003 death of actress Lana Clarkson and his subsequent criminal trials, culminating in a 2009 conviction for second-degree murder.

The story of his first murder trial and the relationship between Spector (Al Pacino) and defense attorney Linda Kenney Baden (Helen Mirren) form the basis for the new film by HBO Films. Phil Spector, which is executive produced by Barry Levinson (Rain Man), was directed by celebrated screenwriter/director David Mamet (The Unit, The Shield, Hannibal, Wag the Dog). Rather than treat it as a biopic or news story, Mamet chose to take a fictionalized approach that chronicles Spector’s legal troubles as a fall from grace.

One key member of the production team was editor Barbara Tulliver (Too Big to Fail, Lady in the Water, Signs), who has previously collaborated with Mamet. She started as a film editor working on commercials in New York, but quickly transitioned into features. According to Tulliver, “I assisted on David’s first two films and then cut my first feature as an editor with him, so we have established a relationship. I also cut Too Big to Fail for HBO and brought a lot of the same editorial crew for this one, so it was like a big family.”

As with most television schedules, Phil Spector was shot and completed in a time frame and with a budget more akin to a well-funded independent feature than a typical studio film. Tulliver explained, “Our schedule to complete this film was between that of a standard TV project and a feature. If a studio film has six weeks to complete a mix, a film like this would have three. The steps are the same, but the schedule is shrunk. I was cutting during the thirty-day production phase, so I had a cut ready for David a week after he wrapped. HBO likes to see things early, so David had his initial cut done after five weeks, instead of the typical ten-week time frame. Like any studio, HBO will give us notes, but they are very respectful of the filmmakers, which is why they can attract the caliber of talent that they do for these films. At that point we went into a bit of a hold, because David wanted some additional photography and that took a while until HBO approved it.”

The production itself was handled like a film shoot using ARRI Alexa cameras in a single-camera style. An on-set DIT generated the dailies used for the edit. Although you wouldn’t consider this a visual effects film, it still had its share of shots. Tulliver said, “There were a lot of comps that are meat-and-potatoes effects these days. For instance, the film was shot in New York, so in scenes when Spector arrives at the courthouse in Los Angeles, the visual effects department had to build up all of the exteriors to look like LA. There are a number of TV and computer screens, which were all completed in post. Plus a certain amount of frame clean-ups, like removing unwanted elements from a shot.”

Mamet wrote a very lean screenplay, so the length of the cut didn’t present any creative challenges for Tulliver. She continued, “David’s scripts are beautifully crafted, so there was no need to re-arrange scenes. We might have deleted one scene. David makes decisions quickly and doesn’t overshoot. Like any director, he is open to changes in performance; but the actors have such respect for his script that there isn’t a lot of embellishment that might pose editing challenges in another film. Naturally, with a cast like this, the performances were all good. The main challenge we had was to find ways to integrate Spector’s songs into the story – how to use the music to open up scenes in the film and add montages. This meant all of the songs had to be cleared. We were largely successful, except with John Lennon’s Imagine, where Yoko Ono had the final say. Although she was open to our using the song, ultimately she and David couldn’t agree on how it would be integrated creatively into the film.”

Phil Spector was cut digitally on an Avid Media Composer. Like many feature editors, Barbara Tulliver started her career cutting film. She said, “I’m one of the last editors to embrace digital editing. I went into it kicking and screaming, but so did the directors I was working with at the time. When I finally moved over to Avid, they were pretty well established as the dominant nonlinear edit system for films. I do miss some things about editing on film, though. There’s a tactile sense of the film that’s almost romantic. Because it takes longer to make changes, film editing is more reflective. You talk about it more and often in the course of these discussions, you discover better solutions than if you simply tried a lot of variations. In the film days, you talked about the dramatic and emotional impact of these options. This is still the case, but one has to be more vigilant about making that happen – as opposed to just re-cutting a scene twenty different ways, because it is easy and fast – and then not know what you are looking at anymore.”

“Today, I cut the same way I did when I was cutting film. I like to lay out my cut as a road map. I’ll build it rough to get a sense of the whole scene, rather than finesse each single cut as I go. After I’ve built the scene that way, I’ll go back and tweak and trim to fine-tune the cut. Digital editing for me is not all about the bells-and-whistles. I don’t use some of the Avid features, like multi-camera editing or Script Sync. While these are great features, some are labor-intensive to prepare. When you have a minimal crew without a lot of assistants, I prefer to work in a more straightforward fashion.”

Tulliver concluded with this thought, “Although I may be nostalgic about the days of film editing, it would be a complete nightmare to go back to that. In fact, several years ago one director was interested in trying it, so I investigated what it would take. It’s hard to find the gear anymore and when you do, it hasn’t been properly maintained, because no one has been using it. Not to mention finding mag stripe and other materials that you would need. The list of people and labs that actually know how to handle a complete film project is getting smaller each year, so going back would just about be impossible. While film might not be dead as a production medium, it has passed that point in post.”

Originally written for Digital Video magazine.

©2013 Oliver Peters

Zero Dark Thirty

Few films have the potential to be as politically charged as Zero Dark Thirty. Director Kathryn Bigelow (The Hurt Locker, K-19: The Widowmaker) and producer/writer Mark Boal (The Hurt Locker, In the Valley of Elah) have evaded those minefields by focusing on the relentless CIA detective work that led to the finding and killing of Osama bin Laden by US Navy SEALs. Shot and edited in a cinema verite style, Zero Dark Thirty is more of a suspenseful thriller than an action-adventure movie. It seeks to tell a raw, powerful story that’s faithful to the facts without politicizing the events.

The original concept started before the raid on bin Laden’s compound occurred. It was to be about the hunt, but not finding him, after a decade of searching. The SEAL raid changed the direction of the film, but Bigelow and Boal still felt that the story to be told was in the work done on the ground by intelligence operatives that led to the raid. Zero Dark Thirty is based on the perspective of CIA operative Maya (Jessica Chastain), whose job it is to find terrorists. The Maya character is based on a real person.

Zero Dark Thirty was filmed digitally, using ARRI Alexa cameras. This aided Kathryn Bigelow’s style of shooting by eliminating the limitation of the length of film mags. Most scenes were shot with four cameras and some as many as six or seven at once. The equivalent of 1.8 million feet of film (about 320 hours) was recorded. The production ramped up in India with veteran film editor Dylan Tichenor (Lawless, There Will Be Blood) on board from the beginning.

According to Tichenor, “I was originally going to be on location for a short time with Kathryn and Mark and then return to the States to cut. We were getting about seven hours of footage a day and I like to watch everything. When they asked me to stay on for the entire India shoot, we set up a cutting room in Chandigarh, added assistants and Avids to stay up to camera while I was there. Then I rejoined my team in the States when the production moved to Jordan. A parallel cutting room had been set up in Los Angeles, where the same footage was loaded. There, the assistants could also help pull selects from my notes, to make going through the footage and preparing to cut more manageable.”

William Goldenberg (Argo, Transformers: Dark of the Moon) joined the team as the second editor in June, after wrapping up Argo. Goldenberg continued, “This film had a short post schedule and there was a lot of footage, so they asked me to help out. I started right after they filmed the Osama bin Laden raid scene, which was one of the last locations to be shot and the first part of the film that I edited. The assembled film without the raid was about three hours long. There were forty hours of material just for the raid, and it took about three weeks to a month to cut. After I finished that, Dylan and I divided up the workload to refine and hone scenes, with each making adjustments on the other’s cuts. It’s very helpful to have a second pair of eyes in this situation, bouncing ideas back and forth.”

As an Alexa-based production, the team in India, Jordan and London included a three-person digital lab. Tichenor explained, “This film was recorded using ARRIRAW. With digital features in the past, my editorial team has been tasked to handle the digital dailies workload, too. This means the editors are also responsible for dealing with the color space workflow issues and that would have been too much to deal with on this film. So, the production set up a three-person team with a Codex Digilab and Colorfront software in another hotel room to process the ARRIRAW files. These were turned into color-corrected Avid DNxHD media for us and a duplicate set of files for the assistants in LA.” Director of photography Greig Fraser (Snow White and the Huntsman, Killing Them Softly) was able to check in on the digilab team and tweak the one-light color correction, as well as get Tichenor’s input for additional shots and coverage he might need to help tell the story.

Tichenor continued, “Kathryn likes to set up scenes and then capture the action with numerous cameras – almost like it’s a documentary. Then she’ll repeat that process several times for each scene. Four to seven cameras keep rolling all day, so there’s a lot of footage. Plus the camera operators are very good about picking up extra shots and b-roll, even though they aren’t an official second unit team. There are a lot of ways to tell the story and Kathryn gave us – the editors – a lot of freedom to build these scenes. The objective is to have a feeling of ‘you are there’ and I think that comes across in this film. Kathryn picks people she trusts and then lets them do their job. That’s great for an editor, but you really feel the responsibility, because it’s your decisions that will end up on the screen.”

Music for the film was also handled in an unusual manner. According to Goldenberg, “On most films a composer is contracted, you turn the locked picture over to him and he scores to that cut. Zero Dark Thirty didn’t start with a decision on a composer. Like most films, Dylan and I tried different pieces of temp music under some of the scenes that needed music. Of all the music we tried, the work of Alexandre Desplat (Argo, Moonrise Kingdom) fit the best. Kathryn and Mark showed Alexandre a cut to see if he might be interested. He loved it and found time in his schedule to score the film. Right away he wrote seven pieces that he felt were right. We cut those in to fit the scene lengths, which he then used as a template for his final score. It was a very collaborative process.”

Company 3 handled the digital intermediate mastering. Goldenberg explained, “The nighttime raid scene has a very unique look. It was very dark, as shot. In fact, we had to turn off all the lights in the cutting room to even see an image on the Avid monitors. Company 3 got involved early on by color timing about ten minutes of that footage, because we were eager and excited to see what the sequence could look like when it was color timed. When it came to the final DI, the film really took on another layer of richness. We’d been looking at the one-light images so long that it actually took a few screenings to enjoy the image that we’d been missing until then.”

Both Tichenor and Goldenberg have been cutting on Avid Media Composers for years, but this film didn’t tax the capabilities of the system. Tichenor said, “This isn’t an effects-heavy film. Some parts of the stealth helicopters are CG, but in the Avid, we mainly used effects for some monitor inserts, stabilization and split screens.” Goldenberg added, “One thing we both do is build our audio tracks as LCR [left, center, right channel] instead of the usual stereo. It takes a bit more work to build a dedicated center channel, but screenings sound much better.”

Avid has very good multicamera routines, so I questioned whether these were of value with the number of cameras being used. Tichenor replied, “We grouped clips, of course, but not actual multicam. You can switch cameras easily with a grouped clip. I actually did try for one second on a scene to see if I could use the multicam split screen camera display for watching dailies, but no, there was too much going on.” Goldenberg added, “There are some scenes that – although they were using multiple cameras – the operators would be shooting completely different things. For instance, actors in a car with one camera and other cameras grabbing local flavor and street life. So multicam or group clips were less useful in those cases.”

The film’s post schedule took about four months from the first full assembly until the final mix. Goldenberg said, “I don’t think you can say the cut was ever completely locked until the final mix, since we made minor adjustments even up to the end, but there was a point at one of the internal screenings where we all knew the structure was in place. That was a big milestone, because from there, it was just a matter of tightening and honing. The story felt right.” Tichenor explained, “This movie actually came together surprisingly well in the time frame we had. Given the amount of footage, it’s the sort of film that could easily have been in post for two years. Fortunately with this script and team, it all came together. The scenes balanced out nicely and it has a good structure.”

For additional stories:

DV’s coverage of Zero Dark Thirty’s cinematography

An interview with William Goldenberg about Argo

FXGuide talks about the visual effects created for the film.

New York Times articles (here and here) about Zero Dark Thirty

Avid interview with William Goldenberg.

DP/30 interview with sound and picture editors on ZDT.

Originally written for DV magazine / Creative Planet Network

©2012, 2013 Oliver Peters

Hemingway & Gellhorn

Director Philip Kaufman has a talent for telling a good story against the backdrop of history. The Right Stuff (covering the start of the United States’ race into space) and The Unbearable Lightness of Being (the 1968 Soviet invasion of Prague) made their marks, and now his latest, Hemingway & Gellhorn, continues that streak.

Originally intended as a theatrical film, but ultimately completed as a made-for-HBO feature, Hemingway & Gellhorn chronicles the short and tempestuous relationship between Ernest Hemingway (Clive Owen) and his third wife, Martha Gellhorn (Nicole Kidman). The two met in 1936 in Key West, traveled to Spain to cover the Spanish Civil War and were married in 1940. They lived in Havana and after four years of a difficult relationship were divorced in 1945. During her 60-year career as a journalist, Gellhorn was recognized as one of the best war correspondents of the last century. She covered nearly every conflict up to and including the U.S. invasion of Panama in 1989.

The film also reunited Kaufman with film editor Walter Murch – the two had last worked together on The Unbearable Lightness of Being. I recently spoke with Walter Murch upon his return from the screening of Hemingway & Gellhorn at the Cannes Film Festival. Murch commented on the similarities of these projects, “I’ve always been attracted to the intersection of history and drama. I hadn’t worked with Phil since the 1980s, so I enjoyed tackling another film together, but I was also really interested in the subject matter. When we started, I really didn’t know that much about Martha Gellhorn. I had heard the name, but that was about it. Like most folks, I knew the legend and myth of Hemingway, but not really many of the details of him as a person.”

This has been Murch’s first project destined for TV, rather than theaters. He continued, “Although it’s an HBO film, we never treated it as anything other than a feature film, except that our total schedule, including shooting, was about six months long, instead of ten or more months. In fact, seeing the film in Cannes with an audience of 2,500 was very rewarding. It was the first time we had actually screened in front of a theatrical audience that large. During post, we had a few ‘friends and family’ screenings, but never anything with a formal preview audience. That’s, of course, standard procedure with the film studios. I’m not sure what HBO’s plans are for Hemingway & Gellhorn beyond the HBO channels. Often some of their films make it into theatrical distribution in countries where HBO doesn’t have a cable TV presence.”

Hemingway & Gellhorn was produced entirely in the San Francisco Bay area, even though it was a period film and none of the story takes place there. All visual effects were handled by Tippett Studio, supervised by Christopher Morley; the work included placing the actors into scenes built from real archival footage. Murch explained, “We had done something similar in The Unbearable Lightness of Being. The technology has greatly improved since then, and we were able to do things that would have been impossible in 1986. The archival film footage quality was vastly different from the ARRI ALEXA footage used for principal photography. The screenplay was conceived as alternating between grainless color and grainy monochrome scenes to juxtapose the intimate events in the lives of Hemingway and Gellhorn with their presence on the world stage at historical events. So it was always intended for effect, rather than trying to convince the audience that there was a completely continuous reality. As we got into editing, Phil started to play with color, using different tinting for the various locations. One place might be more yellow and another cool or green and so on. We were trying to be true to the reality of these people, but the film also has to be dramatic. Plus, Phil likes to have fun with the characters. There must be balance, so you have to find the right proportion for these elements.”

The task of finding the archival footage fell to Rob Bonz, who started a year before shooting. Murch explained, “An advantage you have today that we didn’t have in the ‘80s is YouTube. A lot of these clips exist on-line, so it’s easier to research what options you might have. Of course, then you have to find the highest quality version of what you’ve seen on-line. In the case of the events in Hemingway & Gellhorn, these took place all over the world, so Rob and his researchers were calling all kinds of sources, including film labs in Cuba, Spain and Russia that might still have some of these original nitrate materials.”

This was Walter Murch’s first experience working on a film recorded using an ARRI ALEXA. The production recorded 3K ARRIRAW files using the Codex recorder, and then it was the editorial team’s responsibility to convert these files for various destinations, including ProRes LT (1280 x 720) for the edit, H.264 for HBO review and DPX sequences for DI. Murch was quite happy with the ALEXA’s image. He said, “Since these were 3K frames we were able to really take advantage of the size for repositioning. I got so used to doing that with digital images, starting with Youth Without Youth, that it’s now just second nature. The ALEXA has great dynamic range and the image held up well to subtle zooms and frame divisions. Most repositionings and enlargements were on the order of 125% to 145%, but there’s one blow-up at 350% of normal.”
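For readers curious what a proxy transcode of that sort looks like in practice, here is a minimal sketch that assembles an ffmpeg command line for a 720p ProRes LT editorial proxy. This is purely illustrative – the file names are hypothetical and the production’s actual ARRIRAW conversion used its own tools, not ffmpeg – but ffmpeg’s prores_ks encoder at profile 1 does produce ProRes 422 LT.

```python
# Sketch: build an ffmpeg command for a 1280x720 ProRes LT editorial proxy.
# File names are hypothetical; this is not the film's actual conversion pipeline.

def prores_lt_proxy_cmd(src: str, dst: str) -> list:
    """Return an ffmpeg argument list for a ProRes LT 720p proxy."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "scale=1280:720",   # downscale 3K originals for offline editing
        "-c:v", "prores_ks",       # ffmpeg's ProRes encoder
        "-profile:v", "1",         # profile 1 = ProRes 422 LT
        "-c:a", "pcm_s16le",       # uncompressed audio for the edit
        dst,
    ]

if __name__ == "__main__":
    cmd = prores_lt_proxy_cmd("A001_C001.mov", "A001_C001_proxy.mov")
    print(" ".join(cmd))
```

Wrapping the command construction in a function like this makes it easy to batch an entire day’s dailies from a shot list.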

In addition to Bonz, the editorial team included Murch’s son Walter (first assistant editor) and David Cerf (apprentice). Walter Murch is a big proponent of using FileMaker Pro for his film editor’s code book and explained some of the changes on this film. “Dave really handled most of the FileMaker jiu-jitsu. It works well with XML, so we were able to go back and forth between FileMaker Pro and Final Cut Pro 7 using XML. This time our script supervisor, Virginia McCarthy, was using ScriptE, which also does a handshake with FileMaker, so that her notes could be instantly integrated into our database. Then we could use this information to drive an action in Final Cut Pro – for instance, the assembly of dailies reels. FileMaker would organize the information about yesterday’s shooting, and then an XML out of that data would trigger an assembly in Final Cut, inserting graphics and text as needed in between shots. In the other direction, we would create visibility-disabled slugs on a dedicated video track, tagged with scene information about the clips in the video tracks below. Outputting XML from Final Cut would create an instantaneous continuity list with time markers in FileMaker.”
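The database-to-editor handshake Murch describes rides on Final Cut Pro 7’s XML interchange format (xmeml). The sketch below shows the general idea in heavily simplified form: dailies records, as a database export might supply them, are turned into a minimal xmeml clip list. The field names and structure are assumptions for illustration; a real xmeml document carries far more metadata (frame rates, timecode, media file references).

```python
# Sketch: turn dailies records (as a database export might supply them)
# into a minimal FCP 7-style xmeml clip list. Fields are hypothetical;
# real xmeml documents carry much more metadata (rate, timecode, media refs).
import xml.etree.ElementTree as ET

def dailies_to_xmeml(records):
    """records: list of dicts with 'name' and 'duration' (in frames)."""
    root = ET.Element("xmeml", version="4")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = "Dailies Assembly"
    track = ET.SubElement(
        ET.SubElement(ET.SubElement(seq, "media"), "video"), "track")
    for rec in records:
        clip = ET.SubElement(track, "clipitem")
        ET.SubElement(clip, "name").text = rec["name"]
        ET.SubElement(clip, "duration").text = str(rec["duration"])
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(dailies_to_xmeml([
        {"name": "Scene 12 Take 3", "duration": 480},
        {"name": "Scene 12 Take 4", "duration": 512},
    ]))
```

Going the other direction – parsing XML exported from the editing system back into database records, as Murch’s team did for continuity lists – is the same mapping in reverse.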

The way Walter Murch organizes his work is a good fit for Final Cut Pro 7, which he used on Hemingway & Gellhorn and continues to use on a current documentary project. In fact, at a Boston FCP user gathering, Murch showed one of the most elaborate screen grabs of an FCP timeline that you can imagine. He takes full advantage of the track structure to incorporate temporary sound effects and music cues, as well as updated final music and effects.

Another trick he mentioned to me was something he referred to as a QuickTime skin. Murch continued, “I edit with the complete movie on the timeline, not in reels, so I always have the full cut in front of me. I started using this simple QuickTime skin technique with Tetro. First, I export the timeline as a self-contained QuickTime file and then re-import the visual. This is placed on the upper-most video track, effectively hiding everything below. As such, it’s like a ‘skin’ that wraps the clips below it, so the computer doesn’t ‘see’ them when you scroll back and forth. The visual information is now all at one location on a hard drive, so the system isn’t bogged down with unrendered files and other clutter. When you make changes, then you ‘razor-blade’ through the QuickTime and pull back the skin, revealing the ‘internal organs’ (the clips that you want to revise) below – thus making the changes like a surgeon. Working this way also gives a quick visual overview of where you’ve made changes. You can instantly see where the skin has been ‘broken’ and how extensive the changes were. It’s the visual equivalent of a change list. After a couple of weeks of cutting, on average, I make a new QuickTime and start the process over.”

Walter Murch is currently working on a feature documentary about the Large Hadron Collider. Murch, in his many presentations and discussions on editing, considers the art part plumbing (knowing the workflow), part performance (instinctively feeling the rhythm and knowing, in a musical sense, when to cut) and part writing (building and then modifying the story through different combinations of picture and sound). Editing a documentary is certainly a great example of the editor as writer. His starting point is 300 hours of material following three theorists and three experimentalists over a four-year period, including the catastrophic failure of the accelerator nine days after it was turned on for the first time. Murch, who has always held a love and fascination for the sciences, is once again at that intersection of history and drama.

Click here to watch the trailer.

(And here’s a nice additional article from the New York Times.)

Originally written for Digital Video magazine (NewBay Media, LLC).

©2012 Oliver Peters