New NLE Color Features

As someone who does color correction as often within an NLE as in a dedicated grading application, it’s nice to see that Apple and Adobe are not treating their color tools as an afterthought. (No snide Apple Color comments, please.) Both the Final Cut Pro 10.1.2 and Creative Cloud 2014 updates include new tools specifically designed to improve color correction. (Click the images below for an expanded view with additional explanation.)

Apple Final Cut Pro 10.1.2

This FCP X update includes a new, built-in LUT (look-up table) feature designed to correct log-encoded camera files into Rec 709 color space. This type of LUT is camera-specific and FCP X now comes with preset LUTs for ARRI, Sony, Canon and Blackmagic Design cameras. This correction is applied as part of the media file’s color profile and, as such, takes effect before any filters or color correction are applied.
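
Under the hood, a camera LUT like this is just a published transfer curve baked into a table. As a rough illustration (not FCP X’s actual implementation), the ARRI Log C (v3) decode to scene-linear can be sketched in a few lines of Python, using the commonly published EI 800 constants; verify them against ARRI’s own white paper before relying on the exact values:

```python
# Commonly published ARRI LogC3 (EI 800) decode constants -- verify against
# ARRI's Log C white paper before relying on the exact values.
CUT = 0.010591
A, B, C = 5.555556, 0.052272, 0.247190
D, E, F = 0.385537, 5.367655, 0.092809

def logc3_to_linear(t: float) -> float:
    """Convert a LogC3-encoded code value (0..1) to a scene-linear value."""
    if t > E * CUT + F:  # logarithmic segment
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E   # linear toe segment

# 18% gray encodes to roughly 0.391 in LogC3 at EI 800
print(round(logc3_to_linear(0.391), 3))  # prints roughly 0.18
```

A Rec 709 display transform is then applied on top of the linearized image; the preset LUTs in FCP X fold both steps into a single table.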

These LUTs can be enabled for master clips in the event, or after a clip has been edited to a sequence (FCP X project). The log processing can be applied to a single clip or a batch of clips in the event browser. Simply highlight one or more clips, open the inspector and choose the “settings” pane. There, access the “log processing” pulldown menu and choose one of the camera options. This applies that camera LUT to all selected clips and it stays with a clip when it’s edited to the sequence. Individual clips in the sequence can later be enabled or disabled as needed. Note that this LUT information does not pass through as part of an FCPXML roundtrip, such as sending a sequence to Resolve for color grading.

Although camera LUTs are specific to the color science used for each camera model’s type of log encoding, this doesn’t mean you can’t use a different LUT. Naturally some will be too extreme and not desirable. Some, however, are close and using a different LUT might give you a desirable creative result, somewhat like cross-processing in a film lab.

Adobe CC 2014 – Premiere Pro CC and SpeedGrade CC

In this CC 2014 release, Adobe added master clip effects that travel back and forth between Premiere Pro CC and SpeedGrade CC via Direct Link. Master clip effects are relational, meaning that the color correction is applied to the master clip and, therefore, every instance of this clip that is edited to the sequence will have the same correction applied to it automatically. When you send the Premiere Pro CC sequence to SpeedGrade CC, you’ll see that the 2014 version now has two correction tabs: master clip and clip. If you want to apply a master clip effect, choose that tab and do your grade. If other sections of the same clip appear on the timeline, they have automatically been graded.

Of course, with a lot of run-and-gun footage, iris levels and lighting change throughout a shot, so one setting might not work for the entire clip. In that case, you can add a second level of grading by tweaking the shot in the clip tab. Effectively you now have two levels of grading. Depending on the show, you can grade in the master clip tab, the clip tab or both. When the sequence goes back to Premiere Pro CC, SpeedGrade CC corrections are applied as Lumetri effects added to each sequence clip. Any master clip effects also “ripple back” to the master clip in the bin. This way, if you cut a new section from an already-graded master clip to that or any other sequence, color correction has already been applied to it.

In the example I created for the image above, the shot was graded as a master clip effect. Then I added more primary correction and a filter effect in clip mode to the first instance of the clip in the sequence. This created a cartoon look for that segment on the timeline. Compare the two versions of these shots – one with only a master clip effect (shots match) and the other with a separate clip effect added to the first (shots are different).

Since master clip effects apply globally to source clips within a project, editors should be careful about changing them or copy-and-pasting them, as you may inadvertently alter another sequence within the same project.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which in turn may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage to a straight Rec 709 conversion or to a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow runs from the log-C image, to the Rec 709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec 709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the image is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-encoded images, so you probably don’t want to use this with images that were already encoded with vibrant Rec 709 colors.
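
The CDL math itself is simple enough to sketch. Per channel, the ASC CDL defines out = (in × slope + offset)^power, with the value clamped at zero before the power function. A minimal Python version (an illustration of the published formula, not ARRI’s code):

```python
def apply_cdl(rgb, slope, offset, power):
    """Apply an ASC CDL primary correction per channel:
    out = (in * slope + offset) ** power, clamped to >= 0 before the power step."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        x = v * s + o
        out.append(max(x, 0.0) ** p)
    return out

# A neutral CDL (slope 1, offset 0, power 1) leaves the pixel untouched
print(apply_cdl([0.5, 0.25, 0.1], (1, 1, 1), (0, 0, 0), (1, 1, 1)))  # -> [0.5, 0.25, 0.1]
```

This also shows why a clipped Rec 709 conversion can be recovered in the look tab: as long as values are carried in floating point, a slope below 1.0 simply scales the out-of-range data back below clipping instead of discarding it.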

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec 709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
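
The .cube file this workflow passes around is plain text, so it’s easy to inspect. Here is a minimal reader and lookup sketch, assuming a simple Resolve-style file (no DOMAIN_MIN/DOMAIN_MAX handling) and using nearest-neighbor snapping where real applications interpolate trilinearly:

```python
def load_cube(path):
    """Parse a minimal Resolve-style .cube 3D LUT.
    Returns (size, table), where table is a flat list of (r, g, b) rows
    in .cube order: red index varies fastest, then green, then blue."""
    size, table = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] == "-":
                table.append(tuple(float(x) for x in line.split()[:3]))
    return size, table

def sample_nearest(size, table, r, g, b):
    """Look up an RGB value (0..1) by snapping to the nearest LUT node."""
    idx = lambda v: min(int(round(v * (size - 1))), size - 1)
    ri, gi, bi = idx(r), idx(g), idx(b)
    return table[ri + gi * size + bi * size * size]
```

Utilities like LUT Utility or Resolve do essentially the same parsing, then apply the table per pixel with trilinear interpolation on the GPU.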

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

The East

Director Zal Batmanglij’s The East caught the buzz at Sundance and SXSW. It was produced by Scott Free Productions with Fox Searchlight Pictures. Not bad for the young director’s sophomore outing. The film takes its name from The East, a group of eco-terrorists and anarchists led by Benji, who is played by Alexander Skarsgard (True Blood). The group engages in “jams” – their term for activist attacks on corporations, which they tape and put out on the web. Sarah, played by Brit Marling (Arbitrage), is a corporate espionage specialist who is hired to infiltrate the group. In that process, she comes to sympathize with the group’s ideals, if not its violent tactics. She finds herself both questioning her allegiances and falling in love with Benji. Marling also co-wrote the screenplay with Batmanglij.

In addition to a thriller plot, the film’s production also had some interesting twists along the way to completion. First, it was shot with an ARRI ALEXA, but unlike most films that use the ALEXA, the recording was done as ProRes 4444 to the onboard SxS cards, instead of ARRIRAW to an external recorder. That will make it one of the few films to date in mainstream release to do so. ProRes dailies were converted into color-adjusted Avid DNxHD media for editing.

Second, the film went through a change of editors due to prior commitments. After the production wrapped and a first assembly of the film was completed, Andrew Weisblum (Moonrise Kingdom, Black Swan) joined the team to cut the film. Weisblum’s availability was limited to four months, though, since he was already committed to editing Darren Aronofsky’s Noah. At that stage, Bill Pankow (The Black Dahlia, Carlito’s Way) picked up for Weisblum and carried the film through to completion.

Andrew Weisblum explained, “When I saw the assembly of The East, I really felt like there was a good story, but I had already committed to cut Noah. I wasn’t quite sure how much could be done in the four months that I had, but left the film at what we all thought was a cut that was close to completion. It was about 80% done and we’d had an initial preview. Bill [Pankow] was a friend, so I asked if he would pick it up for me there, assuming that the rest would be mainly just a matter of tightening up the film. But it turned out to be more involved than that.”

Bill Pankow continued, “I came on board June of last year and took the picture through to the locked cut and the mix in November. After that first screening, everyone felt that the ending needed some work. The final scene between the main characters wasn’t working in the way Zal and Brit had originally expected. They decided to change some things to serve the drama better and to resolve the relationship of the main characters. This required shooting additional footage, as well as reworking some of the other scenes. At that point we took a short hiatus while Zal and Brit rewrote and reshot the last scene. Then another preview and we were able to lock the cut.”

Like nearly all films, The East took on a life of its own in the cutting room. According to Weisblum, “The film changed in the edit from the script. Some of what I did in the cut was to bring in more tension and mystery in the beginning to get us to the group [The East] more quickly. We also simplified a number of story points. Nothing really radical – although it might have felt like that at the time – but just removing tangents that distracted from the main story.” Pankow added, “We didn’t have any length constraints from Fox, so we were able to optimize each scene. Towards the end of the film, there were places that needed extra ‘moments’ to accentuate some of the emotion of what the Sarah character was feeling. In a few cases, this meant re-using shots that might have appeared earlier. In addition to changing the last scene, a few other areas were adjusted. One or two scenes were extended, which in some cases replaced other scenes.”

Since the activists document their activities with video cameras, The East incorporates a number of point-of-view shots taken with low-res cameras. Rather than create these as visual effects shots, low-res cameras were used for the actual photography of that footage. Some video effects were added in the edit and some through the visual effects company. Weisblum has worked as a VFX editor (The Fountain, Chicago), so creating temporary visual effects is second nature. He said, “I usually do a number of things either in the Avid or using [Adobe] After Effects. These are the typical ‘split screen’ effects where takes are mixed to offset the timing of the performances. In this film, there was one scene where two characters [Tim and Sarah] are having a conversation on the bed. I wanted to use a take where Tim is sitting up, but of course, he’s partially covered by Sarah. This took a bit more effort, because I had to rotoscope part of one shot into the other, since the actors were overlapping each other. I’ll do these things whenever I can, so that the film plays in as finished a manner as possible during screening. It also gives the visual effects team a really good roadmap to follow.”

Bill Pankow has worked as an editor or assistant on over forty features and brings some perspective to modern editing. He said, “I started editing digitally on Lightworks, but then moved to Avid. At the time, Lightworks didn’t keep up and Avid gave you more effects and titling tools, which let editors produce a more polished cut. On this film the set-up included two Avid Media Composer systems connected to shared storage. I typically like to work with two assistants when I can. My first assistant will add temporary sound effects and clean up the dialogue, while the second assistant handles the daily business and paperwork of the cutting room. Because assistants tend to have their own specialties these days, it’s harder for assistants to learn how to edit. I try to make a point of involving my assistants in watching the dailies, reviewing a scene when it’s cut and so on. This way they have a chance to learn and can someday move into the editor’s chair themselves.”

Both editors agree that working on The East was a very positive experience. Weisblum said, “Before starting, I had a little concern for how it would be working with Zal and Brit, especially since Brit was the lead actress, but also co-writer and producer. However, it was very helpful to have her involved, as she really helped me to understand the intentions of the character. It turned out to be a great collaboration.” Pankow concluded, “I enjoyed the team, but more so, I liked the fact that this film resonates emotionally, as well as politically, with the current times. I was very happy to be able to work on it.”

Originally written for Digital Video magazine

©2013 Oliver Peters

Phil Spector

Phil Spector became famous as a music industry icon. The legendary producer, who originated the “wall of sound” production technique of densely-layered arrangements, worked with a wide range of acts, including the Ronettes, the Righteous Brothers and the Beatles. Unfortunately, fame can also have its infamous side. Spector abruptly came back into public notice through the circumstances of the 2003 death of actress Lana Clarkson and his subsequent criminal trials, culminating in a 2009 conviction for second-degree murder.

The story of his first murder trial and the relationship between Spector (Al Pacino) and defense attorney Linda Kenney Baden (Helen Mirren) form the basis for the new film by HBO Films. Phil Spector, which is executive produced by Barry Levinson (Rain Man), was directed by celebrated screenwriter/director David Mamet (The Unit, The Shield, Hannibal, Wag the Dog). Rather than treat it as a biopic or news story, Mamet chose to take a fictionalized approach that chronicles Spector’s legal troubles as a fall from grace.

One key member of the production team was editor Barbara Tulliver (Too Big to Fail, Lady in the Water, Signs), who has previously collaborated with Mamet. She started as a film editor working on commercials in New York, but quickly transitioned into features. According to Tulliver, “I assisted on David’s first two films and then cut my first feature as an editor with him, so we have established a relationship. I also cut Too Big to Fail for HBO and brought a lot of the same editorial crew for this one, so it was like a big family.”

As with most television schedules, Phil Spector was shot and completed in a time frame and with a budget more akin to a well-funded independent feature, rather than a typical studio film. Tulliver explained, “Our schedule to complete this film was between that of a standard TV project and a feature. If a studio film has six weeks to complete a mix, a film like this would have three. The steps are the same, but the schedule is shrunk. I was cutting during the thirty-day production phase, so I had a cut ready for David a week after he wrapped. HBO likes to see things early, so David had his initial cut done after five weeks, instead of the typical ten-week time frame. Like any studio, HBO will give us notes, but they are very respectful of the filmmakers, which is why they can attract the caliber of talent that they do for these films. At that point we went into a bit of a hold, because David wanted some additional photography and that took a while until HBO approved it.”

The production itself was handled like a film shoot using ARRI Alexa cameras in a single-camera style. An on-set DIT generated the dailies used for the edit. Although you wouldn’t consider this a visual effects film, it still had its share of shots. Tulliver said, “There were a lot of comps that are meat-and-potatoes effects these days. For instance, the film was shot in New York, so in scenes when Spector arrives at the courthouse in Los Angeles, the visual effects department had to build up all of the exteriors to look like LA. There are a number of TV and computer screens, which were all completed in post. Plus a certain amount of frame clean-ups, like removing unwanted elements from a shot.”

Mamet wrote a very lean screenplay, so the length of the cut didn’t present any creative challenges for Tulliver. She continued, “David’s scripts are beautifully crafted, so there was no need to re-arrange scenes. We might have deleted one scene. David makes decisions quickly and doesn’t overshoot. Like any director, he is open to changes in performance; but, the actors have such respect for his script, that there isn’t a lot of embellishment that might pose editing challenges in another film. Naturally with a cast like this, the performances were all good. The main challenge we had was to find ways to integrate Spector’s songs into the story. How to use the music to open up scenes in the film and add montages. This meant all of the songs had to be cleared. We were largely successful, except with John Lennon’s Imagine, where Yoko Ono had the final say. Although she was open to our using the song, ultimately she and David couldn’t agree to how it would be integrated creatively into the film.”

Phil Spector was cut digitally on an Avid Media Composer. Like many feature editors, Barbara Tulliver started her career cutting film. She said, “I’m one of the last editors to embrace digital editing. I went into it kicking and screaming, but so did the directors I was working with at the time. When I finally moved over to Avid, they were pretty well established as the dominant nonlinear edit system for films. I do miss some things about editing on film, though. There’s a tactile sense of the film that’s almost romantic. Because it takes longer to make changes, film editing is more reflective. You talk about it more and often in the course of these discussions, you discover better solutions than if you simply tried a lot of variations. In the film days, you talked about the dramatic and emotional impact of these options. This is still the case, but one has to be more vigilant about making that happen – as opposed to just re-cutting a scene twenty different ways, because it is easy and fast – and then not know what you are looking at anymore.”

“Today, I cut the same way I did when I was cutting film. I like to lay out my cut as a road map. I’ll build it rough to get a sense of the whole scene, rather than finesse each single cut as I go. After I’ve built the scene that way, I’ll go back and tweak and trim to fine-tune the cut. Digital editing for me is not all about the bells-and-whistles. I don’t use some of the Avid features, like multi-camera editing or Script Sync. While these are great features, some are labor-intensive to prepare. When you have a minimal crew without a lot of assistants, I prefer to work in a more straightforward fashion.”

Tulliver concluded with this thought, “Although I may be nostalgic about the days of film editing, it would be a complete nightmare to go back to that. In fact, several years ago one director was interested in trying it, so I investigated what it would take. It’s hard to find the gear anymore and when you do, it hasn’t been properly maintained, because no one has been using it. Not to mention finding mag stripe and other materials that you would need. The list of people and labs that actually know how to handle a complete film project is getting smaller each year, so going back would just about be impossible. While film might not be dead as a production medium, it has passed that point in post.”

Originally written for Digital Video magazine.

©2013 Oliver Peters

Zero Dark Thirty

Few films have the potential to be as politically charged as Zero Dark Thirty. Director Kathryn Bigelow (The Hurt Locker, K-19: The Widowmaker) and producer/writer Mark Boal (The Hurt Locker, In the Valley of Elah) have evaded those minefields by focusing on the relentless CIA detective work that led to the finding and killing of Osama bin Laden by US Navy SEALs. Shot and edited in a cinema verite style, Zero Dark Thirty is more of a suspenseful thriller than an action-adventure movie. It seeks to tell a raw, powerful story that’s faithful to the facts without politicizing the events.

The original concept started before the raid on bin Laden’s compound occurred. It was to be about the hunt, but not finding him, after a decade of searching. The SEAL raid changed the direction of the film; but, Bigelow and Boal still felt that the story to be told was in the work done on the ground by intelligence operatives that led to the raid. Zero Dark Thirty is based on the perspective of CIA operative Maya (Jessica Chastain), whose job it is to find terrorists. The Maya character is based on a real person.

Zero Dark Thirty was filmed digitally, using ARRI Alexa cameras. This aided Kathryn Bigelow’s style of shooting by eliminating the limitation of the length of film mags. Most scenes were shot with four cameras and some as many as six or seven at once. The equivalent of 1.8 million feet of film (about 320 hours) was recorded. The production ramped up in India with veteran film editor Dylan Tichenor (Lawless, There Will Be Blood) on board from the beginning.

According to Tichenor, “I was originally going to be on location for a short time with Kathryn and Mark and then return to the States to cut. We were getting about seven hours of footage a day and I like to watch everything. When they asked me to stay on for the entire India shoot, we set up a cutting room in Chandigarh, added assistants and Avids to stay up to camera while I was there. Then I rejoined my team in the States when the production moved to Jordan. A parallel cutting room had been set up in Los Angeles, where the same footage was loaded. There, the assistants could also help pull selects from my notes, to make going through the footage and preparing to cut more manageable.”

William Goldenberg (Argo, Transformers: Dark of the Moon) joined the team as the second editor in June, after wrapping up Argo. Goldenberg continued, “This film had a short post schedule and there was a lot of footage, so they asked me to help out. I started right after they filmed the Osama bin Laden raid scene, which was one of the last locations to be shot and the first part of the film that I edited. The assembled film without the raid was about three hours long. There were forty hours of material just for the raid and this took about three weeks to a month to cut. After I finished that, Dylan and I divided up the workload to refine and hone scenes, with each making adjustments on the other’s cuts. It’s very helpful to have a second pair of eyes in this situation, bouncing ideas back and forth.”

As an Alexa-based production, the team in India, Jordan and London included a three-man digital lab. Tichenor explained, “This film was recorded using ARRIRAW. With digital features in the past, my editorial team has been tasked to handle the digital dailies workload, too. This means the editors are also responsible for dealing with the color space workflow issues and that would have been too much to deal with on this film. So, the production set up a three-person team with a Codex Digilab and Colorfront software in another hotel room to process the ARRIRAW files. These were turned into color-corrected Avid DNxHD media for us and a duplicate set of files for the assistants in LA.” Director of photography Greig Fraser (Snow White and the Huntsman, Killing Them Softly) was able to check in on the digilab team and tweak the one-light color correction, as well as get Tichenor’s input for additional shots and coverage he might need to help tell the story.

Tichenor continued, “Kathryn likes to set up scenes and then capture the action with numerous cameras – almost like it’s a documentary. Then she’ll repeat that process several times for each scene. Four to seven cameras keep rolling all day, so there’s a lot of footage. Plus the camera operators are very good about picking up extra shots and b-roll, even though they aren’t an official second unit team. There are a lot of ways to tell the story and Kathryn gave us – the editors – a lot of freedom to build these scenes. The objective is to have a feeling of ‘you are there’ and I think that comes across in this film. Kathryn picks people she trusts and then lets them do their job. That’s great for an editor, but you really feel the responsibility, because it’s your decisions that will end up on the screen.”

Music for the film was also handled in an unusual manner. According to Goldenberg, “On most films a composer is contracted, you turn the locked picture over to him and he scores to that cut. Zero Dark Thirty didn’t start with a decision on a composer. Like most films, Dylan and I tried different pieces of temp music under some of the scenes that needed music. Of all the music we tried, the work of Alexandre Desplat (Argo, Moonrise Kingdom) fit the best. Kathryn and Mark showed Alexandre a cut to see if he might be interested. He loved it and found time in his schedule to score the film. Right away he wrote seven pieces that he felt were right. We cut those in to fit the scene lengths, which he then used as a template for his final score. It was a very collaborative process.”

Company 3 handled the digital intermediate mastering. Goldenberg explained, “The nighttime raid scene has a very unique look. It was very dark, as shot. In fact, we had to turn off all the lights in the cutting room to even see an image on the Avid monitors. Company 3 got involved early on by color timing about ten minutes of that footage, because we were eager and excited to see what the sequence could look like when it was color timed. When it came to the final DI, the film really took on another layer of richness. We’d been looking at the one-light images so long that it actually took a few screenings to enjoy the image that we’d been missing until then.”

Both Tichenor and Goldenberg have been cutting on Avid Media Composers for years, but this film didn’t tax the capabilities of the system. Tichenor said, “This isn’t an effects-heavy film. Some parts of the stealth helicopters are CG, but in the Avid, we mainly used effects for some monitor inserts, stabilization and split screens.” Goldenberg added, “One thing we both do is build our audio tracks as LCR [left, center, right channel] instead of the usual stereo. It takes a bit more work to build a dedicated center channel, but screenings sound much better.”

Avid has very good multicamera routines, so I questioned whether these were of value with the number of cameras being used. Tichenor replied, “We grouped clips, of course, but not actual multicam. You can switch cameras easily with a grouped clip. I actually did try for one second on a scene to see if I could use the multicam split screen camera display for watching dailies, but no, there was too much going on.” Goldenberg added, “There are some scenes that – although they were using multiple cameras – the operators would be shooting completely different things. For instance, actors in a car with one camera and other cameras grabbing local flavor and street life. So multicam or group clips were less useful in those cases.”

The film’s post schedule took about four months from the first full assembly until the final mix. Goldenberg said, “I don’t think you can say the cut was ever completely locked until the final mix, since we made minor adjustments even up to the end; but, there was a point at one of the internal screenings where we all knew the structure was in place. That was a big milestone, because from there, it was just a matter of tightening and honing. The story felt right.” Tichenor explained, “This movie actually came together surprisingly well in the time frame we had. Given the amount of footage, it’s the sort of film that could easily have been in post for two years. Fortunately with this script and team, it all came together. The scenes balanced out nicely and it has a good structure.”

For additional stories:

DV’s coverage of Zero Dark Thirty’s cinematography

An interview with William Goldenberg about Argo

FXGuide talks about the visual effects created for the film.

New York Times articles (here and here) about Zero Dark Thirty

Avid interview with William Goldenberg.

DP/30 interview with sound and picture editors on ZDT.

Originally written for DV magazine / Creative Planet Network

©2012, 2013 Oliver Peters

Hemingway & Gellhorn

Director Philip Kaufman has a talent for telling a good story against the backdrop of history. The Right Stuff (covering the start of the United States’ race into space) and The Unbearable Lightness of Being (the 1968 Soviet invasion of Prague) made their marks, and now his latest, Hemingway & Gellhorn, continues that streak.

Originally intended as a theatrical film, but ultimately completed as a made-for-HBO feature, Hemingway & Gellhorn chronicles the short and tempestuous relationship between Ernest Hemingway (Clive Owen) and his third wife, Martha Gellhorn (Nicole Kidman). The two met in 1936 in Key West, traveled to Spain to cover the Spanish Civil War and were married in 1940. They lived in Havana and after four years of a difficult relationship were divorced in 1945. During her 60-year career as a journalist, Gellhorn was recognized as being one of the best war correspondents of the last century. She covered nearly every conflict up until and including the U.S. invasion of Panama in 1989.

The film also paired another team – that of Kaufman and film editor Walter Murch – both of whom had last teamed up for The Unbearable Lightness of Being. I recently spoke with Walter Murch upon his return from the screening of Hemingway & Gellhorn at the Cannes Film Festival. Murch commented on the similarities of these projects, “I’ve always been attracted to the intersection of history and drama. I hadn’t worked with Phil since the 1980s, so I enjoyed tackling another film together, but I was also really interested in the subject matter. When we started, I really didn’t know that much about Martha Gellhorn. I had heard the name, but that was about it. Like most folks, I knew the legend and myth of Hemingway, but not really many of the details of him as a person.”

This has been Murch’s first project destined for TV, rather than theaters. He continued, “Although it’s an HBO film, we never treated it as anything other than a feature film, except that our total schedule, including shooting, was about six months long, instead of ten or more months. In fact, seeing the film in Cannes with an audience of 2,500 was very rewarding. It was the first time we had actually screened in front of a theatrical audience that large. During post, we had a few ‘friends and family’ screenings, but never anything with a formal preview audience. That’s, of course, standard procedure with the film studios. I’m not sure what HBO’s plans are for Hemingway & Gellhorn beyond the HBO channels. Often some of their films make it into theatrical distribution in countries where HBO doesn’t have a cable TV presence.”

Hemingway & Gellhorn was produced entirely in the San Francisco Bay area, even though it was a period film and none of the story takes place there. All visual effects were done by Tippett Studio, supervised by Christopher Morley, which included placing the actors into scenes using real archival footage. Murch explained, “We had done something similar in The Unbearable Lightness of Being. The technology has greatly improved since then, and we were able to do things that would have been impossible in 1986. The archival film footage quality was vastly different from the ARRI ALEXA footage used for principal photography. The screenplay was conceived as alternating between grainless color and grainy monochrome scenes to juxtapose the intimate events in the lives of Hemingway and Gellhorn with their presence on the world stage at historical events. So it was always intended for effect, rather than trying to convince the audience that there was a completely continuous reality. As we got into editing, Phil started to play with color, using different tinting for the various locations. One place might be more yellow and another cool or green and so on. We were trying to be true to the reality of these people, but the film also has to be dramatic. Plus, Phil likes to have fun with the characters. There must be balance, so you have to find the right proportion for these elements.”

The task of finding the archival footage fell to Rob Bonz, who started a year before shooting. Murch explained, “An advantage you have today that we didn’t have in the ‘80s is YouTube. A lot of these clips exist on-line, so it’s easier to research what options you might have. Of course, then you have to find the highest quality version of what you’ve seen on-line. In the case of the events in Hemingway & Gellhorn, these took place all over the world, so Rob and his researchers were calling all kinds of sources, including film labs in Cuba, Spain and Russia that might still have some of these original nitrate materials.”

This was Walter Murch’s first experience working on a film recorded with an ARRI ALEXA. The production recorded 3K ARRIRAW files using the Codex recorder, and it was then the editorial team’s responsibility to convert these files for various destinations, including ProRes LT (1280 x 720) for the edit, H.264 for HBO review and DPX sequences for the DI. Murch was quite happy with the ALEXA’s image. He said, “Since these were 3K frames we were able to really take advantage of the size for repositioning. I got so used to doing that with digital images, starting with Youth Without Youth, that it’s now just second nature. The ALEXA has great dynamic range and the image held up well to subtle zooms and frame divisions. Most repositionings and enlargements were on the order of 125% to 145%, but there’s one blow-up at 350% of normal.”

In addition to Bonz, the editorial team included Murch’s son Walter (first assistant editor) and David Cerf (apprentice). Walter Murch is a big proponent of using FileMaker Pro for his film editor’s code book and explained some of the changes on this film. “Dave really handled most of the FileMaker jiu-jitsu. It works well with XML, so we were able to go back and forth between FileMaker Pro and Final Cut Pro 7 using XML. This time our script supervisor, Virginia McCarthy, was using ScriptE, which also does a handshake with FileMaker, so that her notes could be instantly integrated into our database. Then we could use this information to drive an action in Final Cut Pro – for instance, the assembly of dailies reels. FileMaker would organize the information about yesterday’s shooting, and then an XML out of that data would trigger an assembly in Final Cut, inserting graphics and text as needed in between shots. In the other direction, we would create visibility-disabled slugs on a dedicated video track, tagged with scene information about the clips in the video tracks below. Outputting XML from Final Cut would create an instantaneous continuity list with time markers in FileMaker.”

The way Walter Murch organizes his work is a good fit for Final Cut Pro 7, which he used on Hemingway & Gellhorn and continues to use on a current documentary project. In fact, at a Boston FCP user gathering, Murch showed one of the most elaborate screen grabs of an FCP timeline that you can imagine. He takes full advantage of the track structure to incorporate temporary sound effects and music cues, as well as updated final music and effects.

Another trick he mentioned to me was something he referred to as a QuickTime skin. Murch continued, “I edit with the complete movie on the timeline, not in reels, so I always have the full cut in front of me. I started using this simple QuickTime skin technique with Tetro. First, I export the timeline as a self-contained QuickTime file and then re-import the visual. This is placed on the upper-most video track, effectively hiding everything below. As such, it’s like a ‘skin’ that wraps the clips below it, so the computer doesn’t ‘see’ them when you scroll back and forth. The visual information is now all at one location on a hard drive, so the system isn’t bogged down with unrendered files and other clutter. When you make changes, then you ‘razor-blade’ through the QuickTime and pull back the skin, revealing the ‘internal organs’ (the clips that you want to revise) below – thus making the changes like a surgeon. Working this way also gives a quick visual overview of where you’ve made changes. You can instantly see where the skin has been ‘broken’ and how extensive the changes were. It’s the visual equivalent of a change list. After a couple of weeks of cutting, on average, I make a new QuickTime and start the process over.”

Walter Murch is currently working on a feature documentary about the Large Hadron Collider. Murch, in his many presentations and discussions on editing, considers the art part plumbing (knowing the workflow), part performance (instinctively feeling the rhythm and knowing, in a musical sense, when to cut) and part writing (building and then modifying the story through different combinations of picture and sound). Editing a documentary is certainly a great example of the editor as writer. His starting point is 300 hours of material following three theorists and three experimentalists over a four-year period, including the catastrophic failure of the accelerator nine days after it was turned on for the first time. Murch, who has always held a love and fascination for the sciences, is once again at that intersection of history and drama.

Click here to watch the trailer.

(And here’s a nice additional article from the New York Times.)

Originally written for Digital Video magazine (NewBay Media, LLC).

©2012 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where a 4K digital scan of a film frame is considered full resolution and a 2K scan half resolution. In the proper use of the term, 4K refers only to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full-aperture film 4K is 4096 x 3112 pixels, while Academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspect ratios are fit within these dimensions and the difference is padded with black pixels.
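The container idea is just arithmetic, so here’s a minimal Python sketch of it. The container dimensions come from the DCI spec; the helper name and the rounding choice are my own, not part of any standard.

```python
# Sketch: fitting a picture of a given aspect ratio into the 4K DCP
# container (4096 x 2160), padding the difference with black pixels.

def fit_in_container(aspect, cont_w=4096, cont_h=2160):
    """Return (image_w, image_h, pad_x, pad_y) for a picture of the
    given aspect ratio centered in the DCP container."""
    cont_aspect = cont_w / cont_h
    if aspect >= cont_aspect:
        # Picture is wider than the container: full width, letterbox.
        w, h = cont_w, round(cont_w / aspect)
    else:
        # Picture is narrower: full height, pillarbox.
        w, h = round(cont_h * aspect), cont_h
    return w, h, (cont_w - w) // 2, (cont_h - h) // 2

# “Flat” 1.85:1 uses the full container height with thin black pillars;
# “Scope” 2.40:1 uses the full width with black rows above and below.
print(fit_in_container(1.85))   # → (3996, 2160, 50, 0)
print(fit_in_container(2.40))   # → (4096, 1707, 0, 226)
```

The 3996 x 2160 result for 1.85:1 matches the familiar “flat” 4K image size, which is a nice sanity check on the container math.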

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely, a shallower depth of field (DoF). Depth of field is a function of aperture, focal length, subject distance and the acceptable circle of confusion. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. Because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.
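You can see this with the standard thin-lens depth-of-field approximation. The sketch below compares the same framing on a 2/3-inch camera and a Super 35 camera; the focal lengths and circle-of-confusion values are common rules of thumb I’ve assumed for illustration, not figures from any manufacturer.

```python
# Sketch: thin-lens depth-of-field approximation, illustrating why the
# larger sensor makes shallow focus easier at an equivalent framing.

def depth_of_field(f_mm, f_number, dist_mm, coc_mm):
    """Total depth of field (mm) for focal length f_mm, the given
    f-number, subject distance dist_mm and circle of confusion coc_mm."""
    h = f_mm * f_mm / (f_number * coc_mm) + f_mm        # hyperfocal distance
    near = dist_mm * (h - f_mm) / (h + dist_mm - 2 * f_mm)
    far = dist_mm * (h - f_mm) / (h - dist_mm)          # valid while dist < h
    return far - near

# Same subject at 3 m, f/2.8. For a similar field of view, assume a
# 2/3" camera needs roughly a 14mm lens and Super 35 roughly a 35mm lens.
dof_two_thirds = depth_of_field(14, 2.8, 3000, coc_mm=0.011)
dof_super35 = depth_of_field(35, 2.8, 3000, coc_mm=0.025)
print(round(dof_two_thirds), round(dof_super35))
```

With these assumed values the 2/3-inch setup holds several meters in focus while the Super 35 setup holds roughly a meter – the longer lens needed for the same framing is what buys the shallower focus.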

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35mm-sized sensor, meaning it has dimensions comparable to a frame of 3-perf 35mm motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D, among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes, and the ALEXA can also generate a 2880 x 1620 output when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16mm film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie) called Möbius.

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35mm-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.
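The contrast between line-skipping and full-sensor downsampling can be sketched in a few lines of NumPy. This is a toy stand-in for the real processing – Canon’s actual pipeline also involves the Bayer pattern and proper filtering – but it shows why one approach discards photosites and the other uses all of them.

```python
# Sketch: line-skipping (as in early HDSLRs) versus averaging every
# 2x2 block of photosites (a simple stand-in for full-sensor
# downsampling). Function names are mine, not Canon's.
import numpy as np

def line_skip(sensor):
    """Keep every other row and column; high frequencies alias."""
    return sensor[::2, ::2]

def block_average(sensor):
    """Average each 2x2 block, so every photosite contributes."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

sensor = np.random.rand(2160, 3840)   # a 4K-photosite mono "sensor"
print(line_skip(sensor).shape)        # (1080, 1920)
print(block_average(sensor).shape)    # (1080, 1920)
```

Both paths land at 1920 x 1080, but the line-skipped version threw away three quarters of the photosites along the way, which is where the aliasing comes from.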

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G and DeckLink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.
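The quad-link approach is easy to picture as array slicing: each of the four HD-SDI outputs carries one 2048 x 1080 quadrant of the 4K frame. A minimal NumPy sketch follows; the quadrant ordering here is my own assumption, not something taken from either vendor’s documentation.

```python
# Sketch: splitting a 4096 x 2160 frame into the four 2048 x 1080
# quadrants that a quad-link HD-SDI output would carry.
import numpy as np

def quadrants(frame):
    """Return (top-left, top-right, bottom-left, bottom-right)."""
    h, w = frame.shape[:2]
    top, left = h // 2, w // 2
    return (frame[:top, :left], frame[:top, left:],
            frame[top:, :left], frame[top:, left:])

frame = np.zeros((2160, 4096, 3), dtype=np.uint8)  # one black 4K RGB frame
for quad in quadrants(frame):
    print(quad.shape)   # each is (1080, 2048, 3)
```

The same four links, driven independently, are what carry two full-bandwidth 2K streams for stereo 3D – which is why the 3D installation base made 4K projection nearly free.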

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters