Adobe Anywhere and Divine Access

Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. Adobe Anywhere extends that approach to collaborative workflows, permitting multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, which installed it as its post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. This is no science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption, culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1Gb/s upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.
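
As a rough sanity check of those transfer figures, the back-of-the-envelope calculation below shows how a 1Gb/s link lines up with the quoted 45 minutes for 200GB. The protocol-overhead factor is an assumption made for illustration, not a number from the production.

```python
# Rough transfer-time check for the figures quoted above (overhead factor is assumed).
link_gbps = 1.0      # gigabit upload connection at the Austin site
payload_gb = 200     # gigabytes of ProRes dailies in a full day's upload
efficiency = 0.6     # assumed real-world throughput after protocol and disk overhead

seconds = (payload_gb * 8) / (link_gbps * efficiency)
print(f"Estimated upload time: {seconds / 60:.0f} minutes")   # prints roughly 44 minutes
```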

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible.  Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.
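
For illustration only, here is a minimal Python sketch of that kind of cloud handoff, written against the plain Amazon S3 API via boto3 rather than the Aspera-accelerated transfers the production actually used. The bucket name, paths and function names are hypothetical.

```python
# Hypothetical sketch: push a day's card offload to S3, then pull it down in Venice for ingest.
import pathlib
import boto3

s3 = boto3.client("s3")
BUCKET = "divine-access-dailies"   # invented bucket name

def upload_card(card_dir: str, day: str) -> None:
    """Upload every file from a camera card offload under a per-day prefix."""
    for f in pathlib.Path(card_dir).rglob("*"):
        if f.is_file():
            s3.upload_file(str(f), BUCKET, f"{day}/{f.relative_to(card_dir)}")

def download_day(day: str, dest: str) -> None:
    """Pull one shoot day's uploads back down for ingest into local storage."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{day}/"):
        for obj in page.get("Contents", []):
            target = pathlib.Path(dest) / obj["Key"]
            target.parent.mkdir(parents=True, exist_ok=True)
            s3.download_file(BUCKET, obj["Key"], str(target))
```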

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT.
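
A minimal sketch of that style of automation might look like the following. The folder names, file extensions and paths are assumptions for illustration, not Smith’s actual script.

```python
# Illustrative only: sort a day's downloaded assets into 'video' and 'audio' subfolders
# under a folder named for that shoot day, mirroring the bin structure described above.
import shutil
from pathlib import Path

VIDEO_EXT = {".mov", ".mxf"}
AUDIO_EXT = {".wav", ".bwf"}

def populate_day_bin(incoming: str, bins_root: str, day: str) -> None:
    day_bin = Path(bins_root) / day
    for sub in ("video", "audio"):
        (day_bin / sub).mkdir(parents=True, exist_ok=True)
    for f in Path(incoming).rglob("*"):
        if f.suffix.lower() in VIDEO_EXT:
            shutil.move(str(f), str(day_bin / "video" / f.name))
        elif f.suffix.lower() in AUDIO_EXT:
            shutil.move(str(f), str(day_bin / "audio" / f.name))

# Example call with invented paths:
# populate_day_bin("/mnt/ingest/incoming", "/mnt/storage/DivineAccess", "Day_12")
```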

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor who shared our director’s creative vision was best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid now offers subscription pricing, so G-Men set up the Divine Access cutting room with a Mac Pro and “rented” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film on Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit, including principal photography and post finishing through to delivery. This gives us capabilities across entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Unbroken

Some films might be hard to believe if it weren’t for the fact that the stories are true. Such is the case with Unbroken, Angelina Jolie’s largest studio film to date as a director. Unbroken tells the amazing true-life story of Louis Zamperini, who competed in the 1936 Olympics in Berlin and then went on to serve as a bombardier on a B-24 in the Pacific during World War II. The plane went down and Zamperini plus two other crew members (one of whom subsequently died) were adrift at sea for 47 days. They were picked up by the Japanese and spent two years in brutal prisoner-of-war camps until the war ended. Zamperini came home to a hero’s welcome, but struggled with what we now know as post-traumatic stress disorder. Through his wife, friends and attending a 1949 Billy Graham crusade in Los Angeles, Zamperini was able to turn his life around by finding the path of forgiveness, including forgiving his former prison guards.

Since Zamperini’s full life story could easily take up several films, Unbroken focuses on his early years, culminating with his heroic return home. Jack O’Connell (300: Rise of an Empire, Starred Up) plays the lead role as Louis Zamperini. Jolie pulled in a “dream team” of creative professionals to help her realize the vision for the film, including Roger Deakins (Prisoners, Skyfall, True Grit) as director of photography, Alexandre Desplat (The Imitation Game, The Grand Budapest Hotel, The Monuments Men) to compose the score and Joel and Ethan Coen (Inside Llewyn Davis, True Grit, No Country for Old Men) to polish the final draft of the screenplay. ILM was the main visual effects house and rounding out this team were Tim Squyres (Life of Pi, Lust, Caution, Syriana) and William Goldenberg (The Imitation Game, Transformers: Age of Extinction, Argo), who co-edited Unbroken.

Australia to New York

The film was shot entirely in Australia during a 67-day period starting in October 2013. The crew shot ARRIRAW on ALEXA XT cameras and dailies were handled by EFILM in Sydney. Avid DNxHD 115 files were encoded and sent to New York via Aspera for the editors. Dailies for the studio were sent via the PIX system and to Roger Deakins using the eVue system.

Tim Squyres started with the project in October as well, but was based in New York after spending the first week in Australia. According to Squyres, “This was mainly a single camera production, although about one-third of the film used two cameras. Any takes that used two cameras to cover the same action were grouped in the Avid. Angie and Roger were very deliberate in what they shot. They planned carefully, so there was a modest amount of footage.” While in New York, Squyres and his assistants used three Avid Media Composer 7 systems connected to Avid ISIS shared storage. This grew to five stations when the production moved to the Universal lot in Los Angeles and eventually eight when William Goldenberg came on board.

During production Squyres was largely on his own to develop the first assembly of the film. He continues, “I really didn’t have extensive contact with Angie until after she got back to LA. After all, directing on location is a full-time job, plus there’s a big time zone difference. Before I was hired, I had a meeting with her to discuss some of the issues surrounding shooting in a wave tank, based on my experiences with Life of Pi. This was applicable to the section of Unbroken when they are lost at sea in a life raft. While they were in Australia, I’d send cut scenes and notes to her over PIX. Usually this would be a couple of versions of a scene. Sometimes I’d get feedback, but not always. This is pretty normal, since the main thing the director wants to know is if they have the coverage and to be alerted if there are potential problems.”

Editor interaction

The first assembly of the film was finished in February and Squyres continued to work with Jolie to hone and tighten the film. Squyres says, “This was my first time working with an actor/director. I really appreciated how much care she took with performances. That was very central to her focus. Even though there’s a lot of action in some of the scenes, it never loses sight of the characters. She’s also a very smart director and this film has hundreds of visual effects. She quickly picked up on how effects could be used to enhance a shot and how an effect, like CG water, needed to be tweaked to make it look realistic.”

William Goldenberg joined the team in June and stayed with it until October 2014. This was Tim Squyres’ first time working with a co-editor and since the film was well past the first cut stage, the two handled their duties differently than two editors might on other films. Goldenberg explains, “Since I wasn’t on from the beginning, we didn’t split up the film between us, with one editor doing the action scenes and the other the drama. Instead, we both worked on everything very collaboratively. I guess I was brought in to be another set of eyes, since this was such a huge film. I’d review a section and tweak the cut and then run it by Tim. Then the two of us would work through the scene with Angie, kicking around ideas. She’s a very respectful person and wanted to make sure that everyone’s opinion was heard.” Squyres adds, “Each editor would push the other to make it better.”

This is usually the stage at which a film might be completely rearranged in the edit and deviate significantly from the script. That wasn’t the case with Unbroken. Goldenberg explains, “The movie is partly told through flashbacks, but these were scripted and not a construct created during editing. We experimented with rearranging some of the scenes, but in the end, they largely ended up back as they were scripted.”

The art of cutting dialogue scenes

Both editors are experienced Media Composer users and one unique Avid tool used on Unbroken was Script Integration, often referred to as ScriptSync. This feature enables the editor to see the script as text in a bin with every take synced to the individual dialogue lines. Squyres says, “It had already been set up when I came on board to cut a previous film and it was handy for learning the footage there. So I decided to try it on this film. Although handy, I didn’t find it essential for me, because of how I go through the dailies. I typically build a sequence out of pieces of each take, which gives me a set of ‘pulls’ – my favorite readings from each setup strung together in script order. Then I’ll cut at least two versions of a scene based on structure or emotion – for example, a sad version and an angry version. When I’ve cut the first version, I try not to repeat shots in building the second version. By using the same timeline and turning on dupe detection, it’s easy to see if you’ve repeated something. Assembling is about getting a good cut, but it’s also about learning all the options and getting used to the footage. By having a few versions of each scene, you are already prepared when the director wants to explore alternatives.”

Goldenberg approaches the raw footage in a similar fashion. He says, “I build up my initial sequences in a way that accomplishes the same thing that ScriptSync does. The sequence will have each line reading from each take in scene order going from wide to tight. This results in a long sequence for a scene, with each line repeated, but I can immediately see what all the coverage is for that scene. When a director wants to see alternates for any reading, I can go back to this sequence and then it’s easy to evaluate the performance options.”

Building up visual effects and the score

Unbroken includes approximately 1100 visual effects shots, ranging from water effects, composites and set extensions to invisible split screens used to adjust performance timing. Squyres explains, “Although the editing was very straightforward, we did a lot of temporary compositing. I would identify a shot and then my assistant would do the work in either Media Composer or After Effects. The split screens used for timing are a common example. We had dozens of those. On the pure CG shots, we had pre-vis clips in the timeline and then, as more and more polished versions of the effects came in, these would be replaced in the sequence.” Any temp editorial effects were redone in final form by one of the VFX suppliers or during the film’s digital intermediate finishing at EFILM in Hollywood.

In addition to temp effects, the editors also used temp music. Squyres explains, “At the beginning we used a lot of temp music pulled from other scores. This helps you understand what a scene is going to be, especially if it’s a scene designed for music. As we got farther into the cut, we started to receive some of Alexandre’s temp cues. These were recorded with synth samples, but still sounded great and gave you a good sense of the music. The final score was recorded using an orchestra at Abbey Road Studios, which really elevated the score beyond what you can get with samples.”

Finding the pace

In a performance-driven film, the editing is all about drama and pacing. Naturally Squyres and Goldenberg have certain scenes that stand out to them. Squyres says, “A big scene in the film is when Louis meets the head guard for the first time. You have to set up that slow burn as they trade lines. Later in the film, there’s a second face-off, but now the dynamic between them is completely different. It’s a large, complex scene without spoken dialogue, so you have to get the pacing right, based on feel, not on the rhythm of the dialogue. The opening scene is an action air battle. It was shot in a very measured fashion, but you had to get the right balance between the story and the characters. You have to remember that the film is personal and you can’t lose sight of that.”

For Goldenberg, the raft sequence was the most difficult to get right. He says, “You want it to feel epic, but in reality – being lost at sea on a raft – there are long stretches of boredom and isolation, interrupted by some very scary moments. You have to find the right balance without the section feeling way too long.”

Both editors are fans of digital editing. Goldenberg puts it this way, “Avid lets you be wrong a lot of the time, because you can always make another version without physical constraints. I feel lucky, though, in having learned to work in the ‘think before you edit’ era. The result – more thinking – means you know where you are going when you start to cut a scene. You are satisfied that you know the footage well. I like to work in reels to keep the timeline from getting too cumbersome and I do keep old versions, in case I need to go back and compare or restore something.”

The film was produced with involvement by Louis Zamperini and his family, but unfortunately he died before the film was completed. Goldenberg sums it up, “It was a thrill. Louie was the type of person you hope your kids grow up to be. We were all proud to be part of telling the story for Louie.” The film opened its US release run on Christmas Day 2014.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Birdman

It’s rare, but exhilarating, when you watch a movie with a unique take on film’s visual language, without the crutch of extensive computer-generated imagery. That’s precisely the beauty of Birdman or (The Unexpected Virtue of Ignorance). The film is directed and co-written by Alejandro González Iñárritu (Biutiful, Babel, 21 Grams) and features a dynamic ensemble cast dominated by Michael Keaton’s lead performance as Riggan Thomson. While most films are constructed of intercutting master shots, two-shots and singles, Birdman is designed to look like a continuous, single take. While this has been done before in films, approximately 100 minutes out of the two-hour movie appear as a completely seamless composite of lengthy Steadicam and hand-held takes.

Riggan Thomson (Keaton) is a movie star who rode to fame as the comic book super hero Birdman, but it’s a role that he walked away from. Searching for contemporary relevance, Riggan has decided to mount a Broadway play based on the real-life Raymond Carver short story, What We Talk About When We Talk About Love. The film takes place entirely at the historic St. James Theater near Times Square and the surrounding area in New York. Principal photography occurred over a 30-day period, both at the real theater and Times Square, as well as at Kaufman Astoria Studios. The soundstage sets were for the backstage and dressing room portions of the theater. Throughout the film, Riggan struggles with the challenges of getting the play to opening day and dealing with his fellow cast members, but more notably confronting his super ego Birdman, seen in person and heard in voice-over. This is, of course, playing out in Riggan’s imagination. The film, like the play within the film, wrestles with the overall theme of the confusion between love and affection.

Bringing this ambitious vision to life fell heavily upon the skills of the director of photography and the editors. Emmanuel Lubezki, known as Chivo, served as DoP. He won the 2014 Cinematography Oscar for Gravity, a film that was also heralded for its long, seemingly continuous shots. Stephen Mirrione (The Monuments Men, Ocean’s Thirteen, Traffic) and Douglas Crise (Arbitrage, Deception, Babel) teamed up for the edit. Both editors had worked together before, as well as with the director. Mirrione started during the rehearsal process. At the time of production, Crise handled the editing in New York, while Mirrione, who was back in LA by then, received dailies, checked in on the cut and sent scenes back and forth with changes every day.

It starts with preparation

Stephen Mirrione explains, “When I first saw what they wanted to do, I was a bit skeptical that it could be pulled off successfully. Just one scene that didn’t work would ruin the whole film. Everything really had to align. One of the things that was considered was to tape and edit all of the rehearsals. This was done about two months before the principal photography was set to start. The rehearsals were edited together, which allowed Alejandro to adjust the script, pacing and performances. We could see what would work and what wouldn’t. Before cameras even rolled, we had an assembly made up of the rehearsal footage and some of the table read. So, together with Alejandro, we could begin to gauge what the film would look and sound like, where a conversation was redundant, where the moves would be. It was like a pre-vis that you might create for a large-scale CGI or animated feature.”

Once production started in New York, Douglas Crise picked up the edit. Typically, the cast and crew would rehearse the first half of the day and then tape during the second half. ARRI ALEXA cameras were used. The media was sent to Technicolor, who would turn around color corrected Avid DNxHD36 dailies for the next day. The team of editors and assistants used Avid Media Composer systems. According to Crise, “I would check the previous day’s scenes and experiment to see how the edit points would ‘join’ together. You are having to make choices based on performance, but also how the camera work would edit together. Alejandro would have to commit to certain editorial decisions, because those choices would dictate where the shot would pick up on the next day. Stephen would check in on the progress during this period and then he picked up once the cut shifted to visual effects.”

Naturally the editing challenge was to make the scenes flow seamlessly in both a figurative and literal sense. “The big difference with this film was that we didn’t have the conventional places where one scene started and another ended. Every scene walks into the next one. Alejandro described it as going down a hill and not stopping. There wasn’t really a transition. The characters just keep moving on,” Crise says.

“I think we really anticipated a lot of the potential pitfalls and really prepared, but what we didn’t plan on were all the speed changes,” Mirrione adds. “At certain points, when the scene was not popping for us, if the tempo was a little off, we could actually dial up the pace or slow it down as need be without it being perceptible to the audience and that made a big difference.”

Score and syncopation

To help drive pace, much of the track uses a drum score composed and performed by Mexican drummer Antonio Sanchez. In some scenes within the film, the camera also pans over to a drummer with a kit who just happens to be playing in an alley or even in a backstage hallway. Sanchez and Iñárritu went into a studio and recorded sixty improvised tracks based on the emotions that the film needed. Mirrione says, “Alejandro would explain the scene to the drummer in the studio and then he’d create it.” Crise continues, “Alejandro had all these drum recordings and he told me to pick six of my favorites. We cut those together so that he could have a track that the drummer could mimic when they shot that scene. He had the idea for the soundtrack from the very beginning and we had those samples cut in from the start, too.”

“And then Martín [Hernández, supervising sound editor] took it to another level. Once there was a first pass at the movie, with a lot of those drum tracks laid in as an outline, he spent a lot of time working with Alejandro, to strip layers away, add some in, trying a lot of different beats. Obviously, in every movie, music will have an impact on point of view and mood and tone. But with this, I think it was especially important, because the rhythm is so tied to the camera and you can’t make those kinds of cadence adjustments with as much flexibility as you can with cuts. We had to lean on the music a little more than normal at times, to push back or pull forward,” Mirrione says.

The invisible art

The technique of this seamless sequence of scenes really allows you to get into the head of Riggan more so than other films, but the editors are reserved in discussing the actual editing technique. Mirrione explains, “Editing is often called the ‘invisible art’. We shape scenes and performances on every film. There has been a lot of speculation on the internet about the exact number and length of shots. I can tell you it’s more than most people would guess. But we don’t want that to be the focus of the discussion. The process is still the same of affecting performance and pace. It’s just that the dynamic has been shifted, because much of the effort was front-loaded in the early days. Unlike other films, where the editing phase after production is completed focuses on shaping the story, on Birdman it was about fine-tuning.”

Crise continues, “Working on this film was a different process and a different way to come up with new ideas. It’s also important to know that most of the film was shot practically. Michael [Keaton] really is running through Times Square in his underwear. The shots are not comped together with green screen actors against CGI buildings.” There are quite a lot of visual effects used to enhance and augment the transitions from one shot to the next to make these seamless. On the other hand, when Riggan’s Birdman delusions come to life on screen, we also see more recognizable visual effects, such as a brief helicopter and creature battle playing out over the streets of New York.

Winking at the audience

The film is written as a black comedy with quite a few insider references. Clearly, the casting of Michael Keaton provides an allusion to his real experiences in starting the Batman film franchise and, in many ways, the whole super hero film genre. However, there was also a conscious effort during rehearsals and tapings to adjust the dialogue in ways that kept these references as current as possible. Crise adds, “Ironically, in the scenes on the rooftop there was a billboard in the background behind Emma Stone and Edward Norton, with a reference to Tom Hanks. We felt that audiences would believe that we created it on purpose, when in fact it was a real billboard. It was changed in post, just to keep from appearing to be an insider reference that was too obvious.”

The considerations mandated during the edit by a seamless film presented other challenges, too: simple concerns like where to structure reel breaks and how to hand off shots for visual effects. Mirrione points out, “Simple tasks such as sending out shots for VFX, color correction, or even making changes for international distribution requirements were complicated by the fact that once we finished, there weren’t individual ‘shots’ to work with – just one long never ending strand. It meant inventing new techniques along the way. Steven Scott, the colorist, worked with Chivo at Technicolor LA to perfect all the color and lighting and had to track all of these changes across the entire span of the movie. The same way we found places to hide stitches between shots, they had to hide these color transitions, which are normally changed at the point of a cut from one shot to the next.”

In total, the film was in production and post for about a year, starting with rehearsals in early 2013. Birdman was mixed and completed by mid-February 2014. While it posed a technical and artistic challenge from the start, everything amazingly fell into place, highlighted by perfect choreography of cast, crew and the post production team. It will be interesting to see how Birdman fares during awards season, because it succeeds on so many levels.

Originally written for Digital Video magazine / CreativePlanetNetwork

©2014, 2015 Oliver Peters

DaVinci Resolve 11

With the release of DaVinci Resolve 11, Blackmagic Design has firmly moved into the ranks of nonlinear editing. In addition to a redesigned logo and splash screen, Resolve 11 sports more editorial tools than ever before. For the first time, it is worthy of consideration as your NLE of choice. I have covered previous releases of Resolve, so I’ll only briefly touch on color correction in this article.

As before, DaVinci Resolve 11 comes in four versions: Resolve Lite (free), Resolve Software ($995), Resolve (with the control surface for $29,995) and the Linux configuration. Both free and paid versions support a variety of third-party control surfaces, with the most popular being the Avid Artist Color and the Tangent Devices Element panels. Resolve Lite supports output up to UltraHD (3840 x 2160). It includes most of the features of the paid software, except collaboration, stereo 3D and noise reduction. Although you can operate Resolve without any third-party i/o hardware, if you want external monitoring or output to tape, you’ll need to purchase one of Blackmagic Design’s PCIe capture cards or Thunderbolt i/o devices.

Color Match

The interface is divided into four modules: media, edit, color and deliver. All color correction occurs in the color module. Here you’ll find a wealth of grading tools, including camera raw settings, color wheels, primary sliders and more. Color Match is a new correction tool. If you included a color chart when you shot your footage, Resolve can use the image of that chart to set an automatic correction for the color balance of the scene.

Color Match features three template settings for charts, including X-Rite ColorChecker, Datacolor SpyderCheckr and DSC Labs OneShot. If you used one of these charts and it’s in your footage, then select the appropriate set of color swatches in the Color Match menu. Next, select the Color Chart grid from the viewer tools, which opens an overlay for that chart. Corner-pin the overlay so that the grid lines up over the color swatches in the image and hit the Match button. Resolve will instantly adjust its curves to correct the color balance of the shot, so that the chart in the image matches the template for that chart in Resolve. Now you can copy this grade and apply it to the rest of the shots within that same set-up.
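
Under the hood, this kind of automatic chart match amounts to fitting a correction that maps the sampled swatch colors onto the chart’s known reference values. The sketch below shows a generic least-squares version of that idea; it is illustrative only, not Blackmagic’s actual Color Match implementation, and uses three toy swatches where a real chart provides 18 or 24.

```python
# Illustrative chart-match math: fit a 3x3 matrix that maps sampled swatch RGBs
# to the chart's reference RGBs, then apply it to pixels from the same setup.
import numpy as np

measured = np.array([[0.42, 0.31, 0.25],    # swatch RGBs sampled from the shot (toy values)
                     [0.60, 0.58, 0.55],
                     [0.22, 0.24, 0.38]])
reference = np.array([[0.45, 0.32, 0.26],   # the chart template's known values for those swatches
                      [0.62, 0.62, 0.62],
                      [0.20, 0.22, 0.42]])

# Solve measured @ M ~= reference for the correction matrix M in a least-squares sense.
M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(pixel_rgb):
    """Apply the fitted correction to one RGB value."""
    return np.clip(np.asarray(pixel_rgb) @ M, 0.0, 1.0)

print(correct([0.42, 0.31, 0.25]))   # lands on (or near) the first reference swatch
```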

Although this isn’t a one-shot fix, it’s intended to give you a good starting point for your grade. While this feature demos really well and is certainly a whizz-bang attention-getter, it has the most value for novice users or for DITs who need to get a quick grade for dailies while on location.

Editing

The biggest spark of interest I’ve seen for Resolve 11 is due to the editing tools. As an NLE, it’s somewhat of a mash-up between Final Cut Pro 7 and Final Cut Pro X. It copies a lot of X’s design aesthetic and even some features, like clip skimming in the media bins; yet, it is clearly track-based. For editors who like a lot of FCP X, but are put off by Apple’s trackless, magnetic timeline, Resolve 11 becomes a very tantalizing, cross-platform alternative.

Resolve 11’s edit module most closely aligns with Final Cut Pro 7, although there is no multi-cam feature, yet. The keyboard commands mirror the FCP 7 set, as do menu options and much of the working style. One big improvement is a very advanced trim mode, which offers good asymmetrical trimming. If you start a project from the beginning in Resolve 11, you can easily import media, organize clips into bins/folders, add logging information and, in general, do all of the nuts-and-bolts things you do in every editing application.

The interface uses a modal design and supports dual and single monitor configurations. Although there are numerous panels and windows that can be opened as needed, the general layout is fixed. Certain functions are restricted to the edit module and others to the color module. For example, transition effects, titles and generators would be added and adjusted in the edit module, while working with a standard timeline. Color correction and other image effects are reserved for the color module, which uses a node-based hierarchy. Resizing and repositioning can be done in either.

Resolve includes an inspector pane on the right side of the timeline viewer that is much like that of FCP X. Here you’ll find composite, transform, cropping, retiming and scaling controls. If you select a transition, then its adjustment controls appear in the inspector panel. Resolve supports the OpenFX video plug-in architecture. Third-party transitions will show up in the edit mode’s OpenFX library, while the filters only show up when you are in the color mode. Like FCP X, inspector controls are limited to sliders, color pickers and numerical entry, with no allowance for custom third-party plug-in interfaces.

My biggest beef is performance. Resolve 11 is optimized to pass the highest quality images through its pipeline, which seems to impede real-time playback, even with ungraded footage. In other NLEs, hitting play or the space bar brings you to full-speed, real-time playback in a fraction of a second. In Resolve it takes a few seconds, which is clearly evident in its dropped-frame indicator. Even with proper real-time playback, video motion does not look as smooth and fluid in the viewer as I would expect. There are a number of factors that affect this, including drive performance (high-performance storage is good), GPU performance (one or more high-end cards are desirable) and age of the machine (a new top-of-the-line system is ideal). Resolve is also not as gracious with a wide range of native media types as some of the other NLEs.

Color grades will affect performance. What if you start grading in the color mode and bounce back into the edit mode? This has the same impact on the computer as applying several filters in a traditional NLE. Add a stack of effects in most NLEs and playback performance through those clips is often terrible until you render. To mitigate this issue, Resolve includes smart caching, which is a similar sort of background render to that of FCP X. The software renders clips with a grade or an effect applied anytime the machine is idle.

Audio in Resolve 11 is still in the very early stages. There is no audio plug-in architecture. Hopefully Blackmagic will add AU and/or VST support down the road. Having multiple audio tracks also hurts system performance. Complex audio in the timeline quickly choked my system. Having even a few tracks caused the audio to drop out during playback. Resolve employs a similar track design to Adobe Premiere Pro. This means adaptive tracks, where a single timeline track can contain one mono channel, two stereo channels or multiple surround channels. This is an interesting design, but it seems to impact round-tripping between other applications. For example, I’ve exported multi-channel timelines via XML. In this process, when I brought that timeline into FCP 7 or Premiere Pro, these tracks only showed up as mono tracks with one channel of audio.

Roundtrips

Where Resolve 11 really shines is in its roundtrip capabilities. It can take media and edit list formats from a range of systems, then let you process the media and finally output a new set of media files and corresponding lists. EDL, AAF, XML and FCPXML formats are supported, making Resolve 11 one of the better cross-application conversion tools. For instance, you can edit in FCP X, conform and grade in Resolve 11 and then output that in a compatible format to finish in the same or a different NLE, such as Media Composer, Smoke, Premiere Pro, etc. Of course, with Resolve 11, you could simply finish in Resolve and output final deliverables from right within the application. That’s clearly the design goal Blackmagic had in mind.

Personally, I still prefer to use the roundtrip method, but there are a few wrinkles in this process. I have already mentioned audio issues. Another is resizing, such as FCP X’s “spatial conform” and Premiere Pro’s “scale to frame size”. These are automatic timeline functions to fit oversized images into smaller timeline frames, such as putting 4K media into a 1080 timeline. This feature automatically down-scales the source image so that either horizontal or vertical dimensions match. Unfortunately some of this information gets lost in the translation between applications.

I recently ran into this on two jobs with 4K RED media and Resolve 11. The first was a project cut in FCP X. The roundtrip went fine, but when the newly rendered 1080 media was back in FCP X, the application still thought it needed to enable spatial conform, which had been used in the offline edit. Disabling spatial conform caused FCP X to blow up the 1080 media by 200%. The simple fix was just to leave spatial conform on and let FCP X render this media on export. There were no visible issues that I could detect.

The second was a music video project that the director had cut on Premiere Pro CC2014. There was extensive reframing and repositioning throughout. Importing this timeline into Resolve 11 was a complete disaster and would have meant rebuilding all of this work to reframe images. Ultimately I opted to use SpeedGrade CC2014 on this particular job, since it correctly translated the Premiere Pro timeline via Adobe’s Direct Link feature.

As a general rule, I would recommend that if you know you are going outside of the application, do not use any of these automatic resizing tools in the offline NLE. Instead, manually set the scale and position values, because Resolve does an excellent job of interpreting these parameters when set during the offline edit.
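
The arithmetic behind that advice is simple. Assuming, for example, a 4096 x 2160 source clip in a 1920 x 1080 offline timeline (these dimensions are just an illustration), an explicit scale value can be computed and entered manually, and that number survives the trip between applications far more reliably than an automatic conform flag.

```python
# Compute an explicit scale value instead of relying on "scale to frame size" / "spatial conform".
source_w, source_h = 4096, 2160       # example 4K source frame (assumed dimensions)
timeline_w, timeline_h = 1920, 1080   # HD offline timeline

scale_pct = min(timeline_w / source_w, timeline_h / source_h) * 100
print(f"Set clip scale to {scale_pct:.1f}%")   # prints 46.9% for this example
```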

OpenFX

Blackmagic added the OpenFX architecture with Resolve 10, but now that Resolve 11 is out, new developers are joining the party. On my test system I installed both the FilmConvert 2.0 plug-in and the Boris Continuum Complete 9 package. The filters are accessed in the color module and are applied to nodes, just like other grading functions. Although other host versions of the FilmConvert filter include color wheels within the filter’s control panel, they are excluded in the OpenFX version. You do get the camera and film emulsion presets. This is my favorite film emulation and grain plug-in and it makes a suitable complement to Resolve.

Boris FX’s BCC 9 for Resolve includes most of the same filters as for other hosts, including the new FX Browser. You can launch it from inside the Resolve interface, but when I tried to use it, the browser crashed the application. I’m running the public beta of 11.1, so that could be part of it. Otherwise, the filters themselves worked fine. So, if you need to add a glow, cartoon effect or spray paint noise to a shot, you can do so from inside Resolve with BCC 9.

OpenFX filters installed for other applications also show up in Resolve. I discovered this during my review of the HP Z1G2 workstation. Sony Vegas Pro 13 was installed, which also uses OpenFX. The NewBlueFX filters that were installed for Vegas also showed up in Resolve 11 on that machine.

A key point to remember is to apply OpenFX filters in a separate node. If you need to change the filter, simply delete the node and create a new one for a different filter. That way you won’t lose any of the correction applied to the clip.

Collaboration

Resolve 11 enables collaboration among multiple users on the same project. This requires a paid version of Resolve 11 for each collaborator, a network and a shared DaVinci Resolve database. To test this feature, I enlisted the help of colorist and trainer Patrick Inhofer (Tao of Color, Mixing Light). Patrick set up a simple Ethernet network between a Mac Pro and a MacBook Pro, each running a paid version of Resolve 11. You have to set up a shared project and open both Resolve seats in the collaboration mode. Once both systems are open with the same project, then it is possible to work interactively.

This is not like two or more Avid Media Composers running in a Unity-style sharing configuration. Rather, this approach is intended for an editor and a colorist to be able to simultaneously work on one timeline at the same time. One person is the “owner” of the project, while anyone else is a “collaborator”. In this model, the “owner” has control of the editing timeline and the “collaborator” is the colorist working in the color module. You could also have a third collaborator logging metadata for clips.

In the collaboration mode, a bell-shaped alert icon is added to the lower left corner of the interface. Whenever the colorist adds or changes a correction on one or more clips and publishes his changes, the editor receives an alert to update the clips. When the update is made, the colorist’s changes become visible on the clips in the editor’s timeline. If the editor makes editorial changes to the timeline, such as trimming, adding or deleting clips, then he or she must save the project. Once saved, the colorist can reload the project to see these updates.

As long as you follow these procedures, things work well; however, in our tests, when we went the other direction, updates didn’t happen correctly. For example, color changes made by the editor or timeline edits made by the colorist, did not show up as expected on the other person’s system. Collaboration worked well, once we both got the hang of it, but the feature does feel like a 1.0 version. Updating changes worked, but you can also reject a change by choosing “revert”. This is supposed to take the clip back to the previous grade. Instead, it dropped the grade entirely and went back to an un-corrected version of the clip with all nodes removed.

DaVinci Resolve 11 is a powerful new version of this best-in-class color grading application. Although you might not edit a project from start-to-finish in Resolve, you certainly could. For now, Blackmagic Design is positioning Resolve as an NLE designed for finishing. Edit your creative cut in Media Composer, Final Cut Pro or Premiere Pro – mix in Logic Pro X, Pro Tools or Audition – and then bring them all together in Resolve 11. As we all know, clients like to tweak the cut until the very end. Now the grading environment can enjoy more interactivity than ever before.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals, including director of photography Jeff Cronenweth, editor Kirk Baxter and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage. That’s the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two best editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising the application’s awareness among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode edit media, view dailies, archive the media to LTO data tape and transfer files to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we also uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
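
The framing math is easy to verify with the numbers Nelson gives: the 1920 x 800 timeline sits inside the 2304 x 1152 offline media at the same proportion as the 5K extraction sits inside the full 6K capture, so a reposition made in the offline translates directly back to the camera originals.

```python
# Checking the proportions quoted above (all pixel dimensions taken from the article).
full_6k = (6144, 3072)        # full sensor frame
extract_5k = (5120, 2133)     # intended 5K center extraction
offline_media = (2304, 1152)  # down-sampled ProRes offline files (full 6K framing)
timeline = (1920, 800)        # Premiere Pro sequence size, 2.40:1

print(extract_5k[0] / full_6k[0])      # 0.833 - extraction width relative to the full frame
print(timeline[0] / offline_media[0])  # 0.833 - timeline width relative to the offline media
print(timeline[0] / timeline[1])       # 2.4  - the 2.40:1 delivery aspect ratio
```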

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors that may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut and with some directors, you are almost starting from scratch once the director arrives. With David, most of that ten week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day; but, it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers lots of question marks that come before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and throughout three minutes it sours to something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor in using Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants who are using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front-end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID, built with standard HDDs, was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have potentially been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from about six to eight minutes at the start down to 90 seconds at best towards the end.

The final conform and color grading was handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything I used a custom Filemaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.
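
To illustrate the kind of lookup the codebook made possible, here is a minimal, hypothetical Python sketch: it pulls clip names out of a CMX3600-style EDL and matches them against a CSV export of a shot database. The file names and column headings (clip_name, r3d_path) are assumptions for illustration only; the production’s actual FileMaker Pro database and conform process were far more involved.

```python
import csv
import re

# Hypothetical codebook export: one row per shot with its attributes.
def load_codebook(csv_path):
    with open(csv_path, newline="") as f:
        return {row["clip_name"]: row for row in csv.DictReader(f)}

# Pull clip names from the "FROM CLIP NAME" comments of a CMX3600-style EDL.
def clips_in_edl(edl_path):
    pattern = re.compile(r"\*\s*FROM CLIP NAME:\s*(.+)", re.IGNORECASE)
    with open(edl_path) as f:
        return [m.group(1).strip() for line in f if (m := pattern.search(line))]

if __name__ == "__main__":
    codebook = load_codebook("codebook_export.csv")   # hypothetical export
    for name in clips_in_edl("reel_1.edl"):           # hypothetical EDL
        shot = codebook.get(name)
        print(f"{name} -> {shot['r3d_path'] if shot else 'NOT FOUND in codebook'}")
```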

As with his previous films, director David Fincher has not only told a great story with Gone Girl, but also set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools, with Premiere Pro CC as the hub, bring the power of the whole suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight-knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

_________________________________

Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

PostPerspective

FxGuide

HDVideoPro

IndieWire

IndieWire blog

ICG Magazine

RedUser

Tony Zhou’s Vimeo take on Fincher 

©2014 Oliver Peters

The FCP X – RED – Resolve Dance


I recently worked on a short, 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, which made it a great test of the RED workflow and the roundtrip between Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that’s done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.
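
Any offload tool that copies and then verifies by checksum accomplishes this. As a rough sketch of the idea (not the tool actually used on this project), here is a minimal Python example of a checksum-verified copy to two destinations; the volume paths are placeholders.

```python
import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk=1024 * 1024):
    """Hash a file in chunks so large camera files never load fully into RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verified_copy(src_dir, dest_dirs):
    """Copy every file under src_dir to each destination, then verify checksums."""
    for src in Path(src_dir).rglob("*"):
        if not src.is_file():
            continue
        src_hash = md5sum(src)
        for dest_root in dest_dirs:
            dest = Path(dest_root) / src.relative_to(src_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            if md5sum(dest) != src_hash:
                raise IOError(f"Checksum mismatch on {dest}")

# Placeholder paths for illustration only.
verified_copy("/Volumes/RED_MAG_001", ["/Volumes/Backup_A", "/Volumes/Backup_B"])
```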

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. The Mac Pro also didn’t have a RED Rocket accelerator card installed, as I’ve seen conflicts with FCP X and RED transcodes when that card is present. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled; this continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.
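
FCP X handles that proxy transcode internally, but for context, the equivalent operation outside FCP X might look like the sketch below: half-size ProRes Proxy files generated with ffmpeg. This is only an illustration of the concept, not what FCP X actually runs, and it assumes QuickTime masters and placeholder paths (ffmpeg’s native R3D support is limited).

```python
import subprocess
from pathlib import Path

# Approximate FCP X's background proxy pass: half-sized ProRes Proxy copies.
def make_proxy(src, proxy_dir):
    out = Path(proxy_dir) / (Path(src).stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=iw/2:ih/2",                 # half-sized frame, e.g. 4K -> 2K
        "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
        "-c:a", "copy",
        str(out),
    ], check=True)
    return out

for clip in Path("/Volumes/Editorial/Day_01/video").glob("*.mov"):  # placeholder path
    make_proxy(clip, "/Volumes/Editorial/Proxies")
```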

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, as well as adding this info to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and/or optimized-original media is seamless and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but then when you toggle to proxy, it has to make the corresponding adjustments to media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.
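
The sizing compensation is easier to see with a little arithmetic. A minimal sketch, assuming a UHD-sized (3840-wide) original, a 1080p timeline and half-size proxies:

```python
# Spatial conform math for a 4K clip in a 1080p timeline, original vs. proxy media.
TIMELINE_W = 1920
ORIGINAL_W = 3840            # UHD-sized master (assumption for round numbers)
PROXY_W = ORIGINAL_W // 2    # FCP X proxies are half-sized, so 1920

fit_original = TIMELINE_W / ORIGINAL_W   # 0.5 -> original shown at 50% to "fit"
fit_proxy = TIMELINE_W / PROXY_W         # 1.0 -> proxy shown at 100% to "fit"

# The same 120% reframe in the timeline needs a different effective scale for
# each flavor of media; FCP X applies this compensation automatically.
reframe = 1.2
print(fit_original * reframe)   # 0.6 of the 4K original
print(fit_proxy * reframe)      # 1.2 of the half-size proxy
```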

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system. Proxies for fast and efficient editing. Original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to a bad RED drive or possibly excessive humidity at that location. Whatever the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X, and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete those clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution, debayered 1080p ProRes 4444 files. We treated these as the new camera masters for those clips. Even then, REDCINE-X Pro crashed on a few of the clips, but I still had enough left to cut the scene.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc., and set the event browser to “hide rejected”. Next I review the footage based on the script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I mark these as Favorites. As I do this, I select the whole take and not just a portion, since I want the entire performance available.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not the beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this back to my initial “open in timeline” syncing, which I’ll explain in a bit. In any case, my focus in Resolve was only grading, so audio wasn’t important for what I was doing; I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
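
In other words, a blow-up applied to downscaled media in the timeline maps back to a smaller crop of the full-resolution source. A quick sketch of that math, using a 3840-wide frame as a round-number stand-in for the 4K .r3d source:

```python
# Map a 1080p-timeline zoom back to the region Resolve crops from the 4K source.
SOURCE_W, SOURCE_H = 3840, 2160      # assumed UHD-sized source for illustration
TIMELINE_W, TIMELINE_H = 1920, 1080

def source_crop_for_zoom(zoom):
    """Width/height of the source region that fills the HD frame at a given zoom."""
    return SOURCE_W / zoom, SOURCE_H / zoom

print(source_crop_for_zoom(1.0))   # (3840.0, 2160.0) -> whole frame, downscaled
print(source_crop_for_zoom(1.2))   # (3200.0, 1800.0) -> the 120% blow-up, still oversampled
print(source_crop_for_zoom(2.0))   # (1920.0, 1080.0) -> 200% equals a 1:1 pixel crop
```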

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the five or six layers that made up this composite underneath the primary storyline. That’s all fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, shifting everything else up accordingly. As a result, the bulk of the video landed on V6 or V7 and the audio was shifted equally far in the other direction. This means a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then, in node 2, I do most of my primary grading. After that, I add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, nor using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy setup. Select the handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output didn’t take long for the 130 clips that made up this short. The resulting media was 1080p ProRes HQ with an additional three seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline, with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it were trying to show a native 4K image at 1:1. Except that this was now 1080 media and NOT 4K. Apparently this resizing metadata is incorrectly carried in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now for an explanation of the audio issue. FCP X master clips are NOT like the master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools rig, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This produced an accurate translation, except that the Resolve export collapsed all stereo and multi-channel audio tracks into a single mono track. Therefore, the Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario), then newer versions will read AAF files. That’s the approach I took. To create an AAF, you export an FCPXML from the project and then use the X2Pro Audio Convert application to generate an AAF file with embedded and trimmed audio content. This goes to the mixer, who in turn can ingest the file into Pro Tools.
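
If you want to sanity-check what will end up in the AAF before handing it off, the FCPXML itself is plain XML and easy to inspect. A rough sketch is below; element and attribute names shift between FCPXML versions (older exports put src on the asset element, newer ones on a media-rep child), so treat this as illustrative and verify it against your own export.

```python
import xml.etree.ElementTree as ET

# List the media assets referenced by an FCPXML export.
def list_assets(fcpxml_path):
    root = ET.parse(fcpxml_path).getroot()
    for asset in root.iter("asset"):
        name = asset.get("name", "unnamed")
        src = asset.get("src")                    # older FCPXML versions
        if src is None:
            rep = asset.find("media-rep")         # newer FCPXML versions
            src = rep.get("src") if rep is not None else "no source path found"
        print(f"{name}: {src}")

list_assets("teaser_for_mix.fcpxml")  # placeholder filename
```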

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

More 4K


I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and, in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be treated the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
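
The area-versus-resolution distinction is simple arithmetic, as the short sketch below shows.

```python
# "4K" has four times the pixel AREA of HD, but only twice the linear resolution.
hd = (1920, 1080)
uhd = (3840, 2160)

area_ratio = (uhd[0] * uhd[1]) / (hd[0] * hd[1])   # 4.0 -> four times the pixels
linear_ratio = uhd[1] / hd[1]                      # 2.0 -> twice the resolvable detail, at best

print(area_ratio, linear_ratio)
```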

A lot of arguments have been made that 4K cameras using a Bayer-style color-pattern filter on a single CMOS sensor don’t even deliver the resolution they claim. The reason is that in many designs 50% of the photosites are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 relationship between the green samples and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and why Sony uses an 8K sensor (F65) to deliver a 4K image.
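
To put rough numbers on the Bayer argument, here is a minimal sketch of how the green (luminance-carrying) samples compare with the stated frame size, assuming an idealized 2x2 Bayer pattern and ignoring optical low-pass filtering and demosaicing cleverness.

```python
# Green photosites on an idealized Bayer sensor vs. the stated frame size.
def bayer_counts(width, height):
    total = width * height
    return {"total": total, "green": total // 2, "red": total // 4, "blue": total // 4}

for label, (w, h) in {"UHD 4K (3840x2160)": (3840, 2160),
                      "Cinema 4K (4096x2160)": (4096, 2160)}.items():
    c = bayer_counts(w, h)
    print(f"{label}: {c['green']:,} green samples for {c['total']:,} stated pixels")
```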

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot: the NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what, if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.
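
Here is the same trade-off expressed numerically: how much of the source actually feeds a 1920-wide HD frame at various punch-ins, again assuming a UHD-sized original for round numbers.

```python
# Source columns feeding a 1080p frame at various punch-ins of a UHD original.
SOURCE_W, TIMELINE_W = 3840, 1920

for zoom in (1.0, 1.2, 1.5, 2.0):
    used = SOURCE_W / zoom
    oversampling = used / TIMELINE_W
    print(f"{int(zoom * 100)}% punch-in: {used:.0f} of {SOURCE_W} columns used "
          f"({oversampling:.2f}x oversampling)")
# At 200% the oversampling factor drops to 1.0 -- a 1:1 crop, with none of the
# downscale's defect-masking benefit left.
```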

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious, and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparisons with HD scaling, but the results really depend on the scaler you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files): the converted 4K ProRes 4444 file, a converted 1080 ProRes 4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes 4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you results superior to the original (the crisper hair), but a blow-up suffers when you are using a weaker codec like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes 4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.
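
As a rough illustration of how much the resampling algorithm matters in a comparison like this, the sketch below uses the Pillow imaging library to upscale the same exported frame with several filters for a side-by-side check. It’s a stand-in for comparing an NLE’s built-in scaler against a dedicated tool, not a recreation of Resolve’s or Instant 4K’s processing; the source file and the 2x factor are placeholders.

```python
from PIL import Image  # pip install pillow (Resampling enum requires Pillow 9.1+)

# Upscale one exported frame with different resampling filters for comparison.
src = Image.open("hd_frame.png")          # placeholder source frame
target = (src.width * 2, src.height * 2)  # 2x blow-up for the test

for name, method in {
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}.items():
    src.resize(target, resample=method).save(f"hd_frame_2x_{name}.png")
```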

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a theater equipped with a Sony 4K projector, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet much of this looks pretty good, doesn’t it?

Everything I said about blowing up HD by up to 120% or more still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image, with the ability to record 4K (3840 x 2160 at up to 60fps) onto CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels across) in what ARRI calls the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a factor of 1.2, which results in UltraHD 4K recording on the same hardware. It’s a neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post rather than do it in-camera.
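
The AMIRA numbers work out neatly, as this quick sketch shows (frame sizes rounded for illustration).

```python
# ARRI AMIRA in-camera UHD: expose the full ~3.4K of photosites ("open gate"),
# crop lightly to a 3.2K window, then scale by 1.2 to reach the UHD frame.
cropped_w, cropped_h = 3200, 1800   # approximate 16:9 3.2K crop
scale = 1.2

print(cropped_w * scale, cropped_h * scale)   # 3840.0 2160.0 -> UltraHD 4K
```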

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.


©2014 Oliver Peters