COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments to their own ends. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including in current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratically elected prime minister, Mohammad Mossadegh, with an absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover the missing Darbyshire interview transcript along with Amirani, which lends the film the air of a whodunit. Ultimately, what sets COUP 53 apart is the good fortune of having Ralph Fiennes portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography that includes documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) The film posed another challenge for Murch, who is known for his willingness to try out different editing platforms: it was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off-the-record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archive in Washington, DC, including the excised material that the producers had considered putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a period of three or four years. It deals with Britain’s social identity as an empire and how that came to an end. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If a section had been used in the film, they replaced it with a reprint from the original, so there were no blank spots. There was a quality shift, though, when you looked at something that had been used in the film, because it was generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instant it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my FileMaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material, so I had stacks and stacks of stills for every set-up.
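
(An aside for technically minded readers: the core of such a system is simply a searchable clip log. Here is a minimal sketch in Python with SQLite, a hypothetical stand-in for Murch’s custom FileMaker Pro layouts; the fields and sample data are invented for illustration.)

```python
# Hypothetical sketch of a searchable clip log in Python/SQLite; the
# fields and sample data are invented, not Murch's actual database.
import sqlite3

conn = sqlite3.connect("coup53_log.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS clips (
        id INTEGER PRIMARY KEY,
        reel TEXT,          -- source reel or card
        timecode_in TEXT,   -- start timecode of the set-up
        subject TEXT,       -- interviewee or scene
        category TEXT,      -- 'talking head', 'archive', 'Iranian interview'...
        framegrab TEXT,     -- path to a still exported from the clip
        notes TEXT          -- editor's comments
    )
""")
conn.execute(
    "INSERT INTO clips (reel, timecode_in, subject, category, framegrab, notes) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("A023", "01:12:40:00", "Stephen Meade interview", "End of Empire",
     "grabs/a023_meade.jpg", "Refers to Darbyshire as 'Norman'"),
)
conn.commit()

# Find every clip whose notes mention Darbyshire, oldest reel first.
for row in conn.execute(
    "SELECT reel, timecode_in, subject FROM clips "
    "WHERE notes LIKE '%Darbyshire%' ORDER BY reel"
):
    print(row)
```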

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews and, once we received it, all of the End of Empire material.

We had four 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with it? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoestring, which meant that time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smartphone. When coups like this happen today, you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian-language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Whoever was available would work on the additional workstation and add subtitling. That would then come to me and I would use it as raw material.

To cut my teeth on this, I tried the interview with Hamid Ahmadi, the Iranian historical expert who was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important, or sometimes more important, than the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project. Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a rough six-month ride, because a lot of bugs came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into mine, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files would be imported again – the ‘shrapnel.’ I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system used by the colorist we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.


You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

Paul McCartney’s “Who Cares”

Paul McCartney hasn’t been the type of rock star to rest on his past. Many McCartney-related projects have embraced new technologies, such as 360VR. The music video for Who Cares – McCartney’s musical answer to bullying – was shot on both 16mm and 65mm film. And it was edited using Final Cut Pro X.

Who Cares features Paul McCartney and actress Emma Stone in a stylized, surreal song and dance number filmed in 65mm, which is bookended by a reality-based 16mm segment. The video was directed by Brantley Gutierrez, choreographed by Ryan Heffington, and produced through LA production company Subtractive.

Gutierrez has collaborated for over 14 years with Santa Monica-based editor Ryan P. Adams on a range of projects, including commercials, concerts, and music videos. Adams also did a stint with Nitro Circus, cutting action sports documentaries for NBC and NBCSN. In that time he’s used various NLEs, including Premiere Pro, Media Composer, and Final Cut Pro 7. But it was the demands of concert videos that really brought about his shift to Final Cut Pro X.

___________________________________

[OP] Please tell me a bit about what style you were aiming for in Who Cares. Why the choice to shoot in both 16mm and 65mm film?

[Brantley Gutierrez] In this video, I was going for an homage to vaudevillian theater acts and old Beatles-style psychedelia. My background is working with a lot of photography. I was working in film labs when I was pretty young. So my DP and friend, Linus Sandgren, suggested film and had the idea, “What if we shot 65mm?” I was open to it, but it came down to asking the folks at Kodak. They’re the ones that made that happen for us, because they saw it as an opportunity to try out their new Ektachrome 16mm motion film stock.

They facilitated us getting the 65mm at a very reasonable price and getting the unreleased Ektachrome 16mm film. The reason for the two stocks was the separation of the reality of the opening scene – kind of grainy and hand-held – from the song portion. It was almost dreamlike in its own way. This was in contrast to the 65mm psychedelic part, which was all on crane, starkly lit, and with very controlled choreography. The Ektachrome had this hazy effect with its grain. We wanted something that would jump as you went between these worlds, and 16 to 65 was about as big a jump as we could get in film formats.

[OP] What challenges did you face with this combination of film stocks? Was it just a digital transfer and then you were only dealing with video files? Or was the process different than that?

[BG] The film went to London where they could process and scan the 65mm film. It actually went in with Star Wars. Lucasfilm had all of the services tied up, but they were kind enough to put our film in with The Rise of Skywalker and help us get it processed and scanned. But we had to wait a couple of extra days, so it was a bit of a nervous time. I have full faith in Linus, so I knew we had it. However, it’s a little strange these days to wait eight or nine days to see what you had shot.

We were a guinea pig for Kodak for the 16mm stock. When we got it back, it looked crazy! We were like, “Oh crap.” It looked like it had been cross-processed – super grainy and super contrasty. It did have a cool look, but more like a Tony Scott style of craziness. When we showed it to Kodak they agreed that it didn’t look right. Then we had Tom Poole, our colorist at Company 3 in New York, rescan the 16mm and it looked beautiful.

[Ryan P. Adams] Ektachrome is a positive stock, which hasn’t been used in a while. So the person in London scanning it just wasn’t familiar with it.

[BG] They just didn’t have the right color profile built for that stock, since it hadn’t been released yet. Of course, someone with a more experienced eye would know that wasn’t correct.

[OP] How did this delay impact your editing?

[BG] It was originally scanned and we started cutting with the incorrect version. In the meantime, the film was being rescanned by Poole. He didn’t really have to do any additional color correction to it once he had rescanned it. This was probably our quickest color correction session for any music video – probably 15 minutes total.

[RPA] One of the amazing things I learned is that all you have to do is give it some minor contrast and then it’s done. What it does give you is perfect skin tones. Once we got the proper scan and sat in the color session, that’s what really jumped out.

[OP] So then, what was the workflow like with Final Cut Pro X?

[RPA] The scans came in as DPX files. Here at Subtractive, we took those into DaVinci Resolve and spit out ProRes 422 HQ QuickTime files to edit with. To make things easy for Company 3, we did the final conform in-house using Resolve. An FCPXML file was imported into Resolve, we linked back to the DPX files, and then sent a Resolve project file to Company 3 for the final grade. This way we could make sure everything was working. There were a few effects shots that came in and we set all of that up so Tom could just jump on it and grade. Since he’s in New York, the LA and New York locations for Company 3 worked through a remote, supervised grading session.
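
(For readers who want to script this kind of round-trip, DaVinci Resolve Studio exposes a Python scripting API that can perform a similar FCPXML conform. The sketch below is illustrative only, not Subtractive’s actual pipeline; the paths and project name are placeholders, and it assumes Resolve is running with scripting enabled.)

```python
# Hedged sketch of an FCPXML conform using DaVinci Resolve Studio's
# scripting API (paths and the project name are placeholders).
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")          # requires Resolve to be running
pm = resolve.GetProjectManager()
project = pm.CreateProject("WhoCares_Conform")
media_pool = project.GetMediaPool()

# Bring the camera-original DPX scans into the media pool so the
# timeline can link to full-resolution media instead of the proxies.
resolve.GetMediaStorage().AddItemListToMediaPool("/Volumes/RAID/WhoCares/DPX")

# Import the locked edit from Final Cut Pro X as a Resolve timeline.
timeline = media_pool.ImportTimelineFromFile("/Volumes/RAID/WhoCares/edit.fcpxml")
if timeline:
    print("Conformed:", timeline.GetName())

# Hand the colorist a self-contained Resolve project file (.drp).
pm.ExportProject("WhoCares_Conform", "/Volumes/RAID/WhoCares/conform.drp")
```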

[OP] The video features a number of effects, especially speed effects. Were those shot in-camera or added in post?

[RPA] The speed effects were done in post. The surreal world was very well choreographed, which just plays out. We had a lot of fun with the opening sequence in figuring out the timing. Especially in the transitional moment where Emma is staring into the hypnotic wheel. We were able to mock up a lot of the effects that we wanted to do in Final Cut. We would freeze-frame these little characters called “the idiots” that would jump into Emma’s head. I would do a loose rotoscope in Final Cut and then get the motion down to figure out the timing. Our effects people then remade that in After Effects.

[OP] How involved was Paul McCartney in the edit and in review-and-approval?

[BG] I’ve known Paul for about 13 years and we have a good relationship. I feel lucky that he’s very trusting of me and goes along with ideas like this. The record label didn’t even know this video was happening until the day of production. It was clandestine in a lot of ways, but you can get away with that when it’s Paul McCartney. If I had tried that with some other artist, I would have been in trouble. But Paul just said, “We’re going to do it ourselves.”

We showed him the cut once we had picture lock, before final color. He called on the phone, “Great. I don’t have any notes. It’s cool. I love it and will sign off.” That was literally it for Paul. It’s one of the few music videos where there was no going back and forth between the management, the artist, and the record label. Once Paul signed off on it, the record label was fine with it.

[OP] How did you manage to get Emma Stone to be a part of this video?

[BG] Emma is a really close friend of mine. Independently of each other, we both know Paul. Their paths have crossed over the years. We’ve all hung out together and talked about wanting to do something. When Paul’s album came out, I hit them both up with the idea for the music video and they both said yes.

The hardest part of the whole process was getting schedules to align. We finally had an open date in October with only a week and a half to get ready. That’s not a lot of time when you have to build sets and arrange the choreography. It was a bit of a mad dash. The total time was about six weeks from prep through to color.

Because of the nature of this music video, we only filmed two takes for Paul’s performance to the song. I had timed out each set-up so that we knew how long each scene would be. The car sequence was going to be “x” amount of seconds, the camera sequence would be “x” amount, and so on. As a result, we were able to tackle the edit pretty quickly. Since we were shooting 65mm film, we only had two or three takes max of everything. We didn’t have to spend a lot of time looking through hours of footage – just pick the best take for each. It was very old school in that way, which was fun.

[OP] Ryan, what’s your approach to organizing a project like this in Final Cut Pro X?

[RPA] I labelled every set-up and then just picked the best take. The first pass was just a rough to see what was the best version of this video. Then there were a few moments that we could just put in later, like when the group of idiots sings, “Who cares.”

My usual approach is to lay in the sections of synced song segments to the timeline first. We’ll go through that first to find the best performance moments and cut those into the video, which is our baseline. Then I’ll build on top of that. I like to organize that in the timeline rather than the browser so that I can watch it play against the music. But I will keyword each individual set-up or scene.

I also work that way when I cut commercials. I can manage this for a :30 commercial. When it’s a much bigger project, that’s where the organization needs to be a little more detailed. I will always break things down to the individual set-ups so I can reference them quickly. If we are doing something like a concert film, that organization may be broken up by the multiple days of the event. Great features of Final Cut Pro X are the skim tool and the ability to look at clips like a filmstrip. It’s very easy to keyword the angles for a scene and quickly go through it.

[OP] Brantley, I’m sure you’ve sat over the shoulder of the editor in many sessions. From a director’s point of view, what do you think about working with Final Cut Pro X?

[BG] This particular project was pretty well laid out in my head and it didn’t have a lot of footage, so it was already streamlined. On more complex projects, like a multi-cam edit, FCPX is great for me, because I get to look at it like a moving contact sheet from photography. I get to see my choices and I really respond to that. That feels very intuitive and it blows me away that every system isn’t like that.

[OP] Ryan, what attracted you to Final Cut Pro X and made you use it whenever possible?

[RPA] I started with Final Cut Pro X when they added multi-cam. At that time we were doing more concert productions. We had a lot of photographers who would fill in on camera and Canon 5Ds were prevalent. I like to call them “trigger-happy filmers,” because they wouldn’t let it roll all the way through.

FCPX came up with the solution to sync cameras with the audio on the back end. So I could label each photographer’s clips. Each clip might only be a few seconds long. I could then build the concert by letting FCPX sync the clips to audio even without proper timecode. That’s when I jumped on, because FCPX solved a problem that was very painful in Final Cut Pro 7 and a lot of other editing systems. That was an interesting moment in time when photographic cameras could shoot video and we hired a lot of those shooters. Final Cut Pro X solved the problem in a very cool way and it helped me tremendously.
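
(The general idea behind syncing clips by audio, in any NLE, is to find the offset where a clip’s scratch track best matches the master recording. Below is a toy illustration of that principle using cross-correlation in Python; Apple’s actual algorithm is unpublished and certainly far more robust.)

```python
# Toy illustration of audio-based sync: slide the clip's audio along the
# master track and keep the offset with the highest cross-correlation.
# Real NLEs use far more robust (and faster) techniques.
import numpy as np

def find_offset_seconds(master, clip, sample_rate):
    """Return the clip's estimated start time within the master track."""
    corr = np.correlate(master, clip, mode="valid")
    return int(np.argmax(corr)) / sample_rate

# Synthetic example: the 'camera clip' starts 2.5 seconds into the master.
sr = 8_000
rng = np.random.default_rng(0)
master_audio = rng.standard_normal(10 * sr)
clip_audio = master_audio[int(2.5 * sr):int(3.5 * sr)]
print(find_offset_seconds(master_audio, clip_audio, sr))  # -> 2.5
```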

We did this Tom Petty music video, which really illustrates why Final Cut Pro X is a go-to tool. After Tom had passed, we had to incorporate a lot of archival footage into a music video called Gainesville, which we did for his boxed set. Brantley shot a lot of video around Tom’s hometown of Gainesville [Florida], but they also brought us a box with a massive amount of footage that we put into the system. A mix of old films and tapes, some of Tom’s personal footage, all this archival stuff. It gave the video a wonderful feeling.

[BG] It’s very nostalgic from the point of view of Tom and the band. A lot of it was stuff they had shot in their 20s and had a real home movie feel. I shot Super 8mm footage around Tom’s original home and places where they grew up to match that tone. I was trying to capture the love his hometown has for him.

[RPA] That’s a situation where FCPX blows the competition out of the water. It’s easy to use the strip view to hunt for those emotional moments. So the skimmer and the strip view were ways for us to cull this hodge-podge of footage for those moments and to hit beats in the music for a song that was unreleased at the time. We had one week to turn that around. It’s a complicated situation to look through a box of footage on a very tight deadline, put a story to it, and make it feel correct for the song. That’s where all of those tools in Final Cut shine. When I have to build a montage, that’s when I love Final Cut Pro X the most.

[OP] You’ve worked with the various NLEs. You know DaVinci Resolve and Blackmagic is working hard to make it the best all-in-one tool on the market. When you look at this type of application, what features would you love to see added to Final Cut Pro X?

[RPA] If I had a wishlist, I would love to see if FCPX could be scaled up for multiple seats and multiple editors. I wish some focus was being put on that. I still go to Resolve for color. I look at compositing as just mocking something up so we can figure out timing and what it is generally going to look like. However, I don’t see a situation currently where I do everything in the editor. To me, DaVinci Resolve is kind of like a Smoke system and I tip my hat to them.

I find that Final Cut still edits faster than a lot of other systems, but speed is not the most important thing. If you can do things quickly, then you can try more things out. That helps creatively. But I think that typically things take about as long from one system to the next. If an edit takes me a week in Adobe it still takes me a week in FCPX. But if I can try more things out creatively, then that’s beneficial to any project.

Originally written for FCP.co.

©2020 Oliver Peters

Jezebel

If you’ve spent any time in Final Cut Pro X discussion forums, then you’ve probably run across posts by Tangier Clarke, a film editor based in Los Angeles. Clarke was an early convert to FCPX and recently handled the post-production finishing for the film, Jezebel. I was intrigued by the fact that Jezebel was picked up by Netflix, a streaming platform that has been driving many modern technical standards. This was a good springboard to chat with Clarke and see how FCPX fared as the editing tool of choice.

_______________________________________________________________

[OP] Please tell me a little about your background in becoming a film editor.

[TC] I’m a very technical person and have always had a love for computers. I went to college for computer science, but along the way I discovered Avid Videoshop and started to explore editing more, since it married my technical side with creative storytelling. So, at UC Berkeley I switched from computer science to film.

My first job was at motion graphics company Montgomery/Cobb, which was in Los Angeles. They later became Montgomery & Co. Creative. I was a production assistant for main titles and branding packages for The Weather Channel, Fox, NBC, CBS, and a whole host of cable shows. Then, I worked for 12 years with Loyola Productions (no affiliation with Loyola Marymount University).

I moved on to a company called Black & Sexy TV, which was started by Dennis Dortch as a company to have more control over black images in media. He created a movie called A Good Day to be Black and Sexy in 2008, which was picked up and distributed by Magnolia Pictures and became a cult hit. It ended up in Blockbuster Video stores, Target, and Netflix. The success of that film was leveraged to launch Black & Sexy TV and its online streaming platform.

[OP] You’ve worked on several different editing applications, but tell me a bit about your transition to Final Cut Pro X.

[TC] I started my career on Avid, which was also at the time when Final Cut Pro “legacy” was taking off. During 2011 at Loyola Productions, I had an opportunity to create a commercial for a contest put out by American Airlines. We thought this was an opportunity for us as a company to try Final Cut Pro X.

I knew that it was for us once we installed it. Of course, there were a lot of things missing coming from Final Cut Pro 7, and a couple of bugs here and there. The one thing that was astonishing for me, despite the initial learning curve, was that within one week of use my productivity compared to Final Cut Pro 7 went through the roof. There was no correlation between anything I had used before and what I was experiencing with Final Cut X in that first week. I also noticed that our interns – whose only experience was iMovie – just picked up Final Cut Pro X with no problems whatsoever.

Final Cut Pro X was very liberating, which I expressed to my boss, Eddie Siebert, the president and founder of Loyola Productions. We decided to keep using it to the extent that we could on certain projects and worked with Final Cut Pro 7 and Final Cut Pro X side-by-side until we eventually just switched over.

[OP] You recently were the post supervisor and finishing editor for the film Jezebel, which was picked up by Netflix. What is this film about?

[TC] Jezebel is a semi-autobiographical film written and directed by Numa Perrier, who is a co-founder of Black & Sexy TV. The plot follows a 19-year-old girl who, after the death of her mother, begins to do sex work as an online chat room cam girl to support herself financially. Numa stars in the film, playing her older sister, and an actress named Tiffany Tenille plays Numa. This is also Numa Perrier’s feature film directorial debut. It’s a side of her that people didn’t know – about how she survived as a young adult in Las Vegas. So, she is really putting herself out there.

The film made its debut at South by Southwest last year, where it was selected as a “Best of SXSW” film by The Hollywood Reporter. After that it went to other domestic and international festivals. At some point it was seen by Ava DuVernay, who decided to pick up Numa’s film through her company, Array. That’s how it got to Netflix.

[OP] Please walk me through the editorial workflow for Jezebel. How did FCPX play a unique role in the post?

[TC] I was working on a documentary at the time, so I couldn’t fully edit Jezebel, but I was definitely instrumental in the process. A former coworker of mine, Brittany Lyles, was given the task of actually editing the project in Final Cut Pro X, which I had introduced her to and trained her on a couple of years earlier. The crew shot with a Canon C300 camera and we used the Final Cut proxy workflow. Brittany wouldn’t have been able to work on it if we weren’t using proxies, because of her hardware. I was using a late 2013 Mac Pro, as well as a 2016 MacBook Pro.

At the front end, I assisted the production team with storage and media management. Frances Ampah (a co-producer on the film) and I worked to sync all the footage for Brittany, who was working with a copy of the footage on a dedicated drive. We provided Brittany with XMLs during the syncing process as she was getting familiar with the footage.

While Brittany was working on the cut, Numa and I were trying to figure out how best to come up with a look and a style for the text during the chat room scenes in the movie. It hadn’t been determined yet if I was going to get the entire film and put the graphics in myself or if I was going to hand it off to Brittany for her to do it. I pitched Numa on the idea of creating a Motion template so that I could have more control over the look, feel, and animation of the graphics. That way either Brittany or I could do it and it would look the same.

Brittany and Numa refined the edit to a point where it made more sense for me to put in a lot of the graphics and do any updating per Numa’s notes, because some of the text had changed as well. And we wanted to really situate the motion of the chat so that it was characteristic of what it looked like back then – in the 90s. We needed specific colors for each user who was logged into the screen. I had some odd color issues with Final Cut and ended up actually just going into the FCPXML file to modify color values. I’m used to going into files like that and I’m not afraid of it. I also used the FCP X feature in the text inspector to save format and appearance attributes. This was tremendously helpful to quickly assign the color and formatting for the different users in the chat room – saving a lot of time.
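
(To give a sense of what editing an FCPXML file directly looks like, here is a hedged sketch in Python. FCPXML title text styles carry a fontColor attribute of space-separated RGBA values in the 0-1 range; the file name, user names, and color values below are invented for illustration and are not from the actual Jezebel project.)

```python
# Hedged sketch: batch-changing chat title colors by editing FCPXML
# directly. FCPXML <text-style> elements store color as a "fontColor"
# attribute of space-separated RGBA values in the 0-1 range. The file
# name, user names, and colors here are invented for illustration.
import xml.etree.ElementTree as ET

USER_COLORS = {
    "SPARKY": "1.0 0.25 0.25 1.0",   # hypothetical chat user, red-ish
    "BUD":    "0.25 0.60 1.00 1.0",  # hypothetical chat user, blue-ish
}

tree = ET.parse("chat_scene.fcpxml")
root = tree.getroot()

for title in root.iter("title"):
    # Collect the visible text of this title clip.
    text = "".join(title.itertext()).strip()
    for user, rgba in USER_COLORS.items():
        if text.startswith(user):
            # Rewrite the color on every text style in the title.
            for style in title.iter("text-style"):
                style.set("fontColor", rgba)

tree.write("chat_scene_fixed.fcpxml", xml_declaration=True, encoding="UTF-8")
```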

Our secondary editor, Bobby Field, worked closely with Numa to do the majority of the color grading on the film. He was more familiar with Premiere Pro than FCP X, but really enjoyed the color tools in Final Cut Pro X. Through experimentation, Bobby learned how to use adjustment layers to apply color correction. I was fascinated by this and it was a learning experience for me as well. I’m used to working directly with the clip itself, and in my many years of using FCP X, this wasn’t a method I had used or seen anyone else use firsthand.

[OP] What about sound post and music?

[TC] I knew that there was only so much that I technically had the skill set to do, and I would not dare pretend that I know how to do certain things. I called on the help of Jim Schaefer – a skilled and trusted friend that I worked with at Loyola Productions. I knew he wanted an opportunity to work on a big project, particularly a feature. The film needed a tremendous amount of sound work, so he took it on along with Travis Prater, a coworker of his at Source Sound in Woodland Hills. Together they really transformed the film.

Jim and Travis worked in Pro Tools, so I used X2Pro to get files to them. Jim gave me a list of how he wanted the film broken down. Because of the length of Jezebel, he preferred that the film be broken up into reels. In addition to reels, I also gave him the entire storyline with all of the roles. Everything was broken down very nicely using AAFs and he didn’t really have any problems. In his words, “It’s awesome that all the tracks are sorted by character and microphone – that’ll cut down significantly on the sorting/organizing pass for me.” The only hiccup was that metadata was missing from the AAF up to a certain point in Pro Tools, even though that metadata did exist in the original WAV files. Some clip names were inconsistent as well, but that may have happened during production.

[OP] Jezebel is streaming on Netflix, which has a reputation for having tough technical specs. Were there any special things you had to do to make it ready for the platform?

[TC] We supplied Array with a DCI 2K (full frame) QuickTime master in ProRes 422 HQ per their delivery schedule, along with other elements, such as stereo and 5.1 mixes from Jim, Blu-ray, DVD, and DCP masters. I expected to have to do special things to make it ready for Netflix. Numa and I discussed this, but to my knowledge, the QuickTime file that I provided to Array is what Netflix received. There were no special conversions made just for Netflix on the part of Array.

[OP] Now that you have this Final Cut Pro X experience under your belt, what would you change if you could? Any special challenges or shortcomings?

[TC] I had to do some composite shots for the film, so the only shortcoming for me was Final Cut’s compositing tool set. I’d love to have better tools built right into FCP X, like in DaVinci Resolve. I love Apple Motion and it’s fine for what it is, but it could go a little further for me. I’d love to see an update with improved compositing and better tracking. Better reporting for missing files, plugins, and other elements would also be tremendously helpful in troubleshooting vague alerts.

In spite of this, there was no doubt in any part of the process whether or not Final Cut was fully capable of being at the center of everything that needed to be done – whether it was leveraging Motion for template graphics between Brittany and me, using a third-party tool to make sure that the post sound team had precisely what they needed, or exchanging XMLs or backup libraries with Bobby to make sure that his work got to me intact. I was totally happy with the performance of FCP X. It was just rock solid and for the most part did everything I needed it to do without slowing me down.

Originally written for FCP.co.

A special thanks to Lumberjack System for their assistance in transcribing this interview.

©2020 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world-famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project, while director/photographer Griffith captured the entire story. The result is the 3D VR documentary Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background includes camera, editing, and visual effects, with a lot of experience in traditional visual effects, 2D-to-3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. But since I was also using Fusion Studio in my workflow, it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, the tools are all there that I was used to in what I would argue is a cleaner interface. Sometimes, software just collects all of these features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally, 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack the left and right eye images on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full-resolution sources later. I just had to unlink the timeline and then reconnect it to another bin with my 8K media.
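
(As a rough illustration of the offline half of that workflow, and not DeJohn’s actual scripts, proxies like these can be batch-generated with ffmpeg so that file names match the masters and relinking is straightforward. The paths and codec choices below are assumptions.)

```python
# Hedged sketch: batch-creating 2K x 2K proxies from 8K x 8K stereoscopic
# masters with ffmpeg. File names are kept identical so the NLE can later
# relink the timeline to the full-resolution media. Paths are placeholders.
import subprocess
from pathlib import Path

MASTERS = Path("/Volumes/EverestVR/masters_8k")
PROXIES = Path("/Volumes/EverestVR/proxies_2k")
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(MASTERS.glob("*.mov")):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=2048:2048",   # stacked left/right eyes stay stacked
        "-c:v", "prores_ks",        # ProRes encoder in ffmpeg
        "-profile:v", "0",          # profile 0 = ProRes 422 Proxy
        "-c:a", "copy",             # leave the audio untouched
        str(PROXIES / clip.name),
    ], check=True)
```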

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist – a man of many talents – utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a 2K x 2K proxy stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run the footage through Z CAM’s WonderStitch auto-stitching application. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch the shot in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations – for instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in third-order ambisonics, which gives you spatial awareness both around you and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed orientation, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ-axis orientation, we would add a PanoMap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.
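
(For the mathematically curious: in an equirectangular frame, longitude maps linearly to the x-axis, so a yaw-only reorientation reduces to a horizontal pixel shift. The Python sketch below shows just that simplified case; a tool like PanoMap also handles pitch and roll, which require a full spherical remap.)

```python
# Simplified illustration: a yaw-only reorientation of an equirectangular
# frame is just a horizontal roll of the pixels, because longitude maps
# linearly to the x-axis. Pitch and roll changes require a full spherical
# remap, which is what a tool like PanoMap performs.
import numpy as np

def recenter_yaw(frame: np.ndarray, yaw_degrees: float) -> np.ndarray:
    """Rotate an equirectangular frame (H x W x 3) about the vertical axis."""
    width = frame.shape[1]
    shift = int(round(yaw_degrees / 360.0 * width))
    return np.roll(frame, -shift, axis=1)

# Example: recenter the view 90 degrees around on a 4K x 2K frame.
frame = np.zeros((2048, 4096, 3), dtype=np.uint8)
recentered = recenter_yaw(frame, 90.0)
```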

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.
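To give a flavor of that scriptability: Fusion exposes its composition graph to Python and Lua, so repetitive node setups can be built programmatically. The sketch below is a hedged illustration – the Lock/AddTool calls come from Fusion’s scripting API, but the file paths are hypothetical and the input/output wiring for PanoMap is my assumption, so treat it as a sketch rather than a recipe.

```python
# Run from Fusion Studio's console, where the application provides the
# global `fusion` object.
comp = fusion.GetCurrentComp()

comp.Lock()  # suppress file dialogs while adding tools
loader = comp.AddTool("Loader")
loader.Clip = "/stitches/shot042_latlong.exr"   # hypothetical path
pano = comp.AddTool("PanoMap")                  # spherical re-orientation node
saver = comp.AddTool("Saver")
saver.Clip = "/renders/shot042_reoriented.exr"  # hypothetical path

# Wire Loader -> PanoMap -> Saver (main input/output names assumed).
pano.Input = loader.Output
saver.Input = pano.Output
comp.Unlock()
```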

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to get in deeper than a normal edit application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them, but only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high standard with regard to the stitch and the rig removals. As a visual effects guy, there’s always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn’t hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters

The Banker

Apple has launched its new TV+ service and this provides another opportunity for filmmakers to bring untold stories to the world. That’s the case for The Banker, an independent film picked up by Apple. It tells the story of two African American entrepreneurs attempting to earn their piece of the American dream through real estate and banking during the repressive 1960s. It stars Samuel L. Jackson, Anthony Mackie, Nia Long, and Nicholas Hoult.

The film was directed by George Nolfi (The Adjustment Bureau) and produced by Joel Viertel, who also signed on to edit the film. Viertel’s background hasn’t followed the usual path for a feature film editor. He became interested in editing while still in high school, and a move to LA after college landed him a job at Paramount, where he eventually became a creative executive. During that time he kept up his editing chops and eventually left Paramount to pursue independent filmmaking as a writer, producer, and editor. His editing experience included Apple Final Cut Pro 1.0 through 7.0 and Avid Media Composer, but cutting The Banker was his first time using Apple’s Final Cut Pro X.

I recently chatted with Joel Viertel about the experience of making this film and working with Apple’s innovative editing application.

____________________________________________

[OP] How did you get involved with co-producing and cutting The Banker?

[JV] This film originally started while I was at Paramount. Through a connection from a friend, I met with David Smith and he pitched me the film. I fell in love with it right away, but as is the case with these films, it took a long while to put all the pieces together. While I was doing The Adjustment Bureau with George Nolfi and Anthony Mackie, I pitched it to them, and they agreed it would be a great project for us all to collaborate on. From there it took a few years to get to a script we were all happy with, cast the roles, get the movie financed, and off the ground.

[OP] I imagine that it’s exciting to be one of the first films picked up by Apple for their TV+ service. Was that deal arranged before you started filming or after everything was in the can, so to speak?

[JV] Apple partnered with us after it was finished. It was made and financed completely independently through Romulus Entertainment. While we were in the finishing stages, Endeavor Content repped the film and got us into discussions with Apple. It’s one of their first major theatrical releases and then goes on the platform after that. Apple is a great company and brand, so it’s exciting to get in on the ground floor of what they’re doing.

[OP] When I screened the film, one of the things I enjoyed was the use of montages to quickly cover a series of events. Was that how it was written or were those developed during the edit as a way to cut running time?

[JV] Nope, it was all scripted. Those segments can bedevil a production, because getting all of those little pieces is a lot of effort for very little yield. But it was very important to George and myself and the collaborators on the film to get them. It’s a film about banking and real estate, so you have to figure out how to make that a fun and interesting story. Montages were one way to keep the film propulsive and moving forward – to give it motion and excitement. We just had to get through production finding places to pick off those pieces, because none of those were developed in post.

[OP] What was your overall time frame to shoot and post this film?

[JV] We started in late September 2018 and finished production in early November. It was about 30 days in Atlanta and then a few days of pick-ups in LA. We started post right after Thanksgiving and locked in May, I think. Once Apple got involved, there were a few minor changes. However, Apple’s delivery specs were completely different from our original delivery specs, so we had to circle back on a bunch of our finishing.

[OP] Different in what way?

[JV] We had planned to finish in 2K with a 5.1 mix. Their deliverables are 4K with a Dolby Atmos mix. Because we had shot on 35mm film, we had the capacity, but it meant that we had to rescan and redo the visual effects at 4K. We had to lay the groundwork to do an Atmos mix and Dolby Vision finish for theatrical and home video, which required the 35mm film negative to be rescanned and dust-busted.

Our DP, Charlotte Bruus Christensen, has shot mostly on 35mm – films like A Quiet Place and The Girl on the Train – and those movies are beautiful. And so we wanted to accommodate that, but it presents challenges if you aren’t shooting in LA. Between Kodak in Atlanta and Technicolor in LA we were able to make it work.

Kodak would process the negative and Technicolor made a one-light transfer for 2K dailies. Those were archived and then I edited with ProRes LT copies in HD. Once we were done, Technicolor onlined the movie from their 2K scans. After the change in deliverable specs, Technicolor rescanned the clips used for the online finish at 4K and conformed the cut at 4K.
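For a rough sense of what an HD ProRes LT editorial copy involves, here’s a hedged sketch using ffmpeg’s prores_ks encoder (profile 1 is ProRes 422 LT). This is purely illustrative – the file names are hypothetical and this is not Technicolor’s actual dailies pipeline.

```python
import subprocess

def make_prores_lt_proxy(src: str, dst: str) -> None:
    """Transcode a 2K daily down to an HD ProRes LT editorial proxy.

    prores_ks profile 1 = ProRes 422 LT; scale=1920:-2 fits the width to
    HD while preserving aspect. Audio passes through as uncompressed PCM
    so sync stays sample-accurate.
    """
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=1920:-2",
        "-c:v", "prores_ks", "-profile:v", "1",
        "-c:a", "pcm_s16le",
        dst,
    ], check=True)

make_prores_lt_proxy("A001_C003_daily_2k.mov", "A001_C003_proxy_hd.mov")  # hypothetical names
```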

[OP] I felt that the eclectic score fit this movie well and really places it in time. As an editor, how did you work to build up your temp tracks? Or did you simply leave it up to the composer?

[JV] George and I have worked with our composer, Scott Salinas, for a very long time on a bunch of things. Typically, I give him a script and then he pulls samples that he thinks are in the ballpark. He gave me a grab bag of stuff for The Banker – some of which was score, some of which was jazz. I start laying that against the picture myself as I go and find these little things that feel right and set the tone of the movie. I’m finding my way for the right marriage of music and picture. If it works, it sticks. If it doesn’t, we replace it. Then at the end, he’s got to score over that stuff.

Most of the jazz in The Banker is original, but there are a couple of tracks that we simply licensed. There’s a track called “Cash and Carry” that I used over the montage when they get rich. They’ve just bought the Banker’s Building and popped the champagne. This wacky, French 1970s bit of music comes in with a dude scatting over it while they are buying buildings or looking at the map of LA. That was a track Scott gave me before we shot a frame of film, so when we got to that section of the movie, I pulled it from the bin, cut the sequence to it, and it just stuck.

There are some cases where it’s almost impossible to temp, so I just cut it dry and give it to him. Sometimes he’ll temp it and sometimes he’ll do a scratch score. For example, the very beginning of the movie never had temp in any way. I just cut it dry. I gave it to Scott. He scored it and then we revised his scoring a bunch of times to get to the final version.

[OP] Did you do any official or “friends and family” screenings of The Banker while editing it? If so, did that impact the way the film turned out?

[JV] The post process is largely dictated by how good your first cut is. If the movie works, but needs improvement – that’s one thing. If it fundamentally doesn’t – that’s another. It’s a question of where you landed from the get-go and what needs to be fixed to get to the end of the road.

We’re big fans of doing mini-testing – bringing in people we know and people whose opinions we want to hear. At some point you have to get outside of the process and aggregate what you hear over and over again. You need to address the common things that people pick up on. The only way to keep improving your movie is to get outside feedback that tells you what to focus on.

Over time that significantly impacted the film. It’s not like any one person said that one thing that caused us to re-edit the film. People see the problem that sticks out to them in the cut and you work on that. The next time there’s something else and then you work on that. You keep trying to make all the improvements you can make. So it’s an iterative process.

[OP] This film marked a shift for you from using earlier versions of Final Cut Pro to now cutting on Final Cut Pro X for the first time. Why did you make that choice and what was the experience like?

[JV] George has a relationship with Apple and they had suggested using Final Cut Pro X on his next project. I had always used Final Cut Pro 7 as my preference. We had used it on an NBC show called Allegiance in 2014 and then on Birth of the Dragon in 2015 and 2016 – long after it had been discontinued. We all could see the writing on the wall – operating systems would quit running it and it’s not harnessing what the computers can do.

I got involved in the conversation and was invited to come to a seminar at the Editors Guild about Final Cut Pro X that was taught by Kevin Bailey, who was the assistant editor for Whiskey Tango Foxtrot. I had looked at Final Cut Pro X when it first came out and then again several years later. I felt like it had been vastly improved and was in a place where I could give it a shot. So I committed at that point to cutting this film on Final Cut Pro X and teaching myself how to use it. I also hired Kevin to help as my assistant for the start of the film. He became unavailable later in the production, so we found Steven Moyer to be my assistant and he was fantastic. I would have never made it through without the both of them.

[OP] How did you feel about Final Cut Pro X once you got your sea legs?

[JV] It’s always hard to learn to walk again. That’s what a lot of editors bump into with Final Cut Pro X, because it takes a very different approach than any other NLE. I found that once you get to know it and rewire your brain, you can be very fast on it. A lot of the things that it does are revolutionary and pretty incredible. And there are still other areas that are being worked on. Those guys are constantly trying to make it better. We’ve had multiple conversations with them about the possibilities and they are very open to feedback.

[OP] Every editor has their own way of tackling dailies and wading through an avalanche of footage coming in from production. And of course, Final Cut Pro X features some interesting ways to organize media. What was the process like for The Banker?

[JV] The sound and picture were both running at 24fps. I would upload the sound files from my hotel room in Atlanta to Technicolor in LA, who would sync the sound. They would send back the dailies and sound, which Kevin – who was assisting at that time – would load into Final Cut. He would multi-clip the sound files and the two camera angles. Everything is in a multi-clip, except for purely MOS B-roll shots. Each scene had its own event. Kevin used the same system he had devised with Jan [Kovac, editor on Whiskey Tango Foxtrot and Focus]. He would keyword each dialogue line, so that when you select a keyword collection in the browser, every take for that line comes up. That’s labor-intensive for the assistant, but it makes life that much faster for me once it’s set up.
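Those keyword collections travel with the project when it’s exported as FCPXML, which makes them easy to audit outside the application. As a hedged illustration – the element structure follows the FCPXML format as I understand it, and the file name is hypothetical – here’s a short Python sketch that tallies keyword usage in an exported file.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def count_keywords(fcpxml_path: str) -> Counter:
    """Tally keyword usage in an exported FCPXML file.

    In FCPXML, each keyword range is a <keyword> element with a 'value'
    attribute attached to the clip it marks.
    """
    tree = ET.parse(fcpxml_path)
    return Counter(kw.get("value") for kw in tree.getroot().iter("keyword"))

# Hypothetical export of the dailies event:
for value, count in count_keywords("banker_event.fcpxml").most_common(10):
    print(f"{count:4d}  {value}")
```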

[OP] I suppose that method also makes it much faster when you are working with the director and need to quickly get to alternate takes.

[JV] It speeds things along for George, but also for me. I don’t have to hunt around to find the lines when I have to edit a very long dialogue scene. You could assemble selects reels first, but I like to look at everything. I fundamentally believe there’s something good in every bad take. It doesn’t take very long to watch every take of a line. Plus I do a fair amount of ‘Franken-biting’ with dialogue where needed.

[OP] Obviously the final mix and color correction were done at specialty facilities. Since The Banker was shot on film, I would imagine that complicated the hand-off slightly. Please walk me through the process you followed.

[JV] Marti Humphrey did the sound at The Dub Stage in Burbank. We have a good relationship with him and can call him very early in the process to work out the timeline of how we are going to do things. He had to soup up his system a bit to handle the Atmos near-field stuff, but it was a good opportunity for him to get into that space. So he was able to do all the various versions of our mix.

Technicolor was the new guy for us. Mike Hatzer did the color grade. It was a fairly complex process for them and they were a good partner. For the conform, we handed them an XML and EDL. They had their Flex files to get back to the film edge code. Steven had to break up the sequence to generate separate tracks for the 35mm original footage, stock footage, and VFX shots, because Technicolor needed separate EDLs for each. But it wasn’t like we invented anything that hasn’t been done before.
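Splitting a cut by source type is the kind of chore that scripts handle well. In a CMX3600 EDL, each event line starts with a three-digit event number followed by the reel name, so events can be routed by reel. The sketch below is my own illustration – the reel-naming convention and file name are invented for the example, not taken from this production.

```python
def split_edl(edl_path: str, vfx_prefix: str = "VFX") -> tuple[list[str], list[str]]:
    """Split a CMX3600 EDL's events into VFX and non-VFX groups by reel name.

    An event line begins with a three-digit event number; its second field
    is the reel. Header and note lines stay with the current group.
    """
    vfx, other = [], []
    bucket = other  # TITLE:/FCM: header lines land here
    with open(edl_path) as edl:
        for line in edl:
            fields = line.split()
            if fields and fields[0].isdigit() and len(fields[0]) == 3:
                bucket = vfx if fields[1].startswith(vfx_prefix) else other
            bucket.append(line)
    return vfx, other

vfx_events, picture_events = split_edl("banker_r1.edl")  # hypothetical file
```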

We did use third-party apps for some of this. The great thing about that is you can just contact the developer directly. There was one EDL issue and Steven could just call up the app developer to explain the issue and they’d fix it in a couple of days.

[OP] What sort of visual effects were required? The film is set more or less 70 years ago, so were the majority of effects just to make the locations look right? Like cars, signs, and so on?

[JV] It was mostly period clean-up. You have to paint out all sorts of boring stuff, like road paint – for a story set in the ’50s and ’60s, those white lines have to come out. Wires, of course. In a couple of shots we wanted to ‘LA-ify’ Georgia. We shot some stuff in LA, but when you put Griffith Park right next to a shot of Newnan, Georgia, the way to blend that over is to put palm trees in the Newnan shot.

We also did a pick-up with Anthony while he was on another show that required a beard for that role. So we had to paint out his beard. Good luck figuring out which shot that was!

[OP] Now that you have a feature film under your belt with Final Cut Pro X, what are your thoughts about it? Anything you feel that it’s missing?

[JV] All the NLEs have their particular strengths. Final Cut has several that are amazing, like background exports and rendering. It has Roles, where you can differentiate dialogue, sound effects, and music sources. You can bus things to different places. This is the first time I’ve ever edited in 5.1, because Final Cut supports that. That was a fun challenge.

We used Final Cut Pro X to edit a movie shot on film, which is kind of a first at this level, but it’s not like we crashed into some huge problem with that. We gamed it out and it all worked like it was supposed to. Obviously it doesn’t do some stuff the same way. Fortunately through our relationship with Apple we can make some suggestions about that. But there really isn’t anything it doesn’t do. If that were the case, we would have just said that we can’t cut with this.

Final Cut Pro X is an evolving NLE – as they all are. What I realized at the seminar is that it has changed a lot since it first appeared. It was a good experience cutting a movie on it. Some editors are hesitant, because that first hour is difficult and I totally get that. But if you push through that and get to know it, there are many things it does very well – addictively well, even. I would certainly cut another movie on it.

____________________________________________

The Banker started a limited theatrical release on March 6 and will be available on the Apple TV+ streaming service on March 20.

For even more details on the post process for The Banker, check out Pro Video Coalition.

Originally written for FCPco.

©2020 Oliver Peters