Kirk Baxter, ACE, on editing Mank

Mank, David Fincher’s eleventh film, chronicles Herman Mankiewicz (portrayed by Gary Oldman) during the writing of the film classic, Citizen Kane. Mankiewicz, known as Mank, was a witty New York journalist and playwright who moved to Los Angeles in the 1930s to become a screenwriter. He wrote or co-wrote about 40 films, often uncredited, including the first draft of The Wizard of Oz. Together with Orson Welles, he won an Academy Award for the screenplay of Citizen Kane. It’s long been disputed whether he, rather than Welles, actually did the bulk of the work on the screenplay.

The script for Mank was penned decades ago by David Fincher’s father, Jack Fincher, and was finally brought to the screen thanks to Netflix this past year. Fincher deftly blends two parallel storylines: Mankiewicz’s writing of Kane during his convalescence from an accident – and his earlier Hollywood experiences with the studios, as told through flashbacks. These experiences, including his acquaintance with William Randolph Hearst – the media mogul of his time and the basis for Charles Foster Kane in Citizen Kane – inspired Mankiewicz’s script. This earlier period is infused with the political undercurrent of the Great Depression and the California gubernatorial race between Upton Sinclair and Frank Merriam.

David Fincher and director of photography Erik Messerschmidt, ASC (Mindhunter) used many techniques to pay homage to the look of Citizen Kane and other classic films of the era, including shooting in true black-and-white with RED Monstro 8K Monochrome cameras and Leica Summilux lenses. Fincher also tapped other frequent collaborators, including Trent Reznor and Atticus Ross for a moving, vintage score, and Oscar-winning editor, Kirk Baxter, ACE. I recently caught up with Baxter to discuss Mank, the fourth film he’s edited for David Fincher.

***

Citizen Kane is the 800-pound gorilla. Had you seen that film before this or was it research for the project?

I get so nervous about this topic, because with cinephiles, it’s almost like talking about religion. I had seen Citizen Kane when I was younger, but I was too young to appreciate it. I was growing up on Star Wars, Indiana Jones, and Conan the Barbarian. Then I advanced my tastes to the Godfather films and The French Connection. Citizen Kane is still just such a departure from all of that. I was kind of like, “What?” That was probably in my late teens.

I went back and watched it again before the shoot after reading the screenplay. There were certain technical aspects to the film that I thought were incredible. I loved the way Orson Welles chose to leave his scenes by turning off lights like it was in the theater. There was this sort of slow decay and I enjoyed how David picked up on that and took it into Mank. Each time one of those shots came up in the bungalow scenes, I thought it was fantastic.

Overall, I don’t consider myself any sort of expert on 1930s and 1940s movie-making and I didn’t make a conscious effort to try to replicate any styles. I approached the work in the same way I do with all of David’s work – by being reactionary to the material and the coverage that he shot. In regard to how close David took the stylings, well, that was more his tightrope walk. So, I felt no shackling to slow down an edit pace or stay in masters or stay in 50-50s as might have been common in the genre. I used all the tools at my disposal to exploit every scene the best I could.

Since you are cutting while the shooting goes on, do you have the ability to ask for coverage that you might feel is missing? 

I think a little bit of that goes on, but it’s not me telling Fincher what’s required. It’s me building assemblies and giving them to David as he’s going and he will assess where he’s short and where he’s not. I’ve read many editor interviews over the years and I’ve always kind of gone, “huh,” when someone’s projecting they’re in the control seat. When you’re with someone with the ability that Fincher has, then I’m in a support position of helping him make his movie as best he can. Any other way of looking at it is delusional. But, I take a lot of pride in where I do get to contribute. 

Mank is a different style of film than Fincher’s previous projects. Did that change the workflow or add any extra pressure? 

I don’t think it did for me. I think it was harder for David. The film was in his head for so many decades and there were a couple of attempts to make it happen. Obviously a lot changes in that time frame. So, I think he had a lot of internal pressure about what he was making. For me, I found the entire process to be really buoyant and bubbly and just downright fun. 

As with all films, there were moments when it was hard to keep up during the shoot. And definitely moments coming down to that final crunch. That’s when I really put a lot of pressure on myself to deliver cut scenes to David to help him. I felt the pressure of that, but my main memory of it really was one of joy. Not that the other movies aren’t, but I think sometimes the subject matter can control the mood of the day. For instance, in other movies, like Dragon Tattoo, the feeling was a bit like having your head in a vise when I look back at it.

Sure. Dragon Tattoo is dark subject matter. On the other hand, Gary Oldman’s portrayal of Mankiewicz really lights up the screen. It certainly looks like he’s having fun with the character.

Right. I loved all the bungalow scenes. I thought there was so much warmth in those. And I had so much compassion for the lead character, Mank. Those scenes really made me adore him. But also when the flashback scenes came, they’re just a hoot and great fun to put together. There was this warmth and playfulness to the two opposing storylines. No matter which one turned up, I was happy to see it.

Was the inter-cutting of those parallel storylines the way it was scripted? Or was that a construction in post? 

Yes, it was scripted that way. There was a little bit of pulling at the thread later. Can we improve on this? There was a bit of reshuffling later on and then working out that ‘as written’ was the best path. We certainly kicked the tires a few times. After we put the blueprint together, mostly the job became tightening and shortening. 

Obviously one of the technical differences was that this film was a true black-and-white film shot with modified, monochrome RED cameras. So not color and then changed to black-and-white in the grade. Did that impact your thinking in how to tackle the edit?

For the first ten minutes. At first you sit down and you go, “Oh, we work in black and white.” And then you get used to it very quickly. When the trailer was released, I forwarded it to my mother in Australia. She texted back, “It’s black and white????” [laugh] You’ve got to love family!

Black-and-white has a unique look, but I know that other films, like Roma, were shot in color to satisfy some international distribution requirements. 

That’s never going to happen with someone like David. I can’t picture who that person would be that would tell him with any authority that his movie requires color. 

Of course, it matches films of the era and more importantly Citizen Kane. It does bring an intentional, stylistic treatment to the content. 

Black-and-white has got a great way of focusing your attention and focusing your eye. There’s a discipline that’s required with how shots are framed and how you’re using the images for eye travel. But I think all of David’s work comes with that discipline anyway. So to me, it didn’t alter it. He’s already in that ballpark.

In terms of recreating the era, I’ve seen a few articles and comments about creating the backgrounds and sets using visual effects, but also classic techniques, like rear projection. What about the effects in Mank?

As in most of David’s movies, it’s everywhere and a lot of the time it looks invisible, but things are being replaced. I don’t have a ratio for it, but I’d say almost half the movie. We’ve got a team that’s stabilizing shots as we’re going. We’ve got an in-house visual effects team that is building effects, just to let us know that certain choices can be made. The split-screen thing is constant, but I’ll do a lot of that myself. I’ll do a fairly haphazard job of it and then pass it on for our assistant editors to follow up on. Even the montage kaleidoscope effect was all done in-house down the hall by Christopher Doulgeris, one of our VFX artists. A lot of it’s farmed out, but a fair slice is done under our roof.

Please tell me a bit about working with Adobe Premiere Pro again to cut this film.

It’s best for me not even to attempt to answer technical questions. I don’t mind exposing myself as a Luddite. My first assistant editor, Ben Insler, set it up so that I’m able to move the way I want to move. For me, it’s all muscle memory. I’m hitting the same keystrokes that I was hitting back when we were using Avid. Then I crossed those keys over to Final Cut and then over to Premiere Pro.

In previous versions, Premiere Pro required projects to contain copies of all the media used in that project. As you would hand a scene off to other people to work on in parallel, all the media would travel into that new project, and the same was true when combining projects back together to merge your work. You had monstrously huge projects with every piece of media, and frequently duplicate copies of that media, packed into them. They often took 15 minutes to open. Adobe knew fixing that meant a massive overhaul, but they’ve now solved it and streamlined the process. Because it’s functioning, I can now purely concentrate on the thought process of where I’m going in the edit. I’m spoiled with having very technical people around me so that I can exist as a child. [laugh]

How was the color grade handled?

We had Eric Weidt working downstairs at Fincher’s place on Baselight. David is really fortunate that he’s not working in this world of “Here’s three weeks for color. Go into this room each day and where you come out is where you are at.” There’s an ongoing grade that’s occurring in increments and traveling with the job that we’re doing. It’s updated and brought into the cut. We experience editing with it and then it’s updated again and brought back into the cut. So it’s this constant progression.

Let’s talk about project organization. You’ve told me in the past that your method of organizing a selects reel was to string out shots in the order of wide shots, mediums, close-ups, and so on. And then bump up the ones you like. Finally, you’d reduce the choices before those were presented to David as possible selects. Did you handle it the same way on Mank?

Over time, I’ve streamlined that further. I’ve found that if I send something that’s too long while he’s in the middle of shooting that he might watch the first two minutes of it, give me a couple of notes of what he likes and what he doesn’t like, and move on. So, I’ve started to really reduce what I send. It’s more cut scenes with some choices. That way I get the most relevant information and can move forward.

With scenes that are extremely dense, like Louis B. Mayer’s birthday party at Hearst’s, it really is an endless multiple choice of how to tackle it. I’ll often present a few paths. Here’s what it is if I really hold out these wides at the front and I hang back for a bit longer. Here’s what it is if I stay more with Gary [Oldman] listening. It’s not that this take is better than the other take, but more options featuring different avenues and ways to tell the story.

I like working that way, even if it wasn’t for the sake of presenting it to David. I can’t watch a scene that’s that dense and go, “Oh, I know what to do.” I wouldn’t have a clue. I like to explore it. I’ve got to turn the soil and sniff out the truffles and try it all out. And then the answers present themselves. It all just becomes clear. Unfortunately, the world of the editor, regardless of past experiences, is always destined to be filled with labor. There is no shortcut to doing it properly.

With large-scale theatrical distribution out of the question – and the shift to Netflix streaming as the prime focus – did the nature of studio notes change at all? 

David’s generous about thought and opinion, if it’s constructive and helpful. He’s got a long history of forwarding those notes to me and exploring them. I’m not positive if I get all of them. Anything that’s got merit will reach me, which is wise. Having spent so many years in the commercial world, there’s a part of me that’s always a little eager to solve a puzzle. If I’m delivered a pile of notes, good or bad, I’m going to try my best to execute them. So, David is wise to just not let me see the bad ones.

Were you able to finish Mank before the virus-related lockdowns started? Did you have to move to a remote workflow? 

The shooting had finished and we already had the film assembled. I work at a furious rate whilst David’s shooting, so that we can interface during the shoot. That way he knows what he’s captured, what he needs, and he can move on and strike sets, release actors, etc. There’s this constant back and forth.

At the point when he stops shooting, we’re pretty far along in terms of replicating the original plan, the blueprint. Then it’s what I call the sweeps, where you go back to the top and you just start sweeping through the movie, improving it. I think we’d already done one of those when we went remote. So, it was very fortunate timing.

We’re quite used to it. During shooting, we work in a remote way anyway. It’s a language and situation that we’re completely used to. I think from David’s perspective, it didn’t change anything. 

If the timing had been different and you had needed to handle the entire edit under remote conditions, would anything have changed? Or would you have approached it the same way?

Exactly the same. It wouldn’t have changed the amount of time that I get directly with David. I don’t want to give the impression that I cut this movie and David was on the sidelines. He’s absolutely involved, but pops in and out and looks at things that are made. He’s not a director that sits there the whole time. A lot of it is, “I’ve made this cut, let’s watch it together. I’ve done these selects, let’s watch them together.” It’s really possible to do that remotely. 

I prefer to be with David when he’s shooting, especially on this one, which he shot in Los Angeles. I really tried to have one day a week where we got to be together on the weekends, when his world quieted down. David loves that. I would sort of construct my week’s thinking towards that goal. If on a Wednesday I had six scenes that were backed up, I’d sort of think to myself, “What can I achieve in the time frame before David’s with me on Saturday? Should I just select all these scenes and then we’ll go through the selects together? Or should I tackle this hardest one and get a good cut of that going?”

A lot of the time I would choose – if he was coming in and had the time to watch things – to do selects. Sometimes we could bounce through them just from having a conversation of what his intent was and the things that he was excited about when he was capturing them. With that, I’m good to go. Then I don’t need David for another week or so. We were down to the shorthand of one sentence, one email, one text. That can inform me with all the fuel I need to drive cross-country.

The film’s back story clearly has political overtones that have an eerie similarity to 2020. I realize the script was written a while back at a different time, but was some of that context added in light of recent events? 

That was already there. But, it really felt like we were reliving it. In the beginning of the shutdown, you didn’t quite know where it was going to go. The parallels to the Great Depression were extreme. There were a lot of lessons for me.

The character of Louis B. Mayer slashes all of his studio employees’ salaries by 50 percent. He promises to give every penny back and then doesn’t do it. I was crafting that villain’s performance, but at the same time I run a company [Exile Edit] that has a lot of employees in Los Angeles and New York. We had no clue if we would be able to get through the pandemic when it hit. We also asked staff to take a pay cut, so that we could keep everyone employed and keep everybody on health insurance. But the moment we realized we could get through it six months later, there was no way I could ever be that villain. We returned every cent.

I think most companies are set up to be able to exist for four months if everything stops dead – no one’s anticipating that 12-month brake pull. It was really, really frightening. I would hope that I would think this way anyway, but after crafting that villain’s performance, there was no way I was going to replicate it.

***

Mank was released in select theaters in November and launched on Netflix December 4, 2020.

Be sure to check out Steve Hullfish’s podcast interview with Kirk Baxter.

This article was originally written for postPerspective.

©2021 Oliver Peters

Avid’s Hidden Gems

Avid Media Composer offers a few add-on options, but two are considered gems by the editors who rely on them. ScriptSync and PhraseFind are essential for many drama and documentary editors who wield Media Composer keyboards every day. I’ve written about these tools in the past, including how you can get similar functionality in other NLEs. New transcription services, like Simon Says, make them more viable than ever for the average editor.

Driven by the script

Avid’s script-based editing, also called script integration, builds a representation of the script supervisor’s lined script directly into the Avid Media Composer workflow and interface. While often referred to as ScriptSync, Avid’s script integration is actually not the same. Script-based editing and script bins are part of the core Media Composer system and do not cost extra.

The concept originated with the Cinedco Ediflex NLE and migrated to Avid. In the regular Media Composer system, preparing a script bin and aligning takes to that script is a manual process, often performed by assistant editors who are part of a larger editorial team. Because it is labor-intensive, most individual editors working on projects that aren’t major feature films or TV series avoid this workflow.

Avid ScriptSync (a paid option) automates this script bin preparation by aligning the spoken words in a take to the text lines of the written script. It does this using speech recognition technology licensed from Nexidia. This technology is based on phonemes, the sounds that are combined to create spoken words. Clips can be imported (transcoded into Avid MediaFiles) or linked.

Through automatic analysis of the audio within a take, ScriptSync can correlate a line in the script to its relative position within that take or within multiple takes. Once clips have been properly aligned to the written dialogue, ScriptSync is largely out of the picture. And so, in Avid’s script-based editing, the editor can then click on a line of dialogue within the script bin and see all of the coverage for that line.
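
To make the idea concrete, here is a toy sketch of the alignment concept in Python. This is emphatically not Avid's implementation – ScriptSync works at the phoneme level via Nexidia's engine, rather than on recognized words – but it shows the basic job: given word-level timestamps for a take (here from a hypothetical speech-to-text pass), find where each script line begins.

```python
# Toy alignment sketch -- NOT Avid's implementation. ScriptSync matches
# phonemes via Nexidia's licensed engine; this stand-in assumes a
# hypothetical speech-to-text pass yielding word-level timestamps.
import difflib

# Hypothetical ASR output for one take: (word, seconds from clip start)
transcript = [("rosebud", 1.2), ("was", 1.8), ("his", 2.0), ("sled", 2.3),
              ("i", 4.0), ("never", 4.2), ("met", 4.5), ("him", 4.7)]

script_lines = ["Rosebud was his sled.", "I never met him."]

def normalize(text):
    """Lowercase and strip punctuation so the match is purely lexical."""
    return [w.strip(".,!?;:").lower() for w in text.split()]

spoken = [w for w, _ in transcript]

# For each script line, find where its words occur within the spoken take
for line in script_lines:
    words = normalize(line)
    matcher = difflib.SequenceMatcher(None, spoken, words)
    match = matcher.find_longest_match(0, len(spoken), 0, len(words))
    if match.size:
        print(f"{transcript[match.a][1]:>5.1f}s  {line}")
```

The phoneme-based approach accomplishes the same mapping without needing an accurate text transcription of the take first.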

Script integration with non-scripted content

You might think, “Great, but I’m not cutting TV shows and films with a script.” If you work in documentaries or corporate videos built around lengthy interviews, then script integration may have little meaning – unless you have transcripts. Getting long interviews transcribed can be costly and/or time-consuming. That’s where an automated transcription service like Simon Says comes in. There are certainly other, equally good services. However, Simon Says offers export options tailored for each NLE, including Avid Media Composer.

With a transcription available on a fast turnaround, it becomes easy to import an interview transcript into a Media Composer script bin and align clips to it. ScriptSync takes care of the automatic alignment, making script-based editing quick, easy, and painless – even for an individual editor without any assistants.

Finding that needle in the haystack

The second gem is PhraseFind, which builds upon the same Nexidia speech recognition technology. It’s a tool that’s even more essential for the documentary editor than script integration. PhraseFind (a paid option) is a phonetic search tool that analyzes the audio for clips within an Avid MediaFiles folder. Type in a word or phrase and PhraseFind will return a number of “hits” with varying degrees of accuracy.

The search is based on phonemes, so the results are based on words that “sound like” the search term. On the one hand, this means that low-accuracy results may include unrelated finds that merely sound similar. On the other hand, you can enter a search word that is spelled differently or inaccurately, but as long as it sounds the same, useful results will be returned.
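
For the curious, the flavor of “sounds like” matching can be illustrated with the classic Soundex code – a far cruder ancestor of Nexidia's phoneme indexing, shown here only to demonstrate why differently spelled search terms can land on the same hits.

```python
# Classic Soundex (slightly simplified): a toy stand-in for phonetic
# indexing. PhraseFind's Nexidia engine is far more sophisticated.
def soundex(word):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":  # h and w don't break a run of duplicate codes
            prev = code
    return (out + "000")[:4]  # pad/truncate to letter + three digits

# Different spellings, same index key -- the essence of phonetic search:
print(soundex("Smith"), soundex("Smyth"))    # S530 S530
print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```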

PhraseFind is very helpful in editing “Frankenbites.” Those are edits where a sentence is ended in the middle, because the speaker went off on a tangent, or where different phrases are combined to complete a thought. Often you need to find a word that matches your edit point, but with the correct inflection, such as ending a sentence. PhraseFind is great for these types of searches, since your only alternative is scouring through multiple clips in search of a single word.

Working with the options

Script-based editing, ScriptSync, and PhraseFind are unique features that are only available in Avid Media Composer. No other NLE offers similar built-in features. Boris FX does offer Soundbite, a standalone equivalent to the PhraseFind technology licensed to them by Nexidia. It’s still available, but no longer actively promoted or developed. Adobe had offered Story as a way to integrate script-based editing into Premiere Pro. That feature is no longer available. So today, if you want the accepted standard for script and phonetic editing features, then Media Composer is where it’s at.

These are separate add-on options. You can pick one or the other or both (or neither) depending on your needs and style of work. They are activated through Avid Link. If you own multiple seats of Media Composer, then you can purchase one license of ScriptSync and/or PhraseFind and float them between Media Composers via Avid Link activation. While these tools aren’t for everyone, they do offer a new component to how you work as an editor. Many who’ve adopted them have never looked back.

©2020, 2021 Oliver Peters

Avid Media Composer 2020

Avid Media Composer has been at the forefront of nonlinear, digital video editing for three decades. While most editors and audio mixers know Avid for Media Composer and Pro Tools, the company has grown considerably in that time. Whether by acquisition or internal development, Avid Technology encompasses such products as storage, live and post mixing consoles, newsroom software, broadcast graphics, asset management, and much more.

In spite of this diverse product line, Media Composer, as well as Pro Tools, continue to be the marquee products that define the brand. Use the term “Avid” and generally people understand that you are talking about Media Composer editing software. If you are an active Media Composer editor, then most of this article will be old news. But if you are new to Media Composer, read on.

The Media Composer heritage

Despite challenges from other NLEs, such as Final Cut Pro, Final Cut Pro X, Premiere Pro, and DaVinci Resolve, Media Composer continues to be the dominant NLE for television and feature film post around the world. Even in smaller broadcast markets and social media work, it’s not a given that the other options are used exclusively. If you are new to the industry and intend to work in one of the major international media hubs, then knowing the Media Composer application is helpful and often required.

Media Composer software comes in four versions, ranging from Media Composer | First (free) up to Media Composer Enterprise. Most freelance editors will opt for one of the two middle options: Media Composer or Media Composer | Ultimate. Licenses may be “rented” via a subscription or bought as a perpetual license. The latter includes a year of support with a renewal at the end of that year. If you opt not to renew support, then your Media Composer software will be frozen at the last valid version issued within that year; but it will continue to work. No active internet connection or periodic sign-in is required to use Media Composer, so you could be off the grid for months and the software works just fine.

A Media Composer installation is full-featured, including effects, audio plug-ins, and background rendering software. Depending on the version, you may also receive loyalty offers (free) for additional software from third-party vendors, like Boris FX, NewBlueFX, iZotope, and Accusonus.

Avid only offers three add-on options for Media Composer itself: ScriptSync, PhraseFind, and Symphony. Media Composer already incorporates manual script-based editing. Plain text script documents can be imported into a special bin and clips aligned to sentences and paragraphs in that script. Synchronization has to be done manually to use this feature. The ScriptSync option saves time – automating the process by phonetically analyzing and syncing clips to the script text. Click on a script line and any corresponding takes can be played starting from that point within the scene.

The PhraseFind option is a phonetic search engine, based on the same technology as ScriptSync. It’s ideal for documentary and reality editors. PhraseFind automatically indexes the phonetics of the audio for your clips. Search by a word or phrase and all matching instances will appear, regardless of actual spelling. You can dial in the sensitivity to find only the most accurate hits, or cast a broader net in cases where dialogue is hard to hear or heavily accented.

Media Composer includes good color correction, featuring wheels and curves. In fact, Avid had this long before other NLEs. The Symphony option expands the internal color correction with more capabilities, as well as a full color correction workflow. Grade clips by source, timeline, or both. Add vector-based secondary color correction and more. Symphony is not as powerful as Baselight or Resolve, but you avoid any issues associated with roundtrips to other applications. That’s why it dominates markets where turnaround time is critical, like finishing for non-scripted (“reality”) TV shows. A sequence from a Symphony-equipped Media Composer system can still be opened on another Media Composer workstation that does not have the Symphony option. Clips play fine (no “media offline” or “missing plug-in” screen); however, the editor cannot access or alter any of the color correction settings specific to Symphony.

Overhauling Media Composer

When Jeff Rosica took over as CEO of Avid Technology in 2018, the company embraced an effort to modernize Media Composer. Needless to say, that’s a challenge. Any workflow or user interface changes affect familiarity and muscle memory. This is made tougher in an application with a loyal, influential, and vocal customer base. An additional complication for every software developer is keeping up with changes to the underlying operating system. Changes from Windows 7 to Windows 10, or from macOS High Sierra to Mojave to Catalina, all add their own peculiar speed bumps to the development roadmap.

For example, macOS Catalina is Apple’s first fully 64-bit operating system. Apple dropped the 32-bit QuickTime library components that developers used to support certain codecs. Of course, this change impacted Media Composer. Without Apple rewriting 64-bit versions of these legacy components, the alternative is for a developer to add their own support back into the application, which Avid has had to do. Unfortunately, this introduces some inevitable media compatibility issues between older and newer versions of Media Composer. Avid is not alone in this.

Nevertheless, Media Composer changes aren’t just cosmetic, but also involve many “under the hood” improvements. These include a 32-bit float color pipeline, support for ACES projects, HDR support, dealing with new camera raw codecs, and the ability to read and write ProRes media on both macOS and Windows systems.

Avid Media Composer 2020.10

Avid bases its product version numbers on the year and month of release. Media Composer 2020.10 – the most recent version as of this writing – was just released. The versions prior to that were Media Composer 2020.9 and 2020.8, released in September and August respectively. But before that it was 2020.6 from June, skipping .7. (Some of the features that I will describe were introduced in earlier versions and are not necessarily new in 2020.10.)

Media Composer 2020.10 is fully compatible with macOS Catalina. Due to the need to shift to a 64-bit architecture, the AMA framework – used to access media using non-Avid codecs – has been revamped as UME (Universal Media Engine). Also the legacy Title Tool has been replaced with the 64-bit Titler+.

If you are a new Media Composer user or moving to a new computer, then several applications will be installed. In addition to the Media Composer application and its built-in plug-ins and codecs, the installer will add Avid Link to your computer. This is a software management tool to access your Avid account, update software, activate/deactivate licenses, search a marketplace, and interact with other users via a built-in social component.

The biggest difference for Premiere Pro, Resolve, or Final Cut Pro X users who are new to Media Composer is understanding the Avid approach to media. Yes, you can link to any compatible codec, add it to a bin, and edit directly with it – just like the others. But Avid is designed for and works best with optimized media.

This means transcoding the linked media to MXF-wrapped Avid DNxHD or DNxHR media. This media can be OPatom (audio and video as separate files) or OP1a (interleaved audio/video files). It’s stored in an Avid MediaFiles folder located at the root level of the designated media volume. That’s essentially the same process adopted by Final Cut Pro X when media is transcoded and placed inside an FCPX Library file. The process for each enables a bullet-proof way to move project files and media around without breaking links to that media.
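
For readers who think in command-line terms, the gist of creating optimized media can be sketched with ffmpeg. To be clear, Media Composer performs this transcode internally; the sketch below, with a hypothetical clip name, is only an illustration of the wrapper and codec choices involved.

```python
# Hypothetical illustration of an "optimized media" transcode via ffmpeg.
# Media Composer does this itself; this just shows the idea of rewrapping
# camera media as MXF with DNxHR video and uncompressed PCM audio.
import subprocess

def make_optimized(src, dst_mxf):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_sq",  # DNxHR Standard Quality
        "-c:a", "pcm_s24le",                        # 24-bit PCM audio
        dst_mxf,                                    # .mxf extension selects the MXF wrapper
    ], check=True)

make_optimized("A001_C003_0815.mov", "A001_C003_0815.mxf")  # hypothetical clip
```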

The second difference is that each Avid bin within the application is also a dedicated data file stored within the project folder on your hard drive. Bins can be individually locked (under application control). This facilitates multiple editors working in a collaborative environment. Adobe adopted an analog of this method in their new Adobe Productions feature.

The new user interface

Avid has always offered a highly customizable user interface. The new design, introduced in 2019, features bins, windows, and panels that can be docked, tabbed, or floated. Default workspaces have been streamlined, but you can also create your own. A unique feature compared to the competing NLEs is that open panes can be slid left or right to move them off of the active screen. They aren’t actually closed, but compacted into the side of the screen. Simply slide the edge inward again to reveal that pane.

One key to Avid’s success is that the keyboard layout, default workspaces, and timeline interactions tend to be better focused on the task of editing. You can get more done with fewer keystrokes. In all fairness, Final Cut Pro X also shares some of this, if you can get comfortable with their very different approach. My point is that the new Media Composer workspaces cover most of what I need and I don’t feel the need for a bunch of custom layouts. I also don’t feel the need to remap more levels of custom keyboard commands than what’s already there.

Media Composer for Premiere and Final Cut editors

My first recommendation is to invest in a custom Media Composer keyboard from LogicKeyboard or Editors Keys. Media Composer mapping is a bit different than the Final Cut “legacy” mapping that many NLEs offer. It’s worth learning the standard Media Composer layout. A keyboard with custom keycaps will be a big help.

My second recommendation is to learn all about Media Composer’s settings (found under Preferences and Settings). There are a LOT of them, which may seem daunting at first. Once you understand these settings, you can really customize the software just for you.

Getting started

Start by establishing a new project from the projects panel. Projects can be saved to any available drive and do not have to be in a folder at the root level. When you create a new project, you are setting the format for frame size, rate, and color space. All sequences created inside of this project will adhere to these settings. However, other sequences using different formats can be imported into any project.

Once you open a project, Media Composer follows a familiar layout of bins, timeline, and source/record windows. There are three normal bin views – frame, column, and storyboard – plus script-based editing (if you use it). In column view, you may create custom columns as needed. Clips can be sorted and filtered based on the criteria you pick. In the frame view, clips can be arranged in a freeform manner, which many film editors really like.

The layout works on single and dual-monitor set-ups. If you have two screens, it’s easy to spread out your bins on one screen in any manner you like. But if you only have one screen, you may want to switch to a single viewer mode, which then displays only the record side. Click a source clip from a bin and it opens in its own floating window. Mark in/out, make the edit, and close. I wish the viewer would toggle between source and record, but that’s not the case, yet.

Sequences

Media Composer does not use stacked or tabbed sequences, but there is a history pulldown for quick access to recent sequences and/or source clips. Drag and load any sequence into the source window and toggle the timeline view between the source or the record side. This enables easy editing of portions from one sequence into another sequence.

Mono and stereo audio tracks are treated separately on the timeline. If you have a clip with left and right stereo audio on two separate channels (not interleaved), then these will cut to the timeline as two mono tracks with a default pan setting to the middle for each. You’ll need to pan these tracks back to left and right in the timeline. If you have a clip with interleaved, stereo audio, like a music cue, it will be edited to a new interleaved stereo track, with default stereo panning. You can’t mix interleaved stereo and mono content onto the same timeline track.

Effects

Unlike in other NLEs, timeline clips are only modified when a specific effect is applied. When clips of a different format than the sequence format are cut to the timeline, a FrameFlex effect is automatically applied for transform and color space changes. There is no persistent Inspector or Effects Control panel. Instead, you have to select a clip with an effect applied to it and open the effect mode editor. While this may seem more cumbersome, the advantage is that you won’t inadvertently change the settings of one clip thinking that another has been selected.

Media Composer installs a fair amount of video and audio plug-ins, but for more advanced effects, I recommend augmenting with BorisFX’s Continuum Complete or Sapphire. What is often overlooked is that Media Composer does include paint, masking, and tracking tools. And, if you work on stereo 3D projects, Avid was one of the first companies to integrate a stereoscopic toolkit into Media Composer.

The audio plug-ins provide a useful collection of filters for video editors. These plug-ins come from the Pro Tools side of the company. Media Composer and Pro Tools use the AAX plug-in format; therefore, no AU or VST audio plug-ins will show up inside Media Composer.

Due to the 64-bit transition, Avid dropped the legacy Title Tool and Marquee titler, and wrote the new Titler+. Honestly, it’s not as intuitive as it should be and took some time for me to warm up to it. Once you play with it, though, the controls are straightforward. It includes roll and crawl options, along with keyframed moves and tracking. Unfortunately, there are no built-in graphics templates.

Trimming

When feature film editors are asked why they like Media Composer, the trim mode is frequently at the top of the list. The other NLEs offer advanced trimming modes, but none seems as intuitive to use as Avid’s. Granted, you don’t have to stick with the mouse to use them, but I definitely find it easier to trim by mouse in Premiere or Final Cut.

Trimming in Media Composer is geared towards fluid keyboard operation. I find that when I’m building up a sequence, my flow is completely different in Media Composer. Some will obviously prefer the others’ tools and, in fact, Media Composer’s smart keys enable mouse-based trimming, too. It’s certainly preference, but once you get comfortable with the flow and speed of Media Composer’s trim mode, it’s hard to go to something else.

Avid’s journey to modernize Media Composer has gone surprisingly well. If anything, the pace of feature enhancements might be too incremental for users wishing to see more radical changes. For now, there hasn’t been too much resistance from the old guard and new editors are indeed taking a fresh look. Whether you are cutting spots, social media, or indie features, you owe it to yourself to take an objective look at Media Composer as a viable editing option.

To get more familiar with Media Composer, check out Kevin P. McAuliffe’s Let’s Edit with Media Composer tutorial series on YouTube.

Originally written for Pro Video Coalition.

©2020 Oliver Peters

Drive – Postlab’s Virtual Storage Volume

Postlab is the only service designed for multi-editor, remote collaboration with Final Cut Pro X. It works whether you have a team collaborating on-premises within a facility or spread out at various locations around the globe. Since the initial launch, Hedge has also extended Postlab’s collaboration to Premiere Pro.

When using Postlab, projects containing Final Cut Pro X libraries or Premiere Pro project files are hosted on Hedge’s servers. But, the media lives on local drives or shared storage and not “in the cloud.” When editors work remotely, media needs to be transferred to them by way of “sneakernet,” Hightail, WeTransfer, or other methods.

Hedge has now solved that media issue with the introduction of Drive, a virtual storage volume for media, documents, and other files. Postlab users can utilize the original workflow and continue with local media – or they can expand remote capabilities with the addition of Drive storage. Since it functions much like Dropbox, Drive can also be used by team members who aren’t actively engaged in editing. As a media volume, files on Drive are also accessible to Avid Media Composer and DaVinci Resolve editors.

Drive promises significantly better performance than a general business cloud service, because it has been fine-tuned for media. The ability to use Drive is included with each Postlab plan; but, storage costs are based on a flat rate per month for the amount of storage you need. Unlike other cloud services, there are no hidden egress charges for downloads. If you only want to use Drive as a single user, then Hedge’s Postlab Solo or Pro plan would be the place to start.

How Drive works

Once Drive storage has been added to an account, each team member simply needs to connect to Drive from the Postlab interface. This mounts a Drive volume on the desktop just like any local hard drive. In addition, a cache file is stored at a designated location. Hedge recommends using a fast SSD or RAID for this cache file. NAS or SAN network volumes cannot be used.

After the initial set-up, the operation is similar to Dropbox’s Smart Sync function. When an editor adds media to the local Drive volume, that media is uploaded to Hedge’s cloud storage. It will then sync to all other editors’ Drive volumes. Initially those copies of the media are only virtual. The first time a file is played by a remote team member, it is streamed from the cloud server. As it streams, it is also being added to the local Drive cache. Every file that has been fully played is then stored locally within the cache for faster access in the future.
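
In programming terms, this is a read-through cache. Here is a bare-bones sketch of the pattern as I understand it – an assumption about the behavior, not Hedge's actual code.

```python
# Minimal read-through cache sketch (an assumption, not Hedge's code):
# first access streams the file and keeps a local copy; later access
# is served straight from the cache.
import os

CACHE_DIR = "/Volumes/FastSSD/DriveCache"  # hypothetical cache location

def fetch(remote_path, download_func):
    """Return a local path for remote_path, caching it on first access."""
    local = os.path.join(CACHE_DIR, remote_path.lstrip("/"))
    if not os.path.exists(local):  # cache miss: stream and keep
        os.makedirs(os.path.dirname(local), exist_ok=True)
        with open(local, "wb") as f:
            download_func(remote_path, f)  # e.g. a chunked HTTP download
    return local  # subsequent calls are instant cache hits
```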

Hedge feels that latency is at least as important as outright connection speed for a fluid editing experience. They recommend wired, rather than wi-fi, internet connections. However, I tested the system using wi-fi with office speeds of around 575Mbps down / 38Mbps up. This is a business connection and was fast enough to stream 720p MP4 and 1080p ProRes Proxy files with minimal hiccups on the initial streamed playback. Naturally, after a file was locally cached, access was instantaneous.

From the editor’s point of view, virtual files still appear in the FCPX event browser as if local and the timeline is populated with clips. Files can also be imported or dragged in from Drive as if they are local. As you play the individual clips or the timeline from within FCPX or Premiere, the files become locally cached. All in all, the editing experience is very fluid.

In actual practice

The process works best with lightweight, low-res files and not large camera originals. That is possible, too, of course, but not very efficient. Drive and the Hedge servers support most common media files, but not a format like REDCODE raw. As before, each editor will need to have the same effects, LUTs, Motion templates, and fonts installed for proper collaboration.

I did run into a few issues, which may be related to the recent 10.4.9 Final Cut update. For example, the built-in proxy workflow is not very stable. I did get it to work. Original files were on a NAS volume (not Drive) and the generated proxies (H.264 or ProRes Proxy) were stored on the Drive volume of the main system. The remote editing system would only get the proxies, synced through Drive. In theory that should work, but it was hit or miss. When it worked, some LUTs, like the standard ARRI Log-C LUTs, were not applied on the remote system in proxy mode. Also the “used” range indicator lines for the event browser clips were present on the original system, but not the remote system. Other than these few quirks, everything was largely seamless.

My suggested workflow would be to generate editing proxies outside of the NLE and copy those to Drive. H.264 or ProRes Proxy with matching audio configurations to the original camera files work well. Treat these low-res files as original media and import them into Final Cut Pro X or Premiere Pro for editing. Once the edit is locked, go to the main system and transfer the final sequence to a local FCPX Library or Premiere Pro project for finishing. Relink that sequence to the original camera files for grading and delivery. Alternatively, you could export an FCPXML or XML file for a Resolve roundtrip.
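
Assuming ffmpeg is the tool for that outside-the-NLE proxy pass, the batch step might look like the sketch below. Paths, sizes, and settings are illustrative only.

```python
# Hypothetical proxy-generation pass with ffmpeg. Keeping the audio
# stream layout identical to the camera originals is what lets the
# locked cut relink cleanly for the finish.
import pathlib
import subprocess

def make_proxy(src: pathlib.Path, proxy_dir: pathlib.Path):
    dst = proxy_dir / (src.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-map", "0:v", "-map", "0:a",                # carry over all video and audio streams
        "-c:v", "prores_ks", "-profile:v", "proxy",  # ProRes Proxy picture
        "-vf", "scale=-2:720",                       # lightweight 720p frame size
        "-c:a", "pcm_s16le",                         # keep discrete PCM audio tracks
        str(dst),
    ], check=True)

# Illustrative volume and folder names only:
for clip in pathlib.Path("/Volumes/Camera/A001").glob("*.mov"):
    make_proxy(clip, pathlib.Path("/Volumes/Drive/Proxies"))
```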

One very important point to know is that the entire Postlab workflow is designed around team members staying logged into the account. This maintains the local caches. It’s OK to quit the Postlab application, plus eject and reconnect the Drive volume. However, if you log out, those local caches for editing files and Drive media will be flushed. The next time you log back in, connection to Drive will need to be re-established, Drive information must be synced again, and clips within FCPX or Premiere Pro will have to be relinked. So stay logged in for the best experience.

Additional features

Thanks to the Postlab interface, Drive offers features not available for regular hard drives. For example, any folder within Drive can be bookmarked in Postlab. Simply click on a Bookmark to directly open that folder. The Drop Off feature lets you generate a URL with an expiration date for any Bookmarked folder. Send that link to any non-team member, such as an outside contributor or client, and they will be able to upload additional media or other files to Drive. Once uploaded to Hedge’s servers, those files show up in Drive within the folder and will be synced to all team members.

Hedge offers even more features, including Mail Drop, designed for projects with too much media to efficiently upload. Ship Hedge a drive to copy dailies straight into their servers. Pick Up is another feature still in development. When updated, you will be able to select files on Drive, generate a Pick Up link, and send that to your client for download.

Editing with Drive and Postlab makes remote collaboration nearly like working on-site. The Hedge team is dedicated to expanding these capabilities with more services and broader NLE support. Given the state of post this year, these products are at the right time and place.

Check out this Soho Editors masterclass in collaboration using Postlab and Drive.

Originally written for FCP.co.

©2020 Oliver Peters

COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratic prime minister, Mohammad Mossadegh, with the absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor, Walter Murch. What was originally envisioned as a six-month edit turned into a four-year odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover along with Amirani the missing Darbyshire interview transcript, which adds an air of a whodunit to the film. Ultimately what sets COUP 53 apart was the good fortune to get Ralph Fiennes to portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography, including working with documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) The film posed another challenge for Murch, who is known for his willingness to try out different editing platforms. It was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off-the-record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you had the ability to not see any blank spots. Although there was a quality shift when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instance it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my FileMaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did all of the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews, and when I got it, all of the End of Empire material.

We had four, 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoe string, which meant that the time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half, in four takes, at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We had a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback, dove back in, and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smartphone. When coups like this happen today, you get thousands of points of view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to take the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian-language dailies. He would flag the important moments and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone who was available would work on the additional workstation and add subtitling. That would then come to me and I would use it as raw material.
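
The interview doesn’t specify the subtitle tooling, but the hand-off is easy to sketch. Assuming each translated segment arrives as a start time, an end time, and an English line, a few lines of Python can write the standard SubRip (.srt) file an editor would import; the segment text below is placeholder, not actual translation.

```python
# A sketch of the hand-off, not the actual COUP 53 tooling: write
# translated segments out as a standard SubRip (.srt) subtitle file.

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(segments, path):
    """segments: iterable of (start_sec, end_sec, text) tuples."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, 1):
            f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

# Placeholder lines only – not real translation from the film.
segments = [
    (12.0, 15.5, "Sample translated line one."),
    (15.5, 19.0, "Sample translated line two."),
]
write_srt(segments, "interview_translated.srt")
```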

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert who was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ It was like what they say about people with aphasia: they don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important as the words they are using, and sometimes more important.

[OP] I noticed in the credits that you had three associate editors on the project. Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.
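
To make the stem-leveling step concrete, here is a rough Python/NumPy sketch of the idea: the one gain envelope keyframed against the dialogue in the mixdown is applied to every stem, and each stem then gets its own trim. All of the numbers are invented; the real automation lived in Premiere Pro’s rubber bands, not in a script.

```python
# A rough sketch, not production code: apply a shared gain envelope
# (keyframed against the dialogue) to an instrument stem, then add a
# per-stem trim. All keyframe values here are invented.
import numpy as np

SAMPLE_RATE = 48_000
t = np.arange(0, 10, 1 / SAMPLE_RATE)            # ten seconds of timeline

# Envelope keyframes copied from the rough mixdown: (time_sec, gain).
key_times = np.array([0.0, 2.0, 4.0, 7.0, 10.0])
key_gains = np.array([1.0, 0.4, 0.4, 0.9, 0.9])  # duck under dialogue at 2-4s
envelope = np.interp(t, key_times, key_gains)

def apply_levels(stem: np.ndarray, trim_db: float) -> np.ndarray:
    """Shared mixdown envelope first, then this stem's own trim in dB."""
    trim = 10 ** (trim_db / 20)
    return stem * envelope * trim

strings = 0.1 * np.random.randn(len(t))          # placeholder audio for one stem
strings_leveled = apply_levels(strings, trim_db=-3.0)
```

Repeating that for each of the 22 stems is, in effect, a music mix: the balance decisions are already baked in before the material reaches the dub stage.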

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a rough six months, because a lot of bugs came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So those duplicate files would be imported again. I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.
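
The bug itself lived inside Premiere Pro’s project files, but the general chore – spotting byte-identical duplicates of media already on the shared storage – is simple to illustrate. A hypothetical Python sketch, assuming the clips resolve to ordinary files on a mount like /Volumes/shared_media:

```python
# Illustrative only: find byte-identical media duplicates by hash.
# The real 'shrapnel' lived inside Premiere Pro project files; this
# only shows the general dedup idea on a hypothetical mount point.
import hashlib
from collections import defaultdict
from pathlib import Path

def file_digest(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_duplicates(root: str) -> dict:
    """Group every .mov under root by content hash; keep groups > 1."""
    by_hash = defaultdict(list)
    for p in Path(root).rglob("*.mov"):
        by_hash[file_digest(p)].append(p)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for digest, paths in find_duplicates("/Volumes/shared_media").items():
    print(digest[:12], *paths, sep="\n  ")
```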

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system of the colorist we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export to Pro Tools was very transparent.
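
The interview doesn’t say which interchange format carried the cut between Premiere Pro, Resolve, and Pro Tools; such roundtrips usually travel as XML, AAF, or an EDL rather than as media. Purely as an illustration of the kind of cut list being exchanged, here is a minimal CMX3600-style EDL writer in Python, with invented reel names and timecodes.

```python
# Hypothetical illustration – the interview doesn't specify the format.
# Write two cut events as a minimal CMX3600-style EDL: event number,
# reel, track (V), transition (C = cut), source in/out, record in/out.

def edl_event(num, reel, src_in, src_out, rec_in, rec_out):
    return f"{num:03d}  {reel:<8} V     C        {src_in} {src_out} {rec_in} {rec_out}"

lines = [
    "TITLE: COUP53_CONFORM",
    "FCM: NON-DROP FRAME",
    edl_event(1, "A001", "01:00:10:00", "01:00:20:00", "00:00:00:00", "00:00:10:00"),
    edl_event(2, "A002", "02:03:00:00", "02:03:04:12", "00:00:10:00", "00:00:14:12"),
]
print("\n".join(lines))
```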

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly in dramatic films. How does that translate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film, you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is quite different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.


You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters