Audio Plug-ins for Editors – Part 3

FabFilter Software Instruments

There are plenty of paid and free audio plug-ins on the market. They all fit into the good, the bad, or the ugly categories – some great, some not so much. One of the better developers of modern audio plug-in effects is FabFilter Software Instruments in the Netherlands. While FabFilter products are known and respected in the music recording industry, they are not as well known among video editors. Any of their plug-ins would make a great addition to your toolkit, but the one that I felt was the best fit for a video editor is their Pro-L 2 limiter.

An audio limiter, just like a broadcast-safe video limiter, is typically used as a mastering tool applied at the last stage of the audio chain. You can certainly use a limiter on an individual track, like drums in a recording session or a voice-over in a video mix. However, limiting is most often applied to the final output – the master or mix output bus. While a limiter is really just a variant of a regular compressor, it is optimized to catch and restrict all peak levels and transients in order to make sure that your mix is compliant with a given loudness target.

FabFilter Pro-L 2 Limiter

Like most third-party plug-ins, the Pro-L 2 limiter installs as an AAX, AU, and/or VST/VST3 plug-in and so is compatible with most DAWs and NLEs. FabFilter plug-ins use a license key activation code after installation, so there’s no need to mess with separate license management applications or a physical iLok hardware key. I tested the Pro-L 2 limiter in various applications and performance and behavior were great, even in Final Cut Pro, which has lately been touchy for me when using some third-party audio effects.

At first glance, the Pro-L 2 limiter might seem like most other limiter filters, but looks can be deceiving. This plug-in is rather deep with many nuanced adjustments that are easy to overlook. The good news is that FabFilter has done a good job with video tutorials and both an online and PDF user guide.

There are three big selling points for me. First, Pro-L 2 supports various mix configurations – not just mono and stereo, but also surround, including Dolby Atmos. Second, there’s built-in loudness metering. This includes an earlier K-system metering method (developed in the late 90s by noted mastering engineer Bob Katz), as well as current ATSC and EBU loudness scales. Finally, it’s the sound. You can drive the input truly hard into gain reduction and the audio stays extremely smooth-sounding without coming across as heavily compressed or distorted.
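As an aside, if you ever want to sanity-check a finished mix against one of those loudness targets outside of a plug-in, it only takes a few lines of Python. This is just a rough sketch – it assumes the third-party soundfile and pyloudnorm packages and a hypothetical file name – but any BS.1770-style meter works the same way:

```python
# Minimal loudness check - assumes the third-party soundfile and pyloudnorm
# packages are installed (pip install soundfile pyloudnorm).
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0  # EBU R128 broadcast target; streaming specs differ

data, rate = sf.read("final_mix.wav")        # hypothetical mix file
meter = pyln.Meter(rate)                     # BS.1770 K-weighted meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS

print(f"Integrated loudness:  {loudness:.1f} LUFS")
print(f"Gain to reach target: {TARGET_LUFS - loudness:+.1f} dB")
```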

Interface

The Pro-L 2 user interface is well-designed with several size options, including full screen, as well as a compact mode that hides the audio waveform graph. Metering can be changed from standard (input, output, gain reduction meters) to full loudness. Several of the components, like the advanced control panel and output gain knob, are fly-out panels that might not be readily obvious until you get used to the plug-in. As this is a minimalist UI design, other controls, like oversampling and true peak limiting, are enabled by small control buttons along the bottom.

One UI tool that I really liked was the lock icon. When this is unlocked (disabled), the input and output gain levels reset every time you switch between limiter algorithms or presets, which makes it harder to compare settings. However, when it’s enabled, the gain levels are “locked” as you toggle through the options.

One final UI feature to note is that you have control over the waveform scrolling method. The display represents audio levels, gain reduction, and peaks. There are four scrolling modes depending on how you prefer to see the waveform being drawn onto the screen.

Operation

The key to the FabFilter Pro-L 2 limiter is how it handles sound. There are numerous presets and eight limiter algorithms designed with distinct character depending on the type of audio you are processing. The last four (Aggressive, Modern, Bus, and Safe) were added in version two of the limiter. So whether you want something with a little crunch or totally transparent results, this limiter offers you choices.

The general operating controls are similar to other compressors and limiters. There are input and output gain controls – the combination of which determines the amount of gain reduction (limiting). Attack and release controls affect how quickly limiting kicks in and how long it holds on afterwards. In addition to lookahead (how far ahead the software looks for predicted peaks), there is also an oversampling control, which may be CPU intensive. The reconstructed analog waveform can peak between the regular digital sample points, so these fast peaks can be missed by a limiter that only looks at the sample values. Oversampling is a technique to catch and process those inter-sample peaks.
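If you’re curious why oversampling matters, here’s a tiny illustrative sketch (my own, not FabFilter’s algorithm) using NumPy and SciPy. It builds a test tone whose crests fall between sample points, then compares the plain sample peak with the true peak measured on a 4x oversampled copy:

```python
# Illustrative true-peak check: upsample 4x and compare peaks.
# Assumes NumPy and SciPy.
import numpy as np
from scipy.signal import resample_poly

fs = 48000
n = np.arange(4800)
# A sine at fs/4 with a 45-degree phase offset is the classic case where every
# sample lands between the waveform's crests.
x = 0.99 * np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

sample_peak = np.max(np.abs(x))
true_peak = np.max(np.abs(resample_poly(x, up=4, down=1)))  # 4x oversampling

print(f"Sample peak: {20*np.log10(sample_peak):6.2f} dBFS")
print(f"True peak:   {20*np.log10(true_peak):6.2f} dBTP")
# The roughly 3 dB difference is an inter-sample peak a naive limiter would miss.
```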

Channel linking is another powerful tool. Generally, a plug-in is going to process the left and right sides of a stereo signal equally. But what if your track has harder peaks on one side or the other? That’s where the channel linking controls come into play. The Transient control knob alters the amount of linking on short transients. At 100%, both sides are limited equally; from there you can dial down the percentage of linking. When working with surround, these control knobs change to add functionality for the C (center) and LFE (subwoofer) channels. When those buttons are engaged, the C and LFE channels are integrated into the linking process.
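To picture what linking does, here’s a rough sketch of the general idea – purely illustrative and not FabFilter’s implementation – where each channel’s gain reduction is blended toward the deepest reduction according to a link percentage:

```python
# Illustrative stereo channel linking, assuming per-channel gain reduction
# (in dB, negative = reduction) has already been computed by the limiter.
def link_gain_reduction(gr_left_db, gr_right_db, link_pct):
    """Blend each channel's reduction toward the deepest of the two.

    link_pct = 100 -> both channels get identical (fully linked) reduction.
    link_pct = 0   -> each channel is limited independently.
    """
    linked = min(gr_left_db, gr_right_db)      # deepest reduction wins
    mix = link_pct / 100.0
    left = (1 - mix) * gr_left_db + mix * linked
    right = (1 - mix) * gr_right_db + mix * linked
    return left, right

# Example: a hard peak on the left channel only.
print(link_gain_reduction(-6.0, -1.0, 100))   # (-6.0, -6.0) fully linked
print(link_gain_reduction(-6.0, -1.0, 50))    # (-6.0, -3.5) partially linked
```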

One feature that is supported by most DAWs, but not by most NLEs, is side-chaining. This is a method by which the dynamics of one track control the compression/limiting being applied to a different track. For example, you could apply the limiter to a music track, but use a voice-over track as the side-chain input. This technique can be used to duck the music under the voice every time the person speaks.
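Conceptually, side-chain ducking just means following the level of one track and using it to pull down the gain of another. Here’s a minimal NumPy sketch of that idea, with hypothetical file names and settings – not how Pro-L 2 or any particular DAW implements it:

```python
# Minimal side-chain ducking sketch: the voice-over's envelope controls the
# music's gain. Assumes NumPy plus the third-party soundfile package, and
# hypothetical mono WAV files of equal length and sample rate.
import numpy as np
import soundfile as sf

music, fs = sf.read("music.wav")
voice, _ = sf.read("voiceover.wav")

# Crude envelope follower: rectified signal smoothed over ~50 ms.
win = int(0.05 * fs)
envelope = np.convolve(np.abs(voice), np.ones(win) / win, mode="same")

# Duck the music by up to 12 dB while the voice is active.
duck_db = -12.0 * np.clip(envelope / (envelope.max() + 1e-9), 0.0, 1.0)
ducked_music = music * (10.0 ** (duck_db / 20.0))

sf.write("music_ducked.wav", ducked_music, fs)
```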

Honestly, I’m not a huge fan of music ducking in the first place, because I don’t think it sounds good compared to riding the levels manually. However, it is available. I tested this with the Pro-L 2 in Logic Pro. Quite frankly, using the same process and the native Logic Pro compressor yielded more pleasing results. That’s not surprising. Although compressors and limiters are audio cousins, they do process audio a bit differently. Since it’s not a method I use anyway, it wasn’t a big deal, but still worth noting.

Conclusion

FabFilter Pro-L 2 offers a lot of depth and you really need to go through the user guide to fully appreciate its intricacies. That being said, it’s super easy to use. But for me, the quality of the sound is the key. I was impressed with how hard I could drive it when I needed to and still maintain good sound and proper loudness levels. That makes it worth the price of admission.

As a developer, FabFilter Software Instruments seems to be on top of things. If you are a Mac user, these plug-ins are already Apple Silicon-compatible. That’s not true of every audio plug-in maker. If, like me, you work across multiple NLEs, then it’s nice to have a consistent set of plug-ins that work and sound the same regardless of which NLE you’re working in. FabFilter Pro-L 2 definitely fits that bill.

In Part 4 of this series, I’ll take a look at some of the free filter options on the market.

Click here to read Part 1 and Part 2 of this series.

©2021 Oliver Peters

Audio Plug-ins for Editors – Part 2

In the previous post I presented an overview of common plug-ins. Two types used in nearly every project are equalization and compression, which tame volume levels and sculpt the sound. Let’s take a closer look at how each operates.

Equalizers (EQs)

All equalizer plug-ins work with several common controls. Some EQ models have more features, but the general concepts are the same. An equalizer will boost (raise) or cut (lower) the volume of a specific frequency within the sound spectrum of the track. Some EQs feature only a single control point, while others include more – three, four, or even an unlimited number.

As you boost or cut volume, the audio frequencies around the control frequency are also progressively raised or lowered in what is presented as a bell-shaped curve on a frequency graph. The width of this bell is set by the Q value. As you widen the curve – by lowering the Q – more of the surrounding frequencies are also affected. A higher Q – a tighter curve – results in a more surgical adjustment. An extremely tight (high) Q value is often referred to as a notch, because you are only affecting that frequency and very little else. Notch settings and separate notch filters are often used to remove or reduce specific annoying background sounds in the audio.
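For the technically curious, here’s a small sketch of a single peaking (bell) band based on the widely published Audio EQ Cookbook biquad formulas – a generic textbook design, not any particular plug-in’s code. It shows how the Q value sets the width of the bell:

```python
# Peaking EQ band (bell) from the Audio EQ Cookbook biquad formulas.
# Higher Q = narrower bell; a deep cut with a very high Q acts as a notch.
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q):
    """Return biquad coefficients (b, a) for a peaking boost/cut."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
for q in (0.7, 2.0, 8.0):                       # wide, medium, surgical
    b, a = peaking_eq(fs, f0=3000, gain_db=+6.0, q=q)
    w, h = freqz(b, a, worN=2048, fs=fs)
    bw = w[20 * np.log10(np.abs(h)) > 3.0]      # span boosted by more than 3 dB
    print(f"Q={q:>3}: ~{bw[-1]-bw[0]:.0f} Hz of spectrum raised by >3 dB")
```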

Most multi-point EQs are designed so that the lowest and highest frequency control points are shelf controls. When the high-frequency band operates as a shelf, everything above that frequency is raised or lowered together – in the case of a cut, it is effectively rolled off. The same is true for a low shelf, except that the roll-off is in the other direction – lower frequencies. The slope of this roll-off can be gradual or sharp, depending on the features of the plug-in.

An extremely sharp slope at the low end creates a high-pass filter (higher frequencies are allowed, lower frequencies are cut). An extremely sharp slope at the top is a low-pass filter. Depending on the equalizer model, a high-pass filter control may also be referred to as a low-cut control. High-pass versus low-cut is purely a difference in semantics, as the controls work the same way. Some EQs allow the slope of the low-cut to be adjusted along with the frequency, while others leave the slope at a fixed amount.
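To make the slope idea concrete, here’s a short sketch using SciPy’s standard Butterworth design, where the filter order sets how steep the low-cut is – roughly 6 dB per octave for each order. The 80 Hz cutoff is just an illustrative choice:

```python
# Low-cut (high-pass) filters with different slopes. The Butterworth order
# controls steepness: each order adds roughly 6 dB/octave of roll-off.
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
cutoff = 80.0                       # a typical rumble-removal frequency

for order in (1, 2, 4):             # ~6, ~12, ~24 dB per octave
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    w, h = sosfreqz(sos, worN=4096, fs=fs)
    level_40hz = 20 * np.log10(np.abs(h[np.argmin(np.abs(w - 40.0))]))
    print(f"Order {order}: {level_40hz:6.1f} dB at 40 Hz (one octave below cutoff)")
```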

Compressors

Compressors come in more varieties and with a wider range of features than EQ plug-ins. For most, the core operation is the same. The intent is to squash audio peak levels in order to reduce the overall dynamic range of a track – the spread between its quietest and loudest levels. The smoother a compressor works, the more natural and unobtrusive the compression effect is.

The threshold control determines the volume level at which the compressor starts to bite into the signal. As you lower the threshold, more of the signal is impacted. Some compressors also include an input gain control to raise the audio coming into the filter ahead of the threshold control.

The ratio control determines the amount of signal compression, i.e. gain reduction above the threshold. A 2:1 ratio means that 2dB of gain over the threshold would be reduced by half to 1dB. A 4:1 ratio would be a reduction from 4dB down to 1dB for any audio peaks that exceed the threshold.
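That arithmetic is simple enough to write out. This hypothetical little helper just restates the math in the paragraph above:

```python
# Hard-knee compressor arithmetic: output level for a given input level.
def compressed_db(input_db, threshold_db, ratio):
    """Levels at or below threshold pass through; overshoot is divided by ratio."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# 2 dB over a -20 dB threshold at 2:1 leaves 1 dB of overshoot...
print(compressed_db(-18.0, -20.0, 2.0))   # -19.0
# ...and 4 dB over at 4:1 also leaves 1 dB of overshoot.
print(compressed_db(-16.0, -20.0, 4.0))   # -19.0
```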

The make-up gain control (when available) is often used to compensate for the gain reduction. When you apply a heavy amount of compression, affecting a larger range of the signal, the overall output will sound lower. Increasing the make-up gain compensates for this volume loss. However, this also risks bringing up the noise floor, since the quiet portions of the track – which the compressor left largely untouched – are raised along with everything else.

When you see the compressor settings displayed graphically, the adjustment appears as a hockey stick standing on its end. The threshold point is displayed on the graph as the point where the line bends. The angle of this bend is the ratio. The higher the ratio, the flatter the bent section of the line. A slightly curved bend is referred to as a soft knee, meaning that compression kicks in more gradually.
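That curve can also be expressed directly. Here’s a sketch of a generic, textbook-style static compressor curve – not any specific plug-in – where a knee width of 0 dB gives the sharp bend and a wider knee rounds it off:

```python
# Static compressor curve with a soft knee (generic textbook form).
def static_curve_db(x_db, threshold_db, ratio, knee_db=0.0):
    """Return output level in dB for input level x_db."""
    over = x_db - threshold_db
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # Inside the knee: compression fades in gradually (the rounded bend).
        return x_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    if over > 0:
        return threshold_db + over / ratio       # above the knee: full ratio
    return x_db                                  # below the knee: unity gain

for level in (-30, -22, -20, -18, -10):
    hard = static_curve_db(level, threshold_db=-20, ratio=4)
    soft = static_curve_db(level, threshold_db=-20, ratio=4, knee_db=6)
    print(f"in {level:4} dB -> hard knee {hard:6.2f} dB, soft knee {soft:6.2f} dB")
```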

The response of the compressor to peaks is controlled by the attack and release adjustments. Set a fast attack time and the compressor will react quickly to peaks. A slow release time means that the bite of the gain reduction holds on longer before the compressor returns to a neutral effect. The attack and release times determine the characteristics of how that compressor sounds. As an example, your adjustments would be different for speech than if you were recording drums. The impact of the compressor would sound different in each case.
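Under the hood, attack and release usually amount to smoothing the gain signal with two different time constants – one used while gain reduction is increasing, the other while it recovers. Here’s a rough one-pole sketch of that idea, with made-up settings, purely for illustration:

```python
# One-pole attack/release smoothing of a gain-reduction signal (illustrative).
import math

def smooth_gain(target_gr_db, fs, attack_ms, release_ms):
    """Smooth a list of per-sample target gain-reduction values (dB, <= 0)."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    out, gr = [], 0.0
    for target in target_gr_db:
        # Deeper reduction -> attack coefficient; recovery -> release coefficient.
        coeff = a_att if target < gr else a_rel
        gr = coeff * gr + (1.0 - coeff) * target
        out.append(gr)
    return out

# A 10 ms burst needing 6 dB of reduction, then silence, at 48 kHz.
target = [-6.0] * 480 + [0.0] * 4320
fast = smooth_gain(target, 48000, attack_ms=1, release_ms=50)
slow = smooth_gain(target, 48000, attack_ms=20, release_ms=500)
print(f"reduction reached after 5 ms: fast {fast[239]:.1f} dB, slow {slow[239]:.1f} dB")
```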

The lookahead setting determines how far ahead the compressor plug-in is analyzing the track in order to respond to future peak levels. But you are balancing precision against performance. Long lookahead times require more processing power from the computer. A very short lookahead value means that some peaks will get through. Lookahead only works when you are working with a recorded track and isn’t applicable to compression on live sources, such as in a recording session.
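Lookahead is typically handled by delaying the audio itself while the level detector reads ahead, so the gain can already be coming down when the peak arrives. Here’s a tiny, deliberately crude sketch of that idea – generic, and not tied to any plug-in:

```python
# Lookahead as a simple delay: the detector sees the peak 'lookahead' samples
# before the delayed audio reaches the gain stage (illustrative sketch).
import numpy as np

def lookahead_limit(x, ceiling, lookahead):
    # Audio path is delayed by 'lookahead' samples; detector reads the un-delayed signal.
    delayed = np.concatenate([np.zeros(lookahead), x])[: len(x)]
    gain = np.ones(len(x))
    for i, level in enumerate(np.abs(x)):
        if level > ceiling:
            # Start pulling gain down now, so it is already low when the
            # delayed peak reaches the gain stage 'lookahead' samples later.
            gain[i : i + lookahead + 1] = np.minimum(
                gain[i : i + lookahead + 1], ceiling / level)
    return delayed * gain

x = np.zeros(64)
x[40] = 1.0                                         # a single full-scale peak
out = lookahead_limit(x, ceiling=0.5, lookahead=8)
print(out.max())                                    # peak now held at the 0.5 ceiling
```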

Multi-band compressors and limiters

There are two special types of compressors. The multi-band compressor divides the sound spectrum into several frequency ranges. This enables the user to control the amount of compression applied to different parts of the signal, such as low versus mid versus high frequencies. As we will see in Part 4, some equalizers can be paired with compression controls to create a combo plug-in of EQ coupled with multi-band compression.

Another variation is the limiter. This is a compressor that’s designed to block all volume peaks above a determined threshold. Limiters are important if you have to deliver files for broadcast or streaming services in order to stay within loudness parameters. Some editors and mix engineers will place a multi-band compressor followed by a limiter on their stereo output mix bus for this reason.

Finally, some compressors include a built-in limiter, often referred to as a brick wall limiter. This is a second stage of compression with a tighter ratio. Graphically, the slope after the knee would appear flat. The limiter threshold is designed to fully clamp any peaks that exceed its set level. Typically, the limiter threshold would be set somewhat higher than the compressor’s settings in order to allow for some dynamic range between the two.

In Part 3, I’ll check out one of the more popular audio plug-in developers, FabFilter Software Instruments, and their Pro-L 2 limiter plug-in.

Click here to read Part 1.

©2021 Oliver Peters

Audio Plug-ins for Editors – Part 1

Audio mixers and audio editors who spend their time at the business end of a DAW certainly have a solid understanding of audio plug-ins. But it’s often something many video editors don’t know much about. Every NLE includes a useful complement of audio filter effects (plug-ins) that can also be augmented by a wide range of third-party options. So it’s worth understanding what you have at your fingertips. After all, audio is at least half of most video projects. In this and the following three posts, I’ll focus on some thoughts pertaining to what video editors should know about commonly used audio filters.

Numerous audio effects have been highlighted in previous posts. I personally use various Accusonus and iZotope effects on my work, most often for audio clean-up. That’s been very important in this past year with restricted production activity. Quite a lot of my recent edit jobs worked with source material from Zoom calls and self-recorded smartphone video – all with marginal audio quality. So clean-up tools like iZotope RX have been quite important.

Since a lot of what I do is corporate in nature, the mixes are relatively simple – usually voice and music with a minimum of sound effects. Other than some clean-up processing (noise or reverb removal and so on), my most frequently used effects are equalization and compression. These tools let me shape the mix and control levels. 

All audio plug-ins are the same. Or are they?

Audio effects typically come in two flavors. One group could be described as “digital” and is intended to process audio in a transparent fashion without adding tonal color on its own. The other group is considered “analog,” because these filters are intended to emulate the sound of certain analog processing equipment. Naturally, since these are software plug-ins, the processing is actually digital. However, analog-style emulations are designed to mimic the tonal qualities of classic outboard gear or of channel strip circuits built into analog consoles like Neve and SSL.

Tonal color is often created by how the audio is processed, such as the slope of the attack and release characteristics when the filter begins to affect the sound. In theory, you should be able to take a digital-style EQ and boost a frequency by a given amount and Q value (the width of the effect around that frequency). Then, if you apply a second instance of the EQ and cut (lower) that same frequency by the same dB and Q values, the two should cancel each other out and the signal should sound unaffected. An analog-style filter that has been designed to emulate certain models of peripheral gear will not be transparent if you try this same experiment.
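You can actually verify that cancellation property for a transparent, digital-style EQ. In this sketch I use the textbook Audio EQ Cookbook peaking filter as a stand-in (an assumption on my part – not any commercial plug-in’s code): boost 6 dB at 1 kHz, then cut 6 dB at the same frequency and Q, and the result is effectively identical to the input:

```python
# Boost-then-cut cancellation test with a textbook peaking EQ biquad.
# A transparent "digital" EQ cancels exactly; analog emulations generally won't.
import numpy as np
from scipy.signal import freqz, lfilter

def peaking_eq(fs, f0, gain_db, q):
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
boost = peaking_eq(fs, 1000.0, +6.0, 1.0)
cut = peaking_eq(fs, 1000.0, -6.0, 1.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(fs)                       # one second of noise
y = lfilter(*cut, lfilter(*boost, x))             # boost, then cut

# Prints a value near zero - only floating-point noise remains.
print("max deviation after boost+cut:", np.max(np.abs(y - x)))
```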

If you buy two competing digital audio plug-ins that have the same controls and features, then the way each alters the sound will likely be more or less the same. The only difference is the “skin,” i.e. the user interface. However, when you buy an analog audio plug-in, you are looking for certain sound characteristics found in current or vintage analog hardware. A developer could go the route of licensing the exact signal path from the original company. They can then legally display a branded UI that is skeuomorphic and looks just like the physical version that it represents. Waves has an entire repertoire of such effects. So if you want an SSL 4000-series E-type channel strip, they’ve got a software version for you.

The other development approach is to reverse-engineer the sound of that physical gear and release a plug-in that emulates the sound. It might be dead-on or it might only be reminiscent. The skeuomorphic interface is designed to look and feel like that gear. If you know the real device, then you’ll know what that plug-in can be expected to sound like. Apple Logic Pro has a wealth of effects that are emulations. If you want to use a Vox or a Marshall guitar amp filter, simply pick the one that features a similar faceplate. Nowhere does Logic actually call it a Marshall or a Vox, because Apple hasn’t licensed the exact circuits from the original manufacturer. Instead, they classify these as “inspired by” certain musical eras or genres.

Native versus third party effects

Audio plug-ins are installed using one of several protocols, including AAX, AU, and VST/VST3. This means that you can use the same effect in multiple host applications. However, DAWs and NLEs also install their own native effects that are only available within that single application. This can mean better performance versus third-party effects, which is especially true with current versions of Final Cut Pro and macOS.

One of my favorite native filters is the Logic compressor found in both Logic Pro and Final Cut Pro. It features seven compressor styles built into a single plug-in. The choices start with Platinum Digital, which is the digital (clean or transparent) version of this filter. The next six panes are different analog models, which are emulations of such popular outboard gear as Focusrite and DBX. There are two choices each for VCA, FET, and opto-electrical circuit designs.

Set the exact same adjustments in any of the compressor’s panes and the tonal color will vary slightly as you toggle through them. If you are unfamiliar with these, then check out some of the YouTube tutorials that explain the Logic compressor’s operation and which of the actual gear each of these panes is intended to emulate. I personally like the Studio VCA pane, which is based on a Focusrite Red compressor.

In Part 2, I’ll take a deeper look at two of the most common filtering functions – compression and equalization.

©2021 Oliver Peters

Final Cut Pro at 10 and Other Musings

Recently Final Cut Pro (formerly Final Cut Pro X) hit its tenth anniversary.  Since I’ve been a bit quiet on this blog lately due to the workload, I thought it was a good time to reflect. I recently cut a set of involved commercials using FCP. While I’ve cut literally thousands of commercials in my career, my work in recent years tends to be corporate/branding/image content in the five to ten minute range. I work in a team and the tool of choice is Premiere Pro. It’s simply a better fit for us, since the bulk of staff and freelancers are very fluid in Adobe products and less so with Apple’s pro software. Sharing projects and elements also works better in the Adobe ecosystem.

Cutting the spots in Final Cut Pro

In the case of the four :60s, I originally budgeted about two days each, plus a few days for client revisions – eleven days in total. My objective was to complete the creative cut, but none of the finishing, since these spots involved extensive visual effects. I was covering for the client’s regular editor, who had a scheduled vacation and would finish the project. The spots were shot with a Sony Venice, simultaneously recording 6K RAW and 4K XAVC (AVC-Intra) “proxy” files. The four spots totaled over 1200 clips with approximately an hour of footage per spot. My cutting options were to work natively with the Sony RAW media in Premiere Pro or DaVinci Resolve, or to edit with the proxies in any NLE.

The Sony RAW files are large and don’t perform well playing from a shared storage system. I didn’t want to copy the location drives to the NAS, partially for reasons of time. I also wanted to be able to access the media to cut the spots whether at home or at the work facility. So I opted to use the proxies, which allowed me to cut the spots in FCP. Of course, if you think of proxies as low-res files, you’d be wrong. These Sony XAVC files are high-res, camera-original files on par with 4K ProRes HQ media. If it weren’t for the VFX, these would actually be the high-quality source files used for the final edit.

I copied the proxy files to a 2TB Samsung T7 SSD portable drive. This gave me the freedom to edit wherever – either on my iMac at home or one of the iMac Pros at work. This is where Final Cut Pro comes in. When you wade through that much footage, it’s easy for an NLE to get bogged down by caching footage or for the editor to get lost in the volume of clips. Thanks to skimming and keyword collections, I was able to cut these spots far more quickly than using any of the other NLE options. I could go from copying proxy files to my first cut on a commercial within a single day. That’s half of the budgeted time.

The one wrinkle was that I had to turn over a Premiere Pro project linked to the RAW media files. There are various ways to do that, but automatic relinking is dicier with these RAW files, because each clip is within its own subfolder, similar to RED. This complicates Premiere’s ability to easily relink files. So rather than go through XtoCC, I opted to import the Sony RAW clips into Resolve, then import the FCPXML, which in turn automatically relinked to the RAW files in Resolve.

There are a few quirks in this method that you have to suss out, but once everything was correct in Resolve, I exported an XML for Premiere. In Premiere Pro, I imported that XML, made sure that Premiere linked to the RAW files, corrected any size and speed issues, removed any duplicate clips, and then the project was ready for turnover. While one could look at these steps and question the decision to not cut in Premiere in the first place, I can assure you that cutting with Final Cut was considerably faster and these roundtrip steps were minor.

Remote workflows

Over the past year, remote workflows and a general “work from home” movement have shifted how the industry moves forward. So much of what I do requires a connection to shared storage, so totally working from home is impractical. These spots were the exception for me, but the client and director lived across the country. In years past, they used to fly in and work in supervised sessions with me. However, in more recent years, that work has been unattended, using various review-and-approval solutions for client feedback and revisions. Lately that’s through Frame.io. In the case of these spots, my workflow wasn’t any different than it would have been two years ago.

On the other hand, since I have worked with these clients in supervised sessions, as well as remote projects, it’s easy to see what’s been lost in this shift. Remote workflows present two huge drawbacks. The first is turnaround time. It’s inherently an inefficient process. You’ll cut a new version, upload it for review, and then wait – often for hours or even the next day. Then make the tweaks, rinse, and repeat. This impacts not only the delivery schedule, but also your own ability to book sessions and determine fair billing.

Secondly, ideation takes a back seat. When a client is in the room, you can quickly go through options, show a rearranged cut, alternate takes, and so on. Final Cut’s audition function is great for this, but it’s a wasted feature in these modern workflows. During on-prem sessions, you could quickly show a client the options, evaluate, and move on. With remote workflows, that’s harder to do and is subject to the same reply latency, so as a result, fewer options can be properly vetted in the cut.

The elephant in the room is security. I know there are tons of solutions for “drilling” into your system from home that are supposed to be secure. In reality, the only true security is to have your system disconnected from the internet (and even that isn’t totally bulletproof). As Sony Pictures, QNAP owners, Colonial Pipeline, agencies of the US government, and multiple other organizations have found out, if a bad actor wants to get into your system, they can. No amount of encryption, firewalls, VPNs, multi-factor authentication, or anything else is guaranteed to stop them. While remote access might have been a necessary evil due to COVID lockdowns, it’s not something that should be encouraged going forward.

However, I know that I’m swimming against the stream on this. Many editors/designers/colorists don’t seem to ever want to return to an office. This is at odds with surveys indicating that the majority of producers and agencies are chomping at the bit to get back to working one-on-one. Real estate and commuting costs are factors that affect such decisions, so I suspect hybrids will evolve and the situation in the future may vary geographically.

Final Cut Pro’s future

I mention the WFH dilemma, because remote collaboration is one of the features that Apple has been encouraged to build into Final Cut Pro by some users. It’s clearly a direction Adobe has moved towards and where Avid already has a track record.

I’m not sure that’s in Apple’s best interest. For one thing, I don’t personally believe Apple does a good job of this. Access and synchronization performance of iCloud is terrible compared with Google’s solutions. Would a professional collaboration solution really be industry-leading and robust? I highly doubt it.

Naturally Apple wants to make money, but they are also interested in empowering the creative individual – be that a professional or an enthusiast. Define those terms in whatever way you like, but the emphasis is on the individual. That direction seems to be at odds with what “pro” users think should be the case for Apple ProApps software, based on their experiences in the late years of FCP 1-7/FCP Studio (pre-X).

I certainly have my own feature request list for Final Cut Pro, but ultimately the lack of these did not stop me from a rapid turnaround on the spots I just discussed. Nor on other projects when I turn to FCP as the tool of choice. I use all four major NLEs and probably never will settle on a single “best” NLE for all cases.

The term “YouTube content creator” or “influencer” is often used as a pejorative, but for many filmmakers and marketeers, outlets like YouTube, Facebook, and Instagram have become the new “broadcast.” I recently interviewed Alexander Fedorov for FCP.co. He’s a Russian photographer/filmmaker/vlogger who epitomizes the type of content creator for whom Apple is designing its professional products. I feel that Apple can indeed service multiple types of users, from the individual, self-taught filmmaker to the established broadcast pro. How Apple does that moving forward within a tool like Final Cut Pro is anyone’s guess. All I know is that the old measurements of what is and isn’t “pro” no longer work in so many different arenas.

©2021 Oliver Peters

Kirk Baxter, ACE on editing Mank

Mank, David Fincher’s eleventh film, chronicles Herman Mankiewicz (portrayed by Gary Oldman) during the writing of the film classic, Citizen Kane. Mankiewicz, known as Mank, was a witty New York journalist and playwright who moved to Los Angeles in the 1930s to become a screenwriter. He wrote or co-wrote about 40 films, often uncredited, including the first draft of The Wizard of Oz. Together with Orson Welles, he won an Academy Award for the screenplay of Citizen Kane. It’s long been disputed whether or not he, rather than Welles, actually did the bulk of the work on the screenplay.

The script for Mank was penned decades ago by David Fincher’s father, Jack Fincher, and was finally brought to the screen thanks to Netflix this past year. Fincher deftly blends two parallel storylines: Mankiewicz’ writing of Kane during his convalescence from an accident – and his earlier Hollywood experiences with the studios, as told through flashbacks. These experiences, including his acquaintance with William Randolph Hearst – the media mogul of his time and the basis for Charles Foster Kane in Citizen Kane – inspired Mankiewicz’ script. This earlier period is infused with the political undercurrent of the Great Depression and the California gubernatorial race between Upton Sinclair and Frank Merriam.

David Fincher and director of photography Erik Messerschmidt, ASC (Mindhunter) used many techniques to pay homage to the look of Citizen Kane and other classic films of the era, including shooting in true black-and-white with RED Monstro 8K Monochrome cameras and Leica Summilux lenses. Fincher also tapped other frequent collaborators, including Trent Reznor and Atticus Ross for a moving, vintage score, and Oscar-winning editor, Kirk Baxter, ACE. I recently caught up with Baxter to discuss Mank, the fourth film he’s edited for David Fincher.

***

Citizen Kane is the 800 pound gorilla. Had you seen that film before this or was it research for the project?

I get so nervous about this topic, because with cinephiles, it’s almost like talking about religion. I had seen Citizen Kane when I was younger, but I was too young to appreciate it. I was growing up on Star Wars, Indiana Jones, and Conan the Barbarian. Then advancing my tastes to the Godfather films and French Connection. Citizen Kane is still just such a departure from all of that. I was kind of like, “What?” That was probably in my late teens.

I went back and watched it again before the shoot after reading the screenplay. There were certain technical aspects to the film that I thought were incredible. I loved the way Orson Welles chose to leave his scenes by turning off lights like it was in the theater. There was this sort of slow decay and I enjoy how David picked up on that and took it into Mank. Each time one of those shots came up in the bungalow scenes, I thought it was fantastic.

Overall, I don’t consider myself any sort of expert on 1930s and 1940s movie-making and I didn’t make a conscious effort to try to replicate any styles. I approached the work in the same way I do with all of David’s work – by being reactionary to the material and the coverage that he shot. In regard to how close David took the stylings, well, that was more his tightrope walk. So, I felt no shackling to slow down an edit pace or stay in masters or stay in 50-50s as might have been common in the genre. I used all the tools at my disposal to exploit every scene the best I could.

Since you are cutting while the shooting goes on, do you have the ability to ask for coverage that you might feel is missing? 

I think a little bit of that goes on, but it’s not me telling Fincher what’s required. It’s me building assemblies and giving them to David as he’s going and he will assess where he’s short and where he’s not. I’ve read many editor interviews over the years and I’ve always kind of gone, “huh,” when someone’s projecting they’re in the control seat. When you’re with someone with the ability that Fincher has, then I’m in a support position of helping him make his movie as best he can. Any other way of looking at it is delusional. But, I take a lot of pride in where I do get to contribute. 

Mank is a different style of film than Fincher’s previous projects. Did that change the workflow or add any extra pressure? 

I don’t think it did for me. I think it was harder for David. The film was in his head for so many decades and there were a couple of attempts to make it happen. Obviously a lot changes in that time frame. So, I think he had a lot of internal pressure about what he was making. For me, I found the entire process to be really buoyant and bubbly and just downright fun. 

As with all films, there were moments when it was hard to keep up during the shoot. And definitely moments coming down to that final crunch. That’s when I really put a lot of pressure on myself to deliver cut scenes to David to help him. I felt the pressure of that, but my main memory of it really was one of joy. Not that the other movies aren’t, but I think sometimes the subject matter can control the mood of the day. For instance, in other movies, like Dragon Tattoo, the feeling was a bit like your head in a vise when I look back at it.

Sure. Dragon Tattoo is dark subject matter. On the other hand, Gary Oldman’s portrayal of Mankiewicz really lights up the screen. It certainly looks like he’s having fun with the character.

Right. I loved all the bungalow scenes. I thought there was so much warmth in those. And I had so much compassion for the lead character, Mank. Those scenes really made me adore him. But also when the flashback scenes came, they’re just a hoot and great fun to put together. There was this warmth and playfulness of the two different opposing storylines. No matter which one turned up, I was happy to see it. 

Was the inter-cutting of those parallel storylines the way it was scripted? Or was that a construction in post? 

Yes, it was scripted that way. There was a little bit of pulling at the thread later. Can we improve on this? There was a bit of reshuffling later on and then working out that ‘as written’ was the best path. We certainly kicked the tires a few times. After we put the blueprint together, mostly the job became tightening and shortening. 

Obviously one of the technical differences was that this film was a true black-and-white film shot with modified, monochrome RED cameras. So not color and then changed to black-and-white in the grade. Did that impact your thinking in how to tackle the edit?

For the first ten minutes. At first you sit down and you go, “Oh, we work in black and white.” And then you get used to it very quickly. I forwarded the trailer when it was released to my mother in Australia. She texted back, “It’s black and white????” [laugh] You’ve got to love family!

Black-and-white has a unique look, but I know that other films, like Roma, were shot in color to satisfy some international distribution requirements. 

That’s never going to happen with someone like David. I can’t picture who that person would be that would tell him with any authority that his movie requires color. 

Of course, it matches films of the era and more importantly Citizen Kane. It does bring an intentional, stylistic treatment to the content. 

Black-and-white has got a great way of focusing your attention and focusing your eye. There’s a discipline that’s required with how shots are framed and how you’re using the images for eye travel. But I think all of David’s work comes with that discipline anyway. So to me, it didn’t alter it. He’s already in that ballpark.

In terms of recreating the era, I’ve seen a few articles and comments about creating the backgrounds and sets using visual effects, but also classic techniques, like rear projection. What about the effects in Mank?

As in most of David’s movies, it’s everywhere and a lot of the time it looks invisible, but things are being replaced. I don’t have a ratio for it, but I’d say almost half the movie. We’ve got a team that’s stabilizing shots as we’re going. We’ve got an in-house visual effects team that is building effects, just to let us know that certain choices can be made. The split screen thing is constant, but I’ll do a lot of that myself. I’ll do a fairly haphazard job of it and then pass it on for our assistant editors to follow up on. Even the montage kaleidoscope effect was all done in-house down the hall by Christopher Doulgeris, one of our VFX artists. A lot of it’s farmed out, but a fair slice is done under the roof. 

Please tell me a bit about working with Adobe Premiere Pro again to cut this film.

It’s best for me not even to attempt to answer technical questions. I don’t mind exposing myself as a luddite. My first assistant editor, Ben Insler, set it up so that I’m able to move the way I want to move. For me, it’s all muscle memory. I’m hitting the same keystrokes that I was hitting back when we were using Avid. Then I crossed those keys over to Final Cut and then over to Premiere Pro. 

In previous versions, Premiere Pro required projects to contain copies of all the media used in that project.  As you would hand the scene off to other people to work on in parallel, all the media would travel into that new project, and the same was true when combining projects back together to merge your work.  You had monstrously huge projects with every piece of media, and frequently duplicate copies of that media, packed into them. They often took 15 minutes to open. Now Adobe has solved that and streamlined the process. They knew it was a massive overhaul, but I think that’s been completely solved. Because it’s functioning, I can now purely concentrate on the thought process of where I’m going in the edit. I’m spoiled with having very technical people around me so that I can exist as a child. [laugh]

How was the color grade handled?

We had Eric Weidt working downstairs at Fincher’s place on Baselight. David is really fortunate that he’s not working in this world of “Here’s three weeks for color. Go into this room each day and where you come out is where you are at.” There’s an ongoing grade that’s occurring in increments and traveling with the job that we’re doing. It’s  updated and brought into the cut. We experience editing with it and then it’s updated again and brought back into the cut. So it’s this constant progression. 

Let’s talk about project organization. You’ve told me in the past that your method of organizing a selects reel was to string out shots in the order of wide shots, mediums, close ups, and so on. And then bump up the ones you like. Finally, you’d reduce the choices before those were presented to David as possible selects. Did you handle it the same way on Mank?

Over time, I’ve streamlined that further. I’ve found that if I send something that’s too long while he’s in the middle of shooting that he might watch the first two minutes of it, give me a couple of notes of what he likes and what he doesn’t like, and move on. So, I’ve started to really reduce what I send. It’s more cut scenes with some choices. That way I get the most relevant information and can move  forward.

With scenes that are extremely dense, like Louis B. Mayer’s birthday party at Hearst’s, it really is an endless multiple choice of how to tackle it. I’ll often present a few paths. Here’s what it is if I really hold out these wides at the front and I hang back for a bit longer. Here’s what it is if I stay more with Gary [Oldman] listening. It’s not that this take is better than the other take, but more options featuring different avenues and ways to tell the story.

I like working that way, even if it wasn’t for the sake of presenting it to David. I can’t watch a scene that’s that dense and go, “Oh, I know what to do.” I wouldn’t have a clue. I like to explore it. I’ve got to turn the soil and snuff the truffles and try it all out. And then the answers present themselves. It all just becomes clear. Unfortunately, the world of the editor, regardless of past experiences, is always destined to be filled with labor. There is no shortcut to doing it properly.

With large-scale theatrical distribution out of the question – and the shift to Netflix streaming as the prime focus – did the nature of studio notes change at all? 

David’s generous about thought and opinion, if it’s constructive and helpful.  He’s got a long history of forwarding those notes to me and exploring them. I’m not positive if I get all of them. Anything that’s got merit will reach me, which is wise. Having spent so many years in the commercial world, there’s a part of me that’s always a little eager to solve a puzzle. If I’m delivered a pile of notes, good or bad, I’m going to try my best to execute them.  So, David is wise to just not let me see the bad ones.

Were you able to finish Mank before the virus-related lockdowns started? Did you have to move to a remote workflow? 

The shooting had finished and we already had the film assembled. I work at a furious rate whilst David’s shooting, so that we can interface during the shoot. That way he knows what he’s captured, what he needs, and he can move on and strike sets, release actors, etc. There’s this constant back and forth.

At the point when he stops shooting, we’re pretty far along in terms of replicating the original plan, the blueprint. Then it’s what I call the sweeps, where you go back to the top and you just start sweeping through the movie, improving it. I think we’d already done one of those when we went remote. So, it was very fortunate timing.

We’re quite used to it. During shooting, we work in a remote way anyway. It’s a language and situation that we’re completely used to. I think from David’s perspective, it didn’t change anything. 

If the timing had been different and you would have had to handle all of the edit under remote conditions, would anything change? Or would you approach it the same way? 

Exactly the same. It wouldn’t have changed the amount of time that I get directly with David. I don’t want to give the impression that I cut this movie and David was on the sidelines. He’s absolutely involved, but pops in and out and looks at things that are made. He’s not a director that sits there the whole time. A lot of it is, “I’ve made this cut, let’s watch it together. I’ve done these selects, let’s watch them together.” It’s really possible to do that remotely. 

I prefer to be with David when he’s shooting and especially in this one that he shot in Los Angeles. I really tried to have one day a week where we got to be together on the weekends and his world quieted down. David loves that. I would sort of construct my week’s thinking towards that goal. If on a Wednesday I had six scenes that were backed up, I’d sort of think to myself, “What can I achieve in the time frame before David’s with me on Saturday? Should I just select all these scenes and then we’ll go through the selects together? Or should I tackle this hardest one and get a good cut of that going?”

A lot of the time I would choose – if he was coming in and had the time to watch things – to do selects. Sometimes we could bounce through them just from having a conversation of what his intent was and the things that he was excited about when he was capturing them. With that, I’m good to go. Then I don’t need David for another week or so. We were down to the shorthand of one sentence, one email, one text. That can give me all the fuel I need to drive cross-country.

The film’s back story clearly has political overtones that have an eerie similarity to 2020. I realize the script was written a while back at a different time, but was some of that context added in light of recent events? 

That was already there. But, it really felt like we are reliving this now. In the beginning of the shutdown, you didn’t quite know where it was going to go. The parallels to the Great Depression were extreme. There were a lot of lessons for me.

The character of Louis B. Mayer slashes all of his studio employees’ salaries to 50 percent. He promises to give every penny back and then doesn’t do it. I was crafting that villain’s performance, but at the same time I run a company [Exile Edit] that has a lot of employees in Los Angeles and New York. We had no clue if we would be able to get through the pandemic at the time when it hit. We also asked staff to take a pay cut, so that we could keep everyone employed and keep everybody on health insurance. But the moment we realized we could get through it six months later, there was no way I could ever be that villain. We returned every cent. 

I think most companies are set up to be able to exist for four months if everything stops dead. No one’s anticipating that 12-month brake pull. It was really, really frightening. I would hope that I would think this way anyway, but in crafting that villain’s performance, there was no way I was going to replicate it.

***

Mank was released in select theaters in November and launched on Netflix December 4, 2020.

Be sure to check out Steve Hullfish’s podcast interview with Kirk Baxter.

This article originally written for postPerspective.

©2021 Oliver Peters