COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratically elected prime minister, Mohammad Mossadegh, with the absolute monarchy of Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover the missing Darbyshire interview transcript along with Amirani, which lends the film the air of a whodunit. Ultimately, what sets COUP 53 apart is the good fortune of having Ralph Fiennes portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography, including working with documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) This film posed another challenge for Murch, who is known for his willingness to try out different editing platforms. This was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off the record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you had the ability to not see any blank spots. Although there was a quality shift when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instance it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my FileMaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.
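
The kind of searchable shot log Murch describes maps neatly onto a small database. A minimal sketch in Python with SQLite might look like the following – the field names and sample entry are guesses for illustration, not his actual FileMaker schema.

```python
# A minimal sketch of a searchable shot log in Python and SQLite. The
# field names and sample entry are guesses for illustration - not the
# actual schema of Murch's FileMaker database.
import sqlite3

con = sqlite3.connect("shot_log.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS clips (
        clip_id     INTEGER PRIMARY KEY,
        source      TEXT,   -- e.g. 'Iranian interviews', 'archival'
        description TEXT,   -- what happens in the shot
        timecode_in TEXT,
        duration_s  REAL,
        still_path  TEXT    -- frame-grab used for the printed cards
    )""")
con.execute(
    "INSERT INTO clips (source, description, timecode_in, duration_s, still_path) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Iranian interviews", "Bodyguard recalls the final day of the coup",
     "01:04:22:10", 187.0, "stills/bodyguard_0142.jpg"))
con.commit()

# Pull up every clip that mentions the final day of the coup.
for row in con.execute("SELECT source, timecode_in, description FROM clips "
                       "WHERE description LIKE ?", ("%final day%",)):
    print(row)
```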

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews and, once we received it, all of the End of Empire material.

We had four, 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoestring, which meant that time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smartphone. When coups like this happen today you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone that was available would work on the additional workstation and add subtitling. That would then come to me and I would use that as raw material.

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert who was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important or even sometimes more important than the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project. Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a six-month rough ride, because there were a lot of bugs that came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files – the ‘shrapnel’ – would be imported again. I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system for the colorist that we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.

You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

Building that Zoom Look

COVID-19 has altered our lives in many ways, but it has also changed our visual language. Video conference calls didn’t start with this pandemic, but by now Skype, Zoom, WebEx, Blue Jeans, and other services have become part of our daily lives – both as participants and as viewers. We use these for communicating with friends, distance learning, entertainment, and remote corporate meetings. Not only has video conferencing become an accepted production and broadcast method, but the “video conference look” is now a familiar entertainment style for all of us.

Many of these productions are actually live. Through elaborate and clever production techniques they can indeed achieve a quality level that’s better than the average Zoom call. However, in many cases the video conference appearance, with multiple participants on screen, was actually created in post, precisely because that aesthetic is now instantly recognizable to all of us. The actual interaction might have happened over Zoom, but full-frame video was simultaneously captured. This enables an editor to polish the overall production and rebuild the multi-screen images where appropriate without being tied to the highly-compressed, composite Zoom feed.

Building multi-screen composites in post can be time-consuming, which is where templates come in handy. Apple Final Cut Pro X offers a perfect solution for editing this style of project. There are a number of paid and/or free video conference-style Motion templates on the market. Enterprising editors can also build their own templates using Apple Motion. A nice free offering is idustrial revolution’s XEffects Video Conference – a toolkit of effects templates to easily build 4-up, 9-up, and 16-up displays.

If you need something more involved, then check out Video Walls 2 from developer Luca Visual FX, which can be purchased and installed through the FxFactory platform. This Motion template includes a series of 15 FCPX generators that cover a range of video wall and video conference styles.

The templates use image drop wells for videos and stills, which are arranged into a grid or row with adjustable borders and drop shadows. Some of the generators permit circles as well as rectangles with adjustable rounded corners. Positioning may be controlled to re-arrange the grid pattern and even overlap the panes. These generators include built-in animation effects along with keyframeable parameters.
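
Under the hood, this style of template is mostly layout arithmetic. Here is a sketch in Python of how the pane rectangles for an N-up grid can be computed – the parameter names are my own, not the plugin’s published controls.

```python
# The pane-layout arithmetic behind an N-up grid template, sketched in
# Python. Parameter names are my own, not the plugin's published controls.
from dataclasses import dataclass

@dataclass
class Pane:
    x: float
    y: float
    w: float
    h: float

def grid_layout(frame_w, frame_h, cols, rows, border=10.0):
    """Return one Pane per cell, inset by a uniform border."""
    cell_w, cell_h = frame_w / cols, frame_h / rows
    return [Pane(x=c * cell_w + border,
                 y=r * cell_h + border,
                 w=cell_w - 2 * border,
                 h=cell_h - 2 * border)
            for r in range(rows) for c in range(cols)]

# A 9-up layout in a 1920x1080 frame:
for pane in grid_layout(1920, 1080, cols=3, rows=3):
    print(pane)
```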

If you want to mimic a video conference call, there’s also a dedicated generator for a Zoom-style menu bar that appears at the bottom of the screen. Border highlights around an image well may be changed as you edit to maintain the illusion that the highlight color syncs to whichever speaker in the group is talking at any given time.

Overall I found these templates easy to use and adjust. The one thing to be mindful of is that a video wall built from 20+ video clips is effectively 20+ layers of video. Therefore, large video walls will require some horsepower. However, it was possible to do this on my mid-2014 MacBook Pro, albeit a bit more slowly. The good news is that all of this happens within the generator, so there’s only one clip on the timeline. You may also stack multiple instances of these templates if you need to have more images on-screen at once, or if you want to add the menu bar template on top of a video conference template.

There’s no telling how long the pseudo-Zoom look will be in vogue. However, Video Walls 2 gives you enough variety that it should have legs beyond our current “work from home” mode.

©2020 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – often more important than – the picture side. When story context is based on dialogue, the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of re-recording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily in mono, coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also sound a bit muffled. There may also be occasional clothes rustle from the clothing rubbing against the mic as the speaker moves around. For these reasons I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Under the best of circumstances these will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.
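
If you want to see the bones of this preprocess outside of any particular app, here is a minimal sketch in Python using the open-source noisereduce and pyloudnorm libraries in place of Audition. The file names and the -16 LUFS target are assumptions for illustration.

```python
# A rough Python equivalent of the Audition preprocess described above,
# using the open-source noisereduce and pyloudnorm libraries. File names
# and the -16 LUFS target are illustrative assumptions.
import soundfile as sf
import noisereduce as nr
import pyloudnorm as pyln

data, rate = sf.read("interview_raw.wav")
if data.ndim > 1:
    data = data[:, 0]  # work on a mono dialogue track for simplicity

# 1. Noise print: a short, quiet stretch with no speech (here, the first
#    half second) acts as the reference for the sounds to be suppressed.
noise_print = data[: rate // 2]
cleaned = nr.reduce_noise(y=data, sr=rate, y_noise=noise_print,
                          stationary=True)

# 2. Gain correction: measure integrated loudness, then normalize.
#    Back off the noise reduction if the voice starts sounding underwater.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(cleaned)
leveled = pyln.normalize.loudness(cleaned, loudness, -16.0)

# 3. Bounce out a processed copy to sync back up with picture in the NLE.
sf.write("interview_processed.wav", leveled, rate)
```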

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and CrumplePop to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, CrumplePop, others). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.
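
The two-stage idea can be sketched with the open-source pydub library. The thresholds and ratios below are illustrative starting points, not a recipe – tune by ear.

```python
# Two-stage compression sketched with pydub. The thresholds and ratios
# are illustrative starting points, not a recipe - tune by ear.
from pydub import AudioSegment
from pydub.effects import compress_dynamic_range

voice = AudioSegment.from_wav("interview_processed.wav")

# Stage 1: gentle leveling - low ratio, slower release.
leveled = compress_dynamic_range(
    voice, threshold=-24.0, ratio=2.0, attack=10.0, release=250.0)

# Stage 2: catch only the loudest peaks - higher threshold, high ratio.
limited = compress_dynamic_range(
    leveled, threshold=-10.0, ratio=8.0, attack=1.0, release=50.0)

limited.export("interview_leveled.wav", format="wav")
```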

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.
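
One band of such a parametric EQ is compact enough to write out. This sketch uses the well-known RBJ audio-EQ cookbook peaking filter, applied with scipy; the 3 kHz, +3 dB presence boost is an arbitrary example, not a recommendation.

```python
# One peaking band of a parametric EQ, built from the widely used RBJ
# audio-EQ cookbook coefficients and applied with scipy.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Boost or cut gain_db at center frequency f0 (Hz) with bandwidth Q."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x, axis=0)  # filter along time

data, rate = sf.read("interview_leveled.wav")
brighter = peaking_eq(data, rate, f0=3000.0, gain_db=3.0, q=1.0)
sf.write("interview_eq.wav", brighter, rate)
```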

Don’t get locked into the specific order of these effects. What I have presented in this post isn’t necessarily gospel for the hierarchical order in which to use them. For example, EQ and level adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters

FilmConvert Nitrate

When it comes to film emulation software and plug-ins, FilmConvert is the popular choice for many editors. It was one of the earliest tools for film stock emulation in digital editing workflows. It not only provides excellent film looks, but also functions as a primary color correction tool in its own right. FilmConvert has now been updated into FilmConvert Nitrate – a name that’s a tip of the hat to the chemical composition of early film stocks.

The basics of film emulation with Nitrate

FilmConvert Nitrate uses built-in looks based on 19 film stocks. These include a variety of motion and still photo negative and positive stocks, ranging from Kodak and Fuji to Polaroid and Ilford. Each stock preset includes built-in film grain based on 6K film scans. Unlike other plug-ins that simply add a grain overlay, FilmConvert calculates and integrates grain based on the underlying color of the image. Whenever you apply a film stock style, a matching grain preset, which changes with each stock choice, is automatically added. The grain amount and texture can be changed or you can dial the settings back to zero if you simply want a clean image.
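FilmConvert’s actual grain model is proprietary, but the difference between a flat overlay and grain that follows the underlying image can be sketched in a few lines of numpy. This toy example simply modulates grain strength by luminance – strongest in the midtones, weaker in deep shadows and highlights.

```python
# A numpy toy showing grain that follows the image rather than a flat
# overlay: strength peaks in the midtones and falls off at the extremes.
# This is an illustration only, not FilmConvert's algorithm.
import numpy as np

rng = np.random.default_rng(seed=53)

def add_grain(img, amount=0.04):
    """img: float RGB array in 0..1, shape (h, w, 3)."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    weight = 4.0 * luma * (1.0 - luma)   # 0 at black/white, 1 at mid-gray
    noise = rng.normal(0.0, 1.0, img.shape[:2])
    return np.clip(img + (amount * weight * noise)[..., None], 0.0, 1.0)

frame = rng.random((1080, 1920, 3))      # stand-in for a video frame
grained = add_grain(frame, amount=0.05)
```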

These film stock emulations are not simply LUTs applied to the image. In order to work its magic, FilmConvert Nitrate starts with a camera profile. Custom profiles have been built for different camera makes and models and these work inside the plug-in. This allows the software to tailor the film stock to the color science of the selected camera for more accurate picture styles. When you select a specific camera from the pulldown menu instead of the FilmConvert default, you’ll be prompted to download any camera pack that hasn’t already been installed. Free camera profile packs are available from the FilmConvert website and currently cover most of the major brands, including ARRI, Sony, Blackmagic, Canon, Panasonic, and more. You don’t have to download all of the packs at first and can add new camera packs at any time as your productions require it.

New features in FilmConvert Nitrate include Cineon log emulation, curves, and more advanced grain controls. The Cineon-to-print option appears whenever you apply FilmConvert Nitrate to a log clip, such as from an ARRI Alexa recorded in Log-C. This option enables greater control over image contrast and saturation. Remember to first remove any automatic or manually-applied LUTs, otherwise the log conversion will be doubled.

Taking FilmConvert Nitrate for a spin

As with my other color reviews, I’ve tested a variety of stock media from various cameras. This time I added a clip from Philip Bloom’s Sony FX9 test. The clip was recorded with that camera’s S-Cinetone profile, which is based on Sony’s Venice color. It looks quite nice to begin with, but of course, that doesn’t mean you shouldn’t tweak it! Other clips included ARRI Alexa log and Blackmagic BRAW files.

In Final Cut Pro X, apply the FilmConvert Nitrate plug-in to a clip and launch the floating control panel from the inspector. In Premiere, all of the controls are normally exposed in the effects controls panel. The plug-in starts with a default preset applied, so next select the camera manufacturer, model, and profile. If you haven’t already installed that specific camera pack, you’ll be prompted to download and install it. Once that’s done, simply select the film stock and adjust the settings to taste. Non-log profiles present you with film chroma and luma sliders. Log profiles change those sliders into film color and Cineon-to-print film emulation.

Multiple panes in the panel expand to reveal the grain response and primary color controls. Grading adjustments include exposure/temperature/tint, low/mid/high color wheels, and saturation. As you move the temperature and tint sliders left or right, the slider bar shows the color for the direction in which you are moving that control. That’s a nice UI touch. In addition, there are RGB curves (which can be split by color) and a levels control. Overall, this plug-in plays nice with Final Cut Pro X and Premiere Pro. It’s responsive and real-time playback performance is typically not impacted.

It is common in other film emulation filters to include grain as an overlay effect. Adjusting the filter with and without grain often results in a large difference in level. Since Nitrate’s grain is a built-in part of the preset, you won’t get an unexpected level change as you apply more grain. In addition to grain presets for film stocks from 8mm to 35mm Full Frame, you can adjust grain luminance, saturation, and size. You can also soften the picture under the grain, which might be something you’d want to do for a more convincing 8mm emulation. One unique feature is a separate response curve for grain, allowing you to adjust the grain brightness levels for lows, mids, and highs. In order to properly judge the amount of grain you apply, set Final Cut Pro X’s playback setting to Better Quality.

For a nice trick, apply two instances of Nitrate to a clip. On the first one, set the film stock to a motion picture negative, like Kodak 5207 Vision 3. Then apply a second instance with the default preset, but select a still photo positive stock, like Fuji Astia 100. Finally, tweak the color settings to get the most pleasing look. At this point, however, you will need to render for smooth playback. The result is designed to mimic a true film process where you would shoot a negative stock and then print it to a photograph or release print.

FilmConvert Nitrate supports the ability to export your settings as a 3D LUT (.cube) file, which will carry the color information, although not the grain. To test the transparency of this workflow, I exported my custom Nitrate setting as a LUT. Next, I removed the plug-in effect from the clip and added the Custom LUT effect back to it. This was linked to the new LUT that I had just exported. When I compared the clip with the Nitrate setting versus just the LUT, they were very close with only a minor level difference between them. This is a great way to move a look between systems or into other applications without having FilmConvert Nitrate installed in all of them.
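
The .cube format itself is plain text, which is part of why looks travel so well. Here is a minimal reader and nearest-neighbor application in Python – real tools interpolate trilinearly for smoother results, and the file name is an assumed example.

```python
# A minimal reader for a .cube file plus a nearest-neighbor lookup.
# Assumes a 0..1 input domain; production tools interpolate trilinearly.
import numpy as np

def load_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.upper().startswith("LUT_3D_SIZE"):
                size = int(line.split()[-1])
            elif line[0].isdigit() or line[0] in "+-.":
                rows.append([float(v) for v in line.split()])
    # .cube data is ordered with the red index varying fastest, so a
    # C-order reshape yields axes (blue, green, red).
    return np.asarray(rows).reshape(size, size, size, 3), size

def apply_lut(img, lut, size):
    """img: float RGB in 0..1, shape (h, w, 3)."""
    idx = np.clip(np.rint(img * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]

lut, n = load_cube("nitrate_look.cube")    # assumed file name
frame = np.random.default_rng(1).random((4, 4, 3))
graded = apply_lut(frame, lut, n)
```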

Wrap-up

Any color correction effect – especially a film emulation style – is highly subjective, so no single filter is going to be a perfect match for everyone’s taste. FilmConvert Nitrate advances the original FilmConvert plug-in with an updated interface, built around a venerable set of film stock choices. This makes it a good choice if you want to nail the look of film. There’s plenty you can tweak to fine-tune the look, not to mention a wide variety of specific camera profiles. Even Apple iPhones are covered.

FilmConvert Nitrate is available for Final Cut Pro X 10.4.8 and Motion running under macOS 10.13.6 or later. It is also available for Premiere Pro/After Effects, DaVinci Resolve, and Media Composer on both macOS and Windows 10. The plug-in can be purchased for individual applications or as a bundle that covers all of the NLEs. If you already own FilmConvert, then the company has upgrade offers to switch to FilmConvert Nitrate.

Originally written for FCP.co.

©2020 Oliver Peters

Digital Anarchy’s Video Anarchy Bundle

There are many reasons to add plug-ins and effects filters to your NLE, but the best reason is for video repair or enhancement. That’s where Digital Anarchy’s four main video plug-in products fit. These include Beauty Box Video, Samurai Sharpen, Flicker Free, and Light Wrap Fantastic. They are compatible with a range of NLE hosts and may be purchased individually or as part of several bundles. Digital Anarchy also offers photography filters, as well as a few free offerings, such as Ugly Box. That’s an offshoot of Beauty Box, but designed to achieve the opposite effect.

Beauty Box Video

Let’s face it, even the most attractive person doesn’t always come across with the most pleasing appearance on camera, in spite of good make-up and lighting. Some people simply have a skin texture, wrinkles, or blemishes that look worse on screen than face-to-face. This is where Beauty Box comes in. It is a skin retouching plug-in that uses basic face detection to isolate the skin area within the image. The mask is based on the range between the dark and light skin colors within the image. You can adjust the colors and settings to refine the area of the mask.

Like all skin smoothing filters, Beauty Box works by blurring the contrast within the affected area. However, it offers a nice range of control, along with GPU acceleration. If you apply the filter with a light touch, then you get a more subtle effect. Crank it up and you’ll get a result not unlike high-gloss, fashion photography with sprayed-on make-up. Both looks can be good, given the appropriate circumstance.
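
Beauty Box’s detection and smoothing are proprietary, but the basic mask-then-smooth idea it builds on can be sketched in numpy: flag pixels inside a skin tone range, then blend in a blurred copy only where the mask is set. The thresholds here are arbitrary placeholders.

```python
# A toy version of the mask-then-smooth idea behind skin retouching.
# Beauty Box's detection is far more sophisticated; these numbers are
# arbitrary placeholders.
import numpy as np
from scipy import ndimage

def smooth_skin(img, lo, hi, sigma=4.0, amount=0.7):
    """img: float RGB (h, w, 3); lo/hi: per-channel skin range in 0..1."""
    in_range = np.all((img >= lo) & (img <= hi), axis=-1).astype(float)
    mask = ndimage.gaussian_filter(in_range, sigma=2.0)  # soften mask edges
    blurred = ndimage.gaussian_filter(img, sigma=(sigma, sigma, 0))
    blend = (amount * mask)[..., None]
    return img * (1.0 - blend) + blurred * blend
```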

Unfortunately, Beauty Box was the only one of the four plug-ins that had an issue in Final Cut Pro X. The full control panel did not show up within the inspector pane. This was tested on three different Macs running Mojave, so I’m pretty sure it’s a bug, which I’ve reported to Digital Anarchy. Others may not run into this, but nevertheless, it worked perfectly inside Motion. While that’s a nuisance, it’s not a deal-breaker, given the usefulness of this filter. Simply process the clip in Motion and bring the corrected file back into Final Cut. I tested the same thing in Premiere Pro and no such issue appeared there.

Samurai Sharpen

Sharpening filters work by increasing local contrast around detected edges within an image. This localized contrast increase results in the perception that the image is sharper. Taken to an extreme, it can also create a cartoon effect. Samurai Sharpen uses edge detection to create a mask for the areas to be sharpened. This mask prevents image noise from also being sharpened. The mask can be adjusted to achieve the desired effect.
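
A generic version of this approach is the classic unsharp mask gated by an edge mask. The sketch below illustrates the technique in numpy/scipy terms – it is not Samurai Sharpen’s actual algorithm, and the parameter values are arbitrary.

```python
# A classic unsharp mask gated by an edge mask - the generic form of the
# technique described above, not Samurai Sharpen's actual algorithm.
import numpy as np
from scipy import ndimage

def masked_sharpen(img, amount=0.8, radius=1.5, edge_threshold=0.05):
    """img: float grayscale array in 0..1."""
    blurred = ndimage.gaussian_filter(img, sigma=radius)
    detail = img - blurred                    # high-frequency detail
    # Edge mask from gradient magnitude keeps flat, noisy areas alone.
    edges = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    mask = np.clip((edges - edge_threshold) / edge_threshold, 0.0, 1.0)
    return np.clip(img + amount * mask * detail, 0.0, 1.0)
```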

For example, the eye make-up used by most actresses provides a nice edge to which sharpening can be applied. A subtle application of the effect will result in the clip appearing to be sharper. However, you can also push the various controls to achieve a more stylized look.

Flicker Free

As the name implies, Flicker Free is designed to get rid of image flicker. Typical situations where you might have image flicker include timelapse/hyperlapse clips, archival footage, strobing lights, computer and TV screens within the shot, LED displays, and the propeller shadows in drone footage. Flicker Free does a great job of tackling these situations, but it is also more processing-intensive than the other three plug-ins. All of these conditions involve some variation in exposure within the frame or from one frame to the next, and that’s what Flicker Free will even out.

There are several pulldown presets (more than other similar plug-ins) and adjustment controls for sensitivity and frame intervals. In a few cases, a single instance of the plug-in with one setting will not completely eliminate all of the flicker. That’s when you may opt to apply a second instance of the effect in order to catch the remainder of the flicker. Each instance would use different settings so that the combination yields the desired result.
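
The core idea – evening out exposure from frame to frame – can be sketched very simply: scale each frame so its mean luminance tracks a rolling average of its neighbors. Real deflicker plug-ins analyze locally and handle motion; this numpy toy only shows the principle.

```python
# The core deflicker idea in numpy: scale each frame so its mean
# luminance tracks a rolling average of its neighbors.
import numpy as np

def deflicker(frames, window=5):
    """frames: float array (n, h, w) of luminance, 0..1; window: odd."""
    means = frames.mean(axis=(1, 2))
    pad = window // 2
    padded = np.pad(means, pad, mode="edge")
    smooth = np.convolve(padded, np.ones(window) / window, mode="valid")
    gains = smooth / np.maximum(means, 1e-6)
    return np.clip(frames * gains[:, None, None], 0.0, 1.0)
```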

According to Digital Anarchy, Flicker Free 2.0 is in public beta – first for Adobe hosts, with Final Cut Pro X to follow. This update shifts the load to GPU acceleration, so you’ll need a good GPU card to benefit from it.

Light Wrap Fantastic

The last of these four plug-ins isn’t designed for image repair, but rather enhancing chromakey composites. Whenever you composite blue-screen or green-screen shots, the trick is getting the foreground to properly blend with the background image for a composite that appears natural.

When a person stands in a natural environment, the ambient light reflected from the surroundings onto the person is visible on the edges of their image. That’s how the camera lens sees it. That subtle lighting artifact is called light wrap. The foreground subject in a green-screen shoot doesn’t naturally have this same ambient light wrap – or it’s seen as green spill. This can be corrected through careful lighting, but such care is often not taken – especially on budget-conscious productions. Therefore, you have to add light wrap in post. Some keyers include a built-in light wrap tool or function, while others rely on a separate light wrap filter. That’s where Light Wrap Fantastic comes in. It’s not a keyer by itself, but is designed to work in conjunction with a keyer as part of the effects stack applied to the foreground layer.

You can use a background color or drop the background layer into the image well, which then becomes the source for the light wrap around the foreground image. That light blends as a subtle glow around the interior edge of the subject. Since you want the shot to feel natural, you are generally going to want to select the background image, rather than a stock color. This has the benefit of not only looking like the same environment, but if there are lighting changes within the background image, the light wrap edge will react dynamically. The light wrap itself can be adjusted for brightness, softness, and various blend modes. These settings allow you to control the subtlety of the light wrap.
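
In compositing terms, light wrap is usually built from two blurs: the inverted matte blurred to find the zone just inside the foreground edge, and the background blurred to provide the light. The numpy sketch below shows that general construction – an illustration of the technique, not the plugin’s implementation.

```python
# A textbook light-wrap composite in numpy: blur the inverted matte to
# find the zone just inside the foreground edge, blur the background to
# provide the light, then blend.
import numpy as np
from scipy import ndimage

def light_wrap(fg, bg, alpha, radius=8.0, strength=0.6):
    """fg, bg: float RGB (h, w, 3); alpha: float matte (h, w), 1 = fg."""
    spill = ndimage.gaussian_filter(1.0 - alpha, sigma=radius) * alpha
    bg_glow = ndimage.gaussian_filter(bg, sigma=(radius, radius, 0))
    wrapped = fg + strength * spill[..., None] * (bg_glow - fg)
    a = alpha[..., None]
    return wrapped * a + bg * (1.0 - a)      # ordinary over-composite
```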

As a group, these four plug-ins form the Anarchy Video Bundle, but you have to purchase separate bundles for each host. The Apple bundle covers Final Cut Pro X and Motion, but if you also want to use these filters in After Effects, then you’ll need to also purchase the Adobe version of the bundle. Same for other host applications. You probably won’t use one of these on every session. On the other hand, when you do need to use one, it’s often the kind of enhancement that can ward off a reshoot and let you save the job in post.

Originally written for FCP.co.

©2020 Oliver Peters