May 2021 Links

It’s time to check out some more articles and reviews that I’ve written for other publications since January, but which I haven’t reposted here.

Understanding Premiere Pro’s Color Management 

I previously wrote about trusting Apple displays. This follow-up article covers color management in Premiere Pro (Pro Video Coalition).

Aviation and Final Cut Pro – Combining Passions for Compelling Videos 

YouTube influencers are a big part of the content creation landscape today. Their videos cover many different niches and often attract a surprisingly large base of followers. I take a look at YouTube, aviation, and the use of Final Cut Pro to post-produce these videos (FCP.co).

Is Your Audio Mix Too Loud?

Unless you’ve delivered master files for broadcast, you might not be tuned into meeting delivery specs, especially when it comes to the perceived loudness of your mix. I discuss working with audio in Final Cut Pro to mix and master at optimal levels (FCP.co).

Color Finale Transcoder – BRAW, ARRIRAW, DNG and CinemaDNG in FCP

The folks at Color Trix have come up with an ingenious solution to augment Final Cut Pro’s camera raw support. The new Color Finale Transcoder adds additional camera raw formats, notably Blackmagic RAW. Check out my review of the software (FCP.co).

©2021 Oliver Peters

The Mole Agent

At times you have to remind yourself that you are watching a documentary and not actors in a fictional drama. I’m talking about The Mole Agent, one of the nominees for Best Documentary Feature in this year’s Academy Awards competition. What starts as film noir with a humorous slant evolves into a film essay on aging and loneliness.

Chilean filmmaker Maite Alberdi originally set out to document the work being done by private investigator Romulo Aitkin. The narrative became quite different, thanks to Romulo’s mole, Sergio Chamy. The charming, 83-year-old widower was hired to be the inside man to follow a case at a retirement home. Once on the inside, we see life from Sergio’s perspective.

The Mole Agent is a touching film about humanity, deftly told without the benefit of an all-knowing narrator or on-camera interviews. The thread that binds the film is often Sergio’s phoned reports to Romulo, but the film’s approach is largely cinema verite. Building that structure fell to Carolina Siraqyan, a Chile-based editor, whose main experience has been cutting short-form projects and commercials. I recently connected with Carolina over Zoom to discuss the post behind this Oscar contender.

* * *

Please tell me how you got the chance to edit this film.

I met Maite years ago while giving a presentation about editing trailers for documentaries, which is a speciality of mine. She was finishing The Grown-Ups and I’m Not From Here, a short documentary film. I ended up doing the trailers for both and we connected. She shared that she was developing The Mole Agent. I loved the mixture of film noir and observational documentary, so I asked to work on the film and ended up cutting it.

Did her original idea start with the current premise of the film or was the concept broader at that point?

Maite wanted to do a documentary about the workings of a private detective agency, since detectives are often only represented in fiction. She worked with Romulo for a few months and realized that investigations into retirement homes are quite common. She loved the idea for the film and started focusing on that aspect.

Romulo already had a mole that he used inside the homes on these cases, but the mole broke his hip. So Romulo placed a newspaper want ad for someone in his 80s who could work as his new mole on this case. A number of credible older men applied. Out of those applicants, Sergio was hired and turned out to be perfect for the film. He entered the retirement home after some initial training, including how to discreetly communicate with Romulo and how to use the spy cameras.

How was the director able to convince the home and the residents to be in the film?

The film crew had arrived a couple of weeks before Sergio. It was explained that they were doing a film on old age and would be focusing on any new residents in the home. So, the existing residents were already comfortable with the presence of the cameras before he arrived. Maite was very empathetic about where to place cameras so that they wouldn’t bother residents or interfere with what the staff was doing, even if that might not be the best location aesthetically.

Maite is very popular here. She’s written and directed a number of films about social issues and her point-of-view is very humble and very respectful. This is a good retirement home with nothing to hide, so both the staff and the residents were OK with filming. But to be clear, only people who consented appear in the film.

I understand that there were 300 hours of raw footage filmed for this documentary. How did you approach that?

The crew filmed for over three months. It’s actually more than 300 hours of footage, because of the spy cameras. Probably as much as 50 hours more. I couldn’t use a lot of that spy camera material, because Sergio would accidentally press record instead of pressing stop. The camera was in his pocket all the time, so I might have black for 20 minutes. [laughs]

I started on the project in January [2019] after it had been shot and the camera footage merged with the sound files. The native footage was shot with Sony cameras in their MXF format. The spy cameras generated H.264 files. To keep everything smooth, I was working with proxy files.

Essentially I started from zero on the edit. It took me two months to categorize the footage. I have an assistant, but I wanted to watch all of the material first. I like to add markers while I’m watching and then add text to those markers as I react to that footage. The first impression is very important for me.

We had a big magnetic blackboard and I placed magnetic cards on the wall for each of the different situations that I had edited. Then Maite came in the middle of March and we worked together like playing Tetris to structure the film. After that we shifted to Amsterdam for two months to work in a very focused way in order to refine the film’s structure. The edit was finished in November and the final mix and color correction were done in December.

Did you have a particular method to create the structure of this documentary?

I feel that every film is different and you have to think a lot about how you are going to face each movie. In this film I had two certainties, the beginning – Romulo training Sergio – and the ending – what Sergio’s thoughts were. The rest is all emotion. That’s the spine. I have to analyze the emotion to converge to the conflict. First, there’s the humor and then the evolution to the sadness and loneliness. That’s how I approached the material – by the emotion.

I color-coded the magnetic cards for different emotions. For example, pink was for the funny scenes. When Maite was there, the cards provided the big picture showing all the situations. We could look and decide if a certain order worked or not.

What sort of changes to the film came out of the review stage?

This is a very international film with co-producers in the United States, Germany, the Netherlands, Spain, and Chile. We would share cuts with them to get helpful feedback. It let us make the movie more universal, because we had the input of many professionals from different parts of the world. 

When we arrived in Amsterdam the first cut of the film was about three hours long. Originally the first part was 30 minutes long and that was cut down to 10 minutes. When we watched the longer cut, we felt that we were losing interest in the investigation; however, the relationship that Sergio was establishing with the women was wonderful. All the women are in love with him. It starts like film noir, but with humor.  So we focused on the relationships and edited the investigation parts into shorter humorous segments that were interspersed throughout the film.

The reality was incredible and definitely nothing was scripted. But some of the co-producers commented that various scenes in the film didn’t feel real to them. So, we considered those opinions as we were tightening the film.

You edited this film with Adobe Premiere Pro. How do you like using it and why was it the right tool for this film?

I started on film with Moviola and then edited on U-matic, which I hated. I moved to Avid, because it was the first application we had. Then I moved to Final Cut Pro; but after FCP7 died, I switched to Premiere Pro. I love it and am very comfortable with how the timeline works. The program leaves you a lot of freedom as to how and where you put your material. You have control – none of that magnetic stuff that forces you to do something by default.

Premiere Pro was great for this documentary. If a program shuts down unexpectedly, it’s very frustrating, because the creative process stops. I didn’t have any problems even though everything was in one large project. I did occasionally clean up the project to get rid of stuff I wasn’t using, so it wasn’t too heavy. But Premiere allowed me to work very fluidly, which is crucial.

You completed The Mole Agent at the end of 2019. That’s prior to the “work from home” remote editing reality that most of the world has lived through during this past year. What would be different if you had worked on the film a year later?

The Mole Agent was completed in time for Sundance in January of 2020. Fortunately we were able to work without lockdowns. I’ve worked a lot remotely during this past year and it’s difficult. You get accustomed to it, but there is something missing. You don’t get the same feeling looking through a [web] camera as being together in the room. Something in the creative communication is lost in the technology. If the movie had been edited like this [communicating through Zoom] – and considering the mood during the lockdowns and how that affects your perception of the material – then it really would be a different film.

Any final thoughts about your experience editing this film?

I had previously worked sporadically on films, but have spent most of my career in the advertising industry. A few years ago I decided that I wanted to work full-time on long-form films. Then this project came to me. So I was very open during the process to all of the notes and comments. I understood the process, of course, but because I had worked so much in advertising, I now had to put this new information into practice. I learned a lot!

The Mole Agent is a very touching film. It’s different – very innovative. For people who have seen it, it’s an incredible movie. It affects the conscience and moves them to take action. I feel very glad to have worked on this film.

This article also appears at postPerspective.

©2021 Oliver Peters

Drive – Postlab’s Virtual Storage Volume

Postlab is the only service designed for multi-editor, remote collaboration with Final Cut Pro X. It works whether you have a team collaborating on-premises within a facility or spread out at various locations around the globe. Since the initial launch, Hedge has also extended Postlab’s collaboration to Premiere Pro.

When using Postlab, projects containing Final Cut Pro X libraries or Premiere Pro project files are hosted on Hedge’s servers. But the media lives on local drives or shared storage and not “in the cloud.” When editors work remotely, media needs to be transferred to them by way of “sneakernet,” Hightail, WeTransfer, or other methods.

Hedge has now solved that media issue with the introduction of Drive, a virtual storage volume for media, documents, and other files. Postlab users can utilize the original workflow and continue with local media – or they can expand remote capabilities with the addition of Drive storage. Since it functions much like Dropbox, Drive can also be used by team members who aren’t actively engaged in editing. As a media volume, files on Drive are also accessible to Avid Media Composer and DaVinci Resolve editors.

Drive promises significantly better performance than a general business cloud service, because it has been fine-tuned for media. The ability to use Drive is included with each Postlab plan; but, storage costs are based on a flat rate per month for the amount of storage you need. Unlike other cloud services, there are no hidden egress charges for downloads. If you only want to use Drive as a single user, then Hedge’s Postlab Solo or Pro plan would be the place to start.

How Drive works

Once Drive storage has been added to an account, each team member simply needs to connect to Drive from the Postlab interface. This mounts a Drive volume on the desktop just like any local hard drive. In addition, a cache file is stored at a designated location. Hedge recommends using a fast SSD or RAID for this cache file. NAS or SAN network volumes cannot be used.

After the initial setup, the operation is similar to Dropbox’s Smart Sync function. When an editor adds media to the local Drive volume, that media is uploaded to Hedge’s cloud storage. It then syncs to all other editors’ Drive volumes. Initially those copies of the media are only virtual. The first time a remote team member plays a file, it is streamed from the cloud server. As it streams, it is also added to the local Drive cache. Every file that has been fully played is then stored locally within the cache for faster access in the future.
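Conceptually, this is a read-through cache. As a rough sketch only (this is not Hedge’s code, and the class and callback names here are hypothetical), the behavior looks something like this:

```python
import os
import shutil

class VirtualMediaFile:
    """Illustrative read-through cache: stream a file from the cloud on its
    first access, then serve every later read from the local Drive cache."""

    def __init__(self, remote_fetch, cache_dir, name):
        self.remote_fetch = remote_fetch          # callable that opens a stream from the server
        self.cache_path = os.path.join(cache_dir, name)

    def open(self):
        if not os.path.exists(self.cache_path):
            # First play: stream from the server and write into the cache as we go.
            with self.remote_fetch() as remote, open(self.cache_path, "wb") as cache:
                shutil.copyfileobj(remote, cache)
        # Every subsequent play reads straight from the local cache.
        return open(self.cache_path, "rb")
```

Postlab handles all of this behind the scenes; the sketch only illustrates why the first playback of a clip is slower than every playback after it.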

Hedge feels that latency matters as much as, if not more than, raw connection speed for a fluid editing experience. They recommend wired rather than wi-fi internet connections. However, I tested the system over wi-fi with office speeds of around 575Mbps down / 38Mbps up. This is a business connection and was fast enough to stream 720p MP4 and 1080p ProRes Proxy files with minimal hiccups on the initial streamed playback. Naturally, after a file was locally cached, access was instantaneous.
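Some back-of-the-envelope math shows why this works. The bitrates below are typical figures I’m assuming for proxy media, not numbers published by Hedge:

```python
# Approximate proxy bitrates in Mbps (assumed typical values, not Hedge specs).
prores_proxy_1080p = 45    # ProRes 422 Proxy at 1080p29.97 runs roughly 45 Mbps
h264_720p = 8              # a modest 720p H.264 proxy

down, up = 575, 38         # the office connection used in this test (Mbps)

print(round(down / prores_proxy_1080p))     # ~12 ProRes Proxy streams fit downstream
print(round(up / prores_proxy_1080p, 1))    # <1, so uploading ProRes Proxy is slower than real time
print(round(up / h264_720p, 1))             # ~4-5x real time for lightweight H.264 proxies
```

In other words, playback headroom is generous on a connection like this; it’s the upload side that rewards lightweight proxies.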

From the editor’s point of view, virtual files still appear in the FCPX event browser as if local and the timeline is populated with clips. Files can also be imported or dragged in from Drive as if they are local. As you play the individual clips or the timeline from within FCPX or Premiere, the files become locally cached. All in all, the editing experience is very fluid.

In actual practice

The process works best with lightweight, low-res files and not large camera originals. That is possible, too, of course, but not very efficient. Drive and the Hedge servers support most common media files, but not a format like REDCODE raw. As before, each editor will need to have the same effects, LUTs, Motion templates, and fonts installed for proper collaboration.

I did run into a few issues, which may be related to the recent 10.4.9 Final Cut update. For example, the built-in proxy workflow was not very stable, although I did get it to work. Original files were on a NAS volume (not Drive) and the generated proxies (H.264 or ProRes Proxy) were stored on the Drive volume of the main system. The remote editing system would only get the proxies, synced through Drive. In theory that should work, but it was hit or miss. When it did work, some LUTs, like the standard ARRI Log-C LUTs, were not applied on the remote system in proxy mode. Also, the “used” range indicator lines for the event browser clips were present on the original system, but not on the remote system. Other than these few quirks, everything was largely seamless.

My suggested workflow would be to generate editing proxies outside of the NLE and copy those to Drive. H.264 or ProRes Proxy with matching audio configurations to the original camera files work well. Treat these low-res files as original media and import them into Final Cut Pro X or Premiere Pro for editing. Once the edit is locked, go to the main system and transfer the final sequence to a local FCPX Library or Premiere Pro project for finishing. Relink that sequence to the original camera files for grading and delivery. Alternatively, you could export an FCPXML or XML file for a Resolve roundtrip.
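As one possible way to batch-generate those proxies outside the NLE (a sketch only, with hypothetical paths, and flags you should verify against your own ffmpeg build), something like this produces ProRes Proxy copies with the audio kept as PCM:

```python
import subprocess
from pathlib import Path

SOURCE = Path("/Volumes/NAS/camera_originals")   # hypothetical source location
DEST = Path("/Volumes/Drive/proxies")            # hypothetical Drive folder

for clip in sorted(SOURCE.glob("*.mov")):
    out = DEST / (clip.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes 422 Proxy
        "-vf", "scale=-2:1080",                  # keep the aspect ratio, cap height at 1080
        "-c:a", "pcm_s16le",                     # keep the original channel count as 16-bit PCM
        str(out),
    ], check=True)
```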

One very important point to know is that the entire Postlab workflow is designed around team members staying logged into the account. This maintains the local caches. It’s OK to quit the Postlab application, plus eject and reconnect the Drive volume. However, if you log out, those local caches for editing files and Drive media will be flushed. The next time you log back in, connection to Drive will need to be re-established, Drive information must be synced again, and clips within FCPX or Premiere Pro will have to be relinked. So stay logged in for the best experience.

Additional features

Thanks to the Postlab interface, Drive offers features not available for regular hard drives. For example, any folder within Drive can be bookmarked in Postlab. Simply click on a Bookmark to directly open that folder. The Drop Off feature lets you generate a URL with an expiration date for any Bookmarked folder. Send that link to any non-team member, such as an outside contributor or client, and they will be able to upload additional media or other files to Drive. Once uploaded to Hedge’s servers, those files show up in Drive within the folder and will be synced to all team members.

Hedge offers even more features, including Mail Drop, designed for projects with too much media to efficiently upload. Ship Hedge a drive to copy dailies straight into their servers. Pick Up is another feature still in development. Once it’s released, you will be able to select files on Drive, generate a Pick Up link, and send that to your client for download.

Editing with Drive and Postlab makes remote collaboration nearly like working on-site. The Hedge team is dedicated to expanding these capabilities with more services and broader NLE support. Given the state of post production this year, these products have arrived at the right time and place.

Check out this Soho Editors masterclass in collaboration using Postlab and Drive.

Originally written for FCP.co.

©2020 Oliver Peters

COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratic prime minister, Mohammad Mossadegh, with the absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pull back the filmmaking curtain as part of COUP 53. We discover the missing Darbyshire interview transcript along with Amirani, which adds an air of whodunit to the film. Ultimately, what sets COUP 53 apart is the good fortune of getting Ralph Fiennes to portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Although he has a long filmography that includes working with documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) The film posed another challenge for Murch, who is known for his willingness to try out different editing platforms. This was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and then MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off-the-record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you wouldn’t see any blank spots. There was a quality shift, though, when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instance it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my FileMaker Pro database, which I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did all of the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews, and when I got it, all of the End of Empire material.

We had four 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoestring, which meant that the time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smart phone. When coups like this happen today you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone that was available would work on the additional workstation and add subtitling. That would then come to me and I would use that as raw material.

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert whose interview was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important as, or sometimes even more important than, the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project.  Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a six-month rough ride, because there were a lot of bugs that came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files would be imported again, which I nicknamed ‘shrapnel.’ I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system for the colorist that we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.

You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – and often more important than – the picture side. When story context is based on dialogue, the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of re-recording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily as mono coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means that the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also be a bit muffled. There may also be occasional rustle from clothing rubbing against the mic as the speaker moves around. For these reasons I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Under the best of circumstances these will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.
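If you prefer to script this step rather than run it through Audition, the noise print idea can be sketched as a simple spectral gate. This is only an illustration of the concept, not Audition’s algorithm; the file name, the half-second noise print, and the threshold margin are all assumptions:

```python
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

audio, sr = sf.read("interview_raw.wav")        # hypothetical mono dialogue recording
noise_print = audio[: int(0.5 * sr)]            # a quiet half second used as the noise print

nper = 1024
_, _, noise_spec = stft(noise_print, sr, nperseg=nper)
_, _, spec = stft(audio, sr, nperseg=nper)

# Per-frequency threshold derived from the noise print (mean plus a margin).
threshold = np.abs(noise_spec).mean(axis=1) + 1.5 * np.abs(noise_spec).std(axis=1)

# Attenuate, rather than zero, the bins below the threshold. Cutting too hard
# is what produces the "underwater" sound described above.
mask = np.where(np.abs(spec) > threshold[:, None], 1.0, 0.2)
_, cleaned = istft(spec * mask, sr, nperseg=nper)

sf.write("interview_denoised.wav", cleaned, sr)
```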

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.
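The gain correction step can be scripted just as simply. A minimal sketch follows; the -20 dBFS RMS target and -1 dBFS peak ceiling are assumptions to taste, not delivery specs:

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("interview_denoised.wav")   # hypothetical mono file from the previous step

target_rms_db = -20.0    # assumed average speech level
ceiling_db = -1.0        # leave headroom so the loudest peaks never clip

rms_db = 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)
gain_db = target_rms_db - rms_db

# Back the gain off if it would push the loudest peak past the ceiling.
peak_db = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
gain_db = min(gain_db, ceiling_db - peak_db)

sf.write("interview_leveled.wav", audio * 10 ** (gain_db / 20), sr)
```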

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and CrumplePop to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.
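A proper de-esser and plosive plug-in are the right tools inside the NLE, but to show what the plosive stage is doing, a steep high-pass below the voice band strips most of the low-frequency thump of a popped “p.” This is an offline sketch with scipy; the 80 Hz cutoff is an assumption you would tune by ear:

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("interview_leveled.wav")    # hypothetical mono file

# 4th-order high-pass at 80 Hz: removes plosive thumps and room rumble
# while leaving the fundamental of most voices intact.
sos = butter(4, 80, btype="highpass", fs=sr, output="sos")
sf.write("interview_hp.wav", sosfilt(sos, audio), sr)
```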

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, CrumplePop, others). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.
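To see what a leveling compressor is doing under the hood, here is a deliberately simple hard-knee compressor sketch. The threshold, ratio, and time constants are arbitrary assumptions, and a real plug-in is far more refined, but the shape of the processing is the same:

```python
import numpy as np
import soundfile as sf

def compress(audio, sr, threshold_db=-18.0, ratio=3.0, attack_s=0.005, release_s=0.100):
    """Simple hard-knee compressor: compute the gain reduction called for above
    the threshold, then smooth it with separate attack and release times."""
    level_db = 20 * np.log10(np.abs(audio) + 1e-9)
    wanted = np.maximum(level_db - threshold_db, 0.0) * (1.0 - 1.0 / ratio)

    atk = np.exp(-1.0 / (attack_s * sr))
    rel = np.exp(-1.0 / (release_s * sr))
    gr = np.zeros_like(wanted)
    for i in range(1, len(gr)):
        coeff = atk if wanted[i] > gr[i - 1] else rel
        gr[i] = coeff * gr[i - 1] + (1.0 - coeff) * wanted[i]

    return audio * 10 ** (-gr / 20)

audio, sr = sf.read("interview_hp.wav")         # hypothetical mono file
sf.write("interview_compressed.wav", compress(audio, sr), sr)
```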

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.
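One band of parametric EQ is a peaking biquad filter. The sketch below uses the widely published Audio EQ Cookbook coefficients; the 3 kHz, +2 dB presence lift is only an example setting, not a recommendation for every voice:

```python
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(audio, sr, freq=3000.0, gain_db=2.0, q=1.0):
    """One parametric EQ band: a peaking biquad (Audio EQ Cookbook form)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return lfilter(b, a, audio)                  # lfilter normalizes by a[0]

audio, sr = sf.read("interview_compressed.wav")  # hypothetical mono file
# A gentle presence lift around 3 kHz for vocal clarity.
sf.write("interview_eq.wav", peaking_eq(audio, sr), sr)
```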

Don’t get locked into the specific order of these effects. The order presented in this post isn’t gospel. For example, EQ and level-adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters