Avid’s Hidden Gems

Avid Media Composer offers a few add-on options, but two are considered gems by the editors who rely on them. ScriptSync and PhraseFind are essential for many drama and documentary editors who wield Media Composer keyboards every day. I’ve written about these tools in the past, including how you can get similar functionality in other NLEs. New transcription services, like Simon Says, make them more viable than ever for the average editor.

Driven by the script

Avid’s script-based editing, also called script integration, builds a representation of the script supervisor’s lined script directly into the Avid Media Composer workflow and interface. While often referred to as ScriptSync, Avid’s script integration is actually not the same thing. Script-based editing and script bins are part of the core Media Composer system and do not cost extra.

The concept originated with the Cinedco Ediflex NLE and migrated to Avid. In the regular Media Composer system, preparing a script bin and aligning takes to that script is a manual process, often performed by assistant editors who are part of a larger editorial team. Because it is labor-intensive, most individual editors working on projects that aren’t major feature films or TV series avoid this workflow.

Avid ScriptSync (a paid option) automates the script bin preparation process by aligning the spoken words in a take to the text lines within the written script. It does this using speech recognition technology licensed from Nexidia. This technology is based on phonemes, the sounds that are combined to create spoken words. Clips can be imported (transcoded into Avid MediaFiles) or linked.

Through automatic analysis of the audio within a take, ScriptSync can correlate a line in the script to its relative position within that take or within multiple takes. Once clips have been properly aligned to the written dialogue, ScriptSync is largely out of the picture. From there, in Avid’s script-based editing, the editor can click on a line of dialogue within the script bin and see all of the coverage for that line.

Script integration with non-scripted content

You might think, “Great, but I’m not cutting TV shows and films with a script.” If you work in documentaries or corporate videos built around lengthy interviews, then script integration may have little meaning – unless you have transcripts. Getting long interviews transcribed can be costly and/or time-consuming. That’s where an automated transcription service like Simon Says comes in. There are certainly other, equally good services. However, Simon Says offers export options tailored for each NLE, including Avid Media Composer.

With a transcription available on a fast turnaround, it becomes easy to import an interview transcript into a Media Composer script bin and align clips to it. ScriptSync takes care of the automatic alignment, making script-based editing quick, easy, and painless – even for an individual editor without any assistants.
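
If your transcription service doesn’t already offer an Avid-ready export, the conversion step is trivial to script. Below is a minimal Python sketch, assuming a hypothetical transcript JSON layout (a list of segments, each with a "text" field) – the schema and file names are illustrative, not any service’s actual format. The output is the kind of plain text document that a Media Composer script bin can import.

```python
# Flatten a transcript JSON export into a plain text script file.
# The JSON schema here is hypothetical: [{"start": 12.3, "text": "..."}, ...]
import json

def transcript_to_script(json_path: str, txt_path: str) -> None:
    with open(json_path) as src:
        segments = json.load(src)
    with open(txt_path, "w") as dst:
        for seg in segments:
            line = seg["text"].strip()
            if line:
                dst.write(line + "\n")

transcript_to_script("interview_transcript.json", "interview_script.txt")
```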

Finding that needle in the haystack

The second gem is PhraseFind, which builds upon the same Nexidia speech recognition technology. It’s a tool that’s even more essential for the documentary editor than script integration. PhraseFind (a paid option) is a phonetic search tool that analyzes the audio for clips within an Avid MediaFiles folder. Type in a word or phrase and PhraseFind will return a number of “hits” with varying degrees of accuracy.

The search is based on phonemes, so the results are based on words that “sound like” the search term. On one hand, this means that low-accuracy results may include unrelated hits that merely sound similar. On the other hand, you can enter a search word that is spelled differently or inaccurately – as long as it still sounds the same, useful results will be returned.
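
Nexidia’s phonetic index is proprietary, but the “sounds like” idea can be illustrated with a much cruder classic: a simplified Soundex code. In this toy Python sketch, differently spelled but similar-sounding words hash to the same key – “Smith” and “Smyth” both become S530 – which is the general principle behind spelling-independent search.

```python
# A toy illustration of phonetic matching (simplified Soundex) --
# not Nexidia's algorithm, just the general "sounds like" principle.
def soundex(word: str) -> str:
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch: str) -> str:
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and h/w/y act as separators in this toy version

    word = word.lower()
    if not word:
        return ""
    digits = [code(c) for c in word]
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:  # collapse adjacent duplicate codes
            out.append(d)
        prev = d
    return (word[0].upper() + "".join(out) + "000")[:4]

print(soundex("Smith"), soundex("Smyth"))  # S530 S530 -- a phonetic "hit"
```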

PhraseFind is very helpful when editing “Frankenbites.” Those are edits where sentences are ended in the middle, because a speaker went off on a tangent, or where different phrases are combined to complete a thought. Often you need to find a word that matches your edit point, but with the correct inflection, such as ending a sentence. PhraseFind is great for these types of searches, since your only alternative is scouring through multiple clips in search of a single word.

Working with the options

Script-based editing, ScriptSync, and PhraseFind are unique features that are only available in Avid Media Composer. No other NLE offers similar built-in features. Boris FX does offer Soundbite, a standalone equivalent to the PhraseFind technology licensed to them by Nexidia. It’s still available, but no longer actively promoted or developed. Adobe had offered Story as a way to integrate script-based editing into Premiere Pro. That feature is no longer available. So today, if you want the accepted standard for script and phonetic editing features, then Media Composer is where it’s at.

These are separate add-on options. You can pick one or the other or both (or neither) depending on your needs and style of work. They are activated through Avid Link. If you own multiple seats of Media Composer, then you can purchase one license of ScriptSync and/or PhraseFind and float them between Media Composers via Avid Link activation. While these tools aren’t for everyone, they add a new dimension to how you work as an editor. Many who’ve adopted them have never looked back.

©2020, 2021 Oliver Peters

Avid Media Composer 2020

Avid Media Composer has been at the forefront of nonlinear, digital video editing for three decades. While most editors and audio mixers know Avid for Media Composer and Pro Tools, the company has grown considerably in that time. Whether by acquisition or internal development, Avid Technology encompasses such products as storage, live and post mixing consoles, newsroom software, broadcast graphics, asset management, and much more.

In spite of this diverse product line, Media Composer, as well as Pro Tools, continue to be the marquee products that define the brand. Use the term “Avid” and generally people understand that you are talking about Media Composer editing software. If you are an active Media Composer editor, then most of this article will be old news. But if you are new to Media Composer, read on.

The Media Composer heritage

Despite challenges from other NLEs, such as Final Cut Pro, Final Cut Pro X, Premiere Pro, and DaVinci Resolve, Media Composer continues to be the dominant NLE for television and feature film post around the world. Even in smaller broadcast markets and social media work, it’s not a given that those other options are used exclusively. If you are new to the industry and intend to work in one of the major international media hubs, then knowing the Media Composer application is helpful and often required.

Media Composer software comes in four versions, ranging from Media Composer | First (free) up to Media Composer Enterprise. Most freelance editors will opt for one of the two middle options: Media Composer or Media Composer | Ultimate. Licenses may be “rented” via a subscription or bought as a perpetual license. The latter includes a year of support with a renewal at the end of that year. If you opt not to renew support, then your Media Composer software will be frozen at the last valid version issued within that year; but it will continue to work. No active internet connection or periodic sign-in is required to use Media Composer, so you could be off the grid for months and the software works just fine.

A Media Composer installation is full-featured, including effects, audio plug-ins, and background rendering software. Depending on the version, you may also receive loyalty offers (free) for additional software from third-party vendors, like Boris FX, NewBlueFX, iZotope, and Accusonus.

Avid only offers three add-on options for Media Composer itself: ScriptSync, PhraseFind, and Symphony. Media Composer already incorporates manual script-based editing. Plain text script documents can be imported into a special bin and clips aligned to sentences and paragraphs in that script. Synchronization has to be done manually to use this feature. The ScriptSync option saves time – automating the process by phonetically analyzing and syncing clips to the script text. Click on a script line and any corresponding takes can be played starting from that point within the scene.

The PhraseFind option is a phonetic search engine based on the same technology as ScriptSync. It’s ideal for documentary and reality editors. PhraseFind automatically indexes the phonetics of the audio for your clips. Search by a word or phrase and all matching instances will appear, regardless of actual spelling. You can dial in the sensitivity to find only the most accurate hits, or broaden it in cases where dialogue is hard to hear or heavily accented.

Media Composer includes good color correction, featuring wheels and curves. In fact, Avid had this long before other NLEs. The Symphony option expands the internal color correction with more capabilities, as well as a full color correction workflow. Grade clips by source, timeline, or both. Add vector-based secondary color correction and more. Symphony is not as powerful as Baselight or Resolve, but you avoid any issues associated with roundtrips to other applications. That’s why it dominates markets where turnaround time is critical, like finishing for non-scripted (“reality”) TV shows. A sequence from a Symphony-equipped Media Composer system can still be opened on another Media Composer workstation that does not have the Symphony option. Clips play fine (no “media offline” or “missing plug-in” screen); however, the editor cannot access or alter any of the color correction settings specific to Symphony.

Overhauling Media Composer

When Jeff Rosica took over as CEO of Avid Technology in 2018, the company embraced an effort to modernize Media Composer. Needless to say, that’s a challenge. Any workflow or user interface changes affect familiarity and muscle memory. This is made tougher in an application with a loyal, influential, and vocal customer base. An additional complication for every software developer is keeping up with changes to the underlying operating system. Changes from Windows 7 to Windows 10, or from macOS High Sierra to Mojave to Catalina, all add their own peculiar speed bumps to the development roadmap.

For example, macOS Catalina is Apple’s first fully 64-bit operating system. Apple dropped the 32-bit QuickTime library components that developers had used to support certain codecs. Of course, this change impacted Media Composer. Without Apple rewriting 64-bit versions of these legacy components, the alternative is for a developer to add their own support back into the application, which Avid has had to do. Unfortunately, this introduces some inevitable media compatibility issues between older and newer versions of Media Composer. Avid is not alone in this.

Nevertheless, Media Composer changes aren’t just cosmetic, but also involve many “under the hood” improvements. These include a 32-bit float color pipeline, support for ACES projects, HDR support, dealing with new camera raw codecs, and the ability to read and write ProRes media on both macOS and Windows systems.
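
The value of a 32-bit float pipeline is easy to demonstrate with a little arithmetic. In this sketch (plain Python/numpy for illustration, not Avid code), an exposure boost pushes a pixel out of range; float math retains the overshoot for a later operation to recover, while 8-bit integer math clips it permanently.

```python
import numpy as np

pixel = np.float32(0.9)          # a bright pixel on a 0.0-1.0 scale
boosted = pixel * 2              # 1.8 -- out of range, but retained
recovered = boosted * 0.5        # 0.9 -- highlight fully restored

pixel8 = 230                     # the same pixel in 8-bit (0-255)
boosted8 = min(pixel8 * 2, 255)  # clips at 255
recovered8 = boosted8 // 2       # 127 -- highlight detail is gone

print(recovered, recovered8)     # 0.9 vs 127 (roughly half the original)
```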

Avid Media Composer 2020.10

Avid bases its product version numbers on the year and month of release. Media Composer 2020.10 – the most recent version as of this writing – was just released. The versions prior to that were Media Composer 2020.9 and 2020.8, released in September and August respectively. Before that it was 2020.6 from June – 2020.7 was skipped. (Some of the features that I will describe were introduced in earlier versions and are not necessarily new in 2020.10.)

Media Composer 2020.10 is fully compatible with macOS Catalina. Due to the need to shift to a 64-bit architecture, the AMA framework – used to access media using non-Avid codecs – has been revamped as UME (Universal Media Engine). Also, the legacy Title Tool has been replaced with the 64-bit Titler+.

If you are a new Media Composer user or moving to a new computer, then several applications will be installed. In addition to the Media Composer application and its built-in plug-ins and codecs, the installer will add Avid Link to your computer. This is a software management tool to access your Avid account, update software, activate/deactivate licenses, search a marketplace, and interact with other users via a built-in social component.

The biggest difference for Premiere Pro, Resolve, or Final Cut Pro X users who are new to Media Composer is understanding the Avid approach to media. Yes, you can link to any compatible codec, add it to a bin, and edit directly with it – just like the others. But Avid is designed for and works best with optimized media.

This means transcoding the linked media to MXF-wrapped Avid DNxHD or DNxHR media. This media can be OPatom (audio and video as separate files) or OP1a (interleaved audio/video files). It’s stored in an Avid MediaFiles folder located at the root level of the designated media volume. That’s essentially the same process adopted by Final Cut Pro X when media is transcoded and placed inside an FCPX Library file. In each case, the process enables a bullet-proof way to move project files and media around without breaking links to that media.
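
If you prepare media outside of the application, the same optimized format can be created with common tools. Here is a minimal sketch, assuming ffmpeg is installed and on the PATH – a generic transcode, not Avid’s own consolidate/transcode function, and the file names are illustrative.

```python
# Transcode a camera file to DNxHR HQ in an MXF wrapper (ffmpeg writes
# OP1a MXF), which Media Composer can then link to or import.
import subprocess

def to_dnxhr(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR HQ flavor
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",                        # uncompressed PCM audio
        dst,
    ], check=True)

to_dnxhr("A001_C002_0815.mov", "A001_C002_0815.mxf")
```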

The second difference is that each Avid bin within the application is also a dedicated data file stored within the project folder on your hard drive. Bins can be individually locked (under application control). This facilitates multiple editors working in a collaborative environment. Adobe adopted an analog of this method in its new Productions feature for Premiere Pro.
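
The bin-per-file design is what makes that locking practical. As a rough conceptual illustration – not Avid’s actual mechanism – a shared project folder only needs an atomic lock file per bin:

```python
# A conceptual sketch of per-bin locking in a shared project folder.
import os

def try_lock_bin(bin_path: str) -> bool:
    lock = bin_path + ".lock"
    try:
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)          # atomic create succeeded: we own the bin
        return True
    except FileExistsError:
        return False          # another editor already has this bin open

def unlock_bin(bin_path: str) -> None:
    os.remove(bin_path + ".lock")
```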

The new user interface

Avid has always offered a highly customizable user interface. The new design, introduced in 2019, features bins, windows, and panels that can be docked, tabbed, or floated. Default workspaces have been streamlined, but you can also create your own. A unique feature compared to the competing NLEs is that open panes can be slid left or right to move them off of the active screen. They aren’t actually closed, but compacted into the side of the screen. Simply slide the edge inward again to reveal that pane.

One key to Avid’s success is that the keyboard layout, default workspaces, and timeline interactions tend to be better focused on the task of editing. You can get more done with fewer keystrokes. In all fairness, Final Cut Pro X also shares some of this, if you can get comfortable with its very different approach. My point is that the new Media Composer workspaces cover most of what I need and I don’t feel the need for a bunch of custom layouts. I also don’t feel the need to remap more levels of custom keyboard commands than what’s already there.

Media Composer for Premiere and Final Cut editors

My first recommendation is to invest in a custom Media Composer keyboard from LogicKeyboard or Editors Keys. Media Composer mapping is a bit different than the Final Cut “legacy” mapping that many NLEs offer. It’s worth learning the standard Media Composer layout. A keyboard with custom keycaps will be a big help.

My second recommendation is to learn all about Media Composer’s settings (found under Preferences and Settings). There are a LOT of them, which may seem daunting at first. Once you understand these settings, you can really customize the software just for you.

Getting started

Start by establishing a new project from the projects panel. Projects can be saved to any available drive and do not have to be in a folder at the root level. When you create a new project, you are setting the format for frame size, rate, and color space. All sequences created inside of this project will adhere to these settings. However, other sequences using different formats can be imported into any project.

Once you open a project, Media Composer follows a familiar layout of bins, timeline, and source/record windows. There are three normal bin views – frame, column, and storyboard – plus script view for script-based editing (if you use it). In column view, you may create custom columns as needed. Clips can be sorted and filtered based on the criteria you pick. In the frame view, clips can be arranged in a freeform manner, which many film editors really like.

The layout works on single and dual-monitor set-ups. If you have two screens, it’s easy to spread out your bins on one screen in any manner you like. But if you only have one screen, you may want to switch to a single viewer mode, which then displays only the record side. Click a source clip in a bin and it opens in its own floating window. Mark in/out, make the edit, and close. I wish the viewer would toggle between source and record, but that’s not the case, yet.

Sequences

Media Composer does not use stacked or tabbed sequences, but there is a history pulldown for quick access to recent sequences and/or source clips. Drag and load any sequence into the source window and toggle the timeline view between the source or the record side. This enables easy editing of portions from one sequence into another sequence.

Mono and stereo audio tracks are treated separately on the timeline. If you have a clip with left and right stereo audio on two separate channels (not interleaved), then these will cut to the timeline as two mono tracks with a default pan setting to the middle for each. You’ll need to pan these tracks back to left and right in the timeline. If you have a clip with interleaved, stereo audio, like a music cue, it will be edited to a new interleaved stereo track, with default stereo panning. You can’t mix interleaved stereo and mono content onto the same timeline track.

Effects

Unlike in other NLEs, timeline clips are only modified when a specific effect is applied. When clips of a different format than the sequence format are cut to the timeline, a FrameFlex effect is automatically applied for transform and color space changes. There is no persistent Inspector or Effect Controls panel. Instead, you have to select a clip with an effect applied to it and open the Effect Mode editor. While this may seem more cumbersome, the advantage is that you won’t inadvertently change the settings of one clip thinking that another has been selected.

Media Composer installs a fair number of video and audio plug-ins, but for more advanced effects, I recommend augmenting with Boris FX’s Continuum Complete or Sapphire. What is often overlooked is that Media Composer does include paint, masking, and tracking tools. And, if you work on stereo 3D projects, Avid was one of the first companies to integrate a stereoscopic toolkit into Media Composer.

The audio plug-ins provide a useful collection of filters for video editors. These plug-ins come from the Pro Tools side of the company. Media Composer and Pro Tools use the AAX plug-in format; therefore, no AU or VST audio plug-ins will show up inside Media Composer.

Due to the 64-bit transition, Avid dropped the legacy Title Tool and Marquee titler and wrote a new Titler+. Honestly, it’s not as intuitive as it should be and took some time for me to warm up to. Once you play with it, though, the controls are straight-forward. It includes roll and crawl options, along with keyframed moves and tracking. Unfortunately, there are no built-in graphics templates.

Trimming

When feature film editors are asked why they like Media Composer, the trim mode is frequently at the top of the list. The other NLEs offer advanced trimming modes, but none seems as intuitive to use as Avid’s. Granted, you don’t have to stick with the mouse to use those tools, but I definitely find it easier to trim by mouse in Premiere or Final Cut.

Trimming in Media Composer is geared towards fluid keyboard operation. I find that when I’m building up a sequence, my flow is completely different in Media Composer. Some will obviously prefer the others’ tools and, in fact, Media Composer’s smart keys enable mouse-based trimming, too. It’s certainly preference, but once you get comfortable with the flow and speed of Media Composer’s trim mode, it’s hard to go to something else.

Avid’s journey to modernize Media Composer has gone surprisingly well. If anything, the pace of feature enhancements might be too incremental for users wishing to see more radical changes. For now, there hasn’t been too much resistance from the old guard and new editors are indeed taking a fresh look. Whether you are cutting spots, social media, or indie features, you owe it to yourself to take an objective look at Media Composer as a viable editing option.

To get more familiar with Media Composer, check out Kevin P. McAuliffe’s Let’s Edit with Media Composer tutorial series on YouTube.

Originally written for Pro Video Coalition.

©2020 Oliver Peters

Drive – Postlab’s Virtual Storage Volume

Postlab is the only service designed for multi-editor, remote collaboration with Final Cut Pro X. It works whether you have a team collaborating on-premises within a facility or spread out at various locations around the globe. Since the initial launch, Hedge has also extended Postlab’s collaboration to Premiere Pro.

When using Postlab, projects containing Final Cut Pro X libraries or Premiere Pro project files are hosted on Hedge’s servers. But the media lives on local drives or shared storage and not “in the cloud.” When editors work remotely, media needs to be transferred to them by way of “sneakernet,” Hightail, WeTransfer, or other methods.

Hedge has now solved that media issue with the introduction of Drive, a virtual storage volume for media, documents, and other files. Postlab users can utilize the original workflow and continue with local media – or they can expand remote capabilities with the addition of Drive storage. Since it functions much like Dropbox, Drive can also be used by team members who aren’t actively engaged in editing. As a media volume, files on Drive are also accessible to Avid Media Composer and DaVinci Resolve editors.

Drive promises significantly better performance than a general business cloud service, because it has been fine-tuned for media. The ability to use Drive is included with each Postlab plan, but storage costs are based on a flat monthly rate for the amount of storage you need. Unlike other cloud services, there are no hidden egress charges for downloads. If you only want to use Drive as a single user, then Hedge’s Postlab Solo or Pro plan would be the place to start.

How Drive works

Once Drive storage has been added to an account, each team member simply needs to connect to Drive from the Postlab interface. This mounts a Drive volume on the desktop just like any local hard drive. In addition, a cache file is stored at a designated location. Hedge recommends using a fast SSD or RAID for this cache file. NAS or SAN network volumes cannot be used.

After the initial setup, the operation is similar to Dropbox’s Smart Sync function. When an editor adds media to the local Drive volume, that media is uploaded to Hedge’s cloud storage. It will then sync to all other editors’ Drive volumes. Initially those copies of the media are only virtual. The first time a file is played by a remote team member, it is streamed from the cloud server. As it streams, it is also added to the local Drive cache. Every file that has been fully played is then stored locally within the cache for faster access in the future.
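
In programming terms, this is a read-through cache. Here is a minimal sketch of the behavior described above – illustrative paths only, and not Hedge’s implementation, which streams from cloud storage rather than copying local files.

```python
import os
import shutil

CACHE_DIR = "/Volumes/FastSSD/DriveCache"    # the designated cache location

def fetch(remote_path: str) -> str:
    """Return a local path for a Drive file, caching it on first access."""
    cached = os.path.join(CACHE_DIR, os.path.basename(remote_path))
    if not os.path.exists(cached):           # first play: stream + cache
        os.makedirs(CACHE_DIR, exist_ok=True)
        shutil.copy(remote_path, cached)     # stands in for the cloud download
    return cached                            # later plays are local reads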

Hedge feels that latency is as important as – or more important than – outright connection speed for a fluid editing experience. They recommend wired, rather than wi-fi, internet connections. However, I tested the system using wi-fi with office speeds of around 575Mbps down / 38Mbps up. This is a business connection and was fast enough to stream 720p MP4 and 1080p ProRes Proxy files with minimal hiccups on the initial streamed playback. Naturally, after a file was locally cached, access was instantaneous.
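
The arithmetic backs that up. Using ballpark bitrates that I’m assuming for illustration (roughly 45 Mbps for 1080p ProRes Proxy, about 5 Mbps for a 720p MP4), the tested downlink has plenty of headroom:

```python
downlink_mbps = 575
for fmt, mbps in [("1080p ProRes Proxy", 45), ("720p MP4", 5)]:
    print(f"{fmt}: {downlink_mbps / mbps:.0f}x faster than real time")
# 1080p ProRes Proxy: 13x faster than real time
# 720p MP4: 115x faster than real time
```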

From the editor’s point of view, virtual files still appear in the FCPX event browser as if local and the timeline is populated with clips. Files can also be imported or dragged in from Drive as if they are local. As you play the individual clips or the timeline from within FCPX or Premiere, the files become locally cached. All in all, the editing experience is very fluid.

In actual practice

The process works best with lightweight, low-res files and not large camera originals. That is possible, too, of course, but not very efficient. Drive and the Hedge servers support most common media files, but not a format like REDCODE raw. As before, each editor will need to have the same effects, LUTs, Motion templates, and fonts installed for proper collaboration.

I did run into a few issues, which may be related to the recent 10.4.9 Final Cut update. For example, the built-in proxy workflow is not very stable. I did get it to work. Original files were on a NAS volume (not Drive) and the generated proxies (H.264 or ProRes Proxy) were stored on the Drive volume of the main system. The remote editing system would only get the proxies, synced through Drive. In theory that should work, but it was hit or miss. When it worked, some LUTs, like the standard ARRI Log-C LUTs, were not applied on the remote system in proxy mode. Also the “used” range indicator lines for the event browser clips were present on the original system, but not the remote system. Other than these few quirks, everything was largely seamless.

My suggested workflow would be to generate editing proxies outside of the NLE and copy those to Drive. H.264 or ProRes Proxy files with audio configurations matching the original camera files work well. Treat these low-res files as original media and import them into Final Cut Pro X or Premiere Pro for editing. Once the edit is locked, go to the main system and transfer the final sequence to a local FCPX Library or Premiere Pro project for finishing. Relink that sequence to the original camera files for grading and delivery. Alternatively, you could export an FCPXML or XML file for a Resolve roundtrip.
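
A minimal sketch of that proxy pass, assuming ffmpeg is installed – folder names are illustrative, and profile 0 of ffmpeg’s prores_ks encoder is ProRes Proxy:

```python
import pathlib
import subprocess

src_dir, dst_dir = pathlib.Path("Originals"), pathlib.Path("Proxies")
dst_dir.mkdir(exist_ok=True)

for src in src_dir.glob("*.mov"):
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", "0",   # 0 = ProRes Proxy
        "-vf", "scale=-2:720",                    # lightweight 720p frame size
        "-c:a", "pcm_s16le",                      # keep the audio layout intact
        str(dst_dir / src.name),
    ], check=True)
```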

One very important point to know is that the entire Postlab workflow is designed around team members staying logged into the account. This maintains the local caches. It’s OK to quit the Postlab application, plus eject and reconnect the Drive volume. However, if you log out, those local caches for editing files and Drive media will be flushed. The next time you log back in, connection to Drive will need to be re-established, Drive information must be synced again, and clips within FCPX or Premiere Pro will have to be relinked. So stay logged in for the best experience.

Additional features

Thanks to the Postlab interface, Drive offers features not available for regular hard drives. For example, any folder within Drive can be bookmarked in Postlab. Simply click on a Bookmark to directly open that folder. The Drop Off feature lets you generate a URL with an expiration date for any Bookmarked folder. Send that link to any non-team member, such as an outside contributor or client, and they will be able to upload additional media or other files to Drive. Once uploaded to Hedge’s servers, those files show up in Drive within the folder and will be synced to all team members.

Hedge offers even more features, including Mail Drop, designed for projects with too much media to efficiently upload. Ship Hedge a drive and they will copy the dailies straight onto their servers. Pick Up is another feature, still in development. When it’s ready, you will be able to select files on Drive, generate a Pick Up link, and send that to your client for download.

Editing with Drive and Postlab makes remote collaboration nearly like working on-site. The Hedge team is dedicated to expanding these capabilities with more services and broader NLE support. Given the state of post this year, these products are at the right time and place.

Check out this Soho Editors masterclass in collaboration using Postlab and Drive.

Originally written for FCP.co.

©2020 Oliver Peters

COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratic prime minister, Mohammad Mossadegh, with the absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year-long odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover along with Amirani the missing Darbyshire interview transcript, which adds an air of a whodunit to the film. Ultimately what sets COUP 53 apart was the good fortune to get Ralph Fiennes to portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography, including working with documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) This film posed another challenge for Murch, who is known for his willingness to try out different editing platforms. It was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off-the-record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you had the ability to not see any blank spots. Although there was a quality shift when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instance it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my Filemaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews and, when I got it, all of the End of Empire material.

We had four, 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoe string, which meant that the time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smart phone. When coups like this happen today you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone that was available would work on the additional workstation and add subtitling. That would then come to me and I would use that as raw material.

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert, which was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important or even sometimes more important than the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project. Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a six-month rough ride, because there were a lot of bugs that came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files would be imported again, which I nicknamed ‘shrapnel.’ I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system for the colorist that we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.

You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – often more important than – the picture side. When story context is based on dialogue, then the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of re-recording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily as mono coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also sound a bit muffled. There may also be occasional clothes rustle from the clothing rubbing against the mic as the speaker moves around. For these reasons I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Under the best of circumstances these will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.
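
Audition’s processing is more sophisticated and proprietary, but the underlying idea – build a noise print, then suppress whatever matches it – can be sketched in a few lines. This example assumes a mono WAV, the numpy/scipy/soundfile packages, and that the first half-second of the clip is room tone; the gain floor is what keeps the result from sounding underwater.

```python
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

audio, rate = sf.read("dialogue_raw.wav")        # mono dialogue track
noise_print = audio[: rate // 2]                 # quiet section = noise print

f, t, spec = stft(audio, rate, nperseg=1024)
_, _, nspec = stft(noise_print, rate, nperseg=1024)
noise_mag = np.abs(nspec).mean(axis=1, keepdims=True)

mag = np.abs(spec)
# Suppress bins near the noise floor; the 0.1 floor avoids over-processing.
gain = np.clip((mag - 2.0 * noise_mag) / np.maximum(mag, 1e-9), 0.1, 1.0)
_, cleaned = istft(spec * gain, rate, nperseg=1024)
sf.write("dialogue_nr.wav", cleaned.astype(np.float32), rate)
```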

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.
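
The manual version of that gain step is simple to reason about: find the loudest sample and raise the whole clip so it peaks just under full scale. A sketch, assuming a mono WAV and the numpy/soundfile packages – this is straight gain, not compression:

```python
import numpy as np
import soundfile as sf

audio, rate = sf.read("dialogue_nr.wav")
peak = np.max(np.abs(audio))
target = 10 ** (-1 / 20)                 # -1 dBFS expressed as linear gain
sf.write("dialogue_gained.wav", audio * (target / peak), rate)
```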

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and CrumplePop, to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, CrumplePop, others). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.
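
To make the two-stage idea concrete, here is a deliberately simplified static sketch (numpy, mono, normalized samples): a gentle 2:1 leveler above -18 dBFS followed by a hard -3 dBFS ceiling. Real compressors add attack and release smoothing, which is omitted here for clarity.

```python
import numpy as np

def leveler(x, thresh_db=-18.0, ratio=2.0):
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level_db - thresh_db, 0.0)     # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)            # 2:1 gain reduction
    return x * 10 ** (gain_db / 20)

def limiter(x, ceiling_db=-3.0):
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x, -ceiling, ceiling)             # hard peak ceiling

# processed = limiter(leveler(audio))  # light leveling, then peak control
```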

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.
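
One band of a parametric EQ is a small amount of math. The sketch below implements the peaking filter from Robert Bristow-Johnson’s well-known audio EQ cookbook (numpy/scipy/soundfile, mono WAV); the +3 dB lift at 3 kHz is purely illustrative, not a recipe for every voice.

```python
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(x, rate, freq=3000.0, gain_db=3.0, q=1.0):
    A = 10 ** (gain_db / 40)             # RBJ cookbook peaking EQ
    w0 = 2 * np.pi * freq / rate
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

audio, rate = sf.read("dialogue_gained.wav")
sf.write("dialogue_eq.wav", peaking_eq(audio, rate), rate)
```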

Don’t get locked into the specific order of these effects. What I have presented in this post isn’t necessarily gospel for the hierarchical order in which to use them. For example, EQ and level adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters