Improving your mix with iZotope

In classic analog mixing consoles like Neve or SSL, each input channel includes a channel strip – a series of in-line processors applied to that individual input, usually some combination of EQ, gate, and compressor. If a studio mixing engineer doesn’t use the built-in effects, then they may have a rack of outboard effects units that can be patched in and out of the console. iZotope offers a number of processing products that are the software equivalent of that channel strip or effects rack.

I’ve written about iZotope products in the past, so I decided to take a look at their Mix & Master Bundle Plus, which is a collection of three of their top products – Neutron 3, Nectar 3, and Ozone 9. These products, along with RX, are typically the ones of most interest to video editors and audio post mixers. RX 8 is a bundle of repair effects, such as noise reduction, click repair, and so on.

Depending on the product, it may be available as a single plug-in effect, as several plug-ins, or as both a plug-in and a standalone application. For instance, RX 8 and Ozone 9 can be used within a DAW or NLE, in addition to running as separate applications. Most of the comprehensive iZotope products are available in three versions – Elements (a “lite” version), Standard, and Advanced. As the name implies, you get more features with the Advanced version; however, nearly everything an editor would want can be handled in the Standard product or, for some, in an Elements version.

The mothership

Each of these products is an AU, VST, and/or AAX plug-in compatible with most DAWs and NLEs. It shows up as a single plug-in effect, which in iZotope’s parlance is the mothership for processing modules. Each product features its own variety of processing modules, such as EQ or compression. These modules can be stacked and arranged in any order within the mothership plug-in. Instead of having three individual effects applied to a track, you would only have one iZotope plug-in, which in turn contains the processing modules that you’d like to use. While each product might offer a similar module, like EQ, these modules do not function in exactly the same way from one product to the next. The range of control or type of function will differ. For example, only Ozone 9 includes mid/side EQ. In addition to new features, this newest series of iZotope updates includes faster processing with real-time performance and some machine learning functions.
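To picture how one mothership plug-in hosting an ordered chain of modules differs from stacking separate effects on a track, here is a minimal conceptual sketch in Python. All of the names are illustrative assumptions for this article only – they are not iZotope’s actual architecture or API.

```python
# Conceptual sketch only -- illustrative names, not iZotope's actual API.
from typing import Callable, List

# A "module" here is just a processing function applied to an audio buffer.
Module = Callable[[list], list]

def eq(buffer: list) -> list:
    """Placeholder EQ stage (identity here, for illustration)."""
    return buffer

def compressor(buffer: list) -> list:
    """Placeholder compressor stage."""
    return buffer

class MothershipPlugin:
    """One plug-in instance hosting an ordered, rearrangeable chain of modules."""
    def __init__(self, modules: List[Module]):
        self.modules = modules          # order matters, like dragging modules left or right

    def process(self, buffer: list) -> list:
        for module in self.modules:     # audio flows through each module in sequence
            buffer = module(buffer)
        return buffer

# The host (DAW/NLE) sees a single effect; the chain lives inside it.
channel_insert = MothershipPlugin([eq, compressor])
output = channel_insert.process([0.1, -0.2, 0.3])
```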

If you can only buy one of these products and they perform somewhat similar tasks, how do you know which to use? First, there’s nothing to prevent you from applying Ozone, Nectar, or Neutron interchangeably to any individual track or a master bus – or to a voice-over or a music mix. As a video editor using these plug-ins for the audio mixes of my own videos, I would simplify it this way: Nectar 3 is designed for vocal processing, Neutron 3 for music, and Ozone 9 for mastering. If I own all three, then in a simple mix of a dialogue track against music, I would apply Nectar 3 to the dialogue track, Neutron 3 to the music track, and Ozone 9 to the master bus.

Working with iZotope’s processing

Neutron, Nectar, and Ozone each include a wealth of presets that configure a series of modules depending on the style you want – from subtle to aggressive. You can add or remove modules or rearrange their order in the chain by dragging a module left or right within the plug-in’s interface. Or start from a blank shell and build an effects chain from the module selection available within that iZotope product. Neutron offers six basic modules, Nectar nine, and Ozone eleven. Many audiophiles love vintage processing to warm up the sound. In spite of iZotope’s sleek, modern approach, you’re covered here, too. Ozone 9 includes several dedicated vintage modules for tape saturation, limiting, EQ, and compression.

All three standard versions of these products include an Assistant function. If you opt to use the Assistant, then play your track and Nectar, Neutron, or Ozone will automatically calculate and apply the modules and settings needed, based on the parameters that you choose and the detected audio from the mix or track. You can then decide to accept or reject the recommendation. If you accept, then use that as a starting point and make adjustments to the settings or add/delete modules to customize the mix.

Neutron 3 Advanced includes Mix Assistant, an automated mix that uses machine learning. Let’s say you have a song mix with stems for vocals, bass, drums, guitars, and synths. Apply the Relay effect to each track and then iZotope’s Visual Mixer to the master bus. With the Standard version, you can use the Visual Mixer to control the levels, panning, and stereo width for each track from a single interface. The Relay plug-ins control those settings on each track based on what you’ve done using the Visual Mixer controls. If you have Neutron 3 Advanced, then this is augmented by Mix Assistant. Play the song through and let Mix Assistant set a relative balance based on your designated focus tracks. In other words, you can tell the algorithm whether vocals or guitars should be the focus and thereby dominant in the mix.

Note that iZotope regularly updates versions with new features, which may or may not be needed in your particular workflow. As an example, RX 8 was just released with new features over RX 7. But if you owned an earlier version, then it might still do everything you need. While new features are always welcome, don’t feel any pressure that you have to update. Just rest assured that iZotope is continually taking customer feedback and developing its products.

Be sure to check out iZotope’s wealth of tutorials and learning materials, including their “Are you listening?” YouTube series. Even if you don’t use any iZotope products, Grammy-nominated mastering engineer Jonathan Wyner offers plenty of great tips for getting the best out of your mixes.

©2020 Oliver Peters

Soundtheory Gullfoss Intelligent EQ

There are zillions of audio plug-ins on the market to enhance your DAW or NLE. In most cases, their operation and user interface design are based on familiar physical processing hardware. Often the design is intentionally skeuomorphic – either a direct analog of the physical version or a prompt that gives you a clue about its sound and control functions.

When you first open the Gullfoss equalizer plug-in, you might think it works like many other EQ plug-ins. Grab a frequency point on the graph line, pull it up or down, and spread out or tighten the Q value. But you would be totally wrong. In fact, this is a plug-in that absolutely requires you to read the manual. Check out the tutorial videos on the Soundtheory site and its operation will make sense to you.

Soundtheory launched Gullfoss (which gets its name from the Gullfoss waterfall in Iceland) as its first commercial product after years of research into perceived loudness. According to Soundtheory, Gullfoss is not using artificial intelligence or other machine learning algorithms. Instead, it employs their computational auditory perception technology. More on that in a moment.

Gullfoss installs as an AU, VST, and AAX plug-in, so it’s compatible with a wide range of DAWs and NLEs. License management is handled via iLok – something most Pro Tools users are very familiar with. If you don’t own a physical iLok USB key (dongle), then licenses are managed through the iLok License Manager application, which you install on your computer along with a free iLok account. iLok management allows you to move the plug-in authorization between computers.

The Gullfoss equalization technology is based on balancing dominant and dominated frequencies. The plug-in automatically determines what it considers dominant and dominated frequencies and dynamically updates its processing 300 times per second. User control is via the Recover and Tame controls.

Increasing the Recover value accentuates dominated frequencies while Tame adjusts the emphasis of dominant frequencies in the mix. Bias controls the balance between Recover and Tame. A positive value shifts more of the processing based on the Recover frequencies, whereas a negative value shifts the emphasis towards Tame. Brighten tells the Recover/Tame mechanism to prefer lower or higher frequencies. Boost balances low versus mid frequencies. Positive values favor bass and negative Boost values decrease bass and increase mids. Finally, there’s an overall gain control and, of course, Bypass.

By default, you are applying Gullfoss processing to the complete sound spectrum of a track. There are left and right range boundaries that you can slide inwards. This restricts the frequencies being analyzed and processed to the area between the two boundary lines. For instance, you can use this with a tight range to make Gullfoss function like a de-esser. If you invert the range by sliding the left or right lines past each other, then the processing occurs outside of that range.

One tip Soundtheory offers as a starting point is to set the Recover and Tame controls each to 50. Then adjust Bias and Brighten so that the small meters to the left and bottom of the graph hover around their zero marks. From there, adjust further as needed. Quite frankly, it requires a bit of experimentation to learn how best to use it. Naturally, whether or not you like the result depends on your own taste. In general, this EQ probably appeals more to music mixers and less to video editors or audio post engineers. I found that it worked nicely as a mastering EQ at the end of a mix chain or applied to a completed, mixed track.
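To summarize the control set and that suggested starting point in one place, here is a small Python sketch. The field names and comments simply restate the controls described above; the structure, ranges, and defaults are purely illustrative assumptions, not any actual Gullfoss API.

```python
from dataclasses import dataclass

@dataclass
class GullfossSettings:
    """Illustrative summary of the plug-in's controls -- not an actual API."""
    recover: float = 50.0   # accentuates dominated (masked) frequencies
    tame: float = 50.0      # adjusts the emphasis of dominant frequencies
    bias: float = 0.0       # positive leans on Recover, negative leans on Tame
    brighten: float = 0.0   # steers Recover/Tame toward higher or lower frequencies
    boost: float = 0.0      # positive favors bass, negative trades bass for mids
    gain: float = 0.0       # overall output trim, in dB
    bypass: bool = False

# Soundtheory's suggested starting point: Recover and Tame at 50,
# then nudge Bias and Brighten until the meters hover around zero.
starting_point = GullfossSettings()
```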

I’m a video editor and not a music mixer, so I also tested files from a corporate production consisting of a dialogue stem and a music stem. I ran two tests – one on the fully mixed and exported track, and one on the mix with the two stems kept separate. I found that the processing sounded best when I kept the stems separate and applied Gullfoss to the master bus. Of course, this isn’t the ideal scenario, because the voices and music cues change within each stem. However, with a bit of experimentation I found a setting that worked overall, and it resulted in a mix that sounded clearer and more open. Under a proper mix scenario, each voice and each music cue would be on separate tracks for individual adjustments prior to hitting the Gullfoss processing.

In regard to music mixes, it sounded best to me with tracks that weren’t extremely dense. For example, acoustic-style songs with vocals, acoustic guitars, or woodwind-based tracks seemed to benefit the most from Gullfoss. When it works well, the processing really opens up the track – almost like removing a layer of mushiness from the sound. When it was less effective, the results weren’t bad – just more in the take-it-or-leave-it category. The Soundtheory home page features several before-and-after examples. As a video editor, I did find that it had value when applied to a music track that I might use in a mix with voice-over. However, for shaping a voice, I would stick with a traditional EQ plug-in. If I need de-essing, then I would use a traditional, dedicated de-esser.

Gullfoss is a nice tool to have in the toolkit for music and mastering mixers, even though it wouldn’t be the only EQ you’d ever use. However, it can be that sparkle that brings a song up a notch. Some mixers have commented that Gullfoss saved them a ton of time versus sculpting a sound with standard EQs. When it’s at its most effective, Gullfoss processing adds that “glue” that mixers want for a music track or song.

©2020 Oliver Peters

Does Apple’s mid-2020 iMac deliver?

Apple told us at WWDC that more Intel Macs were on the way, and the latest iMac refresh is the first fulfillment of that promise. In the Mac desktop line-up, the iMac covers a span from two to ten CPU cores and up to 128GB of RAM, while the iMac Pro covers 10 to 18 cores and up to 256GB of RAM. That makes the 10-core configuration a bridge where the two branches overlap. It offers cost-effective performance and represents a great value for consumer power users, as well as professional editors, designers, photographers, engineers, and others. The recent refresh includes changes to the 21.5-inch iMac model, as well as the iMac Pro line. But I’m going to focus on the 27-inch 5K iMac, since that model will most interest video professionals.

More power, faster storage, and nano-texture glass

The 5K iMac supplied by Apple for this review was configured with the Intel “Comet Lake” Core i9 10-core CPU (3.6GHz, Turbo Boost up to 5GHz), 64GB of DDR4 RAM, the Radeon Pro 5700 XT GPU (16GB of GDDR6 VRAM), and a 4TB SSD. It also came with the optional nano-texture glass display, a keyboard with numeric keypad, trackpad, mouse, and 10Gb Ethernet. As tested, this would cost $6,158 USD (without AppleCare or tax). However, if you opted for a 1TB SSD, that retail cost would drop significantly. Fusion Drives are gone, replaced by all-flash storage options ranging from 256GB up to 8TB. The Blackmagic Disk Speed Test application clocked the internal 4TB SSD at read/write speeds in the 2500-2900 MB/s range.

Before talking performance, let’s look at the rest of the iMac. It’s still the familiar silver form factor, but with a cooling system optimized for the 125W CPU. There are four USB-A ports, two Thunderbolt 3/USB-C ports, 1Gb Ethernet, a headphone jack, and a faster SDXC (UHS-II) card reader, plus Wi-Fi and Bluetooth 5.0. If you need to connect to NAS storage (LumaForge Jellyfish, QNAP, Synology, etc.), then you’ll want to order your iMac with the optional 10GbE upgrade.

Recognizing that we are all spending more time at home, Apple improved the webcam to 1080p with an updated image sensor, enhanced the speakers with a variable EQ, added a three-point, “studio quality” mic system, and enabled “Hey Siri.”

The Retina 5K display sports 500 nits of brightness, one billion colors, and support for P3 wide color. True Tone color technology has been added. It’s a nice feature for the non-pro user. Turn it off if you are doing anything color-critical, since it warms or cools the color temperature of the display depending on the lighting environment.

The biggest buzz will be around the nano-texture glass, first introduced as an option for the Pro Display XDR. Traditional matte finishes use a coating that reduces glare and reflections, but with a loss of contrast. Nano-texture is a method of etching the glass at the nanometer level so that it redirects light. The objective is to reduce glare while maintaining contrast on par with that of the standard finish. It achieves that goal, although at a close viewing distance, text will look crisper on a display with standard glass.

At $500, it’s a reasonable option and less costly compared with the XDR. However, if your room doesn’t have a lot of direct light hitting the screen anyway, then you may not appreciate as large a benefit from the nano-texture finish. In theory, heavy-handed cleaning could scuff the display. Apple claims that if you use the supplied cleaning cloth and occasional water (if needed), then screen damage is highly unlikely. Be gentle, don’t scrub, and you’ll be fine.

How does it stand up to an iMac Pro?

I have access to several similarly configured 10-core 2017 iMac Pros, so it seemed like a great opportunity for some head-to-head testing: the iMac Pro’s Xeon/Vega combo versus the new iMac’s Core i9/5700 XT combo. Both have 10-core CPUs, 64GB RAM, and a GPU with 16GB VRAM. The iMac Pro is designed as a workstation with appropriate parts and cooling system. Until the 2019 Mac Pro was released, the iMac Pro was Apple’s most powerful Mac. On the other hand, the iMacs use components designed for general computing and gaming. That’s not to say they aren’t powerful. In fact, by the numbers, the 10-core iMac features faster components than the equivalent iMac Pro model.

As a generality, you can say that the iMac should deliver better burst performance, whereas the iMac Pro is designed for lengthy, taxing performance, like constant use, extended rendering/encoding, and so on. But it really depends on the applications you are using and how much demand you place on the machine. When it comes to value, if we were to spec a 2020 27-inch iMac to closely match the 2017 iMac Pro I am using, then the iMac Pro currently runs about $1400 more (standard glass, no AppleCare, no tax). Is that added $1400 worth it? That’s where performance testing comes in.

Benchmark performance testing

I ran both machines through a series of identical benchmarks, including BruceX 5K for Final Cut Pro X, Puget Systems’ Premiere Pro and After Effects benchmarks, as well as custom projects in Final Cut Pro X, Motion, and DaVinci Resolve. These tests covered a range of media formats and codecs, such as DNG image sequences, ProRes, H.264, REDCODE raw, ProRes RAW, and BRAW. Media sizes ranged from HD to 8K and my sequences and exports were 4K. These projects tested scaling, camera raw decodes, color correction, effects, synthetic media, and so on. I stuck to the internal drive for all media locations and export destinations, since both the iMac and iMac Pro disk speed tests came in with very similar numbers.

The export results for the new iMac and the iMac Pro were neck-and-neck when using Apple’s applications – a few seconds faster from FCPX for the iMac and the same for both with Motion. The one exception was a 4K HEVC export of my 11-layer FCPX timeline. In that case the iMac clocked in a couple of minutes faster.

The Puget Systems’ Premiere Pro and After Effects benchmarks are designed around an overall target score of 1,000 possible points. Most Macs score in the 500 to 750-point range, while custom-built PCs often achieve 1,000 or better. Both the iMac and iMac Pro fell into the expected range, with the new iMac still beating out the iMac Pro. What really surprised me was that the iMac hit 1,027 in the After Effects benchmark! That so amazed me that I had to run the test again. Same result. I can only surmise that After Effects or the testing parameters favor the architecture of the Core-series CPUs and 5700 XT GPU over that of the Xeon/Vega combo used in the iMac Pro.

The Resolve test was the only instance in which the 2017 iMac Pro beat the 2020 iMac, with export times about one minute faster for a complex 7-minute, 4K color-corrected sequence. During all of this testing, the cooling fans kicked into higher speeds for roughly the same amount of time and at the same places on both machines – for example, when exporting a Resolve clip that used temporal/spatial video noise reduction.

Should you buy one?

Clearly the new 27-inch iMac is a powerful performer equipped with one of the best-looking computer displays available anywhere. If you are an editor, designer, audio engineer, or similar creative professional, then you really can’t go wrong with one. A facility owner may skew towards the pricier iMac Pro, because it’s a workstation-class machine or they need more cores, more RAM, or additional Thunderbolt 3 ports. Customer upgradeability is limited – essentially none for the iMac Pro and only RAM for the iMac.

Of course, the “elephant in the room” question is: Should you buy an Intel Mac now, with Apple silicon presumably coming within a few months? If you need a machine now and can’t wait, then the answer is yes. Maybe you want to wait until second-generation Apple silicon hardware is out before taking the plunge into new technology. Or you need something that requires Intel, such as running Windows via Boot Camp. All good reasons for staying with an Intel hardware investment a while longer.

In reality, the transition to Apple silicon will take two years according to Apple. It may well be towards the end of that two-year period before we see comparable machines to today’s higher-end MacBook Pro, iMac, iMac Pro, or Mac Pro. We’ll know better once the first Apple silicon machines hit the market. In any case, Apple intends to support its Intel-based lines well after the transition is complete. Therefore, purchasing an Intel-based Mac today is likely to be less of a risk than people make it out to be.

The bottom line is that the mid-2020 27-inch 5K iMac in the 10-core configuration that I tested for this review is ideal for nearly any HD and 4K editing, color correction, graphics, and mixing. You can certainly go bigger with an iMac Pro or Mac Pro, but this configuration offers a tremendous value for these iconic, all-in-one desktop Macs. The 10-core model is that “sweet spot” where nearly every application can take full advantage of the available horsepower. If you need a desktop Mac now, then it should certainly be at the top of the list.

Originally written for FCP.co.

©2020 Oliver Peters

COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratic prime minister, Mohammad Mossadegh, with the absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover the missing Darbyshire interview transcript along with Amirani, which adds an air of whodunit to the film. Ultimately, what sets COUP 53 apart is the good fortune of getting Ralph Fiennes to portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography that includes documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) This film posed another challenge for Murch, who is known for his willingness to try out different editing platforms – it was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off-the-record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you had the ability to not see any blank spots. Although there was a quality shift when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instant it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my Filemaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did all of the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews, and when I got it, all of the End of Empire material.

We had four, 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoestring, which meant that the time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smart phone. When coups like this happen today you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone that was available would work on the additional workstation and add subtitling. That would then come to me and I would use that as raw material.

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert that was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important or even sometimes more important than the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project.  Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018, I think, and that began a rough six-month ride, because there were a lot of bugs that came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files would be imported again, which I nicknamed ‘shrapnel.’ I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system for the colorist that we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.

You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

Boris FX Optics

Adobe Photoshop and Lightroom are ubiquitous digital photography processing tools that hold a place in nearly every pro and semi-pro photographer’s toolkit. From straight-up image correction and enhancement to wildly creative looks, it’s hard to beat what these tools offer. However, when you get into the stylistic filter options, Photoshop looks a bit stale. You can certainly push the artwork to new levels, but it takes talent and often a lot of work. That’s not in step with today’s mindset, where powerful, yet simple-to-use effects tools are the norm.

Enter Optics for Photoshop

Last September Boris FX acquired the award-winning effects developers Digital Film Tools and Silhouette. Optics is a new tool developed since this acquisition, specifically designed for the photography market. It features a plug-in for Photoshop and Lightroom (as well as Bridge), which is paired with its own standalone application. Optics shares design similarities with DFT, but also integrates other Boris FX products, such as 75 of the Sapphire filters – a first for Photoshop users. According to Marco Paolini, Optics product designer for Boris FX (and co-founder of DFT and Silhouette), “Optics is the only Photoshop plug-in that specifically simulates optical camera filters with presets based on real-world diffusion filters, as well as realistic simulations of film stocks and motion picture lab processes.”

To use Optics from within Photoshop, simply apply the Optics filter effect to a layer, which opens the Optics Photoshop plug-in. If you first convert that layer into a Smart Object in Photoshop, then the final Optics result will be applied as a Smart Filter and can be toggled on and off in Photoshop. Otherwise, that layer will appear with the “baked in” result once you exit Optics. From Lightroom or Bridge, use the “edit with” command to route the image to the Optics application. Lightroom will send either the original version of the image or a version with any Lightroom effects applied. When done, a processed copy of the “sent” image appears in Lightroom. The Optics Standalone application supports an extensive set of camera raw file formats in addition to JPEG, TIFF, DPX, and Kodak CIN files.

Filters and looks galore

Optics offers 160 filters with thousands of customizable presets. The filters are grouped into nine categories, including color, diffusion, stylize, and more. The user interface is designed with tools and controls bordering around the image. Top – tool bar for masking and view control. Left side – the layers stack. Bottom – filter groups and selection. Right side – two tabs for presets and parameter adjustments. You can show or hide these panels as you like, depending on what you need to see at the time. Resolution choices for the image viewer include 1K, 2K, 4K, 5K, 6K, 8K, and Full resolution. The available choices in the resolution menu are dynamic depending on the size of your image. A lower resolution helps to speed up processing results on lower-powered machines, but you’ll want Full to correctly judge some effects, like sharpening.

If you are comfortable in Photoshop, then you already know how to use Optics. You can build up complex effects by combining different filters across layers. Each layer can be masked and includes all of the usual composite modes. Optics uses floating point processing, which means you can blow out highlights or exposure in one layer and then bring them back down in a higher layer without any loss of information. Test out different looks simply by building them onto different layers. Then toggle a layer on or off to see one look versus another. For instance, maybe you’re not sure if you want a sepia look. Just make one layer sepia, disable it, and add a new layer for a different style. Then enable or disable layers to compare.
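To see why floating point matters here, consider a minimal NumPy sketch – not Optics code, just the underlying idea: values pushed past white in one layer survive to be pulled back down in a later layer, whereas an integer-style pipeline clips them permanently.

```python
import numpy as np

source = np.array([0.5, 0.8, 0.95])      # original pixel values, white = 1.0

# One layer pushes exposure hard; some values exceed white.
pushed = source * 1.5                     # -> [0.75, 1.2, 1.425]

# Integer-style pipeline: over-range values are clipped between layers.
int_style = np.clip(pushed, 0.0, 1.0)     # -> [0.75, 1.0, 1.0]   highlight detail collapsed
int_recovered = int_style / 1.5           # -> [0.5, 0.667, 0.667] can't tell 0.8 from 0.95 anymore

# Floating-point pipeline: over-range values survive to the next layer.
float_recovered = pushed / 1.5            # -> [0.5, 0.8, 0.95]   original detail intact

print(int_recovered, float_recovered)
```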

The EZ Mask is a super-cool function. Let’s say you want to separate a fashion model from the background. First draw rough mask lines for the interior (the model), then rough lines for the exterior or background. Optics will then calculate a very accurate mask. Trim/adjust the mask and re-calculate as needed to better refine the edge. Masks may be inverted as well as copied between layers, which enables you to apply separate effects inside and outside of the mask area. In the example of the model, this means you can create one look or set of effects for the background and a completely different style for the model.

Optics includes a number of stylized render elements that can be added to images, like the moon or lightning zaps. This also includes a ton of lens flare effects, thanks to the included Sapphire filters. In addition to the variety of presets, you can further customize the flares by launching the separate Lens Flare Designer, which is integrated into Optics.

Working with Optics

Optics runs on Macs (macOS 10.13 or higher) and PCs (Windows 10 or higher) with fairly basic hardware requirements. I was able to test Optics on both an iMac Pro and my mid-2014 MacBook Pro. There was a minor license activation issue with the laptop, which was quickly sorted out by Boris FX’s customer service technician. Otherwise, the installations were very smooth – no hiccups with the iMac Pro. Optics responds well on less powerful computers; however, processing-intensive effects, as well as workflows with a stack of complex layers, will perform better on a faster machine. For example, effects that were instantly responsive on the iMac Pro took a bit more time on the older MacBook Pro. If you are only photo developing/color correcting, then you probably won’t notice much difference.

The Optics Standalone application may also be used to process single stills without coming in through Photoshop. The new files can be left at their original size or optionally resized. You can save custom presets, which may be used for single images or to batch process a folder of stills – for example, if I wanted all of my vacation stills processed with a certain Kodak film stock preset.

Batch processing offers another interesting possibility. Optics will batch process any image sequence, whether from a camera (such as drones) or from a video file exported/rendered out of After Effects. As long as they are JPEG, TIFF, DPX, CIN, or camera raw files, you are good to go. This is a cool way to apply a custom look that you may not have access to as a video filter or plug-in effect, even though Optics is a still photography application.

Select “batch process” and load the image sequence. Then load a saved Optics setup that you have created. Batch processing will save these files as a new image sequence complete with the custom look applied. Finally, reconstruct the processed image sequence back into a video file using After Effects, Resolve, or any other application that supports image sequences.
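For that last reassembly step, any tool that reads image sequences will do. As one hedged example – assuming ffmpeg is installed and the processed frames are numbered TIFFs – a short Python wrapper might look like the sketch below; the paths, frame pattern, and frame rate are placeholders to adapt to your own project.

```python
import subprocess

# Assumptions: ffmpeg is on the PATH, frames are named frame_00001.tif, frame_00002.tif, ...
frame_pattern = "processed/frame_%05d.tif"   # placeholder path/pattern
output_movie = "graded_sequence.mov"         # placeholder output name
frame_rate = "23.976"                        # match the original footage

subprocess.run([
    "ffmpeg",
    "-framerate", frame_rate,     # interpret the stills at the project frame rate
    "-i", frame_pattern,          # numbered image sequence in
    "-c:v", "prores_ks",          # encode to ProRes for editorial use
    "-profile:v", "3",            # ProRes 422 HQ
    "-pix_fmt", "yuv422p10le",    # 10-bit 4:2:2
    output_movie,
], check=True)
```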

If you work with a lot of stills and hate going through the gymnastics that Photoshop requires in order to create truly unique looks, then Boris FX Optics will be a game changer. It’s very addictive, but more importantly, Optics offers a huge improvement in efficiency. Plus you’ll have filter options at your fingertips not normally available in Photoshop alone. You might quickly find yourself doing all of your image processing strictly in Optics.

As with other Boris FX products, Optics is available as a perpetual license or subscription. Click this link for Optics video tutorials.

©2020 Oliver Peters