COUP 53

The last century is littered with examples of European powers and the United States attempting to mold foreign governments in their own direction. In some cases, it may have seemed at the time that these efforts would yield positive results. In others, self-interest or oil was the driving force. We have only to point to the Sykes-Picot Agreement of 1916 (think Lawrence of Arabia) to see the unintended consequences these policies have had in the Middle East over the past 100+ years, including current politics.

In 1953, Britain’s spy agency MI6 and the United States’ CIA orchestrated a military coup in Iran that replaced the democratic prime minister, Mohammad Mossadegh, with the absolute monarchy headed by Shah Mohammad Reza Pahlavi. Although the CIA has acknowledged its involvement, MI6 never has. Filmmaker Taghi Amirani, an Iranian-British citizen, set out to tell the true story of the coup, known as Operation Ajax. Five years ago he enlisted the help of noted film editor Walter Murch. What was originally envisioned as a six-month edit turned into a four-year-long odyssey of discovery and filmmaking that has become the feature documentary COUP 53.

COUP 53 was heavily researched by Amirani and leans on End of Empire, a documentary series produced by Britain’s Granada TV. That production started in 1983 and culminated in its UK broadcast in May of 1985. While this yielded plenty of interviews with first-hand accounts to pull from, one key omission was an interview with Norman Darbyshire, the MI6 Chief of Station for Iran. Darbyshire was the chief architect of the coup – the proverbial smoking gun. Yet he was inexplicably cut out of the final version of End of Empire, along with others’ references to him.

Amirani and Murch pulled back the filmmaking curtain as part of COUP 53. We discover along with Amirani the missing Darbyshire interview transcript, which adds an air of a whodunit to the film. Ultimately what sets COUP 53 apart was the good fortune to get Ralph Fiennes to portray Norman Darbyshire in that pivotal 1983 interview.

COUP 53 premiered last year at the Telluride Film Festival and then played other festivals until coronavirus closed such events down. In spite of rave reviews and packed screenings, the filmmakers thus far have failed to secure distribution. Most likely the usual distributors and streaming channels deem the subject matter to be politically toxic. Whatever the reason, the filmmakers opted to self-distribute, including a virtual cinema event with 100 cinemas on August 19th, the 67th anniversary of the coup.

Walter Murch is certainly no stranger to readers. Despite a long filmography, including working with documentary material, COUP 53 is only his second documentary feature film. (Particle Fever was the first.) This film posed another challenge for Murch, who is known for his willingness to try out different editing platforms. This was his first outing with Adobe Premiere Pro CC, his fifth major editing system. I had a chance to catch up with Walter Murch over the web from his home in London the day before the virtual cinema event. We discussed COUP 53, documentaries, and working with Premiere Pro.

___________________________________________________

[Oliver Peters] You and I have emailed back-and-forth on the progress of this film for the past few years. It’s great to see it done. How long have you been working on this film?

[Walter Murch] We had to stop a number of times, because we ran out of money. That’s absolutely typical for this type of privately-financed documentary without a script. If you push together all of the time that I was actually standing at the table editing, it’s probably two years and nine months. Particle Fever – the documentary about the Higgs Boson – took longer than that.

My first day on the job was in June of 2015 and here we are talking about it in August of 2020. In between, I was teaching at the National Film School and at the London Film School. My wife is English and we have this place in London, so I’ve been here the whole time. Plus I have a contract for another book, which is a follow-on to In the Blink of an Eye. So that’s what occupies me when my scissors are in hiding.

[OP] Let’s start with Norman Darbyshire, who is key to the storyline. That’s still a bit of an enigma. He’s no longer alive, so we can’t ask him now. Did he originally want to give the 1983 interview and MI6 came in and said ‘no’ – or did he just have second thoughts? Or was it always supposed to be an off the record interview?

[WM] We don’t know. He had been forced into early retirement by the Thatcher government in 1979, so I think there was a little chip on his shoulder regarding his treatment. The full 14-page transcript has just been released by the National Security Archives in Washington, DC, including the excised material that the producers of the film were thinking about putting into the film.

If they didn’t shoot the material, why did they cut up the transcript as if it were going to be a production script? There was other circumstantial evidence that we weren’t able to include in the film that was pretty indicative that yes, they did shoot film. Reading between the lines, I would say that there was a version of the film where Norman Darbyshire was in it – probably not named as such – because that’s a sensitive topic. Sometime between the summer of 1983 and 1985 he was removed and other people were filmed to fill in the gaps. We know that for a fact.

[OP] As COUP 53 shows, the original interview cameraman clearly thought it was a good interview, but the researcher acts like maybe someone got to management and told them they couldn’t include this.

[WM] That makes sense given what we know about how secret services work. What I still don’t understand is why then was the Darbyshire transcript leaked to The Observer newspaper in 1985. A huge article was published the day before the program went out with all of this detail about Norman Darbyshire – not his name, but his words. And Stephen Meade – his CIA counterpart – who is named. Then when the program ran, there was nothing of him in it. So there was a huge discontinuity between what was published on Sunday and what people saw on Monday. And yet, there was no follow-up. There was nothing in the paper the next week, saying we made a mistake or anything.

I think eventually we will find out. A lot of the people are still alive. Donald Trelford, the editor of The Observer, who is still alive, wrote something a week ago in a local paper about what he thought happened. Alison [Rooper] – the original research assistant – said in a letter to The Observer that these are Norman Darbyshire’s words, and “I did the interview with him and this transcript is that interview.”

[OP] Please tell me a bit about working with the discovered footage from End of Empire.

[WM] End of Empire was a huge, fourteen-episode project that was produced over a three or four year period. It’s dealing with the social identity of Britain as an empire and how it’s over. The producer, Brian Lapping, gave all of the outtakes to the British Film Institute. It was a breakthrough to discover that they have all of this stuff. We petitioned the Institute and sure enough they had it. We were rubbing our hands together thinking that maybe Darbyshire’s interview was in there. But, of all of the interviews, that’s the one that’s not there.

Part of our deal with the BFI was that we would digitize this 16mm material for them. They had reconstituted everything. If there was a section that was used in the film, they replaced it with a reprint from the original film, so that you had the ability to not see any blank spots. Although there was a quality shift when you are looking at something used in the film, because it’s generations away from the original 16mm reversal film.

For instance, Stephen Meade’s interview is not in the 1985 film. Once Darbyshire was taken out, Meade was also taken out. Because it’s 16mm we can still see the grease pencil marks and splices for the sections that they wanted to use. When Meade talks about Darbyshire, he calls him Norman and when Darbyshire talks about Meade he calls him Stephen. So they’re a kind of double act, which is how they are in our film. Except that Darbyshire is Ralph Fiennes and Stephen Meade – who has also passed on – appears through his actual 1983 interview.

[OP] Between the old and new material, there was a ton of footage. Please explain your workflow for shaping this into a story.

[WM] Taghi is an inveterate shooter of everything. He started filming in 2014 and had accumulated about 40 hours by the time I joined in the following year. All of the scenes where you see him cutting transcripts up and sliding them together – that’s all happening as he was doing it. It’s not recreated at all. The moment he discovered the Darbyshire transcript is the actual instant it happened. By the end, when we added it all up, it was 532 hours of material.

Forgetting all of the creative aspects, how do you keep track of 532 hours of stuff? It’s a challenge. I used my Filemaker Pro database that I’ve been using since the mid-1980s on The Unbearable Lightness of Being. Every film, I rewrite the software slightly to customize it for the film I’m on. I took frame-grabs of all the material so I had stacks and stacks of stills for every set-up.

By 2017 we’d assembled enough material to start on a structure. Using my cards, we spent about two weeks sitting and thinking ‘we could begin here and go there, and this is really good.’ Each time we’d do that, I’d write a little card. We had a stack of cards and started putting them up on the wall and moving them around. We finally had two blackboards of these colored cards with a start, middle, and end. Darbyshire wasn’t there yet. There was a big card with an X on it – the mysterious X. ‘We’re going to find something on this film that nobody has found before.’ That X was just there off to the side looking at us with an accusing glare. And sure enough that X became Norman Darbyshire.

At the end of 2017 I just buckled my seat belt and started assembling it all. I had a single timeline of all of the talking heads of our experts. It would swing from one person to another, which would set up a dialogue among themselves – each answering the other one’s question or commenting on a previous answer. Then a new question would be asked and we’d do the same thing. That was 4 1/2 hours long. Then I did the same thing for all of the archival material, arranging it chronologically. Where was the most interesting footage and the highest quality version of that? That was almost 4 hours long. Then I did the same thing with all of the Iranian interviews and, once I had that, all of the End of Empire material.

We had four, 4-hour timelines, each of them self-consistent. Putting on my Persian hat, I thought, ‘I’m weaving a rug!’ It was like weaving threads. I’d follow the talking heads for a while and then dive into some archive. From that into an Iranian interview and then some End of Empire material. Then back into some talking heads and a bit of Taghi doing some research. It took me about five months to do that work and it produced an 8 1/2 hour timeline.

We looked at that in June of 2018. What were we going to do with that? Is it a multi-part series? It could be, but Netflix didn’t show any interest. We were operating on a shoestring, which meant that time was running out and we wanted to get it out there. So we decided to go for a feature-length film. It was right about that time that Ralph Fiennes agreed to be in the film. Once he agreed, that acted like a condenser. If you have Ralph Fiennes, things tend to gravitate around that performance. We filmed his scenes in October of 2018. I had roughed it out using the words of another actor who came in and read for us, along with stills of Ralph Fiennes as M. What an irony! Here’s a guy playing a real MI6 agent who overthrew a whole country, who plays M, the head of MI6, who dispatches James Bond to kill malefactors!

Ralph was recorded in an hour and a half in four takes at the Savoy Hotel – the location of the original 1983 interviews. At the time, he was acting in Shakespeare’s Antony and Cleopatra every evening. So he came in the late morning and had breakfast. By 1:30-ish we were set up. We prayed for the right weather outside – not too sunny and not rainy. It was perfect. He came and had a little dialogue with the original cameraman about what Darbyshire was like. Then he sat down and entered the zone – a fascinating thing to see. There was a little grooming touch-up to knock off the shine and off we went.

Once we shot Ralph, we were a couple of months away from recording the music and then final color timing and the mix. We were done with a finished, showable version in March of 2019. It was shown to investors in San Francisco and at the TED conference in Vancouver. We got the usual kind of preview feedback and dove back in and squeezed another 20 minutes or so out of the film, which got it to its present length of just under two hours.

[OP] You have a lot of actual stills and some footage from 1953, but as with most historical documentaries, you also have re-enactments. Another unique touch was the paint effect used to treat these re-enactments to differentiate them stylistically from the interviews and archival footage.

[WM] As you know, 1953 is 50+ years before the invention of the smartphone. When coups like this happen today you get thousands of points-of-view. Everyone is photographing everything. That wasn’t the case in 1953. On the final day of the coup, there’s no cinematic material – only some stills. But we have the testimony of Mossadegh’s bodyguard on one side and the son of the general who replaced Mossadegh on the other, plus other people as well. That’s interesting up to a point, but it’s in a foreign language with subtitles, so we decided to go the animation path.

This particular technique was something Taghi’s brother suggested and we thought it was a great idea. It gets us out of the uncanny valley, in the sense that you know you’re not looking at reality and yet it’s visceral. The idea is that we are looking at what is going on in the head of the person telling us these stories. So it’s intentionally impressionistic. We were lucky to find Martyn Pick, the animator who does this kind of stuff. He’s Mr. Oil Paint Animation in London. He storyboarded it with us and did a couple of days of filming with soldiers doing the fight. Then he used that as the base for his rotoscoping.

[OP] Quite a few of the first-hand Iranian interviews are in Persian with subtitles. How did you tackle those?

[WM] I speak French and Italian, but not Persian. I knew I could do it, but it was a question of the time frame. So our workflow was that Taghi and I would screen the Iranian language dailies. He would point out the important points and I would take notes. Then Taghi would do a first pass on his workstation to get rid of the chaff. That’s what he would give to the translators. We would hire graduate students. Fateme Ahmadi, one of the associate producers on the film, is Iranian and she would also do translation. Anyone that was available would work on the additional workstation and add subtitling. That would then come to me and I would use that as raw material.

To cut my teeth on this, I tried using the interview with Hamid Ahmadi, the Iranian historical expert who was recorded in Berlin. Without translating it, I tried to cut it solely on body language and tonality. I just dove in and imagined, if he is saying ‘that’ then I’m thinking ‘this.’ I was kind of like the way they say people with aphasia are. They don’t understand the words, but they understand the mood. To amuse myself, I put subtitles on it, pretending that I knew what he was saying. I showed it to Taghi and he laughed, but said that in terms of the continuity of the Persian, it made perfect sense. The continuity of the dialogue and moods didn’t have any jumps for a Persian speaker. That was a way to tune myself into the rhythms of the Persian language. That’s almost half of what editing is – picking up the rhythm of how people say things – which is almost as important or even sometimes more important than the words they are using.

[OP] I noticed in the credits that you had three associate editors on the project.  Please tell me a bit about their involvement.

[WM] Dan [Farrell] worked on the film through the first three months and then a bit on the second section. He got a job offer to edit a whole film himself, which he absolutely should do. Zoe [Davis] came in to fill in for him and then after a while also had to leave. Evie [Evelyn Franks] came along and she was with us for the rest of the time. They all did a fantastic job, but Evie was on it the longest and was involved in all of the finishing of the film. She’s still involved, handling all of the media material that we are sending out.

[OP] You are also known for your work as a sound designer and re-recording mixer, but I noticed someone else handled that for this film. What was your sound role on COUP 53?

[WM] I was busy in the cutting room, so I didn’t handle the final mix. But I was the music editor for the film, as well as the picture editor. Composer Robert Miller recorded the music in New York and sent a rough mixdown of his tracks. I would lay that onto my Premiere Pro sequence, rubber-banding the levels to the dialogue.

When he finally sent over the instrument stems – about 22 of them – I copied and pasted the levels from the mixdown onto each of those stems and then tweaked the individual levels to get the best out of every instrument. I made certain decisions about whether or not to use an instrument in the mix. So in a sense, I did mix the music on the film, because when it was delivered to Boom Post in London, where we completed the mix, all of the shaping that a music mixer does was already taken care of. It was a one-person mix and so Martin [Jensen] at Boom only had to get a good level for the music against the dialogue, place it in a 5.1 environment with the right equalization, and shape that up and down slightly. But he didn’t have to get into any of the stems.

[OP] I’d love to hear your thoughts on working with Premiere Pro over these several years. You’ve mentioned a number of workstations and additional personnel, so I would assume you had devised some type of a collaborative workflow. That is something that’s been an evolution for Adobe over this same time frame.

[WM] We had about 60TB of shared storage. Taghi, Evie Franks, and I each had workstations. Plus there was a fourth station for people doing translations. The collaborative workflow was clunky at the beginning. The idea of shared spaces was not what it is now and not what I was used to from Avid, but I was willing to go with it.

Adobe introduced the basics of a more fluid shared workspace in early 2018 I think, and that began a six-month rough ride, because there were a lot of bugs that came along with that deep software shift. One of them was what I came to call ‘shrapnel.’ When I imported a cut from another workstation into my workstation, the software wouldn’t recognize all the related media clips, which were already there. So these duplicate files would be imported again, which I nicknamed ‘shrapnel.’ I created a bin just to stuff these clips in, because you couldn’t delete them without causing other problems.

Those bugs went away in the late summer of 2018. The ‘shrapnel’ disappeared along with other miscellaneous problems – and the back-and-forth between systems became very transparent. Things can always be improved, but from a hands-on point-of-view, I was very happy with how everything worked from August or September of 2018 through to the completion of the film.

We thought we might stay with Premiere Pro for the color timing, which is very good. But DaVinci Resolve was the system for the colorist that we wanted to get. We had to make some adjustments to go to Resolve and back to Premiere Pro. There were a couple of extra hurdles, but it all worked and there were no kludges. Same for the sound. The export for Pro Tools was very transparent.

[OP] A lot of what you’ve written and lectured about is the rhythm of editing – particularly dramatic films. How does that equate to a documentary?

[WM] Once you have the initial assembly – ours was 8 hours, Apocalypse Now was 6 hours, Cold Mountain was 5 1/2 hours – the jobs are not that different. You see that it’s too long by a lot. What can we get rid of? How can we condense it to make it more understandable, more emotional, clarify it, and get a rhythmic pulse to the whole film?

My approach is not to make a distinction at that point. You are dealing with facts and have to pay attention to the journalistic integrity of the film. On a fiction film you have to pay attention to the integrity of the story, so it’s similar. Getting to that point, however, is highly different, because the editor of an unscripted documentary is writing the story. You are an author of the film. What an author does is stare at a blank piece of paper and say, ‘what am I going to begin with?’ That is part of the process. I’m not writing words, necessarily, but I am writing. The adjectives and nouns and verbs that I use are the shots and sounds available to me.

I would occasionally compare the process for cutting an individual scene to churning butter. You take a bunch of milk – the dailies – and you put them into a churn – Premiere Pro – and you start agitating it. Could this go with that? No. Could this go with that? Maybe. Could this go? Yes! You start globbing things together and out of that butter churning process you’ve eventually got a big ball of butter in the churn and a lot of whey – buttermilk. In other words, the outtakes.

That’s essentially how I work. This is potentially a scene. Let me see what kind of scene it will turn into. You get a scene and then another and another. That’s when I go to the card system to see what order I can put these scenes in. That’s like writing a script. You’re not writing symbols on paper, you are taking real images and sound and grappling with them as if they are words themselves.

___________________________________________________

Whether you are a student of history, filmmaking, or just love documentaries, COUP 53 is definitely worth the watch. It’s a study in how real secret services work. Along the way, the viewer is also exposed to the filmmaking process of discovery that goes into every well-crafted documentary.

Images from COUP 53 courtesy of Amirani Media and Adobe.


You can learn more about the film at COUP53.com.

For more, check out these interviews at Art of the Cut, CineMontage, and Forbes.

©2020 Oliver Peters

HDR and RAW Demystified, Part 2

(Part 1 of this series is linked here.) One of the surprises of NAB 2018 was the announcement of Apple ProRes RAW. This brought camera raw video to the forefront for many who had previously discounted it. To understand the ‘what’ and ‘why’ about raw, we first have to understand camera sensors.

For quite some years now, cameras have been engineered with a single CMOS sensor. Most of these sensors use a Bayer-pattern array of photosites, named for Bryce Bayer, the Kodak color scientist who developed the system. Photosites are the light-receiving elements of a sensor. The Bayer pattern is a checkerboard filter that separates light according to red/green/blue wavelengths. Each photosite captures light as monochrome data that has been separated according to color components. In doing so, the camera captures a wide exposure latitude as linear data – greater than what can be squeezed into standard video in this native form. There is a correlation between physical photosite size and resolution. With smaller photosites, more can fit on the sensor, yielding greater native resolution. But with fewer, larger photosites, the sensor has better low-light capabilities. In short, resolution and exposure latitude are a trade-off in sensor design.
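The checkerboard sampling is easy to visualize in code. Here is a minimal sketch in Python of an RGGB Bayer mosaic – an illustration of the sampling pattern only, not any particular camera’s pipeline. (Real cameras then demosaic, or de-Bayer, this monochrome data to reconstruct full RGB.)

```python
def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: each photosite records only ONE
    color component of the light hitting it, as monochrome data.
    `rgb` is a 2-D list of (r, g, b) tuples representing the scene."""
    mosaic = []
    for y, row in enumerate(rgb):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0:
                out_row.append(r if x % 2 == 0 else g)  # R G R G ...
            else:
                out_row.append(g if x % 2 == 0 else b)  # G B G B ...
        mosaic.append(out_row)
    return mosaic
```

Note that half of all photosites sample green, matching the eye’s greater sensitivity to luminance detail in that part of the spectrum.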

Log encoding

Typically, raw data is converted into RGB video by the internal electronics of the camera. It is subsequently converted into component digital video and recorded using a compressed or uncompressed codec and one of the various color sampling schemes (4:4:4, 4:2:2, 4:1:1, 4:2:0). These numbers express a ratio that represents YCrCb – where Y = luminance (the first number) and CrCb = two difference signals (the second two numbers) used to derive color information. You may also see this written as YUV, Y/R-Y/B-Y or other forms. In the conversion, sampling, and compression process, some information is lost. For instance, a 4:4:4 codec preserves twice as much color information as a 4:2:2 codec. Two methods are used to preserve wide color gamuts and extended dynamic range: log encoding and camera raw capture.
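The J:a:b notation can be reduced to simple arithmetic. This little sketch (plain Python, purely illustrative) computes how much chroma data each scheme keeps relative to 4:4:4, and the resulting overall data relative to full sampling:

```python
def chroma_kept(scheme):
    """Fraction of chroma samples kept vs 4:4:4, from J:a:b notation.
    J = luma samples per row of a J x 2 pixel block; a = chroma samples
    in the first row; b = additional chroma samples in the second row."""
    j, a, b = scheme
    return (a + b) / (2 * j)   # full chroma would be 2 * J samples

def relative_data(scheme):
    """Total data vs 4:4:4: one full luma plane plus two chroma planes."""
    return (1 + 2 * chroma_kept(scheme)) / 3
```

So 4:2:2 keeps half the chroma (two-thirds of the total data), while 4:2:0 and 4:1:1 both keep a quarter of the chroma (half the total data) – they just distribute those samples differently.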

Most camera manufacturers offer some form of logarithmic video encoding, but the best-known is ARRI’s Log-C. Log encoding applies a logarithm to linear sensor data in order to compress that data into a “curve”, which will fit into the available video signal “bucket”. Log-C video, when left uncorrected and viewed in Rec. 709, will appear to lack contrast and saturation. To correct the image, a LUT (color look-up table) must be applied, which is the mathematical inverse of the process used to encode the Log-C signal. Once restored, the image can be graded to use and/or discard as much of the data as needed, depending on whether you are working in an SDR or HDR mode.

Remember that the conversion from a flat, log image to full color will only look good when you have bit-depth precision. This means that if you are working with log material in an 8-bit system, you only have 256 steps between black and white. That may not be enough and the grade from log to full color may result in banding. If you work in a 10-bit system, then you have 1024 steps instead of only 256 between the same black and white points. This greater precision yields a smoother transition in gradients and, therefore, no banding. If you work with ProRes recordings, then according to Apple, “Apple ProRes 4444 XQ and Apple ProRes 4444 support image sources up to 12 bits and preserve alpha sample depths up to 16 bits. All Apple ProRes 422 codecs support up to 10-bit image sources, though the best 10-bit quality is obtained with the higher-bit-rate family members – Apple ProRes 422 and Apple ProRes 422 HQ.”
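The banding argument is easy to demonstrate numerically. This sketch (illustrative Python, not any grading application’s actual math) quantizes a narrow tonal range – the kind that gets stretched wide when grading flat log footage – at 8 and 10 bits and counts how many distinct code values survive:

```python
def quantize(value, bits):
    """Round a normalized 0.0-1.0 value to the nearest code in a
    bits-deep integer scale, then back to normalized form."""
    levels = (1 << bits) - 1   # 255 steps for 8-bit, 1023 for 10-bit
    return round(value * levels) / levels

def distinct_steps(bits, lo=0.2, hi=0.4, samples=4096):
    """Count unique codes available within a narrow tonal band.
    Fewer codes across a stretched gradient means visible banding."""
    levels = (1 << bits) - 1
    codes = {round(quantize(lo + (hi - lo) * i / samples, bits) * levels)
             for i in range(samples + 1)}
    return len(codes)
```

Across that 20-40% band, 8-bit offers only 52 codes while 10-bit offers 205 – roughly four times the precision, which is the difference between a stepped gradient and a smooth one after an aggressive log-to-color grade.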

Camera raw

RAW is not an acronym. It’s simply shorthand for camera raw information. Before video, camera raw was first used in photography, typified by Canon raw (.cr2) and Adobe’s Digital Negative (.dng) formats. The latter was released as an open standard and is widely used in video as Cinema DNG.

Camera raw in video cameras made its first practical introduction when RED Digital Cinema introduced their RED ONE cameras equipped with REDCODE RAW. While not the first with raw, RED’s innovation was to record a compressed data stream as a movie file (.r3d), which made post-production significantly easier. The key difference between raw and non-raw workflows is that with raw, the conversion into video no longer takes place in the camera or an external recorder. This conversion happens in post. Since the final color and dynamic range data is not “baked” into the file, the post-production process used can be improved in future years, making an even better result possible with an updated software version.

Camera raw data is usually proprietary to each manufacturer. In order for any photographic or video application to properly decode a camera raw signal, it must have a plug-in from that particular manufacturer. Some of these are included with a host application and some require that you download and install a camera-specific add-on. Such add-ons or plug-ins are considered to be a software “black box”. The decoding process is hidden from the host application, but the camera supplier will enable certain control points that an editor or colorist can adjust. For example, with RED’s raw module, you have access to exposure, the demosaicing (de-Bayering) resolution, RED’s color science method, and color temperature/tint. Other camera manufacturers will offer less.

Apple ProRes RAW

The release of ProRes RAW gives Apple a raw codec that is optimized for multi-stream playback performance in Final Cut Pro X and on the newest Apple hardware. This is an acquisition codec, so don’t expect to see the ability to export a timeline from your NLE and record it into ProRes RAW. Although I wouldn’t count out a transcode from another raw format into ProRes RAW, or possibly an export from FCPX when your timeline only consists of ProRes RAW content. In any case, that’s not possible today. In fact, you can only play ProRes RAW files in Final Cut Pro X or Apple Motion, but only FCPX displays the correct color information at default settings.

Currently ProRes RAW has only been licensed by Apple to Atomos and DJI. The Atomos Inferno and Sumo 19 units are equipped with ProRes RAW. This is only active with certain Canon, Panasonic, and Sony camera models that can send their raw signal out over an SDI cable. Then the Atomos unit will remap the camera’s raw values to ProRes RAW and encode the file. DJI’s Zenmuse X7 gimbal camera has also been updated to support ProRes RAW. With DJI, the acquisition occurs in-camera, rather than via an external recorder.

Like RED’s REDCODE, Apple ProRes RAW is a variable bit-rate, compressed codec with different quality settings. The data rates of ProRes RAW and ProRes RAW HQ fall roughly in line with those of ProRes and ProRes HQ. Unlike RED, no controls are exposed within Final Cut Pro X to access specific raw settings. Therefore, Final Cut Pro X’s color processing controls may or may not take effect prior to the conversion from raw to video. At this point that’s an unknown.

(Read more about ProRes RAW here.)

Conclusion

The main advantage of the shift to using movie file formats for camera raw – instead of image sequence files – is that processing is faster and the formats are conducive to working natively in most editing applications.

It can be argued whether or not there is really much difference in starting with a log-encoded versus a camera raw file. Leading feature films presented at the highest resolutions have originated both ways. Nevertheless, both methods empower you with extensive creative control in post when grading the image. Both accommodate a move into HDR and wider color gamuts. Clearly log and raw workflows future-proof your productions for little or no additional investment.

Originally written for RedShark News.

©2018 Oliver Peters

HDR and RAW Demystified, Part 1

Two buzzwords have been the highlight of many tech shows within this past year – HDR and RAW. In this first part, I will attempt to clarify some of the concepts surrounding video signals, including High Dynamic Range (HDR). In part 2, I’ll cover more about camera raw recordings.

Color space

Four things define the modern video signal: color space (aka color gamut), white point, gamma curve, and dynamic range. The easiest way to explain color space is with the standard triangular plot of the color spectrum, known as a chromaticity diagram. This chart defines the maximum colors visible to most humans when visualized on an x,y grid. Within it are numerous ranges that define a less-than-full range of colors for various standards. These represent the technical color spaces that cameras and display systems can achieve. On most charts, the most restrictive ranges are sRGB and Rec. 709. The former is what many computer displays have used until recently, while Rec. 709 is the color space standard for high definition TV. (These recommendations were developed by the International Telecommunication Union, so Rec. 709 is simply shorthand for ITU-R Recommendation BT.709.)

Next out is P3, a standard adopted for digital cinema projection and more recently, new computer displays, like those on the Apple iMac Pro. While P3 doesn’t display substantially more color than Rec. 709, colors at the extremes of the range do appear different. For example, the P3 color space will render more vibrant reds with a more accurate hue than Rec. 709 or sRGB. With UHD/4K becoming mainstream, there’s also a push for “better pixels”, which has brought about the Rec. 2020 standard for 4K video. This standard covers about 75% of the visible spectrum, although it’s perfectly acceptable to deliver 4K content that was graded in a Rec. 709 color space. That’s because most current displays that are Rec. 2020 compatible can’t yet display 100% of the colors defined in the standard.

The center point of the chromaticity diagram is white. However, different systems consider a slightly different color temperature to be white. Color temperature is measured in kelvins. Displays are a direct illumination source, and for those, 6500K (more accurately, 6504K) is considered pure white. This is commonly referred to as D-65. Digital cinema, which is a projected image, uses 6300K as its white point. Therefore, when delivering something intended for P3, it is important to specify whether that is P3 D-65 or P3 DCI (digital cinema).

Dynamic range

Color space doesn’t live on its own, because the brightness of the image also defines what we see. Brightness and contrast are expressed as dynamic range. Up until the advent of UHD/4K, we have been viewing displays in SDR (standard dynamic range). If you think of the chromaticity diagram as lying flat, with dynamic range as a column extending upward from the chart on the z-axis, you can quickly see that the video signal is a volumetric combination of color space and dynamic range. With SDR, that “column” goes from 0 IRE up to 100 IRE (also expressed as 0-100 percent).

Gamma is the function that changes linear brightness values into the weighted values that are sent to our screens. It maps each numerical pixel value to an actual brightness. By increasing or decreasing gamma values, you are, in effect, bending the straight line between the darkest and lightest values into a curve. This changes the midtones of the displayed image, making it appear darker or lighter. Gamma values are applied to both the original image and to the display system. When they don’t match, you run into situations where the image will look vastly different when viewed on one system versus another.
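That weighting can be sketched with the simple power-function approximation of gamma. (Real broadcast transfer functions, such as Rec. 709’s, are piecewise curves with a linear toe, so this is illustrative only.)

```python
def gamma_encode(linear, gamma=2.2):
    """Map linear scene brightness (0.0-1.0) to a display-referred value."""
    return linear ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Invert the encoding back to linear light."""
    return encoded ** gamma

# A 50% linear-light midtone is stored as roughly 0.73 after encoding,
# which is why a gamma mismatch shows up most visibly in the midtones:
print(round(gamma_encode(0.5), 2))  # → 0.73
```

Decoding with a different gamma than was used for encoding is exactly the mismatch described above: the midtones land in the wrong place and the image reads as too dark or too light.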

With the advent of UHD/4K, users have also been introduced to HDR (high dynamic range), which allows us to display brighter images and recover the overshoot elements in a frame, like bright lights and reflections. It is important to understand that HDR video is not the same as HDR photography. HDR photos are created by capturing several bracketed exposures of the same image and then blending those into a composite – either in-camera or via software, like Photoshop or Lightroom. HDR photos often yield hyper-real results, such as when high-contrast sky and landscape elements are combined.

HDR video is quite different. HDR photography is designed to work with existing technology, whereas HDR video actually takes advantage of the extended brightness range made possible in new displays. It is also only visible with the newest breed of UHD/4K TV sets that are HDR-capable. Display illumination is measured in nits. One nit equals one candela per square meter – in other words, the light of a single candle spread over a square meter. SDR displays have been capable of up to 100 nits. Modern computer displays, monitors, and consumer television sets can now display brightness in the range of 500 to 1,000 nits and even brighter. Anything over 1,000 nits is considered HDR. But that’s not the end of the story, as there are currently four competing standards: Dolby Vision, HDR10, HDR10+, and HLG. I won’t get into the weeds about the specifics of each, but they all apply different peak brightness levels and methods. Their nit levels range from 1,000 up to Dolby Vision’s theoretical limit of 10,000 nits.

Just because you own a high-nits display doesn’t mean you are seeing HDR. It isn’t simply turning up the brightness “to 11”, but rather providing the headroom to extend the parts of the image that exceed the normal range. These peaks can now be displayed with detail, rather than being compressed or clipped as they are in SDR. When an HDR master is created, metadata is stored with the file that tells the display device that the signal is an HDR signal and to turn on the necessary circuitry. That metadata is carried over HDMI. Therefore, every device in the playback chain must be HDR-capable.

HDR also means more hardware is needed to work with it accurately. Although you may have grading software that accommodates HDR – and a 500-nit display, like the one in an iMac Pro – you can’t effectively see HDR in order to properly grade it. That still requires proper capture/playback hardware from Blackmagic Design or AJA, along with a studio-grade, external HDR monitor.

Unfortunately, there’s one dirty little secret with HDR. Monitors and TV sets cannot display a full screen image at maximum brightness. You can’t display a totally white background at 1,000 nits on a 1,000-nit display. These displays employ gain circuitry to darken the image in those cases. The responsiveness of any given display model will vary widely depending on how much of the screen is at full brightness and for how long. No two models will be at exactly the same brightness for any given percentage at peak level.

Today HDR is still the “wild west” and standards will evolve as the market settles in on a preference. The good news is that cameras have been delivering content that is “HDR-ready” for several years. This brings us to camera raw and log encoding, which will be covered in Part 2.

(Here is some additional information from SpectraCal and AVForums.)

Originally written for RedShark News.

©2018 Oliver Peters

Bricklayers and Sculptors

One of the livelier hangouts on the internet for editors to kick around their thoughts is the Creative COW’s Apple Final Cut Pro X Debates forum. Part forum, part bar room brawl, it started as a place to discuss the relative merits (or not) of Apple’s FCP X. As such, the COW’s bosses allow a bit more latitude than in other forums. However, often threads derail into really thoughtful discussions about editing concepts.

Recently one of its frequent contributors, Simon Ubsdell, posted a thread called Bricklayers and Sculptors. In his words, “There are two different types of editors: Those who lay one shot after another like a bricklayer builds a wall. And those who discover the shape of their film by sculpting the raw material like a sculptor works with clay. These processes are not the same. There is no continuum that links these two approaches. They are diametrically opposed.”

Simon Ubsdell is the creative director, partner, and editor/mixer for London-based trailer shop Tokyo Productions. Ubsdell is also an experienced plug-in developer, having developed and/or co-developed the TKY, Tokyo, and Hawaiki effects plug-ins. But beyond that, Simon is one of the folks with whom I often have e-mail discussions regarding the state of editing today. We were both early adopters of FCP X who have since shifted almost completely to Adobe Premiere Pro. In keeping with the theme of his forum post, I asked him to share his ideas about how to organize an edit.

With Simon’s permission, the following are his thoughts on how best to organize editing projects in a way that keeps you immersed in the material and results in editing with greater assurance that you’ve made the best possible edit decisions.

________________________________________________

Simon Ubsdell – Bricklayers and Sculptors in practical terms

To avoid getting too general about this, let me describe a job I did this week. The producer came to us with a documentary that’s still shooting and only roughly “edited” into a very loose assembly – it’s the stories of five different women that will eventually be interwoven, but that hasn’t happened yet. As I say, extremely rough and unformed.

I grabbed all the source material and put it on a timeline. That showed me at a glance that there was about four hours of it in total. I put in markers to show where each woman’s material started and ended, which allowed me to see how much material I had for each of them. If I ever needed to go back to “everything”, it would make searching easier. (Not an essential step by any means.)

I duplicated that sequence five times to make sequences of all the material for each woman. Then I made duplicates of those duplicates and began removing everything I didn’t want. (At this point I am only looking for dialogue and “key sound”, not pictures which I will pick up in a separate set of passes.)

Working subtractively

From this point on I am working almost exclusively subtractively. A lot of people approach string-outs by adding clips from the browser – but here all my clips are already on the timeline and I am taking away anything I don’t want. This is for me the key part of the process because each edit is not a rough approximation, but a very precise “topping and tailing” of what I want to use. If you’re “editing in the Browser” (or in Bins), you’re simply not going to be making the kind of frame accurate edits that I am making every single time with this method.

The point to grasp here is that instead of “making bricks” for use later on, I am already editing in the strictest sense – making cuts that will stand up later on. I don’t have to select and then trim – I am doing both operations at the same time. I have my editing hat on, not an organizing hat. I am focused on a timeline that is going to form the basis of the final edit. I am already thinking editorially (in the sense of creative timeline-based editing) and not wasting any time merely thinking organizationally.

I should mention here that this is an iterative process – not just one pass through the material, but several. At certain points I will keep duplicates as I start to work on shorter versions. I won’t generally keep that many duplicates – usually just an intermediate “long version”, which has lost all the material I definitely don’t want. And by “definitely don’t want” I’m not talking about heads and tails that everybody throws away where the camera is being turned on or off or the crew are in shot – I am already making deep, fine-grained editorial and editing decisions that will be of immense value later on. I’m going straight to the edit point that I know I’ll want for my finished show. It’s not a provisional edit point – it’s a genuine editorial choice. From this point of view, the process of rejecting slates and tails is entirely irrelevant and pointless – a whole process that I sidestep entirely. I am cutting from one bit that I want to keep directly to the next bit I want to keep and I am doing so with fine-tuned precision. And because I am working subtractively I am actually incorporating several edit decisions in one – in other words, with one delete step I am both removing the tail from the outgoing clip and setting the start of the next clip.

Feeling the pacing and flow

Another key element here is that I can see how one clip flows into another – even if I am not going to be using those two clips side-by-side. I can already get a feel for the pacing. I can also start to see what might go where, so as part of this phase, I am moving things around as options start suggesting themselves. Because I am working in the timeline with actual edited material, those options present themselves very naturally – I’m getting offered creative choices for free. I can’t stress too strongly how relevant this part is. If I were simply sorting through material in a Browser/Bin, this process would not be happening or at least not happening in anything like the same way. The ability to reorder clips as the thought occurs to me and for this to be an actual editorial decision on a timeline is an incredibly useful thing and again a great timesaver. I don’t have to think about editorial decisions twice.

And another major benefit that is simply not available to Browser/Bin-based methods, is that I am constructing editorial chunks as I go. I’m taking this section from Clip A and putting it side-by-side with this other section from Clip A, which may come from earlier in the actual source, and perhaps adding a section from Clip B to the end and something from Clip C to the front. I am forming editorial units as I work through the material. And these are units that I can later use wholesale.

Another interesting spin-off is that I can very quickly spot “duplicate material”, by which I mean instances where the same information or sentiment is conveyed in more or less the same terms at different places in the source material. Because I am reviewing all of this on the timeline and because I am doing so iteratively, I can very quickly form an opinion as to which of the “duplicates” I want to use in my final edit.

Working towards the delivery target

Let’s step back and look at a further benefit of this method. Whatever your final film is, it will have the length that it needs to be – unless you’re Andy Warhol. You’re delivering a documentary for broadcast or theatrical distribution, or a short form promo or a trailer or TV spot. In each case you have a rough idea of what final length you need to arrive at. In my case, I knew that the piece needed to be around three minutes long. And that, of course, throws up a very obvious piece of arithmetic that is helpful to know. I had five stories to fit into those three minutes, which meant that the absolute maximum of dialogue that I would need would be just over 30 seconds from each story! The best way of getting to those 30 seconds is obviously subtractively.

I know I need to get my timeline of each story down to something approaching this length. Because I’m not simply topping and tailing clips in the Browser, but actually sculpting them on the timeline (and forming them into editorial units, as described above), I can keep a very close eye on how this is coming along for each story strand. I have a continuous read-out of how well I am getting on with reducing the material down to the target length. By contrast, if I approach my final edit with 30 minutes of loosely selected source material to juggle, I’m going to spend a lot more time on editorial decisions that I could have successfully made earlier.

So the final stage of the process in this case was simply to combine and rearrange the pre-edited timelines into a final timeline – a process that is now incredibly fast and a lot of fun. I’ve narrowed the range of choices right down to the necessary minimum. A great deal of the editing has literally already been done, because I’ve been editing from the very first moment that I laid all the material on the original timeline containing all the source material for the project.

As you can see, the process has been essentially entirely subtractive throughout – a gradual whittling down of the four hours to something closer to three minutes. This is not to say there won’t be additive parts to the overall edit. Of course, I added music, SFX, and graphics, but from the perspective of the process as a whole, this is addition at the most trivial level.

Learning to tell the story in pictures

There is another layer of addition that I have left out and that’s what happens with the pictures. So far I’ve only mentioned what is happening with what is sometimes called the “radio edit”. In my case, I will perform the exact same (sometimes iterative) process of subtracting the shots I want to keep from the entirety of the source material – again, this is obviously happening on a timeline or timelines. The real delight of this method is to review all the “pictures” without reference to the sound, because in doing so you can get a real insight into how the story can be told pictorially. I will often review the pictures having very, very roughly laid up some of the music tracks that I have planned on using. It’s amazing how this lets you gauge both whether your music suits the material and conversely whether the pictures are the right ones for the way you are planning to tell the story.

This brings me to a key point I would make about how I personally work with this method and that’s that I plunge in and experiment even at the early stages of the project. For me, the key thing is to start to get a feel for how it’s all going to come together. This loose experimentation is a great way of approaching that. At some point in the experimentation something clicks and you can see the whole shape or at the very least get a feeling for what it’s all going to look like. The sooner that click happens, the better you can work, because now you are not simply randomly sorting material, you are working towards a picture you have in your head. For me, that’s the biggest benefit of working in the timeline from the very beginning. You’re getting immersed in the shape of the material rather than just its content and the immersion is what sparks the ideas. I’m not invoking some magical thinking here – I’m just talking about a method that’s proven itself time and time again to be the best and fastest way to unlock the doors of the edit.

Another benefit is that although one would expect this method to make it harder to collaborate, in fact the reverse is the case if each editor is conversant with the technique. You’re handing over vastly more useful creative edit information with this process than you could by any other means. What you’re effectively doing is “showing your workings” and not just handing over some versions. It means that the editor taking over from you can easily backtrack through your work and find new stuff and see the ideas that you didn’t end up including in the version(s) that you handed over. It’s an incredibly fast way for the new editor to get up to speed with the project without having to start from scratch by acquainting him or herself with where the useful material can be found.

Even on a more conventional level, I personally would far rather receive string-outs of selects than all the most carefully organized Browser/Bin info you care to throw at me. Obviously if I’m cutting a feature, I want to be able to find 323T14 instantly, but beyond that most basic level, I have no interest in digging through bins or keyword collections or whatever else you might be using, as that’s just going to slow me down.

Freeing yourself of the Browser/Bins

Another observation about this method is how it relates to the NLE interface. When I’m working with my string-outs, which is essentially 90% of the time, I am not ever looking at the Browser/Bins. Accordingly, in Premiere Pro or Final Cut Pro X, I can fully close down the Project/Browser windows/panes and avail myself of the extra screen real estate that gives me, which is not inconsiderable. The consequence of that is to make the timeline experience even more immersive and that’s exactly what I want. I want to be immersed in the details of what I’m doing in the timeline and I have no interest in any other distractions. Conversely, having to keep going back to Bins/Browser means shifting the focus of attention away from my work and breaking the all-important “flow” factor. I just don’t want any distractions from the fundamentally crucial process of moving from one clip to another in a timeline context. As soon as I am dragged away from that, there is a discontinuity in what I am doing.

The edit comes to shape organically

I find that there comes a point, if you work this way, when the subsequence you are working on organically starts to take on the shape of the finished edit and it’s something that happens without you having to consciously make it happen. It’s the method doing the work for you. This means that I never find myself starting a fresh sequence and adding to it from the subsequences and I think that has huge advantages. It reinforces my point that you are editing from the very first moment when you lay all your source material onto one timeline. That process leads without pause or interruption to the final edit through the gradual iterative subtraction.

I talked about how the iterative sifting process lets you see “duplicates” – that is to say, instances where the same idea is repeated in an alternative form – and how it helps you make the choice between the different options. Another aspect of this is that it helps you to identify what is strong and what is not so strong. If I were cutting corporates or skate videos this might be different, but for what I do, I need to be able to isolate the key “moments” in my material and find ways to promote those and make them work as powerfully as possible.

In a completely literal sense, when you’re cutting promos and trailers, you want to create an emotional, visceral connection to the material in the audience. You want to make them laugh or cry, you want to make them hold their breath in anticipation, or gasp in astonishment. You need to know how to craft the moments that will elicit the response you are looking for. I find that this method really helps me identify where those moments are going to come from and how to structure everything around them so as to build them as strongly as possible. The iterative sifting method means you can be very sure of what to go for and in what context it’s going to work the best. In other words, I keep coming back to the realization that this method is doing a lot of the creative work for you in a way that simply won’t happen with the alternatives. Even setting aside the manifest efficiency, it would be worth it for this alone.

There’s a huge amount more that I could say about this process, but I’ll leave it there for now. I’m not saying this method works equally well for all types of projects. It’s perhaps less suited to scripted drama, for instance, but even there it can work effectively with certain modifications. Like every method, every editor wants to tweak it to their own taste and inclinations. The one thing I have found to its advantage above all others is that it almost entirely circumvents the problem of “what shot do I lay down next?” Time and again I’ve seen Browser/Bin-focused editors get stuck in exactly this way and it can be a very real block.

– Simon Ubsdell

For an expanded version of this concept, check out Simon’s in-depth article at Creative COW. Click here to read it.

For more creative editing tips, click on this link for Film Editor Techniques.

©2017 Simon Ubsdell, Oliver Peters

Tips for Production Success – Part 2

Picking up from my last post (part 1), here are 10 more tips to help you plan for a successful production.

Create a plan and work it. Being a successful filmmaker – that is, making a living at it – is more than just producing a single film. Such projects almost never go beyond the festival circuit, even if you do think it is the “great American film”. An indie producer may work on a project for about four years, from the time they start planning and raising the funds – through production and post – until real distribution starts. Therefore, the better approach is to start small and work your way up. Start with a manageable project or film with a modest budget and then get it done on time and on budget. If that’s a success, then start the next one – a bit bigger and more ambitious. If it works, rinse and repeat. If you can make that work, then you can call yourself a filmmaker.

Budget. I have a whole post on this subject, but in a nutshell, an indie film that doesn’t involve union talent or big special effects will likely cost close to one million dollars, all in. You can certainly get by on less. I’ve cut films that were produced for under $150,000 and one even under $50,000, but that means calling in a lot of favors and having many folks working for free or on deferment. You can pull that off one time, but it’s not a way to build a business, because you can’t go back to those same resources and ask to do it a second time. Learn how to raise the money to do it right and proceed from there.

Contingencies at the end. Intelligent budgeting means leaving a bit for the end. A number of films that I’ve cut had to do reshoots or spend extra days to shoot more inserts, establishing shots, etc. Plan for this to happen and make sure you’ve protected these items in the budget. You’ll need them.

Own vs. rent. Some producers see their film projects as a way to buy gear. That may or may not make sense. If you need a camera and can otherwise make money with it, then buy it. Or if you can buy it, use it, and then resell it to come out ahead – by all means follow that path. But if gear ownership is not your thing and if you have no other production plans for the gear after that one project, then it will most likely be a better deal to work out rentals. After all, you’re still going to need a lot of extras to round out the package.

Shooting ratios. In the early 90s I worked on the post of five half-hour and hourlong episodic TV series that were shot on 35mm film. Back then shooting ratios were pretty tight. A half-hour episode is about 20-22 minutes of content, excluding commercials, bumpers, open, and credits. An hourlong episode is about 44-46 minutes of program content. Depending on the production, these were shot in three to five days and exposed between 36,000 and 50,000 feet of negative. Therefore, a typical day meant 50-60 minutes of transferred “dailies” to edit from – or no more than five hours of source footage, depending on the series. This would put them close to the ideal mark (on average) of approximately a 10:1 shooting ratio.
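That film-era arithmetic is easy to verify. The back-of-the-envelope sketch below assumes 4-perf 35mm, which runs through the camera at 16 frames per foot – 90 feet per minute at 24fps:

```python
FRAMES_PER_FOOT = 16   # 4-perf 35mm
FPS = 24               # sound speed

def feet_to_minutes(feet):
    """Running time, in minutes, of a length of exposed negative."""
    return feet * FRAMES_PER_FOOT / FPS / 60

def shooting_ratio(feet_exposed, program_minutes):
    """Minutes of source footage shot per finished program minute."""
    return feet_to_minutes(feet_exposed) / program_minutes

# An hourlong episode (~44 minutes of content) shot on 36,000-50,000 feet:
print(round(shooting_ratio(36_000, 44), 1))  # → 9.1
print(round(shooting_ratio(50_000, 44), 1))  # → 12.6
```

Those results bracket the roughly 10:1 average described above – a useful yardstick when you budget your own coverage.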

Today, digital cameras make life easier and with the propensity to shoot two or more cameras on a regular basis, this means the same projects today might have conservatively generated more than 10 hours of source footage for each episode. This impacts post tremendously – especially if deadline is a factor. As a new producer, you should strive to control these ratios and stay within the goal of a 10:1 ratio (or lower).

Block and rehearse. The more a scene is buttoned down, the fewer takes you’ll need, which leads to a tighter shooting ratio. This means rehearse a scene and make sure the camera work is properly blocked. Don’t wing it! Once everything is ready, shoot it. Odds are you’ll get it in two to three takes instead of the five or more that might otherwise be required.

Control the actors. Unless there’s a valid reason to let your actors improvise, make sure the acting is consistent. That is, lines are read in the same order, props are handled at the same point, and actors hit their marks in every take. If you stray from that discipline, the editorial time becomes longer. If allowed to engage in too much freewheeling improvisation, actors may inadvertently paint you into a corner. To avoid that outcome, control it from the start.

Visual effects planning. Most films don’t require special effects, but there are often “invisible” fixes that can be created through visual effects. For example, combining elements of two takes or adding items to a set. A recent romantic drama I post-supervised used 76 effects shots of one type or another. If this is something that helps the project, make sure to plan for it from the outset. Adobe After Effects is the ubiquitous tool that makes such effects affordable. The results are great and there are plenty of talented designers who can assist you within almost any budget range.

Multiple cameras vs. single camera vs. 4K. Some producers like the idea of shooting interviews (especially two-shots) in 4K (for a 1080 finish) and then slice out the frame they want. I contend that often 4K presents focus issues, due to the larger sensors used in these cameras. In addition, the optics of slicing a region out of a 4K image are different than using another camera or zooming in to reframe the shot. As a result, the look that you get isn’t “quite right”. Naturally, it also adds one more component that the editor has to deal with – reframing each and every shot.

Conversely, when shooting a locked-off interview with one person on-camera, using two cameras makes the edit ideal. One camera might be placed face-on towards the speaker and the other from a side angle. This makes cutting between the camera angles visually more exciting and makes editing without visible jump cuts easier.

In dramatic productions, many new directors want to emulate the “big boys” and also shoot with two or more cameras for every scene. Unfortunately this isn’t always productive, because the lighting is compromised, one camera is often in an awkward position with poor framing, or even worse, often the main camera blocks the secondary camera. At best, you might get 25% usability out of this second camera. A better plan is to shoot in a traditional single-camera style. Move the camera around for different angles. Tweak the lighting to optimize the look and run the scene again for that view.

The script is too long. An indie film script is generally around 100 pages with 95-120 scenes. The film gets shot in 20-30 days and takes about 10-15 weeks to edit. If your script is inordinately long and takes many more days to shoot, then it will also take many more days to edit. The result will usually be a cut that is too long. The acceptable “standard” for most films is 90-100 minutes. If you clock in at three hours, then obviously a lot of slashing has to occur. You can lose 10-15% (maybe) through trimming the fat, but a reduction of 25-40% (or more) means you are cutting meat and bone. Scenes have to be lost, the story has to be re-arranged, or even more drastic solutions found. A careful reading of the script – conceiving of it as the finished film – can head off issues before production ever starts. Losing a scene before you shoot it can save time and money on a large scale. So analyze your script carefully.
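The trimming math is worth doing before the shoot. A tiny sketch (the numbers are this article’s example, not data from any real project):

```python
def reduction_needed(cut_minutes, target_minutes):
    """Fraction of the current cut that must be removed to hit the target length."""
    return (cut_minutes - target_minutes) / cut_minutes

# A three-hour first cut aiming for a 100-minute release print:
print(f"{reduction_needed(180, 100):.0%}")  # → 44%
```

Anything much beyond the 10-15% you can gain by trimming fat means whole scenes must go – which is far cheaper to discover at the script stage than in the cutting room.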

Click here for Part 1.

©2015 Oliver Peters