Glass – Editing an Unconventional Trilogy

Writer/director M. Night Shyamalan has become synonymous with films about the supernatural that end with a twist. He first gained broad attention with The Sixth Sense and in the two decades since, has written, produced, and directed a range of large and small films. In recent years, he has taken a more independent route to filmmaking, working with lower budgets and keeping close control of production and post.

His latest endeavor, Glass, is also the third film in what has become an unconventional trilogy, which began with Unbreakable, released 19 years ago. 2017’s Split was the second in the series. Glass brings together the three principal characters from the previous two films – David Dunn/The Overseer (Bruce Willis), Elijah Price/Mr. Glass (Samuel L. Jackson), and Kevin Wendell Crumb (James McAvoy), who harbors 23 distinct personalities.

Shyamalan likes to stay close to his northeastern home base for production and post, which has afforded an interesting opportunity to young talent. One of those is Luke Ciarrocchi, who edited the final two installments of the trilogy, Split and Glass. This is only his third film in the editor’s chair. 2015’s The Visit was his first. Working with Shyamalan has provided him with a unique opportunity, but also a master class in filmmaking. I recently spoke with Luke Ciarrocchi about his experience editing Glass.

_________________________________________________

[OP] You’ve had the enviable opportunity to start your editing career at a pretty high level. Please tell me a bit about the road to this point.

[LC] I live in a suburb of Philadelphia and studied film at Temple University. My first job after college was as a production assistant to the editing team on The Happening with editor Conrad Buff (The Huntsman: Winter’s War, Rise of the Planet of the Apes, The Last Airbender) and his first assistant Carole Kenneally. When the production ended, I got a job cutting local market commercials. It wasn’t glamorous stuff, but it is where I got my first experience working on Avid [Media Composer] and really started to develop my technical knowledge. I was doing that for about seven months when The Last Airbender came to town.

I was hired as an apprentice editor by the same editing crew that I had worked with on The Happening. It was on that film that I started to get onto Night’s radar. I was probably the first Philly local to break into his editing team. There’s a very solid and talented group of local production crew in Philly, but I think I was the first local to join the Editors Guild and work in post on one of his films. Before that, all of the editing crew would come from LA or New York. So that was a big ‘foot in the door’ moment, getting that opportunity from Conrad and Carole.  I learned a lot on Airbender. It was a big studio visual effects film, so it was a great experience to see that up close – just a really exciting time for me.

During development of After Earth, even before preproduction began, Night asked me to build a type of pre-vis animatic from the storyboards for all the action sequences. I would take these drawings into After Effects and cut them up into moveable pieces, animate them, then cut them together into a scene in Avid. I was putting in music and sound effects, subtitles for the dialogue, and really taking them to a pretty serious and informative level. I remember animating the pupils on one of the drawings at one point to convey fear (laughs). We did this for a few months. I would do a cut, Night would give me notes, maybe the storyboard artist would create a new shot, and I would do a recut. That was my first back-and-forth creative experience with him.

Once the film began to shoot, I joined the editing team as an assistant editor. At the end of post – during crunch time – I got the opportunity to jump in and cut some actual scenes with Night. It was surreal. I remember sitting in the editing room auditioning cuts for him and him giving notes and all the while I’m just repeating in my head, ‘Don’t mess this up, don’t mess this up.’ I feel like we had a very natural rapport though, besides the obvious nervousness that would come from a situation like that. We really worked well together from the start. We both had a strong desire to dig deep and really analyze things, to not leave anything on the table. But at the same time we also had the ability to laugh at things and break the seriousness when we needed to. We have a similar sense of humor that to this day I think helps us navigate the more stressful days in the editing room. Personality plays a big role in the editing room. Maybe more so than experience. I may owe my career to my immature sense of humor. I’m not sure.

After that, I assisted on some other films passing through Philly and just kept myself busy. Then I got a call from Night’s assistant to come by to talk about his next film, The Visit. I got there and he handed me a script and told me he wanted me to be the sole editor on it. Looking back, it seems crazy, because he was self-financing the film. He had a lot on the line and he could have gotten any editor, but he saw something. So that was the first of the three films I would cut for him. The odds have to be one-in-a-million for that to pan out the way that it did in the suburbs of Philly. Right place, right time, right people. It’s a lot of luck, but when you find yourself in that situation, you just have to keep telling yourself, ‘Don’t mess this up.’

[OP] These three films, including Glass, are being considered a trilogy, even though they span about two decades. How do they tie together, not just in story, but also style?

[LC] I think it’s fair to call Glass the final installment of a trilogy – but definitely an untraditional one. First Unbreakable, then 19 years later Split, and now Glass. They’re all in the same universe and hopefully it feels like a satisfying philosophical arc through the three. The tone of the films is ingrained in the scripts and footage. Glass is sort of a mash-up of what Unbreakable was and what Split was. Unbreakable was a drama that then revealed itself as a comic book origin story. Split was more of a thriller – even horror at times – that then revealed itself as part of this Unbreakable comic book universe. Glass is definitely a hybrid of tone and genre representing the first two films. 

[OP] Did you do research into Unbreakable to study its style?

[LC] I didn’t have to, because Unbreakable has been one of my favorite films since I was 18. It’s just a beautiful film. I loved that in the end it wasn’t just about David Dunn accepting who he was, but also Elijah finding his place in the world only by committing these terrible crimes to discover his opposite. He had to become a villain to find the hero. It’s such a cool idea and for me, very rewatchable. The end never gets old to me. So I knew that film very, very well. 

[OP] Please walk me through your schedule for post-production.

[LC] We started shooting in October of 2017 and shot for about two months. I was doing my assembly during that time and the first week of December. Then Night joined me and we started the director’s cut. The way that Night has set up these last three films is with a very light post crew. It’s just my first assistant, Kathryn Cates, and me set up at Night’s offices here in the suburbs of Philadelphia with two Avids. We had a schedule that we were aiming for, but the release date was over a year out, so there was wiggle room if it was needed.

Night’s doing this in a very unconventional way. He’s self-financing, so we didn’t need to go into a phase of a studio cut. After his director’s cut, we would go into a screening phase – first just for close crew, then more of a friends-and-family situation. Eventually we get to a general audience screening. We’re working and addressing notes from these screenings, and there isn’t an unbearable amount of pressure to lock it up before we’re happy. 

[OP] I understand that your first cut was about 3 1/2 hours long. It must take a lot of trimming and tweaking to get down to the release length of 129 minutes. What sort of things did you do to cut down the running time from that initial cut?

[LC] One of our obstacles throughout post was that initial length. You’re trying to get to the length that the film wants to be without gutting it in the process. You don’t want to overcut as much as you don’t want to undercut. We had a similar situation on Split, which was a long assembly as well. The good news is that there’s a lot of great stuff to work with and choose from.

We approach it very delicately. After each screening we trimmed a little and carefully pulled things out, so each screening was incrementally shorter, but never dramatically so. Sometimes you will learn from a screening that you pulled the wrong thing out and it needed to go back in. Ultimately no major storyline was cut out of Glass. It was really just finding where we are saying the same thing twice, but differently – diagnosing which one of those versions is the more impactful one – then cutting the others. And so, we just go like that. Pass after pass. Reel by reel.

An interesting thing I’ve found is that when you are repeating things, you will often feel that the second time is the offensive moment of that information and the one to remove, because you’ve heard it once before. But the truth is that the first telling of that information is more often what you want to get rid of. By taking away the first one, you are saving something for later. Once you remove something earlier, it becomes an elevated scene, because you aren’t giving away so much up front.

[OP] What is your approach to getting started when you are first confronted with the production footage? What is your editing workflow like?

[LC] I’m pretty much paper-based. I have all of the script supervisor’s notes. Night is very vocal on set about what he likes and doesn’t like, and Charlie Rowe, our script supervisor, is very good at catching those thoughts. On top of that, Night still does dailies each day – either at lunch or the end of the day. As a crew, we get together wherever we are and screen all of the previous day’s footage, including B-roll. I will sit next to Night with a sheet that has all of the takes and set-ups with descriptions and I’ll take notes both on Night’s reactions, as well as my own feelings towards the footage. 

With that information, I’ll start an assembly to construct the scene in a very rough fashion without getting caught up in the small details of every edit. It starts to bring the shape of the scene out for me. I can see where the peaks and valleys are. Once I have a clearer picture of the scene and its intention, I’ll go back through my detailed notes – there’s a great look for this, there’s a great reading for that – and I find where those can fit in and whether they serve the edit. You might have a great reaction to something, but the scene might not want that to be on-camera. So first I find the bones of the scene and then I dress it up. 

Night gets a lot of range from the actors from the first take to the last take. It is sometimes so vast that if you built a film out of only the last takes, it would be a dramatically different movie than if you only used take one. With each take he just pushes the performances further. So he provides you with a lot of control over how animated the scene is going to be. In Glass, Elijah is an eccentric driven by a strong ideology, so in the first take you get the subdued, calculated villain version of him, but by the last take it’s the carnival barker version. The madman.

[OP] Do you get a sense when screening the dailies of which way Night wants to go with a scene?

[LC] Yes, he’ll definitely indicate a leaning and we can boil it down to a couple of selects. I’ll initially cut a scene with the takes that spoke to him the most during the dailies and never cut anything out ahead of time. He’ll see the first cuts as they were scripted, storyboarded, and shot. I’ll also experiment with a different take or approach if it seems valid and have that in my back pocket. He’s pretty quick to acknowledge that he might have liked a raw take on set and in dailies, but it doesn’t work as well when cut together into a scene. So then we’ll address that. 

[OP] As an Avid editor, have you used Media Composer’s script integration features, like ScriptSync?

[LC] I just had my first experience with it on a Netflix show. I came on later in their post, so the show had already been set up for ScriptSync. It was very cool and helpful to be able to jump in and quickly compare the different takes for the reading of a line. It’s a great ‘late in the game’ tool. Maybe you have a great take, but just one word is bobbled and you’d like to find a replacement for just that word. Or the emotion of a key word isn’t exactly what you want. It could be a time-saver for a lot of that kind of polishing work.

[OP] What takeaways can you share from your experiences working with M. Night Shyamalan?

[LC] Night works in the room with you every day. He doesn’t just check in once a week or something like that. It’s really nice to have that other person there. I feel like oftentimes the best stuff comes from discussing it and talking it through. He loves to deconstruct things and figure out the ‘why’. Why does this work and that doesn’t? I enjoy that as well. After three films of doing that, you learn a lot. You’re not aware of it, but you’re building a toolkit. These tools and choices start to become second nature.

On the Netflix show that I just did, there were times where I didn’t have anyone else in the room for long stretches and I started to hear more clearly those things that have become inherent in my process. I started to take notice of what had become my second nature – what the last decade had produced. Editing is something you just have to do to learn. You can’t just read about it or study a great film. You have to do it, do it again, and struggle with it. You need to mess it up to get it right.

________________________________________________

This interview is going online after Glass has scored its third consecutive weekend in the number one box office slot. Split was also number one for three weeks in a row. That’s a pretty impressive feat and fitting for the final installment of a trilogy.

Be sure to also check out Steve Hullfish’s AOTC interview with Luke Ciarrocchi here.

©2019 Oliver Peters


Editing with the 2018 Mac mini

It’s hard to pigeonhole the new Mac mini into any specific market, since the size and modular design fit the needs of many different users. Data centers, servers, and Compressor encoding clusters come to mind, but it’s also ideal for many location productions, such as DIT work, stage lighting and sound control. If you are replacing an aging computer, already own the other peripherals, and prefer the macOS ecosystem, then the Mac mini may be enticing.

The 2018 Mac mini features a familiar form factor that’s been revamped with a new thermal architecture, bigger fans, and redesigned power supply. It features eighth-generation Intel Core quad-core and six-core processor options, RAM that tops out at 64GB, and flash storage (SSD) up to 2TB. Connectivity includes four Thunderbolt 3 / USB-C ports (two internal buses), HDMI 2.0, two standard USB 3.1 ports, Bluetooth, wi-fi, a headphone jack, and an ethernet port. The latter can be bumped up to 10GigE in build-to-order machines. RAM is technically upgradeable, but Apple recommends Apple-certified service centers and not user replacement. Apple loaned me a six-core 3.2 GHz i7 model with 32GB of RAM and a 1TB SSD. Mac minis start at $799, but this configuration would cost you $2,499.

Getting started

Many have asked online, “Why is the only GPU choice an Intel UHD Graphics 630?” We are now in the era of external GPU devices and Apple has clearly designed the mini with that in mind. There are many applications where a powerful GPU simply isn’t necessary – standard desktop computing, like surfing the web, home accounting, and writing. Most pro audio, graphics and photography work, and creative editing that isn’t effects-intensive will also run just fine on this Mac. If you need or want more GPU horsepower, then add an eGPU to the mix. (An upcoming review will assess the performance of the Mac mini together with a Blackmagic eGPU Pro.)

When you first unbox the Mac you will need to figure out how to connect an external display. Clear options include a Thunderbolt 3 display, like the LG UltraFine 5K on Apple’s website, or a low-end display that connects over HDMI. However, if you already own a monitor that connects via Mini DisplayPort, DisplayPort, VGA, or DVI, then you’ll need to purchase a Thunderbolt 3 adapter specific to that connection standard. Other possibilities include connecting your monitor through an eGPU or a Thunderbolt dock that has the correct ports. I tested both CalDigit and OWC docks with 27″ Apple Retina and Dell displays and everything worked fine. A minor issue, but something to consider before you can even start using your Mac mini.

I put the Mac mini through its paces with Premiere Pro, Final Cut Pro X, DaVinci Resolve, and Pixelmator Pro to cover editing, color correction, and photo manipulation. Although I didn’t test the Mac mini extensively with Logic Pro X, this computer would also be a good choice for sound design, mixing, and music creation. My initial impressions are that this is a very capable computer for creative pros and that the Intel GPU is more than adequate for most tasks.

Real-world testing

I’ve been testing the Mac mini with an episode from a real production that I work on, which is a nine-minute-long travel segment edited in Premiere Pro and graded in Resolve. I also brought the Premiere sequence into FCPX for comparison testing. To me that’s more telling than any artificial benchmark score. The native media sources are 4K in a 1080p/23.98 timeline. Footage covers a mix of cameras and codecs, including ProResHQ, XAVC, H.264, and H.265. Sequence clip effects include resizing, speed changes, Lumetri color correction (or FCPX’s color tools), plus an audio mix. In short, everything that the offline/creative editor used. The Resolve grade consists of 145 clips averaging three to five nodes on every clip. To keep my render tests consistent across several machines, all media and project files were loaded to an external LaCie Rugged portable drive connected over USB-3.

ProRes and H.264 exports from each application were used to compare the Mac mini against two other Macs – my mid-2014 Retina MacBook Pro (the last series using Nvidia GPU cards) and a current 10-core iMac Pro. Premiere Pro and Resolve rendering was set to OpenCL, an open GPU standard, which still seems to yield the fastest results for these apps. Final Cut Pro X uses Metal, Apple’s method to leverage the combined power of the GPU and CPU.

Naturally the iMac Pro posted the fastest times, beating the others by half or more. The mini’s times – using only the Intel GPU – were actually similar to the older MacBook Pro, though noticeably faster with Resolve. The general editing experience was good, but video was a bit “sticky” when scrubbing/skimming through 4K media – thanks to the slow external drive. Once I moved the media onto the Mac mini’s blazingly fast SSD (around 2800 MB/s read-write speeds), the result was a super-responsive editing experience. I don’t recommend working with your raw camera footage on the internal drive, so if you edit large projects with a lot of media, then adding a fast, external Thunderbolt 3 drive or RAID array is the way to go. The 1TB size of the internal flash drive is the sweet spot for most editors. Companies with ethernet-based NAS shared storage systems will want to get the 10GigE upgrade when purchasing a Mac mini if they intend to edit with it.

That’s not to say the Mac mini handles everything smoothly without extra GPU power. There are some GPU-accelerated effects that will definitely cause stuttering playback and dropped frames. Blurs are an obvious example. When I tested some blurs, playback generally held up until I added a mask to the effect in Premiere. But remember, I’m working with 4K media in native codecs. As a rule, Premiere Pro simply doesn’t handle this type of content as fluidly as Final Cut Pro X. I was able to push FCPX a bit further without issues than I could Premiere. And, of course, if you want to use it, FCPX can aid the situation with background rendering.

Speaking as an editor and colorist, I’ve been happy with how the Mac mini performs. While not the most powerful Mac made, the mini is still a robust creative tool. Do you edit commercials, corporate video, or entertainment programming? If so, then there’s very little you’ll find issue with in daily operation. The mini presents a good price/performance bargain for editors, musicians, sound designers, graphic artists, photographers, and others. That’s even more the case if you already own the rest of the package.

I think it’s worth making a cost comparison before I close. You can certainly beef up the Mac mini quite a bit; however, in doing so, you should compare the other Mac options before buying. For example, let’s say you completely option out the mini and then add all the Apple store peripherals, including Apple keyboard/mouse, the LG 5K display, and a BMD eGPU Pro. That total would run $6,945. Naturally those items from Apple are going to cost a bit more than third-party options. But to compare, the equivalent package in an eight-core iMac Pro with the base GPU, 64GB RAM, and a 2TB SSD would run $6,599. That’s the same Vega 56 GPU as in the eGPU Pro, plus you have an eight-core Xeon instead of a Core i7 CPU. Clearly the iMac Pro would be the better choice, because you aren’t buying three enclosures, cooling systems, and power supplies. But if you don’t need that horsepower, already own some of the peripherals, or are better served by the modular design of the Mac mini, then the calculation shifts.

When I work on my own, it’s either with the MacBook Pro or an aging Mac Pro tower. My home editing demands are not as taxing as when I work freelance at other shops. I certainly would have no qualms about shifting projects like those to a Mac mini as a replacement computer, because it can deliver a reliable level of performance without breaking the bank.

Originally written for RedShark News.

For more on the Mac mini and editing, check out this coverage at FCP.co.

©2019 Oliver Peters

The State of the NLE 2019

It’s a new year, but that doesn’t mean the editing software landscape will change drastically in the coming months. For all intents and purposes, professional editing options boil down to four choices: Avid Media Composer, Adobe Premiere Pro, Apple Final Cut Pro X, and Blackmagic Design DaVinci Resolve. Yes, I know Vegas, Lightworks, Edius, and others are still out there, but those are far off on the radar by comparison (no offense meant to any happy practitioners of these tools). Naturally, since blogs are mainly about opinions, everything I say from here on is purely conjecture, although it’s informed by my own experiences with these tools and by knowing many of the players involved on the respective product design and management teams – past and present.

Avid continues to be the go-to NLE in the feature film and episodic television world. That’s certainly a niche, but it’s a niche that determines the tools developed by designers for the broader scope of video editing. Apple officially noted two million users for Final Cut Pro X last year and I’m sure it’s likely to be at least 2.5M by now. Adobe claims Premiere Pro to be the most widely used NLE by a large margin. I have no reason to doubt that statement, but I have also never seen any actual stats. I’m sure through the Creative Cloud subscription mechanism Adobe not only knows how many Premiere Pro installations have been downloaded, but probably has a good idea as to actual usage (as opposed to simply downloading the software). Bringing up the rear in this quartet is Resolve. While certainly a dominant color correction application, I don’t yet see it as a key player in the creative editing (as opposed to finishing) space. With the stage set, let’s take a closer look.

Avid Media Composer

Editors who have moved away from Media Composer, or who have never used it, like to throw shade on Avid and its marquee product. But loyal users – who include some of the biggest names in film editing – stick by it due in part to familiarity, but also its collaborative features and overall stability. As a result, the development pace and rate of change are somewhat slow compared with the other three. In spite of that, Avid is currently on a schedule of a solid, incremental update nearly every month – each of which chips away at a long feature request list. The most recent one dropped on December 31st. Making significant changes without destroying the things that people love is a difficult task. Development pace is also hindered by the fact that each one of these developers is also chasing changes in the operating system, particularly Apple and macOS. Sometimes you get the feeling that it’s two steps forward, one step back.

As editors, we focus on Media Composer, but Avid is a much bigger company than just that, with its fingers in sound, broadcast, storage, cloud, and media management. If you are a Pro Tools user, you are just as concerned about Avid’s commitment to you as editors are. Like any large company, Avid must advance not just a single core product, but its ecosystem of products. Yet it still must advance the features in these products, because that’s what gets users’ attention. In an effort to improve its attraction to new users, Avid has introduced subscription plans and free versions to make it easier to get started. They now cover editing and sound needs with a lower cost-of-entry than ever before.

I started nonlinear editing with Avid and it will always hold a spot in my heart. Truth be told, I use it much less these days. However, I still maintain current versions for the occasional project need plus compatibility with incoming projects. I often find that Media Composer is the single best NLE for certain tasks, mainly because of Avid’s legacy with broadcast. This includes issues like proper treatment of interlaced media and closed captioning. So for many reasons, I don’t see Avid going away any time soon, but whether or not they can grow their base remains an unknown. Fortunately many film and media schools emphasize Avid when they teach editing. If you know Media Composer, it’s an easy jump to any other editing tool.

Adobe Premiere Pro CC

The most widely used NLE? At least from what I can see around me, it’s the most used NLE in my market, including individual editors, corporate media departments, and broadcasters. Its attraction comes from a) the versatility in editing with a wide range of native media formats, and b) the similarity to – and viable replacement for – Final Cut Pro “legacy”. It picked up steam partly as a reaction to the Final Cut Pro X roll-out and users have generally been happy with that choice. While the shift by Adobe to a pure subscription model has been a roadblock for some (who stopped at CS6), it’s also been an advantage for others. I handle the software updates at a production company with nine edit systems and between the Adobe Creative Cloud and Apple Mac App Store applications, upgrades have never been easier.

A big criticism of Adobe has been Premiere’s stability. Of course, that’s based on forum reads, where people who have had problems will pipe up. Rarely does anyone ever post how uneventful their experience has been. I personally don’t find Premiere Pro to be any less stable than any other NLE or application. Nonetheless, working with a mix of oddball native media will certainly tax your system. Avid and Apple get around this by pushing optimized and proxy media. As such, editors reap the benefits of stability. And the same is true with Premiere. Working with consistent, optimized media formats (transcoded in advance) – or working with Adobe’s own proxies – results in a more stable project and a better editing experience.

Avid Media Composer is the dominant editing tool in major markets, but mainly in the long-form entertainment media space. Many of the top trailer and commercial edit shops in those same markets use Premiere Pro. Again, that goes back to the FCP7-to-Premiere Pro shift. Many of these companies had been using the old Final Cut rather than Media Composer. Since some of these top editors also cut features and documentaries, you’ll often see them use Premiere on the features that they cut, too. Once you get below the top tier of studio films and larger broadcast network TV shows, Premiere Pro has a much wider representation. That certainly is good news for Adobe and something for Avid to worry about.

Another criticism is that of Adobe’s development pace. Some users believed that moving to a subscription model would speed the development pace of new versions – independent of annual or semi-annual cycles. Yet cycles still persist – much to the disappointment of those users. This gets down to how software is actually developed, keeping up with OS changes, and to some degree, marketing cycles. For example, if there’s a big Photoshop update, then it’s possible that the marketing “wow” value of a large Premiere Pro update might be overshadowed and needs to wait. Not ideal, but that’s the way it is.

Just because it’s possible, doesn’t mean that users really want to constantly deal with automatic software updates that they have to keep track of. This is especially true with After Effects and Premiere Pro, where old project files often have to be updated once you update the application. And those updates are not backwards compatible. Personally, I’m happy to restrict that need to a couple of times a year.

Users have the fear that a manufacturer is going to end-of-life their favorite application at some point. For video users, this was made all too apparent by Apple and FCPX. Neither Apple nor Adobe has been exempt from killing off products that no longer fit their plans. Markets and user demands shift. Photography is an obvious example here. In recent years, smart phones have become the dominant photographic device, which has enabled cloud-syncing and storage of photos. Adobe and Apple have both shifted the focus for their photo products accordingly. If you follow any of the photo blogs, you’ll know there’s some concern that Adobe Lightroom Classic (the desktop version) will eventually give way completely to Lightroom CC (the cloud version). When a company names something as “classic”, you have to wonder how long it will be supported.

If we apply that logic to Premiere Pro, then the new Adobe Rush comes to mind. Rush is a simpler, nimbler, cross-platform/cross-device NLE targeted at users who produce video starting with their smart phone or tablet. Since there’s also a desktop version, one could certainly surmise that in the future Rush might replace Premiere Pro in the same way that FCPX replaced FCP7. Personally, I don’t think that will happen any time soon. Adobe treats certain software as core products. Photoshop, Illustrator, and After Effects are such products. Premiere Pro may or may not be viewed that way internally, but certainly more so now than ever in the past. Premiere Pro is being positioned as a “hub” application with connections to companion products, like Prelude and Audition. For now, Rush is simply an interesting offshoot to address a burgeoning market. It’s Adobe’s second NLE, not a replacement. But time will tell.

Apple Final Cut Pro X

Apple released Final Cut Pro X in the summer of 2011 – going on eight years now. It’s a versatile, professional tool that has improved greatly since that 2011 launch and gained a large and loyal fan base. Many FCPX users are also Premiere Pro users and the other way around. It can be used to cut nearly any type of project, but the interface design is different from the others, making it an acquired taste. Being a Mac-only product and developed within the same company that makes the hardware and OS, FCPX is optimized to run on Macs more so than any cross-platform product can be. For example, the fluidity of dealing with 4K ProRes media on even older Macs surpasses that of any other NLE.

Prognosticating Apple’s future plans is a fool’s errand. Some guesses have put the estimated lifespan of FCPX at 10 years, based in part on the lifespan of FCP “legacy”. I have no idea whether that’s true or not. Often when I read interviews with key Apple management (as well as off-the-record, casual discussions I’ve had with people I know on the inside), it seems like a company that actually has less of a concrete plan when it comes to “pro” users. Instead, it often appears to approach them with an attitude of “let’s throw something against the wall and see what sticks”. The 2013 Mac Pro is a striking example of this. It was clearly innovative and a stellar exhibit for Apple’s “think different” mantra. Yet it was a product that obviously was not designed by actually speaking with that product’s target user. Apple’s current “shunning” of Nvidia hardware seems like another example.

One has to ask whether a company so dominated by the iPhone is still agile enough to respond to the niche market of professional video editors. While Apple products (hardware and software) still appeal to creatives and video professionals, it seems like the focus with FCPX is towards the much broader sphere of pro video. Not TV shows and feature films (although that’s great when it comes) – or even high-end commercials and trailers – but rather the world of streaming channels, social media influencers, and traditional publishers who have shifted to an online media presence from a print legacy. These segments of the market have a broad range of needs. After all, so-called “YouTube stars” shoot with everything from low-end cameras and smart phones all the way up to Alexas and REDs. Such users are equally professional in their need to deliver a quality product on a timetable and I believe that’s a part of the market that Apple seeks to address with FCPX.

If you are in the world of the more traditional post facility or production company, then those users listed above may be market segments that you don’t see or possibly even look down upon. I would theorize that among the more traditional sectors, FCPX may have largely made the inroads that it’s going to. Its use in films and TV shows (with the exception of certain high-profile, international examples) doesn’t seem to be growing, but I could be wrong. Maybe the marketing is just behind or it no longer has PR value. Regardless, I do see FCPX as continuing strong as a product. Even if it’s not your primary tool, it should be something in your toolkit. Apple’s moves to open up ProRes encoding and offering LumaForge and Blackmagic eGPU products in their online store are further examples that the pro customer (in whatever way you define “pro”) continues to have value to them. That’s a good thing for our industry.

Blackmagic Design DaVinci Resolve

No one seems to match the development pace of Blackmagic Design. DaVinci Resolve underwent a wholesale transformation from a tool that was mainly a high-end color corrector into an all-purpose editing application. Add to this the fact that Blackmagic has acquired a number of companies whose tools have been modernized and integrated into Resolve. Blackmagic now offers a post-production solution with some similarities to FCPX while retaining a traditional, track-based interface. It includes modes for advanced audio post (Fairlight) and visual effects (Fusion) that have been adapted from those acquisitions. Unlike past all-in-one applications, Resolve’s modal pages retain the design and workflow specific to the task at hand, rather than making them fit into the editing application’s interface design. All of this in very short order and across three operating systems, thus making their pace the envy of the industry.

But a fast development pace doesn’t always translate into a winning product. In my experience each version update has been relatively solid. There are four ways to get Resolve (free and paid, Mac App Store and reseller). That makes it a no-brainer for anyone starting out in video editing, but who doesn’t have the specific requirement for one application over another. I have to wonder though, how many new users go deep into the product. If you only edit, there’s no real need to tap into the Fusion, Fairlight, or color correction pages. Do Resolve editors want to finish audio in Fairlight or would they rather hand off the audio post and mix to a specialist who will probably be using Pro Tools? The nice thing about Resolve is that you can go as deep as you like – or not – depending on your mindset, capabilities, and needs.

On the other hand, is the all-in-one approach better than the alternatives: Media Composer/Pro Tools, Premiere Pro/After Effects/Audition, or Final Cut Pro X/Motion/Logic Pro X? I don’t mean for the user, but rather the developer. Does the all-in-one solution give you the best product? The standalone version of Fusion is more full-featured than the Fusion page in Resolve. Fusion users are rightly concerned that the standalone will go away, leaving them with a smaller subset of those tools. I would argue that there are already unnecessary overlaps in effects and features between the pages. So are you really getting the best editor or is it being compromised by the all-in-one approach? I don’t know the answer to these questions. Resolve for me is a good color correction/grading application that can also work for my finishing needs (although I still prefer to edit in something else and roundtrip to/from Resolve). It’s also a great option for the casual editor who wants a free tool. Yet in spite of all its benefits, I believe Resolve will still be a distant fourth in the NLE world, at least for the next year.

The good news is that there are four great editing options in the lead and even more coming from behind. There are no bad choices and with a lower cost than ever, there’s no reason to limit your knowledge to only one. After all, the products that are on top now may be gone in a decade. So broaden your knowledge and define your skills by your craft – not your tools!

©2019 Oliver Peters

Edit Collaboration and Best Practices

There are many workflows that involve collaboration, with multiple editors and designers working on the same large project or group of projects. Let me say up front that if you want the best possible collaborative experience with multiple editors, then work with Avid Media Composer. Full stop. I have worked both sides of the equation and without a doubt, Media Composer connected to Avid Unity/Isis/Nexis shared storage is simply not matched by Final Cut Pro, Final Cut Pro X, Premiere Pro, or any other editing software/storage/cloud combination. Everything else is a compromise, which is why feature film and TV series editorial teams continue to select Avid solutions as their first choice.

In spite of that, there are many reasons to use other editing tools. I work most of the time in Adobe Premiere Pro CC and freelance at a shop with nine edit workstations connected to shared storage. We work mainly in Adobe Creative Cloud applications and our projects involve a lot of collaboration. Some of these are corporate videos that are frequently edited and revised by different editors. Some are entertainment shows, cut by a small editorial team focused on those shows. For some projects, Premiere Pro is the perfect tool. For others, we have to develop strategies to adapt Premiere to our workflow.

With that in mind, the following are tips and best practices that I’ll share for what has worked best for us over the past three years, while working on large projects with a team of editors. Although it applies to our work with Premiere Pro, the same would generally be true if we were working with Apple Final Cut Pro X instead.

Organization. We organize all projects into a specific folder structure, using a Post Haste template. All media files, like camera footage, audio, graphic elements, etc. go into common folders. Editors know where to look to find things. When new camera footage comes in, files are organized as “dailies” into specific folders by date, camera, and camera card. Non-pro formats, like GoPro and DSLR footage will be batch-renamed to reflect the project, date, and camera card. The objective is to have unique file names for each and every media file.
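To illustrate that renaming step, here is a minimal Python sketch of a dailies renaming pass. The PROJECT/date/card naming scheme and the folder path are hypothetical stand-ins for whatever convention your facility uses – the point is simply to end up with unique, descriptive file names for every clip.

from pathlib import Path

# Hypothetical naming convention: PROJECT_DATE_CARD_####.ext
PROJECT = "MYPROJECT"                                    # hypothetical project code
SHOOT_DATE = "181102"                                    # shoot date (YYMMDD)
CARD_ID = "A001"                                         # camera card identifier
CARD_FOLDER = Path("/Volumes/NAS/Dailies/181102/A001")   # hypothetical dailies folder

def rename_dailies(card_folder: Path, extensions=(".mp4", ".mov")):
    """Rename generic camera clips (e.g. GOPR0001.MP4) to unique, descriptive names."""
    clips = sorted(p for p in card_folder.iterdir() if p.suffix.lower() in extensions)
    for index, clip in enumerate(clips, start=1):
        new_name = f"{PROJECT}_{SHOOT_DATE}_{CARD_ID}_{index:04d}{clip.suffix.lower()}"
        clip.rename(clip.with_name(new_name))
        print(f"{clip.name} -> {new_name}")

if __name__ == "__main__":
    rename_dailies(CARD_FOLDER)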

Optimized, transcoded, or proxy media. Depending on the performance and amount of media, you may need to do some prep work before even starting the edit process. Premiere and FCPX work well with some media formats and not with others. NAS/SAN storage is particularly taxing, especially once you get to resolutions greater than HD. If you want the most fluid experience in a shared workflow, then you will likely need to transcode proxy files from within the application. The reason to stay inside of FCPX or Premiere Pro is so that frame size offsets are properly tracked. Once proxies have been transcoded, it’s a simple matter of toggling between the proxy media (best playback performance) and full-resolution media (best image quality).

On the other hand, if you’d rather stick to full-resolution, native media, then some formats will have to be transcoded into “optimized” media. For instance, GoPro 4K footage is terrible to edit with natively. It should always be transcoded to ProRes or DNxHD before editing, if you don’t want to go the proxy route. This can be done inside or outside of the application and is an easy task with DaVinci Resolve, EditReady, Adobe Media Encoder, or Apple Compressor.
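If you would rather script that kind of batch transcode than run it through one of the GUI tools above, a small Python sketch driving ffmpeg could look like the following. This is just an assumed command-line alternative (not one of the tools named in this article); it assumes ffmpeg is installed, the source and destination folders are hypothetical, and ProRes 422 HQ is used as the optimized format.

import subprocess
from pathlib import Path

SOURCE = Path("/Volumes/NAS/Dailies/GoPro")      # hypothetical native GoPro folder
DEST = Path("/Volumes/NAS/Optimized/GoPro")      # hypothetical optimized media folder

def transcode_to_prores(source: Path, dest: Path):
    """Batch-transcode native clips to ProRes 422 HQ QuickTime files."""
    dest.mkdir(parents=True, exist_ok=True)
    for clip in sorted(source.glob("*.MP4")):
        out = dest / (clip.stem + ".mov")
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
            "-c:a", "pcm_s16le",                     # uncompressed PCM audio
            str(out),
        ], check=True)

if __name__ == "__main__":
    transcode_to_prores(SOURCE, DEST)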

Finally, if you have image sequences from a drone or other source, forget trying to edit from these off of a network. Transcode them right away into some format of master movie file. I find Resolve to be the best tool for this. It’s fast and since these are often camera raw files, you can apply a base grade to them as a starting point for future color correction.

Break up your projects. Depending on the type and size of the job and number of editors working on it, you may choose to work in multiple Premiere projects. There may be a master file where all media is imported and initially organized. Then there may be multiple projects that are offshoots from this for component parts. In a corporate environment, it could be several different videos cut from a single, larger set of media. In a feature film, there could be different Premiere projects for each reel of the film.

Since Premiere Pro employs project locking, any project opened by one editor can also be opened in a read-only mode by other editors. Editors can have multiple Premiere projects open at one time. Thus, it’s simple to bring in elements from one project into another, even while they are all open. This workflow mimics Avid’s bin-locking strategy.

It helps to keep project files streamlined as progress on the production extends over time. You want to keep the number of sequences in any given project small. Periodically duplicate your project(s), strip out old sequences from the current project, and archive the older project files.

As a general note, while working to build the creative story edits – i.e. “offline editing” – you will want to keep plug-in filter effects to a minimum. In fact, it’s generally a good idea to keep the plug-in selection on each system small, so that all workstations in this shared environment are able to have the same set of installed plug-ins. The same is true of fonts.

Finishing stages of post. There are generally two paths in the finishing, aka “online editing” stage. Either all final color correction and assembly of effects is completed within Premiere Pro, or there is a roundtrip through a color correction application, like Blackmagic Design DaVinci Resolve. The same holds true for audio, where a separate sound editor/designer/mixer may handle the finishing touches in Avid Pro Tools.

To accomplish an easy roundtrip with Resolve, create a sequence with all color correction and effects removed. Flatten the video to a single track (if possible), and remove the audio or do a simple stereo mixdown for reference. Ideally, media with mixed frame rates should be addressed as slow motion in the edited sequence. Avoid modifying the frame rate through any sort of “interpret” function within the application. Export an XML or AAF and send that and the associated media to Resolve. When color correction is complete, you can render the entire timeline at the sequence resolution as a single master file.

Conversely, if you want to send it back to Premiere Pro for final assembly and to complete the roundtrip, then render individual clips at their source resolution with handles of one to two seconds. Back in Premiere, re-apply titles, insert completed visual effects, and add any missing plug-in effects.

With audio post, there will be no roundtrip of elements, since the mixer will deliver a completed mixed stereo or surround track. This should be imported into Premiere (or Resolve if the final master is created in Resolve) and married back to the final video sequence. The mixer should also supply “stems” – the individual dialogue, music, and sound effects (D/M/E) submix tracks.

Mastering. Final sequences should be exported in a master file format (ProRes, DNxHD/HR, uncompressed) in at least two forms: 1) master with final mix and titles, and 2) textless submaster with split-track audio (multiple channels containing the D/M/E stems). All of these files are stored within the same job-based folder structure outlined at the top. It is quite common that future revisions will be made using the textless submaster rather than re-opening the full project, or that it may be used as source material in another edit.

Another aspect of finishing the project is media consolidation. This means taking the final sequence and generating a new project file from it. That file contains only those elements from the sequence, along with a copy of the media used, where each file has been trimmed to the portion within the sequence (plus handles). This is the Project Manager function in Premiere Pro. Unfortunately, Premiere is not consistently good at this task. Some formats will be properly trimmed, while others will be copied in their entirety. That’s OK for a :10 take, but a bummer when it’s a 30-minute interview.

The good news is that if you went through the Resolve roundtrip workflow and rendered individual clips, then effectively Resolve has already done media consolidation as a byproduct. In addition, if your source media is 4K, but you only finished in HD, the Resolve renders will be 4K. If in the future, you need to deliver the same master in 4K, everything is already set. Of course, that assumes that you didn’t do a lot of “punching in” and reframing in your edit sequence.

Cloud-based services. Often collaboration requires a distributed team, when not everyone is under one roof. While Adobe does offer cloud-based team editing methods, this doesn’t really work when editors are on different Creative Cloud accounts or when the collaboration is between an editor and a graphic designer/animator/VFX artist working in non-Adobe tools. In that case the old standbys have been Dropbox, Box, or Google Drive. Syncing is easy and relatively reliable. However, these are really just designed for sharing assets. But when this involves a couple of editors and each has a local, mirrored set of media, then simple sharing/syncing of only small project files makes for a working collaborative method.

Frame.io is the newbie here, with updated extension tools designed for in-application workspace panels within Final Cut Pro X, After Effects, and Premiere Pro. While they tout the ease of moving full-resolution media into their cloud, including camera files, I really wouldn’t recommend doing that. It’s simply not very practical on most projects. But for sharing cuts using a standard review-and-approve workflow, Frame.io definitely hits most of the buttons.

©2018 Oliver Peters

Five Decades of Edit Suite Evolution

I spent last Friday setting up two new Apple iMac Pros as editing workstations. When I started as an editor in the 1970s, it was the early days of computer-assisted video editing. Edit suites (or bays) were intended for either “offline” editing with simple hardware, where creative cutting was the goal – or they were “online”, designed for finishing and used the most expensive gear. Sometimes the online bay would do double-duty for both creative and final post.

The minimum investment for such a linear edit suite would include three 2” videotape recorders, a video switcher (vision mixer), edit controller, audio mixer, and a small camera for titles and artwork. Suites were designed with creature comforts, since clients would often spend days at a time supervising the edit session. Before smart phones and the internet, clients welcomed the chance to get out of the office and go to the edit. Outfitting one of these edit suites would start at several hundred thousand dollars.

At my current edit gig, the company runs nine Mac workstations within a footprint that would have only supported three edit suites of the past, including a centralized machine room. Clients rarely come to supervise an edit, so the layout is more akin to the open office plan of a design studio. Editing can be self-contained on a Mac or PC and editors work in a more collegial, collaborative environment. There’s one “hero” room for when clients do decide to drop in.

In these five decades, computer-assisted editing has gone through four phases:

Phase 1 – Offline and online edit suites, primarily based on linear videotape technology.

Phase 2 – Nonlinear editing took hold with the introduction of Avid, EMC, Media 100, and Lightworks. The resolution was too poor for finishing, but the systems were ideal for the creative process. VTR-based linear rooms still handled finishing.

Phase 3 – As the quality improved, nonlinear systems could deliver finished masters. But camera acquisition and delivery was still centered on videotape. Nonlinear systems still had to be able to output to tape, which required specialized i/o hardware.

Phase 4 (current) – Editing is completely based around the computer. Most general-purpose desktop and even laptop computers are capable of the whole gamut of post services without the need for specialized hardware. That has become optional. The full shift to Phase 4 came when file-based acquisition and delivery became the norm.

This transition brought about a sea change in cost, workflow, facility design, and talent needs. It has been driven by technology, but also a number of socioeconomic factors.

1. Technology always advances. Computers get more powerful at a lower cost point. Moore’s Law and all that. Although our demands increase – SD, HD, 4K, 8K, and beyond – computers, so far, have not been outpaced. I can edit 4K today with an investment of under $10K, which was impossible in 1980, even with an investment of $500K or more. This cost reduction also applies to shared storage solutions (NAS and SAN systems). They are cheaper, easier to install, and more reliable than ever. Even the smallest production company can now afford to design editing around the collaboration of several editors and workstations.

2. The death of videotape came with the 2011 Tohoku earthquake and tsunami in Japan that disabled the Fukushima nuclear plant. A byproduct of this natural disaster was that it damaged the Sony videotape manufacturing plant, putting supplies of HDCAM-SR stock on indefinite backorder. This pointed to the vulnerability of videotape and hastened the acceptance of file-based delivery for masters by key networks and distributors.

3. Interactions with clients and human beings in general have changed – thanks to smartphones, personal computers, and the internet. While both good and bad, the result is a shift in our communication with clients. Most of the time, edit session review and approval is handled over internet services. Post your cut. Get feedback. Make your changes and post again. Repeat. Along with a smaller hardware footprint than in the past, this is one of the prime reasons that room designs have changed. You don’t need a big, comfortable edit suite designed for clients, if they aren’t going to come. A smaller room will do as long as your editors are happy and productive.

Such a transition isn’t new. It’s been mirrored in the worlds of publishing, graphic design, and recording studios. Nevertheless, it is interesting to look back at how far things have come. Naturally, some will view this evolution as a threat and others as filled with opportunities. And, of course, where it goes from here is anyone’s guess.

All I know is that setting up two edit systems in a day would have been inconceivable in 1975!

Originally written for RedShark News

To hear a bit more about the changes and evolution of facilities, check out the Dec. 13th edition of the Digital Production Buzz. Click this link.

©2018 Oliver Peters

Editing and Music Composition

A nip is in the air and snow is falling in some regions. All signs of Fall and Winter soon to come. The sights, smells, and sounds of the season will be all around us. Festive events. Holiday celebrations. Joy. But no other season is so associated with memorable music to put us in the mood. That makes this a perfect time to talk about how video and film editing has intrinsic similarities with musical composition.

Fellow editor Simon Ubsdell has a lot of thoughts on the subject – perfect for one of my rare guest blog posts. Simon is Creative Director of Tokyo Productions, a London-based post-production shop specializing in trailers. Simon is multi-talented with experience in music, audio post, editing, and software development.

Grab a cup of holiday cheer and sit back for this enlightening read.

______________________________________

Simon Ubsdell – Editing and Music Composition

There is a quote attributed to several different musicians, including Elvis Costello, Miles Davis, and Thelonious Monk, which goes: “Talking about music is like dancing about architecture”. It sounds good and it seems superficially plausible, but I think it’s wrong on two levels. Firstly, a good choreographer would probably say that it’s perfectly possible to use dance to say something interesting about architecture and a good architect might well say that they could design a building that said something about dance. But I think it’s also unhelpful to imply that one art form can’t tell us useful things about another. We can learn invaluable lessons both from the similarities and the differences, particularly if we focus on process rather than the end result.

Instead, here’s Ingmar Bergman: “I would say that there is no art form that has so much in common with film as music. Both affect our emotions directly, not via the intellect. And film is mainly rhythm; it is inhalation and exhalation in continuous sequence.”

Bergman is certainly not the only filmmaker to have made this observation and I think everyone can recognise the essential truth of it. However, what I want to consider here is not so much what film and music have in common as art forms, but rather whether the process of music composition can teach us anything useful about the process of film editing. As an editor who also composes music, I have found thinking about this to be useful in both directions.

In films you’ll often see a composer sitting down at a piano and laboriously writing a score one note after another. He bangs around until he finds one note and then he scribbles it into the manuscript; then he bangs around looking for the next one. Music composition is made to look like a sequential process where each individual note is decided upon (with some difficulty usually!) before moving on to the next. The reality is of course that music composition doesn’t work this way at all. So I’d like to look at some of the ways that one does actually go about writing a piece of music and how the same principles might apply to how we edit films. Because music is such a vast subject, I’m going to limit myself largely to the concepts of classical music composition, but the same overall ideas apply to whatever kind of music you might be writing in whatever genre.

What both music and film have in common is that they unfold over time: they are experienced sequentially. So the biggest question that both the composer and the editor need to address is how to organise the material across time, and to do that we need to think about structure.

Musical Structure

From the Baroque period onwards, and even before, composers have drawn on a very specific set of musical structures around which to build their compositions.

The Canon (as in Pachelbel’s famous example) is the repetition of the same theme over and over again with added ornamentation that becomes increasingly more elaborate. The Minuet and Trio is an A/B/A sandwich in which a theme is repeated (Minuet), but with a contrasting middle section (Trio). The Rondo is a repeated theme that alternates with multiple contrasting sections, in other words A/B/A/C/A/D, etc. The Theme and Variations sets out a basic theme and follows it with a series of elaborations in different keys, tempi, time signatures, and so on. 

Sonata Form, widely used for the opening movements of most symphonic works, is a much more sophisticated scheme that starts by setting out two contrasting themes (the “1st and 2nd Subjects”) in two different keys (the “Exposition”), before moving into an extended section where those ideas undergo numerous changes and augmentations and key modulations (the “Development Section”), before returning to the original themes, both now in the home key of the piece (the “Recapitulation Section”), often leading to a final epilogue called the “Coda”.

In all these cases the structure is built out of thematic and other contrasts, and contrast is a word I’m going to be coming back to repeatedly here, because it goes to the core of where music composition and editing come together.

Now the point of using musical structures of this kind is that the listener can form an idea of how the piece is unfolding even when hearing it for the first time. They provide a map that helps you orientate yourself within the music, so it doesn’t come across as just some kind of confused and arbitrary ramble across terrain that’s hard to read. Music that doesn’t come with signposts is not easy to listen to with concentration, precisely because you don’t know where you are. (Of course, the humble pop song illustrates this, too. We can all recognise where the verse ends and the chorus begins and the chorus repetitions give us clear anchor points that help us understand the structure. The difference with the kind of classical music I’m talking about is that a pop song doesn’t have to sustain itself for more than a few minutes, whereas some symphonies last well over an hour and that means structure becomes vastly more important.) 

What structure does is effectively twofold: on the one hand it gives us a sense of comprehensibility, predictability, even familiarity; and on the other hand it allows the composer to surprise us by diverging from what is expected. The second part obviously follows from the first. If we don’t know where we are, then we don’t know what to expect and everything is a constant surprise. And that means nothing is a surprise. We need familiarity and comprehensibility in order to be able to be surprised by the surprises when they come. Conversely, music that is wholly without surprises gets dull very quickly. Just as quickly as music that is all surprise, because again it offers us no anchor points. 

Editing Structure

So what comparisons can we draw with editing in terms of structure? Just as with our fictional movie “composer” sitting at the piano picking out one note after another, so you’ll find that many newcomers to editing believe that that’s how you put together a film. Starting at the beginning, you take your first shot and lay it down, and then you go looking for your next shot and you add that, and then the next one and the next one. Of course, you can build a film this way, but what you are likely to end up with is a shapeless ramble rather than something that’s going to hold the viewer’s attention. It will be the equivalent of a piece of music that has no structural markers and doesn’t give us the clues we need to understand where we are and where we are going. Without those cues the viewer quickly gets lost and we lose concentration. Not understanding the structure means we can’t fully engage with the film.

So how do we go about creating structure in our editing? Music has an inherently much more formal character, so in many ways the composer has an easier job, but I’d suggest that many of the same principles apply.

Light and Shade in Music

Music has so many easy-to-use options to help define structure. We have tempo – how fast or slow the music is at any one point. Rhythm – the manner in which accented notes are grouped with non-accented notes. Pitch – how high or low the musical sounds are. Dynamics – how loud or soft the music is, and how soft becomes loud and vice versa. Key – how far we have moved harmonically from the home key of the piece. Mode – whether we are experiencing the bright optimism of a major key or the sombre darkness of a minor key (yes, that’s a huge over-simplification!). Harmony – whether we are moving from the tension of dissonance to the resolution of consonance, or vice versa.

All of these options allow for contrasts – faster/slower, brighter/darker, etc. It’s out of those contrasts that we can build structure. For example, we can set out our theme in a bright, shiny major key with a sprightly rhythm and tempo, and then move into a slow minor key variation shrouded in mystery and suspense. It’s from those contrasts that we grasp the musical structure. And of course moving through those contrasts becomes a journey. We’re not fixed in one place, but instead we’re moving from light to dark, from peaceful to agitated, from tension to resolution, and so on. Music satisfies and nourishes and delights and surprises us, because it takes us on those journeys and because it is structured so that we experience change.

Light and Shade in Editing

So what are the editing equivalents? Let’s start with the easiest scenario and that’s where we are cutting with music. Because music has the properties we’ve discussed above, we can leverage those to give our films the same contrasts. We can change the pace and the mood simply by changing the pace and mood of the music we use. That’s easy and obvious, but very often overlooked. Far too many music-driven pieces are remorselessly monotonous, relying far too heavily for far too long on music of the same pace and mood. That very quickly dissipates the viewer’s engagement for the reasons we have talked about. Instead of feeling as though we are going on a journey of contrasts, we are stuck in one repetitive loop and it’s dull – and that means we stop caring and listening and watching. Instead of underscoring where the film is going, it effectively tells us that the film is going nowhere, except in circles.

(Editing Tip: So here’s a suggestion: if you’re cutting with pre-composed music, don’t let that music dictate the shape of your film. Instead cut the music so it works for you. Make sure you have changes of pace and intensity, changes of key and mode, that work to enhance the moments that are important for your film. Kill the music, or change it, or cut it so that it’s driving towards the moments that really matter. Master it and don’t let it master you. Far too often we see music that steamrolls through everything, obliterating meaning, flattening out the message – music that fails to point up what’s important and de-emphasise what is not. Be in control of your structure and don’t let anything dictate what you are doing, unless it’s the fundamental meaning you are trying to convey.

Footnote: Obviously what I’ve said here about music applies to the soundtrack generally. Sound is one of the strongest structural markers we have as editors. It builds tension and relaxation, it tells us where moments begin and end, it guides us through the shape of the film in a way that’s even more important than the pictures.)

And that brings me to a really important general point. Too many films feel like they are going in circles, because they haven’t given enough thought to when and how the narrative information is delivered. So many film-makers think it’s important to tell us everything as quickly as possible right up front. They’re desperate to make sure they’ve got their message across right here, right now, in its entirety. And then they simply end up recycling stuff we already know and that we care about less and less with each repetition. It’s a bit like a composer piling all his themes and all their variations into the first few bars (a total, unapproachable cacophony) and then being left with nothing new to say for the rest of the piece.

A far better approach is to break your narrative down into a series of key revelations and delay each one as long as you dare. Narrative revelations are your key structural points and you must cherish them and nurture them and give them all the love you can and they will repay you with enhanced audience engagement. Whatever you do, don’t throw them away unthinkingly and too soon. Every narrative revelation marks a way station on the viewer’s journey, and those way stations are every bit as crucial and valuable as their musical equivalents. They are the map of the journey. They are why we care. They are the hooks that make us re-engage.

Tension and Relaxation

This point about re-engagement is important too and it brings me back to music. Music that is non-stop tension is exhausting to listen to, just as music that is non-stop relaxation quickly becomes dull. As we’ve discussed, good music moves between tension and relaxation the whole time at both the small and the large scale, and that alternation creates and underpins structure. We feel the relaxation, because it has been preceded by tension and vice versa.

And the exact same principle applies to editing. We want the viewer to experience alternating tension and relaxation, moments of calm and moments of frenzied activity, moments where we are absorbing lots of information and moments where we have time to digest it. (Remember Bergman talking about “inhalation and exhalation”.) Tension/relaxation applies at every level of editing, from the micro-level of the individual cuts to the macro level of whole scenes and whole sequences.

As viewers we understand very well that a sudden burst of drama after a period of quiet is going to be all the more striking and effective. Conversely we know about the effect of getting our breath back in the calms that come after narrative storms. That’s at the level of sequences, but even within scenes, we know that they work best when the mood and pace are not constant, when they have corners and changes of pace, and their own moments of tension and relaxation. Again it’s those changes that keep us engaged. Constant tension and its opposite, constant relaxation, have the opposite effect. They quickly end up alienating us. The fact is we watch films, because we want to experience that varied journey – those changes between tension and relaxation.

Even at the level of the cut, this same principle applies. I was recently asked by a fellow editor to comment on a flashy piece of cutting that was relentlessly fast, with no shot even as long as half a second. Despite the fact that the piece was only a couple of minutes long, it felt monotonous very quickly – I’d say after barely 20 seconds. Whereas of course, if there had been even just a few well-judged changes of pace, each one of those would have hooked me back in and re-engaged my attention. It’s not about variety for variety’s sake, it’s about variety for structure’s sake.

The French have an expression: “reculer pour mieux sauter”, which roughly means taking a step back so you can jump further, and I think that’s a good analogy for this process. Slower shots in the context of a sequence of faster shots act like “springs”. When faster shots hit slower shots, it’s as if they apply tension to the spring, so that when the spring is released the next sequence of faster shots feels faster and more exciting. It’s the manipulation of that tension of alternating pace that creates exciting visceral cutting, not just relentlessly fast cutting in its own right.

Many great editors build tension by progressively increasing the pace of the cutting, with each shot getting incrementally shorter than the last. We may not be aware of that directly as viewers, but we definitely sense the “accelerated heartbeat” effect. The obvious point to make is that acceleration depends on having started slow, and deceleration depends on having increased the pace. Editing effects are built out of contrasts. It’s the contrasts that create the push/pull effect on the viewer and bring about engagement.

(Editing Tip: It’s not strictly relevant to this piece, but I wanted to say a few words on the subject of cutting to music. Many editors seem to think it’s good practice to cut on the downbeats of the music track and that’s about as far as they ever get. Let’s look at why this strategy is flawed. If our music track has a typical four beats to the bar, the four beats have the following strengths: the first, the downbeat, is the dominant beat; the third beat (often the beat where the snare hits) is the second strongest beat; then the fourth beat (the upbeat); and finally the second beat, the weakest of the four.

Cutting on the downbeat creates a pull of inertia, because of its weight. If you’re only ever cutting on that beat, then you’re actually creating a drag on the flow of your edit. If you cut on the downbeat and the third beat, you create a kind of stodgy marching rhythm that’s also lacking in fluid forward movement. Cutting on the upbeat, however, because it’s an “offbeat”, actually helps to propel you forward towards the downbeat. What you’re effectively doing is setting up a kind of cross-rhythm between your pictures and your music, and that has a really strong energy and flow. But again the trick is to employ variety and contrast. Imagine a drummer playing the exact same pattern in each bar: that would get monotonous very quickly, so what the drummer actually does is to throw in disruptions to the pattern that build the forward energy. He will, for example, de-emphasise the downbeat by exaggerating the snare, or he will even shift where the downbeat happens, and add accents that destabilise the four-square underlying structure. And all that adds to the energy and the sense of forward movement. And that’s the exact principle we should be aiming for when cutting to music; a rough sketch of how these beat positions fall in time follows after this tip.

There’s one other crucial, but often overlooked, aspect to this: making your cut happen on a beat is far less effective than making a specific moment in the action happen on a beat. That creates a much stronger sense of forward-directed energy and a much more satisfying effect of synchronisation overall. But that’s not to say you should only ever cut this way. Again variety is everything, but always with a view to what is going to work best to propel the sequence forward, rather than let it get dragged back. Unless, of course, dragging back on the forward motion is exactly what you want for a particular moment in your film, in which case, that’s the way to go.)
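To make the beat arithmetic above concrete, here is a minimal Python sketch. It is purely illustrative: the function name, the default tempo, and the frame rate are assumptions, and it assumes a constant tempo throughout, whereas real cues would need the music’s actual tempo map. It simply maps the four beats of each 4/4 bar to frame numbers, so that cuts could, for example, be placed on the upbeat (beat 4) rather than on the downbeat.

# Minimal sketch, assuming a constant tempo and frame rate (names and defaults are
# illustrative only). It maps each beat of a 4/4 bar to a frame number so an editor
# could, for example, place cuts on the upbeat (beat 4) rather than the downbeat.
def beat_frames(bpm=120.0, fps=24.0, bars=8):
    """Return (bar, beat, frame) tuples for 4/4 music at a constant tempo."""
    seconds_per_beat = 60.0 / bpm
    positions = []
    for bar in range(bars):
        for beat in range(1, 5):  # beats 1-4; beat 1 is the downbeat, beat 4 the upbeat
            t = (bar * 4 + (beat - 1)) * seconds_per_beat
            positions.append((bar + 1, beat, round(t * fps)))
    return positions

# Candidate cut points on the upbeat, following the cross-rhythm idea above.
upbeat_cut_frames = [frame for (bar, beat, frame) in beat_frames() if beat == 4]
print(upbeat_cut_frames)  # at 120 bpm / 24 fps: 36, 84, 132, 180, ...

At 120 bpm and 24 fps this puts the upbeats at frames 36, 84, 132 and so on; the same arithmetic works for any beat of the bar, which is why varying which beat you favour is straightforward once the positions are known.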

Building Blocks

You will remember that our fictional composer sits down at the piano and picks out his composition note by note. The implicit assumption there is that individual notes are the building blocks of a piece of music. But that’s not how composers work. The very smallest building block for a composer is the motif – a set of notes that exists as a tiny seed out of which much larger musical ideas are encouraged to grow. The operas of Wagner, despite notoriously being many hours long, are built entirely out of short motifs that grow through musical development to truly massive proportions. You might be tempted to think that a motif is the same thing as a riff, but riffs are merely repetitive patterns, whereas motifs contain within them the DNA for vast organic structures and the motifs themselves can typically grow other motifs.

Wagner is, of course, more of an exception than a rule and other composers work with building blocks on a larger scale than the simple motif. The smallest unit is typically something we call a phrase, which might be several bars long. And then again one would seldom think of a phrase in isolation, since it only really exists as part of a larger thematic whole. If we look at the famous opening of Mozart’s 40th Symphony we can see that he starts with a two-bar phrase that rises on the last note, which is answered by a phrase that descends back down from that note. The first phrase is then revisited along with its answering phrase – both shifted one step lower.

But those resulting eight bars are only half of the complete theme, while the complete 1st Subject is 42 bars long. So what is Mozart’s basic building block here? It most certainly isn’t a note, or even a phrase. In this case it’s something much more like a combination of a rhythm pattern (da-da-Da) and a note pattern (a falling interval of two adjacent notes). But built into that is a clear sense of how those patterns are able to evolve to create the theme. In other words, it’s complicated.

The fundamental point is that notes on their own are nothing; they are inert; they have no meaning. It’s only when they form sequences that they start to become music.

The reason I wanted to highlight this point is that I think it too gives us a useful insight into the editing process. The layperson tends to think of the single shot as being the basic building block, but just as single notes on their own are inert, so the single shot on its own (typically, unless it’s an elaborate developing shot) is lacking in meaning. It’s when we build shots into sequences that they start to take on life. It’s the dynamic, dialectical interplay of shots that creates shape and meaning and audience engagement. And that means it’s much more helpful to think of shot sequences as the basic building blocks. It’s as sequences that shots acquire the potential to create structure. Shots on their own do not have that quality. So it pays to have an editing strategy that is geared towards the creation and concatenation of “sequence modules”, rather than simply a sifting of individual shots. That’s a huge subject that I won’t go into in any more detail here, but which I’ve written about elsewhere.

Horizontal and Vertical Composition

Although the balance keeps shifting down the ages, music is both horizontal and vertical and exists in a tension between those aspects. Melody is horizontal – a string of notes that flows left to right across the page. Harmony is vertical – a set of notes that coexist in time. But these two concepts are not in complete opposition. Counterpoint is what happens when two or more melodies combine vertically to create harmony. The fugue is one of the most advanced expressions of that concept, but there are many others. It’s a truly fascinating, unresolved question that runs throughout the history of music, with harmony sometimes in the ascendant and sometimes melody.

Melody typically has its own structure, most frequently seen in terms of groups of four bars, or multiples of four bars. It tends to have shapes that we instinctively understand even when hearing it for the first time. Harmony, too, has a temporal structure, even though we more typically think of it as static and vertical. Vertical harmonies tend to suggest a horizontal direction of travel, again based on the notion of tension and relaxation, with dissonance resolving towards consonance. Harmonies typically point to where they are planning to go, although of course, just as with melody, the reason they appeal to us so much is that they can lead us to anticipate one thing and then deliver a surprise twist.

In editing we mostly consider only melody, in other words, how one shot flows into another. But there is also a vertical, harmonic component. It’s only occasionally that we layer our pictures to combine them vertically (see footnote). But we do it almost all the time with sound – layering sound components to add richness and complexity. I suppose one way of looking at this would be to think of pictures as the horizontal melody and the soundtrack as the vertical harmony, or counterpoint.

One obvious way in which we can approach this is to vary the vertical depth to increase and decrease tension. A sound texture that is uniformly dense quickly becomes tiresome. But if we think in terms of alternating moments where the sound is thickly layered and moments where it thins out, then we can again increase and decrease tension and relaxation.

(Footnote: One famous example of vertical picture layering comes in Apocalypse Now where Martin Sheen is reading Kurtz’s letter while the boat drives upstream towards the waiting horror. Coppola layers up gliding images of the boat’s passage in dissolves that are so long they are more like superimpositions – conveying the sense of the hypnotic, awful, disorientating journey into the unknowable. But again contrast is the key here, because punctuating that vertical layering, Coppola interjects sharp cuts that hit us full in the face: suspended corpses, the burning helicopter in the branches of a tree. The key thing to notice is the counterpoint between the hard cuts and the flowing dissolves/superimpositions. The dissolves lull us into an eerie fugue-like state, while the cuts repeatedly jolt us out of it to bring us face to face with the horror. The point is that they both work together to draw us inexorably towards the climax. The cuts work, because of the dissolves, and the dissolves work because of the cuts.)

Moments

The moments that we remember in both music and films are those points where something changes suddenly and dramatically. They are the magical effects that take your breath away. There is an incredibly famous cut in David Lean’s Lawrence of Arabia that is a perfect case in point. Claude Rains (Mr. Dryden) and Peter O’Toole (Lawrence) have been having a lively discussion about whether Lawrence really understands how brutal and unforgiving the desert is going to be. O’Toole insists that “it’s going to be fun”. He holds up a lighted match, and we cut to a close-up as he blows it out. On the sound of him blowing, we cut to an almost unimaginably wide shot of the desert as the sun rises almost imperceptibly slowly in what feels like complete silence. The sudden contrast of the shot size, the sudden absence of sound, the abruptness of cutting on the audio of blowing out the match – all of these make this one of the most memorable moments in film history. And of course, it’s a big narrative moment too. It’s not just clever, it has meaning. 

Or take another famous moment, this time from music. Beethoven’s massive Choral Symphony, the Ninth, is best known for its famous final movement, the Ode to Joy, based on Schiller’s poem of the same name. The finale follows on from a slow movement of celestial tranquillity and beauty, but it doesn’t launch immediately into the music that everyone knows so well. Instead there is a sequence built on the most incredible dissonance, which Wagner referred to as “the terror fanfare”. Beethoven has the massed ranks of the orchestra blast out a phenomenally powerful fortissimo chord that stacks up all seven notes of the D minor harmonic scale. It’s as if we are hearing the foul demons of hatred and division being sent screeching back to the depths of hell. And as the echoes of that terrifying sound are still dying away, we suddenly hear the solo baritone, the first time in nearly an hour of music that we have heard a human voice: “O Freunde, nicht diese Töne“, “Friends, let us not hear these sounds”. And so begins that unforgettable ode to the brotherhood of all mankind.

The point about both the Lawrence of Arabia moment and the Beethoven moment is that in each case, they form giant pivots upon which the whole work turns. The Lawrence moment shows us one crazy Englishman pitting himself against the limitless desert. The Beethoven moment gives us one lone voice stilling the forces of darkness and calling out for something better, something to unite us all. These are not mere stylistic tricks, they are fundamental structural moments that demand our attention and engage us with what each work is really about.

I’m not suggesting that everything we cut is going to have moments on this kind of epic scale, but the principle is one we can always benefit from thinking about and building into our work. When we’re planning our edit, it pays to ask ourselves where we are going to make these big turning points and what we can do with all the means at our disposal to make them memorable and engaging. Our best, most important stuff needs to be reserved for these pivotal moments and we need to do everything we can to do them justice. And the best way of doing that, as Beethoven and David Lean both show us, is to make everything stop.

When the Music Stops

Arguably the greatest composer of them all gave us one of my favourite quotes about music: “The music is not in the notes, but in the silence between.” Mozart saw that the most magical and profound moments in music are when the music stops. The absence of music is what makes music. To me that is one of the most profound insights in art.

From an editing point of view, that works, too. We need to understand the importance of not cutting, of not having sound, of not filling every gap, of creating breaths and pauses and beats, of not rushing onto the next thing, of allowing moments to resonate into nothingness, of stepping away and letting a moment simply be.

The temptation in editing is always to fill every moment with something. It’s a temptation we need to resist wherever we can. Our films will be infinitely better for it. Because it’s in those moments that the magic happens.

Composing and Editing with Structure

I hope by now you’ll agree with me about the fundamental importance of structure in editing. So let’s come back to our original image of the composer hammering out his piece of music note by note, and our novice editor laying out his film shot by shot.

It should be obvious that a composer needs to pre-visualise the structure of the piece before starting to think about the individual notes. At every level of the structure he needs to have thought about where the structural changes might happen – both on a large and small scale. He needs to plan the work in outline: where the key changes are going to happen, where the tempo shifts from fast to slow or slow to fast, where the tension escalates and where it subsides, where the whole orchestra is playing as one and where we hear just one solitary solo line. 

It goes without saying that very few composers have ever plotted out an entire work in detail and then stuck rigidly to the plan. But that’s not the point. The plan is just a plan until a better one comes along. The joy of composition is that it throws up its own unexpected surprises, ideas that grow organically out of other ideas and mushroom into something bigger, better and more complex than the composer could envisage when starting out. But those ideas don’t just shoot off at random. They train themselves around the trelliswork of the original structure. 

As I’ve mentioned, classical composers have it easy, because they can build upon pre-conceived structures like Sonata Form and the rest.  As editors we don’t have access to the same wealth of ready-built conventions, but we do have a few. 

One of the structures that we very frequently call upon is the famous three-act structure. It works not only for narrative, but for pretty much any kind of film you can think of. The three-act structure does in fact have a lot in common with Sonata Form. Act One is the Exposition, where we set out the themes to be addressed. Act Two is the Development Section, where the themes start to get complicated and we unravel the problems and questions that they pose. And Act Three is the Recapitulation (and Coda), where we finally resolve the themes set out in Act One. Almost anything you cut at whatever length can benefit from being thought of in these structural terms: a) set out your theme or themes; b) develop your themes and explore their complexities; c) resolve your themes (or at least point to ways in which they might be resolved). And make sure your audience is aware of how those sections break down. As an editor who has spent a lot of my working life cutting movie trailers, I know that every experienced trailer editor deploys three-act structure pretty much all the time and works it very hard indeed.

 Of course, scripted drama comes into the cutting room with its own prebuilt structure, but the script is by no means necessarily the structural blueprint for the finished film. Thinking about how to structure what was actually shot (as against what was on the page) is still vitally important. The originally conceived architecture might well not actually function as it was planned, so we can’t simply rely on that to deliver a film that will engage as it should. The principles that we’ve discussed of large scale composition, of pace, of contrast, of rhythm, and so on are all going to be useful in building a structure that works for the finished film.

Other kinds of filmmaking rely heavily on structural planning in the cutting room and a huge amount of work can go into building the base architecture. And it really helps if we think of that structural planning as more than simply shifting inert blocks into a functional whole. If we take inspiration from the musical concepts described here, we can create films that breathe a far more dynamic structural rhythm, that become journeys through darkness and light, through tension and relaxation, between calm and storm, journeys that engage and inspire.

Conclusion

Obviously this is just an overview of what is in reality a huge subject, but what I want to stress is that it really pays to be open to thinking about the processes of editing from different perspectives. Music, as a time-based art form, has so many useful lessons to draw from, both in terms of large scale architecture and small scale rhythms, dynamics, colours, and more. And those lessons can help us to make much more precise, refined and considered decisions about editing practice, whatever we are cutting.

– Simon Ubsdell

For more of Simon’s thoughts on editing, check out his blog post Bricklayers and Sculptors.

© 2018 Simon Ubsdell, Oliver Peters

Mary Queen of Scots

Few feature film editors have worked on such a diverse mix of films as Chris Dickens. His work ranges from Shaun of the Dead to Les Misérables, picking up an Oscar along the way for editing Slumdog Millionaire. His latest film is Mary Queen of Scots, starring Gemma Chan, Margot Robbie, and Saoirse Ronan. This historical drama is helmed by Josie Rourke (Much Ado About Nothing), an experienced theatre director who has also worked on film and TV projects. Readers will be familiar with Dickens from my Hot Fuzz interview. I recently had the pleasure to chat with him again about Mary Queen of Scots.

______________________________________________________

[OP] I know that there’s a big mindset difference between directing for the stage and directing for film. How was it working with Josie Rourke for this film?

[CD] She was very solid with the actors’ performances and how to rehearse them. There are great performances and that was the major thing she was concentrating on. She knew about the creative side of filmmaking, but not about the technical. We were essentially helping her with that to get what she wanted on screen. It’s a dialogue-driven movie and so she was very at home with that. But we had to work with her to adapt her normal approach for the screen, such as when to use images instead of dialogue.

Filmmaking is all about seeing something more than you can just see with the naked eye. Plus seeing emotionally what an actor is delivering. The way they’re doing it is different than on stage. It’s smaller. Film acting is much subtler. I don’t think we ever had a difference of opinion about that. It was more that in the theatre you are trying to communicate things through an actor’s movement and language and not so much through their eyes and the subtleties of their face. With film, one close-up of an actor can do more than a whole page of dialogue. Nevertheless, she certainly gave the cameramen freedom, while she concentrated on performance. And she shot all of that stuff so we had enough to use to make it work on the screen.

[OP] Did that dynamic affect how and where you edited?

[CD] I was mostly at the studio, but I did go on location with them. We shot at Pinewood and then on location in Scotland and around England. I went up to Scotland, where we had some action scenes, to help with that. Josie needed feedback about what she was shooting and needed to see it quickly. I also did some second unit shooting and things like that.

[OP] Typically, period dramas require extensive visual effects to disguise modern locations and make them appear historically appropriate. They also are still frequently shot on film. What was the case with this film?

[CD] It was shot digitally. The DoP [John Mathieson, Logan, The Man from U.N.C.L.E., X-Men: First Class] would have preferred film, because of the genre, but that would have been too expensive. There were always two and sometimes three cameras for most set-ups. But, there are very few visual effects. Just a few clean-ups. There is an epic feel, but that’s not the main direction. The film is a more psychological story about these two women, the Queen of England and the Queen of Scotland. They are both opposed to each other, but also like each other. It’s about their relationship and the sort of psychological connection between them. The story is more intimate in that way. So it’s about the performance and the subtleties to that story.

[OP] Walk me through the production and post timeline.

[CD] We shot it a year ago last August for about three months. I assembled the film during that time and then we started the director’s cut in October of last year. We actually had a long edit and didn’t finish until July of this year. I think we spent about thirteen weeks doing a director’s cut. Then the producer’s cut, and then, a director’s cut again. I think we did about two or three test screenings and we had sound editors on board quite early. In fact, we never stopped cutting almost right until the end. If you have a lot of screenings, everyone involved with the film wants to do a lot of changes and it keeps happening right down to the wire. So we basically carried on cutting almost right through till the middle of June.

[OP] It sounds like you had more changes than usual for most film edits – especially after your test screenings. Tell me more.

[CD] The core of the film is about Mary, who was a Catholic Queen, and Elizabeth, who was a Protestant Queen. Mary had the claim to not just be Queen of Scotland, but the Queen of England, as well. She’s a threat to Elizabeth, so the film is about that threat. These women essentially had an agreement between them. Elizabeth agreed that Mary’s child would succeed her if she died. This was a private agreement between the two women. The men around them who are in their government are trying to stop them from interacting with each other and having any kind of agreement. So it’s about women in a very archaic world. They are leaders, but they are not men, and the systems around them are not happy for them to be leaders. This was the first time there had ever been a queen in either country – and both at the same time.

The theme is kind of modern, so the script – written by Beau Willimon, who writes House of Cards – was a bit like a political drama. In his writing, he intercuts scenes to give it a modern, more interesting feel. I followed that pattern – crosscutting scenes and stuff like that. When we started screening, a lot of people found that difficult to understand, so we went the other way around. We put things together and made the structure more classic. But when we then started screening it again, we realized that the film had ceased to be unique. It started becoming more like other dramas from this genre. So we put it all the way back to how it originally was. We went back to the spirit of what Beau had written and did more intercutting, but in different places. That is why it took so long to cut the film, because the balance was difficult to arrive at. Often a script is written in a very linear fashion and you cut it up later. But in this case it was the opposite way around.

If you listen too much to the audience or even producers of the film you can lose what makes it unique. The hands of the director are very important. Particularly here, because this is a women’s story, directed by a woman director, and it was very important to preserve that point of view, which could very easily be eroded. She wrote it with Beau and he doesn’t explain everything. He doesn’t have characters telling you how they got to a certain place or why. We needed to preserve that, but we also needed to let people into the story a little more. So we had to make adjustments to allow an audience to understand it.

[OP] I’m sure that such changes, as with every film, affected its final length. How was Mary Queen of Scots altered through these various cuts and recuts?

[CD] The original cut was about two hours and 45 minutes, but we ended up at an hour and 55. To get there, we started to cut back on the more epic scenes within the film. For instance, we had a battle scene early on in the film and there was a battle at the end of the film where Mary is beaten and expelled from Scotland. They didn’t really have the budget for a classic battle like in Braveheart. It was a slightly more impressionistic battle – more abstract and about how it feels. It was a beautiful sequence, but we found that the film didn’t need that. It just didn’t need to be that complete. We had to make a lot of choices like that – cutting things down.

We cut nearly an hour of material, which obviously I’m used to doing. However, what we found is that, because it was a performance piece, by cutting it down so far, we also lost a little bit of the air between scenes. It became quite brutal – just story without any kind of feeling. So once we got the story working well, we then had to breathe life back into it. I literally went all the way back to the first edit of the film and looked at what was good about it in terms of the life and the subtleties. Then we very carefully started putting that back into the film. When you screen the film for audiences, you get very tunneled into making the story tighter and understandable, which is often at the expense of quite a lot. It’s an interesting part of the process – going back to the core of the story. You always have to do that. Sometimes you lose a little through the editing process and then you have to try and get it back.

We also had quite a lot of work on music. We had a composer on board [Max Richter, White Boy Rick, Hostiles, Morgan] quite early and he gave us a lot of ideas. But, as we changed the edit, we had to change the direction of the music somewhat. Of course, this also contributed to the length of the editing schedule. 

[OP] Music can certainly make or break a film. Some editors start with it right away and others wait until the end to play with options. It sounds like music was a bit of a challenge.

[CD] I normally go with it dry at the beginning. When I start putting the scenes together I tend to start using temp music. But I try to avoid it for as long as possible – even into the director’s cut. I think sometimes you can just use it as a bandage if you’re not careful. But on this film, we had a very specific tone that we needed to sell. It was a slightly more modern, suspenseful take on the music. We did end up using music a little earlier than I would have hoped.

We had a cut of the film and we had a soundtrack, but we were constantly changing it – trying new things – as the edit changed. The music was more avant garde to start with and that was our intention, but the studio wanted it to be a little more melodic. The composer is very respected in the classical world, so he took that on board and wrote some themes for us that took it in a slightly different direction. He would write something – maybe not even to picture – and then give us the stems. The music editor and I would edit the music and try it out in different places. Then the composer would see what we had done with it to picture. We would then give it back to him. He would do a bit more work and give it back to us. It was actually a very unusual process.

[OP] With such a diverse set of films under your belt, what are some of your tips in tackling a scene?

[CD] I go through the rushes and try to watch everything that they shot. If there are A and B cameras, then I try to watch the B camera, as well. You get different emotional things from that, since it is a different angle. In the ideal situation when there’s time, I watch everything, mark what I like, and then make a roll with all my selected takes. Then I watch it again. I prune it down even more and then start a cut. Ideally, I try to find one take that works all the way through a scene as my first port of call. Then I go through the roll of my selects and look at what I marked and what I liked and try to work those things into the cut. I look at each one to see if that’s the best performance for that line and I literally craft it like that.

When you’ve watched half of a roll of rushes, you don’t know how to cut the scene. But once you’ve watched it all – everything they’ve shot – you then can organize the scene in your head. The actual cutting is quite quick then. I tend to watch it and think, ‘Okay I know what I’m going to do for the first cut. I’m going to use that shot for the beginning, that bit for the end, and so on.’ I map it in my head and quickly put that together with largely the selected takes that I like. Then I watch it and start refining it, honing it, and going through the roll again – adding things. Of course that depends on time. If I don’t have much time, I have to work fast, so I can’t do that all the time.

[OP] Any closing thoughts to wrap this up?

[CD] The experience of editing Mary Queen of Scots really reminded me how important it is to stick to the original intention and ambition of the film and make editorial decisions based on that. This doesn’t mean sticking to the letter of the script, but looking at how to communicate its intent overall. Film editing, of course, always means lots of changes and so it’s easy to get lost. Therefore, going back to the original thought always helps in making the right choices in the end.

For more on editing Mary Queen of Scots, check out Steve Hullfish’s Art of the Cut interview with Chris Dickens.

© 2018 Oliver Peters