Terminator: Dark Fate

“I’ll be back” has turned out to be more than simply an iconic movie line. Sarah Connor (Linda Hamilton) and the T-800 (Arnold Schwarzenegger) are indeed back to save humanity from a dystopian future in this latest installment of the Terminator franchise. James Cameron is back on board as well, with writing and producing credits. Terminator: Dark Fate is in essence Cameron’s sequel to Terminator 2: Judgment Day.

Tim Miller (Deadpool) is at the helm to direct the tale. It’s roughly two decades after the time of T2 and a new Rev-9 machine has been sent from an alternate future to kill Dani Ramos (Natalia Reyes), an unsuspecting auto plant worker in Mexico. But the new future’s resistance has sent back Grace (Mackenzie Davis), an enhanced super-soldier, to combat the Rev-9 and save her. They cross paths with Connor and the story sets off on a mad dash to the finale at Hoover Dam.

Miller brought back much of his Deadpool team, including his VFX shop Blur, DP Ken Seng, and editor Julian Clarke. This is also the second pairing of Miller and Clarke with Adobe. Both Deadpool and Terminator: Dark Fate were edited using Premiere Pro. In fact, Adobe was happy to tie in with the film’s promotion through its own CreateYourFate trailer remix challenge, in which participants could edit their own version of the trailer using supplied content from the film.

I recently spoke with Julian Clarke about the challenges and fun of cutting this latest iteration of such an iconic film franchise.

___________________________________________________________

[OP] Terminator: Dark Fate picks up two decades after Terminator 2, leaving out the timelines of the subsequent sequels. Was that always the plan or did it evolve out of the process of making the film?

[JC] That had to do with the screenplay. The story had been written into a corner by the various sequels. We really wanted to bring Linda Hamilton’s character back. With Jim involved, we wanted to get back to first principles and have it based on Cameron’s mythology alone. To get back to the Linda/Arnold character arcs and then add some new stuff to that.

[OP] Many fans were attracted to the franchise by Cameron’s two original Terminator films. Was there a conscious effort at integrating that nostalgia?

[JC] I come from a place of deep fandom for Terminator 2. As a teenager I had VHS copies of Aliens and Terminator 2 and watched them on repeat after school! Those films are deeply embedded in my psyche and both of them have aged well – they still hold up. I watched the sequels and they just didn’t feel like Terminator films to me. So the goal was definitely to make it of the DNA of those first two movies. There’s going to be a chase. It’s going to be more grounded. It’s going to get back into the Sarah Connor character and have more heart.

[OP] This film tends to have elements of humor unlike most other action films. That must have posed a challenge to set the right tone without getting campy.

[JC] The humor thing is interesting. Terminator 2 has a lot of humor throughout. We have a little bit of humor in the first half and then more once Arnold shows up, but that’s really the way it had to be. The Dani Ramos character – who’s your entry point into the movie – is devastated when her whole family is killed. The idea that you could have a lot of jokes happening at that point would be terrible. It’s not the same in Terminator 2, because John Connor’s step-parents get very little screen time and they don’t seem that nice. You feel bad for them, but it’s OK that you get into this funny stuff right off the bat. On this one we had to ease into the humor so you could feel the gravity of the situation at the start of the movie.

[OP] Did you have to do much to alter that balance during the edit?

[JC] There were one or two jokes that we nipped out, but it wasn’t like that whole first act was chock full of jokes. The tone of the first act is more like Terminator, which is more of a thriller or horror movie. Then it becomes more like T2 as the action gets bigger and the jokes come in. So the first half is like a bigger Terminator and the second half more like T2.

[OP] Deadpool, which Tim Miller also directed, used a very nonlinear story structure, balancing action, comedic moments, and drama. Terminator was always designed with a linear, straightforward storyline. Right?

[JC] A movie hands you certain editing tools. Deadpool was designed to be nonlinear with characters in different places, so there are a whole bunch of options for you. Terminator: Dark Fate is more like a road movie. The destinations of certain paths along the road are predetermined. You can’t be in Texas before Mexico. So the structural options you had were where to check in with the Rev-9, as well as the inter-scene structure. Once you are in the detention center, where do you cut to Sarah and where do you cut to Dani? However, where that is placed in the movie is pretty much set. All you can do is pace it up, pace it down, adjust how to get there. There aren’t a lot of movable pieces that can be swapped around.

[OP] When we had talked after Deadpool, you discussed how you liked the assistants to build string-outs – what some call a KEM roll. Similar action is assembled back-to-back in order from every take into a sequence. Did you use that same organizational method on Terminator: Dark Fate?

[JC] Sometimes we were so swamped with material that there wasn’t time to create string-outs. I still like to have those. It’s a nice way to quickly see all the pieces that cover a moment. If you are trying to find the one take or action that’s five percent better than another, then it’s good to see them all in a row, rather than trying to keep it all in your head across a five-minute take. There was a lot of footage that we shot in the action scenes, but we didn’t do 11 or 12 takes for a dialogue scene. I didn’t feel like I needed some tool to quickly navigate through the dialogue takes. We would string out the ones that were more complicated.

[OP] Depending on the directing style, a series of takes may have increasingly calibrated performances with each successive take. With other directors, each take might be a lot different from the ones before and after it. What is your approach to evaluating which is the best take to use?

[JC] It’s interesting when you use the earlier takes versus the later takes and what you get from them. The later takes are usually the ones that are most directed. The actors are warmed up and most closely nail what the director has in mind. So they are strong in that regard. But sometimes they can become more self-conscious. And so sometimes the first take is more of a throwaway and may have less power, but feels more real – more off the cuff. Sometimes a delivered dialogue line feels less written and you’ll buy it more. Other times you’ll want that more dramatic quality of the later takes. My instinct is to first use the later takes, but as you start to revise a scene, you often go back to pieces of the earlier takes to ground it a little more.

[OP] How long did the production and post take?

[JC] It took a little over 100 days of shooting with a lot of units. I work on a lot of mid-budget films, so this seemed like a really long shoot. It was a little relentless for everyone – even squeezing it into those 100 days. Shooting action with a lot of VFX is slow, due to the reset time needed between takes. The ending of the movie is 30 minutes of action in a row. That’s a big job shooting all of that stuff. When they have a couple of units cranking through the dialogue scenes plus shooting action sequences – that’s when I have to really work hard to keep up. Once you hit the roadblocks of shooting just those little action pieces, you get a little time to catch up.

We had the usual director’s cut period and finished by the end of this September. The original plan was to finish by the beginning of September, but we needed the time for VFX. So everything piled up with the DI and the mix in order to still hit the release date. September got a little crazy. It seems like a long time – a total of 13 or 14 months – but it still was an absolute sprint to get the movie in shape and get the VFX into the film in time. This is maybe normal for some of these films, but compared to the other VFX movies I’ve done, it was definitely turning things up a notch!

[OP] I imagine that there was a fair amount of pre-viz required to lay out the action for the large VFX and CG scenes. Did you have that to work with as placeholder shots? How did you handle adjusting the cut as the interim and final shots were delivered?

[JC] Tim is big into pre-viz, with his background in VFX and animation and owning his own VFX company. We had very detailed animatics going into production. Depending on a lot of factors, you still abandon a lot of things. For example, the freeway chases are quite a bit different, because when you go there and do it with real cars, they do different things. Or only some of the cars look like they are going fast enough. Those scenes became quite different from the pre-viz.

Others are almost 100% CG, so you can drop in the pre-viz as placeholders. Although, even in those cases, sometimes the finished shot doesn’t feel real enough. In the “cartoon” world of pre-viz you can do wild camera moves and say, “Wow, that seems cool!” But when you start doing it at photoreal quality, then you go, “This seems really fake.” And so we tried to get ahead of that stuff and find what to do with the camera to ground it. Kind of mess it up so it’s not too dynamic and perfect.

[OP] How involved were you with shaping the music? Did you use previous Terminator film scores as a temp track to cut with?

[JC] I was very involved with the music production. I definitely used a lot of temp music. Some of it ripped from old Terminator movies, but there’s only so much Terminator 2 music you can put in. Those scores used a lot of synthesizers that date the sound. I did use “Desert Suite” from Terminator 2 when Sarah is in the hotel room. I loved having a very direct homage to a Sarah Connor moment while she’s talking about John. Then I begged our composer, Tom Holkenborg [Junkie XL], to consider doing a version of it for our movie. So it is essentially the same chord progression.

That was an interesting musical and general question about how much do you lean into the homage thing. It’s powerful when you do it, but if you do it too much, it starts to feel artificial or pandering. And so, I tried to hit the sweet spot so you knew you were watching a Terminator movie, but not so much that it felt like Terminator karaoke. How many times can you go da-dum-dum-da-da-dum? You have to pick your moments for those Terminator motifs. It’s diminishing returns if you do it too much.

Another inspiration for me was a different part of Terminator 2. There’s a disturbing industrial sound for the T-1000. It sounds more like a foghorn or something in a factory rather than music and it created this unnerving quality in the T-1000 scenes when he’s just scoping things out. So we came up with a modern-day electronic equivalent for the Rev-9 character and that was very potent.

[OP] Was James Cameron involved much in the post-production?

[JC] He’s quite busy with his Avatar movies. Some of the time he was in New Zealand, some of the time he was in Los Angeles. Depending on where he was and where we were in the process, we would hit milestones, like screenings or the first cut. We would send him versions and download a bunch of his thoughts.

Editing is very much a part of his wheelhouse. Unlike many other directors, he really thinks about this shot, then that shot, then the next shot. His mind really works that way. Sometimes he would give us pretty specific, dialed-in notes on things. Sometimes it would just be bigger suggestions, like, “Maybe the action cutting pattern could be more like this…” So we’d get his thoughts – and, of course, he’s Jim Cameron and he knows the business and the Terminator franchise – so I listened pretty carefully to that input.

[OP] This is the second film that you’ve cut with Premiere Pro. Deadpool was first and there were challenges using it on such a complex project. What was the experience like this time around?

[JC] Whenever you set out to use a new workflow, you run into things. Not to say Premiere is new – it’s been around a long time and has millions of users – but it’s unusual to use it on large VFX movies for specific reasons. On Deadpool, that led to certain challenges and that’s just what happens when you try to do something new. For instance, we had to split the movie into separate projects for each reel, instead of one large project. Even so, the size of our project files made it tough. They were so full of media that they would take five minutes to open. Nevertheless, we made it work and there are lots of benefits to using Adobe over other applications.

In comparison, the interface to Avid [Media Composer] looks like it was designed 20 years ago, but they have multi-user collaboration nailed and I love the trim tool. Yet, some things are old and creaky. Adobe’s not that at all. It’s nice and elegant in terms of the actual editing process. We got through it and sat down with Adobe to point out things that needed work and they worked on them. When we started up Terminator, they had a whole new build for us. Project files now opened in 15 seconds. They are about halfway there in terms of multi-user editing. Now everyone can go into a big shared project and you can move bins back and forth, although only one user at a time has write access to the master project.

This is not simple software they are writing. Adobe is putting a lot of work into making it a more fitting tool for this type of movie. Even though this film was exponentially larger than Deadpool, from the Adobe side it was a smoother process. Props to them for doing that! The cool part about pioneering this stuff is the amount of work that Adobe is on board to do. They’ll have people work on stuff that is helpful to us, so we get to participate a little in how Adobe’s software gets made.

[OP] With two large Premiere Pro projects under your belt, what sort of new features would you like to see Adobe add to the application to make it even better for feature film editors?

[JC] They’ve built out the software from a single-user application into multi-user software, but at the base level it is still inherently single-user. Sometimes your render files get unlinked when you go back and forth between multiple users. There’s probably stuff where they have to dig deep into the code to make those minor annoyances go away. Another item I’d like to see: let’s not have to use third-party software to send change lists to the mix stage.

I know Premiere Pro integrates beautifully with After Effects, but for me, After Effects is this precise tool for executing shots. I don’t want a fine tool for compositing – I want to work in broad strokes and then have someone come back and clean it up. I would love to have a tracking tool to composite two shots together for a seamless, split screen of two combined takes – features like that.

The After Effects integration and the color correction are awesome features for a single user to execute the film, but I don’t have the time to be the guy to execute the film at that high level. I just have to keep going. I want to be able to do a fast and dirty version so I know it’s not a terrible idea and then turn to someone else, “OK, make that good.” After Effects is cool, but it’s more for the VFX editor or the single-user who is trying to make a film on their own.

[OP] After all of these action films, are you ready to do a different type of film, like a period drama?

[JC] Funny you should say that. After Deadpool I worked on The Handmaid’s Tale pilot and it was exactly that. I was working on this beautifully acted, elegant project with tons of women characters and almost everything was done in camera. It was a lot of parlor room drama and power dynamics. And that was wonderful to work on after all of this VFX/action stuff. Periodically it’s nice to flex a different creative muscle.

It’s not that I only work on science fiction/VFX projects – which I love – but, in part, people start associating you with a certain genre and then that becomes an easy thing to pursue and get work for. Much like acting, if you want to be known for doing a lot of different things you have to actively pursue it. It’s easy to go with where momentum will take you. If you want to be the editor who can cut any genre, you have to make it a mission to pursue those projects that will keep your resume looking diverse. For a brief moment after Deadpool, I might have been able to pivot to a comedy career (laughs). That was a real hybrid, so it was challenging to thread the needle of the different tones of the film and making it feel like one piece.

[OP] Any final thoughts on the challenges of editing Terminator: Dark Fate?

[JC] The biggest challenge of the film was that in a way the film was an ensemble, with the Dani character, the Grace character, the Sarah character, and Arnold’s character – the T-800. All of these characters are protagonists who have their individual arcs. Adequately servicing those arcs without grinding the movie to a halt – while still touching base with each character often enough – and finding out how to dial that in was the major challenge of the movie, plus the scale of the VFX and finessing all the action scenes. I learned a lot.

This article is also available at postPerspective.

And more from Julian Clarke in this interview with Steve Hullfish.

©2019 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as he traveled from Avid to Media 100 and on to Apple, we’ve kept in touch – me as an editor and tech writer. It was always good to get together and decompress at the end of a long NAB week.

With a career of using, as well as helping to design and shepherd, a wide range of post-production products, Steve probably knows more about a broader field of editing systems than most managers at editing system manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong roll in improving any production. 

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata-driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best-selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding faster in Europe than in the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like on larger projects, yet they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government-funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do local-language productions at the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production.

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already issued to fund production and develop for local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances, creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage and six Synology servers, with 30 shows being edited right now in FCP X. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero-sum game — I win, you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

DaVinci Resolve Editor Keyboard

Blackmagic Design doubled down on advanced editing features in 2019 by introducing a new editing mode to DaVinci Resolve 16 called the cut page. They also added a dedicated editor’s keyboard – something that warms the heart of any editor who started their career in a linear edit suite. After some post-NAB feedback and adjustment, the keyboard is finally ready for prime time, running with DaVinci Resolve 16.1 (currently in public beta) or later.

Blackmagic Design’s Grant Petty comes from a broadcast engineering background and knows how fast tape editing was with the right controller. Speed is lost using a mouse-centric, drag-and-drop approach, so the DaVinci Resolve keyboard is designed to put speed back into modern edit workflows. Blackmagic Design was kind enough to loan me a keyboard for a couple of weeks of testing for this review.

Hardware design

The keyboard is very reminiscent of Sony’s BVE keyboards of the past. That’s not simply cosmetic – there are a number of plastic editing keyboards with a shuttle knob – it’s about precision engineering. The DaVinci Resolve search dial (jog/shuttle/scroll wheel) truly feels like it has the same type of ballistics and tactile feedback that a Sony dial gave you. The DaVinci Resolve keyboard is built into a sturdy metal case with keycaps that are designed to take some pounding. They intend for the keyboard to last and will offer replacement parts as needed. In short, don’t think of this as a product you’ll have to toss out in a few years.

The keyboard connects via USB-C, but it also worked on the USB 3.0 connection of a two-year-old iMac and MacBook Pro by using a USB-A to USB-C cable. The back of the keyboard includes two additional USB-A ports for a thumb drive, mouse, or a DaVinci Resolve license key (“dongle”). The keyboard is wider than a standard extended keyboard due to dedicated edit keys on the left and the search dial on the right. It has a replaceable wrist rest on the front edge and adjustable feet to elevate the keyboard angle.

The Cut Page

The Editor Keyboard is optimized for the cut and edit pages. It does work as a standard keyboard in the color, Fairlight, and Fusion pages. However, I found the dial operation in those modes to be rather finicky. Outside of DaVinci Resolve, it’s a generic QWERTY keyboard, but the special edit keys and dial will not work with other editing software.

It’s hard to talk about the keyboard without delving into the cut page. While the keyboard works effectively and correctly in the edit page, you’ll still find yourself needing the mouse, which defeats the purpose. In short, the design motivation is fast editing where your hands never leave the keyboard. That ideal plays out best in the cut page and the two have been developed in tandem.

While the DaVinci Resolve cut page shares many similarities with Apple’s Final Cut Pro X, Blackmagic Design software engineers added a number of unique functions that improve editing speed. The best of these is the source tape view. The bin can be sorted by timecode, camera, duration, or name order using dedicated keys and then viewed as if from a single source – essentially a virtual string-out. Quickly scroll through the footage using the search dial as effortlessly as using the FCPX skimming function. Large, dedicated buttons for source and timeline, in and out, and sort methods make for easy navigation and quick assembly. Smart edit and special function buttons, such as the unique “close-up” button (which automatically does a basic punch-in of high-res footage), round out the picture.

The cut page itself has a number of other unique features that are beyond the scope of this article. Nevertheless, one unique tool that is worth mentioning is the dual timeline view. The timeline pane is divided into a top mini-display of the full timeline, while the lower area always shows the zoomed-in section of the timeline at the current time indicator (cursor). You never have to zoom in and zoom out to navigate your timeline. The search dial makes it a breeze to quickly scroll through the full timeline (top) and then hit the jog key to zero in on the frame you want (bottom).

Trimming is where the dial shines. Dedicated keys quickly select in-point, out-point, roll, slip, or slide trimming. Simply hit the key and DaVinci Resolve automatically jumps to the nearest cut point. Then use the search dial for the rest. As you adjust the head or tail of a cut the rest of the timeline ripples accordingly. It’s one of the best trim models of any NLE.

Some additional thoughts

I do have a few quibbles. Trim functions in the cut and edit pages are inconsistent with each other. The cut page uses a similar model to FCPX, where audio and video from the clip are combined into a single timeline clip rather than on separate tracks. Unfortunately, Blackmagic Design has yet to implement a way to expand a/v clips and perform L-cut or J-cut trimming on the cut page. You’ll have to shift to the edit page to perform those.

This is a right-handed device, so left-handed editors will have the same dilemma that left-handed guitar players encounter. In addition, these are imprinted keycaps based on DaVinci Resolve’s default keyboard map. If you use a custom layout or one of the other keyboard maps that DaVinci Resolve offers, then the QWERTY command portion of the keyboard becomes less useful.

The search dial will not override the J-K-L or the space bar play commands. In order to jog once the sequence is playing, you must first hit the K key or the space bar to stop playback before you can properly jog through frames. Otherwise, playback continues the minute you let go of the dial.

Conclusion

This keyboard is addictive. But, is its $995 (USD) price tag justified? That’s steep, but many plastic gaming keyboards can run up to $200 and some even $500. That’s without any extra pointers, dials, or keys. I’ve also found precision metal keyboards with force-sensitive pointers as high as $3,000. Given that, Blackmagic Design may be in the right ballpark. Just like control surfaces for grading or mixing, this keyboard isn’t for everyone. If you are already a fast, keyboard-oriented editor, then the DaVinci Resolve Editor Keyboard may not make you faster. Likewise, a Final Cut Pro X editor who flies by skimming with a mouse is also going to have a hard time justifying the expense, not to mention a shift to a different application.

This keyboard is designed for DaVinci Resolve editors and not colorists. It’s for facilities that intend to deploy DaVinci Resolve as their full-time editing application. I could easily see DaVinci Resolve and this keyboard used in a fast turnaround edit environment, like broadcast news. Under that scenario, it will certainly enhance speed and workflow, especially for editors who want to make the most out of the new cut page.

Originally written for RedShark News.

Be sure to also check out Scott Simmons’ review at ProVideoCoalition.

©2019 Oliver Peters

Foolproof Relinking Strategy

Prior to file-based camera capture, film and then videotape were the dominant visual acquisition technologies. To accommodate them, post-production adopted a two-stage solution: work print editing plus negative conform for film, offline/online editing for video. During the linear editing era, high-res media on tape was transferred to a low-res tape format, like 3/4″, for creative editing (offline). The locked cut was assembled and enhanced with effects and graphics in a high-end online suite using an edit decision list and the high-res media. The inherent constraints of tape formats forced consistency in media standards and frame rates.

In the early nonlinear days, storage capacities were low and hard drives expensive, so this offline/online methodology persisted. Eventually storage could cost-effectively handle high-res media, but this didn’t eliminate these workflows. File-based camera acquisition has brought down operating cost, but the proliferation of formats and ever-increasing resolutions have meant that there is still a need for such a two-stage approach. This is now generally referred to as proxy versus full-resolution editing. The reasons vary, but typically it’s a matter of storage size, system performance, or the capabilities of the systems and operator/artist running the finishing/full-res (aka “online”) system.

All of this requires moving media around among drives, systems, locations, and facilities, thus making correct list management essential. Whether or not it works well depends on the ability to accurately relink media with each of these moves. Despite the ability of most modern NLEs to freely mix and match formats, sizes, frame rates, etc., ignoring certain criteria will break media relinking. You must be able to relink the same media between systems or between low and high-res media on the same or different systems.

Criteria for successful relinking

– Unique file names that match between low and high-res media (extensions are usually not important).

– Proper timecode that does not repeat within a single clip.

– A single, standard frame rate that matches the project’s base frame rate. Using conform or interpret functions within an NLE to alter a clip’s frame rate will mess up relinking on another system. Constant speed changes (such as slomo at 50%) are generally OK, but speed ramp effects tend to be proprietary with every NLE and typically do not translate correctly between different edit or grading applications.

– Match audio configurations between low and high-res media. If your camera source has eight channels of audio, then so must the low-res proxy media.

– Match clip duration. High-res media and proxies must be of the exact same length.

– Note that matching frame size, codec, or movie wrapper type (extension) is not important. (A quick way to sanity-check these criteria is sketched below.)
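To make those rules concrete, here is a minimal sketch of how a master/proxy pair could be sanity-checked before a handoff. It is Python shelling out to ffprobe (installed alongside ffmpeg), and the masters/proxies folder layout is purely a hypothetical example. Timecode is deliberately left out of the check, since where it lives varies by camera and wrapper.

```python
#!/usr/bin/env python3
"""Sanity-check master/proxy pairs against the relinking criteria above.

A minimal sketch, assuming ffprobe (part of FFmpeg) is on the PATH.
The masters/proxies folder names are hypothetical - adjust to your layout.
"""
import subprocess
from pathlib import Path

def probe(path, entries, streams=None):
    """Return ffprobe values (as strings) for the requested entries."""
    cmd = ["ffprobe", "-v", "error"]
    if streams:
        cmd += ["-select_streams", streams]
    cmd += ["-show_entries", entries, "-of", "csv=p=0", str(path)]
    return subprocess.check_output(cmd, text=True).split()

def check_pair(master, proxy, tolerance=0.05):
    """Compare one master/proxy pair; return a list of problems found."""
    problems = []
    if master.stem != proxy.stem:                       # names must match
        problems.append("file names differ")
    m_dur = float(probe(master, "format=duration")[0])  # durations must match
    p_dur = float(probe(proxy, "format=duration")[0])
    if abs(m_dur - p_dur) > tolerance:
        problems.append(f"duration mismatch ({m_dur:.3f}s vs {p_dur:.3f}s)")
    # audio stream count and channel layout must match
    if probe(master, "stream=channels", "a") != probe(proxy, "stream=channels", "a"):
        problems.append("audio channel configuration differs")
    # frame rates must match - no conform/interpret tricks
    if probe(master, "stream=r_frame_rate", "v") != probe(proxy, "stream=r_frame_rate", "v"):
        problems.append("frame rate differs")
    return problems

if __name__ == "__main__":
    proxies = Path("proxies")
    for master in sorted(Path("masters").glob("*.*")):
        for proxy in proxies.glob(master.stem + ".*"):
            for issue in check_pair(master, proxy):
                print(f"{master.name} <-> {proxy.name}: {issue}")
```

Run something like this before a drive goes to the finishing suite and any pair that breaks the criteria gets flagged, rather than discovered mid-conform.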

Proxy workflows

Several NLE applications – particularly Final Cut Pro X and Premiere Pro – offer built-in proxy workflows, which automatically generate proxy media and let the editor seamlessly toggle between full-res and proxy files. These are nice as long as you don’t move files around between hard drives.

In the case of Premiere Pro, you can delete proxy files once you no longer need them. From that point on you are only working with full-res media. However, the Premiere project continues to expect the proxy files to be available and wants to locate them when you launch the project. You can, of course, ignore this prompt, but it’s still hard to get rid of completely.

With FCPX, any time you move media and the Library file to another drive with a different volume name, FCPX presents a relink dialogue. It seems to relink master clips just fine, but not the proxy media that it generated IF stored outside of the Library package. The solution is to set your proxy location to be inside the Library. However, this will cause the Library file to bloat in size, making transfers of Library files between drives and editors that much more cumbersome. So for these and other reasons (like not adhering strictly to the criteria listed above) relinking can often be problematic to impossible (Avid, I’m looking at you).

Instead of using the built-in proxy workflows for projects with extended timetables or huge amounts of media, I prefer an old-school method. Simply transcode everything, work with low-res media, and then relink to the master clips for finishing. Final Cut Pro X, Premiere Pro, and Resolve all allow the relinking of master clips to different media if the criteria match.

Here are five simple steps to make that foolproof.

1. Transcode all non-professional camera originals to a high-quality mastering codec for optimized performance on your systems. I’m talking about footage from DSLRs, GoPros, drones, smartphones, etc. On Macs this will tend to be the ProRes codec family. On PCs, I would recommend DNxHD/HR. Make sure file names are unique (rename if needed) and that there is proper timecode. Adjust frame rates in the transcode if needed. For example, 29.97fps recordings for a playback base rate of 23.98fps should be transcoded to play natively at 23.98fps. This new media will become your master files, so park the camera originals on the shelf with the intent of never needing them (but for safety, DO NOT erase). (A minimal batch-transcode sketch follows these steps.)

2. Transcode all master clips (both pro formats like RED or ARRI, as well as those transcoded in step 1) to your proxy format. Typically this might be ProRes Proxy at a lower frame size, like 1280 x 720. (This is obviously an optional step. If your system has sufficient performance and you have enough available drive space, then you may be able to simply edit with your master source files.)

3. Edit with your proxy media.

4. When you are ready to finish, relink the locked cut to your master files – pro formats like RED and ARRI – and/or the high-res transcodes from step 1.

5. Color correct/grade and add any final effects for finish and delivery.
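Since steps 1 and 2 are just batch transcodes, they are easy to script. Here is a minimal sketch using Python and ffmpeg. The folder names are hypothetical, ProRes via ffmpeg’s prores_ks encoder is only one possible codec choice, and frame-rate conforms (like the 29.97-to-23.98 example in step 1) still need case-by-case handling, so they are not shown.

```python
#!/usr/bin/env python3
"""Batch transcode for steps 1 and 2 - an illustrative sketch.

Assumes ffmpeg is installed; the camera_originals/masters/proxies
folder names and the .mp4 source extension are hypothetical.
"""
import subprocess
from pathlib import Path

SOURCE = Path("camera_originals")
MASTERS = Path("masters")
PROXIES = Path("proxies")

def transcode(src, dst, video_args):
    """Run ffmpeg; -n refuses to overwrite an existing output file."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(["ffmpeg", "-n", "-i", str(src), *video_args, str(dst)],
                   check=True)

for clip in sorted(SOURCE.glob("*.mp4")):
    # Step 1: camera originals to a mastering codec (ProRes 422 HQ here),
    # with audio converted to uncompressed PCM. The file name stays the
    # same and only the wrapper/extension changes, which is fine for
    # relinking per the criteria above.
    master = MASTERS / (clip.stem + ".mov")
    transcode(clip, master,
              ["-c:v", "prores_ks", "-profile:v", "3", "-c:a", "pcm_s16le"])

    # Step 2: masters to 1280x720 ProRes Proxy. "-c:a copy" carries the
    # audio configuration over untouched and the duration is unchanged,
    # so the proxies will relink back to the masters later.
    proxy = PROXIES / (clip.stem + ".mov")
    transcode(master, proxy,
              ["-c:v", "prores_ks", "-profile:v", "0", "-s", "1280x720",
               "-c:a", "copy"])
```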

©2019 Oliver Peters

Rocketman

The last two years have been rich for film audiences interested in the lives of rock legends. Rocketman was this year’s stylized biography of Elton John. Helmed by British actor/director Dexter Fletcher and starring Taron Egerton of the Kingsman film series, Rocketman tells John’s life story through his songs. Astute film buffs also know that Fletcher was the uncredited, additional director who completed Bohemian Rhapsody through the end of principal photography and post, which will invite obvious comparisons between the two rock biopics.

Shepherding Rocketman through the cut was seasoned film editor Chris Dickens. With experience cutting comedies, dramas, and musicals, Dickens is impossible to pin down to any particular film genre. I had recently interviewed him for Mary Queen of Scots, which was a good place to pick up this conversation about editing Rocketman.

__________________________________________

[OP] Our last conversation was about Mary Queen of Scots. I presume you were in the middle of cutting Rocketman at that time. Those are two very different films, so what brought you to edit Rocketman?

[CD] I made a quick shift onto Rocketman after Mary Queen of Scots. It was a fast production with eight or nine months of filming and editing. The project had been in the cards a year before and I had met with Dexter to discuss doing the film. But, it didn’t happen, so I had forgotten about it until it got greenlit. I like musicals and have done one before – Les Miserables. This one was more ambitious creatively. Right from the beginning I liked the treatment of it. Rocketman was a classic kind of musical, but it was different in that the themes were adult and it had a strong visual sense. Also, the treatment of using Elton John’s songs to illustrate his life was interesting.

[OP] The director had a connection with both Rocketman and Bohemian Rhapsody. Both films are about rock legends, so audiences may draw an obvious comparison. What’s your feeling about the contrast between these two films?

[CD] Obviously, there are a lot of similarities. Both films are essentially rock biopics about a musical figure. Both Freddie and Elton were gay. So that theme is similar, but that’s where it ends. Bohemian Rhapsody was aimed at a wider audience, i.e. less adult material – sex and drug-taking – things like that. And secondly, it’s about music, but it’s not a musical. It’s always grounded in reality. Characters don’t get up and sing to the camera. It’s about Freddie Mercury and Queen and their music. So the treatment of it is very different. Another fundamental difference is that Elton John is still alive and Freddie Mercury is not, so that was right at the film’s core. From the start you know that, so it has a different kind of power.

[OP] Whenever a film deals with popular music – especially when the rights-owners are still alive and active – the treatment and use of that music can be a sticking point. Were Elton John or Bernie Taupin actively involved in the production of Rocketman?

[CD] Yes, they were. Bernie less so – mainly Elton. He didn’t come in the edit room that much, but his husband, David Furnish, was a bit more involved. Elton is not someone who goes out in public that much, except to perform. He’s such a massive star. But, he did watch cuts of the film and had notes – not at every stage – but, David Furnish was the conduit between us and him. Naturally, Elton sanctioned all of the music tracks that were used. But the film was not made by them, i.e. we were making the film and they were giving us notes.

[OP] How were the tracks handled? Was the music remixed from the original studio masters with Taron lip-syncing to Elton’s voice – or was it different?

[CD] The music was radically changed in some cases from the original – the arrangements, the scoring. The music was completely re-recorded and sung by Taron, the actor playing Elton. We evolved the choices made at the beginning during the edit. So alongside of the picture edit was a music edit and a music mix going on constantly. In some cases Taron was singing on-set and we used that for about a quarter of the tracks. These were going in and out of scenes that had natural dialogue. Taron would start singing and we would play the track underneath. Then at that point perhaps, he would start lip-syncing, so it was a combination. On some tracks he was completely lip-syncing to what he had recorded before. This set the tempo for those scenes, but the arrangements evolved during the edit.

Even when he was lip-syncing, it was to his own voice. The whole idea was that the singing would not be Elton, except at the end where we have a track with both singing in the credits roll. So it’s a key thing that these were new recordings. Giles Martin, son of the legendary George Martin, was the music producer who took care of everything and put up with our constant changes. We had a team of two music editors who worked alongside us and a score as well, written by Matt Margeson, which we were rolling into the film in places. It was a real team process of building the film slowly.

[OP] Please expand on the structure of a film musical and what it takes to edit one.

[CD] The editing process was challenging, because of the complex structure. It was fundamentally a musical, with fifteen or sixteen tracks – meaning songs or music numbers – that were initially planned to be shot. Some of these were choreographed song-and-dance sequences. Combined with that was a sort of kitchen sink drama about Elton’s life, his childhood, his teenage years, and then into manhood. And then becoming a superstar. The script has the songs and then long sequences of more classic storytelling. What I found – slowly, as we were putting the film together, even during the shoot – was that we needed to unify those two things within the edit.

For instance, the first song number in the movie is “The Bitch is Back.” It’s a dance sequence with Elton as a boy walking down the street while people are singing and dancing around him. Then his adult self is chasing him around. It’s a very stylized sequence, which then went into about an eight-minute sequence of storytelling about his childhood. We needed to give the film the same tone all the way through, i.e. that slightly fantastical feel of a musical. We screened it a few times for some of the core people and it became clear that we wanted to go with the fantastical elements of the film, not the more down-to-earth, realistic elements. Obviously, you could have made the choice to cut back on the music, but that seemed counter-intuitive. So we had to make some deep cuts in the sections between the musical pieces to get the story to flow and have that same kind of tone.

There was also a flashback structure. The film starts with Elton later on as an adult in rehab, after having fallen into drug and alcohol addiction. We framed the film with this device, so it was another element that we had to make work in the edit to get it to feel as an organic part of the story. We found that we didn’t have enough of these rehab sequences and had to shoot a few more of them during the edit to knit the film together in this way in order to remind you that he was telling this story – looking back on his past.

Cutting back sections between the musical numbers wasn’t our only solution to get the right tone. We had to work out how to get in and out of the musical sequences and that’s where the score comes in. I played with this quite a lot with the composer and Giles to have themes from Elton’s songs coming throughout the film. For example, “Goodbye Yellow Brick Road” had some musical themes in it that we started using as the theme that went with his rehab. The theme of the film is that Elton lost any sense of where he came from as a person, because of his stardom, and “Goodbye Yellow Brick Road” – the song – is about that. It’s actually about going back to the farm and your roots. The song isn’t actually in the film until the very end when he performs it. So we found that using this musical theme as a motif throughout the film was very powerful and helped to combine the classical storytelling scenes with the musical scenes.

[OP] Was this process of figuring out the right balance something that happened at the beginning and then became a type of template for the rest of the film? Or was it a constant adjustment process throughout the cutting of Rocketman?

[CD] It was a constant thing trying to make the film work as a whole so people wouldn’t be confused about the tone. At one point we had far too much music and had to take some out. It became very minimal in some areas. In others, it led you more. It was about getting that balance right all the way through. I’m primarily a picture editor, but on this film you couldn’t just concentrate on the picture and then leave the music to the music editors and composer, because it was absolutely a fundamental part of the film. It was about music and so how you were using music was very key within the edit. Sometimes we had to cut longer songs down. Very few are at their original length. Some are half their recorded length.

[OP] This process sounds intriguing, since the scenes use a song as the underlying building block. Elton John’s songs tend to be pop songs – or at least they received a lot of radio airplay – so did those recorded lengths tend to drive the film?

[CD] No. At first I thought we’d have to be very faithful, but as we started cutting, the producers – and particularly Elton John’s side of it – didn’t care whether we cut things down or made them longer or added bits. They weren’t precious about it. In fact, they wanted us to be creative. The producers would say, “Don’t worry about cutting that down, Giles will deal with it.” Of course he would. Although sometimes he’d come back to me and say, “Look, this doesn’t quite work musically. You need to add a bit more time to this, or another couple of bars of music.” So we had a whole back-and-forth process like that.

For instance, in the track “Rocketman,” which is the film’s centerpiece, Elton tries to commit suicide. He’s at a party, gets drunk, and jumps into the swimming pool. While he’s underneath he starts seeing visions of himself as a child under there. He starts singing and gets fished out of the pool and then put on stage in a stadium. It’s a whole sequence that’s been planned to play like that. Of course, I couldn’t fit what they’d shot into the song – there wasn’t enough time. It was all good stuff, so I added a few bars. I’d give it to music and they’d say, “Oh, you can’t add that in that way.” So I’d go back and try different ways of doing it.

At the end, when he’s put back on stage at Dodger Stadium, he’s in a baseball uniform and then fires into the air like a rocket. They shot it in a studio without a big crowd and it looked okay. As soon as we started getting the visual effects, we thought, “Wow. This looks great.” So we doubled the length of that – added on, repeated the chorus, and all of that – because we thought people were going to love this. It looked and sounded great. But, when we then tested the film, it was way too long. It had just outstayed its welcome. We then had to cut it down again, although it was still longer than they’d originally planned it.

[OP] With a regular theatrical musical, the songs are written to tell the story. Here, you are using existing songs that weren’t written with that story in mind. I presume you have to be careful that you don’t end up with just a bunch of music videos strung back-to-back.

[CD] Exactly. I don’t think we ever strayed into that. It was always about – does it make its point? These songs were written at all times in his career, but we didn’t use them in their original chronological order. “Honky Cat” was written later than when we used it. He’s just getting successful and at the end of “Honky Cat” they are buying Rolls Royces and clothes and football teams. At the end of that there was a great song-and-dance routine with them dancing on a record – Elton and John Reid, his manager and also a kind of boyfriend. That part went on for two minutes and we ended up cutting it out. Partly because the people and producers who saw it thought it wasn’t the right style. It had a kind of 1920s or 1930s style with lots of dancers. It was a big number and took a long time to edit, but we took it out. I thought it was quite a nice sequence, but most people thought the film was better without it, because it wasn’t moving the story on.

[OP] Other than adjusting scenes and length, did friends-and-family and test audience screenings change your edit significantly?

[CD] We did three big screenings in Los Angeles, San Francisco, and Kansas City, plus a number of smaller ones in England. The audiences were a mix of people who were Elton John fans, as well as those that weren’t. Essentially people liked the film right from the start, but the audiences weren’t getting some parts, like the flashback structure with the rehab scenes – particularly at the beginning. They didn’t really understand what he was singing about.

That first song [“The Bitch Is Back”] caused a lot of difficulty, because it starts the film and says this is a musical. You have to handle that the right way. I think the initial problems were partly in how I had cut the sequence originally. I tried to show too much of the crowd around him and the dancers and I thought that was the way to go with it. Actually, what turned out to be the way to go was the relationship between the two of them – Elton and Elton as the little boy – because that’s what the song was about. I then readjusted the edits, taking out a lot of the wide shots.

Also Taron had done some improvised dialogue to the little boy rather than just singing all the way through – dialogue lines like, “Stop doing that.” That was in the film a long time, but people didn’t like it and didn’t understand why he was angry with the boy. So we cut that out completely. Another issue was that right at the start, the little boy starts singing to Taron as Elton first, but audiences did not feel comfortable with it. We discussed it a lot and decided that the lead actor should be the one we hear singing first. We did a reshoot of that beginning portion of the scene. You have to let the audience into it more slowly than we had originally done. That’s a prime example of how editing decisions can lead to additional filming to really make it work.

[OP] You mentioned visual effects to complete the “Rocketman” scene. Were there a lot of effects used to make the film period-accurate or just for visual style?

[CD] Quite a lot, though not excessively – not like a comic book movie. I imagine it was similar to Bohemian Rhapsody, which had to shoot gigs and concerts in places where you couldn’t go and film now. But our visual effects weren’t as fundamental, in that I didn’t need them to cut with. The boy underwater was all created, of course. Taron in the pool was actually him underwater, because he had breathing apparatus. But the little boy couldn’t, so he was singing ‘dry for wet’ – shot in the studio and put into the scene later. There were different evolutions of that scene. In one version we took the boy out completely and just had Taron singing.

The end of the film as written was going to be a re-imagined version of Elton John’s “I’m Still Standing” music video, which was shot on the beach in Cannes in the 80s. The idea was to go there and shoot it with a lot more dancers. By the time the film was being shot, the weather had changed and we couldn’t shoot that sequence. That whole ending was shot later, partly in a studio. Because we couldn’t afford to go to Cannes and reshoot the whole thing, someone was able to get the original rushes from that music video, which had been shot on 16mm film, but edited on videotape. We had to get permission from the original director of that music video and he was very happy for us to do it. We had the 16mm film rescanned and also removed the grain. Instead of Elton, we put Taron into it. In every shot with Elton, we replaced his head with Taron’s and that became the ending sequence of the film. As a visual effect, that took quite a leap of faith, but it did work in the end. That wasn’t the original plan, but I think it’s better.

[OP] In Bohemian Rhapsody there was a conscious consideration of matching the Live Aid concert angles and actions. Was there anything like that in Rocketman?

[CD] There was no point in trying to do that on Rocketman. It was always going to be stylized and different from reality. We staged Dodger Stadium the way it looked, but we didn’t try to match it. The original concert was late afternoon and ours is more towards night, which was visually better. The visual inspiration came from the stills taken by a famous rock photographer and they lean a little more toward night. At one point we talked about having a concert at the end and we tried shooting something, but it just didn’t feel right. We were going to get compared to BoRhap anyway, so we didn’t want to even try and do something the same way.

[OP] Any final thoughts or advice on how to approach a film like Rocketman?

[CD] Every movie is different. Every single time you come to a story, you nearly have to start again. The director wants to do it a certain way and you have to adapt to that. With some of the dramas or comedies that I’ve cut, it’s a less immediate process. You don’t really know how the whole thing is coming together until you get a sense of it quite late. With this, they shot a few of the song sequences early and as soon as I saw that, I thought right away, “Oh, this is great.” You can build a quick three-minute sequence to show people and you get a feel for the whole film. You can get excited about it. On a drama or even worse, on a thriller, you’re guessing how it’s coming together and you’re using all of your skills to do that.

The director and the story are the differences and I try to adapt. Dexter wanted the film to be popular, but also distinctive. He wanted to see very quickly how it was coming together. As soon as he was done filming he wanted to go to the edit and see how it was coming along. In that scenario you try to get some things done more quickly. So I would try to get some sequences put together knowing that, and then come back to them later if they had been rushed.

Since it’s a musical you could string together the songs and get a feel, but that would be misleading. When you start off you can produce a sequence very quickly that looks good, because you’ve got the music that makes it feel almost finished and that it’s working. But that can lead you into a dead end if you’re not careful – if you are too precious about the music – the length of it and such. You still have to be hard about the storytelling element. Ultimately all of the decisions come from the story – how long the scene is, whether you start on a close-up or a wide – I always try to approach everything like that. If you keep that in your head, you’ll make the right decisions.

©2019 Oliver Peters

Why editors prefer Adobe Premiere Pro CC

Over my career I’ve cut client jobs with well over a dozen different linear and nonlinear editing systems and/or brands. I’ve been involved with Adobe Premiere/Premiere Pro as a user on and off since Premiere 5.5 (yes, kids – before Pro, CS, and CC). But I seriously jumped into regular use at the start of the Creative Cloud era, thanks to many of my clients shifting away from Final Cut Pro. Some seriously gave FCPX a go, yet could never warm up to it. Others bailed right away. In any case, the market I work in and the nature of my clients dictate a fluency in Premiere Pro. While I routinely bounce between Final Cut Pro X, Media Composer, DaVinci Resolve, and Premiere Pro, the latter is my main axe at the day job.

Before I proceed, let me stop and acknowledge those readers who are now screaming, “But Premiere always crashes!” I certainly don’t want to belittle anyone’s bad experiences with an app; however, in my experience, Premiere Pro has been just as stable as the others. All software crashes on occasion and usually at the most inopportune time. Nevertheless, I currently manage about a dozen Mac workstations between home and work, which are exposed to our regular pool of freelance editors. Over the course of the past three to four years, Premiere Pro (as well as the other Creative Cloud applications) has performed solidly for us across a wide range of commercial, corporate, and entertainment projects. Realistically, if our experiences were as bad as many others proclaim, we would certainly have shifted to some other editing software!

Stability questions aside, why do so many professional editors prefer Adobe Premiere Pro given the choices available? The Final Cut Pro X fans will point to Premiere’s similarities with Final Cut Pro 7, thus providing a comfort zone. The less benevolent FCPX fanboys like to think these editors are set in their ways and resistant to change. Yet many Premiere Pro users have gone through several software or system changes in their careers and are no strangers to a learning curve. Some have even worked with Final Cut Pro X, but find Premiere Pro to be a better fit. Whatever the reason, the following is a short list (in no order of importance) of why Premiere Pro is such a good option for many editors, given the available alternatives.

Responsive interface – I find the Premiere Pro user interface to be the most responsive of any of the NLEs. I’m not talking about media handling, but rather the time between clicking on something or commanding a function and having that action occur. For example, Final Cut Pro X – an otherwise fast application – feels slower in this type of response time. When I click to select a clip in the timeline, it takes a fraction of a second to respond. The same action is nearly instant in Premiere Pro. The reason seems to be that FCPX is constantly writing each action to the Library in a “constant save” mode. I have seen such differences across multiple Macs and hard drive types over the eight years since its introduction, with very little improvement. Not a deal-breaker, but meanwhile, Premiere Pro has continued to become more responsive in the same period.

Customizable user interface – Users first exposed to Premiere Pro’s interface may feel it’s very complex. The truth is that you can completely customize the look, style, and complexity of the interface by re-arranging the stacked, tabbed, or floating panels. Make it as minimalistic or complex as you need and save these as workspaces. It’s not just the ability to show/hide panels, but unlike other NLEs, it’s the complete control over their size and location.

Media Browser – Premiere Pro includes a built-in Media Browser panel that enables the immediate review and import of clips external to your project. It’s not just a list or thumbnail view of folders waiting to be imported. Media Browser offers the same scrubbing capabilities as clips in a bin. Furthermore, the editor can edit clips directly to the timeline from the Media Browser, which automatically imports each clip into the project in a one-step process. You could start with a completely blank project (no imported media clips) and work directly between the Media Browser and the timeline if you wanted to.

Bins – Editors rely on bins for the organization of raw media. It’s the first level of project organization. FCPX went deep down this hole with Events and Keywords. Premiere Pro uses a more traditional approach and features three primary modes – list, thumbnail, and freeform. List and thumbnail are obvious, but what needs to be reiterated is that the thumbnail view enables Adobe’s hover scrubbing. While not as fluid as FCPX’s skimming, it’s a quick way to see what a clip contains. But more importantly, the thumbnails are completely resizable. If you want to see a few very large thumbnails in the bin, simply crank up the slider. The newest is a freeform view – something Avid editors know well. This removes the grid arrangement of the bin view and allows the editor to rearrange the position of clips within the panel for that bin. This is how many editors like to work, because it gives them visual cues about how material is organized, much like a storyboard.

Versatile media and project locations – Since Premiere Pro treats all of your external storage as available media locations (without the need for a structured MediaFiles folder or Library file), this gives the editor a better handle on controlling where media should be located. Of course, this puts the responsibility for proper media management on the user, without the application playing nanny. The big plus is that projects can be organized within a siloed folder structure on your hard drive. One main folder for each job, with subfolders for associated video clips, graphics, audio, and Premiere Pro project files. Once you are done, simply archive the job folder and everything is there. Or… If a completely different organizational structure better fits your needs – no sweat. Premiere Pro makes it just as easy.
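
As a trivial example of that siloed approach, here’s a small Python sketch that stamps out a job folder template. The subfolder names are simply one convention of my own – nothing here is dictated by Premiere Pro.

# new_job.py – a minimal sketch for creating a siloed job folder structure.
# The subfolder names are assumptions; use whatever organization fits your shop.
import sys
from pathlib import Path

SUBFOLDERS = ["Footage", "Audio", "Graphics", "Exports", "Premiere Projects"]

job = Path(sys.argv[1])    # e.g. python new_job.py ClientX_Spot
for sub in SUBFOLDERS:
    (job / sub).mkdir(parents=True, exist_ok=True)
print(f"Created job template at {job.resolve()}")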

Multiple open sequences/timelines – One big feature that brings editors to Premiere Pro instead of Media Composer or Final Cut Pro X is the ability to work with multiple open sequences in the timeline panel and easily edit between them. Thanks to the UI structure of Premiere Pro, editors can also have multiple stacked timeline panels open in their workspace – the so-called “pancake timeline” mode. Open a “KEM roll” (selects sequence) in one panel and your working sequence in another. Then edit between the two timeline panels without ever needing to go back-and-forth between bins and the timeline.

Multiple open projects/collaboration – Premiere Pro’s collaboration capabilities (working with multiple editors on one job) are not as robust as with Avid Media Composer. That being said, Premiere’s structure does enable a level of versatility not possible in the Avid environment – so it’s a trade-off. With Premiere project locking, the first editor to open a project has read/write control, while additional editors who open that same project can access the files in a read-only mode. Clips and sequences can be pulled (copied/imported) from a read-only project into your own active project. The two will then be independent of each other. This is further enhanced by the fact that Premiere offers standard “save as” computer functions. If Editor #1 wants to offload part of the work to Editor #2, simply saving the project as a new file permits Editor #2 to work in their own active version of the project with complete read/write control.

Mixed frame rates and sizes – Premiere Pro projects can freely mix media and timelines with different sizes, aspect ratios and frame rates. It’s not the only NLE to do that, but some applications still start by having the project file based on a specific sequence format. Everything in the project must conform or be modified to those settings. Both solutions are viable, but Premiere’s open approach is more versatile for editors working in the hodgepodge that is today’s media landscape.

Audio mixing – While all NLEs offer decent audio mixing capabilities, Premiere Pro offers more refined mixing functions, including track automation, submaster tracks, proper loudness measurement, and AU, VST, and VST3 plug-in support. FCPX attempts to offer a trackless mixing model using audio roles, but the mixing routine breaks down pretty quickly when you get to a complex scenario, often requiring multiple levels of compound clips (nested sequences). None of that is needed in Premiere Pro. In addition, Creative Cloud subscribers also have access to Adobe Audition, a full-fledged DAW application. Premiere Pro sequences can be sent directly to Audition for more advanced mixing, plus additional Audition-specific tools, like Loudness Match and Music Remix. Adobe markets these as powered by Adobe Sensei (Adobe’s branded artificial intelligence). Loudness Match analyzes an audio clip and intelligently raises the gain of the quieter sections, whereas traditional loudness controls raise or lower the entire clip by a fixed amount. Music Remix doesn’t actually remix a track. Instead, it automatically edits a track based on a target length. Set a desired duration and Audition will determine the correct music edit points to get close to that target. You can use the default or set it to favor shorter sections, which will result in more edit points.

Interoperability – Most professional editors do not work within a single software ecosystem. You often have to work with After Effects and Photoshop files. Needless to say, Premiere Pro features excellent interoperability with the other Adobe applications, whether or not you use the Dynamic Link function. In addition, there’s the outside world. You may send out to a Pro Tools mixer for a final mix. Or a Resolve colorist for grading. Built-in list/file export formats make this easy without the requirement for third-party applications to facilitate such roundtrips.

Built-in tools that enhance editing – This could be a rather long list, but I’ll limit myself to a few functions. The first one I use a lot is the Replace command. This appears to be the best and easiest to use of all the apps. I can easily replace clips on the timeline from the source clips loaded into the viewer or directly from any clip in a bin. No drag-and-drop required. The second very useful operation is built-in masking and tracking for nearly every video filter and color correction layer. This is right at your fingertips in the Effects Control panel without requiring any extra steps or added plug-ins. Need more? Bounce out to After Effects with its more advanced tools, including the bundled Mocha tracker.

Proxy workflow – Premiere Pro includes a built-in proxy workflow, which permits low-res edit proxies to be created externally and attached, or created within the application itself. In addition, working with proxies is not an all-or-nothing feature. You can toggle between proxies and high-res master clips, but you can also work with a mixture of proxies and high-res files. In other words, not all of your clips have to be transcoded into proxies to gain the benefit of a proxy workflow. Premiere takes care of tracking the various clip sizes and making sure that the correct size is displayed. It also calculates the size shift between proxy frame sizes and larger high-res frame sizes to keep the toggle between the two seamless.
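
To make that size-shift point concrete, the math is just a ratio – here sketched in a couple of lines of Python with illustrative frame sizes (any position or scale applied while viewing a proxy has to be rescaled when you toggle back to the master).

# Illustrative frame sizes only – the point is the ratio, which Premiere
# compensates for automatically when toggling between proxy and master.
proxy_width, master_width = 1280, 3840
scale_factor = master_width / proxy_width    # 3.0
print(f"Positions and scales map by a factor of {scale_factor}")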

Relinking – Lastly, Premiere Pro can work with media on any of the available attached drives; therefore, it’s got to be able to quickly relink these files if you move locations. I tend to work in a siloed folder structure, where everything I need for a project is contained within a job folder and its subfolders. These folders are often moved to other drives (for instance, if I need to travel with a project) or archived to an external drive and later restored. It’s critical that a project easily find and relink to the correct media files. Generally, as long as files stay in the same relative folder paths – in relation to the location of the project files on the drive – then Premiere can easily find all the necessary offline media files once a project is moved from its original location. This is true whether you move to a different drive with a different volume name or whether you move the entire job folder up or down a level within the drive’s folder hierarchy. Media relinking is either automatic or, worst case, requires one dialogue box for the editor to point Premiere to the new path for the first file. From there, Premiere Pro will locate all of the other files. I find this process to be the fastest and least onerous relink operation of all the NLEs.
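
For what it’s worth, that relative-path behavior can be illustrated in a few lines of Python. This is only my sketch of the apparent logic, not Adobe’s actual implementation, and the paths are hypothetical examples.

# relink_sketch.py – a rough illustration of relative-path relinking:
# if the whole job folder moves, each clip is found at the same position
# relative to the project folder.
from pathlib import Path

def relink(old_job: str, new_job: str, missing_clip: str) -> Path:
    relative = Path(missing_clip).relative_to(old_job)    # e.g. Footage/clip001.mov
    return Path(new_job) / relative

print(relink("/Volumes/WorkSSD/JobA", "/Volumes/Archive/JobA",
             "/Volumes/WorkSSD/JobA/Footage/clip001.mov"))
# -> /Volumes/Archive/JobA/Footage/clip001.mov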

©2019 Oliver Peters

Free Solo

Every now and then a documentary comes along that simply blows away the fictional super-hero feats of action films. Free Solo is a testament to the breathtaking challenges real life can offer. This documentary chronicles Alex Honnold’s free solo climb (no ropes) of El Capitan’s 3,000-foot-high sheer rock face – the first and so far only successful free solo ascent of the mountain.

Free Solo was produced by the filmmaking team of Elizabeth Chai Vasarhelyi and Jimmy Chin, who is renowned as both an action-adventure cinematographer/photographer and mountaineer. Free Solo was produced in partnership with National Geographic Documentary Films and has garnered numerous awards, including OSCAR and BAFTA awards for best documentary, as well as an ACE award for its editor Bob Eisenhardt, ACE. Free Solo enjoyed IMAX and regular theatrical distribution and can now be seen on the National Geographic Television streaming service.

Bob Eisenhardt is a well-known documentary film editor with over 60 films to his credit. Along with his ACE award for Free Solo, Eisenhardt is currently an editing nominee in this year’s EMMY Awards for his work in cutting the documentary. I recently had a chance to speak with Bob Eisenhardt and what follows is that conversation.

_________________________________________

[OP] You have a long history in the New York documentary film scene. Please tell me a bit about your background.

[BE] I’ve done a lot of different kinds of films. The majority is cinema vérité work, but some films use a lot of archival footage and some are interview-driven. I’ve worked on numerous films with the Maysles, Barbara Kopple, and Matt Tyrnauer, plus a couple of Alex Gibney’s films – and I often did more than one film with the same people. I also teach in the documentary program at the New York Film Academy, which is interesting and challenging. It’s really critiquing their thesis projects and discussing some general editing principles. I went to architecture school. Architectural design is taught by critique, so I understand that way of teaching.

[OP] It’s interesting that you studied architecture. I know that a lot of editors come from a musical background or are amateur musicians and that influences their approach to cutting. How do you think architecture affects your editing style?

[BE] They say architecture is frozen music, so that’s how I was taught to design. I’m very much into structure – thinking about the structure of the film and solving problems. Architecture is basically problem solving and that’s what editing is, too. How do I best tell this story with these materials that I have or a little bit of other material that I can get? What is the essence of that and how do I go about it?

[OP] What led to you working on Free Solo?

[BE] This is the second film I’ve made with Chai and Jimmy. The first was Meru. So we had some experience together and it’s the second film about climbing. I did learn about the challenges of climbing the first time and was familiar with the process – what the climbing involved and how you use the ropes. 

Meru was very successful, so we immediately began discussing Free Solo. But the filming took about a year-and-a-half. That was partly due to accidents and injuries Alex had. It went into a second season and then a third season of climbing and you just have to follow along. That’s what documentaries are all about. You hitch your wagon to this person and you have to go where they take you. And so, it became a much longer project than initially thought. I began editing six months before Alex made the final climb. At that point they had been filming for about a year. So I came on in January and he made the climb in June – at which point I was well into the process of editing.

[OP] There’s a point in Free Solo, where Alex had started the ascent once and then stopped, because he wasn’t feeling good about it. Then it was unclear whether or not he would even attempt it again. Was that the six-month point when you joined the production?

[BE] Yes, that’s it. It’s very much the climbers’ philosophy that you have to feel it, or you don’t do it. That’s very true of free soloing. We wanted him to signal the action, “This is what I plan to do.” And he wouldn’t do it – ever – because that’s against the mentality of climbing. “If I feel it, I may do it. Otherwise, not.” It’s great for climbing, but not so good for film production.

[OP] Unlike any other film project, failure in this case would have meant Alex’s death. In that event you would have had a completely different film. That was touched on in the film, but what was the behind-the-scenes thinking about the possibility of such a catastrophe? Any Plan B?

[BE] In these vérité documentaries you never know what’s going to happen, but this is an extreme example. He was either going to do it and succeed, decide he wasn’t going to do it, or die trying, and that’s quite a range. So we didn’t know what film we were making when I started editing. We were going to go with the idea of him succeeding and then we’d reconsider if something else happened. That was our mentality, although in the back of our minds we knew this could be quite different.

When they started, it wasn’t with the intention of making this film. Jimmy had known Alex for 10 years. They were old friends and had done a lot of filming together. He thought Alex would be a great subject for a documentary. That’s what they proposed to Nat Geo – just a portrait of Alex – and Alex said, “If you are going to do that, then I’ve got to do something worthwhile. I’m going to try to free solo El Cap.” He told that to Chai while Jimmy wasn’t there. Chai is not a climber and she thought, “Great, that sounds like it will be a good film.” Jimmy completely freaked out when he found out, because he knew what it meant.

It’s an outrageous concept even to climbers. They actually backed off and had to reconsider whether this was something they wanted to get involved in. Do you really want to see your friend jeopardize his life for this? Would the filming add additional pressure on Alex? They had to deal with this even before they started shooting, which is why that was part of the film. I felt it was a very important idea to get across. Alex is taciturn, so you needed ways to understand him and what he was doing. The crew as a character really helped us do that. They were people Alex could interact with and the audience could identify with.

The other element that I felt was very important was Sanni [McCandless, Alex Honnold’s girlfriend], who suddenly came onto the scene after the filming began. This felt like a very important way to get to know Alex. It also became another challenge for Alex – whether he would be able not only to climb this mountain, but whether he would be able to have a relationship with this woman. And aren’t those two diametrically opposed? Being able to open yourself up emotionally to someone, but also control your emotions enough to be able to hang by your fingertips 2,000 feet in the air on the side of a cliff.

[OP] Sanni definitely added a lot of humanity to him. Before the climb they discuss the possibility of his falling to his death and Alex’s point of view is that’s OK. “If I die, I die.” I’m not sure he really believed that deep inside. Or did he?

[BE] Alex is very purposeful and lives every day with intention. That’s what’s so intriguing. He knows any minute on the wall could be his last and he’s comfortable with that. He felt like he was going to succeed. He didn’t think he was going to fall. And if he didn’t feel that way he wasn’t going to do it. Seeing the whole thing through Sanni’s eyes allowed us as the audience to get closer to and identify with Alex. We call that moment the ‘Take me into consideration’ scene, which I felt was vitally important.

[OP] Did you have any audience screenings of the rough cuts? If so, how did that inform your editing choices?

[BE] We did do some screenings and it’s a tricky thing. Nat Geo was a great partner throughout. Most companies wouldn’t be able to deal with this going on for a year-and-a-half. It’s in Nat Geo’s DNA to fund exploration and make exploratory films. They were completely supportive, but they did decide they wanted to get into Sundance and we were a month from the deadline. We brought in three other editors (Keiko Deguchi, Jay Freund, and Brad Fuller) to jump in and try to make it. Even though we got an extension and we did a great job, we didn’t get in. The others left and I had another six months to work on the film and make it better. Because of all of this, the screenings were probably too early. The audience had trouble understanding Alex, understanding what he’s trying to do – so the first couple screenings were difficult.

We knew when we saw the initial climbing footage that the climb itself was going to be amazing. By the time we showed it to an audience, we were completely immune to any tension from the climb – I mean, we’d seen it 200 times. It was no longer as scary to us as it had been the first time we saw it. In editing you have to remember the initial reaction you had to the footage so that you can bring it to bear later on. It was a real struggle to make the rest of the story as strong as possible to keep you engaged, until we got to the climb. So we were pleasantly surprised to see that people were so involved and very tense during the climb. We had underestimated that.

We also figured that everyone would already know how this thing ends. It was well-publicized that he successfully climbed El Cap. The film had to be strong enough that people could forget they knew what happened. Although I’ve had people tell me they could not have watched the climb if they hadn’t known the outcome.

[OP] Did you end up emphasizing some aspects over others as a result of the screenings?

[BE] The main question to the audience is, “Do you understand what we are trying to say?” And then, “What do you think of him or her as a character?” That’s interesting information that you get from an audience. We really had to clarify what his goal was. He never says at the beginning, “I’m going to do this thing.” In fact, I couldn’t get him to say it after he did it. So it was difficult to set up his intention. And then it was also difficult to make clear what the steps were. Obviously we couldn’t cover the whole 3,000 feet of El Capitan, so they had to concentrate on certain areas.

We decided to cover five or six of the most critical pitches – sections of the climb – to concentrate on those and really cover them properly during the filming. These were challenging to explain and it took a lot of effort to make that clear. People ask, “How did you manage to cut the final climb – it was amazing.” Well, it worked because of the second act that explains what he is trying to do. We didn’t have to say anything in the third act. You just watch because you understand. 

When we started, people didn’t understand what free soloing is. At first we were calling the film Solo. The nomenclature of climbing is confusing. Soloing is actually climbing with a rope, but only for protection. Then we’d have to explain what free soloing was as opposed to soloing. However, Han Solo came along and stole our title, so it was much easier to call it Free Solo. Explaining the mentality of climbing, the history of climbing, the history of El Capitan, and then what exactly the steps were for him to accomplish what he was trying to do – all that took a long time to get right and a lot of it came out of good feedback from the audience.

Then, “Do you understand the character?” At one point we didn’t have enough of Sanni and then we had too much of Sanni. It became this love story and you forgot that he was going to climb. So the balancing was tricky.

[OP] Since you were editing before the final outcome and production was still in progress, did you have an opportunity to request more footage or to ask that something in particular you were missing in the edit be filmed?

[BE] That was the big advantage to starting the edit before the filming was done. I often end up coming into projects that are about 80-90% shot on average. So they have the ability to get pick-ups if people are alive or if the event can still be filmed in some way. This one was more ‘in progress.’ For instance, he practiced a specific move a lot for the most difficult pitch and I kept asking for more of that. We wanted to show how many times he practiced it in order to get the feel of it.

[OP] Let’s switch gears and talk about the technical side. Which edit system did you use to cut Free Solo?

[BE] We were using Avid Media Composer 8.8.5 with Nexis shared storage. Avid is my first choice for editing. I’ve done about four films on the old Final Cut – Meru being one of them – but, I much prefer Avid. I’ve often inherited projects that were started on something else, so you are stuck. On this one we knew going in that we would do it on Avid. Their ScriptSync feature is terrific. Any long discussions or sit-down interviews were transcribed. We could then word-search them, which was invaluable. My associate editor, Simona Ferrari, set up everything and was also there for the output.

[OP] Did you handle the finishing – color correction and sound post – in-house or go outside to another facility?

[BE] We up-rezzed in the office on [Blackmagic Design DaVinci] Resolve and then took that to Company 3 for finishing and color correction. Deborah Wallach did a great job sound editing and we mixed with Tommy Fleischman [Hugo, The Wolf of Wall Street, BlacKkKlansman]. They shot this on about every camera, aspect ratio, and frame rate imaginable. But if they’re hanging 2,000 feet in the air and didn’t happen to hit the right button for the frame rate – you really can’t complain too much! So there was an incredibly wide range and Simona managed to handle all of that in the finishing. There wasn’t a lot of archival footage, but there were photos for the backstory of the family.

The other big graphic element was the mountain itself. We needed to be able to trace his route up the mountain and that took forever. It wasn’t just to show his climb, but also to connect the pitches that we had concentrated on, since there wasn’t much coverage between them. Making this graphic became very complicated. We tried one house and they couldn’t do it. Finally, Big Star, who was doing the other graphics – photomontages and titles – took this on. It was the very last thing done and was dropped in during the color correction session.

For the longest time in the screenings, the audience was watching a drawing that I had shot off of the cutting room wall and traced in red. It was pretty lame. For the screenings, it was a shot of the mountain and then I would dissolve through to get the line moving. After a while we had some decent in and out shots, but nothing in-between, except this temporary graphic that I created. 

[OP] I caught Free Solo on the plane to Las Vegas for NAB and it had me on the edge of my seat. I know the film was also released in IMAX, so I can only imagine what that experience was like.

[BE] The film wasn’t made for IMAX – that opportunity came up later. It’s a different film in IMAX. Although there is incredible high-angle photography, it’s an intimate story. So it worked well on a moderately big screen. But in IMAX it becomes a spectacle, because you can really see all those details in the high-angle shots. I have cut an IMAX film before and you do pace them differently, because of the ability to look around. However, there wasn’t a different version of Free Solo made for IMAX – we didn’t have the freedom to do that. Of course, the whole film is largely handheld, so we did stabilize a few shots. IMAX merely used their algorithm to bump it up to their format. I was shocked – it was beautiful.

[OP] Let’s talk a bit about your process as an editor. For instance, music. Different editors approach music differently. Do you cut with temp music or wait until the very end to introduce the score?

[BE] Marco Beltrami [Fantastic Four, Logan, Velvet Buzzsaw] was our composer, but I use temp music from very early on. I assemble a huge library of scratch music – from other films or from the potential composers’ past films. I use that until we get the right feel for the music and that’s what we show to the composer. It gives us something to talk about. It’s much easier to say, “We like what the music is doing here, but it’s the wrong instrumentation.” Or, “This is the right instrument, but the wrong tempo.” It’s a baseline.

[OP] How do you tackle the footage at the very beginning? Do you create selects or Kem rolls or some other approach?

[BE] I create a road map to know where I’m going. I go through all the dailies and pull the stuff that I think might be useful. Everything from the good-looking shots to a taste of something that I may never use, but want to remember. Then I screen selects reels. I try to do that with the director. Sometimes we can schedule that and sometimes not. On Free Solo there were over 700 hours of footage, so it’s hard to get your arms around that. By the time you get through looking at the 700th hour you’ve forgotten the first one. That’s why the selecting process is so important to me. The selects amount to maybe a third of the dailies footage. After screening the selects, I can start to see the story and how to tell it.

I make index cards for every scene and storyboard the whole thing. By that I mean arrange the cards on a wall. They are color-coded for places, years, or characters. It allows me to stand back and see the flow of the film, to think about the structure, and the points that I have to hit. I basically cut to that. Of course, if it doesn’t work, I re-arrange the index cards (laugh).

A few years ago, I did a film about the Dixie Chicks [Shut Up & Sing] at the time they got into trouble for comments they had made about President Bush. We inherited half of the footage and shot half. The Dixie Chicks went on to produce a concert and an album based upon their feelings about the whole experience. It was kind of present and past, so there were basically two different colors to the cards. It was not cut in chronological order, so you could see very quickly whether you were in the past or the present just by looking at the wall. There were four editors working on Shut Up & Sing and we could look at the wall, discuss, and decide if the story was working or not. If we moved this block of cards, what would be the consequences of telling the story in a different order?

[OP] Were Jimmy or Chai very hands-on as directors during the edit – in the room with you every day at the end?

[BE] Chai and Jimmy are co-directors and so Jimmy tended to be more in the field and Chai more in the edit room. Since we had worked together before, we had built a common language and a trust. I would propose ideas to Chai and try them and she would take a look. My feeling is that the director is very close to it and not able to see the dailies with fresh eyes. I have the fresh perspective. I like to take advantage of that and let them step back a little. By the end, I’m the one that’s too close to it and they have a little distance if they pace themselves properly.

[OP] To wrap it up, what advice would you have for young editors tackling a documentary project like this?

[BE] Well, don’t climb El Cap – you probably won’t make it (laugh)! I always preach this to my students: I encourage them to make an outline and work towards it. You can make index cards like I do, you can make a Word document, a spreadsheet; but try to figure out what your intentions are and how you are going to use the material. Otherwise, you are just going to get lost. You may be cutting things that are lovely, but then don’t fit into the overall structure. That’s my big encouragement.

Sometimes with vérité projects there’s a written synopsis, but for Free Solo there was nothing on paper at the beginning. They went in with one idea and came out with a different film. You have to figure out what the story is and that’s all part of the editing process. This goes back to the Maysles’ approach. Go out and capture what happened and then figure out the story. The meaning is found in the cutting room.

Images courtesy of National Geographic and Bob Eisenhardt.

©2019 Oliver Peters