Paul McCartney’s “Who Cares”

Paul McCartney has never been the type of rock star to rest on his past. Many McCartney-related projects have embraced new technologies, such as 360VR. The music video for Who Cares – McCartney’s musical answer to bullying – was shot on both 16mm and 65mm film. And it was edited using Final Cut Pro X.

Who Cares features Paul McCartney and actress Emma Stone in a stylized, surreal song and dance number filmed in 65mm, which is bookended by a reality-based 16mm segment. The video was directed by Brantley Gutierrez, choreographed by Ryan Heffington, and produced through LA production company Subtractive.

Gutierrez has collaborated for over 14 years with Santa Monica-based editor Ryan P. Adams on a range of projects, including commercials, concerts, and music videos. Adams also did a stint with Nitro Circus, cutting action sports documentaries for NBC and NBCSN. In that time he’s used a variety of NLEs, including Premiere Pro, Media Composer, and Final Cut Pro 7. But it was the demands of concert videos that really brought about his shift to Final Cut Pro X.

___________________________________

[OP] Please tell me a bit about what style you were aiming for in Who Cares. Why the choice to shoot in both 16mm and 65mm film?

[Brantley Gutierrez] In this video, I was going for an homage to vaudevillian theater acts and old Beatles-style psychedelia. My background is working with a lot of photography. I was working in film labs when I was pretty young. So my DP and friend, Linus Sandgren, suggested film and had the idea, “What if we shot 65mm?” I was open to it, but it came down to asking the folks at Kodak. They’re the ones that made that happen for us, because they saw it as an opportunity to try out their new Ektachrome 16mm motion film stock.

They facilitated us getting the 65mm at a very reasonable price and getting the unreleased Ektachrome 16mm film. The reason for the two stocks was to separate the reality of the opening scene – kind of grainy and hand-held – from the song portion. It was almost dreamlike in its own way. This was in contrast to the 65mm psychedelic part, which was all on crane, starkly lit, and with very controlled choreography. The Ektachrome had this hazy effect with its grain. We wanted something that would jump as you went between these worlds, and 16 to 65 was about as big a jump as we could get in film formats.
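To put rough numbers on that jump – these are approximate camera-aperture dimensions for Super 16 and 5-perf 65mm, not figures from this production – the difference in negative area between the two formats is enormous.

# Approximate camera apertures in millimeters (Super 16 vs. 5-perf 65mm)
def area_mm2(width, height):
    return width * height

super16 = area_mm2(12.52, 7.41)        # roughly 93 mm^2
five_perf_65 = area_mm2(52.48, 23.01)  # roughly 1208 mm^2
print(f"65mm has ~{five_perf_65 / super16:.0f}x the negative area of 16mm")  # ~13x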

[OP] What challenges did you face with this combination of film stocks? Was it just a digital transfer and then you were only dealing with video files? Or was the process different than that?

[BG] The film went to London where they could process and scan the 65mm film. It actually went in with Star Wars. Lucasfilm had all of the services tied up, but they were kind enough to put our film in with The Rise of Skywalker and help us get it processed and scanned. But we had to wait a couple of extra days, so it was a bit of a nervous time. I have full faith in Linus, so I knew we had it. However, it’s a little strange these days to wait eight or nine days to see what you had shot.

We were a guinea pig for Kodak for the 16mm stock. When we got it back, it looked crazy! We were like, “Oh crap.” It looked like it had been cross-processed – super grainy and super contrasty. It did have a cool look, but more like a Tony Scott style of craziness. When we showed it to Kodak they agreed that it didn’t look right. Then we had Tom Poole, our colorist at Company 3 in New York, rescan the 16mm and it looked beautiful.

[Ryan P. Adams] Ektachrome is a positive stock, which hasn’t been used in a while. So the person in London scanning it just wasn’t familiar with it.

[BG] They just didn’t have the right color profile built for that stock yet, since it hadn’t been released yet. Of course, someone with a more experienced eye would know that wasn’t correct.

[OP] How did this delay impact your editing?

[BG] It was originally scanned and we started cutting with the incorrect version. In the meantime, the film was being rescanned by Poole. He didn’t really have to do any additional color correction to it once he had rescanned it. This was probably our quickest color correction session for any music video – probably 15 minutes total.

[RPA] One of the amazing things I learned is that all you have to do is give it some minor contrast and then it’s done. What it does give you is perfect skin tones. Once we got the proper scan and sat in the color session, that’s what really jumped out.

[OP] So then, what was the workflow like with Final Cut Pro X?

[RPA] The scans came in as DPX files. Here at Subtractive, we took those into DaVinci Resolve and spit out ProRes 422 HQ QuickTime files to edit with. To make things easy for Company 3, we did the final conform in-house using Resolve. An FCPXML file was imported into Resolve, we linked back to the DPX files, and then sent a Resolve project file to Company 3 for the final grade. This way we could make sure everything was working. There were a few effects shots that came in and we set all of that up so Tom could just jump on it and grade. Since he’s in New York, the LA and New York locations of Company 3 worked through a remote, supervised grading session.
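To make the proxy step concrete, here is a minimal sketch of a DPX-to-ProRes batch transcode using ffmpeg from Python. This is an illustration, not Subtractive’s actual pipeline (they used Resolve for the transcode), and the paths, frame numbering, and frame rate are all assumptions.

import subprocess
from pathlib import Path

SCANS = Path("/media/scans")      # assumption: one folder of DPX frames per shot
PROXIES = Path("/media/proxies")  # assumption: output location for edit media
PROXIES.mkdir(exist_ok=True)

for shot_dir in sorted(p for p in SCANS.iterdir() if p.is_dir()):
    subprocess.run([
        "ffmpeg",
        "-framerate", "24",                      # assumed project frame rate
        "-i", str(shot_dir / "frame_%07d.dpx"),  # assumed frame numbering
        "-c:v", "prores_ks",
        "-profile:v", "3",                       # prores_ks profile 3 = ProRes 422 HQ
        str(PROXIES / f"{shot_dir.name}.mov"),
    ], check=True)

The conform then works just as Adams describes: the edit references the ProRes files, and Resolve relinks those events back to the original DPX frames for the grade.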

[OP] The video features a number of effects, especially speed effects. Were those shot in-camera or added in post?

[RPA] The speed effects were done in post. The surreal world was very well choreographed, which just plays out. We had a lot of fun with the opening sequence in figuring out the timing. Especially in the transitional moment where Emma is staring into the hypnotic wheel. We were able to mock up a lot of the effects that we wanted to do in Final Cut. We would freeze-frame these little characters called “the idiots” that would jump into Emma’s head. I would do a loose rotoscope in Final Cut and then get the motion down to figure out the timing. Our effects people then remade that in After Effects.

[OP] How involved was Paul McCartney in the edit and in review-and-approval?

[BG] I’ve known Paul for about 13 years and we have a good relationship. I feel lucky that he’s very trusting of me and goes along with ideas like this. The record label didn’t even know this video was happening until the day of production. It was clandestine in a lot of ways, but you can get away with that when it’s Paul McCartney. If I had tried that with some other artist, I would have been in trouble. But Paul just said, “We’re going to do it ourselves.”

We showed him the cut once we had picture lock, before final color. He called on the phone, “Great. I don’t have any notes. It’s cool. I love it and will sign off.” That was literally it for Paul. It’s one of the few music videos where there was no going back and forth between the management, the artist, and the record label. Once Paul signed off on it, the record label was fine with it.

[OP] How did you manage to get Emma Stone to be a part of this video?

[BG] Emma is a really close friend of mine. Independently of each other, we both know Paul. Their paths have crossed over the years. We’ve all hung out together and talked about wanting to do something. When Paul’s album came out, I hit them both up with the idea for the music video and they both said yes.

The hardest part of the whole process was getting schedules to align. We finally had an open date in October with only a week and a half to get ready. That’s not a lot of time when you have to build sets and arrange the choreography. It was a bit of a mad dash. The total time was about six weeks from prep through to color.

Because of the nature of this music video, we only filmed two takes for Paul’s performance to the song. I had timed out each set-up so that we knew how long each scene would be. The car sequence was going to be “x” amount of seconds, the camera sequence would be “x” amount, and so on. As a result, we were able to tackle the edit pretty quickly. Since we were shooting 65mm film, we only had two or three takes max of everything. We didn’t have to spend a lot of time looking through hours of footage – just pick the best take for each. It was very old school in that way, which was fun.

[OP] Ryan, what’s your approach to organizing a project like this in Final Cut Pro X?

[RPA] I labelled every set-up and then just picked the best take. The first pass was just a rough to see what was the best version of this video. Then there were a few moments that we could just put in later, like when the group of idiots sings, “Who cares.”

My usual approach is to lay in the sections of synced song segments to the timeline first. We’ll go through that first to find the best performance moments and cut those into the video, which is our baseline. Then I’ll build on top of that. I like to organize that in the timeline rather than the browser so that I can watch it play against the music. But I will keyword each individual set-up or scene.

I also work that way when I cut commercials. I can manage this for a :30 commercial. When it’s a much bigger project, that’s where the organization needs to be a little more detailed. I will always break things down to the individual set-ups so I can reference them quickly. If we are doing something like a concert film, that organization may be broken up by the multiple days of the event. A great feature of Final Cut Pro X is the skim tool and that you can look at clips like a filmstrip. It’s very easy to keyword the angles for a scene and quickly go through it.

[OP] Brantley, I’m sure you’ve sat over the editor’s shoulder in many sessions. From a director’s point of view, what do you think about working with Final Cut Pro X?

[BG] This particular project was pretty well laid out in my head and it didn’t have a lot of footage, so it was already streamlined. On more complex projects, like a multi-cam edit, FCPX is great for me, because I get to look at it like a moving contact sheet from photography. I get to see my choices and I really respond to that. That feels very intuitive and it blows me away that every system isn’t like that.

[OP] Ryan, what attracted you to Final Cut Pro X and led you to use it whenever possible?

[RPA] I started with Final Cut Pro X when they added multi-cam. At that time we were doing more concert productions. We had a lot of photographers who would fill in on camera and Canon 5Ds were prevalent. I like to call them “trigger-happy filmers,” because they wouldn’t let it roll all the way through.

FCPX came up with the solution to sync cameras with the audio on the back end. So I could label each photographer’s clips. Each clip might only be a few seconds long. I could then build the concert by letting FCPX sync the clips to audio even without proper timecode. That’s when I jumped on, because FCPX solved a problem that was very painful in Final Cut Pro 7 and a lot of other editing systems. That was an interesting moment in time when photographic cameras could shoot video and we hired a lot of those shooters. Final Cut Pro X solved the problem in a very cool way and it helped me tremendously.

We did this Tom Petty music video, which really illustrates why Final Cut Pro X is a go-to tool. After Tom had passed, we built a music video called Gainesville for his boxed set largely out of archival footage. Brantley shot a lot of video around Tom’s hometown of Gainesville [Florida], but they also brought us a box with a massive amount of footage that we put into the system – a mix of old films and tapes, some of Tom’s personal footage, all this archival stuff. It gave the video a wonderful feeling.

[BG] It’s very nostalgic from the point of view of Tom and the band. A lot of it was stuff they had shot in their 20s and had a real home movie feel. I shot Super 8mm footage around Tom’s original home and places where they grew up to match that tone. I was trying to capture the love his hometown has for him.

[RPA] That’s a situation where FCPX blows the competition out of the water. It’s easy to use the strip view to hunt for those emotional moments. So the skimmer and the strip view were ways for us to cull this hodge-podge of footage and to hit beats and moments in the music for a song that was unreleased at the time. We had one week to turn that around. It’s a complicated situation to look through a box of footage on a very tight deadline, put a story to it, and make it feel correct for the song. That’s where all of those tools in Final Cut shine. When I have to build a montage, that’s when I love Final Cut Pro X the most.

[OP] You’ve worked with various NLEs. You know DaVinci Resolve, and Blackmagic is working hard to make it the best all-in-one tool on the market. When you look at this type of application, what features would you love to see added to Final Cut Pro X?

[RPA] If I had a wishlist, I would love to see if FCPX could be scaled up for multiple seats and multiple editors. I wish some focus was being put on that. I still go to Resolve for color. I look at compositing as just mocking something up so we can figure out timing and what it is generally going to look like. However, I don’t see a situation currently where I do everything in the editor. To me, DaVinci Resolve is kind of like a Smoke system and I tip my hat to them.

I find that Final Cut still edits faster than a lot of other systems, but speed is not the most important thing. If you can do things quickly, then you can try more things out. That helps creatively. But I think that typically things take about as long from one system to the next. If an edit takes me a week in Adobe, it still takes me a week in FCPX. But if I can try more things out creatively, then that’s beneficial to any project.

Originally written for FCP.co.

©2020 Oliver Peters

Jezebel

If you’ve spent any time in Final Cut Pro X discussion forums, then you’ve probably run across posts by Tangier Clarke, a film editor based in Los Angeles. Clarke was an early convert to FCPX and recently handled the post-production finishing for the film, Jezebel. I was intrigued by the fact that Jezebel was picked up by Netflix, a streaming platform that has been driving many modern technical standards. This was a good springboard to chat with Clarke and see how FCPX fared as the editing tool of choice.

_______________________________________________________________

[OP] Please tell me a little about your background in becoming a film editor.

[TC] I’m a very technical person and have always had a love for computers. I went to college for computer science, but along the way I discovered Avid Videoshop and started to explore editing more, since it married my technical side with creative storytelling. So, at UC Berkeley I switched from computer science to film.

My first job was at motion graphics company Montgomery/Cobb, which was in Los Angeles. They later became Montgomery & Co. Creative. I was a production assistant for main titles and branding packages for The Weather Channel, Fox, NBC, CBS, and a whole host of cable shows. Then, I worked for 12 years with Loyola Productions (no affiliation with Loyola Marymount University).

I moved on to a company called Black & Sexy TV, which was started by Dennis Dortch as a company to have more control over black images in media. He created a movie called A Good Day to be Black and Sexy in 2008, which was picked up and distributed by Magnolia Pictures and became a cult hit. It ended up in Blockbuster Video stores, Target, and Netflix. The success of that film was leveraged to launch Black & Sexy TV and its online streaming platform.

[OP] You’ve worked on several different editing applications, but tell me a bit about your transition to Final Cut Pro X.

[TC] I started my career on Avid, which was also at the time when Final Cut Pro “legacy” was taking off. During 2011 at Loyola Productions, I had an opportunity to create a commercial for a contest put out by American Airlines. We thought this was an opportunity for us as a company to try Final Cut Pro X.

I knew that it was for us once we installed it. Of course, there were a lot of things missing coming from Final Cut Pro 7, and a couple of bugs here and there. The one thing that was astonishing for me, despite the initial learning curve, was that within one week of use my productivity compared to Final Cut Pro 7 went through the roof. There was no correlation between anything I had used before and what I was experiencing with Final Cut X in that first week. I also noticed that our interns – whose only experience was iMovie – just picked up Final Cut Pro X with no problems whatsoever.

Final Cut Pro X was very liberating, which I expressed to my boss, Eddie Siebert, the president and founder of Loyola Productions. We decided to keep using it to the extent that we could on certain projects and worked with Final Cut Pro 7 and Final Cut Pro X side-by-side until we eventually just switched over.

[OP] You recently were the post supervisor and finishing editor for the film Jezebel, which was picked up by Netflix. What is this film about?

[TC] Jezebel is a semi-autobiographical film written and directed by Numa Perrier, who is a co-founder of Black & Sexy TV. The plot follows a 19-year-old girl who, after the death of her mother, begins to do sex work as an online chat room cam girl to financially support herself. Numa stars in the film, playing her older sister, and an actress named Tiffany Tenille plays Numa. This is also Numa Perrier’s feature film directorial debut. It’s a side of her that people didn’t know – how she survived as a young adult in Las Vegas. So, she is really putting herself out there.

The film made its debut at South by Southwest last year, where it was selected as a “Best of SXSW” film by The Hollywood Reporter. After that it went to other domestic and international festivals. At some point it was seen by Ava DuVernay, who decided to pick up Numa’s film through her company, Array. That’s how it got to Netflix.

[OP] Please walk me through the editorial workflow for Jezebel. How did FCPX play a unique role in the post?

[TC] I was working on a documentary at the time, so I couldn’t fully edit Jezebel, but I was definitely instrumental in the process. A former coworker of mine, Brittany Lyles, was given the task of actually editing the project in Final Cut Pro X, which I had introduced her to and trained her on a couple of years earlier. The crew shot with a Canon C300 camera and we used the Final Cut proxy workflow. Brittany wouldn’t have been able to work on it if we weren’t using proxies, because of her hardware. I was using a late 2013 Mac Pro, as well as a 2016 MacBook Pro.

At the front end, I assisted the production team with storage and media management. Frances Ampah (a co-producer on the film) and I worked to sync all the footage for Brittany, who was working with a copy of the footage on a dedicated drive. We provided Brittany with XMLs during the syncing process as she was getting familiar with the footage.

While Brittany was working on the cut, Numa and I were trying to figure out how best to come up with a look and a style for the text during the chat room scenes in the movie. It hadn’t been determined yet if I was going to get the entire film and put the graphics in myself or if I was going to hand it off to Brittany for her to do it. I pitched Numa on the idea of creating a Motion template so that I could have more control over the look, feel, and animation of the graphics. That way either Brittany or I could do it and it would look the same.

Brittany and Numa refined the edit to a point where it made more sense for me to put in a lot of the graphics and do any updating per Numa’s notes, because some of the text had changed as well. And we wanted to really situate the motion of the chat so that it was characteristic of what it looked like back then – in the 90s. We needed specific colors for each user who was logged into the screen. I had some odd color issues with Final Cut and ended up actually just going into the FCPXML file to modify color values. I’m used to going into files like that and I’m not afraid of it. I also used the FCP X feature in the text inspector to save format and appearance attributes. This was tremendously helpful to quickly assign the color and formatting for the different users in the chat room – saving a lot of time.
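As an illustration of that kind of FCPXML surgery – not Clarke’s actual script or values – here is a minimal sketch that remaps the fontColor attribute on <text-style> elements, which FCPXML stores as space-separated RGBA floats. The file name and colors are hypothetical stand-ins for the per-user chat colors.

import xml.etree.ElementTree as ET

# Hypothetical remap: old RGBA value -> new per-user chat color
REMAP = {
    "1 1 1 1": "1 0.2 0.6 1",        # e.g., white -> one user's pink
    "0.5 0.5 0.5 1": "0.2 0.6 1 1",  # e.g., gray -> another user's blue
}

tree = ET.parse("chat_scene.fcpxml")  # hypothetical FCP X export
for style in tree.iter("text-style"):
    old = style.get("fontColor")
    if old in REMAP:
        style.set("fontColor", REMAP[old])

tree.write("chat_scene_recolored.fcpxml", encoding="UTF-8", xml_declaration=True)

Note that a round trip like this drops the FCPXML DOCTYPE declaration, so in practice you would re-add it, or simply edit the values in place with a text editor as Clarke did.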

Our secondary editor, Bobby Field, worked closely with Numa to do the majority of the color grading on the film. He was more familiar with Premiere Pro than FCP X, but really enjoyed the color tools in Final Cut Pro X. Through experimentation, Bobby learned how to use adjustment layers to apply color correction. I was fascinated by this and it was a learning experience for me as well. I’m used to working directly with the clip itself, and in my many years of using FCP X, this wasn’t a method I had used or seen anyone else use firsthand.

[OP] What about sound post and music?

[TC] I knew there was only so much that I technically had the skill set to do, and I wouldn’t dare pretend to know how to do certain things. I called on the help of Jim Schaefer – a skilled and trusted friend I worked with at Loyola Productions. I knew he wanted an opportunity to work on a big project, particularly a feature. The film needed a tremendous amount of sound work, so he took it on along with Travis Prater, a coworker of his at Source Sound in Woodland Hills. Together they really transformed the film.

Jim and Travis worked in Pro Tools, so I used X2Pro to get files to them. Jim gave me a list of how he wanted the film broken down. Because of the length of Jezebel, he preferred that the film was broken up into reels. In addition to reels, I also gave him the entire storyline with all of the roles. Everything was broken down very nicely using AAFs and he didn’t really have any problems. In his words, “It’s awesome that all the tracks are sorted by character and microphone – that’ll cut down significantly on the sorting/organizing pass for me.” The only hiccup was that metadata was missing in the AAF up to a certain point in Pro Tools. Yet that metadata did exist in the original WAV files. Some clip names were inconsistent as well, but that may have happened during production.

[OP] Jezebel is streaming on Netflix, which has a reputation for having tough technical specs. Were there any special things you had to do to make it ready for the platform?

[TC] We supplied Array with a DCI 2K (full frame) QuickTime master in ProRes 422 HQ per their delivery schedule, along with other elements such as stereo and 5.1 mixes from Jim, plus Blu-ray, DVD, and DCP masters. I expected to do special things to make it ready for Netflix. Numa and I discussed this, but to my knowledge, the QuickTime that I provided to Array is what Netflix received. There were no special conversions made just for Netflix on the part of Array.

[OP] Now that you have this Final Cut Pro X experience under your belt, what would you change if you could? Any special challenges or shortcomings?

[TC] I had to do some composite shots for the film, so the only shortcoming for me was Final Cut’s compositing tool set. I’d love to have better tools built right into FCP X, like in DaVinci Resolve. I love Apple Motion and it’s fine for what it is, but it could go a little further for me. I’d love to see an update with improved compositing and better tracking. Better reporting for missing files, plugins, and other elements would also be tremendously helpful in troubleshooting vague alerts.

In spite of this, there was no doubt in any part of the process whether or not Final Cut was fully capable of being at the center of everything that needed to be done – whether it was leveraging Motion for template graphics between Brittany and me, using a third-party tool to make sure that the post sound team had precisely what they needed, or exchanging XMLs or backup libraries with Bobby to make sure that his work got to me intact. I was totally happy with the performance of FCP X. It was just rock solid and for the most part did everything I needed it to do without slowing me down.

Originally written for FCP.co.

A special thanks to Lumberjack System for their assistance in transcribing this interview.

©2020 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world-famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project while director/photographer Griffith captured the entire story. The result is the 3D VR documentary, Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background spans camera, editing, and visual effects, with extensive experience in traditional visual effects, 2D-to-3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have had a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. In addition to these tools, I was using Fusion Studio in my workflow, so it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and a half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, all the tools I was used to are there, in what I would argue is a cleaner interface. Sometimes, software just collects features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack a left and right eye image on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full-resolution sources later. I just had to unlink the timeline and then reconnect it to another bin with my 8K media.
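Some back-of-the-envelope arithmetic shows why the proxy step matters. These figures are illustrative only – they assume uncompressed 16-bit RGB rather than any particular codec – but they make the scale of an 8K x 8K online clear.

def frame_mb(width, height, channels=3, bytes_per_channel=2):
    """Uncompressed frame size in MB, assuming 16-bit RGB."""
    return width * height * channels * bytes_per_channel / 1e6

print(f"8K x 8K: {frame_mb(8192, 8192):.0f} MB/frame")                   # ~403 MB
print(f"2K x 2K proxy: {frame_mb(2048, 2048):.0f} MB/frame")             # ~25 MB
print(f"8K x 8K at 30 fps: {frame_mb(8192, 8192) * 30 / 1e3:.1f} GB/s")  # ~12.1 GB/s

A 2K x 2K proxy carries one-sixteenth the pixels, which is the difference between real-time editing on a single workstation and waiting on storage.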

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist – is a man of many talents and utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a proxy 2K x 2K stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run the footage through Z CAM’s WonderStitch application, which is its auto-stitching software. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch it in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations – for instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in third-order ambisonics, which means you get spatial awareness both around you and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.
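For reference, an order-N ambisonic mix encodes the sound field in (N + 1)² spherical-harmonic channels, so the third-order mix described here carries 16 channels – enough directional resolution to steer the viewer’s attention both horizontally and vertically.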

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed point, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ axis orientation, we would add a Panomap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.
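The yaw-only case is easy to picture: in an equirectangular frame, the horizontal axis maps directly to longitude, so re-centering the view about the vertical axis is just a wrapped pixel shift. The sketch below illustrates that idea with NumPy – it is not Resolve’s Panomap node, which also handles pitch and roll via a full spherical remap.

import numpy as np

def recenter_yaw(frame: np.ndarray, yaw_degrees: float) -> np.ndarray:
    """Rotate an equirectangular image about the vertical axis."""
    height, width = frame.shape[:2]
    shift = int(round(width * yaw_degrees / 360.0))
    return np.roll(frame, -shift, axis=1)  # sign convention is arbitrary

# Example on a dummy 2:1 equirectangular frame: look 90 degrees to the right.
frame = np.zeros((2048, 4096, 3), dtype=np.uint8)
recentered = recenter_yaw(frame, 90.0)

Pitch and roll changes are not simple shifts, because they move the poles, which is why a full spherical remap is needed for anything beyond yaw.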

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to get in deeper than a normal edit application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them. But only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high level in regards to the stitch and the rig removals. As a visual effects guy, there’s always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn’t hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters

The Banker

Apple has launched its new TV+ service and this provides another opportunity for filmmakers to bring untold stories to the world. That’s the case for The Banker, an independent film picked up by Apple. It tells the story of two African American entrepreneurs attempting to earn their piece of the American dream during the repressive 1960s through real estate and banking. It stars Samuel L. Jackson, Anthony Mackie, Nia Long, and Nicholas Hoult.

The film was directed by George Nolfi (The Adjustment Bureau) and produced by Joel Viertel, who also signed on to edit the film. Viertel’s background hasn’t followed the usual path for a feature film editor. Interested in editing while still in high school, he moved to LA after college and landed a job at Paramount, where he eventually became a creative executive. During that time he kept up his editing chops and eventually left Paramount to pursue independent filmmaking as a writer, producer, and editor. His editing experience included Apple Final Cut Pro 1.0 through 7.0 and Avid Media Composer, but cutting The Banker was his first time using Apple’s Final Cut Pro X.

I recently chatted with Joel Viertel about the experience of making this film and working with Apple’s innovative editing application.

____________________________________________

[OP] How did you get involved with co-producing and cutting The Banker?

[JV] This film originally started while I was at Paramount. Through a connection from a friend, I met with David Smith and he pitched me the film. I fell in love with it right away, but as is the case with these films, it took a long while to put all the pieces together. While I was doing The Adjustment Bureau with George Nolfi and Anthony Mackie, I pitched it to them, and they agreed it would be a great project for us all to collaborate on. From there it took a few years to get to a script we were all happy with, cast the roles, get the movie financed, and off the ground.

[OP] I imagine that it’s exciting to be one of the first films picked up by Apple for their TV+ service. Was that deal arranged before you started filming or after everything was in the can, so to speak?

[JV] Apple partnered with us after it was finished. It was made and financed completely independently through Romulus Entertainment. While we were in the finishing stages, Endeavor Content repped the film and got us into discussions with Apple. It’s one of their first major theatrical releases and then goes on the platform after that. Apple is a great company and brand, so it’s exciting to get in on the ground floor of what they’re doing.

[OP] When I screened the film, one of the things I enjoyed was the use of montages to quickly cover a series of events. Was that how it was written or were those developed during the edit as a way to cut running time?

[JV] Nope, it was all scripted. Those segments can bedevil a production, because getting all of those little pieces is a lot of effort for very little yield. But it was very important to George and myself and the collaborators on the film to get them. It’s a film about banking and real estate, so you have to figure out how to make that a fun and interesting story. Montages were one way to keep the film propulsive and moving forward – to give it motion and excitement. We just had to get through production finding places to pick off those pieces, because none of those were developed in post.

[OP] What was your overall time frame to shoot and post this film?

[JV] We started in late September 2018 and finished production in early November. It was about 30 days in Atlanta and then a few days of pick-ups in LA. We started post right after Thanksgiving and locked in May, I think. Once Apple got involved, there were a few minor changes. However, Apple’s delivery specs were completely different from our original delivery specs, so we had to circle back on a bunch of our finishing.

[OP] Different in what way?

[JV] We had planned to finish in 2K with a 5.1 mix. Their deliverables are 4K with a Dolby Atmos mix. Because we had shot on 35mm film, we had the capacity, but it meant that we had to rescan and redo the visual effects at 4K. We had to lay the groundwork to do an Atmos mix and Dolby Vision finish for theatrical and home video, which required the 35mm film negative to be rescanned and dust-busted.

Our DP, Charlotte Bruus Christensen, has shot mostly on 35mm – films like A Quiet Place and The Girl on the Train – and those movies are beautiful. And so we wanted to accommodate that, but it presents challenges if you aren’t shooting in LA. Between Kodak in Atlanta and Technicolor in LA we were able to make it work.

Kodak would process the negative and Technicolor made a one-light transfer for 2K dailies. Those were archived and then I edited with ProRes LT copies in HD. Once we were done, Technicolor onlined the movie from their 2K scans. After the change in deliverable specs, Technicolor rescanned the clips used for the online finish at 4K and conformed the cut at 4K.

[OP] I felt that the eclectic score fit this movie well and really places it in time. As an editor, how did you work to build up your temp tracks? Or did you simply leave it up to the composer?

[JV] George and I have worked with our composer, Scott Salinas, for a very long time on a bunch of things. Typically, I give him a script and then he pulls samples that he thinks are in the ballpark. He gave me a grab bag of stuff for The Banker – some of which was score, some of which was jazz. I start laying that against the picture myself as I go and find these little things that feel right and set the tone of the movie. I’m finding my way for the right marriage of music and picture. If it works, it sticks. If it doesn’t, we replace it. Then at the end, he’s got to score over that stuff.

Most of the jazz in The Banker is original, but there are a couple tracks where we just licensed them. There’s a track called “Cash and Carry” that I used over the montage when they get rich. They’ve just bought the Banker’s Building and popped the champagne. This wacky, French 1970s bit of music comes in with a dude scatting over it while they are buying buildings or looking at the map of LA. That was a track Scott gave me before we shot a frame of film, so when we got to that section of the movie, I chose it out of the bin and put that sequence to it and it just stuck.

There are some cases where it’s almost impossible to temp, so I just cut it dry and give it to him. Sometimes he’ll temp it and sometimes he’ll do a scratch score. For example, the very beginning of the movie never had temp in any way. I just cut it dry. I gave it to Scott. He scored it and then we revised his scoring a bunch of times to get to the final version.

[OP] Did you do any official or “friends and family” screenings of The Banker while editing it? If so, did that impact the way the film turned out?

[JV] The post process is largely dictated by how good your first cut is. If the movie works, but needs improvement – that’s one thing. If it fundamentally doesn’t – that’s another. It’s a question of where you landed from the get-go and what needs to be fixed to get to the end of the road.

We’re big fans of doing mini-testing – bringing in people we know and people whose opinions we want to hear. At some point you have to get outside of the process and aggregate what you hear over and over again. You need to address the common things that people pick up on. The only way to keep improving your movie is to get outside feedback so they tell you what to focus on.

Over time that significantly impacted the film. It’s not like any one person said that one thing that caused us to re-edit the film. People see the problem that sticks out to them in the cut and you work on that. The next time there’s something else and then you work on that. You keep trying to make all the improvements you can make. So it’s an iterative process.

[OP] This film marked a shift for you from using earlier versions of Final Cut Pro to now cutting on Final Cut Pro X for the first time. Why did you make that choice and what was the experience like?

[JV] George has a relationship with Apple and they had suggested using Final Cut Pro X on his next project. I had always used Final Cut Pro 7 as my preference. We had used it on an NBC show called Allegiance in 2014 and then on Birth of the Dragon in 2015 and 2016 – long after it had been discontinued. We all could see the writing on the wall – operating systems would quit running it and it’s not harnessing what the computers can do.

I got involved in the conversation and was invited to come to a seminar at the Editors Guild about Final Cut Pro X that was taught by Kevin Bailey, who was the assistant editor for Whiskey Tango Foxtrot. I had looked at Final Cut Pro X when it first came out and then again several years later. I felt like it had been vastly improved and was in a place where I could give it a shot. So I committed at that point to cutting this film on Final Cut Pro X and teaching myself how to use it. I also hired Kevin to help as my assistant for the start of the film. He became unavailable later in the production, so we found Steven Moyer to be my assistant and he was fantastic. I would have never made it through without the both of them.

[OP] How did you feel about Final Cut Pro X once you got your sea legs?

[JV] It’s always hard to learn to walk again. That’s what a lot of editors bump into with Final Cut Pro X, because it is a very different approach than any other NLE. I found that once you get to know it and rewire your brain that you can be very fast on it. A lot of the things that it does are revolutionary and pretty incredible. And there are still other areas that are being worked on. Those guys are constantly trying to make it better. We’ve had multiple conversations with them about the possibilities and they are very open to feedback.

[OP] Every editor has their own way of tackling dailies and wading through an avalanche of footage coming in from production. And of course, Final Cut Pro X features some interesting ways to organize media. What was the process like for The Banker?

[JV] The sound and picture were both running at 24fps. I would upload the sound files from my hotel room in Atlanta to Technicolor in LA, who would sync the sound. They would send back the dailies and sound, which Kevin – who was assisting at that time – would load into Final Cut. He would multi-clip the sound files and the two camera angles. Everything is in a multi-clip, except for purely MOS B-roll shots. Each scene had its own event. Kevin used the same system he had devised with Jan [Kovac, editor on Whiskey Tango Foxtrot and Focus]. He would keyword each dialogue line, so that when you select a keyword collection in the browser, every take for that line comes up. That’s labor-intensive for the assistant, but it makes life that much faster for me once it’s set up.

[OP] I suppose that method also makes it much faster when you are working with the director and need to quickly get to alternate takes.

[JV] It speeds things along for George, but also for me. I don’t have to hunt around to find the lines when I have to edit a very long dialogue scene. You could assemble selects reels first, but I like to look at everything. I fundamentally believe there’s something good in every bad take. It doesn’t take very long to watch every take of a line. Plus I do a fair amount of ‘Franken-biting’ with dialogue where needed.

[OP] Obviously the final mix and color correction were done at specialty facilities. Since The Banker was shot on film, I would imagine that complicated the hand-off slightly. Please walk me through the process you followed.

[JV] Marti Humphrey did the sound at The Dub Stage in Burbank. We have a good relationship with him and can call him very early in the process to work out the timeline of how we are going to do things. He had to soup up his system a bit to handle the Atmos near-field stuff, but it was a good opportunity for him to get into that space. So he was able to do all the various versions of our mix.

Technicolor was the new guy for us. Mike Hatzer did the color grade. It was a fairly complex process for them and they were a good partner. For the conform, we handed them an XML and EDL. They had their Flex files to get back to the film edge code. Steven had to break up the sequence to generate separate tracks for the 35mm original, stock, and VFX shots, because Technicolor needed separate EDLs for those. But it wasn’t like we invented anything that hasn’t been done before.
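Moyer’s split was done inside the NLE by separating the sources onto their own tracks before export. As a rough illustration of the same sorting problem approached after the fact – not the actual workflow – here is a sketch that buckets CMX3600-style EDL events into separate lists by reel-name prefix. The prefixes and file names are hypothetical.

from collections import defaultdict

# Hypothetical reel-name prefixes for film scans, stock, and VFX sources
PREFIXES = {"LAB": "film.edl", "STK": "stock.edl", "VFX": "vfx.edl"}

buckets = defaultdict(list)
current = None
with open("reel1.edl") as edl:  # hypothetical input EDL
    for line in edl:
        fields = line.split()
        # CMX3600 event lines start with an event number, then the reel name;
        # header lines before the first event are dropped in this sketch
        if len(fields) > 1 and fields[0].isdigit():
            reel = fields[1].upper()
            current = next((out for prefix, out in PREFIXES.items()
                            if reel.startswith(prefix)), "other.edl")
        if current is not None:
            buckets[current].append(line)  # keep comment lines with their event

for name, lines in buckets.items():
    with open(name, "w") as out:
        out.writelines(lines)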

We did use third-party apps for some of this. The great thing about that is you can just contact the developer directly. There was one EDL issue and Steven could just call up the app developer to explain the issue and they’d fix it in a couple of days.

[OP] What sort of visual effects were required? The film is set more or less 70 years ago, so were the majority of effects just to make the locations look right? Like cars, signs, and so on?

[JV] It was mostly period clean-up. You have to paint out all sorts of boring stuff, like road paint. The film is set in the ’50s and ’60s, so those white lines have to come out. Wires, of course. A couple of shots we wanted to ‘LA-ify’ Georgia. We shot some stuff in LA, but when you put Griffith Park right next to a shot of Newnan, Georgia, the way to blend that over is to put palm trees in the Newnan shot.

We also did a pick-up with Anthony while he was on another show that required a beard for that role. So we had to paint out his beard. Good luck figuring out which was the shot where we had to paint out his beard!

[OP] Now that you have a feature film under your belt with Final Cut Pro X, what are your thoughts about it? Anything you feel that it’s missing?

[JV] All the NLEs have their particular strengths. Final Cut has several that are amazing, like background exports and rendering. It has Roles, where you can differentiate dialogue, sound effects, and music sources. You can bus things to different places. This is the first time I’ve ever edited in 5.1, because Final Cut supports that. That was a fun challenge.

We used Final Cut Pro X to edit a movie shot on film, which is kind of a first at this level, but it’s not like we crashed into some huge problem with that. We gamed it out and it all worked like it was supposed to. Obviously it doesn’t do some stuff the same way. Fortunately through our relationship with Apple we can make some suggestions about that. But there really isn’t anything it doesn’t do. If that were the case, we would have just said that we can’t cut with this.

Final Cut Pro X is an evolving NLE – as they all are. What I realized at the seminar is that it had changed a lot from when it first appeared. It was a good experience cutting a movie on it. Some editors are hesitant, because that first hour is difficult and I totally get that. But if you push through that and get to know it, there are many things that are very good – addictively good, even. I would certainly cut another movie on it.

____________________________________________

The Banker started a limited theatrical release on March 6 and will be available on the Apple TV+ streaming service on March 20.

For even more details on the post process for The Banker, check out Pro Video Coalition.

Originally written for FCP.co.

©2020 Oliver Peters

Ford v Ferrari

Outraged by a failed attempt to acquire European carmaker Ferrari, Henry Ford II sets out to trounce Enzo Ferrari on his own playing field – automobile endurance racing. Unfortunately, the effort falls short, leading Ford to turn to independent car designer, Carroll Shelby. But Shelby’s outspoken lead test driver, Ken Miles, complicates the situation by making an enemy out of Ford Senior VP Leo Beebe. Nevertheless, Shelby and his team are able to build one of the greatest race cars ever – the GT40 MkII – setting the showdown between the two auto legends at the 1966 24 Hours of Le Mans. Matt Damon and Christian Bale star as Shelby and Miles.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, The Wolverine, 3:10 to Yuma) and his team of longtime collaborators. I recently spoke with film editors Michael McCusker, ACE (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl on the Train) about what it took to bring Ford v Ferrari together.

_____________________________________________

[OP] The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.

[MM] I cut my very first movie, Walk the Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved out to LA in 2009 and I hired him to assist me on Knight & Day. We’ve been working together for 10 years now.

I always want to keep myself available for Jim, because he chooses good material, attracts great talent, and is a filmmaker with a strong vision who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie, and now a car racing film.

[OP] As a film editor, it must be great not to get type-cast for any particular cutting style.

[MM] Exactly. I worked for years as David Brenner’s first assistant. He was able to cross genres and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres – I simply knew that we worked well together and that the end product was good.

[OP] In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?

[MM] I saw that movie and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and Frankenheimer’s Grand Prix. Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script and we were in preproduction, it became clear that there was so much more drama that was available for him to portray during the racing sequences than he anticipated. And so, the races took on more of an energized pace.

[OP] Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?

[MM] I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in pre-vis, which required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie. You’re dealing with Mollie and Peter [Ken Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe, and also, of course, what’s going on in the car with Ken. It’s a three-act movie unto itself, so Jim was trying to figure out how it was all going to work before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process – and part of the writing process was the pre-vis. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies.

[OP] What was the timeline for production and post?

[MM] I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

The challenge was that there was going to be a lot of racing footage, which meant there was going to be a LOT of footage. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic. There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels. 

[OP] How long was your initial cut and what was your process for trimming the film down to the present run time?

[MM] We’re at 2:30:00 right now and I think the first cut was 3:10:00 or 3:12:00. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down. But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30:00!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45:00, and feel like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

[OP] How extensively did you re-arrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?

[MM] To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly and those were cut. The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters – their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end – to make sure we spent enough time with each character so that we understood them, but not so much time that the audience would go, “Enough already! Get on with it!”

[OP] Were you both racing fans before you signed onto this film?

[AB] I was not.

[MM] When I was a kid, I watched a lot of racing. I liked CART racing – open wheel racing – not so much stock car racing. As I grew older, I lost interest, particularly when CART disbanded and NASCAR took over. So, I had an appreciation for it. I went to races, like the old Ontario 500 here in California.

[OP] Did that help inform your cutting style for this film?

[MM] I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay – guys who were broadcasting the races when I was a kid. I was intrigued by how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and use it in the movie. That brings so much authenticity.

[OP] Let’s dive deeper into the sound for this film. I would imagine that sound design was integral to your rough cuts. How did you tackle that?

[AB] We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early in the process. The sounds may not have been the exact engine sounds that would end up in the final, but they were adequate to allow you to experience the scenes as intended and to give the right feel. Because we needed to get Jim’s response early, some of the races were cut with the production sound – from the live mics during filming. This allowed us and Jim to quickly see how the scenes would flow. Other scenes were cut strictly MOS, because the sound design would have been way too complicated for the initial cut of the scene. Once a scene was cut visually, we’d hand it over to Don [Sylvester, sound supervisor], who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

[MM] We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work. Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Avid. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix. 

[OP] What about temp music? Did you also weave that into your rough cuts?

[MM] Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man – a screenwriter, a novelist, a one-time musician, and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point-of-view. He has a very instinctual sense of where music should start, and it happens to dovetail with the aesthetic that Jim, Andrew, and I are working towards. None of us like music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

Specifically for this movie, it was challenging to develop what the musical tone would be. Ted was developing the temp track along with us from a very early stage. We found over time that no single musical style was going to work – which is to say that this is a very complex score. It includes a kind of surf rock sound for Carroll Shelby in LA; an almost jaunty, lounge jazz sound for Detroit and the Ford executives; and then the hard-driving rhythmic sound for the racing.

(The final score was composed by Marco Beltrami and Buck Sanders.)

[OP] I presume you were housed in multiple cutting rooms at a central facility. Right?

[MM] We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces; I was situated between Andrew and Don. Ted was next to Don, and John Berri, our additional editor, and the assistants were right around the corner. It makes for a very efficient working environment.

[OP] Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?

[Both] FluidMorph! (laughs)

[MM] FluidMorph, speed-ramping – we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.
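
(As a rough illustration of the retime arithmetic – a hypothetical sketch with made-up speeds, not numbers from this production – a constant retime is simply the ratio of the speed the car should appear to travel on screen to the speed at which it was actually photographed, expressed as the percentage an NLE retime rate uses:)

```python
# Hypothetical illustration of a constant retime (not the editors' actual figures):
# the playback rate scales by the ratio of desired on-screen speed to photographed speed.

def playback_rate_percent(photographed_mph: float, desired_mph: float) -> float:
    """Retime rate, as an NLE-style percentage, for a simple linear speed-up."""
    return desired_mph / photographed_mph * 100.0

# A kit car safely driven at 50 mph, meant to read as 130 mph on screen:
print(playback_rate_percent(50.0, 130.0))  # 260.0 -> a 260% speed-up
```

(In practice a speed ramp varies that rate over the course of the shot rather than holding one constant percentage.)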

[OP] What about Avid’s Script Integration feature, often referred to as ScriptSync? I know a lot of narrative editors love it.

[MM] I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines – subtext. I found that with ScriptSync I could put the scene together quickly, but it was flat as a pancake. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

[OP] Tell me a bit more about your organizational process. Do you start with a KEM roll or stringouts of selected takes?

[MM] I don’t watch dailies, which sounds weird. By that I mean, I don’t watch them in a traditional sense. I don’t start in the morning, watch the dailies, and then start cutting. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front of me. I’ll look at the last take of every set-up really quickly and then I spend an enormous amount of time – particularly on complex scenes – creating a bin structure that I can work with. Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character – it depends on what’s driving the scene. That’s the way I learn my footage – by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If I do it myself, then I’ll remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments. And then I’ll start cutting.

[AB] I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike. I’ve used ScriptSync before and I tend to agree that it discourages you from seeing – as Mike described – the moments between lines. Those moments are valuable to remember.  

[OP] I presume this film was shot digitally. Right?

[MM] It was primarily shot with [ARRI] Alexa 65 LF cameras, plus some other small format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels ‘of the time.’

[OP] Since the film takes place in the 1960s and with racing action sequences, I presume there were quite a few visual effects to properly place the film in time. Right?

[MM] There’s a ton of that. The whole movie is a period film. We could temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid, but also in Adobe After Effects. He has some clever ways of filling in backgrounds or green screens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The caveat, though, is that the racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles, with some second unit footage shot in Georgia. The current, modern-day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot Le Mans. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went to France for a week to get some shots of the actual town of Le Mans. I think only about four of those shots are left. (laughs)

[OP] Any final thoughts about how this film turned out? 

[MM] I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and it’s a great feeling. It’s a movie about these really great characters with great scope and great racing. That goes back to the very advent of movies. You can put all the big visual effects in a film that you want to, but it’s really about people.

[AB] I would absolutely agree. It’s more of a character movie with racing.  Also, because I am not a ‘racing fan’ per se, the character drama really pulled me into the film while working on it.

[MM] It’s classic Hollywood cinema. I feel proud to be part of a movie that does what Hollywood does best.

The article is also available at postPerspective.

For more, check out this interview with Steve Hullfish.

©2019 Oliver Peters