Paul McCartney’s “Who Cares”

Paul McCartney hasn’t been the type of rock star to rest on his past. Many McCartney-related projects have embraced new technologies, such as 360VR. The music video for Who Cares – McCartney’s musical answer to bullying – was shot on both 16mm and 65mm film and edited in Final Cut Pro X.

Who Cares features Paul McCartney and actress Emma Stone in a stylized, surreal song and dance number filmed in 65mm, which is bookended by a reality-based 16mm segment. The video was directed by Brantley Gutierrez, choreographed by Ryan Heffington, and produced through LA production company Subtractive.

Gutierrez has collaborated for over 14 years with Santa Monica-based editor Ryan P. Adams on a range of projects, including commercials, concerts, and music videos. Adams also did a stint with Nitro Circus, cutting action sports documentaries for NBC and NBCSN. In that time he’s used various NLEs, including Premiere Pro, Media Composer, and Final Cut Pro 7. But it was the demands of concert videos that really brought about his shift to Final Cut Pro X.

___________________________________

[OP] Please tell me a bit about what style you were aiming for in Who Cares. Why the choice to shoot in both 16mm and 65mm film?

[Brantley Gutierrez] In this video, I was going for an homage to vaudevillian theater acts and old Beatles-style psychedelia. My background is working with a lot of photography. I was working in film labs when I was pretty young. So my DP and friend, Linus Sandgren, suggested film and had the idea, “What if we shot 65mm?” I was open to it, but it came down to asking the folks at Kodak. They’re the ones that made that happen for us, because they saw it as an opportunity to try out their new Ektachrome 16mm motion film stock.

They facilitated us getting the 65mm at a very reasonable price and getting the unreleased Ektachrome 16mm film. The reason for the two stocks was the separation of the reality of the opening scene – kind of grainy and hand-held – with the song portion. It was almost dreamlike in its own way. This was in contrast to the 65mm psychedelic part, which was all on crane, starkly lit, and with very controlled choreography. The Ektachrome had this hazy effect with its grain. We wanted something that would jump as you went between these worlds and 16 to 65 was about as big of a jump as we could get in film formats.

[OP] What challenges did you face with this combination of film stocks? Was it just a digital transfer and then you were only dealing with video files? Or was the process different than that?

[BG] The film went to London where they could process and scan the 65mm film. It actually went in with Star Wars. Lucasfilm had all of the services tied up, but they were kind enough to put our film in with The Rise of Skywalker and help us get it processed and scanned. But we had to wait a couple of extra days, so it was a bit of a nervous time. I have full faith in Linus, so I knew we had it. However, it’s a little strange these days to wait eight or nine days to see what you had shot.

We were a guinea pig for Kodak for the 16mm stock. When we got it back, it looked crazy! We were like, “Oh crap.” It looked like it had been cross-processed – super grainy and super contrasty. It did have a cool look, but more like a Tony Scott style of craziness. When we showed it to Kodak they agreed that it didn’t look right. Then we had Tom Poole, our colorist at Company 3 in New York, rescan the 16mm and it looked beautiful.

[Ryan P. Adams] Ektachrome is a positive stock, which hasn’t been used in a while. So the person in London scanning it just wasn’t familiar with it.

[BG] They just didn’t have the right color profile built for that stock yet, since it hadn’t been released yet. Of course, someone with a more experienced eye would know that wasn’t correct.

[OP] How did this delay impact your editing?

[BG] It was originally scanned and we started cutting with the incorrect version. In the meantime, the film was being rescanned by Poole. He didn’t really have to do any additional color correction to it once he had rescanned it. This was probably our quickest color correction session for any music video – probably 15 minutes total.

[RPA] One of the amazing things I learned is that all you have to do is give it some minor contrast and then it is done. What it does give you is perfect skin tones. Once we got the proper scan and sat in the color session, that’s what really jumped out.

[OP] So then, what was the workflow like with Final Cut Pro X?

[RPA] The scans came in as DPX files. Here at Subtractive, we took those into DaVinci Resolve and spit out ProRes 422 HQ QuickTime files to edit with. To make things easy for Company 3, we did the final conform in-house using Resolve. An FCPXML file was imported into Resolve, we linked back to the DPX files, and then sent a Resolve project file to Company 3 for the final grade. This way we could make sure everything was working. There were a few effects shots that came in and we set all of that up so Tom could just jump on it and grade. Since he’s in New York, the LA and New York locations for Company 3 worked through a remote, supervised grading session.
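(As an aside for readers who want to try a similar offline/online round trip: the transcode step here happened inside Resolve, but the general idea – image-sequence scans in, ProRes editorial media out – can be sketched generically. The Python snippet below is purely illustrative; it assumes ffmpeg is installed and uses hypothetical file names and frame rates, not Subtractive’s actual setup.)

```python
# Illustrative only: wrap a DPX scan sequence into a ProRes 422 HQ QuickTime
# for editorial. Subtractive did this step inside DaVinci Resolve; the paths,
# naming pattern, and frame rate below are hypothetical.
import subprocess
from pathlib import Path

def dpx_to_prores(seq_dir: Path, out_file: Path, fps: int = 24) -> None:
    # Expects frames named like shot_000001.dpx, shot_000002.dpx, ...
    pattern = str(seq_dir / "shot_%06d.dpx")
    subprocess.run([
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,
        "-c:v", "prores_ks",
        "-profile:v", "3",  # profile 3 = ProRes 422 HQ
        str(out_file),
    ], check=True)

dpx_to_prores(Path("scans/shot_010"), Path("editorial/shot_010.mov"))
```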

[OP] The video features a number of effects, especially speed effects. Were those shot in-camera or added in post?

[RPA] The speed effects were done in post. The surreal world was very well choreographed, which just plays out. We had a lot of fun with the opening sequence in figuring out the timing. Especially in the transitional moment where Emma is staring into the hypnotic wheel. We were able to mock up a lot of the effects that we wanted to do in Final Cut. We would freeze-frame these little characters called “the idiots” that would jump into Emma’s head. I would do a loose rotoscope in Final Cut and then get the motion down to figure out the timing. Our effects people then remade that in After Effects.

[OP] How involved was Paul McCartney in the edit and in review-and-approval?

[BG] I’ve known Paul for about 13 years and we have a good relationship. I feel lucky that he’s very trusting of me and goes along with ideas like this. The record label didn’t even know this video was happening until the day of production. It was clandestine in a lot of ways, but you can get away with that when it’s Paul McCartney. If I had tried that with some other artist, I would have been in trouble. But Paul just said, “We’re going to do it ourselves.”

We showed him the cut once we had picture lock, before final color. He called on the phone, “Great. I don’t have any notes. It’s cool. I love it and will sign off.” That was literally it for Paul. It’s one of the few music videos where there was no going back and forth between the management, the artist, and the record label. Once Paul signed off on it, the record label was fine with it.

[OP] How did you manage to get Emma Stone to be a part of this video?

[BG] Emma is a really close friend of mine. Independently of each other, we both know Paul. Their paths have crossed over the years. We’ve all hung out together and talked about wanting to do something. When Paul’s album came out, I hit them both up with the idea for the music video and they both said yes.

The hardest part of the whole process was getting schedules to align. We finally had an open date in October with only a week and a half to get ready. That’s not a lot of time when you have to build sets and arrange the choreography. It was a bit of a mad dash. The total time was about six weeks from prep through to color.

Because of the nature of this music video, we only filmed two takes for Paul’s performance to the song. I had timed out each set-up so that we knew how long each scene would be. The car sequence was going to be “x” amount of seconds, the camera sequence would be “x” amount, and so on. As a result, we were able to tackle the edit pretty quickly. Since we were shooting 65mm film, we only had two or three takes max of everything. We didn’t have to spend a lot of time looking through hours of footage – just pick the best take for each. It was very old school in that way, which was fun.

[OP] Ryan, what’s your approach to organizing a project like this in Final Cut Pro X?

[RPA] I labelled every set-up and then just picked the best take. The first pass was just a rough to see what was the best version of this video. Then there were a few moments that we could just put in later, like when the group of idiots sings, “Who cares.”

My usual approach is to lay in the sections of synced song segments to the timeline first. We’ll go through that first to find the best performance moments and cut those into the video, which is our baseline. Then I’ll build on top of that. I like to organize that in the timeline rather than the browser so that I can watch it play against the music. But I will keyword each individual set-up or scene.

I also work that way when I cut commercials. I can manage this for a :30 commercial. When it’s a much bigger project, that’s where the organization needs to be a little more detailed. I will always break things down to the individual set-ups so I can reference them quickly. If we are doing something like a concert film, that organization may be broken up by the multiple days of the event. A great feature of Final Cut Pro X is the skim tool and that you can look at clips like a filmstrip. It’s very easy to keyword the angles for a scene and quickly go through it.

[OP] Brantley, I’m sure you’ve sat over the shoulder of the editor in many sessions. From a director’s point of view, what do you think about working with Final Cut Pro X?

[BG] This particular project was pretty well laid out in my head and it didn’t have a lot of footage, so it was already streamlined. On more complex projects, like a multi-cam edit, FCPX is great for me, because I get to look at it like a moving contact sheet from photography. I get to see my choices and I really respond to that. That feels very intuitive and it blows me away that every system isn’t like that.

[OP] Ryan, what attracted you about Final Cut Pro X in order to use it whenever possible?

[RPA] I started with Final Cut Pro X when they added multi-cam. At that time we were doing more concert productions. We had a lot of photographers who would fill in on camera and Canon 5Ds were prevalent. I like to call them “trigger-happy filmers,” because they wouldn’t let it roll all the way through.

FCPX came up with the solution to sync cameras with the audio on the back end. So I could label each photographer’s clips. Each clip might only be a few seconds long. I could then build the concert by letting FCPX sync the clips to audio even without proper timecode. That’s when I jumped on, because FCPX solved a problem that was very painful in Final Cut Pro 7 and a lot of other editing systems. That was an interesting moment in time when photographic cameras could shoot video and we hired a lot of those shooters. Final Cut Pro X solved the problem in a very cool way and it helped me tremendously.

We did this Tom Petty music video, which really illustrates why Final Cut Pro X is a go-to tool. After Tom had passed, we had to take a lot of archival footage as part of a music video, called Gainesville, that we did for his boxed set. Brantley shot a lot of video around Tom’s hometown of Gainesville [Florida], but they also brought us a box with a massive amount of footage that we put into the system. A mix of old films and tapes, some of Tom’s personal footage, all this archival stuff. It gave the video a wonderful feeling.

[BG] It’s very nostalgic from the point of view of Tom and the band. A lot of it was stuff they had shot in their 20s and had a real home movie feel. I shot Super 8mm footage around Tom’s original home and places where they grew up to match that tone. I was trying to capture the love his hometown has for him.

[RPA] That’s a situation where FCPX blows the competition out of the water. It’s easy to use the strip view to hunt for those emotional moments. So the skimmer and the strip view were ways for us to cull all of this hodge-podge of footage for those moments and to hit beats and moments in the music for a song that had been unreleased at that time. We had one week to turn that around. It’s a complicated situation to look through a box of footage on a very tight deadline, put a story to it, and make it feel correct for the song. That’s where all of those tools in Final Cut shine. When I have to build a montage, that’s when I love Final Cut Pro X the most.

[OP] You’ve worked with the various NLEs. You know DaVinci Resolve and Blackmagic is working hard to make it the best all-in-one tool on the market. When you look at this type of application, what features would you love to see added to Final Cut Pro X?

[RPA] If I had a wishlist, I would love to see if FCPX could be scaled up for multiple seats and multiple editors. I wish some focus was being put on that. I still go to Resolve for color. I look at compositing as just mocking something up so we can figure out timing and what it is generally going to look like. However, I don’t see a situation currently where I do everything in the editor. To me, DaVinci Resolve is kind of like a Smoke system and I tip my hat to them.

I find that Final Cut still edits faster than a lot of other systems, but speed is not the most important thing. If you can do things quickly, then you can try more things out. That helps creatively. But I think that typically things take about as long from one system to the next. If an edit takes me a week in Adobe it still takes me a week in FCPX. But if I can try more things out creatively, then that’s beneficial to any project.

Originally written for FCP.co.

©2020 Oliver Peters

Jezebel

If you’ve spent any time in Final Cut Pro X discussion forums, then you’ve probably run across posts by Tangier Clarke, a film editor based in Los Angeles. Clarke was an early convert to FCPX and recently handled the post-production finishing for the film, Jezebel. I was intrigued by the fact that Jezebel was picked up by Netflix, a streaming platform that has been driving many modern technical standards. This was a good springboard to chat with Clarke and see how FCPX fared as the editing tool of choice.

_______________________________________________________________

[OP] Please tell me a little about your background in becoming a film editor.

[TC] I’m a very technical person and have always had a love for computers. I went to college for computer science, but along the way I discovered Avid Videoshop and started to explore editing more, since it married my technical side with creative storytelling. So, at UC Berkeley I switched from computer science to film.

My first job was at motion graphics company Montgomery/Cobb, which was in Los Angeles. They later became Montgomery & Co. Creative. I was a production assistant for main titles and branding packages for The Weather Channel, Fox, NBC, CBS, and a whole host of cable shows. Then, I worked for 12 years with Loyola Productions (no affiliation with Loyola Marymount University).

I moved on to a company called Black & Sexy TV, which was started by Dennis Dortch as a company to have more control over black images in media. He created a movie called A Good Day to be Black and Sexy in 2008, which was picked up and distributed by Magnolia Pictures and became a cult hit. It ended up in Blockbuster Video stores, Target, and Netflix. The success of that film was leveraged to launch Black & Sexy TV and its online streaming platform.

[OP] You’ve worked on several different editing applications, but tell me a bit about your transition to Final Cut Pro X.

[TC] I started my career on Avid, which was also at the time when Final Cut Pro “legacy” was taking off. During 2011 at Loyola Productions, I had an opportunity to create a commercial for a contest put out by American Airlines. We thought this was an opportunity for us as a company to try Final Cut Pro X.

I knew that it was for us once we installed it. Of course, there were a lot of things missing coming from Final Cut Pro 7, and a couple of bugs here and there. The one thing that was astonishing for me, despite the initial learning curve, was that within one week of use my productivity compared to Final Cut Pro 7 went through the roof. There was no correlation between anything I had used before and what I was experiencing with Final Cut X in that first week. I also noticed that our interns – whose only experience was iMovie – just picked up Final Cut Pro X with no problems whatsoever.

Final Cut Pro X was very liberating, which I expressed to my boss, Eddie Siebert, the president and founder of Loyola Productions. We decided to keep using it to the extent that we could on certain projects and worked with Final Cut Pro 7 and Final Cut Pro X side-by-side until we eventually just switched over.

[OP] You recently were the post supervisor and finishing editor for the film Jezebel, which was picked up by Netflix. What is this film about?

[TC] Jezebel is a semi-autobiographical film written and directed by Numa Perrier, who is a co-founder of Black & Sexy TV. The plot follows a 19-year-old girl who, after the death of her mother, begins to do sex work as an online chat room cam girl to financially support herself. Numa is starring in the film, playing her older sister, and an actress named Tiffany Tenille is playing Numa. This is also Numa Perrier’s feature film directorial debut. It’s a side of her that people didn’t know – about how she survived as a young adult in Las Vegas. So, she is really putting herself out there.

The film made its debut at South by Southwest last year, where it was selected as a “Best of SXSW” film by The Hollywood Reporter. After that it went to other domestic and international festivals. At some point it was seen by Ava DuVernay, who decided to pick up Numa’s film through her company, Array. That’s how it got to Netflix.

[OP] Please walk me through the editorial workflow for Jezebel. How did FCPX play a unique role in the post?

[TC] I was working on a documentary at the time, so I couldn’t fully edit Jezebel, but I was definitely instrumental in the process. A former coworker of mine, Brittany Lyles, was given the task of actually editing the project in Final Cut Pro X, which I had introduced her to and trained her on a couple of years earlier. The crew shot with a Canon C300 camera and we used the Final Cut proxy workflow. Brittany wouldn’t have been able to work on it if we weren’t using proxies, because of her hardware. I was using a late 2013 Mac Pro, as well as a 2016 MacBook Pro.

At the front end, I assisted the production team with storage and media management. Frances Ampah (a co-producer on the film) and I worked to sync all the footage for Brittany, who was working with a copy of the footage on a dedicated drive. We provided Brittany with XMLs during the syncing process as she was getting familiar with the footage.

While Brittany was working on the cut, Numa and I were trying to figure out how best to come up with a look and a style for the text during the chat room scenes in the movie. It hadn’t been determined yet if I was going to get the entire film and put the graphics in myself or if I was going to hand it off to Brittany for her to do it. I pitched Numa on the idea of creating a Motion template so that I could have more control over the look, feel, and animation of the graphics. That way either Brittany or I could do it and it would look the same.

Brittany and Numa refined the edit to a point where it made more sense for me to put in a lot of the graphics and do any updating per Numa’s notes, because some of the text had changed as well. And we wanted to really situate the motion of the chat so that it was characteristic of what it looked like back then – in the 90s. We needed specific colors for each user who was logged into the screen. I had some odd color issues with Final Cut and ended up actually just going into the FCPXML file to modify color values. I’m used to going into files like that and I’m not afraid of it. I also used the FCP X feature in the text inspector to save format and appearance attributes. This was tremendously helpful to quickly assign the color and formatting for the different users in the chat room – saving a lot of time.
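(For the technically curious, the kind of hand edit Clarke describes – opening the FCPXML in a text editor and changing color values – could be automated along the lines of the sketch below. This is a hypothetical example: the fontColor attribute name and its space-separated RGBA float format are assumptions about the FCPXML text-style schema, and the file name and values are invented.)

```python
# Hypothetical sketch of batch-editing color values in an FCPXML file.
# The "fontColor" attribute and its "r g b a" float format are assumptions
# about the text-style schema, not taken from the film's actual files.
import xml.etree.ElementTree as ET

def recolor_text_styles(fcpxml_path: str, old_rgba: str, new_rgba: str) -> None:
    tree = ET.parse(fcpxml_path)
    for style in tree.iter("text-style"):
        if style.get("fontColor") == old_rgba:
            style.set("fontColor", new_rgba)
    tree.write(fcpxml_path, encoding="UTF-8", xml_declaration=True)

# Example: swap one chat room user's color for another (values are illustrative).
recolor_text_styles("chatroom_graphics.fcpxml", "1 0.2 0.2 1", "0.2 0.6 1 1")
```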

Our secondary editor, Bobby Field, worked closely with Numa to do the majority of color grading on the film. He was more familiar with Premiere Pro than FCP X, but really enjoyed the color tools in Final Cut Pro X. Through experimentation, Bobby learned how to use adjustment layers to apply color correction. I was fascinated by this and it was a learning experience for me as well. I’m used to working directly with the clip itself and in my many years of using FCP X, this wasn’t a method I used or saw anyone else firsthand doing.

[OP] What about sound post and music?

[TC] I knew that there’s only so much that I technically had the skillset to do, and I would not dare to pretend that I know how to do certain things. I called on the help of Jim Schaefer – a skilled and trusted friend that I worked with at Loyola Productions. I knew he wanted an opportunity to work on a big project, particularly a feature. The film needed a tremendous amount of sound work, so he took it on along with Travis Prater, a coworker of his at Source Sound in Woodland Hills. Together they really transformed the film.

Jim and Travis worked in Pro Tools, so I used X2Pro to get files to them. Jim gave me a list of how he wanted the film broken down. Because of the length of Jezebel, he preferred that the film was broken up into reels. In addition to reels, I also gave him the entire storyline with all of the roles. Everything was broken down very nicely using AAFs and he didn’t really have any problems. In his words, “It’s awesome that all the tracks are sorted by character and microphone – that’ll cut down significantly on the sorting/organizing pass for me.” The only hiccup was that some metadata was missing from the AAF up to a certain point in Pro Tools. Yet that metadata did exist in the original WAV files. Some clip names were inconsistent as well, but that may have happened during production.

[OP] Jezebel is streaming on Netflix, which has a reputation for having tough technical specs. Were there any special things you had to do to make it ready for the platform?

[TC] We supplied Array with a DCI 2K (full frame) QuickTime master in ProRes 422 HQ per their delivery schedule, along with other elements such as stereo and 5.1 mixes from Jim, Blu-rays, DVD, and DCP masters. I expected to do special things to make it ready for Netflix. Numa and I discussed this, but to my knowledge, the QuickTime that I provided to Array is what Netflix received. There were no special conversions made just for Netflix on the part of Array.

[OP] Now that you have this Final Cut Pro X experience under your belt, what would you change if you could? Any special challenges or shortcomings?

[TC] I had to do some composite shots for the film, so the only shortcoming for me was Final Cut’s compositing tool set. I’d love to have better tools built right into FCP X, like in DaVinci Resolve. I love Apple Motion and it’s fine for what it is, but it could go a little further for me. I’d love to see an update with improved compositing and better tracking. Better reporting for missing files, plugins, and other elements would also be tremendously helpful in troubleshooting vague alerts.

In spite of this, there was no doubt at any point in the process that Final Cut was fully capable of being at the center of everything that needed to be done – whether it was leveraging Motion for template graphics between Brittany and me, using a third-party tool to make sure that the post sound team had precisely what they needed, or exchanging XMLs or backup libraries with Bobby to make sure that his work got to me intact. I was totally happy with the performance of FCP X. It was just rock solid and for the most part did everything I needed it to do without slowing me down.

Originally written for FCP.co.

A special thanks to Lumberjack System for their assistance in transcribing this interview.

©2020 Oliver Peters

Everest VR and DaVinci Resolve Studio

In April of 2017, world-famous climber Ueli Steck died while preparing to climb both Mount Everest and Mount Lhotse without the use of bottled oxygen. Ueli’s close friends Jonathan Griffith and Sherpa Tenji attempted to finish this project while director/photographer Griffith captured the entire story. The result is the 3D VR documentary, Everest VR: Journey to the Top of the World. It was produced by Facebook’s Oculus and teased at last year’s Oculus Connect event. Post-production was completed in February and the documentary is being distributed through Oculus’ content channel.

Veteran visual effects artist Matthew DeJohn was added to the team to handle end-to-end post as a producer, visual effects supervisor, and editor. DeJohn’s background includes camera, editing, and visual effects, with a lot of experience in traditional visual effects, 2D to 3D conversion, and 360 virtual reality. Before going freelance, he worked at In3, Digital Domain, Legend3D, and VRTUL.

As an editor, DeJohn was familiar with most of the usual tools, but opted to use Blackmagic’s DaVinci Resolve Studio and Fusion Studio applications as the post-production hub for the Everest VR documentary. Posting stereoscopic, 360-degree content can be quite challenging, so I took the opportunity to speak with DeJohn about using DaVinci Resolve Studio on this project.

_______________________________________________________

[OP] Please tell me a bit about your shift to DaVinci Resolve Studio as the editing tool of choice.

[MD] I have had a high comfort level with Premiere Pro and also know Final Cut Pro. Premiere has good VR tools and there’s support for it. In addition to these tools I was using Fusion Studio in my workflow, so it was natural to look at DaVinci Resolve Studio as a way to combine my Fusion Studio work with my editorial work.

I made the switch about a year and a half ago and it simplified my workflow dramatically. It integrated a lot of different aspects all under one roof – the editorial page, the color page, the Fusion page, and the speed to work with high-res footage. From an editing perspective, all the tools I was used to are there, in what I would argue is a cleaner interface. Sometimes, software just collects all of these features over time. DaVinci Resolve Studio is early in its editorial development trajectory, but it’s still deep. Yet it doesn’t feel like it has a lot of baggage.

[OP] Stereo and VR projects can often be challenging, because of the large frame sizes. How did DaVinci Resolve Studio help you there?

[MD] Traditionally 360 content uses a 2:1 aspect ratio, so 4K x 2K. If it’s going to be a stereoscopic 360 experience, then you stack a left and right eye image on top of each other. It ends up being 4K x 4K square – two 4K x 2K frames stacked on top of each other. With DaVinci Resolve Studio and the graphics card I have, I can handle a 4K x 4K full online workflow. This project was to be delivered as 8K x 8K. The hardware I had wasn’t quite up to it, so I used an offline/online approach. I created 2K x 2K proxy files and then relinked to the full resolution sources later. I just had to unlink the timeline and then reconnect it to another bin with my 8K media.

You can cut a stereo project just looking at the image for one eye, then conform the other eye, and then combine them. I chose to cut with the stacked format. My editing was done looking at the full 360 unwrapped, but my review was done through a VR headset from the Fusion page. From there I was also able to review the stereoscopic effect on a 3D monitor. 3D monitoring can also be done on the color page, though I didn’t use that feature on this project.
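(To picture the stacked format DeJohn describes: each eye is its own 2:1 equirectangular frame, and the two are packed top-and-bottom into one square frame. Here is a minimal, purely illustrative sketch of that packing using Pillow – the file names are hypothetical and the eye order shown is just one convention, not necessarily the one used on Everest VR.)

```python
# Minimal sketch of top/bottom stereo packing for 360 video frames.
# File names are hypothetical; which eye goes on top is a convention choice.
from PIL import Image

def pack_stereo_over_under(left_path: str, right_path: str, out_path: str) -> None:
    left = Image.open(left_path)    # e.g. 4096 x 2048 equirectangular, left eye
    right = Image.open(right_path)  # e.g. 4096 x 2048 equirectangular, right eye
    assert left.size == right.size, "both eyes must share the same resolution"
    w, h = left.size
    packed = Image.new(left.mode, (w, h * 2))  # e.g. 4096 x 4096 square frame
    packed.paste(left, (0, 0))      # one eye on top...
    packed.paste(right, (0, h))     # ...the other below it
    packed.save(out_path)

pack_stereo_over_under("left_eye.png", "right_eye.png", "stacked_frame.png")
```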

[OP] I know that successful VR is equal parts production and post. And that post goes much more smoothly with a lot of planning before anyone starts. Walk me through the nuts and bolts of the camera systems and how Everest VR was tackled in post.

[MD] Jon Griffith – the director, cameraman, and alpinist – a man of many talents – utilized a number of different systems. He used the Yi Halo, which is a 17-camera circular array. Jon also used the Z CAM V1 and V1 Pro cameras. All were stereoscopic 360 camera systems.

The Yi Halo camera used the Jump cloud stitcher from Google. You upload material to that service and it produces an 8K x 8K final stitch and also a proxy 2K x 2K stitch. I would cut with the 2K x 2K and then conform to the 8K x 8K. That was for the earlier footage. The Jump stitcher is no longer active, so for the more recent footage Jon switched to the Z CAM systems. For those, he would run through Z CAM’s Wonderstitch application, with is auto-stitching software. For the final, we would either clean up any stitching artifacts in Fusion Studio or restitch it in Mistika VR.

Once we had done that, we would use Fusion Studio for any rig removal and fine-tuned adjustments. No matter how good these cameras and stitching software are, they can fail in some situations – for instance, if the subject is too close to the camera or walks between seams. There’s quite a bit of compositing/fixing that needs to be done and Fusion Studio was used heavily for that.

[OP] Everest VR consists of three episodes ranging from just under 10 minutes to under 17 minutes. A traditional cinema film, shot conservatively, might have a 10:1 shooting ratio. How does that sort of ratio equate on a virtual reality film like this?

[MD] As far as the percentage of shots captured versus used, we were in the 80-85% range of clips that ended up in the final piece. It’s a pretty high figure, but Jon captured every shot for a reason with many challenging setups – sometimes on the side of an ice waterfall. Obviously there weren’t many retakes. Of course the running time of raw footage would result in a much higher ratio. That’s because we had to let the cameras run for an extended period of time. It takes a while for a climber to make his way up a cliff face!

[OP] Both VR and stereo imagery present challenges in how shots are planned and edited. Not only for story and pacing, but also to keep the audience comfortable without the danger of motion-induced nausea. What was done to address those issues with Everest VR?

[MD] When it comes to framing, bear in mind there really is no frame in VR. Jon has a very good sense of what will work in a VR headset. He constructed shots that make sense for that medium, staging his shots appropriately without any moving camera shots. The action moved around you as the viewer. As such, the story flows and the imagery doesn’t feel slow even though the camera doesn’t move. When they were on a cliffside, he would spend a lot of time rigging the camera system. It would be floated off the side of the cliff enough so that we could paint the rigging out. Then you just see the climber coming up next to you.

The editorial language is definitely different for 360 and stereoscopic 360. Where you might normally have shots that would go for three seconds or so, our shots go for 10 to 20 seconds, so the action on-screen really matters. The cutting pace is slower, but what’s happening within the frame isn’t. During editing, we would plan from cut to cut exactly where we believed the viewer would be looking. We would make sure that as we went to the next shot, the scene would be oriented to where we wanted the viewer to look. It was really about managing the 360 hand-off between shots, so that viewers could follow the story. They didn’t have to whip their head from one side of the frame to the other to follow the action.

In some cases, like an elevation change – where someone is climbing at the top of the view and the next cut is someone climbing below – we would use audio cues. The entire piece was mixed in ambisonic third order, which means you get spatial awareness around and vertically. If the viewer was looking up, an audio cue from below would trigger them to look down at the subject for the next shot. A lot of that orchestration happens in the edit, as well as the mix.

[OP] Please explain what you mean by the orientation of the image.

[MD] The image comes out of the camera system at a fixed point, but based on your edit, you will likely need to change that. For the shots where we needed to adjust the XYZ axis orientation, we would add a Panomap node in the Fusion page within DaVinci Resolve Studio and shift the orientation as needed. That would show up live in the edit page. This way we could change what would become the center of the view.
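(A quick illustration of why the simplest case of this is cheap: for a pure yaw rotation – spinning the view left or right around the vertical axis – re-centering an equirectangular frame amounts to a circular horizontal pixel shift. The sketch below shows only that yaw-only case with a placeholder frame; full XYZ reorientation, as handled by the Panomap node, requires a proper spherical remap.)

```python
# Illustrative only: re-centering an equirectangular 360 frame for a pure yaw
# rotation is just a circular pixel shift along the width. Full XYZ
# reorientation (like Resolve's Panomap node) needs a real spherical remap.
import numpy as np

def recenter_yaw(equirect: np.ndarray, yaw_degrees: float) -> np.ndarray:
    width = equirect.shape[1]
    shift = int(round(width * yaw_degrees / 360.0))
    return np.roll(equirect, -shift, axis=1)  # roll columns to move the center

# Example: rotate the view 90 degrees in yaw on a placeholder frame.
frame = np.zeros((2048, 4096, 3), dtype=np.uint8)
reoriented = recenter_yaw(frame, 90.0)
```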

The biggest 3D issue is to make sure the vertical alignment is done correctly. For the most part these camera systems handled it very well, but there are usually some corrections to be made. One of these corrections is to flatten the 3D effect at the poles of the image. The stereoscopic effect requires that images be horizontally offset. There is no correct way to achieve this at the poles, because we can’t guarantee how the viewer’s head is oriented when they look at the poles. In traditional cinema, the stereo image can affect your cutting, but with our pacing, there was enough time for a viewer to re-converge their view to a different distance comfortably.

[OP] Fusion was used for some of the visual effects, but when do you simply use the integrated Fusion page within DaVinci Resolve Studio versus a standalone version of the Fusion Studio application?

[MD] All of the orientation was handled by me during the edit by using the integrated Fusion page within DaVinci Resolve Studio. Some simple touch-ups, like painting out tripods, were also done in the Fusion page. There are some graphics that show the elevation of Everest or the climbers’ paths. These were all animated in the Fusion page and then they showed up live in the timeline. This way, changes and quick tweaks were easy to do and they updated in real-time.

We used the standalone version of Fusion Studio for some of the more complex stitches and for fixing shots. Fusion Studio is used a lot in the visual effects industry, because of its scriptability, speed, and extensive toolset. Keith Kolod was the compositor/stitcher for those shots. I sent him the files to work on in the standalone version of Fusion Studio. This work was a bit heavier and would take longer to render. He would send those back and I would cut those into the timeline as a finished file.

[OP] Since DaVinci Resolve Studio is an all-in-one tool covering edit, effects, color, and audio, how did you approach audio post and the color grade?

[MD] The initial audio editing was done in the edit and Fairlight pages of DaVinci Resolve Studio. I cut in all of the temp sounds and music tracks to get the bone structure in place. The Fairlight page allowed me to get in deeper than a normal edit application would. Jon recorded multiple takes for his narration lines. I would stack those on the Fairlight page as audio layers and audition different takes very quickly just by re-arranging the layers. Once I had the take I liked, I left the others there so I could always go back to them. But only the top layer is active.

After that, I made a Pro Tools turnover package for Brendan Hogan and his team at Impossible Acoustic. They did the final mix in Pro Tools, because there are some specific built-in tools for 3D ambisonic audio. They took the bones, added a lot of Foley, and did a much better job of the final mix than I ever could.

I worked on the color correction myself. The way this piece was shot, you only had one opportunity to get up the mountain. At least on the actual Everest climb, there aren’t a lot of takes. I ended up doing color right from the beginning, just to make sure the color matched for all of those different cameras. Each had a different color response and log curve. I wanted to get a base grade from the very beginning just to make sure the snow looked the same from shot to shot. By the time we got to the end, there were very minimal changes to the color. It was mainly to make sure that the grade we had done while looking at Rec. 709 monitoring translated correctly to the headset, because the black levels are a bit different in the headsets.

[OP] In the end, were you 100% satisfied with the results?

[MD] Jon and Oculus held us to a high level in regards to the stitch and the rig removals. As a visual effects guy, there’s always something, if you look really hard! (laughs) Every single shot is a visual effects shot in a show like this. The tripod always has to be painted out. The cameraman always needs to be painted out if they didn’t hide well enough.

The Yi Halo doesn’t actually capture the bottom 40 degrees out of the full 360. You have to make up that bottom part with matte painting to complete the 360. Jon shot reference photos and we used those in some cases. There is a lot of extra material in a 360 shot, so it’s all about doing a really nice clone paint job within Fusion Studio or the Fusion page of DaVinci Resolve Studio to complete the 360.

Overall, as compared with all the other live-action VR experiences I’ve seen, the quality of this piece is among the very best. Jon’s shooting style, his drive for a flawless experience, the tools we used, and the skill of all those involved helped make this project a success.

This article was originally written for Creative Planet Network.

©2020 Oliver Peters

FxFactory 2020

Apple Final Cut Pro X enjoys a broad ecosystem that augments and supports the product with companion applications, graphic templates, and plug-ins. Thanks to the ability to use Motion to create Final Cut Pro X template effects, the application has spawned a wide range of innovative, albeit sometimes novice, developers. But really getting the most out of what Apple offers under the hood requires experienced plug-in developers, such as Noise Industries (FxFactory), Coremelt, motionVFX, and others.

I’ve known the principals at Noise Industries for over a decade – going back to their days developing a plug-in package for Mac-based Avid Media Composer systems. Riding the wave of popularity enjoyed by the original Final Cut Pro, Noise Industries has become known to most users through their FxFactory platform. They were some of the first to get behind Final Cut Pro X and haven’t looked back since.

In that time frame, FxFactory has become a robust platform, which functions as a central place to purchase, install, and update plug-ins, templates, filters, and utility applications from developers that have partnered with the platform. Unlike other third-party developers, Noise Industries features not only their own products within FxFactory, but also a curated collection from a wide range of other partner-developers. The portfolio has grown and extends beyond simply supporting Apple’s ProApps applications. It now also includes products for Mac-based versions of Adobe, Avid, and Blackmagic Design software.

I have often written about and reviewed various FxFactory products in the past. Since they are constantly updating features and adding new products and partners to the catalogue, it seemed like a good time to revisit some of their offerings. I certainly can’t touch on each and every one, but here is just a sampling of what falls under the FxFactory umbrella in 2020.

Audio. FxFactory originally started with only video products, but as more developers have partnered with the platform, audio plug-ins have been added to the line-up. These work not only for the editing apps, but also for DAWs, like Logic Pro X, GarageBand, Pro Tools, Audition, and Resolve (Fairlight page). They come not only from established video developers like Crumplepop, but also from audio developers like Accusonus. Check out the various options for volume control, noise reduction, and more.

Tracking. What many users may not have realized is that Apple quietly added tracking capabilities under the hood, which developers can use to add tracking features to their apps. This is particularly useful for graphic templates, such as where you might want a title to follow a race car around the track. Some of the effects that now offer tracking include Crumplepop’s EasyTracker and idustrial Revolution’s Tracking Callouts.

Toolkits. By and large, the most useful items in the catalogue are the various toolkit bundles. These are usually a mix of different effects designed to cover a proverbial Swiss Army knife of post needs. While some may overlap, it doesn’t hurt to load up on several. Particularly useful are those from idustrial Revolution (XEffects Toolkit) and Ripple Training (Ripple Tools Complete), along with the Andy Mees free effects.

Glows. Whether it’s sci-fi or sexy, glows can be a very creative tool. But the range of control is often quite limited with the standard glow filters included in many other effects packs. FxFactory sports quite a few options for glow and lighting effects from different developers. Along with FxFactory’s own Light Show, there’s Hawaiki’s Super Glow, Kingluma’s Radiance effects, and others.

Transitions. Cuts and dissolves? Meh. Fancy transitions are all the rage and FxFactory offers a cornucopia of options. These range from various dynamic transitions to glitch effects to light leaks. Just a few of the options are FxFactory’s Wipology as well as PremiumVFX’s Glitch, LightSpeed, and Dynamic Transitions.

Animations. The two biggest categories of effects have to be the ones that offer quick and easy title animations along with those that offer graphic design Motion templates. Both styles tap heavily into Motion’s and Final Cut Pro X’s behaviors-based movement effects. These are designed with built-in animation characteristics without the need to tweak keyframes. The effects in an animation bundle can be applied to titles and footage. They take the drudgery out of building quick DVE-type animated moves and transitions. Check out Alex 4D’s Animation Templates, Cineflare’s Object Animator, and Stupid Raisins’ wide selection of Pop bundles.

Graphic Templates. Feeling graphically challenged? Don’t have a designer that you are working with? FxFactory’s got your back with a wide selection of animated graphics templates. Name a style and there’s likely to be something to cover the need. Sports, social media, news, slideshows, simple titles – you name it. There are various packages in different styles from idustrial Revolution (XEffects), PremiumVFX, SugarFX, and UsefulFX. Most of these take a toolkit approach. Each package includes a range of title elements that can be mixed and matched to create your own custom look.

But wait, there’s more. Granted, these are just broad categories and certainly this list doesn’t include all of the partners with products in the catalogue. Many offer eclectic stylized or image processing effects that simply don’t fit into specific categories. For example, Luca Visual FX’s Lo-Fi Look, Image Sharpener, or Impackt. There are quite a number of color correction filters, too, such as Lawn Road’s Color Precision, Sheffield Softworks’ Movie Color, Hawaiki’s Color and AutoGrade, and Crumplepop’s Koji Advance. And keying filters, like the Hawaiki Keyer.

I could go on, but this will give you just a small overview of the options in 2020. Enjoy!

©2020 Oliver Peters

The Banker

Apple has launched its new TV+ service and this provides another opportunity for filmmakers to bring untold stories to the world. That’s the case for The Banker, an independent film picked up by Apple. It tells the story of two African American entrepreneurs attempting to earn their piece of the American dream during the repressive 1960s through real estate and banking. It stars Samuel L. Jackson, Anthony Mackie, Nia Long, and Nicholas Hoult.

The film was directed by George Nolfi (The Adjustment Bureau) and produced by Joel Viertel, who also signed on to edit the film. Viertel’s background hasn’t followed the usual path for a feature film editor. He became interested in editing while still in high school, and a move to LA after college landed him a job at Paramount, where he eventually became a creative executive. During that time he kept up his editing chops and eventually left Paramount to pursue independent filmmaking as a writer, producer, and editor. His editing experience included Apple Final Cut Pro 1.0 through 7.0 and Avid Media Composer, but cutting The Banker was his first time using Apple’s Final Cut Pro X.

I recently chatted with Joel Viertel about the experience of making this film and working with Apple’s innovative editing application.

____________________________________________

[OP] How did you get involved with co-producing and cutting The Banker?

[JV] This film originally started while I was at Paramount. Through a connection from a friend, I met with David Smith and he pitched me the film. I fell in love with it right away, but as is the case with these films, it took a long while to put all the pieces together. While I was doing The Adjustment Bureau with George Nolfi and Anthony Mackie, I pitched it to them, and they agreed it would be a great project for us all to collaborate on. From there it took a few years to get to a script we were all happy with, cast the roles, get the movie financed, and off the ground.

[OP] I imagine that it’s exciting to be one of the first films picked up by Apple for their TV+ service. Was that deal arranged before you started filming or after everything was in the can, so to speak?

[JV] Apple partnered with us after it was finished. It was made and financed completely independently through Romulus Entertainment. While we were in the finishing stages, Endeavor Content repped the film and got us into discussions with Apple. It’s one of their first major theatrical releases and then goes on the platform after that. Apple is a great company and brand, so it’s exciting to get in on the ground floor of what they’re doing.

[OP] When I screened the film, one of the things I enjoyed was the use of montages to quickly cover a series of events. Was that how it was written or were those developed during the edit as a way to cut running time?

[JV] Nope, it was all scripted. Those segments can bedevil a production, because getting all of those little pieces is a lot of effort for very little yield. But it was very important to George and myself and the collaborators on the film to get them. It’s a film about banking and real estate, so you have to figure out how to make that a fun and interesting story. Montages were one way to keep the film propulsive and moving forward – to give it motion and excitement. We just had to get through production finding places to pick off those pieces, because none of those were developed in post.

[OP] What was your overall time frame to shoot and post this film?

[JV] We started in late September 2018 and finished production in early November. It was about 30 days in Atlanta and then a few days of pick-ups in LA. We started post right after Thanksgiving and locked in May, I think. Once Apple got involved, there were a few minor changes. However, Apple’s delivery specs were completely different from our original delivery specs, so we had to circle back on a bunch of our finishing.

[OP] Different in what way?

[JV] We had planned to finish in 2K with a 5.1 mix. Their deliverables are 4K with a Dolby Atmos mix. Because we had shot on 35mm film, we had the capacity, but it meant that we had to rescan and redo the visual effects at 4K. We had to lay the groundwork to do an Atmos mix and Dolby Vision finish for theatrical and home video, which required the 35mm film negative to be rescanned and dust-busted.

Our DP, Charlotte Bruus Christensen, has shot mostly on 35mm – films like A Quiet Place and The Girl on the Train – and those movies are beautiful. And so we wanted to accommodate that, but it presents challenges if you aren’t shooting in LA. Between Kodak in Atlanta and Technicolor in LA we were able to make it work.

Kodak would process the negative and Technicolor made a one-light transfer for 2K dailies. Those were archived and then I edited with ProRes LT copies in HD. Once we were done, Technicolor onlined the movie from their 2K scans. After the change in deliverable specs, Technicolor rescanned the clips used for the online finish at 4K and conformed the cut at 4K.

[OP] I felt that the eclectic score fit this movie well and really places it in time. As an editor, how did you work to build up your temp tracks? Or did you simply leave it up to the composer?

[JV] George and I have worked with our composer, Scott Salinas, for a very long time on a bunch of things. Typically, I give him a script and then he pulls samples that he thinks are in the ballpark. He gave me a grab bag of stuff for The Banker – some of which was score, some of which was jazz. I start laying that against the picture myself as I go and find these little things that feel right and set the tone of the movie. I’m finding my way for the right marriage of music and picture. If it works, it sticks. If it doesn’t, we replace it. Then at the end, he’s got to score over that stuff.

Most of the jazz in The Banker is original, but there are a couple tracks where we just licensed them. There’s a track called “Cash and Carry” that I used over the montage when they get rich. They’ve just bought the Banker’s Building and popped the champagne. This wacky, French 1970s bit of music comes in with a dude scatting over it while they are buying buildings or looking at the map of LA. That was a track Scott gave me before we shot a frame of film, so when we got to that section of the movie, I chose it out of the bin and put that sequence to it and it just stuck.

There are some cases where it’s almost impossible to temp, so I just cut it dry and give it to him. Sometimes he’ll temp it and sometimes he’ll do a scratch score. For example, the very beginning of the movie never had temp in any way. I just cut it dry. I gave it to Scott. He scored it and then we revised his scoring a bunch of times to get to the final version.

[OP] Did you do any official or “friends and family” screenings of The Banker while editing it? If so, did that impact the way the film turned out?

[JV] The post process is largely dictated by how good your first cut is. If the movie works, but needs improvement – that’s one thing. If it fundamentally doesn’t – that’s another. It’s a question of where you landed from the get-go and what needs to be fixed to get to the end of the road.

We’re big fans of doing mini-testing – bringing in people we know and people whose opinions we want to hear. At some point you have to get outside of the process and aggregate what you hear over and over again. You need to address the common things that people pick up on. The only way to keep improving your movie is to get outside feedback so they tell you what to focus on.

Over time that significantly impacted the film. It’s not like any one person said that one thing that caused us to re-edit the film. People see the problem that sticks out to them in the cut and you work on that. The next time there’s something else and then you work on that. You keep trying to make all the improvements you can make. So it’s an iterative process.

[OP] This film marked a shift for you from using earlier versions of Final Cut Pro to now cutting on Final Cut Pro X for the first time. Why did you make that choice and what was the experience like?

[JV] George has a relationship with Apple and they had suggested using Final Cut Pro X on his next project. I had always used Final Cut Pro 7 as my preference. We had used it on an NBC show called Allegiance in 2014 and then on Birth of the Dragon in 2015 and 2016 – long after it had been discontinued. We all could see the writing on the wall – operating systems would quit running it and it’s not harnessing what the computers can do.

I got involved in the conversation and was invited to come to a seminar at the Editors Guild about Final Cut Pro X that was taught by Kevin Bailey, who was the assistant editor for Whiskey Tango Foxtrot. I had looked at Final Cut Pro X when it first came out and then again several years later. I felt like it had been vastly improved and was in a place where I could give it a shot. So I committed at that point to cutting this film on Final Cut Pro X and teaching myself how to use it. I also hired Kevin to help as my assistant for the start of the film. He became unavailable later in the production, so we found Steven Moyer to be my assistant and he was fantastic. I would have never made it through without the both of them.

[OP] How did you feel about Final Cut Pro X once you got your sea legs?

[JV] It’s always hard to learn to walk again. That’s what a lot of editors bump into with Final Cut Pro X, because it is a very different approach than any other NLE. I found that once you get to know it and rewire your brain that you can be very fast on it. A lot of the things that it does are revolutionary and pretty incredible. And there are still other areas that are being worked on. Those guys are constantly trying to make it better. We’ve had multiple conversations with them about the possibilities and they are very open to feedback.

[OP] Every editor has their own way of tackling dailies and wading through an avalanche of footage coming in from production. And of course, Final Cut Pro X features some interesting ways to organize media. What was the process like for The Banker?

[JV] The sound and picture were both running at 24fps. I would upload the sound files from my hotel room in Atlanta to Technicolor in LA, who would sync the sound. They would send back the dailies and sound, which Kevin – who was assisting at that time – would load into Final Cut. He would multi-clip the sound files and the two camera angles. Everything is in a multi-clip, except for purely MOS B-roll shots. Each scene had its own event. Kevin used the same system he had devised with Jan [Kovac, editor on Whiskey Tango Foxtrot and Focus]. He would keyword each dialogue line, so that when you select a keyword collection in the browser, every take for that line comes up. That’s labor-intensive for the assistant, but it makes life that much faster for me once it’s set up.

[OP] I suppose that method also makes it much faster when you are working with the director and need to quickly get to alternate takes.

[JV] It speeds things along for George, but also for me. I don’t have to hunt around to find the lines when I have to edit a very long dialogue scene. You could assemble selects reels first, but I like to look at everything. I fundamentally believe there’s something good in every bad take. It doesn’t take very long to watch every take of a line. Plus I do a fair amount of ‘Franken-biting’ with dialogue where needed.

[OP] Obviously the final mix and color correction were done at specialty facilities. Since The Banker was shot on film, I would imagine that complicated the hand-off slightly. Please walk me through the process you followed.

[JV] Marti Humphrey did the sound at The Dub Stage in Burbank. We have a good relationship with him and can call him very early in the process to work out the timeline of how we are going to do things. He had to soup up his system a bit to handle the Atmos near-field stuff, but it was a good opportunity for him to get into that space. So he was able to do all the various versions of our mix.

Technicolor was the new guy for us. Mike Hatzer did the color grade. It was a fairly complex process for them and they were a good partner. For the conform, we handed them an XML and EDL. They had their Flex files to get back to the film edge code. Steven had to break up the sequence to generate separate tracks for the 35mm original, stock, and VFX shots, because Technicolor needed separate EDLs for those. But it wasn’t like we invented anything that hasn’t been done before.

We did use third-party apps for some of this. The great thing about that is you can just contact the developer directly. There was one EDL issue and Steven could just call up the app developer to explain the issue and they’d fix it in a couple of days.

[OP] What sort of visual effects were required? The film is set more or less 70 years ago, so were the majority of effects just to make the locations look right? Like cars, signs, and so on?

[JV] It was mostly period clean-up. You have to paint out all sorts of boring stuff, like road paint. In the 50s and 60s, those white lines have to come out. Wires, of course. In a couple of shots we wanted to ‘LA-ify’ Georgia. We shot some stuff in LA, but when you put Griffith Park right next to a shot of Newnan, Georgia, the way to blend that over is to put palm trees in the Newnan shot.

We also did a pick-up with Anthony while he was on another show that required a beard for that role. So we had to paint out his beard. Good luck figuring out which was the shot where we had to paint out his beard!

[OP] Now that you have a feature film under your belt with Final Cut Pro X, what are your thoughts about it? Anything you feel that it’s missing?

[JV] All the NLEs have their particular strengths. Final Cut has several that are amazing, like background exports and rendering. It has Roles, where you can differentiate dialogue, sound effects, and music sources. You can bus things to different places. This is the first time I’ve ever edited in 5.1, because Final Cut supports that. That was a fun challenge.

We used Final Cut Pro X to edit a movie shot on film, which is kind of a first at this level, but it’s not like we crashed into some huge problem with that. We gamed it out and it all worked like it was supposed to. Obviously it doesn’t do some stuff the same way. Fortunately through our relationship with Apple we can make some suggestions about that. But there really isn’t anything it doesn’t do. If that were the case, we would have just said that we can’t cut with this.

Final Cut Pro X is an evolving NLE – as they all are. What I realized at the seminar is that it changed a lot from when it first appeared. It was a good experience cutting a movie on it. Some editors are hesitant, because that first hour is difficult and I totally get that. But if you push through that and get to know it – there are many things that are very good and addictively good. I would certainly cut another movie on it.

____________________________________________

The Banker started a limited theatrical release on March 6 and will be available on the Apple TV+ streaming service on March 20.

For even more details on the post process for The Banker, check out Pro Video Coalition.

Originally written for FCP.co.

©2020 Oliver Peters