Beyond the Supernova

No one typifies hard-driving, instrumental guitar rock better than Joe Satriani. The guitar virtuoso – known to his fans as Satch – has sixteen studio albums under his belt, along with several EPs, live concert recordings, and compilations. In addition to his solo tours, Satriani founded “G3,” a series of short tours that features Satriani alongside a rotating cast of two other all-star solo guitarists, such as Steve Vai, Yngwie Malmsteen, and Guthrie Govan. In another side project, Satriani is the guitarist for the supergroup Chickenfoot, which is fronted by former Van Halen lead singer Sammy Hagar.

The energy behind Satriani’s performances was captured in the new documentary film, Beyond the Supernova, which is currently available on the Stingray Qello streaming channel. The documentary grew out of general behind-the-scenes coverage of Satriani’s 2016 and 2017 tours of Asia and Europe in support of his 15th studio album, Shockwave Supernova. Tour filming was handled by Satriani’s son, ZZ (Zachariah Zane) – an up-and-coming young filmmaker. The tour coincided with Joe Satriani’s 60th birthday and the 30th anniversary of his multi-platinum album Surfing with the Alien. These elements, along with Satriani’s introspective nature, provided the ingredients for a more in-depth project, which ZZ Satriani produced, directed, and edited.

According to Joe Satriani in an interview on Stingray’s PausePlay, “ZZ was able to capture the real me in a way that only a son would understand how to do; because I was struggling with how I was going to record a new record and go in a new direction. So, as I’m on the tour bus and backstage – I guess it’s on my face. He’s filming it and he’s going ‘there’s a movie in here about that. It’s not just a bunch of guys on tour.’”

From music to filmmaking

ZZ Satriani graduated from Occidental College in 2015 with a BA in Art History and Visual Arts, with a focus on film production. He moved to Los Angeles to start a career as a freelance editor. I spoke with ZZ Satriani about how he came to make this film. He explained, “For me it started with skateboarding in high school. Filmmaking and skateboarding go hand-in-hand. You are always trying to capture your buddies doing cool tricks. I gravitated more to filmmaking in college. For the 2012 G3 Tour, I produced a couple of web videos that used mainly jump cuts and were very disjointed, but fun. They decided to bring me on for the 2016 tour in order to produce something similar. But this time, it had to have more of a story. So I recorded the interviews afterwards.”

Although ZZ thinks of himself as primarily an editor, he handled all of the backstage, behind-the-scenes, and interview filming himself, using a Sony PXW-FS5 camera. He comments, “I was learning how to use the camera as I was shooting, so I got some weird results – but in a good way. I wanted the footage to have more of a filmic look – to have more the feeling of a memory than simply real-time events.”

The structure of Beyond the Supernova intersperses concert performances with events on the tour and introspective interviews with Joe Satriani. The multi-camera concert footage was supplied by the touring support company and is often mixed with historical footage provided by Joe Satriani’s management team. This enabled ZZ to intercut performances of the same song, not only from different locations, but even different years, going back to Joe Satriani’s early career.

The style of cutting the concert performances is relatively straightforward, but the travel and interview bridges that join them together have more of a stream-of-consciousness feel to them and are often quite psychedelic. ZZ says, “I’m not a big [Adobe] After Effects guy, so all of the ‘effects’ are practical and built up in layers within [Adobe] Premiere Pro. The majority of ‘effects’ dealt with layering, blending and cropping different clips together. It makes you think about the space within the frame – different shapes, movement, direction, etc. I like playing around that way – you end up discovering things you wouldn’t have normally thought of. Let your curiosity guide you, keep messing with things and you will look at everything in a new way. It keeps editing exciting!”

Premiere Pro makes the cut

Beyond the Supernova was completely cut and finished in Premiere Pro. ZZ explains why: “Around 2011-12, I made the switch from [Apple] Final Cut Pro to Premiere Pro while I was in a film production class. They informed us that was the new standard, so we rolled with it and the transition was very smooth. I use other apps in the Adobe suite and I like the layout of everything in each one, so I’ve never felt the need to switch to another NLE.”

ZZ Satriani continues, “We had a mix of formats to deal with, including the need to upscale some of the standard definition footage to HD, which I did in software. Premiere handled the PXW-FS5’s XAVC-L codec pretty well in my opinion. I didn’t transcode to ProRes, since I had so much footage, and not a lot of external hard drive space. I knew this might make things go more slowly – but honestly, I didn’t notice any significant drawbacks. I also handled all of the color correction, using Premiere’s Lumetri color controls and the FilmConvert plug-in.” ZZ Satriani created the sound design for the interview segments, but John Cuniberti (who has also mixed Joe Satriani’s albums) re-mixed the live concert segments in his studio in London. The final 5.1 surround mix of the whole film was handled at Skywalker Sound.

The impetus for completion was entry into the October 2017 Mill Valley Film Festival. ZZ says, “I worked for a month putting together the trailer for Mill Valley. Because I had already organized the footage for this and an earlier teaser, the actual edit of the film came easily. It took me about two months to cut – working by myself in the basement on a [2013] Mac Pro. Coffee and burritos from across the street kept me going.”

Introspection brings surprises

Fathers and sons working together can often be an interesting dynamic and even ZZ learned new things during the production. He comments, “The title of the film evolved out of the interviews. I learned that Joe’s songs on an album tend to have a theme tied to the theme of the album, which often has a sci-fi basis to it. But it was a real surprise to me when Joe explained that Shockwave Supernova was really his character or persona on stage. I went, ‘Wait! After all these years, how did I not know that?’”

As with any film, you have to decide what gets cut and what stays. In concert projects, the decision often comes down to which songs to include. ZZ says, “One song that I initially thought shouldn’t be included was Surfing with the Alien. It’s a huge fan favorite and such an iconic song for Joe. Including it almost seemed like giving in. But, in a way it created a ‘conflict point’ for the film. Once we added Joe’s interview comments, it worked for me. He explained that each time he plays it live that it’s not like repeating the past. He feels like he’s growing with the song – discovering new ways to approach it.”

The original plan for Beyond the Supernova after Mill Valley was to showcase it at other film festivals. But Joe Satriani’s management team thought that it coincided beautifully with the release of his 16th studio album, What Happens Next, which came out in January of this year. Instead of other film festivals, Beyond the Supernova made its video premiere on AXS TV in March and then started its streaming run on Stingray Qello this July. Qello is known as a home for classic and new live concerts, so this exposes the documentary to a wider audience. Whether you are a fan of Joe Satriani or just rock documentaries, ZZ Satriani’s Beyond the Supernova is a great peek behind the curtain into life on the road and some of the thoughts that keep this veteran solo performer fresh.

Images courtesy of ZZ Satriani.

©2018 Oliver Peters

Wild Wild Country

Sometimes real life is far stranger than fiction. Such is the tale of the Rajneeshees – disciples of the Indian guru Bhagwan Shree Rajneesh – who moved to Wasco County, Oregon in the 1980s. Their goal was to establish a self-contained, sustainable, utopian community of spiritual followers, but the story quickly took a dark turn. Conflicts with the local Oregon community escalated, culminating in the first and largest bioterror attack in the United States, when a group of followers poisoned 751 people at ten local restaurants through intentional salmonella contamination.

Additional criminal activities included attempted murder, conspiracy to assassinate the U.S. Attorney for the District of Oregon, arson, and wiretapping. The community was largely controlled by Bhagwan Shree Rajneesh’s personal secretary, Sheela Silverman (Ma Anand Sheela), who served 29 months in federal prison on related charges. She moved to Switzerland upon her release. Although the Rajneeshpuram community is no more and its namesake is now deceased, the community of followers lives on as the Osho International Foundation. This slice of history has now been chronicled in the six-part Netflix documentary Wild Wild Country, directed by Chapman and Maclain Way.

Documentaries are truly an editor’s medium. More so than any other cinematic genre, the final draft of the script is written in the cutting room. I recently interviewed Wild Wild Country’s editor, Neil Meiklejohn, about putting this fascinating tale together.

Treasure in the archives

Neil Meiklejohn explains, “I had worked with the directors before to help them get The Battered Bastards of Baseball ready for Sundance. That is also an Oregon story. While doing their research at the Oregon Historical Society, the archivist turned them on to this story and the footage available. The 1980s was an interesting time in local broadcast news, because that was a transition from film to video. Often stories were shot on film and then transferred to videotape for editing and airing. Many times stations would simply erase the tape after broadcast and reuse the stock. The film would be destroyed. But in this case, the local stations realized that they had something of value and held onto the footage. Eventually it was donated to the historical society.”

“The Rajneeshees on the ranch were also very proud of what they were doing – farming and building a utopian city – so, they would constantly invite visitors and media organizations onto the ranch. They also had their own film crews documenting this, although we didn’t have as much access to that material. Ultimately, we accumulated approximately 300 hours of archival media in all manner of formats, including Beta-SP videotape, ripped DVDs, and internet downloads. It also came in different frame rates, since some of the sources were international. On top of the archival footage, the Ways also recorded another 100 hours of new interviews with many of the principals involved on both sides of this story. That was RED Dragon 6K footage, shot in two-camera, multi-cam set-ups. So, pretty much every combination you can think of went into this series. We just embraced the aesthetic defects and differences – creating an interesting visual texture.”

Balancing both sides of the story

“Documentaries are an editor’s time to shine,” continues Meiklejohn. “We started by wanting to tell the story of the battle between the cult and the local community without picking sides. This really meant that each scene had to be edited twice. Once from each perspective. Then those two would be combined to show both sides as point-counterpoint. Originally we thought about jumping around in time. But, it quickly became apparent that the best way to tell the story was as a linear progression, so that viewers could see why people did what they did. We avoided getting tricky.”

“In order to determine a structure to our episodes, we first decided the ‘ins’ and ‘outs’ for each and then the story points to hit within. Once that was established, we could look for ‘extra gold’ that might be added to an episode. We would share edits with our executive producers and Netflix. On a large research-based project like this, their input was crucial to making sure that the story had clarity.”

Managing the post production

Meiklejohn normally works as an editor at LA post facility Rock Paper Scissors. For Wild Wild Country, he spent ten months in 2017 at an ad hoc cutting room located at the offices of the film’s executive producers, Jay and Mark Duplass. His set-up included Apple iMacs running Adobe Creative Cloud software, connected to an Avid ISIS shared storage network. Premiere Pro was the editing tool of choice.

Meiklejohn says, “The crew was largely the directors and myself. Assistant editors helped at the front end to get all of the media organized and loaded, and then again when it came time to export files for final mastering. They also helped to take my temp motion graphics – done in Premiere – and then polish them in After Effects. These were then linked back into the timeline using Dynamic Link between Premiere and After Effects. Chapman and Maclain [Way] were very hands-on throughout, including scanning in stills and prepping them in Photoshop for the edit. We would discuss each new segment to sort out the best direction the story was taking and to help set the tone for each scene.”

“Premiere Pro was the ideal tool for this project, because we had so many different formats to deal with. It dealt well with the mess. All of the archival footage was imported and used natively – no transcoding. The 6K RED interview footage was transcoded to ProRes for the ‘offline’ editing phase. A lot of temp mixing and color correction was done within Premiere, because we always wanted the rough cuts to look smooth with all of the different archival footage. Nothing should be jarring. For the ‘online’ edit, the assistants would relink to the full-resolution RED raw files. The archival footage was already linked at its native resolution, because I had been cutting with that all along. Then the Premiere sequences were exported as DPX image sequences with notched EDLs and sent to E-Film, where color correction was handled by Mitch Paulson. Unbridled Sound handled the sound design and mix – and then Encore handled mastering and 1080p deliverables.”

Working with 400 hours of material and six hour-long episodes in Premiere might be a concern for some, but the experience was flawless for Meiklejohn. He continues, “We worked the whole series as one large project, so that at any given time, we could go back to scenes from an earlier episode and review and compare. The archival material was organized by topic and story order, with corresponding ‘selects’ sequences. As the project became bigger, I would pare it down by deleting unnecessary sequences and saving a newer, updated version. So, no real issue with keeping everything in a single project.”

As with any real-life event, where many of the people involved are still alive, opinions will vary as to how balanced the storytelling is. Former Rajneeshees have both praised and criticized the focus of the story. Meiklejohn says, “Sheela is one of our main interview subjects and in many ways, she is both the hero and the villain of this story. So, it was interesting to see how well she has been received on social media and in the public screenings we’ve done.”

Wild Wild Country shares a pointed look into one of the most bizarre clashes in the past few decades. Meiklejohn says, “Our creative process was really focused on the standoff between these two groups and the big inflection points. I tried to let the raw emotions that you see in these interviews come through and linger a bit on-screen to help inform the events that were unfolding. The story is sensational in and of itself, and I didn’t want to distract from that.”

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Editing the FX Series Atlanta

Atlanta just wrapped its second season on the FX Network. The brainchild of actor/writer/producer/director Donald Glover, Atlanta is the story of Earn Marks, a Princeton dropout who returns home to Atlanta, where he decides to manage his cousin’s rap career. The show is very textural and plot is secondary. It loosely follows Earn and the people in his life – specifically his cousin, Paper Boi – an up-and-coming rapper – and his friend and posse-mate, Darius.

The visual architect of the show is director Hiro Murai, who has directed the majority of the episodes. He has set an absurdist tone for much of the story. Any given episode can be wildly different from the episodes on either side of it, and taken as a whole, the episodes define what the series is about.

I recently had a chance to interview the show’s editors, Kyle Reiter and Isaac Hagy, about working on Atlanta and their use of Adobe Premiere Pro CC to edit the series.

Isaac Hagy: “I have been collaborating with Hiro for years. We went to college together and ever since then, we’ve been making short films and music videos. I started out doing no-budget music videos, then eventually moved into documentaries and commercials, and now television. A few years ago, we made a short film called Clapping for the Wrong Reasons, starring Donald. That became kind of an aesthetic precursor that we used in pitching this show. It served as a template for the tone of Atlanta.”

“I’ve used pretty much every editing software under the sun – cutting short films in high school on iMovie, then Avid in college when I went to film school at USC. Once I started doing short film projects, I found Final Cut Pro to be more conducive to quick turnarounds than Avid. I used that for five or six years, but then they stopped updating it, so I needed to switch over to a more professional alternative. Premiere Pro was the easiest transition from Final Cut Pro and, at that time, Premiere was starting to be accepted as a professional platform. A lot of people on the show come from a very DIY background, where we do everything ourselves. Like with the early music videos – I would color and Hiro would do effects in After Effects. So, Premiere was a much more natural fit. I am on a show using [Avid] Media Composer right now and it feels like a step backwards.”

With a nod to their DIY ethos, post-production for Atlanta also follows a small, collective approach. 

Kyle Reiter: “We rent a post facility that is just a single-story house. We have a DIY server called a NAS that one of our assistants built and all the media is stored there. It’s just a tower. We brought in our own desktop iMacs with dual monitors that we connect to the server over Ethernet. The show is shot with ARRI Amira cameras in a cinema 2K format. Then that is transcoded to proxy media for editing, which makes it easy to manage. The color correction is done in Resolve. Our assistant editors online it for the colorist, so there’s no grading in-house.” Atlanta airs on the FX Network in the 720p format.

The structure and schedule of this production make it possible to use a simple team approach. Projects aren’t typically shared among multiple editors and assistants, so a more elaborate infrastructure isn’t required to get the job done. 

Isaac Hagy: “It’s a pretty small team. There’s Kyle and myself. We each have an assistant editor. We just split the episodes, so I took half of the season and Kyle the other half. We were pretty self-contained, but because there were an odd number of episodes, we ended up sharing the load on one of them. I did the first cut of that episode and Kyle took it through the director’s cut. But other than that, we each had our individual episodes.”

Kyle Reiter: “They’re in Atlanta for several months shooting. We’ll spend five to seven days doing our cut and then typically move on to the next thing, before we’re finished. That’s just because they’re out of town for several months shooting and then they’ll come back and continue to work. So, it’s actually quite a bit of time calendar-wise, but not a lot of time in actual work hours. We’ll start by pulling selects and marking takes. I do a lot of logging within Premiere. A lot of comments and a lot of markers about stuff that will make it easy to find later. It’s just breaking it down to manageable pieces. Then from there, going scene-by-scene, and putting it all together.”

Many scripted television series that are edited on Avid Media Composer rely on Avid’s script integration features. This led me to wonder whether Reiter and Hagy missed such tools in Premiere Pro.

Isaac Hagy: “We’re lucky that the way in which the DP [Christian Sprenger] and the director shoot the series is very controlled. The projects are never terribly unwieldy, so really simple organizing usually does the trick.”

Kyle Reiter: “They’re never doing more than a handful of takes and there aren’t more than a handful of set-ups, so it’s really easy to keep track of everything. I’ve worked with editors who used markers and just marked every line and then designated a line number; but we don’t on this show. These episodes are very economical in how they are written and shot, so that sort of thing is not needed. It would be nice to have an Avid ScriptSync type of thing within Premiere Pro. However, we don’t get an unwieldy amount of footage, so frankly it’s almost not necessary. If it were on a different sort of show, where I needed that, then absolutely I would do it. But this is the sort of show where I can get away with not doing it.”

Kyle Reiter: “I’m on a show right now being cut on Media Composer, where there are 20 to 25 takes of everything. Having ScriptSync is a real lifesaver on that one.”

Both editors are fans of Premiere Pro’s advanced features, including the ability to use it with After Effects, along with the new sound tools added in recent versions.

Isaac Hagy: “In the offline, we create some temp visual effects to set the concepts. Some of the simpler effects do make it into the show. We’ll mock it up in Premiere and then the AEs will bring it into After Effects and polish the effect. Then it will be Dynamic Link-ed back into the Premiere timeline.”

“We probably go deeper on the sound than any other technical aspect of the show. In fact, a lot of the sound that we temp for the editor’s cut will make it to the final mix stage. We not only try to source sounds that are appropriate for a scene, but we also try to do light mixing ourselves – whether it’s adding reverb or putting the sound within the space – just giving it some realism. We definitely use the sound tools in Premiere quite a bit. Personally, I’ve had scenes where I was using 30 tracks just for sound effects.”

“I definitely feel more comfortable working with sound in Premiere than in Media Composer – and even more than I did in Final Cut. It’s way easier working with filters, mixing, panning, and controlling multiple tracks at once. This season we experimented with the Essential Sound Panel quite a bit. It was actually very good in putting a song into the background or putting sound effects outside of a room – just creating spaces.”

When a television series or film is about the music industry, the music in the series plays a principal role. Sometimes that is achieved with a composed score and on other shows, the soundtrack is built from popular music.

Kyle Reiter: “There’s no score on the show that’s not diegetic music, so we don’t have a composer. We had one episode this year where we did have score. Flying Lotus and Thundercat are two musician friends of Donald’s who scored the episode. But other than that, everything else is just pop songs that we put into the show.”

Isaac Hagy: “The decision of which music to use is very collaborative. Some of the songs are written in the script. A lot are choices that Kyle and I make. Hiro will add some. Donald will add some. We also have two great music supervisors. We’re really lucky that we get nearly 90% of the music that we fall in love with cleared. But when we don’t, our music supervisors recommend some great alternatives. We’re looking for an authenticity to the world, so we try to rely on tracks that exist in the real world.”

Atlanta provides an interesting look at the fringes of the city’s hip-hop culture. A series that has included an alligator and Donald Glover in weird prosthetic make-up – and where Hiro Murai takes inspiration from The Shining – certainly isn’t your run-of-the-mill television series. It definitely leaves fans wanting more, but to date, a third season has not been announced.

This interview was recorded using the Apogee MetaRecorder for iOS application and transcribed thanks to Digital Heaven’s SpeedScriber.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs became capable of delivering quality video, interview-based production now almost always utilizes multiple cameras. Directors will typically record these sections with two or more cameras at various tangents to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). In addition, if they also shoot in 4K for an HD delivery, then you have the additional ability to cleanly punch-in for even more framing options.
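As a rule of thumb, the clean punch-in range is simply the ratio of the source resolution to the delivery resolution. A quick Python sketch of that arithmetic:

```python
def punch_in_headroom(src_w, src_h, out_w, out_h):
    """Maximum clean zoom factor before the source must be upscaled."""
    return min(src_w / out_w, src_h / out_h)

# A UHD (3840x2160) source delivered at 1080p allows up to a 2x punch-in.
print(punch_in_headroom(3840, 2160, 1920, 1080))  # 2.0
```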

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam. That starts at the time of the original recording. You can either sync by common timecode, common audio, or a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.
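When every source carries common timecode, finding the sync offset between clips is simple frame arithmetic. A minimal sketch, assuming non-drop-frame timecode at a whole-number frame rate (drop-frame needs extra handling):

```python
def tc_to_frames(tc, fps):
    """Convert an HH:MM:SS:FF non-drop timecode string to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset(start_a, start_b, fps=24):
    """Offset in frames between the two clips' start points."""
    return tc_to_frames(start_a, fps) - tc_to_frames(start_b, fps)

# The A-camera rolled 82 frames (3 sec. 10 fr.) after the B-camera,
# so clip A sits 82 frames further down the synced timeline.
print(sync_offset("01:00:03:10", "01:00:00:00"))  # 82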

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough that the human editor and/or the software’s audio analysis can discern a match. Sometimes this method will suffer from a minor amount of delay – either because of the inherent offset of the audio recording circuitry within the camera electronics, or because an onboard camera mic was used and the distance to the subject results in a slight delay, compared to a lav mic on the subject.
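Under the hood, this kind of audio analysis is typically a cross-correlation: slide one waveform against the other and take the lag where they match best. A rough NumPy/SciPy sketch, assuming mono clips at the same sample rate (real material benefits from filtering and level normalization first):

```python
import numpy as np
from scipy.signal import correlate

def audio_sync_lag(ref, other):
    """Sample lag at which 'other' best matches 'ref' (correlation peak)."""
    corr = correlate(ref, other, mode="full")
    # Positive lag: delay 'other' by this many samples to line up with 'ref'.
    return int(np.argmax(corr)) - (len(other) - 1)

# e.g. camera scratch track vs. external recorder, both as mono float arrays:
# lag_seconds = audio_sync_lag(cam_audio, recorder_audio) / 48000.0
```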

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that when these two sources are mixed in post (rather than only one source used at a time), audio phasing can sometimes occur.

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First separate all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound you can choose whether or not to include the camera sound. Then generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras in the viewer by clicking on the corresponding section of the viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera to be shown. Remember that any effects or color corrections you apply in the timeline are applicable to that visible angle, but do not follow it. So, if you change your mind and switch to a different angle, the effects and corrections do not change with it. Therefore, adjustments will be required to the effect or correction for that new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Mixing with Premiere Pro

When budgets permit and project needs dictate, I will send my mixes out-of-house to one of a few regular mixers. Typically that means sending them an OMF or AAF to mix in Pro Tools. Then I get the mix and split-tracks back, drop them into my Premiere Pro timeline, and generate master files.

On the other hand, a lot of my work is cutting simple commercials and corporate presentations for in-house use or the web, and these are often less demanding – 2 to 8 tracks of dialogue, limited sound effects, and music. It’s easy to do the mix inside of the NLE. Bear in mind that I can – and often have – done such a mix in Apple Logic Pro X or Adobe Audition, but the tools inside Premiere Pro are solid enough that I often just keep everything – mix included – inside my editing application. Let’s walk through that process.

Dealing with multiple channels on source clips

Start with your camera files or double-system audio recordings. Depending on the camera model, Premiere Pro will see these source clips as having either stereo (e.g. a Canon C100) or multi-channel mono (e.g. ARRI Alexa) channels. If you recorded a boom mic on channel 1 and a lavaliere mic on channel 2, then these will drop onto your stereo timeline either as two separate mono tracks (Alexa) – or as a single stereo track (C100), with the boom coming out of the left speaker and the lav out of the right. Which one it is will strictly depend on the device used to generate the original recordings.

First, when dual-mic recordings appear as stereo, you have to understand how Premiere Pro deals with stereo sources. Panning in Premiere Pro doesn’t “shift” the audio left, right, or center. Instead, it increases or decreases the relative volume of the left or right half of this stereo field. In our dual-mic scenario, panning the clip or track full left means that we only hear the boom coming out of the left speaker, but nothing out of the right. There are two ways to fix this – either by changing the channel configuration of the source in the browser – or by changing it after the fact in the timeline. Browser changes will not alter the configuration of clips already edited to the timeline. You can change one or more source clips from stereo to dual-mono in the browser, but you can’t make that same type of change to a clip already in your sequence.

Let’s assume that you aren’t going to make any browser changes and instead just want to work in your sequence. If your source clip is treated as dual-mono, then the boom and lav will cut over to tracks 1 and 2 of your sequence – and the sound will be summed in mono on the output to your speakers. However, if the clip is treated as stereo, then it will only cut over to track 1 of your sequence – and the sound will stay left and right on the output to your speakers. When it’s dual-mono, you can listen to one track versus the other, determine which mic sounds the best, and disable the clip with the other mic. Or you can blend the two using clip volume levels.

If the source clip ends up in the sequence as a stereo clip, then you will want to determine which one of the two mics you want to use for the best sound. To pick only one mic, you will need to change the clip’s audio configuration. When you do that, it’s still a stereo clip, however, both “sides” can be supplied by either one of the two source channels. So, both left and right output will either be the boom or the lav, but not both. If you want to blend both mics together, then you will need to duplicate (option-drag) the audio clip onto an adjacent timeline track, and change the audio channel configuration for both clips. One would be set to the boom for both channels and the other set to only the lav for its two channels. Then adjust clip volume for the two timeline clips.
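The distinction between balance-style panning and an actual channel fix is easy to see in numbers. A minimal NumPy sketch (simple linear gains; this doesn’t model Premiere’s actual pan law or UI):

```python
import numpy as np

# One "stereo" camera clip: boom mic on the left channel, lav on the right.
boom = 0.1 * np.random.randn(48000)   # stand-ins for real audio samples
lav = 0.1 * np.random.randn(48000)
stereo_clip = np.stack([boom, lav], axis=1)

def balance_pan(clip, pan):
    """Stereo balance: pan -1.0 (full left) ... +1.0 (full right)."""
    left_gain = min(1.0, 1.0 - pan)    # panning attenuates one side,
    right_gain = min(1.0, 1.0 + pan)   # it never moves audio across channels
    return clip * np.array([left_gain, right_gain])

panned_left = balance_pan(stereo_clip, -1.0)   # only the boom remains audible

# What the channel-configuration fix amounts to: pick or blend the mics,
# then feed the same mono result to both sides.
mono = 0.7 * boom + 0.3 * lav
dual_mono = np.stack([mono, mono], axis=1)
```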

Configuring your timeline

Like most editors, while I’m working through the stages of rough cutting on the way to an approved final copy, I will have a somewhat messy timeline. I may have multiple music cues on several tracks with only one enabled – just so I can preview alternates for the client. I will have multiple dialogue clips on a few tracks with some disabled, depending on microphone or take options. But when I’m ready to move to the finishing stage, I will duplicate that sequence to create a “final version” and clean that one up. This means getting rid of any disabled clips, collapsing my audio and video clips to the fewest number of tracks, and using Premiere’s track creation/deletion feature to delete all empty tracks – all so I can have the least amount of visual clutter. 

In other blog posts, I’ve discussed working with additional submix buses to create split-track exports; but, for most of these smaller jobs, I will only add one submix bus. (I will explain its purpose in a moment.) Once created, you will need to open the track mixer panel and route the timeline channels from the master to the submix bus and then the output of the submix bus back to the master.

Plug-ins

Premiere Pro CC comes with a nice set of audio plug-ins, which can be augmented with plenty of third-party audio effects filters. I am partial to Waves and iZotope, but these aren’t essential. However, there are several that I do use quite frequently. These three third-party filters will help improve any vocal-heavy piece.

The first two are Vocal Rider and MV2 from Waves and are designed specifically for vocal performances, like voice-overs and interviews. These can be pricey, but Waves has frequent sales, so I was able to pick these up for a fraction of their retail price. Vocal Rider is a real-time, automatic volume adjustment tool. Set the bottom and top parameters and let Vocal Rider do the rest, by automatically pushing the volume up or down on-the-fly. MV2 is similar, but it achieves this through compression on the top and bottom ends of the range. While they operate in a similar fashion, they do produce a different sound. I tend to pick MV2 for voice-overs and Vocal Rider for interviews.
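Conceptually, these vocal riders are slow automatic gain controls: measure the short-term level, then nudge the gain toward a target within set limits. A toy sketch of the idea – not Waves’ algorithm – using block-based RMS riding on a mono NumPy array:

```python
import numpy as np

def ride_level(x, rate, target_db=-18.0, max_cut_db=-6.0, max_boost_db=6.0,
               block_ms=50):
    """Push each short block of mono audio toward a target RMS level."""
    n = int(rate * block_ms / 1000)
    out = np.copy(x)
    for i in range(0, len(x) - n + 1, n):
        block = x[i:i + n]
        rms_db = 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-12)
        gain_db = np.clip(target_db - rms_db, max_cut_db, max_boost_db)
        out[i:i + n] = block * 10 ** (gain_db / 20)
    return out  # a real plug-in smooths the gain to avoid zipper artifacts
```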

We all know location audio isn’t perfect, which is where my third filter comes in. FxFactory is known primarily for video plug-ins, but their partnership with CrumplePop has added a nice set of audio filters to their catalog. I find AudioDenoise to be quite helpful and fast in fixing annoying location sounds, like background air conditioning noise. It’s real-time and good-sounding, but like all audio noise reduction, you have to be careful not to overdo it, or everything will sound like it’s underwater.

For my other mix needs, I’ll stick to Premiere’s built-in effects, like EQ, compressors, etc. One that’s useful for music is the stereo imager. If you have a music cue that sounds too monaural, this will let you “expand” the track’s stereo signal so that it is spread more left and right. This often helps when you want the voice-over to cut through the mix a bit better. 

My last plug-in is a broadcast limiter that is placed onto the master bus. I will adjust this tight with a hard limit for broadcast delivery, but much higher (louder allowed) for web files. Be aware that Premiere’s plug-in architecture allows the filter to take effect either pre- or post-fader. In the case of the master bus, this will also affect the VU display. In other words, if you place a limiter post-fader, then the result will be heard, but not visible through the levels displayed on the VU meters.
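In signal-flow terms, pre- versus post-fader is just the order of two gain stages, which also determines what the meter can see. A schematic sketch (the hard clamp is a crude stand-in for a real limiter):

```python
def hard_limit(sample, ceiling):
    """Crude brickwall stand-in: clamp the sample to the ceiling."""
    return max(-ceiling, min(ceiling, sample))

def master_bus(sample, fader_gain, ceiling, post_fader=True):
    if post_fader:
        metered = sample * fader_gain        # the meter reads this level...
        return hard_limit(metered, ceiling)  # ...so the limiting isn't visible
    limited = hard_limit(sample, ceiling)    # pre-fader: limit first,
    return limited * fader_gain              # then the fader (and meter) follow
```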

Mixing

I have used different mixing strategies over the years with Premiere Pro. I like using the write function of the track mixer to write fader automation. However, I have lately stopped using it – instead going back to manual keyframes within the clips. The reason is that my projects tend to get revised often in ways that change timing. Track automation is based on absolute timeline position, so its keyframes don’t move when a clip is shifted, as clip-based volume keyframes do.

Likewise, Adobe has recently added Audition’s ducking for music to Premiere Pro. This uses Adobe’s Sensei artificial intelligence. Unfortunately, I don’t find it to be “intelligent” enough, although it can sometimes provide a starting point. For me, it’s simply too coarse and doesn’t intelligently adjust for areas within a music clip that swell or change volume internally. Therefore, I stick with minor manual adjustments to compensate for music changes and to make the vocal parts easy to understand in the mix. Then I will use the track mixer to set overall levels for each track to get the right balance of voice, sound effects, and music.

Once I have a decent balance to my ears, I will temporarily drop in the TC Electronic Radar loudness plug-in (included with Premiere Pro) to make sure my mix is CALM-compliant. This is where the submix bus comes in. If I like the overall balance, but I need to bring everything down, it’s an easy matter to simply lower the submix level and remeasure.

Likewise, it’s customary to deliver web versions with louder volume levels than the broadcast mix. Again the submix bus will help, because you cannot raise the volume on the master – only lower it. If you simply want to raise the overall volume of the broadcast mix for web delivery, simply raise the submix fader. Note that when I say louder, I’m NOT talking about slamming the VUs all the way to the top. Typically, a mix that hits -6 is plenty loud for the web. So, for web delivery, I will set a hard limit at -6, but adjust the mix for an average of about -10.
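For a numeric check outside of the Radar plug-in, open-source meters implement the same ITU-R BS.1770 loudness measurement that CALM compliance is based on. A sketch using the third-party pyloudnorm and soundfile Python libraries on an exported mix (the file name here is hypothetical):

```python
import soundfile as sf        # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")   # float samples, shape (n, channels)
meter = pyln.Meter(rate)                # ITU-R BS.1770 loudness meter
lufs = meter.integrated_loudness(data)

# The CALM Act / ATSC A/85 broadcast target is -24 LKFS (+/- 2 dB).
print(f"Integrated loudness: {lufs:.1f} LUFS")
print("broadcast OK" if abs(lufs + 24.0) <= 2.0 else "needs adjustment")
```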

Hopefully this short explanation has provided some insight into mixing within Premiere Pro and will help you make sure that your next project sounds great.

©2018 Oliver Peters

Molly’s Game

Molly Bloom’s future looked extremely bright: a shot at Olympic skiing glory, followed by entry into a top law school. But an accident during qualifying trials for the U.S. ski team knocked her out of the running for the Salt Lake City games. (Bloom notes in her own memoir that it was her decision to retire and change the course of her life, rather than the minor accident.) She moved to Los Angeles and ended up running high-stakes, private poker games with her boss at the time. These games included A-list celebrities, hedge fund managers, and eventually, members of the Russian mob. Bloom quickly earned the nickname “poker princess.” This all came crashing down when Bloom was busted by the FBI and sentenced for her role in the gambling ring.

Bloom’s memoir came to the attention of screenwriter Aaron Sorkin (The Social Network, Moneyball, Steve Jobs), who not only made it his next film script, but also his debut as a film director. Sorkin stayed close to the facts that Bloom described in her own memoir and consulted her during the writing of the screenplay. The biggest departure is that Bloom named some celebrities at these games, who had previously been revealed in released court documents. Sorkin opted to fictionalize them, explaining that he would rather focus the story on Bloom’s experiences and not on Hollywood gossip. Jessica Chastain (The Zookeeper’s Wife, A Most Violent Year, Zero Dark Thirty) stars as Molly Bloom.

Although three editors are credited on Molly’s Game, the back story is that a staggered schedule had to be worked out. The post production of Steve Jobs connected feature film editor Elliot Graham (Milk, 21, Superman Returns) with that film’s writer and director – Sorkin and Danny Boyle (T2 Trainspotting, 127 Hours, Slumdog Millionaire). Graham was tapped to cut Molly’s Game later into the process, replacing its original editor. He brought Josh Schaeffer (The Last Man on Earth, Detroiters, You’re the Worst) on as associate editor to join him. Graham started the recut with Schaeffer, but a prior commitment to work on Trust for Boyle saw him exit the film early. (Trust is the BBC’s adaptation of the Getty kidnapping story.) Graham was able to bring the film about 50% of the way through post. Alan Baumgarten (Trumbo, American Hustle, Gangster Squad) picked up for Graham and edited with Schaeffer to the finish, thus earning all three an editing credit.

Working with a writer on his directorial debut

It can always be a challenge when a writer is close to the editing process. Scenes that may be near and dear to the writer are often cut, leading to tension. I asked the three about this situation. Graham says, “Aaron has always been on set with his other films and worked very closely with the director. So, he understands the process, having learned from some of the best directors in the business. I had a great time with Aaron on Steve Jobs. He’s an incredibly lovely and generous collaborator who brings out the best in his team.”

Baumgarten expands, “Working with Aaron was fun, because he appreciates being challenged. He’s open to seeing what an editor brings to the film. Aaron wrote a tight script that didn’t need to be re-arranged. Only about 20 minutes came out. We cut one small scene, but it was mostly trimming here and there. You want to be careful not to ruin the rhythm of his writing.”

Graham continues, “Aaron also found his own visual vocabulary. A lot of the story is told in time jumps, from present day to the past in flashbacks. Aaron is always looking for rapid-fire, overlapping dialogue. It’s part of his uniqueness and it’s a joy to cut. What was new for Aaron was using voice-over to drive things.”

Another new challenge was the use of stock footage. About 150 stock shots were used for cutaways and mini-montages throughout the film. Most of these were never originally scripted. Graham says, “Stock footage was something I chose to start injecting into the film with Aaron’s collaboration when I came on. We felt it was useful to have visual references for some of the voice overs – to connect visuals with words, which helps to land Aaron’s linguistic ideas for viewers. This began with the opening ski sequence – the first thing I cut when I came on board.”

The editors would pull down shots from a variety of internet sources and then the actual footage had to be found and cleared. The editors ultimately partnered with STALKR to find and clear all of the stock shots that were used. Visual effects were handled by Mr. X in Toronto. Originally, only 90 shots were budgeted (for example, snow falling in the ski sequences), but in the end, there were almost 600 visual effects shots in the final film.

Musicality of the performance

Baumgarten explains the musicality of Sorkin’s style. He says, “Aaron knew the film he wanted and had that in his head. Part of his writing process is to read his dialogue out loud and listen for the cadence of the performance. As you go through takes, the film is always moving in the right direction. As a writer/director, he doesn’t need variations or ad libs in an actor’s performance from one take to another, because he knows what the intention of the line is. As editors, we didn’t need to experiment with different calibrations of the performance. The experimentation came in with how we wove in the voice-over and played with the general rhythm.”

Graham adds, “Daniel Pemberton is the composer I worked with on Steve Jobs. I brought on Carl Kaller, a great music editor, when I came on. I knew that the music and dialogue had to dance a beautiful rhythm together for the film to be its best. With a compressed schedule to finish the film, we needed someone like Carl to help choreograph that dance.”

Baumgarten continues, “Daniel was involved early and provided us with temp tracks, which was a great gift. We didn’t have to use scores from other composers as temp music. Carl was just down the hall, so it was easy to weave Daniel’s temp elements in and around the dialogue and voice-over during the editing stage. There is interplay between the voice-over and the music, and the VO is like another musical element.”

Avid for the post

The post operation followed a standard feature film set-up: Avid Media Composer editing workstations tied to Avid ISIS shared storage. The film was shot digitally using ARRI Alexas.

Production covered 48 days ending in February [2017]. It took 10 weeks to get to a director’s cut and then editing on Molly’s Game continued for about six months, which included visual effects, final sound mix and color correction. Schaeffer explains, “The dialogue scenes were scripted using [Avid] ScriptSync. Aaron was familiar with ScriptSync from The Newsroom, and it was a great help for us on this film. It’s the best way to have everything readily available and it allows us to be extremely thorough. If Aaron wanted to change a single word in a take, we were always able to find all of the alternates and make the change quite easily.”

Schaeffer continues, “Aaron methodically worked in a reel-by-reel order. We would divide up sequences between us at breaks that made sense. But when it came time to review the cut on a sequence, we would all review together. A lot of people think that you have three editors on a film because the project is so difficult. The truth is that it lets you be more creative. Productions shoot so much footage these days that it’s great to be able to experiment. Having multiple editors on a film enables you to take the time to be creative. We were all glad that Aaron set up an environment that made that possible.”

Originally written for Digital Video magazine / Creative Planet Network

©2018 Oliver Peters

Blackmagic Design DaVinci Resolve 14

DaVinci Resolve has made its mark as one of the premier color correction applications for the film and video industries. With the introduction of Resolve 14*, it’s clear that Blackmagic Design has set its sights higher. Advanced editing functions and the inclusion of the Fairlight audio engine put Resolve on track to be the industry’s latest all-in-one post-production powerhouse. I’ve reviewed Resolve in the past as a grading application, but my focus here is editing. Right at the start, let me paraphrase the judges on History Channel’s Forged in Fire series – ‘This NLE can cut!’ If you have no prior allegiances to other editing platforms, then using Resolve as your NLE of choice is a no-brainer.

(*This review was originally written right after the release of Resolve 14 in late 2017.)

DaVinci Resolve 14 comes in two flavors, DaVinci Resolve 14 (free) and DaVinci Resolve Studio ($299). Upgrades have been free to date. It’s the only NLE to support three operating systems: macOS, Windows, and Linux. Mac users also have the option to download Resolve (free) or purchase Resolve Studio through the Apple Mac App Store. These versions are basically the same as those on Blackmagic Design’s website, but with some differences, due to the requirement that App Store software be sandboxed.

Resolve offers the majority of the same features as Resolve Studio. The primary limitations are that exports are capped at UltraHD (3840×2160), and that features such as stereo3D, lens distortion correction, noise reduction, and collaboration require Resolve Studio. Regardless of the version, Resolve is a very deep application that’s been battle-tested through years of high-pressure, enterprise-grade deployment. But is that enough to sway loyal Final Cut Pro X, Premiere Pro, or Media Composer editors to switch? There’s certainly interest, as Stephen Mirrione pointed out in my recent Suburbicon interview, so I wouldn’t be surprised to hear news of a TV show or small feature film being edited with Resolve in the coming year.

The all-in-one concept

Creating a single application that’s good at many different tasks can be daunting and more often than not has been unsuccessful. In the case of Resolve, Blackmagic Design has taken a modal approach by splitting the interface into five pages: Media (ingest/import), Edit, Color, Fairlight (audio mixing), and Deliver (export/output).

The workflow follows a logical, left-to-right path through these five stages of post-production. With each page/mode change, the user interface is reconfigured to best suit the task at hand. The Edit page sports a standard source/record/bin/track layout similar to Media Composer, Premiere Pro, or Final Cut Pro 7. Color switches to the familiar tools and nodes of DaVinci color correction. The Fairlight mixing page isn’t just a mimic of the Fairlight interface. The engineers completely swapped out the audio guts of Resolve and replaced it with the Fairlight audio engine.

Not only is the interface that of a respected DAW, but it is also possible to expand your system with Fairlight’s audio acceleration card, as well as add a Fairlight mixing desk. This means that in a multi-suite facility, you can have task-specific rooms optimized for editing, color grading, or audio mixing – all using the exact same software application without the need for roundtrips or other list translations.

But does it work?

I put both versions of Resolve 14 through their paces and the application is reasonably solid, given how much has changed from version 12 (there was no version 13). General media management, editing, and audio processing are top notch. If you want audio/video output, Blackmagic Design Decklink or UltraStudio hardware is required. There is also a Cinema viewer function for fullscreen viewing on your computer display. With dual displays, the edit interface can be on one, with fullscreen video on the other.

The Fairlight mode will likely require a bit of rethinking by editors used to mixing audio in other NLEs, since it uses a DAW-style interface. Many well-known physical mixing consoles, like those from Solid State Logic, feature channel strips with built-in EQs, compressors, etc. That’s how Fairlight treats these software channels or tracks. Each track can have its own combination of Fairlight audio processing functions. Stick with those and you’ll be happy, although other audio filters on your computer, like Apple AU plug-ins, are accessible. Mixing and audio editing are good, with subframe accuracy, and the 14.1 update added linked groups to lock faders together. The pace of Fairlight integration was quite fast, but it’s still a bit rough. I encountered a number of application crashes, but only in the Fairlight page while scrubbing audio.

Whether or not you like the editing is more a function of personal style and preference. The user interface design is a lot like Final Cut Pro X, except with bins and tracks. Interface windows, tabs, and panels can be opened or pulled down into various screen configurations, but you don’t have freeform control over size and position. Clearly Premiere Pro is king in that department. Some design choices aren’t consistent. For example, you can’t enable a single-viewer layout when using two displays.

Multicam editing is solid, but I experienced a small bit of latency in the viewer when cutting camera angles on-the-fly. It’s minor and may or may not bother you. You can sync clips by various methods, such as timecode or waveform, but oddly, it seemed to be too lax. In my tests, it would frequently sync clips when no sync relationship actually existed.

There are a number of things in Resolve’s design that take getting used to. For example, a Resolve project is locked to the frame rate you picked when that new project was created – same as with Avid. This means you can’t mix sequences with different frame rates within the same project. There are no adjustment layers, although you can fake it in the Color page by using clip and program-based corrections. Color management via LUTs (look-up tables) is much deeper than any other NLE. You can set color management with LUTs to be global, which is best when the project uses only one camera type. Conversely, input LUTs may be applied singly or in a batch to specific cameras in a bin. But, when you do that, the LUT process doesn’t show up in the color correction node (only its result), when you switch to the Color page. On the plus side, real time performance has been improved from previous versions and the built-in effects include filters that you don’t often find in the basic build of other NLEs, like glow and watercolor effects. In addition to great built-in effects, third-party OpenFX packages, like Boris Continuum Complete and Sapphire are also available.

Collaboration

Resolve uses bin-locking like Avid Media Composer. The first editor to open a bin has read/write permission to it. Any other editor can open that same bin in a read-only mode. For example, in a long-form project, separate bins might be organized for Act 1, Act 2, and so on. Different editors can separately work on parts of the film at the same time. Since this all happens in a single database file, it always reflects the most current state of the project.

To set up shared projects, a different PostgreSQL database is required, which is installed through the custom options of the installer. Make sure you are using the most recent version when upgrading Resolve, since the older versions of PostgreSQL are no longer compatible with the newest OS versions. One machine on the network hosts this database and then other workstations connect to that database to access the Resolve projects. Only that host machine needs to have PostgreSQL software installed on it. The process of adding and connecting shared databases has been improved and simplified with the release of 14.1.1 (and later), which now includes an additional server set-up utility application.
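If a client machine can’t connect to a shared database, one quick sanity check is whether the host’s PostgreSQL port is even reachable over the network. A minimal sketch using only Python’s standard library (5432 is PostgreSQL’s default port, and the host name is a placeholder; a Resolve database server may be configured differently):

```python
import socket

def postgres_reachable(host, port=5432, timeout=3.0):
    """Return True if a TCP connection to the database port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# False from a client workstation, while the host itself connects fine,
# points toward a firewall blocking the port.
print(postgres_reachable("resolve-host.local"))
```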

In testing collaboration features, I initially ran into set-up problems. These were eventually fixed when I disabled the macOS firewall on the host machine, which was blocking access from the other connected Macs to its shared database. This took some back and forth with Blackmagic Design’s helpful support engineers until we figured out why I was getting the connection errors. Since I had to return the additional “dongle” (USB license key) before this was fixed, I wasn’t able to test two editors simultaneously editing within the same open project. However, the ability to open any shared project from any qualified computer on the network was just fine.

DaVinci Resolve Micro Panel

I also tested the smaller, bus-powered DaVinci Resolve Micro panel. The Micro panel is just the right size for an editor or a DIT on set. It’s smaller than the Mini (tested previously in another review), because it doesn’t have the upward-slanting portion in the back; therefore, it’s a better physical fit between your computer keyboard and display. You don’t have to shuffle desk real estate between tools, as you do with the Mini. In spite of not having the extra controls and LCD displays of the Mini, the Micro panel combines most of the control functions you need for fast grading. If you are an editor who is heavy into color correction, then this is a must-have for Resolve.

I took an instant liking to the Micro. You can use both hands to quickly and intuitively work the trackballs and knob controls, making for faster and better correction. It’s tactile, with next and previous clip buttons to quickly advance through the timeline, so you can keep your eyes on the screen. I grade in Resolve, Avid, Premiere Pro, and Final Cut Pro X, and all of that is with a mouse. Using the panel easily resulted in faster grading by a factor of at least 3X or 4X. I also achieved better-looking corrections with fewer steps or processes than grading in any of these other applications.

Conclusion

Overall, there’s a lot to love about Resolve, in spite of a few rough edges. In general, it seems more stable under macOS Sierra than with High Sierra. If you use Resolve on a Mac, then you are stuck dealing with Apple’s platform changes. For example, recent Macs that use an Nvidia GPU are at a disadvantage under High Sierra, because Nvidia is only now developing CUDA drivers for this OS. I experienced a number of crashes running Resolve 14 on my 2014 MacBook Pro until I manually changed the hardware configuration in Resolve’s preferences from CUDA to Metal. When I installed what was supposed to be the newest CUDA driver, I still received a prompt that no CUDA-compliant card was present. But it’s working fine using Metal. Macs with AMD GPUs should be fine.

Resolve 14 is a dense tool, with a lot of depth in various menus, which some may find daunting. This review would be a lot longer if I went even deeper into the many specific features of this application. Yet, it is easy for new users to hit the ground running and then learn as they go. For many, this is their mythical “Final Cut Pro 8”. In any case, DaVinci Resolve 14 is the best incarnation of the all-in-one concept to date. If you add Blackmagic Design’s Fusion visual effects software into the mix (also available in free and paid versions), the result is a combination that’s tough to beat at any price.

Blackmagic Design’s engineers have shown impressive development over a very short period of time, so I fully expect Blackmagic to give the three “A” companies a run for their money. Even if you use another tool as your main editing application, Resolve is a great addition to the toolbox. Using it becomes addictive. Give it a try and you might just find it becomes your first choice.

©2017, 2018 Oliver Peters