Beyond the Supernova

No one typifies hard-driving, instrumental guitar rock better than Joe Satriani. The guitar virtuoso – known to his fans as Satch – has sixteen studio albums under his belt, along with several EPs, live concert recordings, and compilations. In addition to his solo tours, Satriani founded G3, a series of short tours that feature Satriani along with a changing cast of two other all-star solo guitarists, such as Steve Vai, Yngwie Malmsteen, and Guthrie Govan. In another side project, Satriani is the guitarist for the supergroup Chickenfoot, which is fronted by former Van Halen lead singer Sammy Hagar.

The energy behind Satriani’s performances was captured in the new documentary film Beyond the Supernova, which is currently available on the Stingray Qello streaming channel. This documentary grew out of general behind-the-scenes coverage of Satriani’s 2016 and 2017 tours of Asia and Europe to promote his 15th studio album, Shockwave Supernova. Tour filming was handled by Satriani’s son, ZZ (Zachariah Zane) – an up-and-coming young filmmaker. The tour coincided with Joe Satriani’s 60th birthday and the 30th anniversary of his multi-platinum-selling album Surfing with the Alien. These elements, along with Satriani’s introspective nature, provided the ingredients for a more in-depth project, which ZZ Satriani produced, directed, and edited.

According to Joe Satriani in an interview on Stingray’s PausePlay, “ZZ was able to capture the real me in a way that only a son would understand how to do; because I was struggling with how I was going to record a new record and go in a new direction. So, as I’m on the tour bus and backstage – I guess it’s on my face. He’s filming it and he’s going ‘there’s a movie in here about that. It’s not just a bunch of guys on tour.’”

From music to filmmaking

ZZ Satriani graduated from Occidental College in 2015 with a BA in Art History and Visual Arts, with a focus on film production. He moved to Los Angeles to start a career as a freelance editor. I spoke with ZZ Satriani about how he came to make this film. He explained, “For me it started with skateboarding in high school. Filmmaking and skateboarding go hand-in-hand. You are always trying to capture your buddies doing cool tricks. I gravitated more to filmmaking in college. For the 2012 G3 Tour, I produced a couple of web videos that used mainly jump cuts and were very disjointed, but fun. They decided to bring me on for the 2016 tour in order to produce something similar. But this time, it had to have more of a story. So I recorded the interviews afterwards.”

Although ZZ thinks of himself as primarily an editor, he handled all of the backstage, behind-the-scenes, and interview filming himself, using a Sony PXW-FS5 camera. He comments, “I was learning how to use the camera as I was shooting, so I got some weird results – but in a good way. I wanted the footage to have more of a filmic look – to have more the feeling of a memory than simply real-time events.”

The structure of Beyond the Supernova intersperses concert performances with events on the tour and introspective interviews with Joe Satriani. The multi-camera concert footage was supplied by the touring support company and is often mixed with historical footage provided by Joe Satriani’s management team. This enabled ZZ to intercut performances of the same song, not only from different locations, but even different years, going back to Joe Satriani’s early career.

The style of cutting the concert performances is relatively straightforward, but the travel and interview bridges that join them together have more of a stream-of-consciousness feel to them and are often quite psychedelic. ZZ says, “I’m not a big [Adobe] After Effects guy, so all of the ‘effects’ are practical and built up in layers within [Adobe] Premiere Pro. The majority of ‘effects’ dealt with layering, blending and cropping different clips together. It makes you think about the space within the frame – different shapes, movement, direction, etc. I like playing around that way – you end up discovering things you wouldn’t have normally thought of. Let your curiosity guide you, keep messing with things and you will look at everything in a new way. It keeps editing exciting!”

Premiere Pro makes the cut

Beyond the Supernova was completely cut and finished in Premiere Pro. ZZ explains why, “Around 2011-12, I made the switch from [Apple] Final Cut Pro to Premiere Pro while I was in a film production class. They informed us that it was the new standard, so we rolled with it and the transition was very smooth. I use other apps in the Adobe suite and I like the layout of everything in each one, so I’ve never felt the need to switch to another NLE.”

ZZ Satriani continues, “We had a mix of formats to deal with, including the need to upscale some of the standard definition footage to HD, which I did in software. Premiere handled the PXW-FS5’s XAVC-L codec pretty well in my opinion. I didn’t transcode to ProRes, since I had so much footage and not a lot of external hard drive space. I knew this might make things go more slowly – but honestly, I didn’t notice any significant drawbacks. I also handled all of the color correction, using Premiere’s Lumetri color controls and the FilmConvert plug-in.” Satriani created the sound design for the interview segments, but John Cuniberti (who has also mixed Joe Satriani’s albums) re-mixed the live concert segments in his studio in London. The final 5.1 surround mix of the whole film was handled at Skywalker Sound.

The impetus pushing completion was entry into the October 2017 Mill Valley Film Festival. ZZ says, “I worked for a month putting together the trailer for Mill Valley. Because I had already organized the footage for this and an earlier teaser, the actual edit of the film came easily. It took me about two months to cut – working by myself in the basement on a [2013] Mac Pro. Coffee and burritos from across the street kept me going.” 

Introspection brings surprises

Fathers and sons working together can often be an interesting dynamic and even ZZ learned new things during the production. He comments, “The title of the film evolved out of the interviews. I learned that Joe’s songs on an album tend to have a theme tied to the theme of the album, which often has a sci-fi basis to it. But it was a real surprise to me when Joe explained that Shockwave Supernova was really his character or persona on stage. I went, ‘Wait! After all these years, how did I not know that?’”

As with any film, you have to decide what gets cut and what stays. In concert projects, the decision often comes down to which songs to include. ZZ says, “One song that I initially thought shouldn’t be included was Surfing with the Alien. It’s a huge fan favorite and such an iconic song for Joe. Including it almost seemed like giving in. But, in a way it created a ‘conflict point’ for the film. Once we added Joe’s interview comments, it worked for me. He explained that each time he plays it live that it’s not like repeating the past. He feels like he’s growing with the song – discovering new ways to approach it.”

The original plan for Beyond the Supernova after Mill Valley was to showcase it at other film festivals. But Joe Satriani’s management team thought that it coincided beautifully with the release of his 16th studio album, What Happens Next, which came out in January of this year. Instead of other film festivals, Beyond the Supernova made its video premiere on AXS TV in March and then started its streaming run on Stingray Qello this July. Qello is known as a home for classic and new live concerts, so this exposes the documentary to a wider audience. Whether you are a fan of Joe Satriani or just rock documentaries, ZZ Satriani’s Beyond the Supernova is a great peek behind the curtain into life on the road and some of the thoughts that keep this veteran solo performer fresh.

Images courtesy of ZZ Satriani.

©2018 Oliver Peters

Hawaiki AutoGrade

The color correction tools in Final Cut Pro X are nice. Adobe’s Lumetri controls make grading intuitive. But sometimes you just want to click a few buttons and be happy with the results. That’s where AutoGrade from Hawaiki comes in. AutoGrade is a full-featured color correction plug-in that runs within Final Cut Pro X, Motion, Premiere Pro and After Effects. It is available from FxFactory and installs through the FxFactory plug-in manager.

As the name implies, AutoGrade is an automatic color correction tool designed to simplify and speed-up color correction. When you install AutoGrade, you get two plug-ins: AutoGrade and AutoGrade One. The latter is a simple, one-button version, based on global white balance. Simply use the color-picker (eye dropper) and sample an area that should be white. Select enable and the overall color balance is corrected. You can then tweak further, by boosting the correction, adjusting the RGB balance sliders, and/or fine-tuning luma level and saturation. Nearly all parameters are keyframeable, and looks can be saved as presets.
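
To see what a one-button global balance is doing conceptually, here is a minimal sketch of the idea in Python – my own illustration, not Hawaiki’s actual algorithm: sample a patch the user says should be white, then scale each channel so that patch reads as neutral.

```python
import numpy as np

def auto_white_balance(image, sample_box):
    """Rough global white balance: scale R, G, and B so a sampled
    'should-be-white' patch becomes neutral gray.
    image is a float array in [0, 1] with shape (height, width, 3)."""
    y0, y1, x0, x1 = sample_box                 # region the user eye-dropped
    patch = image[y0:y1, x0:x1].reshape(-1, 3)
    avg = patch.mean(axis=0)                    # average R, G, B of the patch
    gains = avg.mean() / np.maximum(avg, 1e-6)  # push each channel toward gray
    return np.clip(image * gains, 0.0, 1.0)     # clamp out-of-range values
```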

AutoGrade One is just a starter, though, for simple fixes. The real fun is with the full version of AutoGrade, which is a more comprehensive color correction tool. Its interface is divided into three main sections: Auto Balance, Quick Fix, and Fine-Tune. Instead of a single global balance tool, the Auto Balance section permits global correction, as well as any combination of white, black, and/or skin correction. Simply turn on one or more desired parameters, sample the appropriate color(s), and enable Auto Balance. This tool will also raise or lower luma levels for the selected tonal range.

Sometimes you might have to repeat the process if you don’t like the first results. For example, when you sample the skin on someone’s face, sampling rosy cheeks will yield different results than if you sample the yellowish highlights on a forehead. To try again, just uncheck Auto Balance, sample a different area, and then enable Auto Balance again. In addition to an amount slider for each correction range, you can also adjust the RGB balance for each. Skin tones may be balanced towards warm or neutral, and the entire image can be legalized, which clamps video levels to 0-100.

Quick Fix is a set of supplied presets that work independently of the color balance controls. These include some standards, like cooling down or warming up the image, the orange-and-teal look, adding an s-curve, and so on. They are applied at 100% and, to my eye, felt a bit harsh at this default. To tone down the effect, simply lower the amount slider.

Fine-Tune rounds it out when you need to take a deeper dive. This section is built as a full-blown, 3-way color corrector. Each range includes a luma and three color offset controls. Instead of wheels, these controls are sliders, but the results are the same as with wheels. In addition, you can adjust exposure, saturation, vibrance, temperature/tint, and even two different contrast controls. One innovation is a log expander, designed to make it easy to correct log-encoded camera footage, in the absence of a specific log-to-Rec709 camera LUT.

Naturally, any plug-in could always offer more, so I have a minor wish list. I would love to see five additional features: film grain, vignette, sharpening, blurring/soft focus, and a highlights-only expander. There are certainly other individual filters that cover these needs, but having it all within a single plug-in would make sense. This would round out AutoGrade as a complete, creative grading module, servicing user needs beyond just color correction looks.

AutoGrade is a deceptively powerful color corrector, hidden under a simple interface. User-created looks can be saved as presets, so you can quickly apply complex settings to similar shots and set-ups. There are already many color correction tools on the market, including Hawaiki’s own Hawaiki Color. The price is very attractive, so AutoGrade is a superb tool to have in your kit. It’s a fast way to color grade that works well for newcomers to color correction and experienced users alike.

©2018 Oliver Peters

Wild Wild Country

Sometimes real life is far stranger than fiction. Such is the tale of the Rajneeshees – disciples of the Indian guru Bhagwan Shree Rajneesh – who moved to Wasco County, Oregon in the 1980s. Their goal was to establish a self-contained, sustainable, utopian community of spiritual followers, but the story quickly took a dark turn. Conflicts with the local Oregon community escalated, culminating in the single largest bioterror attack in United States history, when a group of followers poisoned 751 people at ten local restaurants through intentional salmonella contamination.

Additional criminal activities included attempted murder, conspiracy to assassinate the U.S. Attorney for the District of Oregon, arson, and wiretapping. The community was largely controlled by Bhagwan Shree Rajneesh’s personal secretary, Sheela Silverman (Ma Anand Sheela), who served 29 months in federal prison on related charges. She moved to Switzerland upon her release. Although the Rajneeshpuram community is no more and its namesake is now deceased, the community of followers lives on as the Osho International Foundation. This slice of history has now been chronicled in the six-part Netflix documentary Wild Wild Country, directed by Chapman and Maclain Way.

Documentaries are truly an editor’s medium. More so than any other cinematic genre, the final draft of the script is written in the cutting room. I recently interviewed Wild Wild Country’s editor, Neil Meiklejohn, about putting this fascinating tale together.

Treasure in the archives

Neil Meiklejohn explains, “I had worked with the directors before to help them get The Battered Bastards of Baseball ready for Sundance. That is also an Oregon story. While doing their research at the Oregon Historical Society, the archivist turned them on to this story and the footage available. The 1980s was an interesting time in local broadcast news, because that was a transition from film to video. Often stories were shot on film and then transferred to videotape for editing and airing. Many times stations would simply erase the tape after broadcast and reuse the stock. The film would be destroyed. But in this case, the local stations realized that they had something of value and held onto the footage. Eventually it was donated to the historical society.”

“The Rajneeshees on the ranch were also very proud of what they were doing – farming and building a utopian city – so, they would constantly invite visitors and media organizations onto the ranch. They also had their own film crews documenting this, although we didn’t have as much access to that material. Ultimately, we accumulated approximately 300 hours of archival media in all manner of formats, including Beta-SP videotape, ripped DVDs, and files pulled from the internet. It also came in different frame rates, since some of the sources were international. On top of the archival footage, the Ways also recorded another 100 hours of new interviews with many of the principals involved on both sides of this story. That was RED Dragon 6K footage, shot in two-camera, multi-cam set-ups. So, pretty much every combination you can think of went into this series. We just embraced the aesthetic defects and differences – creating an interesting visual texture.”

Balancing both sides of the story

“Documentaries are an editor’s time to shine,” continues Meiklejohn. “We started by wanting to tell the story of the battle between the cult and the local community without picking sides. This really meant that each scene had to be edited twice. Once from each perspective. Then those two would be combined to show both sides as point-counterpoint. Originally we thought about jumping around in time. But, it quickly became apparent that the best way to tell the story was as a linear progression, so that viewers could see why people did what they did. We avoided getting tricky.”

“In order to determine a structure to our episodes, we first decided the ‘ins’ and ‘outs’ for each and then the story points to hit within. Once that was established, we could look for ‘extra gold’ that might be added to an episode. We would share edits with our executive producers and Netflix. On a large research-based project like this, their input was crucial to making sure that the story had clarity.”

Managing the post production

Meiklejohn normally works as an editor at LA post facility Rock Paper Scissors. For Wild Wild Country, he spent ten months in 2017 at an ad hoc cutting room located at the offices of the film’s executive producers, Jay and Mark Duplass. His set-up included Apple iMacs running Adobe Creative Cloud software, connected to an Avid ISIS shared storage network. Premiere Pro was the editing tool of choice.

Meiklejohn says, “The crew was largely the directors and myself. Assistant editors helped at the front end to get all of the media organized and loaded, and then again when it came time to export files for final mastering. They also helped to take my temp motion graphics – done in Premiere – and then polish them in After Effects. These were then linked back into the timeline using Dynamic Link between Premiere and After Effects. Chapman and Maclain [Way] were very hands-on throughout, including scanning in stills and prepping them in Photoshop for the edit. We would discuss each new segment to sort out the best direction the story was taking and to help set the tone for each scene.”

“Premiere Pro was the ideal tool for this project, because we had so many different formats to deal with. It dealt well with the mess. All of the archival footage was imported and used natively – no transcoding. The 6K RED interview footage was transcoded to ProRes for the ‘offline’ editing phase. A lot of temp mixing and color correction was done within Premiere, because we always wanted the rough cuts to look smooth with all of the different archival footage. Nothing should be jarring. For the ‘online’ edit, the assistants would relink to the full-resolution RED raw files. The archival footage was already linked at its native resolution, because I had been cutting with that all along. Then the Premiere sequences were exported as DPX image sequences with notched EDLs and sent to E-Film, where color correction was handled by Mitch Paulson. Unbridled Sound handled the sound design and mix – and then Encore handled mastering and 1080p deliverables.”

Working with 400 hours of material and six hour-long episodes in Premiere might be a concern for some, but it was flawless for Meiklejohn. He continues, “We worked the whole series as one large project, so that at any given time, we could go back to scenes from an earlier episode and review and compare. The archival material was organized by topic and story order, with corresponding ‘selects’ sequences. As the project became bigger, I would pare it down by deleting unnecessary sequences and saving a newer, updated version. So, no real issue by keeping everything in a single project.”

As with any real-life event, where many of the people involved are still alive, opinions will vary as to how balanced the storytelling is. Former Rajneeshees have both praised and criticized the focus of the story. Meiklejohn says, “Sheela is one of our main interview subjects and in many ways, she is both the hero and the villain of this story. So, it was interesting to see how well she has been received on social media and in the public screenings we’ve done.”

Wild Wild Country shares a pointed look into one of the most bizarre clashes in the past few decades. Meiklejohn says, “Our creative process was really focused on the standoff between these two groups and the big inflection points. I tried to let the raw emotions that you see in these interviews come through and linger a bit on-screen to help inform the events that were unfolding. The story is sensational in and of itself, and I didn’t want to distract from that.”

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Editing the FX Series Atlanta

Atlanta just wrapped its second season on the FX Network. The brainchild of actor/writer/producer/director Donald Glover, Atlanta is the story of Earn Marks, a Princeton drop-out who returns home to Atlanta, where he decides to manage his cousin’s rap career. The show is very textural and plot is secondary. It loosely follows Earn and the people in his life – specifically his cousin, Paper Boi, an up-and-coming rapper, and his friend and posse-mate, Darius.

The visual architect of the show is director Hiro Murai, who has directed the majority of the episodes. He has set an absurdist tone for much of the story. Any given episode can be wildly different from the episodes that come on either side of it. The episodes taken as a whole make up what the series is about.

I recently had a chance to interview the show’s editors, Kyle Reiter and Isaac Hagy, about working on Atlanta and their use of Adobe Premiere Pro CC to edit the series.

Isaac Hagy: “I have been collaborating with Hiro for years. We went to college together and ever since then, we’ve been making short films and music videos. I started out doing no-budget music videos, then eventually moved into documentaries and commercials, and now television. A few years ago, we made a short film called Clapping for the Wrong Reasons, starring Donald. That became kind of an aesthetic precursor that we used in pitching this show. It served as a template for the tone of Atlanta.”

“I’ve used pretty much every editing software under the sun – cutting short films in high school on iMovie, then Avid in college when I went to film school at USC. Once I started doing short film projects, I found Final Cut Pro to be more conducive to quick turnarounds than Avid. I used that for five or six years, but then they stopped updating it, so I needed to switch over to a more professional alternative. Premiere Pro was the easiest transition from Final Cut Pro and, at that time, Premiere was starting to be accepted as a professional platform. A lot of people on the show come from a very DIY background, where we do everything ourselves. Like with the early music videos – I would color and Hiro would do effects in After Effects. So, Premiere was a much more natural fit. I am on a show using [Avid] Media Composer right now and it feels like a step backwards.”

With a nod to their DIY ethos, post-production for Atlanta also follows a small, collective approach. 

Kyle Reiter: “We rent a post facility that is just a single-story house. We have a DIY server called a NAS that one of our assistants built and all the media is stored there. It’s just a tower. We brought in our own desktop iMacs with dual monitors that we connect to the server over Ethernet. The show is shot with ARRI Amira cameras in a cinema 2K format. Then that is transcoded to proxy media for editing, which makes it easy to manage. The color correction is done in Resolve. Our assistant editors online it for the colorist, so there’s no grading in-house.” Atlanta airs on the FX Network in the 720p format.

The structure and schedule of this production make it possible to use a simple team approach. Projects aren’t typically shared among multiple editors and assistants, so a more elaborate infrastructure isn’t required to get the job done. 

Isaac Hagy: “It’s a pretty small team. There’s Kyle and myself. We each have an assistant editor. We just split the episodes, so I took half of the season and Kyle the other half. We were pretty self-contained, but because there were an odd number of episodes, we ended up sharing the load on one of them. I did the first cut of that episode and Kyle took it through the director’s cut. But other than that, we each had our individual episodes.”

Kyle Reiter: “They’re in Atlanta for several months shooting. We’ll spend five to seven days doing our cut and then typically move on to the next thing, before we’re finished. That’s just because they’re out of town for several months shooting and then they’ll come back and continue to work. So, it’s actually quite a bit of time calendar-wise, but not a lot of time in actual work hours. We’ll start by pulling selects and marking takes. I do a lot of logging within Premiere. A lot of comments and a lot of markers about stuff that will make it easy to find later. It’s just breaking it down to manageable pieces. Then from there, going scene-by-scene, and putting it all together.”

Many scripted television series that are edited on Avid Media Composer rely on Avid’s script integration features. This led me to wonder whether Reiter and Hagy missed such tools in Premiere Pro.

Isaac Hagy: “We’re lucky that the way in which the DP [Christian Sprenger] and the director shoot the series is very controlled. The projects are never terribly unwieldy, so really simple organizing usually does the trick.”

Kyle Reiter: “They’re never doing more than a handful of takes and there aren’t more than a handful of set-ups, so it’s really easy to keep track of everything. I’ve worked with editors that used markers and just mark every line and then designate a line number; but, we don’t on this show. These episodes are very economical in how they are written and shot, so that sort of thing is not needed. It would be nice to have an Avid ScriptSync type of thing within Premiere Pro. However, we don’t get an unwieldy amount of footage, so frankly it’s almost not necessary. If it were on a different sort of show, where I needed that, then absolutely I would do it. But this is the sort of show I can get away with not doing it.”

Kyle Reiter: “I’m on a show right now being cut on Media Composer, where there are 20 to 25 takes of everything. Having ScriptSync is a real lifesaver on that one.”

Both editors are fans of Premiere Pro’s advanced features, including the ability to use it with After Effects, along with the new sound tools added in recent versions.

Isaac Hagy: “In the offline, we create some temp visual effects to set the concepts. Some of the simpler effects do make it into the show. We’ll mock it up in Premiere and then the AEs [assistant editors] will bring it into After Effects and polish the effect. Then it will be Dynamic Link-ed back into the Premiere timeline.”

“We probably go deeper on the sound than any other technical aspect of the show. In fact, a lot of the sound that we temp for the editor’s cut will make it to the final mix stage. We not only try to source sounds that are appropriate for a scene, but we also try to do light mixing ourselves – whether it’s adding reverb or putting the sound within the space – just giving it some realism. We definitely use the sound tools in Premiere quite a bit. Personally, I’ve had scenes where I was using 30 tracks just for sound effects.”

“I definitely feel more comfortable working in sound in Premiere than in Media Composer – and even than I felt in Final Cut. It’s way easier working with filters, mixing, panning, and controlling multiple tracks at once. This season we experimented with the Essential Sound Panel quite a bit. It was actually very good in putting a song into the background or putting sound effects outside of a room – just creating spaces.”

When a television series or film is about the music industry, the music in the series plays a principal role. Sometimes that is achieved with a composed score and on other shows, the soundtrack is built from popular music.

Kyle Reiter: “There’s no score on the show that’s not diegetic music, so we don’t have a composer. We had one episode this year where we did have score. Flying Lotus and Thundercat are two music friends of Donald’s that scored the episode. But other than that, everything else is just pop songs that we put into the show.”

Isaac Hagy: “The decision of which music to use is very collaborative. Some of the songs are written in the script. A lot are choices that Kyle and I make. Hiro will add some. Donald will add some. We also have two great music supervisors. We’re really lucky that we get nearly 90% of the music that we fall in love with cleared. But when we don’t, our music supervisors recommend some great alternatives. We’re looking for an authenticity to the world, so we try to rely on tracks that exist in the real world.”

Atlanta provides an interesting look at the fringes of the city’s hip-hop culture. A series that has included an alligator and Donald Glover in weird prosthetic make-up – and where Hiro Murai takes inspiration from The Shining – certainly isn’t your run-of-the-mill television series. It definitely leaves fans wanting more, but to date, a third season has not yet been announced.

This interview was recorded using the Apogee MetaRecorder for iOS application and transcribed thanks to Digital Heaven’s SpeedScriber.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs became capable of delivering quality video, interview-based production now almost always utilizes multiple cameras. Directors will typically record these sections with two or more cameras at various tangents to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). In addition, if they also shoot in 4K for an HD delivery, then you have the additional ability to cleanly punch-in for even more framing options.

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam. That starts at the time of the original recording. You can either sync by common timecode, common audio, or a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.
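
When every device shares timecode, lining clips up is just arithmetic: convert each clip’s starting timecode to a frame count and offset everything relative to the earliest start. Here is a minimal sketch of that idea in Python – my own illustration, assuming non-drop-frame timecode, not what Premiere does internally:

```python
def timecode_to_frames(tc, fps):
    """Convert an 'HH:MM:SS:FF' timecode string to an absolute frame count
    (non-drop-frame only, for simplicity)."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offsets(start_timecodes, fps=24):
    """Return each clip's offset in frames from the earliest-starting clip."""
    frames = {name: timecode_to_frames(tc, fps) for name, tc in start_timecodes.items()}
    earliest = min(frames.values())
    return {name: f - earliest for name, f in frames.items()}

# Hypothetical example:
# sync_offsets({"A-CAM": "01:00:10:12", "B-CAM": "01:00:08:00", "SOUND": "01:00:05:16"})
```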

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough so that the human editor and/or the audio analysis of the software can discern a match. Sometimes this method will suffer from a minor amount of delay – either because of the inherent offset of the audio recording circuitry within the camera electronics, or because an onboard camera mic was used and the distance to the subject results in a slight delay compared to a lav mic on the subject.
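
Conceptually, audio-based sync works by finding the time offset at which two waveforms line up best. Here is a minimal sketch of that idea, assuming mono sample arrays at the same sample rate – real NLE analysis is far more robust than this:

```python
import numpy as np

def estimate_offset(ref, other, sample_rate):
    """Estimate, in seconds, how far 'other' should be shifted to line up
    with 'ref', using simple cross-correlation of the two mono waveforms."""
    # Normalize so level differences between mics matter less.
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    other = (other - other.mean()) / (other.std() + 1e-9)
    corr = np.correlate(ref, other, mode="full")
    lag = corr.argmax() - (len(other) - 1)   # best-match lag in samples
    return lag / sample_rate
```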

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that when these two sources are mixed in post (rather than using only one source at a time), audio phasing can occur.

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First separate all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound you can choose whether or not to include the camera sound. Then generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras in the viewer by clicking on the corresponding section of the viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera to be shown. Remember that any effects or color corrections you apply in the timeline are applied to the angle that is currently visible, but do not follow it. So, if you change your mind and switch to a different angle, the effects and corrections do not change with it. Therefore, adjustments will be required to the effect or correction for that new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Mixing with Premiere Pro

When budgets permit and project needs dictate, I will send my mixes out-of-house to one of a few regular mixers. Typically that means sending them an OMF or AAF to mix in Pro Tools. Then I get the mix and split-tracks back, drop them into my Premiere Pro timeline, and generate master files.

On the other hand, a lot of my work is cutting simple commercials and corporate presentations for in-house use or the web, and these are often less demanding – 2 to 8 tracks of dialogue, limited sound effects, and music. It’s easy to do the mix inside of the NLE. Bear in mind that I can do such a mix – and often have – in Apple Logic Pro X or Adobe Audition, but the tools inside Premiere Pro are solid enough that I often just keep everything – mix included – inside my editing application. Let’s walk through that process.

Dealing with multiple channels on source clips

Start with your camera files or double-system audio recordings. Depending on the camera model, Premiere Pro will see these source clips as having either stereo (e.g. a Canon C100) or multi-channel mono (e.g. ARRI Alexa) channels. If you recorded a boom mic on channel 1 and a lavaliere mic on channel 2, then these will drop onto your stereo timeline either as two separate mono tracks (Alexa) – or as a single stereo track (C100), with the boom coming out of the left speaker and the lav out of the right. Which one it is will strictly depend on the device used to generate the original recordings.

First, when dual-mic recordings appear as stereo, you have to understand how Premiere Pro deals with stereo sources. Panning in Premiere Pro doesn’t “shift” the audio left, right, or center. Instead, it increases or decreases the relative volume of the left or right half of this stereo field. In our dual-mic scenario, panning the clip or track full left means that we only hear the boom coming out of the left speaker, but nothing out of the right. There are two ways to fix this – either by changing the channel configuration of the source in the browser – or by changing it after the fact in the timeline. Browser changes will not alter the configuration of clips already edited to the timeline. You can change one or more source clips from stereo to dual-mono in the browser, but you can’t make that same type of change to a clip already in your sequence.

Let’s assume that you aren’t going to make any browser changes and instead just want to work in your sequence. If your source clip is treated as dual-mono, then the boom and lav will cut over to tracks 1 and 2 of your sequence – and the sound will be summed in mono on the output to your speakers. However, if the clip is treated as stereo, then it will only cut over to track 1 of your sequence – and the sound will stay left and right on the output to your speakers. When it’s dual-mono, you can listen to one track versus the other, determine which mic sounds the best, and disable the clip with the other mic. Or you can blend the two using clip volume levels.

If the source clip ends up in the sequence as a stereo clip, then you will want to determine which one of the two mics you want to use for the best sound. To pick only one mic, you will need to change the clip’s audio configuration. When you do that, it’s still a stereo clip, however, both “sides” can be supplied by either one of the two source channels. So, both left and right output will either be the boom or the lav, but not both. If you want to blend both mics together, then you will need to duplicate (option-drag) the audio clip onto an adjacent timeline track, and change the audio channel configuration for both clips. One would be set to the boom for both channels and the other set to only the lav for its two channels. Then adjust clip volume for the two timeline clips.
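
To make the distinction concrete, here is a small sketch – my own illustration in Python, not Premiere’s code – of the difference between balance-style panning of a stereo clip and blending dual-mono mics into a summed mono signal:

```python
import numpy as np

def balance_pan(stereo, pan):
    """Balance-style panning, pan in [-1, 1]. Panning full left simply
    mutes the right channel; nothing is 'moved' between channels."""
    left, right = stereo[:, 0], stereo[:, 1]
    left_gain = 1.0 if pan <= 0 else 1.0 - pan
    right_gain = 1.0 if pan >= 0 else 1.0 + pan
    return np.stack([left * left_gain, right * right_gain], axis=1)

def blend_dual_mono(boom, lav, boom_level=1.0, lav_level=0.0):
    """Dual-mono handling: each mic is its own clip/track; the chosen
    blend is heard equally from both speakers."""
    mono = boom * boom_level + lav * lav_level
    return np.stack([mono, mono], axis=1)
```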

Configuring your timeline

Like most editors, while I’m working through the stages of rough cutting on the way to an approved final copy, I will have a somewhat messy timeline. I may have multiple music cues on several tracks with only one enabled – just so I can preview alternates for the client. I will have multiple dialogue clips on a few tracks with some disabled, depending on microphone or take options. But when I’m ready to move to the finishing stage, I will duplicate that sequence to create a “final version” and clean that one up. This means getting rid of any disabled clips, collapsing my audio and video clips to the fewest number of tracks, and using Premiere’s track creation/deletion feature to delete all empty tracks – all so I can have the least amount of visual clutter. 

In other blog posts, I’ve discussed working with additional submix buses to create split-track exports; but, for most of these smaller jobs, I will only add one submix bus. (I will explain its purpose in a moment.) Once created, you will need to open the track mixer panel and route the timeline channels from the master to the submix bus and then the output of the submix bus back to the master.

Plug-ins

Premiere Pro CC comes with a nice set of audio plug-ins, which can be augmented with plenty of third-party audio effects filters. I am partial to Waves and iZotope, but these aren’t essential. However, there are several that I do use quite frequently. These three third-party filters will help improve any vocal-heavy piece.

The first two are Vocal Rider and MV2 from Waves and are designed specifically for vocal performances, like voice-overs and interviews. These can be pricey, but Waves has frequent sales, so I was able to pick these up for a fraction of their retail price. Vocal Rider is a real-time, automatic volume adjustment tool. Set the bottom and top parameters and let Vocal Rider do the rest, by automatically pushing the volume up or down on-the-fly. MV2 is similar, but it achieves this through compression on the top and bottom ends of the range. While they operate in a similar fashion, they do produce a different sound. I tend to pick MV2 for voice-overs and Vocal Rider for interviews.
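
The basic concept behind these vocal levelers can be illustrated with a very simplified gain rider – only a sketch of the general approach, and in no way how Waves actually implements Vocal Rider or MV2:

```python
import numpy as np

def ride_gain(samples, sample_rate, target_db=-18.0, floor_db=-45.0,
              window_sec=0.05, max_gain_db=9.0):
    """Crude gain riding: measure short-term RMS level and nudge passages
    that are audible (above the floor) toward a target level."""
    win = max(1, int(sample_rate * window_sec))
    out = samples.astype(float)
    for start in range(0, len(out), win):
        chunk = out[start:start + win]
        rms_db = 20 * np.log10(np.sqrt(np.mean(chunk ** 2)) + 1e-9)
        if rms_db > floor_db:                      # skip silence and room tone
            gain_db = np.clip(target_db - rms_db, -max_gain_db, max_gain_db)
            out[start:start + win] = chunk * (10 ** (gain_db / 20))
    return out
```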

We all know location audio isn’t perfect, which is where my third filter comes in. FxFactory is known primarily for video plug-ins, but their partnership with CrumplePop has added a nice set of audio filters to their catalog. I find AudioDenoise to be quite helpful and fast in fixing annoying location sounds, like background air conditioning noise. It’s real-time and good-sounding, but like all audio noise reduction, you have to be careful not to overdo it, or everything will sound like it’s underwater.

For my other mix needs, I’ll stick to Premiere’s built-in effects, like EQ, compressors, etc. One that’s useful for music is the stereo imager. If you have a music cue that sounds too monaural, this will let you “expand” the track’s stereo signal so that it is spread more left and right. This often helps when you want the voice-over to cut through the mix a bit better. 
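
Stereo “expansion” of this kind is typically some variation on mid/side processing: the left/right differences (side) are boosted relative to the mono content (mid), which makes the music feel wider and leaves more room in the center for the voice. A hedged sketch of that idea – not necessarily how Premiere’s effect works internally:

```python
import numpy as np

def widen_stereo(stereo, width=1.5):
    """Mid/side widening: width > 1 exaggerates left/right differences,
    width < 1 narrows the image toward mono."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2.0
    side = (left - right) / 2.0 * width
    return np.stack([mid + side, mid - side], axis=1)
```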

My last plug-in is a broadcast limiter that is placed onto the master bus. I will adjust this tight with a hard limit for broadcast delivery, but much higher (louder allowed) for web files. Be aware that Premiere’s plug-in architecture allows you to have the filter take effect either pre- or post-fader. In the case of the master bus, this will also affect the VU display. In other words, if you place a limiter post-fader, then the result will be heard, but not visible through the levels displayed on the VU meters.
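
As a rough mental model of that pre- versus post-fader behavior – a simplified sketch based on the description above, not Premiere’s actual signal flow:

```python
def pre_fader_output(sample, fader_gain, limiter):
    """Pre-fader insert: the limiter acts before the fader, so what the
    master meter shows already reflects the limiting."""
    out = limiter(sample) * fader_gain
    meter_reads = out
    return out, meter_reads

def post_fader_output(sample, fader_gain, limiter):
    """Post-fader insert: the meter reads the faded signal, while the
    limiter acts afterwards – audible on the output, invisible on the meter."""
    faded = sample * fader_gain
    meter_reads = faded
    out = limiter(faded)
    return out, meter_reads
```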

Mixing

I have used different mixing strategies over the years with Premiere Pro. I like using the write function of the track mixer to write fader automation. However, I have lately stopped using it – instead going back to manual keyframes within the clips. The reason is probably that my projects tend to get revised often in ways that change timing. Since track automation is based on absolute timeline position, those keyframes don’t move when a clip is shifted, as they would if clip-based volume keyframes were used.

Likewise, Adobe has recently added Audition’s ducking for music to Premiere Pro. This uses Adobe’s Sensei artificial intelligence. Unfortunately, I don’t find it to be “intelligent” enough, although sometimes it can provide a starting point. For me, it’s simply too coarse and doesn’t intelligently adjust for areas within a music clip that swell or change volume internally. Therefore, I stick with minor manual adjustments to compensate for music changes and to make the vocal parts easy to understand in the mix. Then I will use the track mixer to set overall levels for each track to get the right balance of voice, sound effects, and music.

Once I have a decent balance to my ears, I will temporarily drop in the TC Electronic Radar loudness plug-in (included with Premiere Pro) to make sure my mix is CALM-compliant. This is where the submix bus comes in. If I like the overall balance, but I need to bring everything down, it’s an easy matter to simply lower the submix level and remeasure.

Likewise, it’s customary to deliver web versions with louder volume levels than the broadcast mix. Again the submix bus will help, because you cannot raise the volume on the master – only lower it. If you simply want to raise the overall volume of the broadcast mix for web delivery, simply raise the submix fader. Note that when I say louder, I’m NOT talking about slamming the VUs all the way to the top. Typically, a mix that hits -6 is plenty loud for the web. So, for web delivery, I will set a hard limit at -6, but adjust the mix for an average of about -10.
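
As a rough way to sanity-check those web targets outside the NLE, here is a tiny sketch that measures sample peak and average (RMS) level in dBFS. A dedicated loudness meter like the Radar plug-in uses the more sophisticated ITU-R BS.1770 weighting, so treat this only as an approximation:

```python
import numpy as np

def measure_levels(samples):
    """Return (peak_dBFS, rms_dBFS) for a mono array of float samples in [-1, 1]."""
    peak_db = 20 * np.log10(np.max(np.abs(samples)) + 1e-9)
    rms_db = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-9)
    return peak_db, rms_db

# For the web mix described above, the aim would be peaks limited around -6 dBFS
# with the overall average sitting near -10 dBFS.
```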

Hopefully this short explanation has provided some insight into mixing within Premiere Pro and will help you make sure that your next project sounds great.

©2018 Oliver Peters

Putting Apple’s iMac Pro Through the Paces

At the end of December, Apple made good on the release of the new iMac Pro and started selling and shipping the new workstations. While this could be characterized as a stop-gap effort until the next generation of Mac Pro is produced, that doesn’t detract from the usefulness and power of this design in its own right. After all, the iMac line is the direct descendant in spirit and design of the original Macintosh. Underneath the sexy, all-in-one, space grey enclosure, the iMac Pro offers serious workstation performance.

I work mostly these days with a production company that produces and posts commercials, corporate videos, and entertainment programming. Our editing set-up consists of seven workstations, plus an auxiliary machine, connected to a common QNAP shared storage network. These edit stations consisted of a mix of old and new Mac Pros and iMacs (connected via 10GigE), with a Mac Mini for the auxiliary (1GigE). It was time to upgrade the oldest machines, which led us to consider the iMac Pros. The company picked up three of them – replacing two Mac Pro towers and an older iMac. The new configuration is a mix of three one-year-old Retina 5K iMacs (late 2015 model), a 2013 “trash can” Mac Pro, and three 2017 iMac Pros.

There are plenty of videos and articles on the web about how these machines perform; but, the testers often use artificial benchmarks or only Final Cut Pro X. This shop has a mix of NLEs (Adobe, Apple, Avid, Blackmagic Design), but our primary tool is Adobe Premiere Pro CC 2018. This gave me a chance to compare how these machines stacked up against each other in the kind of work we actually do. This comparison isn’t truly apples-to-apples, since the specs of the three different products are somewhat different from each other. Nevertheless, I feel that it’s a valid real-world assessment of the iMac Pros in a typical, modern post environment.

Why buy iMac Pros at all?

The question to address is why should someone purchase these machines? Let me say right off the bat that if your main focus is 3D animation or heavy compositing using After Effects or other applications – and speed and performance are the most important factor – then don’t buy an Apple computer. Period. There are plenty of examples of Dell and HP workstations, along with high-end gaming PCs, that outperform any of the Macs. This is largely due to the availability of advanced NVIDIA GPUs for the PC, which simply aren’t an option for current Macs.

On the other hand, if you need a machine that’s solid and robust across a wide range of postproduction tasks – and you prefer the Mac operating ecosystem – then the iMac Pros are a good choice. Yes, the machine is pricy and you can buy cheaper gaming PCs and DIY workstations, but if you stick to the name brands, like Dell and HP, then the iMac Pros are competitively priced. In our case, a shift to PC would have also meant changing out all of the machines and not just three – therefore, even more expensive.

Naturally, the next thing is to compare price against the current 5K iMacs and 2013 Mac Pros. Apple’s base configuration of the iMac Pro uses an 8-core 3.2GHz Xeon W CPU, 32GB RAM, 1TB SSD, and the Radeon Pro Vega 56 GPU (8GB memory) for $4,999. A comparably configured 2013 Mac Pro is $5,207 (with mouse and keyboard), but no display. Of course, it also has the dual D-700 GPUs. The 5K iMac in a similar configuration is $3,729. Note that we require 10GigE connectivity, which is built into the iMac Pros. Therefore, in a direct comparison, you would need to bump up the iMac and Mac Pro prices by about $500 for a Thunderbolt2-to-10GigE converter.

Comparing these numbers for similar machines, you’d spend more for the Mac Pro and less for the iMac. Yet, the iMac Pro uses newer processors and faster RAM, so it could be argued that it’s already better out of the gate in the base configuration than Apple’s former top-of-the-line product. It has more horsepower than the tricked-out iMac, so then it becomes a question of whether the cost difference is important to you for what you are getting.

Build quality

Needless to say, Apple has a focus on the quality and fit-and-finish of its products. The iMac Pro is no exception. Except for the space grey color, it looks like the regular 27” iMacs and is just as nicely built. However, let me quibble a bit with a few things. First, the edges of the case and foot tend to be a bit sharp. It’s not a huge issue, but compared with an iPhone, iPad, or 2013 Mac Pro, the edges are just not as smooth and rounded. Secondly, you get a wireless mouse and extended keyboard. Both have to be plugged in to charge. In the case of the mouse, the cable plugs in at the bottom, rendering it useless during charging. Truly a bad design. The wireless keyboard is the newer, flatter style, so you lose the two USB ports that were on the previous plug-in extended keyboard. Personally, I prefer the features and feel of the previous keyboard, not to mention any scroll wheel mouse over the Magic Mouse. Of course, those are strictly items of personal taste.

With the iMac Pro, Apple is transitioning its workstations to Thunderbolt 3, using USB-C connectors. Previous Thunderbolt 2 ports have been problematic, because the cables easily disconnect. In fact, on our existing iMacs, it’s very easy to disconnect the Thunderbolt 2 cable that connects us to the shared storage network, simply by moving the iMac around to get to the ports on the back. The USB-C connectors feel more snug, so hopefully we will find that to be an improvement. If you need to get to the back of the iMac or iMac Pro frequently, in order to plug in drives, dongles, etc., then I would highly recommend one of the docks from CalDigit or OWC as a valuable accessory.

5K screen

Apple spends a lot of marketing hype on promoting their 5K Retina screens. The 27” screens have a raw pixel resolution of 5120×2880 pixels, but that’s not what you see in terms of image and user interface dimensions. To start with, the 5K iMacs and iMac Pros use the same screen resolution and the default display setting (middle scaled option) is 2560×1440 pixels. The top choice is 3200×1800. Of course, if you use that setting, everything becomes extremely small on screen. Conversely, our 2013 Mac Pro is connected to a 27” Apple LED Cinema Display (non-Retina). Its top scaled resolution is also 2560×1440 pixels. Therefore, at the most useable settings, all of our workstations are set to the same resolution. Even if you scale the resolution up (images and UI get smaller), you are going to end up adjusting the size of the application interface and viewer window. While you might see different viewer size percentage numbers between the machines, the effective size on screen will be the same.

Retina is Apple’s marketing name for high pixel density. This is the equivalent of DPI (dots per inch) in print resolutions. According to a Macworld article, iPhones from 4 to 5s had a pixel density of 326ppi (pixels per inch), while iMacs have 218ppi. Apple converts a device’s display to Retina by doubling the horizontal and vertical pixel count. More pixels are applied to any given area on the screen, resulting in smoother text, smoother diagonal lines, and so on. That’s assuming an application’s interface is optimized for it. At the distance that the editors sit from a 27” display, there is simply little or no difference between the look of the 27” LED display and the 27” iMac Retina screens.
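
As a quick check of that 218ppi figure: a 5120×2880 panel has a diagonal of roughly √(5120² + 2880²) ≈ 5,874 pixels, and spreading 5,874 pixels across a 27-inch diagonal works out to about 218 pixels per inch.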

Upgradeability

Future-proofing and upgrades are the biggest negatives thrown at all-in-ones, particularly the iMac Pros. While the user can upgrade RAM in the standard iMacs, that’s not the case with iMac Pros. You can upgrade RAM in the future, but that must be done at a service facility, such as the Apple Store’s Genius service. This means that in three years, when you want the latest, greatest CPU, GPU, storage, etc., you won’t be able to swap out components. But is this really an issue? I’m sure Apple has user research numbers to justify their decisions. Plus, the thermal design of the iMac would make user upgrades difficult, unlike older Mac Pro towers.

In my own experience on personal machines, as well as clients’ machines that I’ve helped maintain, I have upgraded storage, GPU cards, and RAM, but never the CPU. Although I do know others who have upgraded Xeon models on their Mac Pro towers. Part of the dichotomy is buying what you can afford now and upgrading later, versus stretching a bit up front and then not needing to upgrade later. My gut feeling is that Apple is pushing the latter approach.

If I tally up the cost of the upgrades that I’ve made after about three years, I would already be part of the way towards a newer, better machine anyway. Plus, if you are cutting HD and even 4K today, then just about any advanced machine will do the trick, making it less likely that you’ll need to do that upgrade within the foreseeable life of the machine. An argument can be made for either approach, but I really think that the vast majority of users – even professional users – never actually upgrade any of the internal hardware from that of the configuration as originally purchased.

Performance testing

We ultimately purchased machines that were the 10-core bump-up from the base configuration, feeling that this is the sweet spot (and is currently available) within the iMac Pro product line.

The new machine specs within the facility now look like this:

2013 Mac Pro – 3GHz 8-core Xeon/64GB RAM/dual D-500 GPUs/1TB SSD (Sierra)

2015 iMac – 4GHz 4-core Core i7/32GB RAM/AMD R9/3TB Fusion drive (Sierra)

2017 iMac Pro – 3GHz 10-core Xeon W/64GB RAM/Radeon Vega 64/1TB SSD (High Sierra)

As you can see, the tech specs of the new iMac Pros more closely match the 2013 Mac Pro than the year-old 5K iMacs. Of course, it’s not a perfect match for optimal benchmark testing, but close enough for a good read on how well the iMac Pro delivers in a real working environment.

Test 1 – BruceX

The BruceX test uses a 5K Final Cut Pro X timeline made up only of built-in titles and generators. The timeline is then rendered out to a ProRes file. This tests the pure application without any media and codec variables. It’s a bit of an artificial test and only applicable to FCPX performance, but still useful. The faster the export time, the better. (I have bolded the best results.)

2013 Mac Pro – 26.8 sec.

2015 iMac – 28.3 sec.

2017 iMac Pro – 14.4 sec.

Test 2 – media encoding

In my next test, I took a 4½-minute-long 1080p ProRes file and rendered it to a 4K/UHD (3840×2160) H.264 (1-pass CBR 20Mbps) file. The file was not only encoded, but also scaled up to 4K in the process. I rendered from and to the desktop to eliminate any variables from the QNAP system. Finally, I conducted the test using both Adobe Media Encoder (with OpenCL processing) and Apple Compressor.

Two issues were noteworthy. The Compressor test was surprisingly slow on the Mac Pro (I actually ran it twice, just to be certain about the slowness). And the Adobe Media Encoder version kicked in the fans on the iMac.
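For readers who want a rough idea of what this encode involves, here is a command-line approximation wrapped in Python. This is my own ffmpeg sketch of the Test 2 settings, not what Adobe Media Encoder or Compressor does internally, and the file names are placeholders:

import subprocess

# 1080p ProRes source, upscaled to UHD and encoded to ~20 Mbps H.264
# (single pass, constrained bitrate). File names are hypothetical.
subprocess.run([
    "ffmpeg", "-i", "source_1080p_prores.mov",
    "-vf", "scale=3840:2160:flags=lanczos",   # upscale HD to UHD
    "-c:v", "libx264",
    "-b:v", "20M", "-maxrate", "20M", "-bufsize", "40M",
    "-pix_fmt", "yuv420p",
    "-c:a", "aac", "-b:a", "256k",
    "uhd_h264_20mbps.mp4",
], check=True)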

Adobe Media Encoder

2013 Mac Pro – 6:13 min.

2015 iMac – 7:14 min.

2017 iMac Pro – 4:48 min.

Compressor

2013 Mac Pro – 11:02 min.

2015 iMac – 2:20 min.

2017 iMac Pro – 2:19 min.

Test 3 – editing timeline playback – multi-layered sequence

This was a difficult test, designed to break during unrendered playback. The 40-second 1080p/23.98 sequence included six layers of resized 4K source media.

Layer 1 – DJI clips with dissolves between the clips

Layers 2-5 – 2D PIP ARRI Alexa clips (no LUTs); layer 5 had a Gaussian blur effect added

Layer 6 – native REDCODE RAW with minor color correction

The sequence was created in both Final Cut Pro X and Premiere Pro. Playback was tested with the media located on the QNAP volumes, as well as from the desktop (this should provide the best possible playback).

Playing back this sequence in Final Cut Pro X from the QNAP resulted in the video output largely choking on all of the machines. Playing it back in Premiere Pro from the QNAP was slightly better than in FCPX, with the 2017 iMac Pro performing best of all. It played, but was still choppy.

When I tested playback from the desktop, all three machines performed reasonably well using both Final Cut Pro X (“best performance”) and Premiere Pro (“1/2 resolution”). There were some frames dropped, although the iMac Pro played back more smoothly than the other two. In fact, in Premiere Pro, I was able to set the sequence to “full resolution” and get visually smooth playback, although the indicator light still noted dropped frames. Typically, as each staggered layer kicked in, performance tended to hiccup.

Test 4 – editing timeline playback – single-layer sequence

This was a simpler test using a standard workflow. The 30-second 1080p/23.98 sequence included three Alexa clips (no LUTs) with dissolves between the clips. Each source file was 4K/UHD and had a “punch-in” and reposition within the HD frame. Each also included a slight, basic color correction. Playback was tested in Final Cut Pro X and Premiere Pro, from both the QNAP system and the desktop. Quality settings were increased to “best quality” in FCPX and “full resolution” in Premiere Pro.

My complex timeline in Test 3 appeared to perform better in Premiere Pro. In Test 4, the edge was with Final Cut Pro X. In FCPX, no frames were dropped on any of the three machines, playing back either from the QNAP or the desktop. In Premiere Pro, the 2017 iMac Pro was solid in both situations. The 2015 iMac was mostly smooth at “full” and completely smooth at “1/2”. Unfortunately, the 2013 Mac Pro was the worst of the three, dropping frames at each dissolve in the timeline, even at “1/2 resolution”.

Test 5 – timeline renders (multi-layered sequence)

In this test, I took the complex sequence from Test 3 and exported it to a ProRes master file. I used the QNAP-connected versions of the Premiere Pro and Final Cut Pro X timelines and rendered the exports to the desktop. In FCPX, I used its default Share function. In Premiere Pro, I queued the export to Adobe Media Encoder set to process in OpenCL. This was one of the few tests in which the 2013 Mac Pro put in a faster time, although the iMac Pro was very close.

Rendering to ProRes – Premiere Pro (via Adobe Media Encoder)

2013 Mac Pro – 1:29 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:45 min.

Rendering to ProRes – Final Cut Pro X

2013 Mac Pro – 1:21 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:22 min.

Test 6 – Adobe After Effects – rendering composition

My final test was to see how well the iMac Pro performed in rendering out compositions from After Effects. This was a 1080p/23.98 15-second composition. The bottom layer was a JPEG still with a Color Finesse correction. On top of that were five 1080p ProResLT video clips that had been slomo’ed to fill the composition length. Each was scaled, cropped, and repositioned. Each was beveled with a layer style and had a stylized effect added to it. The topmost layer was a camera layer with all other layers set to 3D, so the clips could be repositioned in z-space. Using the camera, I added a slight rotation/perspective change over the life of the composition.

Rendering to ProRes – After Effects

2013 Mac Pro – 2:37 min.

2015 iMac – 2:15 min.

2017 iMac Pro – 2:03 min.

Conclusion

After all of this testing, one is left with the answer “it depends”. The 2013 Mac Pro has two GPUs, but not every application takes advantage of that. Some apps tax all the available cores, so more, but slower, cores are better. Others favor maximum speed on fewer cores. All things considered, the iMac Pro performed at the top of these three machines. It was either the best, or essentially tied for best.

There is no way to truly quantify interactive editing playback performance with a single number. However, it is interesting to look at the aggregate of the six timed results that could be quantified. When you compare the cumulative totals of just the iMac Pro and the iMac, the Pro came out 48% faster. Compared to the 2013 Mac Pro, it was 85% faster. The iMac Pro’s performance against the totals of the slowest machines (either iMac or Mac Pro, depending on the test) showed it being a whopping 113% faster, more than twice as fast. But it only bested the fastest set by 20%. Naturally, such comparisons are more curiosity than anything else. Some of these numbers will be meaningful and others won’t, depending on the apps used and a user’s storage situation.
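For what it’s worth, the arithmetic behind that kind of aggregate comparison can be sketched as follows. This assumes the percentages come from simple summed wall-clock times of the timed results above (my assumption about the method); with the published numbers, the 2013 Mac Pro versus iMac Pro totals work out to roughly the 85% figure:

def to_seconds(t: str) -> float:
    """Convert 'm:ss' strings or plain seconds into seconds."""
    if ":" in t:
        m, s = t.split(":")
        return int(m) * 60 + float(s)
    return float(t)

# Timed results from Tests 1, 2, 5 and 6 above
# (BruceX, AME, Compressor, the two ProRes exports, the After Effects render).
mac_pro_2013 = ["26.8", "6:13", "11:02", "1:29", "1:21", "2:37"]
imac_pro_2017 = ["14.4", "4:48", "2:19", "1:45", "1:22", "2:03"]

slow = sum(to_seconds(t) for t in mac_pro_2013)
fast = sum(to_seconds(t) for t in imac_pro_2017)
print(f"iMac Pro is {100 * (slow / fast - 1):.0f}% faster overall")   # ~85%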

I will say that installing these three machines was the easiest I’ve ever done, including connecting them to the 10GigE storage network. The majority of our apps come from Adobe Creative Cloud, the Mac App Store, or FxFactory (for plug-ins). Aside from a few standalone installers, there was no need to track down installers, activation information, etc. for a zillion small apps and plug-ins. This made it a breeze and is certainly part of the attraction of the Mac ecosystem. The iMac Pro’s all-in-one design limits the required peripherals, which also contributes to a faster installation. Naturally, I can’t tell anyone if this is the right machine for them, but so far, the investment does look like the correct choice for this shop’s needs.

(Updated 6/22/18)

Here are two additional impressions by working editors, Thomas Grove Carter and Ben Balser, as well as a very comprehensive review from AppleInsider.

©2018 Oliver Peters