Rams

If you are a fan of the elegant, minimalist design of Apple products, then you have seen the influence of Dieter Rams. The renowned German industrial designer, associated with functional and unobtrusive design, is known for the iconic consumer products he developed for Braun, as well as his Ten Principles for Good Design. Dieter Rams is the subject of Rams, a new documentary film by Gary Hustwit (Helvetica, Objectified, Urbanized).

The film has been a labor of love for Hustwit, partially funded through a Kickstarter campaign. In a statement to the website Designboom, Hustwit says, “This film is an opportunity to celebrate a designer whose work continues to impact us and preserve an important piece of design history. I’m also interested in exploring the role that manufactured objects play in our lives and, by extension, the relationship we have with the people who design them. We hope to dig deeper into Rams’ untold story – to try and understand a man of contradictions by design. I want the film to get past the legend of Dieter. I want it to get into his philosophy, process, inspirations, and even his regrets.”

Hustwit has worked on the documentary for the past three years and premiered it in New York at the end of September. The film is currently on the road for a series of international premiere screenings through the end of the year. I recently had a conversation with Kayla Sklar, the young editor who had the opportunity to tackle this as her first feature film.

______________________________________________________

[OP] Please give me a little background about how you got into editing and then became connected with this project.

[KS] I moved to New York in 2014 after college to pursue working in theater administration for non-profit, Off Broadway theater companies. But at 25, I had sort of a quarter-life crisis and realized that wasn’t what I wanted to do at all. I knew I had to make a career change. I had done some video editing in high school with [Apple] iMovie and in college with [Apple] Final Cut Pro 7 and had enjoyed that. So I enrolled at The Edit Center in Brooklyn. They have an immersive, six-week-long program where you learn the art of editing by working with actual footage from real projects. Indie filmmakers working in documentaries and narrative films, who don’t have a lot of money, can submit their films to The Edit Center. Two are chosen per semester, and 12 to 16 students are given scenes to cut, working directly with the director. The directors give us feedback and, at the end, we present a finished rough cut. That process gives us a sense of how to edit.

I knew I could definitely teach myself [Adobe] Premiere Pro, and probably figure out Avid [Media Composer], but I wanted to know if I would even enjoy the process of working with a director. I took the course in 2016 thinking I would pursue narrative films, because it felt the most similar to the world I had come from. But I left the course with an interest in documentary editing. I liked the puzzle-solving aspect of it. It’s where my skillset best aligned.

Afterwards, I took a few assistant editing jobs and eventually started as an assistant editor with Film First, which is owned by Jessica Edwards and Gary Hustwit. That’s how I got connected with Gary. I was assisting on a number of his projects, including working with some of the Rams footage and doing a few rough assemblies for him. Then last year he asked me to be the editor of the film. So I started shifting my focus exclusively to Rams at the beginning of this year. Gary has been working on it since 2015 – shooting on and off for three years. It just premiered in late September, but we even shot some pick-ups in Germany in late August / early September.

[OP] So you were working solidly on the film for about nine months. At what point did you lock the cut?

[KS] (laugh) Even now we’re still tinkering. We get more feedback from the screenings and are learning what is working and what isn’t. The story was locked four days before the New York premiere, but we’re still making small changes.

[OP] Documentary editing can encompass a variety of structures – narrator-driven, a single subject, a collection of interviewees, etc. What approach did you take with Rams?

[KS] Most of the film is in Dieter Rams’ own words. Gary’s other films have a huge cast of characters. But Gary wanted to make this film different from that and more streamlined. His original concept was that it was going to be Dieter as the only interview footage and you might meet other characters in the verité. But Gary realized that wasn’t going to work, simply because Dieter is a very humble man and he wasn’t really talking about his impact on design. We knew that we needed to give the film a larger context. We needed to bring in other people to tell how influential he has been.

[OP] Obviously a documentary like this has no narrative script to follow. Understanding the interview subject’s answers is critical for the editor in order to build the story arc. I understand that much of the film is in a foreign language. So what was your workflow to edit the film?

[KS] Right. So, the vast majority of the film is in German and a little bit in Japanese, both with subtitles. Maybe 25% is in English, but we’re creating it primarily with an English-speaking audience in mind. I know pretty much no German, except words from Sound of Music and Cabaret. We had a great team of translators on this project, with German transcripts broken down by paragraph and translated into English. I had a two-column set-up with German on one side and English on the other. Before I joined the project, there was an assistant who input titles directly into Premiere – putting subtitles over the dailies with the legacy titler. That was the only way I would be able to even get a rough assembly or ‘radio edit’ of what we wanted.

When you edit an English-speaking documentary, you often splice together two parts of a longer sentence to form a complete and concise thought. But German grammar is really complicated. I don’t think I really grasped how much I was taking on when I first started tackling the project. So I would build a sentence from the transcripts that was pretty close. Thank God for Google Translate, because I would put in my constructed sentence and hope that it spit out something pretty close to what we were going for. And that’s how we did the first rough cut.

Then we had an incredible woman, Katharina Kruse-Ramey, come in. She is a native German speaker living here in New York. She came in for a full eight or nine hours and picked through the edit with a fine-tooth comb. For instance, “You can’t use this verb tense with this noun.” That sort of thing. She was hugely helpful and this film wouldn’t have happened without Katharina. We knew then that a German speaker could watch this film and it would make sense! We also had another native German speaker, Eugen Braeunig, who was our archival researcher. He was great for the last-minute pick-ups that were shot, when we couldn’t go through the longer workflow.

[OP] I presume you received notes and comments back from Dieter Rams on the cut. What has his response been?

[KS] The film premiered at the Milano Design Film Festival a few weeks ago and Dieter came to that. It was his first time seeing the finished product. From what I’ve heard, he really liked it! As much as one can like seeing themselves on a large screen, I suppose. We had sent him a rough cut a few months ago and in true analytical fashion, the notes that we got back from him were just very specific technical details about dates and products and not about overall storytelling. He really was quite willing to give Gary complete control over the filmmaking process. There was a lot of trust between the two of them.

[OP] Did you cut the film to temp music from the beginning or add music later? I understand that the prolific electronic musician and composer Brian Eno (The Lego Batman Movie, T2 Trainspotting, The Simpsons) created the soundtrack. What was that like?

[KS] The structure of this film has more breathing room than a lot of docs might have. We really thought about the fact that we needed to give viewers a break from reading subtitles. We didn’t want to go more than ten minutes of reading at a time. So we purposely built in moments for the audience to digest and reflect on all that information. And that’s where Brian’s music was hugely important for us.

We actually didn’t start really editing the film until we had gotten the music back from Brian. I’ve been told that he doesn’t ever score to picture. We sent him some raw footage and he came back with about 16 songs that were inspired by the footage. When you have that gorgeous Brian Eno music, you know that you’re going to have moments where you can just sit back and enjoy the sheer beauty of the moment. Once we had the music in, everything just clicked into place.

[OP] The editor is integral to creating the story structure of a documentary, more so than narrative films – almost as if they are another writer. Tell me a bit about the structure for Rams.

[KS] This film is really not structured the way you would probably structure a normal doc. As I said earlier, we very purposefully put reading breaks in, either through English scenes or with Eno’s music. We had no interest in telling this story linearly. We jump back and forth. One plot line is the chronology of Dieter’s career. Then there’s this other, perhaps more important story, which is Dieter today – his thoughts on the current state of design and the world. He’s still very active in giving talks and lectures. There’s a company called Vitsoe that makes a lot of his products and he travels to London to give input on their designs. That was the second half of the story and the two are interspersed.

[OP] I presume you went outside for finishing services – sound, color correction, and so on. But did the subtitles take on any extra complexity, since they were such an important visual element?

[KS] There are three components to the post. We did an audio mix at one post house; there was a color correction pass at another; and we also had an animation studio – Trollbäck – working with us. There is a section in the film that we knew had to be visually very different and had to convey information in a different way than we had done in any other part of the film. So we gave Trollbäck that five-minute-long sequence. And they also did our opening titles.

We had thought about a stylistic treatment for the subtitles. There were two fonts that Trollbäck had used in their animation. Our initial intent was to use those in our subtitles. We did use one of those treatments in our titles and product credits. For the subtitles, we spent days trying out different looks. Are we going to shadow it or are we using outlines? What point size? What’s the kerning? There was going to be so much reading that we knew we had to do the titles thoughtfully. At the end of the day, we knew Helvetica was going to be the easiest (laugh)! We had tried the outline, but some of the internal space in the letters, like an ‘o’ or an ‘e’, looked closed off. We ended up going with a drop shadow. Dieter’s home is almost completely white, so there’s a lot of white space in the film. We used shadows, which looked a little softer, but still quite readable. Those were all built in Premiere’s legacy title tool.

[OP] You are in New York, which is a big Avid Media Composer town. So what was the thought process in deciding to cut this film in Adobe Premiere Pro?

[KS] When I came on-board, the project was already in Premiere. At that point I had been using Avid quite a lot since leaving The Edit Center, which teaches their editing course in Avid. I had taught myself Premiere and I might have tried to transfer the project to Avid, but there was already so much done in terms of the dailies with the subtitles. The thought of redoing maybe 50 hours’ worth of manual subtitling that wouldn’t have migrated over correctly just seemed like a total nightmare. And I was happy to use Premiere. Had I started the project from scratch, I might have used Avid, because it’s the tool that I felt fastest on. Premiere was perfectly fine for the film that we were doing. Plus, if there were days when Gary wanted to tinker around in the project and look at things, he’s much more familiar with Premiere than he is with Avid. He also knows the other Adobe tools, so it made more sense to continue with the same family of creative products that he already knew and used.

Maybe it’s this way with the tool you learn first, but I really like Avid and I feel that I’m faster with it than with Premiere. It’s just the way my brain likes to edit things. But I would be totally happy to edit in Premiere again, if that’s what worked best for a project and what the director wanted. It was great that we didn’t have to transcode our archival footage, because of how Premiere can handle media. Definitely that was helpful, because we had some mixed frame rates and resolutions.

[OP] A closing question. This is your first feature film – and with such an influential subject. What impact did it have on you?

[KS] Dieter has Ten Principles for Good Design. He wrote them to talk about product design and as a way to judge how a product ideally should be made. I had those principles taped to the wall by my desk. His products are very streamlined, elegant, and clean. Their framework is neutral enough to convey the intention without bells and whistles. He wasn’t interested in adding a feature that was unnecessary. I really wanted to evoke those principles with the editing. Had the film been cluttered with extraneous information, or been self-aggrandizing, I think when we revealed the principles to the audience, they would have thought, “Wait a minute, this film isn’t doing that!” We felt that the structure of the film had to serve his principles well, wherever appropriate.

His final principle is ‘Good Design is as Little Design as Possible.’ We joked that ‘Good Filmmaking is as Little Filmmaking as Possible.’ We wanted the audience to be able to draw their own conclusions about Dieter’s work and how that translates into their daily lives. A viewer could walk away knowing what we were trying to accomplish without someone having to tell them what we were trying to accomplish.

There were times when I really didn’t know if I could do it. Being 26 and editing a feature film was daunting. Looking at those principles kept me focused on what the meat of the film’s structure should be. That made me realize how lucky we are to have had a designer who really took the time to think about principles that can be applied to a million different subjects. At one of these screenings, someone came up to us who had become a UI designer for software, in part, because of Dieter. He told us, “I read Dieter’s principles in a book and I realized these can be applied to how people interact with software.” They can be applied to a million different things and we certainly applied them to the edit.

______________________________________________________

Gary Hustwit will tour Rams internationally and in various US cities through December. After that time it will be available in digital form through Film First.

Click here to learn more about Dieter Rams’ Ten Principles for Good Design.

©2018 Oliver Peters

Apple 2018 MacBook Pro

July was a good month for Apple power users, with the simultaneous release of Blackmagic Design’s eGPU and a refresh of Apple’s popular MacBook Pro line, including both 13″ and 15″ models. Although these new laptops retain the previous model’s form factor, they gained a bump-up in processors, RAM, and storage capacity.

Apple loaned me one of the Touch Bar space gray 15” models for this review. It came maxed out with the 8th generation 2.9 GHz 6-core Intel Core i9 CPU, 32GB of faster DDR4 RAM, a Radeon Pro 560X GPU, and a 2TB SSD. The price range on the 15″ model is pretty wide, due in part to the available SSD choices – from 256GB up to 4TB. Touch Bar 15” configurations start at $2,399 and can go all the way up to $6,699, once you spec the top upgrade for everything. My configuration was only $4,699 with the 2TB SSD. Of course, that’s before you add Apple Care (which I highly recommend for laptops) and any accessories.

Apple also released premium leather sleeves for both the 13″ and 15″ models in three colors ($199 for the 15″ size). They are pricey, of course, but not out of line with other branded, luxury products, like bags and watch bands. They fit the unit snugly and protect it when you are out and about. In addition, they serve as a good pad on rough desk surfaces or when you have the MacBook Pro on your lap. Depending on the task you are performing, the bottom surface of the MacBook Pro can get warm, but nothing to be concerned about.

Before you point me to the nearest Windows gaming machine instead, let me mention that this review really isn’t a comparison against Windows laptops, but rather a look at Apple’s advances within the MacBook Pro line. But for context, I have owned six laptops to date – three PCs and three Macs. I shifted to Mac in order to have access to Final Cut Pro and have been happy with that move. The first two PCs developed stress fractures at the lid hinges before they were even a year old. The third, an HP, was solid, but after I gave it to my daughter, the power supply shorted. In addition, the hard drive became so corrupt (thank you, Windows) that it wasn’t worth trying to recover. In short, my Mac laptop experience, like that of others, has been one of good value. MacBook Pros generally last years and if you use them for actual billable work (editing, DIT, sound design, etc.), then the investment will pay for itself.

This is the fastest and best laptop Apple has made. Apple engineering has nicely balanced power, size, weight, and battery life in a way that’s hard to counter. It is expensive, but it is hard to find an equivalent PC with these exact specs or components until you get into gaming machines. Those a) look pretty ugly, b) tend to be larger and heavier, with lower battery life, and c) cost about the same. There’s also the sales experience. Try to navigate nearly any PC-centric laptop supplier in an effort to customize the options and it tends to become an exercise in frustration. On the other hand, Apple makes it quite easy to buy and configure its machines with the options that you want.

I do have to mention that when these MacBook Pros first came out there was an issue of performance throttling, which was quickly addressed by Apple and fixed with a supplemental macOS release. That had already been installed on my unit, so there were no throttling issues affecting any of my performance tests.

Likewise, there have been debris complaints with the first run of the “butterfly” keys used in this and the previous version of these laptops. As other reviewers have stated when tear-downs have been done, Apple has added a membrane under the keys to help with sound dampening. Some reviewers have speculated that this also helps mitigate or even eliminate the debris issues. Whatever the reason, I liked typing on this keyboard and it did sound quieter to me. I tend to bang on keys, since I’m not a touch typist. The feel of a keyboard to a typist can be very subjective and in the course of a day, I tend to type on several vintages of Apple keyboards. In general, the keyboard on this newest MacBook Pro felt comfortable to me, when used for standard typing.

What did Apple bring new to the mix?

When Apple introduced the Touch Bar in 2016, I thought ‘meh’. But after a couple of weeks with it, I’ve really enjoyed it, especially when an application like Final Cut Pro X extends its controls to the Touch Bar. You can switch the Touch Bar preferences to only be function keys if you like. But having control strip options makes it quick to adjust screen brightness, volume, and so on. In the case of FCPX, you also get a mini-timeline view in some modes. Even QuickTime Player calls up a small movie strip in the Touch Bar screen for the file being played.

These units also include Apple’s T2 security chip, which powers the fingerprint Touch ID and the newly added “Hey Siri” commands. The Retina screen on this laptop is gorgeous with up to 500 nits brightness and a wide color gamut. Another new addition is True Tone, which adjusts the display’s color temperature for the surrounding ambient light. That may become a more important selling point in the coming years. There is growing concern within the industry that blue light emitted from computer displays causes long-term eyesight damage. Generally, True Tone warms up the screen when under interior lighting, which reduces eye fatigue when you are working with a lot of white documents. But my recommendation is that editors, colorists, photographers, and designers turn this feature off when working on tasks that require color accuracy. Otherwise, the color balance of media will appear too warm (yellowish).

The 2018 15” MacBook Pro has four Thunderbolt 3/USB-C ports and a headphone jack. The four ports (two per side) are driven by two internal Thunderbolt 3 (40Gb/s) buses. It appears that’s one for each side, which means that plugging in two devices on one side will split the available Thunderbolt 3 bandwidth on that bus in half. In practice, though, this doesn’t seem to be much of a factor. The internal bus routing does appear to be different from the previous model, in spite of what otherwise is more or less the same hardware configuration.

Gone are all other connections, so plan on purchasing an assortment of adapters to connect peripherals, such as those ubiquitous USB thumb drives or hardware dongles (license keys). I do wish that Apple had retained at least one standard USB port. Thunderbolt 3 supports power, so no separate MagSafe port is required either. (Power supply and cable are included.) One minor downside of this is that there is no indicator LED when a full battery charge is achieved, like we used to have on the MagSafe plug.

If connected to a Thunderbolt 3 device with an adequate power supply (e.g. the LG displays or the Blackmagic eGPU sold through Apple), then a single cable can both transfer data and power the laptop. One caveat is that Thunderbolt 3 doesn’t pass a video signal in the same way as Thunderbolt 2. You cannot simply add a Thunderbolt 3-to-Thunderbolt 2 adapter and connect a typical monitor’s MiniDisplayPort plug, as was possible with Thunderbolt 2 ports. External monitors without the correct connection will need to go through a dock or monitor adapter in order to pass a video signal. (This is also true for the iMac Pros.)

Many users have taken to relying on their MacBook Pros as the primary machine for their home or office, as well as on the road. The upside of Thunderbolt connectivity is that when you get back to the office, connecting a single Thunderbolt 3 cable to the rest of your suite peripherals (dock, display, eGPU, whatever) is all you need to get up and running. Simple and clean. Stick the laptop in a cradle in clamshell mode or on a laptop stand, connect the cable, and you now have a powerful desktop machine. MacBook Pros have gained enough power in recent years that – unless your demands are heavy – they can easily service your editing, photography, and graphics needs.

Is it time to upgrade?

I own a mid-2014 15” MacBook Pro (the last series with an NVIDIA GPU), which I purchased in early 2015. Three years is often a good interval for most professional users to plan on a computer refresh, so I decided to compare the two. To start with, the new 2018 machine boots faster and apps also open faster. It’s even slightly smaller and thinner than the mid-2014 model. Both have fast SSDs, but the 2018 model is significantly faster (2645 MB/s write, 2722 MB/s read – Blackmagic Speed Test).

As with other reviews, I pulled an existing edit project for my test sequence. This timeline could be the same in Final Cut Pro X, Premiere Pro, and Resolve – without effects unique to one specific software application. My timeline consisted of 4K Alexa ProResHQ files that had a LUT and were scaled into a 1080p sequence. A few 1080p B-roll shots were also part of this sequence. The only taxing effect was a reverse slomo 4K clip, using optical flow interpolation. Both machines handled 4K ProRes footage just fine at full resolution using various NLEs. Exports to ProRes and H.264 were approximately twice as fast from Final Cut Pro X on the newer MacBook Pro. The same exports from Premiere Pro were longer overall than from FCPX, but faster on the 2018 machine, as well (see the section at the end for performance by the numbers).

If you are a fan of Final Cut Pro X, this machine is one of the best to use it on, especially if you can store your media on the internal drive. However, as an equalizer of sorts, I also ran these same test projects from an external SSD connected via USB3. While fast (200+ MB/s read/write), it wasn’t nearly as fast as the internal SSDs. Nevertheless, performance didn’t really lag behind with either FCPX or Premiere Pro. However, the optical flow clip did pose some issues. It played smoothly at “best quality” in FCPX, but oddly stuttered in the “best performance” setting. It did not play well in Premiere Pro at either full or half resolution. I also believe it contributed to the slower export times evident with Premiere Pro.

I tested a second project made up of all 4K REDCODE raw footage, which was placed into a 4K timeline. The 2018 MacBook Pro played the individual files and edited sequences smoothly when set to “best performance” in FCPX or half resolution in Premiere Pro. However, bumping the settings up to full quality caused stuttering with either NLE.

My last test was the same DaVinci Resolve project that I’ve used for my eGPU “stress” tests. These are anamorphic 4K Alexa files in a 2K DCI timeline. I stripped off all of the added filters that I had applied for the test of the eGPU, leaving a typical editing timeline with only a LUT and basic correction. This sequence played smoothly without dropping frames, which bodes well for editors who are considering a shift to Resolve as their main NLE.

Speaking of the Blackmagic eGPU tests, I had one day of overlap between the loans of the MacBook Pro and the Blackmagic eGPU. DaVinci Resolve’s real-time playback performance and exports improved by about a 2X factor with the eGPU connected to the 15” model. Naturally, the 15” machine by itself was quite a bit faster than the 13” MacBook Pro, so the improvement with an eGPU attached wasn’t as dramatic a margin as the 13” test demonstrated. Even with this powerhouse MacBook Pro, the Blackmagic eGPU still adds value as a general appliance, as well as providing Resolve acceleration.

A note on battery life. The spec claims about 10 hours, but that’s largely for simple use, like watching web movies or listening to iTunes. Most of these activities do not cause the graphics to switch over from the integrated Intel to the Radeon Pro GPU, which consumes more power. In my editing tests with the Radeon GPU constantly on – and most of the energy saving settings disabled – I got five to six hours of battery life. That’s even when an application like FCPX was open, but minimized, without any real activity being done on the laptop.

I also ran a “heavy load” test, which involved continually looping my sample 1080 timeline (with 4K source media) full screen at “best quality” in FCPX. This is obviously a worst case scenario, but the charge only lasted about two hours. In short, the battery capacity is very good for a laptop, but one can only expect so much. If you plan on a heavy workload for an extended period of time, stay plugged in.

The 2018 MacBook Pro is a solid update that creative professionals will certainly enjoy, both in the field and even as a desktop replacement. If you bought last year’s model, there’s little reason to refresh your computer, yet. But three years or more? Get out the credit card!

_________________________________________________

Performance by the numbers

Blackmagic Design eGPU test

DaVinci Resolve renders/exports
(using the same test sequence as used for my eGPU review)

13” 2018 MacBook Pro – internal Intel graphics only
Render at source resolution – 1fps
Render at timeline resolution – 4fps

13” 2018 MacBook Pro – with Blackmagic eGPU
Render at source resolution – 5.5fps
Render at timeline resolution – 17.5fps

15” 2018 MacBook Pro – internal Radeon graphics only
Render at source resolution – 2.5fps
Render at timeline resolution – 8fps

15” 2018 MacBook Pro – with Blackmagic eGPU
Render at source resolution – 5.5fps
Render at timeline resolution – 16fps

Standard performance tests – 2018 15” MacBook Pro vs. Mid-2014
(using editing test sequence – 4K ProResHQ media)

2018 export from FCPX to ProRes  :30
2018 export from FCPX to H.264 at 10Mbps  :57
2014 export from FCPX to ProRes  :57
2014 export from FCPX to H.264 at 10Mbps  1:42

2018 export from Premiere Pro to ProRes  2:59
2018 export from Premiere Pro to H.264 at 10Mbps  2:32
2014 export from Premiere Pro to ProRes  3:35
2014 export from Premiere Pro to H.264 at 10Mbps  3:25

2018 export from Resolve to ProRes :35
2018 export from Resolve to H.264 at 10Mbps  :35
(Mid-2014 MBP was not used in this test)

Originally written for RedShark News.

©2018 Oliver Peters

Blackmagic Design eGPU

Power users have grown to rely on graphics processing units from AMD, Intel and Nvidia to accelerate a wide range of computational functions – from visual effect filters to gaming and 360VR, and even to bitcoin mining. Apple finally supports external GPUs, which can easily be added as plug-and-play devices without any hack. Blackmagic Design just released its own eGPU product for the Mac, which is sold exclusively through Apple ($699 USD). It requires macOS 10.13.6 or later, and a Thunderbolt 3 connection. (Thunderbolt 2, even with adapters, will not work.)

The Blackmagic eGPU features a sleek, aluminum enclosure that makes a fine piece of desk art. It’s of similar size and weight to a 2013 Mac Pro and is optimized for both cooling and low noise. The unit is built around the AMD Radeon Pro 580 GPU with 8GB of video memory. It delivers 5.5 teraflops of processing power and is the same GPU used in Apple’s top-end, 27” Retina 5K iMac.

Leveraging Thunderbolt 3

Thunderbolt 3 technology supports 40Gb/s of bandwidth, as well as power. The Blackmagic eGPU includes a beefy power supply that can also power and/or charge a connected MacBook Pro. There are two Thunderbolt 3 ports, four USB3.1 ports, and HDMI. Therefore, you can connect a Mac, two displays, plus various USB peripherals. It’s easy to think of it as an accelerator, but it is also an appliance that can be useful in other ways to extend the connectivity and performance of MacBook Pros. Competing products with the same Radeon 580 GPU may be a bit less expensive, but they don’t offer this level of connectivity.

Apple and Blackmagic both promote eGPUs as an add-on for laptops, but any Thunderbolt 3 Mac qualifies. I tested the Blackmagic eGPU with both a high-end iMac Pro and the base model 13” 2018 MacBook Pro with Touch Bar. This model of iMac Pro is configured with the more advanced Vega Pro 64 GPU (16GB VRAM). My main interest in including the iMac Pro was simply to see whether there would be enough of a performance boost to justify adding an eGPU to a Mac that is already Apple’s most powerful. Installation of the eGPU was simply a matter of plugging it in. A menu bar icon appears on the Mac screen to let you know it’s there and so that you can safely disconnect the unit while the Mac is powered up.

Pushing the boundaries through testing

My focus is editing and color correction and not gaming or VR. Therefore, I ran tests with and without the eGPU, using Final Cut Pro X, Premiere Pro, and DaVinci Resolve (Resolve Studio 15 beta). Anamorphic ARRI Alexa ProRes 4444 camera files (2880×2160, native / 5760×2160 pixels, unsqueezed) were cut into 2K DCI (Resolve) and/or 4K DCI (FCPX, Premiere Pro) sequences. This meant that every clip got a Log-C LUT and color correction, as well as aspect ratio correction and scaling. In order to really stress the system, I added several GPU-accelerated effect filters, like glow, film grain, and so on. Finally, timed exports went back to ProRes 4444 – using the internal SSD for media and render files to avoid storage bottlenecks.

Not many applications take advantage of this newfound power, yet. Neither FCPX nor Premiere utilizes the eGPU fully – or, in some cases, at all. Premiere exports were actually slower using the eGPU. In my tests, only DaVinci Resolve gained measurable acceleration from the eGPU, which also held true for a competing eGPU that I tested.

If editing, grading or possibly location DIT work is your main interest, then consider the Blackmagic eGPU a good accessory for DaVinci Resolve running on a MacBook Pro. As a general rule, lesser-powered machines benefit more from eGPU acceleration than powerful ones, like the iMac Pro, with its already-powerful, built-in Vega Pro 64 GPU.

Performance by the numbers (iMac Pro only)

To provide some context, here are the results I got with the iMac Pro:

Resolve on iMac Pro (internal V64 chip) – NO eGPU – Auto GPU config

Playback of timeline at real-time 23.976 without frames dropping

Render at source resolution – average 11fps (slower than real-time)

Render at timeline resolution – average 33fps (faster than real-time)

Resolve on iMac Pro – with BMD eGPU (580 chip) – OpenCL

Playback of timeline at real-time 23.976 without frames dropping

Render at source resolution – average 11fps (slower than real-time)

Render at timeline resolution – average 37fps (faster than real-time)

Metal

Apple’s ability to work with eGPUs is enabled by Metal. This is their framework for addressing hardware components, like graphics and central processors. The industry has relied on other frameworks, including OpenGL, OpenCL and CUDA. The first two are open standards written for a wide range of hardware platforms, while CUDA is specific to Nvidia GPUs. Apple is deprecating all of these in favor of Metal (now Metal 2). With each coming OS update, these will become more and more “legacy” until presumably, at some point in the future, macOS may only support Metal.

Apple’s intention is to gain performance improvements by optimizing the code at a lower level, “closer to the metal”. It is possible to do this when you only address a limited number of hardware options, which may explain why Apple has focused on using only AMD and Intel GPUs. The downside is that developers must write code that is proprietary to Apple computers. Metal is in part what gives Final Cut Pro X its smooth media handling and real-time performance. Both Premiere Pro and Resolve give you the option to select Metal when installed on Macs.

In the tests that I ran, I presume FCPX only used Metal, since there is no option to select anything else. I did, however, test Premiere Pro/Adobe Media Encoder and Resolve both with Metal and again with OpenCL specifically selected. I didn’t see much difference in render times with either setting in Premiere/AME. Resolve showed definite differences, with OpenCL the clear winner. For now, Resolve is still optimized for OpenCL over Metal.

Power for the on-the-go editor and colorist

The MacBook Pro is where the Blackmagic eGPU makes the most sense. It gives you better performance with faster exports, and adds badly-needed connectivity. My test Resolve sequence is a lot more stressful than I would normally create. It’s the sort of sequence I would never work with in the real world on a lower-end machine, like this 13” model. But, of course, I’m purposefully pushing it through a demanding task.

When I ran the test on the laptop without the eGPU connected, it would barely play at all. Exports at source resolution rendered at around 1fps. Once I added the Blackmagic eGPU, this sequence played in real-time, although the viewer would start to drop frames towards the end of each shot. Exports at the source resolution averaged 5.5fps. At timeline resolution (2K DCI) it rendered at up to 17fps, as opposed to 4fps without it. That’s over a 4X improvement.

Everyone’s set of formats and use of color correction and filters are different. Nevertheless, once you add the Blackmagic eGPU to this MacBook Pro model, functionality in Resolve goes from insanely slow to definitely useable. If you intend to do reliable color correction using Resolve, then a Thunderbolt 3 UltraStudio HD Mini or 4K Extreme 3 is also required for proper video monitoring. Resolve doesn’t send video signals over HDMI, like Premiere Pro and Final Cut Pro X can.

It will be interesting to see if Blackmagic also offers a second eGPU model with a higher-end chip in the future. That would likely double the price of the unit. In the testing I’ve done with other eGPUs that used a version of the Vega 64 GPU, I’m not convinced that such a product would consistently deliver 2X more performance to justify the cost. This Blackmagic eGPU adds a healthy dose of power and connectivity for current MacBook Pro users and that will only get better in the future.

I think it’s clear that Apple is looking towards eGPUs as a way to enhance the performance of its MacBook Pro line, without compromising design, battery life, and cooling. Cable up to an external device and you’ve gained back horsepower that wouldn’t be there in the standard machine. After all, you mainly need this power when you are in a fixed, rather than mobile, location. The Blackmagic eGPU is portable enough that, as long as you have electrical power, you are good to go.

In his review of the 2018 MacBook Pro, Ars Technica writer Samuel Axon stated, “Apple is trying to push its own envelope with the CPU options it has included in the 2018 MacBook Pro, but it’s business as usual in terms of GPU performance. I believe that’s because Apple wants to wean pro users with serious graphics needs onto external GPUs. Those users need more power than a laptop can ever reasonably provide – especially one with a commitment to portability.”

I think that neatly sums it up, so it’s nice to see Blackmagic Design fill in the gaps.

UPDATE: The September 2018 release of Mojave has changed the behavior of Final Cut Pro X when an eGPU is connected. It is now possible to set a preference for whether the internal or external GPU is to be used with Final Cut Pro X.

Originally written for RedShark News.

©2018 Oliver Peters

Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs have become capable of delivering quality video, interview-based production now almost always utilizes multiple cameras. Directors will typically record these sections with two or more cameras at various tangents to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). In addition, if they also shoot in 4K for an HD delivery, then you have the additional ability to cleanly punch in for even more framing options.

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam. That starts at the time of the original recording. You can either sync by common timecode, common audio, or a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough that the human editor and/or the software’s audio analysis can discern a match. Sometimes this method will suffer from a minor amount of delay – either because of the inherent offset of the audio recording circuitry within the camera electronics, or because an onboard camera mic was used and the distance to the subject results in a slight delay, compared to a lav mic on the subject.
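
As a side note, the audio analysis these tools perform is conceptually just cross-correlation – slide one recording against the other and find the offset where the waveforms line up best. Here is a minimal illustrative sketch in Python, using NumPy, SciPy, and the soundfile library; the file names are placeholders and this shows the general technique, not any NLE’s actual implementation:

```python
# Minimal sync-by-audio sketch: estimate the offset between two cameras'
# scratch audio via cross-correlation. Illustrative only -- not the
# algorithm any particular NLE actually ships. File names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

def estimate_offset(ref_wav, other_wav, fps=23.976):
    ref, sr_ref = sf.read(ref_wav)
    other, sr_other = sf.read(other_wav)
    assert sr_ref == sr_other, "resample first if the sample rates differ"
    if ref.ndim > 1:                 # mix down to mono before correlating
        ref = ref.mean(axis=1)
    if other.ndim > 1:
        other = other.mean(axis=1)
    corr = correlate(other, ref, mode="full")
    # Samples by which the matching content in 'other' is shifted vs. 'ref'
    lag = int(np.argmax(np.abs(corr))) - (len(ref) - 1)
    seconds = lag / sr_ref
    return seconds, seconds * fps    # offset in seconds and in frames

offset_s, offset_frames = estimate_offset("a_cam.wav", "b_cam.wav")
print(f"Relative offset: {offset_s:.3f}s ({offset_frames:.1f} frames)")
```

In an NLE you would then slip one clip by the reported number of frames, which is effectively what the automatic sync option does for you.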

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that sometimes when these two sources are mixed in post (rather than only one source used at a time), audio phasing can occur.
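
That phasing is easy to demonstrate: when the same sound arrives at two mics with a slight delay, summing the signals cancels certain frequencies (comb filtering). A tiny Python illustration with a pure tone, using made-up but representative numbers:

```python
# Why mixing the same source from two mics can "phase": a copy delayed by
# half a wavelength cancels when summed. Illustrative numbers only.
import numpy as np

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone, one second

delay = int(sr * 0.0005)                 # 0.5 ms = half of a 1 kHz period
delayed = np.concatenate([np.zeros(delay), tone[:-delay]])

mixed = tone + delayed
print(np.abs(mixed[delay:]).max())       # ~0 -- the 1 kHz content cancels
```

With real speech the delay nulls a comb of frequencies rather than the whole signal, which is why the result sounds hollow instead of silent.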

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First separate all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound you can choose whether or not to include the camera sound. Then generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras in the viewer by clicking on the corresponding section of the viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera to be shown. Remember that any effects or color corrections you apply in the timeline are applied to that visible angle, but do not follow it. So, if you change your mind and switch to a different angle, the effects and corrections do not change with it. Therefore, adjustments will be required to the effect or correction for that new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Mixing with Premiere Pro

When budgets permit and project needs dictate, I will send my mixes out-of-house to one of a few regular mixers. Typically that means sending them an OMF or AAF to mix in Pro Tools. Then I get the mix and split-tracks back, drop them into my Premiere Pro timeline, and generate master files.

On the other hand, a lot of my work is cutting simple commercials and corporate presentations for in-house use or the web, and these are often less demanding – 2 to 8 tracks of dialogue, limited sound effects, and music. It’s easy to do the mix inside of the NLE. Bear in mind that I can – and often have – done such a mix in Apple Logic Pro X or Adobe Audition, but the tools inside Premiere Pro are solid enough that I often just keep everything – mix included – inside my editing application. Let’s walk through that process.

Dealing with multiple channels on source clips

Start with your camera files or double-system audio recordings. Depending on the camera model, Premiere Pro will see these source clips as having either stereo (e.g. a Canon C100) or multi-channel mono (e.g. ARRI Alexa) channels. If you recorded a boom mic on channel 1 and a lavaliere mic on channel 2, then these will drop onto your stereo timeline either as two separate mono tracks (Alexa) – or as a single stereo track (C100), with the boom coming out of the left speaker and the lav out of the right. Which one it is will strictly depend on the device used to generate the original recordings.

First, when dual-mic recordings appear as stereo, you have to understand how Premiere Pro deals with stereo sources. Panning in Premiere Pro doesn’t “shift” the audio left, right, or center. Instead, it increases or decreases the relative volume of the left or right half of this stereo field. In our dual-mic scenario, panning the clip or track full left means that we only hear the boom coming out of the left speaker, but nothing out of the right. There are two ways to fix this – either by changing the channel configuration of the source in the browser – or by changing it after the fact in the timeline. Browser changes will not alter the configuration of clips already edited to the timeline. You can change one or more source clips from stereo to dual-mono in the browser, but you can’t make that same type of change to a clip already in your sequence.
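
To make that distinction concrete, here is a tiny Python sketch of the difference between balance (what Premiere’s stereo pan control actually does) and true mono panning. The math is illustrative only – not Premiere’s actual audio engine:

```python
# Balance vs. pan on a "stereo" dual-mic clip -- a conceptual sketch of why
# panning such a clip hard left silences the lav instead of centering the
# boom. Illustrative math only, not Premiere's actual DSP.
import numpy as np

def balance(left, right, pos):
    """pos in [-1, 1]; -1 = full left. Attenuates one side; nothing moves."""
    l_gain = 1.0 if pos <= 0 else 1.0 - pos
    r_gain = 1.0 if pos >= 0 else 1.0 + pos
    return left * l_gain, right * r_gain

def pan_mono(mono, pos):
    """True panning: one signal repositioned across the stereo field."""
    theta = (pos + 1.0) * np.pi / 4.0      # constant-power pan law
    return mono * np.cos(theta), mono * np.sin(theta)

# Stand-ins for one second of boom (left) and lav (right) audio at 48 kHz
boom = np.random.randn(48000)
lav = np.random.randn(48000)

l, r = balance(boom, lav, -1.0)            # "pan" the stereo clip hard left
print(np.abs(r).max())                     # 0.0 -- the lav is simply gone

c_l, c_r = pan_mono(boom, 0.0)             # true center: boom in both speakers
```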

Let’s assume that you aren’t going to make any browser changes and instead just want to work in your sequence. If your source clip is treated as dual-mono, then the boom and lav will cut over to tracks 1 and 2 of your sequence – and the sound will be summed in mono on the output to your speakers. However, if the clip is treated as stereo, then it will only cut over to track 1 of your sequence – and the sound will stay left and right on the output to your speakers. When it’s dual-mono, you can listen to one track versus the other, determine which mic sounds the best, and disable the clip with the other mic. Or you can blend the two using clip volume levels.

If the source clip ends up in the sequence as a stereo clip, then you will want to determine which one of the two mics you want to use for the best sound. To pick only one mic, you will need to change the clip’s audio configuration. When you do that, it’s still a stereo clip, however, both “sides” can be supplied by either one of the two source channels. So, both left and right output will either be the boom or the lav, but not both. If you want to blend both mics together, then you will need to duplicate (option-drag) the audio clip onto an adjacent timeline track, and change the audio channel configuration for both clips. One would be set to the boom for both channels and the other set to only the lav for its two channels. Then adjust clip volume for the two timeline clips.

Configuring your timeline

Like most editors, while I’m working through the stages of rough cutting on the way to an approved final copy, I will have a somewhat messy timeline. I may have multiple music cues on several tracks with only one enabled – just so I can preview alternates for the client. I will have multiple dialogue clips on a few tracks with some disabled, depending on microphone or take options. But when I’m ready to move to the finishing stage, I will duplicate that sequence to create a “final version” and clean that one up. This means getting rid of any disabled clips, collapsing my audio and video clips to the fewest number of tracks, and using Premiere’s track creation/deletion feature to delete all empty tracks – all so I can have the least amount of visual clutter. 

In other blog posts, I’ve discussed working with additional submix buses to create split-track exports; but, for most of these smaller jobs, I will only add one submix bus. (I will explain its purpose in a moment.) Once it’s created, you will need to open the track mixer panel, route the output of the timeline tracks from the master to the submix bus, and then route the output of the submix bus back to the master.

Plug-ins

Premiere Pro CC comes with a nice set of audio plug-ins, which can be augmented with plenty of third-party audio effects filters. I am partial to Waves and iZotope, but these aren’t essential. However, there are several that I do use quite frequently. These three third-party filters will help improve any vocal-heavy piece.

The first two are Vocal Rider and MV2 from Waves and are designed specifically for vocal performances, like voice-overs and interviews. These can be pricey, but Waves has frequent sales, so I was able to pick these up for a fraction of their retail price. Vocal Rider is a real-time, automatic volume adjustment tool. Set the bottom and top parameters and let Vocal Rider do the rest, by automatically pushing the volume up or down on-the-fly. MV2 is similar, but it achieves this through compression on the top and bottom ends of the range. While they operate in a similar fashion, they do produce a different sound. I tend to pick MV2 for voice-overs and Vocal Rider for interviews.
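
Conceptually, a vocal rider measures the short-term level of the track and continuously steers a make-up gain toward a target, clamped within a set range. Here is a toy Python sketch of that idea – emphatically not Waves’ actual algorithm, just the general principle behind this class of tool:

```python
# The basic idea behind a vocal "rider": measure the short-term level and
# ease a gain toward a target, clamped to a floor/ceiling. A toy sketch --
# not what Vocal Rider or MV2 actually do internally.
import numpy as np

def ride_vocal(x, sr, target_db=-18.0, min_gain_db=-6.0, max_gain_db=6.0,
               window_s=0.05, smooth=0.2):
    hop = int(sr * window_s)
    gain_db = 0.0
    out = np.copy(x)
    for i in range(0, len(x) - hop, hop):
        chunk = x[i:i + hop]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        wanted = np.clip(target_db - level_db, min_gain_db, max_gain_db)
        gain_db += smooth * (wanted - gain_db)   # glide toward the target
        out[i:i + hop] = chunk * 10 ** (gain_db / 20)
    return out
```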

We all know location audio isn’t perfect, which is where my third filter comes in. FxFactory is known primarily for video plug-ins, but their partnership with Crumplepop has added a nice set of audio filters to their catalog. I find AudioDenoise to be quite helpful and fast in fixing annoying location sounds, like background air conditioning noise. It’s real-time and good-sounding, but like all audio noise reduction, you have to be careful not to overdo it, or everything will sound like it’s underwater.

For my other mix needs, I’ll stick to Premiere’s built-in effects, like EQ, compressors, etc. One that’s useful for music is the stereo imager. If you have a music cue that sounds too monaural, this will let you “expand” the track’s stereo signal so that it is spread more left and right. This often helps when you want the voice-over to cut through the mix a bit better. 
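
The usual math behind this kind of stereo imager is mid/side processing: convert left/right into a mid (sum) and side (difference) signal, scale the side component, and convert back. A short sketch of the core operation – a common technique, though not necessarily the exact filter Premiere ships:

```python
# Mid/side widening -- the typical math inside a stereo imager. Scaling the
# side signal up spreads the mix wider; scaling it down narrows toward mono.
# (A common technique; not necessarily Premiere's exact implementation.)
import numpy as np

def widen(left, right, width=1.5):
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    side *= width                      # >1 widens, <1 narrows, 0 = mono
    return mid + side, mid - side      # convert back to L/R
```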

My last plug-in is a broadcast limiter that is placed onto the master bus. I will adjust this tight with a hard limit for broadcast delivery, but much higher (louder allowed) for web files. Be aware that Premiere’s plug-in architecture allows you to have the filter take effect either pre- or post-fader. In the case of the master bus, this will also affect the VU display. In other words, if you place a limiter post-fader, then the result will be heard, but not visible through the levels displayed on the VU meters.

Mixing

I have used different mixing strategies over the years with Premiere Pro. I like using the write function of the track mixer to write fader automation. However, I have lately stopped using it – instead going back to manual keyframes within the clips. The reason is probably that my projects tend to get revised often in ways that change timing. Since track automation is based on absolute timeline position, those keyframes don’t move when a clip is shifted, as they would if clip-based volume keyframes were used.

Likewise, Adobe has recently added Audition’s music ducking to Premiere Pro. This uses Adobe’s Sensei artificial intelligence. Unfortunately, I don’t find it to be “intelligent” enough, although sometimes it can provide a starting point. For me, it’s simply too coarse and doesn’t intelligently adjust for areas within a music clip that swell or change volume internally. Therefore, I stick with minor manual adjustments to compensate for music changes and to make the vocal parts easy to understand in the mix. Then I will use the track mixer to set overall levels for each track to get the right balance of voice, sound effects, and music.

Once I have a decent balance to my ears, I will temporarily drop in the TC Electronic Radar loudness plug-in (included with Premiere Pro) to make sure my mix is CALM-compliant. This is where the submix bus comes in. If I like the overall balance, but I need to bring everything down, it’s an easy matter to simply lower the submix level and remeasure.

Likewise, it’s customary to deliver web versions with louder volume levels than the broadcast mix. Again the submix bus will help, because you cannot raise the volume on the master – only lower it. If you simply want to raise the overall volume of the broadcast mix for web delivery, simply raise the submix fader. Note that when I say louder, I’m NOT talking about slamming the VUs all the way to the top. Typically, a mix that hits -6 is plenty loud for the web. So, for web delivery, I will set a hard limit at -6, but adjust the mix for an average of about -10.
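
If you want to sanity-check loudness outside the NLE, the measurement behind LKFS/LUFS (ITU-R BS.1770) is available in open-source tools. Here is a quick sketch using Python’s pyloudnorm and soundfile libraries – the file name is a placeholder for your exported mix:

```python
# Quick integrated-loudness check on an exported mix, using the
# BS.1770-based pyloudnorm library. "mix.wav" is a placeholder.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")             # decode the mix to float samples
meter = pyln.Meter(rate)                    # BS.1770 / R128-style meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")
# US broadcast (CALM / ATSC A/85) delivery targets -24 LKFS +/- 2 dB;
# a "louder" web mix will typically measure well above that.
```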

Hopefully this short explanation has provided some insight into mixing within Premiere Pro and will help you make sure that your next project sounds great.

©2018 Oliver Peters

Stocking Stuffers 2017

It’s holiday time once again. For many editors that means it’s time to gift themselves with some new tools and toys to speed their workflows or just make the coming year more fun! Here are some products to consider.

Just like the tiny house craze, many editors are opting for their laptops as their main editing tool. I’ve done it for work that I cut when I’m not freelancing in other shops, simply because my MacBook Pro is a better machine than my old (but still reliable) 2009 Mac Pro tower. One less machine to deal with, which simplifies life. But to really make it feel like a desktop tool, you need some accessories along with an external display. For me, that boils down to a dock, a stand, and an audio interface. There are several stands for laptops. I bought both the Twelve South BookArc and the Rain Design mStand: the BookArc for when I just want to tuck the closed MacBook Pro out of the way in the clamshell mode and the mStand for when I need to use the laptop’s screen as a second display. Another option some editors like is the Vertical Dock from Henge Docks, which not only holds the MacBook Pro, but also offers some cable management.

The next hardware add-on for me is a USB audio interface. This is useful for any type of computer and may be used with or without other interfaces from Blackmagic Design or AJA. The simplest of these is the Mackie Onyx Blackjack, which combines the interface and output monitor mixing into one package, so no extra small mixer is required – USB input and analog audio output go direct to a pair of powered speakers. But if you prefer a separate small mixer and only want a USB interface for input/output, then the PreSonus AudioBox USB or the Focusrite Scarlett series is the way to go.

Another ‘must have’ with any modern system is a Thunderbolt dock to expand the native port connectivity of your computer. There are several on the market, but it’s hard to go wrong with either the CalDigit Thunderbolt Station 2 or the OWC Thunderbolt 2 Dock. Make sure you double-check which version fits your setup, depending on whether you have Thunderbolt 2 or 3 connections and/or USB-C ports. I routinely use both the CalDigit and OWC products; the choice simply depends on which one has the right combination of ports for your needs.

Drives are another issue. With a small system, you want small portable drives. While LaCie Rugged and G-Technology portable drives are popular choices, SSDs are the way to go when you need true, fast performance. A number of editors I’ve spoken to are partial to the Samsung Portable SSD T5 drives. These USB 3.0-compatible drives aren’t the cheapest, but they are ultraportable and offer amazing read/write speeds. Another popular solution is to use raw (uncased) drives in a drive caddy/dock for archiving purposes. Since they are raw, you don’t pay for the extra packaging, power supply, and interface electronics with each drive, just to have it sit on the shelf. My favorite of these is the HGST Deskstar NAS series.

For many editors the software world is changing with free applications, subscription models, and online services. The most common use of the latter is review-and-approval, along with posting demo clips and short films. Kollaborate.tv, Frame.io, Wipster.io, and Vimeo are the best known; even the Vimeo Pro and Business plans offer a Frame/Wipster-style review-and-approval and collaboration service. Plus, there’s some transfer ability between these services. For example, you can publish to a Vimeo account from your Frame account. Another expansion of the online world is in team workgroups. A popular solution is Slack, a workgroup-based messaging/communication service.

As more resources become available online, the benefits of large-scale computing horsepower are available to even single editors. One of the first of these new resources is cloud-based, speech-to-text transcription. A number of online services provide this functionality to any NLE. Products to check out include Scribeomatic (Coremelt), Transcriptive (Digital Anarchy), and Speedscriber (Digital Heaven). They each offer different pricing models and speech analysis engines. Some are still in beta, but one that’s already out is Speedscriber, which I’ve used and am quite happy with. Processing is fast and reasonably accurate, given a solid audio recording.

Naturally, free tools make every user happy, and the king of the hill is Blackmagic Design with DaVinci Resolve and Fusion. How can you go wrong with something this powerful and free, backed by ongoing product development? Even the paid versions with more advanced features are low cost. At the very least, the free version of Resolve should be in every editor’s toolkit, because it’s such a Swiss Army knife of an application.

On the other hand, editors who need to learn Avid Media Composer need look no further than the free Media Composer | First. Avid has tried ‘dumbed-down’ free editing apps before, but First is actually built on the same code base as the full Media Composer software. Thus, skills translate and most of the core functions are available for you to use.

Many users are quite happy with the advantages of Adobe’s Creative Cloud software subscription model. Others prefer to own their software. If you work in video, then it’s easy to put together alternative software kits for editing, effects, audio, and encoding that don’t touch an Adobe product. Yet for most, the stumbling block is Photoshop – until now. Both Affinity Photo (Serif) and Pixelmator Pro are full-fledged graphic design and creation tools that rival Photoshop in features and quality. Each of these has its own strong points. Affinity Photo offers Mac and Windows versions, while Pixelmator Pro is Mac only, but taps more tightly into macOS functions.

If you work in the Final Cut Pro X world, several utilities are essential. These include SendToX and XtoCC from Intelligent Assistance, along with X2Pro Audio Convert from Marquis Broadcast. Marquis’ newest is Worx4 X – a media management tool. It takes your final sequence and creates a new FCPX library with consolidated (trimmed) media. No transcoding is involved, so the process is lightning fast, although in some cases media is copied without being trimmed. This can reduce the media to be archived from TBs down to GBs. They also offer Worx4 Pro, which is designed for Premiere Pro CC users. This tool serves as a media tracking application, letting editors find all of the media used in a Premiere Pro project across multiple volumes.

Most editors love to indulge in plug-in packages. If you can only invest in a single, large plug-in package, then BorisFX’s Boris Continuum Complete 11 and/or their Sapphire 11 bundles are the way to go. These are industry-leading tools with wide host and platform support. Both feature mocha tracking integration and Continuum also includes the Primatte Studio chromakey technology.

If you want to go for a build-it-up-as-you-need-it approach – and you are strictly on the Mac – then FxFactory will be more to your liking. You can start with the free, basic platform or buy the Pro version, which includes FxFactory’s own plug-ins. Either way, FxFactory functions as a plug-in management tool. FxFactory’s numerous partner/developers provide their products through the FxFactory platform, which functions like an app store for plug-ins. You can pick and choose the plug-ins that you need when the time is right to purchase them. There are plenty of plug-ins to recommend, but I would start with any of the Crumplepop group, because they work well and provide specific useful functions. They also include the few audio plug-ins available via FxFactory. Another plug-in to check out is the Hawaiki Keyer 4. It installs into both the Apple and Adobe applications and far surpasses the built-in keying tools within these applications.

The Crumplepop FxFactory plug-ins now include Koji Advance, a powerful film look tool. I like Koji a lot, but prefer FilmConvert from Rubber Monkey Software. To my eyes, it creates one of the more pleasing and accurate film emulations around and even adds a very good three-way color corrector. It opens as a floating window inside of FCPX, which is less obtrusive than some of the other color correction plug-ins for FCPX. And it’s not just for film emulation – you can use it as the primary color corrector for an entire project.

I don’t want to forget audio plug-ins in this end-of-the-year roundup. Most editors don’t feel too comfortable with a ton of surgical audio filters, so let me stick to suggestions that are easy to use and very affordable. iZotope is a well-known audio developer and several of its products are perfect for video editors, covering repair, mixing, and mastering needs. These include the Nectar, Ozone, and RX bundles, along with RX Loudness Control. The first three are designed to cover a wide range of needs and, like the BCC video plug-ins, are somewhat of an all-encompassing product offering. But if that’s a bit rich for your blood, then check out iZotope’s various Elements versions.

The iZotope RX Loudness Control is great for accurate loudness compliance and is best used with Avid or Adobe products. However, it is not real-time, because it uses analysis and adaptive processing. If you want something more straightforward and real-time, then check out the LUFS Meter from Klangfreund. It can be used for loudness control on individual tracks or the master output, and it works with most NLEs and DAWs. A similar tool is Loudness Change from Videotoolshed.

Finally, let’s not forget the iOS world, which is increasingly becoming a viable production platform. For example, I’ve used my iPad in the last year to do location interview recordings. This is a market that audio powerhouse Apogee has also recognized. If you need a studio-quality hardware interface for an iPhone or iPad, then check out the Apogee ONE. In my case, I tapped the Apogee MetaRecorder iOS application for my iPad, which works with both Apogee products and the iPad’s built-in mic. It can be used in conjunction with FCPX workflows through the integration of metadata tagging for Keywords, Favorites, and Markers.

Have a great holiday season and happy editing in the coming year!

©2017 Oliver Peters

Audio Splits and Stems in Premiere Pro Revisited

Creating multichannel, “split-track” master exports of your final sequences should be a standard step in all of your productions. It’s often a deliverable requirement, and having such a file makes later revisions or derivative projects much easier to produce. If you are a Final Cut Pro X user, the “audio lanes” feature makes it easy to organize and export sequences with isolated channels for dialogue, music, and effects. FCPX pros like to tweak the noses of other NLE users about how much easier it is in FCPX. While that’s more or less true – and, in fact, audio lanes can go a lot deeper than a few aggregate channels – that doesn’t mean the task is particularly hard or less versatile in Premiere Pro.

Last year I wrote about how to set this up using Premiere submix tracks, which is a standard audio post workflow, common to most DAW and mix applications. Go back and read the article for more detail. But, what about sequences that are already edited, which didn’t start with a track configuration already set up with submix tracks and proper output routing? In fact, that’s quite easy, too, which brings me to today’s post.

Step 1 – Edit

Start out by editing as you always have, using your standard sequence presets. I’ve created a few custom presets that I normally use, based on the several standard formats I work in, like 1080p/23.976 and 1080p/29.97. These typically require stereo mixes, so my presets start with a minimum configuration of one picture track, two standard audio tracks, and stereo output. This is the starting point, but more video and audio tracks get added, as needed, during the course of editing.

Get into the habit of organizing your audio tracks. Typically this means dialogue and VO tracks towards the top (A1-A4), then sound effects (A5-A8), and finally music (A9-A12). Keep like audio types on their intended tracks – don’t mix different types onto the same track. For instance, don’t put sound effects onto tracks that you’ve designated for dialogue clips. Of course, the number of actual tracks needed for these audio types will vary with your projects. A simple VO+music sequence may only have two to four tracks, while dramatic entertainment pieces will have a lot more. Delete all empty audio tracks when you are ready to mix.

Mix for stereo output as you normally would. This means balancing components using keyframes and clip mixing. Then perform overall adjustments and “riding faders” in the track mixer. This is also where I add global effects, like compression for dialogue and limiting for the master mix.

Output your final mixed master file for delivery.

Step 2 – Multichannel DME sequences

The next step is to create or open a new multichannel DME (dialogue/music/effects) sequence. I’ve already created a custom preset, which you may download and install. It’s set up as 1080p/23.976, with two standard audio tracks and three pre-labelled stereo submix tracks, but you can customize yours as needed. The master output is multichannel (8 channels), which is sufficient to cover a stereo pair for the final mix, plus isolated pairs for each of the three submixes – dialogue, music, and effects.

Next, copy-and-paste all clips from your final stereo sequence to the new multichannel sequence. If you have more than one track of picture and two tracks of audio, the new blank sequence will simply auto-populate more tracks once you paste the clips into it. The result should look the same, except with the additional three submix tracks at the bottom of your timeline. At this stage, the output of all tracks is still routed to the stereo master output and the submix tracks are bypassed.

Now open the track mixer panel and, using the pulldown output selector on each track, switch the output from Master to the appropriate submix channel: dialogue tracks to DIA, music tracks to MUS, and effects tracks to SFX. The sequence preset is already set up with the proper output routing. All submixes go to outputs 1 and 2 (the composite stereo mix), along with their isolated outputs – dialogue to 3 and 4, effects to 5 and 6, music to 7 and 8. As with your stereo mix, level adjustments and plug-in processing (compression, EQ, limiting, etc.) can be added to each of the submix channels.
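
Expressed as a simple routing table, the scheme looks like the following sketch. The track assignments are examples only; your counts will vary per project:

```python
# Each audio track feeds one submix; each submix feeds the composite
# stereo pair (1-2) plus its own isolated pair.
track_to_submix = {
    "A1": "DIA", "A2": "DIA",   # dialogue / VO
    "A3": "SFX", "A4": "SFX",   # sound effects
    "A5": "MUS", "A6": "MUS",   # music
}

submix_to_outputs = {
    "DIA": [(1, 2), (3, 4)],    # composite mix + isolated dialogue
    "SFX": [(1, 2), (5, 6)],    # composite mix + isolated effects
    "MUS": [(1, 2), (7, 8)],    # composite mix + isolated music
}
```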

Note: while not essential, multichannel, split-track master files are most useful when they are also textless. So, before outputting, I would recommend disabling all titles and lower third graphics in this sequence. The result is clean video – great for quick fixes later in the event of spelling errors or a title change.

Step 3 – Multichannel export

Now that the sequence is properly organized, you’ve got to export the multichannel sequence. I have created a mastering export preset, which you may also download. It works in the various Adobe CC apps, but is designed for Adobe Media Encoder workflows. This preset will match its output to the video size and frame rate of your sequence and master to a file with the ProRes 4444 codec. The audio is set for eight output channels, configured as four stereo pairs – the composite mix, plus the three DME channels.

To test your exported file, simply reimport the multichannel file back into Premiere Pro and drop it onto a timeline. There you should see four independent stereo channels with audio organized according to the description above.
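
As an alternative sanity check outside of Premiere, you can query the file with ffprobe, assuming FFmpeg is installed. Depending on how the pairs are written to the file, expect either a single eight-channel stream or four stereo streams:

```python
# Query the exported master's audio streams with ffprobe (part of FFmpeg).
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "a",
     "-show_entries", "stream=channels", "-of", "csv=p=0",
     "master_dme.mov"],          # hypothetical output file name
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # "8", or "2" printed four times for four stereo pairs
```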

Presets

I have created a sequence and an export preset, which you may download here. I have only tested these on Mac systems, where they are installed into the Adobe folder contained within the user’s Documents folder. The sequence preset is placed into the Premiere Pro folder and the export preset into the Adobe Media Encoder folder. If you’ve updated the Adobe apps along the way, you will have a number of version subfolders. As of December 2017, the 12.0 subfolder is the correct location.
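
To double-check that you’ve placed the files in the right spot, here is a small Python sketch that prints the assumed locations, based on the description above; adjust the version subfolder to match your installation:

```python
from pathlib import Path

# Assumed Mac install locations per the description above (Adobe CC, v12.0).
adobe_docs = Path.home() / "Documents" / "Adobe"
sequence_preset_dir = adobe_docs / "Premiere Pro" / "12.0"
export_preset_dir = adobe_docs / "Adobe Media Encoder" / "12.0"

for d in (sequence_preset_dir, export_preset_dir):
    print(d, "->", "exists" if d.exists() else "missing")
```

Happy mixing!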

©2017 Oliver Peters