NAB Show 2019

This year the NAB Show seemed to emphasize its roots – the “B” in National Association of Broadcasters. Gone or barely visible were the fads of past years, such as stereoscopic 3D, 360-degree video, virtual/augmented reality, drones, etc. Not that these technologies have disappeared – they have simply settled into the smaller share of the market that reflects reality. There’s not much point in promoting stereo 3D at NAB if most of the industry goes ‘meh’.

Big exhibitors of the past, like Quantel, RED, Apple, and Autodesk, are gone from the floor. Quantel products remain as part of Grass Valley (now owned by Belden), which is the consolidation of Grass Valley Group, Quantel, Snell & Wilcox, and Philips. RED decided last year that small, camera-centric shows were better venues. Apple – well, they haven’t been on the main floor for years, but even this year, there was no off-site, Final Cut Pro X stealth presence in a hotel suite somewhere. Autodesk, which shifted to a subscription model a couple of years ago, had a demo suite in the nearby Renaissance Hotel, focusing on its hero product, Flame 2020. Smoke for Mac users – tough luck. It’s been over for years.

This was a nuts-and-bolts year, with many exhibits showing new infrastructure products. These appeal to larger customers, such as broadcasters and network facilities. Specifically, the world is shifting to an IP-based infrastructure for signal routing, control, and transmission. This replaces the copper and fiber wiring of the past, along with the devices (routers, video switchers, etc.) at either end of the wire. Companies that might have appeared less relevant, like Grass Valley, are back in a strong sales position. Other companies, like Blackmagic Design, are being encouraged by their larger clients to fulfill those needs. And as ever, consolidation continues – this year VizRT acquired NewTek, an early player in video-over-IP with its proprietary NDI protocol.

Adobe

The NAB season unofficially started with Adobe’s pre-NAB release of the CC2019 update. For editors and designers, the hallmarks of this update include a new freeform bin view and adjustable guides in Premiere Pro, plus content-aware video fill in After Effects. These are solid additions in response to customer requests, which is something Adobe has focused on. A smaller, but no less important, feature is Adobe’s ongoing effort to improve media performance on the Mac platform.

As in past years, their NAB booth was an opportunity to present these new features in-depth, as well as showcase speakers who use Adobe products for editing, sound, and design. Part of the editing team from the series Atlanta was on hand to discuss the team’s use of Premiere Pro and After Effects in their ‘editing crash pad’.

Avid

For many attendees, NAB actually kicked off on the weekend with Avid Connect, a gathering of Avid users (through the Avid Customer Association), featuring meet-and-greets, workshops, presentations, and ACA leadership committee meetings. While past product announcements at Connect have been subdued from the vantage of Media Composer editors, this year was a major surprise. Avid revealed its Media Composer 2019.5 update (scheduled for release at the end of May). This came as part of a host of other announcements. Most of these apply to companies that have invested in the full Avid ecosystem, including Nexis storage and Media Central asset management. While those are superb, they only apply to a small percentage of the market. Let’s not forget Avid’s huge presence in the audio world, thanks to the dominance of Pro Tools – now with Dolby Atmos support. With the acquisition of Euphonix years back, Avid has become a significant player in the live and studio sound arena. Various examples of its S-series consoles in action were presented.

Since I focus on editing, let me discuss Media Composer a bit more. The 2019.5 refresh is the first major Media Composer overhaul in years. It started in secret last year. 2019.5 is the first iteration of the new UI, with more to be updated in coming releases. In short, the interface has been modernized and streamlined in ways to attract newer, younger users, without alienating established editors. Its panel design is similar to Adobe’s approach – i.e. interface panels can be docked, floated, stacked, or tabbed. Panels that you don’t want to see may be closed or simply slid to the side and hidden. Need to see a hidden panel again? Simply slide it back open from the edge of the screen.

This isn’t just a new skin. Avid has overhauled the internal video pipeline, with 32-bit floating point color and an uncompressed DNx codec. Project formats now support up to 16K. Avid is also compliant with the specs of the Netflix Post Alliance and the ACES logo program.

I found the new version very easy to use and a welcome change; however, it will require some adaptation if you’ve been using Media Composer for a long time. In a nod to the Media Composer heritage, the weightlifter (aka ‘liftman’) and scissors icons (for lift and extract edits) are back. Even though Media Composer 2019.5 is just in early beta testing, Avid felt good enough about it to use this version in its workshops, presentations, and stage demos.

One of the reasons to go to NAB is for the in-person presentations by top editors about their real-world experiences. No one can top Avid at this game, as they can easily tap a host of Oscar, Emmy, BAFTA, and Eddie award winners. The hallmark for many this year was the presentation at Avid Connect and/or at the show by the Oscar-winning picture and sound editing/mixing team for Bohemian Rhapsody. It’s hard not to gather a standing-room-only crowd when you close your talk with the Live Aid finale sequence played in kick-ass surround!

Blackmagic Design

Attendees and worldwide observers have come to expect a surprise NAB product announcement out of Grant Petty each year and he certainly didn’t disappoint this time. Before I get into that, there were quite a few products released, including for IP infrastructures, 8K production and post, and more. Blackmagic is a full spectrum video and audio manufacturer that long ago moved into the ‘big leagues’. This means that just like Avid or Grass Valley, they have to respond to pressure from large users to develop products designed around their specific workflow needs. In the BMD booth, many of those development fruits were on display, like the new Hyperdeck Extreme 8K HDR recorder and the ATEM Constellation 8K switcher.

The big reveal for editors was DaVinci Resolve 16. Blackmagic has steadily been moving into the editorial space with this all-in-one, edit/color/mix/effects/finishing application. If you have no business requirement for – or emotional attachment to – one of the other NLE brands, then Resolve (free) or Resolve Studio (paid) is an absolute no-brainer. Nothing can touch the combined power of Resolve’s feature set.

New for Resolve 16 is an additional editorial module called the Cut Page. At first blush, the design, layout, and operation are amazingly similar to Apple’s Final Cut Pro X. Blackmagic’s intent is to make a fast editor where you can start and end your project for a time-sensitive turnaround without the complexities of the Edit Page. However, it’s just another tool, so you could work entirely in the Cut Page, or start in the Cut Page and refine your timeline in the Edit Page, or skip the Cut Page altogether. Resolve offers a buffet of post tools that are at your disposal.

While Resolve 16’s Cut Page does elicit a chuckle from experienced FCPX users, it offers some new twists. For example, there’s a two-level timeline view – the top section is the full-length timeline and the bottom section is the zoomed-in detail view. The intent is quick navigation without the need to constantly zoom in and out of long timelines. There’s also an automatic sync detection function. Let’s say you are cutting a two-camera show. Drop the A-camera clips onto the timeline and then go through your B-camera footage. Find a cut-away shot, mark in/out on the source, and edit. It will ‘automagically’ edit to the in-sync location on the timeline. I presume this is matched by either common sound or timecode. I’ll have to see how this works in practice, but it demos nicely. Changes to other aspects of Resolve were minor and evolutionary, except for one other notable feature. The Color Page added its own version of content-aware video fill.

Another editorial product addition – tied to the theme of faster, more-efficient editing – was a new edit keyboard. Anyone who’s ever cut in the linear days – especially those who ran Sony BVE9000/9100 controllers – will feel very nostalgic. It’s a robust keyboard with a high-quality, integrated jog/shuttle knob. The feel is very much like controlling a tape deck in a linear system, with fast shuttle response and precise jogging. The precision is far better than any of the USB controllers, like a Contour Shuttle. Whether or not enough people will have interest in shelling out $1,025 for it remains to be seen. It’s a great tool, but are you really faster with one than with FCPX’s skimming and a standard keyboard and mouse?

Ironically, if you look around the Blackmagic Design booth there does seem to be a nostalgic homage to Sony hardware of the past. As I said, the edit keyboard is very close to a BVE9100 keyboard. Even the style of the control panel on the Hyperdecks – and the look of the name badges on those panels – is very much Sony’s style. As humans, this appeals to our desire for something other than the glass interfaces we’ve been dealing with for the past few years. Michael Cioni (Panavision, Light Iron) coined this as ‘tactile attraction’ in his excellent Faster Together Stage talk. It manifests itself not only in these types of control surfaces, but also in skeuomorphic designs applied to audio filter interfaces. Or in the emotion created in the viewer when a colorist adds film grain to digital footage.

Maybe Grant is right and these methods are really faster in a pressure-filled production environment. Or maybe this is simply an effort to appeal to emotion and nostalgia by Blackmagic’s designers. (Check out Grant Petty’s two-hour 2019 Product Overview for more in-depth information on Blackmagic Design’s new products.)

8K

I won’t spill a lot of words on 8K. Seems kind of silly when most delivery is HD and even SD in some places. A lot of today’s production is in 4K, but really only for future-proofing. But the industry has to sell newer and flashier items, so they’ve moved on to 8K pixel resolution (7680 x 4320). Much of this is driven by Japanese broadcasters and manufacturers, who are pushing into 8K. You can laugh or roll your eyes, but NAB had many examples of 8K production tools (cameras and recorders) and display systems. Of course, it’s NAB, making it hard to tell how many of these are only prototypes and not yet ready for actual production and delivery.

For now, it’s still a 4K game, with plenty of mainstream product. Not only cameras and NLEs, but items like AJA’s KiPro family. The KiPro Ultra Plus records up to four channels of HD or one channel of 4K in ProRes or DNx. The newest member of the family is the KiPro GO, which records up to four channels of HD (25Mbps H.264) onto removable USB media.

Of course, the industry never stops, so while we are working with HD and 4K, and looking at 8K, the developers are planning ahead for 16K. As I mentioned, Avid already has project presets built-in for 16K projects. Yikes!

HDR

HDR – or high dynamic range – is about where it was last year. There are basically four formats vying to become the final standard used in all production, post, and display systems. While there are several frontrunners and edicts from distributors to deliver HDR-compatible masters, there still is no clear path. If you shoot in log or camera raw with nearly any professional camera produced within the past decade, you have originated footage that is HDR-compatible. But none of the low-cost post solutions make this easy. Without the right monitoring environment, you are wasting your time. If anything, those waters are muddier this year. There were a number of HDR displays throughout the show, but there were also a few labelled as using HDR simulation. I saw a couple of those at TV Logic. Yes, they looked gorgeous and yes, they were receiving an HDR signal. I found out that the ‘simulation’ part of the description meant that the display was bright (up to 350 nits), but not bright enough to qualify as ‘true’ HDR (1,000 nits or higher).

As in past transitions, we are certainly going to have to rely on some ‘glue’ products. For me, that’s AJA again. Through their relationship with Colorfront, AJA offers two HDR products: the HDR Image Analyzer and the FS-HDR converter. The latter was introduced last year as a real-time frame synchronizer and color converter to go between SDR and HDR display standards. The new Analyzer is designed to evaluate color space and gamut compliance. Just remember, no computer display can properly show you HDR, so if you need to post and deliver HDR, proper monitoring and analysis tools are essential.

Cameras

I’m not a cinematographer, but I do keep up with cameras. Nearly all of this year’s camera developments were evolutionary: new LF (large format sensor) cameras (ARRI), 4K camcorders (Sharp, JVC), a full-frame mirrorless camera from Nikon (with ProRes RAW recording coming in a future firmware update). Most of the developments were targeted towards live broadcast production, like sports and megachurches. Ikegami had an 8K camera to show, but their real focus was on 4K and IP camera control.

RED, a big player in the cinema space, was only there in a smaller demo room, so you couldn’t easily compare their 8K imagery against others on the floor, but let’s not forget Sony and Panasonic. While ARRI has been a favorite, due to the ‘look’ of the Alexa, Sony (Venice) and Panasonic (Varicam and now EVA-1) are also well-respected digital cinema tools that create outstanding images. For example, Sony’s booth featured an amazing, theater-sized, LED 8K micro-pixel display system. Some of the sample material shown was of the Rio Carnival, shot with anamorphic lenses on a 6K full-frame Sony Venice camera. Simply stunning.

Finally, let’s not forget Canon’s line-up of cinema cameras, from the C100 to the C700FF. To complement these, Canon introduced their new line of Sumire Prime lenses at the show. The C300 has been a staple of documentary films, including the Oscar-winning film, Free Solo, which I had the pleasure of watching on the flight to Las Vegas. Sweaty palms the whole way. It must have looked awesome in IMAX!

(For more on RED, cameras, and lenses at NAB, check out this thread from DP Phil Holland.)

It’s a wrap

In short, NAB 2019 had plenty for everyone. This also included smaller markets, like products for education seminars. One of these that I ran across was Cinamaker. They were demonstrating a complete multi-camera set-up using four iPhones and an iPad. The iPhones are the cameras (additional iPhones can be used as isolated sound recorders) and the iPad is the ‘switcher/control room’. The set-up can be wired or wireless, but camera control, video switching, and recording are all handled at the iPad. The iPad can generate the final product, or the project can be transferred to a Mac (with the line cut and camera iso media, plus edit list) for re-editing/refinement in Final Cut Pro X. Not too shabby, given the market that Cinamaker is striving to address.

For those of us who like to use the NAB Show exhibit floor as a miniature yardstick for the industry, one of the trends to watch is what type of gear is used in the booths and press areas. Specifically, one NLE over another, or one hardware platform versus the other. On that front, I saw plenty of Premiere Pro, along with some Final Cut Pro X. Hardware-wise, it looked like Apple versus HP. Granted, PC vendors, like HP, often supply gear to use in the booths as a form of sponsorship, so take this with a grain of salt. Nevertheless, I would guess that I saw more iMac Pros than any other single computer. For PCs, it was a mix of HP Z4, Z6, and Z8 workstations. HP and AMD were partner-sponsors of Avid Connect and they demoed very compelling set-ups with these Z-series units configured with AMD Radeon cards. These are very powerful workstations for editing, grading, mixing, and graphics.

©2019 Oliver Peters

Blackmagic Design eGPU Pro

Last year Apple embraced external graphics processing units. Blackmagic Design responded with the release of its AMD-powered eGPU model. Many questioned their choice of the Radeon Pro 580 chip instead of something more powerful. That challenge has been answered with the new Blackmagic eGPU Pro. It sports the Radeon RX Vega 56 – a similar model to the one inside the base iMac Pro configuration. The two eGPU models are nearly identical in design, but in addition to more processing power, the eGPU Pro adds a DisplayPort connection that can support 5K monitors.

The eGPU Pro includes two Thunderbolt 3/USB-C ports with 85W charging capability, HDMI, DisplayPort, and four USB-A connectors for standard USB 3.1 devices. This means you can connect multiple peripherals and displays, plus power your laptop. You’ll need a Thunderbolt 3 connection from the computer and then either eGPU model becomes plug-and-play with Mojave (macOS 10.14) or later.

Setting up the eGPU Pro

With Mojave, most current creative apps, like Final Cut Pro X, Premiere Pro, Resolve, etc., can be set to always use the eGPU (when connected) via the “Prefer External GPU” option in the application’s Finder Get Info panel. This is an “either/or” choice. The application does not combine the power of both GPUs for maximum performance. When you pull up the Activity Monitor, you can easily see that the internal GPU is loafing while the eGPU Pro does the heavy lifting during tasks such as rendering. External GPUs benefit Macs with low-end, built-in GPUs, like the 13″ MacBook Pro or the Mac mini. A Blackmagic eGPU or eGPU Pro wouldn’t provide an edge to the render times of an iMac Pro, for example. It wouldn’t be worth the investment, unless you need one to connect additional high-resolution displays.

Users who are unfamiliar with external GPUs assume that the advantage is in faster export and render times, but that’s only part of the story. Not every function of an application uses the GPU, so many factors determine rendering. External GPU technology is very much about real-time image output. An eGPU will allow more connected displays of higher resolutions than an underpowered Mac would normally support on its own. The eGPU will also improve real-time playback of effects-heavy timelines. So yes, editors will get faster exports, but they will also enjoy a more fluid editing experience.

Extending the power of the Mac mini

In my Mac mini review, I concluded that a fully-loaded configuration made for a very capable editing computer. However, if you tend to use a number of effects that lean on GPU power, you will see an impact on real-time playback. For example, with the standard Intel GPU, I could add color correction, gaussian blur, and a title, and playback was generally fine with a fast drive. But, when I added a mask to the blur, it quickly dropped frames during playback. Once I connected the eGPU Pro to this same Mac mini, such timelines played fluidly and, in fact, more effects could be layered onto clips. As in my other tests, Final Cut Pro X performed the best, but Premiere Pro and Resolve also performed solidly.

For basic rendering, I tested the same sequence that I used in the Mac mini review. This is a 9:15-long 1080p timeline made up of 4K source clips in a variety of codecs, plus scaling and color correction. I exported ProRes and H.264 master files from FCPX, Premiere Pro, and Resolve. With the eGPU Pro, times were cut in the range of 12% (FCPX) to 54% (Premiere). An inherently fast renderer, like Final Cut, gained the least by percentage, as it already exhibited the fastest times overall. Premiere Pro saw the greatest gain from the addition of the eGPU Pro. This is a major improvement over last year when Premiere didn’t seem to take much advantage of the eGPU. Presumably both Apple and Adobe have optimized performance when an eGPU is present.

Most taxing tests

A timeline export test is real-world but may or may not tax a GPU. So, I set up a specific render test for that purpose. I created a :60 6K timeline (5760×3240) composed of a nine-screen composite of 4K clips scaled into nine 1920×1080 sections. Premiere Pro would barely play this at even 1/16th resolution using only the Intel GPU. With the eGPU Pro, it generally played at 1/2 resolution. This was exported to a final 1080 ProRes file. During my base test (without the eGPU connected) Premiere Pro took over 31 minutes with “maximum quality” selected. A standard quality export was about eight minutes, while Final Cut Pro X took five minutes. Once I re-connected the eGPU Pro, the same timelines exported in 3:20 under all three test scenarios. That’s a whopping 90% reduction in time for the most taxing condition! One last GPU-centric test was the BruceX test, which was devised for Final Cut Pro X. The result without the eGPU was :58, but an impressive :16 when the eGPU Pro was used.

As you can see, effects-heavy work will benefit from the eGPU Pro, not only in faster renders and exports, but also improved real-time editing. This is also true of Resolve timelines with many nodes and in other graphics applications, like Pixelmator Pro. The 2018 Mac mini is a capable mid-range system when you purchase it with the advanced options. Nevertheless, users who need that extra grunt will definitely see a boost from the addition of a Blackmagic eGPU Pro.

Originally written for RedShark News.

©2019 Oliver Peters

The Nuances of Overcranking

The concept of overcranking and undercranking in the world of film and video production goes back to the origins of motion picture technology. The earliest film cameras required the camera operator to manually crank the film mechanism – they didn’t have internal motors. A good camera operator was partially judged by how constant a frame rate they could maintain while cranking the film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then the playback speed during projection was in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that the projection of old B&W films gained that fast, jerky motion we often incorrectly attribute to “old time movies” today. That characteristic motion is because they are no longer played at their intended speeds.

While manual film cranking seems anachronistic in modern times, it had the benefit of in-camera, variable-speed capture – aka speed ramps. Some modern film cameras include speed-controlled mechanisms that still make this possible today – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slomo and fast speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general, it was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording. Post shifted to digital, computer-based, rather than electro-mechanical methods. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms enable viewers to view videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common, possible system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smart phone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but the post is designed to be at 23.98, then the editor has four options. 1) Play the files as real-time (29.97 in a 23.98 sequence), which will cause frames to be dropped, resulting in some stuttering on motion. 2) Play the footage at the slowed speed, so that there is a one-to-one relationship of frames, which doesn’t work for sync sound. 3) Go through a frame rate conversion before editing starts, which will result in blended and/or dropped frames. 4) Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set both the system or target frame rate and the actual recording rate. These may be called different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which this file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking. That is, you are recording slow motion in-camera.

(Note: from here on I will use simplified whole numbers instead of the exact fractional rates in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in playback that is fast motion at 200%. That’s the basis for hyperlapse/timelapse footage.
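To put that relationship in concrete terms, here is a minimal sketch (my own illustration, not code from any camera or NLE) of the ratio at work: the resulting playback speed is simply the target rate divided by the record rate.

```python
# Minimal sketch: playback speed as a percentage, given record and target rates.
# speed = target (system) rate / record rate

def playback_speed_percent(record_fps: float, target_fps: float) -> float:
    return 100.0 * target_fps / record_fps

print(playback_speed_percent(48, 24))   # 50.0  -> 50% slow motion (overcranked)
print(playback_speed_percent(12, 24))   # 200.0 -> 200% fast motion (undercranked)
print(playback_speed_percent(120, 24))  # 20.0  -> 120fps footage in a 24fps project
```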

The good news is that professional production cameras embed the pertinent metadata into the file so that editing and player software automatically knows what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically playback in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the use is for non-sound-sync B-roll footage, then altering the frame rate (making it play slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice. The slower footage helps to smooth out the vibration introduced by using such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recordings, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed. It wobbles around that rate – sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.

Interpret footage

This may have a different name in any given application, but interpret footage is a function to make the application think that the file should be played at a different rate than it was recorded at. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can re-write the QuickTime header information without transcoding the file. Then that file shows up at the desired rate inside of the NLE. In the case of FCPX, the same potential audio issues can arise as described above if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow. The proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking in Resolve will be incorrect. It may result in not being able to relink these files at all, because the timecode that Resolve looks for falls outside of the boundaries of the file. So use this function with caution.

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately this introduces issues like skipping frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of that rate, much like traditional film post. A 200% speed increase is a proper multiple. 150% is not. The former means you are playing every other frame from a clip for smooth action. The latter results in one out of every three frames being eliminated in playback, leaving you with some unevenness in the movement.
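To see why whole multiples matter, here is a small sketch (my own illustration, using simple nearest-frame selection rather than any particular NLE’s retiming engine) of which source frames end up on screen at 200% versus 150%.

```python
# Which source frames play back at a given speed, assuming simple frame selection
# (no blending or optical flow). Even steps = smooth motion; mixed steps = judder.

def picked_frames(speed: float, output_frames: int = 8) -> list:
    return [int(k * speed) for k in range(output_frames)]

print(picked_frames(2.0))  # [0, 2, 4, 6, 8, 10, 12, 14] - every other frame, even cadence
print(picked_frames(1.5))  # [0, 1, 3, 4, 6, 7, 9, 10]   - step sizes alternate 1, 2, 1, 2
```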

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle this better than others. Optical flow “creates” new in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and may often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changes in motion interpolation in Resolve are pointless, unless the final export of the timeline is from Resolve. If clips go back to your NLE for finishing, then it will be that software which determines the quality of motion effects. Twixtor is a plug-in that many editors use when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and the ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate for frames to play one-to-one (no blended, duplicate, or skipped frames). Final Cut Pro X handles this in the best manner, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually. 

The easiest way to get a value that yields clean frames (one-to-one frame rate) is to simply divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. The simplified numbers yield the same results as the exact fractional rates. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion speed. A 59.94fps clip is 40%. A 25fps clip is 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want this to be one-to-one, it will have to be sped up. But the calculation is the same. 30 divided by 24 results in a 125% speed adjustment. And so on.
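The same division can be scripted as a quick sanity check. This is only a sketch of the arithmetic described above (not a call into any NLE’s API), but it confirms that the simplified and exact rates agree.

```python
# Conform speed: timeline frame rate divided by clip frame rate, as a percentage.

def conform_speed(timeline_fps: float, clip_fps: float) -> float:
    return 100.0 * timeline_fps / clip_fps

print(conform_speed(24, 30))         # 80.0  -> 29.97 clip slowed in a 23.98 timeline
print(conform_speed(24, 60))         # 40.0  -> 59.94 clip
print(conform_speed(24, 25))         # 96.0  -> 25fps clip
print(conform_speed(30, 24))         # 125.0 -> 23.98 clip sped up in a 29.97 timeline
print(conform_speed(23.976, 29.97))  # ~80.0 -> the exact fractional rates give the same answer
```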

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

© 2019 Oliver Peters

Edit Collaboration and Best Practices

There are many workflows that involve collaboration, with multiple editors and designers working on the same large project or group of projects. Let me say up front that if you want the best possible collaborative experience with multiple editors, then work with Avid Media Composer. Full stop. I have worked both sides of the equation and without a doubt, Media Composer connected to Avid Unity/Isis/Nexis shared storage is simply not matched by Final Cut Pro, Final Cut Pro X, Premiere Pro, or any other editing software/storage/cloud combination. Everything else is a compromise, which is why feature film and TV series editorial teams continue to select Avid solutions as their first choice.

In spite of that, there are many reasons to use other editing tools. I work most of the time in Adobe Premiere Pro CC and freelance at a shop with nine edit workstations connected to shared storage. We work mainly in Adobe Creative Cloud applications and our projects involve a lot of collaboration. Some of these are corporate videos that are frequently edited and revised by different editors. Some are entertainment shows, cut by a small editorial team focused on those shows. For some projects, Premiere Pro is the perfect tool. For others, we have to develop strategies to adapt Premiere to our workflow.

With that in mind, the following are tips and best practices that I’ll share for what has worked best for us over the past three years, while working on large projects with a team of editors. Although it applies to our work with Premiere Pro, the same would generally be true if we were working with Apple Final Cut Pro X instead.

Organization. We organize all projects into a specific folder structure, using a Post Haste template. All media files, like camera footage, audio, graphic elements, etc. go into common folders. Editors know where to look to find things. When new camera footage comes in, files are organized as “dailies” into specific folders by date, camera, and camera card. Non-pro formats, like GoPro and DSLR footage, will be batch-renamed to reflect the project, date, and camera card. The objective is to have unique file names for each and every media file.
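As an illustration of that renaming step, here is a hypothetical sketch. The naming pattern (project_date_card_clipnumber) and the folder layout are my own assumptions, not part of the Post Haste template described above.

```python
# Hypothetical batch-rename for a single camera card of "dailies" clips.
# The naming convention used here is an example, not a prescribed standard.
import pathlib

def rename_dailies(card_folder: str, project: str, shoot_date: str, card_id: str) -> None:
    folder = pathlib.Path(card_folder)
    clips = sorted(p for p in folder.iterdir() if p.suffix.lower() in {".mp4", ".mov"})
    for index, clip in enumerate(clips, start=1):
        new_name = f"{project}_{shoot_date}_{card_id}_{index:04d}{clip.suffix.lower()}"
        clip.rename(clip.with_name(new_name))

# Example: rename_dailies("/Volumes/Media/dailies/20190408/A01", "AcmePromo", "20190408", "A01")
```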

Optimized, transcoded, or proxy media. Depending on the performance and amount of media, you may need to do some prep work before even starting the edit process. Premiere and FCPX work well with some media formats and not with others. NAS/SAN storage is particularly taxing, especially once you get to resolutions greater than HD. If you want the most fluid experience in a shared workflow, then you will likely need to transcode proxy files from within the application. The reason to stay inside of FCPX or Premiere Pro is so that frame size offsets are properly tracked. Once proxies have been transcoded, it’s a simple matter of toggling between the proxy media (best playback performance) and full-resolution media (best image quality).

On the other hand, if you’d rather stick to full-resolution, native media, then some formats will have to be transcoded into “optimized” media. For instance, GoPro 4K footage is terrible to edit with natively. It should always be transcoded to ProRes or DNxHD before editing, if you don’t want to go the proxy route. This can be done inside or outside of the application and is an easy task with DaVinci Resolve, EditReady, Adobe Media Encoder, or Apple Compressor.
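For those who prefer a scriptable route, a command-line encoder can do the same job in bulk. Treat this as a sketch only: ffmpeg is not one of the tools named above, and the ProRes 422 HQ settings shown are a common choice rather than a requirement.

```python
# Sketch: transcode a native camera file (e.g. GoPro H.264) to ProRes 422 HQ
# with uncompressed PCM audio, by calling ffmpeg in a subprocess.
import subprocess

def to_prores(source: str, destination: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", source,
        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",
        destination,
    ], check=True)

# Example: to_prores("GOPR0001.MP4", "GOPR0001_prores.mov")
```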

Finally, if you have image sequences from a drone or other source, forget trying to edit from these off of a network. Transcode them right away into some format of master movie file. I find Resolve to be the best tool for this. It’s fast and since these are often camera raw files, you can apply a base grade to them as a starting point for future color correction.

Break up your projects. Depending on the type and size of the job and number of editors working on it, you may choose to work in multiple Premiere projects. There may be a master file where all media is imported and initially organized. Then there may be multiple projects that are offshoots from this for component parts. In a corporate environment, it could be several different videos cut from a single, larger set of media. In a feature film, there could be different Premiere projects for each reel of the film.

Since Premiere Pro employs project locking, any project opened by one editor can also be opened in a read-only mode by other editors. Editors can have multiple Premiere projects open at one time. Thus, it’s simple to bring in elements from one project into another, even while they are all open. This workflow mimics Avid’s bin-locking strategy.

It helps to keep project files streamlined as progress on the production extends over time. You want to keep the number of sequences in any given project small. Periodically duplicate your project(s), strip out old sequences from the current project, and archive the older project files.

As a general note, while working to build the creative story edits – i.e. “offline editing” – you will want to keep plug-in filter effects to a minimum. In fact, it’s generally a good idea to keep the plug-in selection on each system small, so that all workstations in this shared environment are able to have the same set of installed plug-ins. The same is true of fonts.

Finishing stages of post. There are generally two paths in the finishing, aka “online editing” stage. Either all final color correction and assembly of effects is completed within Premiere Pro, or there is a roundtrip through a color correction application, like Blackmagic Design DaVinci Resolve. The same holds true for audio, where a separate sound editor/designer/mixer may handle the finishing touches in Avid Pro Tools.

To accomplish an easy roundtrip with Resolve, create a sequence with all color correction and effects removed. Flatten the video to a single track (if possible), and remove the audio or do a simple stereo mixdown for reference. Ideally, media with mixed frame rates should be addressed as slow motion in the edited sequence. Avoid modifying the frame rate through any sort of “interpret” function within the application. Export an XML or AAF and send that and the associated media to Resolve. When color correction is complete, you can render the entire timeline at the sequence resolution as a single master file.

Conversely, if you want to send it back to Premiere Pro for final assembly and to complete the roundtrip, then render individual clips at their source resolution with handles of one to two seconds. Back in Premiere, re-apply titles, insert completed visual effects, and add any missing plug-in effects.

With audio post, there will be no roundtrip of elements, since the mixer will deliver a completed mixed stereo or surround track. This should be imported into Premiere (or Resolve if the final master is created in Resolve) and married back to the final video sequence. The mixer should also supply “stems” – the individual dialogue, music, and sound effects (D/M/E) submix tracks.

Mastering. Final sequences should be exported in a master file format (ProRes, DNxHD/HR, uncompressed) in at least two forms: 1) master with final mix and titles, and 2) textless submaster with split-track audio (multiple channels containing the D/M/E stems). All of these files are stored within the same job-based folder structure outlined at the top. It is quite common that future revisions will be made using the textless submaster rather than re-opening the full project, or that it may be used as source material in another edit.

Another aspect of finishing the project is media consolidation. This means taking the final sequence and generating a new project file from it. That file contains only those elements from the sequence, along with a copy of the media used, where each file has been trimmed to the portion within the sequence (plus handles). This is the Project Manager function in Premiere Pro. Unfortunately, Premiere is not consistently good at this task. Some formats will be properly trimmed, while others will be copied in their entirety. That’s OK for a :10 take, but a bummer when it’s a 30-minute interview.

The good news is that if you went through the Resolve roundtrip workflow and rendered individual clips, then effectively Resolve has already done media consolidation as a byproduct. In addition, if your source media is 4K, but you only finished in HD, the Resolve renders will be 4K. If in the future, you need to deliver the same master in 4K, everything is already set. Of course, that assumes that you didn’t do a lot of “punching in” and reframing in your edit sequence.

Cloud-based services. Often collaboration requires a distributed team, when not everyone is under one roof. While Adobe does offer cloud-based team editing methods, this doesn’t really work when editors are on different Creative Cloud accounts or when the collaboration is between an editor and a graphic designer/animator/VFX artist working in non-Adobe tools. In that case the old standbys have been Dropbox, Box, or Google Drive. Syncing is easy and relatively reliable. However, these are really just designed for sharing assets. But when this involves a couple of editors and each has a local, mirrored set of media, then simple sharing/syncing of only small project files makes for a working collaborative method.

Frame.io is the newbie here, with updated extension tools designed for in-application workspace panels within Final Cut Pro X, After Effects, and Premiere Pro. While they tout the ease of moving full-resolution media into their cloud, including camera files, I really wouldn’t recommend doing that. It’s simply not very practical on most projects. But for sharing cuts using a standard review-and-approval workflow, Frame.io definitely hits most of the buttons.

©2018 Oliver Peters

Rams

If you are a fan of the elegant, minimalist design of Apple products, then you have seen the influence of Dieter Rams. The renowned, German industrial designer, associated with functional and unobtrusive design, is known for the iconic consumer products he developed for Braun, as well as his Ten Principles for Good Design. Dieter Rams is the subject of Rams, a new documentary film by Gary Hustwit (Helvetica, Objectified, Urbanized).

This has been a labor of love for Hustwit and partially funded through a Kickstarter campaign. In a statement to the website Designboom, Hustwit says, “This film is an opportunity to celebrate a designer whose work continues to impact us and preserve an important piece of design history. I’m also interested in exploring the role that manufactured objects play in our lives and, by extension, the relationship we have with the people who design them. We hope to dig deeper into Rams’ untold story – to try and understand a man of contradictions by design. I want the film to get past the legend of Dieter. I want it to get into his philosophy, process, inspirations, and even his regrets.”

Hustwit has worked on the documentary for the past three years and premiered it in New York at the end of September. The film is currently on the road for a series of international premiere screenings until the end of the year. I recently had a conversation with Kayla Sklar, the young editor who had the opportunity to tackle this as her first feature film.

______________________________________________________

[OP] Please give me a little background about how you got into editing and then became connected with this project.

[KS] I moved to New York in 2014 after college to pursue working in theater administration for non-profit, Off Broadway theater companies. But at 25, I had sort of a quarter-life crisis and realized that wasn’t what I wanted to do at all. I knew I had to make a career change. I had done some video editing in high school with [Apple] iMovie and in college with [Apple] Final Cut Pro 7 and had enjoyed that. So I enrolled at The Edit Center in Brooklyn. They have an immersive, six-week-long program where you learn the art of editing by working with actual footage from real projects. Indie filmmakers working in documentaries and narrative films, who don’t have a lot of money, can submit their film to The Edit Center. Two are chosen per semester. 12 to 16 students are given scenes and get to work with the director. They give us feedback and at the end, we present a finished rough cut. This process gives us a sense of how to edit.

I knew I could definitely teach myself [Adobe] Premiere Pro, and probably figure out Avid [Media Composer], but I wanted to know if I would even enjoy the process of working with a director. I took the course in 2016 thinking I would pursue narrative films, because it felt the most similar to the world I had come from. But I left the course with an interest in documentary editing. I liked the puzzle-solving aspect of it. It’s where my skillset best aligned.

Afterwards, I took a few assistant editing jobs and eventually started as an assistant editor with Film First, which is owned by Jessica Edwards and Gary Hustwit. That’s how I got connected with Gary. I was assisting on a number of his projects, including working with some of the Rams footage and doing a few rough assemblies for him. Then last year he asked me to be the editor of the film. So I started shifting my focus exclusively to Rams at the beginning of this year. Gary has been working on it since 2015 – shooting on and off for three years. It just premiered in late September, but we even shot some pick-ups in Germany as late as the end of August / early September.

[OP] So you were working solidly on the film for about nine months. At what point did you lock the cut?

[KS] (laugh) Even now we’re still tinkering. We get more feedback from the screenings and are learning what things are working and aren’t working. The story was locked four days before the New York premiere, but we’re making small changes to things.

[OP] Documentary editing can encompass a variety of structures – narrator-driven, a single subject, a collection of interviewees, etc. What approach did you take with Rams?

[KS] Most of the film is in Dieter Rams’ own words. Gary’s other films have a huge cast of characters. But Gary wanted to make this film different from that and more streamlined. His original concept was that it was going to be Dieter as the only interview footage and you might meet other characters in the verité. But Gary realized that wasn’t going to work, simply because Dieter is a very humble man and he wasn’t really talking about his impact on design. We knew that we needed to give the film a larger context. We needed to bring in other people to tell how influential he has been.

[OP] Obviously a documentary like this has no narrative script to follow. Understanding the interview subject’s answers is critical for the editor in order to build the story arc. I understand that much of the film is in a foreign language. So what was your workflow to edit the film?

[KS] Right. So, the vast majority of the film is in German and a little bit in Japanese, both with subtitles. Maybe 25% is in English, but we’re creating it primarily with an English-speaking audience in mind. I know pretty much no German, except words from Sound of Music and Cabaret. We had a great team of translators on this project, with German transcripts broken down by paragraph and translated into English. I had a two-column set-up with German on one side and English on the other. Before I joined the project, there was an assistant who input titles directly into Premiere – putting subtitles over the dailies with the legacy titler. That was the only way I would be able to even get a rough assembly or ‘radio edit’ of what we wanted.

When you edit an English-speaking documentary, you often splice together two parts of a longer sentence to form a complete and concise thought. But German grammar is really complicated. I don’t think I really grasped how much I was taking on when I first started tackling the project. So I would build a sentence that was pretty close from the transcripts. Thank God for Google Translate, because I would put in my constructed sentence and hope that it spit out something pretty close to what we were going for. And that’s how we did the first rough cut.

Then we had an incredible woman, Katharina Kruse-Ramey, come in. She is a native German speaker living here in New York. She came in for a full eight or nine hours and picked through the edit with a fine tooth comb. For instance, “You can’t use this verb tense with this noun.” That sort of thing. She was hugely helpful and this film wouldn’t have been able to happen without Katharina. We knew then that a German speaker could watch this film and it would make sense! We also had another native German speaker, Eugen Braeunig, who was our archival researcher. He was great for the last minute pick-ups that were shot, when we couldn’t go through the longer workflow.

[OP] I presume you received notes and comments back from Dieter Rams on the cut. What has his response been?

[KS] The film premiered at the Milano Design Film Festival a few weeks ago and Dieter came to that. It was his first time seeing the finished product. From what I’ve heard, he really liked it! As much as one can like seeing themselves on a large screen, I suppose. We had sent him a rough cut a few months ago and in true analytical fashion, the notes that we got back from him were just very specific technical details about dates and products and not about overall storytelling. He really was quite willing to give Gary complete control over the filmmaking process. There was a lot of trust between the two of them.

[OP] Did you cut the film to temp music from the beginning or add music later? I understand that the prolific electronic musician and composer, Brian Eno (The Lego Batman Movie, T2 Trainspotting, The Simpsons), created the soundtrack. What was that like?

[KS] The structure of this film has more breathing room than a lot of docs might have. We really thought about the fact that we needed to give viewers a break from reading subtitles. We didn’t want to go more than ten minutes of reading at a time. So we purposely built in moments for the audience to digest and reflect on all that information. And that’s where Brian’s music was hugely important for us.

We actually didn’t start really editing the film until we had gotten the music back from Brian. I’ve been told that he doesn’t ever score to picture. We sent him some raw footage and he came back with about 16 songs that were inspired by the footage. When you have that gorgeous Brian Eno music, you know that you’re going to have moments where you can just sit back and enjoy the sheer beauty of the moment. Once we had the music in, everything just clicked into place.

[OP] The editor is integral to creating the story structure of a documentary, more so than narrative films – almost as if they are another writer. Tell me a bit about the structure for Rams.

[KS] This film is really not structured the way you would probably structure a normal doc. As I said earlier, we very purposefully put reading breaks in, either through English scenes or with Eno’s music. We had no interest in telling this story linearly. We jump back and forth. One plot line is the chronology of Dieter’s career. Then there’s this other, perhaps more important story, which is Dieter today.  His thoughts on the current state of design and the world. He’s still very active in giving talks and lectures. There’s a company called Vitsoe that makes a lot of his products and he travels to London to give input on their designs. That was the second half of the story and those are interspersed.

[OP] I presume you went outside for finishing services – sound, color correction, and so on. But did the subtitles take on any extra complexity, since they were such an important visual element?

[KS] There are three components to the post. We did an audio mix at one post house; there was a color correction pass at another; and we also had an animation studio – Trollbäck – working with us. There is a section in the film that we knew had to be visually very different and had to convey information in a different way than we had done in any other part of the film. So we gave Trollbäck that five-minute-long sequence. And they also did our opening titles.

We had thought about a stylistic treatment to the subtitles. There were two fonts that Trollbäck had used in their animation. Our initial intent was to use that in our subtitles. We did use one of those treatments in our titles and product credits. For the subtitles, we spent days trying out different looks. Are we going to shadow it or are we using outlines? What point font? What’s the kerning on it? There was going to be so much reading that we knew we had to do the titles thoughtfully. At the end of the day, we knew Helvetica was going to be the easiest (laugh)! We had tried the outline, but some of the internal space in the letters, like an ‘o’ or an ‘e’, looked closed off. We ended up going with a drop shadow. Dieter’s home is almost completely white, so there’s a lot of white space in the film. We used shadows, which looked a little softer, but still quite readable. Those were all built in Premiere’s legacy title tool.

[OP] You are in New York, which is a big Avid Media Composer town. So what was the thought process in deciding to cut this film in Adobe Premiere Pro?

[KS] When I came on-board, the project was already in Premiere. At that point I had been using Avid quite a lot since leaving The Edit Center, which teaches their editing course in Avid. I had taught myself Premiere and I might have tried to transfer the project to Avid, but there was already so much done in terms of the dailies with the subtitles. The thought of going back and spending maybe 50 hours worth of manual subtitling that didn’t migrate over correctly just seemed like a total nightmare. And I was happy to use Premiere. Had I started the project from scratch, I might have used Avid, because it’s the tool that I felt fastest on. Premiere was perfectly fine for the film that we were doing. Plus, if there were days when Gary wanted to tinker around in the project and look at things, he’s much more familiar with Premiere than he is with Avid. He also knows the other Adobe tools, so it made more sense to continue with the same family of creative products that he already knew and used.

Maybe it’s this way with the tool you learn first, but I really like Avid and I feel that I’m faster with it than with Premiere. It’s just the way my brain likes to edit things. But I would be totally happy to edit in Premiere again, if that’s what worked best for a project and what the director wanted. It was great that we didn’t have to transcode our archival footage, because of how Premiere can handle media. Definitely that was helpful, because we had some mixed frame rates and resolutions.

[OP] A closing question. This was your first feature film, with such an influential subject. What impact did it have on you?

[KS] Dieter has Ten Principles for Good Design. He developed them to talk about product design and as a way to judge how a product ideally should be made. I had these principles taped to the wall by my desk. His products are very streamlined, elegant, and clean. The framework should be neutral enough that the products can convey their intention without bells-and-whistles. He wasn’t interested in adding a feature that was unnecessary. I really wanted to evoke those principles with the editing. Had the film been cluttered with extraneous information, or had it been self-aggrandizing, I think when we revealed the principles to the audience, they would have thought, “Wait a minute, this film isn’t doing that!” We felt that the structure of the film had to serve his principles well, wherever appropriate.

His final principle is ‘Good Design is as Little Design as Possible.’ We joked that ‘Good Filmmaking is as Little Filmmaking as Possible.’ We wanted the audience to be able to draw their own conclusions about Dieter’s work and how that translates into their daily lives. A viewer could walk away knowing what we were trying to accomplish without someone having to tell them what we were trying to accomplish.

There were times when I really didn’t know if I could do it. Being 26 and editing a feature film was daunting. Looking at those principles kept me focused on what the meat of the film’s structure should be. That made me realize how lucky we are to have had a designer who really took the time to think about principles that can be applied to a million different subjects. At one of these screenings, someone came up to us who had become a software UI designer, in part, because of Dieter. He told us, “I read Dieter’s principles in a book and I realized these can be applied to how people interact with software.” They can be applied to a million different things and we certainly applied them to the edit.

______________________________________________________

Gary Hustwit will tour Rams internationally and in various US cities through December. After that time it will be available in digital form through Film First.

Click here to learn more about Dieter Rams’ Ten Principles for Good Design.

©2018 Oliver Peters

Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs have become capable of delivering quality video, interview-based productions now almost always use multiple cameras. Directors will typically record these sections with two or more cameras at various tangents to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). If they also shoot in 4K for an HD delivery, you gain the ability to cleanly punch in for even more framing options – a 3840×2160 frame can be punched in up to 2x within a 1080p sequence before any upscaling is required.

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam editing, and it starts at the time of the original recording. You can sync by common timecode, by common audio, or by a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough that the human editor and/or the software’s audio analysis can discern a match. Sometimes this method will suffer from a small delay – either because of the inherent latency of the audio recording circuitry within the camera, or because an onboard camera mic was used and its distance from the subject adds a slight delay compared to a lav mic on the subject.
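
For the curious, the audio analysis behind this kind of sync is conceptually simple: find the offset at which two recordings of the same event line up best. Here is a minimal sketch of that idea in Python – not Premiere Pro’s actual code – assuming numpy/scipy are installed, the two WAV scratch tracks share a sample rate, and the filenames are placeholders.

```python
# Minimal sketch of waveform-based sync analysis (NOT Premiere Pro's code).
# Assumes two WAV scratch tracks at the same sample rate; filenames are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def estimate_offset_seconds(wav_a, wav_b):
    """Estimate the offset in seconds between two recordings of the same event."""
    rate_a, a = wavfile.read(wav_a)
    rate_b, b = wavfile.read(wav_b)
    assert rate_a == rate_b, "resample one file first if the rates differ"

    # Collapse stereo to mono, convert to float, and remove any DC offset.
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    if a.ndim > 1:
        a = a.mean(axis=1)
    if b.ndim > 1:
        b = b.mean(axis=1)
    a -= a.mean()
    b -= b.mean()

    # The peak of the cross-correlation tells us how far to slide one
    # recording against the other for the best match.
    corr = correlate(a, b, mode="full")
    lag_samples = np.argmax(corr) - (len(b) - 1)
    return lag_samples / rate_a  # the sign indicates which recording started first

# Example (hypothetical filenames): slip the B-camera clip by this amount.
# print(estimate_offset_seconds("a_cam_scratch.wav", "b_cam_scratch.wav"))
```

Real sync tools are more robust than this – they filter the audio and cope with drift and noise – but the underlying idea is the same: find the correlation peak and slip the clip by that amount.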

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that, when these two sources are mixed in post (rather than using only one at a time), audio phasing can occur.

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First, sort all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound, you can choose whether or not to include the camera sound. Finally, generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras by clicking on the corresponding section of the multicam viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera angle to be shown. Remember that any effects or color corrections you apply in the timeline belong to that clip on the timeline, not to the angle, so they do not follow it. If you change your mind and switch to a different angle, the effects and corrections do not change with it, and they will need to be adjusted for the new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Splits and Stems in Premiere Pro Revisited

Creating multichannel, “split-track” master exports of your final sequences is something that should be a standard step in all of your productions. It’s often a deliverable requirement and having such a file makes later revisions or derivative projects much easier to produce. If you are a Final Cut Pro X user, the “audio lanes” feature makes it easy to organize and export sequences with isolated channels for dialogue, music, and effects. FCPX pros like to tweak the noses of other NLE users about how much easier it is in FCPX. While that’s more or less true – and, in fact, can be a lot deeper than simply a few aggregate channels – that doesn’t mean it’s particularly hard or less versatile in Premiere Pro.

Last year I wrote about how to set this up using Premiere submix tracks, which is a standard audio post workflow, common to most DAW and mixing applications. Go back and read the article for more detail. But what about sequences that have already been edited, without submix tracks and proper output routing configured from the start? In fact, that’s quite easy, too, which brings me to today’s post.

Step 1 – Edit

Start out by editing as you always have, using your standard sequence presets. I’ve created a few custom presets that I normally use, based on the several standard formats I work in, like 1080p/23.976 and 1080p/29.97. These typically require stereo mixes, so my presets start with a minimum configuration of one picture track, two standard audio tracks, and stereo output. This is the starting point, but more video and audio tracks get added, as needed, during the course of editing.

Get into a habit of organizing your audio tracks. Typically this means dialogue and VO tracks towards the top (A1-A4), then sound effects (A5-A8), and finally music (A9-A12). Keep like audio types on their intended tracks. What you don’t want to do is mix different audio types onto the same track. For instance, don’t put sound effects onto tracks that you’ve designated for dialogue clips. Of course, the number of actual tracks needed for these audio types will vary with your projects. A simple VO+music sequence will only have two to four tracks, while dramatic entertainment pieces will have a lot more. Delete all empty audio tracks when you are ready to mix.

Mix for stereo output as you normally would. This means balancing components using keyframes and clip mixing. Then perform overall adjustments and “riding faders” in the track mixer. This is also where I add global effects, like compression for dialogue and limiting for the master mix.

Output your final mixed master file for delivery.

Step 2 – Multichannel DME sequences

The next step is to create or open a new multichannel DME (dialogue/music/effects) sequence. I’ve already created a custom preset, which you may download and install. It’s set up as 1080p/23.976, with two standard audio channels and three pre-labelled stereo submix channels, but you can customize yours as needed. The master output is multichannel (8 channels), which is sufficient to cover a stereo pair for the final mix, plus isolated pairs for each of the three submixes – dialogue, music, and effects.

Next, copy-and-paste all clips from your final stereo sequence to the new multichannel sequence. If you have more than one track of picture and two tracks of audio, the new blank sequence will simply auto-populate more tracks once you paste the clips into it. The result should look the same, except with the additional three submix tracks at the bottom of your timeline. At this stage, the output of all tracks is still routed to the stereo master output and the submix tracks are bypassed.

Now open the track mixer panel and, from the pulldown output selector, switch each channel from master to its appropriate submix channel. Dialogue tracks to DIA, music tracks to MUS, and effects tracks to SFX. The sequence preset is already set up with proper output routing. All submixes go to output 1 and 2 (composite stereo mix), along with their isolated output – dialogue to 3 and 4, effects to 5 and 6, music to 7 and 8. As with your stereo mix, level adjustments and plug-in processing (compression, EQ, limiting, etc.) can be added to each of the submix channels.
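
For reference, that routing can be summarized as a simple map. The snippet below is not a Premiere Pro preset format – just a compact way to document which submix feeds which output pair, which you can keep with a project or adapt when building a similar preset elsewhere.

```python
# Summary of the submix-to-output routing described above. Not a Premiere Pro
# file format -- just a reference map for documenting or rebuilding the preset.
OUTPUT_ROUTING = {
    "Composite stereo mix": (1, 2),  # every submix also feeds the master pair
    "DIA (dialogue stem)":  (3, 4),
    "SFX (effects stem)":   (5, 6),
    "MUS (music stem)":     (7, 8),
}

for stem, (left, right) in OUTPUT_ROUTING.items():
    print(f"{stem:<22} -> outputs {left}-{right}")
```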

Note: while not essential, multichannel, split-track master files are most useful when they are also textless. So, before outputting, I would recommend disabling all titles and lower third graphics in this sequence. The result is clean video – great for quick fixes later in the event of spelling errors or a title change.

Step 3 – Multichannel export

Now that the sequence is properly organized, you’ve got to export the multichannel sequence. I have created a mastering export preset, which you may also download. It works in the various Adobe CC apps, but is designed for Adobe Media Encoder workflows. This preset will match its output to the video size and frame rate of your sequence and master to a file with the ProRes4444 codec. The audio is set for eight output channels, configured as four stereo pairs – composite mix, plus three DME channels.

To test your exported file, simply reimport the multichannel file back into Premiere Pro and drop it onto a timeline. There you should see four independent stereo channels with audio organized according to the description above.
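
If you’d also like to verify the file outside of Premiere, a quick script can list the streams. This is a hedged sketch: it assumes ffprobe (part of FFmpeg) is installed and on the PATH, and the filename shown is a placeholder for whatever you exported.

```python
# Sanity-check a multichannel master outside of Premiere Pro.
# Assumes ffprobe (part of FFmpeg) is installed and on the PATH.
import json
import subprocess

def describe_streams(path):
    """Print each stream's type, codec, and (for audio) channel count."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json", "-show_streams", path],
        capture_output=True, text=True, check=True)
    for stream in json.loads(result.stdout)["streams"]:
        kind = stream["codec_type"]
        codec = stream.get("codec_name", "?")
        channels = stream.get("channels")
        extra = f", {channels} channels" if channels else ""
        print(f"{kind}: {codec}{extra}")

# Hypothetical filename for the exported master:
# describe_streams("program_master_DME.mov")
```

Depending on how the export is configured, the audio may appear as a single eight-channel stream or as four stereo streams; either way, the total should account for the composite mix plus the three DME pairs.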

Presets

I have created a sequence and an export preset, which you may download here. I have only tested these on Mac systems, where they are installed into the Adobe folder contained within the user’s Documents folder. The sequence preset is placed into the Premiere Pro folder and the export preset into the Adobe Media Encoder folder. If you’ve updated the Adobe apps along the way, you will have a number of version subfolders. As of December 2017, the 12.0 subfolder is the correct location. Happy mixing!

©2017 Oliver Peters