NAB Show 2019

This year the NAB Show seemed to emphasize its roots – the “B” in National Association of Broadcasters. Gone or barely visible were the fads of past years, such as stereoscopic 3D, 360-degree video, virtual/augmented reality, drones, etc. Not that these are gone – merely that they have been refocused onto the smaller market segment that reflects reality. There’s not much point in promoting stereo 3D at NAB if most of the industry goes ‘meh’.

Big exhibitors of the past, like Quantel, RED, Apple, and Autodesk, are gone from the floor. Quantel products remain as part of Grass Valley (now owned by Belden), which is the consolidation of Grass Valley Group, Quantel, Snell & Wilcox, and Philips. RED decided last year that small, camera-centric shows were better venues. Apple – well, they haven’t been on the main floor for years, but even this year, there was no off-site, Final Cut Pro X stealth presence in a hotel suite somewhere. Autodesk, which shifted to a subscription model a couple of years ago, had a demo suite in the nearby Renaissance Hotel, focusing on its hero product, Flame 2020. Smoke for Mac users – tough luck. It’s been over for years.

This was a nuts-and-bolts year, with many exhibits showing new infrastructure products. These appeal to larger customers, such as broadcasters and network facilities. Specifically, the world is shifting to an IP-based infrastructure for signal routing, control, and transmission. This replaces the copper and fiber wiring of the past, along with the devices (routers, video switchers, etc) at either end of the wire. Companies that might have appeared less relevant, like Grass Valley, are back in a strong sales position. Other companies, like Blackmagic Design, are being encouraged by their larger clients to fulfill those needs. And as ever, consolidation continues – this year VizRT acquired NewTek, which has been an early player in video-over-IP with its proprietary NDI protocol.

Adobe

The NAB season unofficially started with Adobe’s pre-NAB release of the CC2019 update. For editors and designers, the hallmarks of this update include a new, freeform bin window view and adjustable guides in Premiere Pro, and content-aware video fill in After Effects. These are solid additions in response to customer requests, which is something Adobe has focused on. A smaller, but no less important, feature is Adobe’s ongoing effort to improve media performance on the Mac platform.

As in past years, their NAB booth was an opportunity to present these new features in-depth, as well as showcase speakers who use Adobe products for editing, sound, and design. Part of the editing team from the series Atlanta was on hand to discuss the team’s use of Premiere Pro and After Effects in their ‘editing crash pad’.

Avid

For many attendees, NAB actually kicked off on the weekend with Avid Connect, a gathering of Avid users (through the Avid Customer Association), featuring meet-and-greets, workshops, presentations, and ACA leadership committee meetings. While past product announcements at Connect have been subdued from the vantage of Media Composer editors, this year was a major surprise. Avid revealed its Media Composer 2019.5 update (scheduled for release at the end of May). This came as part of a host of other updates. Most of these apply to companies that have invested in the full Avid ecosystem, including Nexis storage and Media Central asset management. While those are superb, they only apply to a small percentage of the market. Let’s not forget Avid’s huge presence in the audio world, thanks to the dominance of Pro Tools – now with Dolby ATMOS support. With the acquisition of Euphonix years back, Avid has become a significant player in the live and studio sound arena. Various examples of its S-series consoles in action were presented.

Since I focus on editing, let me discuss Media Composer a bit more. The 2019.5 refresh is the first major Media Composer overhaul in years. It started in secret last year. 2019.5 is the first iteration of the new UI, with more to be updated in coming releases. In short, the interface has been modernized and streamlined in ways to attract newer, younger users, without alienating established editors. Its panel design is similar to Adobe’s approach – i.e. interface panels can be docked, floated, stacked, or tabbed. Panels that you don’t want to see may be closed or simply slid to the side and hidden. Need to see a hidden panel again? Simply slide it back open from the edge of the screen.

This isn’t just a new skin. Avid has overhauled the internal video pipeline, with 32-bit floating point color and an uncompressed DNx codec. Project formats now support up to 16K. Avid is also compliant with the specs of the Netflix Post Alliance and the ACES logo program.

I found the new version very easy to use and a welcome change; however, it will require some adaptation if you’ve been using Media Composer for a long time. In a nod to the Media Composer heritage, the weightlifter (aka ‘liftman’) and scissors icons (for lift and extract edits) are back. Even though Media Composer 2019.5 is just in early beta testing, Avid felt good enough about it to use this version in its workshops, presentations, and stage demos.

One of the reasons to go to NAB is for the in-person presentations by top editors about their real-world experiences. No one can top Avid at this game, as they can easily tap a host of Oscar, Emmy, BAFTA, and Eddie award winners. The hallmark for many this year was the presentation at Avid Connect and/or at the show by the Oscar-winning picture and sound editing/mixing team for Bohemian Rhapsody. It’s hard not to gather a standing-room-only crowd when you close your talk with the Live Aid finale sequence played in kick-ass surround!

Blackmagic Design

Attendees and worldwide observers have come to expect a surprise NAB product announcement out of Grant Petty each year and he certainly didn’t disappoint this time. Before I get into that, there were quite a few products released, including for IP infrastructures, 8K production and post, and more. Blackmagic is a full spectrum video and audio manufacturer that long ago moved into the ‘big leagues’. This means that just like Avid or Grass Valley, they have to respond to pressure from large users to develop products designed around their specific workflow needs. In the BMD booth, many of those development fruits were on display, like the new Hyperdeck Extreme 8K HDR recorder and the ATEM Constellation 8K switcher.

The big reveal for editors was DaVinci Resolve 16. Blackmagic has steadily been moving into the editorial space with this all-in-one, edit/color/mix/effects/finishing application. If you have no business requirement for – or emotional attachment to – one of the other NLE brands, then Resolve (free) or Resolve Studio (paid) is an absolute no-brainer. Nothing can touch the combined power of Resolve’s feature set.

New for Resolve 16 is an additional editorial module called the Cut Page. At first blush, the design, layout, and operation are amazingly similar to Apple’s Final Cut Pro X. Blackmagic’s intent is to make a fast editor where you can start and end your project for a time-sensitive turnaround without the complexities of the Edit Page. However, it’s just another tool, so you could work entirely in the Cut Page, or start in the Cut Page and refine your timeline in the Edit Page, or skip the Cut Page altogether. Resolve offers a buffet of post tools that are at your disposal.

While Resolve 16’s Cut Page does elicit a chuckle from experienced FCPX users, it offers some new twists. For example, there’s a two-level timeline view – the top section is the full-length timeline and the bottom section is the zoomed-in detail view. The intent is quick navigation without the need to constantly zoom in and out of long timelines. There’s also an automatic sync detection function. Let’s say you are cutting a two-camera show. Drop the A-camera clips onto the timeline and then go through your B-camera footage. Find a cut-away shot, mark in/out on the source, and edit. It will ‘automagically’ edit to the in-sync location on the timeline. I presume this is matched by either common sound or timecode. I’ll have to see how this works in practice, but it demos nicely. Changes to other aspects of Resolve were minor and evolutionary, except for one other notable feature. The Color Page added its own version of content-aware video fill.

Another editorial product addition – tied to the theme of faster, more-efficient editing – was a new edit keyboard. Anyone who’s ever cut in the linear days – especially those who ran Sony BVE9000/9100 controllers – will feel very nostalgic. It’s a robust keyboard with a high-quality, integrated jog/shuttle knob. The feel is very much like controlling a tape deck in a linear system, with fast shuttle response and precise jogging. The precision is far better than any of the USB controllers, like a Contour Shuttle. Whether enough people will be interested in shelling out $1,025 for it remains to be seen. It’s a great tool, but are you really faster with one than with FCPX’s skimming and a standard keyboard and mouse?

Ironically, if you look around the Blackmagic Design booth there does seem to be a nostalgic homage to Sony hardware of the past. As I said, the edit keyboard is very close to a BVE9100 keyboard. Even the style of the control panel on the Hyperdecks – and the look of the name badges on those panels – is very much Sony’s style. As humans, this appeals to our desire for something other than the glass interfaces we’ve been dealing with for the past few years. Michael Cioni (Panavision, Light Iron) coined this as ‘tactile attraction’ in his excellent Faster Together Stage talk. It manifests itself not only in these types of control surfaces, but also in skeuomorphic designs applied to audio filter interfaces. Or in the emotion created in the viewer when a colorist adds film grain to digital footage.

Maybe Grant is right and these methods are really faster in a pressure-filled production environment. Or maybe this is simply an effort to appeal to emotion and nostalgia by Blackmagic’s designers. (Check out Grant Petty’s two-hour 2019 Product Overview for more in-depth information on Blackmagic Design’s new products.)

8K

I won’t spill a lot of words on 8K. It seems kind of silly when most delivery is HD and even SD in some places. A lot of today’s production is in 4K, but really only for future-proofing. But the industry has to sell newer and flashier items, so it has moved on to 8K pixel resolution (7680 x 4320). Much of this is driven by Japanese broadcasters and manufacturers, who are pushing into 8K. You can laugh or roll your eyes, but NAB had many examples of 8K production tools (cameras and recorders) and display systems. Of course, it’s NAB, making it hard to tell how many of these are only prototypes and not yet ready for actual production and delivery.

For now, it’s still a 4K game, with plenty of mainstream product. Not only cameras and NLEs, but items like AJA’s KiPro family. The KiPro Ultra Plus records up to four channels of HD or one channel of 4K in ProRes or DNx. The newest member of the family is the KiPro GO, which records up to four channels of HD (25Mbps H.264) onto removable USB media.

Of course, the industry never stops, so while we are working with HD and 4K, and looking at 8K, the developers are planning ahead for 16K. As I mentioned, Avid already has project presets built-in for 16K projects. Yikes!

HDR

HDR – or high dynamic range – is about where it was last year. There are basically four formats vying to become the final standard used in all production, post, and display systems. While there are several frontrunners and edicts from distributors to deliver HDR-compatible masters, there still is no clear path. If you shoot in log or camera raw with nearly any professional camera produced within the past decade, you have originated footage that is HDR-compatible. But none of the low-cost post solutions make this easy. Without the right monitoring environment, you are wasting your time. If anything, those waters are muddier this year. There were a number of HDR displays throughout the show, but there were also a few labeled as using HDR simulation. I saw a couple of those at TV Logic. Yes, they looked gorgeous and yes, they were receiving an HDR signal. I found out that the ‘simulation’ part of the description meant that the display was bright (up to 350 nits), but not bright enough to qualify as ‘true’ HDR (1,000 nits or higher).

As in past transitions, we are certainly going to have to rely on some ‘glue’ products. For me, that’s AJA again. Through their relationship with Colorfront, AJA offers two HDR products: the HDR Image Analyzer and the FS-HDR converter. The latter was introduced last year as a real-time frame synchronizer and color converter to go between SDR and HDR display standards. The new Analyzer is designed to evaluate color space and gamut compliance. Just remember, no computer display can properly show you HDR, so if you need to post and deliver HDR, proper monitoring and analysis tools are essential.

Cameras

I’m not a cinematographer, but I do keep up with cameras. Nearly all of this year’s camera developments were evolutionary: new LF (large format sensor) cameras (ARRI), 4K camcorders (Sharp, JVC), a full-frame mirrorless camera from Nikon (with ProRes RAW recording coming in a future firmware update). Most of the developments were targeted towards live broadcast production, like sports and megachurches. Ikegami had an 8K camera to show, but their real focus was on 4K and IP camera control.

RED, a big player in the cinema space, was only there in a smaller demo room, so you couldn’t easily compare their 8K imagery against others on the floor, but let’s not forget Sony and Panasonic. While ARRI has been a favorite, due to the ‘look’ of the Alexa, Sony (Venice) and Panasonic (Varicam and now EVA-1) are also well-respected digital cinema tools that create outstanding images. For example, Sony’s booth featured an amazing, theater-sized, LED 8K micro-pixel display system. Some of the sample material shown was of the Rio Carnival, shot with anamorphic lenses on a 6K full-frame Sony Venice camera. Simply stunning.

Finally, let’s not forget Canon’s line-up of cinema cameras, from the C100 to the C700FF. To complement these, Canon introduced their new line of Sumire Prime lenses at the show. The C300 has been a staple of documentary films, including the Oscar-winning film, Free Solo, which I had the pleasure of watching on the flight to Las Vegas. Sweaty palms the whole way. It must have looked awesome in IMAX!

(For more on RED, cameras, and lenses at NAB, check out this thread from DP Phil Holland.)

It’s a wrap

In short, NAB 2019 had plenty for everyone. This also included smaller markets, like products for education seminars. One of these that I ran across was Cinamaker. They were demonstrating a complete multi-camera set-up using four iPhones and an iPad. The iPhones are the cameras (additional iPhones can be used as isolated sound recorders) and the iPad is the ‘switcher/control room’. The set-up can be wired or wireless, but camera control, video switching, and recording are all handled on the iPad. This can generate the final product, or be transferred to a Mac (with the line cut and camera iso media, plus edit list) for re-editing/refinement in Final Cut Pro X. Not too shabby, given the market that Cinamaker is striving to address.

For those of us who like to use the NAB Show exhibit floor as a miniature yardstick for the industry, one of the trends to watch is what type of gear is used in the booths and press areas. Specifically, one NLE over another, or one hardware platform versus the other. On that front, I saw plenty of Premiere Pro, along with some Final Cut Pro X. Hardware-wise, it looked like Apple versus HP. Granted, PC vendors, like HP, often supply gear to use in the booths as a form of sponsorship, so take this with a grain of salt. Nevertheless, I would guess that I saw more iMac Pros than any other single computer. For PCs, it was a mix of HP Z4, Z6, and Z8 workstations. HP and AMD were partner-sponsors of Avid Connect and they demoed very compelling set-ups with these Z-series units configured with AMD Radeon cards. These are very powerful workstations for editing, grading, mixing, and graphics.

©2019 Oliver Peters

Blackmagic Design eGPU Pro

Last year Apple embraced external graphics processing units. Blackmagic Design responded with the release of its AMD-powered eGPU model. Many questioned their choice of the Radeon Pro 580 chip instead of something more powerful. That challenge has been answered with the new Blackmagic eGPU Pro. It sports the Radeon RX Vega 56 – a similar model to the one inside the base iMac Pro configuration. The two eGPU models are nearly identical in design, but in addition to more processing power, the eGPU Pro adds a DisplayPort connection that can support 5K monitors.

The eGPU Pro includes two Thunderbolt 3/USB-C ports with 85W charging capability, HDMI, DisplayPort, and four USB-A type connectors for standard USB 3.1 devices. This means you can connect multiple peripherals and displays, plus power your laptop. You’ll need a Thunderbolt 3 connection from the computer and then either eGPU model becomes plug-and-play with Mojave (macOS 10.14) or later.

Setting up the eGPU Pro

With Mojave, most current creative apps, like Final Cut Pro X, Premiere Pro, Resolve, etc. can be set to always use the eGPU (when connected) via a checkbox in the application’s Get Info panel. This is an “either/or” choice. The application does not combine the power of both GPUs for maximum performance. When you pull up the Activity Monitor, you can easily see that the internal GPU is loafing while the eGPU Pro does the heavy lifting during tasks such as rendering. External GPUs benefit Macs with low-end, built-in GPUs, like the 13″ MacBook Pro or the Mac mini. A Blackmagic eGPU or eGPU Pro wouldn’t provide an edge to the render times of an iMac Pro, for example. It wouldn’t be worth the investment, unless you need one to connect additional high-resolution displays.

Users who are unfamiliar with external GPUs assume that the advantage is in faster export and render times, but that’s only part of the story. Not every function of an application uses the GPU, so many factors determine rendering. External GPU technology is very much about real-time image output. An eGPU will allow more connected displays of higher resolutions than an underpowered Mac would normally support on its own. The eGPU will also improve real-time playback of effects-heavy timelines. So yes, editors will get faster exports, but they will also enjoy a more fluid editing experience.

Extending the power of the Mac mini

In my Mac mini review, I concluded that a fully-loaded configuration made for a very capable editing computer. However, if you tend to use a number of effects that lean on GPU power, you will see an impact on real-time playback. For example, with the standard Intel GPU, I could add color correction, gaussian blur, and a title, and playback was generally fine with a fast drive. But, when I added a mask to the blur, it quickly dropped frames during playback. Once I connected the eGPU Pro to this same Mac mini, such timelines played fluidly and, in fact, more effects could be layered onto clips. As in my other tests, Final Cut Pro X performed the best, but Premiere Pro and Resolve also performed solidly.

For basic rendering, I tested the same sequence that I used in the Mac mini review. This is a 9:15-long 1080p timeline made up of 4K source clips in a variety of codecs, plus scaling and color correction. I exported ProRes and H.264 master files from FCPX, Premiere Pro, and Resolve. With the eGPU Pro, times were cut in the range of 12% (FCPX) to 54% (Premiere). An inherently fast renderer, like Final Cut, gained the least by percentage, as it already exhibited the fastest times overall. Premiere Pro saw the greatest gain from the addition of the eGPU Pro. This is a major improvement over last year when Premiere didn’t seem to take much advantage of the eGPU. Presumably both Apple and Adobe have optimized performance when an eGPU is present.

Most taxing tests

A timeline export test is real-world but may or may not tax a GPU. So, I set up a specific render test for that purpose. I created a :60 6K timeline (5760×3240) composed of a nine-screen composite of 4K clips scaled into nine 1920×1080 sections. Premiere Pro would barely play this at even 1/16th resolution using only the Intel GPU. With the eGPU Pro, it generally played at 1/2 resolution. This was exported to a final 1080 ProRes file. During my base test (without the eGPU connected) Premiere Pro took over 31 minutes with “maximum quality” selected. A standard quality export was about eight minutes, while Final Cut Pro X took five minutes. Once I re-connected the eGPU Pro, the same timelines exported in 3:20 under all three test scenarios. That’s a whopping 90% reduction in time for the most taxing condition! One last GPU-centric test was the BruceX test, which was devised for Final Cut. The result without the eGPU was :58, but an impressive :16 when the eGPU Pro was used.

As you can see, effects-heavy work will benefit from the eGPU Pro, not only in faster renders and exports, but also improved real-time editing. This is also true of Resolve timelines with many nodes and in other graphics applications, like Pixelmator Pro. The 2018 Mac mini is a capable mid-range system when you purchase it with the advanced options. Nevertheless, users who need that extra grunt will definitely see a boost from the addition of a Blackmagic eGPU Pro.

Originally written for RedShark News.

©2019 Oliver Peters

The Nuances of Overcranking

The concept of overcranking and undercranking in the world of film and video production goes back to the origins of motion picture technology. The earliest film cameras required the camera operator to manually crank the film mechanism – they didn’t have internal motors. A good camera operator was partially judged by how constant a frame rate they could maintain while cranking the film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then the playback speed during projection was in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that the projection of old B&W films gained that fast, jerky motion we often incorrectly attribute to “old time movies” today. That characteristic motion is because they are no longer played at their intended speeds.

While manual film cranking seems anachronistic in modern times, it had the benefit of in-camera, variable-speed capture – aka speed ramps. Some modern film cameras include speed-controlled mechanisms that still make this possible today – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slomo and fast speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general, it was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording. Post shifted to digital, computer-based, rather than electro-mechanical methods. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms enable viewers to view videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common, possible system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smart phone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but the post is designed to be at 23.98, then the editor has four options. 1) Play the files as real-time (29.97 in a 23.98 sequence), which will cause frames to be dropped, resulting in some stuttering on motion. 2) Play the footage at the slowed speed, so that there is a one-to-one relationship of frames, which doesn’t work for sync sound. 3) Go through a frame rate conversion before editing starts, which will result in blended and/or dropped frames. 4) Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set the system or target frame rate, in addition to the actual recording rate. These may be called different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which this file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking. That is, you are recording slow motion in-camera.

(Note: from here on I will use simplified whole numbers instead of the exact fractional rates in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in playback that is fast motion at 200%. That’s the basis for hyperlapse/timelapse footage.
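
If it helps to make that arithmetic concrete, here is a minimal Python sketch – the helper name is mine, not part of any camera or NLE API – showing how the record rate and the target rate determine playback speed and running time when every captured frame plays one-to-one.

```python
# Minimal sketch: how record rate vs. target (system) rate sets playback
# speed and running time when captured frames play one-to-one.
def playback(record_fps, target_fps, record_seconds):
    speed_percent = target_fps / record_fps * 100.0   # 50.0 = slow motion, 200.0 = fast motion
    frames_captured = record_fps * record_seconds
    playback_seconds = frames_captured / target_fps
    return speed_percent, playback_seconds

print(playback(48, 24, 10))    # (50.0, 20.0)  -> 10s of action plays back as 20s of slow motion
print(playback(120, 24, 10))   # (20.0, 50.0)  -> a 120fps burst runs for 50 seconds at 24fps
print(playback(12, 24, 10))    # (200.0, 5.0)  -> undercranked, timelapse-style fast motion
```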

The good news is that professional production cameras embed the pertinent metadata into the file so that editing and player software automatically knows what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically playback in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the use is for non-sound-sync B-roll footage, then altering the frame rate (making it play slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice. The slower footage helps to smooth out the vibration introduced by using such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recordings, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed. It wobbles around that rate – sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.

Interpret footage

This may have a different name in any given application, but interpret footage is a function to make the application think that the file should be played at a different rate than it was recorded at. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can re-write the QuickTime header information without transcoding the file. Then that file shows up at the desired rate inside of the NLE. In the case of FCPX, the same potential audio issues can arise as described above if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow. The proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking in Resolve will be incorrect. It may result in not being able to relink these files at all, because the timecode that Resolve looks for falls outside of the boundaries of the file. So use this function with caution.

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately this introduces issues like skipping frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of the frame rate, much like traditional film post. A 200% speed increase is a proper multiple. 150% is not. The former means you are playing every other frame from a clip for smooth action. The latter results in one out of every three frames being skipped in playback, leaving you with some unevenness in the movement.
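
To see why clean multiples matter, here is a rough Python sketch of simple nearest-frame (frame duplication) retiming – a simplification for illustration, not how any particular NLE implements its speed changes.

```python
# Which source frame lands on each timeline frame at a given speed,
# using simple nearest-frame retiming (no blending or optical flow).
def source_frames(speed_percent, timeline_frames):
    step = speed_percent / 100.0
    return [int(n * step) for n in range(timeline_frames)]

print(source_frames(200, 8))   # [0, 2, 4, 6, 8, 10, 12, 14] -> every other frame, even motion
print(source_frames(150, 8))   # [0, 1, 3, 4, 6, 7, 9, 10]   -> every third frame skipped, uneven motion
```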

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle this better than others. Optical flow “creates” new in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and may often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changes in motion interpolation in Resolve are pointless, unless the final export of the timeline is from Resolve. If clips go back to your NLE for finishing, then it will be that software which determines the quality of motion effects. Twixtor is a plug-in that many editors use when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and the ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate for frames to play one-to-one (no blended, duplicate, or skipped frames). Final Cut Pro X handles this in the best manner, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually. 

The easiest way to get a value that yields clean frames (a one-to-one frame relationship) is to simply divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. The simplified whole numbers yield the same results as the exact fractional rates. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion speed. A 59.94fps clip is 40%. A 25fps clip is 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want this to be one-to-one, it will have to be sped up. But the calculation is the same: 30 divided by 24 results in a 125% speed adjustment. And so on.
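
For editors who prefer to see it as a formula, this small Python sketch (function name assumed for illustration) is the whole rule in one line: timeline rate divided by clip rate gives the one-to-one speed percentage.

```python
# Timeline frame rate divided by clip frame rate = one-to-one speed percentage.
def one_to_one_speed(timeline_fps, clip_fps):
    return timeline_fps / clip_fps * 100.0

print(one_to_one_speed(24, 30))   # 80.0  -> 29.97 clip in a 23.98 timeline
print(one_to_one_speed(24, 60))   # 40.0  -> 59.94 clip in a 23.98 timeline
print(one_to_one_speed(24, 25))   # 96.0  -> 25fps clip in a 23.98 timeline
print(one_to_one_speed(30, 24))   # 125.0 -> 23.98 clip sped up in a 29.97 timeline
```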

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

© 2019 Oliver Peters

Edit Collaboration and Best Practices

There are many workflows that involve collaboration, with multiple editors and designers working on the same large project or group of projects. Let me say up front that if you want the best possible collaborative experience with multiple editors, then work with Avid Media Composer. Full stop. I have worked both sides of the equation and without a doubt, Media Composer connected to Avid Unity/Isis/Nexis shared storage is simply not matched by Final Cut Pro, Final Cut Pro X, Premiere Pro, or any other editing software/storage/cloud combination. Everything else is a compromise, which is why feature film and TV series editorial teams continue to select Avid solutions as their first choice.

In spite of that, there are many reasons to use other editing tools. I work most of the time in Adobe Premiere Pro CC and freelance at a shop with nine edit workstations connected to shared storage. We work mainly in Adobe Creative Cloud applications and our projects involve a lot of collaboration. Some of these are corporate videos that are frequently edited and revised by different editors. Some are entertainment shows, cut by a small editorial team focused on those shows. For some projects, Premiere Pro is the perfect tool. For others, we have to develop strategies to adapt Premiere to our workflow.

With that in mind, the following are tips and best practices that I’ll share for what has worked best for us over the past three years, while working on large projects with a team of editors. Although it applies to our work with Premiere Pro, the same would generally be true if we were working with Apple Final Cut Pro X instead.

Organization. We organize all projects into a specific folder structure, using a Post Haste template. All media files, like camera footage, audio, graphic elements, etc. go into common folders. Editors know where to look to find things. When new camera footage comes in, files are organized as “dailies” into specific folders by date, camera, and camera card. Non-pro formats, like GoPro and DSLR footage, will be batch-renamed to reflect the project, date, and camera card. The objective is to have unique file names for each and every media file.
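
As a rough illustration of that batch-rename step, here is a minimal Python sketch. The folder path, file extension, and naming pattern are assumptions for the example, not our actual template – adapt them to your own convention.

```python
# Hypothetical batch rename: give every non-pro camera file a unique name
# that encodes project, shoot date, and camera card.
from pathlib import Path

def rename_dailies(folder, project, shoot_date, card, ext=".MP4"):
    clips = sorted(Path(folder).glob(f"*{ext}"))               # e.g. GoPro or DSLR originals
    for count, clip in enumerate(clips, start=1):
        new_name = f"{project}_{shoot_date}_{card}_{count:03d}{clip.suffix}"
        clip.rename(clip.with_name(new_name))                   # one unique name per media file

# Example (hypothetical path): rename_dailies("/Volumes/NAS/ProjectX/dailies/190410/A001", "PROJX", "190410", "A001")
```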

Optimized, transcoded, or proxy media. Depending on the performance and amount of media, you may need to do some prep work before even starting the edit process. Premiere and FCPX work well with some media formats and not with others. NAS/SAN storage is particularly taxing, especially once you get to resolutions greater than HD. If you want the most fluid experience in a shared workflow, then you will likely need to transcode proxy files from within the application. The reason to stay inside of FCPX or Premiere Pro is so that frame size offsets are properly tracked. Once proxies have been transcoded, it’s a simple matter of toggling between the proxy media (best playback performance) and full-resolution media (best image quality).

On the other hand, if you’d rather stick to full-resolution, native media, then some formats will have to be transcoded into “optimized” media. For instance, GoPro 4K footage is terrible to edit with natively. It should always be transcoded to ProRes or DNxHD before editing, if you don’t want to go the proxy route. This can be done inside or outside of the application and is an easy task with DaVinci Resolve, EditReady, Adobe Media Encoder, or Apple Compressor.

Finally, if you have image sequences from a drone or other source, forget trying to edit from these off of a network. Transcode them right away into some format of master movie file. I find Resolve to be the best tool for this. It’s fast and since these are often camera raw files, you can apply a base grade to them as a starting point for future color correction.

Break up your projects. Depending on the type and size of the job and the number of editors working on it, you may choose to work in multiple Premiere projects. There may be a master project where all media is imported and initially organized. Then there may be multiple projects that are offshoots from this for component parts. In a corporate environment, it could be several different videos cut from a single, larger set of media. In a feature film, there could be different Premiere projects for each reel of the film.

Since Premiere Pro employs project locking, any project opened by one editor can also be opened in a read-only mode by other editors. Editors can have multiple Premiere projects open at one time. Thus, it’s simple to bring in elements from one project into another, even while they are all open. This workflow mimics Avid’s bin-locking strategy.

It helps to keep project files streamlined as progress on the production extends over time. You want to keep the number of sequences in any given project small. Periodically duplicate your project(s), strip out old sequences from the current project, and archive the older project files.

As a general note, while working to build the creative story edits – i.e. “offline editing” – you will want to keep plug-in filter effects to a minimum. In fact, it’s generally a good idea to keep the plug-in selection on each system small, so that all workstations in this shared environment are able to have the same set of installed plug-ins. The same is true of fonts.

Finishing stages of post. There are generally two paths in the finishing, aka “online editing” stage. Either all final color correction and assembly of effects is completed within Premiere Pro, or there is a roundtrip through a color correction application, like Blackmagic Design DaVinci Resolve. The same holds true for audio, where a separate sound editor/designer/mixer may handle the finishing touches in Avid Pro Tools.

To accomplish an easy roundtrip with Resolve, create a sequence with all color correction and effects removed. Flatten the video to a single track (if possible), and remove the audio or do a simple stereo mixdown for reference. Ideally, media with mixed frame rates should be addressed as slow motion in the edited sequence. Avoid modifying the frame rate through any sort of “interpret” function within the application. Export an XML or AAF and send that and the associated media to Resolve. When color correction is complete, you can render the entire timeline at the sequence resolution as a single master file.

Conversely, if you want to send it back to Premiere Pro for final assembly and to complete the roundtrip, then render individual clips at their source resolution with handles of one to two seconds. Back in Premiere, re-apply titles, insert completed visual effects, and add any missing plug-in effects.

With audio post, there will be no roundtrip of elements, since the mixer will deliver a completed mixed stereo or surround track. This should be imported into Premiere (or Resolve if the final master is created in Resolve) and married back to the final video sequence. The mixer should also supply “stems” – the individual dialogue, music, and sound effects (D/M/E) submix tracks.

Mastering. Final sequences should be exported in a master file format (ProRes, DNxHD/HR, uncompressed) in at least two forms: 1) master with final mix and titles, and 2) textless submaster with split-track audio (multiple channels containing the D/M/E stems). All of these files are stored within the same job-based folder structure outlined at the top. It is quite common that future revisions will be made using the textless submaster rather than re-opening the full project, or that it may be used as source material in another edit.

Another aspect of finishing the project is media consolidation. This means taking the final sequence and generating a new project file from it. That file contains only those elements from the sequence, along with a copy of the media used, where each file has been trimmed to the portion within the sequence (plus handles). This is the Project Manager function in Premiere Pro. Unfortunately, Premiere is not consistently good at this task. Some formats will be properly trimmed, while others will be copied in their entirety. That’s OK for a :10 take, but a bummer when it’s a 30-minute interview.

The good news is that if you went through the Resolve roundtrip workflow and rendered individual clips, then effectively Resolve has already done media consolidation as a byproduct. In addition, if your source media is 4K, but you only finished in HD, the Resolve renders will be 4K. If in the future, you need to deliver the same master in 4K, everything is already set. Of course, that assumes that you didn’t do a lot of “punching in” and reframing in your edit sequence.

Cloud-based services. Often collaboration requires a distributed team, when not everyone is under one roof. While Adobe does offer cloud-based team editing methods, this doesn’t really work when editors are on different Creative Cloud accounts or when the collaboration is between an editor and a graphic designer/animator/VFX artist working in non-Adobe tools. In that case the old standbys have been Dropbox, Box, or Google Drive. Syncing is easy and relatively reliable. However, these are really just designed for sharing assets. But when this involves a couple of editors and each has a local, mirrored set of media, then simple sharing/syncing of only small project files makes for a working collaborative method.

Frame.io is the newbie here, with updated extension tools designed for in-application workspace panels within Final Cut Pro X, After Effects, and Premiere Pro. While they tout the ease of moving full-resolution media into their cloud, including camera files, I really wouldn’t recommend doing that. It’s simply not very practical on most projects. But for sharing cuts using a standard review-and-approval workflow, Frame.io definitely hits most of the buttons.

©2018 Oliver Peters

Building the Alternative Creative Toolkit

In the past, software was bought in a shrink-wrapped package with a single license to run the application on one computer. But trends change, with options today ranging from a one-time purchase, to a purchase plus a subscription for updates, all the way to a total subscription model. While Adobe is the company most associated with the subscription model for creative software, nearly every company from Avid to Microsoft offers some variation of this. Software subscription makes a ton of sense for both the developer and the user, but it clearly is something that doesn’t suit everyone’s needs. If you want a comprehensive set of creative tools – but seek suggestions for alternatives to subscription – then look no further.

Productivity. The written side of production is as important as everything else. That means word processing, spreadsheets, and presentations often come first. The king of the hill has been Microsoft Office, but there are others. Still around and being updated by Corel is Wordperfect Office – one of the originals. Naturally you have Google Docs, but there are also plenty of others, such as OpenOffice, LibreOffice, and NeoOffice. Mac users have Pages, Numbers, and Keynote. These cover 90% of my needs, including very good compatibility with Microsoft Office documents. If your focus is structured creative writing, then you might also wish to check out Scrivener.

Design, graphics, and photography. It’s hard to find an exact replacement for Photoshop and Illustrator, but Serif comes the closest with its Affinity brand for Mac and Windows. Affinity Photo, Designer, and Publisher are solid substitutes with good levels of compatibility. But if your design tastes are more whimsical and artistic, then consider Pixelmator Pro (Mac) or Painter (Windows or Mac). If you need photo processing and manipulation, then Apple Photos, which is included with Macs, has become more potent in recent versions, although still targeted towards consumers. It’s not the industrial strength tool that is Adobe Lightroom Classic. If that’s your need and subscription is a no-go, then Capture One or ON1 seem to have captured many a photographer’s attention. Both include raw processing support and cataloging/organizing features.

Audio production and post. Avid Pro Tools is the 800-pound gorilla when it comes to professional audio in studios and post houses. Offerings range from free to subscription to perpetual. But as an alternative, one of the most popular, full-featured tools for music creation, audio production, mixing, and post is Apple’s Logic Pro X. It’s a tool that just keeps getting better, with more virtual instruments and plug-ins being added with every version update. Naturally it’s only available for the Mac. If you are mainly a video editor and not a recording engineer, then another option would be Blackmagic Design’s DaVinci Resolve, which has integrated the Fairlight audio toolset. This makes it viable as an audio-only application, as well as other post needs. However, other strong contenders are also still around, including Steinberg Nuendo, and Magix Vegas Pro and Sound Forge Pro. The latter two had been Sony Software products before Magix picked them up. While the Vegas products are considered video editing tools, they originally started life in audio and continue to be very viable audio post products. Sound Forge is an advanced single clip (up to 32 channels) audio editor that’s great for voice-over production, podcasts, and audio mastering.

Visual effects and motion graphics. If you love and really need Adobe After Effects, this is probably the one area where you won’t find a suitable equivalent. That’s fine, because Adobe offers attractive single-app licensing. While there are other options, simply none offer the depth of After Effects in a track/layer-based compositor. Apple has Motion, which is a great tool, but doesn’t tick all the boxes. Fusion and the Fusion page inside of DaVinci Resolve tackle compositing by using nodes. So you work in a node-based, flowchart-style layout, rather than layers and tracks. All of these tools are powerful, but the switch from After Effects to Fusion or Motion requires a complete mindset change, which most users aren’t interested in.

Editing. The best for last and in some ways, the category with the most options. Avid Media Composer continues its dominance for narrative broadcast and film editing. Like Pro Tools, there are free, perpetual, and subscription choices. Apple has been battling it out for mindshare with Final Cut Pro X, but of the group, it’s the one that is most different from a traditional NLE’s design and operation. Among the other solutions, you’ll find familiar names, including Grass Valley Edius, Magix Vegas Pro, DaVinci Resolve, and Lightworks. Even the venerable Media 100 (now owned by BorisFX) is still available for the Mac and for free!

Originally written for RedShark News

©2018 Oliver Peters

Rams

If you are a fan of the elegant, minimalist design of Apple products, then you have seen the influence of Dieter Rams. The renowned, German industrial designer, associated with functional and unobtrusive design, is known for the iconic consumer products he developed for Braun, as well as his Ten Principles for Good Design. Dieter Rams is the subject of Rams, a new documentary film by Gary Hustwit (Helvetica, Objectified, Urbanized).

This has been a labor of love for Hustwit and partially funded through a Kickstarter campaign. In a statement to the website Designboom, Hustwit says, “This film is an opportunity to celebrate a designer whose work continues to impact us and preserve an important piece of design history. I’m also interested in exploring the role that manufactured objects play in our lives and, by extension, the relationship we have with the people who design them. We hope to dig deeper into Rams’ untold story – to try and understand a man of contradictions by design. I want the film to get past the legend of Dieter. I want it to get into his philosophy, process, inspirations, and even his regrets.”

Hustwit has worked on the documentary for the past three years and premiered it in New York at the end of September. The film is currently on the road for a series of international premiere screenings until the end of the year. I recently had a conversation with Kayla Sklar, the young editor who had the opportunity to tackle this as her first feature film.

______________________________________________________

[OP] Please give me a little background about how you got into editing and then became connected with this project.

[KS] I moved to New York in 2014 after college to pursue working in theater administration for non-profit, Off Broadway theater companies. But at 25, I had sort of a quarter-life crisis and realized that wasn’t what I wanted to do at all. I knew I had to make a career change. I had done some video editing in high school with [Apple] iMovie and in college with [Apple] Final Cut Pro 7 and had enjoyed that. So I enrolled at The Edit Center in Brooklyn. They have an immersive, six-week-long program where you learn the art of editing by working with actual footage from real projects. Indie filmmakers working in documentaries and narrative films, who don’t have a lot of money, can submit their film to The Edit Center. Two are chosen per semester. 12 to 16 students are given scenes and get to work with the director. They give us feedback and at the end, we present a finished rough cut. This process gives us a sense of how to edit.

I knew I could definitely teach myself [Adobe] Premiere Pro, and probably figure out Avid [Media Composer], but I wanted to know if I would even enjoy the process of working with a director. I took the course in 2016 thinking I would pursue narrative films, because it felt the most similar to the world I had come from. But I left the course with an interest in documentary editing. I liked the puzzle-solving aspect of it. It’s where my skillset best aligned.

Afterwards, I took a few assistant editing jobs and eventually started as an assistant editor with Film First, which is owned by Jessica Edwards and Gary Hustwit. That’s how I got connected with Gary. I was assisting on a number of his projects, including working with some of the Rams footage and doing a few rough assemblies for him. Then last year he asked me to be the editor of the film. So I started shifting my focus exclusively to Rams at the beginning of this year. Gary has been working on it since 2015 – shooting on and off for three years. It just premiered in late September, but we even shot some pick-ups in Germany as late as late August / early September.

[OP] So you were working solidly on the film for about nine months. At what point did you lock the cut?

[KS] (laugh) Even now we’re still tinkering. We get more feedback from the screenings and are learning what things are working and aren’t working. The story was locked four days before the New York premiere, but we’re making small changes to things.

[OP] Documentary editing can encompass a variety of structures – narrator-driven, a single subject, a collection of interviewees, etc. What approach did you take with Rams?

[KS] Most of the film is in Dieter Rams’ own words. Gary’s other films have a huge cast of characters. But Gary wanted to make this film different from that and more streamlined. His original concept was that it was going to be Dieter as the only interview footage and you might meet other characters in the verité. But Gary realized that wasn’t going to work, simply because Dieter is a very humble man and he wasn’t really talking about his impact on design. We knew that we needed to give the film a larger context. We needed to bring in other people to tell how influential he has been.

[OP] Obviously a documentary like this has no narrative script to follow. Understanding the interview subject’s answers is critical for the editor in order to build the story arc. I understand that much of the film is in a foreign language. So what was your workflow to edit the film?

[KS] Right. So, the vast majority of the film is in German and a little bit in Japanese, both with subtitles. Maybe 25% is in English, but we’re creating it primarily with an English-speaking audience in mind. I know pretty much no German, except words from Sound of Music and Cabaret. We had a great team of translators on this project, with German transcripts broken down by paragraph and translated into English. I had a two-column set-up with German on one side and English on the other. Before I joined the project, there was an assistant who input titles directly into Premiere – putting subtitles over the dailies with the legacy titler. That was the only way I would be able to even get a rough assembly or ‘radio edit’ of what we wanted.

When you edit an English-speaking documentary, you often splice together two parts of a longer sentence to form a complete and concise thought. But German grammar is really complicated. I don’t think I really grasped how much I was taking on when I first started tackling the project. So I would build a sentence that was pretty close from the transcripts. Thank God for Google Translate, because I would put in my constructed sentence and hope that it spit out something pretty close to what we were going for. And that’s how we did the first rough cut.

Then we had an incredible woman, Katharina Kruse-Ramey, come in. She is a native German speaker living here in New York. She came in for a full eight or nine hours and picked through the edit with a fine tooth comb. For instance, “You can’t use this verb tense with this noun.” That sort of thing. She was hugely helpful and this film wouldn’t have been able to happen without Katharina. We knew then that a German speaker could watch this film and it would make sense! We also had another native German speaker, Eugen Braeunig, who was our archival researcher. He was great for the last minute pick-ups that were shot, when we couldn’t go through the longer workflow.

[OP] I presume you received notes and comments back from Dieter Rams on the cut. What has his response been?

[KS] The film premiered at the Milano Design Film Festival a few weeks ago and Dieter came to that. It was his first time seeing the finished product. From what I’ve heard, he really liked it! As much as one can like seeing themselves on a large screen, I suppose. We had sent him a rough cut a few months ago and in true analytical fashion, the notes that we got back from him were just very specific technical details about dates and products and not about overall storytelling. He really was quite willing to give Gary complete control over the filmmaking process. There was a lot of trust between the two of them.

[OP] Did you cut the film to temp music from the beginning or add music later? I understand that the prolific electronic musician and composer, Brian Eno (The Lego Batman Movie, T2 Trainspotting, The Simpsons), created the soundtrack. What was that like?

[KS] The structure of this film has more breathing room than a lot of docs might have. We really thought about giving viewers a break from reading subtitles. We didn’t want more than ten minutes of reading at a time. So we purposely built in moments for the audience to digest and reflect on all that information. And that’s where Brian’s music was hugely important for us.

We actually didn’t start really editing the film until we had gotten the music back from Brian. I’ve been told that he doesn’t ever score to picture. We sent him some raw footage and he came back with about 16 songs that were inspired by the footage. When you have that gorgeous Brian Eno music, you know that you’re going to have moments where you can just sit back and enjoy the sheer beauty of the moment. Once we had the music in, everything just clicked into place.

[OP] The editor is integral to creating the story structure of a documentary, more so than narrative films – almost as if they are another writer. Tell me a bit about the structure for Rams.

[KS] This film is really not structured the way you would probably structure a normal doc. As I said earlier, we very purposefully put reading breaks in, either through English scenes or with Eno’s music. We had no interest in telling this story linearly. We jump back and forth. One plot line is the chronology of Dieter’s career. Then there’s this other, perhaps more important story, which is Dieter today: his thoughts on the current state of design and the world. He’s still very active in giving talks and lectures. There’s a company called Vitsoe that makes a lot of his products and he travels to London to give input on their designs. That was the second half of the story and the two are interspersed.

[OP] I presume you went outside for finishing services – sound, color correction, and so on. But did the subtitles take on any extra complexity, since they were such an important visual element?

[KS] There are three components to the post. We did an audio mix at one post house; there was a color correction pass at another; and we also had an animation studio – Trollbäck – working with us. There is a section in the film that we knew had to be visually very different and had to convey information in a different way than we had done in any other part of the film. So we gave Trollbäck that five-minute-long sequence. And they also did our opening titles.

We had thought about a stylistic treatment for the subtitles. There were two fonts that Trollbäck had used in their animation, and our initial intent was to use those for the subtitles as well. We did use one of those treatments in our titles and product credits. For the subtitles, we spent days trying out different looks. Are we going to shadow it or are we using outlines? What point size? What’s the kerning? There was going to be so much reading that we knew we had to do the titles thoughtfully. At the end of the day, we knew Helvetica was going to be the easiest (laugh)! We had tried the outline, but some of the internal space in the letters, like an ‘o’ or an ‘e’, looked closed off. We ended up going with a drop shadow. Dieter’s home is almost completely white, so there’s a lot of white space in the film. We used shadows, which looked a little softer, but still quite readable. Those were all built in Premiere’s legacy title tool.

[OP] You are in New York, which is a big Avid Media Composer town. So what was the thought process in deciding to cut this film in Adobe Premiere Pro?

[KS] When I came on board, the project was already in Premiere. At that point I had been using Avid quite a lot since leaving The Edit Center, which teaches its editing course on Avid. I had taught myself Premiere and I might have tried to transfer the project to Avid, but so much had already been done in terms of the dailies with the subtitles. The thought of redoing maybe 50 hours’ worth of manual subtitling that wouldn’t migrate over correctly just seemed like a total nightmare. And I was happy to use Premiere. Had I started the project from scratch, I might have used Avid, because it’s the tool that I felt fastest on. Premiere was perfectly fine for the film that we were doing. Plus, if there were days when Gary wanted to tinker around in the project and look at things, he’s much more familiar with Premiere than he is with Avid. He also knows the other Adobe tools, so it made more sense to continue with the same family of creative products that he already knew and used.

Maybe it’s this way with the tool you learn first, but I really like Avid and I feel that I’m faster with it than with Premiere. It’s just the way my brain likes to edit things. But I would be totally happy to edit in Premiere again, if that’s what worked best for a project and what the director wanted. It was great that we didn’t have to transcode our archival footage, because of how Premiere can handle media. Definitely that was helpful, because we had some mixed frame rates and resolutions.

[OP] A closing question. This is your first feature film, and with such an influential subject. What impact did it have on you?

[KS] Dieter has Ten Principles for Good Design. He built them to talk about product design and as a way for him to judge how a product ideally should be made. I had these principles taped to my wall by my desk. His products are very streamlined, elegant, and clean. The framework should be neutral enough to convey the intention without bells and whistles. He wasn’t interested in adding a feature that was unnecessary. I really wanted to evoke those principles with the editing. Had the film been cluttered with extraneous information, or been self-aggrandizing, I think when we revealed the principles to the audience, they would have thought, “Wait a minute, this film isn’t doing that!” We felt that the structure of the film had to serve his principles well, wherever appropriate.

His final principle is ‘Good Design is as Little Design as Possible.’ We joked that ‘Good Filmmaking is as Little Filmmaking as Possible.’ We wanted the audience to be able to draw their own conclusions about Dieter’s work and how that translates into their daily lives. A viewer could walk away knowing what we were trying to accomplish without someone having to tell them what we were trying to accomplish.

There were times when I really didn’t know if I could do it. Being 26 and editing a feature film was daunting. Looking at those principles kept me focused on what the meat of the film’s structure should be. That made me realize how lucky we are to have had a designer who really took the time to think about principles that can be applied to a million different subjects. At one of the screenings, someone who had become a UI designer for software, in part because of Dieter, came up to us. He told us, “I read Dieter’s principles in a book and I realized these can be applied to how people interact with software.” They can be applied to a million different things and we certainly applied them to the edit.

______________________________________________________

Gary Hustwit will tour Rams internationally and in various US cities through December. After that time it will be available in digital form through Film First.

Click here to learn more about Dieter Rams’ Ten Principles for Good Design.

©2018 Oliver Peters

Apple 2018 MacBook Pro

July was a good month for Apple power users, with the simultaneous release of Blackmagic Design’s eGPU and a refresh of Apple’s popular MacBook Pro line, including both 13″ and 15″ models. Although these new laptops retain the previous model’s form factor, they gained a bump-up in processors, RAM, and storage capacity.

Apple loaned me one of the Touch Bar space gray 15” models for this review. It came maxed out with the 8th generation 2.9 GHz 6-core Intel Core i9 CPU, 32GB of faster DDR4 RAM, a Radeon Pro 560X GPU, and a 2TB SSD. The price range on the 15″ model is pretty wide, due in part to the available SSD choices – from 256GB up to 4TB. Touch Bar 15” configurations start at $2,399 and can go all the way up to $6,699, once you spec the top upgrade for everything. My configuration was only $4,699 with the 2TB SSD. Of course, that’s before you add AppleCare (which I highly recommend for laptops) and any accessories.

Apple also released premium leather sleeves for both the 13″ and 15″ models in three colors ($199 for the 15″ size). They are pricey, of course, but not out of line with other branded, luxury products, like bags and watch bands. They fit the unit snugly and protect it when you are out and about. In addition, they serve as a good pad on rough desk surfaces or when you have the MacBook Pro on your lap. Depending on the task you are performing, the bottom surface of the MacBook Pro can get warm, but nothing to be concerned about.

Before you point me to the nearest Windows gaming machine instead, let me mention that this review really isn’t a comparison against Windows laptops, but rather a look at advances by Apple within the MacBook Pro line. For context, I have owned six laptops to date – three PCs and three Macs. I shifted to Mac in order to have access to Final Cut Pro and have been happy with that move. The first two PCs developed stress fractures at the lid hinges before they were even a year old. The third, an HP, was solid, but after I gave it to my daughter, the power supply shorted. In addition, the hard drive became so corrupt (thank you, Windows) that it wasn’t worth trying to recover. In short, my Mac laptop experience, like that of others, has been one of good value. MacBook Pros generally last years and if you use them for actual billable work (editing, DIT, sound design, etc.), then the investment will pay for itself.

This is the fastest and best laptop Apple has made. Apple engineering has nicely balanced power, size, weight, and battery life in a way that’s hard to counter. It is expensive, but if you try to find an equivalent PC, it is hard to actually find one with these exact same specs or components, until you get into gaming PCs. Those a) look pretty ugly, b) tend to be larger and heavier, with lower battery life, and c) cost about the same. There’s also the sales experience. Try to navigate nearly any PC-centric laptop supplier in an effort to customize the options and it tends to become an exercise in frustration. On the other hand, Apple makes it quite easy to buy and configure its machines with the options that you want.

I do have to mention that when these MacBook Pros first came out, there was an issue of performance throttling, which was quickly addressed by Apple and fixed with a supplemental macOS release. That update had already been installed on my unit, so there were no throttling issues affecting any of my performance tests.

Likewise, there have been complaints about debris with the first run of the “butterfly” keys used in this and the previous version of these laptops. As other reviewers have noted in their teardowns, Apple has added a membrane under the keys to help with sound dampening. Some reviewers have speculated that this also helps mitigate or even eliminate the debris issues. Whatever the reason, I liked typing on this keyboard and it did sound quieter to me. I tend to bang on keys, since I’m not a touch typist. The feel of a keyboard is very subjective and, in the course of a day, I tend to type on several vintages of Apple keyboards. In general, the keyboard on this newest MacBook Pro felt comfortable to me when used for standard typing.

What did Apple bring new to the mix?

When Apple introduced the Touch Bar in 2016, I thought ‘meh’. But after a couple of weeks with this machine, I’ve really enjoyed it, especially when an application like Final Cut Pro X extends its controls to the Touch Bar. You can switch the Touch Bar preferences to show only function keys if you like, but having the control strip options makes it quick to adjust screen brightness, volume, and so on. In the case of FCPX, you also get a mini-timeline view in some modes. Even QuickTime Player calls up a small movie strip in the Touch Bar for the file being played.

These units also include Apple’s T2 security chip, which powers the fingerprint Touch ID and the newly added “Hey Siri” commands. The Retina screen on this laptop is gorgeous, with up to 500 nits of brightness and a wide color gamut. Another new addition is True Tone, which adjusts the display’s color temperature to match the surrounding ambient light. That may become a more important selling point in the coming years, given growing concern within the industry that blue light emitted from computer displays may contribute to eye strain and long-term eyesight damage. Generally, True Tone warms up the screen under interior lighting, which reduces eye fatigue when you are working with a lot of white documents. But my recommendation is that editors, colorists, photographers, and designers turn this feature off when working on tasks that require color accuracy. Otherwise, the color balance of media will appear too warm (yellowish).

The 2018 15” MacBook Pro has four Thunderbolt 3/USB-C ports and a headphone jack. The four ports (two per side) are driven by two internal Thunderbolt 3 (40Gb/s) buses, apparently one per side, which means that plugging two devices into the same side will share that bus’s available Thunderbolt 3 bandwidth. That said, this doesn’t seem to be much of a factor during actual use. The internal bus routing does appear to be different from the previous model, in spite of what otherwise is more or less the same hardware configuration.

Gone are all other connections, so plan on purchasing an assortment of adapters to connect peripherals, such as those ubiquitous USB thumb drives or hardware dongles (license keys). I do wish that Apple had retained at least one standard USB port. Thunderbolt 3 supports power, so no separate MagSafe port is required either. (Power supply and cable are included.) One minor downside of this is that there is no indicator LED when a full battery charge is achieved, like we used to have on the MagSafe plug.

If connected to a Thunderbolt 3 device with an adequate power supply (e.g. the LG displays or the Blackmagic eGPU sold through Apple), then a single cable can both transfer data and power the laptop. One caveat is that Thunderbolt 3 doesn’t pass a video signal in the same way as Thunderbolt 2. You cannot simply add a Thunderbolt 3-to-Thunderbolt 2 adapter and connect a typical monitor’s MiniDisplayPort plug, as was possible with Thunderbolt 2 ports. External monitors without the correct connection will need to go through a dock or monitor adapter in order to pass a video signal. (This is also true for the iMac Pros.)

Many users have taken to relying on their MacBook Pros as the primary machine for home or office, as well as on the road. The upside of Thunderbolt connectivity is that when you get back to the office, connecting a single Thunderbolt 3 cable to the rest of your suite peripherals (dock, display, eGPU, whatever) is all you need to get up and running. Simple and clean. Stick the laptop in a cradle in clamshell mode or on a laptop stand, connect the cable, and you now have a powerful desktop machine. MacBook Pros have gained enough power in recent years that – unless your demands are heavy – they can easily service your editing, photography, and graphics needs.

Is it time to upgrade?

I own a mid-2014 15” MacBook Pro (the last series with an NVIDIA GPU), which I purchased in early 2015. Three years is often a good interval for most professional users to plan on a computer refresh, so I decided to compare the two. To start with, the new 2018 machine boots faster and apps also open faster. It’s even slightly smaller and thinner than the mid-2014 model. Both have fast SSDs, but the 2018 model is significantly faster (2645 MB/s write, 2722 MB/s read – Blackmagic Speed Test).
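If you want to ballpark your own drive in a similar way without the Blackmagic Speed Test app, a quick sequential write/read timing can be scripted. This is a minimal sketch of my own (file name and sizes are arbitrary), not Blackmagic’s methodology, and OS caching can inflate the read figure, so treat the numbers as rough.

```python
import os
import time

# Rough sequential throughput check. Writes and re-reads a ~4GB temporary file
# on the current drive; results are a ballpark only.
TEST_FILE = "speedtest.tmp"   # hypothetical scratch file on the drive under test
BLOCK = 64 * 1024 * 1024      # 64 MB per block
BLOCKS = 64                   # 64 blocks = ~4 GB total

def write_test() -> float:
    data = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCKS):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure data actually hits the disk
    return (BLOCK * BLOCKS) / (time.perf_counter() - start) / 1e6  # MB/s

def read_test() -> float:
    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    return (BLOCK * BLOCKS) / (time.perf_counter() - start) / 1e6  # MB/s

if __name__ == "__main__":
    print(f"write: {write_test():.0f} MB/s")
    print(f"read:  {read_test():.0f} MB/s")
    os.remove(TEST_FILE)
```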

As with other reviews, I pulled an existing edit project for my test sequence. This timeline could be the same in Final Cut Pro X, Premiere Pro, and Resolve – without effects unique to one specific software application. My timeline consisted of 4K Alexa ProResHQ files that had a LUT and were scaled into a 1080p sequence. A few 1080p B-roll shots were also part of this sequence. The only taxing effect was a reverse slomo 4K clip, using optical flow interpolation. Both machines handled 4K ProRes footage just fine at full resolution using various NLEs. Exports to ProRes and H.264 were approximately twice as fast from Final Cut Pro X on the newer MacBook Pro. The same exports from Premiere Pro were longer overall than from FCPX, but faster on the 2018 machine, as well (see the section at the end for performance by the numbers).

If you are a fan of Final Cut Pro X, this machine is one of the best to run it on, especially if you can store your media on the internal drive. However, as an equalizer of sorts, I also ran these same test projects from an external SSD connected via USB 3. While fast (over 200 MB/s read/write), it wasn’t nearly as fast as the internal SSD. Nevertheless, performance didn’t really lag behind with either FCPX or Premiere Pro. However, the optical flow clip did pose some issues. It played smoothly at “best quality” in FCPX, but oddly stuttered at the “best performance” setting. It did not play well in Premiere Pro at either full or half resolution. I also believe it contributed to the slower export times evident with Premiere Pro.

I tested a second project made up of all 4K REDCODE raw footage, which was placed into a 4K timeline. The 2018 MacBook Pro played the individual files and edited sequences smoothly when set to “best performance” in FCPX or half resolution in Premiere Pro. However, bumping the settings up to full quality caused stuttering with either NLE.

My last test was the same DaVinci Resolve project that I’ve used for my eGPU “stress” tests. These are anamorphic 4K Alexa files in a 2K DCI timeline. I stripped off all of the added filters that I had applied for the test of the eGPU, leaving a typical editing timeline with only a LUT and basic correction. This sequence played smoothly without dropping frames, which bodes well for editors who are considering a shift to Resolve as their main NLE.

Speaking of the Blackmagic eGPU tests, I had one day of overlap between the loans of the MacBook Pro and the Blackmagic eGPU. DaVinci Resolve’s real-time playback performance and exports improved by roughly a factor of two with the eGPU connected to the 15” model. Naturally, the 15” machine by itself was quite a bit faster than the 13” MacBook Pro, so the improvement with an eGPU attached wasn’t as dramatic a margin as the 13” test demonstrated. Even with this powerhouse MacBook Pro, the Blackmagic eGPU still adds value as a general appliance, as well as providing Resolve acceleration.

A note on battery life. The spec claims about 10 hours, but that’s largely for simple use, like watching web movies or listening to iTunes. Most of these activities do not cause the graphics to switch over from the integrated Intel to the Radeon Pro GPU, which consumes more power. In my editing tests with the Radeon GPU constantly on – and most of the energy saving settings disabled – I got five to six hours of battery life. That’s even when an application like FCPX was open, but minimized, without any real activity being done on the laptop.

I also ran a “heavy load” test, which involved continually looping my sample 1080 timeline (with 4K source media) full screen at “best quality” in FCPX. This is obviously a worst case scenario, but the charge only lasted about two hours. In short, the battery capacity is very good for a laptop, but one can only expect so much. If you plan on a heavy workload for an extended period of time, stay plugged in.

The 2018 MacBook Pro is a solid update that creative professionals will certainly enjoy, both in the field and even as a desktop replacement. If you bought last year’s model, there’s little reason to refresh your computer, yet. But three years or more? Get out the credit card!

_________________________________________________

Performance by the numbers

Blackmagic Design eGPU test

DaVinci Resolve renders/exports
(using the same test sequence as used for my eGPU review)

13” 2018 MacBook Pro – internal Intel graphics only
Render at source resolution – 1fps
Render at timeline resolution – 4fps

13” 2018 MacBook Pro – with Blackmagic eGPU
Render at source resolution – 5.5fps
Render at timeline resolution – 17.5fps

15” 2018 MacBook Pro – internal Radeon graphics only
Render at source resolution – 2.5fps
Render at timeline resolution – 8fps

15” 2018 MacBook Pro – with Blackmagic eGPU
Render at source resolution – 5.5fps
Render at timeline resolution – 16fps

Standard performance tests – 2018 15” MacBook Pro vs. Mid-2014
(using editing test sequence – 4K ProResHQ media)

2018 export from FCPX to ProRes  :30
2018 export from FCPX to H.264 at 10Mbps  :57
2014 export from FCPX to ProRes  :57
2014 export from FCPX to H.264 at 10Mbps  1:42

2018 export from Premiere Pro to ProRes  2:59
2018 export from Premiere Pro to H.264 at 10Mbps  2:32
2014 export from Premiere Pro to ProRes  3:35
2014 export from Premiere Pro to H.264 at 10Mbps  3:25

2018 export from Resolve to ProRes :35
2018 export from Resolve to H.264 at 10Mbps  :35
(Mid-2014 MBP was not used in this test)
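For those who like ratios, a quick bit of arithmetic on the FCPX and Premiere Pro times above (converted to seconds) bears out the roughly 2x FCPX observation. A small sketch:

```python
# Speed-up factors computed from the export times listed above, in seconds.
times = {
    "FCPX to ProRes":          (57, 30),    # (mid-2014, 2018)
    "FCPX to H.264":           (102, 57),
    "Premiere Pro to ProRes":  (215, 179),
    "Premiere Pro to H.264":   (205, 152),
}
for test, (old, new) in times.items():
    print(f"{test}: {old / new:.1f}x faster on the 2018 model")
```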

Originally written for RedSharkNews

©2018 Oliver Peters