NAB Show 2019

This year the NAB Show seemed to emphasize its roots – the “B” in National Association of Broadcasters. Gone or barely visible were the fads of past years, such as stereoscopic 3D, 360-degree video, virtual/augmented reality, drones, etc. Not that these are gone – merely that they have refocused on the smaller segment of market share that reflects reality. There’s not much point in promoting stereo 3D at NAB if most of the industry goes ‘meh’.

Big exhibitors of the past, like Quantel, RED, Apple, and Autodesk, are gone from the floor. Quantel products remain as part of Grass Valley (now owned by Belden), which is the consolidation of Grass Valley Group, Quantel, Snell & Wilcox, and Philips. RED decided last year that small, camera-centric shows were better venues. Apple – well, they haven’t been on the main floor for years, but even this year, there was no off-site Final Cut Pro X stealth presence in a hotel suite somewhere. Autodesk, which shifted to a subscription model a couple of years ago, had a demo suite in the nearby Renaissance Hotel, focusing on its hero product, Flame 2020. Smoke for Mac users – tough luck. It’s been over for years.

This was a nuts-and-bolts year, with many exhibits showing new infrastructure products. These appeal to larger customers, such as broadcasters and network facilities. Specifically, the world is shifting to an IP-based infrastructure for signal routing, control, and transmission. This replaces the copper and fiber wiring of the past, along with the devices (routers, video switchers, etc) at either end of the wire. Companies that might have appeared less relevant, like Grass Valley, are back in a strong sales position. Other companies, like Blackmagic Design, are being encouraged by their larger clients to fulfill those needs. And as ever, consolidation continues – this year VizRT acquired NewTek, which has been an early player in video-over-IP with its proprietary NDI protocol.

Adobe

The NAB season unofficially started with Adobe’s pre-NAB release of the CC2019 update. For editors and designers, the hallmarks of this update include a new freeform bin window view and adjustable guides in Premiere Pro, and content-aware video fill in After Effects. These are solid additions in response to customer requests, which is something Adobe has focused on. A smaller, but no less important feature is Adobe’s ongoing effort to improve media performance on the Mac platform.

As in past years, their NAB booth was an opportunity to present these new features in-depth, as well as showcase speakers who use Adobe products for editing, sound, and design. Part of the editing team from the series Atlanta was on hand to discuss the team’s use of Premiere Pro and After Effects in their ‘editing crash pad’.

Avid

For many attendees, NAB actually kicked off on the weekend with Avid Connect, a gathering of Avid users (through the Avid Customer Association), featuring meet-and-greets, workshops, presentations, and ACA leadership committee meetings. While past product announcements at Connect have been subdued from the vantage of Media Composer editors, this year was a major surprise. Avid revealed its Media Composer 2019.5 update (scheduled for release at the end of May). This came as part of a host of other updates. Most of these apply to companies that have invested in the full Avid ecosystem, including Nexis storage and Media Central asset management. While those are superb, they only apply to a small percentage of the market. Let’s not forget Avid’s huge presence in the audio world, thanks to the dominance of Pro Tools – now with Dolby Atmos support. With the acquisition of Euphonix years back, Avid has become a significant player in the live and studio sound arena. Various examples of its S-series consoles in action were presented.

Since I focus on editing, let me discuss Media Composer a bit more. The 2019.5 refresh is the first major Media Composer overhaul in years. It started in secret last year. 2019.5 is the first iteration of the new UI, with more to be updated in coming releases. In short, the interface has been modernized and streamlined in ways to attract newer, younger users, without alienating established editors. Its panel design is similar to Adobe’s approach – i.e. interface panels can be docked, floated, stacked, or tabbed. Panels that you don’t want to see may be closed or simply slid to the side and hidden. Need to see a hidden panel again? Simply slide it back open from the edge of the screen.

This isn’t just a new skin. Avid has overhauled the internal video pipeline, with 32-bit floating point color and an uncompressed DNx codec. Project formats now support up to 16K. Avid is also compliant with the specs of the Netflix Post Alliance and the ACES logo program.

I found the new version very easy to use and a welcome change; however, it will require some adaptation if you’ve been using Media Composer for a long time. In a nod to the Media Composer heritage, the weightlifter (aka ‘liftman’) and scissors icons (for lift and extract edits) are back. Even though Media Composer 2019.5 is just in early beta testing, Avid felt good enough about it to use this version in its workshops, presentations, and stage demos.

One of the reasons to go to NAB is for the in-person presentations by top editors about their real-world experiences. No one can top Avid at this game, who can easily tap a host of Oscar, Emmy, BAFTA, and Eddie award winners. The hallmark for many this year was the presentation at Avid Connect and/or at the show by the Oscar-winning picture and sound editing/mixing team for Bohemian Rhapsody. It’s hard not to gather a standing-room-only crowd when you close your talk with the Live Aid finale sequence played in kick-ass surround!

Blackmagic Design

Attendees and worldwide observers have come to expect a surprise NAB product announcement out of Grant Petty each year and he certainly didn’t disappoint this time. Before I get into that, there were quite a few products released, including products for IP infrastructure, 8K production and post, and more. Blackmagic is a full-spectrum video and audio manufacturer that long ago moved into the ‘big leagues’. This means that just like Avid or Grass Valley, they have to respond to pressure from large users to develop products designed around their specific workflow needs. In the BMD booth, many of those development fruits were on display, like the new Hyperdeck Extreme 8K HDR recorder and the ATEM Constellation 8K switcher.

The big reveal for editors was DaVinci Resolve 16. Blackmagic has steadily been moving into the editorial space with this all-in-one, edit/color/mix/effects/finishing application. If you have no business requirement for – or emotional attachment to – one of the other NLE brands, then Resolve (free) or Resolve Studio (paid) is an absolute no-brainer. Nothing can touch the combined power of Resolve’s feature set.

New for Resolve 16 is an additional editorial module called the Cut Page. At first blush, the design, layout, and operation are amazingly similar to Apple’s Final Cut Pro X. Blackmagic’s intent is to make a fast editor where you can start and end your project for a time-sensitive turnaround without the complexities of the Edit Page. However, it’s just another tool, so you could work entirely in the Cut Page, or start in the Cut Page and refine your timeline in the Edit Page, or skip the Cut Page altogether. Resolve offers a buffet of post tools at your disposal.

While Resolve 16’s Cut Page does elicit a chuckle from experienced FCPX users, it offers some new twists. For example, there’s a two-level timeline view – the top section is the full-length timeline and the bottom section is the zoomed-in detail view. The intent is quick navigation without the need to constantly zoom in and out of long timelines. There’s also an automatic sync detection function. Let’s say you are cutting a two-camera show. Drop the A-camera clips onto the timeline and then go through your B-camera footage. Find a cut-away shot, mark in/out on the source, and edit. It will ‘automagically’ edit to the in-sync location on the timeline. I presume this is matched by either common sound or timecode. I’ll have to see how this works in practice, but it demos nicely. Changes to other aspects of Resolve were minor and evolutionary, except for one other notable feature. The Color Page added its own version of content-aware, video fill.

Another editorial product addition – tied to the theme of faster, more-efficient editing – was a new edit keyboard. Anyone who ever cut in the linear days – especially those who ran Sony BVE9000/9100 controllers – will feel very nostalgic. It’s a robust keyboard with a high-quality, integrated jog/shuttle knob. The feel is very much like controlling a tape deck in a linear system, with fast shuttle response and precise jogging. The precision is far better than any of the USB controllers, like a Contour Shuttle. Whether or not enough people will have interest in shelling out $1,025 for it remains to be seen. It’s a great tool, but are you really faster with one than with FCPX’s skimming and a standard keyboard and mouse?

Ironically, if you look around the Blackmagic Design booth there does seem to be a nostalgic homage to Sony hardware of the past. As I said, the edit keyboard is very close to a BVE9100 keyboard. Even the style of the control panel on the Hyperdecks – and the look of the name badges on those panels – is very much Sony’s style. As humans, this appeals to our desire for something other than the glass interfaces we’ve been dealing with for the past few years. Michael Cioni (Panavision, Light Iron) coined the term ‘tactile attraction’ for this in his excellent Faster Together Stage talk. It manifests itself not only in these types of control surfaces, but also in skeuomorphic designs applied to audio filter interfaces. Or in the emotion created in the viewer when a colorist adds film grain to digital footage.

Maybe Grant is right and these methods are really faster in a pressure-filled production environment. Or maybe this is simply an effort to appeal to emotion and nostalgia by Blackmagic’s designers. (Check out Grant Petty’s two-hour 2019 Product Overview for more in-depth information on Blackmagic Design’s new products.)

8K

I won’t spill a lot of words on 8K. It seems kind of silly when most delivery is HD and even SD in some places. A lot of today’s production is in 4K, but really only for future-proofing. But the industry has to sell newer and flashier items, so it has moved on to 8K pixel resolution (7680 x 4320). Much of this is driven by Japanese broadcasters and manufacturers, who are pushing into 8K. You can laugh or roll your eyes, but NAB had many examples of 8K production tools (cameras and recorders) and display systems. Of course, it’s NAB, making it hard to tell how many of these are only prototypes and not yet ready for actual production and delivery.

For now, it’s still a 4K game, with plenty of mainstream product. Not only cameras and NLEs, but items like AJA’s KiPro family. The KiPro Ultra Plus records up to four channels of HD or one channel of 4K in ProRes or DNx. The newest member of the family is the KiPro GO, which records up to four channels of HD (25Mbps H.264) onto removable USB media.

Of course, the industry never stops, so while we are working with HD and 4K, and looking at 8K, the developers are planning ahead for 16K. As I mentioned, Avid already has project presets built-in for 16K projects. Yikes!

HDR

HDR – or high dynamic range – is about where it was last year. There are basically four formats vying to become the final standard used in all production, post, and display systems. While there are several frontrunners and edicts from distributors to deliver HDR-compatible masters, there still is no clear path. If you shoot in log or camera raw with nearly any professional camera produced within the past decade, you have originated footage that is HDR-compatible. But none of the low-cost post solutions make this easy. Without the right monitoring environment, you are wasting your time. If anything, those waters are muddier this year. There were a number of HDR displays throughout the show, but there were also a few labelled as using HDR simulation. I saw a couple of those at TV Logic. Yes, they looked gorgeous and yes, they were receiving an HDR signal. I found out that the ‘simulation’ part of the description meant that the display was bright (up to 350 nits), but not bright enough to qualify as ‘true’ HDR (1,000 nits or higher).

As in past transitions, we are certainly going to have to rely on some ‘glue’ products. For me, that’s AJA again. Through their relationship with Colorfront, AJA offers two FS-HDR products: the HDR Image Analyzer and the FS-HDR converter. The latter was introduced last year as a real-time frame synchronizer and color converter to go between SDR and HDR display standards. The new Analyzer is designed to evaluate color space and gamut compliance. Just remember, no computer display can properly show you HDR, so if you need to post and deliver HDR, proper monitoring and analysis tools are essential.

Cameras

I’m not a cinematographer, but I do keep up with cameras. Nearly all of this year’s camera developments were evolutionary: new LF (large format sensor) cameras (ARRI), 4K camcorders (Sharp, JVC), a full-frame mirrorless DSLR from Nikon (with ProRes RAW recording coming in a future firmware update). Most of the developments were targeted towards live broadcast production, like sports and megachurches.  Ikegami had an 8K camera to show, but their real focus was on 4K and IP camera control.

RED, a big player in the cinema space, was only there in a smaller demo room, so you couldn’t easily compare their 8K imagery against others on the floor, but let’s not forget Sony and Panasonic. While ARRI has been a favorite, due to the ‘look’ of the Alexa, Sony (Venice) and Panasonic (Varicam and now EVA-1) are also well-respected digital cinema tools that create outstanding images. For example, Sony’s booth featured an amazing, theater-sized, LED 8K micro-pixel display system. Some of the sample material shown was of the Rio Carnival, shot with anamorphic lenses on a 6K full-frame Sony Venice camera. Simply stunning.

Finally, let’s not forget Canon’s line-up of cinema cameras, from the C100 to the C700FF. To complement these, Canon introduced their new line of Sumire Prime lenses at the show. The C300 has been a staple of documentary films, including the Oscar-winning film, Free Solo, which I had the pleasure of watching on the flight to Las Vegas. Sweaty palms the whole way. It must have looked awesome in IMAX!

(For more on RED, cameras, and lenses at NAB, check out this thread from DP Phil Holland.)

It’s a wrap

In short, NAB 2019 had plenty for everyone. This also included smaller markets, like products for education seminars. One of these that I ran across was Cinamaker. They were demonstrating a complete multi-camera set-up using four iPhones and an iPad. The iPhones are the cameras (additional iPhones can be used as isolated sound recorders) and the iPad is the ‘switcher/control room’. The set-up can be wired or wireless, but camera control, video switching, and recording are all done at the iPad. This can generate the final product, or be transferred to a Mac (with the line cut and camera iso media, plus edit list) for re-editing/refinement in Final Cut Pro X. Not too shabby, given the market that Cinamaker is striving to address.

For those of us who like to use the NAB Show exhibit floor as a miniature yardstick for the industry, one of the trends to watch is what type of gear is used in the booths and press areas. Specifically, one NLE over another, or one hardware platform versus the other. On that front, I saw plenty of Premiere Pro, along with some Final Cut Pro X. Hardware-wise, it looked like Apple versus HP. Granted, PC vendors, like HP, often supply gear to use in the booths as a form of sponsorship, so take this with a grain of salt. Nevertheless, I would guess that I saw more iMac Pros than any other single computer. For PCs, it was a mix of HP Z4, Z6, and Z8 workstations. HP and AMD were partner-sponsors of Avid Connect and they demoed very compelling set-ups with these Z-series units configured with AMD Radeon cards. These are very powerful workstations for editing, grading, mixing, and graphics.

©2019 Oliver Peters

Are you ready for a custom PC?

Why would an editor, colorist, or animator purchase a workstation from a custom PC builder, instead of one of the brand name manufacturers? Puget Systems, a PC supplier in Washington state, loaned me a workstation to delve into this question. They pride themselves on assembling systems tailor-made for creative users. Not all component choices are equal, so Puget tests the same creative applications we use every day in order to optimize their systems. For instance, Premiere Pro benefits from more CPU cores, whereas with After Effects, faster core speeds are more important than the core count.

Puget Systems also offers a unique warranty. It’s one year on parts, but lifetime free labor. This means free tech and repair support for as long as you own the unit. Even better, it also includes free labor to install hardware upgrades at their facility at any point in the future – you only pay for parts and shipping.

Built for editing

The experience starts with a consultation, followed by progress reports, test results, and photos of your system during and after assembly. These include thermal scans showing your system under load. Puget’s phone advisers can recommend a system designed specifically for your needs, whether that’s CAD, gaming, After Effects, or editing. My target was Premiere Pro and Resolve with a bit of After Effects. I needed it to be capable of dealing with 4K media using native codecs (no transcodes or proxies). 

Puget’s configuration included an eight-core Intel i9 3.6GHz CPU, 64GB RAM, and an MSI GeForce RTX 2080 Ti Ventus GPU (11GB). We put in two Samsung SSDs (a Samsung 860 Pro for OS/applications, plus a faster Samsung 970 Pro M.2 NVMe for cache) and a Western Digital Ultrastar 6TB SATA3 spinning drive for media. This PC has tons of connectivity with ports for video displays, Thunderbolt 3, USB-C, and USB 3. The rest was typical for any PC: sound card, ethernet, wifi, DVD-RW, etc. This unit without a display costs slightly over $5K USD, including shipping and a Windows 10 license. That price is in line with (or cheaper than) any other robust, high-performance workstation.

The three drives in this system deliver different speeds and are intended for different purposes. The fastest of these is the “D” drive, which is a blazingly fast NVMe drive that is mounted directly onto the motherboard. This one is intended for use with material requiring frequent and fast read/write cycles. So it’s ideal for Adobe’s cache files and previews. While you wouldn’t store the media for a large Premiere Pro project on it, it would be well-suited for complex After Effects jobs, which typically only deal with a smaller amount of media. While the 6TB HGST “E” drive dealt well with the 4K media for my test projects, in actual practice you would likely add more drives and build up an internal RAID, or connect to a fast external array or NAS.

If we follow Steve Jobs’ analogy that PCs are like trucks, then this is the Ford F-350 of workstations. The unit is a tad bigger and heavier than an older Mac Pro tower. It’s built into an all-metal Fractal Design case with sound dampening and efficient cooling, resulting in the quietest workstation I’ve ever used – even the few times when the fans revved up. There’s plenty of internal space for future expansion, such as additional hard drives, GPUs, I/O cards, etc.

For anyone fretting about a shift from macOS to Windows, setting up this system couldn’t have been simpler. Puget installs a professional build of Windows 10 without all of the junk software most PC makers put there. After connecting my devices, I was up and running in less than an hour, including software installation for Adobe CC, Resolve, Chrome, MacDrive, etc. That’s a very ‘Apple-like’ experience and something you can’t touch if you built your own PC.

The proof is in the pudding

Professional users want hardware and software to fade away so they can fluidly concentrate on the creative process. I was working with 4K media and mixed codecs in Premiere Pro, After Effects, and Resolve. The Puget PC more than lived up to its reputation. It was quiet, media handling was smooth, and Premiere and Resolve timelines could play without hiccups. In short, you can stay in the zone without the system creating distractions.

I don’t work as often with RED camera raw files; however, I did load up original footage from an indie film onto the fastest SSD. This was 4K REDCODE media in a 4K timeline in Premiere Pro. Adobe gives you access to the raw settings, in addition to Premiere’s Lumetri color correction controls. The playback was smooth as silk at full timeline resolution. Even adding Lumetri creative LUTs, dissolves, and slow motion with optical flow processing did not impede real-time playback at full resolution. No dropped frames! Nvidia and RED Digital Camera have been working closely together lately, so if your future includes work with 6K/8K RED media, then a system like this requires serious consideration.

The second concern is rendering and exporting. The RTX 2080 Ti is an Nvidia card that offers CUDA processing, a proprietary Nvidia technology.  So, how fast is the system? There are many variables, of course, such as scaling, filters, color correction, and codecs. When I tested the export of a single 4K Alexa clip from a 1080p Premiere Pro timeline, the export times were nearly the same between this PC and an eight-core 2013 Mac Pro. But you can’t tell much from such a simple test.

To push Premiere Pro, I used a nine minute 1080p travelogue episode containing mostly 4K camera files. I compared export times for ProRes (new on Windows with Adobe CC apps) and Avid DNx between this PC and the Mac Pro (through Adobe Media Encoder). ProRes exports were faster than DNxHD and the PC exports were faster than on the Mac, although comparative times tended to be within a minute of each other. The picture was different when comparing H.264 exports using the Vimeo Full HD preset. In that test, the PC export was approximately 75% faster.

The biggest performance improvements were demonstrated in After Effects and Resolve. I used Puget Systems’ After Effects Benchmark, which includes a series of compositions that test effects, tracking, keys, caustics, 3D text, and more (based on Video Copilot’s tutorials). The Puget PC trounced the Mac Pro in this test. The PC scored a total of 969.5 points versus the Mac’s 535 out of a possible maximum score of 1,000. Resolve was even more dramatic with the graded nine-minute-long sequence sent from Premiere Pro. Export times bested the Mac Pro by more than 2.5x for DNxHD and 6x for H.264.

Aside from these benchmark tests, I also created a “witches brew” After Effects composition of my own. This one contains ten layers of 4K media in a one-minute-long 6K composition. The background layer was blown up and defocused, while all other layers were scaled down and enhanced with a lot of color and Cycore stylized effects. A 3D camera was added to create a group move for the layers. In addition, I was working from the slower drives and not the fast SSDs on either machine. Needless to say this one totally bogs any system down. The Mac Pro rendered a 1080 ProRes file in about 54 minutes, whereas the PC took 42 minutes. Not the same 2-to-1 advantage as in the benchmarks; however, that’s likely due to the fact that I heavily weighted the composition with the Cycore effects. These are not particularly efficient and probably introduce some bottlenecks in After Effects’ processing. Nevertheless, the Puget Systems PC still maintained a decided advantage.
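For readers who want to sanity-check that comparison, the benchmark and render results reduce to simple ratios. A quick sketch using the figures quoted above (Python purely for illustration):

```python
def speedup(slower: float, faster: float) -> float:
    """How many times faster the better result is, given two times
    where lower is better."""
    return slower / faster

# Benchmark points (higher is better): divide the PC score by the Mac score.
benchmark_ratio = 969.5 / 535.0          # about 1.81x

# Render times in minutes (lower is better): divide Mac time by PC time.
render_ratio = speedup(54.0, 42.0)       # about 1.29x

print(f"benchmark: {benchmark_ratio:.2f}x  render: {render_ratio:.2f}x")
```

That 1.29x figure lines up with the observation that the witches-brew render didn’t show the same near-2-to-1 advantage as the benchmark suite.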

Conclusion

Mac vs. PC comparisons are inevitable when discussing creative workstations. Ultimately it gets down to preference – the OS, the ecosystem, and hardware options. But if you want the ultimate selection of performance hardware and to preserve future expandability, then a custom-built PC is currently the best solution. For straightforward editing, both platforms will generally serve you well, but there are times when a top-of-the-line PC simply leaves any Mac in the dust. If you need to push performance in After Effects or Resolve, then Windows-based solutions offer the edge today. Custom systems, like those from Puget Systems, are designed with our needs in mind. That’s something you don’t necessarily get from a mainline PC maker. This workstation is a future-proof, no-compromise system that makes the switch from Mac to PC an easy and graceful transition – and with power to spare.

Originally written for RedShark News.

Blackmagic Design eGPU Pro

Last year Apple embraced external graphics processing units. Blackmagic Design responded with the release of its AMD-powered eGPU model. Many questioned their choice of the Radeon Pro 580 chip instead of something more powerful. That challenge has been answered with the new Blackmagic eGPU Pro. It sports the Radeon RX Vega 56 – a similar model to the one inside the base iMac Pro configuration. The two eGPU models are nearly identical in design, but in addition to more processing power, the eGPU Pro adds a DisplayPort connection that can support 5K monitors.

The eGPU Pro includes two Thunderbolt 3/USB-C ports with 85W charging capability, HDMI, DisplayPort, and four USB-A type connectors for standard USB 3.1 devices. This means you can connect multiple peripherals and displays, plus power your laptop. You’ll need a Thunderbolt 3 connection from the computer and then either eGPU model becomes plug-and-play with Mojave (macOS 10.14) or later.

Setting up the eGPU Pro

With Mojave, most current creative apps, like Final Cut Pro X, Premiere Pro, Resolve, etc. offer a preference selection to always use the eGPU (when connected) from the application’s Get Info panel. This is an “either/or” choice. The application does not combine the power of both GPUs for maximum performance. When you pull up the Activity Monitor, you can easily see that the internal GPU is loafing while the eGPU Pro does the heavy lifting during tasks such as rendering. External GPUs benefit Macs with low-end, built-in GPUs, like the 13″ MacBook Pro or the Mac mini. A Blackmagic eGPU or eGPU Pro wouldn’t provide an edge to the render times of an iMac Pro, for example. It wouldn’t be worth the investment, unless you need one to connect additional high-resolution displays.

Users who are unfamiliar with external GPUs assume that the advantage is in faster export and render times, but that’s only part of the story. Not every function of an application uses the GPU, so many factors determine rendering. External GPU technology is very much about real-time image output. An eGPU will allow more connected displays of higher resolutions than an underpowered Mac would normally support on its own. The eGPU will also improve real-time playback of effects-heavy timelines. So yes, editors will get faster exports, but they will also enjoy a more fluid editing experience.

Extending the power of the Mac mini

In my Mac mini review, I concluded that a fully-loaded configuration made for a very capable editing computer. However, if you tend to use a number of effects that lean on GPU power, you will see an impact on real-time playback. For example, with the standard Intel GPU, I could add color correction, gaussian blur, and a title, and playback was generally fine with a fast drive. But, when I added a mask to the blur, it quickly dropped frames during playback. Once I connected the eGPU Pro to this same Mac mini, such timelines played fluidly and, in fact, more effects could be layered onto clips. As in my other tests, Final Cut Pro X performed the best, but Premiere Pro and Resolve also performed solidly.

For basic rendering, I tested the same sequence that I used in the Mac mini review. This is a 9:15-long 1080p timeline made up of 4K source clips in a variety of codecs, plus scaling and color correction. I exported ProRes and H.264 master files from FCPX, Premiere Pro, and Resolve. With the eGPU Pro, times were cut in the range of 12% (FCPX) to 54% (Premiere). An inherently fast renderer, like Final Cut, gained the least by percentage, as it already exhibited the fastest times overall. Premiere Pro saw the greatest gain from the addition of the eGPU Pro. This is a major improvement over last year when Premiere didn’t seem to take much advantage of the eGPU. Presumably both Apple and Adobe have optimized performance when an eGPU is present.

Most taxing tests

A timeline export test is real-world but may or may not tax a GPU. So, I set up a specific render test for that purpose. I created a :60 6K timeline (5760x3240) composed of a nine-screen composite of 4K clips scaled into nine 1920x1080 sections. Premiere Pro would barely play this at even 1/16th resolution using only the integrated Intel GPU. With the eGPU Pro, it generally played at 1/2 resolution. This was exported to a final 1080 ProRes file. During my base test (without the eGPU connected) Premiere Pro took over 31 minutes with “maximum quality” selected. A standard quality export was about eight minutes, while Final Cut Pro X took five minutes. Once I re-connected the eGPU Pro, the same timelines exported in 3:20 under all three test scenarios. That’s a whopping 90% reduction in time for the most taxing condition! One last GPU-centric test was the BruceX test, which was devised for Final Cut. The result without the eGPU was :58, but an impressive :16 when the eGPU Pro was used.
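For clarity, here is how the percentage reductions quoted in these tests work out – a minimal sketch using the ~31:00 and 3:20 Premiere Pro figures from the paragraph above:

```python
def percent_reduction(before_s: float, after_s: float) -> float:
    """Percentage by which a render/export time shrank."""
    return (before_s - after_s) / before_s * 100.0

before = 31 * 60        # ~31:00 without the eGPU Pro, in seconds
after = 3 * 60 + 20     # 3:20 with the eGPU Pro
print(f"{percent_reduction(before, after):.0f}% reduction")  # roughly 89%
```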

As you can see, effects-heavy work will benefit from the eGPU Pro, not only in faster renders and exports, but also improved real-time editing. This is also true of Resolve timelines with many nodes and in other graphics applications, like Pixelmator Pro. The 2018 Mac mini is a capable mid-range system when you purchase it with the advanced options. Nevertheless, users who need that extra grunt will definitely see a boost from the addition of a Blackmagic eGPU Pro.

Originally written for RedShark News.

The Nuances of Overcranking

The concept of overcranking and undercranking in the world of film and video production goes back to the origins of motion picture technology. The earliest film cameras required the camera operator to manually crank the film mechanism – they didn’t have internal motors. A good camera operator was partially judged by how constant a frame rate they could maintain while cranking the film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then the playback speed during projection was in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that the projection of old B&W films gained that fast, jerky motion we often incorrectly attribute to “old time movies” today. That characteristic motion is because they are no longer played at their intended speeds.
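The speed relationship described above is just the ratio between the capture rate and the playback rate. A small sketch, using the frame rates from the paragraph:

```python
def apparent_speed(capture_fps: float, playback_fps: float) -> float:
    """Apparent motion speed relative to real time.

    Values above 1.0 mean sped-up motion (undercranked relative to playback);
    values below 1.0 mean slow motion (overcranked relative to playback).
    """
    return playback_fps / capture_fps

# Silent-era footage (18fps) projected at the sound-era 24fps:
print(f"{apparent_speed(18, 24):.2f}x")   # 1.33x - the familiar jerky, fast look
# Overcranking at 48fps for 24fps playback:
print(f"{apparent_speed(48, 24):.2f}x")   # 0.50x - half-speed slow motion
```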

While manual film cranking seems anachronistic in modern times, it had the benefit of in-camera, variable-speed capture – aka speed ramps. Some modern film cameras include speed-controlled mechanisms that still make this possible today – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)
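The pulldown cadence mentioned above can be sketched numerically. In classic 3:2 (2:3) pulldown, film frames alternately contribute two and three interlaced fields, so four film frames yield ten fields – five video frames. This is an illustration of the cadence arithmetic, not broadcast-grade code:

```python
def pulldown_fields(film_frames: int) -> int:
    """Total interlaced fields produced by 2:3 pulldown for n film frames."""
    return sum(2 if i % 2 == 0 else 3 for i in range(film_frames))

fields = pulldown_fields(24)      # one second of 23.976fps film
print(fields, fields // 2)        # 60 fields -> 30 video frames (29.97 nominal)
```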

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slo-mo and fast-speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general, it was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording. Post shifted to digital, computer-based, rather than electro-mechanical methods. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms enable viewers to view videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common, possible system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smartphone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but the post workflow is designed for 23.98, then the editor has four options. 1) Play the files in real time (29.97 in a 23.98 sequence), which will cause frames to be dropped, resulting in some stuttering on motion. 2) Play the footage at the slowed speed, so that there is a one-to-one relationship of frames, which doesn’t work for sync sound. 3) Go through a frame rate conversion before editing starts, which will result in blended and/or dropped frames. 4) Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set the system or target frame rate in addition to the actual recording rate. These may be called different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which the file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking. That is, you are recording slow motion in-camera.

(Note: from here on I will use simplified whole numbers instead of the exact fractional rates in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in playback that is fast motion at 200%. That’s the basis for hyperlapse/timelapse footage.
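That relationship is just a ratio. A minimal sketch, using the simplified rates from the examples above (the function name is mine, not any camera or NLE API):

```python
# Effective playback speed (as a percentage) when a clip recorded at one
# rate is conformed to a target rate, playing frames one-to-one.
def playback_speed_pct(record_fps, target_fps):
    return target_fps / record_fps * 100

print(playback_speed_pct(48, 24))   # 50.0  -> overcranked, slow motion
print(playback_speed_pct(12, 24))   # 200.0 -> undercranked, fast motion
print(playback_speed_pct(120, 24))  # 20.0  -> 1/5 speed slow motion
```

The 120fps case is what a high-speed recording on a camera like an ARRI Alexa resolves to when conformed to a 24fps target.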

The good news is that professional production cameras embed the pertinent metadata into the file so that editing and player software automatically know what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically play back in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the use is for non-sound-sync B-roll footage, then altering the frame rate (making it play slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice. The slower footage helps to smooth out the vibration introduced by using such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recordings, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed. It wobbles around that rate – sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.

Interpret footage

This may have a different name in any given application, but interpret footage is a function to make the application think that the file should be played at a different rate than it was recorded at. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can re-write the QuickTime header information without transcoding the file. Then that file shows up at the desired rate inside of the NLE. In the case of FCPX, the same potential audio issues can arise as described above if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow. The proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking in Resolve will be incorrect. It may result in not being able to relink these files at all, because the timecode that Resolve looks for falls outside of the boundaries of the file. So use this function with caution.

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately this introduces issues like skipped frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of that rate, much like traditional film post. A 200% speed increase is a proper multiple. 150% is not. The former means you are playing every other frame from a clip for smooth action. The latter eliminates one third of the frames in playback, leaving you with some unevenness in the movement.
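You can see the difference between a clean multiple and an arbitrary speed by listing which source frames actually get played. This is a simplified sketch – it assumes the NLE picks source frame floor(i × speed) for each output frame i, which is the simplest frame-sampling model, not any particular NLE’s implementation:

```python
import math

# Which source frames of a 12-frame clip get played at a given speed,
# assuming the simplest model: source frame = floor(i * speed).
def frames_played(speed, source_len=12):
    out_len = int(source_len / speed)
    return [math.floor(i * speed) for i in range(out_len)]

print(frames_played(2.0))  # [0, 2, 4, 6, 8, 10] -> every other frame, even motion
print(frames_played(1.5))  # [0, 1, 3, 4, 6, 7, 9, 10] -> lopsided skip pattern
```

At 200% the step between played frames is always the same; at 150% the clip alternates between consecutive frames and a skip, which is what reads as uneven motion.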

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle this better than others. Optical flow “creates” new in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and may often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changes in motion interpolation in Resolve are pointless, unless the final export of the timeline is from Resolve. If clips go back to your NLE for finishing, then it will be that software which determines the quality of motion effects. Twixtor is a plug-in that many editors use when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and the ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate for frames to play one-to-one (no blended, duplicate, or skipped frames). Final Cut Pro X handles this in the best manner, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually. 

The easiest way to get a value that yields clean frames (one-to-one frame rate) is to simply divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. The simplified whole numbers yield the same results as the exact fractional rates. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion speed. A 59.94fps clip is 40%. A 25fps clip is 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want this to be one-to-one, it will have to be sped up. But the calculation is the same. 30 divided by 24 results in a 125% speed adjustment. And so on.
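All of these cases are the same division, which can be wrapped in one small helper (the function name is mine, and the rates are the simplified whole numbers used throughout this post):

```python
# One-to-one conform speed: timeline rate divided by clip rate,
# expressed as a percentage (simplified whole-number rates).
def conform_speed_pct(timeline_fps, clip_fps):
    return timeline_fps / clip_fps * 100

print(conform_speed_pct(24, 30))  # 80.0  -> 29.97 clip slowed in a 23.98 timeline
print(conform_speed_pct(24, 60))  # 40.0  -> 59.94 clip
print(conform_speed_pct(24, 25))  # 96.0  -> 25fps clip
print(conform_speed_pct(30, 24))  # 125.0 -> 23.98 clip sped up in a 29.97 timeline
```

Anything other than these exact percentages means frames will be duplicated, blended, or skipped somewhere in playback.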

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

©2019 Oliver Peters

Glass – Editing an Unconventional Trilogy

Writer/director M. Night Shyamalan has become synonymous with films about the supernatural that end with a twist. He first gained broad attention with The Sixth Sense and in the two decades since, has written, produced, and directed a range of large and small films. In recent years, he has taken a more independent route to filmmaking, working with lower budgets and keeping close control of production and post.

His latest endeavor, Glass, also becomes the third film in what is now an unconventional trilogy, starting first with Unbreakable, released 19 years ago. 2017’s Split was the second in this series. Glass combines the three principal characters from the previous two films – David Dunn/The Overseer (Bruce Willis), Elijah Price/Mr. Glass (Samuel L. Jackson), and Kevin Wendell Crumb (James McAvoy), who has 23 distinct personalities.

Shyamalan likes to stay close to his northeastern home base for production and post, which has afforded an interesting opportunity to young talent. One of those is Luke Ciarrocchi, who edited the final two installments of the trilogy, Split and Glass. This is only his third film in the editor’s chair. 2015’s The Visit was his first. Working with Shyamalan has provided him with a unique opportunity, but also a master class in filmmaking. I recently spoke with Luke Ciarrocchi about his experience editing Glass.

_________________________________________________

[OP] You’ve had the enviable opportunity to start your editing career at a pretty high level. Please tell me a bit about the road to this point.

[LC] I live in a suburb of Philadelphia and studied film at Temple University. My first job after college was as a production assistant to the editing team on The Happening with editor Conrad Buff (The Huntsman: Winter’s War, Rise of the Planet of the Apes, The Last Airbender) and his first assistant Carole Kenneally. When the production ended, I got a job cutting local market commercials. It wasn’t glamorous stuff, but it is where I got my first experience working on Avid [Media Composer] and really started to develop my technical knowledge. I was doing that for about seven months when The Last Airbender came to town.

I was hired as an apprentice editor by the same editing crew that I had worked with on The Happening. It was on that film that I started to get onto Night’s radar. I was probably the first Philly local to break into his editing team. There’s a very solid and talented group of local production crew in Philly, but I think I was the first local to join the Editors Guild and work in post on one of his films. Before that, all of the editing crew would come from LA or New York. So that was a big ‘foot in the door’ moment, getting that opportunity from Conrad and Carole.  I learned a lot on Airbender. It was a big studio visual effects film, so it was a great experience to see that up close – just a really exciting time for me.

During development of After Earth, even before preproduction began, Night asked me to build a type of pre-vis animatic from the storyboards for all the action sequences. I would take these drawings into After Effects and cut them up into moveable pieces, animate them, then cut them together into a scene in Avid. I was putting in music and sound effects, subtitles for the dialogue, and really taking them to a pretty serious and informative level. I remember animating the pupils on one of the drawings at one point to convey fear (laughs). We did this for a few months. I would do a cut, Night would give me notes, maybe the storyboard artist would create a new shot, and I would do a recut. That was my first back-and-forth creative experience with him.

Once the film began to shoot, I joined the editing team as an assistant editor. At the end of post – during crunch time – I got the opportunity to jump in and cut some actual scenes with Night. It was surreal. I remember sitting in the editing room auditioning cuts for him and him giving notes and all the while I’m just repeating in my head, ‘Don’t mess this up, don’t mess this up.’ I feel like we had a very natural rapport though, despite the obvious nervousness that would come from a situation like that. We really worked well together from the start. We both had a strong desire to dig deep and really analyze things, to not leave anything on the table. But at the same time we also had the ability to laugh at things and break the seriousness when we needed to. We have a similar sense of humor that to this day I think helps us navigate the more stressful days in the editing room. Personality plays a big role in the editing room. Maybe more so than experience. I may owe my career to my immature sense of humor. I’m not sure.

After that, I assisted on some other films passing through Philly and just kept myself busy. Then I got a call from Night’s assistant to come by to talk about his next film, The Visit. I got there and he handed me a script and told me he wanted me to be the sole editor on it. Looking back it seems crazy, because he was self-financing the film. He had a lot on the line and he could have gotten any editor, but he saw something. So that was the first of the three films I would cut for him. The odds have to be one-in-a-million for that to pan out the way that it did in the suburbs of Philly. Right place, right time, right people. It’s a lot of luck, but when you find yourself in that situation, you just have to keep telling yourself, ‘Don’t mess this up.’

[OP] These three films, including Glass, are being considered a trilogy, even though they span about two decades. How do they tie together, not just in story, but also style?

[LC] I think it’s fair to call Glass the final installment of a trilogy – but definitely an untraditional one. First Unbreakable, then 19 years later Split, and now Glass. They’re all in the same universe and hopefully it feels like a satisfying philosophical arc through the three. The tone of the films is ingrained in the scripts and footage. Glass is sort of a mash-up of what Unbreakable was and what Split was. Unbreakable was a drama that then revealed itself as a comic book origin story. Split was more of a thriller – even horror at times – that then revealed itself as part of this Unbreakable comic book universe. Glass is definitely a hybrid of tone and genre representing the first two films. 

[OP] Did you do research into Unbreakable to study its style?

[LC] I didn’t have to, because Unbreakable has been one of my favorite films since I was 18. It’s just a beautiful film. I loved that in the end it wasn’t just about David Dunn accepting who he was, but also Elijah finding his place in the world only by committing these terrible crimes to discover his opposite. He had to become a villain to find the hero. It’s such a cool idea and for me, very rewatchable. The end never gets old to me. So I knew that film very, very well. 

[OP] Please walk me through your schedule for post-production.

[LC] We started shooting in October of 2017 and shot for about two months. I was doing my assembly during that time and the first week of December. Then Night joined me and we started the director’s cut. The way that Night has set up these last three films is with a very light post crew. It’s just my first assistant, Kathryn Cates, and me set up at Night’s offices here in the suburbs of Philadelphia with two Avids. We had a schedule that we were aiming for, but the release date was over a year out, so there was wiggle room if it was needed.

Night’s doing this in a very unconventional way. He’s self-financing, so we didn’t need to go into a phase of a studio cut. After his director’s cut, we would go into a screening phase – first just for close crew, then more of a friends-and-family situation. Eventually we get to a general audience screening. We’re working and addressing notes from these screenings, and there isn’t an unbearable amount of pressure to lock it up before we’re happy. 

[OP] I understand that your first cut was about 3 1/2 hours long. It must take a lot of trimming and tweaking to get down to the release length of 129 minutes. What sort of things did you do to cut down the running time from that initial cut?

[LC] One of our obstacles throughout post was that initial length. You’re trying to get to the length that the film wants to be without gutting it in the process. You don’t want to overcut as much as you don’t want to undercut. We had a similar situation on Split, which was a long assembly as well. The good news is that there’s a lot of great stuff to work with and choose from.

We approach it very delicately. After each screening we trimmed a little and carefully pulled things out, so each screening was incrementally shorter, but never dramatically so. Sometimes you will learn from a screening that you pulled the wrong thing out and it needed to go back in. Ultimately no major storyline was cut out of Glass. It was really just finding where we are saying the same thing twice, but differently – diagnosing which one of those versions is the more impactful one – then cutting the others. And so, we just go like that. Pass after pass. Reel by reel.

An interesting thing I’ve found is that when you are repeating things, you will often feel that the second time is the offensive moment of that information and the one to remove, because you’ve heard it once before. But the truth is that the first telling of that information is more often what you want to get rid of. By taking away the first one, you are saving something for later. Once you remove something earlier, it becomes an elevated scene, because you aren’t giving away so much up front.

[OP] What is your approach to getting started when you are first confronted with the production footage? What is your editing workflow like?

[LC] I’m pretty much paper-based. I have all of the script supervisor’s notes. Night is very vocal on set about what he likes and doesn’t like, and Charlie Rowe, our script supervisor, is very good at catching those thoughts. On top of that, Night still does dailies each day – either at lunch or the end of the day. As a crew, we get together wherever we are and screen all of the previous day’s footage, including B-roll. I will sit next to Night with a sheet that has all of the takes and set-ups with descriptions and I’ll take notes both on Night’s reactions, as well as my own feelings towards the footage. 

With that information, I’ll start an assembly to construct the scene in a very rough fashion without getting caught up in the small details of every edit. It starts to bring the shape of the scene out for me. I can see where the peaks and valleys are. Once I have a clearer picture of the scene and its intention, I’ll go back through my detailed notes – there’s a great look for this, there’s a great reading for that – and I find where those can fit in and whether they serve the edit. You might have a great reaction to something, but the scene might not want that to be on-camera. So first I find the bones of the scene and then I dress it up. 

Night gets a lot of range from the actors from the first take to the last take. It is sometimes so vast that if you built a film out of only the last takes, it would be a dramatically different movie than if you only used take one. With each take he just pushes the performances further. So he provides you with a lot of control over how animated the scene is going to be. In Glass, Elijah is an eccentric driven by a strong ideology, so in the first take you get the subdued, calculated villain version of him, but by the last take it’s the carnival barker version. The madman.

[OP] Do you get a sense when screening the dailies of which way Night wants to go with a scene?

[LC] Yes, he’ll definitely indicate a leaning and we can boil it down to a couple of selects. I’ll initially cut a scene with the takes that spoke to him the most during the dailies and never cut anything out ahead of time. He’ll see the first cuts as they were scripted, storyboarded, and shot. I’ll also experiment with a different take or approach if it seems valid and have that in my back pocket. He’s pretty quick to acknowledge that he might have liked a raw take on set and in dailies, but it doesn’t work as well when cut together into a scene. So then we’ll address that. 

[OP] As an Avid editor, have you used Media Composer’s script integration features, like ScriptSync?

[LC] I just had my first experience with it on a Netflix show. I came on later in their post, so the show had already been set up for ScriptSync. It was very cool and helpful to be able to jump in and quickly compare the different takes for the reading of a line. It’s a great ‘late in the game’ tool. Maybe you have a great take, but just one word is bobbled and you’d like to find a replacement for just that word. Or the emotion of a key word isn’t exactly what you want. It could be a time-saver for a lot of that kind of polishing work.

[OP] What takeaways can you share from your experiences working with M. Night Shyamalan?

[LC] Night works in the room with you everyday. He doesn’t just check in once a week or something like that. It’s really nice to have that other person there. I feel like oftentimes the best stuff comes from discussing it and talking it through. He loves to deconstruct things and figure out the ‘why’. Why does this work and this doesn’t? I enjoy that as well. After three films of doing that, you learn a lot. You’re not aware of it, but you’re building a toolkit. These tools and choices start to become second nature.

On the Netflix show that I just did, there were times where I didn’t have anyone else in the room for long stretches and I started to hear those things that have become inherent in my process more clearly. I started to take notice of what had become my second nature – what the last decade had produced. Editing is something you just have to do to learn. You can’t just read about it or study a great film. You have to do it, do it again, and struggle with it. You need to mess it up to get it right.

________________________________________________

This interview is going online after Glass has scored its third consecutive weekend in the number one box office slot. Split was also number one for three weeks in a row. That’s a pretty impressive feat and fitting for the final installment of a trilogy.

Be sure to also check out Steve Hullfish’s AOTC interview with Luke Ciarrocchi here.

©2019 Oliver Peters

Editing with the 2018 Mac mini

It’s hard to pigeonhole the new Mac mini into any specific market, since the size and modular design fit the needs of many different users. Data centers, servers, and Compressor encoding clusters come to mind, but it’s also ideal for many location productions, such as DIT work, stage lighting and sound control. If you are replacing an aging computer, already own the other peripherals, and prefer the macOS ecosystem, then the Mac mini may be enticing.

The 2018 Mac mini features a familiar form factor that’s been revamped with a new thermal architecture, bigger fans, and redesigned power supply. It features eighth-generation Intel Core quad-core and six-core processor options, RAM that tops out at 64GB, and flash storage (SSD) up to 2TB. Connectivity includes four Thunderbolt 3 / USB-C ports (two internal buses), HDMI 2.0, two standard USB 3.1 ports, Bluetooth, wi-fi, a headphone jack, and an ethernet port. The latter can be bumped up to 10GigE in build-to-order machines. RAM is technically upgradeable, but Apple recommends Apple-certified service centers and not user replacement. Apple loaned me a six-core 3.2 GHz i7 model with 32GB of RAM and a 1TB SSD. Mac minis start at $799, but this configuration would cost you $2,499.

Getting started

Many have asked online, “Why is the only GPU choice an Intel UHD Graphics 630?” We are now in the era of external GPU devices and Apple has clearly designed the mini with that in mind. There are many applications where a powerful GPU simply isn’t necessary, such as standard desktop computing, like surfing the web, home accounting, and writing. But also, most pro audio, most graphics and photography, and creative editing that isn’t effects-intensive will work just fine with this Mac. If you need or want more GPU horsepower, then add an eGPU to the mix. (An upcoming review will assess the performance of the Mac mini together with a Blackmagic eGPU Pro.)

When you first unbox the Mac you will need to figure out how to connect an external display. A Thunderbolt 3 display, like the LG UltraFine 5K on Apple’s website, or a low-end display that uses HDMI are both clear options. However, if you already own a monitor that connects via Mini DisplayPort, DisplayPort, VGA, or DVI, then you’ll need to purchase a Thunderbolt 3 adapter specific to that connection standard. Other possibilities include connecting your monitor through an eGPU or a Thunderbolt dock that has the correct ports. I tested both CalDigit and OWC docks with 27″ Apple Retina and Dell displays and everything worked fine. A minor issue, but something to consider before you can even start using your Mac mini.

I put the Mac mini through its paces with Premiere Pro, Final Cut Pro X, DaVinci Resolve, and Pixelmator Pro to cover editing, color correction, and photo manipulation. Although I didn’t test the Mac mini extensively with Logic Pro X, this computer would also be a good choice for sound design, mixing, and music creation. My initial impressions are that this is a very capable computer for creative pros and that the Intel GPU is more than adequate for most tasks.

Real-world testing

I’ve been testing the Mac mini with an episode from a real production that I work on, which is a nine-minute-long travel segment edited in Premiere Pro and graded in Resolve. I also brought the Premiere sequence into FCPX for comparison testing. To me that’s more telling than any artificial benchmark score. The native media sources are 4K in a 1080p/23.98 timeline. Footage covers a mix of cameras and codecs, including ProResHQ, XAVC, H.264, and H.265. Sequence clip effects include resizing, speed changes, Lumetri color correction (or FCPX’s color tools), plus an audio mix. In short, everything that the offline/creative editor used. The Resolve grade consists of 145 clips averaging three to five nodes on every clip. To keep my render tests consistent across several machines, all media and project files were loaded to an external LaCie Rugged portable drive connected over USB-3.

ProRes and H.264 exports from each application were used to compare the Mac mini against two other Macs – my mid-2014 Retina MacBook Pro (the last series using Nvidia GPU cards) and a current 10-core iMac Pro. Premiere Pro and Resolve rendering was set to OpenCL, an open GPU standard, which still seems to yield the fastest results for these apps. Final Cut Pro X uses Metal, Apple’s method to leverage the combined power of the GPU and CPU.

Naturally the iMac Pro bested all of the times by half or more. The mini’s times – using only the Intel GPU – were actually similar to the older MacBook Pro, though noticeably faster with Resolve. The general editing experience was good, but video was a bit “sticky” when scrubbing/skimming through 4K media – thanks to the slow external drive. Once I moved the media onto the Mac mini’s blazingly fast SSD (around 2800 MB/s read-write speeds), the result was a super-responsive editing experience. I don’t recommend working with your raw camera footage on the internal drive, so if you edit large projects with a lot of media, then adding a fast, external Thunderbolt 3 drive or RAID array is the way to go. The 1TB size of the internal flash drive is the sweet spot for most editors. Companies with ethernet-based NAS shared storage systems will want to get the 10GigE upgrade when purchasing a Mac mini if they intend to edit with it.

That’s not to say the Mac mini can do everything without the extra GPU power. There are some GPU-accelerated effects that will definitely cause stuttering playback and dropped frames. Blurs are an obvious example. When I tested some blurs, playback generally held up until I added a mask to the effect in Premiere. But remember, I’m working with 4K media in native codecs. As a rule, Premiere Pro simply doesn’t handle this type of content as fluidly as Final Cut Pro X. I was able to push FCPX a bit further than Premiere before running into issues. And, of course, if you want to use it, FCPX can aid the situation with background rendering.

Speaking as an editor and colorist, I’ve been happy with how the Mac mini performs. While not the most powerful Mac made, the mini is still a robust creative tool. Do you edit commercials, corporate video, or entertainment programming? If so, then there’s very little you’ll find issue with in daily operation. The mini presents a good price/performance bargain for editors, musicians, sound designers, graphic artists, photographers, and others. That’s even more the case if you already own the rest of the package.

I think it’s worth making a cost comparison before I close. You can certainly beef up the Mac mini quite a bit; however, in doing so, you should compare the other Mac options before buying. For example, let’s say you completely option out the mini and then add all the Apple store peripherals, including Apple keyboard/mouse, the LG 5K display, and a BMD eGPU Pro. That total would run $6,945. Naturally those items from Apple are going to cost a bit more than third-party options. But to compare, the equivalent package in an eight-core iMac Pro with the base GPU, 64GB RAM, and a 2TB SSD would run $6,599. That’s the same Vega 56 GPU as in the eGPU Pro, plus you have an eight-core Xeon instead of a Core i7 CPU. Clearly the iMac Pro would be the better choice, because you aren’t buying three enclosures, cooling systems, and power supplies. But if you don’t need that horsepower, already own some of the peripherals, or are better served by the modular design of the Mac mini, then the calculation shifts.

When I work on my own, it’s either with the MacBook Pro or an aging Mac Pro tower. My home editing demands are not as taxing as when I work freelance at other shops. I certainly would have no qualms about shifting projects like those to a Mac mini as a replacement computer, because it can deliver a reliable level of performance without breaking the bank.

Originally written for RedShark News.

For more on the Mac mini and editing, check out this coverage at FCP.co.

©2019 Oliver Peters

The State of the NLE 2019

It’s a new year, but that doesn’t mean that the editing software landscape will change drastically in the coming months. For all intents and purposes, professional editing options boil down to four choices: Avid Media Composer, Adobe Premiere Pro, Apple Final Cut Pro X, and Blackmagic Design DaVinci Resolve. Yes, I know Vegas, Lightworks, Edius, and others are still out there, but those are far off on the radar by comparison (no offense meant to any happy practitioners of these tools). Naturally, since blogs are mainly about opinions, everything I say from here on is purely conjecture, although it’s informed by my own experiences with these tools and by knowing many of the players involved on the respective product design and management teams – past and present.

Avid continues to be the go-to NLE in the feature film and episodic television world. That’s certainly a niche, but it’s a niche that determines the tools developed by designers for the broader scope of video editing. Apple officially noted two million users for Final Cut Pro X last year, and it’s likely at least 2.5 million by now. Adobe claims Premiere Pro to be the most widely used NLE by a large margin. I have no reason to doubt that statement, but I have also never seen any actual stats. I’m sure that through the Creative Cloud subscription mechanism Adobe not only knows how many Premiere Pro installations have been downloaded, but probably has a good idea as to actual usage (as opposed to simply downloading the software). Bringing up the rear in this quartet is Resolve. While certainly a dominant color correction application, I don’t yet see it as a key player in the creative editing (as opposed to finishing) space. With the stage set, let’s take a closer look.

Avid Media Composer

Editors who have moved away from Media Composer, or who have never used it, like to throw shade on Avid and its marquee product. But loyal users – who include some of the biggest names in film editing – stick by it due in part to familiarity, but also to its collaborative features and overall stability. As a result, the development pace and rate of change are somewhat slow compared with the other three. In spite of that, Avid is currently on a schedule of solid, incremental updates nearly every month – each of which chips away at a long feature request list. The most recent one dropped on December 31st. Making significant changes without destroying the things that people love is a difficult task. Development pace is also hindered by the fact that each of these developers is chasing changes in the operating systems, particularly Apple’s macOS. Sometimes you get the feeling that it’s two steps forward, one step back.

As editors, we focus on Media Composer, but Avid is a much bigger company than just that, with its fingers in sound, broadcast, storage, cloud, and media management. If you are a Pro Tools user, you are just as concerned about Avid’s commitment to you as editors are about theirs. Like any large company, Avid must advance not just a single core product, but its entire ecosystem of products. Yet it must still keep improving the features in those products, because that’s what gets users’ attention. In an effort to improve its attraction to new users, Avid has introduced subscription plans and free versions to make it easier to get started. They now cover editing and sound needs with a lower cost-of-entry than ever before.

I started nonlinear editing with Avid and it will always hold a spot in my heart. Truth be told, I use it much less these days. However, I still maintain current versions for the occasional project need plus compatibility with incoming projects. I often find that Media Composer is the single best NLE for certain tasks, mainly because of Avid’s legacy with broadcast. This includes issues like proper treatment of interlaced media and closed captioning. So for many reasons, I don’t see Avid going away any time soon, but whether or not they can grow their base remains an unknown. Fortunately many film and media schools emphasize Avid when they teach editing. If you know Media Composer, it’s an easy jump to any other editing tool.

Adobe Premiere Pro CC

The most widely used NLE? At least from what I can see around me, it’s the most used NLE in my market, including individual editors, corporate media departments, and broadcasters. Its attraction comes from a) the versatility in editing with a wide range of native media formats, and b) the similarity to – and viable replacement for – Final Cut Pro “legacy”. It picked up steam partly as a reaction to the Final Cut Pro X roll-out and users have generally been happy with that choice. While the shift by Adobe to a pure subscription model has been a roadblock for some (who stopped at CS6), it’s also been an advantage for others. I handle the software updates at a production company with nine edit systems and between the Adobe Creative Cloud and Apple Mac App Store applications, upgrades have never been easier.

A big criticism of Adobe has been Premiere’s stability. Of course, that’s based on forum reads, where people who have had problems will pipe up. Rarely does anyone ever post how uneventful their experience has been. I personally don’t find Premiere Pro to be any less stable than any other NLE or application. Nonetheless, working with a mix of oddball native media will certainly tax your system. Avid and Apple get around this by pushing optimized and proxy media. As such, editors reap the benefits of stability. And the same is true with Premiere. Working with consistent, optimized media formats (transcoded in advance) – or working with Adobe’s own proxies – results in a more stable project and a better editing experience.

Avid Media Composer is the dominant editing tool in major markets, but mainly in the long-form entertainment media space. Many of the top trailer and commercial edit shops in those same markets use Premiere Pro. Again, that goes back to the FCP7-to-Premiere Pro shift. Many of these companies had been using the old Final Cut rather than Media Composer. Since some of these top editors also cut features and documentaries, you’ll often see them use Premiere on the features that they cut, too. Once you get below the top tier of studio films and larger broadcast network TV shows, Premiere Pro has a much wider representation. That certainly is good news for Adobe and something for Avid to worry about.

Another criticism is that of Adobe’s development pace. Some users believed that moving to a subscription model would speed the development pace of new versions – independent of annual or semi-annual cycles. Yet cycles still persist – much to the disappointment of those users. This gets down to how software is actually developed, keeping up with OS changes, and to some degree, marketing cycles. For example, if there’s a big Photoshop update, then it’s possible that the marketing “wow” value of a large Premiere Pro update might be overshadowed and needs to wait. Not ideal, but that’s the way it is.

Just because it’s possible, doesn’t mean that users really want to constantly deal with automatic software updates that they have to keep track of. This is especially true with After Effects and Premiere Pro, where old project files often have to be updated once you update the application. And those updates are not backwards compatible. Personally, I’m happy to restrict that need to a couple of times a year.

Users have the fear that a manufacturer is going to end-of-life their favorite application at some point. For video users, this was made all too apparent by Apple and FCPX. Neither Apple nor Adobe has been exempt from killing off products that no longer fit their plans. Markets and user demands shift. Photography is an obvious example here. In recent years, smart phones have become the dominant photographic device, which has enabled cloud-syncing and storage of photos. Adobe and Apple have both shifted the focus for their photo products accordingly. If you follow any of the photo blogs, you’ll know there’s some concern that Adobe Lightroom Classic (the desktop version) will eventually give way completely to Lightroom CC (the cloud version). When a company names something as “classic”, you have to wonder how long it will be supported.

If we apply that logic to Premiere Pro, then the new Adobe Rush comes to mind. Rush is a simpler, nimbler, cross-platform/cross-device NLE targeted at users who produce video starting with their smart phone or tablet. Since there’s also a desktop version, one could certainly surmise that in the future Rush might replace Premiere Pro in the same way that FCPX replaced FCP7. Personally, I don’t think that will happen any time soon. Adobe treats certain software as core products. Photoshop, Illustrator, and After Effects are such products. Premiere Pro may or may not be viewed that way internally, but certainly more so now than ever in the past. Premiere Pro is being positioned as a “hub” application with connections to companion products, like Prelude and Audition. For now, Rush is simply an interesting offshoot to address a burgeoning market. It’s Adobe’s second NLE, not a replacement. But time will tell.

Apple Final Cut Pro X

Apple released Final Cut Pro X in the summer of 2011 – going on eight years now. It’s a versatile, professional tool that has improved greatly since that 2011 launch and gained a large and loyal fan base. Many FCPX users are also Premiere Pro users and the other way around. It can be used to cut nearly any type of project, but the interface design is different from the others, making it an acquired taste. Being a Mac-only product and developed within the same company that makes the hardware and OS, FCPX is optimized to run on Macs more so than any cross-platform product can be. For example, the fluidity of dealing with 4K ProRes media on even older Macs surpasses that of any other NLE.

Prognosticating Apple’s future plans is a fool’s errand. Some guesses have put the estimated lifespan of FCPX at 10 years, based in part on the lifespan of FCP “legacy”. I have no idea whether that’s true or not. Often when I read interviews with key Apple management (as well as off-the-record, casual discussions I’ve had with people I know on the inside), it seems like a company that actually has less of a concrete plan when it comes to “pro” users. Instead, it often appears to approach them with an attitude of “let’s throw something against the wall and see what sticks”. The 2013 Mac Pro is a striking example of this. It was clearly innovative and a stellar exhibit for Apple’s “think different” mantra. Yet it was a product that obviously was not designed by actually speaking with that product’s target user. Apple’s current “shunning” of Nvidia hardware seems like another example.

One has to ask whether a company so dominated by the iPhone is still agile enough to respond to the niche market of professional video editors. While Apple products (hardware and software) still appeal to creatives and video professionals, it seems like the focus with FCPX is towards the much broader sphere of pro video. Not TV shows and feature films (although that’s great when it comes) – or even high-end commercials and trailers – but rather the world of streaming channels, social media influencers, and traditional publishers who have shifted to an online media presence from a print legacy. These segments of the market have a broad range of needs. After all, so-called “YouTube stars” shoot with everything from low-end cameras and smart phones all the way up to Alexas and REDs. Such users are equally professional in their need to deliver a quality product on a timetable, and I believe that’s a part of the market that Apple seeks to address with FCPX.

If you are in the world of the more traditional post facility or production company, then those users listed above may be market segments that you don’t see or possibly even look down upon. I would theorize that among the more traditional sectors, FCPX may have largely made the inroads that it’s going to. Its use in films and TV shows (with the exception of certain high-profile, international examples) doesn’t seem to be growing, but I could be wrong. Maybe the marketing is just behind or it no longer has PR value. Regardless, I do see FCPX as continuing strong as a product. Even if it’s not your primary tool, it should be something in your toolkit. Apple’s moves to open up ProRes encoding and offering LumaForge and Blackmagic eGPU products in their online store are further examples that the pro customer (in whatever way you define “pro”) continues to have value to them. That’s a good thing for our industry.

Blackmagic Design DaVinci Resolve

No one seems to match the development pace of Blackmagic Design. DaVinci Resolve underwent a wholesale transformation from a tool that was mainly a high-end color corrector into an all-purpose editing application. Add to this the fact that Blackmagic has acquired a number of companies whose tools have been modernized and folded into Resolve. Blackmagic now offers a post-production solution with some similarities to FCPX while retaining a traditional, track-based interface. It includes modes for advanced audio post (Fairlight) and visual effects (Fusion) that have been adapted from those acquisitions. Unlike past all-in-one applications, Resolve’s modal pages retain the design and workflow specific to the task at hand, rather than making them fit into the editing application’s interface design. All of this in very short order and across three operating systems, thus making their pace the envy of the industry.

But a fast development pace doesn’t always translate into a winning product. In my experience each version update has been relatively solid. There are four ways to get Resolve (free and paid, Mac App Store and reseller). That makes it a no-brainer for anyone starting out in video editing, but who doesn’t have the specific requirement for one application over another. I have to wonder though, how many new users go deep into the product. If you only edit, there’s no real need to tap into the Fusion, Fairlight, or color correction pages. Do Resolve editors want to finish audio in Fairlight or would they rather hand off the audio post and mix to a specialist who will probably be using Pro Tools? The nice thing about Resolve is that you can go as deep as you like – or not – depending on your mindset, capabilities, and needs.

On the other hand, is the all-in-one approach better than the alternatives: Media Composer/Pro Tools, Premiere Pro/After Effects/Audition, or Final Cut Pro X/Motion/Logic Pro X? I don’t mean for the user, but rather the developer. Does the all-in-one solution give you the best product? The standalone version of Fusion is more full-featured than the Fusion page in Resolve. Fusion users are rightly concerned that the standalone will go away, leaving them with a smaller subset of those tools. I would argue that there are already unnecessary overlaps in effects and features between the pages. So are you really getting the best editor or is it being compromised by the all-in-one approach? I don’t know the answer to these questions. Resolve for me is a good color correction/grading application that can also work for my finishing needs (although I still prefer to edit in something else and roundtrip to/from Resolve). It’s also a great option for the casual editor who wants a free tool. Yet in spite of all its benefits, I believe Resolve will still be a distant fourth in the NLE world, at least for the next year.

The good news is that there are four great editing options in the lead and even more coming from behind. There are no bad choices and with a lower cost than ever, there’s no reason to limit your knowledge to only one. After all, the products that are on top now may be gone in a decade. So broaden your knowledge and define your skills by your craft – not your tools!

©2019 Oliver Peters