Apple Pivots – WWDC 2020

Monday saw Apple’s first virtual WWDC keynote presentation. This was a concise, detail-packed 108-minute webcast covering the range of operating system changes affecting all of Apple’s product lines. I shared my initial thoughts and reactions here and here. If you want some in-depth info, then John Gruber’s follow-up interview with Craig Federighi and Greg Joswiak of Apple is a good place to start. With the dust settling, I see three key takeaways from WWDC for Mac users.

Apple Silicon

Apple Silicon marks the third processor transition for the Mac. Apple has been using Intel CPUs for 15 years. This shift moves Mac computers to the same CPU family as Apple’s mobile platforms. The new CPUs are based on Arm SoC (system on chip) technology. Arm originally stood for Acorn RISC Machine; the technology is developed by Arm Holdings PLC, which licenses it to other companies that are then free to develop their own chip designs. As far as we know, the Apple Arm chips will be manufactured in foundries owned by TSMC in Taiwan. While any hardware shift can be disconcerting, the good news is that Apple already has more than a decade-long track record with these chips, thanks to the iPhone and iPad. (Click here for more details on RISC versus CISC chip architectures.)

macOS software demos were ostensibly shown on a Mac with 16GB of RAM, running on an A12Z Bionic chip – the same CPU as in the current iPad Pro. That’s an 8-core, 64-bit processor with an integrated GPU. It also includes cores for the Neural Engine, which Apple uses for machine learning functions.

Apple is making available a developer transition kit (DTK) that includes a Mac mini in this configuration. This would imply that was the type of machine used for the keynote demo. It’s hard to know if that was really the case. Nevertheless, performance appeared good – notably with Final Cut Pro X showing three streams of 4K ProRes at full resolution. Not earth-shattering, but decent if that is actually the level of machine that was used. Federighi clarified in the Gruber interview that this DTK is only to get developers comfortable with the new chip, as well as the new OS version. It is not intended to represent a machine that is even close to what will eventually be shipped. Nor will the first shipping Apple Silicon chips in Macs be A12Z CPUs.

The reason to shift to Arm-based CPUs is the promise of better performance with lower power consumption. Lower power also means less heat, which is great for phones and tablets. That’s also great for laptops, but less critical on desktop computers, where the extra thermal headroom also makes overclocking a viable option. According to Apple, the transition to Apple Silicon will take two years. Presumably that means that within two years, all new Macs released going forward will use Arm-based CPUs.

Aside from the CPU itself, what about Thunderbolt and the GPU? The A12Z has an integrated graphics unit, just like the Intel chips. However, for extra graphics horsepower Apple has been using AMD – and in years past, NVIDIA. Will the relationship with AMD continue or will Apple Silicon also cover the extra graphics grunt? Thunderbolt is a technology developed by Intel. Will that continue and will Apple be able to integrate Thunderbolt into its new Macs? The developer Mac mini does not include Thunderbolt 3 ports. Can we expect some type of new I/O format in these upcoming Macs – USB 4, for example? (Click here for another concise hardware overview.)

macOS Big Sur – Apple dials it to 11

The second major reveal (for Mac users) was the next macOS after Catalina – Big Sur. This is the first OS designed natively for Arm-based Macs, but it is coded to be compatible with Intel, too. It will also be out by the end of the year. Big Sur is listed as OS version 11.0, thus dropping the OS X (ten) product branding. A list of compatible Macs has been posted by Apple and quite a few users are already running a beta version of Big Sur on their current Intel Macs. Anecdotally, most reports indicate that it’s stable and that the majority of applications (though not all) work just fine.

Apple has navigated such transitions before – particularly from PowerPC to Intel. (The PowerPC used a RISC architecture.) Apple is making developer tools available (Universal 2) that allow software companies to compile apps that run natively on both Intel and Arm platforms. In addition, Big Sur will include Rosetta 2, a translation technology designed to run existing Intel apps on Apple Silicon. In some cases, Apple says, Rosetta 2 will actually translate an app into native code at installation rather than on the fly. Boot Camp appears to be gone, but virtualization of Linux was mentioned. This makes sense because some Linux distributions already run on Arm processors. We’ll have to wait and see whether virtualization will also apply to Windows. Federighi did appear to say that directly booting into a different OS would not be possible.
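
For developers, the practical upshot is that one codebase compiles into a single Universal 2 binary containing both an Intel slice and an Arm slice. As a minimal illustrative sketch (this is just standard Swift conditional compilation, not Apple’s build tooling itself), architecture-specific code can be isolated like this, with each slice keeping only its own branch:

    import Foundation

    // Minimal sketch: the same source builds into both slices of a universal binary.
    // Each architecture slice compiles only its own branch of the conditional.
    func nativeArchitecture() -> String {
        #if arch(arm64)
        return "arm64 (Apple Silicon)"
        #elseif arch(x86_64)
        return "x86_64 (Intel)"
        #else
        return "unknown architecture"
        #endif
    }

    print("Running natively on \(nativeArchitecture())")

Once an app ships, running lipo -info against its executable reports which slices it actually contains – a quick way to confirm whether you’re getting a native Arm build or relying on Rosetta 2.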

Naturally, users want to know if their favorite application is going to be ready on day one. Apple has been working closely with Adobe and Microsoft to ensure that Creative Cloud and Office applications will run natively. Those two are biggies. If you can’t run Photoshop or Illustrator correctly, then you’ve lost the entire design community – photographers, designers, web developers, ad agencies, etc. Likewise, if Word, Excel, or PowerPoint don’t work, you’ve lost every business enterprise. Apologies to video editors, but those two segments are more important to Mac sales than all the video apps combined.

Will the operating systems merge?

Apple has said loudly and clearly that they don’t intend to merge macOS and iOS/iPadOS. Both the Mac and mobile businesses are very profitable and those products fulfill vastly different needs. It doesn’t make any business sense to mash them together in some way. You can’t run Big Sur on an iPad Pro nor can you run iPadOS 14 on a Mac. Yet, they are more than cousins under the hood. You will be able to run iOS/iPadOS applications, such as games, on an Arm-based Mac with Big Sur. This is up to the developer, who retains the option to mark an app as iOS-only when it is submitted to the App Store.

Technology aside, the look, feel, style, and design language are now very close between these two operating system branches. Apple has brought back widgets to the Mac. You now have notification and control centers on all platforms that function in the same manner. The macOS Finder and iPad Files windows look similar to each other. Overall, the design language has been unified, typified by a return to more color and a common chiclet design for the icons. After all, nearly every Apple product has had a similar style with rounded corners, going back to the original 1984 Mac.

Are we close to actually having true macOS on an iPad Pro? Or a Mac with touch or Apple Pencil support? Probably not. I know a lot of Final Cut Pro X users have speculated about seeing “FCPX Lite” on the iPad, especially since we have Adobe’s Rush and LumaTouch’s LumaFusion. The speculation is that some stripped-down version of FCPX on the iPad should be a no-brainer. Unfortunately, iPads still lack a proper computer-style file system, standard I/O, and robust support for external storage. So, there are still reasons to optimize operating systems and applications differently for each hardware form factor. But a few years on – who knows?

Should you buy a Mac now?

If you need a Mac to get your work done and can’t wait, then by all means do so. Especially if you are a “pro” user requiring beefier hardware and performance. On the other hand, if you can hold off, you should probably wait until the first new machines are out in the wild. Then we’ll all have a better idea of price and performance. Apple still plans to release more Intel Macs within the next two years. The prevailing wisdom is that the current state of Apple’s A-series chips will dictate lower-end consumer Macs first. Obviously a new 12″ MacBook and the MacBook Air would be prime candidates. After that, possibly a Mac mini or even a smaller iMac.

If that in fact is what will happen, then high-end MacBook Pros, iMacs, iMac Pros, and the new Mac Pro will all continue running Intel processors until comparable (or better) Apple Silicon chips become available. Apple is more willing to shake things up than any other tech company. In spite of that, they have a pretty good track record of supporting existing products for quite a few years. For example, Big Sur is supposed to be compatible with MacBook Pros and Mac Pros going back to 2013. Even though the FCPX product launch was rough, FCP7 continued to work for many years after that through several OS updates.

I just don’t agree with people who grouse that Apple abandons its existing customers whenever one of these changes happens. History simply does not bear that out. Change is always a challenge – more for some types of users than others. But we’ve been here before. This change shouldn’t cause too much hand-wringing, especially if Apple Silicon delivers on its promise.

Click here for a nice recap of some of the other announcements from WWDC.

©2020 Oliver Peters

The Missing Mac

High-end Mac users waited six years for Apple to release its successor to the cylindrical 2013 Mac Pro. That’s a unit that was derided by some and was ideal for others. Its unique shape earned it the nickname of the “trash can.” Love it or hate it, that Mac Pro lacked the ability to expand and grow with the times. Nevertheless, many are still in daily service and being used to crank out great work.

The 2019 Mac Pro revitalized Apple’s tower configuration – dubbed the “cheese grater” design. If you want expandability, this one has it in spades. But at a premium price that puts it way above the cost of a decked out 2013 Mac Pro. Unfortunately for many users, this leaves a gap in the product line – both in features and in price range.

If you want a powerful Mac in the $3,000 – $5,000 range without a built-in display, then there is none. I really like the top-spec versions of the iMac and iMac Pro, but if I already own a display, would like to use only an Apple XDR, or want an LG, Dell, Asus, etc., then I’m stuck. Naturally one approach would be to buy a 16″ MacBook Pro and dock it to an external display, using the MacBook Pro as a second display or in the clamshell configuration. I’ve discussed that in various posts and it’s one way nimble editing shops like Trim in London tend to work.

Another option would be the Mac Mini, which is the closest thing Apple sells to a unit that fits this void. It recently got a slight bump up in specs, but it’s missing 8-core CPU options and an advanced graphics card. The best 6-core configuration might actually be a serviceable computer, but I would imagine effects requiring GPU acceleration will be hampered by the integrated Intel UHD 630 graphics. The Mini does tick a lot of the boxes, including Wi-Fi, Bluetooth, four Thunderbolt 3/USB-C ports, HDMI 2.0, two USB 3.0 ports, plus Ethernet and headphone jacks.

I’ve tested both the previous Mac Mini iteration (with and without an eGPU) and the latest 16″ MacBook Pro. Both were capable Macs, but the 16″ truly shines. I find it hard to believe that Apple couldn’t have created a Mac Mini with the same electronics as the loaded 16″ MacBook Pro. After all, once you remove the better speaker system, keyboard, and battery from the lower case of the laptop, you have about the same amount of “guts” as in the Mac Mini. I think you could make the same calculation with the iMac electronics. Even if the Mini case needed to be a bit taller, I don’t see why this wouldn’t be technically possible.

Here’s a hypothetical Mac Mini spec (similar to the MacBook Pro) that could be a true sweet spot:

  • 2.4GHz 8-core, 9th-generation Intel Core i9 processor (or faster)
  • 64GB 2666MHz DDR4 memory
  • AMD Radeon Pro 5600M GPU with 8GB HBM2 memory
  • 1TB SSD storage (or higher – up to 8TB)
  • 10 Gigabit Ethernet

Such a configuration would likely be in the range of $3,000 – $5,000 based on the BTO options of the current Mini, iMac, and MacBook Pro. Of course, if you bump the internal SSD from 1TB to the higher capacities, the total price will go up. In my opinion, it should be easy for Apple to supply such a version without significant re-engineering. I recognize that if you went with a Xeon-based configuration, like the iMac Pros, then the task would be a bit more challenging, in part due to power demands and airflow. Naturally, an even higher-spec’ed Mac like this in the $5,000 – $10,000 range would also be appealing, but that would likely be a bridge too far for Apple.

What I ultimately want is a reasonably powerful Mac that doesn’t force me to also purchase an Apple display, since that isn’t always the best option – and I want it without spending as much as I would on a car. I understand that such a unit wouldn’t have the ability to add more cards, but then neither do the iMacs and MacBook Pros. So I really don’t see this as a huge issue. I feel that this configuration would be an instant success with many editors. Plug in the display and storage of your choice and Bob’s your uncle.

I’m not optimistic. Maybe Apple has run the calculation that such a version would rob sales from the 2019 Mac Pro or iMac Pros. Or maybe they simply view the Mini as fitting into a narrow range of server and home computing use cases. Whatever the reason, it seems clear to me that there is a huge gap in the product line that could be served by a Mac Mini with specs such as these.

On the other hand, Apple’s virtual WWDC is just around the corner, so we can always hope!

©2020 Oliver Peters

The Banker

Apple has launched its new TV+ service and this provides another opportunity for filmmakers to bring untold stories to the world. That’s the case for The Banker, an independent film picked up by Apple. It tells the story of two African American entrepreneurs attempting to earn their piece of the American dream during the repressive 1960s through real estate and banking. It stars Samuel L. Jackson, Anthony Mackie, Nia Long, and Nicholas Hoult.

The film was directed by George Nolfi (The Adjustment Bureau) and produced by Joel Viertel, who also signed on to edit the film. Viertel’s background hasn’t followed the usual path for a feature film editor. Interested in editing since high school, he moved to LA after college and landed a job at Paramount, where he eventually became a creative executive. During that time he kept up his editing chops and eventually left Paramount to pursue independent filmmaking as a writer, producer, and editor. His editing experience included Apple Final Cut Pro 1.0 through 7.0 and Avid Media Composer, but cutting The Banker was his first time using Apple’s Final Cut Pro X.

I recently chatted with Joel Viertel about the experience of making this film and working with Apple’s innovative editing application.

____________________________________________

[OP] How did you get involved with co-producing and cutting The Banker?

[JV] This film originally started while I was at Paramount. Through a connection from a friend, I met with David Smith and he pitched me the film. I fell in love with it right away, but as is the case with these films, it took a long while to put all the pieces together. While I was doing The Adjustment Bureau with George Nolfi and Anthony Mackie, I pitched it to them, and they agreed it would be a great project for us all to collaborate on. From there it took a few years to get to a script we were all happy with, cast the roles, get the movie financed, and off the ground.

[OP] I imagine that it’s exciting to be one of the first films picked up by Apple for their TV+ service. Was that deal arranged before you started filming or after everything was in the can, so to speak?

[JV] Apple partnered with us after it was finished. It was made and financed completely independently through Romulus Entertainment. While we were in the finishing stages, Endeavor Content repped the film and got us into discussions with Apple. It’s one of their first major theatrical releases and then goes on the platform after that. Apple is a great company and brand, so it’s exciting to get in on the ground floor of what they’re doing.

[OP] When I screened the film, one of the things I enjoyed was the use of montages to quickly cover a series of events. Was that how it was written or were those developed during the edit as a way to cut running time?

[JV] Nope, it was all scripted. Those segments can bedevil a production, because getting all of those little pieces is a lot of effort for very little yield. But it was very important to George and myself and the collaborators on the film to get them. It’s a film about banking and real estate, so you have to figure out how to make that a fun and interesting story. Montages were one way to keep the film propulsive and moving forward – to give it motion and excitement. We just had to get through production finding places to pick off those pieces, because none of those were developed in post.

[OP] What was your overall time frame to shoot and post this film?

[JV] We started in late September 2018 and finished production in early November. It was about 30 days in Atlanta and then a few days of pick-ups in LA. We started post right after Thanksgiving and locked in May, I think. Once Apple got involved, there were a few minor changes. However, Apple’s delivery specs were completely different from our original delivery specs, so we had to circle back on a bunch of our finishing.

[OP] Different in what way?

[JV] We had planned to finish in 2K with a 5.1 mix. Their deliverables are 4K with a Dolby Atmos mix. Because we had shot on 35mm film, we had the capacity, but it meant that we had to rescan and redo the visual effects at 4K. We had to lay the groundwork to do an Atmos mix and Dolby Vision finish for theatrical and home video, which required the 35mm film negative to be rescanned and dust-busted.

Our DP, Charlotte Bruus Christensen, has shot mostly on 35mm – films like A Quiet Place and The Girl on the Train – and those movies are beautiful. And so we wanted to accommodate that, but it presents challenges if you aren’t shooting in LA. Between Kodak in Atlanta and Technicolor in LA we were able to make it work.

Kodak would process the negative and Technicolor made a one-light transfer for 2K dailies. Those were archived and then I edited with ProRes LT copies in HD. Once we were done, Technicolor onlined the movie from their 2K scans. After the change in deliverable specs, Technicolor rescanned the clips used for the online finish at 4K and conformed the cut at 4K.

[OP] I felt that the eclectic score fit this movie well and really places it in time. As an editor, how did you work to build up your temp tracks? Or did you simply leave it up to the composer?

[JV] George and I have worked with our composer, Scott Salinas, for a very long time on a bunch of things. Typically, I give him a script and then he pulls samples that he thinks are in the ballpark. He gave me a grab bag of stuff for The Banker – some of which was score, some of which was jazz. I start laying that against the picture myself as I go and find these little things that feel right and set the tone of the movie. I’m finding my way for the right marriage of music and picture. If it works, it sticks. If it doesn’t, we replace it. Then at the end, he’s got to score over that stuff.

Most of the jazz in The Banker is original, but there are a couple tracks where we just licensed them. There’s a track called “Cash and Carry” that I used over the montage when they get rich. They’ve just bought the Banker’s Building and popped the champagne. This wacky, French 1970s bit of music comes in with a dude scatting over it while they are buying buildings or looking at the map of LA. That was a track Scott gave me before we shot a frame of film, so when we got to that section of the movie, I chose it out of the bin and put that sequence to it and it just stuck.

There are some cases where it’s almost impossible to temp, so I just cut it dry and give it to him. Sometimes he’ll temp it and sometimes he’ll do a scratch score. For example, the very beginning of the movie never had temp in any way. I just cut it dry. I gave it to Scott. He scored it and then we revised his scoring a bunch of times to get to the final version.

[OP] Did you do any official or “friends and family” screenings of The Banker while editing it? If so, did that impact the way the film turned out?

[JV] The post process is largely dictated by how good your first cut is. If the movie works, but needs improvement – that’s one thing. If it fundamentally doesn’t – that’s another. It’s a question of where you landed from the get-go and what needs to be fixed to get to the end of the road.

We’re big fans of doing mini-testing – bringing in people we know and people whose opinions we want to hear. At some point you have to get outside of the process and aggregate what you hear over and over again. You need to address the common things that people pick up on. The only way to keep improving your movie is to get outside feedback so they tell you what to focus on.

Over time that significantly impacted the film. It’s not like any one person said that one thing that caused us to re-edit the film. People see the problem that sticks out to them in the cut and you work on that. The next time there’s something else and then you work on that. You keep trying to make all the improvements you can make. So it’s an iterative process.

[OP] This film marked a shift for you from using earlier versions of Final Cut Pro to now cutting on Final Cut Pro X for the first time. Why did you make that choice and what was the experience like?

[JV] George has a relationship with Apple and they had suggested using Final Cut Pro X on his next project. I had always used Final Cut Pro 7 as my preference. We had used it on an NBC show called Allegiance in 2014 and then on Birth of the Dragon in 2015 and 2016 – long after it had been discontinued. We all could see the writing on the wall – operating systems would quit running it and it’s not harnessing what the computers can do.

I got involved in the conversation and was invited to come to a seminar at the Editors Guild about Final Cut Pro X that was taught by Kevin Bailey, who was the assistant editor for Whiskey Tango Foxtrot. I had looked at Final Cut Pro X when it first came out and then again several years later. I felt like it had been vastly improved and was in a place where I could give it a shot. So I committed at that point to cutting this film on Final Cut Pro X and teaching myself how to use it. I also hired Kevin to help as my assistant for the start of the film. He became unavailable later in the production, so we found Steven Moyer to be my assistant and he was fantastic. I would have never made it through without the both of them.

[OP] How did you feel about Final Cut Pro X once you got your sea legs?

[JV] It’s always hard to learn to walk again. That’s what a lot of editors bump into with Final Cut Pro X, because it is a very different approach than any other NLE. I found that once you get to know it and rewire your brain that you can be very fast on it. A lot of the things that it does are revolutionary and pretty incredible. And there are still other areas that are being worked on. Those guys are constantly trying to make it better. We’ve had multiple conversations with them about the possibilities and they are very open to feedback.

[OP] Every editor has their own way of tackling dailies and wading through an avalanche of footage coming in from production. And of course, Final Cut Pro X features some interesting ways to organize media. What was the process like for The Banker?

[JV] The sound and picture were both running at 24fps. I would upload the sound files from my hotel room in Atlanta to Technicolor in LA, who would sync the sound. They would send back the dailies and sound, which Kevin – who was assisting at that time – would load into Final Cut. He would multi-clip the sound files and the two camera angles. Everything is in a multi-clip, except for purely MOS B-roll shots. Each scene had its own event. Kevin used the same system he had devised with Jan [Kovac, editor on Whiskey Tango Foxtrot and Focus]. He would keyword each dialogue line, so that when you select a keyword collection in the browser, every take for that line comes up. That’s labor-intensive for the assistant, but it makes life that much faster for me once it’s set up.

[OP] I suppose that method also makes it much faster when you are working with the director and need to quickly get to alternate takes.

[JV] It speeds things along for George, but also for me. I don’t have to hunt around to find the lines when I have to edit a very long dialogue scene. You could assemble selects reels first, but I like to look at everything. I fundamentally believe there’s something good in every bad take. It doesn’t take very long to watch every take of a line. Plus I do a fair amount of ‘Franken-biting’ with dialogue where needed.

[OP] Obviously the final mix and color correction were done at specialty facilities. Since The Banker was shot on film, I would imagine that complicated the hand-off slightly. Please walk me through the process you followed.

[JV] Marti Humphrey did the sound at The Dub Stage in Burbank. We have a good relationship with him and can call him very early in the process to work out the timeline of how we are going to do things. He had to soup up his system a bit to handle the Atmos near-field stuff, but it was a good opportunity for him to get into that space. So he was able to do all the various versions of our mix.

Technicolor was the new guy for us. Mike Hatzer did the color grade. It was a fairly complex process for them and they were a good partner. For the conform, we handed them an XML and EDL. They had their Flex files to get back to the film edge code. Steven had to break up the sequence to generate separate tracks for the 35mm original, stock, and VFX shots, because Technicolor needed separate EDLs for those. But it wasn’t like we invented anything that hasn’t been done before.

We did use third-party apps for some of this. The great thing about that is you can just contact the developer directly. There was one EDL issue and Steven could just call up the app developer to explain the issue and they’d fix it in a couple of days.

[OP] What sort of visual effects were required? The film is set more or less 70 years ago, so were the majority of effects just to make the locations look right? Like cars, signs, and so on?

[JV] It was mostly period clean-up. You have to paint out all sorts of boring stuff, like road paint. In the 50s and 60s, those white lines have to come out. Wires, of course. A couple of shots we wanted to ‘LA-ify’ Georgia. We shot some stuff in LA, but when you put Griffith Park right next to a shot of Newnan, Georgia, the way to blend that over is to put palm trees in the Newnan shot.

We also did a pick-up with Anthony while he was on another show that required a beard for that role. So we had to paint out his beard. Good luck figuring out which was the shot where we had to paint out his beard!

[OP] Now that you have a feature film under your belt with Final Cut Pro X, what are your thoughts about it? Anything you feel that it’s missing?

[JV] All the NLEs have their particular strengths. Final Cut has several that are amazing, like background exports and rendering. It has Roles, where you can differentiate dialogue, sound effects, and music sources. You can bus things to different places. This is the first time I’ve ever edited in 5.1, because Final Cut supports that. That was a fun challenge.

We used Final Cut Pro X to edit a movie shot on film, which is kind of a first at this level, but it’s not like we crashed into some huge problem with that. We gamed it out and it all worked like it was supposed to. Obviously it doesn’t do some stuff the same way. Fortunately through our relationship with Apple we can make some suggestions about that. But there really isn’t anything it doesn’t do. If that were the case, we would have just said that we can’t cut with this.

Final Cut Pro X is an evolving NLE – as they all are. What I realized at the seminar is that it changed a lot from when it first appeared. It was a good experience cutting a movie on it. Some editors are hesitant, because that first hour is difficult and I totally get that. But if you push through that and get to know it – there are many things that are very good and addictively good. I would certainly cut another movie on it.

____________________________________________

The Banker started a limited theatrical release on March 6 and will be available on the Apple TV+ streaming service on March 20.

For even more details on the post process for The Banker, check out Pro Video Coalition.

Originally written for FCPco.

©2020 Oliver Peters

Apple 2019 16″ MacBook Pro

Creatives in all fields are the target market for Apple’s MacBook Pro product line. Apple introduced the 16″ model to replace its 15″ predecessor in late 2019. This new model is not only a serious tool for location work but is powerful enough to form the hub of your edit suite, whether on-site or at a fixed facility.

Nuts and bolts

The 2019 MacBook Pro comes in 6-core (Intel Core i7) and 8-core (Intel Core i9) configurations. It boasts up to 64GB RAM, an AMD Radeon Pro 5500M series GPU with up to 8GB of GDDR6 VRAM, and can be equipped with up to 8TB of internal SSD storage. Prices start at under $2,800 USD*, but the full monty rings up at about $6,500 USD*. The high-capacity SSD options contribute to the most expensive configurations, but those sizes may be overkill for most users. A more realistic price for a typical editor’s 8-core configuration would be about $3,900 USD*. Just stick to a 1TB internal SSD and back the RAM down to 32GB. Quite frankly the 6-core is likely to be sufficient for many editing and design tasks. (*With AppleCare warranty, but no local VAT or sales taxes added.)

While there are great PC laptop choices, it’s very hard to make direct comparisons to laptops with all of these same components. Few PC laptops offer this much RAM or SSDs that large, for example. When you can make a direct comparison, name brand PC laptops are often more expensive. And remember, great gaming PCs are not necessarily the best editing machines and vice versa.

What’s improved?

Apple loaned me a space gray, 8-core 16″ MacBook Pro configured with 64GB of RAM, the AMD GPU with 8GB of VRAM, and a 4TB internal SSD. I own a mid-2014 MacBook Pro as the center of my edit system at home. The two MacBook Pros are physically very similar. They are nearly identical in size, with the 2019 MacBook Pro a bit thinner and lighter. The keyboard footprint is the same as my five-year-old model, though the keys are slightly larger on the new laptop with less space between them. Apple tweaked the keyboard mechanism on the 16″ model. Typing feels about the same between these two, though the keys on the new machine are quieter. Plus, it has a much bigger trackpad and the Touch Bar.

The Retina screen is housed in a lid about the same size as the 15″ model’s. Its 16″ diagonal spec is achieved by using smaller bezels. Of course, the newer display is also higher density, resulting in 3072 x 1920 pixels at 226 ppi. This 500-nit, P3 display offers True Tone, thanks to macOS Catalina. True Tone alters the color temperature based on ambient light. It’s easy on the eyes, because it warms up the image in standard room lighting. However, I would discourage enabling it while working on projects requiring critical color accuracy.

The MacBook Pro features four Thunderbolt 3/USB-C ports, like its 15″ predecessor. These connect peripherals and/or power. I don’t know whether Apple had room to add standard USB-A ports or if there was a trade-off for space needed for cooling or the improved speaker system. Maybe it was just a decision to move forward regardless of some short-term inconvenience. In any case, if you purchase a MacBook Pro, plan on also buying a few adapters and cables, a USB hub, and/or a Thunderbolt 3 dock.

Performance testing

How does the 2019 MacBook Pro stack up against a desktop Mac, like the 10-core 3.0 GHz 2017 iMac Pro used at my daily editing gig? Both have 64GB of RAM, but the iMac Pro has 16GB of VRAM. I tested with both the internal SSDs and an external USB 3.0 G-DRIVE SSD. Both internal drives clocked well over 2500 MB/sec, while the G-DRIVE was in the 400-500 MB/sec range.
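
If you want to sanity-check a drive’s sequential write speed yourself, here is a rough, purely illustrative Swift sketch (the /Volumes/TestDrive path is a hypothetical mount point, and a dedicated disk benchmark utility will give more rigorous numbers than this):

    import Foundation

    // Illustrative sketch: time a 1GB sequential write to roughly estimate a volume's throughput.
    // "/Volumes/TestDrive" is a hypothetical mount point – change it to the drive being tested.
    let testFile = URL(fileURLWithPath: "/Volumes/TestDrive/speedtest.bin")
    let chunk = Data(count: 64 * 1024 * 1024)      // 64MB of zeros
    let chunksToWrite = 16                          // 16 x 64MB = 1GB total

    FileManager.default.createFile(atPath: testFile.path, contents: nil)
    do {
        let handle = try FileHandle(forWritingTo: testFile)
        let start = Date()
        for _ in 0..<chunksToWrite {
            handle.write(chunk)
        }
        handle.synchronizeFile()                    // flush to disk before stopping the clock
        let seconds = Date().timeIntervalSince(start)
        handle.closeFile()
        print(String(format: "~%.0f MB/sec sequential write", Double(chunksToWrite * 64) / seconds))
    } catch {
        print("Could not open test file: \(error)")
    }
    try? FileManager.default.removeItem(at: testFile)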

My “benchmark” tests included BruceX for Final Cut Pro X, Puget Systems’ After Effects benchmark, and two of Simon Ubsdell’s Motion tutorial projects. For the “real world” tests, I used a travelogue series sizzle edit with 4K (and larger) media, various codecs, scaling, speed changes, and color correction. The timeline was exported to ProRes and H.264 using various NLEs. My final test sequence was a taxing 6K FCPX timeline composed of nine layers of 6K RED raw files.

Until the release of the new Mac Pro tower, the iMac Pro had been Apple’s most powerful desktop computer. Yet, in nearly all of these tests, the 16″ MacBook Pro equaled or slightly bettered the times of the iMac Pro. The laptop had faster export times and a higher Puget score with After Effects. One exception was my nine-layer 6K RED project, where the iMac Pro shined – exporting twice as fast as the MacBook Pro. Both Macs played back and scrubbed through this variety of files with ease, regardless of application or internal versus external drive. Overall, the biggest difference I noticed was that exports on the iMac Pro stayed quiet throughout, while the MacBook Pro frequently had to rev up the fans.

Apple claims up to 11 hours of battery life, but that’s really just during light-duty computing – checking e-mails, writing, surfing the web, etc. – and only if you’ve optimized your energy settings for battery life. The MacBook Pro includes an integrated Intel GPU and employs automatic graphics switching. During tasks that don’t generate a heavy GPU load, the machine runs on the integrated graphics. It switches to the AMD GPU for apps like Final Cut or Premiere. I purposefully set up a looping 4K sequence in FCPX and found that the battery drained from 100% down to 10% in about two hours. While this is more stress than normal editing, it’s typical behavior for creative applications. The bottom line is that you shouldn’t expect to be editing for 11 hours straight running only on the internal battery.

Final thoughts

This machine comes with macOS Catalina pre-installed. Don’t plan on downgrading it. Catalina is the first version of macOS that is purely 64-bit, with significant under-the-hood security changes. All 32-bit dependencies have been dropped, which means that some software and hardware products might not be fully compatible. So, check first. I would recommend a clean install of apps and plug-ins rather than migrating from another machine’s drive. I did a clean installation of apps through my Apple and Adobe CC IDs and was ready to go in half a day. Conversely, I’ve heard stories from others who pulled their hair out for a week trying to migrate between machines and OS versions. There are ways to do this and most of the time they work, but why not just start out fresh when you get a new machine?
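
One rough way to spot-check whether a particular app still contains 32-bit-only code before committing to Catalina is to peek at the Mach-O magic number at the start of its executable. This is just an illustrative sketch (the app path below is hypothetical), not a substitute for the compatibility lists published by software vendors:

    import Foundation

    // Illustrative sketch: read the first four bytes of an executable and map the Mach-O
    // magic number to an architecture note. 32-bit-only apps will not run on Catalina.
    func architectureNote(forExecutableAt path: String) -> String {
        guard let handle = FileHandle(forReadingAtPath: path) else { return "unreadable" }
        let bytes = [UInt8](handle.readData(ofLength: 4))
        handle.closeFile()
        guard bytes.count == 4 else { return "unreadable" }
        let magic = bytes.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }   // big-endian read
        switch magic {
        case 0xfeedfacf, 0xcffaedfe: return "64-bit Mach-O"
        case 0xfeedface, 0xcefaedfe: return "32-bit Mach-O – will not run on Catalina"
        case 0xcafebabe, 0xbebafeca: return "universal (fat) binary – inspect slices with lipo"
        default:                     return "not a Mach-O executable"
        }
    }

    // Hypothetical example path – point this at a real app bundle's executable.
    print(architectureNote(forExecutableAt: "/Applications/SomeApp.app/Contents/MacOS/SomeApp"))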

Apple’s ProApps, Adobe Creative Cloud software, and DaVinci Resolve are all ready to go. Media Composer editors have a slight wait. Avid is beta testing a Catalina-compatible version of Media Composer (at the time of this writing). Some functionality won’t be there at the start, but updates will quickly add in many of those missing features.

Mac owners who bought a recent 15″ MacBook Pro would see a definite boost, but probably not enough to justify refreshing their systems yet. But if, like me, you are running a five-year-old model, then this unit becomes very tempting, especially when working with anything more taxing than ProRes 1080p files. For work on the go, like on-site editing, DIT tasks, photo processing, or other creative tasks, the 2019 MacBook Pro easily fits the bill. Many editors may also need a machine that can easily shift from the field to the office/home/studio in place of an iMac or iMac Pro. If that’s you, then the MacBook Pro has the horsepower. Plug it into a Thunderbolt dock, an external display, or other peripherals, and you are ready to go – no desktop computer needed.

Originally written for RedShark News.

©2020 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, in my work as an editor and tech writer, we’ve kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career spent using – as well as helping to design and shepherd – a wide range of post-production products, Steve probably knows more about the diverse field of editing systems than most managers at editing system manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FxPlug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting-edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata-driven workflows, background processing, the magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best-selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition maybe already has. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already did to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries. English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage, six Synology servers, and 30 shows being edited right now in FCP X. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters