Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW in some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Does that change how we should think about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of the pattern, while red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then demosaiced and converted into RGB pixel information. Lastly, it’s recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
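To make the demosaic step concrete, here’s a toy sketch of a naive bilinear demosaic for an RGGB Bayer layout. It is purely illustrative, with function names of my own invention – real cameras use far more sophisticated, proprietary interpolation – but the principle is the same: every output RGB pixel is estimated from the surrounding single-color photosites.

```python
import numpy as np
from scipy.ndimage import convolve

def rggb_masks(h, w):
    """Boolean masks for an RGGB Bayer pattern: 25% red, 50% green, 25% blue."""
    r = np.zeros((h, w), dtype=bool)
    g = np.zeros((h, w), dtype=bool)
    b = np.zeros((h, w), dtype=bool)
    r[0::2, 0::2] = True   # red photosites
    g[0::2, 1::2] = True   # green photosites (two per 2x2 block)
    g[1::2, 0::2] = True
    b[1::2, 1::2] = True   # blue photosites
    return r, g, b

def demosaic_bilinear(raw):
    """Turn a single-channel Bayer mosaic into an RGB image by averaging the
    nearest recorded photosites of each channel. Toy-grade quality only."""
    h, w = raw.shape
    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for i, mask in enumerate(rggb_masks(h, w)):
        sums = convolve(raw * mask, kernel, mode='mirror')            # sum of known samples
        counts = convolve(mask.astype(float), kernel, mode='mirror')  # number of contributors
        rgb[..., i] = sums / counts
    return rgb

# A 4x4 mosaic of linear sensor data becomes a 4x4 RGB image.
print(demosaic_bilinear(np.random.rand(4, 4)).shape)   # (4, 4, 3)
```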

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec 709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload that processing to another stage in the pipeline, then new design options open up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software to properly interpret that data as a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible while still having a camera file that is smaller than its encoded RGB counterpart.
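As a rough illustration of why that control survives into post, here’s a minimal sketch of a “develop” step, under the simplifying assumption that the raw data has already been demosaiced into linear RGB. Because the data is still linear light, exposure and white balance are simple gains applied before any gamma encoding bakes the look in. The function and parameter names are mine; a real raw developer also applies the camera’s color matrix, noise reduction, and profile curves.

```python
import numpy as np

def develop(linear_rgb, exposure_stops=0.0, wb_gains=(1.0, 1.0, 1.0)):
    """Toy raw develop pass on scene-linear RGB data."""
    img = linear_rgb * (2.0 ** exposure_stops)        # +1 stop doubles the light
    img = img * np.asarray(wb_gains, dtype=float)     # temperature/tint as channel gains
    return np.clip(img, 0.0, 1.0) ** (1.0 / 2.2)      # only now encode for display

# Re-rate the same capture one stop brighter with a warmer balance.
frame = np.random.rand(4, 4, 3) * 0.25                # stand-in for linear camera data
graded = develop(frame, exposure_stops=1.0, wb_gains=(1.15, 1.0, 0.9))
```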

In-camera recording, camera raw, and RED

Camera raw recording preceded the introduction of the RED One camera. Earlier implementations usually consisted of uncompressed movie files or image sequences recorded to an external recorder. RED introduced the ability to record a wavelet-compressed, 4K camera raw signal at 24fps as a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing the exact same thing unless they enter into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using.

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just as Sony, ARRI, and Panasonic record their camera raw signals to external recorders, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smartphones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than it could by handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of your choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the same depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 mirrorless camera recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, the source could be any number of different cameras. Once you bring those clips into Final Cut Pro X, you don’t have a camera profile with color science that matches each and every camera model.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. I decided to decode the clip (RAW to log conversion) using a Sony profile. This gave me the best possible results for the Nikon images, effectively giving me a log clip similar to one from a Sony camera. Then I graded in Color Finale Pro 2, using its ACES workflow. To complete the ACES workflow, I used the matching S-Log3 conversion to Rec 709.
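For the curious, here’s a minimal sketch of the math behind that last leg, using Sony’s published S-Log3 formula and the BT.709 encoding curve. This covers only the tone curves – the ACES pipeline in Color Finale Pro 2 also converts the color gamut and applies its own output rendering, which a bare formula like this does not.

```python
import numpy as np

def slog3_to_linear(v):
    """Sony S-Log3 code value (0-1) to scene-linear reflectance, per the
    published Sony S-Log3 specification."""
    v = np.asarray(v, dtype=float)
    cut = 171.2102946929 / 1023.0
    hi = (10.0 ** ((v * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    lo = (v * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)
    return np.where(v >= cut, hi, lo)

def rec709_oetf(lin):
    """ITU-R BT.709 encoding curve (tone curve only, no gamut handling)."""
    lin = np.clip(lin, 0.0, 1.0)
    return np.where(lin < 0.018, 4.5 * lin, 1.099 * lin ** 0.45 - 0.099)

# 18% gray sits at roughly code value 0.41 in S-Log3:
print(rec709_oetf(slog3_to_linear(0.41)))   # ~0.41 as a Rec 709 display signal
```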

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

Apple Pivots – WWDC 2020

Monday saw Apple’s first virtual WWDC keynote presentation. This was a concise, detail-packed 108-minute webcast covering the range of operating system changes affecting all of Apple’s product lines. I shared my initial thoughts and reactions here and here. If you want some in-depth info, then John Gruber’s follow-up interview with Craig Federighi and Greg Joswiak of Apple is a good place to start. With the dust settling, I see three key takeaways from WWDC for Mac users.

Apple Silicon

Apple Silicon marks the third processor transition for the Mac. Apple has been using Intel CPUs for 15 years. This shift moves Mac computers to the same CPU family as the mobile platforms. The new CPUs are based on Arm SoC (system on chip) technology. Arm originally stood for Acorn RISC Machine and is a technology developed by Arm Holdings PLC. They license the technology to other companies, who are then free to develop their own chip designs. As far as we know, the Apple Arm chips will be manufactured in foundries owned by TSMC in Taiwan. While any hardware shift can be disconcerting, the good news is that Apple already has more than a decade-long track record with these chips, thanks to the iPhone and iPad. (Click here for more details on RISC versus CISC chip architectures.)

macOS software demos were ostensibly shown on a Mac with 16GB RAM, running on an A12Z Bionic chip – the same CPU as in the current iPad Pro. That’s an 8-core, 64-bit processor with an integrated GPU. It also includes cores for the Neural Engine, which Apple uses for machine learning functions.

Apple is making available a developer transition kit (DTK) that includes a Mac mini in this configuration. This would imply that was the type of machine used for the keynote demo. It’s hard to know if that was really the case. Nevertheless, performance appeared good – notably with Final Cut Pro X showing three streams of 4K ProRes at full resolution. Not earth-shattering, but decent if that is actually the level of machine that was used. Federighi clarified in the Gruber interview that this DTK is only to get developers comfortable with the new chip, as well as the new OS version. It is not intended to represent a machine that is even close to what will eventually be shipped. Nor will the first shipping Apple Silicon chips in Macs be A12Z CPUs.

The reason to shift to Arm-based CPUs is the promise of better performance with lower power consumption. Lower power also means less heat, which is great for phones and tablets. That’s also great for laptops, but less critical on desktop computers, where the extra thermal headroom makes higher clock speeds – even overclocking – a viable option. According to Apple, the transition to Apple Silicon will take two years. Presumably that means by the end of two years, all new Macs released going forward will use Arm-based CPUs.

Aside from the CPU itself, what about Thunderbolt and the GPU? The A12Z has an integrated graphics unit, just like the Intel chip. However, for extra horsepower Apple has been using AMD – and in years past, NVIDIA. Will the relationship with AMD continue or will Apple Silicon also cover the extra graphics grunt? Thunderbolt is technology licensed by Intel. Will that continue and will Apple be able to integrate Thunderbolt into its new Macs? The developer Mac mini does not include Thunderbolt 3 ports. Can we expect some type of new I/O format in these upcoming Macs – USB 4, for example? (Click here for another concise hardware overview.)

macOS Big Sur – Apple dials it to 11

The second major reveal (for Mac users) was the next macOS after Catalina – Big Sur. This is the first OS designed natively for Arm-based Macs, but is coded to be compatible with Intel, too. It will also be out by the end of the year. Big Sur is listed as OS version 11.0, thus dropping the OS X (ten) product branding. A list of compatible Macs has been posted by Apple and quite a few users are already running a beta version of Big Sur on their current Intel Macs. According to anecdotes I’ve seen and heard, most indicate that it’s stable and the majority of applications (not all) work just fine.

Apple has navigated such transitions before – particularly from PowerPC to Intel. (The PowerPC used a RISC architecture.) They are making developer tools available (Universal 2) that will allow software companies to compile apps that run natively on both Intel and Arm platforms. In addition, Big Sur will include Rosetta 2, a technology designed to run Intel apps under emulation. In some cases, Apple says that Rosetta 2 will actually convert some apps into native versions upon installation. Boot Camp appears to be gone, but virtualization of Linux was mentioned. This makes sense because some Linux distributions already run on Arm processors. We’ll have to wait and see whether virtualization will also apply to Windows. Federighi did appear to say that directly booting into a different OS would not be possible.

Naturally, users want to know if their favorite application is going to be ready on day one. Apple has been working closely with Adobe and Microsoft to ensure that Creative Cloud and Office applications will run natively. Those two are biggies. If you can’t run Photoshop or Illustrator correctly, then you’ve lost the entire design community – photographers, designers, web developers, ad agencies, etc. Likewise, if Word, Excel, or PowerPoint don’t work, you’ve lost every business enterprise. Apologies to video editors, but those two segments are more important to Mac sales than all the video apps combined.

Will the operating systems merge?

Apple has said loudly and clearly that they don’t intend to merge macOS and iOS/iPadOS. Both the Mac and mobile businesses are very profitable and those products fulfill vastly different needs. It doesn’t make any business sense to mash them together in some way. You can’t run Big Sur on an iPad Pro nor can you run iPadOS 14 on a Mac. Yet, they are more than cousins under the hood. You will be able to run iOS/iPadOS applications, such as games, on an Arm-based Mac with Big Sur. This is determined by the developer, because they will retain the option to have some apps be iOS-only at the time the application is entered into the App Store.

Technology aside, the look, feel, style, and design language is now very close between these two operating system branches. Apple has brought back widgets to the Mac. You now have notification and control centers on all platforms that function in the same manner. The macOS Finder and iPad Files windows look similar to each other. Overall, the design language has been unified, as typified by a return to more color and a common chiclet design for the icons. After all, nearly every Apple product has had a similar style with rounded corners, going back to the original 1984 Mac.

Are we close to actually having true macOS on an iPad Pro? Or a Mac with touch or Apple Pencil support? Probably not. I know a lot of Final Cut Pro X users have speculated about seeing “FCPX Lite” on the iPad, especially since we have Adobe’s Rush and LumaTouch’s LumaFusion. The speculation is that some stripped-down version of FCPX on the iPad should be a no-brainer. Unfortunately iPads lack a proper computer-style file system, standard i/o, and the ability to work with external storage. So, there are still reasons to optimize operating systems and applications differently for each hardware form factor. But a few years on – who knows?

Should you buy a Mac now?

If you need a Mac to get your work done and can’t wait, then by all means do so. Especially if you are a “pro” user requiring beefier hardware and performance. On the other hand, if you can hold off, you should probably wait until the first new machines are out in the wild. Then we’ll all have a better idea of price and performance. Apple still plans to release more Intel Macs within the next two years. The prevailing wisdom is that the current state of Apple’s A-series chips will dictate lower-end consumer Macs first. Obviously a new 12″ MacBook and the MacBook Air would be prime candidates. After that, possibly a Mac mini or even a smaller iMac.

If that is in fact what happens, then high-end MacBook Pros, iMacs, iMac Pros, and the new Mac Pro will all continue running Intel processors until comparable (or better) Apple Silicon chips become available. Apple is more willing to shake things up than any other tech company. In spite of that, they have a pretty good track record of supporting existing products for quite a few years. For example, Big Sur is supposed to be compatible with MacBook Pros and Mac Pros going back to 2013. Even though the FCPX product launch was rough, FCP7 continued to work for many years after that through several OS updates.

I just don’t agree with people who grouse that Apple abandons its existing customers whenever one of these changes happens. History simply does not bear that out. Change is always a challenge – more for some types of users than others. But we’ve been here before. This change shouldn’t cause too much hand-wringing, especially if Apple Silicon delivers on its promise.

Click here for a nice recap of some of the other announcements from WWDC.

©2020 Oliver Peters

The Missing Mac

High-end Mac users waited six years for Apple to release its successor to the cylindrical 2013 Mac Pro. That’s a unit that was derided by some and was ideal for others. Its unique shape earned the nickname of the “trash can.” Love it or hate it, that Mac Pro lacked the ability to expand and grow with the times. Nevertheless, many are still in daily service and being used to crank out great work.

The 2019 Mac Pro revitalized Apple’s tower configuration – dubbed the “cheese grater” design. If you want expandability, this one has it in spades – but at a premium price that puts it way above the cost of a decked-out 2013 Mac Pro. Unfortunately for many users, this leaves a gap in the product line – both in features and in price range.

If you want a powerful Mac in the $3,000 – $5,000 range without a built-in display, then there is none. I really like the top-spec versions of the iMac and iMac Pro, but if I already own a display, would like to use only an Apple Pro Display XDR, or want an LG, Dell, Asus, etc., then I’m stuck. Naturally one approach would be to buy a 16″ MacBook Pro and dock it to an external display, using the MacBook Pro as a second display or in the clamshell configuration. I’ve discussed that in various posts and it’s one way nimble editing shops like Trim in London tend to work.

Another option would be the Mac Mini, which is the closest existing unit to filling this void. It recently got a slight bump up in specs, but it’s missing 8-core CPU options and an advanced graphics card. The best 6-core configuration might actually be a serviceable computer, but I would imagine effects requiring GPU acceleration will be hampered by the Intel UHD 630 built-in graphics. The Mini does tick a lot of the boxes, including Wi-Fi, Bluetooth, four Thunderbolt 3/USB-C ports, HDMI 2.0, two USB 3.0 ports, plus Ethernet and headphone jacks.

I’ve tested both the previous Mac Mini iteration (with and without an eGPU) and the latest 16″ MacBook Pro. Both were capable Macs, but the 16″ truly shines. I find it hard to believe that Apple couldn’t have created a Mac Mini with the same electronics as the loaded 16″ MacBook Pro. After all, once you remove the better speaker system, keyboard, and battery from the lower case of the laptop, you have about the same amount of “guts” as that of the Mac Mini. I think you could make the same calculation with the iMac electronics. Even if the Mini case needed to be a bit taller, I don’t see why this wouldn’t be technically possible.

Here’s a hypothetical Mac Mini spec (similar to the MacBook Pro) that could be a true sweet spot:

  • 2.4GHz 8-core, 9th-generation Intel Core i9 processor (or faster)
  • 64GB 2666MHz DDR4 memory
  • AMD Radeon Pro 5600M GPU with 8GB HBM2 memory
  • 1TB SSD storage (or higher – up to 8TB)
  • 10 Gigabit Ethernet

Such a configuration would likely be in the range of $3,000 – $5,000 based on the BTO options of the current Mini, iMac, and MacBook Pro. Of course, if you bump the internal SSD from 1TB to the higher capacities, the total price will go up. In my opinion, it should be easy for Apple to supply such a version without significant re-engineering. I recognize that if you went with a Xeon-based configuration, like the iMac Pros, then the task would be a bit more challenging, in part due to power demands and airflow. Naturally, an even higher-spec’ed Mac like this in the $5,000 – $10,000 range would also be appealing, but that would likely be a bridge too far for Apple.

What I ultimately want is a reasonably powerful Mac without being forced to also purchase an Apple display, since that isn’t always the best option – and without spending as much as a car costs to get there. I understand that such a unit wouldn’t have the ability to add more cards, but then neither do the iMacs and MacBook Pros. So I really don’t see this as a huge issue. I feel that this configuration would be an instant success with many editors. Plug in the display and storage of your choice and Bob’s your uncle.

I’m not optimistic. Maybe Apple has run the calculation that such a version would rob sales from the 2019 Mac Pro or iMac Pros. Or maybe they simply view the Mini as fitting into a narrow range of server and home computing use cases. Whatever the reason, it seems clear to me that there is a huge gap in the product line that could be served by a Mac Mini with specs such as these.

On the other hand, Apple’s virtual WWDC is just around the corner, so we can always hope!

©2020 Oliver Peters

The Banker

Apple has launched its new TV+ service and this provides another opportunity for filmmakers to bring untold stories to the world. That’s the case for The Banker, an independent film picked up by Apple. It tells the story of two African American entrepreneurs attempting to earn their piece of the American dream during the repressive 1960s through real estate and banking. It stars Samuel L. Jackson, Anthony Mackie, Nia Long, and Nicholas Hoult.

The film was directed by George Nolfi (The Adjustment Bureau) and produced by Joel Viertel, who also signed on to edit the film. Viertel’s background hasn’t followed the usual path for a feature film editor. Interested in editing since high school, he moved to LA after college and landed a job at Paramount, where he eventually became a creative executive. During that time he kept up his editing chops and eventually left Paramount to pursue independent filmmaking as a writer, producer, and editor. His editing experience included Apple Final Cut Pro 1.0 through 7.0 and Avid Media Composer, but cutting The Banker was his first time using Apple’s Final Cut Pro X.

I recently chatted with Joel Viertel about the experience of making this film and working with Apple’s innovative editing application.

____________________________________________

[OP] How did you get involved with co-producing and cutting The Banker?

[JV] This film originally started while I was at Paramount. Through a connection from a friend, I met with David Smith and he pitched me the film. I fell in love with it right away, but as is the case with these films, it took a long while to put all the pieces together. While I was doing The Adjustment Bureau with George Nolfi and Anthony Mackie, I pitched it to them, and they agreed it would be a great project for us all to collaborate on. From there it took a few years to get to a script we were all happy with, cast the roles, get the movie financed, and off the ground.

[OP] I imagine that it’s exciting to be one of the first films picked up by Apple for their TV+ service. Was that deal arranged before you started filming or after everything was in the can, so to speak?

[JV] Apple partnered with us after it was finished. It was made and financed completely independently through Romulus Entertainment. While we were in the finishing stages, Endeavor Content repped the film and got us into discussions with Apple. It’s one of their first major theatrical releases and then goes on the platform after that. Apple is a great company and brand, so it’s exciting to get in on the ground floor of what they’re doing.

[OP] When I screened the film, one of the things I enjoyed was the use of montages to quickly cover a series of events. Was that how it was written or were those developed during the edit as a way to cut running time?

[JV] Nope, it was all scripted. Those segments can bedevil a production, because getting all of those little pieces is a lot of effort for very little yield. But it was very important to George and myself and the collaborators on the film to get them. It’s a film about banking and real estate, so you have to figure out how to make that a fun and interesting story. Montages were one way to keep the film propulsive and moving forward – to give it motion and excitement. We just had to get through production finding places to pick off those pieces, because none of those were developed in post.

[OP] What was your overall time frame to shoot and post this film?

[JV] We started in late September 2018 and finished production in early November. It was about 30 days in Atlanta and then a few days of pick-ups in LA. We started post right after Thanksgiving and locked in May, I think. Once Apple got involved, there were a few minor changes. However, Apple’s delivery specs were completely different from our original delivery specs, so we had to circle back on a bunch of our finishing.

[OP] Different in what way?

[JV] We had planned to finish in 2K with a 5.1 mix. Their deliverables are 4K with a Dolby Atmos mix. Because we had shot on 35mm film, we had the capacity, but it meant that we had to rescan and redo the visual effects at 4K. We had to lay the groundwork to do an Atmos mix and Dolby Vision finish for theatrical and home video, which required the 35mm film negative to be rescanned and dust-busted.

Our DP, Charlotte Bruus Christensen, has shot mostly on 35mm – films like A Quiet Place and The Girl on the Train – and those movies are beautiful. And so we wanted to accommodate that, but it presents challenges if you aren’t shooting in LA. Between Kodak in Atlanta and Technicolor in LA we were able to make it work.

Kodak would process the negative and Technicolor made a one-light transfer for 2K dailies. Those were archived and then I edited with ProRes LT copies in HD. Once we were done, Technicolor onlined the movie from their 2K scans. After the change in deliverable specs, Technicolor rescanned the clips used for the online finish at 4K and conformed the cut at 4K.

[OP] I felt that the eclectic score fit this movie well and really places it in time. As an editor, how did you work to build up your temp tracks? Or did you simply leave it up to the composer?

[JV] George and I have worked with our composer, Scott Salinas, for a very long time on a bunch of things. Typically, I give him a script and then he pulls samples that he thinks are in the ballpark. He gave me a grab bag of stuff for The Banker – some of which was score, some of which was jazz. I start laying that against the picture myself as I go and find these little things that feel right and set the tone of the movie. I’m finding my way for the right marriage of music and picture. If it works, it sticks. If it doesn’t, we replace it. Then at the end, he’s got to score over that stuff.

Most of the jazz in The Banker is original, but there are a couple tracks where we just licensed them. There’s a track called “Cash and Carry” that I used over the montage when they get rich. They’ve just bought the Banker’s Building and popped the champagne. This wacky, French 1970s bit of music comes in with a dude scatting over it while they are buying buildings or looking at the map of LA. That was a track Scott gave me before we shot a frame of film, so when we got to that section of the movie, I chose it out of the bin and put that sequence to it and it just stuck.

There are some cases where it’s almost impossible to temp, so I just cut it dry and give it to him. Sometimes he’ll temp it and sometimes he’ll do a scratch score. For example, the very beginning of the movie never had temp in any way. I just cut it dry. I gave it to Scott. He scored it and then we revised his scoring a bunch of times to get to the final version.

[OP] Did you do any official or “friends and family” screenings of The Banker while editing it? If so, did that impact the way the film turned out?

[JV] The post process is largely dictated by how good your first cut is. If the movie works, but needs improvement – that’s one thing. If it fundamentally doesn’t – that’s another. It’s a question of where you landed from the get-go and what needs to be fixed to get to the end of the road.

We’re big fans of doing mini-testing – bringing in people we know and people whose opinions we want to hear. At some point you have to get outside of the process and aggregate what you hear over and over again. You need to address the common things that people pick up on. The only way to keep improving your movie is to get outside feedback, so that others tell you what to focus on.

Over time that significantly impacted the film. It’s not like any one person said that one thing that caused us to re-edit the film. People see the problem that sticks out to them in the cut and you work on that. The next time there’s something else and then you work on that. You keep trying to make all the improvements you can make. So it’s an iterative process.

[OP] This film marked a shift for you from using earlier versions of Final Cut Pro to now cutting on Final Cut Pro X for the first time. Why did you make that choice and what was the experience like?

[JV] George has a relationship with Apple and they had suggested using Final Cut Pro X on his next project. I had always used Final Cut Pro 7 as my preference. We had used it on an NBC show called Allegiance in 2014 and then on Birth of the Dragon in 2015 and 2016 – long after it had been discontinued. We all could see the writing on the wall – operating systems would quit running it and it’s not harnessing what the computers can do.

I got involved in the conversation and was invited to come to a seminar at the Editors Guild about Final Cut Pro X that was taught by Kevin Bailey, who was the assistant editor for Whiskey Tango Foxtrot. I had looked at Final Cut Pro X when it first came out and then again several years later. I felt like it had been vastly improved and was in a place where I could give it a shot. So I committed at that point to cutting this film on Final Cut Pro X and teaching myself how to use it. I also hired Kevin to help as my assistant for the start of the film. He became unavailable later in the production, so we found Steven Moyer to be my assistant and he was fantastic. I would have never made it through without the both of them.

[OP] How did you feel about Final Cut Pro X once you got your sea legs?

[JV] It’s always hard to learn to walk again. That’s what a lot of editors bump into with Final Cut Pro X, because it is a very different approach than any other NLE. I found that once you get to know it and rewire your brain that you can be very fast on it. A lot of the things that it does are revolutionary and pretty incredible. And there are still other areas that are being worked on. Those guys are constantly trying to make it better. We’ve had multiple conversations with them about the possibilities and they are very open to feedback.

[OP] Every editor has their own way of tackling dailies and wading through an avalanche of footage coming in from production. And of course, Final Cut Pro X features some interesting ways to organize media. What was the process like for The Banker?

[JV] The sound and picture were both running at 24fps. I would upload the sound files from my hotel room in Atlanta to Technicolor in LA, who would sync the sound. They would send back the dailies and sound, which Kevin – who was assisting at that time – would load into Final Cut. He would multi-clip the sound files and the two camera angles. Everything is in a multi-clip, except for purely MOS B-roll shots. Each scene had its own event. Kevin used the same system he had devised with Jan [Kovac, editor on Whiskey Tango Foxtrot and Focus]. He would keyword each dialogue line, so that when you select a keyword collection in the browser, every take for that line comes up. That’s labor-intensive for the assistant, but it makes life that much faster for me once it’s set up.

[OP] I suppose that method also makes it much faster when you are working with the director and need to quickly get to alternate takes.

[JV] It speeds things along for George, but also for me. I don’t have to hunt around to find the lines when I have to edit a very long dialogue scene. You could assemble selects reels first, but I like to look at everything. I fundamentally believe there’s something good in every bad take. It doesn’t take very long to watch every take of a line. Plus I do a fair amount of ‘Franken-biting’ with dialogue where needed.

[OP] Obviously the final mix and color correction were done at specialty facilities. Since The Banker was shot on film, I would imagine that complicated the hand-off slightly. Please walk me through the process you followed.

[JV] Marti Humphrey did the sound at The Dub Stage in Burbank. We have a good relationship with him and can call him very early in the process to work out the timeline of how we are going to do things. He had to soup up his system a bit to handle the Atmos near-field stuff, but it was a good opportunity for him to get into that space. So he was able to do all the various versions of our mix.

Technicolor was the new guy for us. Mike Hatzer did the color grade. It was a fairly complex process for them and they were a good partner. For the conform, we handed them an XML and EDL. They had their Flex files to get back to the film edge code. Steven had to break up the sequence to generate separate tracks for the 35mm original, stock, and VFX shots, because Technicolor needed separate EDLs for those. But it wasn’t like we invented anything that hasn’t been done before.

We did use third-party apps for some of this. The great thing about that is you can just contact the developer directly. There was one EDL issue and Steven could just call up the app developer to explain the issue and they’d fix it in a couple of days.

[OP] What sort of visual effects were required? The film is set more or less 70 years ago, so were the majority of effects just to make the locations look right? Like cars, signs, and so on?

[JV] It was mostly period clean-up. You have to paint out all sorts of boring stuff, like road paint. In the 50s and 60s, those white lines have to come out. Wires, of course. A couple of shots we wanted to ‘LA-ify’ Georgia. We shot some stuff in LA, but when you put Griffith Park right next to a shot of Newnan, Georgia, the way to blend that over is to put palm trees in the Newnan shot.

We also did a pick-up with Anthony while he was on another show that required a beard for that role. So we had to paint out his beard. Good luck figuring out which was the shot where we had to paint out his beard!

[OP] Now that you have a feature film under your belt with Final Cut Pro X, what are your thoughts about it? Anything you feel that it’s missing?

[JV] All the NLEs have their particular strengths. Final Cut has several that are amazing, like background exports and rendering. It has Roles, where you can differentiate dialogue, sound effects, and music sources. You can bus things to different places. This is the first time I’ve ever edited in 5.1, because Final Cut supports that. That was a fun challenge.

We used Final Cut Pro X to edit a movie shot on film, which is kind of a first at this level, but it’s not like we crashed into some huge problem with that. We gamed it out and it all worked like it was supposed to. Obviously it doesn’t do some stuff the same way. Fortunately through our relationship with Apple we can make some suggestions about that. But there really isn’t anything it doesn’t do. If that were the case, we would have just said that we can’t cut with this.

Final Cut Pro X is an evolving NLE – as they all are. What I realized at the seminar is that it changed a lot from when it first appeared. It was a good experience cutting a movie on it. Some editors are hesitant, because that first hour is difficult and I totally get that. But if you push through that and get to know it – there are many things that are very good and addictively good. I would certainly cut another movie on it.

____________________________________________

The Banker started a limited theatrical release on March 6 and will be available on the Apple TV+ streaming service on March 20.

For even more details on the post process for The Banker, check out Pro Video Coalition.

Originally written for FCPco.

©2020 Oliver Peters

Apple 2019 16″ MacBook Pro

Creatives in all fields are the target market for Apple’s MacBook Pro product line. Apple introduced the 16″ model to replace its 15″ predecessor in late 2019. This new model is not only a serious tool for location work but is powerful enough to form the hub of your edit suite, whether on-site or at a fixed facility.

Nuts and bolts

The 2019 MacBook Pro comes in 6-core (Intel Core i7) and 8-core (Intel Core i9) configurations. It boasts up to 64GB RAM, an AMD Radeon Pro 5500M series GPU with up to 8GB of GDDR6 VRAM, and can be equipped with up to 8TB of internal SSD storage. Prices start at under $2,800 USD*, but the full monty rings up at about $6,500 USD*. The high-capacity SSD options contribute to the most expensive configurations, but those sizes may be overkill for most users. A more realistic price for a typical editor’s 8-core configuration would be about $3,900 USD*. Just stick to a 1TB internal SSD and back the RAM down to 32GB. Quite frankly the 6-core is likely to be sufficient for many editing and design tasks. (*With AppleCare warranty, but no local VAT or sales taxes added.)

While there are great PC laptop choices, it’s very hard to make direct comparisons to laptops with all of these same components. Few PC laptops offer this much RAM or SSDs that large, for example. When you can make a direct comparison, name brand PC laptops are often more expensive. And remember, great gaming PCs are not necessarily the best editing machines and vice versa.

What’s improved?

Apple loaned me a space gray, 8-core 16″ MacBook Pro configured with 64GB RAM, the AMD GPU with 8GB VRAM, and a 4TB internal SSD. I own a mid-2014 MacBook Pro as the center of my edit system at home. The two MacBook Pros are physically very similar. They are nearly the identical size, with the 2019 MacBook Pro a bit thinner and lighter. The keyboard footprint is the same as my five-year-old model, though the keys are slightly larger on the new laptop with less space between. Apple tweaked the keyboard mechanism on the 16″ model. Typing feels about the same between these two, though the keys on the new machine are quieter. Plus, it has a much bigger trackpad and the touch bar.

The Retina screen is housed in a lid about the same size as the 15″ model. Its 16″ diagonal spec is achieved by using smaller bezels. Of course, the newer display is also higher density resulting in 3072 x 1920 pixels at 226 ppi. This 500-nit, P3 display offers True Tone, thanks to macOS Catalina. True Tone alters the color temperature based on ambient light. It’s easy on the eyes, because it warms up the image in standard room lighting. However, I would discourage enabling it while working on projects requiring critical color accuracy.

The MacBook Pro features four Thunderbolt 3/USB-C ports, like its 15″ predecessor. These connect peripherals and/or power. I don’t know whether Apple had room to add standard USB-A ports or if there was a trade-off for space needed for cooling or the improved speaker system. Maybe it was just a decision to move forward regardless of some short-term inconvenience. In any case, if you purchase a MacBook Pro, plan on also buying a few adapters and cables, a USB hub, and/or a Thunderbolt 3 dock.

Performance testing

How does the 2019 MacBook Pro stack up against a desktop Mac, like the 10-core 3.0 GHz 2017 iMac Pro used at my daily editing gig? Both have 64GB RAM, but there are 16GB of VRAM in the iMac Pro. I tested with both the internal SSDs and an external USB 3.0 GDRIVE SSD. Both internal drives clocked well over 2500 MB/sec, while the GDRIVE was in the 400-500 MB/sec range.

My “benchmark” tests included BruceX for Final Cut Pro X, Puget Systems’ After Effects benchmark, and two of Simon Ubsdell’s Motion tutorial projects. For the “real world” tests, I used a travelogue series sizzle edit with 4K (and larger) media, various codecs, scaling, speed changes, and color correction. The timeline was exported to ProRes and H.264 using various NLEs. My final test sequence was a taxing 6K FCPX timeline composed of nine layers of 6K RED raw files.

Until the release of the new Mac Pro tower, the iMac Pro had been Apple’s most powerful desktop computer. Yet, in nearly all of these tests, the 16″ MacBook Pro equaled or slightly bettered the times of the iMac Pro. The laptop had faster export times and a higher Puget score with After Effects. One exception was my nine-layer 6K RED project, where the iMac Pro shined – exporting twice as fast as the MacBook Pro. Both Macs played back and scrubbed through this variety of files with ease, regardless of application or internal versus external drive. Overall, the biggest difference I noticed was that exports on the iMac Pro stayed quiet throughout, while the MacBook Pro frequently had to rev up the fans.

Apple claims up to 11 hours of battery life, but that’s really just during light-duty computing – checking e-mails, writing, surfing the web – and it assumes you’ve optimized your energy settings for battery life. The MacBook Pro includes an integrated Intel GPU and employs automatic graphics switching. During tasks that don’t generate a heavy GPU load, the machine runs on the integrated graphics. It switches to the AMD GPU for apps like Final Cut or Premiere. I purposefully set up a looping 4K sequence in FCPX and found that the battery drained from 100% down to 10% in about two hours. While this is more stress than normal editing, it’s typical behavior for creative applications. The bottom line is that you shouldn’t expect to be editing for 11 hours straight running only on the internal battery.

Final thoughts

This machine comes with macOS Catalina pre-installed. Don’t plan on downgrading that. Catalina is the first version of macOS that is purely 64-bit and with significant under-the-hood security changes. All 32-bit dependencies have been dropped, which means that some software and hardware products might not be fully compatible. So, check first. I would recommend a clean install of apps and plug-ins rather than migrating from another machine’s drive. I did a clean installation of apps through my Apple and Adobe CC IDs and was ready to go in half a day. Conversely I’ve heard stories from others who pulled their hair out for a week trying to migrate between machines and OS versions. There are ways to do this and most of the time they work, but why not just start out fresh when you get a new machine?

Apple’s ProApps, Adobe Creative Cloud software, and DaVinci Resolve are all ready to go. Media Composer editors have a slight wait. Avid is beta testing a Catalina-compatible version of Media Composer (at the time of this writing). Some functionality won’t be there at the start, but updates will quickly add in many of those missing features.

Mac owners who bought a recent 15″ MacBook Pro will see a definite boost, but probably not enough to refresh their systems yet. But if, like me, you are running a five-year-old model, then this unit becomes very tempting, especially when working with anything more taxing than ProRes 1080p files. For work on-the-go, like on-site editing, DIT tasks, photo processing, or other creative tasks, the 2019 MacBook Pro easily fits the bill. Many editors may also need a machine that can easily shift from the field to the office/home/studio in place of an iMac or iMac Pro. If that’s you, then the MacBook Pro has the horsepower. Plug it up to a Thunderbolt dock, an external display, or other peripherals, and you are ready to go – no desktop computer needed.

Originally written for RedShark News.

©2020 Oliver Peters