Blackmagic Design eGPU

Power users have grown to rely on graphics processing units from AMD, Intel and Nvidia to accelerate a wide range of computational functions – from visual effect filters to gaming and 360VR, and even to bitcoin mining. Apple finally supports external GPUs, which can easily be added as plug-and-play devices without any hack. Blackmagic Design just released its own eGPU product for the Mac, which is sold exclusively through Apple ($699 USD). It requires macOS 10.13.6 or later, and a Thunderbolt 3 connection. (Thunderbolt 2, even with adapters, will not work.)

The Blackmagic eGPU features a sleek, aluminum enclosure that makes a fine piece of desk art. It’s of similar size and weight to a 2013 Mac Pro and is optimized for both cooling and low noise. The unit is built around the AMD Radeon Pro 580 GPU with 8GB of video memory. It delivers 5.5 teraflops of processing power and is the same GPU used in Apple’s top-end, 27” Retina 5K iMac.

Leveraging Thunderbolt 3

Thunderbolt 3 technology supports 40Gb/s of bandwidth, as well as power. The Blackmagic eGPU includes a beefy power supply that can also power and/or charge a connected MacBook Pro. There are two Thunderbolt 3 ports, four USB 3.1 ports, and HDMI. Therefore, you can connect a Mac, two displays, plus various USB peripherals. It’s easy to think of it as an accelerator, but it is also an appliance that can be useful in other ways to extend the connectivity and performance of MacBook Pros. Competing products with the same Radeon 580 GPU may be a bit less expensive, but they don’t offer this level of connectivity.

Apple and Blackmagic both promote eGPUs as an add-on for laptops, but any Thunderbolt 3 Mac qualifies. I tested the Blackmagic eGPU with both a high-end iMac Pro and the base model 13” 2018 MacBook Pro with touch bar. This model of iMac Pro is configured with the more advanced Vega Pro 64 GPU (16GB VRAM). My main interest in including the iMac Pro was simply to see whether there would be enough performance boost to justify adding an eGPU to a Mac that is already Apple’s most powerful. Installation of the eGPU was simply a matter of plugging it in. A menu bar icon appears on the Mac screen to let you know it’s connected and allows you to safely disconnect the unit while the Mac is powered up.

Pushing the boundaries through testing

My focus is editing and color correction and not gaming or VR. Therefore, I ran tests with and without the eGPU, using Final Cut Pro X, Premiere Pro, and DaVinci Resolve (Resolve Studio 15 beta). Anamorphic ARRI Alexa ProRes 4444 camera files (2880×2160, native / 5760×2160 pixels, unsqueezed) were cut into 2K DCI (Resolve) and/or 4K DCI (FCPX, Premiere Pro) sequences. This meant that every clip got a Log-C LUT and color correction, as well as aspect ratio correction and scaling. In order to really stress the system, I added several GPU-accelerated effect filters, like glow, film grain, and so on. Finally, timed exports went back to ProRes 4444 – using the internal SSD for media and render files to avoid storage bottlenecks.

Not many applications take advantage of this newfound power yet. Neither FCPX nor Premiere utilizes the eGPU properly, if at all. Premiere exports were actually slower using the eGPU. In my tests, only DaVinci Resolve gained measurable acceleration from the eGPU, which also held true for a competing eGPU that I compared.

If editing, grading or possibly location DIT work is your main interest, then consider the Blackmagic eGPU a good accessory for DaVinci Resolve running on a MacBook Pro. As a general rule, lesser-powered machines benefit more from eGPU acceleration than powerful ones, like the iMac Pro, with its already-powerful, built-in Vega Pro 64 GPU.

Performance by the numbers (iMac Pro only)

To provide some context, here are the results I got with the iMac Pro:

Resolve on iMac Pro (internal V64 chip) – NO eGPU – Auto GPU config

Playback of timeline at real-time 23.976 without frames dropping

Render at source resolution – average 11fps (slower than real-time)

Render at timeline resolution – average 33fps (faster than real-time)

Resolve on iMac Pro – with BMD eGPU (580 chip) – OpenCL

Playback of timeline at real-time 23.976 without frames dropping

Render at source resolution – average 11fps (slower than real-time)

Render at timeline resolution – average 37fps (faster than real-time)


Apple’s ability to work with eGPUs is enabled by Metal, its framework for addressing hardware components, like graphics and central processors. The industry has relied on other frameworks, including OpenGL, OpenCL and CUDA. The first two are open standards written for a wide range of hardware platforms, while CUDA is specific to Nvidia GPUs. Apple is deprecating all of these in favor of Metal (now Metal 2). With each coming OS update, these frameworks will become more and more “legacy” until, presumably, at some point in the future, macOS supports only Metal.

Apple’s intention is to gain performance improvements by optimizing the code at a lower level, “closer to the metal”. It is possible to do this when you only address a limited number of hardware options, which may explain why Apple has focused on using only AMD and Intel GPUs. The downside is that developers must write code that is proprietary to Apple computers. Metal is in part what gives Final Cut Pro X its smooth media handling and real-time performance. Both Premiere Pro and Resolve give you the option to select Metal when installed on Macs.

In the tests that I ran, I presume FCPX only used Metal, since there is no option to select anything else. I did, however, test both Premiere Pro/Adobe Media Encoder and Resolve with both Metal and again with OpenCL specifically selected. I didn’t see much difference in render times with either setting in Premiere/AME. Resolve showed definite differences, with OpenCL the clear winner. For now, Resolve is still optimized for OpenCL over Metal.

Power for the on-the-go editor and colorist

The MacBook Pro is where the Blackmagic eGPU makes the most sense. It gives you better performance with faster exports, and adds badly-needed connectivity. My test Resolve sequence is a lot more stressful than I would normally create. It’s the sort of sequence I would never work with in the real world on a lower-end machine, like this 13” model. But, of course, I’m purposefully pushing it through a demanding task.

When I ran the test on the laptop without the eGPU connected, it would barely play at all. Exports at source resolution rendered at around 1fps. Once I added the Blackmagic eGPU, this sequence played in real-time, although the viewer would start to drop frames towards the end of each shot. Exports at the source resolution averaged 5.5fps. At timeline resolution (2K DCI) it rendered at up to 17fps, as opposed to 4fps without it. That’s over 4X improvement.

Everyone’s set of formats and use of color correction and filters are different. Nevertheless, once you add the Blackmagic eGPU to this MacBook Pro model, functionality in Resolve goes from insanely slow to definitely useable. If you intend to do reliable color correction using Resolve, then a Thunderbolt 3 UltraStudio HD Mini or 4K Extreme 3 is also required for proper video monitoring. Resolve doesn’t send video signals over HDMI the way Premiere Pro and Final Cut Pro X can.

It will be interesting to see if Blackmagic also offers a second eGPU model with the higher-end chip in the future. That would likely double the price of the unit. In the testing I’ve done with other eGPUs that used a version of the Vega 64 GPU, I’m not convinced that such a product would consistently deliver 2X more performance to justify the cost. This Blackmagic eGPU adds a healthy dose of power and connectivity for current MacBook Pro users, and that will only get better in the future.

I think it’s clear that Apple is looking towards eGPUs as a way to enhance the performance of its MacBook Pro line, without compromising design, battery life, and cooling. Cable up to an external device and you’ve gained back horsepower that wouldn’t be there in the standard machine. After all, you mainly need this power when you are in a fixed, rather than mobile, location. The Blackmagic eGPU is portable enough, so that as long as you have electrical power, you are good to go.

In his review of the 2018 MacBook Pro, Ars Technica writer Samuel Axon stated, “Apple is trying to push its own envelope with the CPU options it has included in the 2018 MacBook Pro, but it’s business as usual in terms of GPU performance. I believe that’s because Apple wants to wean pro users with serious graphics needs onto external GPUs. Those users need more power than a laptop can ever reasonably provide – especially one with a commitment to portability.”

I think that neatly sums it up, so it’s nice to see Blackmagic Design fill in the gaps.

Originally written for RedShark News.

©2018 Oliver Peters


Beyond the Supernova

No one typifies hard driving, instrumental, guitar rock better than Joe Satriani. The guitar virtuoso – known to his fans as Satch – has sixteen studio albums under his belt, along with several other EPs, live concert and compilation recordings. In addition to his solo tours, Satriani founded the “G3”, a series of short tours that feature Satriani along with a changing cast of two other all-star, solo guitarists, such as Steve Vai, Yngwie Malmsteen, Guthrie Govan, and others. In another side project, Satriani is the guitarist for the supergroup Chickenfoot, which is fronted by former Van Halen lead singer, Sammy Hagar.

The energy behind Satriani’s performances was captured in the new documentary film, Beyond the Supernova, which is currently available on the Stingray Qello streaming channel. This documentary grew out of the general behind-the-scenes coverage of Satriani’s 2016 and 2017 tours in Asia and Europe, to promote his 15th studio album, Shockwave Supernova. Tour filming was handled by Satriani’s son, ZZ (Zachariah Zane) – an up-and-coming, young filmmaker. The tour coincided with Joe Satriani’s 60th birthday and 30 years after the release of his multi-platinum-selling album Surfing with the Alien. These elements, as well as capturing Satriani’s introspective nature, provided the ingredients for a more in-depth project, which ZZ Satriani produced, directed and edited.

According to Joe Satriani in an interview on Stingray’s PausePlay, “ZZ was able to capture the real me in a way that only a son would understand how to do; because I was struggling with how I was going to record a new record and go in a new direction. So, as I’m on the tour bus and backstage – I guess it’s on my face. He’s filming it and he’s going ‘there’s a movie in here about that. It’s not just a bunch of guys on tour.’”

From music to filmmaking

ZZ Satriani graduated from Occidental College in 2015 with a BA in Art History and Visual Arts, with a focus on film production. He moved to Los Angeles to start a career as a freelance editor. I spoke with ZZ Satriani about how he came to make this film. He explained, “For me it started with skateboarding in high school. Filmmaking and skateboarding go hand-in-hand. You are always trying to capture your buddies doing cool tricks. I gravitated more to filmmaking in college. For the 2012 G3 Tour, I produced a couple of web videos that used mainly jump cuts and were very disjointed, but fun. They decided to bring me on for the 2016 tour in order to produce something similar. But this time, it had to have more of a story. So I recorded the interviews afterwards.”

Although ZZ thinks of himself as primarily an editor, he handled all of the backstage, behind-the-scenes, and interview filming himself, using a Sony PXW-FS5 camera. He comments, “I was learning how to use the camera as I was shooting, so I got some weird results – but in a good way. I wanted the footage to have more of a filmic look – to have more the feeling of a memory, than simply real-time events.”

The structure of Beyond the Supernova intersperses concert performances with events on the tour and introspective interviews with Joe Satriani. The multi-camera concert footage was supplied by the touring support company and is often mixed with historical footage provided by Joe Satriani’s management team. This enabled ZZ to intercut performances of the same song, not only from different locations, but even different years, going back to Joe Satriani’s early career.

The style of cutting the concert performances is relatively straightforward, but the travel and interview bridges that join them together have more of a stream-of-consciousness feel to them and are often quite psychedelic. ZZ says, “I’m not a big [Adobe] After Effects guy, so all of the ‘effects’ are practical and built up in layers within [Adobe] Premiere Pro. The majority of ‘effects’ dealt with layering, blending and cropping different clips together. It makes you think about the space within the frame – different shapes, movement, direction, etc. I like playing around that way – you end up discovering things you wouldn’t have normally thought of. Let your curiosity guide you, keep messing with things and you will look at everything in a new way. It keeps editing exciting!”

Premiere Pro makes the cut

Beyond the Supernova was completely cut and finished in Premiere Pro. ZZ explains why, “Around 2011-12, I made the switch from [Apple] Final Cut Pro to Premiere Pro while I was in a film production class. They informed us that was the new standard, so we rolled with it and the transition was very smooth. I use other apps in the Adobe suite and I like the layout of everything in each one, so I’ve never felt the need to switch to another NLE.”

ZZ Satriani continues, “We had a mix of formats to deal with, including the need to upscale some of the standard definition footage to HD, which I did in software. Premiere handled the PXW-FS5’s XAVC-L codec pretty well in my opinion. I didn’t transcode to ProRes, since I had so much footage, and not a lot of external hard drive space. I knew this might make things go more slowly – but honestly, I didn’t notice any significant drawbacks. I also handled all of the color correction, using Premiere’s Lumetri color controls and the FilmConvert plug-in.” Satriani created the sound design for the interview segments, but John Cuniberti (who has also mixed Joe Satriani’s albums) re-mixed the live concert segments in his studio in London. The final 5.1 surround mix of the whole film was handled at Skywalker Sound.

The impetus pushing completion was entry into the October 2017 Mill Valley Film Festival. ZZ says, “I worked for a month putting together the trailer for Mill Valley. Because I had already organized the footage for this and an earlier teaser, the actual edit of the film came easily. It took me about two months to cut – working by myself in the basement on a [2013] Mac Pro. Coffee and burritos from across the street kept me going.” 

Introspection brings surprises

Fathers and sons working together can often be an interesting dynamic and even ZZ learned new things during the production. He comments, “The title of the film evolved out of the interviews. I learned that Joe’s songs on an album tend to have a theme tied to the theme of the album, which often has a sci-fi basis to it. But it was a real surprise to me when Joe explained that Shockwave Supernova was really his character or persona on stage. I went, ‘Wait! After all these years, how did I not know that?’”

As with any film, you have to decide what gets cut and what stays. In concert projects, the decision often comes down to which songs to include. ZZ says, “One song that I initially thought shouldn’t be included was Surfing with the Alien. It’s a huge fan favorite and such an iconic song for Joe. Including it almost seemed like giving in. But, in a way it created a ‘conflict point’ for the film. Once we added Joe’s interview comments, it worked for me. He explained that each time he plays it live that it’s not like repeating the past. He feels like he’s growing with the song – discovering new ways to approach it.”

The original plan for Beyond the Supernova after Mill Valley was to showcase it at other film festivals. But Joe Satriani’s management team thought that it coincided beautifully with the release of his 16th studio album, What Happens Next, which came out in January of this year. Instead of other film festivals, Beyond the Supernova made its video premiere on AXS TV in March and then started its streaming run on Stingray Qello this July. Qello is known as a home for classic and new live concerts, so this exposes the documentary to a wider audience. Whether you are a fan of Joe Satriani or just rock documentaries, ZZ Satriani’s Beyond the Supernova is a great peek behind the curtain into life on the road and some of the thoughts that keep this veteran solo performer fresh.

Images courtesy of ZZ Satriani.

©2018 Oliver Peters

Hawaiki AutoGrade

The color correction tools in Final Cut Pro X are nice. Adobe’s Lumetri controls make grading intuitive. But sometimes you just want to click a few buttons and be happy with the results. That’s where AutoGrade from Hawaiki comes in. AutoGrade is a full-featured color correction plug-in that runs within Final Cut Pro X, Motion, Premiere Pro and After Effects. It is available from FxFactory and installs through the FxFactory plug-in manager.

As the name implies, AutoGrade is an automatic color correction tool designed to simplify and speed-up color correction. When you install AutoGrade, you get two plug-ins: AutoGrade and AutoGrade One. The latter is a simple, one-button version, based on global white balance. Simply use the color-picker (eye dropper) and sample an area that should be white. Select enable and the overall color balance is corrected. You can then tweak further, by boosting the correction, adjusting the RGB balance sliders, and/or fine-tuning luma level and saturation. Nearly all parameters are keyframeable, and looks can be saved as presets.

AutoGrade One is just a starter, though, for simple fixes. The real fun is with the full version of AutoGrade, which is a more comprehensive color correction tool. Its interface is divided into three main sections: Auto Balance, Quick Fix, and Fine-Tune. Instead of a single global balance tool, the Auto Balance section permits global correction, as well as any combination of white, black, and/or skin correction. Simply turn on one or more desired parameters, sample the appropriate color(s) and enable Auto Balance. This tool will also raise or lower luma levels for the selected tonal range.

Sometimes you might have to repeat the process if you don’t like the first results. For example, when you sample the skin on someone’s face, sampling rosy cheeks will yield different results than if you sample the yellowish highlights on a forehead. To try again, just uncheck Auto Balance, sample a different area, and then enable Auto Balance again. In addition to an amount slider for each correction range, you can also adjust the RGB balance for each. Skin tones may be balanced towards warm or neutral, and the entire image can be legalized, which clamps video levels to 0-100.
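Hawaiki doesn’t publish AutoGrade’s exact math, but eyedropper-style balancing generally follows the classic white-patch method. The sketch below is an assumption about that general approach, not the plug-in’s actual code:

```python
import numpy as np

def white_balance_from_sample(image, region):
    """Compute per-channel gains that push the average color of a
    sampled should-be-white region toward neutral, then apply those
    gains to the whole image. image: float (H, W, 3) in 0.0-1.0;
    region: (y0, y1, x0, x1) bounds of the sampled area."""
    y0, y1, x0, x1 = region
    avg = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    gains = avg.mean() / avg  # equalize channels around their mean luma
    return np.clip(image * gains, 0.0, 1.0)
```

Note that sampling a different patch yields different gains, which mirrors the re-sample-and-retry behavior described above: rosy cheeks and a yellowish forehead produce different averages, and therefore different corrections.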

Quick Fix is a set of supplied presets that work independently of the color balance controls. These include some standards, like cooling down or warming up the image, the orange and teal look, adding an s-curve, and so on. They are applied at 100%, which to my eye felt a bit harsh as a default. To tone down the effect, simply lower the amount slider.

Fine-Tune rounds it out when you need to take a deeper dive. This section is built as a full-blown, 3-way color corrector. Each range includes a luma and three color offset controls. Instead of wheels, these controls are sliders, but the results are the same as with wheels. In addition, you can adjust exposure, saturation, vibrance, temperature/tint, and even two different contrast controls. One innovation is a log expander, designed to make it easy to correct log-encoded camera footage, in the absence of a specific log-to-Rec709 camera LUT.

Naturally, any plug-in could always offer more, so I have a minor wish list. I would love to see five additional features: film grain, vignette, sharpening, blurring/soft focus, and a highlights-only expander. There are certainly other individual filters that cover these needs, but having it all within a single plug-in would make sense. This would round out AutoGrade as a complete, creative grading module, servicing user needs beyond just color correction looks.

AutoGrade is a deceptively powerful color corrector, hidden under a simple interface. User-created looks can be saved as presets, so you can quickly apply complex settings to similar shots and set-ups. There are already many color correction tools on the market, including Hawaiki’s own Hawaiki Color. The price is very attractive, so AutoGrade is a superb tool to have in your kit. It’s a fast way to color-grade that’s ideal for users both new and experienced in color correction.


©2018 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front, if there is some overlap between this and the previous post.


Camera raw codecs have been around since before RED Digital Camera brought out their REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with all other applications. The exception is Motion, which will read and play the files, but with incorrect (albeit correctable) default video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But, ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos’ firmware is taking in the camera’s form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports 12-bit color depth, but it depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that’s the bit-depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered to register the data for light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range that the camera and its sensor are capable of. It’s just an electrical signal being turned into data, without compression within the sensor. The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can also be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels, then sensor color information is said to be “baked” into the file. However, if the raw conversion is stored in that form and then later converted to RGB in post, sensor data is preserved intact until much later into the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.
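The photosite-to-pixel conversion described above can be illustrated with the simplest de-Bayering method, bilinear interpolation. This is a hypothetical sketch, not the proprietary algorithm of any camera or NLE:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Reconstruct RGB pixels from an RGGB Bayer mosaic by averaging
    each missing color value from its nearest same-color photosites."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    # RGGB pattern: R at (even, even), B at (odd, odd), G elsewhere,
    # so there are twice as many green photosites as red or blue.
    masks = [
        (y % 2 == 0) & (x % 2 == 0),   # red
        (y % 2) != (x % 2),            # green
        (y % 2 == 1) & (x % 2 == 1),   # blue
    ]
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        plane = np.where(mask, mosaic, 0.0)
        # Normalized convolution: interpolate the gaps from known sites.
        num = convolve2d(plane, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., c] = num / den
    return rgb
```

Storing a raw file means deferring this step (and the choice of a better algorithm) to post-production software, rather than baking the camera’s in-built version of it into the recording.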

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that a wider dynamic range is being lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.
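To illustrate how log encoding carries wide dynamic range through post, here is a generic logarithmic curve. The shaping constant is arbitrary; actual camera curves (Log-C, S-Log, and so on) use each manufacturer’s own published formulas:

```python
import math

K = 500.0  # arbitrary shaping constant for this illustrative curve

def log_encode(linear, k=K):
    """Map linear light (0.0-1.0) into a 0.0-1.0 code-value range,
    allocating more code values to shadows and midtones so the wide
    sensor range survives 10-bit quantization."""
    return math.log10(1.0 + k * linear) / math.log10(1.0 + k)

def log_decode(code, k=K):
    """Invert the curve to recover linear light in post."""
    return ((1.0 + k) ** code - 1.0) / k
```

With these example constants, mid-gray (0.18 linear) encodes to about 0.73, which is why ungraded log footage looks flat and lifted until a LUT or grade restores normal contrast.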

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”
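The contrast Apple draws can be modeled with a toy sketch. The numbers and formulas here are purely illustrative, not ProRes’s actual encoder math:

```python
def rate_controlled(complexities, target_size):
    """Classic rate control: every frame hits the target size, so the
    effective quality drops as frame complexity rises."""
    return [(target_size, target_size / c) for c in complexities]

def constant_quality(complexities, quality):
    """ProRes RAW-style constant quality: quality is held fixed, so
    detailed or noisy frames simply produce larger output."""
    return [(c * quality, quality) for c in complexities]
```

Feed both models the same sequence of frame complexities and the trade-off is obvious: one holds file size and lets quality float, the other holds quality and lets file size float.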

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That’s part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec709 color space, not HDR or wide gamut. If you set the inspector’s LUT tab to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.
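Since changing ISO/EI in post is effectively a gain change on linear data, it can be sketched in a few lines. This is a conceptual illustration; real raw processors apply the shift inside their proprietary raw-to-RGB conversion:

```python
import math

def apply_exposure_index(linear_value, base_ei, new_ei):
    """Re-reference a linear sensor value from one exposure index to
    another. Doubling EI is a +1 stop push (2x gain), which opens up
    shadows at the cost of amplified noise; halving EI is the reverse."""
    stops = math.log2(new_ei / base_ei)
    return linear_value * (2.0 ** stops)
```

For example, re-rating a clip from EI 800 to EI 1600 doubles every linear value, exactly like a one-stop gain boost.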

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control of the image, prior to tweaking with FCPX’s controls. However, the amount of image control for the raw file is significantly more for a REDCODE file in Premiere Pro, than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether or not the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file.  When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.

(Images courtesy of Apple, FilmPlusGear, and OffHollywood.)

©2018 Oliver Peters

Luca Visual FX builds Mystery & Suspense

For most editors, creating custom music scores tends to fall into the “above my pay grade” category. If you are a whizz with GarageBand or Logic Pro X, then you might dip into Apple’s loop resources. But most commercials and corporate videos are easily serviced by the myriad of stock music sites, like Premium Beat and Music Bed. Some music customization is also possible with tracks from companies like SmartSound.

Yet, none of the go-to music library sites offer curated, genre-based, packages of tracks and elements that make it easy to build up a functional score for longer dramatic productions. Such projects are usually the work of composers or a specific music supervisor, sound designer, or music editor doing a lot of searching and piecing together from a wide range of resources.

Enter Luca Visual FX – a developer best known for visual effects plug-ins, such as Light Kit 2.0. It turns out that Luca Bonomo is also a composer. His first offering is the Mystery & Suspense Music and Sound Library, a collection of 500 clips comprising music themes, atmospheres, drones, loops, and sound effects. This is a complete toolkit designed to make it easy to combine elements in order to create a custom score for dramatic productions in the mystery or suspense genre.

These tracks are available for purchase as a single library through the LucaVFX website. They are downloaded as uncompressed, stereo AIF files at 24-bit/96kHz resolution. This means they are of top quality and compatible with any Mac or PC NLE or DAW application. Best yet is the price: $79. The package is licensed for a single user and may be used for any audio or video production, including for commercial purposes.

Thanks to LucaVFX, I was able to download and test out the Library on a recent short film. The story is a suspense drama in the style of a Twilight Zone episode, so creating a non-specific, ethereal score fits perfectly. Drones, dissonance, and other suspenseful sounds are completely in line, which is where this collection shines.

Although I could have used any application to build this, I opted for Apple’s Final Cut Pro X. Because of its unique keyword structure, it made sense to first set up a separate FCPX library for only the Mystery & Suspense package. During import, I let FCPX create keyword collections based on the Finder folders. This keeps the Mystery & Suspense FCPX library organized in the same way as the clips are originally grouped. Doing so facilitates fast, easy sorting and previewing of any of the 500 clips within the music library. Then I created a separate FCPX library for the production itself. With both FCPX libraries open, I could quickly preview and place clips from my music library into the edit sequence for the film, located within the other FCPX library.

Final Cut uses Connected Clips instead of tracks. This means that you can quickly build up and align overlapping atmospheres, transitions, loops, and themes for a densely layered music score in a very freeform manner. I was able to build up a convincing score for a half-hour-long piece in less than an afternoon. Granted, this isn’t mixed yet, but at least I now have the musical elements that I want, placed where I want them. That style of working is definitely faster in Final Cut Pro X – and more conducive to creative experimentation – but it would certainly work just as well in other applications.

The Mystery & Suspense Library is definitely a winner, although I do have a few minor quibbles. First, the music and effects are in keeping with the genre, but don’t go beyond it. When creating a score for this kind of production, you also need some “normal” or “lighter” moods for certain scenes or transitions. I felt that was missing, and I would still have to step outside of this package to complete the score. Second, many of the clips have a synthesized or electronic tone, thanks to the instruments used to create the music. That’s not out of character for the genre, but I still would have liked some of these to include more natural instruments. In fairness to LucaVFX, if the Mystery & Suspense Library is successful, the company will create more libraries in other genres, including lighter fare.

In conclusion, this is a high-quality library perfectly in keeping with its intended genre. Using it is fast and flexible, making it possible for even the most musically challenged editor to develop a convincing, custom score without breaking the bank.

©2018 Oliver Peters

HDR and RAW Demystified, Part 2

(Part 1 of this series is linked here.) One of the surprises of NAB 2018 was the announcement of Apple ProRes RAW. This brought camera raw video to the forefront for many who had previously discounted it. To understand the ‘what’ and ‘why’ about raw, we first have to understand camera sensors.

For quite some years now, cameras have been engineered with a single CMOS sensor. Most of these sensors use a Bayer-pattern array of photosites – named for Bryce Bayer, the Kodak color scientist who developed the system. Photosites are the light-receiving elements of a sensor. The Bayer pattern is a checkerboard filter that separates light according to red, green, and blue wavelengths, so each photosite captures light as monochrome data that has been separated by color component. In doing so, the camera captures a wide exposure latitude as linear data – greater than can be squeezed into standard video in this native form. There is also a correlation between physical photosite size and resolution. With smaller photosites, more can fit on the sensor, yielding greater native resolution. But with fewer, larger photosites, the sensor has better low-light capabilities. In short, resolution and exposure latitude are a trade-off in sensor design.
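As a rough illustration (assuming the common RGGB variant of the pattern), the checkerboard layout can be sketched like this:

```python
# Minimal sketch of an RGGB Bayer color filter array (CFA).
# Each photosite records only one color component; full RGB is
# reconstructed later by de-Bayering (demosaicing).

def bayer_color(row, col):
    """Return which color an RGGB-pattern photosite samples."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a small patch of the checkerboard pattern.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```

Note that half of the photosites are green – a deliberate bias, since human vision is most sensitive to luminance detail in the green portion of the spectrum.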

Log encoding

Typically, raw data is converted into RGB video by the internal electronics of the camera. It is then subsequently converted into component digital video and recorded using a compressed or uncompressed codec and one of the various color sampling schemes (4:4:4, 4:2:2, 4:1:1, 4:2:0). These numbers express a ratio representing YCbCr – where Y (the first number) is luminance and Cb/Cr (the second two numbers) are color-difference signals used to derive color information. You may also see this written as YUV, Y/R-Y/B-Y, or other forms. In the conversion, sampling, and compression process, some information is lost. For instance, a 4:4:4 codec preserves twice as much color information as a 4:2:2 codec. Two methods are used to preserve wide color gamuts and extended dynamic range: log encoding and camera raw capture.
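The savings from chroma subsampling can be counted directly. This sketch tallies samples over a 4x2-pixel block for each scheme (the helper function name is my own, for illustration only):

```python
# Rough data comparison of YCbCr chroma sampling schemes.
# The ratio J:a:b describes, per J-pixel-wide region over two rows:
# 'a' chroma sample positions on the first row and 'b' on the second.

def samples_per_4x2_block(a, b, j=4):
    luma = j * 2                      # every pixel keeps luminance (Y)
    chroma = (a + b) * 2              # Cb and Cr samples combined
    return luma + chroma

full = samples_per_4x2_block(4, 4)    # 4:4:4 -> 24 samples
print(samples_per_4x2_block(2, 2) / full)  # 4:2:2 -> 2/3 the data
print(samples_per_4x2_block(2, 0) / full)  # 4:2:0 -> 1/2 the data
```

The chroma counts alone (16 for 4:4:4 versus 8 for 4:2:2) show the “twice as much color information” claim directly.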

Most camera manufacturers offer some form of logarithmic video encoding, but the best-known is ARRI’s Log-C. Log encoding applies a logarithm to linear sensor data in order to compress that data into a “curve”, which will fit into the available video signal “bucket”. Log-C video, when left uncorrected and viewed in Rec. 709, will appear to lack contrast and saturation. To correct the image, a LUT (color look-up table) must be applied, which is the mathematical inverse of the process used to encode the Log-C signal. Once restored, the image can be graded to use and/or discard as much of the data as needed, depending on whether you are working in SDR or HDR.
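The general idea of log encoding and its inverse LUT can be sketched with a simple curve – to be clear, this is an illustrative stand-in, not ARRI’s actual Log-C math:

```python
import math

# Illustrative log encode/decode pair -- NOT ARRI's real Log-C
# formula, just the general idea: compress linear sensor values
# into a flat-looking curve, then invert that curve on playback.

A = 5.0  # arbitrary curve-strength constant for this sketch

def log_encode(linear):
    """Map linear light [0..1] onto a flat-looking log curve."""
    return math.log1p(A * linear) / math.log1p(A)

def log_decode(encoded):
    """Mathematical inverse of log_encode: restore linear light."""
    return math.expm1(encoded * math.log1p(A)) / A

x = 0.18  # middle gray in linear light
print(log_encode(x) > x)                     # log lifts the midtones
print(abs(log_decode(log_encode(x)) - x) < 1e-9)  # inverse restores it
```

The round trip is lossless in the math; in practice, precision is limited by the bit depth of the recording, which is exactly the issue the next section covers.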

Remember that the conversion from a flat, log image to full color will only look good when you have sufficient bit-depth precision. This means that if you are working with log material in an 8-bit system, you only have 256 steps between black and white. That may not be enough, and the grade from log to full color may result in banding. If you work in a 10-bit system, then you have 1024 steps instead of 256 between the same black and white points. This greater precision yields smoother transitions in gradients and, therefore, no banding. If you work with ProRes recordings, then according to Apple, “Apple ProRes 4444 XQ and Apple ProRes 4444 support image sources up to 12 bits and preserve alpha sample depths up to 16 bits. All Apple ProRes 422 codecs support up to 10-bit image sources, though the best 10-bit quality is obtained with the higher-bit-rate family members – Apple ProRes 422 and Apple ProRes 422 HQ.”
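A quick back-of-the-envelope sketch of why those step counts matter when a grade stretches part of a flat log signal across the full output range:

```python
# Why bit depth matters when a grade expands a log image: count the
# code values available inside a narrow slice of the signal, since
# only those steps are left to cover the full output range.

def code_values(bit_depth, fraction_of_range):
    """Distinct steps inside some fraction of the full signal range."""
    total_steps = 2 ** bit_depth          # 256 for 8-bit, 1024 for 10-bit
    return int(total_steps * fraction_of_range)

# Suppose a grade stretches 25% of the flat log signal across the
# whole output range -- those few input steps must cover everything.
print(code_values(8, 0.25))   # 64 steps  -> visible banding likely
print(code_values(10, 0.25))  # 256 steps -> much smoother gradients
```

The 25% figure is an assumption chosen for illustration; the exact slice depends on the log curve and the grade, but the 4x precision advantage of 10-bit holds regardless.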

Camera raw

RAW is not an acronym. It’s simply shorthand for camera raw information. Before video, camera raw was first used in photography, typified by Canon raw (.cr2) and Adobe’s Digital Negative (.dng) formats. The latter was released as an open standard and is widely used in video as Cinema DNG.

Camera raw made its first practical appearance in video when RED Digital Cinema introduced the RED ONE camera with REDCODE RAW. While not the first with raw, RED’s innovation was to record a compressed data stream as a movie file (.r3d), which made post-production significantly easier. The key difference between raw and non-raw workflows is that with raw, the conversion into video no longer takes place in the camera or an external recorder. It happens in post. Since the final color and dynamic range data is not “baked” into the file, the post-production process can be improved in future years, making an even better result possible with an updated software version.

Camera raw data is usually proprietary to each manufacturer. In order for any photographic or video application to properly decode a camera raw signal, it must have a plug-in from that particular manufacturer. Some of these are included with a host application and some require that you download and install a camera-specific add-on. Such add-ons or plug-ins are considered to be a software “black box”. The decoding process is hidden from the host application, but the camera supplier will enable certain control points that an editor or colorist can adjust. For example, with RED’s raw module, you have access to exposure, the demosaicing (de-Bayering) resolution, RED’s color science method, and color temperature/tint. Other camera manufacturers will offer less.

Apple ProRes RAW

The release of ProRes RAW gives Apple a raw codec that is optimized for multi-stream playback performance in Final Cut Pro X and on the newest Apple hardware. This is an acquisition codec, so don’t expect to see the ability to export a timeline from your NLE and record it into ProRes RAW. Although I wouldn’t count out a transcode from another raw format into ProRes RAW, or possibly an export from FCPX when your timeline only consists of ProRes RAW content. In any case, that’s not possible today. In fact, you can only play ProRes RAW files in Final Cut Pro X or Apple Motion, but only FCPX displays the correct color information at default settings.

Currently ProRes RAW has only been licensed by Apple to Atomos and DJI. The Atomos Inferno and Sumo 19 units are equipped with ProRes RAW. This is only active with certain Canon, Panasonic, and Sony camera models that can send their raw signal out over an SDI cable. Then the Atomos unit will remap the camera’s raw values to ProRes RAW and encode the file. DJI’s Zenmuse X7 gimbal camera has also been updated to support ProRes RAW. With DJI, the acquisition occurs in-camera, rather than via an external recorder.

Like RED’s REDCODE, Apple ProRes RAW is a variable bit-rate, compressed codec with different quality settings. ProRes RAW and ProRes RAW HQ fall in line with data rates similar to those of ProRes and ProRes HQ. Unlike RED, no controls are exposed within Final Cut Pro X to access specific raw settings. Therefore, Final Cut Pro X’s color processing controls may or may not take effect prior to the conversion from raw to video. At this point, that’s an unknown.

(Read more about ProRes RAW here.)


The main advantage of the shift to using movie file formats for camera raw – instead of image sequence files – is that processing is faster and the formats are conducive to working natively in most editing applications.

It can be argued whether or not there is really much difference in starting with a log-encoded versus a camera raw file. Leading feature films presented at the highest resolutions have originated both ways. Nevertheless, both methods empower you with extensive creative control in post when grading the image. Both accommodate a move into HDR and wider color gamuts. Clearly log and raw workflows future-proof your productions for little or no additional investment.

Originally written for RedShark News.

©2018 Oliver Peters

HDR and RAW Demystified, Part 1

Two buzzwords have been the highlight of many tech shows within this past year – HDR and RAW. In this first part, I will attempt to clarify some of the concepts surrounding video signals, including High Dynamic Range (HDR). In part 2, I’ll cover more about camera raw recordings.

Color space

Four things define the modern video signal: color space (aka color gamut), white point, gamma curve, and dynamic range. The easiest way to explain color space is with the standard triangular plot of the color spectrum, known as a chromaticity diagram. This chart defines the maximum colors visible to most humans when visualized on an x,y grid. Within it are numerous ranges that define a less-than-full range of colors for various standards. These represent the technical color spaces that cameras and display systems can achieve. On most charts, the most restrictive ranges are sRGB and Rec. 709. The former is what many computer displays have used until recently, while Rec. 709 is the color space standard for high definition TV. (These recommendations were developed by the International Telecommunications Union, so Rec. 709 is simply shorthand for ITU-R Recommendation BT.709.)

Next out is P3, a standard adopted for digital cinema projection and, more recently, new computer displays, like those in the Apple iMac Pro. While P3 doesn’t display substantially more color than Rec. 709, colors at the extremes of the range do appear different. For example, the P3 color space will render more vibrant reds with a more accurate hue than Rec. 709 or sRGB. With UHD/4K becoming mainstream, there’s also a push for “better pixels”, which has brought about the Rec. 2020 standard for 4K video. This standard covers about 75% of the visible spectrum, although it’s perfectly acceptable to deliver 4K content that was graded in a Rec. 709 color space. That’s because most current Rec. 2020-compatible displays can’t actually display 100% of the colors defined in the standard yet.

The center point of the chromaticity diagram is white. However, different systems consider a slightly different color temperature to be white. Color temperature is measured in kelvin. Displays are a direct illumination source, and for those, 6500K (more accurately, 6504K) is considered pure white. This is commonly referred to as D-65. Digital cinema, which is a projected image, uses 6300K as its white point. Therefore, when delivering something intended for P3, it is important to specify whether that is P3 D-65 or P3 DCI (digital cinema).

Dynamic range

Color space doesn’t live on its own, because the brightness of the image also defines what we see. Brightness (and contrast) are expressed as dynamic range. Up until the advent of UHD/4K, we had been viewing displays in SDR (standard dynamic range). If you think of the chromaticity diagram as lying flat, with dynamic range as a column extending upward from the chart on the z-axis, then the result can be thought of as a volumetric combination of color space and dynamic range. With SDR, that “column” goes from 0 IRE up to 100 IRE (also expressed as 0-100 percent).

Gamma is the function that changes linear brightness values into the weighted values delivered to our screens. It maps a numerical pixel value to its actual displayed brightness. By increasing or decreasing gamma values you are, in effect, bending the straight line between the darkest and lightest values into a curve. This changes the midtones of the displayed image, making it appear darker or lighter. Gamma values are applied to both the original image and to the display system. When they don’t match, you run into situations where the image looks vastly different when viewed on one system versus another.
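As a simple sketch, gamma can be modeled as a power function applied to normalized pixel values (real display transfer functions, like sRGB’s, add a linear toe segment, but the power curve captures the idea):

```python
# Gamma as a simple power function: bending the straight line
# between black (0.0) and white (1.0) to shift the midtones.

def apply_gamma(value, gamma):
    """Map a normalized pixel value through a gamma curve."""
    return value ** (1.0 / gamma)

mid = 0.5
print(apply_gamma(mid, 1.0))   # 0.5   -- linear, no change
print(apply_gamma(mid, 2.2))   # ~0.73 -- midtones lifted (brighter)
# Endpoints are unaffected; only the curve between them bends:
print(apply_gamma(0.0, 2.2), apply_gamma(1.0, 2.2))  # 0.0 1.0
```

This is why a mismatched gamma shows up as an image that looks too dark or too washed out: the midtones land in the wrong place while black and white stay put.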

With the advent of UHD/4K, users have also been introduced to HDR (high dynamic range), which allows us to display brighter images and recover the overshoot elements in a frame, like bright lights and reflections. It is important to understand that HDR video is not the same as HDR photography. HDR photos are created by capturing several bracketed exposures of the same image and then blending those into a composite – either in-camera or via software, like Photoshop or Lightroom. HDR photos often yield hyper-real results, such as when high-contrast sky and landscape elements are combined.

HDR video is quite different. HDR photography is designed to work with existing technology, whereas HDR video actually takes advantage of the extended brightness range made possible in new displays. It is also only visible with the newest breed of UHD/4K TV sets that are HDR-capable. Display illumination is measured in nits. One nit equals one candela per square meter – in other words, the light of a single candle spread over a square meter. SDR displays have been capable of up to 100 nits. Modern computer displays, monitors, and consumer television sets can now display brightness in the range of 500 to 1,000 nits and even brighter. Anything over 1,000 nits is considered HDR. But that’s not the end of the story, as there are currently four competing standards: Dolby Vision, HDR10, HDR10+, and HLG. I won’t get into the weeds about the specifics of each, but they all apply different peak brightness levels and methods. Their nit levels range from 1,000 up to Dolby Vision’s theoretical limit of 10,000 nits.

Just because you own a high-nits display doesn’t mean you are seeing HDR. It isn’t simply turning up the brightness “to 11”, but rather providing the headroom to extend the parts of the image that exceed the normal range. These peaks can now be displayed with detail, without compressing or clipping them, as we do now. When an HDR master is created, metadata is stored with the file that tells the display device that the signal is an HDR signal and to turn on the necessary circuitry. That metadata is carried over HDMI. Therefore, every device in the playback chain must be HDR-capable.

HDR also means more hardware in order to work with it accurately. Although you may have grading software that accommodates HDR – and a 500-nit display, like the one in an iMac Pro – you can’t effectively see HDR in order to properly grade it. That still requires proper capture/playback hardware from Blackmagic Design or AJA, along with a studio-grade, external HDR monitor.

Unfortunately, there’s one dirty little secret with HDR: monitors and TV sets cannot display a full-screen image at maximum brightness. You can’t display a totally white background at 1,000 nits on a 1,000-nit display. These displays employ gain circuitry to darken the image in those cases. The responsiveness of any given display model will vary widely, depending on how much of the screen is at full brightness and for how long. No two models will be at exactly the same brightness for any given percentage of the screen at peak level.

Today HDR is still the “wild west” and standards will evolve as the market settles in on a preference. The good news is that cameras have been delivering content that is “HDR-ready” for several years. This brings us to camera raw and log encoding, which will be covered in Part 2.

(Here is some additional information from SpectraCal and AVForums.)

Originally written for RedShark News.

©2018 Oliver Peters