CoreMelt PaintX

When Apple launched Final Cut Pro X, it was with a decidedly simplified set of video effects. This was offset by how easily users could create their own custom effects, using Apple Motion as a development platform. The result has been an entirely new ecosystem of low-cost, high-quality video effects. As attractive as that is, truly advanced visual effects still require knowledgeable plug-in developers who can work within the FCPX and macOS architecture to produce more powerful tools. For example, built-in visual effects tools on the order of Avid Media Composer's Intraframe Paint or the Fusion page in DaVinci Resolve simply aren't within the scope of FCPX, nor of what users can create on their own through Motion templates.

To fill that need, developers like CoreMelt have been designing a range of advanced visual effects tools for the Final Cut Pro market, including effects for tracking, color correction, stabilization, and more. Their newest release is PaintX, which adds a set of Photoshop-style tools to Final Cut Pro X. As with many of CoreMelt’s other offerings, PaintX includes planar tracking, thanks to the licensing of Mocha tracking technology.

To start, drop the PaintX effect onto a clip and then launch the custom interface. PaintX requires a more elaborate control layout than the standard FCPX user interface was designed to provide. Once inside the PaintX interface window, you have a choice of ten brush functions: paint color, change color, blur, smear, sharpen, warp, clone, add noise, heal, and erase. These functions cover a range of needs, from simple wire removal to beauty enhancements and even pseudo horror makeup effects. You have control over brush size, softness, aspect ratio, angle, and opacity. The various brushes also have specific controls for their related functions, such as the blur range for the blur brush. Effects are applied in layers and actions – each stroke is an action – and both remain editable. If you aren't the most precise artist, then the erase brush comes in handy. Did you color a bit too far outside of the lines? Simply use the erase brush on that layer and trim back your excess.

Multiple brush effects can be applied to the same or different areas within the image, simply by adding a new layer for each effect. Once you've applied the first paint stroke, an additional brush control panel opens, allowing you to edit the brush parameters after the fact. So, if your brush size was too large or not soft enough, simply alter those settings without the need to redo the effect. Each effect can be individually tracked in either direction. The Mocha tracker offers additional features, such as transform (scale/position) versus perspective tracking, along with the ability to copy and paste tracking data between brush layers.

As a Final Cut Pro X effect, PaintX works within the standard video pipeline. If you applied color correction upstream of your PaintX filter, then that grade is visible within the PaintX interface. But if the color correction is applied downstream of the PaintX effect, you won’t see it when you open the PaintX interface. However, that correction will still be uniformly applied to the clip, including the areas altered within the PaintX effect. If you’ve “punched into” a 4K clip on an HD timeline, when you open PaintX, you’ll still see the full 4K frame. Finally, you have additional FCPX control over the opacity and mix of the applied PaintX filter.

I found PaintX to be well-behaved even on a modest Mac, like my 3-year-old laptop. However, if you don't have a beefy Mac, keep the effect simple. The more brush effects that you apply and track in a single clip, the slower the real-time response will become, especially on under-powered machines. These effects are GPU-intensive and paint strokes are really a particle system; therefore, simple, single-layer effects are the easiest on the machine. But if you intend to do more complex effects, like blurs and sharpens in multiple layers, then you will really want one of the more powerful Macs. Playback response is generally better once you've saved the effect and exited back to Final Cut. I did run into one minor issue with the clone brush on a single isolated clip while using a 2013 Mac Pro – a model that has been notorious for GPU issues with video effects. CoreMelt told me there have been a few early bugs with certain GPUs and is looking into the anomaly I discovered. (Update: CoreMelt sent me a new build, which has corrected this problem.)

Originally written for RedShark News

©2018 Oliver Peters

Hawaiki AutoGrade

The color correction tools in Final Cut Pro X are nice. Adobe’s Lumetri controls make grading intuitive. But sometimes you just want to click a few buttons and be happy with the results. That’s where AutoGrade from Hawaiki comes in. AutoGrade is a full-featured color correction plug-in that runs within Final Cut Pro X, Motion, Premiere Pro and After Effects. It is available from FxFactory and installs through the FxFactory plug-in manager.

As the name implies, AutoGrade is an automatic color correction tool designed to simplify and speed up color correction. When you install AutoGrade, you get two plug-ins: AutoGrade and AutoGrade One. The latter is a simple, one-button version, based on global white balance. Simply use the color-picker (eye dropper) to sample an area that should be white. Enable the correction and the overall color balance is fixed. You can then tweak further by boosting the correction, adjusting the RGB balance sliders, and/or fine-tuning luma level and saturation. Nearly all parameters are keyframeable, and looks can be saved as presets.
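To make the one-button idea concrete, here is a minimal sketch of the classic "white-patch" balance that this style of tool is built around – scale the red and blue channels so the sampled area becomes neutral. This illustrates the general technique only, not Hawaiki's actual algorithm, and the names and values are hypothetical.

```python
# White-patch balance sketch -- not Hawaiki's actual math.
def auto_white_balance(image, sample_rgb):
    """Scale R and B so the sampled 'should be white' patch reads neutral."""
    r, g, b = sample_rgb
    gain_r = g / r                    # boost or cut red to match green
    gain_b = g / b                    # likewise for blue
    return [(min(pr * gain_r, 1.0), pg, min(pb * gain_b, 1.0))
            for (pr, pg, pb) in image]

# A warm cast: the sampled white reads high in red and low in blue.
frame = [(0.9, 0.8, 0.6), (0.45, 0.4, 0.3)]
print(auto_white_balance(frame, sample_rgb=(0.9, 0.8, 0.6)))
# Both pixels come back neutral (R = G = B, within float precision).
```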

AutoGrade One is just a starter, though, for simple fixes. The real fun is with the full version of AutoGrade, which is a more comprehensive color correction tool. Its interface is divided into three main sections: Auto Balance, Quick Fix, and Fine-Tune. Instead of a single global balance tool, the Auto Balance section permits global correction, as well as any combination of white, black, and/or skin correction. Simply turn on one or more desired parameters, sample the appropriate color(s), and enable Auto Balance. This tool will also raise or lower luma levels for the selected tonal range.

Sometimes you might have to repeat the process if you don’t like the first results. For example, when you sample the skin on someone’s face, sampling rosy cheeks will yield different results than if you sample the yellowish highlights on a forehead. To try again, just uncheck Auto Balance, sample a different area, and then enable Auto Balance again. In addition to an amount slider for each correction range, you can also adjust the RGB balance for each. Skin tones may be balanced towards warm or neutral, and the entire image can be legalized, which clamps video levels to 0-100.

Quick Fix is a set of supplied presets that work independently of the color balance controls. These include some standards, like cooling down or warming up the image, the orange and teal look, adding an s-curve, and so on. They are applied at 100%, which to my eye felt a bit harsh as a default. To tone down the effect, simply lower the amount slider.

Fine-Tune rounds it out when you need to take a deeper dive. This section is built as a full-blown, 3-way color corrector. Each range includes a luma control and three color offset controls. Instead of wheels, these controls are sliders, but the results are the same as with wheels. In addition, you can adjust exposure, saturation, vibrance, temperature/tint, and even two different contrast controls. One innovation is a log expander, designed to make it easy to correct log-encoded camera footage in the absence of a specific log-to-Rec709 camera LUT.

Naturally, any plug-in could always offer more, so I have a minor wish list. I would love to see five additional features: film grain, vignette, sharpening, blurring/soft focus, and a highlights-only expander. There are certainly other individual filters that cover these needs, but having it all within a single plug-in would make sense. This would round out AutoGrade as a complete, creative grading module, servicing user needs beyond just color correction looks.

AutoGrade is a deceptively powerful color corrector, hidden under a simple interface. User-created looks can be saved as presets, so you can quickly apply complex settings to similar shots and set-ups. There are already many color correction tools on the market, including Hawaiki's own Hawaiki Color. But the price is very attractive, making AutoGrade a superb tool to have in your kit – a fast way to grade that's ideal for users both new to and experienced with color correction.

©2018 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple's new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out their REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with other applications, with the exception of Motion, which will read and play the files, but with incorrect – albeit correctable – default video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Shogun Inferno or Sumo 19 monitor/recorder, or in-camera with DJI's Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don't get the type of specific raw controls made available for image tweaking, as you do with RED. But ProRes RAW will cover the needs of most camera raw users, making this the raw codec "for the rest of us". At least that's what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos' firmware is taking in the camera's form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports up to 12-bit color depth, but the depth you actually get depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that's the bit-depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered so that each one registers light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range that the camera's exposure range and sensor allow. It's just an electrical signal being turned into data, without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can also be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels, then the sensor color information is said to be "baked" into the file. However, if the raw photosite data is stored as-is and only converted to RGB in post, the sensor data is preserved intact until much later in the post process. Basically, the choice boils down to whether that conversion is best performed within the camera's electronics or later via post-production software.
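For illustration, here is a toy version of that RGB conversion. Each 2x2 RGGB tile on the sensor holds one red, two green, and one blue photosite; de-Bayering interpolates those into full-color pixels. Real de-Bayering is far more sophisticated (edge-aware interpolation, multiple quality levels), so treat this strictly as a sketch of the concept.

```python
# Toy de-Bayer of a single RGGB tile -- real demosaic algorithms are far
# more sophisticated, but the raw-vs-baked tradeoff is the same: once this
# runs, the original photosite values are gone from the output file.
def demosaic_tile(tile):
    """tile = [[R, G], [G, B]] linear photosite values -> one RGB pixel."""
    r = tile[0][0]
    g = (tile[0][1] + tile[1][0]) / 2.0   # two green sites per tile
    b = tile[1][1]
    return (r, g, b)

raw_tile = [[0.62, 0.50], [0.54, 0.33]]   # linear sensor readings
print(demosaic_tile(raw_tile))            # -> (0.62, 0.52, 0.33)
```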

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that a wider dynamic range is being lost. That's because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.
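To see why log encoding preserves latitude, consider a generic log curve – not any manufacturer's actual formula – that gives every stop of exposure an equal slice of the code range, instead of spending half the range on the single brightest stop the way linear encoding does:

```python
import math

# Generic log encode (illustrative only -- not Log-C, S-Log, or any real
# curve): map linear values, with 1.0 as mid grey, into 0..1 code values
# covering +/- stops/2 around grey. Each stop gets an equal slice.
def log_encode(linear, stops=14.0):
    if linear <= 0:
        return 0.0
    code = (math.log2(linear) + stops / 2) / stops
    return min(max(code, 0.0), 1.0)

for ev in (-6, -3, 0, 3, 6):              # stops relative to mid grey
    print(ev, round(log_encode(2.0 ** ev), 3))
# -6 -> 0.071, 0 -> 0.5, +6 -> 0.929: deep shadows and bright highlights
# both survive into a 10-bit recording.
```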

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”
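A toy model makes the white paper's distinction easy to see. Under rate control, per-frame quality floats so the data rate stays fixed; under ProRes RAW's constant-quality approach, quality stays fixed and busy frames simply cost more bits. The numbers here are invented purely for illustration:

```python
# Invented complexity values -- just to show the two encoding policies.
frame_complexity = [1.0, 1.2, 3.5, 3.4, 0.8]

TARGET_BITS = 2.0     # rate control: fixed bit budget per frame
BASE_QUALITY = 1.0    # constant quality: fixed quality per frame

for c in frame_complexity:
    rc_quality = TARGET_BITS / c        # quality drops on busy frames
    cq_bits = BASE_QUALITY * c          # bits rise on busy frames
    print(f"complexity {c}: rate-controlled quality {rc_quality:.2f}, "
          f"constant-quality bits {cq_bits:.2f}")
# Detailed or noisy frames cost quality under rate control, but cost file
# size under a constant-quality design like ProRes RAW.
```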

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That's part of the file's metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec709 color space, not HDR or wide gamut. If you set the inspector's LUT tab to "none" in either SDR or HDR, you get a relatively flat, log image that's easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.
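Since re-rating is just metadata, the math behind it is nothing more than a gain change on the linear data – one multiplication per stop. A hypothetical sketch:

```python
# Re-rating EI in post is a gain change on linear values: one stop = 2x.
# Values below are hypothetical; no new sensor information is created.
def rerate(linear_value, old_ei, new_ei):
    return linear_value * (new_ei / old_ei)

shadow, highlight = 0.02, 0.7
print(rerate(shadow, 800, 1600))      # 0.04 -- see further into the shadows
                                      # (along with more visible noise)
print(rerate(highlight, 800, 1600))   # 1.4  -- highlight detail pushed toward
                                      # clipping; rate down to recover it
```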

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control of the image prior to tweaking with FCPX's controls. However, the amount of image control for the raw file is significantly greater for a REDCODE file in Premiere Pro than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether the slider lives in a specific camera raw module or in the app's own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app's color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file.  When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.

(Images courtesy of Apple, FilmPlusGear, and OffHollywood.)

©2018 Oliver Peters

Luca Visual FX builds Mystery & Suspense

For most editors, creating custom music scores tends to fall into the “above my pay grade” category. If you are a whizz with GarageBand or Logic Pro X, then you might dip into Apple’s loop resources. But most commercials and corporate videos are easily serviced by the myriad of stock music sites, like Premium Beat and Music Bed. Some music customization is also possible with tracks from companies like SmartSound.

Yet, none of the go-to music library sites offer curated, genre-based packages of tracks and elements that make it easy to build up a functional score for longer dramatic productions. Such a score is usually the work of a composer – or of a music supervisor, sound designer, or music editor doing a lot of searching and piecing together from a wide range of resources.

Enter Luca Visual FX – a developer best known for visual effects plug-ins, such as Light Kit 2.0. It turns out that Luca Bonomo is also a composer. The first offering is the Mystery & Suspense Music and Sound Library, which is a collection of 500 clips, comprising music themes, atmospheres, drones, loops, and sound effects. This is a complete toolkit designed to make it easy to combine elements, in order to create a custom score for dramatic productions in the mystery or suspense genre.

These tracks are available for purchase as a single library through the LucaVFX website. They are downloaded as uncompressed, stereo AIF files in 24-bit/96kHz resolution. This means they are of top quality and compatible with any Mac or PC NLE or DAW application. Best yet is the awesome price of $79. The package is licensed for a single user and may be used for any audio or video production, including for commercial purposes.

Thanks to LucaVFX, I was able to download and test out the Library on a recent short film. The story is a suspense drama in the style of a Twilight Zone episode, so creating a non-specific, ethereal score fits perfectly. Drones, dissonance, and other suspenseful sounds are completely in line, which is where this collection shines.

Although I could have used any application to build this, I opted for Apple's Final Cut Pro X. Because of its unique keyword structure, it made sense to first set up a separate FCPX library for only the Mystery & Suspense package. During import, I let FCPX create keyword collections based on the Finder folders. This keeps the Mystery & Suspense FCPX library organized the same way the clips were originally grouped. Doing so facilitates fast and easy sorting and previewing of any of the 500 clips within the music library. Then I created a separate FCPX library for the production itself. With both FCPX libraries open, I could quickly preview clips from my music library and place them into the film's edit sequence, located within the other FCPX library.

Final Cut uses Connected Clips instead of tracks. This means that you can quickly build up and align overlapping atmospheres, transitions, loops, and themes for a densely layered music score in a very freeform manner. I was able to build up a convincing score for a half-hour-long piece in less than an afternoon. Granted, this isn't mixed yet, but at least I now have the musical elements that I want, where I want them. I feel that this style of working is definitely faster in Final Cut Pro X – and more conducive to creative experimentation – but it would certainly work just as well in other applications.

The Mystery & Suspense Library is definitely a winner, although I do have a few minor quibbles. First, the music and effects are in keeping with the genre, but don’t go beyond it. When creating a score for this kind of production, you also need some “normal” or “lighter” moods for certain scenes or transitions. I felt that was missing and I would still have to step outside of this package to complete the score. Secondly, many of the clips have a synthesized or electronic tone to them, thanks to the instruments used to create the music. That’s not out of character with the genre, but I still would have liked some of these to include more natural instruments than they do. In fairness to LucaVFX, if the Mystery & Suspense Library is successful, then the company will create more libraries in other genres, including lighter fare.

In conclusion, this is a high quality library perfectly in keeping with its intended genre. Using it is fast and flexible, making it possible for even the most musically-challenged editor to develop a convincing, custom score without breaking the bank.

©2018 Oliver Peters

FCPX Color Wheels Take 2

Prior to version 10.4, the color correction tools within Final Cut Pro X were very basic. You could get a lot of work done with the color board, but it just didn’t offer tools competitive with other NLEs – not to mention color plug-ins or a dedicated grading app like DaVinci Resolve. With the release of 10.4, Apple upped the game by adding color wheels and a very nice curves implementation. However, for those of us who have been doing color correction for some time, it quickly became apparent that something wasn’t quite right in the math or color science behind these new FCPX color wheels. I described those anomalies in this January post.

To summarize that post, the color wheels tool seems to have been designed according to the lift/gamma/gain (LGG) correction model. The standard behavior for LGG is evident with a black-to-white gradient image. On a waveform display, this appears as a diagonal line from 0 to 100. If you adjust the highlight control (gain), the line appears to be pinned at the bottom, with the higher end pivoting up or down as you shift the slider. Likewise, the shadow control (lift) leaves the line pinned at the top with the bottom half pivoting. The midrange control (gamma) bends the middle section of the line inward or outward, with no effect on the two ends, which stay pinned at 0 and 100, respectively. In addition to luminance behavior, when you shift a hue offset to an extreme – like moving the midrange puck completely to yellow – you should still see some remaining black and white at the two ends of the gradient.
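That pinning behavior falls straight out of the arithmetic. Here is a minimal LGG corrector applied to a ramp, using one common formulation (actual implementations differ in the details):

```python
# Minimal lift/gamma/gain on a 0-1 ramp -- one common formulation, not
# Apple's actual math. Gain pivots at black, lift pivots at white, and
# gamma bends the middle while both ends stay pinned.
def lgg(x, lift=0.0, gamma=1.0, gain=1.0):
    y = gain * x + lift * (1.0 - x)
    return max(y, 0.0) ** (1.0 / gamma)

ramp = [i / 10.0 for i in range(11)]                 # black-to-white ramp
print([round(lgg(x, gamma=1.5), 2) for x in ramp])   # middle rises; 0 and 1 hold
print([round(lgg(x, lift=0.2), 2) for x in ramp])    # bottom lifts; top stays 1.0
print([round(lgg(x, gain=0.8), 2) for x in ramp])    # top drops; black stays 0
```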

That’s how LGG is supposed to work. In FCPX version 10.4, each color wheel control also altered the levels of everything else. When you adjusted midrange, it also elevated the shadow and highlight ranges. In the hue offset example, shifting the midrange control to full-on yellow tinted the entire image to yellow, leaving no hint of black or white. As a result, the color wheels correction tool was unpredictable and difficult to use, unless you were doing only very minor adjustments. You ended up chasing your tail, because when one correction was made, you’d have to go back and re-adjust one of the other wheels to compensate for the unwanted changes made by the first adjustment.

With the release of FCPX 10.4.1 this April, Apple engineers have changed the way the color wheels tool behaves. Corrections now correspond to the behavior that everyone accepts as standard LGG functionality. In other words, the controls mostly only affect their part of the image without also adjusting all other levels. This means that the shadows (lift) control adjusts the bottom, highlights (gain) will adjust the top end, and midrange (gamma) will lighten or darken the middle portion of the image. Likewise, hue offsets don’t completely contaminate the entire image.

One important thing to note is that existing FCPX Libraries created or promoted under 10.4 will be promoted again when opened in 10.4.1. So that your color wheel corrections don't change to something unexpected when promoted, Projects in these Libraries will continue to behave according to the previous FCPX 10.4 color model. This means that the look of clips where color wheels were used – and their color wheel values – hasn't changed. More importantly, the wheels inside those Libraries will also behave according to the "old" way, should you make any further corrections. The new color wheels behavior only begins with new Libraries created under 10.4.1.

©2018 Oliver Peters

What’s up with Final Cut’s Color Wheels?

NOTE: The information presented here has been superseded by the release of FCPX 10.4.1 in April 2018. With that release the color wheels model has been changed. Please read the linked blog post for updated information.

Apple Final Cut Pro X 10.4 introduced new, advanced color correction tools to this editing application, including color wheels, curves, and hue vs. saturation curves. These are tools that users of other NLEs have enjoyed for some time – and which were part of Final Cut Studio (FCP 7, Color). Like others, my first reaction was, "Super! They've added some nice advanced tools, which will improve the use of FCPX for higher-end users." But as I started to use the Color Wheels for real correction work, I quickly realized that something wasn't quite right in how they operated. Or at least, they didn't work in a way that we've come to understand.

In trying to figure it out, I reached out to other industry pros and developers for their thoughts. Naturally this led to some spirited discussions at forums like those at Creative COW. However, other editors have noticed the same problems, so you can also find threads in the Facebook FCPX group and at FCP.co. It is certainly easy to characterize this as just another internet kerfuffle surrounding Apple's "think different" approaches to FCPX. But those arguments fall flat when you actually try to use the tools as intended.

The FCPX Color Wheels panel includes four wheels – Master, Shadows, Midtones, and Highlights. The puck in the center of each wheel is a hue offset control to push hues in the direction that you move the puck. The slider to the right of the wheel controls the brightness of that range. The left slider controls the saturation. One of the main issues is that when you adjust luminance using one of these controls, the affected range is too broad. Specifically, in the case of the Midtones control, as you adjust the luminance slider up or down, you are affecting most of the image and not just the midrange levels. This is not the way this type of control normally works in other tools, and in fact, it’s not how FCPX’s Color Board controls work either.

“What’s the big deal?” you might ask. Fair enough. I see two operational issues. The first is that to properly grade the image using the Color Wheels, you end up having to go back-and-forth a lot between wheels, to counteract the changes made by one control with another. The second is that using the Midtones slider tends to drive highlights above 100 IRE, where they will be clipped if any broadcast limiting is used. This doesn’t happen with other color tools, notably Apple’s own Color Board.

A lot of the discussion focuses on luma levels and specifically the Midtones slider, since it's easy to see the issue there. Other controls are also affected, but that's too much to dissect in a single post. I have compared the effect using five different tools – the Color Board, the Color Wheels, a color corrector plug-in that I built as a Motion template using Motion effects, Rubber Monkey Software FilmConvert (the wheels portion only), and finally, the Adobe Lumetri controls in Premiere Pro.

I am using three different test images – a black-to-white ramp, a test pattern, and a demo video image. The ramp without correction will appear as a diagonal line (0-100 IRE) on the scope, which makes it easy to analyze what’s happening. The video image has definite shadow and highlight areas, which lets us see how these controls work in the real world. For example, if you want to brighten the area of the shot where the man is in the shadows, but don’t want to make the highlights any brighter, this would normally be done using a Midtones control. Be aware that these various tools certainly aren’t calibrated the same way and some have a greater range of control than others. The weakest of these is FilmConvert’s wheels, since this plug-in has additional level controls in other parts of its interface.

Color science models

In the various forum threads, the argument is made that Apple is simply using a different color science method or a different weighting of some existing models. That's certainly possible, since not all color correctors are built the same way. The most common approaches are Lift/Gamma/Gain and Shadows/Mids/Highlights. Be careful with naming. Just because something uses the terminology of Shadows, Midtones, and Highlights does not mean that it also uses the SMH color science model. Many tools use the Lift/Gamma/Gain model, but in fact call the controls shadows (Lift), mids (Gamma), and highlights (Gain). Another term you may run across is Set-up in some correction tools. This is typically used for control of shadows (equal to Lift), but can also function as an offset control that raises the level of the entire image. Avid Symphony employs this solution. Finally, both Symphony and Adobe SpeedGrade use what has been dubbed a 12-way color corrector. Each range is further subdivided into its own subset of shadows, mids, and highlights controls.

An LGG model provides broad control of shadows and highlights, with the midtones control working like a curve that covers the whole range, but with the largest effect in the middle. An SMH model normally divides the levels into three distinct, precisely overlapping ranges. This is much like a three-band audio equalizing filter. A number of the color correctors add a luma range control, which gives the user the ability to change how much of the image a specific range will affect. In other words, how broad is the control of the shadows, mids, or highlights control? This is like a Q control in an audio equalizer, where you change the shape of the envelope at a certain frequency.

Red Giant’s Magic Bullet Looks offers both color correction models with two different tools – the 4-way color corrector (SMH) and the Colorista color corrector (LGG). When you adjust the midrange control of their 4-way, the result is a graceful S-shaped curve to the levels on the waveform.
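For contrast with the LGG sketch earlier, here is a toy take on the SMH idea, where three overlapping weight curves decide how strongly each control touches a given level – like bands in an audio EQ. Real implementations use smoother curves plus the adjustable range ("Q") controls described above; the triangular weights here are only for shape:

```python
# Toy SMH weighting -- illustrative shape only, not any product's curves.
def smh_weights(x):
    shadows = max(0.0, 1.0 - 2.0 * x)        # full at black, gone by mid grey
    highlights = max(0.0, 2.0 * x - 1.0)     # zero until mid grey, full at white
    mids = 1.0 - shadows - highlights        # peaks in the middle
    return shadows, mids, highlights

def smh_adjust(x, s=0.0, m=0.0, h=0.0):
    ws, wm, wh = smh_weights(x)
    return x + ws * s + wm * m + wh * h

# Raising the mids bulges the centre of the ramp into the gentle S-like
# curve described for Magic Bullet's 4-way, while 0.0 and 1.0 stay put.
print([round(smh_adjust(i / 10.0, m=0.15), 2) for i in range(11)])
```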

To study the effect of an LGG-based corrector, test the ramp. The shadows control (Lift) will raise or lower the dark areas of the image without changing the absolute highlights. The diagonal line of the ramp on the waveform essentially pivots, hinged at the 100 IRE point. Conversely, changing the highlights control (Gain) pivots the line around the point pinned at 0 IRE (black). When you adjust the midtones control (Gamma), you create a curve in the line, which stays pinned at 0 and 100 IRE at either end. In this way you are effectively "expanding" or "compressing" the levels in the middle portion of your image without changing the position of your black or white points.

How the various color correction tools react

Looking at the luma control for the Midtones, two things are clear. First, all of these tools are using the LGG color science model. It's not clear what the Color Wheels are using, but it isn't SMH, as there is no bulge or S-curve visible in the scope. Second, the Color Wheels quickly drive the image levels into clipping, while the other tools generally keep black and white levels in place. In essence, the Midtones control affects the image more like a master or offset control would, than a typical mids or Gamma control. Yet, clearly Apple's Color Board controls adhere to the standard LGG model. The concern, of course, is clipping. In the test image of the man walking on the village street, the sunlit building walls on the opposite side of the street will become overexposed and risk being clipped when the Color Wheels are used.

What about color? As a simple test, I next shifted the Midtones puck to the yellow. Bear in mind that the range of each of these controls is different, so you will see varying degrees of yellow intensity. Nevertheless, the way the control should work is that some pure black and white should be preserved at the top and bottom of the video levels. All of these tools maintain that, except for the Color Wheels. There, the entire image is yellow, effectively making the hue offset puck function more like a tint control.

One other issue to note is that the Color Wheels offer an extraordinarily wide control range. The hue offset control's RGB intensity values go from 0 (center of the wheel) to 1023. However, the puck icon can only go to the rim of the wheel, which it hits at about 200. With a mouse (or numerical entry), you can keep going well past the stop of the wheel icon – five times farther, in fact. The image not only becomes very yellow in this case, but you can easily lose the location of your control, since the GUI position is no longer relevant.

The working theory

The big question is why don’t the Color Wheels conform to established principles, when in fact, the Color Board controls do? Until there is some further clarification from Apple, one possible explanation is with HDR. FCPX 10.4 introduced High Dynamic Range (HDR) features. One of the various HDR standards is Rec. 2020 PQ. In that color space, the 0-100 IRE limitations of Rec. 709 are expanded to 0-10,000 nits. 0-100 nits is roughly the same brightness as we are used to with Rec. 709.

Looking at this image of the man walking along the street – where I’ve attempted to get a pleasing look with all of the tools – you’ll see that the Color Wheels in Rec. 709 don’t react correctly and will drive the highlights into a range to be clipped. However, in the bottom pane, which is the same image in Rec. 2020 PQ color space, the grade looks pretty normal. And, in practice, the Color Wheels controls work more or less the way I would have expected them to work. Yes, the same controls work differently in the different color spaces – properly in 2020 PQ and not in 709.

But why is that the case? I have no answer, but I do have a wild guess. Maybe, just maybe, the Color Wheels were designed for – or intended to only be used for – HDR work. Or maybe there’s conversion or recalibration of the controls that hasn’t taken place yet in this version. If the tool is only calibrated for HDR, then its range and weighing will be completely wrong for Rec. 709 video. If you increase the Midtones luma of the ramp in both Rec. 709 and Rec. 2020 PQ, you’ll see a similar curve. In fact, if you overlay a screen shot of each waveform, placing the full Rec. 709 scope image over the bottom portion of the Rec. 2020 PQ scale, you’ll notice that these sort of align up to about 100 IRE and nits. It’s as if one is simply a slice out of the other.

Regardless of why, this is something where I would hope Apple will provide a white paper or other demonstration of what the best practices will be for using this tool effectively. If it isn’t intentional, and actually is a mistake, then I presume a fix will be forthcoming. In either case, put in your feedback comments to Apple.

A word about HDR

Over the course of testing this tool and this theory, I’ve done a bit of testing with the HDR color spaces in FCPX. If you want to know more about HDR, I would encourage you to check out these contrary blog posts by Stu Maschwitz and Alexis Van Hurkman. I tend to side with Stu’s point-of-view and am not a big fan of HDR.

The way Apple has implemented these features in Final Cut Pro X 10.4 is to allow the user to set and override color spaces. If you set up your project to be Rec. 2020 PQ (and set preferences to "show HDR as raw values"), then the viewer and a/v output (direct from the Mac, not through a hardware i/o device) are effectively dimmed through the Mac's color profile system. When you grade the image based on the 0-10,000 nits scale, you'll end up seeing an image that looks pleasing and essentially the same as if you were working in Rec. 709. However – and I cannot over-emphasize this – you are not going to be able to produce an image that's truly compatible with Dolby Vision and that actually looks correct as HDR, unless you have the correct AJA i/o hardware and a proper display. And by display, I mean a top-end Dolby, Canon, or Sony unit, costing tens of thousands of dollars.

As I understand the PQ specs, the bulk of the higher range is for the highlights that are normally constrained or clipped in our current video systems. However, that 10,000 nits scale is weighted, so that about 50% of the signal value is in the first 100 nits, making it of comparable brightness to the current 100 IRE. The rest of the range is for brighter information, like specular highlights. You don't necessarily get more brightness in the shadow detail. Therefore, if you are grading a shot in FCPX in a 2020 PQ color space and you only have the computer display to go by, you'll grade by eye as much as by scope. This means that to get a pleasing image, you will end up making the average appearance of the image brighter than it really should be. When this is viewed on a real HDR monitor, it will be painfully bright. Having a higher-nits computer display, like on the iMac Pro (up to 500 nits), won't make much difference, unless maybe you crank the display brightness to its maximum (ouch!). "Mine goes to 11!"
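You can check that weighting directly with the ST 2084 (PQ) inverse EOTF, which maps absolute luminance in nits to a normalized 0-1 signal. A quick sketch using the published constants:

```python
# ST 2084 (PQ) inverse EOTF: absolute nits -> normalized 0..1 signal.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

for level in (1, 100, 1000, 10000):
    print(level, "nits ->", round(pq_encode(level), 3))
# 100 nits -- roughly all of Rec. 709's brightness -- already lands at a
# signal value of about 0.51; the entire top half is reserved for highlights.
```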

Right now, HDR is the wild, wild west. If you are smart, you’ll realize that you don’t know what you don’t know. While it’s nice to have these new features in FCPX, they can be very dangerous in the wrong hands.

But that’s another matter. Right now, I just hope Apple (or one of the usual suspects, like Ripple Training, LumaForge, or Larry Jordan) will come out with more elaboration on the Color Wheels.

©2018 Oliver Peters

Putting Apple’s iMac Pro Through the Paces

At the end of December, Apple made good on the release of the new iMac Pro and started selling and shipping the new workstations. While this could be characterized as a stop-gap effort until the next generation of Mac Pro is produced, that doesn’t detract from the usefulness and power of this design in its own right. After all, the iMac line is the direct descendant in spirit and design of the original Macintosh. Underneath the sexy, all-in-one, space grey enclosure, the iMac Pro offers serious workstation performance.

I work mostly these days with a production company that produces and posts commercials, corporate videos, and entertainment programming. Our editing set-up consists of seven workstations, plus an auxiliary machine, connected to a common QNAP shared storage network. These edit stations were a mix of old and new Mac Pros and iMacs (connected via 10GigE), with a Mac Mini for the auxiliary (1GigE). It was time to upgrade the oldest machines, which led us to consider the iMac Pros. The company picked up three of them – replacing two Mac Pro towers and an older iMac. The new configuration is a mix of three one-year-old Retina 5K iMacs (late 2015 model), a 2013 "trash can" Mac Pro, and three 2017 iMac Pros.

There are plenty of videos and articles on the web about how these machines perform; but, the testers often use artificial benchmarks or only Final Cut Pro X. This shop has a mix of NLEs (Adobe, Apple, Avid, Blackmagic Design), but our primary tool is Adobe Premiere Pro CC 2018. This gave me a chance to compare how these machines stacked up against each other in the kind of work we actually do. This comparison isn’t truly apples-to-apples, since the specs of the three different products are somewhat different from each other. Nevertheless, I feel that it’s a valid real-world assessment of the iMac Pros in a typical, modern post environment.

Why buy iMac Pros at all?

The question to address is why should someone purchase these machines? Let me say right off the bat that if your main focus is 3D animation or heavy compositing using After Effects or other applications – and speed and performance are the most important factors – then don't buy an Apple computer. Period. There are plenty of examples of Dell and HP workstations, along with high-end gaming PCs, that outperform any of the Macs. This is largely due to the availability of advanced NVidia GPUs for the PC, which simply aren't an option for current Macs.

On the other hand, if you need a machine that’s solid and robust across a wide range of postproduction tasks – and you prefer the Mac operating ecosystem – then the iMac Pros are a good choice. Yes, the machine is pricy and you can buy cheaper gaming PCs and DIY workstations, but if you stick to the name brands, like Dell and HP, then the iMac Pros are competitively priced. In our case, a shift to PC would have also meant changing out all of the machines and not just three – therefore, even more expensive.

Naturally, the next thing is to compare price against the current 5K iMacs and 2013 Mac Pros. Apple’s base configuration of the iMac Pro uses an 8-core 3.2GHz Xeon W CPU, 32GB RAM, 1TB SSD, and the Radeon Pro Vega 56 GPU (8GB memory) for $4,999. A comparably configured 2013 Mac Pro is $5,207 (with mouse and keyboard), but no display. Of course, it also has the dual D-700 GPUs. The 5K iMac in a similar configuration is $3,729. Note that we require 10GigE connectivity, which is built into the iMac Pros. Therefore, in a direct comparison, you would need to bump up the iMac and Mac Pro prices by about $500 for a Thunderbolt2-to-10GigE converter.

Comparing these numbers for similar machines, you’d spend more for the Mac Pro and less for the iMac. Yet, the iMac Pro uses newer processors and faster RAM, so it could be argued that it’s already better out of the gate in the base configuration than Apple’s former top-of-the-line product. It has more horsepower than the tricked-out iMac, so then it becomes a question of whether the cost difference is important to you for what you are getting.

Build quality

Needless to say, Apple has a focus on the quality and fit-and-finish of its products. The iMac Pro is no exception. Except for the space grey color, it looks like the regular 27" iMacs and is just as nicely built. However, let me quibble a bit with a few things. First, the edges of the case and foot tend to be a bit sharp. It's not a huge issue, but compared with an iPhone, iPad, or 2013 Mac Pro, the edges are just not as smooth and rounded. Secondly, you get a wireless mouse and extended keyboard. Both have to be plugged in to charge. In the case of the mouse, the cable plugs in at the bottom, rendering it useless during charging. Truly a bad design. The wireless keyboard is the newer, flatter style, so you lose the two USB ports that were on the previous plug-in extended keyboard. Personally, I prefer the features and feel of the previous keyboard, not to mention any scroll wheel mouse over the Magic Mouse. Of course, those are strictly items of personal taste.

With the iMac Pro, Apple is transitioning its workstations to Thunderbolt 3, using USB-C connectors. Previous Thunderbolt 2 ports have been problematic, because the cables easily disconnect. In fact, on our existing iMacs, it’s very easy to disconnect the Thunderbolt 2 cable that connects us to the shared storage network, simply by moving the iMac around to get to the ports on the back. The USB-C connectors feel more snug, so hopefully we will find that to be an improvement. If you need to get to the back of the iMac or iMac Pro frequently, in order to plug in drives, dongles, etc., then I would highly recommend one of the docks from CalDigit or OWC as a valuable accessory.

5K screen

Apple spends a lot of marketing effort promoting its 5K Retina screens. The 27" screens have a raw pixel resolution of 5120×2880 pixels, but that's not what you see in terms of image and user interface dimensions. To start with, the 5K iMacs and iMac Pros use the same screen resolution, and the default display setting (middle scaled option) is 2560×1440 pixels. The top choice is 3200×1800. Of course, if you use that setting, everything becomes extremely small on screen. Conversely, our 2013 Mac Pro is connected to a 27" Apple LED Cinema Display (non-Retina). Its top scaled resolution is also 2560×1440 pixels. Therefore, at the most useable settings, all of our workstations are set to the same resolution. Even if you scale the resolution up (images and UI get smaller), you are going to end up adjusting the size of the application interface and viewer window. While you might see different viewer size percentage numbers between the machines, the effective size on screen will be the same.

Retina is Apple’s marketing name for high pixel density. This is the equivalent of DPI (dots per inch) in print resolutions. According to a Macworld article, iPhones from 4 to 5s had a pixel density of 326ppi (pixels per inch), while iMacs have 218ppi. Apple converts a device’s display to Retina by doubling the horizontal and vertical pixel count. More pixels are applied to any given area on the screen, resulting in smoother text, smoother diagonal lines, and so on. That’s assuming an application’s interface is optimized for it. At the distance that the editors sit from a 27” display, there is simply little or no difference between the look of the 27” LED display and the 27” iMac Retina screens.
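Those pixel-density figures are easy to verify – divide the pixel count along the diagonal by the diagonal's size in inches:

```python
import math

# Pixel density = pixels along the diagonal / diagonal in inches.
def ppi(h_pixels, v_pixels, diagonal_inches):
    return math.hypot(h_pixels, v_pixels) / diagonal_inches

print(round(ppi(5120, 2880, 27)))    # 5K iMac / iMac Pro -> ~218 ppi
print(round(ppi(640, 1136, 4.0)))    # iPhone 5s -> ~326 ppi
```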

Upgradeability

Future-proofing and upgrades are the biggest negatives thrown at all-in-ones, particularly the iMac Pros. While the user can upgrade RAM in the standard iMacs, that's not the case with iMac Pros. You can upgrade RAM in the future, but that must be done at a service facility, such as the Apple Store's Genius service. This means that in three years, when you want the latest, greatest CPU, GPU, storage, etc., you won't be able to swap out components. But is this really an issue? I'm sure Apple has user research numbers to justify their decisions. Plus, the thermal design of the iMac would make user upgrades difficult, unlike the older Mac Pro towers.

In my own experience on personal machines, as well as clients' machines that I've helped maintain, I have upgraded storage, GPU cards, and RAM, but never the CPU – although I do know others who have upgraded Xeon models on their Mac Pro towers. Part of the dichotomy is buying what you can afford now and upgrading later, versus stretching a bit up front and then not needing to upgrade later. My gut feeling is that Apple is pushing the latter approach.

If I tally up the cost of the upgrades that I’ve made after about three years, I would already be part of the way towards a newer, better machine anyway. Plus, if you are cutting HD and even 4K today, then just about any advanced machine will do the trick, making it less likely that you’ll need to do that upgrade within the foreseeable life of the machine. An argument can be made for either approach, but I really think that the vast majority of users – even professional users – never actually upgrade any of the internal hardware from that of the configuration as originally purchased.

Performance testing

We ultimately purchased machines that were the 10-core bump-up from the base configuration, feeling that this is the sweet spot (and is currently available) within the iMac Pro product line.

The new machine specs within the facility now look like this:

2013 Mac Pro – 3GHz 8-core Xeon/64GB RAM/dual D-500 GPUs/1TB SSD (Sierra)

2015 iMac – 4GHz 4-core Core i7/32GB RAM/AMD R9/3TB Fusion drive (Sierra)

2017 iMac Pro – 3GHz 10-core Xeon W/64GB RAM/Radeon Vega 64/1TB SSD (High Sierra)

As you can see, the tech specs of the new iMac Pros more closely match the 2013 Mac Pro than the year-old 5K iMacs. Of course, it’s not a perfect match for optimal benchmark testing, but close enough for a good read on how well the iMac Pro delivers in a real working environment.

Test 1 – BruceX

The BruceX test uses a 5K Final Cut Pro X timeline made up only of built-in titles and generators. The timeline is then rendered out to a ProRes file. This tests the pure application without any media and codec variables. It's a bit of an artificial test and only applicable to FCPX performance, but still useful. The faster the export time, the better.

2013 Mac Pro – 26.8 sec.

2015 iMac – 28.3 sec.

2017 iMac Pro – 14.4 sec.

Test 2 – media encoding

In my next test, I took a 4½-minute-long 1080p ProRes file and rendered it to a 4K/UHD (3840×2160) H.264 (1-pass CBR 20Mbps) file. Not only was it being encoded, but also scaled up to 4K in this process. I rendered from and to the desktop, to eliminate any variables from the QNAP system. Finally, I conducted the test using both Adobe Media Encoder (using OpenCL processing) and Apple Compressor.

Two issues were noteworthy. The Compressor test was surprisingly slow on the Mac Pro. (I actually ran the Compressor test twice, just to be certain about the slowness of the Mac Pro.) And the AME version kicked in the fans on the iMac.

Adobe Media Encoder

2013 Mac Pro – 6:13 min.

2015 iMac – 7:14 min.

2017 iMac Pro – 4:48 min.

Compressor

2013 Mac Pro – 11:02 min.

2015 iMac – 2:20 min.

2017 iMac Pro – 2:19 min.

Test 3 – editing timeline playback – multi-layered sequence

This was a difficult test, designed to break during unrendered playback. The 40-second 1080p/23.98 sequence included six layers of resized 4K source media.

Layer 1 – DJI clips with dissolves between the clips

Layers 2-5 – 2D PIP ARRI Alexa clips (no LUTs); layer 5 had a Gaussian blur effect added

Layer 6 – native REDCODE RAW with minor color correction

The sequence was created in both Final Cut Pro X and Premiere Pro. Playback was tested with the media located on the QNAP volumes, as well as from the desktop (this should provide the best possible playback).

Playing back this sequence in Final Cut Pro X from the QNAP resulted in the video output largely choking on all of the machines. Playing it back in Premiere Pro from the QNAP was slightly better than in FCPX, with the 2017 iMac Pro performing best of all. It played, but was still choppy.

When I tested playback from the desktop, all three machines performed reasonably well using both Final Cut Pro X (“best performance”) and Premiere Pro (“1/2 resolution”). There were some frames dropped, although the iMac Pro played back more smoothly than the other two. In fact, in Premiere Pro, I was able to set the sequence to “full resolution” and get visually smooth playback, although the indicator light still noted dropped frames. Typically, as each staggered layer kicked in, performance tended to hiccup.

Test 4 – editing timeline playback – single-layer sequence

This was a simpler test using a standard workflow. The 30-second 1080p/23.98 sequence included three Alexa clips (no LUTs) with dissolves between the clips. Each source file was 4K/UHD and had a "punch-in" and reposition within the HD frame. Each also included a slight, basic color correction. Playback was tested in Final Cut Pro X and Premiere Pro, as well as from the QNAP system and the desktop. Quality settings were increased to "best quality" in FCPX and "full resolution" in Premiere Pro.

My complex timeline in Test 3 appeared to perform better in Premiere Pro. In Test 4, the edge was with Final Cut Pro X. No frames were dropped with any of the three machines playing back either from the QNAP or the desktop, when testing in FCPX. In Premiere Pro, the 2017 iMac Pro was solid in both situations. The 2015 iMac was mostly smooth at “full” and completely smooth at “1/2”. Unfortunately, the 2013 Mac Pro seemed to be the worst of the three, dropping frames even at “1/2 resolution” at each dissolve within the timeline.

Test 5 – timeline renders (multi-layered sequence)

In this test, I took the complex sequence from Test 3 and exported it to a ProRes master file. I used the QNAP-connected versions of the Premiere Pro and Final Cut Pro X timelines and rendered the exports to the desktop. In FCPX, I used its default Share function. In Premiere Pro, I queued the export to Adobe Media Encoder set to process in OpenCL. This was one of the few tests in which the 2013 Mac Pro put in a faster time, although the iMac Pro was very close.

Rendering to ProRes – Premiere Pro (via Adobe Media Encoder)

2013 Mac Pro – 1:29 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:45 min.

Rendering to ProRes – Final Cut Pro X

2013 Mac Pro – 1:21 min.

2015 iMac – 2:29 min.

2017 iMac Pro – 1:22 min.

Test 6 – Adobe After Effects – rendering composition

My final test was to see how well the iMac Pro performed in rendering out compositions from After Effects. This was a 1080p/23.98 15-second composition. The bottom layer was a JPEG still with a Color Finesse correction. On top of that were five 1080p ProResLT video clips that had been slomo’ed to fill the composition length. Each was scaled, cropped, and repositioned. Each was beveled with a layer style and had a stylized effect added to it. The topmost layer was a camera layer with all other layers set to 3D, so the clips could be repositioned in z-space. Using the camera, I added a slight rotation/perspective change over the life of the composition.

Rendering to ProRes – After Effects

2013 Mac Pro – 2:37 min.

2015 iMac – 2:15 min.

2017 iMac Pro – 2:03 min.

Conclusion

After all of this testing, one is left with the answer “it depends”. The 2013 Mac Pro has two GPUs, but not every application takes advantage of that. Some apps tax all the available cores, so more, but slower, cores are better. Others go for the maximum speed on fewer cores. All things considered, the iMac Pro performed at the top of these three machines. It was either the best or close/equal to the best.

There is no way to really quantify actual editing playback performance and resolution by any numerical factor. However, it is interesting to look at the aggregate of the six tests that could be quantified. When you compare the cumulative totals of just the iMac Pro and the iMac, the Pro came out 48% faster. Compared to the 2013 Mac Pro, it was 85% faster. The iMac Pro’s performance against the totals of the slowest machines (either iMac or Mac Pro depending on the test), showed it being a whopping 113% faster – more than twice as fast. But it only bested the fastest set by 20%. Naturally, such comparisons are more curiosity than anything else. Some of these numbers will be meaningful and others won’t, depending on the apps used and a user’s storage situation.
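For anyone wanting to reproduce the aggregate math, here's a tally of the timed tests above, converted to seconds. The exact percentages shift a bit depending on rounding and on which tests you include, so treat the output as ballpark rather than a restatement of the figures in the text:

```python
# Totals across the six timed tests (seconds): BruceX, AME encode,
# Compressor encode, Premiere Pro render, FCPX render, After Effects render.
times = {
    "2013 Mac Pro":  [26.8, 373, 662, 89, 81, 157],
    "2015 iMac":     [28.3, 434, 140, 149, 149, 135],
    "2017 iMac Pro": [14.4, 288, 139, 105, 82, 123],
}
totals = {name: sum(t) for name, t in times.items()}
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    pct = (total / totals["2017 iMac Pro"] - 1) * 100
    print(f"{name}: {total:.0f}s ({pct:+.0f}% more time than the iMac Pro)")
```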

I will say that installing these three machines was the easiest I've ever done, including connecting them to the 10GigE storage network. The majority of our apps come from Adobe Creative Cloud, the Mac App Store, or FxFactory (for plug-ins). Except for a few other installers, there was largely no need to track down installers, activation information, etc. for a zillion small apps and plug-ins. This made it a breeze and is certainly part of the attraction of the Mac ecosystem. The iMac Pro's all-in-one design limits the required peripherals, which also contributes to a faster installation. Naturally, I can't tell anyone if this is the right machine for them, but so far, the investment does look like the correct choice for this shop's needs.

(Updated 6/22/18)

Here are two additional impressions by working editors: Thomas Grove Carter and Ben Balser. Also a very comprehensive review from AppleInsider.

©2018 Oliver Peters