Accusonus ERA4

It’s fair to say that most video editors, podcast producers, and audio enthusiasts don’t possess the intimate understanding of audio filters that a seasoned recording engineer does. Yet they still need to deliver the cleanest, most professional-sounding audio in their productions. Some software developers have sought to serve this market with plug-ins that combine multiple controls into a single knob or slider. The first of these was Waves Audio with their OneKnob series. Another company using the single-knob approach is a relative newcomer, Accusonus.

I first became aware of Accusonus through Avid. Media Composer license owners have been offered loyalty add-ons, such as plug-ins, which recently included the Accusonus ERA3 Voice Leveler plug-in. I found this to be a very useful tool and so when Accusonus offered to send the new ERA4 Bundle for my evaluation, I was more than happy to give the rest of the package a test run. ERA4 was released at the end of June in a Standard and Pro bundle along with discounted, introductory pricing, available until the end of July. You may also purchase each of these filters individually.

Audio plug-ins typically come in one of four formats: AU (Mac only), VST (Mac or Windows), VST3 (Mac or Windows), and AAX (for Avid Media Composer and Pro Tools). When you purchase audio filters, they don’t necessarily come in all flavors. Sometimes plug-ins ship as AU and VST/VST3 but leave out AAX; others are AAX-only or AU-only. Accusonus plug-ins are installed as all four types on a Mac, which means that a single purchase covers most common DAWs and NLEs (check their site for supported hosts). For example, my Macs include Final Cut Pro X, Logic Pro X, Audition, Premiere Pro, and Media Composer. The ERA4 plug-ins work in all of these.

I ran into some issues with Resolve. The plug-ins worked fine on the Fairlight page of Resolve 16 Studio Beta. That’s on my home machine. However, the Macs at work are running the Mac App Store version of Resolve 15 Studio. There, only the VST versions could be applied and I had to re-enter each filter’s activation code and relaunch. I would conclude from this that Resolve is fine as a host, although there may be some conflicts in the Mac App Store version. That’s likely due to some differences between it and the software you download directly from Blackmagic Design.

Another benefit is that Accusonus permits each license key to be used on up to three machines. If a user has both a laptop and a desktop computer, the plug-in can be installed and activated on each without the need to swap authorizations through an online license server or move an iLok dongle between machines. The ERA4 installers include all of the tools in the bundle, even if you only purchased one. You can ignore the others, uninstall them, or test them out in a trial mode. The complete bundle is available and fully functional for a 14-day free trial.

ERA4 Bundles

I mentioned the Waves OneKnob filters at the top, but there’s actually little overlap between these two offerings. The OneKnob series is focused on EQ and compression tasks, whereas the ERA4 effects are designed for audio repair. As such, they fill a similar need as iZotope’s RX series.

The ERA4 Standard bundle includes six audio plug-ins: Noise, Reverb, and Plosive Removers, De-Esser, De-Clipper, and the Voice Leveler. The Pro bundle adds two more: the more comprehensive De-Esser Pro and ERA-D, which is a combined noise and reverb filter for more advanced processing than the two individual filters. If you primarily work with well-recorded studio voice-overs or location dialogue, then most likely the Standard bundle will be all you need. However, the two extra filters in the Pro bundle come in handy with more problematic audio. Even high-budget productions occasionally get stuck with recordings made in challenging environments and last-minute VOs recorded on iPhones. It’s certainly worth checking out the full package as a trial.

Accusonus does use a single-control approach, but it’s a bit simplistic to say that you are tied to only one control knob. Some of the plug-ins offer more depth so you can tailor your settings. For instance, the Noise Remover filter offers five preset curves to determine the frequencies that are affected. Each filter includes additional controls for the task at hand.

In use

Accusonus ERA4 filters are designed to be easy to use and work well in real-time. When all I need to do is improve audio that isn’t a basket case, then the ERA filters at their default settings do a wonderful job. For example, a VO recording might require a combination of Voice Leveler (smooth out dynamics), De-Esser (reduce sibilance), and Plosive Remover (clean up popping “p” sounds). Using the default control level (40%) or even backing off a little improved the sound.

It was the more problematic audio where ERA4 was good, but not necessarily always the best tool. In one case I tested a very heavily clipped VO recording. When I used ERA4 De-Clipper in Final Cut Pro X, I was able to get similar results to the same tool from iZotope RX6. However, doing the same comparison in Audition yielded different results. Audition is designed to preview an effect and then apply it. The RX plug-in at its extreme setting crackled in real-time playback, but yielded superior results compared with the ERA4 De-Clipper after the effect was applied (rendered). Unfortunately, FCPX has no equivalent “apply,” “render and replace,” or audio bounce function, so audio has to stay real-time, which gives Accusonus a performance edge in FCPX. For most standard audio repair tasks, Accusonus’ plug-ins were equal to or better than most other options, especially those built into the host application.

I started out talking about the Voice Leveler plug-in, because that’s an audio function I perform often, especially with voice-overs. It helps to make the VO stand out in the mix against music and sound effects. This is an intelligent compressor, which means it tries to bring up all audio and then compress peaks over a threshold. But learn the controls before diving in. For example, it includes a breath control. Engaging this will prevent the audio from pumping up in volume each time the announcer takes a breath. As with all of the ERA4 filters, there is a small, scrolling waveform in the plug-in’s control panel. Areas that were adjusted by the filter are highlighted, so you can see when it is active.

Voice Leveler is a good VO tool, but leveling is one of the more subjective audio tasks. Some editors or audio engineers compress, some limit, and others prefer to adjust levels only manually. My all-time favorite is Waves’ Vocal Rider. Unlike a compressor, it dynamically raises and lowers audio levels between two target points. To my ears, this method yields a more open sound than heavy compression. But its normal MSRP is pretty expensive. I also like the Logic Pro X Compressor, which is available in Final Cut Pro X. It mimics various vintage compressors, like the Focusrite Red or the DBX 160X. I feel that it’s one of the nicest sounding compressors, but it is only available in the Apple pro apps. Adobe users – you are out of luck on that one.

From my point-of-view, the more tools the better. You never know when you might need one. The Accusonus ERA4 bundle offers a great toolset for anyone who has to turn around a good-sounding mix quickly. Each bundle is easy to install and activate and even easier to use. Operation is real-time, even when you stack several together. Accusonus’ current introductory price for the bundles is about what some individual plug-ins cost from competing companies, plus the 14-day trial is a great way to check them out. If you need to build up your audio toolbox, this is a solid set to start out with.

Check out Accusonus’ blog for tips on using the ERA plug-ins.

_________________

Update – March 4, 2022 – Accusonus has announced a partnership with Meta that changes the direction of their company. As such, Accusonus products are being sunset and are no longer available for sale. Existing customers can continue to use the audio plug-ins, but technical support will end early next year. Make sure you have downloaded their latest installer for the version that you purchased and have also saved your activation code somewhere. Details are on their Help-FAQ pages.

“We will sunset the ERA Bundle products and Voice Changer early next year. Until then, we will be releasing compatibility updates and offering email support to any existing customers.

You will still be able to use the ERA Bundle products even after next year; however, we will not be providing support, maintenance and updates.

The products will stay at the final version of ERA 6.2.00 & Voice Changer 1.3.10, so no further product development will take place. Thus, the plug-ins will not be tested in newer systems (Windows and Apple OS) or when compatible DAWs/NLEs are updated.”

©2019 Oliver Peters

The 2019 Mac Pro Truck

In 2010 Steve Jobs famously provided us with the analogy that traditional computers are like trucks in the modern era. Not that trucks were going away, but that they were simply no longer a necessity for most of us, now that the majority of the populace wasn’t engaged in farming. While trucks would continue to be purchased and used, far fewer people actually needed them, because the car covered their needs. The same was true, he felt, of traditional computers.

Jobs is often characterized as being a consumer market-driven guy, but I believe the story is more nuanced. After all, he founded NeXT Computer, which clearly made high-end workstations. Jobs also became the major shareholder in Pixar Animation Studios – a company that not only needed advanced, niche computing power, but also developed some of its own specialized graphics hardware and software. So a mix of consumer and advanced computing DNA runs throughout Apple.

By the numbers

Unless you’ve been under a rock, you know that Apple revealed its new 2019 Mac Pro at the WWDC earlier this month. This year’s WWDC was an example of a stable, mature Apple firing on all cylinders. iPhone unit sales have not been growing. The revenue has, but that’s because the prices have been going up. Now it’s time to push all of the company’s businesses, including iPad, services, software, and the Mac. Numbers are hard to come by, although Apple has acknowledged that the Mac unit by itself is nearly a $25 billion business and that it would be close to being in the Fortune 100 on its own. There’s a ratio of 80/20 Mac laptops to desktops. For comparison to the rest of the PC world, Apple’s marketshare is around 7%, ranking fourth behind Lenovo, HP, and Dell, but just ahead of Acer. There are 100 million active macOS users (Oct 2018), although Windows 10 adoption alone runs eight times larger (Mar 2019).

We can surmise from this information that there are 20 million active Mac Pro, iMac, iMac Pro, and Mac mini users. It’s fair to assume that a percentage of those are in the market for a new Mac Pro. I would project that maybe 1% of all Mac users would be interested in upgrading to this machine – i.e. around 1 million prospective purchasers. I’m just spit-balling here, but at a starting price of $6,000, that’s a potential market of $6 billion in sales before factoring in any upgrade options or new macOS users!
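The back-of-envelope math above can be sketched out explicitly. This simply reproduces my rough figures (100 million active Mac users, the 80/20 laptop-to-desktop split, a guessed 1% upgrade interest, and the $6,000 base price); none of these are Apple's numbers.

```python
# Spit-ball market sizing from the figures cited above.
active_mac_users = 100_000_000   # Apple's Oct 2018 active-user figure
desktop_share = 0.20             # 80/20 laptop-to-desktop ratio

desktop_users = active_mac_users * desktop_share       # desktop Macs in use
prospective_buyers = active_mac_users * 0.01           # my guessed 1% interest
base_price = 6_000                                     # Mac Pro starting price

potential_market = prospective_buyers * base_price

print(f"Desktop users: {desktop_users:,.0f}")          # 20,000,000
print(f"Potential market: ${potential_market:,.0f}")   # $6,000,000,000
```

Even if the 1% guess is off by half, the potential market is still measured in billions, which helps explain why Apple bothered to build this machine at all.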

A funny thing happened on the way to the WWDC

Apple went through a computing platform progression from the old Quadra 950 and 9600 towers to the first Intel Mac Pro towers over the course of the mid-1990s to 2006. The second generation of the older Mac Pro was released during 2009. So in a dozen-plus years, Apple customers saw seven major processor/platform changes and had come to expect a constant churn. In essence, plan on replacing your system every few years. However, from 2009 onward, customers who bought those Mac Pros had a machine that could easily last, be productive, and still be somewhat competitive ten years later. The byproduct was the ability to plan for a longer life expectancy for the hardware you buy, rather than an automatic two-to-three-year replacement cycle.

Even the 2013 Mac Pro has lasted until now (six years later) and remains competitive with most machines. The miscalculation that Apple made with the 2013 Mac Pro was that pro customers would prefer external expandability versus internal hardware upgrades. Form over function. That turned out to be wrong. I’m probably one of the few who actually likes the 2013 Mac Pro under the right conditions. It’s an innovative design, but unfortunately one that can’t be readily upgraded.

The second major change in computing hardware is that now “lesser” machines are more than capable of doing the work required in media and entertainment. During those earlier days of the G3/G4/G5 PowerMacs and the early Intel Mac Pros, Apple didn’t make laptops and all-in-ones that had enough horsepower to handle video editing and the like. Remember the colorful, plastic iMacs and white eMacs? Or what about the toilet-seat-like iBook laptop? Good enough for e-mail, but not what you would want for editing.

Now, we have a wide range of both Mac and PC desktop computers and laptops that are up to the task. In the past, if you needed a performance machine, then you needed a workstation class computer. Nothing else would do. Today, a general purpose desktop PC that isn’t necessarily classed as a workstation is more than sufficient for designers, editors, and colorists. In the case of Apple, there’s a range of laptops and all-in-ones that cover those needs at many different price points.

The 2019 Mac Pro Reveal

Let me first say that I didn’t attend WWDC and I haven’t seen the new Mac Pro in person. I hope to be able to do a review at some point in the future. The bottom line is that this is purely an opinion piece for now.

There have certainly been a ton of internet comments about this machine – both positive and negative. Price is the biggest pain point. Clearly Apple intends this to be a premium product for the customer with demanding computing requirements. You can spin the numbers any way you like and people have. Various sites have speculated that a fully-loaded machine could drive the price from the $6,000 starting point to as high as $35K to $50K. The components that Apple defines in the early tech information do not perfectly match equivalent model numbers available on the suppliers’ websites. No one knows for sure how the specific Intel Xeon being used by Apple equates to other Xeons listed on Intel’s site. Therefore, exact price extrapolations are simply guesses for now.

In late 2009 I purchased an entry model 8-core Mac Pro. With some storage and memory upgrades, AppleCare, sales tax, and a small business discount, I paid around $4,000. The inflation difference over the decade is about 17%, so that same hardware should cost me $4,680 today. In fairness, Apple has a different design in this new machine and there are technologies not in my base 2009 machine, such as 10GigE, Thunderbolt 3, a better GPU, etc. Even though this new machine may be out of my particular budget right now, it’s still an acceptable value when compared with the older Mac Pros.
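The inflation adjustment is simple compounding of my rough 17% figure onto the roughly $4,000 I paid in late 2009:

```python
# Inflation-adjusting my late-2009 Mac Pro purchase (~17% cumulative
# inflation over the decade, per my rough estimate above).
price_2009 = 4_000
cumulative_inflation = 0.17

price_today = price_2009 * (1 + cumulative_inflation)
print(f"${price_today:,.0f}")   # $4,680
```

So the $6,000 base price is a real increase over the old entry model, but not a wildly different class of purchase once a decade of inflation is factored in.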

Likewise, if you compare the 2019 Mac Pro to comparable name brand workstations, like an HP Z8, you’ll quickly find that the HP will cost more. One clear difference, though, is that HP also offers smaller, less costly workstation models, such as the Z2, Z4, and Z6. The PC world also offers many high quality custom solutions, such as Puget Systems, which I have reviewed.

One design decision that could have mitigated the cost a bit is the choice of CPU chips. Apple has opted to install Xeon chips in all of its Mac Pro designs. Same with the iMac Pro. However, Intel also offers very capable Core i9 CPUs. The i9 chips offer faster core speeds and high core counts. The Xeons are designed to run flat out 24/7. However, in the case of video editing, After Effects, and so on, the Core i9 chip may well be the better solution. These apps really thrive on fast single-core speeds, so having a 12-core or 28-core CPU, where each core has a slower clock speed, may not give you the best results. Regardless of benefit, Xeons do add to Apple’s hard costs in building the machine. Xeons are more expensive than Core chips. In some direct comparisons, a Xeon can cost $1,000 more than Intel’s retail price for the equivalent Core CPU.

The ultimate justification for buying a Mac Pro tower isn’t necessarily performance alone, but rather longevity and expandability. As I outlined above, customers have now been conditioned to expect the system to last and be productive for at least a decade. That isn’t necessarily true of an all-in-one or a laptop. This means that if you amortize the investment in a 2019 Mac Pro over a ten-year period, it’s actually quite reasonable.

The shame – and this is where much of the internet ire is coming from – is that Apple didn’t offer any intermediate models, like HP’s Z4 or Z6. I presume that Apple is banking on those customers buying iMacs, iMac Pros, Mac minis, or MacBook Pros instead. Couple one of these models with an external GPU and fast external storage and you will have plenty of power for your needs today. It goes without saying that comparing this Mac Pro to a custom PC build (which may be cheaper) is a non-starter. A customer for this Mac Pro will buy one, pure and simple. There is built-in price elasticity to this niche of the market. Apple knows that and the customers know it.

Nuts and bolts

The small details haven’t been fully revealed, so we probably won’t know everything about these new Mac Pros until September (the rumored release). Apple once again adopted a signature case design, which like the earlier tower case has been dubbed a “cheese grater.” Unlike the previous model, where the holes were simply holes for ventilation, the updated model (or would that be the retro model?) uses a lattice system in the case to direct the airflow. The 2019 is about the same size as its “cheese grater” predecessor, but 20 pounds lighter.

There is very little rocket science in how you build a workstation, so items like Xeon CPUs, GPU cards, RAM, and SSD system drives are well understood and relatively standard for a modern PC system.

The short hardware overview consists of:

8, 12, 16, 24, and 28-core Xeon CPU options

Memory from 32GB to 1.5TB of DDR4 ECC RAM

Up to four AMD GPU cards

1.4 kW power supply

Eight PCIe expansion slots (one used for Apple i/o card)

System storage options from 256GB to 4TB

Four Thunderbolt 3 ports (2 top and 2 back) plus two USB 3 ports (back)

(Note – more ports available with the upgraded GPU options)

Two 10Gb Ethernet ports

WiFi, Bluetooth, built-in speakers, headphone jack

So far, so good. Any modern workstation would have similar choices. There are several key unknowns and that’s where the questions come in. First, the GPU cards appear to be custom-designed AMD cards installed into a new MPX (Mac Pro expansion) module. This is a mounting/connecting cage to install and connect the hardware. However, if you wanted to add your own GPU card, would it fit into such a module? Would you have to buy a blank module from Apple for your card? Would your card simply fit into the PCIe slot and screw in like on any other tower? The last question does appear to be possible, but will there be proper Nvidia support?

The second big question relates to internal storage. The old “cheese grater” had sleds to install four internal drives. Up to six could be installed if you used the optical drive bays. The 2019 Mac Pro appears to allow up to four drives within an MPX chassis. Promise has already announced two products specifically for the Mac Pro. One would include four RAIDed 8TB drives for a 32TB capacity. 14TB HDDs are already available, so presumably this internal capacity will go up. 

The unknown is whether or not you can add drives without purchasing an MPX module. The maximum internal GPU option seems to be four cards, which are mounted inside two MPX modules. This is also the space required for internal drives. Therefore, if you have both MPX modules populated with GPU cards, then I would imagine you can’t add internal storage. But I may be wrong. As with most things tech, I predict that if blank MPX modules are required, a number of vendors will quickly offer cheaper aftermarket MPX modules for GPUs, storage, etc.

One side issue that a few blogs have commented on is the power draw. Because of the size of the power supply, the general feeling is that the Mac Pro should be plugged into a standard electrical circuit by itself, plus maybe a monitor. In other words, not a circuit with a bunch of other electrical devices, otherwise you might start blowing breakers.

Afterburner

A new hardware item from Apple is the optional Afterburner ProRes and ProRes RAW accelerator card. This uses an FPGA (field programmable gate array), which is a chip that can be programmed for various specific functions. It can potentially be updated in the future. Anyone who has worked with the RED Rocket or RED Rocket-X card in the past will be quite familiar with what the Afterburner is.

The Afterburner will decode ProRes and ProRes RAW codecs on-the-fly when this media is played in Final Cut Pro X, QuickTime Player X, and any other application re-coded to support the card. This would be especially beneficial with camera raw codecs, because it debayers the raw sensor data via hardware acceleration at full resolution, instead of using the CPU. Other camera RAW manufacturers, like RED, ARRI, Canon, and Blackmagic Design, might add support for this card to accelerate their codecs, as well. What is not known is whether the Afterburner card can also be used to offload true background functions like background exports and transcoding within Final Cut Pro X.

An FPGA card offers the promise of being future-proofed, because you can always update its function later. However, in actual practice, the hardware capabilities of any card become outstripped as the technology changes. This happened with the RED Rocket card and others. We’ll see if Apple has any better luck over time.

Performance

Having lots of cores is great, but with most media and entertainment software the GPU can be key. Apple has been at a significant disadvantage with many applications, like After Effects, because of their stance with Nvidia and CUDA acceleration. Apple prefers that a manufacturer support Metal, which is their way of leveraging the combined power of all CPUs and GPUs in the system. This all sounds great, but the reality is that it’s one proprietary technology versus another. In the benchmark tests I ran with the Puget PC workstation, the CUDA performance in After Effects easily trounced any Mac that I scored it against.

Look at Apple’s website for a chart representing the relative GPU performance of a 2013 Mac Pro, an iMac Pro, and the new 2019 Mac Pro. Each was tested with their respective top-of-the-line GPU option. The iMac Pro is 1.5x faster than the 2013 Mac Pro. The 2019 Mac Pro is twice as fast as the iMac Pro and 3x faster than the 2013 Mac Pro. While that certainly looks impressive, that 2x improvement over the iMac Pro comes thanks to two upgraded GPU cards instead of one. Well, duh! Of course, at this time we have no idea what these cards and MPX units will cost. (Note – I am not totally sure as to whether this testing used two GPUs in one MPX module or a total of four GPUs in two modules.)
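Normalizing those chart numbers by card count makes the point concrete. The card counts here are my assumption from reading the chart (one top GPU in the iMac Pro versus two in the 2019 Mac Pro), not Apple's stated test methodology:

```python
# Apple's chart figures, relative to the 2013 Mac Pro baseline.
imac_pro_perf = 1.5        # iMac Pro, one top-of-the-line GPU (assumed)
mac_pro_2019_perf = 3.0    # 2019 Mac Pro, two GPU cards (assumed)

per_card_imac = imac_pro_perf / 1      # performance per card
per_card_2019 = mac_pro_2019_perf / 2  # performance per card

# Ratio of 1.0 means no per-card improvement over the iMac Pro.
print(per_card_2019 / per_card_imac)
```

If those assumptions hold, the headline 2x gain is almost entirely a matter of doubling the card count, which is exactly the "well, duh" issue.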

We won’t know how well these really perform until the first units get out into the wild. Especially how they compare against comparable PCs with high-powered Nvidia cards. I may be going out on a limb, but I would be willing to bet that many people who buy the base configuration for $6K – thinking that they will get a huge boost in performance – are going to be very disappointed. I don’t mean to trash the entry-level machine. It’s got solid specs, but in that configuration it isn’t the best performer. At $6K, you are buying a machine that will have longevity and which can be upgraded in the future. In short, the system can grow with you over time as the workload demands increase. That’s something which has not been available to Mac owners since the end of 2012.

Software

To take the most advantage of the capabilities of this new machine, software developers (both applications and plug-ins) will have to update their code. All of the major brands like Adobe, Avid, Blackmagic Design, and others seem to be on board with this. Obviously, so are the in-house developers at Apple who create the Pro Applications. Final Cut Pro X and Logic Pro X are obvious examples. Logic is increasing the track count and number of software instruments you can run. Updates have already been released.

Final Cut Pro X has a number of things that appear in need of change. Up until now, in spite of being based around Metal, Final Cut has not taken advantage of multiple GPUs when present. If you add an eGPU to a Mac today, you must toggle a preference setting to use one GPU or the other as the primary GPU (Mojave). Judging by the activity monitor, it appears to be an either-or thing, which means the other GPU is loafing. Clearly when you have four GPUs present, you will want to tap into the combined power of all four.

With the addition of the Afterburner option, FCPX (or any other NLE) has to know that the card is present and how to offload media to the card during playback (and render?). Finally, the color pipeline in Final Cut Pro X is being updated to work in 16-bit float math, as well as optimized for fast 8K workflows.

All of this requires new code and development work. With the industry now talking about 16K video, is 8K enough? Today, 4K delivery is still years away for many editors, so 8K is yet that much further. I suspect that if and when 16K gets serious traction, Apple will be ready with appropriate hardware and software technology. In the case of the new Mac Pro, this could simply mean a new Afterburner card instead of an entirely new computer.

The Apple Pro Display XDR

In tandem with the 2019 Mac Pro, Apple has also revealed the new Pro Display XDR – a 6K 32″ Retina display. It uses a similar design aesthetic to the Mac Pro, complete with a matching ventilation lattice. This display comes calibrated and is designed for HDR with 1,000 nits of sustained, fullscreen brightness and a 1,600-nit maximum. It will be interesting to see how this actually looks. Recent Final Cut Pro X updates have added HDR capabilities, but you can never get an accurate view of it on a UI display. Furthermore, the 500 nit, P3 displays used in the iMac Pros are some of the least color-accurate UI displays of any Mac that I work with. I really hope Apple gets this one right.

To sell the industry on this display, Apple is making the cost and feature comparison between this new display and actual HDR color reference displays costing in the $30K-40K range. Think Flanders Scientific or Sony. The dirty little HDR secret is that when you display an image at the maximum nit level across the entire screen, the display will dim in order to prevent damage. Only the most expensive displays are more tolerant of this. I would presume that the Pro Display XDR will also dim when presented with a fullscreen image of 1,600 nits, which is why their spec lists 1,000 nits fullscreen. That level is the minimum HDR spec. Of course, if you are grading real world images properly, then in my opinion, you rarely should have important picture elements at such high levels. Most of the image should be in a very similar range to SDR, with the extended range used to preserve highlight information, like a bright sky.

Some colorists are challenging the physics behind some of Apple’s claims. The concern is whether or not the display will result in bloomed highlights. Apple’s own marketing video points out that the design reduces blooming, but it doesn’t say that it completely eliminates it. We’ll see. I don’t quite see how this display fits as a reference display. It only has Thunderbolt connections – no SDI or HDMI – so it won’t connect in most standard color correction facilities without additional hardware. If, like all computer displays, the user can adjust the brightness, then that goes against the concept of an HDR reference display. At 32″, it’s much too small to be used as a client display to stick on the wall.

Why did Apple make the choice to introduce this as a user interface display? If they wanted to make a great HDR reference display, then that makes some sense. Even as a great specialty display, like you often find in photography or fine print work. I understand that it will likely display accurate, fullscreen video directly from Final Cut Pro X or maybe even Premiere Pro without the need and added cost of an AJA or BMD i/o device or card. But as a general purpose computer display? That feels like it simply misses the mark, no matter how good it is. Not to mention, at a brightness level of 1,000 to 1,600 nits, that’s way too bright for most edit suites. I even find that to be the case with the iMac Pro’s 500 nit displays, when you crank them up.

This display is listed as $5K without a stand. Add another $1K if you want a matte finish. Oh, and if you want the stand, add another $1K! I don’t care how seductively Jony Ive pronounces “all-u-minium,” that’s taxing the good will of your customer. Heck, make it $5,500 and toss in the stand at cost. Remember, the stand has an articulating arm, which will probably lose its tension in a few years. I hope that a number of companies will make high-quality knock-offs for a couple of hundred bucks.

If you compare the Apple Pro Display XDR to another UI display with a similar mission, then it’s worth comparing it to the HP Dreamcolor Z31x Studio Display. This is a 32″ 4K, calibrated display with an MSRP of right at $3,200. But it doesn’t offer HDR specs, Retina density, or 6K resolution. Factor in those features and Apple’s brand premium and then the entry price isn’t that far out of line – except for that stand.

I imagine that Apple’s thought process is that if you don’t want to buy this display, then there are plenty of cheaper choices, like an LG, HP, Asus, or Dell. And speaking of LG, where’s Apple’s innovative spirit to try something different with a UI display? Maybe something like an ultra wide. LG now has a high-resolution 49″ display for about $1,400. This size enables one large canvas across the width; or two views, like having two displays side-by-side. However, maybe a high-density display (Retina) isn’t possible with such a design, which could be Apple’s hang-up.

Final thoughts

The new 2019 Mac Pro clearly demonstrates that Apple has not left the high-end user behind. I view relevant technology through the lens of my needs with video; however, this model will appeal to a wide range of design, scientific, and engineering users. It’s a big world out there. While it may not be the most cost-effective choice for the individual owner/editor, there are still plenty of editors, production companies, and facilities that will buy one.

There is a large gap between the Mac mini and this new Mac Pro. I still believe there’s a market for a machine similar to some of those concept designs for a Mac Pro. Or maybe a smaller version of this machine that starts at $3,000. But there isn’t such a model from Apple. If you like the 2013 “trash can” Mac Pro, then you can still get it – at least until the 2019 model is officially released. Naturally, iMacs and iMac Pros have been a superb option for that in-between user and will continue to be so.

If you are in the market for the 2019 Mac Pro, then don’t cut yourself short. Think of it as an investment for at least 10 years. Unless you can only afford the base model, I would recommend budgeting in the $10K range. I don’t have an exact configuration in mind, but that will likely be a sweet spot for demanding work. Once I get a chance to properly review the 2019 Mac Pro, I’ll be more than happy to come back with a real evaluation.

©2019 Oliver Peters

Good Omens

Fans of British television comedies have a new treat in Amazon Prime’s Good Omens. The six-part mini-series is a co-production of BBC Studios and Amazon Studios. It is the screen adaptation of the 1990 hit novel by the late Terry Pratchett and Neil Gaiman, entitled Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch. Just imagine if the Book of Revelation had been written by Edgar Wright or the Coen brothers. Toss in a bit of The Witches of Eastwick and I think you’ll get the picture.

The series stars Michael Sheen (Masters of Sex, The Good Fight) as Aziraphale (an angel) and David Tennant (Mary Queen of Scots, Doctor Who) as Crowley (a demon). Although on opposing sides, the two have developed a close friendship going back to the beginning of humanity. Now it’s time for the Antichrist to arrive and bring about Armageddon. Except that the two have grown fond of humans and their life on Earth, so Crowley and Aziraphale aren’t quite ready to see it all end. They form an unlikely alliance to thwart the End Times. Naturally this gets off to a bad start, when the Antichrist child is mixed up at birth and ends up misplaced with the wrong family. The series also stars an eclectic supporting cast, including Jon Hamm (Baby Driver, Mad Men), Michael McKean (Veep, Better Call Saul), and Frances McDormand (Hail, Caesar!, Fargo) as the voice of God.

Neil Gaiman (Lucifer, American Gods) was able to shepherd the production from novel to screen by adapting the screenplay and serving as show runner. Douglas Mackinnon (Doctor Who, Sherlock) directed all six episodes. I recently had a chance to speak with Will Oswald (Doctor Who, Torchwood: Children of Earth, Sherlock) and Emma Oxley (Liar, Happy Valley), the two editors who brought the production over the finish line.

(Click any image to see an enlarged view.)

_____________________________________________________

[OP] Please tell me a bit about your editing backgrounds and how you landed this project.

[Will] I was the lead editor for Doctor Who for a while and got along well with the people. This led to Sherlock. Douglas had worked on both and gave me a call when this came up.

[Emma] I’ve been mainly editing thrillers and procedurals and was looking for a completely different script, and out of the blue I received a call from Douglas. I had worked with him as an assistant editor in 2007 on an adaptation of the Jekyll and Hyde story and I was fortunate that a couple of Douglas’s main editors were not available for Good Omens. When I read the script I thought this is a dream come true.

[OP] Had either of you read the book before?

[Will] I hadn’t, but when I got the gig, I immediately read the book. It was great, because this is a drama-comedy. How good a job is that? You are doing everything you like. It’s a bit tricky, but it’s a great atmosphere to work in.

[Emma] I was the same, but within a week I had read it. Then the scripts came through and they were pretty much word for word – you don’t expect that. But since it was six hours instead of feature length the book could remain intact.

[OP] I know that episodic series often divide up the editorial workload in many different ways. Who worked on which episode and how was that decided?

[Will] Douglas decided that I would do the first three episodes and Emma would edit the last three. The series happened to split very neatly in the middle. The first three episodes really set up the back story and the relationship between the characters and then the story shifts tone in the last three episodes.

[Emma] Normally in TV the editors would leapfrog each other. In this case, as Will said, the story split nicely into two, three-hour sections. It was a nice experience not to have to jump backwards and forwards.

[Will] The difficult thing for me in the first half is that the timeline is so complicated. In the first three episodes you have to develop the back story, which in this case goes back and forth through the centuries – literally back to the beginning of time. You also have to establish the characters’ relationship to each other. By the end of episode three, they really start falling apart, even though they do really like each other. It’s a bit like Butch Cassidy and the Sundance Kid. Of course, Emma then had to resolve all the conflicts in her episodes. But it was nice to go rocking along from one episode to the next.

[OP] What was the post-production schedule like?

[Emma] Well, we didn’t really have a schedule. That’s why it worked! (laugh) Will and I were on it from the very start, and once we decided to split the edit into two blocks of three episodes, there were days when I wouldn’t get any rushes, so I could focus on getting a cut done – and vice versa for Will. When Douglas came in, we had six pretty good episodes that were cut according to the script. Douglas said he wanted to treat it like a six-hour film, so we did a full pass on all six episodes before Neil came in and then finally the execs. They allowed us the creative freedom to do that.

[Will] When Douglas came back, we basically had a seven and a half hour movie, which we ran in a cinema on a big screen. Then we went through and made adjustments in order. It was the first time I’ve had both the show runner and the director in with me every day. Neil had promised Terry that he would make sure it happened. Terry passed away before the production, but he had told Neil – and I’m paraphrasing here – don’t mess it up! So this was a very personal project for him. That weighed heavily on me, because when I reread the book, I wanted to make sure ‘this’ was in and ‘that’ was in as I did my cut.

[OP] What sort of changes were made as you were refining the episodes?

[Will] There were a lot of structural changes in episodes one and two that differed quite a bit from the script. It was a matter of working out how best to tell the story. Episode one was initially 80 minutes long, and it took quite a lot of work to get down to the hour-long final version. Episode three was much easier.

[Emma] By the time it got to episode four, the pattern had been established, so we had to deal more with visual effects challenges in the second half. We had a number of large set pieces and a limited visual effects budget. So we had to be clever about using visual effects moments without losing the impact, but still maximizing the effects we did have. And at the same time keeping it as good as we could. For example, there’s a flying saucer scene, but the plate shot didn’t match the saucer shot and it was going to take a ton of work to match everything. So we combined it with a shot intended for another part of the scene. Instead of a full screen effects shot, it’s seen through a car window. Not only did it save a lot of money, but more importantly, it ended up being a better way for the ship to land and more in the realm of Good Omens storytelling. I love that shot.

[Will] Visual effects are just storytelling points. You want to be careful not to lose the plot. For example, the Hellhound changes into a puppy dog and that transformation was originally intended to be a big visual effect. But instead, we went with a more classic approach. Just a simple cut and the camera tilts down to reveal the smaller dog. It turned out to be a much better way of doing it and makes me laugh every time I see it.

[OP] I noticed a lot of music from Queen used throughout. Any special arrangement to secure that for the series?

[Will] Queen is in the book. Every time Crowley hears music, even if it’s Mozart, it turns into Queen. Fortunately Neil knows everybody!

[Emma] And it’s one of Douglas’ favorite bands of all time, so it was a treat for him to put as much Queen music in as possible. At one point we had it over many more moments.

[Will] Also working with David Arnold [series composer] was great. There’s a lot of his music as well and he really understands what we do in editing.

[OP] Since this is a large effort and a lot of complex work involved, did you have a large team of assistant editors on the job with you?

[Emma] This is the UK. We don’t have a huge team! (laugh)

[Will] We had one assistant, Cat Gregory, and then much later on, a couple more for visual effects.

[Emma] They were great. Cat, our first assistant, had an adjoining room to us and she was our ‘take barometer.’ If you put in an alt line and she didn’t laugh, you knew it wasn’t as good. But if there was a chuckle coming out of her room, it would more often stay.

[OP] How do you work with your assistants? For example, do you let assistants assemble selects, or cut in sound effects or music?

[Will] It was such a heavy schedule with a huge amount of material, so there was a lot of work just to get that in and organized. Just giving us an honest opinion was invaluable. But music and sound effects – you really have to do that yourself.

[Emma] Me, too. I cut my own music and assemble my own rushes.

[OP] Please tell me a bit about your editorial set-up and editing styles.

[Will] We were spread between upstairs and downstairs rooms at the production company’s office in Soho. These were Avid Media Composer systems with shared storage. We didn’t have the ScriptSync option. We didn’t even have Sapphire plug-ins until late in the day, although that might have been nice for some of the bigger scenes with a lot of explosions. I don’t really have an editing style; I think it’s important not to have one as an editor. Style comes out of the content. I think the biggest challenge on this show was how to get the English humor across to an American audience.

[Emma] I wouldn’t say I have an editing style either. I come in, read the notes, and then watch the rushes with that information in my head. There wasn’t a lot of wild variation in the takes and David’s and Michael’s performances were just dreamy. So the material kind of cut itself.

[Will] The most important thing is to familiarize yourself with the material and review the selected takes. Those are the ones the director wanted. That also gives you a fixed point to start from. The great thing about software these days is that you can have multiple versions.

[OP] I know some directors like to calibrate their actors’ performances, with each take getting more extreme in emotion. Others like to have each take be very different from the one before it. What was Mackinnon’s style on this show as a director?

[Emma] In the beginning you always want to figure out what they are thinking. With Douglas it’s easy to see from the material he gives you. He’s got it all planned. He really gets the performance down to a tee in the rehearsal.

[Will] Douglas doesn’t push for a wide range of emotion from one take to the next. As Emma mentioned, Douglas works through that in rehearsal. Actors like David and Michael work that out, too, and they’re bouncing off each other. Douglas has a fantastic visual sense. You can look at the six episodes and go, “Wow, how did you get all of that in?” It’s a lot of material and he found a way to tell that story. There’s a very natural flow to the structure.

[OP] Since both Douglas Mackinnon and Will worked on Doctor Who, and David Tennant was one of the Doctors during the series, was there a conscious effort to stay away from anything that smacked of Doctor Who in Good Omens?

[Will] It never crossed my mind. I always try to do something different, but as I said, the style comes out of the material. It has jeopardy and humor like Doctor Who, but it’s really quite different. I did 32 episodes of Doctor Who and each of those was very different from the others. David Tennant is in it, of course, but he is not even remotely playing the Doctor. Crowley is a fantastic new character for him.

[OP] Are there any final thoughts you’d like to share about working on Good Omens?

[Will] It was a pleasure to work on a world-famous book, and it is very funny. Doing it justice was really all we were trying to do. I was going back every night, reading the book and marking things up. Hopefully the fans like it. I know Neil does, and I hope Terry is watching it.

[Emma] I’m just proud that the fans of the book are saying that it’s one of the best adaptations they’ve ever watched on the screen. That’s a success story and it gives me a warm feeling when I think about Good Omens. I’d go back and cut it again, which I rarely say about any other job.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.
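As a quick illustration of how that spanning works, here is a minimal Python sketch that regroups a card’s segment files back into logical recordings based on the 4GB ceiling. The file names and the size-only heuristic are my own for illustration; real ingest tools rely on each format’s clip metadata rather than file sizes:

```python
# FAT32 caps any file at 4GB, so a long recording is split ("spanned")
# across several segment files. Cameras roll to a new segment just under
# the cap, so file size is a workable heuristic for regrouping segments.
SPAN_THRESHOLD = 4 * 1024**3 - 16 * 1024**2  # ~4GB minus some headroom

def group_spanned_clips(segments):
    """segments: list of (filename, size_in_bytes) in recording order.
    Returns a list of recordings, each a list of segment file names."""
    clips, current = [], []
    for name, size in segments:
        current.append(name)
        if size < SPAN_THRESHOLD:   # this segment ends the recording
            clips.append(current)
            current = []
    if current:                     # the card filled up mid-recording
        clips.append(current)
    return clips
```

A clip that ran past 4GB shows up as one full-size segment followed by a short remainder, and the function stitches those names back into a single logical clip.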

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its Ki Pro products. From an editor’s point of view, I would much rather be handed Alexa or Ki Pro media files than any other camera product, simply because these are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of files prior to ingesting these for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats to which I don’t do this.) When it’s a camera that doesn’t generate unique file names, then I will run a batch renaming application in order to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
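The flattening step of that routine can be sketched in a few lines of Python. The folder names, extensions, and function name here are assumptions for illustration, not a product – and since this deletes folders, it belongs on a working copy of the card, never the original media:

```python
import shutil
from pathlib import Path

MEDIA_EXTS = {".mov", ".mp4", ".mxf", ".wav"}  # extend for your cameras

def flatten_card(card_root, roll_prefix):
    """Pull the media files out of the card's Clip/Private subfolders into
    the root folder, prefixing each with the roll name, then discard the
    leftover folder hierarchy (metadata, proxies, thumbnails)."""
    root = Path(card_root)
    for f in sorted(root.rglob("*")):
        if f.is_file() and f.suffix.lower() in MEDIA_EXTS and f.parent != root:
            f.rename(root / f"{roll_prefix}_{f.name}")
    for d in [p for p in root.iterdir() if p.is_dir()]:
        shutil.rmtree(d)  # none of the remaining structure is needed
```

The roll prefix doubles as the batch-rename step, so cameras that reuse generic clip names (C0001, C0002, and so on) still end up with unique file names across rolls.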

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.
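That 6K figure falls straight out of the desqueeze arithmetic. A quick sketch (the function is mine, not a real API):

```python
def desqueeze(width, height, squeeze_factor):
    """Frame dimensions and aspect ratio after undoing an anamorphic
    lens squeeze in post (the horizontal axis is expanded)."""
    out_w = int(width * squeeze_factor)
    return out_w, height, round(out_w / height, 2)

# ARRI Alexa 4:3 sensor mode (2880 x 2160) with 2:1 anamorphic glass:
print(desqueeze(2880, 2160, 2))  # (5760, 2160, 2.67)
```

The resulting 2.67:1 frame is wider than the 2.39:1 ’scope standard, which is exactly where that room to crop left and right comes from.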

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but rather that there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.
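To make the log idea concrete, here’s a toy encode/decode pair in Python. This is not any vendor’s actual transfer function – ARRI LogC, Sony S-Log3, and Panasonic V-Log each have their own published formulas – it only shows how a log curve hands out roughly the same slice of code values to each stop of exposure, which is what gives a colorist room to pull the highlights back out:

```python
import math

MID_GRAY = 0.18      # scene-linear value for 18% gray
STOPS_RANGE = 14.0   # total dynamic range mapped into 0-1

def toy_log_encode(linear):
    """Map a scene-linear value into 0-1 by stops above/below mid gray.
    A simplified stand-in, not a real camera curve."""
    stops = math.log2(max(linear, 1e-6) / MID_GRAY)
    return min(max((stops + STOPS_RANGE / 2) / STOPS_RANGE, 0.0), 1.0)

def toy_log_decode(encoded):
    """Invert the toy curve back to scene-linear (what a grade does
    when it 'expands' log footage)."""
    stops = encoded * STOPS_RANGE - STOPS_RANGE / 2
    return MID_GRAY * 2 ** stops

# Mid gray lands in the middle of the code range, and a highlight four
# stops over mid gray still fits comfortably below 1.0:
# toy_log_encode(0.18) -> 0.5; toy_log_encode(0.18 * 16) -> ~0.79
```

In a straight linear encoding, that same four-stops-over highlight would have blown past the top of the range; the log curve is what keeps it on file for the colorist to work with.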

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony), and it is my opinion that there isn’t that much magic to camera raw. Yes, you get good ISO/temperature/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post; in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch, and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than to go in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters