Does Apple’s mid-2020 iMac deliver?

Apple told us at WWDC that more Intel Macs were on the way, and the latest iMac refresh is the first fulfillment of that promise. In the Mac desktop line-up, the iMac spans two to ten CPU cores and up to 128GB of RAM, while the iMac Pro covers 10 to 18 cores and up to 256GB of RAM. That makes the 10-core configuration the bridge where the two branches overlap: it offers cost-effective performance and great value for consumer power users, as well as for professional editors, designers, photographers, and engineers. The recent refresh includes changes to the 21.5-inch iMac model and the iMac Pro line, but I'm going to focus on the 27-inch 5K iMac, since that model will most interest video professionals.

More power, faster storage, and nano-texture glass

The 5K iMac supplied by Apple for this review was configured with the Intel “Comet Lake” Core i9 10-core CPU (3.6GHz, Turbo Boost up to 5GHz), 64GB of DDR4 RAM, the Radeon Pro 5700 XT GPU (16GB of GDDR6 VRAM), and a 4TB SSD. It also came with the optional nano-texture glass display, keyboard with numeric keypad, trackpad, mouse, and 10Gb Ethernet. As tested, this would cost $6,158 USD (without AppleCare or tax), although opting for a 1TB SSD would drop that retail cost significantly. Fusion Drives are gone, replaced by all-flash storage options ranging from 256GB up to 8TB. The Blackmagic Disk Speed Test application clocked the internal 4TB SSD read/write speeds at roughly 2500-2900 MB/s.

Before talking performance, let’s look at the rest of the iMac. It’s still the familiar silver form factor, but with a cooling system optimized for the 125W CPU. You get four USB-A ports, two Thunderbolt 3/USB-C ports, 1Gb Ethernet, a headphone jack, and a faster SDXC (UHS-II) card reader, plus Wi-Fi and Bluetooth 5.0. If you need to connect to NAS storage (LumaForge Jellyfish, QNAP, Synology, etc.), then you’ll want to order your iMac with the optional 10GbE upgrade.

Recognizing that we are all spending more time at home, Apple improved the webcam to 1080p with an updated image sensor, enhanced the speakers with variable EQ, added a three-mic, “studio quality” array, and enabled “Hey Siri.”

The Retina 5K display sports 500 nits of brightness, one billion colors, and support for P3 wide color. True Tone color technology has also been added. It’s a nice feature for the non-pro user, but turn it off if you are doing anything color-critical, since it warms or cools the display’s color temperature depending on the ambient lighting.

The biggest buzz will be around the nano-texture glass option, first introduced as an option for the Pro Display XDR. Traditional matte finishes use a coating that reduces glare and reflections, but with a loss of contrast. Nano-texture is a method to etch the glass at the nanometer level so that it redirects light. The objective is to reduce glare while maintaining contrast on par with that of the standard finish. It achieves that goal, although at a close viewing distance, text will look crisper on a display with standard glass.

At $500, it’s a reasonable option and far less costly than on the XDR. However, if your room doesn’t have a lot of direct light hitting the screen anyway, then you may not see as large a benefit from the nano-texture finish. In theory, heavy-handed cleaning could scuff the display, but Apple claims that if you use the supplied cleaning cloth and occasional water (if needed), screen damage is highly unlikely. Be gentle, don’t scrub, and you’ll be fine.

How does it stand up to an iMac Pro?

I have access to several similarly configured 10-core 2017 iMac Pros, so this seemed like a great opportunity for some head-to-head testing: the iMac Pro’s Xeon/Vega combo versus the new iMac’s Core i9/5700 XT combo. Both have 10-core CPUs, 64GB of RAM, and a GPU with 16GB of VRAM. The iMac Pro is designed as a workstation, with parts and a thermal system to match, and until the 2019 Mac Pro was released, it was Apple’s most powerful Mac. The iMacs, on the other hand, use components designed for general computing and gaming. That’s not to say they aren’t powerful. In fact, by the numbers, the 10-core iMac features faster components than the equivalent iMac Pro model.

Generally speaking, the iMac should deliver better burst performance, whereas the iMac Pro is designed for lengthy, taxing workloads: constant use, extended rendering and encoding, and so on. But it really depends on the applications you use and how much demand you place on the machine. When it comes to value, if we were to spec a 2020 27-inch iMac to closely match the 2017 iMac Pro I am using, the iMac Pro currently runs about $1,400 more (standard glass, no AppleCare, no tax). Is that added $1,400 worth it? That’s where performance testing comes in.

Benchmark performance testing

I ran both machines through a series of identical benchmarks, including BruceX 5K for Final Cut Pro X, Puget Systems’ Premiere Pro and After Effects benchmarks, as well as custom projects in Final Cut Pro X, Motion, and DaVinci Resolve. These tests covered a range of media formats and codecs, such as DNG image sequences, ProRes, H.264, REDCODE raw, ProRes RAW, and BRAW. Media sizes ranged from HD to 8K and my sequences and exports were 4K. These projects tested scaling, camera raw decodes, color correction, effects, synthetic media, and so on. I stuck to the internal drive for all media locations and export destinations, since both the iMac and iMac Pro disk speed tests came in with very similar numbers.

The export results for the new iMac and the iMac Pro were neck-and-neck when using Apple’s applications – a few seconds faster from FCPX for the iMac and the same for both with Motion. The one exception was a 4K HEVC export of my 11-layer FCPX timeline. In that case the iMac clocked in a couple of minutes faster.

The Puget Systems’ Premiere Pro and After Effects benchmarks are designed around an overall target score of 1,000 possible points. Most Macs score in the 500 to 750-point range, while custom-built PCs often achieve 1,000 or better. Both the iMac and iMac Pro fell into the expected range, with the new iMac still beating out the iMac Pro. What really surprised me was that the iMac hit 1,027 in the After Effects benchmark! It surprised me enough that I ran the test again and got the same result. I can only surmise that After Effects or the testing parameters favor the architecture of the Core-series CPU and 5700 XT GPU over that of the Xeon/Vega combo used in the iMac Pro.

The Resolve test was the only instance in which the 2017 iMac Pro beat the 2020 iMac, with export times about one minute faster for a complex seven-minute, color-corrected 4K sequence. During all of this testing, the cooling fans kicked into higher speeds for roughly the same amount of time and at the same places on both machines, for example, when exporting a Resolve clip that used temporal/spatial video noise reduction.

Should you buy one?

Clearly the new 27-inch iMac is a powerful performer equipped with one of the best-looking computer displays available anywhere. If you are an editor, designer, audio engineer, or similar creative professional, then you really can’t go wrong with one. A facility owner may skew towards the pricier iMac Pro, because it’s a workstation-class machine or they need more cores, more RAM, or additional Thunderbolt 3 ports. Customer upgradeability is limited – essentially none for the iMac Pro and only RAM for the iMac.

Of course, the “elephant in the room” question is: Should you buy an Intel Mac now, with Apple silicon presumably coming within a few months? If you need a machine now and can’t wait, then the answer is yes. Maybe you want to wait until second-generation Apple silicon hardware is out before taking the plunge into new technology. Or you need something that requires Intel, such as running Windows via Boot Camp. All good reasons for staying with an Intel hardware investment a while longer.

In reality, the transition to Apple silicon will take two years according to Apple. It may well be towards the end of that two-year period before we see machines comparable to today’s higher-end MacBook Pro, iMac, iMac Pro, or Mac Pro. We’ll know better once the first Apple silicon machines hit the market. In any case, Apple intends to support its Intel-based lines well after the transition is complete. Therefore, purchasing an Intel-based Mac today is likely to be less of a risk than people make it out to be.

The bottom line is that the mid-2020 27-inch 5K iMac in the 10-core configuration that I tested for this review is ideal for nearly any HD and 4K editing, color correction, graphics, and mixing. You can certainly go bigger with an iMac Pro or Mac Pro, but this configuration offers a tremendous value for these iconic, all-in-one desktop Macs. The 10-core model is that “sweet spot” where nearly every application can take full advantage of the available horsepower. If you need a desktop Mac now, then it should certainly be at the top of the list.

Check out the Stalman Podcast discussion about this iMac versus PCs.

Originally written for FCP.co.

©2020 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as I continued working as an editor and tech writer, we kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career spent using, designing, and shepherding a wide range of post-production products, Steve probably knows more about the diverse field of editing systems than most managers at the companies that make them. Naturally, many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you: Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like FilmLight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach, and it integrated a bunch of cutting-edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata-driven workflows, background processing, the magnetic timeline – in many ways people are still trying to catch up eight years later. And now FCP X is the best-selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition maybe already has. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they had already raised to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances, creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage and six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is that if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it, the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCP.co.

©2019 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this one as an addendum to Part 2. My apologies up front if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out its REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with other applications, the one exception being Motion, which will read and play the files, but with incorrect (albeit correctable) default video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Shogun Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that outputs a camera raw signal over SDI, which in turn is connected to the Atomos recorder, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that the Atomos firmware takes in the camera’s native form of raw signal and rewraps or transforms that data into ProRes RAW. This means the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports up to 12-bit color depth, but what you get depends on the camera: if the SDI output to the Atomos recorder is only 10-bit, then that’s the bit depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, each filtered to register light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green photosites as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range that the camera and sensor are capable of. It’s just an electrical signal being turned into data, without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels in the camera, then the sensor color information is said to be “baked” into the file. If, however, the data is stored in its raw form and only converted to RGB later in post, the original sensor data is preserved intact much further into the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.
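
Since the argument hinges on where this conversion happens, a tiny, purely illustrative sketch may help. The Python/NumPy function below turns an RGGB Bayer mosaic into RGB pixels by simple neighbor averaging. It is not any camera’s or raw converter’s actual de-Bayering math (real demosaicing is far more sophisticated); it only shows the kind of step that is either “baked in” by the camera or deferred to post-production software in a raw workflow.

```python
import numpy as np

def demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Naive demosaic of an RGGB Bayer mosaic (H x W, linear light).

    Each photosite recorded only one of R, G, or B; the two missing values
    are estimated by averaging neighboring photosites of that color. Edges
    wrap around (np.roll) purely to keep the sketch short.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Which photosite holds which color in an RGGB pattern.
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)   # twice as many green sites as red or blue

    for chan, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, mosaic, 0.0)
        counts = mask.astype(float)
        nbr_sum = np.zeros_like(samples)
        nbr_cnt = np.zeros_like(counts)
        # Average over each pixel's 3x3 neighborhood, using only known samples.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nbr_sum += np.roll(np.roll(samples, dy, axis=0), dx, axis=1)
                nbr_cnt += np.roll(np.roll(counts, dy, axis=0), dx, axis=1)
        rgb[..., chan] = nbr_sum / np.maximum(nbr_cnt, 1.0)
        rgb[mask, chan] = mosaic[mask]   # keep the measured sample where one exists

    return rgb
```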

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that the wider dynamic range is lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.
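
For a sense of how log encoding keeps that latitude, here is a toy curve in Python. It is deliberately generic – not Log-C, S-Log, V-Log, or any other published camera curve – and simply shows how many stops of linear scene light can be folded into a 0-1 code range and recovered later.

```python
import math

# Toy log curve: purely illustrative, not any vendor's published math.
STOPS = 14.0            # assumed encodable dynamic range, in stops
BLACK = 1.0 / 2**STOPS  # smallest value we bother to encode

def encode(linear: float) -> float:
    """Linear light (1.0 = clipping) -> log code value in 0..1."""
    linear = max(linear, BLACK)
    return (math.log2(linear) + STOPS) / STOPS

def decode(code: float) -> float:
    """Log code value in 0..1 -> linear light."""
    return 2 ** (code * STOPS - STOPS)

# A tone 6 stops below clipping still lands comfortably mid-range in the
# code values instead of being crushed toward zero.
mid_gray = 1.0 / 2**6
print(round(encode(mid_gray), 3))           # ~0.571
print(round(decode(encode(mid_gray)), 6))   # round-trips back to 0.015625
```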

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”
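
To make the “proportional to frame rate and resolution” point concrete, here is a back-of-the-envelope file-size calculation. The data rate used below is an assumed, purely illustrative figure, not a published ProRes RAW specification; as the white paper notes, actual rates vary with image content.

```python
def file_size_gb(data_rate_mbps: float, duration_s: float) -> float:
    """Approximate file size in GB for a clip with a given average data rate."""
    return data_rate_mbps * duration_s / 8 / 1000

# Hypothetical example: an assumed average of 1000 Mb/s for a 10-minute clip.
# A noisier or more detailed scene would land higher, a cleaner one lower.
print(round(file_size_gb(1000, 10 * 60), 1))  # -> 75.0 GB
```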

ProRes RAW and HDR do not depend on each other

One of my gripes when watching some of the ProRes RAW demos on the web (and reading the related forum comments) is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows, and HDR workflows do not depend on raw source material. One of the online demos I saw recently started with an HDR FCPX library. The demo ProRes RAW clips were imported and looked blown out, which made for a dramatic example of recovering highlight information. But it was wrong!

If you start with an SDR FCPX library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post; that LUT reference is part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec 709 color space, not HDR or wide gamut. If you set the inspector’s LUT setting to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually inherent, “baked-in” properties of the raw file. Instead, they are metadata, dialed in by the DP on the camera, which optimizes the image for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”: 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering the ISO/EI value means that you can either see better into the darker areas (with a trade-off of added noise) or see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.
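
That gain analogy can be written down directly. The arithmetic below is just the standard stops-to-decibels relationship (using the common video convention of roughly 6 dB per stop); it is not specific to ProRes RAW or to any particular camera.

```python
import math

def ei_shift_in_stops(new_ei: float, reference_ei: float) -> float:
    """How many stops of exposure shift a new EI represents vs. the reference."""
    return math.log2(new_ei / reference_ei)

def stops_to_db(stops: float) -> float:
    """One stop is a doubling of level, i.e. roughly 6 dB of gain."""
    return 20 * math.log10(2 ** stops)

# Rating a sensor whose "sweet spot" is EI 800 at EI 3200 pushes the image
# up two stops (~12 dB of gain): more shadow detail visible, but more noise.
print(ei_shift_in_stops(3200, 800))    # 2.0 stops
print(round(stops_to_db(2.0), 1))      # ~12.0 dB
```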

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. It offers a lot of control of the image prior to tweaking with FCPX’s own controls. However, the amount of raw image control for a REDCODE file is significantly greater in Premiere Pro than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file. When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, such as with the RED Rocket-X card. While this level of control is nice to have, I suspect it’s the sort of professional complication that Apple seeks to avoid.
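
As a conceptual illustration of why a reduced-resolution de-Bayer is so cheap (and not a description of RED’s actual decode math): each 2x2 RGGB quad already contains one red, two green, and one blue sample, so a half-resolution image can be built with no interpolation at all.

```python
import numpy as np

def half_res_debayer(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB quad into a single RGB pixel (half resolution)."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)  # no interpolation needed
```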

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.


©2018 Oliver Peters

Comparing Color, Resolve, SpeedGrade and Symphony


It’s time to talk about color correctors. In this post, I’ll compare Color, Resolve, SpeedGrade and Symphony, the popular desktop color correction systems in use today. Certainly there are other options, like FilmLight’s Baselight Editions plug-in, as well as other NLEs with their own powerful color correction tools, including Autodesk Smoke and Quantel Rio. Some of these fall outside the budget range of small shops or don’t really provide a dedicated correction workflow. For the sake of simplicity, in this post I’ll stick with the four I see the most.


Avid Technology Media Composer + Symphony

Although it started as a separate NLE product with dedicated hardware, today’s Symphony is really an add-on option for Media Composer. The main feature that differentiates Symphony from Media Composer in file-based workflows is an enhanced color correction toolset. Symphony used to be the “gold standard” for color correction within an NLE, combining controls “borrowed” from many other applications and systems, like Photoshop, hardware proc amps, and the hardware versions of the DaVinci correctors. It was the first to use the color wheel control model for balance/hue offsets. A subset of the Symphony tools has since been migrated into Media Composer. Basic correction features in Symphony include channel mixing, hue offsets (color balance), levels, curves, and more.

Many perceive Symphony correction as a single level or layer of correction, but that’s not exactly true. Color correction occurs on two levels – segment and program track. Most of your correction is on individual clips and Symphony offers a relational grading system. This means you can apply grades based on single clips or all instances of a master clip, tape ID, camera, etc. All clips used from a common source can be automatically graded once the first instance of that clip is graded on the timeline. The program track grade allows the colorist to apply an additional layer of grading to a clip, a section of the timeline or the entire timeline. So, when the client asks for everything to be darker, a global adjustment can be made using the program track.
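
Conceptually, relational grading is just a grade lookup keyed on the source rather than on the timeline segment. The sketch below is purely illustrative Python, not Avid’s implementation, but it captures the idea that grading one instance grades every segment cut from the same source.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    source_id: str            # master clip, tape ID, camera, etc.

@dataclass
class Timeline:
    segments: list = field(default_factory=list)
    source_grades: dict = field(default_factory=dict)   # source_id -> grade

    def grade_source(self, source_id: str, grade: str) -> None:
        """Grading one instance effectively grades every segment from that source."""
        self.source_grades[source_id] = grade

    def grade_for(self, segment: Segment) -> str:
        return self.source_grades.get(segment.source_id, "ungraded")

tl = Timeline(segments=[Segment("sc1_t3", "A001"), Segment("sc4_t1", "A001")])
tl.grade_source("A001", "warm interior look")
print([tl.grade_for(s) for s in tl.segments])   # both segments pick up the grade
```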

Symphony also offers secondary grading based on isolating colors via an HSL key and adjusting that range. Although Symphony doesn’t offer nodes or correction layers like other software, you can use Avid’s video track timeline hierarchy to add additional correction to blank tracks above those tracks containing the video clips. In this way you are using the tracks as de facto adjustment layers. The biggest weakness is the lack of built-in masking tools to create what is commonly referred to as “power windows” (a term originated by DaVinci). The workaround is to use Avid’s built-in Intraframe/Animatte effects tools to create masks. Then you can apply additional spot correction within the mask area. It takes a bit more work than other tools, but it’s definitely possible. Finally, many plug-in packages, like GenArts Sapphire, Boris Continuum Complete and Magic Bullet Looks include vignette filters that will work with Symphony.

The bottom line is that Symphony started it all, though by today’s standards it is a bit long in the tooth. Nevertheless, the relational grading model – and the fact that you are working within the NLE and can freely move between color correction and editing/trimming – makes Symphony fast to operate, especially on time-sensitive, long-form productions, like TV shows.


Adobe SpeedGrade CC

If you are a current Creative Cloud subscriber, then you have access to the most recent versions of Adobe Premiere Pro CC and SpeedGrade CC. With the updates introduced late last year, Adobe added Direct Link interaction between Premiere Pro and SpeedGrade. When you use Direct Link to send your Premiere Pro timeline to SpeedGrade, the actual Premiere Pro sequence becomes the SpeedGrade sequence. This means codec decoding, transitions, and Premiere Pro effects are handled by Premiere Pro’s effects engine, even though you are working inside SpeedGrade. As such, a project created via Direct Link supports features and codecs that would not be possible within a standalone SpeedGrade project.

Another unique aspect is that native and third-party transitions and effects used in Premiere Pro are visible (though not adjustable) when you are working inside SpeedGrade. This is an important distinction, because other correction workflows that rely on roundtrips don’t include NLE-based filters, so you can’t see how the correction will be affected by a filter used in the NLE timeline. Naturally, in the case of SpeedGrade, this only works if you are on a machine with the same third-party filters installed. When you return to Premiere Pro from SpeedGrade, the color corrections on clips are collapsed into a Lumetri filter effect that is applied to the clip or adjustment layer within the Premiere Pro sequence. Essentially this Lumetri effect is similar to a LUT that encapsulates all of the grading layers applied in SpeedGrade into a single effect in Premiere Pro. This is possible because the two applications share the same color science. The result is a render-free workflow with the easy ability to go back and forth between Premiere Pro and SpeedGrade for changes and adjustments. Unlike a standard LUT, Lumetri filters can carry masks and keyframes, and they are 100% precise.

As a color corrector, SpeedGrade is designed with a layer-based interface, much like Photoshop. Layers can be primary (fullscreen), secondary (keys and masks) or filters. A healthy selection of effects filters and LUTs are included. The correction model splits the signal into what amounts to a 12-way color wheel arrangement. There are lift/gamma/gain controls for the overall image, as well as for each of the shadow, middle and highlight ranges. Controls can be configured as wheels or sliders, with additional sliders for contrast, pivot, temperature (red vs. blue bias), magenta (red/blue vs. green bias) and saturation. There are no curves controls.

Overall, I like the looks I get with SpeedGrade, but I find it lacking in some ways. There are definite plusses and minuses. I miss the curves. It currently does not work with Blackmagic Design hardware; Matrox, Bluefish and AJA are OK. It’s got a tracker, but I find both tracking and masking to be mediocre. The biggest workflow shortcoming is the lack of a temporary memory register feature. You can save a whole grade, which saves the entire stack of grading layers applied to a clip as a Lumetri filter. You can apply grades from earlier timeline clips quite simply, and SpeedGrade lets you open multiple playheads for comparison and correction between multiple shots on the timeline. You can access the nine grades before and the nine grades after the current playhead position. You can also copy the grade from the clip beneath the mouse position to the clip under the playhead by pressing the C key. What you cannot do is store a random set of grades or just a single layer in a temporary buffer and then apply it from that buffer somewhere else in the timeline. Adding these two items would greatly speed up the SpeedGrade workflow.


Blackmagic Design DaVinci Resolve

The DaVinci name is legendary among color correction products, but that reputation was earned with its hardware products, like the DaVinci 2K. Resolve was the software-based product built around a Linux cluster. When Blackmagic bought the assets and technology of DaVinci, all of the legacy hardware products were dropped in favor of concentrating on Resolve as the software with the most life left in it. There are now four versions: Resolve Lite (free), Resolve (paid, software only), Resolve with a Blackmagic control surface, and Resolve for Linux. The first three work on Mac and PC. You may download the free Lite version from the Blackmagic website or Apple’s Mac App Store. The Lite version has nearly all of the power of the paid software, but with these limitations: noise reduction, stereoscopic tools, and the ability to output at resolutions above UltraHD require a paid version.

I’m writing this based on Resolve 10, which has rudimentary editing features. It is designed as a standalone color corrector that can be used for some editing. Blackmagic Design doubled down on the editing side with Resolve 11 (shown at NAB 2014). When that’s finally released this summer, you’ll have a powerful NLE built into the application. The demos at NAB were certainly impressive. If that turns out to be the case, Resolve 11 would function like an Avid Symphony or Quantel Rio type of system, meaning you could freely move between creative editing and color correction simply by changing tabs in the interface. For now, Resolve 10 is mainly a color corrector, with some very good roundtrip and conforming support for other NLEs – specifically, very good support for Avid and FCP X workflows.

As a color corrector, Resolve offers the widest set of correction tools of any of these systems. In the work I’ve done, Resolve allows for more extreme grading and is more precise when trying to correct problem shots. I’ve done corrections with it that would have been impossible with any other tool. The correction controls include curves, wheels, primary sliders, channel mixers and more. Corrections are node-based and can be applied to clips or an entire track. Nodes can be applied in a serial or parallel fashion, with special splitter/combiner and layer mixing nodes. The latter includes Photoshop-style blend modes. Unlike SpeedGrade, you can store the value of a single node in a buffer (using the keyboard copy function) and then paste the value of just that node somewhere else. This makes it pretty fast when working up and down a timeline. Finally, the tracker is amazing.

A few things bother me about Resolve, in spite of its powerful toolset. The interface almost presents too many tools and it becomes very easy to lose track of what was done and where. There is no large viewer or fullscreen mode that doesn’t hide the node tree. This forces a lot of toggling between workspace configurations. If you have two displays, you cannot use the second display for anything other than the scopes and audio mixer. (This will change with Resolve 11.) Finally, you can only use Blackmagic Design hardware to view the video output on a grading monitor.


Apple Color

Some of you are saying, “Why talk about that? It was killed off a few years ago! Who uses that anymore?” Yes, I know. What people so quickly forget is that when the software was FinalTouch (before Apple’s purchase), it was very expensive and considered to be very innovative. Apple bought it, added some features, and cleaned up some of the workflow. As part of Final Cut Studio, it set the standard for round-tripping with an NLE. Unfortunately for many Mac users, it retained its less glossy, “Unixy” interface and thus didn’t really catch on with many editors. However, it still works just fine on the newest machines and OS versions and remains a fast, high-quality color corrector.

Nearly all of the long-form jobs I’ve done – including feature films and TV shows, even as recently as a few months ago – have been graded with Color. There are two reasons that I prefer it. The first is that most of these jobs were cut using FCP 7, so it’s still the most integrated software for those projects. More importantly, several key features make it faster than SpeedGrade and Resolve for projects that fall within a standard range of grading – in other words, the in-camera look was good, there were no huge problem areas and the desired grade didn’t swing into extreme looks.

Color is designed with 10 levels of grading per clip – primary in, eight secondaries and primary out. Since secondaries can be fullscreen or a portion of the image qualified by an HSL key or mask, each secondary layer can actually hold two corrections – inside and outside of the mask. In addition to these, there’s a ColorFX layer for node-based filter effects, which can also include color adjustments. In reality, the maximum number of corrections to a single clip is 19: primary in and out, up to two per secondary, plus ColorFX. The primary corrections can include value changes for RGB lift/gamma/gain and saturation levels, as well as printer lights. On top of this are lift/gamma/gain color wheels and luma controls. Lastly, there are curves. The secondaries include custom mask shapes and hue/sat/luma curves. There’s a tracker, too, but it’s not that great.
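
For anyone curious where that count of 19 comes from, here is a tiny Python sketch of the layer stack. The dictionary layout and the new_grade_stack() helper are hypothetical, purely for illustration; Color has no scripting interface like this.

# Rough sketch of one clip's grade stack in Apple Color. The dict layout and
# the new_grade_stack() name are my own illustration; Color exposes no
# scripting interface like this.
def new_grade_stack():
    return {
        "primary_in":  {},   # lift/gamma/gain, printer lights, curves
        "secondaries": [{"inside": {}, "outside": {}} for _ in range(8)],
        "colorfx":     {},   # node-based filter effects
        "primary_out": {},
    }

stack = new_grade_stack()

# Primary in and out, one ColorFX layer, plus up to two corrections per
# secondary (inside and outside the mask) is where the count of 19 comes from.
max_corrections = 2 + 1 + len(stack["secondaries"]) * 2
print(max_corrections)  # -> 19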

Where Color still shines for me is in workflow. Each layer is represented by a labeled bar on the timeline under the clip. This makes it easy to apply a single secondary adjustment to other clips on the timeline simply by sliding the corresponding secondary bar from one timeline clip to one or more of the others. For example, say I used Secondary 3 to qualify a person’s face and brighten it. I could then simply drag the S3 bar under the first clip over to every other clip with the same person and a similar set-up – all without having to select each of those clips before applying the adjustment.
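
Reusing the hypothetical new_grade_stack() structure from the sketch above, the drag-a-bar workflow amounts to copying one secondary’s settings onto other clips while leaving the rest of their grades alone:

import copy

# Hypothetical five-clip timeline, reusing new_grade_stack() from the sketch
# above. Each clip carries its own independent grade stack.
timeline = [new_grade_stack() for _ in range(5)]

# Grade the face on the first clip with Secondary 3 (index 2).
timeline[0]["secondaries"][2]["inside"] = {"qualifier": "face HSL key",
                                           "gain": 1.15}

# "Dragging the S3 bar" onto the third and fifth clips copies only that one
# secondary, leaving every other layer of those clips untouched.
for clip in (2, 4):
    timeline[clip]["secondaries"][2] = copy.deepcopy(
        timeline[0]["secondaries"][2])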

Color works with all cards that work with Final Cut Pro, so there’s no AJA versus Blackmagic issue as mentioned above. Dual monitors work well: you can have scopes and the viewer (or a fullscreen viewer) on one display and the full control interface on the other. Realistically, Color works best with video up to 2K in one of the standard Apple codecs (uncompressed or ProRes work best). A lot of the footage I’ve graded with it was ProRes HQ or ProRes 4444 that either came natively from an ARRI Alexa or was transcoded from a C300, RED or Canon 5D/7D. But I’ve also graded a film shot on a Sony camera, with the native EX files rewrapped as .mov, and Color had no issues. Log-profile footage grades very nicely in Color, so Alexa ProRes 4444 recorded in Log-C is a real sweet spot.
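
As a side note on why log-encoded footage leaves so much room to grade, here is a small sketch using the commonly published ARRI LogC (v3, EI 800) encoding parameters. Treat it as illustrative math, not a calibrated implementation.

import math

# Commonly published ARRI LogC (v3, EI 800) encoding parameters. This is an
# illustrative sketch of the curve, not a calibrated implementation.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def linear_to_logc(x):
    """Map a scene-linear value to a LogC code value in the 0-1 range."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

# Mid grey (0.18) lands near code value 0.39, while a value six stops
# brighter still sits around 0.83; that preserved highlight detail is what
# gives log footage its grading headroom.
print(round(linear_to_logc(0.18), 3))           # ~0.391
print(round(linear_to_logc(0.18 * 2 ** 6), 3))  # ~0.832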

©2014 Oliver Peters

NAB 2014 Thoughts

Whodathunkit? More NLEs, new cameras from new vendors and even a new film scanner! I’ve been back from NAB for a little over a week and needed to get caught up on work while decompressing. The following are some thoughts in broad strokes.

Avid Connect. My trip started early with the Avid Connect customer event, a corporate gathering with over 1,000 paid attendees. Avid execs and managers outlined the corporate vision of Avid Everywhere in presentations that were head-and-shoulders better than any executive presentations Avid has given in years. Many who attended came to see if there was still life in Avid, and I think the general response was receptive and positive. Avid Everywhere is basically a realignment of existing and future products around a platform concept. That has more impact if you own Avid storage or asset management software – less so if you only own a seat of Media Composer or Pro Tools. No new software features were announced, but new pricing models were introduced, with options to purchase or rent individual seats of the software – or to rent floating licenses in larger quantities.

4K. As predicted, 4K was all over the show. However, when you talked to vendors and users, there was little clear direction about actual mastering in 4K. It is starting to be a requirement in some circles (delivering to Netflix, for example), but for most users 4K stops at acquisition. There is interest for archival reasons, as well as for reframing shots when the master is HD or 2K.

Cameras. New cameras from Blackmagic Design – not much of a surprise there. One is the bigger, ENG-style URSA, which is Blackmagic’s answer to all of the add-ons people bolt onto smaller HDSLR-sized cameras. The biggest feature is a 10” flip-out LCD monitor. AJA was the real surprise with its own 4K Cion camera – think Ki Pro Quad with a camera built around it. Several DPs I spoke with weren’t that thrilled about either camera, because of size or balance. A camera that did get everyone jazzed was Sony’s A7s, one of their new Alpha series HDSLRs. It’s 4K-capable when recorded via HDMI to an external device, and the images were outstanding. Of course, 4K wasn’t everywhere – notably not at ARRI. The news there is the Amira, a sibling to the Alexa. Both share the same sensor design, with the Amira aimed at documentary work. I’m sure it will be a hit, in spite of being a 2K camera.

Mac Pro. The new Mac Pro was all over the show in numerous booths. Various companies showed housings and add-ons to mount the Mac Pro for different applications. Lots of Thunderbolt products were on display to address expandability for this unit, as well as for Apple laptops and, eventually, the PCs that will adopt Thunderbolt technology. The folks at FCPworks showed a nice DIT table/cart designed to hold a Mac Pro, keyboard, monitoring and other on-set essentials.

FCP X. Speaking of FCP X, the best place to check it out was at the off-site demo suite that FCPworks was running during the show. The suite demonstrated a number of FCP X-based workflows using third-party utilities, shared storage from Quantum and more. FCP X was in various booths on the NAB show floor, but to me it seemed limited to partner companies, like AJA, and I thought the occurrences of FCP X in other booths were overshadowed by Premiere Pro CC sightings. No new FCP X feature announcements or even hints were made by Apple in any private meetings.

NLEs. The state of nonlinear editing is in more flux than ever. FCP X seems to be picking up a little steam, as is Premiere Pro, yet there is still no clear market leader across all sectors. Autodesk announced Smoke 2015, which will be the last version you can buy; following Adobe’s lead, Autodesk shifts to a rental model for its products this year. Smoke 2015 diverges further from the Flame UI model, with more timeline-based effects than Smoke 2013. Lightworks for the Mac was demoed at the EditShare booth, making it another new option for Mac editors. Nothing new yet out of Avid, except some rebranding – Media Composer is now Media Composer | Software and Sphere is now Media Composer | Cloud. Expect new features to be rolled in by the end of this year. The biggest new player is Blackmagic Design, which has expanded the DaVinci Resolve software into a full-fledged NLE. Its cosmetic resemblance to FCP X caused many to dub it “the NLE that Final Cut Pro 8 should have been.” Whether that’s on the mark or just irrational exuberance has yet to be determined. Suffice it to say that Blackmagic is serious about making it a powerful editor, which for now is targeted at finishing.

Death of i/o cards. I’ve seen little mention of this, but it seems to me that dedicated PCIe video capture cards are a thing of the past. KONA and DeckLink cards are really just there to support legacy products, and they have less relevance in a file-based world. Most of the focus these days is on monitoring, which can be easily (and more cheaply) handled by HDMI or small Thunderbolt devices. Look at AJA and Matrox, for example: the main target for their PCIe cards is now the OEM market – AJA supplies Quantel with its 4K i/o cards. The emphasis for direct customers is on smaller output-only products, mini-converters or self-contained format converters.

Film. If you were making a custom 35mm film scanner, get out of the business, because you are now competing against Blackmagic Design! Their new film scanner is based on technology acquired through the purchase of Cintel a few months ago. Blackmagic has now introduced a sleek 35mm scanner capable of up to 30fps at UltraHD resolution. It’s $30K and connects to a Mac Pro via Thunderbolt 2. Simple operation and easy software (plus Resolve) will likely rekindle interest in the film transfer business at a number of facilities – especially those sitting on a large archive of film.

Social. Naturally NAB wouldn’t be the fun it is without the opportunity to meet up with friends from all over the world. That’s part of what I get out of it. For others it’s the extra training through classes at Post Production World. The SuperMeet is a must for many editors. The Avid Connect gala featured entertainment by the legendary Nile Rodgers and his band Chic. Nearly two hours of non-stop funk/dance/disco. Quite enjoyable regardless of your musical taste. So, another year in Vegas – and not quite the ho-hum event that many had thought it would be!

More analysis is available at Digital Video’s website.

©2014 Oliver Peters