A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, through my work as an editor and tech writer, we kept in touch as he moved from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career spent using, designing, and shepherding a wide range of post-production products, Steve probably knows the diverse field of editing systems better than most managers at the companies that build them. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists the features and how they all fit together, plus a little bit about how you would market them. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to have been dear to you – Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and advanced tools like DaVinci Resolve are affordable to any user. Yet there’s still that very high-end market for systems like FilmLight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough, and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time, and the full range of color grading tools in FCP X and the FxPlug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros of the day. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams as when digitizing from tape. In some cases you couldn’t play the camera original video files at all, so you needed to transcode before you could start editing. And none of the available transcoding codecs were that high in quality, or else they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build, and you paid for it by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. That model doesn’t stand to benefit from new efficiencies and technological advances, because taking advantage of them requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already raised to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries and languages: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances on new workflows built around FCP X and shared storage. They are running 1.5 petabytes of storage across six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) technology to sync multiple iOS recording devices with rock-solid timecode, and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point is that camera sensors are built with photosites, which only loosely equate to pixels — there is no 1:1 correlation between a sensor’s photosites and the display pixels on a screen. This is made even more complicated by the design of the Bayer-pattern sensor used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you look at the image closely. The reasons include cheap plastic lenses and high compression levels.
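To illustrate why photosites and pixels don’t map 1:1, here is a minimal sketch of bilinear demosaicing for an RGGB Bayer mosaic. This is a toy illustration, not any camera’s actual de-Bayering pipeline (real ones are far more sophisticated). The point is simply that each photosite records one color, so two-thirds of every output pixel’s RGB data must be interpolated from neighboring photosites:

```python
import numpy as np

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Toy bilinear demosaic of an RGGB Bayer mosaic.
    Edges wrap around for simplicity."""
    h, w = mosaic.shape
    # Masks marking which photosites carry which color (RGGB tiling).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    g_mask = np.zeros((h, w)); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1

    def interp(mask):
        known = mosaic * mask
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for dy in (-1, 0, 1):        # average over a 3x3 neighborhood,
            for dx in (-1, 0, 1):    # normalized by photosite count
                total += np.roll(np.roll(known, dy, 0), dx, 1)
                count += np.roll(np.roll(mask, dy, 0), dx, 1)
        estimate = total / np.maximum(count, 1)
        # Keep the measured value where this channel was actually sampled.
        return np.where(mask == 1, known, estimate)

    return np.dstack([interp(m) for m in (r_mask, g_mask, b_mask)])

# 16 photosites yield 16 RGB pixels, but most of the data is interpolated.
print(demosaic_bilinear(np.random.rand(4, 4)).shape)  # (4, 4, 3)
```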

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.
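That de-squeeze arithmetic is easy to sanity-check with a trivial sketch:

```python
def desqueeze(width: int, height: int, squeeze: float) -> tuple[int, int]:
    """Expand an anamorphic capture horizontally by the lens squeeze factor."""
    return round(width * squeeze), height

# Alexa 4:3 mode (2880 x 2160) shot through 2:1 anamorphic lenses:
print(desqueeze(2880, 2160, 2.0))  # (5760, 2160)
```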

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to record unrecoverable highlights in your image. Or in some cases the highlights aren’t digitally clipped, but rather there’s just no information in them other than bright whiteness. There is no substitute for proper exposure control and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.
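To make the log idea concrete, here is a minimal sketch of a generic log encode/decode pair. The curve shape and the constant are purely illustrative; LogC, V-Log, S-Log, and the rest each publish their own formulas and parameters:

```python
import numpy as np

C = 100.0  # compression strength -- an arbitrary illustrative constant

def log_encode(linear):
    """Map scene-linear values in [0, 1] into a log code range,
    compressing highlights so they survive a limited bit depth."""
    return np.log10(C * linear + 1) / np.log10(C + 1)

def log_decode(encoded):
    """Exact inverse: post-production 'stretches' the image back out."""
    return ((C + 1) ** encoded - 1) / C

x = np.array([0.0, 0.01, 0.18, 0.5, 1.0])   # scene-linear values
print(log_encode(x))              # 18% gray lands around code 0.64
print(log_decode(log_encode(x)))  # round-trips back to the input
```

The thing to notice is that mid-gray lands well up in the code range, leaving the upper codes free to describe highlights that would otherwise clip.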

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good ISO/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate. However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been CinemaDNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras used it until it was replaced by Blackmagic RAW. Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that settings decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Did you pick the right camera? Part 1

There are tons of great cameras and lenses on the market. While I am not a camera operator, I have been a videographer on some shoots in the past, so the relevant production and camera logistical issues are not foreign to me. However, my main concern in evaluating cameras is how they impact me in post – workflow, editing, and color correction. First – biases on the table. Let me say from the start that I have had the good fortune to work on many productions shot with ARRI Alexas, and that is my favorite camera system with regard to the three concerns raised in the introductory post. I love the image, adopting ProRes for recording was a brilliant move, and the workflow couldn’t be easier. But I also recognize that ARRI makes an expensive, albeit robust, product. It’s not for everyone. Let’s explore.

More camera choices – more considerations

If you are going to only shoot with a single camera system, then that simplifies the equation. As an editor, I long for the days when directors would only shoot single-camera. Productions were more organized and there was less footage to wade through. And most of that footage was useful – not cutting room fodder. But cameras have become cheaper and production timetables condensed, so I get it that having more than one angle for every recording can make up for this. What you will often see is one expensive ‘hero’ camera as the A-camera for a shoot and then cheaper/lighter/smaller cameras as the B and C-cameras. That can work, but the success comes down to the ingredients that the chef puts into the stew. Some cameras go well together and others don’t. That’s because all cameras use different color science.

Lenses are often forgotten in this discussion. If the various cameras being used don’t have a matched set of lenses, the images from even the exact same model cameras – set to the same settings – will not match perfectly. That’s because lenses have coloration to them, which will affect the recorded image. This is even more extreme with re-housed vintage glass. As we move into the era of HDR, it should be noted that various lens specialists are warning that images made with vintage glass – and which look great in SDR – might not deliver predictable results when that same recording is graded for HDR.

Find the right pairing

If you want the best match, use identical camera models and matched glass. But, that’s not practical or affordable for every company nor every production. The next best thing is to stay within the same brand. For example, Canon is a favorite among documentary producers. Projects using cameras from the EOS Cinema line (C300, C300 MkII, C500, C700) will end up with looks that match better in post between cameras. Generally the same holds true for Sony or Panasonic.

It’s when you start going between brands that matching looks becomes harder, because each manufacturer uses their own ‘secret sauce’ for color science. I’m currently color grading travelogue episodes recorded in Cuba with a mix of cameras. A and B-cameras were ARRI Alexa Minis, while the C and D-cameras were Panasonic EVA1s. Additionally, a Panasonic GH5, a Sony A7SII, and various drone cameras were used. Panasonic appears to use a similar color science to ARRI’s, although their log color space is not as aggressive (flat). With all cameras set to shoot with a log profile (LogC and V-Log, respectively) and the appropriate Rec 709 LUT applied to each in post, I was able to get a decent match between the ARRI and Panasonic cameras, including the GH5. Not so close with the Sony or drone cameras, however.

Likewise, I’ve graded a lot of Canon C300 MkII/C500 footage and it looks great. However, trying to match Canon to ARRI shots just doesn’t come out right. There is too much difference in how blues are rendered.

The hardest matches are when professional production cameras are married with prosumer DSLRs, such as a Sony FS5 and a Fujifilm camera. Not even close. And smartphone cameras – yikes! But as I said above, the GH5 does seem to provide passable results when used with other Panasonic cameras and, in our case, the ARRIs. However, my experience there is limited, so I wouldn’t guarantee that in every case.

Unfortunately, there’s no way to really know whether different brands will or won’t create a compatible A/B-camera combination until you start a production. Or rather, until you start color correcting the final program – and then it’s too late. If you have the luxury of renting or borrowing cameras and doing a test first, that’s the best course of action. But as always, try to get the best you can afford. It may be better to get a more advanced camera, but only one. Then restructure your production to work with a single-camera methodology. At least then, all of your footage should be consistent.

Click here for the Introduction.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Intro

My first facility job after college at a hybrid production/post company included more than just editing. Our largest production effort was to produce, post, and dub weekly price-and-item retail TV commercials for a large, regional grocery chain. This included two to three days a week of studio production for product photography (product displays, as well as prepared food shots).

Early on, part of my shift included being the video shader for the studio camera being used. The video shader in a TV station operation is the engineering operator who makes sure the cameras are set up and adjusts video levels during the actual production. However, in our operation (as would be the case in any teleproduction facility of that time) this was a more creative role – more akin to a modern DIT (digital imaging technician) than a video engineer. It didn’t involve simply adjusting levels, but also ‘painting’ the image to get the best-looking product shots on screen. Under the direction of the agency producer and our lighting DP/camera operator, I would use both the RGB color balance controls of the camera, along with a built-in 6-way secondary color correction circuit, to make each shot look as stylistic – and the food as appetizing – as possible. Then I rolled tape and recorded the shot.

This was the mid-1970s, when RCA dominated the broadcast camera market. Production and gear options were either NTSC, PAL, or film. We owned an RCA TK-45 studio camera and a TKP-45 ‘portable’ camera that was tethered to a motor home/mobile unit. This early RCA color correction system of RGB balance/level controls for lift/gamma/gain ranges, coupled with a 6-way secondary color correction circuit (sat/hue trim pots for RGBCMY), was used in RCA cameras and telecines. It became the basis for nearly all post-production color correction technology to follow. I still apply those early fundamentals that I learned back then in my work today as a colorist.

Options = Complexity

In the intervening decades, the number of camera vendors has blossomed far beyond RCA, Philips, and the handful of other companies of the 1970s. Naturally, we are well past the simple choice of NTSC or PAL; and film-based production is an oddity, not the norm. This has introduced a number of challenges:

1. More and cheaper options mean that multi-camera production is a given.

2. Camera raw and log recording, along with modern color correction methods, give you seemingly infinite possibilities – often making it even harder to dial in the right look.

3. There is no agreement on file format/container standards, so file-based recording adds workflow complexity that never existed in the past.

In the next three blog posts, I will explore each of these items in greater depth.

©2019 Oliver Peters

Hawaiki AutoGrade

The color correction tools in Final Cut Pro X are nice. Adobe’s Lumetri controls make grading intuitive. But sometimes you just want to click a few buttons and be happy with the results. That’s where AutoGrade from Hawaiki comes in. AutoGrade is a full-featured color correction plug-in that runs within Final Cut Pro X, Motion, Premiere Pro and After Effects. It is available from FxFactory and installs through the FxFactory plug-in manager.

As the name implies, AutoGrade is an automatic color correction tool designed to simplify and speed up color correction. When you install AutoGrade, you get two plug-ins: AutoGrade and AutoGrade One. The latter is a simple, one-button version, based on global white balance. Simply use the color picker (eye dropper) and sample an area that should be white. Select enable and the overall color balance is corrected. You can then tweak further by boosting the correction, adjusting the RGB balance sliders, and/or fine-tuning luma level and saturation. Nearly all parameters are keyframeable, and looks can be saved as presets.
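Hawaiki hasn’t published AutoGrade’s internals, but a one-click global balance of this kind typically rests on a simple idea: compute per-channel gains that force the sampled patch to neutral. A minimal sketch under that assumption (the function and constants are mine, not the plug-in’s):

```python
import numpy as np

def white_balance(image: np.ndarray, sample: np.ndarray) -> np.ndarray:
    """Global white balance from a sampled patch.

    image  -- float RGB array, shape (h, w, 3), values in [0, 1]
    sample -- mean RGB of the region the user picked as 'should be white'
    Scales R and B so the sampled patch becomes neutral (R == G == B).
    """
    r, g, b = sample
    gains = np.array([g / r, 1.0, g / b])   # anchor gains to green
    return np.clip(image * gains, 0.0, 1.0)

# Footage with a warm cast: the 'white' wall sampled at (0.9, 0.8, 0.6).
frame = np.random.rand(480, 640, 3)
balanced = white_balance(frame, np.array([0.9, 0.8, 0.6]))
```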

AutoGrade One is just a starter, though, for simple fixes. The real fun is with the full version of AutoGrade, which is a more comprehensive color correction tool. Its interface is divided into three main sections: Auto Balance, Quick Fix, and Fine-Tune. Instead of a single global balance tool, the Auto Balance section permits global correction, as well as any combination of white, black, and/or skin corrections. Simply turn on one or more desired parameters, sample the appropriate color(s), and enable Auto Balance. This tool will also raise or lower luma levels for the selected tonal range.

Sometimes you might have to repeat the process if you don’t like the first results. For example, when you sample the skin on someone’s face, sampling rosy cheeks will yield different results than if you sample the yellowish highlights on a forehead. To try again, just uncheck Auto Balance, sample a different area, and then enable Auto Balance again. In addition to an amount slider for each correction range, you can also adjust the RGB balance for each. Skin tones may be balanced towards warm or neutral, and the entire image can be legalized, which clamps video levels to 0-100.

Quick Fix is a set of supplied presets that work independently of the color balance controls. These include some standards, like cooling down or warming up the image, the orange and teal look, adding an s-curve, and so on. They are applied at 100%, which to my eye felt a bit harsh as a default. To tone down the effect, simply adjust the amount slider downwards for less intensity.

Fine-Tune rounds it out when you need to take a deeper dive. This section is built as a full-blown, 3-way color corrector. Each range includes a luma and three color offset controls. Instead of wheels, these controls are sliders, but the results are the same as with wheels. In addition, you can adjust exposure, saturation, vibrance, temperature/tint, and even two different contrast controls. One innovation is a log expander, designed to make it easy to correct log-encoded camera footage, in the absence of a specific log-to-Rec709 camera LUT.

Naturally, any plug-in could always offer more, so I have a minor wish list. I would love to see five additional features: film grain, vignette, sharpening, blurring/soft focus, and a highlights-only expander. There are certainly other individual filters that cover these needs, but having it all within a single plug-in would make sense. This would round out AutoGrade as a complete, creative grading module, servicing user needs beyond just color correction looks.

AutoGrade is a deceptively powerful color corrector, hidden under a simple interface. User-created looks can be saved as presets, so you can quickly apply complex settings to similar shots and set-ups. There are already many color correction tools on the market, including Hawaiki’s own Hawaiki Color. But the price is very attractive and AutoGrade is a superb tool to have in your kit – a fast way to color grade that’s ideal for newcomers and experienced colorists alike.

©2018 Oliver Peters

FCPX Color Wheels Take 2

Prior to version 10.4, the color correction tools within Final Cut Pro X were very basic. You could get a lot of work done with the color board, but it just didn’t offer tools competitive with other NLEs – not to mention color plug-ins or a dedicated grading app like DaVinci Resolve. With the release of 10.4, Apple upped the game by adding color wheels and a very nice curves implementation. However, for those of us who have been doing color correction for some time, it quickly became apparent that something wasn’t quite right in the math or color science behind these new FCPX color wheels. I described those anomalies in this January post.

To summarize that post, the color wheels tool seems to have been designed according to the lift/gamma/gain (LGG) correction model. The standard behavior for LGG is evident with a black-to-white gradient image. On a waveform display, this appears as a diagonal line from 0 to 100. If you adjust the highlight control (gain), the line appears to be pinned at the bottom, with the higher end pivoting up or down as you shift the slider. Likewise, the shadow control (lift) leaves the line pinned at the top with the bottom half pivoting. The midrange control (gamma) bends the middle section of the line inward or outward, with no effect on the two ends, which stay pinned at 0 and 100, respectively. Hue offsets should behave the same way: even if you shift the midrange puck completely to yellow, you should still see some remaining black and white at the two ends of the gradient.
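That standard behavior falls out of one common LGG formulation – a sketch of the general model, not Apple’s actual implementation:

```python
import numpy as np

def lgg(x, lift=0.0, gamma=1.0, gain=1.0):
    """Classic lift/gamma/gain transfer function (one common form).

    lift  moves the bottom of the curve while white stays pinned;
    gain  moves the top while black stays pinned;
    gamma bends the middle while both ends stay pinned.
    """
    y = x * (gain - lift) + lift
    return np.clip(y, 0.0, 1.0) ** (1.0 / gamma)

ramp = np.linspace(0.0, 1.0, 5)   # the black-to-white gradient
print(lgg(ramp, gain=0.8))    # top drops to 0.8, black stays at 0
print(lgg(ramp, lift=0.2))    # black lifts to 0.2, white stays at 1
print(lgg(ramp, gamma=2.0))   # midtones brighten, 0 and 1 unchanged
```

Run on the gradient, each control moves only its own part of the curve – exactly the pinned-endpoint waveform behavior described above, and exactly what 10.4 failed to do.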

That’s how LGG is supposed to work. In FCPX version 10.4, each color wheel control also altered the levels of everything else. When you adjusted midrange, it also elevated the shadow and highlight ranges. In the hue offset example, shifting the midrange control to full-on yellow tinted the entire image to yellow, leaving no hint of black or white. As a result, the color wheels correction tool was unpredictable and difficult to use, unless you were doing only very minor adjustments. You ended up chasing your tail, because when one correction was made, you’d have to go back and re-adjust one of the other wheels to compensate for the unwanted changes made by the first adjustment.

With the release of FCPX 10.4.1 this April, Apple engineers have changed the way the color wheels tool behaves. Corrections now correspond to the behavior that everyone accepts as standard LGG functionality. In other words, the controls mostly only affect their part of the image without also adjusting all other levels. This means that the shadows (lift) control adjusts the bottom, highlights (gain) will adjust the top end, and midrange (gamma) will lighten or darken the middle portion of the image. Likewise, hue offsets don’t completely contaminate the entire image.

One important thing to note is that existing FCPX Libraries created or promoted under 10.4 will be promoted again when opened in 10.4.1. So that your color wheel corrections don’t change to something unexpected when promoted, Projects in these Libraries will continue to behave according to the previous FCPX 10.4 color model. This means that the look of clips where color wheels were used – and their color wheel values – haven’t changed. More importantly, the behavior of the wheels when inside those Libraries will also follow the “old” way, should you make any further corrections. The new color wheels behavior will only begin within new Libraries created under 10.4.1.

These images clarify how the 10.4.1 adjustments now work.

©2018 Oliver Peters

Blackmagic Design DaVinci Resolve 14

DaVinci Resolve has made its mark as one of the premier color correction applications for the film and video industries. With the introduction of Resolve 14*, it’s clear that Blackmagic Design has set its sights higher. Advanced editing functions and the inclusion of the Fairlight audio engine put Resolve on track to be the industry’s latest all-in-one post-production powerhouse. I’ve reviewed Resolve in the past as a grading application, but my focus here is editing. Right at the start, let me paraphrase the judges on History Channel’s Forged in Fire series – ‘This NLE can cut!’ If you have no prior allegiances to other editing platforms, then using Resolve as your NLE of choice is a no-brainer.

(*This review was originally written right after the release of Resolve 14 in late 2017.)

DaVinci Resolve 14 comes in two flavors, DaVinci Resolve 14 (free) and DaVinci Resolve Studio ($299). Upgrades have been free to date. It’s the only NLE to support three operating systems: macOS, Windows, and Linux. Mac users also have the option to download Resolve (free) or purchase Resolve Studio through the Apple Mac App Store. These versions are basically the same as those on Blackmagic Design’s website, but with some differences, due to the requirement that App Store software be sandboxed.

Resolve offers the majority of the same features as Resolve Studio. The primary limitations are that exports are capped at UltraHD (3840×2160), and that features such as stereo3D, lens distortion correction, noise reduction, and collaboration require Resolve Studio. Regardless of the version, Resolve is a very deep application that’s been battle-tested through years of high-pressure, enterprise-grade deployment. But is that enough to sway loyal Final Cut Pro X, Premiere Pro, or Media Composer editors to switch? There’s certainly interest, as Stephen Mirrione pointed out in my recent Suburbicon interview, so I wouldn’t be surprised to hear news of a TV show or small feature film being edited with Resolve in the coming year.

The all-in-one concept

Creating a single application that’s good at many different tasks can be daunting and more often than not has been unsuccessful. In the case of Resolve, Blackmagic Design has taken a modal approach by splitting the interface into five pages: Media (ingest/import), Edit, Color, Fairlight (audio mixing), and Deliver (export/output).

The workflow follows a logical, left-to-right path through these five stages of post-production. With each page/mode change, the user interface is reconfigured to best suit the task at hand. The Edit page sports a standard source/record/bin/track layout similar to Media Composer, Premiere Pro, or Final Cut Pro 7. Color switches to the familiar tools and nodes of DaVinci color correction. The Fairlight mixing page isn’t just a mimic of the Fairlight interface. The engineers completely swapped out the audio guts of Resolve and replaced it with the Fairlight audio engine.

Not only is the interface that of a respected DAW, but it is also possible to expand your system with Fairlight’s audio acceleration card, as well as add a Fairlight mixing desk. This means that in a multi-suite facility, you can have task-specific rooms optimized for editing, color grading, or audio mixing – all using the exact same software application without the need for roundtrips or other list translations.

But does it work?

I put both versions of Resolve 14 through their paces and the application is reasonably solid, given how much has changed from version 12 (there was no version 13). General media management, editing, and audio processing are top notch. If you want audio/video output, Blackmagic Design DeckLink or UltraStudio hardware is required. There is also a Cinema viewer function for fullscreen viewing on your computer display. With dual displays, the edit interface can fill one screen with fullscreen video on the other.

The Fairlight mode will likely require a bit of rethinking by editors used to mixing audio in other NLEs, since it uses a DAW-style interface. Many well-known physical mixing consoles, like those from Solid State Logic, feature channel strips with built-in EQs, compressors, etc. That’s how Fairlight treats these software channels or tracks. Each track can have its own combination of Fairlight audio processing functions. Stick with those and you’ll be happy, although other audio filters on your computer, like Apple AU plug-ins, are accessible. Mixing and audio editing are good, with subframe accuracy, and the 14.1 update added linked groups to lock faders together. The pace of Fairlight integration has been quite fast, but it’s still a bit rough. I encountered a number of application crashes, but only in the Fairlight page while scrubbing audio.

Whether or not you like the editing is more a function of personal style and preference. The user interface design is a lot like Final Cut Pro X, except with bins and tracks. Interface windows, tabs, and panels can be opened or pulled down into various screen configurations, but you don’t have freeform control over size and position. Clearly Premiere Pro is king in that department. Some design choices aren’t consistent. For example, you can’t enable a single-viewer layout when using two displays.

Multicam editing is solid, but I experienced a small bit of latency in the viewer when cutting camera angles on-the-fly. It’s minor and may or may not bother you. You can sync clips by various methods, such as timecode or waveform, but oddly, the syncing seemed too lax. In my tests, it would frequently “sync” clips that had no actual sync relationship.

There are a number of things in Resolve’s design that take getting used to. For example, a Resolve project is locked to the frame rate you picked when that project was created – same as with Avid. This means you can’t mix sequences with different frame rates within the same project. There are no adjustment layers, although you can fake it in the Color page by using clip and program-based corrections. Color management via LUTs (look-up tables) is much deeper than in any other NLE. You can set color management with LUTs to be global, which is best when the project uses only one camera type. Conversely, input LUTs may be applied singly or in a batch to specific cameras in a bin. But when you do that and switch to the Color page, the LUT process doesn’t show up in the color correction node – only its result does. On the plus side, real time performance has been improved over previous versions and the built-in effects include filters that you don’t often find in the basic build of other NLEs, like glow and watercolor effects. In addition to the built-in effects, third-party OpenFX packages, like Boris Continuum Complete and Sapphire, are also available.
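Under the hood a LUT is just a sampled color transform. Here is a minimal sketch of evaluating a 3D LUT with trilinear interpolation, essentially how a .cube table gets applied (simplified, since real implementations also handle domain ranges, shaper LUTs, and channel-ordering conventions):

```python
import numpy as np

def apply_3d_lut(rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply an N x N x N x 3 LUT to RGB values in [0, 1] using
    trilinear interpolation between the surrounding lattice points."""
    n = lut.shape[0]
    idx = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = idx - lo                      # fractional position inside the cell

    out = np.zeros(rgb.shape, dtype=float)
    for corner in range(8):           # blend the 8 surrounding lattice points
        pick = [(corner >> c) & 1 for c in range(3)]
        i = [hi[..., c] if pick[c] else lo[..., c] for c in range(3)]
        w = np.prod([f[..., c] if pick[c] else 1 - f[..., c]
                     for c in range(3)], axis=0)
        out += w[..., None] * lut[i[0], i[1], i[2]]
    return out

# Sanity check with an identity LUT: output equals input.
n = 17
g = np.linspace(0, 1, n)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
pixels = np.random.rand(10, 3)
assert np.allclose(apply_3d_lut(pixels, identity), pixels)
```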

Collaboration

Resolve uses bin-locking like Avid Media Composer. The first editor to open a bin has read/write permission to it. Any other editor can open that same bin in a read-only mode. For example, in a long-form project, separate bins might be organized for Act 1, Act 2, and so on. Different editors can separately work on parts of the film at the same time. Since this all happens in a single database file, it always reflects the most current state of the project.

To set up shared projects, a different PostgreSQL database is required, which is installed through the custom options of the installer. Make sure you are using the most recent version when upgrading Resolve, since the older versions of PostgreSQL are no longer compatible with the newest OS versions. One machine on the network hosts this database and then other workstations connect to that database to access the Resolve projects. Only that host machine needs to have PostgreSQL software installed on it. The process of adding and connecting shared databases has been improved and simplified with the release of 14.1.1 (and later), which now includes an additional server set-up utility application.

In testing collaboration features, I initially ran into set-up problems. These were eventually fixed when I disabled the macOS firewall on the host machine, which was blocking access from the other connected Macs to its shared database. This took some back and forth with Blackmagic Design’s helpful support engineers until we figured out why I was getting the connection errors. Since I had to return the additional “dongle” (USB license key) before this was fixed, I wasn’t able to test two editors simultaneously editing within the same open project. However, the ability to open any shared project from any qualified computer on the network was just fine.
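For anyone chasing similar connection errors, a quick reachability test from a client workstation will tell you whether a firewall is the culprit before you dig deeper. A minimal sketch; the hostname is a placeholder and 5432 is simply PostgreSQL’s default port:

```python
import socket

HOST = "resolve-host.local"  # placeholder: your database host's name or IP

# If a firewall on the host blocks PostgreSQL, this attempt will
# time out or be refused instead of connecting.
try:
    with socket.create_connection((HOST, 5432), timeout=3):
        print("PostgreSQL port is reachable")
except OSError as err:
    print(f"Cannot reach database host: {err}")
```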

DaVinci Resolve Micro Panel

I also tested the smaller, bus-powered DaVinci Resolve Micro panel. The Micro panel is just the right size for an editor or a DIT on set. It’s smaller than the Mini (tested previously in another review), because it doesn’t have the upward-slanting portion in the back; therefore, it’s a better physical fit between your computer keyboard and display. You don’t have to shuffle desk real estate between tools, as you do with the Mini panel. In spite of not having the extra controls and LCD displays of the Mini, the Micro panel combines most of the control functions you need for fast grading. If you are an editor who is heavy into color correction, then this is a must-have for Resolve.

I took an instant liking to the Micro. You can use both hands to quickly and intuitively work the trackballs and knob controls, making for faster and better correction. It’s tactile, with next and previous clip buttons to quickly advance through the timeline, so you can keep your eyes on the screen. I grade in Resolve, Avid, Premiere Pro, and Final Cut Pro X, and all of that is with a mouse. Using the panel easily resulted in faster grading by a factor of at least 3X or 4X. I also achieved better-looking corrections with fewer steps or processes than when grading in any of these other applications.

Conclusion

Overall, there’s a lot to love about Resolve, in spite of a few rough edges. In general, it seems more stable under macOS Sierra than with High Sierra. If you use Resolve on a Mac, then you are stuck dealing with Apple’s platform changes. For example, recent Macs that use an Nvidia GPU are at a disadvantage under High Sierra, because Nvidia is just now developing drivers for CUDA under this OS. I experienced a number of crashes running Resolve 14 on my 2014 MacBook Pro until I manually changed the Resolve hardware configuration under Resolve’s preferences from CUDA to using Metal. When I installed what was supposed to be the newest CUDA driver, I still received a prompt that no CUDA-compliant card was present. But, it’s working fine using Metal. Macs with AMD GPUs should be fine.

Resolve 14 is a dense tool, with a lot of depth in various menus, which some may find daunting. This review would be a lot longer if I went even deeper into the many specific features of this application. Yet, it is easy for new users to hit the ground running and then learn as they go. For many, this is their mythical “Final Cut Pro 8”. In any case, DaVinci Resolve 14 is the best incarnation of the all-in-one concept to date. If you add Blackmagic Design’s Fusion visual effects software into the mix (also available in free and paid versions), the result is a combination that’s tough to beat at any price.

Blackmagic Design’s engineers have shown impressive development over a very short period of time, so I fully expect Blackmagic to give the three “A” companies a run for their money. Even if you use another tool as your main editing application, Resolve is a great addition to the toolbox. Using it becomes addictive. Give it a try and you might just find it becomes your first choice.

©2017, 2018 Oliver Peters