Pixelmator Pro Revisited

Many Final Cut Pro X users prefer the software precisely because it does not require an ongoing subscription. If you bought all of Apple's ProApps products, then you have largely replaced the need for an Adobe Creative Cloud subscription. The exception is graphic design and photo manipulation. Even the most diehard FCPX users often maintain the basic Adobe photography bundle just to have Photoshop in their toolset, since Apple doesn't make such an application.

There are alternatives, which I have reviewed in the past. Principal replacements come from either Pixelmator or Serif. If you want the most direct alternative to the Photoshop/Illustrator/InDesign trifecta then your best bet would be to buy Affinity Photo, Designer, and Publisher. On the other hand, if you only want an alternative to Photoshop, then Affinity Photo or Pixelmator Pro might be your best option. While I like them both, Pixelmator Pro seems the closest to the FCPX design ethos.

I reviewed Pixelmator Pro after its initial release a bit over two years ago and have recently started to use it on a more full-time basis. Like others who have the Creative Cloud apps installed alongside FCPX, I tend to go where muscle memory takes me. So if you have Photoshop installed, then you’ll probably just continue to use Photoshop as the line of least resistance. However, if you want to stay strictly within the macOS, FCPX-centric ecosystem, then you owe it to yourself to move over to something new.

Pixelmator Pro is a clean app written with newer code, designed to take advantage of Metal. Its interface design is a perfect complement to Final Cut – using a similar approach to tool/layer/inspector panels. These panels can be revealed or hidden, which means that you can have a very minimalist interface that focuses only on the image, if that’s how you like to work.

Another new technology integrated into Pixelmator Pro is machine learning. It's important to remember that there isn't really any "learning" involved with machine learning. Instead, calculations are made against a defined set of parameters. For instance, the application uses machine learning to automatically name layers when you import a photo and place it on a layer. Based on shape recognition, it makes a generic guess at the name – like "building" for an image of a building, tower, or similar structure. In addition, alternative suggested names are also available. If you change it to a custom name, Pixelmator Pro does not "learn" that new name for future use. The available library of possible names is not increased or improved.

Machine learning can also be used for an image's color/level adjustments. This is more sophisticated than a simple automatic white balance. I find the results more pleasing and successful than similar automatic adjustments in other applications. At the end of 2019, an update added machine learning to image scaling. If you want to blow up a lower resolution image for a higher resolution result, you can employ Pixelmator Pro's Super Resolution function. This will give you the cleanest edges around complex images and textures as opposed to the other available algorithms. Unfortunately, it is a very slow process on my older MacBook Pro; however, its use is entirely optional. Expect faster results on the newer Macs.

As I’ve been using Pixelmator Pro more these days – instead of the knee-jerk reaction to head to Photoshop first – I’m rediscovering things that I like and that I find to be more fluid and intuitive than in Photoshop. While you can’t do some of Photoshop’s more exotic functions, like video animations, Pixelmator Pro covers the bulk of what an editor needs to do with a graphics tool. Furthermore, if you use Apple Photos, Pixelmator Pro is also supported as an extension and through Photos’ “edit with” functions. In short, Pixelmator Pro is a perfect match for Final Cut Pro X. If Apple were to design a graphics app, it would undoubtedly look and feel a lot like Pixelmator Pro.

Check out an enhanced version of the article at FCP.co.

©2020 Oliver Peters

Color Finale 2.0

HDR, camera raw, and log profiles are an ever-increasing part of video acquisition, so post-production color correction has become an essential part of every project. Final Cut Pro X initially offered only basic color correction tools, which were quickly augmented by third-party developers. One of the earliest was Color Finale – the brainchild of colorist/trainer Denver Riddle and ex-DI supervisor and color correction software designer Dmitry Lavrov. In the last year Lavrov created both Cinema Grade, now owned and run by Riddle, and Color Finale 2.0, which Lavrov runs himself under his own company, Color Trix Ltd. By focusing exclusively on the development of Color Finale 2.0, Lavrov can bring to market more advanced feature ideas, upgrades, and options with the intent of making Final Cut a professional grading solution.

For many, Blackmagic Design's DaVinci Resolve and Filmlight's Baselight systems set the standard for color correction and grading. So you might ask, why bother? Because if you edit with Final Cut Pro X, grading in one of those tools requires a roundtrip between Final Cut and a dedicated grading suite or application. Roundtrips pose a few issues, including turnaround time, additional media rendering, and frequent translation errors with the edit and effects data between the edit and the grading application. The ideal situation is to never leave the editing application, but that requires more than just a few, simple color correction filters.

Over the course of eight years of Final Cut Pro X’s existence, the internal color tools have been improved and even more third-party color correction plug-ins have been developed. However, effective and fast color correction isn’t only about looks presets, LUTs, and filters. It’s about having a tool that is properly designed for a grading workflow. If you want to do advanced correction in FCPX with the least amount of clicking back-and-forth, then there are really only two options: Coremelt’s Chromatic and Color Finale.

This brings us to the end of 2019 and the release of Color Finale 2.0, which has been redesigned from the ground up as a new and improved version of the original. The update has been optimized for Metal and the newest color management, such as ACES. It comes in two versions – standard and Pro. Color Finale 2 Pro supports more features, such as Tangent panel control, ACES color space, group grading, mask tracking, and film grain emulation. Color Finale has been designed from the beginning as only a Final Cut Pro X plug-in. This focus means better optimization and a better user experience.

Primary color correction

Color Finale 2 is intended to give Final Cut users similar grading control to that of Resolve, Avid Symphony, or Adobe Premiere Pro’s Lumetri panel. It packs a lot of punch and honestly, there’s a lot more than I can easily cover with any depth here. The user interface is designed around two components: the FCPX Inspector controls and the floating Layers panel. The Inspector pane is a lot more than simply the place from which to launch the Layers panel. In fact, it’s a separate primary grading panel, not unlike the functions of the Basic tab within Adobe’s Lumetri panel.

The Inspector pane is where you control color management, along with exposure, contrast, pivot, temperature, tint, saturation, and sharpness. According to Lavrov, “Our Exposure tool is calibrated to real camera F-stop numbers. We’ve actually taken numerous images with the cameras and test charts shot at the different exposure settings and matched those to our slider control. Basically setting the Exposure slider to 1 means you’ve increased it by one stop up.”
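Since "one stop up" means a doubling of the light, an exposure control calibrated in stops boils down to a power-of-two gain on linear-light values. Here is a minimal sketch of that relationship in Python – my own illustration of the math, not Color Trix's code – which assumes pixel values have already been converted to linear light:

```python
import numpy as np

def apply_exposure(linear_rgb: np.ndarray, stops: float) -> np.ndarray:
    """Scale linear-light RGB by 2**stops; +1.0 doubles the light (one stop up)."""
    return linear_rgb * (2.0 ** stops)

pixel = np.array([0.18, 0.18, 0.18])  # 18% gray in linear light
print(apply_exposure(pixel, 1.0))     # one stop up -> [0.36 0.36 0.36]
```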

There are also copy and paste buttons to transfer Color Finale settings between clips, false color indicators, and shot-matching based on standard color charts. Finally, there’s a Film Emulation tab, which is really a set of film grain controls. At the bottom is a mix slider to control the opacity value of the applied correction.

Layers

The real power of Color Finale 2 happens when you launch the Layers panel. This panel can be resized and positioned anywhere over the FCPX interface. It includes four tools: lift/gamma/gain color wheels/sliders (aka “telecine” controls), luma+RGB curves, six-vector secondary color, and hue/sat curves. This is rounded out by a looks preset browser. Each of these tools can be masked and the masks can be tracked within the image. Mask tracking is good, though not quite as fast as Resolve’s tracker (almost nothing is).

I suspect most users will spend the bulk of their time with color wheels, which can be toggled from wheels to sliders, depending on your preference. Of course, if you invested in a Tangent panel, then the physical trackballs control the color wheels. Another nice aspect of the lift/gamma/gain color tool is saturation management. You can adjust saturation for each of the three ranges. There is also a master saturation control with separate controls for shadow and highlight range restrictions. This means that you can increase overall saturation, but adjust the shadow or highlights range value so that more or less of the dark or light areas of the image are affected.
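Conceptually, this kind of range-restricted saturation can be modeled as a saturation gain weighted by each pixel's luma. The sketch below is my own approximation of the idea – not Color Trix's actual math – with an arbitrary 0.1 softness ramp at the range edges:

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights

def saturate(rgb, gain, shadow_limit=0.0, highlight_limit=1.0):
    """Push colors away from gray, fading the effect out below
    shadow_limit and above highlight_limit."""
    luma = np.sum(rgb * LUMA, axis=-1, keepdims=True)  # per-pixel luma
    # Weight is 1.0 inside the allowed range, ramping to 0.0 outside it.
    weight = np.clip((luma - shadow_limit) / 0.1, 0.0, 1.0) \
           * np.clip((highlight_limit - luma) / 0.1, 0.0, 1.0)
    effective_gain = 1.0 + (gain - 1.0) * weight
    return luma + (rgb - luma) * effective_gain
```

Raising the overall gain while lowering highlight_limit, for example, boosts saturation in the mids and shadows while leaving the brightest areas largely untouched.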

As you add tools, each stacks as a new layer within the panel. The resulting color correction is the sum of all of the layers. You can stack as many layers as you like and the same tool can be used more than once. Layers can be turned on and off to see how that correction affects the image. They can also be reordered and grouped into a folder. In fact, when you load a preset look, this is actually a group of tools set to generate that look. Finally, each layer has a blend control to set the opacity percentage and alter the blend mode – normal, add, multiply, etc – for different results.
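The layer stack behaves like a chain of image operations, where each layer's output is mixed back into the running result according to its blend mode and opacity. A toy model of that behavior – purely illustrative, not Color Trix's implementation:

```python
def blend(base, layer_result, mode="normal", opacity=1.0):
    """Combine one layer's corrected image with the result so far."""
    if mode == "normal":
        blended = layer_result
    elif mode == "add":
        blended = base + layer_result
    elif mode == "multiply":
        blended = base * layer_result
    else:
        raise ValueError(f"unknown blend mode: {mode}")
    # Opacity mixes between the untouched base and the blended result.
    return base * (1.0 - opacity) + blended * opacity

def apply_stack(image, layers):
    """Each layer is (correction_fn, mode, opacity); they apply bottom-up."""
    result = image
    for correct, mode, opacity in layers:
        result = blend(result, correct(result), mode, opacity)
    return result
```

In this model, a preset "look" is simply a saved list of layers, which matches the way Color Finale loads a preset as a group of tools.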

Advanced features

Let me expand on a few of the advanced grading features, such as color management. You have control over four methods: 1) assume video (the default) – intended for regular Rec 709 video or log footage where FCPX has already applied a LUT (ARRI Alexa, for example); 2) assume log – pick this if you don’t know the camera type and Color Finale will apply a generic Rec 709 LUT correction; 3) use ACES; and 4) use input LUT – import a technical or custom LUT file that you wish to apply to a clip.

ACES is an advanced color management workflow designed for certain delivery specs, such as for Netflix originals. The intent of the ACES color space is to be an intermediate color space that can be compatible with different display systems, so that your grade will look the same on any of these displays. Ideally you want to select ACES if you are working within a complete ACES color pipeline; however, you can still apply it to shots for general grading even if you don’t have to provide an ACES-compliant master. To use it, you must select both the input LUT (typically a camera-specific technical LUT) and the target display color space, such as Rec 709 100 nits (for non-HDR TVs and monitors).
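In other words, the grade is bracketed by two conversions: an input transform into the working space and an output transform for the target display. A conceptual sketch – all function names here are illustrative placeholders, not Color Finale's API:

```python
def grade_with_aces(camera_pixels, input_lut, grade_layers, output_transform):
    """Conceptual ACES-style pipeline: convert camera footage into the
    working color space, grade there, then render for a target display."""
    working = input_lut(camera_pixels)   # e.g. a camera-specific technical LUT
    graded = grade_layers(working)       # all grading layers operate here
    return output_transform(graded)      # e.g. Rec 709 100 nits display rendering
```

Because the grade lives in the middle of the pipeline, swapping the output transform re-renders the same grade for a different display without redoing any of the creative work.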

In order to facilitate a proper ACES workflow, Color Trix added the ability to import and export CDLs (color decision lists). Currently this is more for testing purposes and is designed for compatibility between Final Cut and ACES-compliant grading systems, like Baselight. A CDL is essentially like an EDL (edit decision list), but with basic color correction information. This will translate to the lift/gamma/gain/saturation settings in Color Finale 2 Pro, but nothing more complex, such as curves, selective color, or masks.
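The reason a CDL can only carry basic corrections is that it holds just ten numbers per shot: slope, offset, and power for each RGB channel, plus a single saturation value. The standard ASC CDL transform is public, so the translation can be sketched directly (my own illustration, not Color Trix's code):

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights

def apply_cdl(rgb, slope, offset, power, saturation):
    """Standard ASC CDL: out = clamp(in * slope + offset) ** power,
    then saturation applied around Rec. 709 luma."""
    out = np.clip(rgb * slope + offset, 0.0, 1.0) ** power
    luma = np.sum(out * LUMA, axis=-1, keepdims=True)
    return luma + (out - luma) * saturation
```

Slope, offset, and power correspond roughly to gain, lift, and gamma, which is why the exchange maps cleanly onto those settings but cannot express curves or masks.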

Performance and workflow

Overall, I really liked how the various tools worked. Response was fast and I was able to get good grading results with a build-up of several layers. In addition, I prefer the ergonomics of a horizontal layout for color wheels versus the cluster of controls used by Apple’s built-in tool. I had tested the betas of both Color Finale 1.0 and now 2.0 and I remember that it originally took a while to dial in the RGB curves for the 1.0 release. In general, curves can be quite destructive, so if you don’t get the math right, you’ll see banding with very little change of a curve. That was fixed before 1.0 was ever released and the quality in 2.0 looks very good.

Color Finale 2.0 beta had an issue with color wheels. For some users (myself included) the image didn't update in real-time as you moved the color wheel pucks with a mouse. This was fixed right after release with an update. So if you are experiencing that issue, make sure you have installed the update.

The difference between grading and simple clip-based color correction is workflow. That’s where a good colorist using a dedicated grading application will shine. Unfortunately the “apply color correction from one (two, three) clip(s) back” command in Final Cut Pro X can only be used with its own built-in correction. So if you intend to use Color Finale 2 for a full timeline of clips, then you have to develop a workflow to quickly apply the Color Finale or Color Finale Pro effect, without constantly dragging it from the effects browser to each individual clip.

One solution is to apply the effect to the first clip, copy that clip, select all the rest, and then apply "paste effects" or "paste attributes" to the rest of the clips in the timeline. As you move from clip to clip, the Color Finale effect is open in the Inspector so you can tweak settings and edit layers as needed. I have found that with this method the Layers panel often doesn't stay open. The second method is to designate the Color Finale or Pro effect as the default video effect and map "apply default effect" to a key. Using this second method, the panel stayed open in my testing when going through successive clips on the timeline. Documentation and tutorials are a bit light at the moment, so hopefully Color Trix will begin posting more tips-and-tricks information to their support page or YouTube channel.

One can only run a valid test of any plug-in by using it on a real project. As an example of what you can do with Color Finale 2, I've graded Philip Bloom's 2013 "Hiding Place" short featuring actress Kate Loustau. This was shot on the London Eye in "stealth" mode using the Blackmagic Pocket Cinema Camera. Bloom made the ungraded cut available for non-commercial use. I've used it a number of times to test color correction applications. Click the link to see the video, which includes two different grading looks, achieved through Color Finale 2 Pro.

Color Finale 2.0 is a huge improvement over the original, but it’s not a one-click solution. It’s designed as an advanced, yet easy to use color correction tool. I find the toolset and visual results similar to the old Apple Color. The graded images appear very natural, which is a good fit for my aesthetic. DaVinci Resolve is better for extreme “surgical” grading, but Color Finale 2.0 certainly covers at least 90% of most color correction needs and styles. If you want to stay entirely within the Final Cut Pro X environment and skip the roundtrips, then Color Finale 2 Pro should be part of your arsenal. It’s this sort of extensibility that FCPX users like about the approach Apple has taken. Having powerful tools, like Color Finale 2.0, from independent developers, like Color Trix, definitely validates the concept.

Check out the Color Finale website for the various purchase and upgrade plans, including add-ons, like the Ascend presets packages.

The article was originally written for FCP.co.

©2020 Oliver Peters

A First Look at Postlab Cloud

Apple developed Final Cut Pro X around single-editor workflows. As such, professional editing teams who wanted to use this tool for collaborative editing have been challenged to develop their own solutions. One approach was Postlab, which was developed in-house at Dutch broadcaster Evangelische Omroep (EO). In order to expand the product as a commercial application, lead developer Jasper Siegers decided to move it under the Hedge umbrella. This required the app to be rebuilt with new code before it could be offered to the FCPX market. That time has come and Postlab is now available as Postlab Cloud.

As the name implies, Postlab Cloud hosts your FCPX libraries "in the cloud," i.e. on Postlab's servers. Some production companies or broadcasters are reluctant to have their editing computers connected online, but it's important to note that only libraries – no media or caches – are hosted by Postlab. This keeps the transfer times fast and file sizes light. Cache and media files stay local, whether on your machine or on connected shared storage. Postlab sets up accounts based on site licenses and numbers of users. Each user is assigned a log-in based on an e-mail address and a password. This means that a production hosted by Postlab can be accessed by authorized users anywhere in the world, provided there's a viable internet connection.

The owner of the account can set up productions and organize them within folders. Each production is a collection or bundle of one or more Final Cut Pro X libraries. If you have ever worked with Final Cut Server in the FCP7 days, then the Postlab workflow is very similar. Once a production has been created, an editor can log in, download the library (a check-out step), edit in it, and then upload the changed version (a check-in step). As part of this upload, the Postlab interface prompts you to add comments describing the work you’ve done. Only one editor at a time can download a library and have write access; however, other users can still download it with read-only access. If you have two editors ping-ponging work on the same library file, then one has to upload it (check in) before the other editor can download it (check out) for their edits.
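The single-writer rule is easiest to reason about as a lock on the library. Here's a toy model of the behavior described above – purely illustrative, not Postlab's actual code:

```python
class Production:
    """Toy model of check-out/check-in: one writer at a time,
    unlimited readers, a comment recorded with every check-in."""
    def __init__(self, library):
        self.library = library
        self.checked_out_by = None
        self.history = []  # (editor, comment) pairs

    def download(self, editor, write=False):
        if write:
            if self.checked_out_by is not None:
                raise RuntimeError(f"checked out by {self.checked_out_by} - read-only until check-in")
            self.checked_out_by = editor
        return self.library  # read-only copies are always available

    def upload(self, editor, new_library, comment):
        if self.checked_out_by != editor:
            raise RuntimeError("only the editor who checked out can check in")
        self.library = new_library
        self.history.append((editor, comment))
        self.checked_out_by = None
```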

Getting started

I decided to test Postlab Cloud in two scenarios: a) multiple workstations connected to a shared storage network, and b) two disconnected editors collaborating over long distances. To start, once an account has been established, any editor using Postlab Cloud must install the small Postlab application. Since the app controls some of Final Cut’s functions, you will be prompted to enable GUI Scripting in your privacy preferences. In order for Postlab to work properly, media and cache files need to be outside of the library bundle. When you first download a library, you may be prompted to change your settings. In a networked environment with media on shared storage, the path to the media should be the same on each workstation. This means when Editor A finishes and checks in the production and then Editor B checks it back out, you generally will not need to relink the media files on Editor B’s system. Therefore, this edit collaboration can proceed fluidly.

Once a production has been downloaded, the library file exists as a temporary file on the local machine and not the network. This means that Postlab can still work in tandem with storage solutions that don’t normally perform well with FCPX libraries. In addition to this temporary library file, the Final Cut backup library is also stored in the location you have designated. If you are working in a networked, collaborative environment, then the advantage Postlab offers is version tracking and the ability for multiple users to open a library file (only one with write access).

Long distance

The second scenario is working with other editors outside of your facility. The first step is to get the media to the outside editor. You could certainly send a drive, but that isn't efficient in either time or cost, especially across continents. If you only need creative editing and not finishing services, then low-res proxy files are fine. So I converted my 4K UHD ProRes HQ files to 960 x 540 H264 (3Mbps) files and used Frame.io to transfer them over the internet. The key to proper relinking when you are done is to set audio to pass-through when converting these files. This was a double-system sound shoot, so I uploaded both the H264 video files and the sound recordist's WAV files to Frame and then downloaded them again at the other end (my home). Now I had media in both locations. The process would be the same even if it were two editors in two different countries.
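For anyone recreating this proxy step, any transcoder that can copy the audio untouched will do. As one example, here's how it might look scripted with ffmpeg from Python – the file names are placeholders, and the output stays in a QuickTime container so the original PCM audio can be copied rather than re-encoded:

```python
import subprocess

# Downscale 4K UHD ProRes HQ to a 960 x 540, 3 Mbps H264 proxy.
# "-c:a copy" passes the audio through untouched, which is what
# preserves clean relinking back to the camera originals later.
subprocess.run([
    "ffmpeg", "-i", "source_4k.mov",
    "-vf", "scale=960:540",
    "-c:v", "libx264", "-b:v", "3M",
    "-c:a", "copy",
    "proxy_540.mov",
], check=True)
```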

The first Postlab step is to create and upload the FCPX library. Once that has been established, any authorized user with a Postlab log-in can access the production. I decided to go back and forth on this production between my home and the facility, also using different user log-ins – thus simulating a team of remote editors. Each time I did this, version changes were tracked by Postlab. If I were working with multiple editors, I would have been able to see what tasks each had performed.

It’s important to note that when you collaborate in this way, each editor should be using the same effects, LUTs, and Motion templates, otherwise some things will appear offline. Since the path to the media was different at home versus at the facility, each time I went between the two, checking in and then checking out the production, media files would appear offline. A simple relink fixed this, but it’s something to be aware of. Once totally done, I could relink to the high-res camera files and “finish” the project back at the office.

Wrap-up

When you upload a library back to Postlab, that open FCPX library is closed within Final Cut Pro X on your system, because you have checked it back in. Once you log out of Postlab, the temporary library file is moved to the trash. If you need a local version of the library, then export it from the Postlab app.

Once you get the hang of it, collaboration is simple using Postlab Cloud. Library files stay light, without the sort of corruption caused by using services like Dropbox. My test project included synchronized multi-cam clips and multi-channel audio. Each time during this exchange, clips, projects, and edits showed up as expected when moving between the various users. Whether or not Apple ever tackles collaboration within Final Cut Pro X is an unknown. But why wait? If you need that today, then Postlab Cloud offers a solid answer.

The relaunched Postlab Cloud includes three plans, which are priced per user/per year: Postlab, Postlab Pro, and Postlab Server. The first tier only allows for library version tracking and sharing. Pro allows for a lot more libraries to be shared and comes with more features. Server is a dedicated Postlab Cloud server for larger teams or those that require IT-specific features like Active Directory. Finally, Hedge/Postlab plans to ship a local version of Postlab – designed for use within local networks – soon after launch.

Postlab has now expanded to include Premiere Pro users.

Check out the Postlab tutorials for more information.

The article was originally written for FCP.co.

©2020 Oliver Peters

Video Technology 2020 – Editing Software

Four editing applications dominate the professional market: Adobe Premiere Pro, Apple Final Cut Pro X, Avid Media Composer, and Blackmagic Design DaVinci Resolve. Established facilities are still heavy Avid users, with Adobe being the up-and-coming choice. This doesn’t mean that Final Cut Pro X lost out. Going into 2020, Apple can tout FCPX as the best-selling version of its professional editing tool. It most likely has three million users after nearly nine years on the market. While pro editors in the US are often reluctant to adopt FCPX, this innovative application has earned wider acceptance in the broader international market.

The three “A”s have been battling for editing market share, but the wild card is Blackmagic Design’s DaVinci Resolve. It started as a high-end color correction application, but through Blackmagic’s acquisitions and fast development pace, Resolve is becoming an all-in-one application rivaling Autodesk Smoke or Avid DS. Recent versions bring enhanced creative editing tools, making it possible to edit, mix, composite, grade, and deliver entirely from Resolve. No need to roundtrip with other applications. Blackmagic is so dedicated to Resolve as an editor that they introduced a special editor keyboard.

Is Resolve attractive enough to sway editors to shift away from other tools? The answer for most in 2020 will still be “no.” Experienced editors have made their choice and all of the current options are quite good. However, Resolve does make the most sense for new users with no prior allegiances. The caveat is advanced finishing. Users may edit in an editing application, but then roundtrip to Resolve and back for grading. Unfortunately these roundtrips can be problematic. So I do think that many will opt to cut creatively in their NLE of choice, but then send to Resolve for the final grade, mix, and VFX work. Expect to see Resolve’s finishing footprint expand in 2020.

Two challenges confront these companies in 2020: multi-user collaboration and high dynamic range (HDR) delivery. Collaboration is an Avid strength, but not so for the other three. Blackmagic and Adobe have an approach to project sharing, but still not what Avid users have come to expect. Apple offers nothing directly, but there are some third-party workarounds. Expect 2020 to yield collaboration improvements for Final Cut Pro X and Premiere Pro.

HDR is a more complex situation requiring specialized hardware for proper monitoring. There simply is no way to accurately view HDR on any computer display. All of these companies are developing software pipelines to deal with HDR, but in 2020, HDR delivery will still require specific hardware that will remain the domain of dedicated color correction facilities.

Finally, as with cameras, AI will become an increasing aspect of post-production tools. You already see that in Apple's shape recognition within FCPX (automatic sorting of wides and close-ups) or Adobe Sensei for content replacement and automatic music editing. Expect to see more of these features introduced in coming software versions.

Originally written for Creative Planet Network.

©2020 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they'd like to see. Over the years, working as an editor and tech writer, I kept in touch with him through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career of using, as well as helping to design and shepherd, a wide range of post-production products, Steve probably knows more about the diverse field of editing systems than most managers at editing systems manufacturers. Naturally many readers will know him as Apple's Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I'm a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That's important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong roll in improving any production. 

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren't familiar with. A complete rewrite of the codebase was a huge step forward as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can't remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn't stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you've always done it, then you won't be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn't ring? And that's why I would say there are Cold Mountain moments all the time when something gets made in a way that didn't exist a few years ago. But, it happens across this new, much wider range of markets and doesn't get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already did to fund production and develop for the local languages. I've been watching the series Criminal on Netflix. It's a crime drama based on police interrogations, with separate versions done in four different countries – English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that's why I watched them all!). I've never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage, six Synology servers, and 30 shows being edited right now in FCP X. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being "semi-retired" is if something's interesting to me, I just dig into it. It's exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCP.co.

©2019 Oliver Peters

Is it time to reconsider Final Cut Pro X?

While Final Cut Pro X may have ultimately landed in the market sector that Apple envisioned, the industry widely acknowledged that the original launch could have been better managed. Many staunch Final Cut Pro (“legacy”) users were irrevocably alienated. That’s a shame, because FCPX wasn’t a bad design when released – merely incomplete. In the eight years that have followed, the user base has grown to more than 2.5 million (April 2018) and the application sports the widest third-party support of any editing software.

I have certainly gone back and forth in my own use of FCPX, depending on whether it was the right tool for a given job. I cut a feature film with it back in the pre-10.1 days when it was a bifurcated application with separate Event and Project files. Since then, I have also used it on plenty of spots and corporate videos. Although my daily workflow is largely Premiere Pro-based now, I regularly use Final Cut Pro X when appropriate, as well as Blackmagic Design DaVinci Resolve and Avid Media Composer. Modern editors need to be NLE-multilingual.

I realize that winning Oscars and cutting large-scale productions isn't what the majority of editors do. Nevertheless, these types of productions give any product street cred. You are probably aware of Focus and Whiskey Tango Foxtrot, but there are certainly others that have used FCPX. Hollywood studio films are dominated by Avid Media Composer; however, short films cut using FCPX have won the short film Oscar category for two years in a row. While largely invisible to many US viewers, major international productions, on par with Game of Thrones, have been edited using Final Cut Pro X.

If you were one of those FCP7 users who jumped ship to another tool, then maybe it’s time to revisit Final Cut Pro X. There are many reasons I say that. In the past eight years, Apple has added wide codec support, LUTs, HDR capabilities, vastly improved color correction tools, and an easy method of working with captioning. Final Cut is clearly the better tool in many situations and here’s a quick overview why I feel that way.

What productions are best with FCPX?

Final Cut Pro X is capable of handling all types of editing, but it's more ideal for some than others. The biggest differentiator is turnaround time. If you have to get done quickly – from ingest to delivery – then FCPX is hard to beat. It handles media better than any other NLE without the need for the beefiest hardware. Want to cut 4K ProRes HQ on a two-year-old MacBook Pro? Then FCPX shines. That makes it a natural in broadcast news, promos, and sports. It's also perfect for non-broadcast event coverage. Frankly, I'm surprised that US broadcasters haven't gravitated to it like various other broadcasters around the world – especially for cutting news stories. The workflow, interface, and low hardware requirements make it well-suited to the task.

Station promo production might be questionable for some, but stop and think about the use of Motion Templates and how that technology can be applied to broadcast design. Final Cut features the unique ability to use templates that any user can create and publish as an effect out of Apple Motion. Therefore, custom effects, animation, and graphics can easily be created specifically for a station’s bespoke look.

For example, a broadcast group or network that owns multiple stations in different cities could have one creative team develop a custom station graphics package for each outlet, simply by using Motion. Those templates could be deployed to each promo department and installed into the individual FCPX edit systems. This would allow each editor to modify or customize time and event information based on the published parameters without mistakenly deviating from the prescribed graphic look. That’s a broadcast creative director’s dream.

A simple hardware footprint

Obviously Final Cut requires Apple computers, but there’s easy connectivity to media from external Thunderbolt, USB, and ethernet-based storage. Some facilities certainly need elaborate shared storage systems for collaborative workflows, but others don’t. If you are a creative editorial boutique, all of a given project’s proxy editing files can be stored on a single SSD drive, allowing the editor to easily move from room to room, or home to work, simply by carrying the SSD with them. They can even be cutting on a laptop and then bring that in to work, connect to an external display for better monitoring, and keep rocking. With the advent of external GPU systems (eGPU), you can easily augment the horsepower of middle-level Macs when the need arises. 

No external I/O hardware is required for monitoring. While I recommend a simple audio I/O interface and external speakers as a minimum, there are plenty of fixed-location systems where the editors only use headphones. AJA or Blackmagic interfaces to play video out to an external display are optional. Simply connect a high-quality display to the Mac via HDMI or Thunderbolt and FCPX will feed real video to it full screen. Premiere Pro can also do this, but Media Composer and Resolve do not.

Third-party ecosystem

One of Final Cut's deficits has developed into a huge asset. It enjoys one of the best ecosystems of third-party tools that enhance the application. These range from translation tools from vendors like Intelligent Assistance and Marquis Broadcast, to a myriad of plug-ins, such as those from FxFactory and Coremelt. Final Cut already comes with a very solid set of built-in effects filters – probably the most useful variety of the various NLE options. Even better, if you also purchase Motion, you can easily create more effects by building your own as Motion Templates. This has resulted in a ton of small developers who create and sell their own variations using this core technology.

You certainly don’t have to purchase any additional effects to be productive with FCPX, but if you do, one of the better options is FxFactory by Noise Industries. FxFactory is both a set of effects and a delivery platform for other developers. You can use the FxFactory interface to purchase, install, and manage plug-ins and even applications from a diverse catalogue of tools. Pick and choose what you need and grow the repertoire as you see fit. A good starting point is idustrial revolution’s newly revamped XEffects Toolkit, which includes numerous effects and title templates to augment your daily work. Some of these employ built-in tracking technology that lets you pin items to objects within a shot.

Apple’s latest feature addition is workflow extensions. Adobe introduced this technology first in its own products, but Apple has built upon it through macOS integration with apps like Photos and now in Final Cut Pro X. In short, an extension allows direct FCPX integration with another application. Extensions can be downloaded from the Mac App Store and installed into FCPX. An extension adds a panel inside Final Cut, which lets you interact with that application without leaving the FCPX interface. Early extensions come from companies including frame.io, Shutterstock, and Simon Says.

Subscription

A sore point for many Adobe customers was the shift to the subscription business model. While the monthly rates are reasonable if you are an ongoing business, they have caused some users to stick with software as old as CS6 (yikes!). As more companies adopt subscriptions, you have to start wondering when enough is enough. I don’t think we are there yet, and Creative Cloud is still a solid value. But if you are an individual who doesn’t make a living with these tools, then it’s a concern. Adobe recently raised eyebrows by doubling the monthly cost of its Photography plan. As it turns out, this is an additional pricing tier with more storage, not a replacement, but that only became evident once the web page was quickly fixed. Predictably, this gives competitors like ON1 an avenue for counter-marketing.

Concerned with subscriptions? Then the Apple professional applications are an alternative. Final Cut Pro X, Compressor, Motion, and Logic Pro X – coupled with photo and graphics tools from Affinity and/or Pixelmator – provide a viable competing package to Adobe Creative Cloud. Heck, augment that with Fusion and/or DaVinci Resolve – even the free versions – and the collection becomes a formidable toolkit.

The interface

Naturally, the elephant in the room is the FCPX interface. It’s what simultaneously excited and turned off so many FCP7 users. In the end, how you edit with Final Cut Pro X doesn’t have to be all that different from your editing style in other NLEs. Certainly there are differences, but once you get used to the basics, there’s more that’s similar than different.

Isn’t imitation the sincerest form of flattery? You only have to look at Adobe Premiere Rush or the new Cut page in Resolve 16 to realize that just maybe others are starting to see the value in Apple’s approach. On top of that, there are features touted in Resolve 16, like facial (actually shape) recognition or adjustment layers, that were present as far back as FCPX 10.0. Whether all this is blatant copying or simply a tip of the hat doesn’t matter. Each company has concluded that some workflows and some newer editors need a faster, more direct user interface – one that scales easily from small screens to large and from single to dual-display systems.

I realize that many out there will read this post and scream “Apple apologist.” Whatever. If you’ve shifted to PC, then very little of what I’ve said applies to you. I make my daily living with Apple hardware. While I recognize that you can often get superior performance from a PC, I don’t yet find the need to make a change. This means that Final Cut Pro X remains a great option for my workflows. It’s a tool I can use for nearly any job and one that is oftentimes better than most. If you rejected it eight years ago, maybe it’s time to take a second look.

©2019 Oliver Peters

CoreMelt PaintX

When Apple launched Final Cut Pro X, it was with a decidedly simplified set of video effects. This was offset by the ease with which users can create their own custom effects, using Apple Motion as a development platform. The result has been an entirely new ecosystem of low-cost, high-quality video effects. As attractive as that is, truly advanced visual effects still require knowledgeable plug-in developers who can work within the FCPX and macOS architecture to produce more powerful tools. For example, built-in visual effects tools in other applications, such as Avid Media Composer’s Intraframe Paint or the Fusion page in DaVinci Resolve, simply aren’t within the scope of FCPX, nor of what users can create on their own through Motion templates.

To fill that need, developers like CoreMelt have been designing a range of advanced visual effects tools for the Final Cut Pro market, including effects for tracking, color correction, stabilization, and more. Their newest release is PaintX, which adds a set of Photoshop-style tools to Final Cut Pro X. As with many of CoreMelt’s other offerings, PaintX includes planar tracking, thanks to the licensing of Mocha tracking technology.

To start, drop the PaintX effect onto a clip and then launch the custom interface. PaintX requires a more elaborate control layout than the standard FCPX effects interface was designed for. Once inside the PaintX interface window, you have a choice of ten brush functions: paint color, change color, blur, smear, sharpen, warp, clone, add noise, heal, and erase. These cover a range of needs, from simple wire removal to beauty enhancements and even pseudo horror makeup effects. You have control over brush size, softness, aspect ratio, angle, and opacity. The various brushes also have specific controls for their related functions, such as the blur range for the blur brush. Effects are applied in layers and actions. Each stroke is an action, and both remain editable. If you aren’t the most precise artist, the erase brush comes in handy. Did you color a bit too far outside the lines? Simply use the erase brush on that layer and trim back the excess.

Multiple brush effects can be applied to the same or different areas within the image, simply by adding a new layer for each effect. Once you’ve applied the first paint stroke, an additional brush control panel opens, allowing you to edit the brush parameters after the fact. So if your brush size was too large or not soft enough, simply alter those settings without having to redo the effect. Each effect can be individually tracked in either direction. The Mocha tracker offers additional features, such as transform (scale/position) versus perspective tracking, along with the ability to copy and paste tracking data between brush layers.

As a Final Cut Pro X effect, PaintX works within the standard video pipeline. If you apply color correction upstream of the PaintX filter, then that grade is visible within the PaintX interface. But if the color correction is applied downstream of the PaintX effect, you won’t see it when you open the PaintX interface. However, that correction will still be uniformly applied to the clip, including the areas altered within the PaintX effect. If you’ve “punched into” a 4K clip on an HD timeline, you’ll still see the full 4K frame when you open PaintX. Finally, you have additional FCPX control over the opacity and mix of the applied PaintX filter.

I found PaintX to be well-behaved even on a modest Mac, like my three-year-old laptop. However, if you don’t have a beefy Mac, keep the effect simple. The more brush effects you apply and track within a single clip, the slower the real-time response becomes, especially on under-powered machines. These effects are GPU-intensive and paint strokes are really a particle system; therefore, simple, single-layer effects are the easiest on the machine. But if you intend to build more complex effects, like blurs and sharpens in multiple layers, then you will really want one of the more powerful Macs. Playback response is generally better once you’ve saved the effect and exited back to Final Cut. I did run into one minor issue with the clone brush on a single isolated clip while using a 2013 Mac Pro. CoreMelt told me there have been a few early bugs with certain GPUs and is looking into the anomaly I discovered. That model in particular has been notorious for GPU issues with video effects. (Update: CoreMelt sent me a new build, which has corrected this problem.)

Originally written for RedShark News

©2018 Oliver Peters