Building that Zoom Look

COVID-19 has altered our lives in many ways, but it has also changed our visual language. Video conference calls didn’t start with this pandemic, but by now Skype, Zoom, WebEx, Blue Jeans, and other services have become part of our daily lives – both as participants and as viewers. We use these for communicating with friends, distance learning, entertainment, and remote corporate meetings. Not only has video conferencing become an accepted production and broadcast method, but the “video conference look” is now a familiar entertainment style for all of us.

Many of these productions are actually live. Through elaborate and clever production techniques they can indeed achieve a quality level that’s better than the average Zoom call. However, in many cases, the video conference appearance with multiple participants on screen was actually created in post, precisely because that aesthetic is now instantly recognizable to all of us. The actual interaction might have happened over Zoom, but full-frame video was simultaneously captured. This enables an editor to polish the overall production and rebuild the multi-screen images where appropriate, without being tied to the highly-compressed, composite Zoom feed.

Building multi-screen composites in post can be time-consuming, which is where templates come in handy. Apple Final Cut Pro X offers a perfect solution for editing this style of project. There are a number of paid and/or free video conference-style Motion templates on the market. Enterprising editors can also build their own templates using Apple Motion. A nice free offering is idustrial revolution’s XEffects Video Conference – a toolkit of effects templates to easily build 4-up, 9-up, and 16-up displays.

If you need something more involved, then check out Video Walls 2 from developer Luca Visual FX, which can be purchased and installed through the FxFactory platform. This Motion template includes a series of 15 FCPX generators that cover a range of video wall and video conference styles.

The templates use image drop wells for videos and stills, which are arranged into a grid or row with adjustable borders and drop shadows. Some of the generators permit circles as well as rectangles with adjustable rounded corners. Positioning may be controlled to re-arrange the grid pattern and even overlap the panes. These generators include built-in animation effects along with keyframeable parameters.

If you want to mimic a video conference call, there’s also a dedicated generator for a Zoom-style menu bar that appears at the bottom of the screen. Border highlights around an image well may be changed as you edit to maintain the illusion that the highlight color syncs to whichever speaker in the group is talking at any given time.

Overall, I found these templates easy to use and adjust. The one thing to be mindful of is that a video wall built from 20+ video clips is effectively 20+ layers of video, so large video walls will require some horsepower. However, it was possible to do this on my mid-2014 MacBook Pro, albeit a bit more slowly. The good news is that all of this happens within the generator, so there’s only one clip on the timeline. You may also stack multiple instances of these templates if you need more images on-screen at once, or if you want to add the menu bar template on top of a video conference template.

There’s no telling how long the pseudo-Zoom look will be in vogue. However, Video Walls 2 gives you enough variety that it should have legs beyond our current “work from home” mode.

©2020 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – and often more important than – the picture side. When story context is based on dialogue, the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of re-recording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily in mono, coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also sound a bit muffled. There may also be occasional rustle from clothing rubbing against the mic as the speaker moves around. For these reasons I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Even under the best of circumstances, this will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.
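As a quick illustration of manual peak-based gain correction, here’s a hypothetical Python/NumPy sketch. Note that Audition’s automatic loudness match works on perceived loudness (LUFS) rather than sample peaks; this simpler version just scales the track so its loudest peak sits at a chosen level below clipping.

```python
import numpy as np

def normalize_peak(samples, target_dbfs=-3.0):
    """Scale a mono float track (values in -1..1) so the loudest peak
    lands at target_dbfs, leaving headroom below 0 dBFS (clipping)."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples.copy()  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # -3 dBFS is roughly 0.708
    return samples * (target_linear / peak)
```

A single linear gain change like this can’t both lift quiet passages and tame peaks, which is why chasing maximum level with gain alone starts to require compression – a separate stage in the chain.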

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and CrumplePop, to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, CrumplePop, others). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.
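To make the compression stage concrete, here’s a hypothetical static compressor in Python/NumPy. Real plug-in compressors smooth their gain with attack and release envelopes; this sketch applies the classic above-threshold ratio directly to instantaneous sample values, which is enough to show the math.

```python
import numpy as np

def compress(samples, threshold_db=-18.0, ratio=3.0):
    """Reduce level above a threshold by the given ratio: above the
    threshold, output level rises only 1/ratio as fast as input level.
    A static sketch -- real compressors use attack/release envelopes."""
    thresh = 10 ** (threshold_db / 20)
    mag = np.abs(samples)
    over = mag > thresh
    gain = np.ones_like(samples)
    # Gain that maps |x| to thresh * (|x|/thresh)^(1/ratio).
    gain[over] = (thresh * (mag[over] / thresh) ** (1.0 / ratio)) / mag[over]
    return samples * gain
```

Chaining two gentle instances – one with a low ratio for leveling, one with a high ratio and higher threshold as a peak limiter – mirrors the two-stage approach described above.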

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.

Don’t get locked into the specific order of these effects. What I have presented in this post isn’t necessarily gospel for the hierarchical order in which to use them. For example, EQ and level adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters

FilmConvert Nitrate

When it comes to film emulation software and plug-ins, FilmConvert is the popular choice for many editors. It was one of the earliest tools for film stock emulation in digital editing workflows. It not only provides excellent film looks, but also functions as a primary color correction tool in its own right. FilmConvert has now been updated into FilmConvert Nitrate – a name that’s a tip of the hat to the chemical composition of early film stocks.

The basics of film emulation with Nitrate

FilmConvert Nitrate uses built-in looks based on 19 film stocks. These include a variety of motion and still photo negative and positive stocks, ranging from Kodak and Fuji to Polaroid and Ilford. Each stock preset includes built-in film grain based on 6K film scans. Unlike other plug-ins that simply add a grain overlay, FilmConvert calculates and integrates grain based on the underlying color of the image. Whenever you apply a film stock style, a matching grain preset, which changes with each stock choice, is automatically added. The grain amount and texture can be changed or you can dial the settings back to zero if you simply want a clean image.

These film stock emulations are not simply LUTs applied to the image. In order to work its magic, FilmConvert Nitrate starts with a camera profile. Custom profiles have been built for different camera makes and models and these work inside the plug-in. This allows the software to tailor the film stock to the color science of the selected camera for more accurate picture styles. When you select a specific camera from the pulldown menu instead of the FilmConvert default, you’ll be prompted to download any camera pack that hasn’t already been installed. Free camera profile packs are available from the FilmConvert website and currently cover most of the major brands, including ARRI, Sony, Blackmagic, Canon, Panasonic, and more. You don’t have to download all of the packs at first and can add new camera packs at any time as your productions require it.

New features in FilmConvert Nitrate include Cineon log emulation, curves, and more advanced grain controls. The Cineon-to-print option appears whenever you apply FilmConvert Nitrate to a log clip, such as from an ARRI Alexa recorded in Log-C. This option enables greater control over image contrast and saturation. Remember to first remove any automatic or manually-applied LUTs, otherwise the log conversion will be doubled.

Taking FilmConvert Nitrate for a spin

As with my other color reviews, I’ve tested a variety of stock media from various cameras. This time I added a clip from Philip Bloom’s Sony FX9 test. The clip was recorded with that camera’s S-Cinetone profile, which is based on Sony’s Venice color. It looks quite nice to begin with, but of course, that doesn’t mean you shouldn’t tweak it! Other clips included ARRI Alexa log and Blackmagic BRAW files.

In Final Cut Pro X, apply the FilmConvert Nitrate plug-in to a clip and launch the floating control panel from the inspector. In Premiere, all of the controls are normally exposed in the effects controls panel. The plug-in starts with a default preset applied, so next select the camera manufacturer, model, and profile. If you haven’t already installed that specific camera pack, you’ll be prompted to download and install it. Once that’s done, simply select the film stock and adjust the settings to taste. Non-log profiles present you with film chroma and luma sliders. Log profiles change those sliders into film color and Cineon-to-print film emulation.

Multiple panes in the panel expand to reveal the grain response and primary color controls. Grading adjustments include exposure/temperature/tint, low/mid/high color wheels, and saturation. As you move the temperature and tint sliders left or right, the slider bar shows the color for the direction in which you are moving that control. That’s a nice UI touch. In addition, there are RGB curves (which can be split by color) and a levels control. Overall, this plug-in plays nice with Final Cut Pro X and Premiere Pro. It’s responsive and real-time playback performance is typically not impacted.

It is common in other film emulation filters to include grain as an overlay effect. Adjusting the filter with and without grain often results in a large difference in level. Since Nitrate’s grain is a built-in part of the preset, you won’t get an unexpected level change as you apply more grain. In addition to grain presets for film stocks from 8mm to 35mm Full Frame, you can adjust grain luminance, saturation, and size. You can also soften the picture under the grain, which might be something you’d want to do for a more convincing 8mm emulation. One unique feature is a separate response curve for grain, allowing you to adjust the grain brightness levels for lows, mids, and highs. In order to properly judge the amount of grain you apply, set Final Cut Pro X’s playback setting to Better Quality.

For a nice trick, apply two instances of Nitrate to a clip. On the first one, set the film stock to a motion picture negative, like Kodak 5207 Vision 3. Then apply a second instance with the default preset, but select a still photo positive stock, like Fuji Astia 100. Finally, tweak the color settings to get the most pleasing look. At this point, however, you will need to render for smooth playback. The result is designed to mimic a true film process, where you would shoot a negative stock and then print it to a photograph or release print.

FilmConvert Nitrate supports the ability to export your settings as a 3D LUT (.cube) file, which will carry the color information, although not the grain. To test the transparency of this workflow, I exported my custom Nitrate setting as a LUT. Next, I removed the plug-in effect from the clip and added the Custom LUT effect back to it. This was linked to the new LUT that I had just exported. When I compared the clip with the Nitrate setting versus just the LUT, they were very close with only a minor level difference between. This is a great way to move a look between systems or into other applications without having FilmConvert Nitrate installed in all of them.

Wrap-up

Any color correction effect – especially a film emulation style – is highly subjective, so no single filter is going to be a perfect match for everyone’s taste. FilmConvert Nitrate advances the original FilmConvert plug-in with an updated interface, built around a venerable set of film stock choices. This makes it a good choice if you want to nail the look of film. There’s plenty you can tweak to fine-tune the look, not to mention a wide variety of specific camera profiles. Even Apple iPhones are covered.

FilmConvert Nitrate is available for Final Cut Pro X 10.4.8 and Motion running under macOS 10.13.6 or later. It is also available for Premiere Pro/After Effects, DaVinci Resolve, and Media Composer on both macOS and Windows 10. The plug-in can be purchased for individual applications or as a bundle that covers all of the NLEs. If you already own FilmConvert, then the company has upgrade offers to switch to FilmConvert Nitrate.

Originally written for FCP.co.

©2020 Oliver Peters

Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW with some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Has that changed any thoughts about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of this pattern and red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then meshed and converted into RGB pixel information. Lastly, it’s recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
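The Bayer layout described above can be sketched directly. This hypothetical Python/NumPy snippet samples an RGB image through an RGGB pattern, which makes the 50/25/25 coverage explicit – demosaicing (interpolating the two missing channels at every site) is the harder half of the job and is left out here.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer pattern: each photosite
    keeps only one channel -- green on half the sites, red and blue on a
    quarter each. The camera (or raw developer) later interpolates the
    two missing channels per site back into full RGB pixels."""
    h, w, _ = rgb.shape
    mono = np.zeros((h, w))
    mono[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mono[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mono[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mono[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mono
```

The mosaic is a single monochrome plane – exactly the kind of pre-conversion sensor data that a camera raw format stores instead of finished RGB pixels.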

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload the processing to another stage in the pipeline, then design options are opened up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software in order to properly interpret that data into a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible and still have a camera file that is of a smaller size than its encoded RGB counterpart.

In-camera recording, camera raw, and RED

Camera raw recording preceded the introduction of the RED One camera, but those early workflows usually consisted of uncompressed movie files or image sequences recorded to an external recorder. RED introduced the ability to record a Wavelet-compressed, 4K camera raw signal at 24fps as a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing that exact same thing without entering into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using.

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just like Sony, ARRI, and Panasonic recording their camera raw signals to an external recorder, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smart phones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 DSLR recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, you can have any number of cameras. Once you bring those into Final Cut Pro X, you don’t have a camera profile with a color science that matches each and every camera model.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. I decided to decode the clip (RAW to log conversion) using a Sony profile. This gave me the best possible results for the Nikon images and effectively gives me a log clip similar to that from a Sony camera. Then for the grade I worked in Color Finale Pro 2, using its ACES workflow. To complete the ACES workflow, I used the matching SLog3 conversion to Rec709.

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

Digital Anarchy’s Video Anarchy Bundle

There are many reasons to add plug-ins and effects filters to your NLE, but the best reason is for video repair or enhancement. That’s where Digital Anarchy’s four main video plug-in products fit. These include Beauty Box Video, Samurai Sharpen, Flicker Free, and Light Wrap Fantastic. They are compatible with a range of NLE hosts and may be purchased individually or as part of several bundles. Digital Anarchy also offers photography filters, as well as a few free offerings, such as Ugly Box. That’s an offshoot of Beauty Box, but designed to achieve the opposite effect.

Beauty Box Video

Let’s face it, even the most attractive person doesn’t always come across with the most pleasing appearance on camera, in spite of good make-up and lighting. Some people simply have a skin texture, wrinkles, or blemishes that look worse on screen than face-to-face. This is where Beauty Box comes in. It is a skin retouching plug-in that uses basic face detection to isolate the skin area within the image. The mask is based on the range between the dark and light skin colors within the image. You can adjust the colors and settings to refine the area of the mask.

Like all skin smoothing filters, Beauty Box works by blurring the contrast within the affected area. However, it offers a nice range of control, along with GPU acceleration. If you apply the filter with a light touch, then you get a more subtle effect. Crank it up and you’ll get a result not unlike high-gloss, fashion photography with sprayed-on make-up. Both looks can be good, given the appropriate circumstance.

Unfortunately, out of the four, Beauty Box was the only one of these plug-ins that had an issue in Final Cut Pro X. The full control panel did not show up within the inspector pane. This was tested on three different Macs running Mojave, so I’m pretty sure it’s a bug, which I’ve reported to Digital Anarchy. Others may not run into this, but nevertheless, it worked perfectly inside Motion. While that’s a nuisance, it’s not a deal-breaker, given the usefulness of this filter. Simply process the clip in Motion and bring the corrected file back into Final Cut. I tested the same thing in Premiere Pro and no such issue appeared there.

Samurai Sharpen

Sharpening filters work by increasing contrast along the detected edges within an image. This localized contrast increase results in the perception that the image is sharper. Taken to an extreme, it can also create a cartoon effect. Samurai Sharpen uses edge detection to create a mask for the areas to be sharpened. This mask prevents image noise from also being sharpened and can be adjusted to achieve the desired effect.
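The classic version of this idea is the unsharp mask: subtract a blurred copy to isolate the high-frequency detail, then add that detail back – but only where it clears a threshold, so flat, noisy areas are left alone. The sketch below is a generic illustration of that principle, not Samurai Sharpen’s own edge-detection method; the parameter names are assumptions.

```python
import numpy as np

def unsharp_mask(luma, amount=0.8, radius=1, threshold=0.05):
    """Sharpen by adding back the detail (image minus blur), gated by a
    crude edge mask so low-level noise isn't amplified."""
    k = 2 * radius + 1
    padded = np.pad(luma, radius, mode="edge")
    blurred = np.zeros_like(luma, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    blurred /= k * k
    detail = luma - blurred
    edge_mask = (np.abs(detail) > threshold).astype(float)
    return np.clip(luma + amount * detail * edge_mask, 0.0, 1.0)
```

Push `amount` high enough and you get the cartoonish, over-etched look the article mentions – the same control trade-off the plug-in exposes with far more finesse.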

For example, the eye make-up used by most actresses provides a nice edge to which sharpening can be applied. A subtle application of the effect will result in the clip appearing to be sharper. However, you can also push the various controls to achieve a more stylized look.

Flicker Free

As the name implies, Flicker Free is designed to get rid of image flicker. Typical situations where you might encounter it include timelapse/hyperlapse clips, archival footage, strobing lights, computer and TV screens within the shot, LED displays, and the propeller shadows in drone footage. All of these conditions involve some variation in exposure within the frame or from one frame to the next, and that’s what Flicker Free evens out. It does a great job of tackling these situations, but it is also more processing-intensive than the other three plug-ins.
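One common way to even out frame-to-frame exposure variation is to scale each frame’s brightness toward a rolling average of its neighbors. The sketch below shows that general idea under my own assumptions – it is not Digital Anarchy’s algorithm, and a real deflicker also has to handle within-frame (rolling-bar) flicker, which this does not.

```python
import numpy as np

def deflicker(frames, window=2):
    """Scale each frame so its mean brightness matches the average
    brightness over a window of neighboring frames."""
    means = np.array([f.mean() for f in frames])
    out = []
    for i, frame in enumerate(frames):
        lo, hi = max(0, i - window), min(len(frames), i + window + 1)
        target = means[lo:hi].mean()
        out.append(np.clip(frame * (target / means[i]), 0.0, 1.0))
    return out
```

Averaging over neighboring frames is also why this kind of filter is more processing-intensive than a single-frame effect: every output frame depends on several input frames.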

There are several presets in a pulldown menu (more than other similar plug-ins offer), along with adjustment controls for sensitivity and frame intervals. In a few cases, a single instance of the plug-in with one setting will not completely eliminate all of the flicker. That’s when you may opt to apply a second instance of the effect in order to catch the remainder. Each instance would use different settings so that the combination yields the desired result.

According to Digital Anarchy, Flicker Free 2.0 is in public beta – first for Adobe hosts, with Final Cut Pro X to follow. This update shifts the load to GPU acceleration, so you’ll need a capable GPU to benefit from it.

Light Wrap Fantastic

The last of these four plug-ins isn’t designed for image repair, but rather enhancing chromakey composites. Whenever you composite blue-screen or green-screen shots, the trick is getting the foreground to properly blend with the background image for a composite that appears natural.

When a person stands in a natural environment, the ambient light reflected from the surroundings onto the person is visible on the edges of their image. That’s how the camera lens sees it. That subtle lighting artifact is called light wrap. The foreground subject in a green-screen shoot doesn’t naturally have this same ambient light wrap – or it’s seen as green spill. This can be corrected through careful lighting, but such care is often not taken – especially on budget-conscious productions. Therefore, you have to add light wrap in post. Some keyers include a built-in light wrap tool or function, while others rely on a separate light wrap filter. That’s where Light Wrap Fantastic comes in. It’s not a keyer by itself, but is designed to work in conjunction with a keyer as part of the effects stack applied to the foreground layer.

You can use a background color or drop the background layer into the image well, which then becomes the source for the light wrap around the foreground image. That light blends as a subtle glow around the interior edge of the subject. Since you want the shot to feel natural, you’ll generally want to use the background image rather than a stock color. This has the benefit of not only looking like the same environment, but if there are lighting changes within the background image, the light wrap edge will react dynamically. The light wrap itself can be adjusted for brightness, softness, and various blend modes. These settings allow you to control the subtlety of the effect.
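Conceptually, light wrap is a blend of the background into the foreground that is strongest just inside the keyed edge and fades to nothing toward the subject’s interior. A common way to find that edge band is to blur the matte: wherever the blurred matte dips below 1.0 inside the subject, you are near the edge. The sketch below illustrates that generic approach – it is not Light Wrap Fantastic’s implementation, and the parameter names are assumptions.

```python
import numpy as np

def light_wrap(fg, bg, matte, spread=2, strength=0.5):
    """Bleed the background into the foreground along the matte edge.
    The wrap region is where a blurred copy of the matte falls below
    1.0 while still inside the subject."""
    k = 2 * spread + 1
    padded = np.pad(matte, spread, mode="edge")
    soft = np.zeros_like(matte, dtype=float)
    for dy in range(k):
        for dx in range(k):
            soft += padded[dy:dy + matte.shape[0], dx:dx + matte.shape[1]]
    soft /= k * k
    wrap = (1.0 - soft) * matte * strength  # nonzero only just inside the edge
    return fg * (1.0 - wrap) + bg * wrap
```

Because the wrap samples the background itself, any lighting change in the background automatically shows up along the subject’s edge – the dynamic behavior described above.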

As a group, these four plug-ins form the Anarchy Video Bundle, but you have to purchase separate bundles for each host. The Apple bundle covers Final Cut Pro X and Motion, but if you also want to use these filters in After Effects, then you’ll need to also purchase the Adobe version of the bundle. Same for other host applications. You probably won’t use one of these on every session. On the other hand, when you do need to use one, it’s often the kind of enhancement that can ward off a reshoot and let you save the job in post.

Originally written for FCP.co.

©2020 Oliver Peters