Trusting Apple Displays?

In the “good old days” of post, directors, cinematographers, and clients would all judge final image quality in an online edit or color correction suite using a single, calibrated reference monitor. We’ve moved away from rooms that look like the bridge of the Enterprise into more minimalist set-ups. Add to that the current – and possibly permanent – shift toward work-from-home and remote post workflows. Without everyone looking at the same reference display, it becomes increasingly difficult to be sure that what everyone sees is actually the proper appearance of the image. For some editors, clients rarely come into the suite anymore. Instead, they are often making critical judgements based on what they see on their home or work computers and/or devices.

The lowest common denominator

Historically, a common item in most recording studios was a set of Auratone sound cubes. These small, single speaker monitors, which some mixers dubbed “awful-tones,” were intended to provide a representation of the mix as it would sound on radios and cheaper hi-fi audio set-ups. TV show re-recording mixers would also use these to check a mix in order to hear how it would translate to home TV sets.

Today, smart phones and tablets have become the video equivalent of that cheap hi-fi set-up. Generally that means Apple iPhones or iPads. In fact, thanks to Apple’s color management, videos played back on iPads and iPhones do approximate the correct look of your master file. As editors or colorists, we often ask clients to evaluate the image on an Apple device, not because they are perfect (they aren’t), but rather because they are the best of the many options out in the consumer space. In effect, checking against an iPhone has become the modern video analog of the Auratone sound cubes.

Apple color management

Apple’s color management includes several techniques that are helpful, but can also trip you up. If you are going to recommend that your clients use an iPhone, iPad, or even an iMac to judge the material, then you also want to make sure they have correctly set up their device. This also applies to you, the editor, if you are creating videos and making judgements only on an iMac (or XDR) display, without any actual external video display (as opposed to the computer’s screen).

Apple computers enable the use of different color profiles and the ability to make adjustments according to calibration. If you have a new iMac, then you are generally better off leaving the color profile set to the default iMac setting instead of fiddling with other profiles. New Apple device displays are set to P3 D65 color with a higher brightness capacity (up to 500 nits – more with XDR). You cannot expect them to perfectly reproduce an image that looks 100% like a Rec 709, 100-nit TV set. But they do get close.

I routinely edit/grade with Media Composer, Premiere Pro, DaVinci Resolve, and Final Cut Pro on iMacs and iMac Pros. Of these four, only Final Cut Pro shows an image in the edit viewer window that is relatively close to the way that image appears on the video output to a monitor. This is thanks to Apple’s color management and the broader Apple hardware/software ecosystem. The viewer image for the other three may look darker, be more saturated, have richer reds, and/or show more contrast.

User control

Once you get past the color profile (Mac only), then most Apple devices offer two or three additional user controls (depending on OS version). Obviously there’s brightness, which can be manual or automatic. When set to automatic, the display will adjust brightness based on the ambient light. Generally auto will be fine, unless you really need to see crucial shadow detail. For example, the PLUGE portion of a test pattern (the darkest gray patches) may not be discernible unless you crank up the brightness or are in a dark room.

The next two are gotchas. Along with the user interface dark mode, Apple introduced Night Shift and True Tone in an effort to reduce eye fatigue after long computer use. These are based on the theory that blue light from computer and device screens is fatiguing, harmful, and/or can impact sleep patterns. Such health concerns, as they relate to computer use, are not universally supported by the medical community.

Nevertheless, they do have a pleasing effect, because these features make the display warmer or cooler based on the time of day or the color temperature of the ambient light in the room. Typically the display will appear warmer at night or in a dimmer room. If you are working with a lot of white on the screen, such as working with documents, then these modes do feel more comfortable on your eyes (at least for me). However, your brain adjusts to the color temperature shift of the display when using something like True Tone. The screen doesn’t register in your mind as being obviously warm.

If you are doing anything that involves judging color, the LAST thing you want to use is True Tone or Night Shift. This applies to editing, color correction, art, photography, etc. It’s important to note that these settings only affect the way the image is displayed on the screen. They don’t actually change the image itself. Therefore, if you take a screen grab with True Tone or Night Shift set very cool or warm, the screen grab itself will still be neutral.

In my case, I leave these off for all of the computers I use, but I’m OK with leaving them on for my iPhone and iPad. However, this does mean I need to remember to turn the setting off whenever I use the iPhone or iPad to remotely judge videos. And there’s the rub. If you are telling your client to remotely judge a video using an Apple device – and color is part of that evaluation – then it’s imperative that you ask them (and maybe even teach them how) to turn off those settings. Unless they are familiar with the phenomenon, the odds are that True Tone and/or Night Shift have been enabled on their device(s) and they’ve never thought twice about it, simply because the mind adjusts.

QuickTime

QuickTime Player is the default media player for many professionals and end users, especially those using Macs. The way QuickTime displays a compatible file to the screen is determined by the color profile embedded into the file metadata. If I do a color correction session in Resolve, with the color management set to Rec 709 2.4 gamma (standard TV), then when I render a ProRes file, it will be encoded with a color profile of 1-2-1 (the 2 indicates 2.4 gamma).

If I export that same clip from Final Cut Pro or Premiere Pro (or re-encode the Resolve export through one of those apps), the resulting ProRes now has a profile of 1-1-1. The difference through QuickTime Player is that the Resolve clip will look darker in the shadows than the clip exported from FCP or Premiere Pro. Yet the image data in both files is exactly the same – it’s merely how QuickTime Player displays each to the screen based on the metadata. If I open both clips in different players, like Switch or VLC, which don’t use this same metadata, then they will both appear the same, without any gamma shift.
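If you want to confirm which color tags a particular master actually carries, a quick metadata check saves guesswork. Below is a minimal sketch, assuming ffmpeg’s ffprobe tool is installed and on the PATH; it prints the primaries/transfer/matrix values recorded in each file you pass on the command line.

```python
# Minimal sketch: read the color tags (primaries / transfer / matrix) that
# QuickTime-style players use, via ffprobe. Assumes ffprobe is installed.
import json
import subprocess
import sys

def color_tags(path):
    """Return the color metadata ffprobe reports for the first video stream."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=color_primaries,color_transfer,color_space",
        "-of", "json",
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stream = json.loads(out.stdout)["streams"][0]
    return {k: stream.get(k, "unspecified") for k in
            ("color_primaries", "color_transfer", "color_space")}

if __name__ == "__main__":
    for clip in sys.argv[1:]:
        print(clip, color_tags(clip))
```

Comparing the output for a Resolve export versus an FCP or Premiere Pro export of the same media makes the difference in tagging obvious, even before you notice the gamma shift on screen.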

Client recommendations

How should one deal with such uncertainties? Obviously, it’s a lot easier to tackle when everyone is in the same room. Unfortunately, that’s a luxury that may become totally obsolete. It already has for many. Fortunately most people aren’t as sensitive to color issues as the typical editor, colorist, or DP. In my experience, people tend to have greater issues with the mix than they do with color purity. But that doesn’t preclude you from politely educating your client and making sure certain best practices are followed.

First, make sure that features like True Tone and Night Shift are disabled, so that a neutral image is being viewed. Second, if you use a review-and-approval service, like frame.io or Vimeo, then you can upload test chart image files (color bars, grayscale, etc. – a simple way to generate a basic grayscale chart is sketched below). These may be used whenever you need to check the image with your client. Is the grayscale a neutral gray in appearance or is it warmer or cooler? Can you see separation in the darkest and brightest patches of these charts? Or are they all uniformly black or white? Knowing the answers will give you a better idea about what the client is seeing and how to guide them to change or improve their settings for more consistent results.

Finally, if their comments seem to relate to a QuickTime issue, then suggest using a different player, such as Switch (free with watermarks will suffice) or VLC.
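Regarding the test chart suggestion above: if you don’t have broadcast-standard chart files handy, even a simple homemade grayscale step chart is enough for this kind of sanity check. Here is a minimal sketch (Python with NumPy and Pillow assumed) that generates a basic 1920x1080 chart with evenly spaced gray patches plus near-black and near-white steps. It’s an illustration, not a calibrated test pattern.

```python
# Minimal sketch: generate a simple grayscale step chart (PNG) to upload to a
# review service. Not a broadcast-standard pattern -- just evenly spaced gray
# patches plus near-black and near-white steps, so a client can report whether
# the extremes still show separation on their screen.
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1920, 1080

# 16 evenly spaced patches from black to white across the top half
steps = np.linspace(0, 255, 16).astype(np.uint8)
top = np.repeat(steps, WIDTH // 16)[np.newaxis, :].repeat(HEIGHT // 2, axis=0)

# near-black (0-32) and near-white (223-255) patches across the bottom half
shadows = np.linspace(0, 32, 8).astype(np.uint8)
highlights = np.linspace(223, 255, 8).astype(np.uint8)
bottom_vals = np.concatenate([shadows, highlights])
bottom = np.repeat(bottom_vals, WIDTH // 16)[np.newaxis, :].repeat(HEIGHT - HEIGHT // 2, axis=0)

chart = np.vstack([top, bottom])
Image.fromarray(chart, mode="L").save("grayscale_chart.png")
```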

The brain, eyes, and glasses

Some final considerations… No two people see colors in exactly the same way. Many people suffer from mild color blindness, i.e. color vision deficiencies. This means they may be more or less sensitive to shades of some colors. Eyeglasses also affect what you see. For example, many lenses, depending on the coatings and material, will yellow over time. I cannot use polycarbonate lenses, because I see chromatic aberration on highlights when wearing this material, even though most opticians and other users don’t see that at all. CR-39 (optical plastic) or glass (no longer sold) are the only eyeglass lens materials that work for me.

If I’m on a flight in a window seat, then the eye closest to the window is being bombarded with a different color temperature of light than the eye towards the plane’s interior. This can be exacerbated with sunglasses. After extended exposure to such a differential, I can look at something neutral and, when I close one eye or the other, I will see the image with a drastically different color temperature for one eye versus the other. This eventually normalizes itself, but it’s an interesting phenomenon.

The bottom line of such anecdotes is that people view things differently. The internet dress color question is an example of this. So when a client gives you color feedback that just doesn’t make sense to you, it might just be them and not their display!

Check out my follow-up article at PVC about dealing with color management in Adobe Premiere Pro.

©2021 Oliver Peters

Larry Jordan’s Techniques of Visual Persuasion

You may know him as a speaker, trainer, or web presenter. Or from the long-running Digital Production Buzz podcast series. Or his 2 Reel Guys series with the late Norman Hollyn. Regardless of how, Larry Jordan is well-known by most working and aspiring video professionals. But Jordan is also an accomplished author, with several books to his credit. The latest is Techniques of Visual Persuasion: Create powerful images that motivate.

Commercials, corporate videos, or entertainment – the art of persuasion is at the heart of what every editor does. Persuasion is about convincing someone to take the action you want them to take or to share the feeling you are trying to convey. In addition to creating persuasive messages, we ourselves are also consumers and recipients of these same communications. Therefore, knowledge and understanding are key. It is Jordan’s premise that with modern life’s faster pace, proper communication today is more like haiku than a lengthy report. Every professional needs to know how to make their presentation – whether spoken, still, or motion – succinct and impactful. This book is perfectly laid out to get that point across.

Techniques of Visual Persuasion is arranged into three sections. The first covers the fundamentals of persuasion. The second is about developing persuasive still images and the last section is about persuasive motion images. This book is arranged like a textbook, which is a good thing. It’s well-researched and detailed. Each chapter starts with the goals to be covered and ends with a recap. Each is also capped off with an anecdote (like Larry starting a fire in a TV studio) or a guest contributor’s point-of-view. The pages are illustrated nicely with sidebars, images, and charts that help make the point of how and why one example is more inviting or persuasive than another.

Jordan covers a wide range of theoretical and practical advice, such as the 180-degree rule, the rule of thirds, three-point lighting, sans serif vs. serif fonts, and much more. But it’s not all just concepts. Jordan has a lengthy background in software training, including several books on Final Cut Pro and Adobe products, as well as his PowerUp series of videos.

Section two includes two chapters on the basics of Photoshop with practical examples of how to use its tools to enhance and repair still images and create layered composites. Section three goes even deeper into real-world experience. Jordan covers topics, such as suggested camera and audio equipment, interviewing techniques, how to properly record audio, and how to properly plan and produce a video shoot. This section also goes deepest into software basics, including a detailed look at Adobe Audition, Apple Final Cut Pro X, and Apple Motion.

Techniques of Visual Persuasion is like a college film program condensed into under 400 informative pages, all of it written in a very engaging manner. I found that it’s not only a good first read, but useful to have around for quick reference, whether you are just entering the field or have been in the business for years. Larry Jordan is a gifted presenter who can express complex topics in an easy-to-digest manner and this latest book is no exception.

©2020 Oliver Peters

Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW in some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Has that changed any thoughts about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of this pattern, while red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then demosaiced and converted into RGB pixel information. Lastly, it’s recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
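To make that process a little more concrete, here is a minimal sketch of the simplest possible demosaic step, assuming an RGGB Bayer layout and using a plain neighborhood average (NumPy and SciPy assumed). Real cameras use far more sophisticated interpolation, plus noise reduction, sharpening, and the manufacturer’s color science on top of this.

```python
# Minimal sketch of the demosaic step described above: interpolate a single
# monochrome Bayer mosaic (RGGB layout assumed) into an RGB image with a
# plain neighborhood average.
import numpy as np
from scipy.ndimage import convolve

def demosaic_simple(mosaic):
    """mosaic: 2D array of linear photosite values in an RGGB pattern."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red photosites
    masks[0::2, 1::2, 1] = True   # green photosites (even rows)
    masks[1::2, 0::2, 1] = True   # green photosites (odd rows)
    masks[1::2, 1::2, 2] = True   # blue photosites
    kernel = np.ones((3, 3))
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        count = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        rgb[..., c] = convolve(known, kernel, mode="mirror") / np.maximum(count, 1e-6)
        rgb[..., c] = np.where(masks[..., c], mosaic, rgb[..., c])  # keep measured values
    return rgb

# A fake 6x8 "sensor" readout just to show the shape change: photosites in, RGB pixels out.
sensor = np.random.rand(6, 8)
print(demosaic_simple(sensor).shape)   # (6, 8, 3)
```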

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec 709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload the processing to another stage in the pipeline, then design options are opened up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software in order to properly interpret that data into a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible while still having a camera file that is smaller than its encoded RGB counterpart.

In-camera recording, camera raw, and RED

Camera raw recording preceded the introduction of the RED One camera. Earlier implementations usually consisted of uncompressed movie files or image sequences recorded to an external recorder. RED introduced the ability to record a wavelet-compressed, 4K camera raw signal at 24fps as a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing that exact same thing, unless they enter into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using. (Apple was awarded an Engineering Emmy Award this year for the ProRes codecs.)

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just as Sony, ARRI, and Panasonic record their camera raw signals to external recorders, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smart phones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than it could by handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 DSLR recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, you can have any number of cameras. Once you bring those into Final Cut Pro X, you don’t have the correct camera profile with a color science that matches that model for each and every camera.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. (Note: This was corrected with the FCPX 10.4.9 update.) I decided to decode the clip (RAW to log conversion) using a Sony profile. This gave me the best possible results for the Nikon images and effectively gives me a log clip similar to that from a Sony camera. Then for the grade I worked in Color Finale Pro 2, using its ACES workflow. To complete the ACES workflow, I used the matching SLog3 conversion to Rec709.

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

ADA Compliance

The Americans with Disabilities Act (ADA) has enriched the lives of many in the disabled community since its introduction in 1990. It affects all of our lives, from wheelchair-friendly ramps on street corners and business entrances to the various accessibility modes in our computers and smart devices. While many editors don’t have to deal directly with the impact of the ADA on media, the law does affect broadcasters and streaming platforms. If you deliver commercials and programs, then your production will be affected in one way or another. Typically the producer is not directly subject to compliance, but the platform is. This means someone has to provide the elements that complete compliance as part of any distribution arrangement, whether it is the producer or the outlet itself.

Two components are involved to meet proper ADA compliance: closed captions and described audio (aka audio descriptions). Captions come in two flavors – open and closed. Open captions, or subtitles, consist of text “burned” into the image. They are customarily used when a foreign language is spoken in an otherwise English program (or the equivalent in non-English-speaking countries). Closed captions are carried in a data stream that can be turned on and off by the viewer, device, or the platform and are intended to make the dialogue accessible to the hearing-impaired. Closed captions are often also turned on in noisy environments, like a TV playing in a gym or a bar.

Audio descriptions are intended to aid the visually-impaired. This is a version of the audio mix with an additional voice-over element. An announcer describes visual information that is not readily obvious from the audio of the program itself. This voice-over fills in the gaps, such as “man climbs to the top of a large hill” or “logos appear on screen.”

Closed captions

Historically post houses and producers have opted to outsource caption creation to companies that specialize in those services. However, modern NLEs enable any editor to handle captions themselves and the increasing enforcement of ADA compliance is now adding to the deliverable requirements for many editors. With this increased demand, using a specialist may become cost prohibitive; therefore, built-in tools are all the more attractive.

There are numerous closed caption standards and various captioning file formats. The most common are .scc (Scenarist), .srt (SubRip), and .vtt (preferred for the web). Captions can be supplied as “embedded” (secondary data within the master file) or as a separate “sidecar” file, which is intended to play in sync with the video file. Not all of these are equal. For example, .scc files (embedded or as sidecar files) support text formatting and positioning, while .srt and .vtt do not. For instance, if you have a lower-third name graphic come on screen, you want to move any caption from its usual lower-third, safe-title position to the top of the screen while that name graphic is visible. This way both remain legible. The .scc format supports that, but the other two don’t. The visual appearance of the caption text is a function of the playback hardware or software, so the same captions look different in QuickTime Player versus Switch or VLC. In addition, SubRip (.srt) captions all appear at the bottom, even if you repositioned them to the top, while .vtt captions appear at the top of the screen.

You may prefer to first create a transcription of the dialogue using an outside service, rather than simply typing in the captions from scratch. There are several online resources that automate speech-to-text, including SpeedScriber, Simon Says, Transcriptive, and others. Since AI-based transcription is only as good as the intelligibility of the audio and the dialects of the speakers, they all require further text editing/correction through an online tool before the results are ready to use.

One service that I’ve used with good results is REV.com, which uses human transcribers for greater accuracy, as well as offering an online text editing tool. The transcription can be downloaded in various formats, including simple text (.txt). Once you have a valid transcription, that file can be converted through a variety of software applications into .srt, .scc, or .vtt files. These in turn can be imported into your preferred NLE for timing, formatting, and positioning adjustments.
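As one small example of such a conversion, an .srt file can be turned into a .vtt file with very little work, since the two formats are nearly identical. The minimal sketch below assumes a clean SubRip file and ignores formatting tags and positioning, which these formats barely support anyway; WebVTT simply adds a “WEBVTT” header and uses a period instead of a comma in the timestamp milliseconds.

```python
# Minimal sketch: convert a SubRip (.srt) caption file to WebVTT (.vtt).
import re
import sys

TIMESTAMP = re.compile(r"(\d{2}:\d{2}:\d{2}),(\d{3})")

def srt_to_vtt(srt_path, vtt_path):
    with open(srt_path, "r", encoding="utf-8") as f:
        srt = f.read()
    # 00:01:02,500 --> 00:01:04,000  becomes  00:01:02.500 --> 00:01:04.000
    body = TIMESTAMP.sub(r"\1.\2", srt)
    with open(vtt_path, "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n" + body)

if __name__ == "__main__":
    srt_to_vtt(sys.argv[1], sys.argv[2])
```

Going the other direction, or converting to .scc with its formatting and positioning data, is a job better left to the NLE or a dedicated captioning tool.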

Getting the right look

There are guidelines that captioning specialists follow, but some are merely customary and do not affect compliance. For example, upper and lower case text is currently the norm, but you’ll still be OK if your text is all caps. There are also accepted norms when English (or other) subtitles appear on screen, such as for someone speaking in a foreign language. In those cases, no additional closed caption text is used, since the subtitle already provides that information. However, a caption may appear at the top of the screen identifying that a foreign language is being spoken. Likewise, during sections with only music or ambient sounds, a caption may briefly identify it as such.

When creating captions, you have to understand that readability is key, so the text will not always run perfectly in sync with the dialogue. For instance, when two actors engage in rapid fire dialogue, each caption may stay on longer than the spoken line. You can adjust the timing against that scene so that they eventually catch up once the pace slows down. It’s good to watch a few captioned programs before starting from scratch – just to get a sense of what works and what doesn’t.

If you are creating captions for a program to run on a specific broadcast network or streaming service, then it’s a good idea to find out if they provide a style guide for captions.

Using your NLE to create closed captions

Avid Media Composer, Adobe Premiere Pro, DaVinci Resolve, and Apple Final Cut Pro X all support closed captions. I find FCPX to be the best of this group, because of its extensive editing control over captions and ease of use. This includes text formatting, but also display methods, like pop-on, paint-on, and roll-up effects. Import .scc files for maximum control or extract captions from an existing master, if your media already has embedded caption data. The other three NLEs place the captions onto a single data track (like a video track) within which captions can be edited. Final Cut Pro X places them as a series of connected clips, like any other video clip or graphic. If you perform additional editing, the FCPX magnetic timeline takes care of keeping the captions in sync with the associated dialogue.

Final Cut’s big plus for me is that validation errors are flagged in red. Validation errors occur when caption clips overlap, may be too short for the display method (like a paint-on), are too close to the start of the file, or other errors. It’s easy to find and fix these before exporting the master file.

Deliverables

NLEs support exporting a master file with embedded captions, exporting a file with the captions “burned” into the video as subtitles, or exporting the captions as a separate sidecar file. Specific format support for embedded captions varies among applications. For example, Premiere Pro – as well as Adobe Media Encoder – will only embed captioning data when you export your sequence or encode a file as a QuickTime-wrapped master file. (I’m running macOS, so there may be other options with Windows.)

On the other hand, Apple Compressor and Final Cut Pro X can encode or export files with embedded captions for formats such as MPEG2 TS, MPEG 2 PS, or MP4. It would be nice if all these NLEs supported the same range of formats, but they don’t. If your goal is a sidecar caption file instead of embedded data, then it’s a far simpler and more reliable process.

Audio descriptions

Compared to closed captions, providing audio description files is relatively easy. These can either be separate audio files – used as sidecar files for secondary audio – or additional tracks on the delivery master. Sometimes it’s a completely separate video file with only this version of the mix. Advanced platforms like Netflix may also require an IMF (Interoperable Master Format) package, which would include an audio description track as part of that package. When audio sidecar files are requested for the web or certain playback platforms, like hotel TV systems, the common deliverable formats are .mp3 or .m4a. The key is that the audio track should be able to run in sync with the rest of the program.
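Before delivery, it’s worth confirming that the sidecar actually runs the same length as the program master. Here is a minimal sketch, assuming ffprobe is installed; the file names and the half-second tolerance are just placeholders for illustration.

```python
# Minimal sketch: sanity-check that an audio description sidecar (.m4a/.mp3)
# runs the same length as the program master before delivery.
import subprocess

def duration_seconds(path):
    """Ask ffprobe for the container duration in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

master = duration_seconds("program_master.mov")      # hypothetical file name
sidecar = duration_seconds("audio_description.m4a")  # hypothetical file name
drift = abs(master - sidecar)
print(f"Master: {master:.2f}s  Sidecar: {sidecar:.2f}s  Difference: {drift:.2f}s")
if drift > 0.5:  # arbitrary tolerance for this sketch
    print("Warning: sidecar length doesn't match the program master.")
```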

Producing an audio description file doesn’t require any new skills. A voice-over announcer describes any action occurring on screen that wouldn’t otherwise make sense if you were only listening to the audio. Think of it like a radio play or podcast version of your TV program. This can be as simple as fitting additional VO into the gaps between actor/host/speaker dialogue. If you have access to the original files (such as a Pro Tools session) or dialogue/music/effects stems, then you have some latitude to adjust audio elements in order to fit in the additional voice-over lines. For example, sometimes the off-camera dialogue may be moved or edited in order to make more space for the VO descriptions, while on-camera/sync dialogue is left untouched. In those cases, other elements of the mix may be muted or ducked to make space for even longer descriptions.

Some of the same captioning service providers also offer audio description services, using their pool of announcers. Yet there’s nothing about the process that any producer or editor couldn’t handle themselves. Scripting the extra lines, hiring and directing talent, and producing the final mix only require a bit more time added to the schedule, yet permit the most creative control.

ADA compliance has been around since 1990, but hasn’t been widely enforced outside of broadcast. That’s changing and there are no more excuses with the new NLE tools. It’s become easier than ever for any editor or producer to make sure they can provide the proper elements to touch every potential viewer.

For additional information, consult the FCC guidelines on closed captions.

The article was originally written for Pro Video Coalition.

©2020 Oliver Peters

Video Technology 2020 – Shared Storage

Shared storage used to be the domain of “heavy iron” facilities, with Avid, Facilis, and earlier Apple Xserve systems providing the horsepower. Thanks to advances in networking and Ethernet technology, shared storage is accessible to any user. Whether built-in or via adapters, modern computers can tap into 1Gbps, 10Gbps, and even higher networking speeds. Most computers can natively access Gigabit Ethernet networks (1Gbps) – adequate for SD and HD workflows. Computers designed for the pro video market increasingly sport built-in 10GbE ports, enabling comfortable collaboration with 4K media and up. Some of today’s most popular shared storage vendors include QNAP, Synology, and LumaForge.
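As a rough back-of-the-envelope check of those claims, you can compare link speed against stream bitrates. The sketch below uses approximate ProRes data rates and an assumed efficiency factor purely for illustration; they are not vendor specifications, and real-world NAS throughput also depends on protocol overhead and drive performance.

```python
# Back-of-the-envelope sketch: how many simultaneous playback streams a
# network link might carry. Bitrates below are rough approximations for
# illustration only, not official figures.
LINK_SPEEDS_MBPS = {"1GbE": 1000, "10GbE": 10000, "40GbE": 40000}

APPROX_BITRATES_MBPS = {
    "ProRes 422 1080p29.97": 147,        # assumption
    "ProRes 422 HQ 1080p29.97": 220,     # assumption
    "ProRes 422 HQ UHD 29.97": 880,      # assumption (roughly 4x the HD rate)
}

EFFICIENCY = 0.6  # assume only ~60% of the raw link speed is usable in practice

for link, speed in LINK_SPEEDS_MBPS.items():
    print(f"\n{link} (~{int(speed * EFFICIENCY)} Mb/s usable):")
    for codec, rate in APPROX_BITRATES_MBPS.items():
        streams = int(speed * EFFICIENCY // rate)
        print(f"  {codec:<26} ~{streams} stream(s)")
```

Even with generous allowances for overhead, the arithmetic shows why Gigabit Ethernet is comfortable for HD but 10GbE (or better) is the practical floor for multi-user 4K work.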

This technology will become more prolific in 2020, with systems easier to connect and administer, making shared storage as plug-and-play as any local drive. Network Attached Storage (NAS) systems can service a single workstation or multiple users. In fact, companies like QNAP even offer consumer versions of these products designed to operate as home media servers. Even LumaForge sells a version of its popular Jellyfish through the online Apple Store. A simple online connection guide will get you up and running, no IT department required. This is ideal for the individual editor or small post shop.

Expect 2020 to see higher connection speeds, such as 40GbE, and even more widespread NAS proliferation. It’s not just a matter of growth. These vendors are also interested in extending the functionality of their products beyond being a simple bucket for media. NAS systems will become full-featured media hubs. For example, if you are an Avid user, you are familiar with their Media Central concept. In essence, this means the shared storage solution is a platform for various other applications, including the editing software. There are additional media applications that include management apps for user permission control, media queries, and more. Like Avid, the other vendors are exploring similar extensibility through third-party apps, such as Axle Video, Kyno, Hedge, Frame.io, and others. As such, a shared network becomes a whole that is greater than the sum of its parts.

Along with increased functionality, expect changes in the hardware, too. Modern NAS hardware is largely based on RAID arrays with spinning mechanical drives. As solid state (SSD) storage devices become more affordable, many NAS vendors will offer some of their products featuring RAID arrays configured with SSDs or even NVMe systems. Or a mixture of the two, with the SSD-based units used for short-term projects or cache files. Eventually the cost will come down enough so that large storage volumes can be cost-effectively populated with only SSDs. Don’t expect to be purchasing 100TB of SSD storage at a reasonable price in 2020; however, that is the direction in which we are headed. At least in this coming year, mechanical drives will still rule. Nevertheless, start looking at some percentage of your storage inventory to soon be based on SSDs.

Click here for more on shared storage solutions.

Originally written for Creative Planet Network.

©2020 Oliver Peters