Is good enough finally good enough?

Like many in post, I have spent weeks in a WFH (work from home) mode. Although I’m back in the office now on a limited basis, part of those weeks included studying the various webinars covering remote post workflows. Not as a solution for now, but to see what worked and what didn’t for the “next time.”

It was interesting to watch some of the comments from executives involved in network production groups and running multi-site, global post companies. While many offered good suggestions, I also heard a few statements about having to settle for something that was “good enough” under the circumstances. Maybe it wasn’t meant the way it sounded to me, but to characterize cutting in Premiere Pro and delivering ProRes masters as something they had to “settle for” struck me as just a bit snobbish. My apologies if I took it the wrong way.

A look back

The image at the top (click to expand) is a facility that I helped design and build and that I worked out of for over a dozen years. This was Century III, the resident post facility at Universal Studios Florida – back in the “Hollywood east” days of the 1990s. Not every post house of the day was as fancy or as well-equipped, but it represented the general state of the art for that time. During its operation, we worked with 1″, D1, D2, Digital Betacam, and eventually some HD. But along the way, traditional linear post gave way to cheaper non-linear suites. We evolved with that trend and the last construction project was to repurpose one of the linear suites into a high-end Avid Symphony finishing suite.

All things come to an end and 2002 saw Century III’s demise – in part because of the economic aftermath following September 11th, but also because of changes in the general film climate in Florida. That was also a time when dramatic and comedic filmed series gave way to many non-scripted, “reality” TV series.

I became a freelancer/independent contractor that year and about a year or so later was cutting and finishing an Animal Planet series. We cut and finished with four networked Avid workstations spread across two apartments. There we covered all post, except the final audio mix. It was readily obvious to me that this was up to 160 hours/week of post that was no longer being done at an established facility. And that it was a trend that would accelerate, not go away.

Continued shift

It’s going on two decades now since that shift. In that time I’ve worked out of my home studio (picture circa 2011), my laptop on site, and within other production companies and facilities. Under various conditions, I’ve cut, finished, and delivered commercials, network shows, trade-show presentations, themed attraction projects, and feature films and documentaries. I’ve cut and graded with Final Cut Pro (“legacy” and X), Premiere Pro, Media Composer/Symphony, Avid DS, Color, Resolve, and others. The final delivered files have all passed stringent QC. It’s a given to me that you don’t need a state-of-the-art facility to do good work – IF you know what you are doing – and IF you can trust your gear in a way that you can generate predictable results. So I have to challenge the assumptions when I hear “good enough.”

Predictable results – ah, there’s the rub. Colorists swear by the necessity for rooms with the proper neutral paint job and very expensive, calibrated displays. Yet, now many are working from home in ad hoc grading rooms. Many took home their super-expensive Sonys, but others are also using high-end LG, Flanders, or the new Apple XDR to grade by. And guess what? Somehow it all works. Would a calibrated grading environment be better? Sure, I’m not saying that it wouldn’t – simply that you can deliver quality without one when needed.

I’ve often asked clients to evaluate an in-progress grade using an Apple iPad, simply because iPads display good, consistent results. It’s like audio mixers who use the old Auratone cube speakers. Both devices are intended to be a “lowest common denominator.” If it looks or sounds good there, then that will translate reasonably well to other consumer devices. For grading I would still like to have the client present at the end for a final pass. Color is subjective and it’s essential that you are looking at the same display in the same room to make sure everyone is talking the same language.

I need to point out that I’m generally talking about finishing for streaming, the web, and/or broadcast with a stereo mix. When it comes to specialized venues, like theatrical presentations and custom attractions (theme parks or museums), the mixing and grading almost always have to be completed in properly designed suites/theaters/mix stages (motion pictures) or on-site (special venues). For example, if you mix a motion picture for theatrical display, you need a properly certified 5.1, 7.1, or Dolby Atmos environment. Otherwise, it’s largely a guessing game. The same goes for picture projection, which differs from TV and the web in terms of brightness, gamma, and color space. In these two instances, it’s highly unlikely that anyone working out of their house is going to have an acceptable set-up.

The new normal

So where do we go from here? What is the “new normal?” Once some level of normal has returned, I do believe a lot of post will go back to the way it was before. But, not all. Think of the various videoconference-style (Skype, Zoom, etc.) shows you’ve been watching these weeks. Obviously, these were produced that way out of necessity. But, guess what! Quite a few are downright entertaining, which says to me that this format isn’t going away. It will become another way to produce a show that viewers like. Just as GoPros and drones have become a standard part of the production lexicon, the same will be true of iPhones and even direct Zoom or Skype feeds. Viewers are now comfortable with it.

At a time when the manufacturers have been trying to cram HDR and 8K down our throats, we suddenly find that something entirely different is more important. This will change not only production, but also post. Of course, many editors have already been working from home or ad hoc cutting rooms prior to this; but editing is a collaborative art that involves working with other creatives.

All situations aren’t equal though. I’ve typically worked without a client sitting over my shoulder for years. Review-and-approval services like Frame.io have become standard tools in my workflow. Although not quite as efficient as having a client right there, it can still be very effective. That’s common in my workflows, but has likely become a new way of working over these past two months for editors and colorists who never worked that way prior to Covid-19.

Going forward

Where does “good enough” fit in? If cutting in Media Composer and delivering DNxHR has been your norm within a facility, then using editors working from home may require a shift in thinking. For example, is cutting in Resolve, Premiere Pro, or Final Cut Pro X and then delivering ProResHQ (or higher) an acceptable alternative? There simply is no quality compromise, regardless of what some may believe, but it may require a shift in workflow or thinking.

Security may be harder to overcome. In studio or network-controlled features and TV series, security is tight, making WFH situations dicey. However, the truth of the matter is that the lowest common denominator may be more dangerous than a hacker. Think about the unscrupulous person somewhere in the chain who has access to files. Or someone with a smartphone camera recording a screen. In the end, do you or don’t you have employees and/or freelancers that you can trust? Frame.io is addressing some of these security questions with personalized screeners. Nevertheless, such issues need to be addressed and in some cases, loosened.

Another item to consider: what are your freelancers using to cut or grade with? Do they have an adequate workstation with the right software, plug-ins, and fonts? Or does the company need to supply that? What about monitoring? All of these are items to explore with your staff and freelancers.

The hardest nut to crack is if you need access to a home base. Sure you can “sneakernet” drives between editors. You can transfer large files over the internet on a limited basis. These both come with a hit in efficiency. For example, my current work situation requires ongoing access to high-res, native media stored on QNAP and LumaForge Jellyfish NAS systems – an aggregate of about 3/4PB of potential storage. Fortunately, we have a policy of archiving all completed projects onto removable drives, even while still storing the projects on the NAS systems for as long as possible. In preparation for our WFH mode, I brought home about 40 archive drives (about 150TB of media) as a best guess of everything I might need to work on from home. Two other editors took home a small RAID each for projects that they were working on.

Going forward, what have I learned? The bottom line is – I don’t know. We can easily work from home and deliver high-quality work. To me that’s a given and has been for a while. In fact, if you are running a loaded 5K iMac, iMac Pro, or 16″ MacBook Pro, then you already have a better workstation than most suites still running 10-year-old “cheese grater” or 7-year-old “trash can” Mac Pros. Toss in a fast Thunderbolt or USB 3.0 RAID and ProRes or DNxHR media becomes a breeze. Clearly this “good enough” scenario will deliver comparable results to a “blessed” edit suite.

Unfortunately, if you can’t stay completely self-contained, then the scenarios involve someone being at the home base. In larger facilities this still requires IT personnel or assistant editors to go into the office. Even if you are an editor cutting from home with proxy files, someone has to go into the office to conform the camera originals and create deliverables. This tends to make a mockery out of stringent WFH restrictions.

If the world truly has changed forever, as many believe, and remote work will be how the majority of post-production operates going forward, then it certainly changes the complexion of what a facility will look like. Why invest in a large SAN/NAS storage solution? Why invest in a fleet of new Mac Pros? There’s no need, because the facility footprint can be much smaller. Just make sure your employees/freelancers have adequate hardware to do your work.

The alternative is fast, direct access over the internet to your actual shared storage. Technically, you can access files in a number of ways. None of them are particularly efficient. The best systems involve expense, like Teradici products or the HP RGS feature. However, if you have an IT hiccup or a power outage, you are back in the same boat. The “holy grail” for many is to have all media in the cloud and to edit directly from the cloud. That to me is still a total pipe dream and will be for a while for a variety of reasons. I don’t want to say that all of these ideas present insurmountable hurdles, but they aren’t cheaper – nor more secure – than being on premises. At least not yet.

The good news is that our experience over the past few months has spurred interest in new ways of working that will incentivize development. And maybe – just maybe – instead of fretting about the infrastructure to support 8K, we’ll find better, faster, more efficient ways to work with high-quality media at a distance.

©2020 Oliver Peters

Chasing the Elusive Film Look

Ever since we started shooting dramatic content on video, directors have pushed to achieve the cinematic qualities of film. Sometimes that’s through lens selection, lighting, or frame rate, but more often it falls on the shoulders of the editor or colorist to make that video look like film. Yet, many things contribute to how we perceive the “look of film.” It’s not a single effect, but rather the combination of careful set design, costuming, lighting, lenses, camera color science, and color correction in post.

As editors, we have control over the last ingredient, which brings me to LUTs and plug-ins. A number of these claim to offer looks based on certain film emulsions. I’m not talking about stylized color presets, but the subtle characteristics of film’s color and texture. But what does that really mean? A projected theatrical film is the product of four different stocks within that chain – original camera negative, interpositive print, internegative, and the release print. Conversely, a digital project shot on film and then scanned to a file only involves one film stock. So it doesn’t mean much to say you are copying the look of film emulsion without understanding the desired effect.

My favorite film plug-in is Koji Advance, which is distributed through the FxFactory platform. Koji was developed by Crumplepop in collaboration with noted film timer Dale Grahn. A film timer is the film lab’s equivalent to a digital colorist. Grahn selected several color and black-and-white film stocks as the basis for the Koji film looks and film grain emulation. Then Crumplepop’s developers expanded those options with neutral, saturated, and low contrast versions of each film stock and included camera-based conversions from log or Rec 709 color spaces. This is all wrapped into a versatile color correction plug-in with controls for temperature/tint, lift/gamma/gain/density (low, mid, high, master), saturation, and color correction sliders. (Click an image to see an expanded view.)

This post isn’t a review of the Koji Advance plug-in, but rather how to use such a filter effectively within an NLE like Final Cut Pro X (or Premiere Pro and After Effects, as well). In fact, these tips can also be used with other similar film look plug-ins. Koji can be used as your primary color correction tool, applying and adjusting it on each clip. But I really see it as icing on the cake and so will take a different approach.

1. Base grade/shot matching. The first thing you want to do in any color correction session is to match your shots within the sequence. It’s best to establish a base grade before you dive into certain stylized looks. Set the correct brightness and contrast and then adjust for proper balance and color tone. For these examples, I’ve edited a timeline consisting of a series of random FilmSupply stock footage clips. These clips cover a mix of cameras and color spaces. Before I do anything, I have to grade these to look consistent.

Since these are not all from the same set-up, there will naturally be some variances. A magic hour shot can never be corrected to be identical to a sunny exterior or an office shot. Variations are OK, as long as general levels are good and the tone feels right. Final Cut Pro X features a solid color correction tool set that is aided by the comparison view. That makes it easy to match a shot to the clip before and after it in the timeline.

2. Adding the film look. Once you have an evenly graded sequence of shots, add an adjustment layer. I will typically apply the Koji filter, an instance of Hue/Sat Curves, and a broadcast-safe limiter into that layer.

Within the Koji filter, select generic Rec 709 as the camera format and then the desired film stock. Each selection will have different effects on the color, brightness, and contrast of the clips. Pick the one closest to your intended effect. If you also want film grain, then select a stock choice for grain and adjust the saturation, contrast, and mix percentage for that grain. It’s best to view grain playing back at close to your target screen size with Final Cut set to Better Quality. Making grain judgements in a small viewer or in Better Performance mode can be deceiving. Grain should be subtle, unless you are going for a grunge look.

The addition of any of these film emulsion effects will impact the look of your base grade; therefore, you may need to tweak the color settings with the Koji controls. Remember, you are going for an overall look. In many cases, your primary grade might look nice and punchy – perfect for TV commercials. But that style may feel too saturated for a convincing film look of a drama. That’s where the Hue/Sat Curves tool comes in. Select LUMA vs SAT and bring down the low end to taste. You want to end up with pure blacks (at the darkest point) and a slight decrease in shadow-area saturation.

3. Readjust shots for your final grade. The application of a film effect is not transparent and the Koji filter will tend to affect the look of some clips more than others. This means that you’ll need to go back and make slight adjustments to some of the clips in your sequence. Tweak the clip color correction settings applied in the first step so that you optimize each clip’s final appearance through the Koji plug-in.

4. Other options. Remember that Koji or similar plug-ins offer different options – so don’t be afraid to experiment. Want film noir? Try a black-and-white film stock, but remember to also turn down the grain saturation.

You aren’t going for a stylized color correction treatment with these tips. What you are trying to achieve is a look that is more akin to that of a film print. The point of adding a film filter on top is to create a blend across all of your clips – a type of visual “glue.” Since filters like this and the adjustment layer as a whole have opacity settings, it’s easy to go full bore with the look or simply add a hint to taste. Subtlety is the key.

Originally written for FCP.co.

©2020 Oliver Peters

A First Look at Postlab Cloud

Apple developed Final Cut Pro X around single-editor workflows. As such, professional editing teams who wanted to use this tool for collaborative editing have been challenged to develop their own solutions. One approach was Postlab, which was developed in-house at Dutch broadcaster Evangelische Omroep (EO). In order to expand the product as a commercial application, lead developer Jasper Siegers decided to move it under the Hedge umbrella. This required the app to be rebuilt with new code before it could be offered to the FCPX market. That time has come and Postlab is now available as Postlab Cloud.

As the name implies, Postlab Cloud hosts your FCPX libraries “in the cloud,” i.e. on Postlab’s servers. Some production companies or broadcasters are reticent to have their editing computers connected online, but it’s important to note that only libraries and no media or caches are hosted by Postlab. This keeps the transfer times fast and file sizes light. Cache and media files stay local, whether on your machine or on connected shared storage. Postlab sets up accounts based on site licenses and numbers of users. Each user is assigned a log-in based on an e-mail address and a password. This means that a production hosted by Postlab can be accessed by authorized users anywhere in the world, provided there’s a viable internet connection.

The owner of the account can set up productions and organize them within folders. Each production is a collection or bundle of one or more Final Cut Pro X libraries. If you have ever worked with Final Cut Server in the FCP7 days, then the Postlab workflow is very similar. Once a production has been created, an editor can log in, download the library (a check-out step), edit in it, and then upload the changed version (a check-in step). As part of this upload, the Postlab interface prompts you to add comments describing the work you’ve done. Only one editor at a time can download a library and have write access; however, other users can still download it with read-only access. If you have two editors ping-ponging work on the same library file, then one has to upload it (check in) before the other editor can download it (check out) for their edits.

Getting started

I decided to test Postlab Cloud in two scenarios: a) multiple workstations connected to a shared storage network, and b) two disconnected editors collaborating over long distances. To start, once an account has been established, any editor using Postlab Cloud must install the small Postlab application. Since the app controls some of Final Cut’s functions, you will be prompted to enable GUI Scripting in your privacy preferences. In order for Postlab to work properly, media and cache files need to be outside of the library bundle. When you first download a library, you may be prompted to change your settings. In a networked environment with media on shared storage, the path to the media should be the same on each workstation. This means when Editor A finishes and checks in the production and then Editor B checks it back out, you generally will not need to relink the media files on Editor B’s system. Therefore, this edit collaboration can proceed fluidly.

Once a production has been downloaded, the library file exists as a temporary file on the local machine and not the network. This means that Postlab can still work in tandem with storage solutions that don’t normally perform well with FCPX libraries. In addition to this temporary library file, the Final Cut backup library is also stored in the location you have designated. If you are working in a networked, collaborative environment, then the advantage Postlab offers is version tracking and the ability for multiple users to open a library file (only one with write access).

Long distance

The second scenario is working with other editors outside of your facility. The first step is to get the media to the outside editor. You could certainly send a drive, but that isn’t efficient in either time or cost, especially across continents. If you only need creative editing and not finishing services, then low-res proxy files are fine. So I converted my 4K UHD ProRes HQ files to 960 x 540 H264 (3Mbps) files and used Frame.io to transfer them over the internet. The key to proper relinking when you are done is to set audio to pass-through when converting these files. This was a double-system sound shoot, so I uploaded both the H264 video files and the sound recordist’s WAV files to Frame and then downloaded them again at the other end (my home). Now I had media in both locations. The process would be the same even if it were two editors in two different countries.
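If you prefer a command-line route, a proxy conversion like this can be sketched with a tool such as ffmpeg. This is just an illustration of the settings involved – the filenames are hypothetical and it assumes ffmpeg is installed:

```shell
# Scale a 4K UHD ProRes HQ master down to a 960x540 H.264 proxy at ~3 Mbps.
# "-c:a copy" passes the audio through untouched, which is the key to
# clean relinking back to the camera originals later.
ffmpeg -i master_4k.mov \
       -vf scale=960:540 \
       -c:v libx264 -b:v 3M -pix_fmt yuv420p \
       -c:a copy \
       proxy_960x540.mov
```

Batch this over a folder of masters and the resulting proxies are small enough to move through a service like Frame.io in reasonable time.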

The first Postlab step is to create and upload this FCPX library. Once that has been established, any authorized user with a Postlab log-in can access the production. I decided to go back and forth on this production between my home and the facility, also using different user log-ins – thus simulating a team of remote editors. Each time I did this, version changes were tracked by Postlab. If I were working with multiple editors, I would have been able to see what tasks each had performed.

It’s important to note that when you collaborate in this way, each editor should be using the same effects, LUTs, and Motion templates, otherwise some things will appear offline. Since the path to the media was different at home versus at the facility, each time I went between the two, checking in and then checking out the production, media files would appear offline. A simple relink fixed this, but it’s something to be aware of. Once totally done, I could relink to the high-res camera files and “finish” the project back at the office.

Wrap-up

When you upload a library back to Postlab, that open FCPX library is closed within Final Cut Pro X on your system, because you have checked it back in. Once you log out of Postlab, the temporary library file is moved to the trash. If you need a local version of the library, then export it from the Postlab app.

Once you get the hang of it, collaboration is simple using Postlab Cloud. Library files stay light without any of the sort of corruption caused by using services like Dropbox. My test project included synchronized multi-cam clips and multi-channel audio. Each time during this exchange, clips, projects, and edits showed up as expected when going between the various users. Whether or not Apple ever tackles collaboration within Final Cut Pro X is an unknown. But why wait? If you need that today, then Postlab Cloud offers a solid answer.

The relaunched Postlab Cloud includes three plans, which are priced per user/per year: Postlab, Postlab Pro, and Postlab Server. The first tier only allows for library version tracking and sharing. Pro allows for a lot more libraries to be shared and comes with more features. Server is a dedicated Postlab Cloud server for larger teams or those that require IT-specific features like Active Directory. Finally, Hedge/Postlab plans to ship a local version of Postlab – designed for use within local networks – soon after launch.

Postlab has now expanded to include Premiere Pro users.

Check out the Postlab tutorials for more information.

The article was originally written for FCP.co.

©2020 Oliver Peters

ADA Compliance

The Americans with Disabilities Act (ADA) has enriched the lives of many in the disabled community since its introduction in 1990. It affects all of our lives, from wheelchair-friendly ramps on street corners and business entrances to the various accessibility modes in our computers and smart devices. While many editors don’t have to deal directly with the impact of the ADA on media, the law does affect broadcasters and streaming platforms. If you deliver commercials and programs, then your production will be affected in one way or another. Typically the producer is not directly subject to compliance, but the platform is. This means someone has to provide the elements that complete compliance as part of any distribution arrangement, whether it is the producer or the outlet itself.

Two components are involved to meet proper ADA compliance: closed captions and described audio (aka audio descriptions). Captions come in two flavors – open and closed. Open captions or subtitles consist of text “burned” into the image. They are customarily used when a foreign language is spoken in an otherwise English program (or the equivalent in non-English-speaking countries). Closed captions are enclosed in a data stream that can be turned on and off by the viewer, device, or the platform and are intended to make the dialogue accessible to the hearing-impaired. Closed captions are often also turned on in noisy environments, like a TV playing in a gym or a bar.

Audio descriptions are intended to aid the visually-impaired. This is a version of the audio mix with an additional voice-over element. An announcer describes visual information that is not readily obvious from the audio of the program itself. This voice-over fills in the gaps, such as “man climbs to the top of a large hill” or “logos appear on screen.”

Closed captions

Historically, post houses and producers have opted to outsource caption creation to companies that specialize in those services. However, modern NLEs enable any editor to handle captions themselves, and the increasing enforcement of ADA compliance is now adding to the deliverable requirements for many editors. With this increased demand, using a specialist may become cost-prohibitive; therefore, built-in tools are all the more attractive.

There are numerous closed caption standards and various captioning file formats. The most common are .scc (Scenarist), .srt (SubRip), and .vtt (preferred for the web). Captions can be supplied as “embedded” (secondary data within the master file) or as a separate “sidecar” file, which is intended to play in sync with the video file. Not all of these are equal. For example, .scc files (embedded or as sidecar files) support text formatting and positioning, while .srt and .vtt do not. For instance, if you have a lower-third name graphic come on screen, you want to move any caption from its usual lower-third, safe-title position to the top of the screen while that name graphic is visible. This way both remain legible. The .scc format supports that, but the other two don’t. The visual appearance of the caption text is a function of the playback hardware or software, so the same captions look different in QuickTime Player versus Switch or VLC. In addition, SubRip (.srt) captions all appear at the bottom, even if you repositioned them to the top, while .vtt captions appear at the top of the screen.

You may prefer to first create a transcription of the dialogue using an outside service, rather than simply typing in the captions from scratch. There are several online resources that automate speech-to-text, including SpeedScriber, Simon Says, Transcriptive, and others. Since AI-based transcription is only as good as the intelligibility of the audio and the dialects of the speakers, they all require further text editing/correction through an online tool before they are ready to use.

One service that I’ve used with good results is REV.com, which uses human transcribers for greater accuracy, as well as offering an online text editing tool. The transcription can be downloaded in various formats, including simple text (.txt). Once you have a valid transcription, that file can be converted through a variety of software applications into .srt, .scc, or .vtt files. These in turn can be imported into your preferred NLE for timing, formatting, and positioning adjustments.
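As an illustration of how mechanical some of these conversions are, a basic .srt file can be turned into a .vtt file with little more than a header and a timestamp tweak – SubRip uses a comma as the decimal separator in timecodes, while WebVTT uses a period. This sketch creates a hypothetical one-cue captions.srt for demonstration and then converts it:

```shell
# Create a tiny sample SRT file (hypothetical content) to convert.
cat > captions.srt <<'EOF'
1
00:00:01,000 --> 00:00:04,000
Hello, world.
EOF

# SRT -> WebVTT: write the required "WEBVTT" header, then swap the
# comma decimal separators in the timestamps for periods.
printf 'WEBVTT\n\n' > captions.vtt
sed 's/\([0-9][0-9]:[0-9][0-9]:[0-9][0-9]\),\([0-9][0-9][0-9]\)/\1.\2/g' captions.srt >> captions.vtt

cat captions.vtt
```

Real-world files have quirks (styling tags, positioning data), so for production deliverables a dedicated conversion tool is still the safer bet – but this shows why .srt and .vtt are considered the “simple” formats.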

Getting the right look

There are guidelines that captioning specialists follow, but some are merely customary and do not affect compliance. For example, upper and lower case text is currently the norm, but you’ll still be OK if your text is all caps. There are also accepted norms when English (or other) subtitles appear on screen, such as for someone speaking in a foreign language. In those cases, no additional closed caption text is used, since the subtitle already provides that information. However, a caption may appear at the top of the screen identifying that a foreign language is being spoken. Likewise, during sections with only music or ambient sounds, a caption may briefly identify it as such.

When creating captions, you have to understand that readability is key, so the text will not always run perfectly in sync with the dialogue. For instance, when two actors engage in rapid fire dialogue, each caption may stay on longer than the spoken line. You can adjust the timing against that scene so that they eventually catch up once the pace slows down. It’s good to watch a few captioned programs before starting from scratch – just to get a sense of what works and what doesn’t.

If you are creating captions for a program to run on a specific broadcast network or streaming service, then it’s a good idea to find out if they provide a style guide for captions.

Using your NLE to create closed captions

Avid Media Composer, Adobe Premiere Pro, DaVinci Resolve, and Apple Final Cut Pro X all support closed captions. I find FCPX to be the best of this group, because of its extensive editing control over captions and ease of use. This includes text formatting, but also display methods, like pop-on, paint-on, and roll-up effects. Import .scc files for maximum control or extract captions from an existing master, if your media already has embedded caption data. The other three NLEs place the captions onto a single data track (like a video track) within which captions can be edited. Final Cut Pro X places them as a series of connected clips, like any other video clip or graphic. If you perform additional editing, the FCPX magnetic timeline takes care of keeping the captions in sync with the associated dialogue.

Final Cut’s big plus for me is that validation errors are flagged in red. Validation errors occur when caption clips overlap, may be too short for the display method (like a paint-on), are too close to the start of the file, or other errors. It’s easy to find and fix these before exporting the master file.

Deliverables

NLEs support exporting a master file with embedded captions, burning the captions into the video as subtitles, or exporting the captions as a separate sidecar file. Specific format support for embedded captions varies among applications. For example, Premiere Pro – as well as Adobe Media Encoder – will only embed captioning data when you export your sequence or encode a file as a QuickTime-wrapped master file. (I’m running macOS, so there may be other options with Windows.)

On the other hand, Apple Compressor and Final Cut Pro X can encode or export files with embedded captions in formats such as MPEG-2 TS, MPEG-2 PS, or MP4. It would be nice if all of these NLEs supported the same range of formats, but they don’t. If your goal is a sidecar caption file instead of embedded data, then the process is far simpler and more reliable.

Audio descriptions

Compared to closed captions, providing audio description files is relatively easy. These can either be separate audio files – used as sidecar files for secondary audio – or additional tracks on the delivery master. Sometimes it’s a completely separate video file with only this version of the mix. Advanced platforms like Netflix may also require an IMF (Interoperable Master Format) package, which would include an audio description track as part of that package. When audio sidecar files are requested for the web or certain playback platforms, like hotel TV systems, the common deliverable formats are .mp3 or .m4a. The key is that the audio track should be able to run in sync with the rest of the program.

Producing an audio description file doesn’t require any new skills. A voice-over announcer describes any on-screen action that wouldn’t otherwise make sense if you were only listening to the audio. Think of it like a radio play or podcast version of your TV program. This can be as simple as fitting additional VO into the gaps between actor/host/speaker dialogue. If you have access to the original files (such as a Pro Tools session) or dialogue/music/effects stems, then you have some latitude to adjust audio elements in order to fit in the additional voice-over lines. For example, off-camera dialogue may sometimes be moved or edited to make more space for the VO descriptions, while on-camera/sync dialogue is left untouched. In that case, some of the other audio elements may be muted or ducked to make space for even longer descriptions.
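
The gap-fitting step can be approximated programmatically. This sketch assumes you already have dialogue in/out times (say, from a transcript or an EDL) and uses an arbitrary two-second minimum as the shortest gap worth scripting against:

```python
# Sketch: find gaps between dialogue lines long enough to hold a description.
# Times are in seconds; min_gap is an assumed working threshold, not a spec.

def find_vo_gaps(dialogue, program_end, min_gap=2.0):
    """dialogue: sorted list of (start, end) speech intervals.
    Returns (start, end) gaps of at least min_gap seconds."""
    gaps = []
    cursor = 0.0
    for start, end in dialogue:
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if program_end - cursor >= min_gap:
        gaps.append((cursor, program_end))
    return gaps

# A host speaks at 1-4s and 10-15s in a 20-second segment.
print(find_vo_gaps([(1.0, 4.0), (10.0, 15.0)], program_end=20.0))
# → [(4.0, 10.0), (15.0, 20.0)]
```

The scripted descriptions then get written to fit those windows, which is exactly the judgment call an announcer and editor make by ear.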

Some of the same captioning service providers also offer audio description services, using their pool of announcers. Yet there’s nothing about the process that any producer or editor couldn’t handle themselves. Scripting the extra lines, hiring and directing talent, and producing the final mix only require a bit more time added to the schedule, yet permit the most creative control.

ADA compliance has been around since 1990, but hasn’t been widely enforced outside of broadcast. That’s changing, and with the new NLE tools there are no more excuses. It’s become easier than ever for any editor or producer to provide the proper elements to reach every potential viewer.

For additional information, consult the FCC guidelines on closed captions.

The article was originally written for Pro Video Coalition.

©2020 Oliver Peters

Video Technology 2020 – Shared Storage

Shared storage used to be the domain of “heavy iron” facilities, with Avid, Facilis, and earlier Apple Xserve systems providing the horsepower. Thanks to advances in networking and Ethernet technology, shared storage is now accessible to any user. Whether built-in or via adapters, modern computers can tap into 1Gbps, 10Gbps, and even higher networking speeds. Most computers can natively access Gigabit Ethernet networks (1Gbps) – adequate for SD and HD workflows. Computers designed for the pro video market increasingly sport built-in 10GbE ports, enabling comfortable collaboration with 4K media and up. Some of today’s most popular shared storage vendors include QNAP, Synology, and LumaForge.
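
Some back-of-envelope math shows why those link speeds line up with those formats. The figures below use Apple’s approximate published target data rates for ProRes 422 and assume roughly 80% usable network efficiency – both are ballpark assumptions, not guarantees:

```python
# Rough arithmetic: how many simultaneous ProRes 422 streams fit on a link.
# Bit rates are approximate Apple published targets; the 80% efficiency
# figure is an assumption and varies with protocol and hardware.

def streams_per_link(link_gbps, stream_mbps, efficiency=0.8):
    usable_mbps = link_gbps * 1000 * efficiency
    return int(usable_mbps // stream_mbps)

PRORES_422_HD = 147    # Mbps, 1080p29.97 (approx.)
PRORES_422_UHD = 589   # Mbps, 2160p29.97 (approx.)

print(streams_per_link(1, PRORES_422_HD))    # HD streams on 1GbE → 5
print(streams_per_link(10, PRORES_422_UHD))  # UHD streams on 10GbE → 13
```

A single Gigabit link comfortably carries a few HD ProRes streams, while 4K work quickly pushes you toward 10GbE – which matches the built-in ports now showing up on pro-market machines.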

This technology will become more prolific in 2020, with systems easier to connect and administer, making shared storage as plug-and-play as any local drive. Network Attached Storage (NAS) systems can service a single workstation or multiple users. In fact, companies like QNAP even offer consumer versions of these products designed to operate as home media servers, and LumaForge sells a version of its popular Jellyfish through the online Apple Store. A simple online connection guide will get you up and running – no IT department required. This is ideal for the individual editor or small post shop.

Expect 2020 to bring higher connection speeds, such as 40GbE, and even more widespread NAS adoption. It’s not just a matter of growth. These vendors are also interested in extending the functionality of their products beyond being a simple bucket for media. NAS systems will become full-featured media hubs. For example, if you are an Avid user, you are familiar with their Media Central concept. In essence, the shared storage solution becomes a platform for various other applications, including the editing software. Additional applications handle media management tasks such as user permission control, media queries, and more. Like Avid, the other vendors are exploring similar extensibility through third-party apps, such as Axle Video, Kyno, Hedge, Frame.io, and others. As such, a shared network becomes a whole that is greater than the sum of its parts.

Along with increased functionality, expect changes in the hardware, too. Modern NAS hardware is largely based on RAID arrays of spinning mechanical drives. As solid state (SSD) storage becomes more affordable, many NAS vendors will offer products featuring RAID arrays configured with SSDs or even NVMe drives – or a mixture of the two, with the SSD-based units used for short-term projects or cache files. Eventually the cost will come down enough that large storage volumes can be cost-effectively populated with only SSDs. Don’t expect to be purchasing 100TB of SSD storage at a reasonable price in 2020; however, that is the direction in which we are headed. At least in this coming year, mechanical drives will still rule. Nevertheless, expect some percentage of your storage inventory to soon be SSD-based.


Originally written for Creative Planet Network.

©2020 Oliver Peters