ADA Compliance

The Americans with Disabilities Act (ADA) has enriched the lives of many in the disabled community since its introduction in 1990. It affects all of our lives, from wheelchair-friendly ramps on street corners and business entrances to the various accessibility modes in our computers and smart devices. While many editors don’t have to deal directly with the impact of the ADA on media, the law does affect broadcasters and streaming platforms. If you deliver commercials and programs, then your production will be affected in one way or another. Typically the producer is not directly subject to compliance, but the platform is. This means someone has to provide the elements that complete compliance as part of any distribution arrangement, whether it is the producer or the outlet itself.

Two components are required for proper ADA compliance: closed captions and described audio (aka audio descriptions). Captions come in two flavors – open and closed. Open captions, or subtitles, consist of text “burned” into the image. They are customarily used when a foreign language is spoken in an otherwise English program (or the equivalent in non-English-speaking countries). Closed captions are carried in a data stream that can be turned on and off by the viewer, the device, or the platform and are intended to make the dialogue accessible to the hearing-impaired. Closed captions are also often turned on in noisy environments, like a TV playing in a gym or a bar.

Audio descriptions are intended to aid the visually-impaired. This is a version of the audio mix with an additional voice-over element. An announcer describes visual information that is not readily obvious from the audio of the program itself. This voice-over fills in the gaps, such as “man climbs to the top of a large hill” or “logos appear on screen.”

Closed captions

Historically, post houses and producers have opted to outsource caption creation to companies that specialize in those services. However, modern NLEs enable any editor to handle captions in-house, and the increasing enforcement of ADA compliance is adding to the deliverable requirements for many editors. With this increased demand, using a specialist may become cost prohibitive; therefore, built-in tools are all the more attractive.

There are numerous closed caption standards and various captioning file formats. The most common are .scc (Scenarist), .srt (SubRip), and .vtt (WebVTT, preferred for the web). Captions can be supplied as “embedded” (secondary data within the master file) or as a separate “sidecar” file, which is intended to play in sync with the video file. Not all of these are equal. For example, .scc files (embedded or as sidecar files) support text formatting and positioning, while .srt and .vtt do not. Say a lower-third name graphic comes on screen: you want to move any caption from its usual lower-third, safe-title position to the top of the screen while that graphic is visible, so that both remain legible. The .scc format supports that, but the other two don’t. The visual appearance of the caption text is a function of the playback hardware or software, so the same captions look different in QuickTime Player versus Switch or VLC. In addition, SubRip (.srt) captions all appear at the bottom, even if you repositioned them to the top, while .vtt captions appear at the top of the screen.
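For comparison, here is a single hypothetical caption cue, first as it would appear in an .srt file and then in a .vtt file. SubRip numbers each cue and uses a comma before the milliseconds, while WebVTT begins with a WEBVTT header and uses a period:

    1
    00:00:12,500 --> 00:00:15,000
    Welcome back to the show.

    WEBVTT

    00:00:12.500 --> 00:00:15.000
    Welcome back to the show.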

You may prefer to first create a transcription of the dialogue using an outside service, rather than simply typing in the captions from scratch. There are several online resources that automate speech-to-text, including SpeedScriber, Simon Says, Transcriptive, and others. Since AI-based transcription is only as good as the intelligibility of the audio and the dialects of the speakers, they all require further text editing/correction through an online tool before the results are ready to use.

One service that I’ve used with good results is REV.com, which uses human transcribers for greater accuracy, as well as offering an online text editing tool. The transcription can be downloaded in various formats, including simple text (.txt). Once you have a valid transcription, that file can be converted through a variety of software applications into .srt, .scc, or .vtt files. These in turn can be imported into your preferred NLE for timing, formatting, and positioning adjustments.
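As a minimal sketch of what such a conversion tool does under the hood, the following Python snippet wraps a transcript into numbered SRT cues. It assumes a hypothetical input file with one cue per line in the form start|end|text, with times in seconds; real transcripts and conversion tools will differ:

    def srt_time(seconds):
        # SRT timestamps are HH:MM:SS,mmm with a comma before the milliseconds.
        ms = int(round(seconds * 1000))
        h, rem = divmod(ms, 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return "{:02d}:{:02d}:{:02d},{:03d}".format(h, m, s, ms)

    def transcript_to_srt(lines):
        # Build one numbered cue per input line.
        cues = []
        for i, line in enumerate(lines, start=1):
            start, end, text = line.strip().split("|", 2)
            cues.append("{}\n{} --> {}\n{}\n".format(
                i, srt_time(float(start)), srt_time(float(end)), text))
        return "\n".join(cues)

    with open("transcript.txt") as f_in, open("captions.srt", "w") as f_out:
        f_out.write(transcript_to_srt(f_in.readlines()))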

Getting the right look

There are guidelines that captioning specialists follow, but some are merely customary and do not affect compliance. For example, upper and lower case text is currently the norm, but you’ll still be OK if your text is all caps. There are also accepted norms for when English (or other) subtitles appear on screen, such as for someone speaking in a foreign language. In those cases, no additional closed caption text is used, since the subtitle already provides that information. However, a caption may appear at the top of the screen identifying that a foreign language is being spoken. Likewise, during sections with only music or ambient sounds, a caption may briefly identify it as such.

When creating captions, you have to understand that readability is key, so the text will not always run perfectly in sync with the dialogue. For instance, when two actors engage in rapid-fire dialogue, each caption may stay on longer than the spoken line. You can adjust the timing across that scene so that the captions eventually catch up once the pace slows down. It’s good to watch a few captioned programs before starting from scratch – just to get a sense of what works and what doesn’t.

If you are creating captions for a program to run on a specific broadcast network or streaming service, then it’s a good idea to find out if they provide a style guide for captions.

Using your NLE to create closed captions

Avid Media Composer, Adobe Premiere Pro, DaVinci Resolve, and Apple Final Cut Pro X all support closed captions. I find FCPX to be the best of this group, because of its extensive editing control over captions and ease of use. This includes text formatting, but also display methods, like pop-on, paint-on, and roll-up effects. Import .scc files for maximum control or extract captions from an existing master, if your media already has embedded caption data. The other three NLEs place the captions onto a single data track (like a video track) within which captions can be edited. Final Cut Pro X places them as a series of connected clips, like any other video clip or graphic. If you perform additional editing, the FCPX magnetic timeline takes care of keeping the captions in sync with the associated dialogue.

Final Cut’s big plus for me is that validation errors are flagged in red. Validation errors occur when caption clips overlap, are too short for the display method (like a paint-on), are too close to the start of the file, and so on. It’s easy to find and fix these before exporting the master file.

Deliverables

NLEs can export a master file with embedded captions, burn the captions into the video as subtitles, or export them as a separate sidecar file. Specific format support for embedded captions varies among applications. For example, Premiere Pro – as well as Adobe Media Encoder – will only embed captioning data when you export your sequence or encode a file as a QuickTime-wrapped master file. (I’m running macOS, so there may be other options with Windows.)

On the other hand, Apple Compressor and Final Cut Pro X can encode or export files with embedded captions for formats such as MPEG-2 TS, MPEG-2 PS, or MP4. It would be nice if all these NLEs supported the same range of formats, but they don’t. If your goal is a sidecar caption file instead of embedded data, then the process is far simpler and more reliable.
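As a rough sketch outside of these NLEs, the free ffmpeg tool can also mux a sidecar .srt into an MP4 as an embedded text track (placeholder filenames; this creates a mov_text track rather than broadcast-style 608/708 caption data from an .scc):

    ffmpeg -i master.mov -i captions.srt -map 0:v -map 0:a -map 1:0 \
        -c:v copy -c:a copy -c:s mov_text -metadata:s:s:0 language=eng delivery.mp4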

Audio descriptions

Compared to closed captions, providing audio description files is relatively easy. These can either be separate audio files – used as sidecar files for secondary audio – or additional tracks on the delivery master. Sometimes it’s a completely separate video file with only this version of the mix. Advanced platforms like Netflix may also require an IMF (Interoperable Master Format) package, which would include an audio description track as part of that package. When audio sidecar files are requested for the web or certain playback platforms, like hotel TV systems, the common deliverable formats are .mp3 or .m4a. The key is that the audio track should be able to run in sync with the rest of the program.

Producing an audio description file doesn’t require any new skills. A voice-over announcer describes on-screen action that wouldn’t otherwise make sense if you were only listening to the audio. Think of it like a radio play or podcast version of your TV program. This can be as simple as fitting additional VO into the gaps between actor/host/speaker dialogue. If you have access to the original files (such as a Pro Tools session) or dialogue/music/effects stems, then you have some latitude to adjust audio elements in order to fit in the additional voice-over lines. For example, off-camera dialogue may sometimes be moved or edited in order to make more space for the VO descriptions, while on-camera/sync dialogue is left untouched. Where even more room is needed, some of the other audio elements may be muted or ducked to make space for longer descriptions.

Some of the same captioning service providers also offer audio description services, using their pool of announcers. Yet, there’s nothing about the process that any producer or editor couldn’t handle themselves. Scripting the extra lines, hiring and directing talent, and producing the final mix only require a bit more time in the schedule, while permitting the most creative control.

ADA compliance has been around since 1990, but hasn’t been widely enforced outside of broadcast. That’s changing, and with the new NLE tools there are no more excuses. It’s become easier than ever for any editor or producer to provide the proper elements to reach every potential viewer.

For additional information, consult the FCC guidelines on closed captions.

The article was originally written for Pro Video Coalition.

©2020 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as I worked as an editor and tech writer, we’ve kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career of using as well as helping to design and shepherd a wide range of post-production products, Steve probably knows more about a diverse field of editing systems than most other company managers at editing systems manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like FilmLight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of  the codebase was a huge step forward as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already did to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage and six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

Blackmagic Design UltraScope


Blackmagic Design’s UltraScope gained a lot of buzz at NAB 2009. In a time when fewer facilities are spending precious budget dollars on high-end video and technical monitors, the UltraScope seems to fit the bill for a high-quality, but low-cost waveform monitor and vectorscope. It doesn’t answer all needs, but if you are interested in replacing that trusty NTSC Tektronix, Leader, or Videotek scope with something that’s both cost-effective and designed for HD, then the UltraScope may be right for you.

The Blackmagic Design UltraScope is an outgrowth of the company’s development of the DeckLink cards. Purchasing UltraScope provides you with two components – a PCIe SDI/HD-SDI input card and the UltraScope software. These are installed into a qualified Windows PC with a high-resolution monitor and, in total, provide a multi-pattern monitoring system. The PC specs are pretty loose. Blackmagic Design has listed a number of qualified systems on their website, but like most companies, these represent products that have been tested and known to work – not all the possible options that, in fact, will work. Stick to the list and you are safe. Pick other options and your mileage may vary.

Configuring your system

The idea behind UltraScope is to end up with a product that gives you high-quality HD and SD monitoring, but without the cost of top-of-the-line dedicated hardware or rasterizing scopes. The key ingredients are a PC with a PCIe bus and the appropriate graphics display card. The PC should have an Intel Core 2 Duo 2.5GHz processor (or better) and run Windows XP or Vista. Windows 32-bit and 64-bit versions are supported, but check Blackmagic Design’s tech specs page for exact details. According to Blackmagic Design, the card has to incorporate the OpenGL 2.1 (or better) standard. A fellow editor configured his system with an off-the-shelf card from a computer retailer for about $100. In his case, a Diamond-branded card using the ATI 4650 chipset worked just fine.

You need the right monitor for the best experience. Initial marketing information specified 24” monitors. In fact, the requirement is to be able to support a 1920×1200 screen resolution. My friend is using an older 23” Apple Cinema Display. HP also makes some monitors with that resolution in the 22” range for under $300. If you are prepared to do a little “DIY” experimentation and don’t mind returning a product to the store if it doesn’t work, then you can certainly get UltraScope to work on a PC that isn’t on Blackmagic Design’s list. Putting together such a system should cost under $2,000, including the UltraScope and monitor, which is well under the price of the lowest-cost competitor.

Once you have a PC with UltraScope installed, the rest is pretty simple. The UltraScope software is simply another Windows application, so it can operate on a workstation that is shared for other tasks. UltraScope becomes the dominant application when you launch it. Its interface hides everything else and can’t be minimized, so you are either running UltraScope or not. As such, I’d recommend using a PC that isn’t intended for essential editing tasks, if you plan to use UltraScope fulltime.

Connect your input cable to the PCIe card and whatever is being sent will be displayed in the interface. The UltraScope input card can handle coax and fiber optic SDI at up to 3Gb/s and each connection offers a loop-through. Most, but not all, NTSC, PAL and HD formats and frame-rates are supported. For instance, 1080p/23.98 is supported but 720p/23.98 is not. The input is auto-sensing, so as you change project settings or output formats on your NLE, the UltraScope adjusts accordingly. No operator interaction is required.

The UltraScope display is divided into six panes that display parade, waveform, vectorscope, histogram, audio and picture. The audio pane supports up to 8 embedded SDI channels and shows both volume and phase. The picture pane displays a color image and VITC timecode. There’s very little to it beyond that. You can’t change the displays or rearrange them. You also cannot zoom, magnify or calibrate the scope readouts in any way. If you need to measure horizontal or vertical blanking or where captioning is located within the vertical interval, then this product isn’t for you. The main function of the UltraScope is to display levels for quality control monitoring and color correction and it does that quite well. Video levels that run out of bounds are indicated with a red color, so video peaks that exceed 100 change from white to red as they cross over.

Is it right for you?

The UltraScope is going to be more useful to some than others. For instance, if you run Apple Final Cut Studio, then the built-in software scopes in Final Cut Pro or Color will show you the same information and, in general use, seem about as accurate. The advantage of UltraScope for such users is the ability to check levels at the output of any hardware i/o card or VTR, not just within the editing software. If you are an Avid editor, then you only have access to built-in scopes when in the color correction mode, so UltraScope is of greater benefit.

My colleague’s system is an Avid Media Composer equipped with Mojo DX. By adding UltraScope he now has fulltime monitoring of video waveforms, which is something the Media Composer doesn’t provide. The real-time updating of the display seems very fast without lag. I did notice that the confidence video in the picture pane dropped a few frames at times, but the scopes appeared to keep up. I’m not sure, but it seems that Blackmagic Design has given preference in the software to the scopes over the image display, which is a good thing. The only problem we encountered was audio. When the Mojo DX was supposed to be outputting eight discrete audio channels, only four showed up on the UltraScope meters. As we didn’t have an 8-channel VTR to test this, I’m not sure if this was an Avid or Blackmagic Design issue.

Since the input card takes any SDI signal, it also makes perfect sense to use the Blackmagic Design UltraScope as a central monitor. You could assign the input to the card from a router or patch bay and use it in a central machine room. Another option is to locate the computer centrally, but use Cat5-DVI extenders to place a monitor in several different edit bays. This way, at any given time, one room could use the UltraScope, without necessarily installing a complete system into each room.

Future-proofed through software

It’s important to remember that this is a 1.0 product. Because UltraScope is software-based, features that aren’t available today can easily be added. Blackmagic Design has already been doing that over the years with its other products. For instance, scaling and calibration aren’t there today, but if enough customers request them, then they might show up in the next release as a simple downloadable update.

Blackmagic Design UltraScope is a great product for the editor who misses having a dedicated set of scopes, but who doesn’t want to break the bank to get them back. Unlike hardware units, a software product like UltraScope makes it easier than ever to update features and improve the existing product over time. Even if you have built-in scopes within your NLE, this is going to be the only way to make sure your i/o card is really outputting the right levels, plus it gives you an ideal way to check the signal on your VTR without tying up other systems. And besides… What’s cooler to impress a client than having another monitor whose display looks like you are landing 747s at LAX?

©2009 Oliver Peters

Written for NewBay Media LLC and DV magazine

What’s wrong with this picture?


“May you live in interesting times” is said to be an ancient Chinese curse. That certainly describes modern times, but no more so than in the video world. We are at the intersection of numerous transitions: analog to digital broadcast; SD to HD; CRTs to LCD and plasma displays; and tape-based to file-based acquisition and delivery. Where the industry had the chance to make a clear break with the past, it often chose to integrate solutions that protected legacy formats and infrastructure, leaving us with the bewildering options that we know today.

 

Broadcasters settled on two standards: 720p and 1080i. These are both full-raster, square pixel formats: 1280x720p/59.94 (60 progressive frames per second in NTSC countries) – commonly known as “60P” – and 1920x1080i/59.94 (60 interlaced fields per second in NTSC countries) – commonly known as “60i”. The industry has wrestled with interlacing since before the birth of NTSC.

 

Interlaced scan

 

Interlaced displays show a frame as two sequential sets of alternating odd and even-numbered scan lines. Each set is called a field and occurs at 1/60th of a second, so two fields make a single full-resolution frame. Since the fields are displaced in time, one frame with fast horizontal motion will appear like it has serrated edges or horizontal lines. That’s because odd-numbered scan lines show action that occurred 1/60th of a second apart from the even-numbered, adjacent scan lines. If you routinely move interlaced content between software apps, you have to be careful to maintain proper field dominance (whether edits start on field 1 or field 2 of a frame) and field order (whether a frame is displayed starting with odd or even-numbered scan lines).
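As a small illustration of that structure, this Python sketch splits one frame’s scan lines into its two fields (the list of lines is just a stand-in for real video data):

    # One "frame" represented as ten numbered scan lines.
    frame = ["line {}".format(n) for n in range(1, 11)]
    field_1 = frame[0::2]   # odd-numbered lines (1, 3, 5, ...)
    field_2 = frame[1::2]   # even-numbered lines (2, 4, 6, ...)
    # The two fields are captured 1/60th of a second apart, which is why
    # fast horizontal motion produces the serrated edges described above.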

 

Progressive scan

 

A progressive format, like 720p, displays a complete, full-resolution frame for each of 60 frames per second. All scan lines show action that was captured at the exact same instance in time. When you combine the spatial with the temporal resolution, the amount of data that passes in front of a viewer’s eyes in one second is essentially the same for 1080i (about 62 million pixels) as for 720p (about 55 million pixels).
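A quick check of those figures, using the NTSC-friendly rates (29.97 full frames per second for 1080i and 59.94 frames per second for 720p): 1920 x 1080 x 29.97 ≈ 62.1 million pixels per second, while 1280 x 720 x 59.94 ≈ 55.2 million pixels per second.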

 

Progressive is ultimately a better format solution from the point-of-view of conversions and graphics. Progressive media scales more easily from SD to HD without the risk of introducing interlace errors that can’t be corrected later. Graphic and VFX artists also have a better time with progressive media and won’t have issues with proper field order, as is so often the case when working with NTSC or even 1080i. The benefits of progressive media apply regardless of the format size or frame rate, so 1080p/23.98 offers the same advantages.

 

Outside of the boundary lines

 

Modern cameras, display systems and NLEs have allowed us to shed a number of boundaries from the past. Thanks to Sony and Laser Pacific, we’ve added 1920x1080psf/23.98. That’s a “progressive segmented frame” running at the video-friendly rate of 23.98 for 24fps media. PsF is really interlacing, except that at the camera end, both fields are captured at the same point in time. PsF allows the format to be “superimposed” onto an otherwise interlaced infrastructure with less impact on post and manufacturing costs.

 

Tapeless cameras have added more wrinkles. A Panasonic VariCam records to tape at 59.94fps (60P), even though you are shooting with the camera set to 23.98fps (24P). This is often called 24-over-60. New tapeless Panasonic P2 camcorders aren’t bound by VTR mechanisms and can record a file to the P2 recording media at any “native” frame rate. To conserve data space on the P2 card, simply record at the frame rate you need, like 23.98pn (progressive, native) or 29.97pn. No need for any redundant frames (added 3:2 pulldown) to round 24fps out to 60fps as with the VariCam.

 

I’d be remiss if I didn’t address raster size. At the top, I mentioned full-raster and square pixels, but the actual video content recorded in the file cheats this by changing the size and pixel aspect ratio as a way of reducing the data rate. This will vary with codec. For example, DVCPRO HD records at a true size of 960×720 pixels, but displays as 1280×720 pixels. Proper display sizes of such files (as compared with actual file sizes) are controlled by the NLE software or a media player application, like QuickTime.
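Put another way, the player stretches each stored pixel horizontally by a factor of 1280 ÷ 960 = 4/3 (a pixel aspect ratio of roughly 1.33), which is how the codec drops a quarter of the horizontal samples yet still fills a 1280×720 frame on playback.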

 

Mixing it up

 

Editors routinely have to deal with a mix of frame rates, image sizes and aspect ratios, but ultimately this all has to go to tape or distribution through the funnel of the two accepted HD broadcast formats (720p/59.94 and 1080i/59.94). PLUS good old fashioned NTSC and/or PAL. For instance, if you work on a TV or film project being mastered at 1920x1080p/23.98, you need to realize several things: few displays support native 23.98 (24P) frame rates. You will ultimately have to generate not only a 23.98p master videotape or file, but also “broadcast” or “air” masters. Think of your 23.98p master as a “digital internegative”, which will be used to generate 1080i, 720p, NTSC, PAL, 16×9 squeezed, 4×3 center-cut and letterboxed variations.

 

Unfortunately your NLE won’t totally get you there. I recently finished some spots in 1080p/23.98 on an FCP system with a KONA2 card. If you think the hardware can convert to 1080i output, guess again! Changing FCP’s Video Playback setting to 1080i is really telling the FCP RT engine to do this in software, not in hardware. The ONLY conversions done by the KONA hardware are those available in the primary and secondary format options of the AJA Control Panel. In this case, only the NTSC downconversion gets the benefit of hardware-controlled pulldown insertion.

 

OK, so let FCP do it. The trouble with that idea is that yes, FCP can mix frame rates and convert them, but it does a poor job of it. Instead of the correct 2:3:2:3 cadence, FCP uses the faster-to-calculate 2:2:2:4. The result is an image that looks like frames are being dropped, because the fourth frame is always being displayed twice, resulting in a noticeable visual stutter. In my case, the solution was to use Apple Compressor to create the 1080i and 720p versions and to use the KONA2’s hardware downconversion for the NTSC Beta-SP dubs. Adobe After Effects also functions as a good software conversion tool.
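To visualize the difference between those two cadences, here is a small Python sketch that expands four 24p frames (A, B, C, D) into ten 60i fields each way:

    def expand(frames, cadence):
        # Repeat each source frame for its allotted number of fields.
        fields = []
        for frame, count in zip(frames, cadence):
            fields.extend([frame] * count)
        return fields

    frames = ["A", "B", "C", "D"]
    print(expand(frames, [2, 3, 2, 3]))  # A A B B B C C D D D  (proper 3:2 pulldown)
    print(expand(frames, [2, 2, 2, 4]))  # A A B B C C D D D D  (frame D held for two full frames, hence the stutter)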

 

Another variation to this dilemma is the 720pn/29.97 (aka 30PN) of the P2 cameras. This is an easily edited format in FCP, but it deviates from the true 720p/59.94 standard. Edit in FCP with a 29.97p timeline, but when you change the Video Playback setting to 59.94, FCP converts the video on-the-fly to send a 60P video stream to the hardware. FCP is adding 2:2 pulldown (doubling each frame) to make the signal compliant. Depending on the horsepower of your workstation, you may, in fact, lower the image resolution by doing this. If you are doing this for HD output, it might actually be better to convert or render the 29.97p timeline to a new 59.94p sequence prior to output, in order to maintain proper resolution.

 

Converting to NTSC

 

But what about downconversion? Most of the HD decks and I/O cards you buy have built-in downconversion, right? You would think they do a good job, but when images are really critical, they don’t cut it. Dedicated conversion products, like the Teranex Mini, do a far better job in both directions. I delivered a documentary to HBO and one of the items flagged by their QC department was the quality of the credits in the downconverted (letterboxed) Digital Betacam back-up master. I had used rolling end credits on the HD master, so I figured that changing the credits to static cards and bumping up the font size a bit would make it a lot better. I compared the converted quality of these new static HD credits through FCP internally, through the KONA hardware, and through the Sony HDW-500 deck. None of these looked as crisp and clean as simply creating new SD credits for the Digital Betacam master. Downconverted video and even lower-third graphics all looked fine on the SD master – just not the final credits.

 

The trouble with flat panels

 

This would be enough of a mess without display issues. Consumers are buying LCDs and plasmas. CRTs are effectively dead. Yet, CRTs are the only devices that properly display interlacing – especially if you are troubleshooting errors. Flat panels all go through conversions and interpolation to display interlaced video in a progressive fashion. Going back to the original 720p versus 1080i options, I really have to wonder whether the rapid technology change in display devices was properly forecast. If you shoot 1080p/23.98, this often gets converted to a 1080i/59.94 broadcast master (with added 3:2 pulldown) and is transmitted to your set as a 1080i signal. The set converts the signal. That’s the best case scenario.

 

Far more often, the production company, network, and local affiliate haven’t adopted the same HD standard. As a result, there may be several 720p-to-1080i and/or 1080i-to-720p conversions that happen along the way. To further complicate things, many older consumer sets are native 720p panels and scale a 1080 image. Many include circuitry to remove 3:2 pulldown and convert 24fps programs back to progressive images. This is usually called the “film” mode setting. It generally doesn’t work well with mixed-cadence shows or rolling/crawling video titles over film content.

 

The newest sets are 1080p, which is a totally bogus marketing feature. These are designed for video game playback and not TV signals, which are simply frame-doubled. All of this mish-mash – plus the heavy digital compression used in transmission – makes me marvel at how bad a lot of HD signals look in retail stores. I recently saw a clip from NBC’s Heroes on a large 1080p set at a local Sam’s Club. It was far more pleasing to me on my 20” Samsung CRT at home, received over analog cable, than on the big 1080p digital panel.

 

Progress (?) marches on…

 

We can’t turn back time, of course, but my feeling about displays is that a 29.97p (30P) signal is the “sweet spot” for most LCD and plasma panels. In fact, 720p on most of today’s consumer panels looks about the same as 1080i or 1080p. When I look at 23.98 (24P) content as 29.97 (24p-over-60i), it looks proper to my eyes on a CRT, but a bit funky on an LCD display. On the other hand, 29.97 (30P) strobes a bit on a CRT, but appears very smooth on a flat panel. Panasonic’s 720p/59.94 looks like regular video on a CRT, but 720p recorded as 30p-over-60p looks more film-like. Yet both signals actually look very similar on a flat panel. This is likely due to the refresh rates and image latency in an LCD or plasma panel as compared to a CRT. True 24P is also fine if your target is the web. As a web file it can be displayed as true 24fps without pulldown. Remember that as video, though, many flat panels cannot display 23.98 or 24fps frame rates without pulldown being added.

 

Unfortunately there is no single, best solution. If your target distribution is for the web or primarily to be viewed on flat panel display devices (including projectors), I highly recommend working strictly in a progressive format and a progressive timeline setting. If interlacing is involved, then make sure to deinterlace these clips or even the entire timeline before your final delivery. Reserve interlaced media and timelines for productions that are intended predominantly for broadcast TV using a 480i (NTSC) or 1080i transmission.

 

By now you’re probably echoing the common question, “When are we going to get ONE standard?” My answer is that there ARE standards – MANY of them. This won’t get better, so you can only prepare yourself with more knowledge. Learn what works for your system and your customers and then focus on those solutions – and yes – the necessary workarounds, too!

 

Does your head hurt yet?

 

© 2009 Oliver Peters

Avid ScriptSync – Automating Script Based Editing

Script continuity is the basis of organizing any dramatic television production or feature film. The script supervisor’s so-called lined script provides editors with a schematic for the coverage available for each scene in the script and is the basis for the concept of script based editing. As a scene is filmed the supervisor writes the scene and take number at the dialogue line on the script page where the shot starts and then draws a vertical line down through the page, stopping at the point when the director calls “cut”. As the director films various takes for master shots, close-ups and pick-ups, each one is indicated on that page with a scene/take number and a corresponding vertical line.

 

Script based editing for nonlinear systems has its origins in Cinedco’s Ediflex. To prepare dailies, assistant editors used a process called Script Mimic. They would draw numbered horizontal lines across the script at every sentence or paragraph of dialogue. Once dailies were available, the assistant would next enter timecodes that corresponded to this script breakdown for each scene and take. Ediflex used a unique lightpen-driven interface and a screen layout similar to the appearance of an edit decision list. Clicking on the intersection on the screen of a vertical (scene/take) and horizontal (dialogue line) entry permitted the editor to instantly zero in on the exact line of dialogue from any given take loaded by the assistant.

 

After the demise of Cinedco, the intellectual property of Ediflex’s Script Mimic ended up in the hands of Avid Technology. This formed the basis of Avid’s own Script Integration feature, first introduced in 1998 as a function within the Media Composer and Film Composer product family. The script based toolset has continued to be developed ever since and is available in both Avid Media Composer and Avid Xpress Pro software. Since this is a patented technology, Avid is the only nonlinear editing company to offer this feature and no competitor has anything to offer that’s even remotely close.

 

 

Script Based Editing Becomes Faster Than Ever

 

To date, Avid script based editing has generally stayed in the domain of episodic television shows and feature films. These are productions that budget the time and money for assistant editors, who in turn take over the responsibility of getting dailies ready for the editor so he or she can take advantage of these tools. Until recently this has been a time-consuming process. A year ago, Avid released ScriptSync as part of the Avid Media Composer 2.7 software (not included with Avid Xpress Pro). ScriptSync uses voice recognition technology licensed from Nexidia to automate the match of a media clip with the text of the script.

 

Here’s a quick overview. To use script based editing you first have to import the script. This has to be an ASCII text file with the document formatting maintained. Most film and TV writers use Final Draft to write their scripts and this application already has an “export for Avid” function. Inside the Media Composer interface, open the script bin and corresponding clip bin. Highlight a section of the script with the dialogue for those clips and then drag-and-drop one or several clips onto the highlighted section of the script. Now the script bin is updated to display the same vertical lines drawn through the text as you would see in a script supervisor’s lined script. In addition, if there are portions of the dialogue that are off-camera for a character, the software lets you highlight those dialogue lines and add the same sort of squiggly notation for that sentence or paragraph as you’d see in the supervisor’s hand-written notations.
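
For a rough sense of what that imported text looks like to software, here’s a hypothetical Python sketch that numbers the dialogue paragraphs of a generic plain-text script, much as the old Script Mimic breakdown did. The character names and layout are invented; Final Draft’s actual “export for Avid” format is not reproduced here.

# Hypothetical sketch: number the dialogue paragraphs of a plain-text script.
SCRIPT = """\
CHARACTER ONE
I told you this would happen.

CHARACTER TWO
You told me no such thing.

CHARACTER ONE
Check the tape. I definitely told you.
"""

def number_dialogue(text):
    """Return (line number, character, dialogue) for each paragraph of dialogue."""
    numbered = []
    for n, block in enumerate(text.strip().split("\n\n"), start=1):
        character, _, dialogue = block.partition("\n")
        numbered.append((n, character.strip(), dialogue.strip()))
    return numbered

for n, character, dialogue in number_dialogue(SCRIPT):
    print(n, character, "-", dialogue)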

 

Now the true magic happens. Once you’ve established the link between the text and the media, highlight the clips and select ScriptSync from the pulldown menu. At this point voice recognition analysis kicks in. According to Avid’s explanation, phonetic characters are generated for the text in the script and these are matched to the waveforms of the audio tracks. There are various preference settings that can be adjusted, which will affect the results. For example, first pick one of the nine languages that are recognized so far. You can select from audio tracks A1, A2 or both, in the case where different speakers are separated onto different channels. Lastly, there are settings to skip or ignore certain text conditions, like capital letters, which might be used for character names or scene descriptions in the script.
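
To give a feel for what that text-to-audio matching involves, here is a deliberately crude, conceptual Python sketch – emphatically not Nexidia’s or Avid’s algorithm. It reduces script words and a pretend recognizer transcript to rough keys and finds where a line of the script best matches a timed word stream; real phonetic indexing is far more sophisticated and works on the audio waveform itself.

# Conceptual sketch only -- NOT the Nexidia/Avid method. Script words and a
# pretend recognizer transcript are reduced to crude keys, then aligned to find
# the time at which a given script line begins.
import difflib
import re

def crude_key(word):
    """Very rough key: lowercase, strip punctuation, drop vowels after the first letter."""
    w = re.sub(r"[^a-z]", "", word.lower())
    return w[:1] + re.sub(r"[aeiou]", "", w[1:]) if w else ""

# Pretend speech-recognizer output: (word, seconds into the clip)
RECOGNIZED = [("you", 3.1), ("told", 3.3), ("me", 3.6), ("no", 3.8), ("such", 4.0), ("thing", 4.3)]

def locate_line(script_line):
    """Return the time at which the script line best matches the recognized words."""
    target = [crude_key(w) for w in script_line.split()]
    stream = [crude_key(w) for w, _ in RECOGNIZED]
    match = difflib.SequenceMatcher(None, stream, target).find_longest_match(
        0, len(stream), 0, len(target)
    )
    return RECOGNIZED[match.a][1] if match.size else None

print(locate_line("You told me no such thing."))   # 3.1

Run against every numbered dialogue line, timing data like this is what lets the software drop a node at each line of dialogue in a take – which is exactly what the next paragraph describes.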

 

Several clips can be analyzed simultaneously at a rate far faster than real-time. Once this is completed, each vertical line descending from a media clip will have a series of nodes at each line of dialogue. By simply clicking on one of these points, the editor has instant access to that exact line of dialogue on any one of the applicable clips. A process that used to take hours has literally been reduced to minutes, making this probably one of the greatest productivity gains of any new NLE feature to come along in years.

 

 

Avid ScriptSync In The Real World

 

Avid’s script based editing is a tool that many experienced editors have never used, but it’s also one that other editors simply can’t live without. I had a chance to explore this with Brian Schnuckel and Zene Baker, two film and television editors who rely on it for their projects. Schnuckel has most recently been editing Just Jordan, a Nickelodeon sitcom that’s in its second season. In the first season, this was a single-camera show and the assistant editor handled script preparation manually. Season two is a multi-camera show shot in two days. One of the biggest challenges for script based editing is ad libs or dialogue changes.

 

According to Brian, “When there are relatively simple changes, like a few words that are different, it’s not too bad and ScriptSync is smart enough to skip over these and catch up to the right point in the dialogue. However, it’s tougher when whole lines of dialogue are different. Then my assistant has to sync these areas by hand again.” The Avid software does permit you to cut, copy and paste changes directly in the script bin, but you can only work with lines or paragraphs, not individual words. Since you cannot undo these changes, Avid recommends making such changes in a word processor and then pasting the new text into the script bin.

 

Schnuckel continued, “Restarts are the biggest problem. Avid is working on ways to tell the software to ignore certain areas, but for the time being, these issues have to be fixed by hand. Depending on the production, these fixes offset the gains offered by ScriptSync’s automation, so you might not end up saving as much time as you’d hoped.” Surprisingly, ScriptSync doesn’t have much trouble sifting through less-than-pristine audio. Editors even report little or no issue with actors who speak English with a heavy foreign accent. In spite of a few issues, Schnuckel reports, “I’ve really come to rely on this feature and would have to change my whole workflow if I were editing with another system.”

 

Zene Baker is currently cutting a low-budget, indie feature with the working title of She Lived. He reports a preference for Avid Media Composer over Apple Final Cut Pro, because “there’s less to worry about and it’s easier to be your own assistant.” Baker is cutting She Lived on what he describes as a “poor man’s Unity”: two Media Composer systems connected via Ethernet, each working with a set of duplicate media files. Baker explained his experience with script based editing. “I was familiar with the old manual way prior to ScriptSync and found it to be very time-consuming, but I tried it on a few short projects and liked it. I have the luxury of an assistant editor on She Lived, as well as receiving digital dailies on hard drives. This frees up some of the more tedious operations an assistant would normally be busy with and allows her to prepare the material more thoroughly with Avid’s script based editing and ScriptSync. The weakest area is still with restarts and script page changes. Feature films that are shot rigidly according to the script work the best, and comedy, with its ad libs, is still the toughest.”

 

 

Working Smarter

 

Both editors pinpointed the same software weaknesses, such as restarts, but Baker suggested that first subclipping the takes that had restarts was a good workaround. Another tip he offered was to create numerous bins with a smaller number of scenes in each bin. “It just gets to be too much data for the system to handle efficiently if you try to work in a single master script bin with all the clips tied to the film script. Instead, import the script into several bins and then just work with one to five scenes in each bin.”

 

Although the software continues to evolve, both Baker and Schnuckel pointed out one key advantage. As Brian put it, “It makes you look smarter! The session just goes more smoothly when you’re in the room with the director and every take is at your fingertips.” Zene added, “If you don’t have an assistant, you’d really have to weigh the advantages against the schedule. You spend more time on the front end, but you really make it up on the back end. It’s a real time-saver when people are in the room. I just love the feature of highlighting a section of dialogue and quickly being able to see and hear every bit of coverage for that line.”

 

Generally script based editing pops up among scripted TV drama and film editors, but it’s a great tool for other productions, too. For example, documentaries and reality television shows typically transcribe all of the spoken raw footage, such as interviews. This type of footage is a natural fit for script based editing and Avid’s ScriptSync. Just think: with one click you can find any word or sentence within a lengthy interview, and better yet, the script bin works with Media Composer’s internal Find and Find Next commands. Avid’s script based editing and ScriptSync form a clear advantage over the competition, so if you cut on Avid NLEs and have never tried it, you’re only the next project away from making this a key part of your workflow.

 

Written by Oliver Peters for Videography magazine (NewBay Media, LLC)