Analogue Wayback, Ep. 15

A radio station with pictures.

The mid-80s found me working for a year at a facility that operated two radio stations and owned two satellite transponders. I managed the video production side of the company. Satellite space was hard to get at the time, so the company ran its own network on one transponder and sublet the other to a different company for its network.

At that same time, MTV had come to the end of its first contract with the cable companies, and many of them wanted other options. Creating an alternative music video channel was of interest to us. Unfortunately, our other transponder client was still leasing space during that short window when cable companies could have chosen an alternative rather than renewing with MTV. It was a missed opportunity, because shortly thereafter that client moved on anyway, leaving us with an unfilled satellite transponder. In spite of the unfortunate timing, our company’s owner still decided to launch a new, competing music video network instead of seeking a new client. That new channel was called Odyssey.

As head of production, I was part of the team tasked with figuring out the hardware and general operation of this network. This was the era of the early professional videocassette formats, so we settled on the first generation of M-format decks from Panasonic.

The M-format was a professional videocassette format developed by Panasonic and RCA. It was marketed under the Recam name by Panasonic, RCA, and Ampex. Much like VHS versus Betamax, it was Panasonic’s M-format versus Sony’s Betacam. M-format decks recorded onto standard VHS videocassettes that ran at a faster speed. They used component analog instead of composite recording. This first generation of the M-format was later replaced by the MII series, which had a slightly better professional run, but ultimately still failed in the marketplace.

It was important for us to use a premium brand of VHS tape in these decks, since music videos would play in a high rotation, putting wear and tear on the tape. The Odyssey master control featured seven decks, plus a computer-controlled master control system designed to sequence the playlist of videos, commercials, promos, etc. The computer system was developed by Larry Seehorn, a Silicon Valley engineer who was one of the early developers of computer-assisted, linear editing systems.

We launched at the end of the year, right at the start of the holiday week between Christmas and New Year. Everything was off and running… until the playlist computer crashed. We quickly found out that the system would only support 1,500 events and then stop, something the manufacturer had failed to disclose when we purchased it. You had to load a new list and start over, losing a lot of time in between. That would have been fine in a normal TV station operation, where long program segments run between commercial breaks. For us it wasn’t, because we only had the length of a single music video in which to reload and restart a new playlist.

Fortunately, as a back-up in case of some sort of system failure, we had prepared a number of hour-long 1″ video tapes with blocks of music videos in advance. Running these allowed us to continue operating temporarily while we figured out plan B.

Ultimately the solution we settled on was to chuck the master control computer and replace it with a Grass Valley master control switcher. This was an audio-follows-video device, meaning that switching sources simultaneously switched audio and video. If you used the fader bar to dissolve between sources, it would also mix between the audio sources. This now became a human-controlled operation with the master control operator loading and cueing tapes, switching sources, and so on. Although manual, it proved to be superior to a playlist-driven automated system.

The operators effectively became radio station disc jockeys, and the same guidelines applied. Our radio station program director selected the music and set up a manual playlist and a “clock” for song genre and commercial rotation. Music videos sent to us by the record labels were copied to the M-format VHS tapes with a countdown and any added graphics, such as music video song credits. Quite frankly, our song selection was more diverse than the original MTV’s. In addition, having human operators allowed us to adjust timing on the fly in ways that an automated list couldn’t.

As ambitious as this project was, it had numerous flaws. The company was unable to get any cable provider to commit to a full channel as they had with MTV. Consequently, programming was offered to any broadcast station or cable company in any market on a first-come, first-served basis, but without a minimum time commitment. If a small, independent TV station in a large market decided to contract for only a few hours on the weekend, it locked up that entire market.

The other factor that worked against Odyssey was that Turner Broadcasting had already tried to launch its own music channel with a LOT more money, and Turner’s effort crashed and burned within a month. Needless to say, our little operation was viewed with much skepticism. Many would-be customers and advertisers decided to hold off for at least a year to see whether we’d still be in business. Of course, that didn’t help our bottom line.

In spite of these issues, Odyssey hung on for ten months before the owner finally threw in the towel. Even though it didn’t work out, and I had moved on by then anyway, it was a very fun experience that took me back to my start in radio.

©2022 Oliver Peters

Virtual Production

Thanks to the advances in video game software and LED display technology, virtual production has become an exciting new tool for the filmmaker. Shows like The Mandalorian have thrust these techniques into the mainstream. To meet the demand, numerous companies around the world are creating virtual production sound stages, often referred to as “the volume.” I recently spoke with Pixomondo and Trilith Studios about their moves into virtual production.

Pixomondo

Pixomondo is an Oscar and Emmy-winning visual effects company with multiple VFX and virtual production stages in North America and Europe. Their virtual production credits include the series Star Trek: Strange New Worlds and the upcoming Netflix series Avatar: The Last Airbender.

The larger of the two virtual production stages at Pixomondo’s Toronto facilities is 300 feet x 90 feet and 24 feet tall. The LED screen system is 72 feet in diameter. Josh Kerekas is Pixomondo’s Head of Virtual Production.

Why did Pixomondo decide to venture into virtual production?

We saw the potential of this new technology and launched a year-long initiative to get our virtual production division off the ground. We’re really trying to embrace real-time technology, not just in the use case of virtual production in special studios, but even in traditional visual effects.

Click here to continue this article at postPerspective.

©2022 Oliver Peters

Analogue Wayback, Ep. 14

What’s old is new again.

When I watch shows like The Mandalorian and learn about using the volume, it becomes apparent that such methods conceptually stem from the earliest days of film. Some of these old school techniques are still in use today.

Rear-screen projection draws the most direct line to the volume. In its simplest form, there’s a translucent screen behind the talent. Imagery is projected onto the screen from behind. The camera sees the actors against this background scene as if it were a real set or landscape. No compositing is required, since everything is captured in-camera. In old films, this was a common technique for car driving scenes. David Fincher used the same approach for Mank, except that large, high-resolution video screens took the place of projected images.

Front-screen projection is a similar process. The camera faces a special reflective backdrop coated with tiny glass beads. A two-way mirror block sits between the camera lens and the talent, who stands in front of the screen. A projection source sits at 90 degrees to the camera and shines into the mirror, which is set at a 45-degree angle inside the block. This casts the image onto the reflective backdrop. The camera shoots through this same mirror and sees both the talent and the projected image behind them, producing a result much like rear-screen projection.

The trick is that the projected image is also shining onto the talent, but you don’t actually see it on the talent. The reason is that the projector light level is so low that it’s washed out by the lighting on the talent. The glass beads of the backdrop act as tiny lenses to focus the light of the projected background image back towards the camera lens. The camera sees a proper combination without contamination onto the talent, even if that’s not what you see with the naked eye.

A similar concept is used in certain chromakey techniques. A ring light on the camera lens shines green or blue light onto the talent and the grey, reflective backdrop behind the talent. This backdrop also contains small glass beads that act as tiny lenses. The camera sees color-correct talent, but instead of grey, it’s a perfect green or blue screen behind them.

Aerial image projection is a cool technique that I haven’t personally seen used in modern production, although it’s probably still used in some special effects work. The process was used in multimedia production to add camera moves on still images. In a sense it led to digital video effects. There’s a projection source that shines an image onto a translucent, suspended pane of ground glass. A camera is positioned on the opposite side, so both camera and projector face the glass pane. The projected image is focused onto the glass, so that it’s crisp. Then the camera records the image, which can be resized as needed. In addition, a camera operator can add camera moves while recording the projected image that is “floating” on the glass pane.

©2022 Oliver Peters

Six Premiere Pro Game Changers

When a software developer updates an editing application, users look for big changes, fancy features, and new functionality. Unfortunately, the small updates that can really change your day-to-day workflow often go overlooked.

Ever since the shift to its Creative Cloud subscription model, Adobe has brought a string of updates to its core audio and video applications. Although several have made big news, the more meaningful changes often seem less than awe-inspiring to Adobe’s critics. Let me counter that narrative and point out six features that have truly improved the daily workflow for my Premiere Pro projects.

Auto Reframe Sequence. If you deliver projects for social media outlets, you know that various vertical formats are required. This is truly a pain when starting with content designed for 16×9 horizontal distribution. The Auto Reframe feature in Premiere Pro makes it easy to reformat any sequence for 9×16, 4×5, and 1×1 formats. It takes care of keyframing each shot to follow an area of interest within that shot, such as a person walking.

While other NLEs, like Final Cut Pro, also offer reformatting for vertical aspect ratios, none offer the same degree of automatic control to reposition the clip. It’s not perfect, but it works for most shots. If you don’t like the results on a shot, simply override the existing keyframes and manually reposition the clip. Auto Reframe works best if you start with a flattened, textless file, which brings me to the next feature.
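For a sense of what the feature is doing under the hood, here’s a rough Python sketch of the basic crop math (my own illustration, not Adobe’s code): carve a 9×16 window out of a 16×9 frame, center it on a detected area of interest, and clamp it to the frame edges. The function name and numbers are made up.

```python
# Rough sketch (not Adobe's implementation): compute a vertical crop window
# from a horizontal source frame, centered on a detected point of interest.

def reframe_crop(src_w, src_h, target_aspect, interest_x):
    """Return (x, y, w, h) of a crop window with the target aspect ratio,
    horizontally centered on interest_x and clamped to the frame edges."""
    crop_h = src_h                      # keep full height for a vertical crop
    crop_w = round(crop_h * target_aspect)
    x = round(interest_x - crop_w / 2)  # center on the area of interest
    x = max(0, min(x, src_w - crop_w))  # clamp so the crop stays inside frame
    return x, 0, crop_w, crop_h

# Example: a 1920x1080 shot, subject detected at x=1400, output 9:16
print(reframe_crop(1920, 1080, 9 / 16, 1400))  # -> (1096, 0, 608, 1080)
```

Premiere keyframes this kind of repositioning per shot automatically; the sketch only shows the geometry for a single frame.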

Scene Edit Detection. This feature is generally used in color correction to automatically find the cuts between shots in a flattened file. The single clip in the sequence is split at each detected cut point. While you can use it for color correction in Premiere Pro as well, it is also useful when auto-reframing a sequence for verticals. If you apply Auto Reframe to a flattened file as-is, Premiere will attempt to analyze and keyframe across the entire sequence, since it’s one long clip. With the splices created by Scene Edit Detection, Premiere can analyze each shot separately within the flattened file.
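Conceptually, cut detection comes down to flagging frames where the picture changes abruptly. Below is a minimal frame-differencing sketch in Python with OpenCV; it illustrates the idea only and is not Adobe’s algorithm, and the file name and threshold are hypothetical.

```python
# Minimal sketch of cut detection by frame differencing: flag a cut wherever
# the average pixel change between consecutive frames spikes.
import cv2  # pip install opencv-python

def detect_cuts(path, threshold=30.0):
    cap = cv2.VideoCapture(path)
    cuts, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and cv2.absdiff(gray, prev).mean() > threshold:
            cuts.append(index)      # likely splice point in the flattened file
        prev, index = gray, index + 1
    cap.release()
    return cuts

print(detect_cuts("flattened_master.mov"))  # hypothetical file name
```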

Auto Transcribe Sequence / Captioning. Modern deliverables take into account the challenges many viewers face. Closed captions, for example, are vital to hearing-impaired viewers, and many viewers with normal hearing turn them on for a variety of reasons. Just a few short years ago, getting interviews transcribed, adding subtitles for foreign languages, or creating closed captions required an outside service, often at considerable cost.

Adobe’s first move was to add caption and subtitle functions to Premiere Pro, which enabled editors to import, create, and/or edit caption and subtitle text. This text can be exported as a separate sidecar file (such as .srt) or embedded into the video file. In a more recent update, Adobe augmented these features with Auto Transcribe. It’s included as part of your Creative Cloud subscription and there is generally no length limitation for reasonable use. If you have an hourlong interview that needs to be transcribed – no problem. 

Adobe uses cloud-based AI for part of the transcription process, so an internet connection is required. The turnaround time is quite fast and the accuracy is among the best I’ve encountered. While the language options aren’t as broad as some competitors’, the most common Romance and Asian languages are covered. Once the speech-to-text analysis is complete, that text can be used as a transcription or as captions (closed captions and/or subtitles). The transcription can also be exported as a text file with timecode, which is handy when producers create a paper cut for the editor.
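As a point of reference, the .srt sidecar format mentioned above is plain text and simple to generate. The short Python sketch below writes (start, end, text) caption segments to an SRT file; the sample lines and file name are invented.

```python
# Write caption segments as an .srt sidecar file; times are in seconds.

def to_srt_time(t):
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{int(s):02d},{int((s % 1) * 1000):03d}"

def write_srt(segments, path):
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, 1):
            f.write(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n\n")

write_srt([(0.0, 2.5, "Welcome back to the show."),
           (2.5, 5.0, "Today we look at captions.")], "interview.srt")
```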

Remix. You’ve just cut a six-minute corporate video and now you have to edit a needle drop music cue as a bed. It’s only 2:43, but needs to be extended to fit the 6:00 length and correctly time out to match the ending. You can either do this yourself or let Adobe tackle it for you. Remix came into Premiere Pro from Audition. This feature lets you use Adobe Sensei (their under-the-hood AI technology) to automatically re-edit a music track to a new target length. 

Open the Essential Sound panel, designate the track containing the cue as Music, enable the Duration tab, and select Remix. Set your target length and see what you get. You can customize the number of segments and variations to make the track sound less repetitive if needed. Some tracks have long fade-outs, so you may have to overshoot your target length in order to get the fade to coincide properly with the end of the video. I often still make one manual music edit to get it just right. Nevertheless, the Remix feature is a great time-saver that usually gets me 90% of the way there.
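If you’re curious about the retiming arithmetic involved, extending a cue mostly amounts to repeating a loopable middle section enough times to land near the target length. The toy Python sketch below uses made-up segment lengths and is nothing like Sensei’s musical analysis, but it shows the basic math.

```python
# Back-of-the-envelope sketch of the retiming problem Remix solves: repeat a
# loopable middle section enough times to land near a target length.
# Segment lengths here are hypothetical; Sensei chooses edit points musically.

def plan_remix(intro, loop, outro, target):
    repeats = max(0, round((target - intro - outro) / loop))
    return repeats, intro + repeats * loop + outro

repeats, length = plan_remix(intro=20.0, loop=32.0, outro=23.0, target=360.0)
print(repeats, length)  # 10 repeats -> 363.0 s, close to the 6:00 target
```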

Audition. If you pay for a full Creative Cloud subscription, then you benefit from the larger Adobe ecosystem. One of those applications is Audition, Adobe’s digital audio workstation (DAW) software. Audition is often ignored in most DAW roundups, because it doesn’t include many music-specific features, like software instruments and MIDI. Instead, Audition is targeted at general audio production (VO recordings, podcasts, commercials) and audio-for-video post in conjunction with Premiere Pro. Audition is designed around editing and processing a single audio file or for working in a multitrack session. I want to highlight the first method here.

Noise in location recordings is a fact of life for many projects. Record an interview in a working commercial kitchen and there will be a lot of background noise. Premiere Pro includes a capable noise reduction audio filter, which can be augmented by many third party tools from Accusonus, Crumplepop, and of course, iZotope RX. But if the Premiere Pro filter isn’t good enough, you need look no further than Audition. Export the track(s) from Premiere and open those (or the original files) in Audition.

Select the Noise Reduction/Restoration category under the Effects pulldown menu. First capture a short noise print in a section of the track with only background noise. This “trains” the filter for what is to be removed. Then select Noise Reduction (process). Follow the instructions and trust your own hearing to remove as much noise as possible with the least impact on the dialogue. If the person speaking sounds like they are underwater, then you’ve gone too far. Apply the effect in order to render the processing and then bounce (export) that processed track. Import the new track into Premiere. While this is a two-step process, you aren’t encumbering your computer with any real-time noise reduction filter when using such a pre-processed audio file.
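The noise-print workflow described above is essentially spectral subtraction: learn the average noise spectrum from a voice-free stretch, then subtract it from the whole clip. Here’s a bare-bones Python sketch of that concept, not Audition’s actual processing; it assumes a mono WAV file, and the file names and the two-second noise region are hypothetical.

```python
# Minimal spectral-subtraction sketch: capture a noise print, subtract it,
# keep the original phase, and bounce a processed file for Premiere.
import numpy as np
import soundfile as sf                      # pip install soundfile
from scipy.signal import stft, istft

audio, sr = sf.read("kitchen_interview.wav")        # hypothetical mono file
_, _, noise_spec = stft(audio[: sr * 2], fs=sr)     # first 2 s = noise only
noise_print = np.abs(noise_spec).mean(axis=1, keepdims=True)

_, _, spec = stft(audio, fs=sr)
cleaned_mag = np.maximum(np.abs(spec) - noise_print, 0.0)   # subtract the print
cleaned = cleaned_mag * np.exp(1j * np.angle(spec))         # keep original phase
_, processed = istft(cleaned, fs=sr)

sf.write("kitchen_interview_nr.wav", processed, sr)         # import into Premiere
```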

Link Media. OK, I know relinking isn’t new to Premiere Pro and it’s probably not a marquee feature for editors always working with native media. When moving projects from offline to online – creative to finishing editorial – you know that if you cannot properly relink media files, a disaster will ensue.

Media Composer, Final Cut Pro, and Resolve all have relink functions. They work well with application-controlled, optimized media, but when it comes to camera-original, native files, they might not work at all. I find that Premiere Pro handles relinking a wide variety of media files better than any of these NLEs, precisely because the user has a lot of control over the relink criteria. It isn’t left entirely up to the application.

Premiere Pro expects the media to be in the same relative path on the drive. Let’s say that you move an entire project to a different folder (for instance, from Active Projects to Archived Projects) on your storage system. Navigate to and locate the first missing file, and Premiere will find all the rest.
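The relative-path behavior can be sketched in a few lines: when a whole project folder moves, mapping the old root to the new one is enough to locate every other file at the same relative path. The Python sketch below is my own illustration, not Premiere’s logic, and the folder names in the comment are hypothetical.

```python
# Relink by relative path: assume the project folder moved as a whole,
# so each missing file sits at the same relative path under the new root.
from pathlib import Path

def relink(missing_paths, old_root, new_root):
    """Map each missing file's old absolute path to its new location."""
    found = {}
    for old in missing_paths:
        rel = Path(old).relative_to(old_root)   # path inside the project folder
        candidate = Path(new_root) / rel
        if candidate.exists():
            found[str(old)] = str(candidate)
    return found

# e.g. the whole job moved from Active Projects to Archived Projects:
# relink(missing, "/Volumes/Media/Active Projects/Job_042",
#                 "/Volumes/Media/Archived Projects/Job_042")
```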

The relinking procedure is also quite forgiving, because the various file criteria used to relink can be checked or unchecked. For example, I frequently edit with watermarked temporary music tracks, which are 44.1kHz MP3 files. When the cut is approved and the music is licensed, I download new, non-watermarked versions of that music as 48kHz WAV or AIF files. Premiere Pro easily relinks to the WAV or AIF files instead of the MP3s once I point it in the right direction. All music edits (including internal edits made by Remix) stay as intended and there is no mismatch due to the sample rate change.

These features might not make it into everyone’s Top 10 list, but they are tools generally not found in other NLEs. I use them quite often to speed up the session and remove drudgery from the editing process.

©2022 Oliver Peters