Impressions of NAB 2023

2023 marks the 100th year of the NAB Convention, which started out as a radio gathering in New York City. This year you could add ribbons to your badge indicating the number of years you’d attended – 5, 10, etc. My first NAB was 1979 in Dallas, so I proudly displayed the 25+ ribbon. Although I haven’t attended every show in the intervening years, I have been to well over 25.

Some have been ready to sound the death knell for large, in-person conventions, thanks to the pandemic and the proliferation of online teleconferencing services like Zoom. 2019 was the last pre-COVID year, with an attendance of 91,500 – down from previous highs of over 100,000. 2022 was the first post-COVID NAB and attendance was around 52,400. That was respectable given the climate a year ago. This year’s attendance was over 65,000, so the trend is certainly upward. If anything, this represents a pent-up desire to kick the tires in person and reconnect with industry friends from all over the world. My gut feeling is that international attendance is still down, so I would expect attendance to grow further in future years.

Breaking down the halls

Like last year, the convention spread over the Central, North, and new West halls. The South hall, with its two floors of exhibition space, has been closed for renovation. The West hall is a three-story complex with a single, large exhibition floor. It’s an entire convention center in its own right. West hall is connected to the North hall by a sidewalk, an enclosed upstairs walkway, and the LVCC Loop (the connecting tunnel that ferries people between buildings in Teslas). From what I hear, next year will be back to the North, Central, and South halls.

As with most NAB conventions, these halls were loosely organized by themes. Location and studio production gear could mostly be found in Central. Post was mainly in the North hall, but next year I would expect it to be back in the South hall. The West hall included a mixture of vendors that fit under connectivity topics, such as streaming, captioning, etc. It also included some of the radio services.

Although the booths covered nearly all of the floor space, it felt to me like many of the big companies were holding back. By that I mean that products with large infrastructure needs (big shared storage systems, large video switchers, huge mixing desks, etc.) were absent. Mounting a large booth at the Las Vegas Convention Center – whether that’s for CES or NAB – is quite costly, with many unexpected charges.

Nevertheless, there were still plenty of elaborate camera sets and huge booths, like that of Blackmagic Design. If this was your first year at NAB, the sheer scale of it all was likely overwhelming. However, I’m sure many vendors were still taking a cautious approach. For example, there was no off-site Avid Connect event, and there were no large-scale press conferences the day before opening.

The industry consolidates

There has been a lot of industry consolidation over the past decade or two, and the pandemic has accelerated it. Many venerable names are now part of larger holding companies. For example, Audiotonix owns many large audio brands, including Solid State Logic, DiGiCo, and Sound Devices, and it added Harrison to its portfolio just in time for NAB. The Sennheiser Group owns both Sennheiser and Neumann. Grass Valley, Snell, and Quantel products have all been consolidated by Black Dragon Capital under the Grass Valley brand. Such consolidation was evident through shared booth space. In many cases, the brands retained their individual identities. Unfortunately for Snell and Quantel, those brands have now been completely subsumed by Grass Valley.

A lot of this is a function of the industry tightening up. While there’s a lot more media production these days, there are also many inexpensive solutions to create that media. Therefore, many companies are venturing outside of their traditional lanes. For example, Sennheiser still manufactures great microphone products, but they’ve also developed the AMBEO immersive audio product line. At NAB they demonstrated the AMBEO 2-Channel Spatial Audio renderer. This lets a mixer take surround mixes and/or stems and turn them into 2-channel spatial mixes that are stereo-compatible. The control software allows you to determine the stereo width and the amount of surround and LFE signal put into the binaural mix. In the same booth, Neumann was demoing its new KH 120-II near-field studio monitors.

General themes

Overall, I didn’t see any single trend that would point to an overarching theme for the show. AI/ML/neural networks were part of many companies’ marketing strategies. Yet, I found nothing that jumped out like the current public fascination with ChatGPT. You have to wonder how much of this is evolutionary rather than revolutionary, and whether the terms themselves are little more than hype.

Stereoscopic production is still around, although I found only one company with a product (Stereotec). Virtual sets were plentiful, including a large display by Vu Studios and even a mobile expando trailer by Magicbox for virtual set production on location. Insta360 was there, but tucked away in the back of the Central hall.

Of course, everyone has a big push for “the cloud” in some way, shape, or form. However, if there is any single new trend that seems to be getting manufacturers’ attention, it’s passing video over IP. The usual companies that have dealt in SDI-based video hardware, like AJA, Blackmagic Design, and Matrox, were all showing IP equivalents. Essentially, where you used to send uncompressed video as SDI signals, you now use the SMPTE ST 2110 suite of IP standards to send the same signals over 10GigE (and faster) Ethernet networks.
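
As a rough back-of-the-envelope illustration of why those faster networks are needed (my own approximation, ignoring blanking, audio, and packet overhead, not vendor specs), here is the payload of a single uncompressed 10-bit 4:2:2 stream:

# Rough uncompressed video bandwidth estimate (10-bit 4:2:2 sampling).
# Ignores blanking, audio channels, and ST 2110 packet overhead.
def video_gbps(width, height, fps, bits_per_pixel=20):
    # 10-bit 4:2:2 averages 20 bits per pixel: 10 for Y, plus Cb/Cr at half rate
    return width * height * bits_per_pixel * fps / 1e9

print(round(video_gbps(1920, 1080, 59.94), 2))   # ~2.49 Gbps for HD
print(round(video_gbps(3840, 2160, 59.94), 2))   # ~9.94 Gbps for UHD

Even one HD stream is far more than a 1GbE port can carry, which helps explain the push toward 10GbE and 25GbE infrastructure elsewhere on the show floor.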

The world of post production

Let me shift to post – specifically Adobe, Avid, and Blackmagic Design. Unlike Blackmagic, neither Avid nor Adobe featured their usual main stage presentations. I didn’t see Apple’s Final Cut Pro anywhere on the floor and had only one sighting of it in the press room. Avid’s booth was a shadow of its former self, with only a few smaller demo pods. Their main focus was showing the tighter integration between Media Composer and Pro Tools (finally!). There were no Pro Tools control surfaces to play with. However, in their defense, NAMM 2023 (the large audio and music products exhibition) was held just the week before, which most likely created a conflict for any audio vendor that exhibits at both shows. NAMM shifts back to January in 2024, which is its historical slot on the calendar.

Uploading media to the cloud for editing has been the mantra at Frame.io, which is now under the Adobe wing. They’ve enhanced those features with direct support from Fujifilm (video) and Capture One (photography). In addition, Frame has improved features specific to the still photography market. Also new to the camera-to-cloud game is Atomos, which demoed its own cloud-based editor developed by asset management developer Axle ai.

Adobe demoed the new, text-based editing features for Premiere Pro. It’s currently in beta, but will soon be in full release. In my estimation, this is the best text-based method of any of the NLEs. Avid’s script-based editing is optimized for scripted content, but doesn’t automatically generate text. Its strength is in scripted films and TV shows, where the page layout mimics a script supervisor’s lined script.

Adobe’s approach seems better for documentary projects. Text is generated through speech-to-text software within Premiere Pro, which now runs on your computer instead of in the cloud. When you highlight text in the transcription panel, it automatically marks the in and out points on that source clip. Then, using the insert and overwrite commands while the transcription panel is still selected, you can automatically edit that portion of the source clip to the timeline. Once you shift your focus to the timeline, the transcription panel displays the edited text that corresponds to the clips on the timeline. Rearrange the text and Premiere Pro automatically rearranges the clips on the timeline. Or rearrange the clips and the text follows.

Meanwhile over at Blackmagic Design’s massive booth, the new DaVinci Resolve 18.5 features were on full display. 18.5 is also in beta. While there are a ton of new features, it also includes automatic speech-to-text generation. This felt to me like a work-in-progress. So far, only English is supported. It creates text for the source and you can edit from the text panel to the timeline. However, unlike Premiere Pro, there is no interaction between the text and clips in the timeline.

I was surprised to see that Blackmagic Design was not promoting Resolve on the iPad. There was only one demo station and no dedicated demo artist. I played with it a bit and it felt to me like it’s not truly optimized for iPadOS yet. It does work well with the Speed Editor keyboard. That’s useful for any user, since the Cut page is probably where anyone would do the bulk of the work in this version of Resolve. When I used the Apple Pencil, the interface gave no feedback as icons were tapped, so I was never quite sure whether an action had registered. I’m not sure many will do a complete edit with Resolve on the iPad; however, it could evolve into a productive tool for preliminary editing in the field.

Here’s an interesting side note. Nearly all of the Blackmagic Design demo pods for DaVinci Resolve were running on Apple’s 24″ candy-colored iMacs. From what I could tell, performance was occasionally a bit sluggish, especially when the operator demoed the new Relight feature to me. Nevertheless, they seemed to work well throughout the show.

In other Blackmagic news, all of the Cloud Store products are now shipping. The Cintel film scanner gets an 8mm gate. There are now IP versions of the video cards and converters. There’s an OLPF version of the URSA Mini Pro 12K and you can shoot vertical video with the Pocket Cinema Camera that’s properly tagged as vertical.

Of course, not everyone wants their raw media in the cloud, and Blackmagic Design wasn’t showing the only storage products. Most of the usual storage vendors were present, including Facilis, OpenDrives, Synology, OWC, and QNAP. The technology trends include a shift away from spinning drives towards solid state storage, as well as faster networking protocols. Quite a few vendors (like Sonnet) were showing 25GbE (and faster) connections. This offers a speed improvement over the 1GbE and 10GbE ports and switches that are currently in common use.

Finally, one of the joys of NAB is to check out the smaller booths, where you’ll often find truly innovative new products. These small start-ups often grow into important companies in our industry. Hedge is just such a company. Tucked into a corner of the North hall, Hedge was demonstrating its growing portfolio of essential workflow products. Another start-up, Colourlab AI shared some booth space there, as well, to show off Freelab, their new integration with Premiere Pro and DaVinci Resolve.

That’s a quick rundown of my thoughts about this year’s NAB Show. For other thoughts and specific product reviews, be sure to also check out NAB coverage at Pro Video Coalition, RedShark News, and postPerspective. There’s also plenty of YouTube coverage.

©2023 Oliver Peters

What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s, dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy, with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs) the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so it is important that a negative cutter be detail-oriented, precise, and clean in their work. You don’t want excess glue at the splices and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a cut is made by mistake and that splice has to be repaired, then at least one frame is lost at the splice.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A Long Time Ago in a Cutting Room Far, Far Away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process points to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function was shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If a work print was generated and edited by a film editor, then it was the video editor’s job to match those edits using the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and operated by an online editor – was the equivalent of the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.
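
As a purely conceptual illustration of the arithmetic those editors were doing by hand, here’s a small sketch. It’s my own example, not a documented method: it assumes 35mm film at 16 frames per foot and a frame-for-frame 24 fps transfer, and it ignores the 2:3 pulldown that a real NTSC transfer would introduce.

# Hypothetical sketch: map a 35mm work print edit point (feet + frames from
# the head of a camera roll) to a timecode on that roll's video transfer.
FRAMES_PER_FOOT = 16   # 35mm, 4-perf
FPS = 24               # assuming a frame-for-frame 24 fps transfer

def feet_frames_to_frames(feet, frames):
    return feet * FRAMES_PER_FOOT + frames

def frames_to_timecode(total_frames, fps=FPS):
    hours, rem = divmod(total_frames, fps * 3600)
    minutes, rem = divmod(rem, fps * 60)
    seconds, frames = divmod(rem, fps)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# Example: the transferred roll starts at tape timecode 01:00:00:00 and the
# work print edit falls 123 feet 5 frames into the roll.
roll_start = 1 * 3600 * FPS                       # 01:00:00:00 expressed in frames
offset = feet_frames_to_frames(123, 5)
print(frames_to_timecode(roll_start + offset))    # -> 01:01:22:05

In practice the math was complicated by pulldown and by reels that didn’t start at convenient round timecodes, which is exactly why it took enterprising editors to make the process reliable.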

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors, and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards, working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had come full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of the desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old school, film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.
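
To make that last point concrete, here’s a minimal sketch of a proxy pass. It’s my own illustration, not a prescribed workflow; it assumes ffmpeg is installed, uses made-up folder names, and works on mezzanine .mov files rather than camera raw formats like R3D, which are usually proxied from within the NLE or with the camera vendor’s own tools.

# Hypothetical proxy-generation sketch: transcode high-res masters into small
# ProRes Proxy files for offline editing, keeping the original file names.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("camera_originals")    # assumed folder of high-res masters
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.mov")):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1920:-2",                   # downscale to HD width, keep aspect
        "-c:v", "prores_ks", "-profile:v", "0",   # ProRes 422 Proxy
        "-c:a", "copy",                           # pass the audio through untouched
        str(PROXY_DIR / clip.name),
    ], check=True)

The creative cut happens against these lightweight files, and the locked timeline is later relinked (conformed) to the original high-res media for finishing.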

The human side of the equation is more complex

Post-production tasks often involve a left-brain/right-brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many have sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences might be bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with disabled clips left over from changes made in each round of revisions. As I’ve said before: you will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different from the one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of delivery guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent of a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product, working with the high-res media. Other than reviewing the work, there’s no value in having a highly paid film editor also handle the basic assembly of the master. This is also true at many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles, along with finessing those that came out of the edit. Increasingly, Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

Analogue Wayback, Ep. 17

The shape of your stomach.

The 1970s into the early 1990s was an era of significant experimentation and development in analog and digital video effects and animation. This included computer video art projects, broadcast graphics, image manipulation, and more. Denver-based Computer Image Corporation was both a hardware developer and a production company. Its hardware included an advanced video switcher and the Scanimate computer animation system. The video switchers were optimized for compositing and were an integral part of the system; however, it is the Scanimate analog computer that is most remembered.

Computer Image developed several models of Scanimate, which were also sold to other production companies, including Image West in Los Angeles (an offshoot of CI) and Dolphin Productions in New York. Dave Sieg, Image West’s former chief engineer, has a detailed website dedicated to preserving the history of this technology.

I interviewed for a job at Dolphin in the mid-1980s and had a chance to tour the facility. This was a little past the company’s prime, but they still had a steady stream of high-end ad agency and music video clients. Some of Dolphin’s best-known work included elements for PBS’ Sesame Street and The Electric Company, the show open for Washington Week in Review (PBS), news opens for NBC, CBS, and ABC News, as well as numerous national commercials. One memorable Pepto-Bismol campaign featured actors who step forward from a live action scene. As they do, their bodies turn a greenish monochrome color and their stomachs expand and become distorted.

Dolphin was situated in a five-story brownstone near Central Park. It had formerly housed a law practice. Behind reception on the ground floor was the videotape room, cleverly named Image Storage and Retrieval. The second floor consisted of an insert stage plus offices. Editing/Scanimate suites were on the third and fourth floors. What had been the fifth-floor law library now held the master videotape reels instead of books. A stairwell connected the floors and provided the cable runs to connect the electronics between rooms.

Each edit suite housed several racks of Scanimate and switcher electronics, the editor’s console, and client seating. At the time of my interview and tour, Dolphin had no computer-assisted linear edit controllers, such as CMX (these were added later). Cueing and editing were handled via communication between the editor and the VTR operator on the ground floor. They used IVC-9000 VTRs, which were 2″ helical scan decks. These are considered to have provided the cleanest image over multiple generations of any analog VTR ever produced.

Each suite could use up to four decks and animation was created by layering elements over each other from one VTR to the next. The operator would go round-robin from deck to deck. Play decks A/B/C and record onto D. Next pass, play B/C/D and record onto A to add more. Now, play C/D/A and record onto B for more again, and so on – until maybe as many as 20 layers were composited in sophisticated builds. Whichever reel the last pass ended up on was then the final version from that session. Few other companies or broadcasters possessed compatible IVC VTRs. So 2″ quad copies of the finished commercial or video were made from the 2″ helical and that’s the master tape a client left with.
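
To make the rotation a bit more concrete, here’s a tiny sketch of the pass schedule (purely illustrative, with an arbitrary number of passes):

# Illustrative round-robin pass schedule for a four-VTR layering session.
decks = ["A", "B", "C", "D"]

def pass_schedule(num_passes):
    for n in range(num_passes):
        record = decks[(3 + n) % 4]                  # the first pass records onto D
        players = [d for d in decks if d != record]  # the other three decks play back
        yield n + 1, players, record

for pass_num, players, record in pass_schedule(6):
    print(f"Pass {pass_num}: play {'/'.join(players)} -> record onto {record}")

Each pass adds another layer on top of everything recorded so far, so whichever deck holds the most recent recording is always the one carrying the full composite.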

This method of multi-pass layering is a technique that later took hold in other forms, such as the graphic design for TBS and CNN done by J. C. Burns and then more sophisticated motion layering by Charlex using the Abekas A-62. The concept is also the foundation for such recursive recording techniques as the preread edit function that Sony integrated into its D2 and Digital Betacam VTRs.

The path through Scanimate started with a high-resolution oscilloscope and companion camera. The camera signal was run through the electronics, which included analog controls and patching. Any image to be manipulated (transformed, moved, rotated, distorted, colorized) was sourced from tape, an insert stage camera, or a copy stand titling camera and displayed in monochrome on the oscilloscope screen. This image was re-photographed off of the oscilloscope screen by the high-resolution video camera and that signal sent into the rest of the Scanimate system.

Images were manipulated in two ways. First, the operator could use Scanimate to manipulate/distort the sweep of the oscilloscope itself, which would in turn cause the displayed image to distort. Once this distorted oscilloscope display was picked up by the high-resolution camera, the rest of Scanimate could be used to further alter the image through colorization and other techniques. Various keying and masking methods were used to add in each new element as layers were combined for the final composite.

Stability was of some concern since this was an analog computer. If you stopped for lunch, you might not be able to perfectly match what you had before lunch. The later Scanimate systems developed by Computer Image addressed this by using digital computers to control the analog computer hardware, making them more stable and consistent.

The companies evolved or went out of business and the Scanimate technology fell by the wayside. Nevertheless, it’s an interesting facet of video history, much like that of the early music synthesizers. Even today, it’s hard to perfectly replicate the look of some of the Scanimate effects, in part because today’s technology is too good and too clean! While it’s not a perfect analogy, these early forms of video animation offer a similar charm to the analog consoles, multitrack recorders, and vinyl cherished by many audiophiles and mixing engineers.

Check out this video at Vimeo if you want to know more about Scanimate and see it in action.

©2022 Oliver Peters

Analogue Wayback, Ep. 14

What’s old is new again.

When I watch shows like The Mandalorian and learn about the use of “the volume” (the LED wall stage), it becomes apparent that such methods conceptually stem from the earliest days of film. Some of these old school techniques are still in use today.

Rear-screen projection draws the most direct line to the volume. In its simplest form, there’s a translucent screen behind the talent. Imagery is projected from behind onto the screen. The camera sees the actors against this background scene as if that was a real set or landscape. No compositing is required since this is all in-camera. In old films, this was a common technique for car driving scenes. The same technique was used by David Fincher for Mank. Instead of projected images, large high-resolution video screens were used.

Front-screen projection is a similar process. The camera faces a special reflective backdrop coated with tiny glass beads. There’s a two-way mirror block between the camera lens and the talent, who is standing in front of the screen. A projection source sits at 90 degrees to the camera and shines into the mirror, which is at a 45-degree angle inside the block. This casts the image onto the reflective backdrop. The camera shoots through this same mirror and sees both the talent and the projected image behind them, producing much the same result as rear-screen projection.

The trick is that the projected image is also shining onto the talent, but you don’t actually see it on the talent. The reason is that the projector light level is so low that it’s washed out by the lighting on the talent. The glass beads of the backdrop act as tiny lenses to focus the light of the projected background image back towards the camera lens. The camera sees a proper combination without contamination onto the talent, even if that’s not what you see with the naked eye.

A similar concept is used in certain chromakey techniques. A ring light on the camera lens shines green or blue light onto the talent and the grey, reflective backdrop behind the talent. This backdrop also contains small glass beads that act as tiny lenses. The camera sees color-correct talent, but instead of grey, it’s a perfect green or blue screen behind them.

Aerial image projection is a cool technique that I haven’t personally seen used in modern production, although it’s probably still used in some special effects work. The process was used in multimedia production to add camera moves on still images. In a sense it led to digital video effects. There’s a projection source that shines an image onto a translucent, suspended pane of ground glass. A camera is positioned on the opposite side, so both camera and projector face the glass pane. The projected image is focused onto the glass, so that it’s crisp. Then the camera records the image, which can be resized as needed. In addition, a camera operator can add camera moves while recording the projected image that is “floating” on the glass pane.

©2022 Oliver Peters

Analogue Wayback, Ep. 11

Bumping your capstan.

I started out editing in an era of wrestling edits out of quad VTRs, so I tend to have less concern when there’s an issue with some plug-in. Not that it can’t be a problem, but it’s just one more indication of how far the industry has come.

In the 70s and 80s, the minimum configuration of an online edit bay involved three VTRs, a switcher, audio mixer, and the edit controller. Two VTRs were for playback and the third was what you edited onto. You needed both players to make a dissolve. If there was only one camera reel, then before starting the session, the editor would often make a complete copy (dub) of that camera reel. Once copied, you now had the A-Roll (camera original) and a B-Roll Dub to work from. You could roll A and B together and make a dissolve in a single pass, laying down clip 1 and clip 2 with the dissolve in-between. If it was a series of dissolves, then this required matched-frame edits in order to dissolve from the end of clip 2 to clip 3, then the same from clip 3 to clip 4, and so on.

To be completely seamless, the matched-frame edits had to be perfect. There’s the rub. In simple terms, NTSC and PAL are systems where the color signal rides on top of the black-and-white signal. This involves a colorburst signal and a sync pulse. NTSC follows a cadence of 4 fields (2 interlaced frames) in which the phase of the signal repeats every other frame. This cadence is known as the color frame sequence. When you play back a recording and the VTR first achieves servo-lock, it usually locks up in one of two phase conditions as it syncs with the house sync generator. This slightly affects the horizontal position of the picture.

If you record clip 1 and the VTR locks in one horizontal position, then when you make the matched-frame edit onto the end of clip 1, the VTR has to lock up again in that same position. If not, there will be a slight, but noticeable, horizontal shift at the edit point. There’s a 50/50 probability as to which way the deck locks up. Some of the Ampex decks offered a bit more control, but the RCA TR-600 models that we were using tended to be sloppier. If you got an H-shift at the edit, you simply repeated the edit (sometimes several times) until it was right.

The facility hired a sharp young chief engineer who took it upon himself to create a viable workaround, since RCA was never going to fix it. His first step was to add an LED onto the front of one of the circuit boards as an indicator. This was visible to the editor when the VTR panels were open. This indicator could be monitored through the glass that separated the edit suite from the VTRs. Polarity condition 1, LED on. Condition 2, LED off. His next step was to add a remote switch for each player VTR next to the edit console. The editor could trigger it to “bump” the capstan control. This would cause the VTR to unlock and quickly relock its playback.

If the LED was on when recording the first part of the clip, then on the second edit the VTR would need to lock with the LED on, as well. If so, you’d achieve a successful matched-frame edit without any H-shift. Quad VTRs would lock up in anywhere from under one second to ten seconds or longer. The editor would monitor the LED status and could control the preroll length, which was generally five seconds for the TR-600s. During a matched-frame edit, if the condition was wrong, you’d hit the switch and hope that the deck would lock up correctly before the end of the preroll. Otherwise, you’d lengthen the preroll time. This process worked better than expected and quickly became second nature.
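
Here’s a conceptual simulation of that routine, just to show the logic of watching the LED and bumping the capstan. This is entirely my own illustration with invented timings and a coin-flip lock model, not a description of how the hardware actually behaved:

# Conceptual sketch of the "watch the LED, bump the capstan" routine.
# Lock phase is modeled as a coin flip; all timings are invented.
import random

def relock():
    """Simulate the VTR servo-locking into one of two phase conditions."""
    lock_time = random.uniform(1.0, 10.0)   # quad decks took roughly 1-10 s to lock
    phase = random.choice([0, 1])           # 0 = LED off, 1 = LED on
    return lock_time, phase

def matched_frame_edit(required_phase, preroll=5.0):
    elapsed, phase = relock()               # initial lock-up during the preroll
    while phase != required_phase:
        bump_time, phase = relock()         # hit the bump switch and relock
        elapsed += bump_time
        if elapsed > preroll:
            return False                    # ran out of preroll; lengthen it and retry
    return True                             # phases match: no H-shift at the edit

print(matched_frame_edit(required_phase=1))

If the simulated deck never lands in the right condition before the preroll runs out, the only options are a longer preroll or another attempt, which mirrors the trial-and-error nature of the real process.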

At the risk of moving into the “kids, get off my lawn” territory, young editors clearly don’t know the fun they are missing with today’s modern nonlinear edit systems!

©2022 Oliver Peters