Analogue Wayback, Ep. 17

The shape of your stomach.

The period from the 1970s into the early 1990s was an era of significant experimentation and development in analog and digital video effects and animation. This included computer video art projects, broadcast graphics, image manipulation, and more. Denver-based Computer Image Corporation was both a hardware developer and a production company. Its hardware included an advanced video switcher and the Scanimate computer animation system. The video switchers were optimized for compositing and were an integral part of the system; however, it is the Scanimate analog computer that is most remembered.

Computer Image developed several models of Scanimate, which were also sold to other production companies, including Image West in Los Angeles (an offshoot of CI) and Dolphin Productions in New York. Dave Sieg, Image West’s former chief engineer, has a detailed website dedicated to preserving the history of this technology.

I interviewed for a job at Dolphin in the mid-1980s and had a chance to tour the facility. This was a little past the company’s prime, but they still had a steady stream of high-end ad agency and music video clients. Some of Dolphin’s best-known work included elements for PBS’ Sesame Street and The Electric Company, the show open for Washington Week in Review (PBS), news opens for NBC, CBS, and ABC News, as well as numerous national commercials. One memorable Pepto-Bismol campaign featured actors who stepped forward from a live action scene. As they did, their bodies turned a greenish monochrome color while their stomachs expanded and distorted.

Dolphin was situated in a five-story brownstone near Central Park. It had formerly housed a law practice. Behind reception on the ground floor was the videotape room, cleverly named Image Storage and Retrieval. The second floor consisted of an insert stage plus offices. Editing/Scanimate suites were on the third and fourth floors. What had been the fifth-floor law library now held the master videotape reels instead of books. A stairwell connected the floors and provided the cable runs to connect the electronics between rooms.

Each edit suite housed several racks of Scanimate and switcher electronics, the editor’s console, and client seating. At the time of my interview and tour, Dolphin had no computer-assisted linear edit controllers, such as CMX (these were added later). Cueing and editing were handled via communication between the editor and the VTR operator on the ground floor. They used IVC-9000 VTRs, which were 2″ helical scan decks. These are considered to have provided the cleanest image over multiple generations of any analog VTR ever produced.

Each suite could use up to four decks, and animation was created by layering elements over each other from one VTR to the next. The operator would go round-robin from deck to deck. Play decks A/B/C and record onto D. Next pass, play B/C/D and record onto A to add more. Then play C/D/A and record onto B for more again, and so on, until maybe as many as 20 layers were composited in sophisticated builds. Whichever reel the last pass ended up on was then the final version from that session. Few other companies or broadcasters possessed compatible IVC VTRs, so 2″ quad copies of the finished commercial or video were made from the 2″ helical, and that was the master tape a client left with.
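
As a purely illustrative aside, the rotation of deck roles is simple enough to model in a few lines of Python. The deck letters match the example above, but the pass count and the code itself are hypothetical; the point is only how the record role cycles so that each pass stacks one more layer onto the build.

```python
# Illustrative model of the round-robin layering passes described above.
# Deck letters follow the example; the number of passes is arbitrary.
decks = ["A", "B", "C", "D"]

def layering_passes(num_passes):
    """Yield (player_decks, record_deck) for each pass; the last record deck holds the final build."""
    for i in range(num_passes):
        record = decks[(3 + i) % 4]                  # first pass records onto D
        players = [d for d in decks if d != record]  # the other three decks play back
        yield players, record

for n, (players, record) in enumerate(layering_passes(4), start=1):
    print(f"Pass {n}: play {'/'.join(players)} -> record {record}")
# Pass 1: play A/B/C -> record D
# Pass 2: play B/C/D -> record A
# ...
```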

This method of multi-pass layering is a technique that later took hold in other forms, such as the graphic design for TBS and CNN done by J. C. Burns and then more sophisticated motion layering by Charlex using the Abekas A-62. The concept is also the foundation for such recursive recording techniques as the preread edit function that Sony integrated into its D2 and Digital Betacam VTRs.

The path through Scanimate started with a high-resolution oscilloscope and companion camera. The camera signal was run through the electronics, which included analog controls and patching. Any image to be manipulated (transformed, moved, rotated, distorted, colorized) was sourced from tape, an insert stage camera, or a copy stand titling camera and displayed in monochrome on the oscilloscope screen. This image was re-photographed off the oscilloscope screen by the high-resolution video camera, and that signal was sent into the rest of the Scanimate system.

Images were manipulated in two ways. First, the operator could use Scanimate to manipulate or distort the sweep of the oscilloscope itself, which in turn caused the displayed image to distort. Once this distorted oscilloscope display was picked up by the high-resolution camera, the rest of Scanimate could be used to further alter the image through colorization and other techniques. Various keying and masking methods were used to add each new element as layers were combined for the final composite.
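
For a rough feel for those two stages, here is a loose digital analogy, a hypothetical numpy sketch that offsets each scanline (a stand-in for warping the sweep) and then colorizes the monochrome result. The real system did all of this with analog voltages and a camera pointed at a CRT, not with arrays.

```python
import numpy as np

def scanimate_ish(mono, amplitude=8.0, period=40.0):
    """mono: 2D array of 0-1 brightness values. Returns a warped, colorized HxWx3 image."""
    h, w = mono.shape
    warped = np.zeros_like(mono)
    for y in range(h):
        # Stand-in for distorting the oscilloscope sweep: shift each scanline sideways.
        shift = int(amplitude * np.sin(2 * np.pi * y / period))
        warped[y] = np.roll(mono[y], shift)
    # Stand-in for colorization: map brightness through a simple green-to-violet ramp.
    rgb = np.stack([warped * 0.9, 1.0 - warped * 0.6, warped], axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```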

Stability was of some concern since this was an analog computer. If you stopped for lunch, you might not be able to perfectly match what you had before lunch. The later Scanimate systems developed by Computer Image addressed this by using digital computers to control the analog computer hardware, making them more stable and consistent.

The companies evolved or went out of business, and the Scanimate technology fell by the wayside. Nevertheless, it’s an interesting facet of video history, much like that of the early music synthesizers. Even today it’s hard to perfectly replicate the look of some of the Scanimate effects, in part because today’s technology is too good and too clean! While it’s not a perfect analogy, these early forms of video animation hold a charm similar to that of the analog consoles, multitrack recorders, and vinyl cherished by many audiophiles and mixing engineers.

Check out this video at Vimeo if you want to know more about Scanimate and see it in action.

©2022 Oliver Peters

Analogue Wayback, Ep. 14

What’s old is new again.

When I watch shows like The Mandalorian and learn about using the volume, it becomes apparent that such methods conceptually stem from the earliest days of film. Some of these old school techniques are still in use today.

Rear-screen projection draws the most direct line to the volume. In its simplest form, there’s a translucent screen behind the talent. Imagery is projected onto the screen from behind. The camera sees the actors against this background scene as if it were a real set or landscape. No compositing is required, since this is all in-camera. In old films, this was a common technique for car driving scenes. David Fincher used the same approach for Mank, substituting large high-resolution video screens for projected images.

Front-screen projection is a similar process. The camera faces a special reflective backdrop coated with tiny glass beads. There’s a two-way mirror block between the camera lens and the talent standing in front of the screen. A projection source sits at 90 degrees to the camera and shines into the mirror, which is at a 45-degree angle inside the block. This casts the image onto the reflective backdrop. The camera shoots through this same mirror and sees both the talent and the projected image behind them, much like rear-screen projection.

The trick is that the projected image is also shining onto the talent, but you don’t actually see it on the talent. The reason is that the projector light level is so low that it’s washed out by the lighting on the talent. The glass beads of the backdrop act as tiny lenses to focus the light of the projected background image back towards the camera lens. The camera sees a proper combination without contamination onto the talent, even if that’s not what you see with the naked eye.

A similar concept is used in certain chromakey techniques. A ring light on the camera lens shines green or blue light onto the talent and the grey, reflective backdrop behind the talent. This backdrop also contains small glass beads that act as tiny lenses. The camera sees color-correct talent, but instead of grey, it’s a perfect green or blue screen behind them.

Aerial image projection is a cool technique that I haven’t personally seen used in modern production, although it’s probably still used in some special effects work. The process was used in multimedia production to add camera moves on still images. In a sense it led to digital video effects. There’s a projection source that shines an image onto a translucent, suspended pane of ground glass. A camera is positioned on the opposite side, so both camera and projector face the glass pane. The projected image is focused onto the glass, so that it’s crisp. Then the camera records the image, which can be resized as needed. In addition, a camera operator can add camera moves while recording the projected image that is “floating” on the glass pane.

©2022 Oliver Peters

Analogue Wayback, Ep. 11

Bumping your capstan.

I started out editing in an era of wrestling edits out of quad VTRs, so I tend to have less concern when there’s an issue with some plug-in. Not that it can’t be a problem, but it’s just one more indication of how far the industry has come.

In the 70s and 80s, the minimum configuration of an online edit bay involved three VTRs, a switcher, audio mixer, and the edit controller. Two VTRs were for playback and the third was what you edited onto. You needed both players to make a dissolve. If there was only one camera reel, then before starting the session, the editor would often make a complete copy (dub) of that camera reel. Once copied, you now had the A-Roll (camera original) and a B-Roll Dub to work from. You could roll A and B together and make a dissolve in a single pass, laying down clip 1 and clip 2 with the dissolve in-between. If it was a series of dissolves, then this required matched-frame edits in order to dissolve from the end of clip 2 to clip 3, then the same from clip 3 to clip 4, and so on.

To be completely seamless, the matched-frame edits had to be perfect. There’s the rub. In simple terms, NTSC and PAL are systems where the color signal rides on top of the black-and-white signal. This involves a colorburst signal and a sync pulse. NTSC follows a cadence of 4 fields (2 interlaced frames) in which the phase of the signal repeats every other frame. This cadence is known as the color frame sequence. When you play back a recording and the VTR first achieves servo-lock, it can lock up usually in one of two phase conditions as it syncs with the house sync generator. This slightly affects the horizontal position of the picture. 

If you record clip 1 and the VTR locks in one horizontal position, then when you make the matched-frame edit onto the end of clip 1, the VTR has to lock up again in that same position. If not, then there will be a slight, but noticeable, horizontal shift at the edit point. It’s a 50/50 probability as to which condition the deck locks into. Some of the Ampex decks featured a bit more control, but the RCA TR-600 models that we were using tended to be sloppier. If you got an H-shift at the edit, you simply repeated the edit (sometimes several times) until it was right.

The facility hired a sharp young chief engineer who took it upon himself to create a viable workaround, since RCA was never going to fix it. His first step was to add an LED onto the front of one of the circuit boards as an indicator. This was visible to the editor when the VTR panels were open. This indicator could be monitored through the glass that separated the edit suite from the VTRs. Polarity condition 1, LED on. Condition 2, LED off. His next step was to add a remote switch for each player VTR next to the edit console. The editor could trigger it to “bump” the capstan control. This would cause the VTR to unlock and quickly relock its playback.

If the LED was on when recording the first part of the clip, then on the second edit the VTR would need to lock with the LED on as well. If so, you’d achieve a successful matched-frame edit without any H-shift. Quad VTRs would lock up in anywhere from under one second to ten seconds or longer. The editor would monitor the LED status and could control the preroll length, which was generally five seconds for the TR-600s. During a matched-frame edit, if the condition was wrong, you’d hit the switch and hope that the deck would lock up correctly before the end of the preroll. Otherwise, you’d lengthen the preroll time. This process worked better than expected and quickly became second nature.
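
Just for fun, the odds involved can be sketched as a toy Monte Carlo simulation in Python. The 50/50 lock condition, the roughly one-to-ten-second lock-up range, and the five-second preroll come from the description above, but the uniform timing model and the behavior after a bump are simplifying assumptions, not measured TR-600 behavior.

```python
import random

def edit_attempts(has_bump_switch, preroll=5.0):
    """Count how many recorded attempts a matched-frame edit takes before the phase condition matches."""
    attempts = 0
    while True:
        attempts += 1
        phase_ok = random.random() < 0.5           # deck locks in one of two conditions, 50/50
        time_left = preroll
        while has_bump_switch and not phase_ok:
            relock = random.uniform(0.5, 10.0)     # assumed relock time after a capstan bump
            if relock > time_left:
                break                              # deck won't relock before the edit point
            time_left -= relock
            phase_ok = random.random() < 0.5       # the relock again lands in a random condition
        if phase_ok:
            return attempts                        # clean edit, no H-shift

trials = 10_000
for bump in (False, True):
    avg = sum(edit_attempts(bump) for _ in range(trials)) / trials
    print(f"bump switch available: {bump} -> about {avg:.2f} attempts per edit")
```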

At the risk of moving into the “kids, get off my lawn” territory, young editors clearly don’t know the fun they are missing with today’s modern nonlinear edit systems!

©2022 Oliver Peters

Analogue Wayback, Ep. 6

This is the world’s most expensive stopwatch.

There are few clients who truly qualify as “the client from Hell”. Clients have their own stresses that may be unseen or unknown to the editor. Nevertheless, some create extremely stressful edit sessions. I previously wrote about the color bar fiasco in Jacksonville. That client returned for numerous campaigns. Many of the edit sessions were overnight and each was a challenge.

Editing with a client is all about the interpersonal dynamics. In this case, the agency came down with an entourage – director, creative director, account executive, BTS photographer, and others. The director had been a big-time commercial director in the days of cigarette ads on TV. When those were pulled, his business dried up. So he had a retainer deal with this agency. However, the retail spots that I was cutting were the only TV spots the agency (owned by a larger corporation as an in-house agency) was allowed to do. For much of the run, the retail spots featured a celebrity actor/spokesman, which probably explained the entourage.

Editors often complain about a client sitting next to them and starting to crowd their working space as that client edges closer to the monitor. In these sessions the creative director and director would sit on either side of me – left and right. Coming from a film background, they were less familiar with video advances like timecode and insisted on using stopwatches to time every take. Of course, given reaction times and the fact that the two never got the same length, there was a lot of, “Please rewind and play it again.” On at least one occasion I was prompted to point to the edit controller display and remind them that I had the world’s most expensive stopwatch right there. I could tell them exactly how long the clip was. But, to no avail.

The worst part was that the two would get into arguments with each other – across me! Part of this was just personality and part of it was that they had written the spots in the hotel room the night before the shoot. (Prior planning? Harumph!) In any case, there were numerous sessions when I just had to excuse myself from the room while they heatedly hashed it out. “Call me when you’ve made a decision.”

There was an ironic twist. One quiet gentleman in the back of the room seemed to be the arbiter. He could make a decision when neither of them would. At the beginning I had assumed that he was the person really in charge. As it turned out, he was the account executive and they largely discounted him and his opinions. Yet, he had the best understanding of their client, which is why, when all else failed, they deferred to him!

Over the course of numerous sessions we pumped out commercial campaigns in spite of the stress. But those sessions always stick in my mind as some of the quirkiest I’ve ever had.

©2022 Oliver Peters

My Kingdom for Some Color Bars

In a former life, video deliverables were on videotape and no one seriously used the internet for any mission-critical media projects. TVs and high-quality video monitors used essentially the same display technology and standards. Every videotape started with SMPTE color bars used as a reference to set up the playback of the tape deck. Monitors were calibrated to bars and gray scale charts to assure proper balance, contrast, saturation, and hue. If the hardware was adjusted to this recognized standard, then what you saw in an edit suite would also be what the network or broadcaster would see going out over the air.

Fast forward to the present when nearly all deliverables are sent as files. Aesthetic judgements – especially by clients and off-site producers – are commonly made while viewing MOV or MP4 files on some type of computer or device screen. As an editor who also does color correction, I consider it very important to send the client a file that matches what I saw when it was created.

Color management and your editing software

In researching and writing several articles and posts about trusting displays and color management, I’ve come to realize the following. If you expect the NLE viewer to be a perfect match with the output to a video display or an exported file playing in every media player, then good luck! The chances are slim.

There are several reasons for this. First, Macs and PCs use different gamma standards when displaying media files. Second, not all computer screens work in the same color space. For instance, some use P3-D65 while others use sRGB. Third, these color space and gamma standards differ from the standards used by televisions and also projection systems.

I’ll stick to standard dynamic range (SDR) in this discussion. HDR is yet another minefield best left for another day. The television display standard for SDR video is Rec. 709 with a 2.4 gamma value. Computers do not use this; however, NLEs use it as the working color space for the timeline. Some NLEs will also emulate this appearance within the source and record viewers in order to match the Rec. 709, 2.4 gamma feed going out through the i/o hardware to a video monitor.

As with still photos, a color profile is assigned when you export a video file, regardless of file wrapper or codec. This color profile is metadata that any media player software can use to interpret how a file should be displayed to the screen. For example, if you edit in Premiere Pro, Adobe uses a working SDR color space of Rec. 709 with 2.4 gamma. Exported files are assigned a color profile of 1-1-1. They will appear slightly lighter and less saturated in QuickTime Player as compared with the Premiere Pro viewer. That’s because computer screens default to a different gamma value – usually 1.96 on Macs. However, if you re-import that file back into Premiere, it will be properly interpreted and will match the original within Premiere. There’s nothing wrong with the exported file. It’s merely a difference based on differing display targets.
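
To put rough numbers on that lighter appearance, here’s a simplified worked example. It treats each display as a bare power curve, which glosses over the real transfer functions involved, but it shows why a lower display gamma lifts the mid-tones of the same encoded value.

```python
# Simplified power-curve comparison: the same encoded value displayed at two gammas.
def displayed_light(code_value, gamma):
    """Relative light output for a normalized (0-1) encoded pixel value."""
    return code_value ** gamma

mid_gray = 0.5
print(f"gamma 2.4  -> {displayed_light(mid_gray, 2.4):.3f}")   # ~0.189 (reference monitor assumption)
print(f"gamma 1.96 -> {displayed_light(mid_gray, 1.96):.3f}")  # ~0.257, i.e. noticeably lighter mid-tones
```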

The developer’s conundrum

A developer of editing software has several options when designing their color management system. The first is to assume that the viewer should match Rec. 709, 2.4 gamma, since that’s the television standard and is consistent with legacy workflows. This is the approach taken by Adobe, Avid, and Blackmagic, but with some variations. Premiere Pro offers no alternate SDR timeline options, but After Effects does. Media Composer editors can set the viewer based on several standards and different video levels for Rec. 709: legal range (8-bit levels of 16-235) versus full range (8-bit levels of 0-255). Blackmagic enables different gamma options even when the Rec. 709 color space is selected.

Apple has taken a different route with Final Cut Pro by utilizing ColorSync. The same image in an FCP viewer will appear somewhat brighter than in the viewer of other NLEs; however, it will match the playback of an exported file in QuickTime Player. In addition, the output through AJA or Blackmagic i/o hardware to a video display will also match. Not only does the image look great on Apple screens, but it looks consistent across all apps on any Apple device that uses the ColorSync technology.

You have to look at it this way. A ton of content is being delivered only over the internet via sites like Instagram, Facebook, and YouTube rather than through traditional broadcast. A file submitted to a large streamer like Netflix will be properly interpreted within their pipeline, so no real concerns there. This begs the question. Should the app’s viewer really be designed to emulate Rec. 709, 2.4 gamma or should it look correct for the computer’s display technology?

The rubber meets the road

Here’s what happens in actual practice. When you export from Premiere Pro, Final Cut Pro, or Media Composer, the result is a media file tagged with the 1-1-1 color profile. For Premiere and Media Composer, exports will appear with somewhat less contrast and saturation than the image in the viewer.

In Resolve, you can opt to work in Rec. 709 with different gamma settings, including 2.4 or 709-A (“A” for Apple, I presume). These two different output settings would look the same until you start to apply a color grade (so don’t switch midstream). If you are set to 2.4 (or automatic), then the exported file has a color profile of 1-2-1. But with 709-A the exported file has a color profile of 1-1-1. These Resolve files will match the viewer and each other, but will also look darker than the comparable Premiere Pro and FCP exports.

All of the major browsers use the color profile. So do most media players, except VLC. These differences are also apparent on a PC, so it’s not an Apple issue per se. More importantly, the profile determines how a file is interpreted. For instance, the two Resolve ProRes exports (one at 1-1-1, the other at 1-2-1) look the same as first-generation exports. But let’s say you use Adobe Media Encoder to generate H.264 MP4 viewing copies from those ProRes files. The transcoded MP4 of the 709-A export (1-1-1 color profile) will match its ProRes original. However, the transcoded MP4 of the 2.4 export (1-2-1 color profile) will now look a bit brighter than its ProRes original. That’s because the color profile of the MP4 has been changed to 1-1-1.
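
If you want to verify which tags a given export actually carries, ffmpeg’s ffprobe can report them (assuming it’s installed); the primaries/transfer/matrix trio it prints is what shorthand like 1-1-1 refers to, with 1 corresponding to BT.709 in each field. A minimal Python wrapper might look like this; the file name is just a placeholder.

```python
import json
import subprocess

def color_tags(path):
    """Return the color metadata ffprobe reports for the first video stream."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=color_primaries,color_transfer,color_space",
        "-of", "json", path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["streams"][0]

# Placeholder file name; a 1-1-1 tagged export typically reports 'bt709' for all three fields.
print(color_tags("export.mov"))
```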

Gamma changes mostly affect the midrange and shadow portion of a video signal. Therefore, differences are also more or less apparent depending on content. The more extreme your grading, the more apparent (and to some, obnoxious) these differences become. If these really bother you, then there are several ways to create files that are “enhanced” for computer viewing. This will make them a bit darker and more saturated.

  1. You can tweak the color correction by using an adjustment layer to export a file with a bit more contrast and saturation. In Premiere Pro, you can use a Lumetri effect in the adjustment layer to add a slight s-curve along with a 10% bump in saturation. (A rough numerical sketch of this idea follows the list.)
  2. You can use a QT Gamma Correction LUT (such as from Adobe) as part of the export. However, in my experience, it’s a bit too dark in the shadows for my taste.
  3. You can pass the exported file through After Effects and create a separate sRGB version.
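
For the first option, here’s a hedged numpy sketch of the kind of adjustment involved. It is not Lumetri, and the curve and saturation math are simplified stand-ins; it assumes a float RGB image normalized to 0-1.

```python
import numpy as np

def viewing_copy(rgb, strength=0.15, sat_boost=1.10):
    """Apply a gentle s-curve and ~10% saturation bump; rgb is an HxWx3 array of 0-1 values."""
    # Gentle s-curve around mid-gray: slightly darkens shadows and brightens highlights.
    curved = rgb + strength * (rgb - 0.5) * (1.0 - np.abs(2.0 * rgb - 1.0))
    # Saturation bump: push each channel away from the per-pixel Rec. 709 luma.
    luma = (0.2126 * curved[..., 0] + 0.7152 * curved[..., 1] + 0.0722 * curved[..., 2])[..., None]
    return np.clip(luma + sat_boost * (curved - luma), 0.0, 1.0)
```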

These approaches are not transparent. In other words, you cannot re-import these files and expect them to match the original. Be very careful about your intentions when using any of these hacks, because you are creating misadjusted files simply for viewing purposes. 

In the end, is it really right to use Rec. 709 2.4 gamma as the standard for an NLE viewer? Personally, I think Apple used the better and more modern approach. Should you do any of these hacks? Well, that’s up to you. More and more people are reviewing content on smart phones and tablets – especially iPhones and iPads – all of which show good-looking images. So maybe these concerns are simply much ado about nothing.

Or paraphrasing Dr. Strangelove – How I Learned to Stop Worrying and Love Color Profiles.

©2021 Oliver Peters