Analogue Wayback, Ep. 10

Color correction all stems from a slab of beef.

My start as an online editor at a production and post facility included working on a regional grocery chain account. The production company had a well-oiled “assembly line” process worked out with the agency in order to crank out 40-80 weekly TV commercials, plus several hundred station dubs. Start on Tuesday, shooting product in the studio and recording/mixing tracks. Begin editing at the end of the day and work overnight, in time for agency review Wednesday morning. Make changes Wednesday afternoon and then copy station dubs overnight. Repeat the process on Thursday for the second round of the week.

The studio product photography involved tabletop recording of packaged product, as well as prepared spreads, such as a holiday turkey, a cooked steak, or an ice cream sundae. There was a chef on contract, so everything was real and edible – no fake stylist food there! Everything was set up on black or white sweep tables or on large, rolling flat tables that could be dressed in whatever fashion was needed.

The camera was an RCA TK-45 with a short zoom lens and was mounted on a TV studio camera pedestal. This was prior to the invention of truly portable, self-contained video cameras. For location production, the two-piece TKP-45 was also used. It was tethered to our remote production RV.

This was a collaborative production, where our DP/camera operator handled lighting and the agency producers handled props and styling. The videotape operator handled the recording, camera set-up, and would insert retail price graphics (from art cards and a copy stand camera) during the recording of each take. Agency producers would review, pick takes, and note the timecode on the script. This allowed editors to assemble the spots unsupervised overnight.

Since studio recording was not a daily affair, there was no dedicated VTR operator at first. This duty was shared between the editors and the chief engineer. When I started as an editor, I would also spend one or two days supporting the studio operation. A big task for the VTR operator was camera set-up, aka camera shading. This is the TV studio equivalent to what a DIT might do today. The camera control electronics were located in my room with the videotape recorder, copy stand camera, and small video switcher.

Television cameras feature several video controls. The pedestal and iris knobs (or joysticks) control black level (pedestal) and brightness/exposure (iris). The TK-45 also included a gain switch, which increased sensitivity (0, +3dB, or +6dB), and a knob called black stretch. The latter would stretch the shadow area much like a software shadows slider or a gamma control today. Finally, there were RGB color balance controls for both the black and white ends of the scale. In normal operation, you would point the camera at a “chip chart” (grayscale chart) and balance RGB so that the image was truly black and white as measured on a waveform scope. The VTR operator/camera shader would set up the camera to the chart and then only adjust pedestal and iris throughout the day.
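
For readers who think in software terms, these controls map loosely onto simple pixel math. Here’s a minimal Python sketch of that analogy – the function name and numbers are my own for illustration, not RCA’s actual analog circuitry:

```python
# A rough software analogy for pedestal (black level), iris/gain (exposure),
# and black stretch (a shadow-weighted gamma). Illustrative only; the TK-45
# did all of this in analog circuitry, not code.

def shade(value, pedestal=0.0, gain=1.0, black_stretch=1.0):
    """Apply pedestal, gain, and black stretch to a pixel value in 0.0-1.0."""
    v = value * gain + pedestal          # gain scales exposure, pedestal offsets blacks
    v = max(0.0, min(1.0, v))            # clip to the legal range
    if black_stretch != 1.0:
        v = v ** (1.0 / black_stretch)   # values above 1.0 lift the shadow area
    return v

# Balancing to a "chip chart": the gray steps should read neutral and track
# evenly on the waveform scope after adjustment.
chart_steps = [0.0, 0.25, 0.5, 0.75, 1.0]
print([round(shade(s, pedestal=0.02, gain=0.95, black_stretch=1.2), 3)
       for s in chart_steps])
```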

Unfortunately not all food – especially raw ham, beef, or a rare steak – looks great under studio lighting or in the world of NTSC color. Thankfully, RCA had also developed a camera module called the Chromaproc (chroma processor). This was a small module on the camera control unit that allowed you to adjust RGBCMY – the six vectors of the color spectrum. The exact details are hard to find now, but if I remember correctly, there were switches to enable each of the six vector controls. Below that were six accompanying hue control pots, which required a tweaker (small screwdriver) to adjust. When a producer became picky about the exact appearance of a rare steak and whether or not it looked appetizing, you could flick on the Chromaproc and slightly shift the hues with the tweaker to get a better result. Thus you were “painting” the image.
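
Viewed through today’s lens, the Chromaproc was an early hardware form of hue-vector secondary correction. As a rough illustration only – my own Python sketch, with invented names and falloff behavior, since the module’s exact workings are hard to document now – shifting hues near one of the six vectors might look like this:

```python
import colorsys

# A software sketch in the spirit of the Chromaproc: each of the R, Y, G, C,
# B, M vectors can be switched on and given a small hue trim. The real
# module's behavior differed; this is only an analogy.

VECTORS = {"R": 0.0, "Y": 60.0, "G": 120.0, "C": 180.0, "B": 240.0, "M": 300.0}

def chromaproc(rgb, enabled, hue_trims_deg, width=60.0):
    """Shift the hue of pixels near each enabled vector. rgb channels in 0.0-1.0."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    hue_deg = h * 360.0
    shift = 0.0
    for name, center in VECTORS.items():
        if not enabled.get(name):
            continue
        # angular distance from this vector, wrapped to -180..180 degrees
        d = (hue_deg - center + 180.0) % 360.0 - 180.0
        if abs(d) < width:
            weight = 1.0 - abs(d) / width   # full effect on the vector, fading out
            shift += hue_trims_deg.get(name, 0.0) * weight
    h = ((hue_deg + shift) % 360.0) / 360.0
    return colorsys.hsv_to_rgb(h, s, v)

# Nudge reds a few degrees to make a rare steak read as more appetizing.
steak_pixel = (0.55, 0.25, 0.22)
print(chromaproc(steak_pixel, {"R": True}, {"R": 8.0}))
```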

RCA used this chroma processing technology in its cameras and telecine controls. The company eventually developed a separate product that predated early color correctors like the Wiz (the original device from which DaVinci was spawned). In addition to RGB color balance of the lift/gamma/gain ranges, you could further tweak the saturation and hue of these six vectors – what we now refer to as secondary color correction. The missing ingredients were memory, recall, and list management, which were added by subsequent developers in their own products. This latter augmentation led to high-profile patent lawsuits, which have now largely been forgotten.
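
As a refresher on what lift/gamma/gain means in practice, here is a minimal per-channel sketch of one common formulation – real products differ in the exact math, so treat the formula as representative rather than definitive:

```python
# Lift raises the blacks, gain scales the whites, and gamma bends the
# midtones. One common formulation; actual color correctors vary in detail.

def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """value is a single channel in 0.0-1.0."""
    v = value * gain + lift * (1.0 - value)   # lift weighted toward the shadows
    v = max(0.0, min(1.0, v))                 # clip to the legal range
    return v ** (1.0 / gamma)                 # gamma above 1.0 brightens midtones

# Warm the shadows slightly by lifting the red channel more than blue.
r, g, b = (lift_gamma_gain(0.1, lift=0.04),
           lift_gamma_gain(0.1, lift=0.02),
           lift_gamma_gain(0.1, lift=0.0))
print(round(r, 3), round(g, 3), round(b, 3))
```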

And so when I talk about color correction to folks, I’ll often tell them that everything I know about it was learned by shading product shots for grocery commercials!

©2022 Oliver Peters

The Batman

DC Comics’ Caped Crusader has been portrayed for decades in television and cinema. Sometimes campy, sometimes more dramatic. Director Matt Reeves’ latest take on Bruce Wayne/Batman (starring Robert Pattinson in the title role) takes us into a considerably grittier version of Gotham.

No superhero film is possible without extensive visual effects work, and The Batman is no exception, with numerous top-flight effects houses contributing to the film. I recently spoke with Anders Langlands, Wētā FX visual effects supervisor, about his company’s contributions to help bring The Batman to life.

Anders, what are some of the scenes that ended up on Wētā FX’s “to do” list?

We supplied a number of scenes. The biggest and most exciting one was the highway chase. We also did all of the work on the Batcave and the City Hall environment for the Mayor’s Memorial sequence. We worked on a few fight scenes in a couple of different locations, like the Iceberg Lounge. These required fight augmentation – speeding up and adjusting punches and kicks to make people look like they’re actually hitting each other – as well as some face replacements between Rob Pattinson and his stunt double. There are also CG bats in the Batcave and in a cage in the Riddler’s apartment.

Click here to continue reading the rest of the interview at postPerspective.

©2022 Oliver Peters

Analogue Wayback, Ep. 9

The evolution of typography in the edit suite.

When I was a kid, my mom worked at a small town newspaper. Although offset (photographic) printing was becoming the norm, they still used old Linotype machines. This gave me an early insight into typography. There are plenty of videos on YouTube explaining the Linotype, but the short explanation is that you type a sentence on its custom keyboard and the machine uses a master set of that typeface to cast slugs for a line of type. Slugs are comparable to the key slugs on an old mechanical typewriter, except these are the full sentence in a row instead of individual letters.

The process uses molten lead to form these slugs, which are cast and cool as they exit the machine. An operator then aligns the slugs on a tray that forms the layout for a page. Once inked, these print the text onto paper – for example, a newspaper page. Changing fonts requires swapping the tray of one master typeface for a different one.

Prior to this semi-automated system, type was laid out by hand using individual letter slugs. One advantage of the Linotype over hand composition was that you only needed a single character set, rather than bins of individual letter slugs with a ton of duplicates. You wouldn’t run out of the letter E, for example. The hand printing process points to the origin of the terms kerning (space between individual letters) and leading (space between lines of text – originally set using lead spacers).

My first TV job was as an audio operator/booth announcer on the evening shift at a PBS station. With downtime between program breaks, we also prepared title cards to be used for the studio productions. This was prior to the general availability of electronic character generation. Titles were typically black cards with white text. The cards were shot with one camera and then keyed over another shot, such as for lower thirds. If you liked arts and crafts in elementary school, then this was right up your alley! 

To create a lower third title card, you started with tabbed booklets of individual white letters on a black background – sort of like a stack of Post-it notes. Tear off a letter or a spacer and place it, backside facing out, into a special ruler-like guide to build a line of text. Once the little paper tabs are properly aligned and the name complete, run double-stick tape across the back. Then place the row of text onto the black card, sticking it with the tape. Since the edges of the torn paper tabs are white, the last step is to take a black Sharpie pen and ink out any specks of white that aren’t text. Whew!

My first editing gig at a real post house still came before electronic titling was common. What gear was available created terribly crude-looking text on screen. In our case, graphics and/or titles were integrated into the edit using cameras or a slide projector connected to a film chain (telecine island). It was common for edit suites to include one or more black-and-white “titling” cameras that were vertically mounted on a stand with lighting. Place the black-and-white card on the table, straighten it by eye or against a grid while viewing a monitor, and use the camera’s zoom lens to scale the graphic.

Our biggest client was a regional grocery chain and there was a whole process at the facility to efficiently crank out multiple weekly “price & item” commercials. No electronic method at that time would support graphics like “$3.99/lb, Limit 3 per customer” in different typefaces, font sizes, kerning, or proportions. So we had an art department that generated art cards for the main titles ($3.99/lb), as well as 35mm slides for the smaller disclaimer text (Limit 3 per customer).

Even when electronic systems like Chyron were introduced, the early systems could not generate clean, anti-aliased text with infinite size options. The ability to generate extra small “mouse type”, like a retail disclaimer, only came later with more advanced product versions. The shop’s engineer had rigged up a home-brew slide system that fed one side of our film chain. It used a standard Kodak 35mm projector mounted on a platform with thumbscrews for leveling. A thin strip of art tape was placed on the safe title line of the monitor for visual alignment. The editors could easily line up the slides, both centered and level. That certainly sounds crude today, but it was a bit of old school ingenuity that resulted in quality text on screen for the time.

The next time you are wrestling with that titler plug-in, just be glad you don’t have to run into the next room to level the graphic. Or wait an hour while the art department makes a change or fixes a typo!

©2022 Oliver Peters

Audio Design Desk

The concept of the digital audio workstation stems from a near-century-old combination of a recording system and a mixing desk. Nearly every modern DAW is still built around that methodology. Gabriel Cowan, CEO and co-founder of Audio Design Desk, sought to modernize the approach with a DAW focused on sound design, using the power of metadata for workflow efficiency. The application was launched a couple of years ago and has since won several trade show awards for innovation, including a Product of the Year Award for audio just last week at the 2022 NAB Show.

Every video editor knows that a kicking soundtrack can often elevate an otherwise lackluster video. Audio Design Desk is intended to do just that, regardless of whether you are an editor, musician, or sound designer. The application is currently a Mac-only product that supports both Intel and M1 Macs natively. Its sound library breaks down into sound design (synthetic sounds, like whooshes, drones, hits, and risers), foley (real-world sound effects), ambiences, and music.

Click here to continue this article at Pro Video Coalition.

©2022 Oliver Peters