Storage Case Studies

Whether you own or work for a small editorial company or a large studio cranking out blockbusters, media, and how you manage it, is the circulatory system of your operation. No matter the size, many post operations share the same concerns, although they may approach them with vastly different solutions from company to company.

Last year I wrote on this topic for postPerspective and interviewed key players at Molinare and Republic. This year I’ve revisited the topic, taking a look at top Midwestern spot shops Drive Thru and Utopic, as well as Marvel Studios. In addition, I’ve broken down the “best practices” that Netflix suggests to its production partners.

Here are links to these articles at postPerspective:

Editing and Storage: Molinare and Republic

Utopic and Drive Thru: How Spot Shops Manage Their Media

Marvel and Netflix: How Studio Operations Manage Media

©2022 Oliver Peters

NLE Tips – Premiere Pro Workflow Guide

Avid Media Composer is still the king of the hill when it comes to editing feature films and other long-form projects. However, Adobe also has a strong and ever-growing presence with many editors of notable TV shows, documentaries, and dramatic feature films using Premiere Pro as their NLE of choice. Adobe maintains a close relationship with many of these users, often seeding early versions of advanced features to them, as well as seeing what workflow pain points they encounter.

This battle-testing led Adobe to release a new Best Practices and Workflow Guide. It’s available online and as a free, downloadable PDF. While it’s targeted towards editors working on long-form projects, there are many useful pointers for all Premiere Pro editors. The various chapters cover such topics as hardware settings, proxies, multi-camera, remote/cloud editing, and much more.

Adobe has shied away from written documentation over the years, so it’s good to see the company put in the effort to document best practices gleaned from working editors, knowledge that will benefit all Premiere Pro users.

©2022 Oliver Peters

Analogue Wayback, Ep. 20

D2 – recursive editing

Video production and post transitioned from analog to digital starting in the late 1980s. Sony introduced the component digital D1 videotape recorder, but it was too expensive for most post facilities. D1 decks were also harder to integrate into existing composite analog facilities. In 1988, Ampex and Sony introduced the D2 format – an uncompressed, composite digital videotape format whose VTRs featured built-in A/D and D/A conversion.

D2 had a successful commercial run of about 10 years. Along the way it competed for market share with Panasonic’s D3 (composite) and D5 (component) digital formats. D2 was eventually supplanted by Sony’s own mildly compressed Digital Betacam format. Digital Betacam’s arrival coincided with the widespread availability of serial digital routing, switching, and so on, successfully moving the industry into a digital production and post environment.

During D2’s heyday, these decks provided the ideal replacement for older 1″ VTRs, because they could be connected to existing analog routers, switchers, and patch bays. True digital editing and transfer was possible if you connected the decks using composite digital hardware and cabling (with large parallel connections, akin to old printer cables). Because of this bulk, there weren’t too many composite digital edit suites. Instead, digital I/O was reserved for direct VTR-to-VTR copies – i.e., a true digital clone. Some post houses touted their “digital” edit suites, but in reality their D2 VTRs were connected to the existing analog infrastructure, such as the popular Grass Valley Group 200 and 300 video switchers.

One unique feature of the D2 VTRs was “read before write”, also called “preread”. This was later adopted in the Digital Betacam decks, too. Preread enabled the deck to play a signal and immediately record that same signal back onto the same tape. If you passed the signal through a video switcher, you could add more elements, such as titles. There was no visual latency in using preread. While you did incur some image degradation by going through D/A and A/D conversions along the way, the generation loss was minor compared with 1″ technology. If you stayed within a reasonable number of generations, then there was no visible signal loss of any consequence.

Up until D2, performing a simple transition like a dissolve required three VTRs – the A and B playback sources, plus the recorder. If the two clips were on the same source tape, then one of them had to be copied (i.e., dubbed) onto a second tape to enable the transition. If a lot of these transitions were likely, an editor might take the time to copy the camera tape before the session ever started, creating a “B-roll dub.” An hourlong camera tape took an hour to copy; longer, if the camera originals were longer.

With D2 and preread, the B-roll dub process could be circumvented, thus shaving unproductive time off of the session. Plus, only two VTRs were required to make the same edit – a player and a recorder. The editor would record the A clip long in order to have a “handle” for the length of the dissolve. Then switch on preread and preview the edit. If the preview looked good, then record the dissolve to the incoming B clip, which was playing from the same camera tape. This was all recorded onto the same master videotape.

Beyond this basic edit solution, D2’s preread ushered in what I would call recursive editing techniques. It has a lot of similarities with the sound-on-sound audio recording pioneered by the legendary Les Paul. For example, television show deliverables often require the master plus a “textless” master (no credits or titles). With D2, the editor could assemble the clean, textless master of the show and then make a digital clone of that tape. Next, go back to one of the two and use the preread function to add titles over the existing video. Another example would be simple graphic composites, like floating video boxes over a background image or a simple quad split. Simply build up the layers with preread, one at a time, in successive edit passes recorded onto the same tape.
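
If it helps to picture preread in modern terms, here’s a minimal Python sketch of the concept (strictly my own analogy, not anything the hardware actually ran). Each pass plays back what’s already on the master, lays one more element over it, and records the result back onto that same master.

```python
# Toy model of D2 "read before write" layering. Each pass plays back the
# current master, composites one new layer over it, and records the result
# back onto that same master tape -- one layer per edit pass.

def preread_pass(master, layer):
    """Overlay a layer onto the master; None means 'leave the master alone'."""
    return [m if l is None else l for m, l in zip(master, layer)]

# Pass 0: the clean, textless program (ten "frames" of show content)
master = ["show"] * 10

# Pass 1 adds titles over frames 2-3; pass 2 adds a floating box over frames 6-7
passes = [
    [None, None, "title", "title", None, None, None, None, None, None],
    [None, None, None, None, None, None, "box", "box", None, None],
]

for layer in passes:
    master = preread_pass(master, layer)

print(master)
# ['show', 'show', 'title', 'title', 'show', 'show', 'box', 'box', 'show', 'show']
```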

The downside was that if you made a mistake, you had to start over again. There was no undo. However, by this time linear edit controllers were pretty sophisticated and often featured complex integrations with video switchers and digital effects devices. This was especially true in an online bay made up of all Sony hardware. If you did make a mistake, you could simply start over using the edit controller’s auto-assembly function to automatically re-edit the events up to the point of the mistake. Not as good as modern software’s undo feature, but usually quite painless.

D2 held an important place in video post, not only as the mainstream beginning of digital editing, but also for the creative options it inspired in editors.

©2022 Oliver Peters

Analogue Wayback, Ep. 14

What’s old is new again.

When I watch shows like The Mandalorian and learn how the volume is used, it becomes apparent that such methods conceptually stem from the earliest days of film. Some of these old-school techniques are still in use today.

Rear-screen projection draws the most direct line to the volume. In its simplest form, there’s a translucent screen behind the talent. Imagery is projected from behind onto the screen. The camera sees the actors against this background scene as if it were a real set or landscape. No compositing is required, since this is all in-camera. In old films, this was a common technique for car driving scenes. David Fincher used the same technique for Mank, substituting large, high-resolution video screens for the projected images.

Front-screen projection is a similar process. The camera faces a special reflective backdrop coated with tiny glass beads. There’s a two-way mirror block between the camera lens and the talent, who is standing in front of the screen. A projection source sits at 90 degrees to the camera and shines into the mirror, which is at a 45-degree angle inside the block. This casts the image onto the reflective backdrop. The camera shoots through this same mirror and sees both the talent and the projected image behind them, much like rear-screen projection.

The trick is that the projected image is also shining onto the talent, but you don’t actually see it on the talent. The reason is that the projector light level is so low that it’s washed out by the lighting on the talent. The glass beads of the backdrop act as tiny lenses to focus the light of the projected background image back towards the camera lens. The camera sees a proper combination without contamination onto the talent, even if that’s not what you see with the naked eye.

A similar concept is used in certain chromakey techniques. A ring light on the camera lens shines green or blue light onto the talent and the grey, reflective backdrop behind the talent. This backdrop also contains small glass beads that act as tiny lenses. The camera sees color-correct talent, but instead of grey, it’s a perfect green or blue screen behind them.

Aerial image projection is a cool technique that I haven’t personally seen used in modern production, although it’s probably still used in some special effects work. The process was used in multimedia production to add camera moves on still images. In a sense it led to digital video effects. There’s a projection source that shines an image onto a translucent, suspended pane of ground glass. A camera is positioned on the opposite side, so both camera and projector face the glass pane. The projected image is focused onto the glass, so that it’s crisp. Then the camera records the image, which can be resized as needed. In addition, a camera operator can add camera moves while recording the projected image that is “floating” on the glass pane.

©2022 Oliver Peters

Analogue Wayback, Ep. 10

Color correction all stems from a slab of beef.

Starting out as an online editor at a production and post facility, I worked on a regional grocery chain account. The production company had a well-oiled “assembly line” process worked out with the agency in order to crank out 40-80 weekly TV commercials, plus several hundred station dubs. Start on Tuesday shooting product in the studio and recording/mixing tracks. Begin editing at the end of the day and overnight, in time for agency review Wednesday morning. Make changes Wednesday afternoon and then copy station dubs overnight. Repeat the process on Thursday for the second round of the week.

The studio product photography involved tabletop recording of packaged product, as well as prepared spreads, such as a holiday turkey, a cooked steak, or an ice cream sundae. There was a chef on contract, so everything was real and edible – no fake stylist food there! Everything was set up on black or white sweep tables or on large rolling, flat tables that could be dressed in whatever fashion was needed.

The camera was an RCA TK-45 with a short zoom lens and was mounted on a TV studio camera pedestal. This was prior to the invention of truly portable, self-contained video cameras. For location production, the two-piece TKP-45 was also used. It was tethered to our remote production RV.

This was a collaborative production, where our DP/camera operator handled lighting and the agency producers handled props and styling. The videotape operator handled the recording and camera set-up, and would insert retail price graphics (from art cards and a copy stand camera) during the recording of each take. Agency producers would review, pick takes, and note the timecode on the script. This allowed editors to assemble the spots unsupervised overnight.

Since studio recording was not a daily affair, there was no dedicated VTR operator at first. This duty was shared between the editors and the chief engineer. When I started as an editor, I would also spend one or two days supporting the studio operation. A big task for the VTR operator was camera set-up, aka camera shading. This is the TV studio equivalent to what a DIT might do today. The camera control electronics were located in my room with the videotape recorder, copy stand camera, and small video switcher.

Television cameras feature several video controls. The pedestal and iris knobs (or joysticks) control black level (pedestal) and brightness/exposure (iris). The TK-45 also included a gain switch, which increased sensitivity (0, +3dB, or +6dB), and a knob called black stretch. The latter would stretch the shadow area, much like a software shadows slider or a gamma control today. Finally, there were RGB color balance controls for both the black and white ends of the scale. In normal operation, you would point the camera at a “chip chart” (grayscale chart) and balance RGB so that the image was truly black and white as measured on a waveform scope. The VTR operator/camera shader would set up the camera to the chart and then only adjust pedestal and iris throughout the day.
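
To show what that balancing step amounts to, here’s a small Python sketch of the concept (my own modern illustration, not what the TK-45 electronics literally did): measure a gray chip off the chart, then derive per-channel gains that make it read neutral on the scope.

```python
# Conceptual sketch of "shading to the chip chart": derive per-channel
# gains so that a gray chip reads equal R, G, and B values.

def balance_to_chart(chip_rgb):
    """chip_rgb: measured (R, G, B) off a gray chip, e.g. (0.62, 0.58, 0.55)."""
    target = sum(chip_rgb) / 3.0                 # aim for neutral gray
    return tuple(target / c for c in chip_rgb)   # gain per channel

gains = balance_to_chart((0.62, 0.58, 0.55))
print([round(g, 3) for g in gains])              # ~[0.941, 1.006, 1.061]
```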

Unfortunately, not all food – especially raw ham, beef, or a rare steak – looks great under studio lighting or in the world of NTSC color. Thankfully, RCA had also developed a camera module called the Chromaproc (chroma processor). This was a small module on the camera control unit that allowed you to adjust RGBCMY – the six vectors of the color spectrum. The exact details are hard to find now, but if I remember correctly, there were switches to enable each of the six vector controls. Below those were six accompanying hue control pots, which required a tweaker (small screwdriver) to adjust. When a producer became picky about the exact appearance of a rare steak and whether or not it looked appetizing, you could flick on the Chromaproc and slightly shift the hues with the tweaker to get a better result. Thus you were “painting” the image.
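
If you want to picture the Chromaproc in modern terms, here’s a rough Python sketch of the six-vector idea (my own approximation, not the actual analog circuitry): only pixels whose hue falls near the chosen vector get nudged, and everything else is left alone.

```python
import colorsys

# Approximate hue centers (in degrees) for the six vectors: R, Y, G, C, B, M
VECTORS = {"R": 0, "Y": 60, "G": 120, "C": 180, "B": 240, "M": 300}

def paint(rgb, vector, hue_shift_deg, width_deg=30.0):
    """Shift the hue of a pixel only if it falls near the chosen vector."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    hue = (h * 360.0) % 360.0
    center = VECTORS[vector]
    dist = min(abs(hue - center), 360.0 - abs(hue - center))  # wrap around 360
    if dist <= width_deg:
        hue = (hue + hue_shift_deg) % 360.0
    return colorsys.hls_to_rgb(hue / 360.0, l, s)

# Nudge a brownish "steak" pixel a few degrees back toward red
print(paint((0.45, 0.30, 0.22), vector="R", hue_shift_deg=-8.0))
```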

RCA used this chroma processing technology in its cameras and telecine controls. The company eventually developed a separate product that predated any of the early color correctors, like the Wiz (the original device from which DaVinci was spawned). In addition to RGB color balance of the lift/gamma/gain ranges, you could further tweak the saturation and hue of these six vectors, which we now refer to as secondary color correction. The missing ingredients were memory, recall, and list management, which were added by subsequent developers in their own products. This latter augmentation led to high-profile patent lawsuits, which have now largely been forgotten.
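
For readers who never twisted these knobs, here’s one common, simplified form of the lift/gamma/gain math as a quick sketch (real color correctors differ in the details): lift raises the shadows, gain scales the highlights, and gamma bends the midtones.

```python
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """One simplified per-channel lift/gamma/gain model; x is a 0..1 value."""
    y = gain * (x + lift * (1.0 - x))   # lift raises shadows, gain scales highlights
    y = min(max(y, 0.0), 1.0)           # clamp to the legal range
    return y ** (1.0 / gamma)           # gamma bends the midtones

# Example: open up the shadows slightly on a dark pixel
print(round(lift_gamma_gain(0.2, lift=0.05, gamma=1.1), 3))   # ~0.273
```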

And so when I talk about color correction to folks, I’ll often tell them that everything I know about it was learned by shading product shots for grocery commercials!

©2022 Oliver Peters