Analogue Wayback, Ep. 20

D2 – recursive editing

Video production and post transitioned from analog to digital starting in the late 1980s. Sony introduced the component digital D1 videotape recorder, but it was too expensive for most post facilities, and D1 decks were also harder to integrate into existing composite analog plants. In 1988 Ampex and Sony introduced the D2 format – an uncompressed, composite digital VTR with built-in A/D and D/A conversion.

D2 had a successful commercial run of about 10 years. Along the way it competed for market share with Panasonic’s D3 (composite) and D5 (component) digital formats. D2 was eventually supplanted by Sony’s own mildly compressed Digital Betacam format. That format coincided with the widespread availability of serial digital routing, switching, and so on, successfully moving the industry into a digital production and post environment.

During D2’s heyday, these decks provided the ideal replacement for older 1″ VTRs, because they could be connected to existing analog routers, switchers, and patch bays. True digital editing and transfers were possible if you connected the decks using composite digital hardware and cabling (with large parallel connections, akin to old printer cables). Because of this bulk, there weren’t too many composite digital edit suites. Instead, digital I/O was reserved for direct VTR-to-VTR copies – i.e., a true digital clone. Some post houses touted their “digital” edit suites, but in reality their D2 VTRs were connected to the existing analog infrastructure, such as the popular Grass Valley Group 200 and 300 video switchers.

One unique feature of the D2 VTRs was “read before write”, also called “preread”. This was later adopted in the Digital Betacam decks, too. Preread enabled the deck to play a signal and immediately record that same signal back onto the same tape. If you passed the signal through a video switcher, you could add more elements, such as titles. There was no visual latency in using preread. While you did incur some image degradation by going through D/A and A/D conversions along the way, the generation loss was minor compared with 1″ technology. If you stayed within a reasonable number of generations, then there was no visible signal loss of any consequence.

Up until D2, performing a simple transition like a dissolve required three VTRs – the A and B playback sources, plus the recorder. If the two clips were on the same source tape, then one of them had to be copied (i.e., dubbed) onto a second tape to enable the transition. If a session was likely to include a lot of these transitions, an editor might copy the camera tape beforehand, creating a “B-roll dub” before ever starting. An hourlong camera tape took an hour to copy – longer if the camera originals were longer.

With D2 and preread, the B-roll dub process could be circumvented, shaving unproductive time off the session. Plus, only two VTRs were required to make the same edit – a player and a recorder. The editor would record the A clip long in order to have a “handle” for the length of the dissolve, then switch on preread and preview the edit. If the preview looked good, the editor would record the dissolve to the incoming B clip, which was playing from the same camera tape. This was all recorded onto the same master videotape.
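For readers who think in software terms, the switcher’s job during that preread pass amounts to a weighted mix of the preread (A) signal and the incoming (B) signal across the handle. A minimal conceptual sketch in Python – numpy arrays stand in for video frames, and the 30-frame dissolve length is an arbitrary example:

```python
import numpy as np

def preread_dissolve(a_frames, b_frames, dissolve_len=30):
    """Mix from the already-recorded A clip (played via preread)
    into the incoming B clip over the length of the handle."""
    out = []
    for i in range(dissolve_len):
        alpha = (i + 1) / dissolve_len  # mix position ramps from ~0 to 1
        a = np.asarray(a_frames[i], dtype=float)
        b = np.asarray(b_frames[i], dtype=float)
        out.append((1.0 - alpha) * a + alpha * b)  # the switcher's mix
    return out
```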

Beyond this basic edit solution, D2’s preread ushered in what I would call recursive editing techniques. It had a lot in common with the sound-on-sound audio recording pioneered by the legendary Les Paul. For example, television show deliverables often require the master plus a “textless” master (no credits or titles). With D2, the editor could assemble the clean, textless master of the show, then make a digital clone of that tape, and then go back to one of the two and use the preread function to add titles over the existing video. Another example would be simple graphic composites, like floating video boxes over a background image or a simple quad split. Simply build up all the layers with preread, one at a time, in successive edit passes recorded onto the same tape.
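In software terms, each recursive pass behaves like keying one more element over the current contents of the tape – a fold over a list of layers, with the master as the accumulator. A conceptual sketch (the frame sizes and the key matte are illustrative only):

```python
import numpy as np

def over(base, layer, matte):
    """Key one element over the current tape contents.
    matte is the per-pixel key signal (0.0 = keep tape, 1.0 = element)."""
    return (1.0 - matte) * base + matte * layer

def recursive_passes(master, elements):
    """Each preread pass plays the tape back through the switcher and
    re-records it with one more element keyed in - the tape is both
    the playback source and the record target."""
    for layer, matte in elements:
        master = over(master, layer, matte)  # one edit pass per element
    return master

# Keying a white title over a gray "textless master" frame:
textless = np.full((480, 720), 0.5)
title = np.ones((480, 720))
key = np.zeros((480, 720))
key[200:280, 100:620] = 1.0  # the area the title occupies
final = recursive_passes(textless, [(title, key)])
```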

The downside was that if you made a mistake, you had to start over again. There was no undo. However, by this time linear edit controllers were pretty sophisticated and often featured complex integrations with video switchers and digital effects devices. This was especially true in an online bay made up of all Sony hardware. If you did make a mistake, you could simply start over using the edit controller’s auto-assembly function to automatically re-edit the events up to the point of the mistake. Not as good as modern software’s undo feature, but usually quite painless.

D2 held an important place in video post – not only as the mainstream beginning of digital editing, but also for the creative options it inspired in editors.

©2022 Oliver Peters

Analogue Wayback, Ep. 14

What’s old is new again.

When I watch shows like The Mandalorian and learn about the use of “the volume” – the LED wall stage used to create virtual sets – it becomes apparent that such methods conceptually stem from the earliest days of film. Some of these old school techniques are still in use today.

Rear-screen projection draws the most direct line to the volume. In its simplest form, there’s a translucent screen behind the talent. Imagery is projected from behind onto the screen. The camera sees the actors against this background scene as if it were a real set or landscape. No compositing is required, since this is all in-camera. In old films, this was a common technique for car driving scenes. David Fincher used the same technique for Mank, except that instead of projected images, large high-resolution video screens displayed the backgrounds.

Front-screen projection is a similar process. The camera faces a special reflective backdrop coated with tiny glass beads. There’s a two-way mirror block between the camera lens and the talent, who stands in front of the screen. A projection source sits at 90 degrees to the camera and shines into the mirror, which is set at a 45-degree angle inside the block. This casts the image onto the reflective backdrop. The camera shoots through this same mirror and sees both the talent and the projected image behind them, producing much the same result as rear-screen projection.

The trick is that the projected image is also shining onto the talent, but you don’t actually see it on the talent. The reason is that the projector light level is so low that it’s washed out by the lighting on the talent. The glass beads of the backdrop act as tiny lenses to focus the light of the projected background image back towards the camera lens. The camera sees a proper combination without contamination onto the talent, even if that’s not what you see with the naked eye.

A similar concept is used in certain chromakey techniques. A ring light on the camera lens shines green or blue light onto the talent and the grey, reflective backdrop behind the talent. This backdrop also contains small glass beads that act as tiny lenses. The camera sees color-correct talent, but instead of grey, it’s a perfect green or blue screen behind them.

Aerial image projection is a cool technique that I haven’t personally seen used in modern production, although it’s probably still used in some special effects work. The process was used in multimedia production to add camera moves on still images. In a sense it led to digital video effects. There’s a projection source that shines an image onto a translucent, suspended pane of ground glass. A camera is positioned on the opposite side, so both camera and projector face the glass pane. The projected image is focused onto the glass, so that it’s crisp. Then the camera records the image, which can be resized as needed. In addition, a camera operator can add camera moves while recording the projected image that is “floating” on the glass pane.

©2022 Oliver Peters

Analogue Wayback, Ep. 10

Color correction all stems from a slab of beef.

My start as an online editor at a production and post facility included working on a regional grocery chain account. The production company had a well-oiled “assembly line” process worked out with the agency in order to crank out 40-80 weekly TV commercials, plus several hundred station dubs. Start on Tuesday shooting product in the studio and recording/mixing tracks. Begin editing at the end of the day and work overnight, in time for agency review Wednesday morning. Make changes Wednesday afternoon and then copy station dubs overnight. Repeat the process on Thursday for the second round of the week.

The studio product photography involved tabletop recording of packaged product, as well as cooked spreads, such as a holiday turkey, a cooked steak, or an ice cream sundae. There was a chef on contract, so everything was real and edible – no fake stylist food there! Everything was set up on black or white sweep tables or large rolling, flat tables that could be dressed in whatever fashion was needed.

The camera was an RCA TK-45 with a short zoom lens and was mounted on a TV studio camera pedestal. This was prior to the invention of truly portable, self-contained video cameras. For location production, the two-piece TKP-45 was also used. It was tethered to our remote production RV.

This was a collaborative production, where our DP/camera operator handled lighting and the agency producers handled props and styling. The videotape operator handled the recording and camera set-up, and would insert retail price graphics (from art cards and a copy stand camera) during the recording of each take. Agency producers would review, pick takes, and note the timecode on the script. This allowed editors to assemble the spots unsupervised overnight.

Since studio recording was not a daily affair, there was no dedicated VTR operator at first. This duty was shared between the editors and the chief engineer. When I started as an editor, I would also spend one or two days supporting the studio operation. A big task for the VTR operator was camera set-up, aka camera shading. This is the TV studio equivalent to what a DIT might do today. The camera control electronics were located in my room with the videotape recorder, copy stand camera, and small video switcher.

Television cameras feature several video controls. The pedestal and iris knobs (or joysticks) control black level (pedestal) and brightness/exposure (iris). The TK-45 also included a gain switch, which increased sensitivity (0, +3dB, or +6dB), and a knob called black stretch. The latter would stretch the shadow area much like a software shadows slider or a gamma control today. Finally, there were RGB color balance controls for black and for white. In normal operation, you would point the camera at a “chip chart” (grayscale chart) and balance RGB so that the image was truly black and white as measured on a waveform scope. The VTR operator/camera shader would set up the camera to the chart and then only adjust pedestal and iris throughout the day.
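For anyone who only knows the software side, a rough (and purely illustrative) analogy shows how those knobs map onto simple pixel math – pedestal as an offset, iris and gain as a multiplier, and black stretch as a shadow-lifting gamma:

```python
import numpy as np

def shade(luma, pedestal=0.0, iris_gain=1.0, black_stretch=1.0):
    """Rough modern analogy for camera shading controls.

    luma: numpy array of normalized luminance values (0.0 - 1.0).
    pedestal: offsets the black level, like the pedestal knob.
    iris_gain: scales overall exposure, standing in for iris/gain.
    black_stretch: values > 1.0 lift shadow detail, modeled here as a
        simple gamma curve, much like a software shadows slider.
    """
    stretched = np.clip(luma, 0.0, 1.0) ** (1.0 / black_stretch)
    return np.clip(pedestal + iris_gain * stretched, 0.0, 1.0)
```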

Unfortunately not all food – especially raw ham, beef, or a rare steak – looks great under studio lighting or in the world of NTSC color. Thankfully, RCA had also developed a camera module called the Chromaproc (chroma processor). This was a small module on the camera control unit that allowed you to adjust RGBCMY – the six vectors of the color spectrum. The exact details are hard to find now, but if I remember correctly, there were switches to enable each of the six vector controls. Below those were six accompanying hue control pots, which required a tweaker (small screwdriver) to adjust. When a producer became picky about the exact appearance of a rare steak and whether or not it looked appetizing, you could flick on the Chromaproc and slightly shift the hues with the tweaker to get a better result. Thus you were “painting” the image.
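What the Chromaproc was doing is what we’d now call vector-based secondary color correction. A minimal sketch of the concept in Python – the hue centers, falloff width, and per-pixel RGB math are modern stand-ins, since the actual hardware operated on the analog signal:

```python
import colorsys

# The six vectors and their approximate hue centers (degrees).
VECTORS = {"R": 0, "Y": 60, "G": 120, "C": 180, "B": 240, "M": 300}

def paint(pixels, vector="R", hue_shift_deg=5.0, width_deg=30.0):
    """Rotate the hue of pixels near one color vector, leaving the rest alone.

    pixels: iterable of (r, g, b) tuples, each channel 0.0 - 1.0.
    vector: which of the six vectors to "paint".
    hue_shift_deg: how far the pot turns the hue near that vector.
    width_deg: falloff width around the vector's hue center.
    """
    center = VECTORS[vector] / 360.0
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        dist = min(abs(h - center), 1.0 - abs(h - center))  # hue wraps around
        weight = max(0.0, 1.0 - dist / (width_deg / 360.0))
        h = (h + weight * hue_shift_deg / 360.0) % 1.0
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# e.g. nudge reddish pixels a few degrees toward a more appetizing steak:
# corrected = paint(frame_pixels, vector="R", hue_shift_deg=-4.0)
```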

RCA used this chroma processing technology in its cameras and telecine controls. The company eventually developed a separate product that predated early color correctors like the Wiz (the original device from which DaVinci was spawned). In addition to RGB color balance of the lift/gamma/gain ranges, you could further tweak the saturation and hue of these six vectors – what we now refer to as secondary color correction. The missing ingredients were memory, recall, and list management, which were added by subsequent developers in their own products. That augmentation led to high-profile patent lawsuits, which have now largely been forgotten.

And so when I talk about color correction to folks, I’ll often tell them that everything I know about it was learned by shading product shots for grocery commercials!

©2022 Oliver Peters

Generalists versus Specialists

“Jack of all trades, master of none” is a quote most are familiar with. But the complete quote – “Jack of all trades, master of none, but oftentimes better than master of one” – actually conveys quite the opposite meaning. In the world of post production you have Jacks and Jills of all trades (generalists) and masters of one (specialists). While editors are certainly specialized in storytelling, I would consider them generalists when comparing their skill sets to those of other specialists, such as visual effects artists, colorists, and audio engineers. Editors often touch on sound, effects, and color in a more general (often temp) way to get client approval. The others have to deliver the best, final results within a single discipline. Editors have to know the tools of editing, but not the nitty gritty of color correction or visual effects.

This is closely tied to the Pareto Principle, which most know as the 80/20 Rule. This principle states that 80% of the consequences come from 20% of the causes, but it’s been applied in various ways. When talking about software development, the 80/20 Rule predicts that 80% of the users are going to use 20% of the features, while only 20% of users will find a need for the other features. The software developer has to decide whether the target customer is the generalist (the 80% user) or the specialist (the 20% user). If the generalist is the target, then the challenge is to add some specialized features to service the advanced user without creating a bloated application that no one will use.

Applying these concepts to editing software development

When looking at NLEs, the first question to ask is, “Who is defined as a video editor today?” I would separate editors into three groups. One group would be the “I have to do it all” group, which generates most of what we see on local TV, corporate videos, YouTube, etc. These are multi-discipline generalists who have neither the time nor interest in dealing with highly specialized software. In the case of true one-man bands, the skill set also includes videography, plus location lighting and sound.

The “top end” – national and international commercials, TV series, and feature films – could be split into two groups: craft (aka film or offline) editors and finishing (aka online) editors. Craft editors are specialists in molding the story, but generalists when it comes to working with the software. Their technical skills don’t have to be the best, but they need a solid understanding of visual effects, sound, and color, so that they can create a presentable rough cut with temp elements. The finishing editor’s role is to take the final elements from sound, color, and the visual effects houses, and assemble the final deliverables. A key talent is quality control and attention to detail; therefore, they have no need to understand dedicated color, sound, or effects applications, unless they are also filling one of those roles.

My motivation for writing this post stemmed from an open letter to Tim Cook, which many editors have signed – myself included. Editors have long been fans of Apple products, and many gravitated from Avid Media Composer to Apple Final Cut Pro 1-7. However, when Apple reimagined Final Cut and dropped Final Cut Studio in order to launch Final Cut Pro X, many FCP fans were in shock. FCPX lacked a number of important features at first. A lot of these elements have since been added back, but that development pace hasn’t been fast enough for some – hence the letter. My wishlist for new features is quite small. I recognize Final Cut for what it is in the Apple ecosystem. But I would like to see Apple work to raise the visibility of Final Cut Pro within the broader editing community. That’s especially important when the decision of which editing application to use is often not made by editors.

Blackmagic Design DaVinci Resolve – the über-app for specialists

This brings me to Resolve. Editors point to Blackmagic’s aggressive development pace and the rich feature set. Resolve is often viewed as the greener pasture over the hill. I’m going to take a contrarian’s point of view. I’ve been using Resolve since it was introduced as Mac software and recently graded a feature film that was cut on Resolve by another editor.

Unfortunately, the experience was more problematic than grading roundtrips I’ve done to Resolve from other NLEs. Editing performance was quite slow when moving around in the timeline, replacing shots, or trimming clips. Resolve wouldn’t be my first NLE choice when compared to Premiere Pro, Media Composer, or Final Cut Pro. It’s a complex program by necessity. The color management alone is enough to trip up even experienced editors who aren’t intimately familiar with what the various settings do to the image.

DaVinci Resolve is an all-in-one application that integrates editing (two different editing models), color correction (aka grading), Fusion visual effects, and the Fairlight DAW. Historically, all-in-ones have not had a great track record in the market. Other such über-apps would include Avid|DS and Autodesk Smoke. Avid pulled the plug on DS, and Autodesk moved the Flame/Smoke/Lustre product family to a subscription model. Neither DS nor Smoke as a standalone application moved the needle on market share.

At its core, Resolve is a grading application, with Fusion and Fairlight added in later. Color, effects, and audio mixing are all specialized skills, and the software is designed so that each specialist is comfortable with the toolset presented on those pages/modes. I believe Blackmagic has been attempting to capitalize on Final Cut editor discontent and create the mythical “FCP8” or “FC Extreme” that many wanted. However, adding completely new and disparate functions to an application that at its core is designed around color correction can make it quite unwieldy. Beginning editors are never going to touch most of what Resolve has to offer, and specialists would rather have a dedicated tool, like Nuke, After Effects, or Pro Tools.

Apple Final Cut Pro – reimagining modern workflows for generalists

Apple makes software for generalists. Pages, Numbers, Keynote, Photos, GarageBand, and iMovie are designed for that 80%. Apple also creates advanced software for the more demanding user under the ProApps banner (professional applications). This is still “generalist” software, but designed for more complex workflows. That’s where Final Cut Pro, Motion, Compressor, and Logic Pro fit.

Apple famously likes to “skate to where the puck will be,” and having control over hardware, operating system, and software gives its teams special insight to develop software that is optimized for the hardware/OS combo. As a broad-based consumer goods company, Apple also understands market trends. In the case of iPhones and digital photography, it plays a huge role in driving those trends.

When Apple launched Final Cut Pro X the goal was an application designed for simplified, modernized workflows – even if “Hollywood” wasn’t quite ready. This meant walking away from the comprehensive “suite of tools” concept (Final Cut Studio). They chose to focus on a few applications that were better equipped for where the wider market of content creators was headed – yet, one that could still address more sophisticated needs, albeit in a different way.

This reimagining of Final Cut Pro had several aspects to it. One was to design an application that could easily be used on laptops and desktop systems and was adaptable to single and dual screen set-ups. It also introduced workflows based on metadata to improve edit efficiency. It was intended as a platform with third parties filling in the gaps. This means you need to augment FCP to cover a few common industry workflows. In short, FCP is designed to appeal to a broad spectrum of today’s “professionals” and not how one might have defined that term in the early 1990s, when nonlinear editing first took hold.

For a developer, it gets down to who the product is marketed towards and which new features to prioritize. Generalists are going to grow the market faster, hence a better return on development resources. The more complex an application becomes, the more likely it is to have bugs or break when the hardware or OS is updated. Quality assurance testing (QA) expands exponentially with complexity.

Final thoughts

Do my criticisms of Resolve mean that it’s a bad application? No, definitely not! It’s powerful in the right hands, especially if you work within its left-to-right workflow (edit -> Fusion -> color -> Fairlight). But I don’t think it’s the ideal NLE for craft editing. The tools are designed for a collection of specialists. Blackmagic has been on this path for a rather long time now and seems to be at a fork in the road. Maybe they should step back, start from a clean slate, and develop a fresh, streamlined version of Resolve. Or split it up into a set of individual, focused applications.

So, is Final Cut Pro the ideal editing platform? It’s definitely a great NLE for the true generalist. I’m a fan and use it when it’s the appropriate tool for the job. I like that it’s a fluid NLE with a responsive UI design. Nevertheless, it isn’t the best fit for many circumstances. I work in a market and with clients that are invested in Adobe Creative Cloud workflows. I have to exchange project files and make sure plug-ins are all compatible. I collaborate with other editors and more than one of us often touches these projects.

Premiere Pro is the dominant NLE for me in this environment. It also clicks with how my mind works and feels natural to me. Although you hear complaints from some, Premiere has been quite stable for me in all my years of use. Premiere Pro hits the sweet spot for advanced editors working on complex productions without becoming overly complex. Product updates over the past year have provided new features that I use every day. However, if I were in New York or Los Angeles, the answer would likely be Avid Media Composer, given Avid’s continued dominance in broadcast operations and feature film post.

In the end, there is no right or wrong answer. If you have the freedom to choose, then assess your skills. Where do you fall on the generalist/specialist spectrum? Pick the application that best meets your needs and fits your mindset.

For another direct comparison check out this previous post.

©2022 Oliver Peters

Adobe’s Frame.io Rollout

Adobe acquired Frame.io last October. The latest Adobe Creative Cloud application updates showcase the first formal integration of Frame.io as a product within the Creative Cloud ecosystem. Frame.io had already developed a Premiere Pro integration using Adobe’s extensions architecture; however, the latest versions of Premiere Pro and After Effects add an integrated interface panel called Review with Frame.io.

Now your individual Adobe Creative Cloud subscription includes a Frame.io account at no additional charge. This includes 100GB of cloud storage (separate from existing Creative Cloud storage) for up to five projects, use by two collaborators, and unlimited access for reviewers. If you need more storage or to add more collaborators, then you can upgrade to a larger Frame.io plan, but at additional cost.

Adobe Creative Cloud Team and Enterprise accounts don’t fall under this plan and those admins will need to consult Adobe or Frame.io for a plan that best meets their needs. In other words, if you are a production company paying for an Adobe Team account with multiple users on the account, you don’t get 100GB of “free” Frame.io storage for each user. This offering is primarily designed for individual Adobe Creative Cloud subscribers.

Something to know before you start

There’s a gotcha for some existing Frame.io customers. You activate your new Adobe CC Frame.io service by logging in with the same email and password used for your Adobe ID. Let’s say you work freelance at a facility and are a collaborator on their Frame.io Team account. In that case, you might be using a personal email address to log into Frame.io. However, if that email is the same one used for your personal Adobe ID, then Frame.io does not know how to differentiate between the two.

To rectify this you need to use a different email for one of these two log-ins. This is generally a minor issue, since most people have more than one email address that they use. In my own case, I needed to change my Adobe ID email, which was a relatively quick procedure. This allows me to separately access either of the two Frame.io accounts as a collaborator, based on which email I log in with.

One confusing thing I encountered was that the account starts as a 30-day trial of a Frame.io Team account, so it looks like you are going to get billed extra after the trial ends. This is not the case. I think it’s a mistake for Adobe and Frame.io to present it this way, because they are trying to upsell you to the paid account. Fortunately, there’s no need to enter payment information up front. I wish this were clearer in the marketing details. Hopefully Adobe will correct it after the initial rollout. At the end of the 30-day trial, you will be asked whether to pay or end the trial. If you opt to end it, the account reverts to the free plan, which is the one included with your Adobe Creative Cloud subscription.

Getting started

Open the Review with Frame.io panel in Premiere Pro or After Effects and sign in using your Adobe ID. This will open your default browser and send you to the Frame.io website to complete the sign-in. As long as you stay signed in, you can access Frame.io either in your web browser or within the panel. If you sign out, then the next time you’ll need to sign in again using your Adobe ID.

I won’t go into how Frame.io itself works, since there are plenty of tutorials. This integration doesn’t change any of the operation. The Frame.io panel works like the previous extensions panel. A clip with reviewer comments can be synced to your Premiere Pro timeline for easy changes. Or you can simply work from the web portal and ignore the panel entirely. 100GB is plenty if your intent is to use Frame.io for low-resolution review files. However, if your intention is a larger, more complex workflow, then you may need to upgrade your Frame.io account after all.

Enter C2C

The bigger picture is that Frame.io is enthusiastically pushing its camera-to-cloud (C2C) workflow. I’m not really a big believer in this concept, but I know plenty of companies are going to announce more cloud and remote services at NAB. For many reasons, I don’t believe that all of our media will be in the cloud in a decade or two. However, I think Adobe does. In my opinion, it’s not a particularly good goal for users or the planet. But, I digress. In today’s world, what C2C offers in conjunction with the Premiere Pro integration is a Dropbox-style experience.

Let’s say your videographer is recording a corporate CEO interview in Los Angeles. The company’s PR rep is in New York and the editor in Atlanta. And there’s a very short turnaround schedule. In this basic scenario, both the videographer and editor are collaborators on a Frame.io project. While the interview is being recorded, the feed is being uploaded to Frame.io in near real-time. This requires some hardware on the camera side or it could be done by someone on set right after the recording ends. Once it’s in Frame.io, the PR rep in NYC can access and review the takes. The editor in Atlanta also sees the footage appear in the Frame.io panel within Premiere Pro. Files can be downloaded from the panel to the editor’s drives and the edit can start right away.
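Mechanically, the simplest version of this workflow is a watch folder that pushes each newly finished clip to the review service. A hypothetical sketch – upload_take is a stand-in for whatever the on-set encoder hardware or Frame.io upload step actually does, and none of these names come from the Frame.io API:

```python
import time
from pathlib import Path

def upload_take(clip: Path) -> None:
    """Hypothetical stand-in for the actual C2C upload step."""
    print(f"uploading {clip.name} ...")

def watch_and_upload(folder: str, poll_seconds: float = 5.0) -> None:
    """Poll a camera/proxy folder and push newly finished clips."""
    seen = set()
    while True:
        for clip in sorted(Path(folder).glob("*.mov")):
            if clip not in seen:
                seen.add(clip)
                upload_take(clip)
        time.sleep(poll_seconds)
```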

Given most standard internet speeds today and the 100GB bucket, this workflow makes sense if you are uploading smaller camera proxy files. Some proxies can actually be good enough to master with – especially in fast turnaround situations. In other scenarios, the proxies might be used to start the edit and later replaced with the high-res camera originals, once received from the shoot.

I feel that such situations are far less common than the marketers want you to believe. Moving high-res files over the internet is never fast, and FedEx often still offers the better option. So unless you really do need to get started right away, just wait for the media to arrive a day or so later. However, C2C for the purpose of an out-of-town producer reviewing takes remotely – especially in light of workflow changes caused by COVID over the past couple of years – has gained steam.

Frame.io is clear that just because they are an Adobe company doesn’t change their dedication to other workflows and other applications, such as Final Cut Pro. New announcements include native FilmLight Baselight integration, an app for Apple TV, and C2C partnerships with FiLMiC Pro.

If you are a current Frame.io customer without any Adobe subscription – no problem. Nothing changes for you. I’ve been using Frame.io since it launched and have been happy with the service. There are occasional glitches, but no worse than with any other internet service, including your regular email provider. Better yet, clients love the process. It’s not perfect, but it is one of the better review-and-approval sites and services on the market. And if your Adobe subscription is your first introduction to Frame.io, you are bound to see your daily workflow enhanced.

©2022 Oliver Peters