Analogue Wayback, Ep. 20

D2 – recursive editing

Video production and post transitioned from analog to digital starting in the late 1980s. Sony introduced the component digital D1 videotape recorder, but it was too expensive for most post facilities. D1 decks were also hard to integrate into existing composite analog facilities. In 1988 Ampex and Sony introduced the D2 format – an uncompressed, composite digital VTR with built-in A/D and D/A conversion.

D2 had a successful commercial run of about 10 years. Along the way it competed for market share with Panasonic’s D3 (composite) and D5 (component) digital formats. D2 was eventually usurped by Sony’s own mildly compressed Digital Betacam format. That format coincided with the widespread availability of serial digital routing, switching, and so on, successfully moving the industry into a digital production and post environment.

During D2’s heyday, these decks provided the ideal replacement for older 1″ VTRs, because they could be connected to existing analog routers, switchers, and patch bays. True digital editing and transfer was possible if you connected the decks using composite digital hardware and cabling (with large parallel connections, akin to old printer cables). Because of this bulk, there weren’t too many composite digital edit suites. Instead, digital i/o was reserved for direct VTR to VTR copies – i.e. a true digital clone. Some post houses touted their “digital” edit suites, but in reality their D2 VTRs were connected to the existing analog infrastructure, such as the popular Grass Valley Group 200 and 300 video switchers.

One unique feature of the D2 VTRs was “read before write”, also called “preread”. This was later adopted in the Digital Betacam decks, too. Preread enabled the deck to play a signal and immediately record that same signal back onto the same tape. If you passed the signal through a video switcher, you could add more elements, such as titles. There was no visual latency in using preread. While you did incur some image degradation by going through D/A and A/D conversions along the way, the generation loss was minor compared with 1″ technology. If you stayed within a reasonable number of generations, then there was no visible signal loss of any consequence.

Up until D2, performing a simple transition like a dissolve required three VTRs – the A and B playback sources, plus the recorder. If the two clips were on the same source tape, then one of the two clips had to be copied (i.e. dubbed) onto a second tape to enable the transition. If a lot of these transitions were likely, an editor might take the time to copy the camera tape before the session, creating a “B-roll dub” before ever starting. One hourlong camera tape would take an hour to copy – longer, if the camera originals were longer.

With D2 and preread, the B-roll dub process could be circumvented, thus shaving unproductive time off of the session. Plus, only two VTRs were required to make the same edit – a player and a recorder. The editor would record the A clip long in order to have a “handle” for the length of the dissolve, then switch on preread and preview the edit. If the preview looked good, the editor would record the dissolve to the incoming B clip, which was playing from the same camera tape. This was all recorded onto the same master videotape.

Beyond this basic edit solution, D2’s preread ushered in what I would call recursive editing techniques. The approach has a lot of similarities to the sound-on-sound audio recording innovated by the legendary Les Paul. For example, television show deliverables often require the master plus a “textless” master (no credits or titles). With D2, the editor could assemble the clean, textless master of the show, then make a digital clone of that tape, and finally go back to one of the two and use the preread function to add titles over the existing video. Another example would be simple graphic composites, like floating video boxes over a background image or a simple quad split. Simply build up all the layers with preread, one at a time, in successive edit passes recorded onto the same tape.
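In software terms, this recursive build-up is easy to model. Here’s a toy Python sketch (purely my own illustration – the function and names are hypothetical, not anything from the era’s hardware) of layering elements onto the same “tape” with successive preread passes:

```python
def preread_pass(tape_frames, overlay_frames, composite):
    """One preread pass: each frame is played off the tape,
    combined with a new element (e.g. a title), and immediately
    re-recorded in place on the same tape."""
    for i, frame in enumerate(tape_frames):
        tape_frames[i] = composite(frame, overlay_frames[i])
    return tape_frames

# Recursive layering: successive passes add more elements onto
# the same master, one pass at a time.
master = ["bg"] * 3                                        # textless master
master = preread_pass(master, ["+box"] * 3, lambda a, b: a + b)
master = preread_pass(master, ["+title"] * 3, lambda a, b: a + b)
```

After the second pass, every frame of `master` carries all three layers – just as each preread pass stacked another element onto the same reel of tape.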

The downside was that if you made a mistake, you had to start over again. There was no undo. However, by this time linear edit controllers were pretty sophisticated and often featured complex integrations with video switchers and digital effects devices. This was especially true in an online bay made up of all Sony hardware. If you did make a mistake, you could simply start over using the edit controller’s auto-assembly function to automatically re-edit the events up to the point of the mistake. Not as good as modern software’s undo feature, but usually quite painless.

D2 held an important place in video post. Not only as the mainstream beginning of digital editing, but also for the creative options it inspired in editors.

©2022 Oliver Peters

Analogue Wayback, Ep. 19

Garage bands before the boy bands

As an editor, I’ve enjoyed the many music-oriented video productions I’ve worked on. In fact, one of my first feature films was a concert film highlighting many top reggae artists. Along the way, I’ve cut numerous jazz concerts for PBS, along with various videos for folks like Jimmy Buffett and the Bob Marley Foundation.

We often think about the projects that “got away” or never happened. For me, one of those was a documentary about the “garage band” acts of central Florida during the 1960s. These were popular local and regional acts with an eye towards stardom, but who never became household names, like Elvis or The Beatles. Central Florida was a hotbed for such acts back then, in the same way that San Francisco, Memphis, or Seattle have been during key moments in rock ‘n roll history.

For much of the early rock ‘n roll era, music was a vertically-integrated business. Artist management, booking, recording studios, and marketing/promotion/distribution were all handled by the same company. The money was made in booking performances more so than in record sales.

Records, especially 45 RPM “singles”, were produced in order to promote the bands. Singles were sent free to radio stations in hopes that they would be placed into regular rotation. That airplay would familiarize listeners/fans with the bands and their music. While record purchases were a goal, the bigger aim was name recognition, so that when a band was booked for a local event (dance, concert, youth club appearance, tour date), the local fans would buy tickets and show up. Naturally, some artists broke out in a big way, which meant even more money in record sales, as well as touring.

Record labels, recording studios, and talent booking services – whether the same company or separate entities – enjoyed a very symbiotic relationship. Much of this is chronicled in a mini-doc I cut for the Memphis Rock ‘n Soul Museum. It highlighted studios like Sun, Stax, and Hi and their role in the birth of rock ‘n roll and soul music.

In the central Florida scene, one such company was Bee Jay, started by musician/entrepreneur Eric Schabacker. Bee Jay originally encompassed a booking service and eventually a highly regarded recording studio responsible for many local acts. Many artists passed through those studio doors, but one of the biggest acts to record there was probably Molly Hatchet. I got to know Schabacker when the post facility I was with acquired the Bee Jay Studios facility.

Years later Schabacker approached me with an interesting project – a documentary about the local garage bands of the 60s. Together with a series of interviews with living band members, post for the documentary would also involve the restoration of several proto-music videos. Bee Jay had videotaped promotional videos for 13 of the bands back in the day. While Schabacker handled the recording of the interviews, I tackled the music videos.

The original videos were recorded using a rudimentary black-and-white production system. These were recorded onto half-inch open reel videotape. Unfortunately, the video tubes in the cameras back then didn’t always handle bright outdoor light well and the video switcher did not feature clean vertical interval switching. The result was a series of recordings in which video levels fluctuated and camera cuts often glitched. There were sections in the recordings where the tape machine lost servo lock during recording. The audio was not recorded live. Instead, the bands lip-synced to playback of their song recordings, which was also recorded in sync with the video. These old videos were transferred to DV25 QuickTime files, which formed my starting point.

Step one was to get clean audio. The bands’ tunes had been recorded and mixed at Bee Jay Studios at the time into a 13-song LP that was used as promotion to book those bands. However, at this point over three decades later, the master recordings were no longer available. But Schabacker did have pristine vinyl LPs from those sessions. These were turned over to local audio legend and renowned mastering engineer, Bob Katz. In turn, he took those versions and created remastered files for my use.

Now that I had good sound, my task was to take the video – warts and all – and rebuild it in sync with the song tracks, clean up the video, get rid of any damage and glitches, and in general end up with a usable final video for each song. Final Cut Pro (legacy) was the tool of choice at that time. Much of the “restoration” involved slightly slowing or speeding up shots to resync the files – shot by shot. I also had to repeat and slo-mo some shots for fit-and-fill, since frames would be lost as glitchy camera cuts and other disturbances were removed. In the end, I rebuilt all 13 into presentable form.

While that was a labor of love, the downside was that the documentary never came to be. All of these bands had recorded great-sounding covers (such as Solitary Man), but no originals. Unfortunately, it would have been a nightmare and quite costly to clear the music rights for these clips if they were used in the documentary. A shame, but that’s life in the filmmaking world.

None of these bands made it big, but in subsequent years, bands of another era like *NSYNC and the Backstreet Boys did. And they ushered in a new boy band phenomenon, which carries on to this day in the form of K-pop, among other styles.

©2022 Oliver Peters

Analogue Wayback, Ep. 18

Connections Redux

In 1993 I worked on a corporate image short film for AT&T entitled Connections: AT&T’s Vision of the Future. I wrote about this in a 2010 blog post, but I thought it was a good topic to revisit in the context of this Analogue Wayback series. Next year will be 30 years since its release, which makes it a good time to compare these futurists’ ideas with what was actually developed. (The full film can be viewed here on YouTube.)

The inspiration for the film came from AT&T exec Henry Bassman. It was designed as a vision piece to be used in various public and investor relations endeavors. The concepts shown in the film were based on the ideas of a number of theorists working with AT&T’s labs and grounded in actual technology that was being studied and developed there. The film’s concept was to extrapolate those ideas 20 years into the future and show actual productization that might come to be. Henry Bassman and director Robert Wiemer wove these ideas into the fictional narrative of this 15-minute short film. Bassman discussed Connections: AT&T’s Vision of the Future in this 2007 interview with the Paleo-Future blog.

The production was filmed in the Universal Studios Florida soundstages on 35mm and posted at Century III. We transferred the film to Sony D2 composite digital tape using our Rank Cintel Mark III/DaVinci-equipped telecine suite. The offline edit was handled with an Ediflex system and the online conform/finishing edit done in our online edit bays (CMX 3600 edit system, Grass Valley 300 switcher with Kaleidoscope DVE, and D2 mastering). My role was the online edit, along with a number of standard visual effects, like screen inserts and basic composites. The more advanced 2D and 3D visual effects were handled by our designers.

While the film might certainly seem quaint to modern eyes, the general concepts and the quality of the visual effects were in keeping with other productions of that era, such as Star Trek: The Next Generation – of course, without the fantasy, sci-fi component. Remember that the internet was still young, no iPhone existed, and most of today’s commonplace technology simply didn’t exist outside of the lab. Naturally, as with any of these past looks into the future, the way that theoretical concepts morph into real technology is never exactly the same as depicted, nor as seamless in operation. But these were pretty darn close.

I covered much of the technology and those concepts in my 2010 post, but it’s worth taking a new look at the ideas shown:

Simultaneous Facetime or Zoom-like conversations

Real-time captioning with live foreign language translation

Seat back airline entertainment systems with communications capabilities

16×9 displays

Foldable tablets

Tablet cameras with augmented reality

A form of the metaverse with avatars and Oculus-style interfaces

Noise-cancelling communication area

Large flat-panel TV displays with computer interfaces

Computer intelligent assistants

Online shopping with augmented reality

Online, computer-assisted learning in classrooms

Super-thin computers

Automotive communications/media electronics

One can certainly point out flaws when viewed through a modern lens. Plus, since this is an AT&T piece, it focuses on some of their ideas, like active phone booths and the video phone. Not to mention some obvious misses, like not really seeing the advent of the modern smartphone in a clear way. Nevertheless, it’s interesting to see how close so much of this is. It makes you wonder how we will look back on today 20 years from now.

©2022 Oliver Peters

Will DaVinci Resolve 18 get you to switch?

DaVinci Resolve has been admired by users of other editing applications because of the pace of Blackmagic Design’s development team, and many have considered a switch to Resolve. Since its announcement earlier this year, DaVinci Resolve fans and pros alike have been eagerly waiting for Resolve 18 to get out of public beta. It was recently released and I’ve been using it ever since for a range of color correction jobs.

DaVinci Resolve 18 is available in two versions: Resolve (free) or Resolve Studio (paid). These are free updates to existing customers. They can be downloaded/bought either from the Blackmagic Design website (Windows, Mac, Linux) or through the Apple Mac App Store (macOS only – Intel and M1). The free version of Resolve is missing only a few of the advanced features available in Resolve Studio. Due to App Store policies and sandboxing, there are also some differences between the Blackmagic and App Store installations. The Blackmagic website installations may be activated on up to two computers at the same time using a software activation code. The App Store versions will run on any Mac tied to your Apple ID.

(Click images to see an enlarged view.)

A little DaVinci Resolve history

If you are new to DaVinci Resolve, then here’s a quick recap. The application is an amalgam of the intellectual property and assets acquired by Blackmagic Design over several years from three different companies: DaVinci Systems, eyeon (Fusion), and Fairlight Instruments. Blackmagic Design built upon the core of DaVinci Resolve to develop an all-in-one, post production solution. The intent is to encompass an end-to-end workflow that integrates the specialized tasks of editing, color grading, visual effects, and post production sound all within a single application.

The interface character and toolset tied to each of these tasks are preserved using a page-style, modal user interface. In effect, you have separate tools, tied to a common media engine, which operate under the umbrella of a single application. Some pages are fluidly interoperable (like Edit and Color) and others aren’t. For example, color nodes applied to clips in the Color page do not appear as nodes within the Fusion page. Color adjustments made to clips in a Fusion composition need to be done with Fusion’s separate color tools.

Blackmagic has expanded Resolve’s editing features – so much so that it’s a viable competitor to Avid Media Composer, Apple Final Cut Pro, and/or Adobe Premiere Pro. Resolve sports two editing modes: the Cut page (a Final Cut Pro-style interface for fast assembly editing) and the Edit page (a traditional track-based interface). The best way to work in Resolve is to adhere to its sequential, “left to right” workflow – just like the pages/modes are oriented. Start by ingesting in the Media page and then work your way through the tasks/pages until it’s time to export using the Deliver page.

Blackmagic Design offers a range of optional hardware panels for Resolve, including bespoke editing keyboards, color correction panels, and modular control surface configurations for Fairlight (sound mixing). Of course, there’s also Blackmagic’s UltraStudio, Intensity Pro, and DeckLink i/o hardware.

A new collaboration model through Blackmagic Cloud

The biggest news is that DaVinci Resolve 18 was redesigned for multi-user collaboration. Resolve projects are usually stored in a database on your local computer or a local drive, rather than as separate binary project files. Sharing projects in a multi-user environment requires a separate database server, which isn’t designed for remote editing. To simplify this and address remote work, Blackmagic Design established and hosts the new Blackmagic Cloud service.

As I touched on in my Cloud Store Mini review, anyone may sign up for a free Blackmagic Cloud account. When ready, the user creates a Library (database) on Cloud from within the Resolve UI. That user is the “owner” of the Library, which can contain multiple projects. The owner pays $5/library/month for each Library hosted on Blackmagic Cloud.

The Library owner can share a project with any other registered Blackmagic Cloud user. This collaboration model is similar to working in Media Composer and is based on bin locking. The first user to open a bin has read/write permission to that bin and any timelines contained in it. Other users opening the same timeline operate with read-only permission. Changes made by the user with write permission can then be updated by the read-only users on their systems.
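As a mental model, that first-come, first-served bin locking can be sketched in a few lines of Python (purely illustrative – the class and method names are mine, not anything from Blackmagic’s implementation):

```python
class SharedBin:
    """Sketch of Media Composer-style bin locking: the first user
    to open the bin gets read/write permission; later users get
    read-only access until the writer closes the bin."""

    def __init__(self, name: str):
        self.name = name
        self.writer = None  # user currently holding write permission

    def open(self, user: str) -> str:
        if self.writer is None or self.writer == user:
            self.writer = user
            return "read/write"
        return "read-only"

    def close(self, user: str) -> None:
        # Only the current writer can release the lock.
        if self.writer == user:
            self.writer = None
```

So if Alice opens the bin first, she gets “read/write”; Bob opening the same bin gets “read-only” until Alice closes it – which mirrors how changes flow from the one writer out to the read-only collaborators.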

Blackmagic Design only hosts the Library/project files and not any media, which stays local for each collaborator. The media syncing workflow is addressed through features of the Cloud Store storage products (see my review). Both collaboration via Blackmagic Cloud and the storage products are independent of each other. You can use either without needing the other. However, since Blackmagic Cloud is hosted “in the cloud” you do need an internet connection. 

There is some latency between the time a change is made by one user and when it’s updated on the other users’ machines. In my tests, the collaborator needs to relink to the local media each time a shared project is accessed again. You can also move a project from Cloud back to your local computer as needed.

What else is new in DaVinci Resolve 18?

Aside from the new collaboration tools, DaVinci Resolve 18 also features a range of enhancements. Resolve 17 already introduced quite a few new features, which have been expanded upon in Resolve 18. The first of these is a new, simplified proxy workflow using the “prefer proxies” model. Native media handling has always been a strength of Resolve, especially with ProRes or Blackmagic RAW (BRAW) files. (Sorry, no support for Apple ProRes RAW.) But file sizes, codecs, and your hardware limitations can impede efficient editing. Therefore, working with proxy files may be the better option on some projects. When you are ready to deliver, then switch back to the camera originals for the final output.

The website installer for DaVinci Resolve Studio 18 includes the new Blackmagic Proxy Generator application. This automatically creates H.264, H.265, or ProRes proxy files using a watch folder. However, you can also create proxies internally from Resolve without using this app, or externally using Apple Compressor or Adobe Media Encoder. The trick is that proxy files must have matching names, lengths, timecode values, and audio channel configurations.

Proxy files should be rendered into a subfolder called “Proxy” located within each folder of original camera files. (Resolve and/or the Proxy Generator application do this automatically.) Then Resolve’s intelligent media management automatically detects and attaches the proxies to the original file. This makes linking easy and allows you to automatically toggle between the proxy and the original files.
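That folder convention is simple enough to sketch. Assuming the layout described above, a hypothetical helper (my own illustration – not part of Resolve or the Proxy Generator) could pair a camera original with its proxy like this:

```python
from pathlib import Path
from typing import Optional

def find_proxy(original: Path) -> Optional[Path]:
    """Look for a proxy in a 'Proxy' subfolder next to the camera
    original. The proxy must share the clip's file name; only the
    extension (e.g. .mp4 for H.264/H.265 or .mov for ProRes)
    may differ."""
    proxy_dir = original.parent / "Proxy"
    if not proxy_dir.is_dir():
        return None
    for candidate in sorted(proxy_dir.iterdir()):
        if candidate.stem == original.stem:
            return candidate
    return None
```

A real implementation would also need to verify the length, timecode, and audio channel layout of the candidate file, since those must match for Resolve to attach the proxy.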

Regarding other enhancements, the Color page didn’t see any huge new features, since tools like the Color Warper and HDR wheels were added in Resolve 17. However, there were some new items, including object replacement and enhanced tracking. But I didn’t find the results to be as good as Adobe’s Content-Aware Fill techniques.

Two additions worth mentioning are the Automatic Depth Map and the Resolve FX Beauty effect. The beauty effect is a subtle skin smoothing tool. It’s nice, but quite frankly, too subtle. My preference in this type of tool would be Digital Anarchy’s Beauty Box or Boris FX’s Beauty Studio. However, Resolve does include other similar tools, like Face Refinement, which offers more control.

Automatic Depth Map is more of a marquee feature. This is a pretty sophisticated process – analyzing depth separation in a moving image without the benefit of any lens metadata. It shows up as a Resolve FX in the Edit, Fusion, and Color pages. Don’t use it in the Edit page, because you can’t do anything with it there. In the Color page, rather than applying it to a node, drag the effect into the node tree, where it creates its own node.

After a brief clip analysis, the tool generates a mask, which you can use as a qualifier to isolate the foreground and background. Bear in mind this is for mild grading differences. Even though you might think of this for blurring a background, don’t do it! The mask is relatively broad. If you try to tighten the mask and use it to blur a background, you’ll get a result that looks like a Zoom call background. Instead, use it to subtly lighten or darken the foreground versus the background within a shot. Remember, the shot is moving, which can often lead to some chatter on the edges of the mask, so you’ll have to play with it to get the best result. Playback performance at Better Quality was poor on a 2017 iMac Pro. Use Faster while working and then switch to Better when you are ready to export or render.

Fusion

Complex visual effects and compositing are best done in the Fusion page. Fusion is both a component of Resolve, as well as a separate application offered by Blackmagic Design. It uses a node-based interface, but these nodes are separate and unrelated to the nodes in the Color page. Fusion features advanced tracking, particle effects, and a true 3D workspace that can work with 3D models. If you have any stereoscopic projects, then Fusion is the tool to use. The news for Fusion and the standalone Fusion Studio 18 application in this update focuses on GPU acceleration.

Before the acquisition by Blackmagic Design, eyeon offered several integrations of Fusion with NLEs like DPS Velocity and Avid Media Composer. The approach within Resolve is very similar to those – send a clip to Fusion for the effect, work with it inside the Fusion UI, and then it’s updated on the timeline as a Fusion clip. This is not unlike the Dynamic Link connection between Premiere Pro and After Effects, except that it all happens inside the same piece of software.

If you are used to working with a layer-style graphics application, like After Effects, Motion, or maybe HitFilm, then Fusion is going to feel foreign. It is a high-end visual effects tool, but might feel cumbersome to some when doing standard motion graphics. Yet for visual effects, the node-based approach is actually superior. There are plenty of good tutorials for Fusion, for any user ready to learn more about its visual effects power.

There are a few things to be aware of with Fusion. The image in the Fusion viewer and the output through UltraStudio to a monitor will be dark, as compared with that same image on the Edit page. Apparently this has been an ongoing user complaint and I have yet to find a color management setting that definitively solves this issue. There is also no way to “decompose” or “break apart” a Fusion composition on the timeline. You can reset the clip to a Fusion default status, but you cannot revert the timeline clip back to that camera file without it being part of a Fusion composition. Therefore, the best workaround is to copy the clip to a higher track before sending it to Fusion. That way you have both the Fusion composition and the original clip on the timeline.

In addition to visual effects, Fusion templates are also used for animated titles. These can be dropped onto a track in the Edit page and then modified in the inspector or the Fusion page. These Fusion titles function a lot like Apple’s Motion templates being used in Final Cut Pro.

Fairlight

Fairlight Instruments started with a popular digital audio workstation (Fairlight CMI) at the dawn of digital audio. After Blackmagic’s acquisition, the software portion of Fairlight was reimagined as a software module for audio post built into DaVinci Resolve. The Fairlight hardware and control surfaces were modularized. You can definitely run Fairlight in Resolve without any extra hardware. However, you can improve real-time performance on mixes with heavy track counts by adding the Fairlight Audio Core accelerator card. You can also configure one or more Blackmagic control surfaces into a large mixing console.

Taken as a whole, this makes the Fairlight ecosystem a very scalable product line in its own right that can appeal to audio post engineers and other audio production professionals. In other words, you can use the Fairlight portion of Resolve without ever using any of the video-centric pages. In that way, Resolve with Fairlight competes with Adobe Audition, Avid Pro Tools, Apple Logic Pro, and others. In fact, Fairlight is still the only professional DAW that’s actually integrated into an NLE.

Fairlight is designed as a DAW for broadcast and film with meter calibration based on broadcast standards. It comes with a free library of sound effects that can be downloaded from Blackmagic Design. The Fairlight page also includes an ADR workflow. DaVinci Resolve 18 expanded the Fairlight toolset. There’s new compatibility for FlexBus audio busing/routing with legacy projects. A lot of work has been put into Dolby Atmos support, including a binaural renderer, and an audio Space view of objects in relation to the room in 3D space.

On the other hand, if you are into music creation, then Fairlight lacks software instruments and music-specific plug-ins, like amp emulation. The MIDI support is focused on sound design. A musician would likely gravitate towards Logic Pro, Cubase, Luna, or Ableton Live. Nevertheless, Fairlight is still quite capable as a DAW for music mixes. Each track/fader integrates a channel strip for effects, plus built-in EQ and compression. Resolve comes with its own complement of Fairlight FX plug-ins, plus it supports third-party AU/VST plug-ins.

I decided to test that concept using some of the mixes from the myLEWITT music sessions. I stacked LEWITT’s multitrack recordings onto a blank Fairlight timeline, which automatically created new mono or stereo tracks, based on the file. I was able to add new busses (stem or submaster channels) for each instrument group and then route those busses to the output. It was easy to add effects and control levels by clip, by track, or by submaster.

Fairlight might not be my first choice if I were a music mixer, but I could easily produce a good mix with it. The result is a transparent, modern sound. If you prefer vintage, analog-style coloration, then you’ll need to add third-party plug-ins for that. Whether or not Fairlight fits the bill for music will depend on your taste as a mixer.

Conclusion

Once again, Blackmagic Design has added more power in the DaVinci Resolve 18 release. Going back to the start of this post – is this the version that will finally cause a paradigm shift away from the leading editing applications? In my opinion, that’s doubtful. As good as it is, the core editing model is probably not compelling enough to coax the majority of loyal users away from their favorite software. However, that doesn’t mean those same users won’t tap into some of Resolve’s tools for a variety of tasks.

There will undoubtedly be people who shift away from Premiere Pro or Final Cut Pro and over to DaVinci Resolve. Maybe it’s for Resolve’s many features. Maybe they’re done with subscriptions. Maybe they no longer feel that Apple is serious. Whatever the reason, Resolve is a highly capable editing application. In fact, during the first quarter of this year I graded and finished a feature film that had been cut entirely in Resolve 17.

Software choices can be highly personal and intertwined with workflow, muscle memory, and other factors. Making a change often takes a big push. I suspect that many Resolve editors are new to editing, often because they got a copy when they bought one of the Blackmagic Design cameras. Resolve just happens to be the best application for editing BRAW files and that combo can attract new users.

DaVinci Resolve 18 is a versatile, yet very complex application. Even experienced users don’t tap into the bulk of what it offers. My advice to any new user is to start with a simple project. Begin in the Cut or Edit page, get comfortable, and ignore everything else. Then learn more over time as you expand the projects you work on and begin to master more of the toolkit. If you really want to dive into DaVinci Resolve, then check out the many free and paid tutorials from Blackmagic Design, Mixing Light, and Ripple Training. Resolve is one application where any user, regardless of experience, will benefit from training, even if it’s only a refresher.

I’ve embedded a lot of links throughout this post, so I hope you’ll take the time to check them out. They cover some of the enhancements that were introduced in earlier versions, the history of DaVinci Resolve, and links to the new features of DaVinci Resolve 18. Enjoy!

©2022 Oliver Peters

Analogue Wayback, Ep. 17

The shape of your stomach.

The 1970s into the early 1990s was an era of significant experimentation and development in analog and digital video effects and animation. This included computer video art projects, broadcast graphics, image manipulation, and more. Denver-based Computer Image Corporation was both a hardware developer and a production company. Hardware included an advanced video switcher and the Scanimate computer animation system. The video switchers were optimized for compositing and an integral part of the system; however, it was the Scanimate analog computer that is most remembered.

Computer Image developed several models of Scanimate, which were also sold to other production companies, including Image West in Los Angeles (an offshoot of CI) and Dolphin Productions in New York. Dave Sieg, Image West’s former chief engineer, has a detailed website dedicated to preserving the history of this technology.

I interviewed for a job at Dolphin in the mid-1980s and had a chance to tour the facility. This was a little past the company’s prime, but they still had a steady stream of high-end ad agency and music video clients. Some of Dolphin’s best-known work included elements for PBS’ Sesame Street and The Electric Company, the show open for Washington Week in Review (PBS), news opens for NBC, CBS, and ABC News, as well as numerous national commercials. One memorable Pepto-Bismol campaign featured actors who step forward from a live action scene. As they do, their bodies turn a greenish monochrome color and their stomachs expand and become distorted.

Dolphin was situated in a five-story brownstone near Central Park. It had formerly housed a law practice. Behind reception on the ground floor was the videotape room, cleverly named Image Storage and Retrieval. The second floor consisted of an insert stage plus offices. Editing/Scanimate suites were on the third and fourth floors. What had been the fifth-floor law library now held the master videotape reels instead of books. A stairwell connected the floors and provided the cable runs to connect the electronics between rooms.

Each edit suite housed several racks of Scanimate and switcher electronics, the editor’s console, and client seating. At the time of my interview and tour, Dolphin had no computer-assisted linear edit controllers, such as CMX (these were added later). Cueing and editing were handled via communication between the editor and the VTR operator on the ground floor. They used IVC-9000 VTRs, which were 2″ helical scan decks. These are considered to have provided the cleanest image over multiple generations of any analog VTR ever produced.

Each suite could use up to four decks, and animation was created by layering elements over each other from one VTR to the next. The operator would go round-robin from deck to deck. Play decks A/B/C and record onto D. Next pass, play B/C/D and record onto A to add more. Then play C/D/A and record onto B, and so on – until as many as 20 layers were composited in sophisticated builds. Whichever reel the last pass ended up on became the final version from that session. Since few other companies or broadcasters possessed compatible IVC VTRs, 2″ quad copies of the finished commercial or video were made from the 2″ helical master, and that was the tape a client left with.
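The round-robin rotation described above can be sketched as a simple scheduling loop. This is purely an illustrative model (the deck names and pass count are assumptions for the example, not anything Dolphin actually ran):

```python
# Hypothetical sketch of the round-robin layering scheme: three decks
# play back the accumulated composite while the fourth records the new
# layer, then the record role rotates to the next deck.

DECKS = ["A", "B", "C", "D"]

def layering_passes(num_passes):
    """Yield (play_decks, record_deck) for each compositing pass."""
    for i in range(num_passes):
        # Pass 0 plays A/B/C and records onto D; each later pass
        # shifts the whole rotation forward by one deck.
        play = [DECKS[(i + j) % 4] for j in range(3)]
        record = DECKS[(i + 3) % 4]
        yield play, record

for play, record in layering_passes(4):
    print(f"play {'/'.join(play)} -> record {record}")
```

Running the loop for four passes prints the same sequence the text describes: A/B/C onto D, B/C/D onto A, C/D/A onto B, then D/A/B onto C.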

This method of multi-pass layering is a technique that later took hold in other forms, such as the graphic design for TBS and CNN done by J. C. Burns and then more sophisticated motion layering by Charlex using the Abekas A-62. The concept is also the foundation for such recursive recording techniques as the preread edit function that Sony integrated into its D2 and Digital Betacam VTRs.

The path through Scanimate started with a high-resolution oscilloscope and companion camera. The camera signal was run through the electronics, which included analog controls and patching. Any image to be manipulated (transformed, moved, rotated, distorted, colorized) was sourced from tape, an insert stage camera, or a copy stand titling camera and displayed in monochrome on the oscilloscope screen. This image was re-photographed off of the oscilloscope screen by the high-resolution video camera and that signal sent into the rest of the Scanimate system.

Images were manipulated in two ways. First, the operator could use Scanimate to manipulate/distort the sweep of the oscilloscope itself, which would in turn cause the displayed image to distort. Once this distorted display was picked up by the high-resolution camera, the rest of Scanimate could be used to further alter the image through colorization and other techniques. Various keying and masking methods were used to add in each new element as layers were combined for the final composite.

Stability was of some concern since this was an analog computer. If you stopped for lunch, you might not be able to perfectly match what you had before lunch. The later Scanimate systems developed by Computer Image addressed this by using digital computers to control the analog computer hardware, making them more stable and consistent.

The companies evolved or went out of business and the Scanimate technology fell by the wayside. Nevertheless, it’s an interesting facet of video history, much like that of the early music synthesizers. Even today, it’s hard to perfectly replicate the look of some of the Scanimate effects, in part because today’s technology is too good and too clean! While it’s not a perfect analogy, these early forms of video animation offer a similar charm to the analog consoles, multitrack recorders, and vinyl cherished by many audiophiles and mixing engineers.

Check out this video at Vimeo if you want to know more about Scanimate and see it in action.

©2022 Oliver Peters