Analogue Wayback, Ep. 21

The Jacksonville Jazz Festival

Regular readers probably know by now that I have a soft spot in my heart for music production. I’ve worked on a number of films and TV shows tied to musical performances, and it’s always been enjoyable. One ongoing project was post for the Jacksonville Jazz Festival PBS specials in the 80s and 90s. Although I was living in Jacksonville at the start of this annual event, I really didn’t get involved with the shows until a few years after I’d left town.

The yearly Jacksonville Jazz Festival is a cultural highlight for the city of Jacksonville, Florida. Launched in 1980, the festival spent its first two years in the neighboring fishing town of Mayport, home of a large US Navy base. It quickly shifted to downtown Jacksonville’s Metropolitan Park by the St. Johns River, which cuts through the heart of the city.

Recording jazz in the “backyard”

WJCT, the local PBS and NPR affiliate, had been covering the annual event for PBS since the second year of the festival. By 1983, the festival and the station were tightly intertwined. That year the park was renovated and a new WJCT facility was built adjacent to it. Having the building next to the park provided a unique opportunity to install direct audio and video cable runs between the station facility and the covered pavilion performance stage at the park. To inaugurate both, WJCT covered the festival with an eight-hour live broadcast.

From 1981 until 1994 (with the exception of 1983), WJCT produced each year’s festival as a one-hour TV special for PBS distribution. The festival was a fall event; each special was posted over the subsequent months and aired early the next year. My involvement started with the 1984 show, and I helped post eight of the eleven TV specials during those years. I worked closely with the station’s VP of Programming, Richard V. Brown, and Creative Services Director, Bill Weather.

Production and post arrangements varied from year to year. Bill Weather was the show’s producer/director for the live event recordings most of those eleven years. (Other directors included Dan Kossoff, David Atwood, and Patrick Kelly.) Weather and I traded off working as the creative editor, so in some years I was the online editor and in others, both editor and online editor. During that decade of shows, post was either at Century III (where I worked) or at our friendly crosstown rival, The Post Group at The Disney-MGM Studios.

Turning the festival into a TV show

Richard V. Brown was the show’s executive producer and also handled the artist arrangements for the show and the festival. Performers received fees for both the live event appearance and the TV show (if they were featured in it), so budgets often dictated who was presented in the telecast. A legendary but expensive performer like Ray Charles or Miles Davis might headline the festival, yet not appear in the TV special. However, this wasn’t always dictated by money, since top names already carried a level of media overexposure. And so, the featured artists each year covered a wide spectrum of traditional and contemporary jazz styles, often introducing lesser-known artists to a wider TV audience. New Orleans, fusion, Latin, blues, and even some rock performers were included in this eclectic mix.

The artist line-up for each special was decided before the event. Most shows highlighted four acts of about 10 to 15 minutes each. The songs to be included from each artist were selected from the live set, which tended to run for about an hour. The first editorial step (handled by Brown and Weather) was to select which songs to use from each performer, as well as any internal song edits needed to ensure that the final show length fit PBS guidelines.

Recording the live experience

Production and post grew in sophistication over time. Once the WJCT building was completely ready, multiple cameras could be controlled and switched from the regular production control room. No mobile unit required. This usually included up to seven cameras for the event. A line cut was recorded to 1″ videotape, along with several of the cameras as extra iso recordings to be used in post.

The station’s own production equipment was augmented with other gear, including stage lighting, a camera dolly, and a camera boom. For such an important local event, the station crew was also expanded with local production professionals, including a few top directors and cinematographers working the stage and running cameras, plus volunteers who worked tirelessly to make each year memorable.

When it came to sound, the new WJCT facility also included its own 24-track audio recorder. Stage mic signals could be split in order to simultaneously feed the front-of-house mixing board and the stage monitors, and to run back into the building to the multitrack recorder. These 2″ analog audio masters also recorded “time of day” timecode and thus could be synced with the video line cut and iso recordings in post.
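
Syncing by “time of day” timecode is really just frame arithmetic. Here’s a minimal sketch in Python, assuming 30 fps non-drop timecode for simplicity (the actual NTSC recordings would have used 29.97 fps timecode, but the offset math is the same idea):

```python
# A minimal sketch of "time of day" timecode sync. Assumes 30 fps non-drop
# timecode purely for simplicity; not the actual house standard.

FPS = 30

def tc_to_frames(tc: str) -> int:
    """Convert "HH:MM:SS:FF" timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def sync_offset(video_tc: str, audio_tc: str) -> int:
    """Frames the audio reel must be advanced to line up with the video."""
    return tc_to_frames(video_tc) - tc_to_frames(audio_tc)

# Example: both machines rolled at different times of day.
print(sync_offset("14:32:10:00", "14:31:58:15"))  # 345 frames
```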

Editing is more than just editing

Although my role was post, I was able to attend several of the live festivals, even if I was only the online editor. I sat in the control room and functioned a bit like an assistant director, noting potential editorial issues and making sure that we had coverage of all the members of the band. One performer might take a solo, but I also needed options for other camera angles. As with most live jazz and rock performances, the band members might trade off solos, so it was important to keep an eye on where the focus of the performance could switch to next. Since the director had his hands full just covering the real-time action, I would often lean over and ask for slightly different coverage from one of the cameras not currently punched up.

None of the crew was intimately familiar with the live performances of these acts, so it was all about having a sixth sense for the music. However, there was one surprising exception. That was the year that Paul Shaffer and the World’s Most Dangerous Band headlined. As you probably know, this was the house band for Late Night with David Letterman, but they also kept a limited live touring schedule.

For their set, Shaffer sent in a coordinator with a printout of their entire set rundown. Shaffer and the band had choreographed the whole set, so the coordinator was able to give the director a “heads up” for each part of the performance. In addition, Shaffer is the consummate band leader. His set included a jam with his band and several other jazz artists from earlier in the day. Each had a cameo solo. This sort of ad hoc, live jam can often become a big mess, but this one went off as if they’d rehearsed it. Shaffer literally put this together in quick conversations with the other artists during the course of that day.

3/4″ and a legal pad of notes

Once everything was in the can, post could start – initially with content selection. Then camera cuts could be cleaned up using the iso angles. This “offline edit” was largely done by reviewing the 3/4″ U-matic tapes, which had been recorded for the line cut and three of the iso angles using a quad-split generator with a timecode overlay. This gave the editor a multicam view, but from a single tape source. Unfortunately, listing camera cut changes to specific angles required a lot of meticulous, handwritten timecode notes. (In the early days, four monitors and a timecode generator display were stacked as closely as possible, with an independent camera recording them to 3/4″ tape.)

Based on these notes, the show master could then be edited in a linear, online session using the 1″ originals and mastering to 1″ or D2. If the line cut of the live recording was solid, then any given song might only have new edits for about 10-25% of the song. Edits might mean a cut to a different angle or maybe the same angle, but just a bit sooner. In addition to the live camera angles, we also had extra ENG footage, including audience shots, party boats anchored in the river nearby, and even some helicopter aerials of the wider event grounds, the pavilion stage, and the audience.
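
To make the offline-to-online handoff concrete, here’s a minimal sketch of how those handwritten notes map to linear edit events. The CutNote structure and its fields are hypothetical; the real notes were simply timecodes against camera numbers on a legal pad.

```python
# A hypothetical, structured stand-in for a legal pad of multicam notes.

from dataclasses import dataclass

@dataclass
class CutNote:
    timecode: str   # where the new cut should happen (from the quad-split view)
    camera: int     # which iso angle to cut to
    remark: str = ""

notes = [
    CutNote("01:04:22:10", 3, "sax solo starts - tighter shot"),
    CutNote("01:04:48:02", 1, "back to wide before the drum fill"),
    CutNote("01:05:10:20", 2, "bass takes over the solo"),
]

# In the online session, each note becomes one linear edit event:
for n in notes:
    print(f"cut to camera {n.camera} at {n.timecode}  ({n.remark})")
```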

In a typical year, I would finish the camera clean-up edits and trims unsupervised, then Brown and Weather would arrive for the supervised part of the online edit. Here we would build the visual style for the show open and transitions between songs and bands. Plus final credits. This was at the dawn of digital post, so most show opens involved a lot of layering.

It’s all about the mix

The Jacksonville Jazz Festival PBS specials were, of course, about the music. Getting the best possible mix was a very important consideration. In the earliest years, the live recording and remix methodology was evolving, but it generally ran under the auspices of the WJCT audio engineers. This shifted to our Century III staff audio engineer, Jerry Studenka, who handled the mix for the shows for several years in the late 80s.

To the best of my recollection, the 24-track tapes were recorded at 15 ips with Dolby SR noise reduction. This enabled an hourlong set to be recorded on a single reel of tape. Audio mixes/remixes were recorded onto two tracks of that same 24-track tape. In later years, working out of the Century III facility on the lot at Universal, we used Sony 24-track digital audio recorders. The staff would first bounce the analog master reels to digital tape ahead of the audio mix session. Then the audio engineer would mix from one digital recorder to the other. Century III and The Post Group were equipped with Solid State Logic consoles in their main audio rooms, which provided a comfort factor for any experienced music mixer.

The performances were recorded live and mixed on-the-fly during each set as the first pass. Then in the post session, they were polished or remixed in part with punch-ins or even fully remixed depending on what everyone felt gave the best result. But the mixes were all based on the actual live recordings – no overdubs added later.

Every year, each performer was afforded the opportunity to bring in their own recording engineer or representative for the show’s mix. Only two artists ever took Brown up on that – Paul Shaffer and Spyro Gyra. Larry Swist came down for Spyro Gyra, who appeared at numerous festivals and was featured in several of the specials. Swist, who later became a well-respected studio designer, was the recording engineer for the band’s albums. Shaffer sent Will Lee (the band’s vocalist/bassist) as his rep to the mixing session. Spyro Gyra and Shaffer’s band happened to be on the same show that year. By the time Lee arrived, Studenka and Swist already had a good mix, so Lee was able to quickly sign off.

Swist had an easy-going, “no drama” personality. Everyone had such a good experience working with him that for each year thereafter, Swist was brought in for all of the sessions. He coordinated both the live recording to multitrack during the event and then remixed all the music for the show during post.

These remixes weren’t as straightforward as they might seem. All sound post was handled on tape, not with any sort of DAW. It was a linear process, just like the picture edits. First of all, there were internal edits within the songs. Therefore, all outboard processing and console and fader settings had to match at the edit point, so that the edit was undetectable. Second, the transitions between songs or from one artist to the next had to be bridged. This was generally done by overlapping additional crowd applause across the change to hide the performance edit, which again required audio matching.
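
As a modern analogy, the applause bridge works like an equal-power crossfade in a DAW. The sketch below is purely illustrative – in the actual linear sessions the overlap was achieved by matching console settings and punching in on tape, not by rendering samples:

```python
# A minimal sketch of bridging two performances with overlapping applause:
# an equal-power crossfade hides the edit point.

import numpy as np

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Join clip a into clip b with an equal-power fade over `overlap` samples."""
    t = np.linspace(0.0, 1.0, overlap)
    fade_out = np.cos(t * np.pi / 2)   # tail of clip a
    fade_in = np.sin(t * np.pi / 2)    # head of clip b
    bridge = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], bridge, b[overlap:]])

# Example with two seconds of fake "applause" at 48 kHz:
sr = 48_000
applause_a = np.random.randn(2 * sr) * 0.1
applause_b = np.random.randn(2 * sr) * 0.1
joined = crossfade(applause_a, applause_b, overlap=sr // 2)
print(joined.shape)  # (168000,) -- 3.5 seconds after the half-second overlap
```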

The Jacksonville Jazz Festival of 1994 (aired 1995) was the last of the PBS specials, due in part to the cost of production and TV rights. Eventually WJCT turned over production of the festival itself to the City of Jacksonville. The results from that era speak for themselves. The collective effort produced not only great festival experiences, but also memorable television. Unfortunately, some of the production folks involved, like Richard V. Brown, Larry Swist, and Jerry Studenka, are no longer with us. And likewise, neither are some of the featured performers. But together, they left a worthwhile legacy that is still carried on by the City of Jacksonville to this day.

©2022 Oliver Peters

Analogue Wayback, Ep. 20

D2 – recursive editing

Video production and post transitioned from analog to digital starting in the late 1980s. Sony introduced the component digital D1 videotape recorder, but it was too expensive for most post facilities and harder to integrate into existing composite analog plants. In 1988 Ampex and Sony introduced the D2 format – an uncompressed, composite digital VTR with built-in A/D and D/A conversion.

D2 had a successful commercial run of about 10 years. Along the way it competed for market share with Panasonic’s D3 (composite) and D5 (component) digital formats. D2 was eventually supplanted by Sony’s own mildly compressed Digital Betacam format, whose arrival coincided with the widespread availability of serial digital routing, switching, and so on, successfully moving the industry into a digital production and post environment.

During D2’s heyday, these decks provided the ideal replacement for older 1″ VTRs, because they could be connected to existing analog routers, switchers, and patch bays. True digital editing and transfer was possible if you connected the decks using composite digital hardware and cabling (with large parallel connections, akin to old printer cables). Because of this bulk, there weren’t too many composite digital edit suites. Instead, digital I/O was reserved for direct VTR to VTR copies – i.e. a true digital clone. Some post houses touted their “digital” edit suites, but in reality their D2 VTRs were connected to the existing analog infrastructure, such as the popular Grass Valley Group 200 and 300 video switchers.

One unique feature of the D2 VTRs was “read before write”, also called “preread”. This was later adopted in the Digital Betacam decks, too. Preread enabled the deck to play a signal and immediately record that same signal back onto the same tape. If you passed the signal through a video switcher, you could add more elements, such as titles. There was no visual latency in using preread. While you did incur some image degradation by going through D/A and A/D conversions along the way, the generation loss was minor compared with 1″ technology. If you stayed within a reasonable number of generations, then there was no visible signal loss of any consequence.
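
To see why a “reasonable number of generations” was workable, here’s a toy simulation. The noise figure is an arbitrary stand-in, not a measured D2 spec; the point is simply that a small per-pass degradation accumulates slowly:

```python
# A toy model of generational loss: each D/A -> analog switcher -> A/D
# round trip adds a little noise. The noise level is illustrative only.

import numpy as np

rng = np.random.default_rng(1)

def analog_pass(frame, noise=0.002):
    """One analog round trip, modeled as additive Gaussian noise."""
    return np.clip(frame + rng.normal(0.0, noise, frame.shape), 0.0, 1.0)

original = rng.random((480, 720))       # a fake SD luminance frame
frame = original
for generation in range(1, 6):
    frame = analog_pass(frame)
    err = frame - original
    snr_db = 10 * np.log10(np.mean(original ** 2) / np.mean(err ** 2))
    print(f"generation {generation}: ~{snr_db:.1f} dB vs. the original")
```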

Up until D2, performing a simple transition like a dissolve required three VTRs – the A and B playback sources, plus the recorder. If the two clips were on the same source tape, then one of the two clips had to be copied (i.e. dubbed) onto a second tape to enable the transition. If a lot of these transitions were likely, an editor might take the time before the session to copy the camera tape, creating a “B-roll dub” before ever starting. One hourlong camera tape would take an hour to copy – longer, if the camera originals were longer.

With D2 and preread, the B-roll dub process could be circumvented, thus shaving unproductive time off of the session. Plus, only two VTRs were required to make the same edit – a player and a recorder. The editor would record the A clip long in order to have a “handle” for the length of the dissolve. Then switch on preread and preview the edit. If the preview looked good, then record the dissolve to the incoming B clip, which was playing from the same camera tape. This was all recorded onto the same master videotape.
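
Here’s a minimal sketch of that two-pass logic, with frames reduced to plain numbers. The function name and clip values are purely illustrative:

```python
# Pass 1 records the A clip long (the extra frames are the "handle").
# Pass 2 uses preread: the tape plays back through the switcher while
# the B clip dissolves in over the handle, re-recorded onto the same tape.

def preread_dissolve(a_clip, b_clip, handle):
    master = list(a_clip)                           # pass 1: A clip plus handle
    start = len(master) - handle
    for i in range(handle):                         # pass 2: mix B in over the handle
        mix = (i + 1) / handle                      # 0 -> 1 across the dissolve
        master[start + i] = (1 - mix) * master[start + i] + mix * b_clip[i]
    master.extend(b_clip[handle:])                  # B clip continues clean
    return master

a = [1.0] * 6    # A clip: constant "white"
b = [0.0] * 6    # B clip: constant "black"
print(preread_dissolve(a, b, handle=4))
# [1.0, 1.0, 0.75, 0.5, 0.25, 0.0, 0.0, 0.0]
```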

Beyond this basic edit solution, D2’s preread ushered in what I would call recursive editing techniques. It had a lot in common with the sound-on-sound audio recording pioneered by the legendary Les Paul. For example, television show deliverables often require the master plus a “textless” master (no credits or titles). With D2, the editor could assemble the clean, textless master of the show, then make a digital clone of that tape, and finally go back to one of the two and use the preread function to add titles over the existing video. Another example would be simple graphic composites, like floating video boxes over a background image or a simple quad split. Simply build up all the layers with preread, one at a time, in successive edit passes recorded onto the same tape.
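
The recursive build-up is easy to model: each pass keeps whatever the new overlay doesn’t cover. A minimal sketch, with a list of labels standing in for the tape:

```python
# Each call is one preread generation: play the tape, key one more
# element over it, and re-record onto the same tape.

def preread_pass(tape, overlay):
    """Keep the old frame wherever the overlay is transparent (None)."""
    return [o if o is not None else t for t, o in zip(tape, overlay)]

tape = ["bg"] * 8                       # pass 0: the clean, textless master
tape = preread_pass(tape, [None, "box1", "box1", None, None, None, None, None])
tape = preread_pass(tape, [None, None, None, None, "box2", "box2", None, None])
tape = preread_pass(tape, [None] * 6 + ["title", "title"])
print(tape)
# ['bg', 'box1', 'box1', 'bg', 'box2', 'box2', 'title', 'title']
```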

The downside was that if you made a mistake, you had to start over again. There was no undo. However, by this time linear edit controllers were pretty sophisticated and often featured complex integrations with video switchers and digital effects devices. This was especially true in an online bay made up of all Sony hardware. If you did make a mistake, you could simply start over using the edit controller’s auto-assembly function to automatically re-edit the events up to the point of the mistake. Not as good as modern software’s undo feature, but usually quite painless.

D2 held an important place in video post, not only as the mainstream beginning of digital editing, but also for the creative options it inspired in editors.

©2022 Oliver Peters

Analogue Wayback, Ep. 19

Garage bands before the boy bands

As an editor, I’ve enjoyed the many music-oriented video productions I’ve worked on. In fact, one of my first feature films was a concert film highlighting many top reggae artists. Along the way, I’ve cut numerous jazz concerts for PBS, along with various videos for folks like Jimmy Buffett and the Bob Marley Foundation.

We often think about the projects that “got away” or never happened. For me, one of those was a documentary about the “garage band” acts of central Florida during the 1960s. These were popular local and regional acts with an eye towards stardom, but they never became household names like Elvis or The Beatles. Central Florida was a hotbed for such acts back then, in the same way that San Francisco, Memphis, or Seattle have been during key moments in rock ‘n roll history.

For much of the early rock ‘n roll era, music was a vertically integrated business. Artist management, booking, recording studios, and marketing/promotion/distribution were all handled by the same company. The money was made in booking performances more than in record sales.

Records, especially 45 RPM “singles”, were produced in order to promote the band. Singles were sent free to radio stations in hopes that they would be placed into regular rotation. That airplay would familiarize listeners/fans with the bands and their music. While record purchases were a goal, the bigger aim was name recognition, so that when a band was booked for a local event (dance, concert, youth club appearance, tour date) the local fans would buy tickets and show up. Naturally some artists broke out in a big way, which meant even more money in record sales, as well as touring.

Record labels, recording studios, and talent booking services – whether the same company or separate entities – enjoyed a very symbiotic relationship. Much of this is chronicled in a mini-doc I cut for the Memphis Rock ‘n Soul Museum. It highlighted studios like Sun, Stax, and Hi and their role in the birth of rock ‘n roll and soul music.

In the central Florida scene, one such company was Bee Jay, started by musician/entrepreneur Eric Schabacker. Bee Jay originally encompassed a booking service and eventually a highly regarded recording studio responsible for many local acts. Many artists passed through those studio doors, but one of the biggest acts to record there was probably Molly Hatchet. I got to know Schabacker when the post facility I was with acquired the Bee Jay Studios facility.

Years later, Schabacker approached me with an interesting project – a documentary about the local garage bands of the 60s. Together with a series of interviews with living band members, post for the documentary would also involve the restoration of several proto-music videos. Bee Jay had videotaped promotional videos for 13 of the bands back in the day. While Schabacker handled the recording of the interviews, I tackled the music videos.

The original videos were recorded onto half-inch open-reel videotape using a rudimentary black-and-white production system. Unfortunately, the video tubes in the cameras back then didn’t always handle bright outdoor light well, and the video switcher did not feature clean vertical interval switching. The result was a series of recordings in which video levels fluctuated and camera cuts often glitched. In some sections the tape machine had lost servo lock during recording. The audio was not recorded live. Instead, the bands lip-synced to playback of their song recordings, which was captured in sync with the video. These old videos were transferred to DV25 QuickTime files, which formed my starting point.

Step one was to get clean audio. The bands’ tunes had been recorded and mixed at Bee Jay Studios at the time into a 13-song LP that was used as promotion to book those bands. However, at this point over three decades later, the master recordings were no longer available. But Schabacker did have pristine vinyl LPs from those sessions. These were turned over to local audio legend and renowned mastering engineer, Bob Katz. In turn, he took those versions and created remastered files for my use.

Now that I had good sound, my task was to take the video – warts and all – and rebuild it in sync with the song tracks, clean up the image, remove any damage and glitches, and in general end up with a usable final video for each song. Final Cut Pro (legacy) was the tool of choice at that time. Much of the “restoration” involved slightly slowing or speeding up shots to resync the files – shot by shot. I also had to repeat and slomo some shots for fit-and-fill, since frames would be lost as glitchy camera cuts and other disturbances were removed. In the end, I rebuilt all 13 into presentable form.
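
The fit-and-fill retiming boils down to simple arithmetic: the playback rate is the ratio of source frames to the gap being filled. A minimal sketch with illustrative numbers:

```python
# If a shot must stretch to cover frames lost to removed glitches,
# its playback rate is source length over target length.

def retime_rate(source_frames: int, target_frames: int) -> float:
    """Playback rate that makes a shot exactly fill the target duration."""
    return source_frames / target_frames

# A 72-frame shot has to cover a 90-frame hole after a glitch was cut out:
rate = retime_rate(72, 90)
print(f"play at {rate:.0%} speed")  # play at 80% speed (slight slomo)
```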

While that was a labor of love, the downside was that the documentary never came to be. All of these bands had recorded great-sounding covers (such as Solitary Man), but no originals. Unfortunately, it would have been a nightmare and quite costly to clear the music rights for these clips if they were used in the documentary. A shame, but that’s life in the filmmaking world.

None of these bands made it big, but in subsequent years, bands of another era like *NSYNC and the Backstreet Boys did. They ushered in a new boy band phenomenon, which carries on to this day in the form of K-pop, among other styles.

©2022 Oliver Peters

Analogue Wayback, Ep. 18

Connections Redux

In 1993 I worked on a corporate image short film for AT&T entitled Connections: AT&T’s Vision of the Future. I wrote about this in a 2010 blog post, but I thought it was a good topic to revisit in the context of this Analogue Wayback series. Next year will be 30 years since its release, which makes it a good time to compare these futurists’ ideas with what was actually developed. (The full film can be viewed here on YouTube.)

The inspiration for the film came from AT&T exec Henry Bassman. It was designed as a vision piece to be used in various public and investor relations endeavors. The concepts shown in the film were based on the ideas of a number of theorists working with AT&T’s labs and grounded in actual technology that was being studied and developed there. The film’s concept was to extrapolate those ideas 20 years into the future and show actual productization that might come to be. Henry Bassman and director Robert Wiemer wove these ideas into the fictional narrative of this 15-minute short film. Bassman discussed Connections: AT&T’s Vision of the Future in this 2007 interview with the Paleo-Future blog.

The production was filmed in the Universal Studios Florida soundstages on 35mm and posted at Century III. We transferred the film to Sony D2 composite digital tape using our Rank Cintel Mark III/DaVinci-equipped telecine suite. The offline edit was handled with an Ediflex system and the online conform/finishing edit done in our online edit bays (CMX 3600 edit system, Grass Valley 300 switcher with Kaleidoscope DVE, and D2 mastering). My role was the online edit, along with a number of standard visual effects, like screen inserts and basic composites. The more advanced 2D and 3D visual effects were handled by our designers.

While the film might certainly seem quaint to modern eyes, the general concepts and the quality of the visual effects were in keeping with other productions of that era, such as Star Trek: The Next Generation – of course, without the fantasy, sci-fi component. Remember that the internet was still young, no iPhone existed, and most of today’s commonplace technology simply didn’t exist outside of the lab. Naturally, as with any of these past looks into the future, the way that theoretical concepts morph into real technology is never exactly the same as depicted, nor as seamless in operation. But these were pretty darn close.

I covered much of the technology and those concepts in my 2010 post, but it’s worth taking a new look at the ideas shown:

Simultaneous Facetime or Zoom-like conversations

Real-time captioning with live foreign language translation

Seat back airline entertainment systems with communications capabilities

16×9 displays

Foldable tablets

Tablet cameras with augmented reality

A form of the metaverse with avatars and Oculus-style interfaces

Noise-cancelling communication area

Large flat-panel TV displays with computer interfaces

Computer intelligent assistants

Online shopping with augmented reality

Online, computer-assisted learning in classrooms

Super-thin computers

Automotive communications/media electronics

One can certainly point out flaws when the film is viewed through a modern lens. Plus, since this is an AT&T piece, it focuses on some of their ideas, like active phone booths and the video phone. Not to mention some obvious misses, like not clearly foreseeing the advent of the modern smartphone. Nevertheless, it’s interesting to see how close so much of this is. It makes you wonder how we will look back on today 20 years from now.

©2022 Oliver Peters

Analogue Wayback, Ep. 17

The shape of your stomach

The 1970s into the early 1990s was an era of significant experimentation and development in analog and digital video effects and animation. This included computer video art projects, broadcast graphics, image manipulation, and more. Denver-based Computer Image Corporation was both a hardware developer and a production company. Hardware included an advanced video switcher and the Scanimate computer animation system. The video switchers were optimized for compositing and were an integral part of the system; however, it is the Scanimate analog computer that is most remembered.

Computer Image developed several models of Scanimate, which were also sold to other production companies, including Image West in Los Angeles (an offshoot of CI) and Dolphin Productions in New York. Dave Sieg, Image West’s former chief engineer, has a detailed website dedicated to preserving the history of this technology.

I interviewed for a job at Dolphin in the mid-1980s and had a chance to tour the facility. This was a little past the company’s prime, but they still had a steady stream of high-end ad agency and music video clients. Some of Dolphin’s best-known work included elements for PBS’ Sesame Street and The Electric Company, the show open for Washington Week in Review (PBS), news opens for NBC, CBS, and ABC News, as well as numerous national commercials. One memorable Pepto-Bismol campaign featured actors who step forward from a live action scene. As they do, their bodies turn a greenish monochrome color and their stomachs expand and become distorted.

Dolphin was situated in a five-story brownstone near Central Park. It had formerly housed a law practice. Behind reception on the ground floor was the videotape room, cleverly named Image Storage and Retrieval. The second floor consisted of an insert stage plus offices. Editing/Scanimate suites were on the third and fourth floors. What had been the fifth-floor law library now held the master videotape reels instead of books. A stairwell connected the floors and provided the cable runs to connect the electronics between rooms.

Each edit suite housed several racks of Scanimate and switcher electronics, the editor’s console, and client seating. At the time of my interview and tour, Dolphin had no computer-assisted linear edit controllers, such as CMX (these were added later). Cueing and editing were handled via communication between the editor and the VTR operator on the ground floor. They used IVC-9000 VTRs, which were 2″ helical scan decks. These are considered to have provided the cleanest image over multiple generations of any analog VTR ever produced.

Each suite could use up to four decks, and animation was created by layering elements over each other from one VTR to the next. The operator would go round-robin from deck to deck. Play decks A/B/C and record onto D. Next pass, play B/C/D and record onto A to add more. Now, play C/D/A and record onto B for more again, and so on – until maybe as many as 20 layers were composited in sophisticated builds. Whichever reel the last pass ended up on was then the final version from that session. Few other companies or broadcasters possessed compatible IVC VTRs, so 2″ quad copies of the finished commercial or video were made from the 2″ helical master – that’s the tape a client left with.
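
The deck rotation itself is a simple round-robin schedule. Here’s a minimal sketch with deck letters only (the players are listed alphabetically here, which differs cosmetically from the rotation order described above):

```python
# Round-robin layering across four decks: each pass plays three decks
# and records onto the fourth, and the record deck advances by one.

DECKS = ["A", "B", "C", "D"]

def schedule(passes):
    """Print which decks play and which one records for each layering pass."""
    for n in range(passes):
        record = DECKS[(3 + n) % 4]     # D records first, then A, B, C ...
        players = [d for d in DECKS if d != record]
        print(f"pass {n + 1}: play {'/'.join(players)} -> record {record}")

schedule(4)
# pass 1: play A/B/C -> record D
# pass 2: play B/C/D -> record A
# pass 3: play A/C/D -> record B
# pass 4: play A/B/D -> record C
```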

This method of multi-pass layering is a technique that later took hold in other forms, such as the graphic design for TBS and CNN done by J. C. Burns and then more sophisticated motion layering by Charlex using the Abekas A-62. The concept is also the foundation for such recursive recording techniques as the preread edit function that Sony integrated into its D2 and Digital Betacam VTRs.

The path through Scanimate started with a high-resolution oscilloscope and companion camera. The camera signal was run through the electronics, which included analog controls and patching. Any image to be manipulated (transformed, moved, rotated, distorted, colorized) was sourced from tape, an insert stage camera, or a copy stand titling camera and displayed in monochrome on the oscilloscope screen. This image was re-photographed off of the oscilloscope screen by the high-resolution video camera and that signal sent into the rest of the Scanimate system.

Images were manipulated in two ways. First, the operator could use Scanimate to manipulate/distort the sweep of the oscilloscope itself, which would in turn distort the displayed image. Once this distorted oscilloscope display was picked up by the high-resolution camera, the rest of Scanimate could be used to further alter the image through colorization and other techniques. Various keying and masking methods were used to add in each new element as layers were combined for the final composite.
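
In digital terms, distorting the sweep amounts to remapping the coordinates at which the source image is sampled. Here’s a minimal sketch that wobbles the horizontal sweep with a sine wave; the wrap-around at the edges is a simplification, and of course the real system patched analog waveforms into the sweep circuits rather than computing pixels:

```python
# A digital stand-in for sweep distortion: shift each scanline's sample
# positions by a sine function of the line number.

import numpy as np

def wobble_sweep(image: np.ndarray, amplitude: float, cycles: float) -> np.ndarray:
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        shift = amplitude * np.sin(2 * np.pi * cycles * y / h)
        xs = (np.arange(w) + int(round(shift))) % w   # displaced sweep positions
        out[y] = image[y, xs]
    return out

img = np.zeros((64, 64))
img[:, 28:36] = 1.0                  # a plain vertical bar as the "title card"
warped = wobble_sweep(img, amplitude=6.0, cycles=2.0)
print(warped.sum() == img.sum())     # True: pixels are moved, not lost
```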

Stability was of some concern since this was an analog computer. If you stopped for lunch, you might not be able to perfectly match what you had before lunch. The later Scanimate systems developed by Computer Image addressed this by using digital computers to control the analog computer hardware, making them more stable and consistent.

The companies evolved or went out of business, and the Scanimate technology fell by the wayside. Nevertheless, it’s an interesting facet of video history, much like that of the early music synthesizers. Even today, it’s hard to perfectly replicate the look of some Scanimate effects, in part because today’s technology is too good and too clean! While it’s not a perfect analogy, these early forms of video animation offer a charm similar to that of the analog consoles, multitrack recorders, and vinyl cherished by many audiophiles and mixing engineers.

Check out this video at Vimeo if you want to know more about Scanimate and see it in action.

©2022 Oliver Peters