Mixing – Analog or Digital?

A perennial topic among YouTube audio production channels is whether analog is better than digital and whether it even makes an audible difference. While I’m a video editor and not a mixer, the music projects I’ve been involved with have all been recorded analog. Of course, over the past 20 years audio has increasingly been recorded and mixed purely in the digital realm, although analog gear is sometimes still used for character and color.

Produce Like A Pro is a YouTube channel that I follow. Music producer Warren Huart frequently features videos by Grammy-nominated producer/engineer/mixer Marc Daniel Nelson. Many of these videos include downloadable session tracks that enable you to remix the songs in order to learn from the process.

I found this particular video (linked) of Nelson’s intriguing, because it tackles the analog/digital debate head-on. It’s from an older session of his, in which he recorded and mixed the song “Traveling Light” by artist S. Joel Norman. As he explains in the video, most of the instrument tracks were “multed” – i.e. the mic signals were split and simultaneously recorded to 2″ analog multitrack tape, as well as directly into Pro Tools. Once the tape tracks were also ingested into Pro Tools, they could compare the two and pick whichever sounded best. According to his commentary, the instrument tracks recorded to tape were preferred over those recorded directly into Pro Tools for this song, which is in keeping with its soul/gospel/R&B vibe.

Doing my own remix

Since I like to mix some of these tunes (as a hobby and a way to learn), I downloaded the tracks, dropped them into Logic Pro, and compared. As I first listened to the soloed tracks, the digital versions sounded better to me – louder and more open. My original intent was to mix in Logic using mainly the built-in plug-ins. Unfortunately, as I started to build the mix, I had trouble getting the right sound, especially with the drums. Drums are often one of the hardest parts of a mix to get right, because they usually involve the largest number of mics and the most leakage. Getting a drum kit to sound right – and not like someone pounding on cardboard boxes – can take a mix engineer a lot of time.

I decided to change my approach and, wherever possible, switch over to the tracks recorded to tape. Instantly the mix started to fall into place. This is a classic case of what sounds great in solo not sounding as good in combination with the rest of the mix. The whole is greater than the sum of the parts. This is why veteran mixers always caution beginners not to fixate too much on making each individual track sound perfect on its own.

Along with the decision to change my approach, I also abandoned the idea of doing the whole mix with Logic’s native plug-ins. Don’t get me wrong – the tools included with Logic Pro are quite good. Their compressor and vintage EQ options are designed to emulate certain models of sought-after, classic analog gear; they just don’t use the licensed branding. I did still use them, but more sparingly.

Tracks -> Stacks -> Submix -> Output

My standard track layout for these mixes is to combine each instrument group into a summing track stack (a bus) – drums, guitar, bass, keys, vocals, etc. I usually route all of these instrument stems (buses) to a submix bus, which in turn is sent to the output. This allows me to mix levels and add plug-ins/processing at three stages – the track, the track stack, and the final submix bus. I don’t add any processing to the output bus. Only metering plug-ins are applied there.
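Conceptually, this layout is just nested summing with gain applied at each stage. Here is a minimal Python sketch of the idea (the track names, gain values, and function names are my own illustrations, not anything from Logic Pro):

```python
# Sketch of track -> track stack -> submix gain staging.
# All names and values are illustrative, not how a DAW works internally.

def apply_gain(samples, gain):
    """Scale a track's samples by a linear gain factor."""
    return [s * gain for s in samples]

def sum_bus(tracks):
    """Sum equal-length tracks sample by sample into one bus."""
    return [sum(frame) for frame in zip(*tracks)]

# Stage 1: individual tracks, each with its own level.
kick = apply_gain([0.5, 0.5], 0.8)
snare = apply_gain([0.2, 0.4], 1.0)

# Stage 2: the summing track stack (instrument group bus).
drums_stack = apply_gain(sum_bus([kick, snare]), 0.9)
bass_stack = apply_gain([0.3, 0.3], 1.0)

# Stage 3: the submix bus, feeding the output (metering only there).
submix = apply_gain(sum_bus([drums_stack, bass_stack]), 1.0)
```

The point of the structure is that one gain change at the stack or submix stage scales everything routed through it, which is what makes balancing groups against each other practical.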

For this project, I decided to use a modified approach. All instrument stems (minus any vocals) were routed to a separate instruments bus. Then the combination of instruments, vocals, and choir was routed to the submix bus. The advantage of this film/TV mixing style is that I could adjust all instruments as a group on a single channel and balance them as a unit against the vocals and choir.

I used to rely on hardware faders, but I don’t own a control surface. I also used to write live automation passes with the mouse, but I’ve moved away from doing that, too. Instead, I surgically add and adjust keyframes throughout the individual tracks, as well as the stems. Usually I will balance out the mix this way before ever adding plug-ins. Those are there to sweeten – not to do the heavy lifting.

Mixing with plug-ins and channel strips

My main effects tool for this mix was the Waves Scheps Omni Channel plug-in, which I applied to each track stack (instrument group). Andrew Scheps is a renowned mixer who partnered with Waves to develop the Omni Channel. The advantage of a channel strip is that you have multiple effects tools (filters, compression, EQ, etc.) at your fingertips, all within a single interface that mimics a channel strip on an analog console. No need to open multiple plug-in windows.

I also have both SSL and Focusrite channel strip plug-ins, but I prefer the Scheps version. Instead of designing just another SSL or Neve copy, Scheps was able to pick and choose the character of different products to create a channel strip that he would want to use himself. It sounds great, has a ton of presets, and unlike the name-brand emulations, the modules within the plug-in can be expanded and re-arranged. When applying it to instrument stacks, I can really develop the character that I want to hear.

No mix is ever finished after the first pass. When I compared my mix to the official mix that’s available on Spotify, I noticed some distinct differences. The artist’s version had some additional overdubbed instrumentation (strings and some embellishments) that I didn’t have in the download. They also chose to delay the start of the choir after the breakdown mid-song. These are all subjective choices based on taste. Of course, the release mix has also been professionally mastered, which can make a big difference.

What bothered me in my mix was the lack of a really present bottom end. This is often the difference in amateur versus pro mixes. A top-level mixer like Marc Daniel Nelson is certainly going to be way better at it than I am. In addition, he might be mixing in a hybrid fashion using Pro Tools along with key pieces of analog gear that really improve the sound and help to sculpt the sonic qualities of a song.

In an effort to increase and improve the bottom end, I decided to swap the kick drum tracks recorded to tape for the digital versions. I also dropped the bass amp track in favor of using only the bass DI track. Finally, I used Logic’s vintage graphic EQ to boost the low frequencies of the kick drum and bass. This particular plug-in emulates an API console EQ and is a good choice for the low end.
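Under the hood, that kind of low-end boost is a peaking (bell) EQ band. As a rough sketch of the underlying math – using the well-known audio-EQ-cookbook biquad formulas, not Logic’s actual implementation, and with frequency, Q, and gain values chosen purely for illustration:

```python
import cmath
import math

def peaking_eq_coeffs(f0, fs, q, gain_db):
    """Audio-EQ-cookbook peaking (bell) EQ biquad coefficients."""
    a_lin = 10 ** (gain_db / 40)  # the cookbook's intermediate "A" term
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def magnitude(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A +6 dB bell boost centered at 60 Hz to reinforce kick and bass.
b, a = peaking_eq_coeffs(60, 48000, q=1.0, gain_db=6.0)
```

At the center frequency the boost reaches the full +6 dB (a linear gain of about 2), while frequencies well above the bell are left essentially untouched.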

In the modern era, live drum sounds are often replaced by drum samples. The samples are triggered by the live drums, so you still get the right feel and timing, but a better drum sound. Often a mixer will combine a bit of both. I don’t know whether or not that was done in the actual mix. I’m certainly not implying that it was. Nevertheless, this is a fairly common modern practice to get really killer drum kit mixes.

Dealing with recording reality

When you start playing with raw tracks, it’s inevitable that you’re going to listen to each one in solo mode. You quickly see that even the best recordings have some wrinkles. For example, I don’t like it when a singer or a voice-over artist takes huge breaths between phrases. At first, I tried to mitigate these with De-Breath plug-ins – first Accusonus and later iZotope RX. Both introduced annoying artifacts that I could hear in the mix. So I settled on the old-school approach: simply adding keyframes and ducking the vocal track at each breath. In doing so – and paying very close attention to the vocal – I also realized that some sort of gate must have been used during the recording. You could hear the track drop to silence as a last word faded between phrases. Riding levels helped to smooth these out, too.
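That manual ducking is simple gain automation: interpolate between (time, gain) keyframes and dip the level across each breath. A minimal sketch of the idea, with made-up keyframe times and a roughly 12 dB dip (linear gain 0.25):

```python
def gain_at(keyframes, t):
    """Linearly interpolate gain from a sorted list of (time, gain) keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, g0), (t1, g1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]  # after the last keyframe, hold its value

# Duck the vocal across a breath at roughly 2.0-2.3 seconds:
# ramp down over 100 ms, hold at 0.25 (~-12 dB), ramp back up.
duck = [(1.9, 1.0), (2.0, 0.25), (2.3, 0.25), (2.4, 1.0)]
```

A DAW does exactly this per sample when you draw volume automation; the ramps on either side of the hold are what keep the dip from sounding like a gate.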

Working with the bass track, I also noticed some “fizz” in the 3 kHz range, which appeared to be coming from the bass pickups. Noise reduction/restoration plug-ins hurt the quality too much, so I used Logic’s parametric EQ to notch out this frequency.
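A notch like this is just a very narrow band-reject filter. As an illustration of the math (again the standard audio-EQ-cookbook biquad, not Logic’s actual parametric EQ), assuming a 48 kHz sample rate and an arbitrary Q:

```python
import cmath
import math

def notch_coeffs(f0, fs, q):
    """Audio-EQ-cookbook biquad notch filter coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return b, a

def magnitude(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A narrow notch at 3 kHz to remove the pickup "fizz".
b, a = notch_coeffs(3000, 48000, q=8)
```

The higher the Q, the narrower the cut, which is why a surgical notch can remove a resonance like this without audibly dulling the rest of the bass tone.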

Final thoughts

Circling back to the original analog versus digital debate, it simply comes down to preference and the genre of the music. If you grew up on the classic rock, country, or R&B/soul music of the 70s, 80s, and 90s, then you’ll probably prefer the sound of analog. After all, those recordings were usually made in the best studios, by mixers at the top of their game, using the finest analog gear of the day. Can you reproduce those exact sounds on your own computer with bog-standard plug-ins? Maybe, but it’s unlikely. On the other hand, if your musical tastes go off in a different direction – electronica, hip hop, etc. – then maybe digital will sound better to you. There is no right or wrong answer, since taste is personal.

The trick is to start with a great recording that gets you nearly there and then enhance it. To do that, learn the tools you already have. Every DAW comes with a great set of built-in plug-ins, and there are also many free and/or inexpensive third-party plug-ins on the market. The upside is that you can apply multiple instances of a fancy name-brand emulation on each and every track of your mix, which would never be possible with the real hardware due to cost. The downside is that there are so many options out there that a lot of users simply amass a collection of plug-ins they have no idea how to use. This induces option paralysis.

If you own a ton of plug-ins, it’s a good idea to wean yourself off of them. Focus on a select group and learn them well. Understand how they work and when to use them. As I’ve mentioned, I like Omni Channel, as well as the Logic plug-ins. If you are looking for a family of products, it’s hard to go wrong with any of the tools from iZotope, Sonible, and/or FabFilter. Music mixing is about taste and emotion. Be sure to preview your mixes for some trusted friends to get their feedback. After working for hours on a mix, you might be too close to it. Then refine as needed. In the end, if you are doing this for fun, then you have only yourself to please. Enjoy!

Click this link to listen to the remix on Vimeo.

©2023 Oliver Peters

A Film’s Manageable Length

There are three versions of every feature film: as written, as shot, and what comes out of post. The written and filmed story can often be too long compared to some arbitrary target length. It is the editor’s job to get the film down to such a “manageable length.” Since a film isn’t broadcast television and doesn’t have to fit into a time format, deciding on the right length is a vague concept. It’s like asking how long a book should be.

This idea derives in part from both the audience’s attention span for the story and how long their bladders hold out. Couple this with a theater’s schedule – longer movies meant fewer screenings and, therefore, lower box office revenue. In past decades, the accepted length was in the 90 to 100 minute range. Modern blockbusters can easily clock in at 120 to 150 minutes. However, if you are an indie filmmaker who didn’t produce a film starring Tom Cruise, then you’d better stick closely to that 100-minute mark.

The script

The rule of thumb for a script is around one minute per page: 100 pages = 100 minutes. For the most part that works, until you hit a script line such as “and the battle ensued,” which can easily consume several minutes of screen time. And so, it takes careful reading and interpretation of a script to get a valid ballpark length. That not only impacts the final length, but also the shooting schedule, production budget, and more.

Walter Murch has a technique to get a good idea for the true length of a film. His method is to take a day or two and act out each scene of the script by himself – reading the dialogue and going through characters’ actions. As he does this, he times each scene. He’ll do this two or three times until he has a good average timing for each scene and a total estimate for the film. Then, as the film is being shot, he’ll compare his time estimates with those coming from the script supervisor. If they are radically off, then he knows that something deviated a lot from the written script. And that will need some explanation.

Trimming the first assembly

The starting point for any editor is to assemble everything according to the script. At this point, the editor does not have discretion to drop lines, scenes, or re-arrange anything. The point is to present an initial cut to the director, which is faithful to the director’s intention during filming. Now you know how long the combined material really is. It’s quite common for the film to be long. In fact, that’s better than being too short or even very close to the target length.

If a film runs 10-30% over, then according to Murch, you can get there through “diet and exercise.” If it’s 50-100% or more over-length, then it’s time for true “surgery” to figuratively lose some body parts or organs.

A film that’s 10-30% long can usually be trimmed in various ways, without losing any key scenes. One way is to cut lines more tightly together, which can also help with pacing. A film often has “shoe leather” – getting a character from point A to point B. For example, a character arrives home in his car, walks up to the front door, opens it, and enters the home. Here, the editor can cut from the car arriving home directly to the interior of the home as the actor enters. Another technique is to enter scenes a bit later and exit them earlier. And finally, as you see the assembled film, you may realize that there are redundant dialogue lines or early plot reveals that can be cut. All of these comprise the “diet and exercise” solution.

Surgery

If the film is long and you can’t get to the desired length through “diet and exercise,” then more drastic cuts are needed. You might have to lose entire scenes or even characters. Sometimes this can focus the film by homing in on the real story. You often realize that some of these scenes weren’t needed after all and the film plays better without them. It’s at this stage that the director and editor may re-arrange some of the scene order. In doing so, you may also discover that certain plot elements become obvious and that scenes which might have foreshadowed or explained them aren’t needed after all. This process can take days, weeks, or months.

It can also be painful for many directors. Some are happy to jump in and make severe cuts right away. Others have to go through an iterative process of whittling the film down in numerous passes over the course of weeks.

One of the earliest films I cut was “The First of May.” It was a family film with a child lead actor coupled with an ensemble of older acting legends. Toss in a literal circus and you can see the complexity. The final cut ran long compared to what was assumed to be the “ideal” length for an indie family film.

As we were getting down to the wire for the initial pitches to potential distributors, the producing partners – who split the roles of writer and director – were at odds over the length. One argument was that “ET” was a family film and it was long. The counter-argument was that this wasn’t “ET” and if it was too long, they’d never get in the door in the first place.

We were at an impasse, so the co-producer/director and I did what we called the “slash and burn” edit. What could we cut out of the film to get to 90 minutes if we were told it absolutely had to be that length? Unfortunately, this exercise didn’t sit well with the co-producer/writer. In the end, after some tense conversations, they were able to agree on an edit that held together well and met the objectives.

This is a dilemma that every editor/director team faces and it will always be painful for some. After all, when the editor cuts out the scene with that great crane shot that took all day to pull off, the director can’t help but wince. However, it’s all in service of the story. Remember, the audience only sees the film that they are presented with and will usually never know what was cut out. If the pacing and emotion are right and the story holds up and entertains, then you’ve done your job as an editor – no matter what the film’s final length is.

©2023 Oliver Peters

Photo Phun 2022

Let’s polish off the year with another post of stills from my photography hobby. These stills were taken during this fall and Christmas season, plus a few oldies from other posts about Firstlight and Optics. As before, all of these images were captured with my iPhone SE using Firstlight, FiLMiC’s still photo companion to their FiLMiC Pro video capture app. Aside from the extra features, Firstlight enhances the phone with camera raw recording. This isn’t otherwise possible on the SE using the native camera application.

The workflow to “develop” these images started in Adobe Bridge, where it was easy to make the basic raw adjustments using the camera raw module. Bridge offers Lightroom-style control and quick processing for a folder of images. These images then went to Photoshop for cropping and resizing.

Boris FX Optics functions as both a Photoshop plug-in and a standalone application. It’s one of my favorite tools for creating looks with still photos. It goes far beyond the filters, adjustments, and effects included in applications like Photoshop alone. Nearly all image manipulation was done by roundtripping each file from Photoshop to Optics (via the plug-in) and then back. The last step in the workflow was to use the TinyJPG website to optimize the file sizes of these JPEG images. Click any image below to peruse a gallery of these stills.

Enjoy the images and the rest of the holiday season. I’ll be back after we flip the page to a new year. Look for a 4-part interview in January with legendary film editor, Walter Murch.

©2022 Oliver Peters

Analogue Wayback, Ep. 21

The Jacksonville Jazz Festival

Regular readers probably know by now that I have a soft spot in my heart for music production. I’ve worked on a number of films and TV shows that were tied to musical performances and it’s always been an enjoyable experience for me. One of those ongoing experiences was post for the Jacksonville Jazz Festival PBS specials in the 80s and 90s. Although I was living in Jacksonville at the start of this annual event, I really didn’t get involved with the shows until a few years after I’d left town.

The yearly Jacksonville Jazz Festival is a cultural highlight for the city of Jacksonville, Florida. Launched in 1980, the first two years were hosted in the neighboring fishing town of Mayport, home of a large US Navy base. It quickly shifted to downtown Jacksonville’s Metropolitan Park by the St. Johns River, which cuts through the heart of the city.

Recording jazz in the “backyard”

WJCT, the local PBS and NPR affiliate, had been covering the annual event for PBS since the second year of the festival. By 1983, the festival and the station were tightly intertwined. In that year, the park was renovated with a new WJCT facility adjacent to it. Having the building next to the park provided a unique opportunity to install direct audio and video cable runs between the station facility and the covered pavilion performance stage at the park. To inaugurate both, WJCT covered the festival with an eight-hour live broadcast.

From 1981 until 1994 (with the exception of 1983), WJCT produced each year’s festival as a one-hour TV special for PBS distribution. This was a fall event, which was posted over the subsequent months and aired early the next year. My involvement started with the 1984 show, helping to post eight of the eleven TV specials during those years. I worked closely with the station’s VP of Programming, Richard V. Brown, and Creative Services Director, Bill Weather.

Production and post arrangements varied from year to year. Bill Weather was the show’s producer/director for the live event recordings most of those eleven years. (Other directors included Dan Kossoff, David Atwood, and Patrick Kelly.) Weather and I traded off working as the creative editor, so in some years I was the online editor and in others, both editor and online editor. During that decade of shows, post was either at Century III (where I worked) or at our friendly crosstown rival, The Post Group at The Disney-MGM Studios.

Turning the festival into a TV show

Richard V. Brown was the show’s executive producer and also handled the artist arrangements for the show and the festival. Performers received fees for both the live event appearance and the TV show (if they were featured in it), so budgets often dictated who was presented in the telecast. A legendary, but expensive performer like Ray Charles or Miles Davis might headline the festival, yet not appear in the TV special. However, this wasn’t always dictated by money, since top names already brought with them a level of overexposure in the media. And so, the featured artists each year covered a wide spectrum of traditional and contemporary jazz styles, often introducing lesser known artists to a wider TV audience. New Orleans, fusion, Latin, blues, and even some rock performers were included in this eclectic mix.

The artist line-up for each special was decided before the event. Most shows highlighted four acts of about 10 to 15 minutes each. The songs to be included from each artist were selected from the live set, which tended to run for about an hour. The first editorial step (handled by Brown and Weather) was to select which songs to use from each performer, as well as any internal song edits needed to ensure that the final show length fit PBS guidelines.

Recording the live experience

Production and post grew in sophistication over time. Once the WJCT building was completely ready, multiple cameras could be controlled and switched from the regular production control room. No mobile unit required. This usually included up to seven cameras for the event. A line cut was recorded to 1″ videotape, along with several of the cameras as extra iso recordings to be used in post.

The station’s own production equipment was augmented with other gear, including stage lighting, a camera dolly, and a camera boom. For such an important local event, the station crew was also expanded with local production professionals – including a few top directors and cinematographers working the stage and running cameras – and volunteers working tirelessly to truly make each year memorable.

When it came to sound, the new WJCT facility also included its own 24-track audio recorder. Stage mic signals could be split in order to simultaneously feed the “front of house” mixing board and the stage monitors, and run back into the building to the multitrack recorder. These 2″ analog audio masters also recorded “time of day” timecode and thus could be synced with the video line cut and iso recordings in post.

Editing is more than just editing

Although my role was post, I was able to attend several of the live festivals, even if I was only the online editor. I sat in the control room and functioned a bit like an assistant director, noting potential editorial issues. But I also made sure that I had coverage of all the members of the band. One performer might take a solo, but I also needed options for other camera angles. As with most live jazz and rock performances, the band members might trade off solos, so it was important to keep an eye on where the focus of the performance could switch to next. Since the director had his hands full just focusing on the real-time action, I would often lean over and ask for a little different coverage from one of the other cameras not currently punched up.

None of the crew was intimately familiar with the live performances of these acts, so it was all about having a sixth sense for the music. However, there was one surprising exception. That was the year that Paul Shaffer and the World’s Most Dangerous Band headlined. As you probably know, this was the house band for Late Night with David Letterman, but they also had a limited live touring schedule.

For their set, Shaffer sent in a coordinator with a printout of their entire set rundown. Shaffer and the band had choreographed the whole set, so he was able to give the director a “heads up” for each part of the performance. In addition, Shaffer is the consummate band leader. His set included a jam with his band and several other jazz artists from earlier in the day. Each had a cameo solo. This sort of ad hoc, live jam can often become a big mess; but this one went off as if they’d rehearsed it. Shaffer literally put this together in quick conversations with the other artists during the course of that day.

3/4″ and a legal pad of notes

Once everything was in the can, post could start – initially with content selection. Then camera cuts could be cleaned up using the iso angles. This “offline edit” was largely done by reviewing the 3/4″ U-matic tapes, which had been recorded for the line cut and three of the iso angles using a quad-split generator with a timecode overlay. This gave the editor a multicam view, but from a single tape source. Unfortunately, listing camera cut changes to specific angles required a lot of meticulous, handwritten timecode notes. (In the early days, four monitors and a timecode generator display were stacked as closely as possible, with an independent camera recording them to 3/4″ tape.)

Based on these notes, the show master could then be edited in a linear, online session using the 1″ originals and mastering to 1″ or D2. If the line cut of the live recording was solid, then any given song might only have new edits for about 10-25% of the song. Edits might mean a cut to a different angle or maybe the same angle, but just a bit sooner. In addition to the live camera angles, we also had extra ENG footage, including audience shots, party boats anchored in the river nearby, and even some helicopter aerials of the wider event grounds, the pavilion stage, and the audience.

In a typical year, I would finish the camera clean-up edits and trims unsupervised, then Brown and Weather would arrive for the supervised part of the online edit. Here we would build the visual style for the show open and transitions between songs and bands. Plus final credits. This was at the dawn of digital post, so most show opens involved a lot of layering.

It’s all about the mix

The Jacksonville Jazz Festival PBS specials were, of course, about the music. Getting the best possible mix was a very important consideration. In the earliest years, the live recording and remix methodology was evolving, but generally run under the auspices of the WJCT audio engineers. This shifted to our Century III staff audio engineer, Jerry Studenka. He handled the mix for the shows for several years in the late 80s.

To the best of my recollection, the 24-track tapes were recorded at 15 ips with Dolby SR noise reduction. This enabled an hour-long set to be recorded on a single reel of tape. Audio mixes/remixes were recorded onto two tracks of that same 24-track tape. In later years, working out of the Century III facility on the lot at Universal, we used Sony 24-track digital audio recorders. The staff would first bounce the analog master reels to digital tape ahead of the audio mix session. Then the audio engineer would mix from one digital recorder to the other. Century III and The Post Group were both equipped with Solid State Logic consoles in their main audio rooms, which provided a comfort factor for any experienced music mixer.

The performances were recorded live and mixed on-the-fly during each set as the first pass. Then in the post session, they were polished or remixed in part with punch-ins or even fully remixed depending on what everyone felt gave the best result. But the mixes were all based on the actual live recordings – no overdubs added later.

Every year, each performer was afforded the opportunity to bring in their own recording engineer or representative for the show’s mix. Only two artists ever took Brown up on that – Paul Shaffer and Spyro Gyra. Larry Swist came down for Spyro Gyra, who appeared at numerous festivals and was featured in several of the specials. Swist, who later became a well-respected studio designer, was the recording engineer for the band’s albums. Shaffer sent Will Lee (the band’s vocalist/bassist) as his rep to the mixing session. Spyro Gyra and Shaffer’s band happened to be on the same show that year. By the time Lee arrived, Studenka and Swist already had a good mix, so Lee was able to quickly sign off.

Swist had an easy-going, “no drama” personality. Everyone had such a good experience working with him that for each year thereafter, Swist was brought in for all of the sessions. He coordinated both the live recording to multitrack during the event and then remixed all the music for the show during post.

These remixes weren’t as straightforward as they might seem. All sound post was handled on tape, not with any sort of DAW. It was a linear process, just like the picture edits. First of all, there were internal edits within the songs. Therefore, all outboard processing and console and fader settings had to match at the edit point, so that the edit was undetectable. Second, the transitions between songs or from one artist to the next had to be bridged. This was generally done by overlapping additional crowd applause across the change to hide the performance edit, which again required audio matching.

The Jacksonville Jazz Festival of 1994 (aired 1995) was the last of the PBS specials, due in part to the cost of production and TV rights. Eventually WJCT turned over production of the festival itself to the City of Jacksonville. The results for that time speak for themselves. The collective effort produced not only great festival experiences, but also memorable television. Unfortunately, some of the production folks involved, like Richard V. Brown, Larry Swist, and Jerry Studenka are no longer with us. And likewise, neither are some of the featured performers. But together, they left a worthwhile legacy that is still carried on by the City of Jacksonville to this day. 

©2022 Oliver Peters

Analogue Wayback, Ep. 19

Garage bands before the boy bands

As an editor, I’ve enjoyed the many music-oriented video productions I’ve worked on. In fact, one of my first feature films was a concert film highlighting many top reggae artists. Along the way, I’ve cut numerous jazz concerts for PBS, along with various videos for folks like Jimmy Buffett and the Bob Marley Foundation.

We often think about the projects that “got away” or never happened. For me, one of those was a documentary about the “garage band” acts of central Florida during the 1960s. These were popular local and regional acts with an eye towards stardom, but who never became household names like Elvis or The Beatles. Central Florida was a hotbed for such acts back then, in the same way that San Francisco, Memphis, or Seattle have been during key moments in rock ’n roll history.

For much of the early rock ’n roll era, music was a vertically integrated business. Artist management, booking, recording studios, and marketing/promotion/distribution were all handled by the same company. The money was made more in booking performances than in record sales.

Records – especially 45 RPM “singles” – were produced in order to promote the band. Singles were sent free to radio stations in hopes that they would be placed into regular rotation. That airplay would familiarize listeners/fans with the bands and their music. While selling records was a goal, the bigger aim was name recognition, so that when a band was booked for a local event (dance, concert, youth club appearance, tour date), the local fans would buy tickets and show up. Naturally, some artists broke out in a big way, which meant even more money in record sales, as well as touring.

Record labels, recording studios, and talent booking services – whether the same company or separate entities – enjoyed a very symbiotic relationship. Much of this is chronicled in a mini-doc I cut for the Memphis Rock ‘n Soul Museum. It highlighted studios like Sun, Stax, and Hi and their role in the birth of rock ‘n roll and soul music.

In the central Florida scene, one such company was Bee Jay, started by musician/entrepreneur Eric Schabacker. Bee Jay began as a booking service and eventually grew to include a highly regarded recording studio responsible for many local acts. Many artists passed through those studio doors, but one of the biggest acts to record there was probably Molly Hatchet. I got to know Schabacker when the post facility I was with acquired the Bee Jay Studios facility.

Years later, Schabacker approached me with an interesting project – a documentary about the local garage bands of the 60s. In addition to a series of interviews with living band members, post for the documentary would involve the restoration of several proto-music videos. Bee Jay had videotaped promotional videos for 13 of the bands back in the day. While Schabacker handled the recording of the interviews, I tackled the music videos.

The original videos were recorded using a rudimentary black-and-white production system onto half-inch open reel videotape. Unfortunately, the video tubes in the cameras back then didn’t always handle bright outdoor light well, and the video switcher did not feature clean vertical interval switching. The result was a series of recordings in which video levels fluctuated and camera cuts often glitched. There were sections in the recordings where the tape machine lost servo lock during recording. The audio was not recorded live. Instead, the bands lip-synced to playback of their song recordings, which was captured in sync with the video. These old videos were transferred to DV25 QuickTime files, which formed my starting point.

Step one was to secure clean audio. The bands’ tunes had been recorded and mixed at Bee Jay Studios at the time into a 13-song LP that was used for promotion to book those bands. However, at this point over three decades later, the master recordings were no longer available. But Schabacker did have pristine vinyl LPs from those sessions. These were turned over to local audio legend and renowned mastering engineer, Bob Katz. In turn, he took those versions and created remastered files for my use.

Now that I had good sound, my task was to take the video – warts and all – and rebuild it in sync with the song tracks, clean up the video, get rid of any damage and glitches, and in general end up with a usable final video for each song. Final Cut Pro (legacy) was the tool of choice at that time. Much of the “restoration” involved slightly slowing or speeding up shots to resync the files – shot by shot. I also had to repeat and slomo some shots for fit-and-fill, since frames would be lost as glitchy camera cuts and other disturbances were removed. In the end, I rebuilt all 13 into presentable form.

While that was a labor of love, the downside was that the documentary never came to be. All of these bands had recorded great-sounding covers (such as “Solitary Man”), but no originals. Unfortunately, it would have been a nightmare and quite costly to clear the music rights for these clips if used in the documentary. A shame, but that’s life in the filmmaking world.

None of these bands made it big, but in subsequent years, bands of another era like *NSYNC and the Backstreet Boys did. And they ushered in a new boy band phenomenon, which carries on to this day in the form of K-pop, among other styles.

©2022 Oliver Peters