We often think about the projects that “got away” or never happened. For me, one of those was a documentary about the “garage band” acts of central Florida during the 1960s. These were popular local and regional acts with an eye towards stardom, but who never became household names like Elvis or The Beatles. Central Florida was a hotbed for such acts back then, in the same way that San Francisco, Memphis, or Seattle have been during key moments in rock ‘n roll history.
For much of the early rock ‘n roll era, music was a vertically-integrated business. Artist management, booking, recording studios, and marketing/promotion/distribution were all handled by the same company. The money was made in booking performances more than in record sales.
Records, especially 45 RPM “singles,” were produced in order to promote the band. Singles were sent for free to radio stations in hopes that they would be placed into regular rotation by the station. That airplay would familiarize listeners/fans with the bands and their music. While record purchases were a goal, the bigger aim was name recognition, so that when a band was booked for a local event (dance, concert, youth club appearance, tour date) the local fans would buy tickets and show up. Naturally some artists broke out in a big way, which meant even more money in record sales, as well as touring.
Record labels, recording studios, and talent booking services – whether the same company or separate entities – enjoyed a very symbiotic relationship. Much of this is chronicled in a mini-doc I cut for the Memphis Rock ‘n Soul Museum. It highlighted studios like Sun, Stax, and Hi and their role in the birth of rock ‘n roll and soul music.
Years later Schabacker approached me with an interesting project – a documentary about the local garage bands of the 60s. Together with a series of interviews with living band members, post for the documentary would also involve the restoration of several proto-music videos. Bee Jay had videotaped promotional videos for 13 of the bands back in the day. While Schabacker handled the recording of the interviews, I tackled the music videos.
The original videos were recorded using a rudimentary black-and-white production system. These were recorded onto half-inch open reel videotape. Unfortunately, the video tubes in the cameras back then didn’t always handle bright outdoor light well and the video switcher did not feature clean vertical interval switching. The result was a series of recordings in which video levels fluctuated and camera cuts often glitched. There were sections in the recordings where the tape machine lost servo lock during recording. The audio was not recorded live. Instead, the bands lip-synced to playback of their song recordings, which was also recorded in sync with the video. These old videos were transferred to DV25 QuickTime files, which formed my starting point.
Step one was to get clean audio. The bands’ tunes had been recorded and mixed at Bee Jay Studios at the time into a 13-song LP that was used for promotion to book those bands. However, at this point over three decades later, the master recordings were no longer available. But Schabacker did have pristine vinyl LPs from those sessions. These were turned over to local audio legend and renowned mastering engineer, Bob Katz. In turn, he took those versions and created remastered files for my use.
Now that I had good sound, my task was to take the video – warts and all – and rebuild it in sync with the song tracks, clean up the video, get rid of any damage and glitches, and in general end up with a usable final video for each song. Final Cut Pro (legacy) was the tool of choice at that time. Much of the “restoration” involved the slight slowing or speeding up of shots to resync the files – shot by shot. I also had to repeat and slo-mo some shots for fit-and-fill, since frames would be lost as glitchy camera cuts and other disturbances were removed. In the end, I rebuilt all 13 into a presentable form.
While that was a labor of love, the downside was that the documentary never came to be. All of these bands had recorded great-sounding covers (such as Solitary Man), but no originals. Unfortunately, it would have been a nightmare and quite costly to clear the music rights for these clips if used in the documentary. A shame, but that’s life in the filmmaking world.
None of these bands made it big, but in subsequent years, bands of another era like *NSYNC and the Backstreet Boys did. And they ushered in a new boy band phenomenon, which carries on to this day in the form of K-pop, among other styles.
DaVinci Resolve has been admired by users of other editing applications because of the pace of Blackmagic Design’s development team. Many have considered a switch to Resolve. Since its announcement earlier this year, DaVinci Resolve fans and pros alike had been eagerly awaiting Resolve 18 to get out of public beta. It was recently released and I’ve been using it ever since for a range of color correction jobs.
The interface character and toolset tied to each of these tasks are preserved using a page-style, modal user interface. In effect, you have separate tools, tied to a common media engine, which operate under the umbrella of a single application. Some pages are fluidly interoperable (like Edit and Color) and others aren’t. For example, color nodes applied to clips in the Color page do not appear as nodes within the Fusion page. Color adjustments made to clips in a Fusion composition need to be done with Fusion’s separate color tools.
Blackmagic has expanded Resolve’s editing features – so much so that it’s a viable competitor to Avid Media Composer, Apple Final Cut Pro, and/or Adobe Premiere Pro. Resolve sports two editing modes: the Cut page (a Final Cut Pro-style interface for fast assembly editing) and the Edit page (a traditional track-based interface). The best way to work in Resolve is to adhere to its sequential, “left to right” workflow – just like the pages/modes are oriented. Start by ingesting in the Media page and then work your way through the tasks/pages until it’s time to export using the Deliver page.
A new collaboration model through Blackmagic Cloud
The biggest news is that DaVinci Resolve 18 was redesigned for multi-user collaboration. Resolve projects are usually stored in a database on your local computer or a local drive, rather than as separate binary project files. Sharing projects in a multi-user environment requires a separate database server, which isn’t designed for remote editing. To simplify this and address remote work, Blackmagic Design established and hosts the new Blackmagic Cloud service.
The Library owner can share a project with any other registered Blackmagic Cloud user. This collaboration model is similar to working in Media Composer and is based on bin locking. The first user to open a bin has read/write permission to that bin and any timelines contained in it. Other users opening the same timeline operate with read-only permission. Changes made by the user with write permission can then be updated by the read-only users on their systems.
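Conceptually, this bin-locking model behaves like a simple lock-and-version scheme: the first opener holds the write lock, later openers read, and reads are refreshed after the writer saves. Here’s an illustrative Python sketch of that behavior (the class and method names are my own, not any Resolve API):

```python
class Bin:
    """Toy model of Resolve's bin-locking collaboration.

    The first user to open a bin gets read/write access; everyone else
    who opens the same bin is read-only until the writer's saved
    changes are pushed out to them.
    """

    def __init__(self, name):
        self.name = name
        self.writer = None   # user currently holding the write lock
        self.version = 0     # bumped on every saved change

    def open(self, user):
        if self.writer is None:
            self.writer = user
            return "read-write"
        return "read-only"

    def save_change(self, user):
        if user != self.writer:
            raise PermissionError(f"{user} is read-only on bin {self.name!r}")
        self.version += 1    # read-only users now see an updated version

    def close(self, user):
        if user == self.writer:
            self.writer = None   # frees the lock for the next opener


b = Bin("Act 1")
assert b.open("editor_a") == "read-write"   # first opener can write
assert b.open("editor_b") == "read-only"    # later openers are read-only
b.save_change("editor_a")                    # bumps the shared version
```

The real system is more granular (locking applies per bin and covers the timelines inside it), but the first-come write lock is the core idea.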
Blackmagic Design only hosts the Library/project files and not any media, which stays local for each collaborator. The media syncing workflow is addressed through features of the Cloud Store storage products (see my review). Both collaboration via Blackmagic Cloud and the storage products are independent of each other. You can use either without needing the other. However, since Blackmagic Cloud is hosted “in the cloud” you do need an internet connection.
There is some latency between the time a change is made by one user and when it’s updated on the other users’ machines. In my tests, the collaborator needs to relink to the local media each time a shared project is accessed again. You can also move a project from Cloud back to your local computer as needed.
What else is new in DaVinci Resolve 18?
Aside from the new collaboration tools, DaVinci Resolve 18 also features a range of enhancements. Resolve 17 already introduced quite a few new features, which have been expanded upon in Resolve 18. The first of these is a new, simplified proxy workflow using the “prefer proxies” model. Native media handling has always been a strength of Resolve, especially with ProRes or Blackmagic RAW (BRAW) files. (Sorry, no support for Apple ProRes RAW.) But file sizes, codecs, and your hardware limitations can impede efficient editing. Therefore, working with proxy files may be the better option on some projects. When you are ready to deliver, then switch back to the camera originals for the final output.
The website installer for DaVinci Resolve Studio 18 includes the new Blackmagic Proxy Generator application. This automatically creates H.264, H.265, or ProRes proxy files using a watch folder. However, you can also create proxies internally from Resolve without using this app, or externally using Apple Compressor or Adobe Media Encoder. The trick is that proxy files must have matching names, lengths, timecode values, and audio channel configurations.
Proxy files should be rendered into a subfolder called “Proxy” located within each folder of original camera files. (Resolve and/or the Proxy Generator application do this automatically.) Then Resolve’s intelligent media management automatically detects and attaches the proxies to the original file. This makes linking easy and allows you to automatically toggle between the proxy and the original files.
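If you generate proxies externally, it’s worth sanity-checking the folder layout before relinking. Here’s a minimal sketch that verifies the “Proxy” subfolder convention described above by pairing files on base name; it checks only names, since matching duration, timecode, and audio channels would require a media probe tool (the function name and folder names in the demo are my own illustration):

```python
import tempfile
from pathlib import Path


def find_unmatched(camera_dir: Path) -> list[str]:
    """Return base names of camera originals with no same-named proxy.

    Follows the convention of a "Proxy" subfolder inside each folder
    of camera originals. Only name pairing is checked here.
    """
    proxy_dir = camera_dir / "Proxy"
    proxies = {p.stem for p in proxy_dir.glob("*")} if proxy_dir.is_dir() else set()
    missing = [clip.stem for clip in camera_dir.iterdir()
               if clip.is_file() and clip.stem not in proxies]
    return sorted(missing)


# Demo with a hypothetical camera card folder:
with tempfile.TemporaryDirectory() as tmp:
    cam = Path(tmp) / "A001"
    (cam / "Proxy").mkdir(parents=True)
    (cam / "clip001.braw").touch()
    (cam / "clip002.braw").touch()
    (cam / "Proxy" / "clip001.mov").touch()   # clip002 has no proxy yet
    print(find_unmatched(cam))                # → ['clip002']
```

A script like this is no substitute for Resolve’s own relinking logic, but it catches renamed or missing proxies before they cause mystery offline clips.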
Regarding other enhancements, the Color page didn’t see any huge new features, since tools like the Color Warper and HDR wheels were added in Resolve 17. However, there were some new items, including object replacement and enhanced tracking. But I didn’t find the results to be as good as Adobe’s Content Aware Fill techniques.
Two additions worth mentioning are the Automatic Depth Map and Resolve FX Beauty effect. The beauty effect is a subtle skin smoothing tool. It’s nice, but quite frankly, too subtle. My preference in this type of tool would be Digital Anarchy’s Beauty Box or Boris FX’s Beauty Studio. However, Resolve does include other similar tools, like Face Refinement where you have more control.
Automatic Depth Map is more of a marquee feature. This is a pretty sophisticated process – analyzing depth separation in a moving image without the benefit of any lens metadata. It shows up as a Resolve FX in the Edit, Fusion, and the Color pages. Don’t use it in the Edit page, because you can’t do anything with it there. In the Color page, rather than apply it to a node, drag the effect into the node tree, where it creates its own node.
After a brief clip analysis, the tool generates a mask, which you can use as a qualifier to isolate the foreground and background. Bear in mind this is for mild grading differences. Even though you might think of this for blurring a background, don’t do it! The mask is relatively broad. If you try to tighten the mask and use it to blur a background, you’ll get a result that looks like a Zoom call background. Instead, use it to subtly lighten or darken the foreground versus the background within a shot. Remember, the shot is moving, which can often lead to some chatter on the edges of the mask as the clip plays. So you’ll have to play with it to get the best result. Playback performance at Better Quality was poor on a 2017 iMac Pro. Use Faster while working and then switch to Better when you are ready to export or render.
Before the acquisition by Blackmagic Design, eyeon offered several integrations of Fusion with NLEs like DPS Velocity and Avid Media Composer. The approach within Resolve is very similar to those – send a clip to Fusion for the effect, work with it inside the Fusion UI, and then it’s updated on the timeline as a Fusion clip. This is not unlike the Dynamic Link connection between Premiere Pro and After Effects, except that it all happens inside the same piece of software.
If you are used to working with a layer-style graphics application, like After Effects, Motion, or maybe HitFilm, then Fusion is going to feel foreign. It is a high-end visual effects tool, but might feel cumbersome to some when doing standard motion graphics. Yet for visual effects, the node-based approach is actually superior. There are plenty of good tutorials for Fusion, for any user ready to learn more about its visual effects power.
There are a few things to be aware of with Fusion. The image in the Fusion viewer and the output through UltraStudio to a monitor will be dark, as compared with that same image on the Edit page. Apparently this has been an ongoing user complaint and I have yet to find a color management setting that definitively solves this issue. There is also no way to “decompose” or “break apart” a Fusion composition on the timeline. You can reset the clip to a Fusion default status, but you cannot revert the timeline clip back to that camera file without it being part of a Fusion composition. Therefore, the best workaround is to copy the clip to a higher track before sending it to Fusion. That way you have both the Fusion composition and the original clip on the timeline.
In addition to visual effects, Fusion templates are also used for animated titles. These can be dropped onto a track in the Edit page and then modified in the inspector or the Fusion page. These Fusion titles function a lot like Apple’s Motion templates being used in Final Cut Pro.
Taken as a whole, this makes the Fairlight ecosystem a very scalable product line in its own right that can appeal to audio post engineers and other audio production professionals. In other words, you can use the Fairlight portion of Resolve without ever using any of the video-centric pages. In that way, Resolve with Fairlight competes with Adobe Audition, Avid Pro Tools, Apple Logic Pro, and others. In fact, Fairlight is still the only professional DAW that’s actually integrated into an NLE.
On the other hand, if you are into music creation, then Fairlight lacks software instruments and music-specific plug-ins, like amp emulation. The MIDI support is focused on sound design. A musician would likely gravitate towards Logic Pro, Cubase, Luna, or Ableton Live. Nevertheless, Fairlight is still quite capable as a DAW for music mixes. Each track/fader integrates a channel strip for effects, plus built-in EQ and compression. Resolve comes with its own complement of Fairlight FX plug-ins, plus it supports third-party AU/VST plug-ins.
I decided to test that concept using some of the mixes from the myLEWITT music sessions. I stacked LEWITT’s multitrack recordings onto a blank Fairlight timeline, which automatically created new mono or stereo tracks, based on the file. I was able to add new busses (stem or submaster channels) for each instrument group and then route those busses to the output. It was easy to add effects and control levels by clip, by track, or by submaster.
Fairlight might not be my first choice if I were a music mixer, but I could easily produce a good mix with it. The result is a transparent, modern sound. If you prefer vintage, analog-style coloration, then you’ll need to add third-party plug-ins for that. Whether or not Fairlight fits the bill for music will depend on your taste as a mixer.
Once again, Blackmagic Design has added more power in the DaVinci Resolve 18 release. Going back to the start of this post – is this the version that will finally cause a paradigm shift away from the leading editing applications? In my opinion, that’s doubtful. As good as it is, the core editing model is probably not compelling enough to coax the majority of loyal users away from their favorite software. However, that doesn’t mean those same users won’t tap into some of Resolve’s tools for a variety of tasks.
There will undoubtedly be people who shift away from Premiere Pro or Final Cut Pro and over to DaVinci Resolve. Maybe it’s for Resolve’s many features. Maybe they’re done with subscriptions. Maybe they no longer feel that Apple is serious. Whatever the reason, Resolve is a highly capable editing application. In fact, during the first quarter of this year I graded and finished a feature film that had been cut entirely in Resolve 17.
Software choices can be highly personal and intertwined with workflow, muscle memory, and other factors. Making a change often takes a big push. I suspect that many Resolve editors are new to editing, often because they got a copy when they bought one of the Blackmagic Design cameras. Resolve just happens to be the best application for editing BRAW files and that combo can attract new users.
DaVinci Resolve 18 is a versatile, yet very complex application. Even experienced users don’t tap into the bulk of what it offers. My advice to any new user is to start with a simple project. Begin in the Cut or Edit page, get comfortable, and ignore everything else. Then learn more over time as you expand the projects you work on and begin to master more of the toolkit. If you really want to dive into DaVinci Resolve, then check out the many free and paid tutorials from Blackmagic Design, Mixing Light, and Ripple Training. Resolve is one application where any user, regardless of experience, will benefit from training, even if it’s only a refresher.
I’ve embedded a lot of links throughout this post, so I hope you’ll take the time to check them out. They cover some of the enhancements that were introduced in earlier versions, the history of DaVinci Resolve, and links to the new features of DaVinci Resolve 18. Enjoy!
The mid-80s found me working for a year at a facility that operated two radio stations and owned two satellite transponders. I managed the video production side of the company. Satellite space was hard to get at the time, so they operated their own network on one of them and sublet the other to a different company and network.
At that same time MTV had come to the end of its first contract with cable companies and many wanted other options. Creating a new music video channel alternative was something of interest for us. Unfortunately, our other transponder client was still leasing space within that short window when cable companies could have chosen an alternative option rather than renewing with MTV. Thus, a missed opportunity, because it was shortly thereafter that our client moved on anyway, leaving us with an unfilled satellite transponder. In spite of the unfortunate timing, our company’s owner still decided to launch a new and competing music video network instead of seeking a new client. That new channel was called Odyssey.
As head of production, I was part of the team tasked with figuring out the hardware and general operation of this network. This was the era of the early professional videocassette formats, so we settled on the first generation of M-format decks from Panasonic.
The M-format was a professional videocassette format developed by Panasonic and RCA. It was marketed under the Recam name by Panasonic, RCA, and Ampex. Much like VHS versus Betamax, it was Panasonic’s M-format versus Sony’s Betacam. M-format decks recorded onto standard VHS videocassettes that ran at a faster speed. They used component analog recording instead of composite. This first generation of the M-format was later replaced by the MII series, which had a slightly better professional run, but ultimately still failed in the marketplace.
It was important for us to use a premium brand of VHS tape in these decks, since music videos would play in a high rotation, putting wear and tear on the tape. The Odyssey master control featured seven decks, plus a computer-controlled master control system designed to sequence the playlist of videos, commercials, promos, etc. The computer system was developed by Larry Seehorn, a Silicon Valley engineer who was one of the early developers of computer-assisted, linear editing systems.
We launched at the end of the year, right at the start of the holiday week between Christmas and New Year. Everything was off and running… until the playlist computer system crashed. We quickly found out that it would only support 1500 events and then stop. This was something that the manufacturer failed to disclose when we purchased the system. You had to reload a new list and start over, losing a lot of time in between. That would have been fine in a normal TV station operation, since there were long program segments between commercial breaks. For us, it was insufficient, because we only had the length of a music video in which to reload and reboot a new playlist.
Fortunately, as a back-up in case of some sort of system failure, we had prepared a number of hour-long 1″ videotapes with music video blocks in advance. Running these allowed us to temporarily continue operation while we figured out plan B.
Ultimately the solution we settled on was to chuck the master control computer and replace it with a Grass Valley master control switcher. This was an audio-follows-video device, meaning that switching sources simultaneously switched audio and video. If you used the fader bar to dissolve between sources, it would also mix between the audio sources. This now became a human-controlled operation with the master control operator loading and cueing tapes, switching sources, and so on. Although manual, it proved to be superior to a playlist-driven automated system.
The operators effectively became radio station disc jockeys and those same guidelines applied. Our radio station program director selected music, set up a manual playlist, a “clock” for song genre and commercial rotation, and so on. Music videos sent to us by record labels would be copied to the M-format VHS tapes with a countdown and any added graphics, like music video song credits. Quite frankly, I have to say that our song selection was more diverse than the original MTV’s. In addition, having human operators allowed us to adjust timing on-the-fly in ways that an automated list couldn’t.
As ambitious as this project was, it had numerous flaws. The company was unable to get any cable provider to commit to a full channel as they had with MTV. Consequently programming was offered to any broadcast station or cable company in any market on a first-come-first-served basis, but without a time requirement. If a small, independent TV station in a large market decided to contract for only a few hours on the weekend, then they locked up that entire market.
The other factor that worked against Odyssey was that Turner Broadcasting had already tried to launch their music channel with a LOT more money. Turner’s effort crashed and burned in a month. Needless to say, our little operation was viewed with much skepticism. Many would-be customers and advertisers decided to hold off at least a year to see if we’d still be in business at that time. Of course, that didn’t help our bottom line.
In spite of these issues, Odyssey hung on for ten months before the owner finally tossed in the towel. Even though it didn’t work out and I had moved on anyway, it was still a very fun experience that took me back to when I started out in radio.
The concept of the digital audio workstation stems from a near-century-old combination of a recording system and a mixing desk. Nearly every modern DAW is still built around that methodology. Gabriel Cowan, CEO and co-founder of Audio Design Desk, sought to modernize the approach with a DAW focused on sound design, using the power of metadata for workflow efficiency. The application was launched a couple of years ago and has since won several trade show awards for innovation, including a Product of the Year Award for audio just last week at the 2022 NAB Show.
Every video editor knows that a kicking sound track can often elevate an otherwise lackluster video. Audio Design Desk is intended to do just that, regardless of whether you are an editor, musician, or sound designer. The application is currently a Mac-only product that supports both Intel and M1 Macs natively. It breaks down into sound design (synthetic sounds, like whooshes, drones, hits, and risers), foley (real world sound effects), ambiences, and music.
As I previously mentioned, I don’t mix songs for a living, so why add this to my workload? Well, partially for fun, but also to learn. Consider it a video editor’s version of the “busman’s holiday.”
Each of the six songs poses different challenges. LEWITT’s marketing objective is to sell microphones and these recordings showcase those products. As such, the bands have been recorded in a live or semi-live studio environment. This means mics placed in front of instrument cabs, as well as mics placed all around the drum kit. Depending on the proximity of the band members to each other and how much acoustic baffling was used, some instrument tracks are more isolated than others. Guitars, bass, and keys might have additional direct input (DI) tracks for some songs, as well as additional overdubs for second and third parts. The track count ranged from 12 to 28 tracks. As is typical, drums had more tracks to deal with than any other instrument.
The performing styles vary widely, which also presents some engineering decisions that have to be made in how you mix. Move Like A Ghost was deliciously nasty by design. Do you try to clean that up or just embrace it? 25 Reasons is closer to a live stage performance with tons of leakage between tracks.
The video component
LEWITT, in conjunction with the performers, produced videos for five of the six songs. The mixes are specifically for LEWITT, not the official release versions of any of these songs. Therefore, the LEWITT videos were designed to accompany and promote the contest tracks. This makes it easy to sync your mix to each video, which is what I did for my compilation. In the case of Move Like a Ghost, LEWITT did not produce a full video with Saint Agnes. So, I pulled a stylized live music video for the band from YouTube for my version of the mix. I assembled this reel in Final Cut Pro, but any editing was really just limited to titles and the ins/outs for each song. The point is the mix and not the editing.
On the other hand, working with the video did inform my mix. For example, if a lead instrument had a riff in the song that’s shown in the video, then I’d emphasize it just a bit more in the mix. That’s not a bad thing, per se, if it’s not overdone. One quirk I ran into was on The Palace. The tracks included several vocal passes, using different mics. I picked the particular vocal track that I thought sounded best in the mix. When I synced it up to the video, I quickly realized that one vocal line (on camera) was out-of-sync and that LEWITT’s mixers must have blended several vocal performances in their own mix. Fortunately, it was an easy fix to use that one line from a different track and then everything was in sync.
Working the DAW
One of the tracks even included a Pro Tools session for Pro Tools users, but Logic Pro is my DAW of choice. Audition and Resolve (Fairlight) could have been options, but I prefer Logic Pro. It comes with really good built-in plug-ins, including reverbs, EQs, compressors, and amp modelers. I used all of these, plus a few paid and free third-party plug-ins from Accusonus, Analog Obsession, iZotope, FabFilter, Klevgrand, Sound Theory, TBProAudio, and Tokyo Dawn Labs.
One big selling point for me is Logic’s track stack feature, which is a method of grouping and organizing tracks and their associated channel strips. Use stacks to organize instrument tracks by type, such as drums, guitars, keys, vocals, etc. A track stack can be a folder or a summing stack. When summing is selected, then a track stack functions like a submix bus. Channel strips within the stack are summed and additional plug-ins can then be applied to the summing stack. If you think in terms of an NLE, then a track stack is a bit like a compound clip or nest. You can collapse or expand the tracks that have been grouped into the stack with a reveal button. Want to bring your visual organization from a couple of dozen tracks down to only a few? Then track stacks organized by instruments are the way to go.
For those unfamiliar with Logic Pro basics, here’s a simplified look at Logic’s signal flow. Audio from the individual track flows into the channel strip. That signal first hits any plug-ins, EQ, or filtering, and then flows out through the volume control (fader). If you need to raise or lower the volume of a track going into the plug-in chain, then you either have to adjust the input of the plug-in itself, or add a separate gain plug-in as the first effect in the chain. The volume control/fader affects the level after plug-ins have been applied. This signal is then routed through the track stack (if used). On a summing track stack, the signal flow through its channel strip works the same way – plug-ins first, then volume fader. Of course, it can get more complex with groups, sends, and side-chaining.
All track stack signals, as well as any channel not placed into a track stack, flow through the stereo out bus (on a stereo project) – again, into the plug-ins first and then out through the volume control. In addition to the stereo output bus, there’s also a master output fader, which controls the actual volume of the file written to the drive. If you place a metering plug-in into the chain of the stereo output bus, it indicates the level for peaks or loudness prior to the volume control of the stereo output AND the master output bus. Therefore, I would recommend that you ALWAYS leave both faders at their zero default, in order to get accurate readings.
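That “plug-ins first, fader last” flow can be sketched as a toy Python model. The class and function names are mine, not Logic’s, and levels are simple linear gain multipliers; the point is only to show why pre-plug-in trim needs its own gain plug-in, and why the meter agrees with the written file only when the output faders sit at unity:

```python
class ChannelStrip:
    """Toy model of a channel strip: plug-in chain first, fader last."""

    def __init__(self, fader=1.0):
        self.plugins = []    # ordered chain of functions: level -> level
        self.fader = fader   # volume control, applied AFTER the plug-ins

    def pre_fader(self, level):
        # the point in the chain where an inserted meter would read
        for plugin in self.plugins:
            level = plugin(level)
        return level

    def process(self, level):
        return self.pre_fader(level) * self.fader


def gain_plugin(db_change):
    """A separate gain plug-in, used to trim level BEFORE later plug-ins."""
    factor = 10 ** (db_change / 20)
    return lambda level: level * factor


track = ChannelStrip(fader=0.8)           # one instrument track
track.plugins.append(gain_plugin(-6.0))   # input trim ahead of the chain
stack = ChannelStrip()                    # summing track stack (submix bus)
stereo_out = ChannelStrip()               # fader left at unity, per above

routed = stack.process(track.process(1.0))
pre_fader_level = stereo_out.pre_fader(routed)   # what the meter shows
final_level = stereo_out.process(routed)         # what gets written to disk
# with the stereo-out fader at unity, the two values are identical
```

Pull the stereo-out fader below unity and `final_level` drops while `pre_fader_level` (the meter reading) doesn’t, which is exactly why the article recommends leaving both output faders at their zero defaults.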
All mixes are subjective
The approach to the mix varies with different engineers. What worked best for me was to concentrate on groups of instruments. The order isn’t important, but start with drums, for instance. The kit will likely have the highest number of tracks. Work with the soloed drum tracks to get a well-defined drum sound as a complete kit. Same for guitars, vocals, or any other instrument. Then work with the combination to get the right overall balance. Lastly, add and adjust mastering plug-ins to the stereo output channel strip to craft the final sound.
Any mix is totally subjective and technical perfection is merely an aspiration. I personally prefer my mix of Dirty to the others. The song is fun and the starting tracks were nice and clean. But I’m certainly happy with my mix on the others, in spite of sonic imperfections. To make sure your mix is as good as it can be, check it in different listening environments. Fortunately, Audition can still burn your mix to an audio CD. Assuming you still own a disc burner and CD players, it’s a great portable medium to check your mix in the car or on your home stereo system. Overall, during the course of mixing and then reviewing, I probably checked this on four different speaker set-ups, plus headphones and earbuds. The latter turned out to be the best way to detect stereo imaging issues, though not necessarily the most accurate sound otherwise. But that is probably the way a large swath of people consume music these days.
I hope you enjoy the compilation if you take the time to listen. The order of the songs is: