Think you can mix?

Are you aspiring to be the next Chris Lord-Alge or Glyn Johns? Maybe you just have a rock 'n' roll heart. Or maybe you just want to try your hand at mixing music, but don't have the material to work with. Whatever your inspiration, Lewitt Audio – the Austrian manufacturer of high-quality studio microphones – has made it easier than ever to get started. A while back Lewitt launched the myLEWITT site as a user community, featuring educational tips, music challenges, and free content.

Even though the listed music challenge contests may have expired, Lewitt leaves the content online and available to download for free. Simply create a free myLEWITT account to access them. These are individual .wav stem tracks of the complete challenge songs recorded using a range of Lewitt microphones. Each file is labelled with the name of the mic used for that track. That’s a clever marketing move, but it’s also handy if you are considering a mic purchase. Naturally these tracks are only for your educational and non-commercial use.

Since these are audio files and not DAW-specific project files, they are compatible with any audio software. Naturally, if you are a video editor, it's possible to mix these tracks in an NLE, like Premiere Pro, Media Composer, or Final Cut Pro. However, I wouldn't recommend that. First of all, DAW applications are designed for mixing and NLEs aren't. Second, if you are trying to stretch your knowledge, then you should use the correct tool for the job – especially if you are going to go out on the web for mixing tips and tricks from noted recording engineers and producers.

Start with a DAW

If you are new to DAW (digital audio workstation) software, then there are several free audio applications you might consider just to get started. Mac users already have GarageBand. Of course, most pros wouldn’t consider that, but it’s good enough for the basics. On the pro level, Reaper is a popular free DAW application. Universal Audio offers Luna for free, if you have a compatible UA Thunderbolt audio interface.

As a video editor, you might also be getting into DaVinci Resolve. Both the free and paid Studio versions integrate the Fairlight audio page. Fairlight, the company, had a well-respected history in audio prior to the acquisition by Blackmagic Design, who has continued to build upon that foundation. This means that not only can you do sophisticated audio mixes for video in Resolve, but there’s no reason that you can’t start and end in the Fairlight page for a music project.

The industry standard is Avid Pro Tools. If you are planning to work in a professional audio environment like a recording studio, then you’ll really want to know Pro Tools. Unfortunately, Avid discontinued their free Pro Tools|First version. However, you can still get a free, full-featured 30-day trial. Plus, the subscription costs aren’t too bad. If you have an Adobe Creative Cloud subscription, then you also have access to Audition as part of the account. Finally, if you are deep into the Apple ecosystem, then I would recommend purchasing Logic Pro, which is highly regarded by many music producers. 

Taking the plunge

In preparing this blog post, I downloaded and remixed one of the myLEWITT music challenge projects – The Seeds of your Sorrow by Spitting Ibex. This downloaded as a .zip containing 19 .wav files, all labelled according to instrument and microphone used. I launched Logic Pro, brought in the tracks, and lined them up at the start so that everything was in sync. From there it’s just a matter of mixing to taste.

Logic is great for this type of project, because of its wealth of included plug-ins. Logic is also a good host application for third party plug-ins, such as those from iZotope, Waves, Accusonus, and others. Track stacks are a versatile Logic feature. You can group a set of tracks (like all of the individual drums kit tracks) and turn those into a track stack, which then functions like a submix bus. The individual tracks can still be adjusted, but then you can also adjust levels on the entire stack. Track stacks are also great for visual organization of your track layout. You can show or hide all of the tracks within a stack, simply by twirling a disclosure triangle.
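Conceptually, a track stack behaves like a summing bus with its own gain stage on top of the individual track gains. Here's a minimal Python sketch of that idea (the function, track names, and values are hypothetical, purely for illustration):

```python
# Hypothetical illustration of a track stack acting as a submix bus.
# Each track is a list of audio samples; gains are linear multipliers.

def mix_stack(tracks, track_gains, stack_gain):
    """Sum the gain-adjusted member tracks, then apply the stack's gain."""
    summed = [0.0] * len(tracks[0])
    for track, gain in zip(tracks, track_gains):
        for i, sample in enumerate(track):
            summed[i] += sample * gain
    # The stack-level fader scales the entire submix at once, while the
    # individual track gains above remain independently adjustable.
    return [s * stack_gain for s in summed]

kick = [0.5, 0.25, 0.0]
snare = [0.0, 0.5, 0.25]
drum_bus = mix_stack([kick, snare], [1.0, 0.5], 0.5)
```

Pulling down `stack_gain` lowers the whole drum submix without disturbing the balance between kick and snare – the same behavior a track stack gives you in Logic.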

I’m certainly not an experienced music mixer, but I have mixed simple projects before. Understanding the process is part of being a well-rounded editor. In total, I spent about six hours over two days mixing the Spitting Ibex song. I’ve posted it on Vimeo as a clip with three sections – the official mix, my mix, and the unmixed/summed tracks. My mix was relatively straightforward. I wanted an R&B vibe, so no fancy left-right panning, voice distortions, or track doubling.

I mixed it totally in Logic Pro using mainly the native plug-ins for EQ, compression, reverb, amp modeling, and other effects. I also used some third-party plug-ins, including iZotope RX8 De-click and Accusonus ERA De-esser on the vocal track. As I brightened the vocal track to bring it forward in the mix, it also emphasized certain mouth sounds caused by the singer’s proximity to the mic. These plug-ins helped to tame those. I also added two final mastering plug-ins: Tokyo Dawn’s Nova for slight multi-band compression, along with FabFilter’s Pro-L2 limiter. The latter is one of the smoothest mastering plug-ins on the market and is a nice way to add “glue” to the mix.

If you decide to download and play with the tracks yourself, then check out the different versions submitted to the contest, which are showcased at myLEWITT. For a more detailed look into the process, Dutch mixing/mastering engineer and YouTuber Wytse Gerichhausen (White Sea Studio) has posted his own video about creating a mix for this music challenge.

In closing…

Understand that a great music mix starts with a tight group of musicians and high-quality recordings. Without those, it’s hard to make magic. With those, you are more than three-quarters of the way there. Fortunately Lewitt has taken care of that for you.

The point of any exercise like this is to learn and improve your skills. Learn to trust your ears and taste. Should you remove the breaths in a singer’s track? Should the mix be wetter (more reverb) or not? If so, what sort of reverb space? Should the bottom end be fatter? Should the guitars use distortion or be clean? These are all creative judgements that can only be made through trial-and-error and repeated experimentation. If music mixing is something you want to pursue, then the Produce Like A Pro YouTube channel is another source of useful information.

Let me leave you with some pro tips. At a minimum, make sure to mix any complex project on quality nearfield monitors (assuming you don’t have an actual studio at your disposal). Test your mix in different listening environments, on different speakers, and at different volume levels to see if it translates universally well. If you are going for a particular sound or style, have some good reference tracks, such as commercially-mastered songs, to which you can compare your mix. How did they balance the instruments? Did the reference song sound bright, boomy, or midrange? How were the dynamics and level of compression? And finally, take a break. All mixers can get fatigued. Mixes will often sound quite different after a break or on the next day. Sometimes it’s best to leave it and come back later with fresh ears and mind.

In any case, you can get started without spending any money. The tracks are free. Software like DaVinci Resolve is free. As with so many other tasks enabled by modern technology, all it takes is making the first move.

©2022 Oliver Peters

Pro Tips for FCP Editors

Every nonlinear editing application has strengths and weaknesses. Each experienced editor has a list of features and enhancements that they’d like to see added to their favorite tool. Final Cut Pro has many fans, but also its share of detractors, largely because of Apple’s pivot when Final Cut Pro changed from FCP7 to FCPX a decade ago. That doesn’t mean it’s not adequate for professional-level work. In fact, it’s a powerful tool in its own right. But there are ways to adapt it to workflows you may miss from competing NLEs. I discuss five of these tips in my article Making Final Cut More Pro over at FCP.co.

©2022 Oliver Peters

Building a Scene

The first thing any film student learns about being an editor is that a film is not put together simply the way the editor thinks it should be. The editor is there as the right hand of the director working in service to the story.

Often a film editor will start out cutting while the film is still being shot. The director is on set or location and is focused on getting the script captured. Meanwhile, the editor is trying to “keep up to camera” and build the scenes in accordance with the script as footage is received. Although it is often said that the final edit is the last rewrite of any film, this first edited version is intended to be a faithful representation of the script as it was shot. It’s not up to the editor’s discretion to drop, change, or re-arrange scenes that don’t appear to work. At least not at this stage of the process.

Any good editor is going to do the best job they can to “sell” their cut to the director by refining the edits and often adding basic sound design and temp music. The intent is to make the story flow as smoothly as possible. Whether you call this a first assembly or the editor’s cut, this first version is usually based on script notes, possibly augmented by the director’s initial feedback during downtime from filming. Depending on the director, the editor might have broad license to use different takes or assemble alternate versions. Some directors will later go over the cut in micro detail, while others only focus on the broad strokes, leaving a lot of the editor’s cut intact.

Anatomy of a scene

Many editors make it their practice not to be on the set. Unfortunately the days of a crew watching “dailies” with the director are largely gone. Thus the editor misses seeing the initial reaction a director has to the material that has been filmed. This means that the editor’s first input will be the information written on the script and notes from the script supervisor. It’s important to understand that information.

A scene can be a complex dialogue interaction with multiple actors that may cover several pages. Or, it can be a simple transition shot to bridge two other scenes. While scenes are generally shot in multiple angles that are edited together, there are also scenes done as a single, unedited shot, called "oners." A oner can be a complex, choreographed Steadicam shot or it can be a simple static shot, like a conversation between a driver and passenger recorded only as a two-shot through the windshield. There are even films that are captured and edited as if they were a continuous oner, such as 1917 and Birdman or (The Unexpected Virtue of Ignorance). In fact, these films were cleverly built with seamless edits. However, individual component scenes certainly were actual oners.

The lined script

Scripts are printed as one-sided pages. When placed in a binder, you’ll have the printed text on the right and a blank facing page on the left (the backside of the previous script page). The script supervisor will physically or electronically (ScriptE) draw lines through the typed, script side of a page. These lines are labelled and represent each set-up and/or angle used to film the scene. Specific takes and notes will be written onto the left facing page.

Script scenes are numbered, and numbering systems vary around the world, with further variations made by individual script supervisors. For US crews, it's common to number angles and takes alphanumerically according to their scene numbers. A "master shot" will usually be a wide shot that covers the entire length of the scene. So for scene 48, the master shot will be labelled 48 or sometimes 48-ws, if it's a wide shot. The scene/take number will also appear on the slate. The supervisor will draw a vertical line through the scene from the start to the end of the capture. Straight segments of the line indicate the person speaking is on camera. Wiggly or zig-zag segments indicate that portion of the scene will be on a different angle.

After the master, the director will run the scene again with different camera set-ups. Maybe it’s a tighter angle or a close-up of an individual actor in the scene. These are numbered with a letter suffix, such as 48A, 48B, and so on. A close-up might also be listed as 48A-cu, for example. Lengthy scenes can be tough to get down all at once without mistakes. So the director may film “pick-ups” – portions of a scene, often starting in the middle. Or there may be a need to record an alternate version of the scene. Pick-ups would be labelled 48-PU and an alternate would be A48. Sometimes a director will record an action multiple times in a row without stopping camera or re-slating. This might be the case when the director is trying to get a variety of actions from an actor handling a prop. Such set-ups would be labelled as a “series” (e.g. 48F-Ser).
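As a rough sketch of how these labels break down, here's a hypothetical parser for the conventions described above. The notation varies from crew to crew, so treat the pattern as illustrative, not definitive:

```python
import re

# Hypothetical parser illustrating US-style slate labels as described
# above: "48" (master), "48A" (second set-up), "A48" (alternate),
# "48-PU" (pick-up), "48F-Ser" (series), "48A-cu" (close-up).
SLATE = re.compile(
    r"^(?P<alt>[A-Z])?"      # leading letter = alternate version of the scene
    r"(?P<scene>\d+)"        # scene number
    r"(?P<setup>[A-Z])?"     # trailing letter = additional set-up
    r"(?:-(?P<tag>\w+))?$"   # optional tag: PU, Ser, ws, cu...
)

def parse_slate(label):
    m = SLATE.match(label)
    if not m:
        raise ValueError(f"unrecognized slate label: {label}")
    return {
        "scene": int(m.group("scene")),
        "alternate": bool(m.group("alt")),
        "setup": m.group("setup") or "master",
        "tag": m.group("tag"),
    }

print(parse_slate("48F-Ser"))
# {'scene': 48, 'alternate': False, 'setup': 'F', 'tag': 'Ser'}
```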

On the left facing page, the script supervisor will keep track of these angles and note the various takes for each – 48-1, 48-2, 48-3, 48A-1, 48A-2, etc. They will also add notes and comments. For example, if a prop didn’t work or the actor missed an important line. And, of course, the take that the director really liked will be circled and is known as the “circle take.” In the days of physical film editing, only circle takes were printed from the negative to work print for the editors to use. With modern digital editing, everything is usually loaded in the editing system. The combination of drawn, lined set-ups with straight and zig-zag line segments together with circle takes provides the editor with a theoretical schematic of how a scene might be cut together.

The myth of the circle take

A circle take indicates a take that the director preferred. However, this is often based on the script supervisor’s observation of the director’s reaction to the performance. The director may or may not actually have indicated that’s the one and only take to use. Often a circle take is simply a good performance take, where actors and camera all hit their marks, and nothing was missed. In reality, an earlier take might have been better for the beginning of the scene, but the actors didn’t make it all the way through.

There are typically three scenarios for how a director will direct the actors in a scene. A) The scene has already been rehearsed and actions defined, so the acting doesn’t change much from take to take. The director is merely tweaking nuance out of the actors to get the best possible performance. B) The director has the actors ramp up their intensity with each take. Early takes may have a more subtle performance while later takes feature more exaggerated speech and mannerisms. C) The director wants a different type of performance with each take. Maybe sarcastic or humorous for a few, but aggressive and angry for others.

Depending on the director’s style, a circle take can be a good indication of what the editor should use – or it can be completely meaningless. In scenario A, it will be pretty easy to figure out the best performances and usually circle takes and other notes are a good guide. Scenario B is tougher to judge, especially in the early days of a production. The level of intensity should be consistent for a character throughout the film. Once you’ve seen a few days of dailies you’ll have a better idea of how characters should act in a given scene or situation. It’s mainly a challenge of getting the calibration right. Scenario C is toughest. Without actually cutting some scenes together and then getting solid, direct feedback from the director, the editor is flying blind in this situation.

Let’s edit the scene

NLEs offer tools to aid the editor in scene construction. If you use Avid Media Composer, then you can avail yourself of script-based editing. This lets you organize script bins that mimic a lined script. The ScriptSync option removes some of the manual preparation by phonetically aligning ingested media to lines of dialogue. Apple Final Cut Pro editors can also use keywords to simulate dialogue lines.

A common method going back to film editing is the organization of “KEM rolls.” These are string-outs of selected takes placed back-to-back, which enables fast comparisons of different performances. In the digital world this means assembling a sequence of best takes and then using that sequence as the source for your scene edit. Adobe Premiere Pro and Media Composer are the two main NLEs that facilitate easy sequence-to-sequence editing.

The first step before you make any edit is to review all of the dailies for the scene. The circle takes are important, but other takes may also be good for portions of the scene. The director may not have picked circle takes for the other set-ups – 48A, 48B, etc. In that case, you need to make that selection yourself.

You can create custom columns in a Media Composer bin. Create one custom column to rank your selections. An “X” in that column is for a good take. “XX” for one that can also be considered. Add your own notes in another custom column. Now you can use Media Composer’s Custom Sift command to show/hide clips based on these entries. If you only want to see the best takes displayed in the bin, then sift for anything with an X or XX in that first custom column. All other clips will be temporarily hidden. This is a similar function to showing Favorites in a Final Cut Pro Event. At this point you can either build a KEM Roll (selects) first or just start editing the scene.
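The sift logic is easy to picture as a simple filter over clip records. This hypothetical Python sketch mimics the idea (the clip names, ranks, and notes are invented):

```python
# Hypothetical sketch of the Custom Sift idea: show only the clips
# whose rank column contains "X" or "XX"; everything else is hidden.
clips = [
    {"name": "48-2",  "rank": "X",  "notes": "solid master"},
    {"name": "48-3",  "rank": "XX", "notes": "good energy, mic bump"},
    {"name": "48-4",  "rank": "",   "notes": "missed line"},
    {"name": "48A-1", "rank": "X",  "notes": "best close-up"},
]

def sift(clips, ranks=("X", "XX")):
    """Return only the clips whose rank matches; the rest stay 'hidden'."""
    return [c for c in clips if c["rank"] in ranks]

for clip in sift(clips):
    print(clip["name"], "-", clip["notes"])
```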

Cutting a scene together is a bit like playing chess or checkers. Continuity of actors’ positions, props, and dialogue lines often determines whether a certain construct works. If an actor ad libs the lines, you may have a lengthy scene in which certain bits of dialogue are in a different order or even completely different words from one take to the next. If you pick Take 5 for the master shot, this can block your use of some other set-ups, simply because the order of the dialogue doesn’t match. Good editing can usually overcome these issues, but it limits your options and may result in a scene that’s overly cutty.

Under ideal conditions, the lines are always said the same way and in the right order, props are always handled the same way at the same times, and actors are in their correct positions at the same points in the dialogue. Those scenes are a dream to cut. When they aren’t, that’s when an editor earns his or her pay.

When I cut a scene, I’ve reviewed the footage and made my selections. My first pass is to build the scene according to what’s in my head. Once I’ve done that I go back through and evaluate the cut. Would a different take be better on this line? Should I go to a close-up here? How about interspersing a few reaction shots? After that round, the last pass is for refinement. Tighten the edits, trim for J-cuts and L-cuts, and balance out audio levels. I now have a scene that’s ready to show to the director and hopefully put into the ongoing assembly of the film. I know the scene will likely change when I start working one-on-one with the director, but it’s a solid starting point that should reflect the intent and text of the script.

Happy editing!

©2021 Oliver Peters

Easy Resolve Grading with 6 Nodes

Spend any time watching Resolve tutorials and you’ll see many different ways in which colorists approach the creation of the same looks. Some create a look with just a few simple nodes. Others build a seemingly convoluted node tree designed to achieve the same goal. Neither approach is right or wrong.

Often, work that could all be done in a single node is spread across several, in order to easily trace back through your steps when changes are needed. It also makes it easy to compare the impact of a correction by enabling and disabling a node. A series of nodes applied to a clip can be saved as a PowerGrade, which is a node preset. PowerGrades can be set up for a certain look or can be populated with blank (unaltered) nodes that are organized for how you like to work. Individual nodes can also be labeled, so that it's easy to remember what operation you will do in each node.

The following is a simple PowerGrade (node sequence) that can be used as a starting point for most color grading work. It’s based on using log footage, but can also be modified for camera RAW or recordings in non-log color spaces, like Rec 709. These nodes are designed as a simple operational sequence to follow and each step can be used in a manner that works best with your footage. The sample ARRI clip was recorded with an ALEXA camera using the Log-C color profile.

Node 2 (LUT) – This is the starting point, because the first thing I want to do is apply the proper camera LUT to transform the image out of log. You could also do this with manual grading (no LUT). In that case the first three nodes would be rolled into one. Alternatively, you may use a Color Space Transform effect or even a Dehaze effect in some cases. But for the projects I grade, which largely use ARRI, Panasonic, Canon, and Sony cameras, adding the proper LUT seems to be the best starting point.

Node 1 (Contrast/Saturation) – With the LUT added to Node 2, I will go back to Node 1 to adjust contrast, pivot, and saturation. This changes the image going into the LUT and is a bit like adjusting the volume gain stage prior to applying an effect or filter when mixing sound. Since LUTs affect how color is treated, I will rarely adjust color balance or hue offsets (color wheels) in Node 1, as it may skew what the LUT is doing to the image in Node 2. The objective is to make subtle adjustments in Node 1 that improve the natural result coming out of Node 2.
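To see why the pivot matters, here's a simplified sketch of a contrast-around-pivot adjustment. Resolve's actual contrast control applies an s-shaped curve, so treat this linear form purely as an illustration:

```python
# Simplified sketch of a contrast control with an adjustable pivot:
# values above the pivot are pushed up, values below are pushed down,
# and the pivot point itself stays put. (Resolve's real contrast math
# is more refined; this linear version is only illustrative.)

def contrast_pivot(value, contrast, pivot):
    return (value - pivot) * contrast + pivot

# Raising contrast around a mid-gray pivot brightens highlights and
# deepens shadows without moving mid-gray itself.
for v in (0.2, 0.435, 0.8):
    print(v, "->", round(contrast_pivot(v, 1.2, 0.435), 3))
```

Shifting the pivot changes where that "hinge" sits, which is why adjusting pivot alongside contrast in Node 1 changes what the LUT in Node 2 receives.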

Node 3 (Primary Correction) – This node is where you'll want to correct color temperature/tint and use the color wheels, RGB curves, and other controls to achieve a nice primary color correction. For example, you may need to shift color temperature warmer or cooler, lower black levels, apply a slight s-curve in the RGB curves, or adjust the overall level up or down.

Node 4 (Secondary Correction) – This node is for enhancement and the tools you’ll generally use are hue/sat curves. Let’s say you want to enhance skin tones, or the blue in the sky. Adjust the proper hue/sat curve in this node.

Node 5 (Windows) – You can add one or more “power windows” within the node (or use multiple nodes). Windows can be tracked to follow objects, but the main objective is a way to relight the scene. In most projects, I find that one window per shot is typically all I need, if any at all. Often this is to brighten up the lighting on the main talent in the shot. The use of windows is a way to direct the viewer’s attention. Often a simple soft-edged oval is all you’ll need to achieve a dramatic result.

Node 6 (Vignette) – The last node in this basic structure is to add a vignette, which I generally apply just to subtly darken the corners. This adds a bit of character to most shots. I’ll build the vignette manually with a circular window rather than apply a stock effect. The window is inverted so that the correction impacts the shot outside of the windowed area.

So there’s a simple node tree that works for many jobs. If you need to adjust parameters such as noise reduction, that’s best done in Node 1 or 2. Remember that Resolve grading works on two levels – clip and timeline. These are all clip-based nodes. If you want to apply a global effect, like adding film grain to the whole timeline, then you can change the grading mode from clip to timeline. In the timeline mode, any nodes you apply impact the whole timeline and are added on top of any clip-by-clip correction, so it works a bit like an adjustment layer.

©2021 Oliver Peters

Application Color Management

I've previously written about the challenge of consistent gamma and saturation across multiple monitoring points. Getting an app's viewer, QuickTime playback, and the SDI output to all look the same can be a fool's errand. If you work on a Mac, then there are pros and cons to using Mac displays like an iMac. In general, Apple's "secret sauce" works quite well for Final Cut Pro. However, if you edit or grade in Resolve, Premiere Pro, or Media Composer, then you aren't quite as lucky. I've opined that you might actually need to generate separate files for broadcast and web deliverables.

The extra step of optimized file creation isn’t practical for most. In my case, the deliverables I create go to multiple platforms; however, few are actually destined for traditional broadcast or to be played in a theater. In most cases, my clients are creating content for the web or to be streamed in various venues. I predominantly edit in Premiere Pro and grade with Resolve. I’ve been tinkering with color management settings in each. The goal is a reasonably close match across both app viewers, the output I see to a Rec 709 display, and the look of the exported file when I view it in QuickTime Player on the computer.

Some of this advice might be a bit contrary to what I previously wrote. Both situations are still valid, depending on the projects you edit or grade. Granted, this is based on what I see on iMac and iMac Pro displays, so it may or may not be consistent with other display brands or when using PCs. And this applies to SDR, Rec 709, or sRGB outputs and not HDR grading. As a starting point, leave the Mac display profile alone. Don’t change its default profile. Yes, I know an iMac is P3, but that’s simply something you’ll have to live with.

Adobe Premiere Pro

Premiere Pro's Rec 709 timelines are based on 2.4 gamma, which is the broadcast standard. However, an exported file is displayed with a QuickTime color profile of 1-1-1 (1.96 gamma). The challenge is to work with the Premiere Pro viewer and see an image that matches the exported file. I now disable (uncheck) Display Color Management in General Preferences. This might seem counter-intuitive, but it results in a setting where the viewer, a Rec 709 output to a monitor, and the exported image all largely look the same.

If you enable Display Color Management, you'll get an image in the viewer that's a somewhat closer match for saturation, but the gamma will be darker than either the QuickTime export or the video monitor. If you disable this setting, the gamma will be a better match (shadows aren't crushed); however, the saturation of reds will be somewhat exaggerated in the Premiere Pro viewer. It's a bit of a trade-off, but I prefer the setting to be off.
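The gamma mismatch itself is easy to demonstrate. As a simplified model (ignoring the rest of the color pipeline), displayed light is roughly the encoded pixel value raised to the display gamma, so the same mid-gray pixel comes out darker on a 2.4-gamma broadcast monitor than under QuickTime's ~1.96 gamma:

```python
# Simplified model of display gamma: perceived light is roughly the
# encoded pixel value raised to the display's gamma exponent. The two
# gamma values are the ones discussed above; everything else about
# the color pipeline is ignored for clarity.

def displayed_luminance(code_value, display_gamma):
    return code_value ** display_gamma

v = 0.5  # an encoded mid-gray pixel
broadcast = displayed_luminance(v, 2.4)    # Rec 709 broadcast monitor
quicktime = displayed_luminance(v, 1.96)   # QuickTime 1-1-1 profile
print(f"broadcast 2.4: {broadcast:.3f}, QuickTime 1.96: {quicktime:.3f}")
```

The 1.96 profile renders the same code value visibly brighter in the shadows and midtones, which is why a file graded against a 2.4-gamma display can look washed out when played back in QuickTime Player.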

Blackmagic Design DaVinci Resolve

Resolve has multiple places that can trip you up. But I've found that once you set them up, the viewer image will be a closer match to the exported file and to the Rec 709 image than is the case for Premiere Pro. There are three sections to change. The first is in the Project Settings pane (gear menu). This is the first place to start with every new Resolve project. Under Color Management, set the Timeline color space to Rec. 709 (Scene). I've experimented with various options, including ACES. Unfortunately the ongoing ACES issue with fluorescent color burned me on a project, so I'll wait until I really have a need to use ACES again. Hopefully it will be less of a work-in-progress then. I've gone back to working in Rec. 709, but new for me is to use the Scene variant. I also turn on Broadcast Safe, but use the gentler restricted range of -10 to 110.

The next adjustment is in Resolve Preferences. Go to the General section and turn on: Use 10-bit precision in viewers, Use Mac display color profiles, and Automatically tag Rec. 709 Scene clips as Rec. 709-A. What this last setting does is make sure the exports are tagged with the 1-1-1 QuickTime color profile. If this is not checked, the file will be exported with a profile of 1-2-1 (2.4 gamma) and look darker when you play it to the desktop using QuickTime Player.

The last setting is on the Deliver page. Data levels can be set to Auto or Video. The important thing is to set the Color Space Tag and Gamma Tag to Same as Project. By doing so, the exported files will adhere to the settings described above.

Making these changes in Premiere Pro and Resolve gives me more faith in what I see in the viewer of each application. My exports are a closer match with fewer surprises. Is it a perfect match? Absolutely not. But it’s enough in the ballpark for most footage to be functional for editing purposes. Obviously you should still make critical image and color adjustments using your scopes and a calibrated reference display, but that’s not always an option. Going with these settings should mean that if you have to go by the computer screen alone, then what you see will be close to what you get!

©2021 Oliver Peters