Six Premiere Pro Game Changers

When a software developer updates any editing application, users often look for big changes, fancy features, and new functionality. Unfortunately, many little updates that can really change your day-to-day workflow are often overlooked.

Ever since the shift to its Creative Cloud subscription model, Adobe has brought a string of updates to its core audio and video applications. Although several have made big news, the more meaningful changes often seem less than awe-inspiring to Adobe’s critics. Let me counter that narrative and point out six features that have truly improved the daily workflow for my Premiere Pro projects.

Auto Reframe Sequence. If you deliver projects for social media outlets, you know that various vertical formats are required. This is truly a pain when starting with content designed for 16×9 horizontal distribution. The Auto Reframe feature in Premiere Pro makes it easy to reformat any sequence for 9×16, 4×5, and 1×1 formats. It takes care of keyframing each shot to follow an area of interest within that shot, such as a person walking.
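Under the hood, this is animated crop geometry. Here is a rough illustration (not Adobe’s actual algorithm, which analyzes image content): keep the full source height and pan a narrow window to follow a point of interest.

```python
# Hypothetical sketch of auto-reframe geometry: derive a 9x16 crop window
# from a 16x9 frame and pan it toward a point of interest.

def reframe_crop(src_w, src_h, target_aspect, interest_x):
    """Return (x, y, w, h) of a crop window centered on interest_x,
    clamped so the window never leaves the source frame."""
    crop_h = src_h                            # keep full source height
    crop_w = round(crop_h * target_aspect)    # 9x16 inside 1920x1080 -> 608 wide
    half = crop_w / 2
    cx = min(max(interest_x, half), src_w - half)  # clamp the window center
    return (round(cx - half), 0, crop_w, crop_h)

# A subject walks left to right: these positions become keyframes.
for t, x in [(0, 200), (1, 960), (2, 1700)]:
    print(t, reframe_crop(1920, 1080, 9 / 16, x))
```

Premiere’s version also smooths the motion between keyframes; the clamping here just keeps the window inside the frame edges.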

While other NLEs, like Final Cut Pro, also offer reformatting for vertical aspect ratios, none offer the same degree of automatic control to reposition the clip. It’s not perfect, but it works for most shots. If you don’t like the results on a shot, simply override the existing keyframes and manually reposition the clip. Auto Reframe works best if you start with a flattened, textless file, which brings me to the next feature.

Scene Edit Detection. This feature is generally used in color correction to automatically determine cuts between shots in a flattened file. The single clip in the sequence is split at each detected cut point. While you can use it for color correction in Premiere Pro, as well, it is also useful when Auto Reframing a sequence for verticals. If you try to apply Auto Reframe to a flattened file, Premiere will attempt to analyze and apply keyframes across the entire sequence since it’s one long clip. With these added splices created by Scene Edit Detection, Premiere can analyze each shot separately within the flattened file.
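Conceptually, cut detection is simple: flag the frames where consecutive images differ sharply. A toy sketch of the idea (the real analysis is far more robust against motion and lighting changes):

```python
import numpy as np

def detect_cuts(frames, threshold=30.0):
    """Flag a cut wherever the mean absolute difference between
    consecutive frames spikes above a threshold."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            cuts.append(i)   # frame index where the new shot starts
    return cuts

# Synthetic "flattened file": six dark frames, then six bright frames.
shot_a = [np.full((9, 16), 20, dtype=np.uint8)] * 6
shot_b = [np.full((9, 16), 200, dtype=np.uint8)] * 6
print(detect_cuts(shot_a + shot_b))   # -> [6]
```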

Auto Transcribe Sequence / Captioning. Modern deliverables take into account the challenges many viewers face. One of these is closed captions, which are vital to hearing-impaired viewers. Captions are also turned on by many viewers with otherwise normal hearing abilities for a variety of reasons. Just a few short years ago, getting interviews transcribed, adding subtitles for foreign languages, or creating closed captions required using an outside service, often at a large cost. 

Adobe’s first move was to add caption and subtitle functions to Premiere Pro, which enabled editors to import, create, and/or edit caption and subtitle text. This text can be exported as a separate sidecar file (such as .srt) or embedded into the video file. In a more recent update, Adobe augmented these features with Auto Transcribe. It’s included as part of your Creative Cloud subscription and there is generally no length limitation for reasonable use. If you have an hourlong interview that needs to be transcribed – no problem. 
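For reference, a sidecar .srt file is plain text: a cue number, a start and end timecode separated by an arrow (with a comma before the milliseconds), and the caption text. The dialogue here is invented:

```
1
00:00:01,000 --> 00:00:03,500
Welcome back to the program.

2
00:00:03,600 --> 00:00:06,200
Today we're talking about captions.
```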

Adobe uses cloud-based AI for part of the transcription process, so an internet connection is required. The turnaround time is quite fast and the accuracy is among the best I’ve encountered. While the language options aren’t as broad as some competitors’, most common Romance and Asian languages are covered. Once the speech-to-text analysis has been completed, that text can be used as a transcription or as captions (closed captions and/or subtitles). The transcription can also be exported as a text file with timecode. That’s handy for producers to create a paper cut for the editor.

Remix. You’ve just cut a six-minute corporate video and now you have to edit a needle drop music cue as a bed. It’s only 2:43, but needs to be extended to fit the 6:00 length and correctly time out to match the ending. You can either do this yourself or let Adobe tackle it for you. Remix came into Premiere Pro from Audition. This feature lets you use Adobe Sensei (their under-the-hood AI technology) to automatically re-edit a music track to a new target length. 

Open the Essential Sound panel, designate the track containing the cue as Music, enable the Duration tab, and select Remix. Set your target length and see what you get. You can customize the number of segments and variations to make the track sound less repetitive if needed. Some tracks have long fade-outs, so you may have to overshoot your target length in order to get the fade to properly coincide with the end of the video. I often still make one manual music edit to get it just right. Nevertheless, the Remix feature is a great time-saver that usually gets me 90% of the way there.
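The arithmetic behind that target is simple. This hypothetical helper (nothing like Sensei’s internals) estimates how much material a remix must add and roughly how many re-edited segments that implies, assuming loopable sections average about 20 seconds:

```python
import math

def remix_plan(cue_s, target_s, avg_segment_s=20.0):
    """Return (seconds to add, approximate segment count needed)."""
    delta = target_s - cue_s                       # negative means trim instead
    segments = max(0, math.ceil(delta / avg_segment_s))
    return delta, segments

cue = 2 * 60 + 43     # the 2:43 needle-drop cue from the example
target = 6 * 60       # the 6:00 video
print(remix_plan(cue, target))   # -> (197, 10)
```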

Audition. If you pay for a full Creative Cloud subscription, then you benefit from the larger Adobe ecosystem. One of those applications is Audition, Adobe’s digital audio workstation (DAW) software. Audition is often ignored in most DAW roundups, because it doesn’t include many music-specific features, like software instruments and MIDI. Instead, Audition is targeted at general audio production (VO recordings, podcasts, commercials) and audio-for-video post in conjunction with Premiere Pro. Audition is designed around editing and processing a single audio file or for working in a multitrack session. I want to highlight the first method here.

Noise in location recordings is a fact of life for many projects. Record an interview in a working commercial kitchen and there will be a lot of background noise. Premiere Pro includes a capable noise reduction audio filter, which can be augmented by many third-party tools from Accusonus, Crumplepop, and of course, iZotope RX. But if the Premiere Pro filter isn’t good enough, you need look no further than Audition. Export the track(s) from Premiere and open those (or the original files) in Audition.

Select the Noise Reduction/Restoration category under the Effects pulldown menu. First capture a short noise print in a section of the track with only background noise. This “trains” the filter for what is to be removed. Then select Noise Reduction (process). Follow the instructions and trust your own hearing to remove as much noise as possible with the least impact on the dialogue. If the person speaking sounds like they are underwater, then you’ve gone too far. Apply the effect in order to render the processing and then bounce (export) that processed track. Import the new track into Premiere. While this is a two-step process, you aren’t encumbering your computer with any real-time noise reduction filter when using such a pre-processed audio file.
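The noise print idea can be illustrated with a toy spectral subtraction (Audition’s actual processing is far more sophisticated): estimate the noise spectrum from a noise-only region, then subtract it from each frame of the signal.

```python
import numpy as np

def denoise(signal, noise_print, frame=256):
    """Crude spectral subtraction: average the magnitude spectrum of a
    noise-only region, subtract it frame by frame, keep original phase."""
    usable = len(noise_print) // frame * frame
    noise_mag = np.abs(
        np.fft.rfft(noise_print[:usable].reshape(-1, frame), axis=1)
    ).mean(axis=0)
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract, floor at 0
        phase = np.angle(spec)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), frame)
    return out

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.arange(4096) / 48000)   # stand-in "dialogue"
noise = 0.2 * rng.standard_normal(4096)
noisy = tone + noise
cleaned = denoise(noisy, noise)   # idealized: the print is the actual noise
print(np.mean((cleaned - tone) ** 2) < np.mean((noisy - tone) ** 2))  # -> True
```

Push the subtraction too hard and you get exactly the “underwater” artifacts described above, which is why trusting your ears matters more than the numbers.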

Link Media. OK, I know relinking isn’t new to Premiere Pro and it’s probably not a marquee feature for editors always working with native media. When moving projects from offline to online – creative to finishing editorial – you know that if you cannot properly relink media files, a disaster will ensue.

Media Composer, Final Cut Pro, and Resolve all have relink functions. They work well with application-controlled, optimized media. But at other times, when working with camera-original native files, they might not work at all. I find that Premiere Pro works the best of these NLEs when it comes to relinking a wide variety of media files. That’s precisely because the user has a lot of control over the relink criteria in Premiere Pro. It’s not left up entirely to the application.

Premiere Pro expects the media to be in the same relative path on the drive. Let’s say that you move the entire project to a different folder (like from Active Projects to Archived Projects) on your storage system. Navigate to and locate the first missing file and Premiere will find all the rest.

The relinking procedure is also quite forgiving, because the various file criteria used to relink can be checked or unchecked. For example, I frequently edit with watermarked temporary music tracks, which are 44.1kHz MP3 files. When the cut is approved and the music is licensed, I download new, non-watermarked versions of that music as 48kHz WAV or AIF files. Premiere Pro easily relinks to the WAV or AIF files instead of the MP3s once I point it in the right direction. All music edits (including internal edits made by Remix) stay as intended and there is no mismatch due to the sample rate change.
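The matching logic amounts to indexing candidate files by a relaxed key. A hypothetical sketch (not Premiere’s code) that ignores the file extension, so a temp MP3 can relink to the finished WAV of the same name:

```python
from pathlib import Path

def relink(missing, candidates, ignore_extension=True):
    """Return the first candidate whose name matches the missing file,
    optionally ignoring the extension."""
    index = {}
    for c in candidates:
        key = Path(c).stem if ignore_extension else Path(c).name
        index.setdefault(key, c)
    key = Path(missing).stem if ignore_extension else Path(missing).name
    return index.get(key)

finished = ["/music/licensed/cue.wav", "/music/licensed/sting.aif"]
print(relink("/music/temp/cue.mp3", finished))   # -> /music/licensed/cue.wav
```

Because edit points are stored in time rather than samples, a sample rate change on relink doesn’t shift the music edits.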

These features might not make it into everyone’s Top 10 list, but they are tools generally not found in other NLEs. I use them quite often to speed up the session and remove drudgery from the editing process.

©2022 Oliver Peters

Think you can mix?

Are you aspiring to be the next Chris Lord-Alge or Glyn Johns? Maybe you just have a rock ’n’ roll heart. Or you just want to try your hand at mixing music, but don’t have the material to work with. Whatever your inspiration, Lewitt Audio – the Austrian manufacturer of high-quality studio microphones – has made it easier than ever to get started. A while back, Lewitt launched the myLEWITT site as a user community, featuring educational tips, music challenges, and free content.

Even though the listed music challenge contests may have expired, Lewitt leaves the content online and available to download for free. Simply create a free myLEWITT account to access them. These are individual .wav stem tracks of the complete challenge songs recorded using a range of Lewitt microphones. Each file is labelled with the name of the mic used for that track. That’s a clever marketing move, but it’s also handy if you are considering a mic purchase. Naturally these tracks are only for your educational and non-commercial use.

Since these are audio files and not specific DAW projects, they are compatible with any audio software. Naturally, if you are a video editor, it’s possible to mix these tracks in an NLE, like Premiere Pro, Media Composer, or Final Cut Pro. However, I wouldn’t recommend that. First of all, DAW applications are designed for mixing and NLEs aren’t. Second, if you are trying to stretch your knowledge, then you should use the correct tool for the job, especially if you are going to go out on the web for mixing tips and tricks from noted recording engineers and producers.

Start with a DAW

If you are new to DAW (digital audio workstation) software, then there are several free audio applications you might consider just to get started. Mac users already have GarageBand. Of course, most pros wouldn’t consider that, but it’s good enough for the basics. On the pro level, Reaper is a popular free DAW application. Universal Audio offers Luna for free, if you have a compatible UA Thunderbolt audio interface.

As a video editor, you might also be getting into DaVinci Resolve. Both the free and paid Studio versions integrate the Fairlight audio page. Fairlight, the company, had a well-respected history in audio prior to its acquisition by Blackmagic Design, which has continued to build upon that foundation. This means that not only can you do sophisticated audio mixes for video in Resolve, but there’s no reason that you can’t start and end in the Fairlight page for a music project.

The industry standard is Avid Pro Tools. If you are planning to work in a professional audio environment like a recording studio, then you’ll really want to know Pro Tools. Unfortunately, Avid discontinued their free Pro Tools|First version. However, you can still get a free, full-featured 30-day trial. Plus, the subscription costs aren’t too bad. If you have an Adobe Creative Cloud subscription, then you also have access to Audition as part of the account. Finally, if you are deep into the Apple ecosystem, then I would recommend purchasing Logic Pro, which is highly regarded by many music producers. 

Taking the plunge

In preparing this blog post, I downloaded and remixed one of the myLEWITT music challenge projects – The Seeds of your Sorrow by Spitting Ibex. This downloaded as a .zip containing 19 .wav files, all labelled according to instrument and microphone used. I launched Logic Pro, brought in the tracks, and lined them up at the start so that everything was in sync. From there it’s just a matter of mixing to taste.

Logic is great for this type of project, because of its wealth of included plug-ins. Logic is also a good host application for third-party plug-ins, such as those from iZotope, Waves, Accusonus, and others. Track stacks are a versatile Logic feature. You can group a set of tracks (like all of the individual drum kit tracks) and turn those into a track stack, which then functions like a submix bus. The individual tracks can still be adjusted, but then you can also adjust levels on the entire stack. Track stacks are also great for visual organization of your track layout. You can show or hide all of the tracks within a stack, simply by twirling a disclosure triangle.

I’m certainly not an experienced music mixer, but I have mixed simple projects before. Understanding the process is part of being a well-rounded editor. In total, I spent about six hours over two days mixing the Spitting Ibex song. I’ve posted it on Vimeo as a clip with three sections – the official mix, my mix, and the unmixed/summed tracks. My mix was relatively straightforward. I wanted an R&B vibe, so no fancy left-right panning, voice distortions, or track doubling.

I mixed it totally in Logic Pro using mainly the native plug-ins for EQ, compression, reverb, amp modeling, and other effects. I also used some third-party plug-ins, including iZotope RX8 De-click and Accusonus ERA De-esser on the vocal track. As I brightened the vocal track to bring it forward in the mix, it also emphasized certain mouth sounds caused by the singer’s proximity to the mic. These plug-ins helped to tame those. I also added two final mastering plug-ins: Tokyo Dawn’s Nova for slight multi-band compression, along with FabFilter’s Pro-L2 limiter. The latter is one of the smoothest mastering plug-ins on the market and is a nice way to add “glue” to the mix.

If you decide to download and play with the tracks yourself, then check out the different versions submitted to the contest, which are showcased at myLEWITT. For a more detailed look into the process, Dutch mixing/mastering engineer and YouTuber Wytse Gerichhausen (White Sea Studio) has posted his own video about creating a mix for this music challenge.

In closing…

Understand that a great music mix starts with a tight group of musicians and high-quality recordings. Without those, it’s hard to make magic. With those, you are more than three-quarters of the way there. Fortunately Lewitt has taken care of that for you.

The point of any exercise like this is to learn and improve your skills. Learn to trust your ears and taste. Should you remove the breaths in a singer’s track? Should the mix be wetter (more reverb) or not? If so, what sort of reverb space? Should the bottom end be fatter? Should the guitars use distortion or be clean? These are all creative judgements that can only be made through trial-and-error and repeated experimentation. If music mixing is something you want to pursue, then the Produce Like A Pro YouTube channel is another source of useful information.

Let me leave you with some pro tips. At a minimum, make sure to mix any complex project on quality nearfield monitors (assuming you don’t have an actual studio at your disposal). Test your mix in different listening environments, on different speakers, and at different volume levels to see if it translates universally well. If you are going for a particular sound or style, have some good reference tracks, such as commercially-mastered songs, to which you can compare your mix. How did they balance the instruments? Did the reference song sound bright, boomy, or midrange? How were the dynamics and level of compression? And finally, take a break. All mixers can get fatigued. Mixes will often sound quite different after a break or on the next day. Sometimes it’s best to leave it and come back later with fresh ears and mind.

In any case, you can get started without spending any money. The tracks are free. Software like DaVinci Resolve is free. As with so many other tasks enabled by modern technology, all it takes is making the first move.

©2022 Oliver Peters

Pro Tips for FCP Editors

Every nonlinear editing application has strengths and weaknesses. Each experienced editor has a list of features and enhancements that they’d like to see added to their favorite tool. Final Cut Pro has many fans, but also its share of detractors, largely because of Apple’s pivot when Final Cut Pro changed from FCP7 to FCPX a decade ago. That doesn’t mean it’s not adequate for professional-level work. In fact, it’s a powerful tool in its own right. But there are ways to adapt it to workflows you may miss from competing NLEs. I discuss five of these tips in my article Making Final Cut More Pro over at FCP.co.

©2022 Oliver Peters

Building a Scene

The first thing any film student learns about being an editor is that a film is not put together simply the way the editor thinks it should be. The editor is there as the right hand of the director working in service to the story.

Often a film editor will start out cutting while the film is still being shot. The director is on set or location and is focused on getting the script captured. Meanwhile, the editor is trying to “keep up to camera” and build the scenes in accordance with the script as footage is received. Although it is often said that the final edit is the last rewrite of any film, this first edited version is intended to be a faithful representation of the script as it was shot. It’s not up to the editor’s discretion to drop, change, or re-arrange scenes that don’t appear to work. At least not at this stage of the process.

Any good editor is going to do the best job they can to “sell” their cut to the director by refining the edits and often adding basic sound design and temp music. The intent is to make the story flow as smoothly as possible. Whether you call this a first assembly or the editor’s cut, this first version is usually based on script notes, possibly augmented by the director’s initial feedback during downtime from filming. Depending on the director, the editor might have broad license to use different takes or assemble alternate versions. Some directors will later go over the cut in micro detail, while others only focus on the broad strokes, leaving a lot of the editor’s cut intact.

Anatomy of a scene

Many editors make it their practice not to be on the set. Unfortunately the days of a crew watching “dailies” with the director are largely gone. Thus the editor misses seeing the initial reaction a director has to the material that has been filmed. This means that the editor’s first input will be the information written on the script and notes from the script supervisor. It’s important to understand that information.

A scene can be a complex dialogue interaction with multiple actors that may cover several pages. Or, it can be a simple transition shot to bridge two other scenes. While scenes are generally shot in multiple angles that are edited together, there are also scenes done as a single, unedited shot, called “oners.” A oner can be a complex, choreographed Steadicam shot or it can be a simple static shot, like a conversation between a driver and passenger recorded only as a two-shot through the windshield. There are even films that are captured and edited as if they were a continuous oner, such as 1917 and Birdman or (The Unexpected Virtue of Ignorance). In fact, these films were cleverly built with seamless edits. However, the individual component scenes certainly were actual oners.

The lined script

Scripts are printed as one-sided pages. When placed in a binder, you’ll have the printed text on the right and a blank facing page on the left (the backside of the previous script page). The script supervisor will physically or electronically (ScriptE) draw lines through the typed, script side of a page. These lines are labelled and represent each set-up and/or angle used to film the scene. Specific takes and notes will be written onto the left facing page.

Script scenes are numbered and systems vary around the world along with variations made by individual script supervisors. For US crews, it’s common to number angles and takes alphanumerically according to their scene numbers. A “master shot” will usually be a wide shot that covers the entire length of the scene. So for scene 48, the master shot will be labelled 48 or sometimes 48-ws, if it’s a wide shot. The scene/take number will also appear on the slate. The supervisor will draw a vertical line through the scene from the start to the end of the capture. Straight segments of the line indicate the person speaking is on camera. Wiggly or zig-zag segments indicate that portion of the scene will be on a different angle.

After the master, the director will run the scene again with different camera set-ups. Maybe it’s a tighter angle or a close-up of an individual actor in the scene. These are numbered with a letter suffix, such as 48A, 48B, and so on. A close-up might also be listed as 48A-cu, for example. Lengthy scenes can be tough to get down all at once without mistakes. So the director may film “pick-ups” – portions of a scene, often starting in the middle. Or there may be a need to record an alternate version of the scene. Pick-ups would be labelled 48-PU and an alternate would be A48. Sometimes a director will record an action multiple times in a row without stopping camera or re-slating. This might be the case when the director is trying to get a variety of actions from an actor handling a prop. Such set-ups would be labelled as a “series” (e.g. 48F-Ser).
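If you ever need to sort this material programmatically, the labeling scheme described above parses cleanly. A hypothetical parser for it (conventions vary by script supervisor, so treat the pattern as illustrative):

```python
import re

# Optional alternate-scene prefix letter (A48), scene number, optional
# setup letter (48A), optional suffix like -cu, -ws, -PU, or -Ser.
LABEL = re.compile(r"^(?P<alt>[A-Z])?(?P<scene>\d+)(?P<setup>[A-Z])?(?:-(?P<suffix>\w+))?$")

def parse_label(label):
    """Split a slate label into its components, or return None."""
    m = LABEL.match(label)
    return m.groupdict() if m else None

for label in ["48", "48A-cu", "48-PU", "A48", "48F-Ser"]:
    print(label, parse_label(label))
```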

On the left facing page, the script supervisor will keep track of these angles and note the various takes for each – 48-1, 48-2, 48-3, 48A-1, 48A-2, etc. They will also add notes and comments. For example, if a prop didn’t work or the actor missed an important line. And, of course, the take that the director really liked will be circled and is known as the “circle take.” In the days of physical film editing, only circle takes were printed from the negative to work print for the editors to use. With modern digital editing, everything is usually loaded in the editing system. The combination of drawn, lined set-ups with straight and zig-zag line segments together with circle takes provides the editor with a theoretical schematic of how a scene might be cut together.

The myth of the circle take

A circle take indicates a take that the director preferred. However, this is often based on the script supervisor’s observation of the director’s reaction to the performance. The director may or may not actually have indicated that’s the one and only take to use. Often a circle take is simply a good performance take, where actors and camera all hit their marks, and nothing was missed. In reality, an earlier take might have been better for the beginning of the scene, but the actors didn’t make it all the way through.

There are typically three scenarios for how a director will direct the actors in a scene. A) The scene has already been rehearsed and actions defined, so the acting doesn’t change much from take to take. The director is merely tweaking nuance out of the actors to get the best possible performance. B) The director has the actors ramp up their intensity with each take. Early takes may have a more subtle performance while later takes feature more exaggerated speech and mannerisms. C) The director wants a different type of performance with each take. Maybe sarcastic or humorous for a few, but aggressive and angry for others.

Depending on the director’s style, a circle take can be a good indication of what the editor should use – or it can be completely meaningless. In scenario A, it will be pretty easy to figure out the best performances and usually circle takes and other notes are a good guide. Scenario B is tougher to judge, especially in the early days of a production. The level of intensity should be consistent for a character throughout the film. Once you’ve seen a few days of dailies you’ll have a better idea of how characters should act in a given scene or situation. It’s mainly a challenge of getting the calibration right. Scenario C is toughest. Without actually cutting some scenes together and then getting solid, direct feedback from the director, the editor is flying blind in this situation.

Let’s edit the scene

NLEs offer tools to aid the editor in scene construction. If you use Avid Media Composer, then you can avail yourself of script-based editing. This lets you organize script bins that mimic a lined script. The ScriptSync option removes some of the manual preparation by phonetically aligning ingested media to lines of dialogue. Apple Final Cut Pro editors can also use keywords to simulate dialogue lines.

A common method going back to film editing is the organization of “KEM rolls.” These are string-outs of selected takes placed back-to-back, which enables fast comparisons of different performances. In the digital world this means assembling a sequence of best takes and then using that sequence as the source for your scene edit. Adobe Premiere Pro and Media Composer are the two main NLEs that facilitate easy sequence-to-sequence editing.

The first step before you make any edit is to review all of the dailies for the scene. The circle takes are important, but other takes may also be good for portions of the scene. The director may not have picked circle takes for the other set-ups – 48A, 48B, etc. In that case, you need to make that selection yourself.

You can create custom columns in a Media Composer bin. Create one custom column to rank your selections. An “X” in that column is for a good take. “XX” for one that can also be considered. Add your own notes in another custom column. Now you can use Media Composer’s Custom Sift command to show/hide clips based on these entries. If you only want to see the best takes displayed in the bin, then sift for anything with an X or XX in that first custom column. All other clips will be temporarily hidden. This is a similar function to showing Favorites in a Final Cut Pro Event. At this point you can either build a KEM Roll (selects) first or just start editing the scene.
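The sift is really just a filter over bin columns. A minimal stand-in using plain dictionaries (the clip data here is invented):

```python
clips = [
    {"name": "48-3",  "rank": "X",  "notes": "best overall"},
    {"name": "48A-1", "rank": "XX", "notes": "good top half"},
    {"name": "48A-2", "rank": "",   "notes": "missed a line"},
]

def sift(clips, column, accepted):
    """Show only clips whose column value is in the accepted set."""
    return [c for c in clips if c.get(column) in accepted]

for clip in sift(clips, "rank", {"X", "XX"}):
    print(clip["name"])   # the unranked take is hidden
```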

Cutting a scene together is a bit like playing chess or checkers. Continuity of actors’ positions, props, and dialogue lines often determines whether a certain construct works. If an actor ad-libs the lines, you may have a lengthy scene in which certain bits of dialogue are in a different order or even completely different words from one take to the next. If you pick Take 5 for the master shot, this can block your use of some other set-ups, simply because the order of the dialogue doesn’t match. Good editing can usually overcome these issues, but it limits your options and may result in a scene that’s overly cutty.

Under ideal conditions, the lines are always said the same way and in the right order, props are always handled the same way at the same times, and actors are in their correct positions at the same points in the dialogue. Those scenes are a dream to cut. When they aren’t, that’s when an editor earns his or her pay.

When I cut a scene, I’ve reviewed the footage and made my selections. My first pass is to build the scene according to what’s in my head. Once I’ve done that I go back through and evaluate the cut. Would a different take be better on this line? Should I go to a close-up here? How about interspersing a few reaction shots? After that round, the last pass is for refinement. Tighten the edits, trim for J-cuts and L-cuts, and balance out audio levels. I now have a scene that’s ready to show to the director and hopefully put into the ongoing assembly of the film. I know the scene will likely change when I start working one-on-one with the director, but it’s a solid starting point that should reflect the intent and text of the script.

Happy editing!

©2021 Oliver Peters

Easy Resolve Grading with 6 Nodes

Spend any time watching Resolve tutorials and you’ll see many different ways in which colorists approach the creation of the same looks. Some create a look with just a few simple nodes. Others build a seemingly convoluted node tree designed to achieve the same goal. Neither approach is right or wrong.

Often what can all be done in a single node is spread across several in order to easily trace back through your steps when changes are needed. It also makes it easy to compare the impact of a correction by enabling and disabling a node. A series of nodes applied to a clip can be saved as a PowerGrade, which is a node preset. PowerGrades can be set up for a certain look or can be populated with blank (unaltered) nodes that are organized for how you like to work. Individual nodes can also be labeled, so that it’s easy to remember what operation you will do in each node.

The following is a simple PowerGrade (node sequence) that can be used as a starting point for most color grading work. It’s based on using log footage, but can also be modified for camera RAW or recordings in non-log color spaces, like Rec 709. These nodes are designed as a simple operational sequence to follow and each step can be used in a manner that works best with your footage. The sample ARRI clip was recorded with an ALEXA camera using the Log-C color profile.

Node 2 (LUT) – This is the starting point, because the first thing I want to do is apply the proper camera LUT to transform the image out of log. You could also do this with manual grading (no LUT). In that case the first three nodes would be rolled into one. Alternatively, you may use a Color Space Transform effect or even a Dehaze effect in some cases. But for the projects I grade, which largely use ARRI, Panasonic, Canon, and Sony cameras, adding the proper LUT seems to be the best starting point.

Node 1 (Contrast/Saturation) – With the LUT added to Node 2, I will go back to Node 1 to adjust contrast, pivot, and saturation. This changes the image going into the LUT and is a bit like adjusting the volume gain stage prior to applying an effect or filter when mixing sound. Since LUTs affect how color is treated, I will rarely adjust color balance or hue offsets (color wheels) in Node 1, as it may skew what the LUT is doing to the image in Node 2. The objective is to make subtle adjustments in Node 1 that improve the natural result coming out of Node 2.

Node 3 (Primary Correction) – This node is where you’ll want to correct color temperature/tint and use the color wheels, RGB curves, and other controls to achieve a nice primary color correction. For example, you may need to shift color temperature warmer or cooler, lower black levels, apply a slight s-curve in the RGB curves, or adjust the overall level up or down.

Node 4 (Secondary Correction) – This node is for enhancement and the tools you’ll generally use are hue/sat curves. Let’s say you want to enhance skin tones, or the blue in the sky. Adjust the proper hue/sat curve in this node.

Node 5 (Windows) – You can add one or more “power windows” within the node (or use multiple nodes). Windows can be tracked to follow objects, but the main objective is a way to relight the scene. In most projects, I find that one window per shot is typically all I need, if any at all. Often this is to brighten up the lighting on the main talent in the shot. The use of windows is a way to direct the viewer’s attention. Often a simple soft-edged oval is all you’ll need to achieve a dramatic result.

Node 6 (Vignette) – The last node in this basic structure is to add a vignette, which I generally apply just to subtly darken the corners. This adds a bit of character to most shots. I’ll build the vignette manually with a circular window rather than apply a stock effect. The window is inverted so that the correction impacts the shot outside of the windowed area.
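The math behind such a hand-built vignette is just a soft-edged radial falloff. A sketch (not Resolve’s window engine) that darkens pixels by how far outside a centered circle they fall:

```python
import numpy as np

def vignette_mask(h, w, radius=0.7, softness=0.3):
    """Return a mask: 1.0 inside the circle, falling toward 0.0 outside.
    Multiply the image by it to darken the corners."""
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance from frame center (1.0 at the mid-edge of each axis).
    d = np.hypot((xx - w / 2) / (w / 2), (yy - h / 2) / (h / 2))
    return np.clip(1.0 - (d - radius) / softness, 0.0, 1.0)

mask = vignette_mask(1080, 1920)
print(mask[540, 960], mask[0, 0])   # center untouched, corners darkened
```

Raising `softness` widens the feathered edge, which is the equivalent of softening the window in Resolve.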

So there’s a simple node tree that works for many jobs. If you need to adjust parameters such as noise reduction, that’s best done in Node 1 or 2. Remember that Resolve grading works on two levels – clip and timeline. These are all clip-based nodes. If you want to apply a global effect, like adding film grain to the whole timeline, then you can change the grading mode from clip to timeline. In the timeline mode, any nodes you apply impact the whole timeline and are added on top of any clip-by-clip correction, so it works a bit like an adjustment layer.

©2021 Oliver Peters