Edit Collaboration and Best Practices

There are many workflows that involve collaboration, with multiple editors and designers working on the same large project or group of projects. Let me say up front that if you want the best possible collaborative experience with multiple editors, then work with Avid Media Composer. Full stop. I have worked both sides of the equation and without a doubt, Media Composer connected to Avid Unity/Isis/Nexis shared storage is simply not matched by Final Cut Pro, Final Cut Pro X, Premiere Pro, or any other editing software/storage/cloud combination. Everything else is a compromise, which is why feature film and TV series editorial teams continue to select Avid solutions as their first choice.

In spite of that, there are many reasons to use other editing tools. I work most of the time in Adobe Premiere Pro CC and freelance at a shop with nine edit workstations connected to shared storage. We work mainly in Adobe Creative Cloud applications and our projects involve a lot of collaboration. Some of these are corporate videos that are frequently edited and revised by different editors. Some are entertainment shows, cut by a small editorial team focused on those shows. For some projects, Premiere Pro is the perfect tool. For others, we have to develop strategies to adapt Premiere to our workflow.

With that in mind, the following tips and best practices reflect what has worked best for us over the past three years, while working on large projects with a team of editors. Although they apply to our work with Premiere Pro, the same would generally be true if we were working with Apple Final Cut Pro X instead.

Organization. We organize all projects into a specific folder structure, using a Post Haste template. All media files, like camera footage, audio, graphic elements, etc. go into common folders. Editors know where to look to find things. When new camera footage comes in, files are organized as “dailies” into specific folders by date, camera, and camera card. Non-pro formats, like GoPro and DSLR footage, will be batch-renamed to reflect the project, date, and camera card. The objective is to have a unique file name for every media file.

Optimized, transcoded, or proxy media. Depending on the performance and amount of media, you may need to do some prep work before even starting the edit process. Premiere and FCPX work well with some media formats and not with others. NAS/SAN storage is particularly taxing, especially once you get to resolutions greater than HD. If you want the most fluid experience in a shared workflow, then you will likely need to transcode proxy files from within the application. The reason to stay inside of FCPX or Premiere Pro is so that frame size offsets are properly tracked. Once proxies have been transcoded, it’s a simple matter of toggling between the proxy media (best playback performance) and full-resolution media (best image quality).

On the other hand, if you’d rather stick to full-resolution, native media, then some formats will have to be transcoded into “optimized” media. For instance, GoPro 4K footage is terrible to edit with natively. It should always be transcoded to ProRes or DNxHD before editing, if you don’t want to go the proxy route. This can be done inside or outside of the application and is an easy task with DaVinci Resolve, EditReady, Adobe Media Encoder, or Apple Compressor.

Finally, if you have image sequences from a drone or other source, forget trying to edit from these off of a network. Transcode them right away into some format of master movie file. I find Resolve to be the best tool for this. It’s fast and since these are often camera raw files, you can apply a base grade to them as a starting point for future color correction.

Break up your projects. Depending on the type and size of the job and number of editors working on it, you may choose to work in multiple Premiere projects. There may be a master project where all media is imported and initially organized. Then there may be multiple projects that are offshoots from this for component parts. In a corporate environment, it could be several different videos cut from a single, larger set of media. In a feature film, there could be different Premiere projects for each reel of the film.

Since Premiere Pro employs project locking, any project opened by one editor can also be opened in a read-only mode by other editors. Editors can have multiple Premiere projects open at one time. Thus, it’s simple to bring in elements from one project into another, even while they are all open. This workflow mimics Avid’s bin-locking strategy.

It helps to keep project files streamlined as progress on the production extends over time. You want to keep the number of sequences in any given project small. Periodically duplicate your project(s), strip out old sequences from the current project, and archive the older project files.

As a general note, while working to build the creative story edits – i.e. “offline editing” – you will want to keep plug-in filter effects to a minimum. In fact, it’s generally a good idea to keep the plug-in selection on each system small, so that all workstations in this shared environment are able to have the same set of installed plug-ins. The same is true of fonts.

Finishing stages of post. There are generally two paths in the finishing, aka “online editing” stage. Either all final color correction and assembly of effects is completed within Premiere Pro, or there is a roundtrip through a color correction application, like Blackmagic Design DaVinci Resolve. The same holds true for audio, where a separate sound editor/designer/mixer may handle the finishing touches in Avid Pro Tools.

To accomplish an easy roundtrip with Resolve, create a sequence with all color correction and effects removed. Flatten the video to a single track (if possible), and remove the audio or do a simple stereo mixdown for reference. Ideally, media with mixed frame rates should be addressed as slow motion in the edited sequence. Avoid modifying the frame rate through any sort of “interpret” function within the application. Export an XML or AAF and send that and the associated media to Resolve. When color correction is complete, you can render the entire timeline at the sequence resolution as a single master file.
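One arithmetic detail behind treating mixed frame rates as slow motion: the speed percentage is simply the ratio of the timeline rate to the source rate. A quick sketch (the helper name is my own):

```python
def conform_speed(source_fps: float, timeline_fps: float) -> float:
    """Playback speed, in percent, when media shot at source_fps is
    played as slow motion in a timeline running at timeline_fps,
    rather than being frame-rate 'interpreted'."""
    return round(timeline_fps / source_fps * 100, 2)

# 59.94 fps media conformed into a 23.976 fps timeline plays at 40% speed.
print(conform_speed(59.94, 23.976))
```

Setting the clip speed explicitly this way keeps the XML/AAF honest for Resolve, whereas an “interpret footage” change may not translate across applications.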

Conversely, if you want to send it back to Premiere Pro for final assembly and to complete the roundtrip, then render individual clips at their source resolution with handles of one to two seconds. Back in Premiere, re-apply titles, insert completed visual effects, and add any missing plug-in effects.

With audio post, there will be no roundtrip of elements, since the mixer will deliver a completed mixed stereo or surround track. This should be imported into Premiere (or Resolve if the final master is created in Resolve) and married back to the final video sequence. The mixer should also supply “stems” – the individual dialogue, music, and sound effects (D/M/E) submix tracks.

Mastering. Final sequences should be exported in a master file format (ProRes, DNxHD/HR, uncompressed) in at least two forms: 1) master with final mix and titles, and 2) textless submaster with split-track audio (multiple channels containing the D/M/E stems). All of these files are stored within the same job-based folder structure outlined at the top. It is quite common that future revisions will be made using the textless submaster rather than re-opening the full project, or that it may be used as source material in another edit.

Another aspect of finishing the project is media consolidation. This means taking the final sequence and generating a new project file from it. That file contains only the elements used in the sequence, along with a copy of the media, where each file has been trimmed to the portion used within the sequence (plus handles). This is the Project Manager function in Premiere Pro. Unfortunately, Premiere is not consistently good at this task. Some formats will be properly trimmed, while others will be copied in their entirety. That’s OK for a :10 take, but a bummer when it’s a 30-minute interview.

The good news is that if you went through the Resolve roundtrip workflow and rendered individual clips, then effectively Resolve has already done media consolidation as a byproduct. In addition, if your source media is 4K, but you only finished in HD, the Resolve renders will be 4K. If in the future, you need to deliver the same master in 4K, everything is already set. Of course, that assumes that you didn’t do a lot of “punching in” and reframing in your edit sequence.

Cloud-based services. Often collaboration requires a distributed team, when not everyone is under one roof. While Adobe does offer cloud-based team editing methods, this doesn’t really work when editors are on different Creative Cloud accounts or when the collaboration is between an editor and a graphic designer/animator/VFX artist working in non-Adobe tools. In that case the old standbys have been Dropbox, Box, or Google Drive. Syncing is easy and relatively reliable, but these services are really just designed for sharing assets. However, when a couple of editors each have a local, mirrored set of media, then simply sharing and syncing the small project files makes for a workable collaborative method.

Frame.io is the newbie here, with updated extension tools designed for in-application workspace panels within Final Cut Pro X, After Effects, and Premiere Pro. While they tout the ease of moving full-resolution media into their cloud, including camera files, I really wouldn’t recommend doing that. It’s simply not very practical on most projects. But for sharing cuts using a standard review-and-approval workflow, Frame.io definitely hits most of the buttons.

©2018 Oliver Peters


Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs became capable of delivering quality video, interview-based production now almost always utilizes multiple cameras. Directors will typically record these sections with two or more cameras at various tangents to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). In addition, if they also shoot in 4K for an HD delivery, then you have the additional ability to cleanly punch-in for even more framing options.

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam. That starts at the time of the original recording. You can either sync by common timecode, common audio, or a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough so that the human editor and/or the audio analysis of the software can discern a match. Sometimes this method will suffer from a minor amount of delay – either because of the inherent offset of the audio recording circuitry within the camera electronics, or because an onboard camera mic was used and the distance to the subject results in a slight delay, compared to a lav mic on the subject.
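The audio-analysis approach can be sketched in a few lines: slide one waveform against the other and keep the offset where they line up best (cross-correlation). This toy version is my own illustration, not the actual algorithm inside any NLE:

```python
def sync_offset(ref, other, max_lag):
    """Return how many samples `other` lags behind `ref`, chosen by
    maximizing cross-correlation over lags in [-max_lag, max_lag].
    A positive result means `other` starts later than `ref`."""
    def corr(lag):
        return sum(ref[i] * other[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(other))
    return max(range(-max_lag, max_lag + 1), key=corr)
```

A real implementation would work on downsampled envelopes or use FFT-based correlation for speed, and the small camera-circuitry or mic-distance offsets described above would show up here as a consistent residual lag.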

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that sometimes when these two sources are mixed in post (rather than using only one source at a time), audio phasing can occur.

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First separate all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound you can choose whether or not to include the camera sound. Then generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras in the viewer by clicking on the corresponding section of the viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera to be shown. Remember that any effects or color corrections you apply in the timeline apply to the visible angle, but do not follow it. So, if you change your mind and switch to a different angle, the effects and corrections do not change with it. Therefore, adjustments will be required to the effect or correction for that new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Mixing with Premiere Pro

When budgets permit and project needs dictate, I will send my mixes out-of-house to one of a few regular mixers. Typically that means sending them an OMF or AAF to mix in Pro Tools. Then I get the mix and split-tracks back, drop them into my Premiere Pro timeline, and generate master files.

On the other hand, a lot of my work is cutting simple commercials and corporate presentations for in-house use or the web, and these are often less demanding – 2 to 8 tracks of dialogue, limited sound effects, and music. It’s easy to do the mix inside of the NLE. Bear in mind that I can – and often have – done such a mix in Apple Logic Pro X or Adobe Audition, but the tools inside Premiere Pro are solid enough that I often just keep everything – mix included – inside my editing application. Let’s walk through that process.

Dealing with multiple channels on source clips

Start with your camera files or double-system audio recordings. Depending on the camera model, Premiere Pro will see these source clips as having either stereo (e.g. a Canon C100) or multi-channel mono (e.g. ARRI Alexa) channels. If you recorded a boom mic on channel 1 and a lavaliere mic on channel 2, then these will drop onto your stereo timeline either as two separate mono tracks (Alexa) – or as a single stereo track (C100), with the boom coming out of the left speaker and the lav out of the right. Which one it is will strictly depend on the device used to generate the original recordings.

First, when dual-mic recordings appear as stereo, you have to understand how Premiere Pro deals with stereo sources. Panning in Premiere Pro doesn’t “shift” the audio left, right, or center. Instead, it increases or decreases the relative volume of the left or right half of this stereo field. In our dual-mic scenario, panning the clip or track full left means that we only hear the boom coming out of the left speaker, but nothing out of the right. There are two ways to fix this – either by changing the channel configuration of the source in the browser – or by changing it after the fact in the timeline. Browser changes will not alter the configuration of clips already edited to the timeline. You can change one or more source clips from stereo to dual-mono in the browser, but you can’t make that same type of change to a clip already in your sequence.
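To make that balance behavior concrete, here is a toy model of the pan control as described above (my own illustration, not Adobe’s code): panning full left simply silences the right channel rather than folding its signal into the left speaker.

```python
def pan_stereo(left, right, pan):
    """Toy model of a balance-style stereo 'pan': pan runs from
    -1.0 (full left) to +1.0 (full right). It attenuates one half
    of the stereo field; it never moves audio between channels."""
    left_gain = 1.0 if pan <= 0 else 1.0 - pan
    right_gain = 1.0 if pan >= 0 else 1.0 + pan
    return ([s * left_gain for s in left],
            [s * right_gain for s in right])
```

With the boom on the left channel and the lav on the right, `pan_stereo(boom, lav, -1.0)` returns the boom untouched and the lav fully muted – you would never hear the lav at all, which is exactly the dual-mic problem described above.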

Let’s assume that you aren’t going to make any browser changes and instead just want to work in your sequence. If your source clip is treated as dual-mono, then the boom and lav will cut over to tracks 1 and 2 of your sequence – and the sound will be summed in mono on the output to your speakers. However, if the clip is treated as stereo, then it will only cut over to track 1 of your sequence – and the sound will stay left and right on the output to your speakers. When it’s dual-mono, you can listen to one track versus the other, determine which mic sounds the best, and disable the clip with the other mic. Or you can blend the two using clip volume levels.

If the source clip ends up in the sequence as a stereo clip, then you will want to determine which one of the two mics you want to use for the best sound. To pick only one mic, you will need to change the clip’s audio configuration. When you do that, it’s still a stereo clip, however, both “sides” can be supplied by either one of the two source channels. So, both left and right output will either be the boom or the lav, but not both. If you want to blend both mics together, then you will need to duplicate (option-drag) the audio clip onto an adjacent timeline track, and change the audio channel configuration for both clips. One would be set to the boom for both channels and the other set to only the lav for its two channels. Then adjust clip volume for the two timeline clips.

Configuring your timeline

Like most editors, while I’m working through the stages of rough cutting on the way to an approved final copy, I will have a somewhat messy timeline. I may have multiple music cues on several tracks with only one enabled – just so I can preview alternates for the client. I will have multiple dialogue clips on a few tracks with some disabled, depending on microphone or take options. But when I’m ready to move to the finishing stage, I will duplicate that sequence to create a “final version” and clean that one up. This means getting rid of any disabled clips, collapsing my audio and video clips to the fewest number of tracks, and using Premiere’s track creation/deletion feature to delete all empty tracks – all so I can have the least amount of visual clutter. 

In other blog posts, I’ve discussed working with additional submix buses to create split-track exports; but, for most of these smaller jobs, I will only add one submix bus. (I will explain its purpose in a moment.) Once created, you will need to open the track mixer panel and route the timeline channels from the master to the submix bus and then the output of the submix bus back to the master.

Plug-ins

Premiere Pro CC comes with a nice set of audio plug-ins, which can be augmented with plenty of third-party audio effects filters. I am partial to Waves and iZotope, but these aren’t essential. However, there are several that I do use quite frequently. These three third-party filters will help improve any vocal-heavy piece.

The first two are Vocal Rider and MV2 from Waves and are designed specifically for vocal performances, like voice-overs and interviews. These can be pricey, but Waves has frequent sales, so I was able to pick these up for a fraction of their retail price. Vocal Rider is a real-time, automatic volume adjustment tool. Set the bottom and top parameters and let Vocal Rider do the rest, by automatically pushing the volume up or down on-the-fly. MV2 is similar, but it achieves this through compression on the top and bottom ends of the range. While they operate in a similar fashion, they do produce a different sound. I tend to pick MV2 for voice-overs and Vocal Rider for interviews.

We all know location audio isn’t perfect, which is where my third filter comes in. FxFactory is known primarily for video plug-ins, but their partnership with Crumplepop has added a nice set of audio filters to their catalog. I find AudioDenoise to be quite helpful and fast in fixing annoying location sounds, like background air conditioning noise. It’s real-time and good-sounding, but like all audio noise reduction, you have to be careful not to overdo it, or everything will sound like it’s underwater.

For my other mix needs, I’ll stick to Premiere’s built-in effects, like EQ, compressors, etc. One that’s useful for music is the stereo imager. If you have a music cue that sounds too monaural, this will let you “expand” the track’s stereo signal so that it is spread more left and right. This often helps when you want the voice-over to cut through the mix a bit better. 

My last plug-in is a broadcast limiter that is placed onto the master bus. I will set a tight hard limit for broadcast delivery, but allow a much higher (louder) ceiling for web files. Be aware that Premiere’s plug-in architecture allows you to have the filter take effect either pre- or post-fader. In the case of the master bus, this will also affect the VU display. In other words, if you place a limiter post-fader, then the result will be heard, but not visible through the levels displayed on the VU meters.

Mixing

I have used different mixing strategies over the years with Premiere Pro. I like using the write function of the track mixer to write fader automation. However, I have lately stopped using it – instead going back to manual keyframes within the clips. The reason is probably that my projects tend to get revised often in ways that change timing. Since track automation is based on absolute timeline position, its keyframes don’t move when a clip is shifted, whereas clip-based volume keyframes travel with the clip.

Likewise, Adobe has recently added Audition’s ducking for music to Premiere Pro. This uses Adobe’s Sensei artificial intelligence. Unfortunately, I don’t find it to be “intelligent” enough, although it can sometimes provide a starting point. For me, it’s simply too coarse and doesn’t intelligently adjust for areas within a music clip that swell or change volume internally. Therefore, I stick with minor manual adjustments to compensate for music changes and to make the vocal parts easy to understand in the mix. Then I will use the track mixer to set overall levels for each track to get the right balance of voice, sound effects, and music.

Once I have a decent balance to my ears, I will temporarily drop in the TC Electronic Radar loudness plug-in (included with Premiere Pro) to make sure my mix is CALM-compliant. This is where the submix bus comes in. If I like the overall balance, but I need to bring everything down, it’s an easy matter to simply lower the submix level and remeasure.

Likewise, it’s customary to deliver web versions with louder volume levels than the broadcast mix. Again the submix bus will help, because you cannot raise the volume on the master – only lower it. If you simply want to raise the overall volume of the broadcast mix for web delivery, simply raise the submix fader. Note that when I say louder, I’m NOT talking about slamming the VUs all the way to the top. Typically, a mix that hits -6 is plenty loud for the web. So, for web delivery, I will set a hard limit at -6, but adjust the mix for an average of about -10.
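For a rough sense of those numbers, sample peak in dBFS is just 20·log10 of the peak amplitude. This sketch measures simple sample peak, which is not the same thing as the LKFS/LUFS loudness the Radar meter reports for CALM compliance – it is only a back-of-the-envelope check:

```python
import math

def peak_dbfs(samples):
    """Sample peak in dBFS, where 0 dBFS is full scale. A much simpler
    measure than the LKFS loudness used for broadcast compliance."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")
```

A full-scale signal (peak 1.0) reads 0 dBFS, and halving the amplitude drops the reading by about 6 dB – which is also why lowering the single submix fader rescales the whole mix so predictably.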

Hopefully this short explanation has provided some insight into mixing within Premiere Pro and will help you make sure that your next project sounds great.

©2018 Oliver Peters

Audio Splits and Stems in Premiere Pro Revisited

Creating multichannel, “split-track” master exports of your final sequences is something that should be a standard step in all of your productions. It’s often a deliverable requirement and having such a file makes later revisions or derivative projects much easier to produce. If you are a Final Cut Pro X user, the “audio lanes” feature makes it easy to organize and export sequences with isolated channels for dialogue, music, and effects. FCPX pros like to tweak the noses of other NLE users about how much easier it is in FCPX. While that’s more or less true – and, in fact, can be a lot deeper than simply a few aggregate channels – that doesn’t mean it’s particularly hard or less versatile in Premiere Pro.

Last year I wrote about how to set this up using Premiere submix tracks, which is a standard audio post workflow, common to most DAW and mix applications. Go back and read the article for more detail. But, what about sequences that are already edited, which didn’t start with a track configuration already set up with submix tracks and proper output routing? In fact, that’s quite easy, too, which brings me to today’s post.

Step 1 – Edit

Start out by editing as you always have, using your standard sequence presets. I’ve created a few custom presets that I normally use, based on the several standard formats I work in, like 1080p/23.976 and 1080p/29.97. These typically require stereo mixes, so my presets start with a minimum configuration of one picture track, two standard audio tracks, and stereo output. This is the starting point, but more video and audio tracks get added, as needed, during the course of editing.

Get into a habit of organizing your audio tracks. Typically this means dialogue and VO tracks towards the top (A1-A4), then sound effects (A5-A8), and finally music (A9-A12). Keep like audio types on their intended tracks. What you don’t want to do is mix different audio types onto the same track. For instance, don’t put sound effects onto tracks that you’ve designated for dialogue clips. Of course, the number of actual tracks needed for these audio types will vary with your projects. A simple VO+music sequence will only have two to four tracks, while dramatic entertainment pieces will have a lot more. Delete all empty audio tracks when you are ready to mix.

Mix for stereo output as you normally would. This means balancing components using keyframes and clip mixing. Then perform overall adjustments and “riding faders” in the track mixer. This is also where I add global effects, like compression for dialogue and limiting for the master mix.

Output your final mixed master file for delivery.

Step 2 – Multichannel DME sequences

The next step is to create or open a new multichannel DME (dialogue/music/effects) sequence. I’ve already created a custom preset, which you may download and install. It’s set up as 1080p/23.976, with two standard audio channels and three, pre-labelled stereo submix channels, but you can customize yours as needed. The master output is multichannel (8-channels), which is sufficient to cover stereo pairs for the final mix, plus isolated pairs for each of the three submixes – dialogue, music, and effects.

Next, copy-and-paste all clips from your final stereo sequence to the new multichannel sequence. If you have more than one track of picture and two tracks of audio, the new blank sequence will simply auto-populate more tracks once you paste the clips into it. The result should look the same, except with the additional three submix tracks at the bottom of your timeline. At this stage, the output of all tracks is still routed to the stereo master output and the submix tracks are bypassed.

Now open the track mixer panel and, from the pulldown output selector, switch each channel from master to its appropriate submix channel. Dialogue tracks to DIA, music tracks to MUS, and effects tracks to SFX. The sequence preset is already set up with proper output routing. All submixes go to output 1 and 2 (composite stereo mix), along with their isolated output – dialogue to 3 and 4, effects to 5 and 6, music to 7 and 8. As with your stereo mix, level adjustments and plug-in processing (compression, EQ, limiting, etc.) can be added to each of the submix channels.
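That routing can be expressed as a simple channel map. This sketch is my own (with a naive sample-wise sum standing in for the composite mix, and no level adjustment or limiting applied); it just shows how the three stereo stems populate the eight outputs:

```python
def route_dme(dia, sfx, mus):
    """Build an 8-channel layout from stereo stems, mirroring the
    routing above: channels 1-2 composite mix, 3-4 dialogue,
    5-6 effects, 7-8 music. Each stem is a (left, right) pair of
    sample lists; the composite is a sample-wise sum of the stems."""
    mix_l = [d + s + m for d, s, m in zip(dia[0], sfx[0], mus[0])]
    mix_r = [d + s + m for d, s, m in zip(dia[1], sfx[1], mus[1])]
    return [mix_l, mix_r, dia[0], dia[1], sfx[0], sfx[1], mus[0], mus[1]]
```

Channels are zero-indexed in the returned list, so `route_dme(...)[0:2]` is the composite mix (outputs 1 and 2), `[2:4]` dialogue, `[4:6]` effects, and `[6:8]` music.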

Note: while not essential, multichannel, split-track master files are most useful when they are also textless. So, before outputting, I would recommend disabling all titles and lower third graphics in this sequence. The result is clean video – great for quick fixes later in the event of spelling errors or a title change.

Step 3 – Multichannel export

Now that the sequence is properly organized, it’s time to export the multichannel master. I have created a mastering export preset, which you may also download. It works in the various Adobe CC apps, but is designed for Adobe Media Encoder workflows. This preset matches its output to the video size and frame rate of your sequence and masters to a file with the ProRes 4444 codec. The audio is set for eight output channels, configured as four stereo pairs – the composite mix, plus three DME pairs.

To test your exported file, simply reimport the multichannel file back into Premiere Pro and drop it onto a timeline. There you should see four independent stereo channels with audio organized according to the description above.

Presets

I have created a sequence and an export preset, which you may download here. I have only tested these on Mac systems, where they are installed into the Adobe folder contained within the user’s Documents folder. The sequence preset is placed into the Premiere Pro folder and the export preset into the Adobe Media Encoder folder. If you’ve updated the Adobe apps along the way, you will have a number of version subfolders. As of December 2017, the 12.0 subfolder is the correct location. Happy mixing!

©2017 Oliver Peters

A Light Footprint

When I started video editing, the norm was an edit suite with three large quadruplex (2”) videotape recorders, a video switcher, audio mixer, B&W graphics camera(s) for titles, and a computer-assisted, timecode-based edit controller. This was generally considered an “online edit suite”, but in many markets, this was both “offline” (creative cutting) and “online” (finishing). Not too long thereafter, digital effects (ADO, NEC, Quantel) and character generators (Chyron, Aston, 3M) joined the repertoire. 2” quad eventually gave way to 1” VTRs and those, in turn, were replaced by digital – D1, D2, and finally Digital Betacam. A few facilities with money and clientele migrated to HD versions of these million-dollar rooms.

Towards the midpoint in the lifespan of this way of working, nonlinear editing took hold. After a few different contenders had their day in the sun, the world largely settled in with Avid and/or Media 100 rooms. While a lower cost commitment than the large online bays of the day, these nonlinear edit (NLE) bays still required custom-configured Macs, a fair amount of external storage, and proprietary hardware and monitoring to see a high-quality video image. Though crude at first, NLEs eventually proved capable of handling all video needs, including HD-quality projects and even higher resolutions today.

The trend towards smaller

As technology advanced, computers became faster and more powerful, storage capacities increased, and software that once required custom hardware evolved to work in a software-only mode. Today, it’s possible to operate with a fraction of the cost, equipment, and hassle of just a few years ago, let alone a room from the mid-70s. As a result, when designing or installing a new room, it’s important to question the assumptions about what makes a good edit bay configuration.

For example, today I frequently work in rooms running newer iMacs, 2013 Mac Pros, and even MacBook Pro laptops. These are all perfectly capable of running Apple Final Cut Pro X, Adobe Premiere Pro, Avid Media Composer, and other applications, without the need for additional hardware. In my interview with Thomas Grove Carter, he mentioned often working off of his laptop with a connected external drive for media. And that’s at Trim, a high-end London commercial editing boutique.

In my own home edit room, I recently set aside my older Mac Pro tower in favor of working entirely with my 2015 MacBook Pro. No more need to keep two machines synced up and the MBP is zippier in all respects. With the exception of some heavy-duty rendering (infrequent), I don’t miss using the tower. I run the laptop with an external Dell display and have configured my editing application workspaces around a single screen. The laptop is closed and parked in a BookArc stand tucked behind the Dell. But I also bought a Rain stand for those times when I need the MBP open and functioning as a second display.

Reduce your editing footprint

I find more and more editors working in similar configurations. For example, one of my clients is a production company with seven networked (NAS storage) workstations. Most of these are iMacs with few other connected peripherals. The main room has a 2013 “trash can” Mac Pro and a bit more gear, since this is the “hero” room for clients. If you are looking to downsize your editing environment, here are some pointers.

While you can work strictly from a laptop, I prefer to build it up for a better experience. Essential for me is a Thunderbolt dock. Check out OWC or CalDigit for two of the best options. This lets you connect the computer to the dock and then everything else connects to that dock. One Thunderbolt cable to the laptop, plus power for the computer, leaving you with a clean installation with an easy-to-move computer. From the dock, I’m running a Presonus Audiobox USB audio interface (to a Mackie mixer and speakers), a TimeMachine drive, a G-Tech media drive, and the Dell display. If I were to buy something different today, I would use the Mackie Onyx Blackjack interface instead of the Presonus/Mackie mixer combo. The Blackjack is an all-in-one solution.

Expand your peripherals as needed

At the production company’s hero room, we have the extra need to drive some video monitors for color correction and client viewing. That room is similarly configured as above, except with a Mac Pro and connection to a QNAP shared storage solution. The latter connects over 10Gb/s Ethernet via a Sonnet Thunderbolt/Ethernet adapter.

When we initially installed the room, video to the displays was handled by a Blackmagic Design UltraStudio device. However, we had a lot of playback performance issues with the UltraStudio, especially when using FCPX. After some experimenting, we realized that both Premiere Pro and FCPX can send a fullscreen, [generally] color-accurate signal to the wall-mounted flat panel using only HDMI and no other video i/o hardware. We ended up connecting the HDMI from the dock to the display and that’s the standard working routine when we are cutting in either Premiere Pro or Final Cut.

The rub for us is DaVinci Resolve. You must use some type of Blackmagic Design hardware product in order to get fullscreen video to a display when in Resolve. Therefore, the UltraStudio’s HDMI port connects to the second HDMI input of the large client display and SDI feeds a separate TV Logic broadcast monitor. This allows more accurate color rendition while grading. With Media Composer, there were no performance issues, but it wants audio and video to go through the same device. So, if we edit in Avid, the signal chain goes through the UltraStudio, as well.

All of this means that in today’s world, you can work as lightly as you like. Laptop-only – no problem. iMac with some peripherals – no problem. A fancy, client-oriented room – still less hassle and cost than just a few short years ago. Load it up with extra control surfaces or stay light with a keyboard, mouse, or tablet. It all works today – pretty much as advertised. Gone are the days when you absolutely needed to drop a small fortune to edit high-quality video. You just have to know what you are doing and understand the trade-offs as they arise.

©2017 Oliver Peters

Customize Premiere Pro with Workspaces

Most days I find myself in front of Adobe Premiere Pro CC, both by choice and by the jobs I’m booked on. Yes, I know, for some it’s got bugs and flaws, but for me it’s generally well-behaved. Given the choices out there, Premiere Pro feels the most natural to me for an efficient editing workflow.

Part of what makes Premiere Pro work for me is the ability to customize and fine-tune the user interface layout for the way I like to work or the tasks at hand. This is made possible by Adobe’s use of panels for the various tools and windows within the interface. These panels can float or be docked, stacked, or tabbed in a wonderfully large range of configuration possibilities. The Adobe CC applications come with a set of preset workspaces, but these can be customized and augmented as needed. I won’t belabor this post with an in-depth explanation of workspaces, because there are three very good explanations over at PremiereBro. (Click these links for Post 1, Post 2, and Post 3). My discussion with Simon Ubsdell made me think the topic would make a good blog post here, too.

It all starts with displays

I started my NLE journey with Avid and in the early days, two screens (preferably of a matching size) were essential. Bins on the left with viewers and timeline on the right. However, in the intervening years, screen resolution has greatly increased and developers have made their UIs work on dual and single-screen configurations. Often today, two screens can actually be too much. For example, if you have two side-by-side 27” (or larger) displays, the distance from the far left to the far right is pretty large. This makes your view of the record window quite a bit off-center. To counter-balance this issue, in a number of set-ups, I’ve taken to working with two different sized displays: a centered 27”, plus a smaller 20” display to the left. Sometimes I’ll have a broadcast display to the right. The left and right displays are at an angle, which means that my main working palette – the viewers and timeline – are dead-center on the display in front of me.

I also work with a laptop from time to time, as well as do some jobs in Final Cut Pro X. Generally a laptop is going to be the only available display and FCPX is well-optimized for single-screen operation. As a result, I’ve started to play around with working entirely on a single display – only occasionally using the landscape of the secondary display on my left when really needed. The more I work this way, the more I find that I can work almost entirely on one screen, if that screen offers a decent resolution.

So in order to optimize my workflow, I’ve created a number of custom Premiere Pro workspaces to serve my needs. (Click any of these images to see the enlarged view.)

Edit layout 1

This is the classic two-screen layout. Bins on the left and dual-viewer/timeline on the right. I use this when I have a lot of footage and need to tab a number of bins or expand a bin to see plenty of list details or thumbnails.

Edit layout 2

This layout collapses the classic layout onto a single screen, with the project panel, viewers and timeline.

Edit layout 3

This layout is the one I use most often, because most of what I need is neatly grouped as a tab or a stack on the left and right sides of a single viewer window. Note that there are actually source and record viewers, but they are stacked behind each other. So if I load a clip or match frame from the timeline, the source viewer becomes foremost for me to work with. Do an edit or go back to the timeline and the viewer switches back to the record side.

By tabbing panels on the left side, I can select the panel needed at the time. There is a logical order to what is on the left or right side. For instance, scopes are left and Lumetri Color controls on the right – thus, both can be open. Or I can drag an effect from the right pane’s Effects palette onto the Effects Control panel on the left.

Edit layout 4

This is the most minimalist of my workspaces. Just the viewers and timeline. Anything else can be opened as a floating window for temporary access. The point of this workspace is 100% focus on the timeline, with everything else hidden.

Edit layout 5

This workspace is designed for the “pancake timeline” style of editing. For example, build a “selects” timeline and then pull from that down to your main editing timeline.

Edit layout 6

This is another dual-display layout optimized for color correction. Lumetri Color and Effects Control panel flanking the viewer, with the Lumetri Scopes fullscreen on the lefthand monitor.

There are certainly plenty of other ways you can configure a workspace to suit your style. Some Premiere Pro editors like to use the secondary screen to display the timeline panel fullscreen. Or maybe use it to spread out their audio track mixer. Hence the beauty of Adobe’s design – you can make it as minimal or complex as you like. There is no right or wrong approach – simply whatever works to improve your editing efficiency.

Note: Footage shown within these UI screen grabs is courtesy of Imagine Dragons and Adobe from the Make the Cut Contest.

©2017 Oliver Peters

Premiere Pro Workflow Tips

When you are editing on projects that only you touch, your working practices can be as messy as you want them to be. However, if you work on projects that need to be interchanged with others down the line, or you’re in a collaborative editing environment, good operating practices are essential. This starts at the moment you first receive the media and carries through until the project has been completed, delivered, and archived.

Any editor who’s worked with Avid Media Composer in a shared storage situation knows that it’s pretty rock solid and takes measures to ensure proper media relinking and management. Adobe Premiere Pro is very powerful, but much more freeform. Therefore, the responsibility for proper media management and editor discipline falls to the user. I’ve covered some of these points in other posts, but it’s good to revisit workflow habits.

Folder templates. I like to have things neat and one way to assure that is with project folder templates. You can use a tool like Post Haste to automatically generate a new set of folders for each new production – or you can simply design your own set of folders as a template layout and copy those for each new job. Since I’m working mainly in Premiere Pro these days, my folder template includes a Premiere Pro template project, too. This gives me an easy starting point that has been tailored for the kinds of narrative/interview projects that I’m working on. Simply rename the root folder and the project for the new production (or let Post Haste do that for you). My layout includes folders for projects, graphics, audio, documents, exports, and raw media. I spend most of my time working at a multi-suite facility connected to a NAS shared storage system. There, the folders end up on the NAS volume and are accessible to all editors.

Media preparation. When the crew comes back from the shoot, the first priority is to back-up their files to an archive drive and then copy the files again to the storage used for editing – in my case a NAS volume. If we follow the folder layout described above, then those files get copied to the production dailies or raw media (whatever you called it) folder. Because Premiere Pro is very fluid and forgiving with all types of codecs, formats, and naming conventions, it’s easy to get sloppy and skip the next steps. DON’T. The most important thing for proper media linking is to have consistent locations and unique file names. If you don’t, then future relinking, moving the project into an application like Resolve for color correction/finishing, or other processes may fail to link to the correct files.

Premiere Pro works better when ALL of the media is in a single common format, like DNxHD/HR or ProRes. However, for most productions, the transcoding time involved would be unacceptable. A large production will often shoot with multiple camera formats (Alexa, RED, DSLRs, GoPros, drones, etc.) and generate several cards worth of media each day. My recommendation is to leave the professional format files alone (like RED or Alexa), but transcode the oddball clips, like DJI cameras. Many of these prosumer formats place the media into various folder structures or hide them inside a package container format. I will generally move these outside of this structure so they are easily accessible at the Finder level. Media from the cameras should be arranged in a folder hierarchy of Date, Camera, and Card. Coordinate with the DIT and you’ll often get the media already organized in this manner. Transcode files as needed and delete the originals if you like (as long as they’ve been backed up first).

Unfortunately, these prosumer cameras often use repeated, rather than unique, file names. Every card starts over with clip number 0001. That’s why we need to rename these files. You can usually skip renaming professional format files – it’s optional. Renaming Alexa files is fine, but avoid renaming RED or P2 files. However, definitely rename DSLR, GoPro, and DJI clips. When renaming clips I use an app called Better Rename on the Mac, but any batch renaming utility will do. Follow a consistent naming convention. Mine is a descriptive abbreviation, month/day, camera, and card. So a shoot in Palermo on July 22, using the B camera, recorded on card 4, becomes PAL0722B04_. This is appended in front of the camera-generated clip name, so clip number 0057 becomes PAL0722B04_0057. You don’t need the year, because the folder location, general project info, or the embedded file info will tell you that.
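As an illustration, the convention above is simple to automate. This sketch in Python (a stand-in for a batch renaming utility like Better Rename, not the tool itself) builds the prefix from the abbreviation, date, camera, and card, then prepends it to the camera-generated clip name:

```python
def card_prefix(abbrev: str, month: int, day: int, camera: str, card: int) -> str:
    """Build the dailies prefix: abbreviation + MMDD + camera letter + card number."""
    return f"{abbrev}{month:02d}{day:02d}{camera}{card:02d}_"

def renamed_clip(original: str, prefix: str) -> str:
    """Prepend the prefix to the camera-generated clip name."""
    return prefix + original

# Palermo shoot, July 22, B camera, card 4:
prefix = card_prefix("PAL", 7, 22, "B", 4)
print(prefix)                             # PAL0722B04_
print(renamed_clip("0057.MP4", prefix))   # PAL0722B04_0057.MP4
```

Wrap `renamed_clip()` in a loop over a card’s folder and you have a one-card batch rename; the clip extension here is just an example.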

A quick word on renaming. Stick with universal alphanumeric conventions in both the files and the folder names. Avoid symbols, emojis, etc. Otherwise, some systems will not be able to read the files. Don’t get overly lengthy in your names. Stick with upper and lower case letters, numbers, dashes, underscores, and spaces. Then you’ll be fine.
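Those rules can be enforced with a simple check. Here is a sketch in Python using a regular expression limited to letters, numbers, dashes, underscores, and spaces; the length cap is an arbitrary assumption, not a hard standard:

```python
import re

# Characters considered universally safe, per the convention above:
# upper/lower case letters, digits, dashes, underscores, and spaces.
SAFE_NAME = re.compile(r"^[A-Za-z0-9 _-]+$")
MAX_LENGTH = 64  # arbitrary cap -- pick whatever suits your systems

def is_safe_name(name: str) -> bool:
    """Check a file's base name (extension stripped) or a folder name."""
    return bool(SAFE_NAME.match(name)) and 0 < len(name) <= MAX_LENGTH

print(is_safe_name("PAL0722B04_0057"))  # True
print(is_safe_name("clip*final"))       # False -- symbol not allowed
```

A quick pass with a check like this over incoming folders will catch the odd symbol or emoji before it causes a relink headache downstream.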

Project location. Premiere Pro has several basic file types that it generates with each project. These include the project file itself, Auto-saved project files, renders, media cache files and audio peak (.pek) files. Some of these are created in the background as new media is imported into the project. You can choose to store these anywhere you like on the system, although there are optimal locations.

Working on a NAS, there is no problem in letting the project file, Auto-saves, and renders stay on the NAS in the same section as all of your other media. I do this because it’s easy to back up the whole job at the end of the line and have everything in one place. However, you don’t want all the small, application-generated cache files to be there. While it’s an option in preferences, it is highly recommended to have these media cache files go to the internal hard drive of the workstation or a separate, external local drive. The reason is that there are a lot of these small files, and that traffic will tend to bog down the overall performance of the NAS. So set them to be local (the default).

The downside of doing this is that when another editor opens the Premiere Pro project on a different computer, these files have to be regenerated on that new system. The project will react sluggishly until this background process is complete. While this is a bit of a drag, it’s what Adobe recommends to keep the system operating well.

One other cache setting to be mindful of is the automatic delete option. A recent Premiere Pro problem cropped up when users noticed that original media was disappearing from their drives. Although this was a definite bug, the situation mainly affected users who had set the media cache to live with their original media files and had enabled automatic deletion. You are better off keeping the default location, but changing the deletion setting to manual. You’ll have to occasionally clean your caches by hand, but this is preferable to losing your original content.

Premiere Pro project locking. A recent addition to Premiere Pro is project locking. This came about because of Team Projects, which are cloud-only shared project files. However, many facilities do not want their projects in the cloud. Yet, they can still take advantage of this feature. When project locking is enabled in Premiere Pro (every user on the system must do this), the application creates a temporary .prlock file next to the project file. This is intended to prevent other users from opening the same project and overwriting the original editor’s work and/or revisions.

Unfortunately, this only works correctly when you open a project from the launch window. Do not launch Premiere Pro by double-clicking the project file itself. If you open through the launch window, then Premiere Pro will prevent you from opening a locked project file. However, if you open through the Finder, the locking system is circumvented, which can cause crashes and potentially lost work.

Project layout templates. Like folder layouts, I’m fond of using a template for my Premiere Pro projects, too. This way all projects have a consistent starting point, which is good when working with several editors collaboratively. You can certainly create multiple templates depending on the nature and specs of the job, e.g. commercials, narrative, 23.98, 29.97, etc. As with the folder layout, I’ll often use a leading underscore with a name to sort an item to the top of a list, or start the name with a “z” to sort it to the bottom. A lot of my work is interview-driven with supportive B-roll footage. Most of the time I’m cutting in 23.98fps. So, that’s the example shown here.
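The leading-underscore and leading-“z” trick can be seen in a quick sketch. A case-insensitive sort (a rough approximation of how bin names are ordered in a list view) puts an underscore before any letter and “z” after nearly everything else; the bin names here are hypothetical:

```python
# Hypothetical bin names; sorted case-insensitively, an underscore
# prefix floats to the top and a "z" prefix sinks to the bottom.
bins = ["B-Roll", "zArchive", "_Interviews", "Audio"]
print(sorted(bins, key=str.lower))
# ['_Interviews', 'Audio', 'B-Roll', 'zArchive']
```

The same trick works for folder names at the Finder level, which is why I use it in the folder template as well.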

My normal routine is to import the camera files (using Premiere Pro’s internal Media Browser) according to the date/camera/card organization described earlier. Then I’ll review the footage and rearrange the clips. Interview files go into an interview sources bin. I will add sub-bins in the B-roll section for general categories. As I review footage, I’ll move clips into their appropriate area, until the date/camera/card bins are empty and can be deleted from the project. Interviews will be grouped as multi-cam clips and edited to a single sequence for each person. This sequence gets moved into the Interview Edits sub-bin and becomes the source for any clips from this interview. I do a few other things before starting to edit, but that’s for another time and another post.

Working as a team. There are lots of ways to work collaboratively, so the concept doesn’t mean the same thing in every type of job. Sometimes it requires different people working on the same job. Other times it means several editors may access a common pool of media, but work in their own discrete projects. In any case, Premiere does not allow the same sort of flexibility that Media Composer or Final Cut Pro editors enjoy. You cannot have two or more editors working inside the same project file. You cannot open more than one project at a time. This means Premiere Pro editors need to think through their workflows in order to effectively share projects.

There are different strategies to employ. The easiest is to use the standard “save as” function to create alternate versions of a project. This is also useful for keeping project bloat low. As you edit on a project over a long period, you build up a lot of old “in progress” sequences. After a while, it’s best to save a copy and delete the older sequences. But the best approach is to organize a structure to follow.

As an example, let’s say a travel-style show covers several locations in an episode. Several editors and an assistant are working on it. The assistant would create a master project with all the footage imported and organized, interviews grouped/synced, and so on. At this point each editor takes a different location to cut that segment. There are two options. The first is to duplicate the project file for each location. Open each one up and delete the content that’s not for that location. The second option is to create a new project for each location and then import media from the master project using Media Browser. This is Adobe’s built-in module that enables the editor to access files, bins, and sequences from inside other Premiere Pro projects. When these are imported, there is no dynamic linking between the two projects. The two sets of files/sequences are independent of each other.

Next, each editor cuts their own piece, resulting in a final sequence for each segment. Back in the master project, each edited sequence can be imported – again, using Media Browser – for the purposes of the final show build and tweaks. Since all of the media is common, no additional media files will be imported. Another option is to create a new final project and then import each sequence into it (using Media Browser). This will import the sequences and any associated media files. Then use the segment sequences to build the final show sequence and tweak as needed.

There are plenty of ways to use Premiere Pro and maintain editing versatility within a shared storage situation. You just have to follow a few rules for “best practices” so that everyone will “play nice” and have a successful experience.

Click here to download a folder template and enclosed Premiere Pro template project.

©2017 Oliver Peters