Project organization

Leading into the new year, it’s time to take a fresh look at a perennial subject. Whether you work as a solo editor or as part of a team, having a plan for organizing your projects – along with a workflow for moving media through your system – will make it possible to find and restore material whenever it’s needed at a future date. For a day-to-day workflow, I rely on five standard applications: Post Haste, Hedge, Better Rename, DiskCatalogMaker, and Kyno. I work on Macs, but there are Windows versions or alternatives for each.

Proper project organization. Regardless of your NLE, it’s a good idea to create a project “silo” for each job on your hard drive, RAID, or networked storage (NAS). That’s a main folder for the job, with subfolders for the edit project files, footage, audio, graphics, documents, exports, etc. I use Post Haste to create a new set of project folders for each new project.

Post Haste uses default or custom templates that can include Adobe project files. This provides a common starting point for each new project based on a template that I’ve created. Using this template, Post Haste generates a new project folder with common subfolders. A template Premiere Pro project file with my custom bin structure is contained within the Post Haste template. When each new set of folders is created, this Premiere file is also copied.

In order to track productions, each job is assigned a number, which becomes part of the name structure assigned within Post Haste. The same name is applied to the Premiere Pro project file. Typically, the master folder (and Premiere project) for a new job created through Post Haste will be labelled according to this schema: 9999_CLIENT_PROJECT_DATE.
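To make that structure concrete, here’s a rough Python sketch of what a Post Haste-style template produces. The subfolder names and the template path are hypothetical stand-ins; a real Post Haste template defines its own set.

```python
# Hypothetical sketch of a Post Haste-style project "silo" generator.
# Subfolder names are illustrative; a real template defines its own set.
import shutil
from pathlib import Path

SUBFOLDERS = ["PROJECT_FILES", "FOOTAGE", "AUDIO",
              "GRAPHICS", "DOCUMENTS", "EXPORTS"]

def create_project(root, job_number, client, project, date,
                   premiere_template=None):
    """Create a 9999_CLIENT_PROJECT_DATE folder with standard subfolders."""
    job_name = f"{job_number}_{client}_{project}_{date}"
    job_folder = Path(root) / job_name
    for sub in SUBFOLDERS:
        (job_folder / sub).mkdir(parents=True, exist_ok=True)
    # Copy a template Premiere Pro project, renamed to match the job.
    if premiere_template:
        shutil.copy2(premiere_template,
                     job_folder / "PROJECT_FILES" / f"{job_name}.prproj")
    return job_folder

# Example: create_project("/Volumes/RAID", "1042", "ACME", "SPRING_PROMO", "010421")
```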

Dealing with source footage, aka rushes or dailies. The first thing you have to deal with on a new project is the source media. Most of the location shoots for my projects come back to me with around 1TB of media for a day’s worth of filming. That’s often from two or three cameras, recorded in a variety of codecs at 4K/UHD resolution and 23.98fps. Someone on location (DIT, producer, DP, other) has copied the camera cards to working SSDs, which will be reused on later productions. Hedge is used to copy the cards, in order to provide checksum copy verification.

I receive those SSDs and not the camera cards. The first step is to copy that media “as is” into the source footage subfolder for that project on the editing RAID or NAS. Once my copy is complete, those same SSDs are separately copied “as is” via Hedge to one or more Western Digital or Seagate portable drives. Theoretically, this is for a deep archive, which hopefully will never be needed. Once we have at least two copies of the media, these working SSDs can be reformatted for the next production. The back-up drives should be stored in a safe location on-premises or, better yet, offsite.
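Hedge takes care of checksum verification during those copies. If you ever want to spot-check a copy yourself, the basic idea looks something like this rough Python sketch, which uses the standard library’s SHA-256 rather than whatever checksum Hedge uses internally.

```python
# Rough sketch: verify that a copied folder matches its source, file by file.
# Dedicated tools like Hedge do this automatically with their own checksums.
import hashlib
from pathlib import Path

def file_hash(path, chunk_size=8 * 1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source, destination):
    """Return a list of files that are missing or differ on the destination."""
    source, destination = Path(source), Path(destination)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = destination / src_file.relative_to(source)
        if not dst_file.exists() or file_hash(src_file) != file_hash(dst_file):
            problems.append(str(src_file))
    return problems
```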

Since video cameras don’t use a standard folder structure on the cards, the next step is to reorganize the copied media in the footage folder according to date, camera, and roll. This means ripping media files out of their various camera subfolders. Within the footage folder, my subfolder hierarchy becomes shoot date (MMDDYY), then camera (A-CAM, B-CAM, etc), and then camera roll (A001, A002, etc). Media is located within the roll subfolder. Double-system audio recordings go into a SOUND folder for that date and follow this same hierarchy for sound rolls. When this reorganization is complete, I delete the leftover camera subfolders, such as Private, DCIM, etc.
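As a sketch of that reorganization step (assuming you already know the shoot date, camera, and roll for the card copy you’re working through), pulling the media files out of the camera’s own folder structure might look like this. The extension list is only an example.

```python
# Sketch: flatten one camera card copy into FOOTAGE/MMDDYY/X-CAM/ROLL.
# The media extensions listed here are examples only.
import shutil
from pathlib import Path

MEDIA_EXTENSIONS = {".mov", ".mp4", ".mxf"}

def flatten_card(card_copy, footage_root, shoot_date, camera, roll):
    """Move media files out of the card's subfolders (Private, DCIM, etc.)
    into a single date/camera/roll folder."""
    roll_folder = Path(footage_root) / shoot_date / camera / roll
    roll_folder.mkdir(parents=True, exist_ok=True)
    for clip in Path(card_copy).rglob("*"):
        if clip.is_file() and clip.suffix.lower() in MEDIA_EXTENSIONS:
            shutil.move(str(clip), str(roll_folder / clip.name))
    return roll_folder

# Example: flatten_card("FOOTAGE/card_dump_A001", "FOOTAGE", "122020", "A-CAM", "A001")
```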

It may be necessary to rename or append prefixes to file names in order to end up with completely unique file names within this project. That’s where Better Rename comes in. This is a Finder-level batch renaming tool. If a camera generates default names on a card, such as IMG_001, IMG_002 and so on, then renaming becomes essential. I try to preserve the original name in order to be able to trace the file back to back-up drives if I absolutely have to. Therefore, it’s best to append a prefix. I base this on project, date, camera, and roll. As an example, if IMG_001 was shot as part of the Bahamas project on December 20th, recorded by E-camera on roll seven, then the appended file would be named BAH1220E07_IMG_001.
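Better Rename handles this interactively, but the operation itself is simple. Here’s an illustrative Python sketch of the prefix-append step, following the project/date/camera/roll pattern from the example above (run it with dry_run=True first to preview the result).

```python
# Sketch of the prefix-append step that Better Rename performs interactively.
from pathlib import Path

def append_prefix(folder, project, date, camera, roll, dry_run=True):
    """Prepend e.g. BAH1220E07_ to every file, preserving the original name."""
    prefix = f"{project}{date}{camera}{roll}_"
    for clip in sorted(Path(folder).iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            new_name = clip.with_name(prefix + clip.name)
            print(f"{clip.name} -> {new_name.name}")
            if not dry_run:
                clip.rename(new_name)

# Example: append_prefix("FOOTAGE/122020/E-CAM/E007", "BAH", "1220", "E", "07")
# IMG_001.MOV becomes BAH1220E07_IMG_001.MOV
```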

Some camera codecs, like those used by drones and GoPros, can be a beast for many NLEs to deal with. Proxy media is one solution, or you can transcode only the offending files. If you choose to transcode these files, then Compressor, Adobe Media Encoder, or Resolve are the best go-to applications. Transcode at the native resolution into an optimized codec, like ProRes. Maintain log color spaces, because these optimized files become the new “camera” files in your edit. I will add separate folders for ORIG (camera original media) and PRORES (my transcoded, optimized files) within each camera roll folder. Only the ProRes media is imported into the NLE for editing.
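Those applications are GUI tools; if you prefer a scripted route, ffmpeg can do the same ProRes transcode from the command line. A rough sketch, assuming ffmpeg is installed and using the ORIG/PRORES folder convention described above:

```python
# Sketch: batch-transcode camera originals in ORIG into ProRes files in PRORES
# using ffmpeg. No scaling or LUT is applied, so the native resolution and the
# log gamma of the source are preserved. Profile 3 is ProRes 422 HQ.
import subprocess
from pathlib import Path

def transcode_roll(roll_folder):
    orig = Path(roll_folder) / "ORIG"
    prores = Path(roll_folder) / "PRORES"
    prores.mkdir(exist_ok=True)
    for clip in sorted(orig.iterdir()):
        if not clip.is_file():
            continue
        out = prores / (clip.stem + ".mov")
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ
            "-c:a", "pcm_s16le",                     # uncompressed audio
            str(out)
        ], check=True)

# Example: transcode_roll("FOOTAGE/122020/E-CAM/E007")
```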

Back-up! Do not proceed to GO! Now that you’ve spent all of this effort reorganizing, renaming, and transcoding media, you first want to back up the files before starting to edit. I like to back up media to raw, removable, enterprise-grade HGST or Seagate hard drives. Over the years, I’ve accumulated a variety of drive sizes ranging from 2TB to now 8TB. Larger capacities are available, but 8TB is a cost-effective and manageable capacity. When placed into a Thunderbolt or USB drive dock, these function like any other local hard drive.

When you’ve completed dealing with the media from the shoot, simply copy the whole job folder to a drive. You can store multiple projects on the same drive, depending on their capacity. This is an easy overnight process with most jobs, so it won’t impact your edit time. The point is to back up the newly organized version of your raw media. Once completed, you will have three copies of the source footage – the “as is” copy, the version on your RAID or NAS, and this back-up on the raw drive. After the project has been completed and delivered, load up the back-up drive and copy everything else from this job to that drive. This provides a “clone” of the complete job on both your RAID/NAS and the back-up drive.

In order to keep these back-up drives straight, you’ll need a catalog. At home, I’ve accumulated 12 drives thus far. At work we’ve accumulated over 200. I’ve found the easiest way to deal with this is an application called DiskCatalogMaker. It scans the drive and stores the file information in a catalog document. Each drive entry mimics what you see in the Finder, including folders, files, sizes, dates, and so on. The catalog document is searchable, which is why job numbers become important. It’s a good idea to periodically mount and spin up these drives to maintain reliability. Once a year is a minimum.
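DiskCatalogMaker does the scanning for you. Purely to illustrate what a catalog holds, here’s a minimal Python sketch that writes a searchable CSV listing of a mounted back-up drive; the drive and output paths are hypothetical.

```python
# Illustrative sketch of a drive catalog: every file's relative path, size, and
# modification date, written to a CSV that can be searched by job number later.
import csv
import datetime
from pathlib import Path

def catalog_drive(drive, catalog_csv):
    drive = Path(drive)
    with open(catalog_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "size_bytes", "modified"])
        for item in drive.rglob("*"):
            if item.is_file():
                stat = item.stat()
                modified = datetime.datetime.fromtimestamp(stat.st_mtime)
                writer.writerow([str(item.relative_to(drive)),
                                 stat.st_size,
                                 modified.strftime("%Y-%m-%d %H:%M")])

# Example: catalog_drive("/Volumes/BACKUP_08", "BACKUP_08.csv")
# Searching the CSV for a job number (e.g. "1042") then tells you which drive holds it.
```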

If you have sufficient capacity on your RAID or NAS, then you don’t want to immediately delete jobs and media when the work is done. In our case, once a job has been fully backed up, the job folder is moved into a BACKED UP folder on the NAS. This way we know when a job has been backed up, yet it is still easily retrieved should the client come back with revisions. Plus, you still have three total copies of the source media.

Other back-ups. I’ve talked a lot about backing up camera media, but what about other files? Generally files like graphics are supplied, so these are also backed up elsewhere. Plus they will get backed up on the raw drive when the job is done.

I also use Dropbox for interim back-ups of project files. Since a Premiere Pro project file is light and doesn’t carry media, it’s easy to back up in the cloud. At work, at the end of each day, each editor copies in-progress Premiere files to a company Dropbox folder. The idea is that in the event of some catastrophe, you could get your project back from Dropbox and then use the backed up camera drives to rebuild an edit. In addition, we also export and copy Resolve projects to Dropbox, as well as the DiskCatalogMaker catalog documents.

Whenever possible, audio stems and textless masters are exported for each completed job. These are stored with the final masters. Often it’s easier to make revisions using these elements than to dive back into a complex job after it’s been deeply archived. Our NAS contains a separate top-level folder for all finished masters, in addition to the master subfolder within each project. When a production is done, the master file is copied into this other folder, resulting in two sets of the master files on the NAS. And by “master” I generally mean a final ProRes file along with a high-quality MP4 file. The MP4 is most often what the client will use as their “master,” since so much of our work these days is for the web. Therefore, both NAS locations hold a ProRes and an MP4. That’s in addition to the masters stored on the raw, back-up drive.

Final, Final revised, no really, this one is Final. Let’s address file naming conventions. Every editor knows the “danger” of calling something Final. Clients love to make changes until they no longer can. I work on projects that have running changes as adjustments are made for use in new presentations. Calling any of these “Final” never works. Broadcast commercials are usually assigned definitive ISCI codes, but that’s rarely the case with non-broadcast projects. The process that works for us is simply to use version numbers and dates. This makes sense and is what software developers use.

We use this convention: CLIENT_PROJECTNAME_VERSION_DATE_MODIFIER. As an example, if you are editing a McDonald’s Big Mac :60 commercial, then a final version might be labelled “MCD_Big Mac 60_v13_122620.” A slight change on that same day would become “MCD_Big Mac 60_v14_122620.” We use the “modifier” to designate variations from the norm. Our default master files are formatted as 1080p at 23.98 with stereo audio. So a variation exported as 4K/UHD or 720p or with a 5.1 surround mix would have the added suffix of “_4K” or “_720p” or “_51MIX.”
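If you want to keep that convention consistent across a team, it’s trivial to encode. A tiny, purely illustrative Python helper:

```python
# Tiny sketch of the CLIENT_PROJECTNAME_VERSION_DATE_MODIFIER convention.
def master_name(client, project, version, date, modifier=""):
    name = f"{client}_{project}_v{version}_{date}"
    return f"{name}_{modifier}" if modifier else name

# master_name("MCD", "Big Mac 60", 13, "122620")        -> "MCD_Big Mac 60_v13_122620"
# master_name("MCD", "Big Mac 60", 14, "122620", "4K")  -> "MCD_Big Mac 60_v14_122620_4K"
```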

Some projects go through many updates and it’s often hard to know when a client (working remotely) considers a version truly done. They are supposed to tell you that, but they often just don’t. You sort of know, because the changes stop coming and a presentation deadline has been met. Whenever that happens, we export a ProRes master file plus high-quality MP4 files. The client may come back a week later with some revisions. Then, new ProRes and MP4 files are generated. Since version numbers are maintained, the ProRes master files will also have different version numbers and dates and, therefore, you can differentiate one from the other. Both variations may be valid and in use by the client.

Asset management. The last piece of software that comes in handy for us is Kyno. This is a lightweight asset management tool that we use to scan and find media on our NAS. Our method of organization makes it relatively easy to find things just by working in the Finder. However, if you are looking for that one piece of footage and need to be able to identify it visually, then that’s where Kyno is helpful. It’s like Adobe Bridge on steroids. One can organize and sort using the usual database tools, but it also has a very cool “drill down” feature. If you want to browse media within a folder without stepping through a series of subfolders, simply enable “drill down” and you can directly browse all media that’s contained therein. Kyno also features robust transcode and “send to” features designed with NLEs in mind. Need to prep media for an edit or create proxies? Kyno works as a handy alternative to other tools.

Hopefully this recap has provided some new workflow pointers for 2021. Good luck!

©2021 Oliver Peters

Larry Jordan’s Techniques of Visual Persuasion

You may know him as a speaker, trainer, or web presenter. Or from the long-running Digital Production Buzz podcast series. Or his 2 Reel Guys series with the late Norman Hollyn. Regardless of how, Larry Jordan is well-known by most working and aspiring video professionals. But Jordan is also an accomplished author, with several books to his credit. The latest is Techniques of Visual Persuasion: Create powerful images that motivate.

Commercials, corporate videos, or entertainment – the art of persuasion is at the heart of what every editor does. Persuasion is about convincing someone to take the action you want them to take or to share a feeling you are trying to convey. In addition to creating persuasive messages, we ourselves are also consumers and recipients of these same communications. Therefore, knowledge and understanding are key. It is Jordan’s premise that with modern life’s faster pace, proper communication today is more like haiku than a lengthy report. Every professional needs to know how to make their presentation – whether spoken, still, or motion – succinct and impactful. This book is perfectly laid out to get that point across.

Techniques of Visual Persuasion is arranged into three sections. The first covers the fundamentals of persuasion. The second is about developing persuasive still images and the last section is about persuasive motion images. This book is organized like a textbook, which is a good thing. It’s well-researched and detailed. Each chapter starts with the goals to be covered and ends with a recap. Each is also capped off with an anecdote (like Larry starting a fire in a TV studio) or a guest contributor’s point-of-view. The pages are illustrated nicely with sidebars, images, and charts that help make the point of how and why one example is more inviting or persuasive than another.

Jordan covers a wide range of theoretical and practical advice, such as the 180-degree rule, the rule of thirds, three-point lighting, sans serif vs. serif fonts, and much more. But it’s not all just concepts. Jordan has a lengthy background in software training, including several books around Final Cut Pro and Adobe products, as well as his PowerUp series of videos.

Section two includes two chapters on the basics of Photoshop with practical examples of how to use its tools to enhance and repair still images and create layered composites. Section three goes even deeper into real-world experience. Jordan covers topics such as suggested camera and audio equipment, interviewing techniques, how to properly record audio, and how to properly plan and produce a video shoot. This section also goes deepest into software basics, including a detailed look at Adobe Audition, Apple Final Cut Pro X, and Apple Motion.

Techniques of Visual Persuasion is like a college film program condensed into under 400 informative pages, all of it written in a very engaging manner. I found that it’s not only a good first read, but useful to have around for a quick reference, whether you are just entering the field or have been in the business for years. Larry Jordan is a gifted presenter who can express complex topics in an easy-to-digest manner and this latest book is no exception.

©2020 Oliver Peters

Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW in some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Has that changed any thoughts about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of this pattern and red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then demosaiced and converted into RGB pixel information. Lastly, that image is recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
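To make that 50/25/25 split concrete, here is a toy numpy sketch that turns an RGB image into an RGGB Bayer mosaic, where each photosite keeps just one color value. It is purely illustrative and not any camera’s actual pipeline; real cameras run this in reverse, demosaicing the sensor data back into full RGB pixels.

```python
# Toy illustration of an RGGB Bayer mosaic: each photosite records only one
# channel, with green on 50% of the sites and red/blue on 25% each.
import numpy as np

def bayer_mosaic(rgb):
    """rgb: (H, W, 3) linear-light image. Returns a single-channel mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites (even row, even column)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites (odd row, odd column)
    return mosaic

# In every 2x2 tile there are 1 red, 2 green, and 1 blue photosites,
# which is the 25% / 50% / 25% split described above.
```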

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload the processing to another stage in the pipeline, then design options are opened up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software in order to properly interpret that data into a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible while still keeping a camera file that is smaller than its encoded RGB counterpart.

In-camera recording, camera raw, and RED

Camera raw recording preceded the introduction of the RED One camera, but those earlier workflows usually consisted of uncompressed movie files or image sequences recorded to an external recorder. RED introduced the ability to record a wavelet-compressed, 4K camera raw signal at 24fps as a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing that exact same thing, unless they enter into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using. (Apple was awarded an Engineering Emmy Award this year for the ProRes codecs.)

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just like Sony, ARRI, and Panasonic recording their camera raw signals to an external recorder, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smart phones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than it could by handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 DSLR recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, you can have any number of cameras. Once you bring those into Final Cut Pro X, you don’t have the correct camera profile with a color science that matches the model of each and every camera.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. (Note: This was corrected with the FCPX 10.4.9 update.) I decided to decode the clip (RAW to log conversion) using a Sony profile. This gave me the best possible results for the Nikon images and effectively gave me a log clip similar to that from a Sony camera. Then for the grade I worked in Color Finale Pro 2, using its ACES workflow, and completed it with the matching SLog3 conversion to Rec709.

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

Is good enough finally good enough?

Like many in post, I have spent weeks in a WFH (work from home) mode. Although I’m back in the office now on a limited basis, part of those weeks included studying the various webinars covering remote post workflows. Not as a solution for now, but to see what worked and what didn’t for the “next time.”

It was interesting to watch some of the comments from executives involved in network production groups and running multi-site, global post companies. While many offered good suggestions, I also heard a few statements about having to settle for something that was “good enough” under the circumstances. Maybe it wasn’t meant the way it sounded to me, but to characterize cutting in Premiere Pro and delivering ProRes masters as something they had to “settle for” struck me as just a bit snobbish. My apologies if I took it the wrong way.

A look back

The image at the top is a facility that I helped design and build and where I worked for over a dozen years. This was Century III, the resident post facility at Universal Studios Florida – back in the “Hollywood east” days of the 1990s. Not every post house of the day was this fancy or as well equipped, but it represented the general state-of-the-art for that time. During its operation, we worked with 1″, D1, D2, Digital Betacam, and eventually some HD. But along the way, traditional linear post gave way to cheaper non-linear suites. We evolved with that trend and the last construction project was to repurpose one of the linear suites into a high-end Avid Symphony finishing suite.

All things come to an end and 2002 saw Century III’s demise, in part because of the economic aftermath following September 11th, but also because of changes in the general film climate in Florida. That was also a time when dramatic and comedic filmed series gave way to many non-scripted, “reality” TV series.

I became a freelancer/independent contractor that year and about a year or so later was cutting and finishing an Animal Planet series. We cut and finished with four networked Avid workstations spread across two apartments. There we covered all post, except the final audio mix. It was readily obvious to me that this was up to 160 hours/week of post that was no longer being done at an established facility. And that it was a trend that would accelerate, not go away.

Continued shift

It’s going on two decades now since that shift. In that time I’ve worked out of my home studio (picture circa 2011), my laptop on site, and within other production companies and facilities. Under various conditions, I’ve cut, finished, and delivered commercials, network shows, trade-show presentations, themed attraction projects, and feature films and documentaries. I’ve cut and graded with Final Cut Pro (“legacy” and X), Premiere Pro, Media Composer/Symphony, AvidDS, Color, Resolve, and others. The final delivered files have all passed rigid QC. It’s a given to me that you don’t need a state-of-the-art facility to do good work – IF you know what you are doing – and IF you can trust your gear to generate predictable results. So I have to challenge the assumptions when I hear “good enough.”

Predictable results – ah, there’s the rub. Colorists swear by the necessity for rooms with the proper neutral paint job and very expensive, calibrated displays. Yet, now many are working from home in ad hoc grading rooms. Many took home their super-expensive Sonys, but others are also using high-end LG, Flanders, or the new Apple XDR to grade by. And guess what? Somehow it all works. Would a calibrated grading environment be better? Sure, I’m not saying that it wouldn’t – simply that you can deliver quality without one when needed.

I’ve often asked clients to evaluate an in-progress grade using an Apple iPad, simply because iPads display good, consistent results. It’s like audio mixers who use the old Auratone cube speakers. Both devices are intended to be a “lowest common denominator.” If it looks or sounds good there, then that will translate reasonably well to other consumer devices. For grading I would still like to have the client present at the end for a final pass. Color is subjective and it’s essential that you are looking at the same display in the same room to make sure everyone is talking the same language.

I need to point out that I’m generally talking about finishing for streaming, the web, and/or broadcast with a stereo mix. When it comes to specialized venues, like theatrical presentations and custom attractions (theme parks or museums), the mixing and grading almost always has to be completed in properly designed suites/theaters/mix stages (motion pictures) or on-site (special venues). For example, if you mix a motion picture for theatrical display, you need a properly certified 5.1, 7.1, or Dolby Atmos environment. Otherwise, it’s largely a guessing game. The same for picture projection, which differs from TV and the web in terms of brightness, gamma, and color space. In these two instances, it’s highly unlikely that anyone working out of their house is going to have an acceptable set-up.

The new normal

So where do we go from here? What is the “new normal?” Once some level of normal has returned, I do believe a lot of post will go back to the way it was before. But, not all. Think of the various videoconference-style (Skype, Zoom, etc) shows you’ve been watching these weeks. Obviously, these were produced that way out of necessity. But, guess what! Quite a few are downright entertaining, which says to me that this format isn’t going away. It will become another way to produce a show that viewers like. Just as GoPros and drones have become a standard part of the production lexicon, the same will be true of iPhones and even direct Zoom or Skype feeds. Viewers are now comfortable with it.

At a time when the manufacturers have been trying to cram HDR and 8K down our throats, we suddenly find that something entirely different is more important. This will change not only production, but also post. Of course, many editors were already working from home or in ad hoc cutting rooms prior to this; but editing is a collaborative art that involves working with other creatives.

All situations aren’t equal though. I’ve typically worked without a client sitting over my shoulder for years. Review-and-approval services like Frame.io have become standard tools in my workflow. Although not quite as efficient as having a client right there, it still can be very effective. That’s common in my workflows, but has likely become a new way of working over these past two months for editors and colorists who never worked that way prior to Covid-19.

Going forward

Where does “good enough” fit in? If cutting in Media Composer and delivering DNxHR has been your norm within a facility, then using editors working from home may require a shift in thinking. For example, is cutting in Resolve, Premiere Pro, or Final Cut Pro X and then delivering ProResHQ (or higher) an acceptable alternative? There simply is no quality compromise, regardless of what some may believe, but it may require a shift in workflow or thinking.

Security may be harder to overcome. In studio or network-controlled features and TV series, security is tight, making WFH situations dicey. However, the truth of the matter is that the lowest common denominator may be more dangerous than a hacker. Think about the unscrupulous person somewhere in the chain who has access to files. Or someone with a smartphone camera recording a screen. In the end, do you or don’t you have employees and/or freelancers that you can trust? Frame.io is addressing some of these security questions with personalized screeners. Nevertheless, such issues need to be addressed and in some cases, loosened.

Another item to consider is what your freelancers are using to cut or grade with. Do they have an adequate workstation with the right software, plug-ins, and fonts? Or does the company need to supply that? What about monitoring? All of these are items to explore with your staff and freelancers.

The hardest nut to crack is if you need access to a home base. Sure you can “sneakernet” drives between editors. You can transfer large files over the internet on a limited basis. These both come with a hit in efficiency. For example, my current work situation requires ongoing access to high-res, native media stored on QNAP and LumaForge Jellyfish NAS systems – an aggregate of about 3/4PB of potential storage. Fortunately, we have a policy of archiving all completed projects onto removable drives, even while still storing the projects on the NAS systems for as long as possible. In preparation for our WFH mode, I brought home about 40 archive drives (about 150TB of media) as a best guess of everything I might need to work on from home. Two other editors took home a small RAID each for projects that they were working on.

Going forward, what have I learned? The bottom line is – I don’t know. We can easily work from home and deliver high-quality work. To me that’s a given and has been for a while. In fact, if you are running a loaded 5K iMac, iMac Pro, or 16″ MacBook Pro, then you already have a better workstation than most suites still running 10-year-old “cheese grater” or 7-year-old “trash can” Mac Pros. Toss in a fast Thunderbolt or USB3.0 RAID and ProRes or DNxHR media becomes a breeze. Clearly this “good enough” scenario will deliver comparable results to a “blessed” edit suite.

Unfortunately, if you can’t stay completely self-contained, then the scenarios involve someone being at the home base. In larger facilities this still requires IT personnel or assistant editors to go into the office. Even if you are an editor cutting from home with proxy files, someone has to go into the office to conform the camera originals and create deliverables. This tends to make a mockery out of stringent WFH restrictions.

If the world truly has changed forever, as many believe, and remote work will be how the majority of post-production operates going forward, then it certainly changes the complexion of what a facility will look like. Why invest in a large SAN/NAS storage solution? Why invest in a fleet of new Mac Pros? There’s no need, because the facility footprint can be much smaller. Just make sure your employees/freelancers have adequate hardware to do your work.

The alternative is fast, direct access over the internet to your actual shared storage. Technically, you can access files in a number of ways. None of them are particularly efficient. The best systems involve expense, like Teradici products or the HP RGS feature. However, if you have an IT hiccup or a power outage, you are back in the same boat. The “holy grail” for many is to have all media in the cloud and to edit directly from the cloud. That to me is still a total pipe dream and will be for a while for a variety of reasons. I don’t want to say that all of these ideas present insurmountable hurdles, but they aren’t cheaper – nor more secure – than being on premises. At least not yet.

The good news is that our experience over the past few months has spurred interest in new ways of working that will incentivize development. And maybe – just maybe – instead of fretting about the infrastructure to support 8K, we’ll find better, faster, more efficient ways to work with high-quality media at a distance.

©2020 Oliver Peters

Chasing the Elusive Film Look

Ever since we started shooting dramatic content on video, directors have pushed to achieve the cinematic qualities of film. Sometimes that’s through lens selection, lighting, or frame rate, but more often it falls on the shoulders of the editor or colorist to make that video look like film. Yet, many things contribute to how we perceive the “look of film.” It’s not a single effect, but rather the combination of careful set design, costuming, lighting, lenses, camera color science, and color correction in post.

As editors, we have control over the last ingredient, which brings me to LUTs and plug-ins. A number of these claim to offer looks based on certain film emulsions. I’m not talking about stylized color presets, but the subtle characteristics of film’s color and texture. But what does that really mean? A projected theatrical film is the product of four different stocks within that chain – original camera negative, interpositive print, internegative, and the release print. Conversely, a digital project shot on film and then scanned to a file only involves one film stock. So it doesn’t mean much to say you are copying the look of film emulsion without understanding the desired effect.

My favorite film plug-in is Koji Advance, which is distributed through the FxFactory platform. Koji was developed by Crumplepop with noted film timer Dale Grahn. A film timer is the film lab’s equivalent to a digital colorist. Grahn selected several color and black-and-white film stocks as the basis for the Koji film looks and film grain emulation. Then Crumplepop’s developers expanded those options with neutral, saturated, and low contrast versions of each film stock and included camera-based conversions from log or Rec 709 color spaces. This is all wrapped into a versatile color correction plug-in with controls for temperature/tint, lift/gamma/gain/density (low, mid, high, master), saturation, and color correction sliders.

This post isn’t a review of the Koji Advance plug-in, but rather a look at how to use such a filter effectively within an NLE like Final Cut Pro X (or Premiere Pro and After Effects, as well). In fact, these tips can also be used with other similar film look plug-ins. Koji can be used as your primary color correction tool, applying and adjusting it on each clip. But I really see it as icing on the cake and so will take a different approach.

1. Base grade/shot matching. The first thing you want to do in any color correction session is to match your shots within the sequence. It’s best to establish a base grade before you dive into certain stylized looks. Set the correct brightness and contrast and then adjust for proper balance and color tone. For these examples, I’ve edited a timeline consisting of a series of random FilmSupply stock footage clips. These clips cover a mix of cameras and color spaces. Before I do anything, I have to grade these to look consistent.

Since these are not all from the same set-up, there will naturally be some variances. A magic hour shot can never be corrected to be identical to a sunny exterior or an office shot. Variations are OK, as long as general levels are good and the tone feels right. Final Cut Pro X features a solid color correction tool set that is aided by the comparison view. That makes it easy to match a shot to the clip before and after it in the timeline.

2. Adding the film look. Once you have an evenly graded sequence of shots, add an adjustment layer. I will typically apply the Koji filter, an instance of Hue/Sat Curves, and a broadcast-safe limiter to that layer.

Within the Koji filter, select generic Rec 709 as the camera format and then the desired film stock. Each selection will have different effects on the color, brightness, and contrast of the clips. Pick the one closest to your intended effect. If you also want film grain, then select a stock choice for grain and adjust the saturation, contrast, and mix percentage for that grain. It’s best to view grain playing back at close to your target screen size with Final Cut set to Better Quality. Making grain judgements in a small viewer or in Better Performance mode can be deceiving. Grain should be subtle, unless you are going for a grunge look.

The addition of any of these film emulsion effects will impact the look of your base grade; therefore, you may need to tweak the color settings with the Koji controls. Remember, you are going for an overall look. In many cases, your primary grade might look nice and punchy – perfect for TV commercials. But that style may feel too saturated for a convincing film look of a drama. That’s where the Hue/Sat Curves tool comes in. Select LUMA vs SAT and bring down the low end to taste. You want to end up with pure blacks (at the darkest point) and a slight decrease in shadow-area saturation.

3. Readjust shots for your final grade. The application of a film effect is not transparent and the Koji filter will tend to affect the look of some clips more than others. This means that you’ll need to go back and make slight adjustments to some of the clips in your sequence. Tweak the clip color correction settings applied in the first step so that you optimize each clip’s final appearance through the Koji plug-in.

4. Other options. Remember that Koji or similar plug-ins offer different options – so don’t be afraid to experiment. Want film noir? Try a black-and-white film stock, but remember to also turn down the grain saturation.

You aren’t going for a stylized color correction treatment with these tips. What you are trying to achieve is a look that is more akin to that of a film print. The point of adding a film filter on top is to create a blend across all of your clips – a type of visual “glue.” Since filters like this and the adjustment layer as a whole have opacity settings, it’s easy to go full bore with the look or simply add a hint to taste. Subtlety is the key.

Originally written for FCP.co.

©2020 Oliver Peters