Preparing Your Film for Distribution

First-time filmmakers are elated when their film finally gets picked up for distribution. But the hardest work may still lie ahead. Preparing your film and companion materials can be a very detailed and complex endeavor if you didn’t plan for it properly from the outset. While each distributor and/or network has slightly different specs, the general requirements are the same. Here are the more common ones.

1. Film master. Supplying a master file is self-evident, but the exact details are not consistent across the board. Usually some additional post will be required once you get distribution. You will need to add the distributor’s logo animation up front, make sure the program starts at a specified timecode, and configure the audio channels as required (see Item 2).

In spite of the buzz over 4K, many distributors still want 1920×1080 files at 23.98fps (or possibly 24.0fps) – usually in the Apple ProRes HQ* video codec. The frame rate may differ for broadcast-oriented films, such as documentaries. In that case, 29.97fps might be required, and some international distributors will require 25.0fps. If you have any titles over the picture, then “textless” material must also be supplied. Generally, you can add those sections, such as the video under the opening titles, at the end of the master, following the end credits of the film.

*Occasionally film festivals and some distributors will also require a DCP package instead of a single QuickTime or MXF master file.
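While exact delivery specs vary, the basic conform – codec, frame rate, and start timecode – can be scripted with a tool like ffmpeg. Here’s a minimal sketch in Python, assuming ffmpeg is installed; the filenames, frame rate, and start timecode are placeholders, so always follow your distributor’s delivery memo.

```python
# A sketch of a master conform pass using ffmpeg via Python (placeholder values).
import subprocess

cmd = [
    "ffmpeg",
    "-i", "film_final.mov",      # your graded, mixed master file
    "-c:v", "prores_ks",         # FFmpeg's ProRes encoder
    "-profile:v", "3",           # profile 3 = ProRes 422 HQ
    "-r", "24000/1001",          # 23.98fps (use 25 for some international specs)
    "-timecode", "00:58:00:00",  # start TC, so program can begin at 01:00:00:00
    "-c:a", "pcm_s24le",         # uncompressed 24-bit PCM audio
    "master_1080_2398_proreshq.mov",
]
subprocess.run(cmd, check=True)
```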

2. Audio mixes and tracks. Stereo and/or 5.1 surround mixes are the most commonly requested audio configurations. You’ll often be asked to supply both the full mixes and the “stems”. The latter are separate submixes of only dialogue, sound effects, and music. Some distributors want these stems as separate files, while others want them attached to the master file. These are easy to supply if the film was originally mixed with that in mind. But if your mixer only produced a final mix, then it’s a lot harder to go back and generate new stem tracks. A typical channel assignment on a delivery master is ten tracks: the 5.1 surround mix (L, R, C, LFE, Ls, Rs), the stereo mix (left, right), and a stereo M&E mix (combined music and effects, minus the dialogue).
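To make that assignment concrete, here is the same ten-track layout expressed as a simple Python table. The track order shown is a common convention, not a universal one – confirm it against your distributor’s spec sheet.

```python
# A hypothetical ten-track delivery layout matching the assignment above.
DELIVERY_TRACKS = {
    1: "5.1 Left",     2: "5.1 Right",    3: "5.1 Center",
    4: "5.1 LFE",      5: "5.1 Ls",       6: "5.1 Rs",
    7: "Stereo Left",  8: "Stereo Right",
    9: "M&E Left",    10: "M&E Right",
}
assert len(DELIVERY_TRACKS) == 10  # sanity-check the channel count
```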

3. Subtitles and captions. In order to be compliant with various accessibility regulations, you will likely have to supply closed captioning sidecar files that sync to your master. There are numerous formats (SCC, SRT, and iTT are common) and several NLEs allow you to create these. However, it’s far easier and usually more accurate to have a service create your files. There are numerous vendors, with prices starting as low as $1/minute. Closed captions should not be confused with subtitles, also called open captions. These appear on-screen and are common when someone is speaking in another language. Check with your distributor whether this applies to you, because they may want the video without titles, in the event of international distribution.
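As an illustration of what a sidecar file actually contains, here’s a minimal Python sketch that writes an SRT file – one of the simplest caption formats. The cues are hypothetical, and broadcast formats like SCC carry positioning and timing metadata this ignores, which is part of why a captioning service earns its fee.

```python
# Write a bare-bones SRT caption sidecar (illustrative cues only).
def srt_time(t):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(t * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

cues = [
    (1.0, 3.5, "We open on a quiet street."),
    (4.0, 6.0, "[distant thunder]"),
]

with open("captions.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(cues, 1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```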

4. Legal documentation. There’s a wide range of paperwork that you should be prepared to turn over. This includes licensing for any music and stock footage, talent releases, contracts, and deal memos. One important element is “chain-of-title” – you must be able to prove that you own the rights to the story and the film. Music is often a sticking point for indie filmmakers. If you used temp music or had a special deal for film festival showings, now is the time to pay up. You won’t get distribution until all music is clearly licensed. Music info should also include a cue sheet (song names, length, and position within the film).
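The cue sheet itself can be as simple as a spreadsheet. Here’s a sketch of the minimum fields named above, built with Python’s csv module and hypothetical entries; real cue sheets typically also list the composer, publisher, and performing rights society for each cue.

```python
# Generate a minimal music cue sheet as a CSV file (hypothetical cues).
import csv

cues = [
    ("Opening Theme", "00:00:30", "2:15"),
    ("Diner Scene Source Music", "00:14:22", "1:05"),
]

with open("cue_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Song Title", "Position in Film", "Length"])
    writer.writerows(cues)
```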

5. Errors and omissions insurance. This is a catch-all policy you’ll need to buy to satisfy many distributors. It’s designed to cover you in the event that there’s a legal claim (frivolous or otherwise) against the film – for example, if someone comes out of the woodwork claiming that you ripped them off and stole their story idea, and that you now owe them money.

6. Trailer. Distributors often request a trailer to be used to promote the film. The preference seems to be that the trailer run under two minutes in length. It may or may not need to include the MPAA card at the front, and it should have a generic end tag (no “coming soon” or date at the end). Often a simple stereo mix will be fine, but don’t take that for granted. If you are going through full sound post anyway in creating a trailer, be sure to generate the full audio package – stereo and surround mixes and splits in various combinations, just like your feature film master.

7. Everything else. Beyond this list, you’ll often be asked for additional “nice to have” items. These include screeners (DVD or web), behind-the-scenes press clips or photos, frame grabs from the film, a final script, biographies of the creative team and lead actors, as well as a poster image.

As you can see, none of this seems terribly difficult if you are aware of these needs going in. But if you have prepared none of this in advance, it will become a mad scramble at the end to keep the distributor happy.

Originally written for RedShark News

©2018 Oliver Peters


Five Decades of Edit Suite Evolution

I spent last Friday setting up two new Apple iMac Pros as editing workstations. When I started as an editor in the 1970s, it was the early days of computer-assisted video editing. Edit suites (or bays) were designed either for “offline” editing – simple hardware, where creative cutting was the goal – or for “online” editing, built for finishing with the most expensive gear. Sometimes the online bay would do double-duty for both creative and final post.

The minimum investment for such a linear edit suite would include three 2” videotape recorders, a video switcher (vision mixer), an edit controller, an audio mixer, and a small camera for titles and artwork. Suites were designed with creature comforts, since clients would often spend days at a time supervising the edit session. Before smartphones and the internet, clients welcomed the chance to get out of the office and go to the edit. Outfitting one of these edit suites would start at several hundred thousand dollars.

At my current edit gig, the company runs nine Mac workstations within a footprint that would have only supported three edit suites of the past, including a centralized machine room. Clients rarely come to supervise an edit, so the layout is more akin to the open office plan of a design studio. Editing can be self-contained on a Mac or PC and editors work in a more collegial, collaborative environment. There’s one “hero” room for when clients do decide to drop in.

In these five decades, computer-assisted editing has gone through four phases:

Phase 1 – Offline and online edit suites, primarily based on linear videotape technology.

Phase 2 – Nonlinear editing took hold with the introduction of Avid, EMC, Media 100, and Lightworks. The resolution was too poor for finishing, but the systems were ideal for the creative process. VTR-based linear rooms still handled finishing.

Phase 3 – As the quality improved, nonlinear systems could deliver finished masters. But camera acquisition and delivery were still centered on videotape, so nonlinear systems still had to be able to output to tape, which required specialized i/o hardware.

Phase 4 (current) – Editing is completely based around the computer. Most general-purpose desktop and even laptop computers are capable of the whole gamut of post services without the need for specialized hardware, which has become optional. The full shift to Phase 4 came when file-based acquisition and delivery became the norm.

This transition brought about a sea change in cost, workflow, facility design, and talent needs. It has been driven by technology, but also a number of socioeconomic factors.

1. Technology always advances. Computers get more powerful at a lower cost point. Moore’s Law and all that. Although our demands increase – SD, HD, 4K, 8K, and beyond – computers, so far, have not been outpaced. I can edit 4K today with an investment of under $10K, which was impossible in 1980, even with an investment of $500K or more. This cost reduction also applies to shared storage solutions (NAS and SAN systems). They are cheaper, easier to install, and more reliable than ever. Even the smallest production company can now afford to design editing around the collaboration of several editors and workstations.

2. The death of videotape came with the 2011 Tohoku earthquake and tsunami in Japan that disabled the Fukushima nuclear plant. A byproduct of this natural disaster was that it damaged the Sony videotape manufacturing plant, putting supplies of HDCAM-SR stock on indefinite backorder. This pointed to the vulnerability of videotape and hastened the acceptance of file-based delivery for masters by key networks and distributors.

3. Interactions with clients – and human beings in general – have changed, thanks to smartphones, personal computers, and the internet. While both good and bad, the result is a shift in our communication with clients. Most of the time, edit session review and approval is handled over internet services. Post your cut. Get feedback. Make your changes and post again. Repeat. Along with a smaller hardware footprint than in the past, this is one of the prime reasons that room designs have changed. You don’t need a big, comfortable edit suite designed for clients if they aren’t going to come. A smaller room will do, as long as your editors are happy and productive.

Such a transition isn’t new. It’s been mirrored in the worlds of publishing, graphic design, and recording studios. Nevertheless, it is interesting to look back at how far things have come. Naturally, some will view this evolution as a threat and others as filled with opportunities. And, of course, where it goes from here is anyone’s guess.

All I know is that setting up two edit systems in a day would have been inconceivable in 1975!

Originally written for RedShark News

©2018 Oliver Peters

Viva Las Vegas – NAB 2018

As more and more folks get all of their information through internet sources, the running question is whether or not trade shows still have value. A show like the annual NAB (National Association of Broadcasters) Show in Las Vegas is both fun and grueling, typified by sensory overload and folks in business attire with sneakers. Although some announcements are made before the exhibits officially open – and nearly all are pretty widely known before the week ends – there still is nothing quite like being there in person.

For some, other shows have taken the place of NAB. The annual HPA Tech Retreat in the Palm Springs area is a gathering of technical specialists, researchers, and creatives that many consider the TED Talks for our industry. For others, the Cine Gear Expo in LA is the prime showcase for grip, lighting, and camera offerings. RED Camera has focused on Cine Gear instead of NAB for the last couple of years. And then, of course, there’s IBC in Amsterdam – the more humane version of NAB in a more pleasant setting. But for me, NAB is still the main event.

First of all, the NAB Show isn’t merely about the exhibit floor at the sprawling Las Vegas Convention Center. Actual NAB members can attend various sessions and workshops related to broadcasting and regulations. There are countless sidebar events specific to various parts of the industry. For editors, that includes Avid Connect – a two-day series of Avid presentations over the weekend leading into NAB; Post Production World – a series of workshops, training sessions, and presentations managed by Future Media Concepts; as well as a number of keynote presentations and artist gatherings, including SuperMeet, FCPexchange, and the FCPX Guru Gathering. These are places where you’ll rub shoulders with some well-known editors, colorists, artists, and mixers, learn about new technologies like HDR (high dynamic range imagery), and occasionally see some new product features from vendors who might not officially be on the show floor with a booth, like Apple.

One of the biggest benefits I find in going to NAB is simply walking the floor, checking out the companies and products that might not get a lot of attention. These newcomers often have the most innovative technologies – new things that were never on the radar prior to that week.

The second benefit is connection. I meet up again in person with friends that I’ve made over the years – both other users, as well as vendors. Often it’s a chance to meet people that you might only know through the internet (forums, blogs, etc.) and to get to know them just a bit better. A bit more of that might make the internet more friendly, too!

Here are some of my random thoughts and observations from Las Vegas.

__________________________________

Editing hardware and software – four As and a B

Apple uncharacteristically pre-announced their new features just prior to the show, culminating with App Store availability on Monday when the NAB exhibits opened. This includes new Final Cut Pro X/Motion/Compressor updates and the official number of 2.5 million FCPX users. That’s a growth of 500,000 users in 2017, the biggest year to date for Final Cut. The key new feature in FCPX is a captioning function to author, edit, and export both closed and embedded (open) captions. There aren’t many great solutions for captioning, and the best to date have been expensive. I found the Apple approach to be the best and easiest to use that I’ve seen. It’s well-designed and should save time and money for those who need to create captions for their productions – even if you are using another brand of NLE. Best of all, if you own FCPX, you already have that feature. When you don’t have a script to start from, manual or automatic transcription is required as a starting point. There is now a tie-in between Speedscriber (also updated this week) and FCPX that will expedite the speech-to-text function.

The second part of Apple’s announcement was the introduction of a new camera raw codec family – ProRes RAW and ProRes RAW HQ. These are acquisition codecs designed to record the raw sensor data from Bayer-pattern sensors (prior to debayering the signal into RGB information) and make that available in post, just like RED’s REDCODE RAW or CinemaDNG. Since this is an acquisition codec and NOT a post or intermediate codec, it requires a partnership on the production side of the equation. Initially this includes Atomos and DJI. Atomos supplies external recorders that can capture the raw output from various cameras offering the ability to send raw data externally – currently the Shogun Inferno and Sumo 19 models. As this is camera-specific, Atomos must create the correct profile per camera to remap that sensor data into ProRes RAW. At the show, this included several Canon, Sony, and Panasonic cameras. DJI does this in-camera on the Inspire 2.

The advantage within FCPX is that ProRes RAW is optimized for post, thus allowing for more streams in real time. ProRes RAW data rates (variable) fall between those of ProRes and ProRes HQ, while the less compressed ProRes RAW HQ rates fall between ProRes HQ and ProRes 4444. It’s very early for this new codec, so additional camera and post vendors will likely add ProRes RAW support over the coming year. It is currently unknown whether any other NLEs will be able to decode and play back ProRes RAW.

As always, the Avid booth was quite crowded and, from what I heard, Avid Connect was well attended by enthused Avid users. The Avid offerings are quite broad and hard to encapsulate in any single blog post. Most, these days, are very enterprise-centric. But this year, with a new CEO at the helm, Avid’s creative tools have been reorganized into three strata – First, standard, and Ultimate. This applies to Sibelius, Pro Tools, and Media Composer. In the case of Media Composer, there’s Media Composer | First – a fully functioning free version with minimal restrictions; Media Composer; and Media Composer | Ultimate, which includes all options, such as PhraseFind, ScriptSync, NewsCutter, and Symphony. The big difference is that project sharing has been decoupled from Media Composer. This means that if you get the “standard” version (just named Media Composer), it will not be enabled for collaboration on a shared storage network. That will require Media Composer | Ultimate. So Media Composer (standard) is designed for the individual editor. There is also a new subscription pricing structure, which places Media Composer at about the same annual cost as Adobe Premiere Pro CC (single-app license). The push is clearly towards subscription; however, you can still purchase and/or maintain support for perpetual licenses, though it’s a little harder to find that info on Avid’s store website.

Though not as big news, Avid is also launching the Avid DNxID capture/export unit. It is custom-designed by Blackmagic Design for Avid and uses a small form factor. It was created for file-based acquisition, supports 4K, and includes embedded DNx codecs for onboard encoding. Connections are via component analog and HDMI, plus an SD card slot.

The traffic around Adobe’s booth was thick the entire week. The booth featured interesting demos that were front and center in the middle of one of the South Hall’s main thoroughfares, generally creating a bit of a bottleneck. The newest Creative Cloud updates had preceded the show, but were certainly new to anyone not already using the Adobe apps. Big news for Premiere Pro users was the addition of automatic ducking that was brought over from Audition, and a new shot matching function within the Lumetri color panel. Both are examples of Adobe’s use of their Sensei AI technology. Not to be left out, Audition can now also directly open sequences from Premiere Pro. Character Animator had been in beta form, but is now a full-fledged CC product. And for puppet control Adobe also introduced the Advanced Puppet Engine for After Effects. This is a deformation tool to better bend, twist, and control elements.

Of course, when it comes to NLEs, the biggest buzz has been over Blackmagic Design’s DaVinci Resolve 15. The company has an extensive track record of buying up older products whose companies weren’t doing so well, reinvigorating the design, reducing the cost, and breathing new life into them – often for a new, wider customer base. Nowhere is this more evident than with Resolve, which has now grown from a leading color correction system into a powerful, all-in-one edit/mix/effects/color solution. We had previously seen the integration of the Fairlight audio mixing engine. This year, Fusion visual effects were added. As before, each of these disparate tools appears on its own page with a specific UI optimized for that task.

A number of folks have quipped that someone has finally resurrected Avid DS. Although all-in-ones like DS and Smoke haven’t been hugely successful in the past, Resolve’s price point is considerably more attractive. The Fusion integration means that you now have a subset of Fusion running inside of Resolve. This is a node-based compositor, which makes it easy for a Resolve user to understand, since the color page already works with nodes. At least for now, Blackmagic Design intends to maintain a standalone version of Fusion as well, which will offer more functions for visual effects compositing. Resolve also gained new editorial features, including tabbed sequences, a pancake timeline view, captioning, and improvements in the Fairlight audio page.

Other Blackmagic Design news includes updates to their various mini-converters, updates to the Cintel Scanner, and the announcement of a 4K Pocket Cinema Camera (due in September). They have also redesigned and modularized the Fairlight console mixing panels. These are now more cost-effective to manufacture and can be combined in various configurations.

This was the year for a number of milestone anniversaries, such as the 100th for Panasonic and the 25th for AJA. There were a lot of new product announcements at the AJA booth, but a big one was the push for more OpenGear-compatible cards. OpenGear is an open hardware rack standard that was developed by Ross and embraced by many manufacturers. You can purchase the OpenGear version of a manufacturer’s product and then mix and match a variety of OpenGear cards in any OpenGear rack enclosure. AJA’s cards also offer DashBoard support – a software tool to configure and control the cards. There are new KONA SDI and HDMI cards, HDR support in the Io 4K Plus, and HDR capture and playback with the Ki Pro Ultra Plus.

HDR

It’s fair to say that we are all still learning about HDR, but from what I observed on the floor, AJA is one of the only companies with a number of hardware products that will allow you to handle HDR. This is thanks to their partnership with ColorFront, who is handling the color science in these products. This includes the FS | HDR – an up/down/cross, SDR/HDR synchronizer/converter – which also supports the Tangent Element Kb panel. The FS | HDR was a tech preview last year, but is a shipping product now. This year the tech preview product is the HDR Image Analyzer, which offers waveform and histogram monitoring at up to 4K/60fps.

Speaking of HDR (high dynamic range) and SDR (standard dynamic range), I had a chance to sit in on Robbie Carman’s (colorist at DC Color, Mixing Light) Post Production World HDR overview. Carman has graded numerous HDR projects, and from his presentation – coupled with exhibits on the floor – it’s quite clear that HDR is the wild, wild west right now. There is much confusion about color space and dynamic range, not to mention what current hardware is capable of versus the maximums expressed in the tech standards. For example, the BT.2020 spec doesn’t inherently mean that the image is HDR. And on the consumer side, HDR comes bundled with other requirements: the set must be 4K and must accept the HDMI 2.0 standard.

High dynamic range grading absolutely requires HDR-compatible hardware, such as the proper i/o device and a display that can receive the metadata that turns on and sets its target HDR values. This means investing in a device like AJA’s Io 4K Plus or Blackmagic’s UltraStudio 4K Extreme 3. It also means purchasing a true grading monitor costing tens of thousands of dollars, like one from Sony, Canon, or Flanders. You CANNOT properly grade HDR based on the image of ANY computer display. So while the latest version of FCPX can handle HDR, and the iMac Pro screen features a high brightness (nits) rating, you cannot rely on this screen to see proper HDR.

LG was a sponsor of the show and LG displays were visible in many of the exhibits. Many of their newest products qualify at the minimum HDR spec, but for the most part, the images shown on the floor were simply bright and not HDR – no matter what the sales reps in the booths were saying.

One interesting fact that Carman pointed out was that HDR displays cannot be driven across the full screen at the highest value. You cannot display a full screen of white at 1,000 nits on a 1,000-nit display without causing damage. Therefore, automatic gain adjustments in the set’s electronics dim the screen. Only a smaller percentage of the image (20% maybe?) can be driven at full value before dimming occurs. Another point Carman made was that standard lift/gamma/gain controls may be too coarse to grade HDR images with finesse. His preference is to use Resolve’s log grading controls, because you can make more precise adjustments to highlight and shadow values.

Cameras

I’m not a camera guy, but there was notable camera news at the show. Many folks really like the Panasonic colorimetry for which the Varicam products are known. For people who want a full-featured camera in a small form factor, look no further than Panasonic’s AU-EVA1. It’s a 4K, Super35, handheld cinema camera featuring dual ISOs. Panasonic claims 14 stops of latitude. It will take EF lenses and can output camera raw data. When paired with an Atomos recorder, it will be able to record ProRes RAW.

Another new camera is Canon’s EOS C700 FF. This is a new full-frame model offered in both EF and PL lens mount versions. Like the standard Super35 C700, it’s a 4K cinema camera that records ProRes or XF-AVC at up to 4K resolution onboard to CFast cards. The full-frame sensor offers higher resolution and a shallower depth of field.

Storage

Storage is of interest to many. As costs come down, collaboration is easier than ever. The direct-attached vendors, like G-Tech, LaCie, OWC, and Promise, were all there with new products. So were the traditional shared storage vendors, like Avid, Facilis, Tiger, 1 Beyond, and EditShare. But three of the newer companies had my interest.

In my editing day job, I work extensively with QNAP, which currently offers the best price/performance ratio of any system. It’s reliable, cost-effective, and provides reasonable JKL response when cutting HD media with Premiere Pro in a shared editing installation. But it’s not the most responsive, and it struggles with 4K media in spite of plenty of bandwidth – especially when the editors are all banging away. This has me looking at both Lumaforge and OpenDrives.

Lumaforge is known to many Final Cut Pro X editors, because the developers have optimized the system for FCPX and have had early successes with many key installations. Since then, they have also pushed into more Premiere-based installations. Because these units are engineered for video-centric facilities, as opposed to data-centric ones, they promise a better shared storage experience for video editing.

Likewise, OpenDrives made its name as the provider for high-profile film and TV projects cut on Premiere Pro. Last year they came to the show with their highest-performance, all-SSD systems. Those units are pricey and, therefore, don’t have broad appeal. This year they brought systems that are applicable to a wider user base, including spinning disk and hybrid products. All are truly optimized for Premiere Pro.

The cloud

In other storage news, “the cloud” garners a ton of interest. The biggest vendors are Microsoft, Google, IBM, and Amazon. While each of these offers relatively easy ways to use cloud-based services for back-up and archiving, if you want a full cloud-based installation for all of your media needs, then actual off-the-shelf solutions are not readily available. The truth of the matter is that each of these companies offers APIs, which are then handed off to other vendors – often for totally custom solutions.

Avid and Sony seem to have the most complete offerings, with Sony Ci being the best one-size-fits-all answer for customer-facing services. Of course, if review-and-approval is your only need, then Frame.io leads and will have new features rolled out during the year. IBM/Aspera is a great option for standard archiving, because fast Aspera up and down transfers are included. You get your choice of IBM or other (Google, Amazon, etc.) cloud storage. They even offer a trial period using IBM storage for 30 days at up to 100GB free. Backblaze is a competing archive solution with many partnering applications. For example, you can tie it in with Archiware’s P5 Suite of tools for back-up, archiving, and server synchronization to the cloud.

Naturally, when you talk of the “cloud”, many people interpret that to mean software that runs in the cloud – SaaS (software as a service). In most cases, that is nowhere close to happening. However, the exception is The Foundry, which was showing Athera, a suite of its virtualized applications, like Nuke, running on the Google Cloud Platform. They demo’ed it running inside the Chrome browser, thanks to this partnership with Google. The Foundry had a pod in the Google partners pavilion.

In short, you can connect to the internet with a laptop, activate a license for the tool or tools that you need, and then all media, processing, and rendering are handled in the cloud, using Google’s services and hardware. Since all of this happens on Google’s servers, only an updated UI image needs to be pushed back to the connected computer’s display. This concept is ideal for the visual effects world, where the work is generally done on an individual shot basis without a lot of media being moved in real time. The target is the Nuke-centric shop that may need to add a few freelancers quickly, who may or may not be able to work on-premises.

Interesting newcomers

As I mentioned at the beginning, part of the joy of NAB is discovering the small vendors who seek out NAB to make their mark. One example this year is Lumberjack Systems, a venture by Philip Hodgetts and Greg Clarke of Intelligent Assistance. They were in the Lumaforge suite demonstrating Lumberjack Builder, which is a text-based NLE. In the simplest of explanations, your transcription or scripted text is connected to media. As you re-arrange or trim the text, the associated picture is edited accordingly. Newly written text for voiceovers turns into spoken-word media, courtesy of the computer’s internal audio system and system voice. Once your text-based rough cut is complete, an FCPXML is sent to Final Cut Pro X for further finesse and final editing.

Another new vendor I encountered was Quine, co-founded by Norwegian DoP Grunleik Groven. Their QuineBox IoT device attaches to the back of a camera, where it can record and upload “conformable” dailies (ProRes, DNxHD) to your SAN, as well as proxies to the cloud via its internal wi-fi system. Script notes can also be incorporated. The unit has already been battle-tested on the Netflix/NRK production of “Norsemen”.

Closing thoughts

It’s always interesting to see, year over year, which companies are not at the show. This isn’t necessarily indicative of a company’s health, but can signal a change in their direction or that of the industry. Sometimes companies opt for smaller suites at an area hotel in lieu of the show floor (Autodesk). Or they are a smaller part of a reseller or partner’s booth (RED). But often, they are simply gone. For instance, in past years drones were all the rage, with a lot of different manufacturers exhibiting. DJI has largely captured that market for both vehicles and camera systems. While there were a few other drone vendors besides DJI, GoPro and Freefly weren’t at the show at all.

Another surprise change for me was the absence of SAM (Snell Advanced Media) – the hybrid company formed out of Snell & Wilcox and Quantel. SAM products are now part of Grass Valley, which, in turn, is owned by Belden (the cable manufacturer). Separate Snell products appear to have been absorbed into the broader Grass Valley product line. Quantel’s Go and Rio editors continue in Grass Valley’s editing line, alongside Edius – as simple, middle, and advanced NLE products. A bit sad actually. And very ironic. Here we are in the world of software and file-based video, but the company that still has money to make acquisitions is the one with a heavy investment in copper (I know, not just copper, but you get the point).

Speaking of “putting a fork in it”, I would have to say that stereo 3D and 360 VR are pretty much dead in the film and video space. I understand that there is a market – potentially quite large – in gaming, education, simulation, engineering, training, etc. But for more traditional entertainment projects, it’s just not there. Vendors were down to a few, and even though the leading NLEs have ways of working with 360 VR projects, the image quality still looks awful. When you view a 4K image within even the best goggles, the qualitative experience is like watching a 1970s-era TV set from a few inches away. For now, it continues to be a novelty looking for a reason to exist.

A few final points… It’s always fun to see which computers are being used in the booths. Apple is again a clear winner, with plenty of MacBook Pros and iMac Pros all over the LVCC wherever creative products or demos were being shown. eGPUs are of interest, with Sonnet being the main vendor. However, an eGPU is not a solution that solves every problem. For example, you will see more benefit by adding an eGPU to a lesser-powered machine, like a 13” MacBook Pro, than to one with more horsepower, like an iMac Pro. Each eGPU takes one Thunderbolt 3 bus, so realistically, you are likely to add only one additional eGPU to a computer. None of the NLE vendors could really tell me how much of a boost their application would get with an eGPU. Finally, if you are looking for some great-looking, large OLED displays that are pretty darned accurate and won’t break the bank, then LG is the place to look.

©2018 Oliver Peters

NLE as Post Production Hub


As 2009 closed, I wrote a post about Final Cut Studio as the center of a boutique post production workflow. A lot has changed since then, but that approach is still valid and a number of companies can fill those shoes. In each case, rather than be the complete, self-contained tool, the editing application becomes the hub of the operation. Other applications surround it and the workflow tends to go from NLE to support tool and back for delivery. Here are a few solutions.

Adobe Premiere Pro CC

No current editing package comes as close to the role of the old Final Cut Studio as does Adobe’s Creative Cloud. You get nearly all of the creative tools under a single subscription, and facilities with a team account can equip every room with the full complement of applications. When designed correctly, workflows in any room can shift from edit to effects to sound to color correction – according to the load. In a shared storage operation, projects can stay in a single bay for everything or shift from bay to bay based on operator specialty and talent.

While there are many tools in the Creative Cloud kit, the primary editor-specific applications are Premiere Pro CC, After Effects CC, and Audition CC. It goes without saying that for most, Photoshop CC and Adobe Media Encoder are also givens. On the other hand, I don’t know too many folks using Prelude CC, so I can’t say what the future of this tool will be – especially since the next version of Premiere Pro includes built-in proxy transcoding. Also, as more of SpeedGrade CC’s color correction tools make it into Premiere Pro, it’s clear that SpeedGrade itself is getting very little love. The low-cost market for outboard color correction software has largely been lost to DaVinci Resolve (free). For now, SpeedGrade is really “dead man walking”. I’d be surprised if it’s still around by mid-2017. That might also be the case for Prelude.

Many editors I know who are heavy into graphics and visual effects do most of that work in After Effects. With CC and Dynamic Link, there’s a natural connection between the Premiere Pro timeline and After Effects. A similar tie can exist between Premiere Pro and Audition. I find the latter to be a superb audio post application that, from my experience, provides the best transfer of a Premiere Pro timeline into any audio application. This connection is being further enhanced by the updates coming from Adobe this year.

Rounding out the package is Photoshop CC, of course. While most editors are not big Photoshop artists, it’s worth noting that this application also enables animated motion graphics. For example, if you want to create an animated lower-third banner, it can be done completely inside of Photoshop without ever needing to step into After Effects. Drop the file onto a Premiere Pro timeline and it’s complete, with animation and proper transparency values. Update the text in Photoshop and hit “save” – voila, the graphic is instantly updated within Premiere Pro.

Given the breadth and quality of tools in the Creative Cloud kit, it’s possible to stay entirely within these options for all of a facility’s post needs. Of course, roundtrips to Resolve, Baselight, ProTools, etc. are still possible, but not required. Nevertheless, in this scenario I typically see everything starting and ending in Premiere Pro (with exports via AME), making the Adobe solution my first vote for the modern hub concept.

Apple Final Cut Pro X

Apple walked away from the market for an all-inclusive studio package. Instead, it opted to offer more self-contained solutions that don’t have the same interoperability as before, nor that of the comparable Adobe solutions. To build up a similar toolkit, you would need Final Cut Pro X, Motion, Compressor, and Logic Pro X. An individual editor/owner would purchase these once and install them on as many machines as he or she owns. A business would have to buy each application for each separate machine. So a boutique facility would need a full set for each room, or it would have to build rooms by specialty – edit, audio, graphics, etc.

Even with this combination, there are missing links when going from one application to another. These gaps have to be plugged by the various third-party productivity solutions, such as Clip Exporter, XtoCC, 7toX, Xsend Motion, X2Pro, EDL-X and others. These provide better conduits between Apple applications than Apple itself provides. For example, only through Automatic Duck Xsend Motion can you get an FCPX project (timeline) into Motion. Marquis Broadcast’s X2Pro Audio Convert provides a better path into Logic than the native route.

If you want the sort of color correction power available in Premiere Pro’s Lumetri Color panel, you’ll need more advanced color correction plug-ins, like Hawaiki Color or Color Finale. Since Apple doesn’t produce an equivalent to Photoshop, look to Pixelmator or Affinity Photo for a viable substitute. Although powerful, you still won’t get quite the same level of interoperability as between Photoshop and Premiere Pro.

Naturally, if your desire is to use non-Apple solutions for graphics and color correction, then similar rules apply as with Premiere Pro. For instance, roundtripping to Resolve for color correction is pretty solid using the FCPXML import/export function within Resolve. Prefer to use After Effects for your motion graphics instead of Motion? Then Automatic Duck Ximport AE on the After Effects side has your back.

Most of the tools are there for those users wishing to stay in an Apple-centric world, provided you add a lot of glue to patch over the missing elements. Since many of the plug-ins for FCPX (Motion templates) are superior to a lot of what’s out there, I do think that an FCPX-centric shop will likely choose to start and end in X (possibly with a Compressor export). Even when Resolve is used for color correction, I suspect the final touches will happen inside of Final Cut. It’s more of the Lego approach to the toolkit than the Adobe solution, yet I still see it functioning in much the same way.

Blackmagic Design DaVinci Resolve

It’s hard to say what Blackmagic’s end goal is with Resolve. Clearly the world of color correction is changing. Every NLE developer is integrating quality color correction modules right inside of their editing application. So it seems only natural that Blackmagic is making Resolve into an all-in-one tool for no other reason than self-preservation. And by golly, they are doing a darn good job of it! Each version is better than the last. If you want a highly functional editor with world-class color correction tools for free, look no further than Resolve. Ingest, transcoded and/or native media editing, color correction, mastering and delivery – all there in Resolve.

There are two weak links – graphics and audio. On the latter front, the internal audio tools are good enough for many editors. However, Blackmagic realizes that specialty audio post is still the domain of the sound engineering world, which is made up predominantly of Avid Pro Tools shops. To make this easy, Resolve has built-in audio export functions to send the timeline to Pro Tools via AAF. There’s no roundtrip back, but you’d typically get composite mixed tracks back from the engineer to lay into the timeline.

To build on the momentum it started, Blackmagic Design acquired the assets of EyeOn’s Fusion software, which gives them a node-based compositor suitable for visual effects and some motion graphics. This requires a different mindset than After Effects with Premiere Pro or Motion with Final Cut Pro X (when using Xsend Motion). You aren’t going to send a full sequence from Resolve to Fusion. Instead, the Connect plug-in links a single shot to Fusion, where it can be effected through a series of nodes. The Connect plug-in provides a similar “conduit” function to that of Adobe’s Dynamic Link between Premiere Pro and After Effects, except that the return is a rendered clip instead of a live project file. To take advantage of this interoperability between Resolve and Fusion, you need the paid versions.

Just as in Apple’s case, there really is no Blackmagic-owned substitute for Photoshop or an equivalent application. You’ll just have to buy what matches your need. While it’s quite possible to build a shop around Resolve and Fusion (plus maybe Pro Tools and Photoshop), it’s more likely that Resolve’s integrated approach will appeal mainly to those folks looking for free tools. I don’t see too many advanced pros doing their creative cutting on Resolve (at least not yet). However, that being said, it’s pretty close, so I don’t want to slight the capabilities.

Where I see it shine is as a finishing or “online” NLE. Let’s say you perform the creative or “offline” edit in Premiere Pro, FCPX, or Media Composer. This could even be three editors working on separate segments of a larger show – each on a different NLE. Each editor’s sequence goes to Resolve, where the timelines are imported, combined, and relinked to the high-res media. The audio has gone via a parallel path to a Pro Tools mixer, and graphics come in as individual clips, shots, or files. Then all is combined inside Resolve, color corrected, and delivered straight from Resolve. For many shops, that scenario is starting to look like the best of all worlds.

I tend to see Resolve as less of a hub than either Premiere Pro or Final Cut Pro X. Instead, I think it may take several possible positions: a) color correction and transcoding at the front end, b) color correction in the middle – i.e. the standard roundtrip, and/or c) the new “online editor” for final assembly, color correction, mastering and delivery.

Avid Media Composer

This brings me to Avid Media Composer, the least integrated of the bunch. You can certainly build an operation based on Media Composer as the hub – as so many shops have. But there simply isn’t the silky smooth interoperability among tools like there is with Adobe or the dearly departed Final Cut Pro “classic”. However, that doesn’t mean it’s not possible. You can add advanced color correction through the Symphony option, plus Avid Pro Tools in your mixing rooms. In an Avid-centric facility, rooms will definitely be task-oriented, rather than provide the ease of switching functions in the same suite based on load, as you can with Creative Cloud.

The best path right now is Media Composer to Pro Tools. Unfortunately it ends there. Like Blackmagic, Avid only offers two hero applications in the post space – Media Composer/Symphony and Pro Tools. They have graphics products, but those are designed and configured for news on-air operations. This means that effects and graphics are typically handled through After Effects, Boris RED or Fusion.

Boris RED runs as an integrated tool, which augments the Media Composer timeline. However, RED uses its own user interface. That operation is relatively seamless, since any “roundtrip” happens invisibly within Media Composer. Fusion can be integrated using the Connect plug-in, just like between Fusion and Resolve. Automatic Duck’s AAF import functions have been integrated directly into After Effects by Adobe. It’s easy to send a Media Composer timeline into After Effects as a one-way trip. In fact, that’s where this all started in the first place. Finally, there’s also a direct connection with Baselight Editions for Avid, if you add that as a “plug-in” within Media Composer. As with Boris RED, clips open up in the Baselight interface, which has now been enhanced with a smoother shot-to-shot workflow inside of Media Composer.

While a lot of shops still use Media Composer as the hub, this seems like a very old-school approach. Many editors still love this NLE for its creative editing prowess, but in today’s mixed-format, mixed-codec, file-based post world, Avid has struggled to keep Media Composer competitive with the other options. There’s certainly no reason Media Composer can’t be the center – with audio in Pro Tools, color correction in Resolve, and effects in After Effects. However, most newer editors simply don’t view it the same way as they do with Adobe or even Apple. Generally, it seems the best Avid path is to “offline” edit in Media Composer and then move to other tools for everything else.

So that’s post in 2016. Four good options with pros and cons to each. Sorry to slight the Lightworks, Vegas Pro, Smoke/Flame and Edius crowds, but I just don’t encounter them too often in my neck of the woods. In any case, there are plenty of options, even starting at free, which makes the editing world pretty exciting right now.

©2016 Oliver Peters

NLEs at NAB 2015


NAB is the biggest toy store in our industry. As in years past, I’ve covered it for DV magazine, where you’ll find the expanded version. The following is the segment covering the four – soon to become five – most popular NLE vendors.

Editing options largely focused on the four “A” companies – Apple, Adobe, Avid and Autodesk. Apple wasn’t officially at the show, but held private press meetings at an area hotel. Consulting company FCPworks presented a series of workflow and case study sessions at the Renaissance Hotel next door to the South Hall. This coincided with Apple’s release of the updated versions of Final Cut Pro X, Motion and Compressor. FCP X 10.2 includes a number of enhancements, but the most buzz went to the addition of a new 3D text engine for FCP X and Motion. Apple’s implementation is one of the easiest to use and best-looking in any application. The best part is that the performance is excellent. Two other big features fall more in line with user wish lists: built-in masking and the change of the color correction tool into a standard effect filter. Compressor has now added a preset designed for iTunes submission. Although Apple still encourages users to go to iTunes through an approved third-party portal, this new preset makes it easier to create the proper file package necessary for delivery.

Adobe has the momentum as the next up-and-coming professional editing tool. At NAB, Adobe was showing technology previews of the application features that will be released as part of a Creative Cloud subscription in the coming months. Premiere Pro CC now integrates more of SpeedGrade CC’s color correction capabilities through the addition of the Lumetri Color panel. This tabbed control integrates tools that are familiar from SpeedGrade, but also from Lightroom. Since Premiere already includes built-in masking and tracking, this means the editor is capable of doing very sophisticated color correction right inside of Premiere. Morph Cut is a new effect that everyone cutting interviews will love. The effect is designed to smoothly transition across jump cuts in a seamless manner. It uses advanced tracking and frame interpolation functions to build new “in-between” frames. After Effects adds an outstanding face tracker and improved previews: you can view design iterations, adjust composition properties, and even resize interface panels without halting composition playback. The face tracker locks onto specific points (pupils, mouth, nose), which enables accurate tracking when elements need to be composited onto an actor’s face.

Adobe is also good for out-of-the-box thinking on new technologies. Character Animator was demonstrated as a live animation tool. Using real-time facial tracking, such as from a laptop’s webcam, the animator can do live animation keyframing of an on-screen cartoon character. Import a cartoon character as a layered Photoshop file as the starting point. When you move and talk, so does the character in real time – all controlled by the tracking. Not only can you add the real-time animation, but certain animation functions are automatically applied, like a character’s breathing motion. Another interesting tool is Candy. This is a mobile app which analyzes the tonal color scheme of photos stored on your mobile device. It creates a “look” file and stores it to your Creative Cloud library. This, in turn, can be synced with your copy of Premiere Pro CC and then applied as a color correction look to any video clip.

Avid ran the second annual Avid Connect event for members of their customer association over the weekend leading up to the NAB exhibition. Although this was the first show appearance of Media Composer 8.3.1 – Avid’s first move into true 4K editing – they did very little to promote it. That’s not to say there wasn’t any news. Several new products were announced, including the Avid Artist | DNxIO. Instead of developing their own 4K hardware, Avid opted to partner with Blackmagic Design. The DNxIO is essentially the same as the UltraStudio 4K Extreme, except with the addition of Avid’s DNxHR codec embedded into the unit. Only Avid will sell the Avid-branded version and will also provide any technical support. The DNxIO supports both PCIe and Thunderbolt host connections and can also be used with Adobe Premiere Pro CC, Apple FCP X, and DaVinci Resolve running on the same workstation as Media Composer.

In an effort to attract new users to Media Composer, Avid also announced Media Composer | First. This is a free version with a reduced feature set. It’s intended as functional starter software from which users will hopefully transition to the full, paid application. However, it uses a “freemium” sales model, allowing users to extend functionality through add-on purchases. For example, Media Composer | First permits users to only store three active projects in the cloud. Additional storage for more projects can be purchased from Avid.

Autodesk’s NAB news was all about the 2016 versions of Flame, Maya and 3ds Max. Flame and Flame Premium customers gain new look development tools, including Lightbox – a GPU shader toolkit for 3D color correction – and Matchbox in the Action module. This applies fast Matchbox shaders to texture maps without leaving the 3D compositing scene. Maya 2016 received performance and ease-of-use enhancements. There are also new capabilities in Bifrost to help deliver realistic liquid simulations. 3ds Max 2016 gains a new, node-based creation graph, a new design workspace and template system, as well as other design enhancements. If you’ve been following Smoke, then this NAB was disappointing. Autodesk told me that an update is in the works, but development timing didn’t allow it to be ready in time for the show. I would presume we’ll hear something at IBC.

For editors, all eyes are on Blackmagic Design. DaVinci Resolve 12 was demonstrated, which is the first version that the company feels can compete as a full-fledged NLE. Last NAB, Resolve 11 was introduced as an online editor, but once it was out in the wild, most users found the real-time performance wasn’t up to par with other NLEs. Resolve 12 appears to have licked that issue, with a new audio engine and improved editing features. New in Resolve 12 is a multi-camera editing mode with the ability to sync angles by audio, timecode, or in/out points. The new, high-performance audio engine was designed to greatly improve real-time playback, but it also supports VST and AU audio plug-ins. Editors will also be able to export projects to Pro Tools using AAF. Don’t forget that there are also updates to its color correction functions. Aside from interface and control enhancements, the most notable additions are a new keyer and a new perspective tracker. The latter will allow users to better track objects that move off-screen during the clip. Resolve 12 is scheduled to be released in July. Blackmagic acquired Fusion last year. It’s a node-based compositing application built on Windows. At the booth, Blackmagic previewed Fusion 8 on the Mac and announced that it will be available for Windows, Mac, and Linux. Like Resolve, Fusion 8 will be offered in both a free and a paid version.

This post is an abbreviated overview written for CreativePlanetNetwork and Digital Video magazine. Click here for the full-length version to find out about more post news, as well as cameras, effects and other items presented at NAB.

©2015 Oliver Peters

Tips for Production Success – Part 2

Picking up from my last post (part 1), here are 10 more tips to help you plan for a successful production.

Create a plan and work it. Being a successful filmmaker – that is, making a living at it – is more than just producing a single film. Such projects almost never go beyond the festival circuit, even if you do think it is the “great American film”. An indie producer may work on a project for about four years, from the time they start planning and raising the funds – through production and post – until real distribution starts. Therefore, the better approach is to start small and work your way up. Start with a manageable project or film with a modest budget and then get it done on time and in budget. If that’s a success, then start the next one – a bit bigger and more ambitious. If it works, rinse and repeat. If you can make that work, then you can call yourself a filmmaker.

Budget. I have a whole post on this subject, but in a nutshell, an indie film that doesn’t involve union talent or big special effects will likely cost close to one million dollars, all in. You can certainly get by on less. I’ve cut films that were produced for under $150,000 and one even under $50,000, but that means calling in a lot of favors and having many folks working for free or on deferment. You can pull that off one time, but it’s not a way to build a business, because you can’t go back to those same resources and ask to do it a second time. Learn how to raise the money to do it right and proceed from there.

Contingencies at the end. Intelligent budgeting means holding a bit in reserve for the end. A number of films that I’ve cut had to do reshoots or spend extra days to shoot more inserts, establishing shots, etc. Plan for this to happen and make sure you’ve protected these items in the budget. You’ll need them.

Own vs. rent. Some producers see their film projects as a way to buy gear. That may or may not make sense. If you need a camera and can otherwise make money with it, then buy it. Or if you can buy it, use it, and then resell it to come out ahead – by all means follow that path. But if gear ownership is not your thing and if you have no other production plans for the gear after that one project, then it will most likely be a better deal to work out rentals. After all, you’re still going to need a lot of extras to round out the package.

Shooting ratios. In the early 90s, I worked on the post of five half-hour and hourlong episodic TV series that were shot on 35mm film. Back then, shooting ratios were pretty tight. A half-hour episode is about 20-22 minutes of content, excluding commercials, bumpers, open, and credits. An hourlong episode is about 44-46 minutes of program content. Depending on the production, these were shot in three to five days and exposed between 36,000 and 50,000 feet of negative. Since only selected takes were transferred, a typical day meant 50-60 minutes of “dailies” to edit from – or no more than five hours of source footage per episode, depending on the series. This would put them close to the ideal mark (on average) of approximately a 10:1 shooting ratio.
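Those film-era numbers are easy to sanity-check, since 35mm 4-perf film runs through the camera at 90 feet per minute at 24fps (16 frames per foot). A back-of-the-envelope calculation in Python, using the mid-range of the figures above:

```python
FEET_PER_MINUTE = 90        # 35mm 4-perf at 24fps (16 frames per foot)

negative_feet = 45_000      # mid-range of the 36,000-50,000 feet cited above
source_minutes = negative_feet / FEET_PER_MINUTE   # = 500 minutes exposed
program_minutes = 45        # content of an hourlong episode

print(f"{source_minutes / program_minutes:.1f}:1")  # -> 11.1:1, near the 10:1 goal
```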

Today, digital cameras make life easier, and with the propensity to shoot two or more cameras on a regular basis, the same projects might conservatively generate more than 10 hours of source footage for each episode. This impacts post tremendously – especially if the deadline is a factor. As a new producer, you should strive to control these ratios and stay within the goal of 10:1 (or lower).

Block and rehearse. The more a scene is buttoned down, the fewer takes you’ll need, which leads to a tighter shooting ratio. This means rehearse a scene and make sure the camera work is properly blocked. Don’t wing it! Once everything is ready, shoot it. Odds are you’ll get it in two to three takes instead of the five or more that might otherwise be required.

Control the actors. Unless there's a valid reason to let your actors improvise, make sure the acting is consistent: lines are read in the same order, props are handled at the same points, and actors hit their marks on every take. If you stray from that discipline, editing takes longer. Allowed too much freewheeling improvisation, actors may inadvertently paint you into a corner. To avoid that outcome, control it from the start.

Visual effects planning. Most films don’t require special effects, but there are often “invisible” fixes that can be created through visual effects. For example, combining elements of two takes or adding items to a set. A recent romantic drama I post-supervised used 76 effects shots of one type or another. If this is something that helps the project, make sure to plan for it from the outset. Adobe After Effects is the ubiquitous tool that makes such effects affordable. The results are great and there are plenty of talented designers who can assist you within almost any budget range.

Multiple cameras vs. single camera vs. 4K. Some producers like the idea of shooting interviews (especially two-shots) in 4K for a 1080 finish, then slicing out the frame they want. I contend that 4K often presents focus issues, due to the larger sensors used in these cameras. In addition, the optics of slicing a region out of a 4K image are different from using another camera or zooming in to reframe the shot, so the look you get isn't "quite right". Naturally, it also adds one more task for the editor – reframing each and every shot.
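For what it's worth, the resolution math behind the slice-out approach is easy to run – the numbers work even when the optics don't. A quick sketch, using standard UHD and HD frame sizes (the 1.5x crop is just an example):

```python
# How much reframing headroom a UHD source gives a 1080 finish. Illustrative.
SOURCE_W, SOURCE_H = 3840, 2160  # UHD "4K" acquisition
FINISH_W, FINISH_H = 1920, 1080  # HD delivery

max_punch_in = SOURCE_W / FINISH_W  # 2.0x before you start upscaling pixels
print(f"Max punch-in without upscaling: {max_punch_in:.1f}x")

# Even a 1.5x "zoom" still leaves extra resolution to downsample:
crop_w, crop_h = SOURCE_W / 1.5, SOURCE_H / 1.5
print(f"1.5x crop region: {crop_w:.0f} x {crop_h:.0f} -> scaled to {FINISH_W} x {FINISH_H}")
```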

Conversely, when shooting a locked-off interview with one person on camera, two cameras make for an ideal edit. One camera might be placed face-on toward the speaker and the other at a side angle. Cutting between the two angles is visually more interesting and makes it easy to edit without visible jump cuts.

In dramatic productions, many new directors want to emulate the "big boys" and shoot every scene with two or more cameras. Unfortunately this isn't always productive: the lighting is compromised, one camera often ends up in an awkward position with poor framing, or – worse – the main camera blocks the secondary camera. At best, you might get 25% usable material out of that second camera. A better plan is to shoot in a traditional single-camera style: move the camera around for different angles, tweak the lighting to optimize the look, and run the scene again for each view.

The script is too long. An indie film script generally runs around 100 pages with 95-120 scenes. The film gets shot in 20-30 days and takes about 10-15 weeks to edit. If your script is inordinately long and takes many more days to shoot, it will also take many more days to edit, and the result will usually be a cut that is too long. The accepted "standard" for most films is 90-100 minutes. If you clock in at three hours, a lot of slashing has to occur. You can lose 10-15% (maybe) by trimming fat, but a reduction of 25-40% (or more) means cutting meat and bone: scenes have to be lost, the story re-arranged, or even more drastic measures taken. A careful reading of the script – visualizing it as the finished film – can head off these issues before production ever starts. Losing a scene before you shoot it saves time and money on a large scale. So analyze your script carefully.

Click here for Part 1.

©2015 Oliver Peters

Tips for Production Success – Part 1

Throughout this blog, I've written numerous tips about how to produce projects, notably indie features, with a successful outcome in mind. I've tried to educate on issues of budget and schedule. In these next two entries, I'd like to tackle 21 tips that will make your productions go more smoothly, finish on time, and not become a disaster during the post production phase. Although I've framed the discussion around indie features, the same tips apply to commercials, music videos, corporate presentations, and videos for the web.

Avoid white. Modern digital cameras handle white elements within a shot much better than in the past, but hitting a white shirt with a lot of light complicates your life when it comes to grading and directing the viewer's eye. This is largely an issue of art direction and wardrobe. The best way to handle it is simply to replace whites with off-white, bone, or beige colors. On the sitcom Barney Miller – which earned DP George Spiro Dibie recognition for getting artful looks out of his video cameras – the white shirts are said to have been washed in coffee to darken them a bit; once the cameras were set up and exposed, the shirts read as white again. The objective in all of this is to get the overall brightness into a range that is more controllable during color correction and to avoid clipping.

Expose to the right. When you look at a signal on a histogram, the brightest part is on the right-hand side of the scale. By pushing your camera's exposure toward a brighter, slightly over-exposed image ("to the right"), you'll end up with a better-looking image after grading (color correction). That's because when you brighten an image by bringing up highlights or midtones, you accentuate the camera's sensor noise. If the image is already bright and the correction lowers the levels, you end up with a cleaner final image. Since most modern digital cameras use some sort of log or hypergamma encoding to record a flatter signal that preserves latitude, opening up the exposure usually won't risk clipping the highlights. In the end, a look that stretches the shadows and mids to expose more detail gives you a more pleasing and informative image than one that places emphasis on the highlights.
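The underlying logic is easy to demonstrate. Below is a toy numpy sketch – synthetic values, not real sensor data – showing that lifting an underexposed image in the grade scales the noise up with it, while bringing an exposed-to-the-right image back down scales the noise down:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.25                            # an "underexposed" midtone value
noise = rng.normal(0.0, 0.01, 100_000)   # a fixed sensor noise floor

lifted = (signal + noise) * 2.0          # grade the dark image up one stop
print(f"noise after lifting:  {lifted.std():.4f}")   # ~0.020 - doubled

bright = (signal * 2.0) + noise          # exposed "to the right" instead
lowered = bright * 0.5                   # grade it back down
print(f"noise after lowering: {lowered.std():.4f}")  # ~0.005 - halved
```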

Blue vs. green-screen. Productions almost ubiquitously use green paint, but that's often the wrong choice. Each paint color has a different luminance value: green is brighter and should be reserved for composites where the talent is supposed to appear outdoors, while blue works best when the composited scene is an interior. Paint matters, too. The correct material is still a proper version of Ultimatte blue or green paint, but many people try to cut corners on cost – I've even had producers go so far as to rig up a silk with a blue lighting wash and expect me to key it! When you light the subject, move them as far away from the wall as possible to keep the color from contaminating their hair and wardrobe. By the same token, don't have your talent stand on a green or blue floor unless you actually intend to see the floor or frame them head to toe.
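To see why spill is so damaging, consider what even the crudest keyer does: it measures how much the key color dominates each pixel. The sketch below is purely illustrative – real keyers such as Ultimatte or Keylight are far more sophisticated – but it shows why green creeping onto hair and wardrobe eats into the matte:

```python
import numpy as np

def naive_green_matte(rgb: np.ndarray, threshold: float = 0.15) -> np.ndarray:
    """rgb: float image (H, W, 3) in 0-1. Returns a matte: 1 = keep, 0 = key out."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green_excess = g - np.maximum(r, b)  # how much green dominates the pixel
    return np.where(green_excess > threshold, 0.0, 1.0)

# Green spill raises green_excess on the subject, so spill-contaminated hair
# and wardrobe get keyed out too - hence keeping talent away from the wall.
frame = np.random.default_rng(1).random((4, 4, 3))  # stand-in for a real frame
print(naive_green_matte(frame))
```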

Rim lighting. Images stand out best when your talent has some rim lighting to separate them from the background. Even in a dark environment, seek to create a lighting scheme that achieves this rimming effect around their head and shoulders.

Tonal art direction. The various “blockbuster” looks are popular – particularly the “orange and teal” look. This style pushes skin tones warm for a slight orange appearance, while many darker background elements pick up green/blue/teal/cyan casts. Although this can be accentuated in grading, it starts with proper art direction in the set design and costuming. Whatever tonal characteristic you want to achieve, start by looking at the art direction and controlling this from step one.

Rec. 709 vs. Log. Digital cameras have nearly all adopted some method of recording an image with a flat gamma profile that is intended to preserve latitude until final grading. This doesn’t mean you have to use this mode. If you have control over your exposure and lighting, there’s nothing wrong with recording Rec. 709 and nailing the final look in-camera. I highly recommend this for “talking head” interviews, especially ones shot on green or blue-screen.

Microphone direction/placement. Every budding recording engineer in music and film production learns that proper mic placement is critical to good sound. Pay attention to where mics are positioned relative to the person speaking. For example, if two people in an interview are wearing lavaliere mics on their lapels, the proper placement is on each person's inner lapel – the side closer to the other person. Each person will turn toward the other to address them, talking over that shoulder, so a mic on the inner lapel means they are speaking into the mic. On the outer lapel, they would be speaking away from it, and the audio would tend to sound hollow. For the same reason, the operator of a boom or fishpole overhead mic needs to point it at the person talking and re-aim it as the conversation moves from one person to the next.

Multiple microphones/iso mics. When recording dialogue for a group of actors, it’s best to record their audio with individual microphones (lavs or overhead booms) and to record each mic on an isolated track. Cameras typically feature on-board recording of two to four audio channels, so if you have more mics than that, use an external multi-channel recorder. When external recording is used, be sure to still record a composite track to your camera for reference.

Microphone types. There are plenty of styles and types of microphones, but the important factors are size, tonal quality, range, and the pick-up pattern. Make sure you select the appropriate mic for the task. For example, if you are recording an actor with a deep bass voice on a lavaliere, you'd do best to use a type that gives you a full-spectrum recording, rather than one that favors only the low end.

Sound sync. There are plenty of ways to sync sound to picture in double-system sound situations. Synchronizing by matched timecode is ideal, but even there, issues can arise. Make sure the camera's and sound recorder's timecode generators don't drift during the day – or use a single, common, external timecode generator for both. It's generally best to also use a clapboard and, when possible, record reference audio to the camera. If you plan to sync by audio waveforms (PluralEyes, FCP X, Premiere Pro CC), make sure the camera's reference signal is of sufficient quality to make synchronization possible.
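Waveform syncing boils down to a cross-correlation: slide one track against the other and find the offset where they match best. Here's a toy sketch of the idea – PluralEyes and the NLEs use far more robust methods, and the test signal here is synthetic:

```python
import numpy as np
from scipy.signal import correlate

def find_offset_seconds(camera: np.ndarray, recorder: np.ndarray,
                        sample_rate: int = 48_000) -> float:
    """Seconds to delay the recorder track so it lines up with the camera."""
    corr = correlate(camera, recorder, mode="full", method="fft")
    lag = int(corr.argmax()) - (len(recorder) - 1)
    return lag / sample_rate

# Fake test: the recorder's audio shows up half a second later on the camera.
rng = np.random.default_rng(2)
recorder = rng.normal(size=48_000 * 4)  # 4 seconds of "dialogue"
camera = np.concatenate([np.zeros(24_000), recorder])[:48_000 * 4]
print(f"offset: {find_offset_seconds(camera, recorder):+.3f} s")  # ~ +0.500
```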

Record wild lines on set. When location audio is difficult to understand, ADR (automatic dialogue replacement, aka "looping") is required. This happens when the location recording suffers from outside factors like special effects or background noise. Not all actors are good at ADR, and it's not uncommon for ADR lines to jump out at the viewer when watching a scene. Since ADR requires extra recording time with the actor, it drives up cost on small films. One workaround in some of these situations is for the production team to recapture the lines separately – immediately after the scene is shot – if the schedule permits. These lines are recorded wild and may or may not be in sync, but the intent is to capture the right sonic environment and emotion while you are still on site. Since these situations are often fast-paced action scenes, sync might not have to be perfect; if it's close enough, the sound editors can fit the lines into place well enough that viewers won't notice. When it works, it saves ADR time down the road and sounds more realistic.

Click here for Part 2.

©2015 Oliver Peters