Video Technology 2020 – The Cloud

The “cloud” is merely a collection of physical data centers in multiple locations around the world – not much different than a small storage center you might have. Of course, they employ more advanced systems for power, redundancy, and security than you do. When you work with one of the companies marketing cloud-based editing or a review-and-approval service, like Frame.io or Wipster, they provide the user-facing interface, but are actually renting storage space from one of the big three cloud providers – Google, Amazon, or Microsoft.

There are three reasons that I’m skeptical about ubiquitous, cloud-based editing (with media at native resolutions) in the short term: upload speeds, cost, and security.

Speed

5G (fifth generation wireless) is the technology predicted to offer adequate speeds and low latency for native 4K (and higher) media. While 5G will be a great advancement for many things, it’s a short-distance signal that requires more transmission sites than current wireless technology. Full coverage in most metro areas, let alone widespread geographic coverage worldwide, will take many years to achieve. Other than potential camera-to-cloud uploads of proxy media in the field, 5G won’t be the killer solution any time soon. Current technology still dictates that if you want the fastest possible upload speeds for large amounts of data, you have to tap in as close as possible to the internet’s backbone.

Cost

Cloud storage is cheap, but moving large amounts of data in and out of it isn’t. Unfortunately, modern video resolutions also generate huge amounts of data on every shoot. Uploading native 4K media for a week-long production is considerably more expensive than FedEx and overnight charges to ship drives. What about long-term storage? Let’s say that all of your native media is in the cloud and you pay according to a monthly or annual subscription plan. But what if you want to stop? That media will have to be downloaded and stored locally, which will incur data transfer charges, as well as your time to download everything.
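
To put rough numbers on this, here is a quick back-of-the-envelope sketch in Python. The shoot size, upload bandwidth, and per-GB transfer price are placeholder assumptions for illustration only; plug in your own camera data rates, your measured upload speed, and your provider’s current pricing.

```python
# Rough estimate of moving a shoot's native media to (or back out of) the cloud.
# All numbers are placeholder assumptions - substitute your own camera data rates,
# measured upload bandwidth, and your provider's current transfer/egress pricing.

def transfer_estimate(total_tb, link_mbps, price_per_gb):
    total_gb = total_tb * 1000                   # decimal TB -> GB
    total_megabits = total_gb * 8 * 1000         # GB -> megabits
    hours = total_megabits / link_mbps / 3600    # sustained transfer time in hours
    cost = total_gb * price_per_gb               # transfer (or later egress) charge
    return hours, cost

if __name__ == "__main__":
    # Example: 2 TB of native 4K per shoot day x 5 days, a 200 Mbps connection,
    # and a hypothetical $0.05/GB transfer rate.
    hours, cost = transfer_estimate(total_tb=2 * 5, link_mbps=200, price_per_gb=0.05)
    print(f"Sustained transfer time: {hours:.1f} hours, data charges: ${cost:,.2f}")
```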

Security

Think these sites are unequivocally secure? Look at any data hack at a major company. Security is such a concern in our business that most major movie studios won’t let their editors connect their computers to the internet. Many make those editors check their cell phones at the door. No matter how secure the service, it’s going to be a hard sell, except for limited slices of the production, such as cloud-based VFX rendering.

I do believe 2020 will be a year in which many will take advantage of some form of long-distance, cloud-based edit services using low-res proxy media. Increasingly, some services will be used to move dailies and deliverables around the globe via the cloud. But that’s a far cry from cloud-based editing becoming the norm. One edit scenario many will experiment with is to store the edit project files in the cloud, but with the media mirrored locally at each edit site. This way, only the lightweight files used for edit collaboration need to be moved over the internet. Think of this as Google Docs for editing. Adobe already offers a version of this, but I suspect you’ll see others, including solutions for Final Cut Pro X. So while true cloud-based editing is not a near-term solution, bits and pieces will become increasingly commonplace.

Originally written for Creative Planet Network.

©2020 Oliver Peters

Video Technology 2020 – Apple and the PC Landscape

Apple enjoys a small fraction of the total computer market, yet has an outsized influence on video production and post. Look anywhere in our business and you’ll see a high percentage of Apple Mac computers and laptops in use by producers, DITs, editors, mixers, and colorists. This has influenced the development and deployment of certain technologies, such as optimization for Metal, Thunderbolt I/O, ProRes codecs, and more. This may irritate Windows users, but it’s something companies like Avid, Adobe, and others cannot ignore. Apple has deprecated OpenGL and OpenCL (and dropped CUDA support) in favor of Metal, and so developers of software for Apple computers will follow suit so that their Mac-based customers enjoy a good experience.

Going into 2020, Apple is offering a better line-up of professional Mac products than it has in years. MacBook Pro laptops, iMacs and iMac Pros, and the new Mac Pro are clearly targeted at the professional customer. Add to this the Pro Display XDR and authorized third-party products available through Apple, like LumaForge Jellyfish storage and Blackmagic and Sonnet eGPUs. Clearly Apple intends to offer an end-to-end hardware and software ecosystem designed to appeal to the pro video customer.

Apple’s prices can be a turn-off for some. Similar investments in a PC – especially custom configurations – may yield better performance in certain applications. Nevertheless, most former and present owners of Mac Pro “cheese grater” towers feel like they got their money’s worth and will at least have interest in the new Mac Pro. Same for MacBook Pro owners. So while these new machines may not move the needle for the larger consumer computer market, they will definitely keep current Mac users in the fold and prevent migration to Windows or Linux PCs. It also reinforces Apple’s interest in the professional market – not just video, but also animation, design, audio, science, and engineering.

The unknown will be the impact of Apple’s new Afterburner card for the Mac Pro. While accelerator cards have been offered by various manufacturers in the past, recent computing developments have focused on processor core counts and GPU technology. Afterburner is Apple’s first FPGA-based (field-programmable gate array) hardware accelerator card. Designed for transcoding, it promises to increase stream counts with 4K and 8K raw and standard codecs on the Mac Pro. Once it’s out in the wild, we will have a better idea of who supports it (beyond Apple’s own software) and its real-world performance.

As Apple goes, so goes the rest of the industry. How will the PC world counter this? Will we see similar cards from HP or Dell? Or will NVIDIA respond with similar results using their GPUs? That’s unknown right now, but my guess is that it will take at least this next year for the rest of the world to respond with competing solutions.

Originally written for Creative Planet Network.

©2020 Oliver Peters

Storage Reliability

Recently I’ve written about storage strategies designed to future-proof access to your files. Other than questions of whether future software can still play your files, the biggest issue is whether or not the media is playable at all in a number of years. Unfortunately, there are simply no guarantees. All media can and does fail. Let’s look at the various options.

Everyone touts “the cloud” as the ultimate solution. Although cloud-based storage space is relatively cheap, the cost and data charges for massive uploads and downloads, along with local internet speeds, pose the stumbling blocks. There’s very little in the near term to change that. Remember, too, that cloud storage is a subscription service that never ends if you want to keep that media in the cloud.

The LTO (Linear Tape Open) data tape format is considered the “gold standard” for physical back-up and retrieval, but it’s really a format designed for long-term industrial and financial data applications. In other words, back it up once and forget it unless you need to restore from a backup tape in the future.

While many studios require original camera footage for major feature films to be archived onto LTO, the format doesn’t fit well into the needs of most small-to-medium production companies and post houses. There are three reasons for this: 1) As file capacities grow, LTO barely keeps up in capacity and transfer speed. 2) The LTO standards keep evolving with limited forward or backward version compatibility. 3) If you need to continually go back to your archive to revise and update older projects, the linear design of LTO isn’t very attractive. In addition, frequent shuttling back and forth on LTO tapes to retrieve materials from random sections of the tape will cause a tape to fail before its rated life.

One alternative to LTO is Sony’s Optical Disc Archive. It’s essentially a videotape deck-sized unit that records on writeable optical media (like a Blu-ray disc). They offer a robotic juke-box type of system for automated retrieval with large library systems. It’s a robust solution, but is mainly relevant to large facilities, such as at broadcast networks.

Storing on a large, RAID-protected array is a good short-term idea, but it won’t be very cost-effective as your storage needs mount. I don’t recommend small 2-drive or 4-drive RAID enclosures for extended storage. These are more likely to have the RAID structure (whether hardware or software) fail and leave you with nothing accessible on that array. In my experience, single, enterprise-grade drives are more reliable. I buy these as raw drives (so I’m not paying extra for a power supply and interface with every drive) and mount them in a drive dock when I need to use them.

Hard drives do carry a manufacturer’s warranty for a rated lifespan, but I will reiterate that there are no guarantees. A drive with a 3-year warranty may last as long as one with a 5-year warranty, and either could fail in one year or last 10 years or longer. I currently have some drives that are about that old. With drive failure always a looming possibility, the reasonable strategy is to maintain multiple copies of any media of value. Three copies is the common recommendation.

Let’s address how to select the drive to buy. Most of these types of drives come in several speeds and warranty levels. 5400 or 7200 RPM are the normal speed offerings. Both are fine for archiving, but 7200 is preferred if you occasionally need to edit directly from them. Warranties are usually three or five years. As with any physical media, the warranty covers the replacement of the product, but not the value of the data stored on it, which you may have permanently lost.

A warranty is like life insurance. A 5-year warranty doesn’t guarantee that a drive will outlast a 3-year model. The company has developed actuarial tables that tell them that statistically enough of the 5-year drives last to the 5-year mark, so they won’t lose too much money by replacing the few drives that do fail. Sometimes the difference between three and five years may simply be that drives tested with more minor errors end up in the 3-year pile, while the ones with fewer errors go into the 5-year pile. I haven’t looked into the manufacturing specifics too deeply, but that’s generally how product warranties work.

With those two criteria in mind, I usually purchase 7200 RPM enterprise-grade drives with 5-year warranties. These are drives intended to be used in servers and shared storage systems running 24/7/365. There has been a lot of consolidation in the hard drive business, so regardless of the brand name, there are really only a handful of companies manufacturing the drives.

One source to track which drives to buy is Backblaze. They are a cloud provider that publishes its testing results, based on a current pool of over 100,000 drives in operation. Right now the front-runners are Toshiba, HGST (Hitachi enterprise drives), and Seagate. The HGST brand has been absorbed by Western Digital. All of these are good options. I also hold back from the largest drives rather than be on the bleeding edge. For example, you can now purchase 14 TB drives, but I’ll tend to stick with 8 TB for a while.

Mechanical hard drives are meant to spin, not to sit on a shelf indefinitely. Periodically load each drive into a dock and spin it up. Make sure the contents are still retrievable and files can be opened. This process should happen no less than once a year; more frequently is even better. And yes, if you have 100 drives in your archive, don’t get lazy. This needs to be done. If a drive sounds odd, has difficulty spinning up or mounting, or has a lot of vibration, then clone and replace it ASAP, because it’s likely to fail soon.
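
If you’d rather not rely on spot-checking a few files by hand, a simple script can force a full read of everything on the docked drive and flag anything unreadable. This is only a minimal sketch: the mount point is a hypothetical example, and a stricter version would compare the files against checksums saved when the drive was originally written.

```python
# Minimal verification pass for an archive drive spun up in a dock.
# Walks the mounted volume, reads every file in full, and logs anything unreadable.
# The mount path below is a hypothetical example - use your drive's actual mount point.
import hashlib
import os

ARCHIVE_ROOT = "/Volumes/Archive_Drive_042"

def verify_drive(root):
    bad_files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                digest = hashlib.sha256()
                with open(path, "rb") as f:
                    # Reading in chunks forces every block to come off the platters.
                    for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
                        digest.update(chunk)
            except OSError as err:
                bad_files.append((path, str(err)))
    return bad_files

if __name__ == "__main__":
    problems = verify_drive(ARCHIVE_ROOT)
    if problems:
        print(f"{len(problems)} unreadable file(s) - clone this drive ASAP:")
        for path, err in problems:
            print(f"  {path}: {err}")
    else:
        print("All files read back cleanly.")
```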

Many spinning drives and solid state drives employ S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology), which is designed to predict drive failure. Diagnostics fail the S.M.A.R.T. test when they determine that enough sectors on the drive are no longer writeable. Other drive issues, like excessive heat and slow spin-up, can also cause errors. The drive may outwardly act and seem fine, but it’s time to clone and replace it. Shared storage servers monitor for S.M.A.R.T. errors in their RAID drives, but you can also get diagnostic applications to test individual drives.
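
For individual docked drives, the open-source smartmontools package is one common way to query that health data. Here’s a small sketch that wraps its smartctl utility; the device path is a hypothetical example, and the output parsing is deliberately loose, since reporting varies between drives.

```python
# Quick S.M.A.R.T. health query for a single docked drive, using smartctl from the
# open-source smartmontools package (installed separately).
# The device node below is a hypothetical example - use the identifier your OS assigns.
import subprocess

DEVICE = "/dev/disk4"

def smart_health(device):
    result = subprocess.run(
        ["smartctl", "-H", device],   # -H reports the overall health assessment
        capture_output=True, text=True
    )
    output = result.stdout + result.stderr
    if "FAILED" in output or "FAILING" in output:
        return "FAILED - clone and replace this drive"
    if "PASSED" in output or "OK" in output:
        return "passed"
    return "undetermined - review the smartctl output manually:\n" + output

if __name__ == "__main__":
    print(f"{DEVICE}: {smart_health(DEVICE)}")
```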

The final level of security is to develop a plan to routinely migrate your entire library to the current format of the day. If you use hard drives, then plan on moving your library to replacement drives within five to ten years. Many feature film operations, like ILM, have done that for years, because they sit on a library of material with a ton of value. Your media library might not carry that kind of value, but this is still a strategy you should follow to future-proof your production investment.

©2019 Oliver Peters

Shared Storage Solutions


I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and done basic management of a number of different shared storage solutions, going all the way back to Avid MediaShare SCSI. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS); but to the user, there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be simultaneously accessed by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be parsed so that it shows up as several volumes with lower capacities per volume. Read/write permissions can be controlled in various ways. All users can have read/write access to everything or that can be selectively assigned by the system administrator.

The basic building block of a NAS is the main chassis, which contains storage, but also a small, on-board computer – the “brain” of the system. This runs its own operating system, usually a Linux variant (such as CentOS) or a Sun/ZFS-based system. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility may not be perfect. For example, some file names using special characters that are valid in macOS may not be read properly through one of these network protocols. So be very structured with naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
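
One way to enforce those rules is to run file names through a small sanitizer before they land on the NAS. This is a rough sketch: the allowed character set mirrors the list above (plus the dot, so extensions survive), and the underscore replacement is an arbitrary choice.

```python
# Swap any character outside the "safe" set (letters, numbers, spaces, dashes,
# underscores, and the dot for extensions) for an underscore, and trim stray spaces.
import re

ALLOWED = re.compile(r"[^A-Za-z0-9 _\-.]")

def sanitize_filename(name: str) -> str:
    cleaned = ALLOWED.sub("_", name.strip())   # replace illegal characters
    return cleaned.strip()                     # no leading or trailing spaces

if __name__ == "__main__":
    for original in ["Interview #3 (café).mov", " A001_C002.mxf ", "b-roll_day2.mov"]:
        print(f"{original!r} -> {sanitize_filename(original)!r}")
```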

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But, you cannot mix and match drive types or sizes within the chassis.

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives with a power supply. Normally these don’t have to be the same capacity as the original chassis if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN configurations are RAID-protected in various configurations. RAID-protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without losing all of your media. However, that takes overhead, which means you must give up some of the total capacity to enable this data protection.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is going bad or goes bad, the unit will continue to operate, but with reduced performance. In some cases that may not be noticed by the operator. When a drive goes bad, it can be replaced by a matching raw drive and the unit will rebuild the RAID data, which redistributes it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can operate during this rebuilding period, I have found that in actual practice, performance is so bad that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but at the cost of available storage. Different manufacturers have different ways of handling RAID configurations, so there is no rule-of-thumb as to what percentage you will lose with every NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
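
As a rough planning aid, the two real-world examples above can be turned into a simple calculation of what you’ll actually have available to work with. The net fraction varies by vendor and RAID configuration, so treat it as an input rather than a rule; the 85% fill ceiling here is just the midpoint of the 80-90% guidance.

```python
# Rough capacity planning: gross -> net (after RAID/filesystem overhead) -> working space.
# The net fraction varies by vendor and configuration, so it's an input, not a constant.

def usable_capacity(gross_tb, net_fraction, fill_ceiling=0.85):
    net_tb = gross_tb * net_fraction      # what remains after RAID protection overhead
    working_tb = net_tb * fill_ceiling    # stay at roughly 80-90% of net for performance
    return net_tb, working_tb

if __name__ == "__main__":
    # The two systems cited above, using their gross/net figures to derive the fraction.
    for label, gross, net in [("QNAP example", 256, 206), ("Jellyfish example", 480, 316)]:
        net_tb, working_tb = usable_capacity(gross, net / gross)
        print(f"{label}: {gross} TB gross -> {net_tb:.0f} TB net -> "
              f"~{working_tb:.0f} TB working space")
```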

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for runs longer than 100′ or if the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.
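
When deciding between 1GigE and 10GigE connections, it helps to translate the link speed into an approximate stream count. The sketch below is only illustrative: the efficiency factor and per-stream bitrates are assumptions, so substitute the published data rates for the codecs you actually cut with.

```python
# Approximate how many simultaneous editing streams a network link can sustain.
# Real-world throughput falls short of line rate (protocol overhead, drive contention),
# so the efficiency factor and per-stream bitrates below are assumptions to adjust.

def max_streams(link_gbps, stream_mbps, efficiency=0.8):
    usable_mbps = link_gbps * 1000 * efficiency   # usable megabits per second
    return int(usable_mbps // stream_mbps)

if __name__ == "__main__":
    # Hypothetical per-stream bitrates - check your codec's published data rates.
    for codec, mbps in [("4K proxy codec", 100), ("4K intermediate codec", 750)]:
        print(f"{codec}: {max_streams(1, mbps)} streams on 1GigE, "
              f"{max_streams(10, mbps)} streams on 10GigE")
```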

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job, we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals – all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the two NAS systems.

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago, when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, performance suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one workstation while editing proceeded on the others, slowed performance. Image sequence files really hurt overall system performance. You could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie), the key difference between the two systems is architecture. Unlike our QNAP unit, which uses a network switch, we opted for enough on-board cards on the Jellyfish to enable a direct run to all nine workstations without a separate network switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of QNAP, though. It has been very robust and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to Jellyfish), as well as an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We configured Jellyfish with a tenth Ethernet direct port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to Jellyfish gives them the ability to work with media files without tying up editing workstations.

The acquisition of the Jellyfish system has proven itself over time. Direct head-to-head performance between Jellyfish and QNAP with a small project or a few media files is not that dramatically different. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media without the prerequisite of creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact performance of the other edit sessions. Image sequences are easier to deal with. I can render to and from the Jellyfish when grading sessions in Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its Ki Pro products. From an editor’s point of view, I would much rather be handed Alexa or Ki Pro media files than those from any other camera product, simply because they are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of the files prior to ingesting them for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of them are useful. (RED and GoPro 360 are the only formats for which I don’t do this.) When it’s a camera that doesn’t generate unique file names, I will run a batch renaming application to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
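
For anyone who wants to automate that routine, here’s a rough sketch of the flatten-and-rename step, meant to be run on a copied card folder, never the original media. The folder paths, roll naming pattern, and extension list are assumptions to adapt to your own cameras; RED and GoPro 360 cards should be left alone, as noted above.

```python
# Flatten a copied camera card: move every video clip up to the roll folder and
# prefix it with the roll name so file names stay unique across cards.
# Run this on a copy of the card, never the original. Paths and the extension
# list are example assumptions - adjust them for the cameras you actually use.
import os
import shutil

VIDEO_EXTENSIONS = {".mov", ".mp4", ".mxf"}

def flatten_card(card_copy_root, roll_name):
    counter = 1
    for dirpath, _dirnames, filenames in os.walk(card_copy_root):
        if dirpath == card_copy_root:
            continue                                  # clips already at the root are fine
        for name in sorted(filenames):
            if os.path.splitext(name)[1].lower() not in VIDEO_EXTENSIONS:
                continue                              # skip sidecar and metadata files
            new_name = f"{roll_name}_{counter:04d}_{name}"
            shutil.move(os.path.join(dirpath, name),
                        os.path.join(card_copy_root, new_name))
            counter += 1
    # Once the clips sit at the root, the leftover subfolders can be reviewed and trashed.

if __name__ == "__main__":
    flatten_card("/Volumes/Media/A001_2019-06-12", roll_name="A001")   # hypothetical paths
```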

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you have good latitude for ISO, color temperature, and tint adjustments, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch, and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Did you pick the right camera? Part 1

There are tons of great cameras and lenses on the market. While I am not a camera operator, I have been a videographer on some shoots in the past. Relevant production and camera logistical issues are not foreign to me. However, my main concern in evaluating cameras is how they impact me in post – workflow, editing, and color correction. First – biases on the table. Let me say from the start that I have had the good fortune to work on many productions shot with ARRI Alexas and that is my favorite camera system in regards to the three concerns offered in the introductory post. I love the image, adopting ProRes for recording was a brilliant move, and the workflow couldn’t be easier. But I also recognize that ARRI makes an expensive albeit robust product. It’s not for everyone. Let’s explore.

More camera choices – more considerations

If you are going to only shoot with a single camera system, then that simplifies the equation. As an editor, I long for the days when directors would only shoot single-camera. Productions were more organized and there was less footage to wade through. And most of that footage was useful – not cutting room fodder. But cameras have become cheaper and production timetables condensed, so I get it that having more than one angle for every recording can make up for this. What you will often see is one expensive ‘hero’ camera as the A-camera for a shoot and then cheaper/lighter/smaller cameras as the B and C-cameras. That can work, but the success comes down to the ingredients that the chef puts into the stew. Some cameras go well together and others don’t. That’s because all cameras use different color science.

Lenses are often forgotten in this discussion. If the various cameras being used don’t have a matched set of lenses, the images from even the exact same model cameras – set to the same settings – will not match perfectly. That’s because lenses have coloration to them, which will affect the recorded image. This is even more extreme with re-housed vintage glass. As we move into the era of HDR, it should be noted that various lens specialists are warning that images made with vintage glass – and which look great in SDR – might not deliver predictable results when that same recording is graded for HDR.

Find the right pairing

If you want the best match, use identical camera models and matched glass. But, that’s not practical or affordable for every company nor every production. The next best thing is to stay within the same brand. For example, Canon is a favorite among documentary producers. Projects using cameras from the EOS Cinema line (C300, C300 MkII, C500, C700) will end up with looks that match better in post between cameras. Generally the same holds true for Sony or Panasonic.

It’s when you start going between brands that matching looks becomes harder, because each manufacturer uses its own ‘secret sauce’ for color science. I’m currently color grading travelogue episodes recorded in Cuba with a mix of cameras. A and B-cameras were ARRI Alexa Minis, while the C and D-cameras were Panasonic EVA1s. A Panasonic GH5, a Sony A7SII, and various drone cameras were also used. Panasonic appears to use color science similar to ARRI’s, although its log color space is not as aggressive (flat). With all cameras set to shoot with a log profile and the appropriate Rec 709 LUT applied to each in post (LogC and V-Log respectively), I was able to get a decent match between the ARRI and Panasonic cameras, including the GH5. Not so close with the Sony or drone cameras, however.

Likewise, I’ve graded a lot of Canon C300 MkII/C500 footage and it looks great. However, trying to match Canon to ARRI shots just doesn’t come out right. There is too much difference in how blues are rendered.

The hardest matches are when professional production cameras are married with prosumer DSLRs, such as a Sony FS5 and a Fujifilm camera. Not even close. And smartphone cameras – yikes! But as I said above, the GH5 does seem to provide passable results when used with other Panasonic cameras and, in our case, the ARRIs. However, my experience there is limited, so I wouldn’t guarantee that in every case.

Unfortunately, there’s no way to really know when different brands will or won’t create a compatible A/B-camera combination until you start a production. Or rather, when you start color correcting the final. Then it’s too late. If you have the luxury of renting or borrowing cameras and doing a test first, that’s the best course of action. But as always, try to get the best you can afford. It may be better to get a more advanced camera, but only one. Then restructure your production to work with a single-camera methodology. At least then, all of your footage should be consistent.

Click here for the Introduction.

Click here for Part 2.

©2019 Oliver Peters