Shared Storage Solutions

I’m certainly no IT whizz, but as an editor and all-around “workflow guy,” I’ve used and handled basic management of a number of different shared storage solutions, going all the way back to the SCSI-based Avid MediaShare. Shared storage solutions, aka storage area networks (SAN), have evolved from SCSI connectivity to Fibre Channel (both copper and fiber optic cables) and now to Ethernet. The latter set-ups are technically considered network attached storage (NAS), but to the user there are only a few operational differences between SAN and NAS volumes.

A shared storage primer

In a nutshell, shared storage is a chassis of RAID-configured drives that can be accessed simultaneously by multiple workstations. Depending on the needs of the facility and the type of control software used, this storage can appear as one large volume to all users, or it can be partitioned so that it shows up as several smaller volumes. Read/write permissions can be controlled in various ways: all users can have read/write access to everything, or access can be assigned selectively by the system administrator.

The basic building block of a NAS is the main chassis, which contains the storage, but also a small, on-board computer – the “brain” of the system. It runs its own operating system, usually a Linux variant (such as CentOS) or a ZFS-based system with roots at Sun. That internal OS is independent of whether the system is connected to Mac, Windows, or Linux workstations. That computer is the server portion of the NAS, which controls the drives, permissions, and the file structure. The server can be accessed from an external computer via the manufacturer’s installed applications – usually through a web browser. This is where the system administrator can adjust settings and handle general system maintenance, like installing firmware updates.

The volumes can be mounted by the workstations using a number of different network protocols, such as AFP, NFS, or SMB. Through these protocols, the files will look as you expect to see them from the Mac Finder or Windows File Explorer. However, compatibility isn’t perfect. For example, some file names using special characters that are valid in macOS may not be read properly through one of these network protocols. So be very structured with your naming conventions for files that end up on a network volume. Numbers, letters, spaces, dashes, and underscores are fine. Avoid everything else and do not start or end a file name with a space.
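To make that advice concrete, here’s a minimal Python sketch of a name check along those lines. The function and the sample file names are my own illustrations, not part of any NAS product, so adapt the allowed character set to your own house rules.

```python
import re

# Conservative character set: letters, numbers, spaces, dashes, underscores,
# plus an optional file extension.
SAFE_NAME = re.compile(r"^[A-Za-z0-9 _\-]+(\.[A-Za-z0-9]+)?$")

def safe_for_network_volume(name: str) -> bool:
    """Return True if a file name sticks to the conservative character set
    described above and does not begin or end with a space."""
    if name != name.strip():
        return False  # leading or trailing space
    return bool(SAFE_NAME.fullmatch(name))

# Hypothetical example names:
for name in ["Scene_04-Take 2.mov", " interview.mov", "budget:final?.xlsx"]:
    verdict = "OK" if safe_for_network_volume(name) else "rename before copying"
    print(f"{name!r}: {verdict}")
```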

The unformatted capacity of your system is based on the number and size of the installed drives. A 20-drive chassis populated with 8TB drives would tally 160TB. If you rebuilt that same chassis with newer 14TB drives you’d end up with a pool of 280TB. But you cannot mix and match drive types or sizes within the chassis.
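The raw-capacity math is simple multiplication, so it’s easy to sanity-check before ordering drives. A quick sketch using the figures above:

```python
def raw_capacity_tb(drive_count: int, drive_size_tb: float) -> float:
    """Unformatted pool size: number of drives times per-drive capacity."""
    return drive_count * drive_size_tb

print(raw_capacity_tb(20, 8))   # 160 (TB), the 8TB-drive example above
print(raw_capacity_tb(20, 14))  # 280 (TB), after rebuilding with 14TB drives
```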

Most manufacturers offer the option to daisy-chain one or more expansion chassis onto this main server chassis. These are “dumb” rack units, meaning there’s no on-board computer in them – only drives and a power supply. Normally these don’t have to be the same capacity as the original chassis if they are going to be used as a separate volume. However, if you purchase and configure several matched units at the start, then they can be grouped together and used as a single volume.

The impact of RAID protection

NAS and SAN systems are RAID-protected in various configurations. RAID protection means that redundant data is spread across all of the drives in such a manner that one or more drives can go down without losing all of your media. However, that takes overhead, which means you must give up some of the total capacity to enable this data protection.

The standard set-up with a large rack unit allows you to lose up to two drives in a chassis without losing any data. If a drive is failing or fails outright, the unit will continue to operate, but with reduced performance. In some cases the operator may not even notice. When a drive goes bad, it can be replaced with a matching raw drive and the unit will rebuild the RAID data, redistributing it across all of the drives again. This can take up to 24 hours to complete. While many manufacturers say you can operate during this rebuilding period, I have found that in actual practice performance is so bad that you don’t want to work during the rebuild.

RAID protection is a wonderful safety net, but it comes at the cost of available storage. Different manufacturers handle RAID configurations differently, so there is no universal rule of thumb for what percentage of capacity you will lose on a given NAS. For instance, 256TB of QNAP storage (gross) will yield 206TB of net storage. 480TB of LumaForge storage yields 316TB net. On top of this, the recommendation for all shared storage is to stay under 80-90% of the available net capacity for optimal performance. If you ignore that advice and decide to fill up your drives to something like 97%, your system will crawl and possibly not function at all.
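For a rough feel of the math only (the real overhead depends on the RAID level, the file system, and the vendor’s own formatting), here is a sketch that assumes a RAID 6-style dual-parity layout and then applies the 80-90% fill guideline to the 206TB-net QNAP figure above:

```python
def usable_after_dual_parity(drive_count: int, drive_size_tb: float) -> float:
    """RAID 6-style dual parity: roughly two drives' worth of capacity goes to
    redundancy. Vendors add their own overhead on top, so treat this as an estimate."""
    return (drive_count - 2) * drive_size_tb

def working_capacity_tb(net_tb: float, fill_ceiling: float = 0.85) -> float:
    """Apply the 'stay under 80-90% full' guideline (85% used here as a midpoint)."""
    return net_tb * fill_ceiling

print(usable_after_dual_parity(20, 14))  # 252 (TB) usable from a 280TB raw pool
print(working_capacity_tb(206))          # ~175 (TB) to actually plan around on the
                                         # 206TB-net QNAP example above
```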

Connecting the system

Most shared storage systems used in modern, small-to-medium post facilities will be Ethernet-based at either 1Gbps or 10Gbps (aka 1GigE or 10GigE). The topology of your network will impact the performance. Your server unit can be configured with individual Ethernet cards that would allow a direct run to each workstation. Or it may connect to an Ethernet network switch, which then distributes the signals to the workstations. Or a combination of the two.
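To put those link speeds in perspective, here’s a back-of-the-envelope sketch. The 80% efficiency allowance and the 90 MB/s per-stream figure are placeholders of mine, not measured numbers; plug in the real data rate of your codec and frame size.

```python
def link_throughput_mb_s(link_gbps: float, efficiency: float = 0.8) -> float:
    """Approximate usable throughput in megabytes per second.
    'efficiency' is a rough allowance for protocol overhead (an assumption)."""
    return link_gbps * 1000 / 8 * efficiency

def streams_supported(link_gbps: float, stream_mb_s: float) -> int:
    """How many simultaneous streams at a given data rate (MB/s) the link can feed."""
    return int(link_throughput_mb_s(link_gbps) // stream_mb_s)

# Placeholder 4K stream at 90 MB/s:
print(streams_supported(1, 90))    # 1GigE: about 1 stream
print(streams_supported(10, 90))   # 10GigE: about 11 streams
```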

The chassis and/or network switch(es) are connected to the workstations with Cat6 or Cat7 Ethernet cable. Cat6 is generally good up to 100′, while Cat7 is recommended for runs longer than 100′ or if the cable is routed through walls or in the ceiling close to other electrical wiring that can create interference. For a 10GigE storage network, the workstations will require 10GigE ports (like on an iMac Pro) or you will need to add a 10GigE-to-Thunderbolt adapter (Promise, Sonnet, Akitio) to the computer.

Storage racks are very sensitive to power fluctuations, so you’ll want a beefy uninterruptible power supply/battery back-up (UPS) unit. Since these chassis draw a lot of power, don’t expect to hook everything to a single UPS if you are putting in an entire equipment rack of gear. Small, desktop NAS units – no sweat. But a facility with a larger system should plan on several UPS units for its installation. For example, at my day job, we have a large QNAP and a large Jellyfish system (more on that in a minute) – just under 3/4 PB total – plus other peripherals – all in a single equipment rack. Each NAS has its own dedicated UPS. The peripheral gear runs on a third. To make sure the gear also had plenty of juice, we had an electrician run additional dedicated circuits for each of the two UPS units used for the two NAS systems.
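If you want to rough out your own UPS budget, a simple headroom check is a reasonable starting point. The wattages below are placeholders rather than measurements from our rack, and the 80% loading ceiling is a common conservative guideline, not a hard spec, so check the figures for your actual gear and UPS model.

```python
def ups_headroom_ok(load_watts: float, ups_capacity_watts: float,
                    max_load_fraction: float = 0.8) -> bool:
    """True if the combined load leaves sensible headroom on the UPS."""
    return load_watts <= ups_capacity_watts * max_load_fraction

# Placeholder draws (watts) for a NAS chassis, an expansion unit, and a switch:
rack_load = 800 + 400 + 150
print(ups_headroom_ok(rack_load, 1500))    # False: split the load across UPS units
print(ups_headroom_ok(800 + 400, 1500))    # True: NAS gear on a dedicated UPS
```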

Finally, make sure you have adequate air conditioning, because excessive heat will damage electronics. Modern systems no longer require a meat locker environment, but an unventilated closet for a server/storage rack simply won’t do. Any room that falls into the cool to comfortable range for a human will be suitably cool for the gear. Staying on the cooler side of that range will be best for a room with a number of equipment racks.

Practical experience with shared storage in the real world

The creative content production company where I freelance as senior editor and “workflow guy” has had some history with shared storage. In the Final Cut Pro “legacy” days, we were running a sweet Fibre Channel SAN for four workstations. Media was managed through Final Cut Server software on an Apple Xserve computer, but with third-party storage hardware. Up until FCP7 everything ran well. Final Cut Pro X arrived and SAN usage with the early versions was to be avoided. Apple pulled the plug on FCP7, Final Cut Server, and Xserve. Then to make matters worse, the hardware reliability of our storage started to falter. As a result, the production company ended up back on local storage for a while.

Fast forward to about three years ago when we switched to a QNAP shared storage system. We quickly doubled the system capacity with an additional QNAP expansion chassis. Ultimately nine workstations were connected via a 10GigE network switch. General performance was good, but as we started to work steadily with 4K media, performance suffered, especially with nine editors banging away. For example, long-form Premiere Pro projects required a proxy workflow to avoid editor frustration. Certain tasks, like copying a multi-TB batch of files on one of the workstations while editing proceeded on the others, slowed performance across the board. Image sequence files really hurt overall system performance. And you could not pull media from and render back to the same QNAP volume during Resolve render passes.

In looking for options to improve the system, we decided to shift to LumaForge and spec’ed a larger Jellyfish Rack installation. Other than system optimization (a biggie), the key difference between the two systems is architecture. Our QNAP unit connects through a network switch; on the Jellyfish, we opted for enough on-board network cards to enable a direct run to all nine workstations without a separate switch. There’s also a small NVMe unit used as a dedicated Adobe cache volume.

We didn’t get rid of the QNAP, though. It has been very robust, and recent firmware updates have actually improved its performance compared to how editing “felt” with it before. We maintain it for some legacy projects (rather than move them to Jellyfish), as well as for an additional back-up storage pool.

All workstations get Ethernet cable runs to both NAS systems, so any editor can access any media from any location – Jellyfish or QNAP. We also configured the Jellyfish with a tenth direct Ethernet port, which goes to a separate 1GigE switch. These Ethernet feeds are distributed to several staffers handling media management and file upload tasks, using MacBook Pro and Air laptops and a Mac Mini in the server room. The connection to the Jellyfish gives them the ability to work with media files without tying up editing workstations.

The Jellyfish system has proven its worth over time. In direct head-to-head comparisons with a small project or a few media files, the performance difference between Jellyfish and QNAP is not that dramatic. But when we compare day-to-day workflow efficiency, the improvements add up. Long-form 4K edits can proceed with native media, without first creating proxies. Sidebar tasks, like batch encodes and file copies on one or more stations, don’t impact the performance of the other edit sessions. Image sequences are easier to deal with. I can render to and from Jellyfish when I’m working grading sessions in Resolve.

In general, both brands have worked well for us, but LumaForge has definitely provided an edge. However, I have no qualms about QNAP either for the right customer in the right situation. There are, of course, other shared storage brands that offer outstanding products, including Avid, OpenDrives, Facilis, Synology, and EditShare. If you want to build an all-Avid shop, then Avid storage is probably the best option for you. However, even though Avid storage works with other NLEs, shops that are focused on Premiere Pro, Final Cut Pro X, or Resolve are better served by the other options. In any case, deploying a NAS system is easier than it’s ever been. Heck, you can even buy and configure a smaller Jellyfish through Apple’s online store!

But do your homework, check your OS compatibility, and make sure you tap a workflow consultant who knows video post and not just IT. Plenty of NAS systems developed for the data world don’t perform up to par in the world of video post. And don’t go it alone, no matter how many YouTubers you’ve watched. Qualified systems specialists, like Bob Zelin (Rescue 1, Inc) or the teams at LumaForge or Avid or most of the other companies, can help you get your system up and running at peak performance.

For more information about storage, here’s an article I wrote for Pro Video Coalition.

©2019 Oliver Peters