Progress Report (Network and Data)

Following closely on our progress report on the video capabilities of the studio, this post details some of the major progress and changes we’ve been making to our infrastructure.  Some may find this information way too technical or way too dry, but it’s part of what makes the studio work, and I think it’s pretty cool.

To paint the picture of our network requirements, let’s start with our digital recording devices.  We have three Harrison X-dubbers that record 64 channels at 96K in 32-bit floating point.  That works out to a data rate of just under 200Mbps.  When that data goes over the network, NFS metadata and Ethernet packet overheads drive that to about 220Mbps per machine, or 660Mbps if all are running at once.  That’s a healthy, but not overly aggressive, amount of data to push down a 1000Base-T cable.  We also have two 64-channel ProTools machines that write the same kind of data to their own local disks.  They put nothing on the network until it’s time to do a backup, at which point the backup scripts may run at full disk bandwidth (80-100MB/sec bursts), which can, by itself, saturate gigabit Ethernet.  To keep the backup traffic from spilling into the audio traffic, we put them on isolated subnets, effectively routing them in parallel.
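The arithmetic above can be sketched in a few lines (the per-machine overhead figure is taken from the post; it is an estimate, not a measurement):

```python
# Audio bandwidth sketch from the figures above; the 220Mbps on-wire
# figure is the post's estimate including NFS and Ethernet overhead.
channels = 64
sample_rate = 96_000            # 96 kHz
bits = 32                       # 32-bit floating point
raw_mbps = channels * sample_rate * bits / 1e6
print(raw_mbps)                 # just under 200 Mbps per X-dubber

on_wire_mbps = 220              # with NFS metadata + Ethernet overhead
all_three_mbps = 3 * on_wire_mbps
print(all_three_mbps)           # 660 Mbps, comfortably inside 1000Base-T
```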

Video data has become a huge X-factor in our equations.  While our two AF100 cameras have a native (and highly compressed) recording rate of 28Mbps, that rate explodes to 160Mbps when we encode directly from the SDI interface (giving us 4:2:2 resolution instead of 4:2:0).  Our two Canon EOS5D MkIII cameras record All-I video to their SD cards at 90Mbps.  Technically, these cameras all record to local media, but in reality, after an hour of recording there are four hours’ worth of data that need to be stored on the server.  If all we ask of our system is the ability to transfer this data in real-time, then we need a 500Mbps network to handle the traffic.  (A full 1000Base-T network could save data at 2x real-time speed.)
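A quick tally of the camera rates above (all figures from this post):

```python
# Camera offload tally; per-camera rates in Mbps as quoted above.
camera_rates_mbps = {
    "AF100 #1 (SDI, 4:2:2)": 160,
    "AF100 #2 (SDI, 4:2:2)": 160,
    "EOS5D MkIII #1 (All-I)": 90,
    "EOS5D MkIII #2 (All-I)": 90,
}
realtime_mbps = sum(camera_rates_mbps.values())
print(realtime_mbps)             # 500 Mbps to keep up in real time
print(1000 / realtime_mbps)      # 2.0, so full 1000Base-T saves at 2x real time
```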

But the real fun begins with our newer cameras–cameras that can record at higher-than-HD resolutions.  Our Blackmagic Design Cinema Camera records 12-bit RAW files at 2432 x 1366 resolution–about 60% more pixels than FullHD, with 2-3 stops more dynamic range than conventional cameras.  That camera can fill a 480GB SSD in about 65 minutes shooting 23.98 fps (and faster when shooting 29.97), which is (coincidentally?) almost exactly the flat-out limit of a 1000Base-T connection.  Blackmagic makes a docking station that can accept four SSD cards and offload them at Thunderbolt (10G) speed, which was a life-saver for us when we ran three Blackmagic cameras for over two hours recording Kimiko Ishizaka playing Book 1 of Bach’s Well-Tempered Clavier.  At that time we had to cheat, because we simply could not get data to our servers fast enough.  But now that we have mLink boxes with 10G Myricom cards connected to our 10G-enabled server, we can offload multiple terabytes of data per hour.  That’s huge!
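The back-of-the-envelope math behind that “(coincidentally?)” remark:

```python
# Fill-rate sketch: a 480GB SSD filled in about 65 minutes at 23.98 fps.
ssd_bytes = 480e9
fill_seconds = 65 * 60
fill_mbps = ssd_bytes * 8 / fill_seconds / 1e6
print(round(fill_mbps))          # ~985 Mbps, essentially the limit of 1000Base-T
```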

So this is what we now have: the ability to record audio to five 64-track devices running at 96/32, the ability to offload HD media at better-than-realtime speed from our four HD cameras, and the ability to offload data from up to four Blackmagic cameras at real-time speed, all at the same time.  That’s pretty cool!

Crucial to making all this work has been the transparency and robustness of Red Hat Enterprise Linux.  Linux has great networking support, making it easy to define many isolated networks on a single storage server.  cgroups are a simple and powerful way of binding specific tasks to specific CPUs (or even cores within a CPU).  My servers are standard-issue dual-socket Thinkmate storage servers with 8 cores per socket.  While these machines deliver great performance by default, a little tweaking went a long way toward optimal performance from both the network and the storage subsystems.  I found that a single core was not enough to service a 10G network running at full bore, but that two cores were easily enough.  Similarly, a single core could easily handle multiple gigabit network interfaces.  Thus, by dedicating 3 of my 16 cores to networking, I could measure wire-speed performance across both gigabit and 10G network interfaces.
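A minimal sketch of the cpuset approach described above (group names, core numbers, and PIDs are illustrative, not our actual configuration; assumes the cgroup v1 cpuset controller is mounted at the usual location and you are running as root):

```shell
# Create a cpuset group dedicated to 10G network service tasks.
# All names and numbers below are illustrative examples.
mkdir -p /sys/fs/cgroup/cpuset/net10g
echo 0-1 > /sys/fs/cgroup/cpuset/net10g/cpuset.cpus   # two cores for the 10G NIC
echo 0   > /sys/fs/cgroup/cpuset/net10g/cpuset.mems   # memory node 0
echo "$PID_OF_NETWORK_TASK" > /sys/fs/cgroup/cpuset/net10g/tasks

# IRQ affinity is set separately, via /proc; steer the NIC's interrupts
# to the same cores (bitmask 3 = CPUs 0 and 1).
echo 3 > /proc/irq/"$IRQ_OF_10G_NIC"/smp_affinity
```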

I’m looking forward to measuring next how well my storage systems are optimized: NFS, XFS, MD_RAID, etc.  It may well be that I don’t need any additional manual intervention to achieve my performance goals.  But I am very happy to know that if I do need to reach in to do something, the knowledge is there to do it, and the knobs are there to make the job easy.

In the next month we will be bringing a new data generator into the studio: a RED DRAGON camera.  For those not familiar, the RED DRAGON shoots 6K images with 16 or more stops of dynamic range.  Phil Holland produced this graphic to explain “What is 6K”:

RED cameras compress their RAW images using wavelet compression, so they can capture 6K images at 24fps at data rates ranging from 10GB/min (at a 5:1 compression ratio) down to 2.9GB/min (at an 18:1 ratio).  We cannot quite handle that in real-time, but with 4-6 hours of media per day, we can always turn around a fresh card within 24 hours using multiple RAIDs and multiple 10Gb pipes.
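To make those rates concrete, here is a sketch that assumes the figures are per minute (a per-second rate would exceed what any camera media can sustain), and uses a hypothetical 512GB card and an assumed ~80% of 10GbE wire speed:

```python
# RED offload sketch; per-minute data rate assumed, card size hypothetical.
gb_per_min = 10.0                # ~5:1 REDCODE compression at 6K/24fps
card_gb = 512                    # hypothetical card size
record_minutes = card_gb / gb_per_min
print(record_minutes)            # ~51 minutes of footage per card

wire_gbps = 10 * 0.8             # assume ~80% of 10GbE wire speed in practice
offload_minutes = card_gb * 8 / wire_gbps / 60
print(round(offload_minutes, 1)) # ~8.5 minutes to drain the card over one 10Gb pipe
```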

It makes me happy that all these changes–almost unimaginable 5-10 years ago–are easily adopted, integrated, and optimized within our open source environment.
