Last year, John Heitzenrater and the band Hindugrass came to Manifold Recording to track their new album. John used crowd-funding to help defray the costs of the tracking session, and to use his home studio to edit and mix the resulting tracks. The theory was that by going “all in” on the quality of the recorded material, he wouldn’t need all the firepower of a high-end studio to produce a good result. But as good as the tracks were, he began to realize that his artistic vision for the album was far more complicated than just selecting the right takes, putting the faders at zero, and letting the songs mix themselves. He began to inquire about mixing dates toward the end of the year, and we agreed to do a joint project. We would mix the album, but he would let us produce video of the process. We are proud to present the first fruits of that collaboration:
Following closely on our progress report about the video capabilities of the studio, this posting details some of the major changes we’ve been making to our infrastructure. Some may find this information way too technical or way too dry, but it’s part of what makes the studio work, and I think it’s pretty cool.
To paint the picture of our network requirements, let’s start with our digital recording devices. We have three Harrison X-dubbers that record 64 channels at 96K in 32-bit floating point. That works out to a data rate of just under 200Mbps. When that data goes over the network, NFS metadata and Ethernet packet overhead drive that to about 220Mbps per machine, or 660Mbps if all three are running at once. That’s a healthy, but not overly aggressive, amount of data to push down a 1000Base-T cable. We also have two 64-channel ProTools machines that generate such data to their own local disks. That doesn’t put anything on the network until it’s time to do a backup, at which point the backup scripts may run at full disk bandwidth (80-100MB/sec bursts), which can, by itself, saturate gigabit ethernet. To keep the backup traffic from spilling into the audio traffic, we put them on isolated subnets, effectively routing them in parallel.
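The 200Mbps figure falls straight out of the stream parameters. Here’s the back-of-envelope check (the ~12% NFS/Ethernet overhead factor is inferred from the 220Mbps figure above, not a measured constant):

```python
# Raw audio data rate for one Harrison X-dubber:
# 64 channels x 96 kHz sample rate x 32-bit floating-point samples.
channels = 64
sample_rate_hz = 96_000
bits_per_sample = 32

raw_mbps = channels * sample_rate_hz * bits_per_sample / 1e6
print(f"raw rate per machine: {raw_mbps:.1f} Mbps")    # 196.6 Mbps

# NFS metadata + Ethernet framing overhead (~12%, matching the 220Mbps figure)
on_wire_mbps = raw_mbps * 1.12
print(f"on-wire estimate: {on_wire_mbps:.0f} Mbps")    # 220 Mbps
print(f"three machines:   {3 * on_wire_mbps:.0f} Mbps")  # roughly the 660 Mbps above
```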
Video data has become a huge X-factor in our equations. While our two AF100 cameras have a native (and highly compressed) recording rate of 28Mbps, that rate explodes to 160Mbps when we encode directly from the SDI interface (giving us 4:2:2 resolution instead of 4:2:0). Our two Canon EOS5D MkIII cameras record All-I video to their SD cards at 90Mbps. Technically, these cameras all record to local media, but in reality, after an hour of recording there are four hours’ worth of data that need to be stored on the server. If all we ask of our system is the ability to transfer this data in real-time, then we need a 500Mbps network to handle the traffic. (A full 1000Base-T network could save data at 2x real-time speed.)
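That 500Mbps figure is just the sum of the four encode rates. A quick look at what an hour of rolling on all four cameras actually produces:

```python
# Per-camera encode rates from above, in Mbps
rates_mbps = [160, 160, 90, 90]   # 2x AF100 via SDI (4:2:2), 2x EOS5D MkIII (All-I)
total_mbps = sum(rates_mbps)
print(f"aggregate: {total_mbps} Mbps")       # 500 Mbps

# One hour of recording on all four cameras at once:
data_gb = total_mbps * 3600 / 8 / 1000
print(f"data per hour: {data_gb:.0f} GB")    # 225 GB
# ...which a 1000Base-T link can move in about half an hour: 2x real-time.
```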
But the real fun begins with our newer cameras–cameras that can record at higher-than-HD resolutions. Our Blackmagic Design Cinema Camera records 12-bit RAW files at 2432 x 1366 resolution–about 40% more pixels than FullHD, with 2-3 stops more dynamic range than conventional cameras. That camera can fill a 480GB SSD in about 65 minutes shooting 23.98 fps (and faster when shooting 29.97), which is (coincidentally?) almost exactly the flat-out limit of a 1000Base-T connection. Blackmagic makes a docking station that can accept four SSD cards and offload them at Thunderbolt (10G) speed, which was a life-saver for us when we were running three Blackmagic cameras for over two hours when we recorded Kimiko Ishizaka playing Book 1 of Bach’s Well-Tempered Clavier. At that time we had to cheat, because we simply could not get data to our servers fast enough. But now that we have mLink boxes with 10G Myricom cards connected to our 10G-enabled server, we can offload multiple terabytes of data per hour. That’s huge!
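The “coincidence” is easy to verify: filling a 480GB SSD in 65 minutes is, to within a couple of percent, gigabit line rate:

```python
# Blackmagic Cinema Camera: 480 GB SSD filled in ~65 minutes at 23.98 fps
ssd_bytes = 480e9
fill_seconds = 65 * 60
mbps = ssd_bytes * 8 / fill_seconds / 1e6
print(f"sustained write rate: {mbps:.0f} Mbps")  # ~985 Mbps: 1000Base-T flat out
```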
So this is what we now have: the ability to record audio to five 64-track devices running at 96/32, the ability to offload HD media at better-than-realtime speed from our four HD cameras, and the ability to offload data from up to four Blackmagic cameras at real-time speed, all at the same time. That’s pretty cool!
Crucial to making all this work has been the transparency and robustness of Red Hat Enterprise Linux. Linux has great networking support, making it easy to define many isolated networks on a single storage server. cgroups are a simple and powerful way of binding specific tasks to specific CPUs (or even cores within a CPU). My servers are standard-issue dual-socket Thinkmate storage servers with 8 cores per socket. While these machines deliver great performance by default, a little tweaking went a long way toward getting optimum performance from both the network and the storage subsystems. I found that a single core was not enough to service a 10G network running at full bore, but that two cores were easily enough. Similarly, a single core could easily handle multiple gigabit network interfaces. Thus, by dedicating 3 of my 16 cores to networking, I could measure wire-speed performance across both gigabit and 10G network interfaces.
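The studio’s setup uses cgroup cpusets, but the underlying idea–binding a task to chosen cores so the rest stay free for network interrupts–can be sketched per-process with Python’s Linux-only `os.sched_setaffinity`. This is an illustration of the concept, not our actual server configuration:

```python
import os

# Pin the current process to CPU 0, leaving the remaining cores free
# for other work (e.g. servicing network interfaces). cgroup cpusets
# do the same thing for whole groups of tasks.
os.sched_setaffinity(0, {0})          # pid 0 = the calling process
assert os.sched_getaffinity(0) == {0}

# Restore access to every online CPU afterwards.
os.sched_setaffinity(0, set(range(os.cpu_count())))
```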
I’m looking forward to measuring next how well my storage systems are optimized: NFS, XFS, MD_RAID, etc. It could well be that I don’t need to do any additional manual intervention to achieve my performance goals. But I am very happy to know that if I do need to reach in to do something, the knowledge is there to do it, and the knobs are there to make the job easy.
In the next month we will be bringing a new data generator into the studio: a RED DRAGON camera. For those not familiar, the RED DRAGON shoots 6K images with 16 or more stops of dynamic range. Phil Holland produced this graphic to explain “What is 6K”:
RED cameras compress their RAW images using wavelet compression, so they can capture 6K images at 24fps at data rates ranging from about 10GB/min (at a 5:1 compression ratio) down to 2.9GB/min (at an 18:1 ratio). We cannot quite handle that in real-time, but with 4-6 hours of media per day, we can always turn cards around every 24 hours using multiple RAIDs and multiple 10Gb pipes.
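A back-of-envelope sanity check on those per-minute rates. The 6144 x 3160 photosite array and 2 bytes per photosite are assumptions for the estimate, not REDCODE specifications, so expect the results to be rough:

```python
# Rough uncompressed 6K data rate, then wavelet-compressed per-minute rates
width, height, bytes_per_px, fps = 6144, 3160, 2, 24   # assumed sensor geometry
raw_mb_s = width * height * bytes_per_px * fps / 1e6
print(f"uncompressed: {raw_mb_s:.0f} MB/s")            # 932 MB/s
for ratio in (5, 18):
    print(f"{ratio}:1 compression -> {raw_mb_s / ratio * 60 / 1000:.1f} GB/min")
# 5:1  -> 11.2 GB/min (same ballpark as the 5:1 figure above)
# 18:1 ->  3.1 GB/min (same ballpark as the 18:1 figure above)
```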
It makes me happy that all these changes–almost unimaginable 5-10 years ago–are easily adopted, integrated, and optimized within our open source environment.
It has been a while since my last blog posting, which means there is much, Much, MUCH to tell. I’m not sure that I can do it all justice in one evening, but there are some highlights I want to hit.
In early November, pianist Kimiko Ishizaka performed Bach’s Well-Tempered Clavier in the Music Room of Manifold Recording. The event was recorded in front of a live studio audience and webcast around the world, presented by The Miraverse. We produced two videos, one for studio geeks showing all our microphones, microphone locations, and all sorts of other studio gear that would be involved in the session, and one of Kimiko’s actual performance (which was magnificent). Thanks again to Robert Douglass for doing the legwork to make this event possible, and to Ms. Ishizaka for sharing her life’s study and practice of Bach with us.
It was very much my intention to write a blog posting shortly after the session–especially because it was such a great experience for all who participated–but we got too busy with all that this event put into motion for us.
When I decided to leave the certainty of multiple steady paychecks to start a new company, everybody I briefed thought there was no possible way it could succeed, and that gave me the confidence that I’d have no competition. The rest, as they say, is history. But since that time, I have also come to appreciate that sometimes it is more valuable to have at least some competition proving that the business idea has at least some merit. Some percentage of a provable market is worth more than 100% of a market that simply does not exist. Enter GrooveBox Studios.
GrooveBox Studios was born of a frustration that is nearly universal among all the artists I’ve encountered: bands spend too much of their own money on projects and tours that generally enrich everybody else before the band earns a dollar, which is not sustainable. The founders of GrooveBox Studios hit the business reset button and came up with a model that is really quite analogous to the one we, too, derived: the co-production model. For starters, both GrooveBox and The Miraverse® promote the idea that instead of being an up-front cost that the artist must bear, the recording process is something that delivers cash and profit directly to the artist, up-front.
The Chapel Hill News reports on a project we managed to squeeze in at the end of the summer: the Community Chorus Project. These kids worked hard, and it was great to be a part of documenting their efforts just in time for their new school year. As the news article reports, we’re not the only ones pleased by the results. R.E.M. manager Bertis Downs promoted the video on the R.E.M. news page, and a mention on R.E.M.’s Facebook page received over 1,000 likes. Downs told the Chapel Hill News: “It’s very professionally done. It’s a great arrangement, it has a really good feel, nice energy, and I like the way the choral director interacts with the chorus. The kids did a great job.”
Indeed! Here’s a link to the video, which you can watch in HD if you have the bandwidth to do so: