Cracking Cloud Rendering in the Real World

(Article written in July 2015 for Seekscale, a cloud rendering startup)

Studios Need Ever More Render Power

VFX and 3D animation studios' computing power needs keep growing, year after year. Visual effects are everywhere, 3D is spreading like wildfire, and studio render farms struggle to keep pace. Many studios are accustomed to renting physical servers during crunch time to increase their render farm capacity. More and more are now considering renting cloud servers, or bare-metal servers in remote datacenters, so they can focus on what they do best (VFX, not hardware operations).

The Assets Sync Issue

The problem is that many studios do not have great network connectivity to datacenters and cloud providers. Bandwidth and/or latency are often not good enough to let them fully enjoy the power of the cloud.
A little-known fact about rendering is that 3D artists build scenes that reference many assets stored in many other files. This means that when you want to render a frame, you usually don't know in advance which files and dependencies the job will require. Parsing scene files to extract the dependencies ahead of time is rarely possible, because most 3D file formats are proprietary. Nor can you simply push all your assets to the cloud, since artists keep working on them and you would quickly run into sync issues. Ideally you would have a Dropbox-like mirror of your 3D assets in the cloud. A shared drive over NFS or SMB gets you part of the way there, but those protocols are very sensitive to latency, so cloud render nodes accessing your local shared drive over a VPN or a similar link get terrible performance.
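To get a feel for why latency matters so much more than raw bandwidth here, the toy calculation below compares a chatty access pattern that issues many small, synchronous reads (each one paying a full round-trip, the way SMB-style traffic often behaves over a WAN) with a single bulk HTTPS download. All the numbers are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope comparison: chatty small reads vs. one bulk HTTPS
# transfer over a high-latency link. Numbers are illustrative assumptions.

def chatty_transfer_seconds(file_mb, rtt_ms, request_kb, bandwidth_mbps):
    """Each small read pays a full round-trip before the next one is issued."""
    requests = (file_mb * 1024) / request_kb
    latency_cost = requests * (rtt_ms / 1000.0)
    wire_cost = (file_mb * 8) / bandwidth_mbps
    return latency_cost + wire_cost

def bulk_transfer_seconds(file_mb, rtt_ms, bandwidth_mbps):
    """One HTTPS GET: a handful of round-trips, then the pipe stays full."""
    return 4 * (rtt_ms / 1000.0) + (file_mb * 8) / bandwidth_mbps

if __name__ == "__main__":
    # A 200 MB texture, 40 ms round-trip time, 100 Mbit/s of bandwidth.
    file_mb, rtt_ms, bandwidth_mbps = 200, 40, 100
    print("chatty reads: %.1f s" % chatty_transfer_seconds(file_mb, rtt_ms, 64, bandwidth_mbps))
    print("bulk HTTPS:   %.1f s" % bulk_transfer_seconds(file_mb, rtt_ms, bandwidth_mbps))
```

With a 40 ms round-trip, the chatty pattern spends most of its time waiting on the network rather than moving bytes, which is exactly what render nodes experience when they mount a remote share over a VPN.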

The Solution: Assets Sync on the Fly

Here at Seekscale, we have found a solution to this problem: what we call a turbocharged SMB proxy.
We designed a "fake" SMB server that shows the clients accessing it whatever we want it to show. We put this SMB proxy on our cloud render farm, where it presents the render nodes with an exact mirror of all studio assets, even though no file is actually present in the cloud (yet).
When a cloud render node starts rendering a 3D scene, it believes it has access to all studio assets (thanks to the SMB proxy), so it starts the job without complaining.
When the renderer tries to *actually* open a file, the SMB proxy holds the open call, downloads the file from the studio's assets server over HTTPS (a bulk transfer that stays fast even under high latency), and then releases the call.
In other words, the SMB proxy shows all studio assets to the cloud render nodes and downloads the required files to the cloud render farm on the fly, as the renderer needs them, without the renderer ever noticing that the files were fetched at runtime.
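As a rough illustration of this fetch-on-open idea, here is a minimal sketch in Python. It is not the actual smbproxy implementation (linked below); the endpoint URL, cache path, and asset names are hypothetical.

```python
# Minimal sketch of fetch-on-open: the file is downloaded lazily, the first
# time something actually tries to read it. Not the real smbproxy code;
# ASSETS_BASE_URL and CACHE_ROOT are hypothetical names.
import os
import urllib.request

ASSETS_BASE_URL = "https://assets.example-studio.com"  # assumed studio-side HTTPS endpoint
CACHE_ROOT = "/var/cache/assets"                       # local mirror on the render farm side

def open_asset(relative_path):
    """Block the caller until the file exists locally, then return a real file handle."""
    local_path = os.path.join(CACHE_ROOT, relative_path)
    if not os.path.exists(local_path):
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        remote_url = ASSETS_BASE_URL + "/" + relative_path
        # One bulk HTTPS transfer: cheap even on a high-latency link.
        urllib.request.urlretrieve(remote_url, local_path)
    return open(local_path, "rb")

# Usage: the render job asks for a texture exactly as if it were local.
# with open_asset("shots/sq010/textures/rock_diffuse.exr") as f:
#     data = f.read()
```

The real proxy does this at the SMB protocol level, so the renderer keeps issuing ordinary file operations and never has to know a download happened underneath.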

Native Integration with the Cloud

Concretely, this means that studios can use our cloud render nodes exactly as if they were on their local network, reading from their assets server.
Add to this a couple of smart NAT (network address translation) agents here and there, and our cloud render nodes show up on your local network and file system as if they were physically present in your office, with minimal performance loss (a bare-bones sketch of the relay idea follows below).
This kind of technology is crucial: studios need to enjoy the possibilities of the cloud without having to rethink their entire pipeline. Cloud render nodes should fit into the local pipeline, not the other way around.
And, good news, we open sourced the SMB proxy: https://github.com/Seekscale/smbproxy
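
The kind of plumbing involved can be illustrated with a bare-bones relay agent: a process on the studio side listens on a local address and forwards every connection to a cloud render node, so the node answers as if it sat on the LAN. This is only a sketch of the general idea, not Seekscale's actual NAT agent; the host names and ports are assumptions.

```python
# Bare-bones TCP relay: anything connecting to LOCAL_BIND is forwarded to a
# cloud render node, making it reachable through a local address.
# Illustration only; host names and ports are assumptions.
import socket
import threading

LOCAL_BIND = ("0.0.0.0", 9100)                           # address seen on the studio LAN
REMOTE_NODE = ("rendernode-01.cloud.example.com", 9100)  # hypothetical cloud render node

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            chunk = src.recv(65536)
            if not chunk:
                break
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LOCAL_BIND)
    listener.listen(16)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(REMOTE_NODE)
        # Relay traffic in both directions, one thread per direction.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A production agent would of course handle discovery, authentication, and many ports at once; the point is simply that the remote render node becomes reachable through an address that looks local to the studio pipeline.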
