SolidWorks:Heard! – CAD On The Cloud

I was supposed to take part in a round table discussion on “CAD on the Cloud” held by Lou Gallo, but could not make it [Friday night boozing session got in the way ;-)]. Lou has written about the discussion on his blog along with a link to the audio.

One of the topics discussed was 3D graphics and how quick and responsive it is going to be when CAD starts running in the Cloud and not on your desktop. I’d like to make a quick comment on that. When you load a 3D solid model in your CAD system, what you are actually seeing on your screen is a 3D render mesh, basically a bunch of colored triangles. So you may wonder what will happen if you are working on a large assembly which is represented by a million triangles. Every time you perform a modeling operation you will need to download a million new triangles, which the client application running on your desktop will then render on your screen. Obviously this is going to create a tremendous amount of lag, leaving you twiddling your thumbs while you wait for the million triangles to be downloaded before you can run your next command.

Either that, or the CAD on the Cloud solution will need to use some kind of screen scraping technology. By screen scraping, I mean that the server will render an image of the model in the Cloud and send it to the client running on your desktop, which displays it on the screen. While sending an image may sound faster than sending the vertex coordinates and colors of a million triangles, screen scraping is undesirable for a number of reasons. One of them is that the server will need to create a new image every time you move your mouse to navigate around the model, show or hide parts, display cross sections, etc. Basically, anything that needs a graphics window refresh now needs a round trip to the server.
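To get a rough sense of the numbers involved (these are my own back-of-envelope assumptions, not figures from any particular CAD system), suppose each triangle carries three uncompressed vertices plus a color:

```python
# Back-of-envelope: size of a million-triangle render mesh.
# Assumptions (hypothetical): no vertex sharing or compression,
# 3 vertices per triangle, float32 coordinates, one RGBA color per triangle.
TRIANGLES = 1_000_000
BYTES_PER_VERTEX = 3 * 4                       # x, y, z as float32
BYTES_PER_TRIANGLE = 3 * BYTES_PER_VERTEX + 4  # 3 vertices + RGBA color

total_bytes = TRIANGLES * BYTES_PER_TRIANGLE
print(f"{total_bytes / 1_000_000:.0f} MB per full mesh download")  # 40 MB
```

At roughly 40 MB per download under these assumptions, re-sending the whole mesh after every modeling operation clearly doesn’t scale; indexed meshes and compression shrink this, but not by enough to make full re-downloads painless.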

This is how Spatial is approaching the problem. The 3D model, as well as the software that works on it, runs on their servers in the Cloud. So the first time you load your model, the client running on your desktop will need to download the million triangles. Now suppose you add a blend to an edge. This alters the topology of the model, but only in the vicinity of the concerned edge. This means that a new face is added (the blend itself) and, say, four faces are changed due to the introduction of the blend. The server sends only the triangle data associated with these five faces to the client, which may amount to about 100 triangles in all. The client deletes the triangles associated with the four changed faces (not five, because the blend did not yet exist in the model on the desktop) and plugs the new triangles into the million already sitting there. This way, over a fast internet connection, the time lag will be virtually non-existent. Of course, if you trash the model so much in one operation that the server needs to send you half a million triangles, then it probably would result in a time lag.
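A minimal sketch of this incremental update, assuming a hypothetical client that keeps a map from face IDs to triangle lists (the data shapes here are my own illustration, not Spatial’s actual protocol):

```python
# Hypothetical sketch: the client stores the render mesh per face and
# applies a small diff from the server instead of re-downloading everything.

def apply_update(mesh, changed_faces, new_faces):
    """mesh: dict mapping face_id -> list of triangles.
    changed_faces: re-tessellated faces (their old triangles are replaced).
    new_faces: freshly created faces, e.g. the blend itself."""
    for face_id, triangles in changed_faces.items():
        mesh[face_id] = triangles      # drop stale triangles, plug in new ones
    mesh.update(new_faces)             # add the brand-new blend face
    return mesh

# Example: a blend touches four existing faces and creates one new face.
mesh = {f"face{i}": [f"tri_{i}"] for i in range(6)}
changed = {f"face{i}": [f"tri_{i}_new"] for i in range(4)}
mesh = apply_update(mesh, changed, {"blend": ["tri_blend"]})
print(len(mesh), "faces; only 5 faces' worth of triangles crossed the wire")
```

Only the five affected faces travel over the network; the untouched faces (and their triangles) never leave the desktop.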

Thought I’d just mention this here.

  • Rick

    That will be just dandy with surfacing. I tug a spline handle on a boundary and everything changes. Time for tea.

  • Maybe next time we will combine your Friday night boozing and the podcast. That should make for some lively audio! We will get you in the next session for sure. Thanks Deelip!

  • I think that’s a splendid idea. 😉

  • Ken

    HP has a product called RGS which does something similar with the screen image, but it just sends the changed pixels. Like Remote Desktop on steroids.

    I've been thinking about this cloud thing and I got to thinking about my Playstation 3. I play a game called War Hawk which can really only be played online with other users. It is amazing how much stuff goes on and is tracked relatively in real-time. One thing I realized while playing the game is that each client runs the app and generates the geometry you see (terrain, buildings, planes, jeeps, tanks, missiles, projectiles). It seems that all that is being pushed back and forth is the object coordinates. The interaction between the objects all happens on the client machine and only the resulting status is relayed back (hit, damage, kill).

    So wouldn't it just make sense to have the clients all run the geometry engine, so that when a round is performed, what is sent is actually the kernel instruction that creates the round?

    I guess we need to understand the reason for the cloud. I'm assuming that collaborative editing/review is the goal, otherwise who cares?

  • Ken

    Actually, I think the “Cloud” has been around for quite a while. It used to be called “VapourWare” 🙂

  • Rick

    1 million triangles will take more than 80 seconds over a 1 Mb/s data channel. We had better stick with simple toy models.

  • Well, then I hope you are terribly wrong. 😉

  • Jim Merry

    One of the benefits of cloud hosting is adding more capacity on the server and then finding clever ways to use those resources to solve issues like multi-user collaborative access. For developers this approach necessitates exploiting massively parallel compute resources, which is no easy task.

    I am with mental images, a subsidiary of NVIDIA, and our parent company's GPU hardware coupled with our server side software, RealityServer, solves this problem of multi-user 3D visualization via the web and insulates developers from the details of the cluster configuration. The latency problem is addressed with available video streaming technology combined with right-sized GPU clusters. The very first cloud providers are coming on line this month with 64 and 128 GPU clusters, and the first few software developers are starting to test their apps in this environment. For example, Autodesk was showing 3D Studio Max in the cloud with our interactive ray tracing renderer, iRay, last week at SIGGRAPH. 3D graphics, especially with high quality, physics-based rendering, is an obvious place to look to gain compute advantages in the cloud, but there are others and they will follow.

    Clever developers will be able to leverage these parallel compute resources available via the cloud in core CAD algorithms to speed up current methods like parametrics, constraints, booleans, etc., but also to find new methods, similar to the way direct modeling exploits compute resources to analyze dumb geometry in ways that were impossible just a few years back on Pentium 4 class machines.
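Ken's earlier suggestion of shipping the kernel instruction instead of the geometry is worth sketching too. Assuming every client runs a deterministic geometry kernel, the wire protocol can be a few bytes per operation (the command format and names below are entirely hypothetical):

```python
import json

def encode_command(op, **params):
    """Serialize a modeling operation: a few bytes versus megabytes of mesh."""
    return json.dumps({"op": op, "params": params})

def replay(client_model, message):
    """Each client replays the same command through its local kernel.
    Here the 'kernel' is just a list recording the operations applied."""
    cmd = json.loads(message)
    client_model.append((cmd["op"], cmd["params"]))
    return client_model

msg = encode_command("round", edge_id=42, radius=2.5)
model_a = replay([], msg)
model_b = replay([], msg)
print(len(msg), "bytes on the wire; clients identical:", model_a == model_b)
```

The catch with any such lockstep scheme is that every client must produce identical results from the same instruction, which is why multiplayer game engines that use it work hard to keep their simulations deterministic.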