In Part 2 I discussed the Solid Edge ST steering wheel, a smart tool that adapts to the geometry it is attached to. In this article I will be discussing a concept called “Live Rules”. To put it simply, live rules are a set of instructions that the user passes on to the steering wheel. The steering wheel decides the direction in which the faces should move or rotate; the live rules simply decide which faces should take part in the operation.
In the figure below I intend to move the hole further away from the center of the handle. The most obvious thing to do is to select the hole (highlighted in orange) and pull it along the Y axis. This is what would happen in other solid modeling applications.
The hole moves all right, but the rest of the model stays put. Not what I had in mind. But if I do the same thing in Solid Edge ST, the hole moves and the rest of the model adjusts accordingly. Exactly what I had in mind.
This happens because of live rules. So now you must be thinking, “Wow! This live rules thing must be really smart.” I got news for you. Live rules is the dumbest part of Synchronous Technology.
Please allow me to explain. First I need to show you the live rules window in Solid Edge ST. This is what it looks like.
This window pops up every time you select a face. It is basically an extended face selection utility: a tool to select faces other than those you have already manually selected with the mouse. When I was moving the hole, the “Concentric” live rule was checked. That is why the outer cylindrical face of the hole (which happens to be concentric to the cylindrical face that I picked) was added to the selection and both faces were moved together. The other live rules kicked in to maintain tangency. This means that if I had unchecked the “Concentric” live rule and then moved the hole, the rest of the model would have stayed put, giving me the same undesired result as in the first figure above. And this is exactly how other solid modeling programs work: to move the hole I would need to select the hole as well as the outer cylindrical face and pull them both to get the desired effect.
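Conceptually, a live rule is nothing more than a predicate used to expand the selection set before the move happens. Here is a minimal sketch in Python of that idea; the face representation, function names, and the concentricity test are all my own invention for illustration and have nothing to do with Solid Edge's actual internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CylFace:
    """Toy stand-in for a cylindrical face: an axis plus a radius."""
    axis_origin: tuple
    axis_dir: tuple
    radius: float

def concentric(a: CylFace, b: CylFace) -> bool:
    # Two cylindrical faces are concentric if they share the same axis.
    return a.axis_origin == b.axis_origin and a.axis_dir == b.axis_dir

def expand_selection(picked, all_faces, rules):
    """Start from the manually picked faces and add every face that
    satisfies an enabled live rule against any picked face."""
    selected = set(picked)
    for face in all_faces:
        if face in selected:
            continue
        if any(rule(p, face) for p in picked for rule in rules):
            selected.add(face)
    return selected

# The hole's inner wall and its counterbore share one axis;
# a third face on a different axis is left out of the move.
inner = CylFace((0, 0, 0), (0, 0, 1), 5.0)
counterbore = CylFace((0, 0, 0), (0, 0, 1), 8.0)
other = CylFace((30, 0, 0), (0, 0, 1), 5.0)

moved = expand_selection({inner}, [inner, counterbore, other], [concentric])
print(sorted(f.radius for f in moved))  # → [5.0, 8.0]
```

Unchecking the “Concentric” live rule corresponds to passing an empty rule list: only the face you picked would move, which is exactly the undesired result described above.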
So why am I calling the live rules in Synchronous Technology dumb? Well, because they are not set automatically every time you select a face. The live rules window simply pops up with the settings you last used. And this is the fundamental difference between Solid Edge ST and other solid modeling programs. The developers of these programs have tried to make the software smart enough to guess what the user intends to do and set up similar rules internally. In effect they make the software think for the user, and not surprisingly, often get it wrong. In Solid Edge ST, the user thinks for the software. He sets up the rules and the software simply follows instructions. If the result of the operation is undesirable, then the user has himself to blame and not the software. Using this approach the user will always get a desirable result because he is the brain behind the operation and not some artificial intelligence coded into a complex face selection algorithm trying to read the user’s mind.
As you can see, in both programs the same thing happens – two concentric faces are moved. The only difference is that in the solid modeling program the user has to manually select both faces, and in Solid Edge ST the user selects only one face and leaves instructions for the software to select the second. Basically Siemens has simply put the intelligence back where it belongs, in the human. Just that they have made it easier for him to use it.
The way Siemens has packaged this live rules thing into Synchronous Technology is brilliant. I salute the person who came up with it. When I model a part in Solid Edge ST everything happens exactly how I had planned. I have to remind myself that it is because my brain is driving the modeling operations. The software is simply following my instructions. The modeling process is so smooth that one can easily lose sight of that simple but profound fact.
Making a user set live rules is a good thing, but I think Siemens can lend a helping hand here. I suggest that when I select a face, it would help if there were some visual feedback that shows me which other faces will be party to the operation due to the live rules that have kicked in. These faces could be given a glow or something that makes them look different from the rest of the model. This would save the user a lot of undo operations. When I select a face I find myself doing some mental calculations to figure out which other faces are going to join the party. While this may be quick and easy for small models, things can get complicated for complex models where the probability of gate crashers increases. Sometimes I skip the mental calculations, go ahead and move/rotate the face, and then find myself looking around to see if anything else is moving. This is a major distraction and takes away the joy of working with this technology.
When working with large models you often need to zoom into a particular section to interact more closely with that part of the model. It may so happen that a live rule that you set up ends up selecting a face which is out of view, and out of your intended plan as well. For example, take the “Coplanar” live rule, which automatically selects faces that lie in the same plane as the face you selected. You may want to move just the face you selected, but may end up moving another face that is out of view and that is coincidentally coplanar with the face you selected. I suggest that in such cases, a visual cue warning the user that something is happening outside the current view would be helpful. Maybe a thin band touching the side of the window closest to the face which is out of view.
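The coplanarity test itself is trivial, which is exactly why it can surprise you on a large model: any face anywhere in the part that happens to satisfy it will join the move. A quick sketch of the idea (again with invented names and a toy plane representation, not Solid Edge's internals), where a planar face is reduced to its plane equation n·x = d:

```python
def coplanar(a, b, tol=1e-9):
    # Two planar faces are coplanar if they share a normal direction
    # and the same signed offset from the origin (plane: n . x = d).
    (na, da), (nb, db) = a, b
    return na == nb and abs(da - db) < tol

picked = ((0.0, 0.0, 1.0), 10.0)     # the face you clicked on
far_away = ((0.0, 0.0, 1.0), 10.0)   # off-screen, but the same plane
unrelated = ((0.0, 0.0, 1.0), 25.0)  # parallel, different offset

print(coplanar(picked, far_away))   # → True: it silently joins the move
print(coplanar(picked, unrelated))  # → False: parallel is not coplanar
```

The rule has no notion of “near the face I am looking at”, which is why some out-of-view warning indicator would earn its keep.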
In the next part I will discuss 3D dimensions. This will address the issue of “design intent” that skeptics of direct modeling software have been talking about.