Some Thoughts On Simulation

I am in Pune and just attended the Siemens PLM Connection today. I attended as press, not as a Siemens PLM Solution Partner. I learned a lot of stuff about the company, its products and its customers. I will need some time to synthesize all this information which I hope to do over the next few days.

The Siemens simulation solutions were highlighted a great deal in a number of presentations today. As I watched and listened I began thinking about simulation from a higher level. It dawned on me that the Siemens PLM simulation solutions like Nastran, Femap, etc., are actually not solving users’ problems. The same goes for the simulation solutions from every other CAD and analysis vendor out there today.

Before you think that I’ve lost it, I need to explain myself a little more here. When I say that today’s simulation solutions do not solve users’ problems, I am referring to the fact that these solutions merely report the problem and quantify it, but leave the fixing to be done by the user. For example, take an FEA analysis on a part. A user sets up loads, defines boundary conditions, assigns material properties, etc., and then tells the simulation software to do its thing. What the simulation software actually does is tell the user whether the part will fail or not, where and how it will fail, and give him a host of other data which he is left to study and decide what to do next. The software does not automatically tweak the geometry of the model for the user, reanalyze for failure and continue to do so till a fail-safe design has been reached.

What I am trying to say is that the simulation software merely reports the symptoms of a disease but does not cure it. You still need an experienced analyst to decide which parts of the geometry need to be changed and then start the analysis process over again till a fail-safe design is reached. My point here is that the real problem of a user is not to determine where and how a part will fail. His real problem is to come up with a fail-safe part, which is something that today’s simulation software does not do. It merely aids the user in arriving at a fail-safe part.

I am actually talking about part optimization here. Imagine a situation wherein you design a part, run an FEA analysis on it, and the software actually goes ahead and adds ribs in places where it determines they are required to keep the part from failing. The end result of such an operation would be changed geometry, and not just a bunch of stress and strain values, factor of safety numbers or a picture showing a part in different colors. Similarly, the software could automatically weaken certain parts of the model where it can, so that you save on material and/or manufacturing cost. For example, it could remove ribs or reduce their quantity and/or size.
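
To make the idea concrete, here is a minimal, purely illustrative sketch of such an analyze-modify-reanalyze loop. Nothing here reflects how any shipping simulation product works: the FEA solve is replaced by the textbook bending-stress formula for a rectangular cantilever, and the "geometry change" is just growing or shrinking the section height, but the control loop is the point.

```python
# Illustrative sketch only. The "analysis" is the closed-form bending stress
# of a rectangular cantilever (sigma = 6*F*L / (b*h^2)), standing in for a
# real FEA solve; the "geometry edit" is simply changing the section height.
# A real tool would drive a solver and a CAD kernel at those two points.

YIELD_STRENGTH = 250e6      # Pa, mild steel (illustrative value)
TARGET_SF      = 2.0        # required factor of safety

def analyze(force, length, width, height):
    """Return the factor of safety for a rectangular cantilever."""
    sigma = 6.0 * force * length / (width * height ** 2)
    return YIELD_STRENGTH / sigma

def optimize_height(force, length, width, height, step=0.001):
    """Grow the section until it is fail-safe, then trim any large excess."""
    for _ in range(1000):
        sf = analyze(force, length, width, height)
        if sf < TARGET_SF:
            height += step            # strengthen the part
        elif sf > 1.5 * TARGET_SF:
            height -= step            # shed unneeded material
        else:
            break                     # fail-safe and reasonably lean
    return height, sf

if __name__ == "__main__":
    h, sf = optimize_height(force=2000.0, length=1.0, width=0.05, height=0.01)
    print(f"suggested height: {h*1000:.1f} mm, factor of safety: {sf:.2f}")
```

A real implementation would obviously be far more involved, but it could iterate in much the same way, handing the user changed geometry rather than just a report.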

I understand that analysis takes a lot of time and that making it into an iterative process will only make it take longer. But I am looking at this with the future in mind. What do you think? Do you think engineers in the future will continue to work the way they work today? If you do FEA analysis, do you stop at the point where a part becomes fail-safe, or do you tweak the geometry where you can to see if you can arrive at a better and more efficient design? Do you think computers will be fast enough and software will be smart enough to start with an initial design from a user and take it forward from there? Is this already possible to an extent? I’d love to know what you think.

  • Bohdan

    Morphogenesis …

  • Allan

    Deelip, I believe what you're suggesting is possible but very tricky to implement. You would need to harness KBE tools like Catia BKT or Siemens Knowledge Fusion and set up “generative” behaviour. This presents multiple challenges:
    1. Cost of the prerequisite technology (not just software but the implementation costs).

    2. Finding the experts capable of doing the work. You have an unusual advantage because you understand the fundamentals of product design and software design. Very few individuals exist on the planet today who provide both.

    3. The “hit by the bus” syndrome. What happens if I'm fortunate enough to have an individual as described in my second point but I'm unable to retain that person as an employee?

  • Allan,

    Actually, I am suggesting that the CAD/analysis vendors put the KBE intelligence into the simulation software itself. I am not suggesting that each company hire specialists to automate the current simulation software that we have today. That would defeat the purpose. The point is to push human involvement to the later stage, where a design is accepted or rejected by a human after careful examination. Up until then the simulation/optimization software will run on autopilot and simulate/optimize the design on its own. It could also poke around the design and try out some what-if scenarios. I am talking about some really intelligent software here. And of course, some really powerful hardware to go along with it.

  • Kevin Quigley

    I see where you are coming from Deelip, but that brings with it a whole new can of worms for CAD vendors. Like liability. If you are asking the CAE system to churn out part solutions, this opens it up to a potential liability issue – what happens when said automatically optimised part fails spectacularly, taking down a plane full of passengers?

    There is also the question of end user “trust” in the outcome. What you are advocating is actually exactly what CAD vendors have been trying to suggest for years – that their CAD system can make analysis a task for 2 year olds. Now all you are adding to that is the 2 year old producing the optimised part and issuing it for manufacture.

    I can see that for some CAE tasks this kind of thing would be useful – like rib placement/optimisation of part thickness for moulding/weld locations/gate placement etc. – but for fundamental part development I am not so sure.

    I have no doubt that it could be done, but the question is just because it can be done does that mean it should be done, and more importantly, do you trust those results?

    CAE is the serious end of the CAD business. Unlike visualisation where a few shortcuts and fakes can get a good result, or part modelling where we can slice and dice to get a valid result, with CAE you do kind of hope that the person driving the bus at least knows how to turn the wheel on the mountainous twisty road with sheer drops on either side – rather than relying on the auto pilot…..

  • While working for the DoD we used to use CADDS5 pretty extensively, working on SSN boats of the late '80s/early '90s, and it had a very nice optimization tool. In fact we used to joke that, given enough latitude with regard to the parameters it was allowed to alter and left to its own devices, you could conceivably end up with a part that bore very little resemblance to the original version defined by the designer.

    There are plenty of optimization tools on the market today that are just as capable, but you would have known that already so I am wondering if I am missing something?

  • Paulw

    All things are possible given enough money and time – but given that software has no intelligence, would it be wise to hand your design over to those who define the rules driving their software and let them redesign your product?

    Additionally, who checks on design intent (whatever that is; an extra rib may be safer but ugly and/or cause interference, etc.), trust and liability?

  • Shyamalroy

    Deelip, now perhaps you are beginning to see the light that I have been talking about – post-modeling simulation tells the user where he screwed up. Pre-modeling simulation enables users to get the design right from the beginning.

    Making more powerful post-modeling simulation tools is like wiping the floor while the tap is still running.

  • Greg

    The biggest problem is still, as you state, “[the] user sets up loads, defines boundary conditions, assigns material properties, etc”… All of these tasks have very, very large uncertainties involved, and this fundamental input will obviously affect the quality of any output (GIGO).

    I believe this is the elephant in the room when anyone resurrects the 18+ year old marketing spin about Designer FEA. Sure, the codes today are faster and have easier-to-use UIs; however, real life problems are still far too difficult to solve for anyone who is not privy to the inner workings of the code (what simplifications/abstractions can I use? is use of symmetry valid? why is contact not converging? why is this linear tet a bad idea, or is it? is my mesh affecting the convergence? did I even bother to check my mesh?)

    When we've nailed these issues, then we can finally start to move ahead.

  • Kevin,

    No, no, no. Some experienced engineer will still need to sign off on the final design. That is a must. I am not going to sit in a plane that is designed completely by a computer. Hell no.

  • True, software may not have intelligence as we humans know it. But by plugging knowledge based engineering and the like into it, we can give it some of that ability. One thing is for sure: software is capable of running far more what-if scenarios than humans can, thanks to its sheer computing power. We could certainly make use of that.

    Eventually a human will need to sign off on the final design, and before signing off he will do whatever he needs to do to ensure whatever needs to be ensured. My point here is that it is quite possible that the design he is eventually handed to work with may be of better quality and more optimized than the initial design that was fed into the software.

  • And how would you know?

  • How do you know that the planes that you fly in today are designed by engineers and not janitors? Do they paste a copy of the degrees of those engineers on the back of every airplane seat?

  • Exactly.

  • Yeah, exactly. But people still fly, don’t they?

  • Aniruddha

    Deelip,

    Have you ever tried Pro/MECHANICA (Pro/Engineer's CAE module)? It has built-in capabilities for “Design Studies”. Pro/MECHANICA, being a p-mesh code, has some distinct advantages over other h-codes in the market, especially when it comes to running iterative simulations. One basic advantage is that it doesn't have to remesh the complete model between two iterations. It can re-use the mesh from the previous iteration and incrementally update the mesh in the changed geometry.
    Another advantage is that the user ALWAYS gets a “converged” solution. Mechanica internally checks for convergence and keeps running the problem iteratively, increasing the p-order locally, until it gets a converged solution.
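
    For anyone unfamiliar with the p-method, a tiny illustrative sketch of that convergence loop follows. It is not how Pro/MECHANICA is implemented, and numerical integration merely stands in for the structural solve, but the keep-raising-the-order-until-the-answer-stops-changing logic is the same general idea.

    ```python
    # Toy illustration (not Mechanica's actual algorithm): keep raising the
    # polynomial order of the approximation until the answer stops changing,
    # rather than remeshing. Gauss-Legendre integration stands in for the solve.
    import numpy as np

    def solve(order, f=lambda x: np.exp(np.sin(3 * x))):
        """Stand-in 'analysis': Gauss-Legendre integration of f on [-1, 1]."""
        x, w = np.polynomial.legendre.leggauss(order)
        return np.sum(w * f(x))

    def p_converge(tol=1e-6, max_order=20):
        """Raise the order until two successive 'solutions' agree within tol."""
        previous = solve(1)
        for order in range(2, max_order + 1):
            current = solve(order)
            if abs(current - previous) < tol:
                return current, order      # converged value and final p-order
            previous = current
        raise RuntimeError("did not converge within max_order")

    value, order = p_converge()
    print(f"converged at p-order {order}: {value:.8f}")
    ```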

  • Kevin Quigley

    I tried this a while back, very interesting technology but it is primarily for visual design to try to get the design off to a good structural start. You couldn't actually do anything from the results of the Morph – it is like a napkin sketch of a framework to build the design over. Still interesting though.

  • Kevin Quigley

    I agree, and that is what any good engineer or designer would start with anyway. But some problems are more complex than a simple skeleton design can solve. I'm not exactly sure what Deelip is proposing with all this, but my training always taught me to start with the big picture and gradually iterate to the detailed solution. If you simply go ahead and plug a set of variables into an automatic optimisation system, you get straight to the detail immediately and as a result can miss out on the “best” solution to the design problem. That old computer adage still applies – crap in = crap out – no matter how clever your software is.

    Besides, all this has been/is being done right now in other areas – think long-term climate modelling, financial predictive modelling – and they are so accurate, right?

  • murray

    I remember a bunch of “you can't possibly think” arguments from back when drawings smelt of ammonia.

  • Oh, I missed the chance to meet you at Siemens PLM Connection in Pune, my bad luck!
    I heard your name while speaking with Mr. Mawah, Director Marketing, during the tea break.
    But I thought they were talking about your blog…
    Sir, next time in Pune I would like to meet you.

  • Bohdan

    Yes, I know. It's just an example. But I can imagine complete structure or assembly evolution + virtual testing in a computer. 🙂

  • No, I haven't tried it. It does sound interesting. Thanks for sharing.

  • If you put crap in you will definitely get crap out. I am not suggesting that we let the software put crap into the analysis. The input to the analysis will still be the initial model from the user, just as it is today. Now if that is crap then we are headed nowhere, even today. Actually, the big picture you mention is precisely where I want to start as well. It's just that if the analysis software is given a relatively free hand to hunt around and look for a better solution, based upon some intelligence that the software vendor has put in and some more that the user himself has added, it may be a good thing.

  • Sure, lets catch up next time. You can get me on +91 9822689298.

    And please stop calling me “Sir”. You are making me sound old. 😉

  • Smartin

    I know at least some analysis programs can do iterative optimization by changing the dimensions of existing features. I don't know of one that can add or subtract features on its own, which is where I think you're looking. I think software with free rein to add features would be a challenge to implement efficiently, due to the large increase in the amount of data the designer would need to feed the software. Giving software the ability to wholly delete selected features (instead of just resizing them) during the optimization might be workable, however.
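
    As a rough sketch of what I mean (the "strength" scoring below is made up purely so the example runs; a real tool would be calling an FEA solver at that point), an optimizer restricted to resizing or wholly deleting designer-selected ribs could be as simple as an exhaustive search over the allowed choices:

    ```python
    # Sketch: the optimizer may resize or delete designer-selected ribs, but
    # never invent new features. The strength/mass models are toys so the
    # example is self-contained and runnable.
    from itertools import product

    RIB_THICKNESSES = [0.0, 2.0, 3.0, 4.0]    # 0.0 means the rib is deleted
    REQUIRED_STRENGTH = 10.0

    def strength(ribs):
        """Toy model: each millimetre of rib thickness adds one unit of strength."""
        return 4.0 + sum(ribs)                 # 4.0 = base plate contribution

    def mass(ribs):
        return sum(ribs)                       # proxy for material used

    def optimize(n_ribs=4):
        """Pick the lightest combination of kept/deleted/resized ribs that passes."""
        best = None
        for ribs in product(RIB_THICKNESSES, repeat=n_ribs):
            if strength(ribs) >= REQUIRED_STRENGTH:
                if best is None or mass(ribs) < mass(best):
                    best = ribs
        return best

    print(optimize())   # e.g. (0.0, 0.0, 2.0, 4.0): two ribs deleted, two kept
    ```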

  • Smartin,

    That’s an excellent point. Deleting geometry may be much easier than figuring out what and where to add geometry. I would like to see software actually doing more to solve the problem. The idea here is not to eliminate the human, as some people commenting may be thinking. The idea is to let the software work harder for the human.

  • Tony

    Deelip,
    You should look into what other industries are doing. For example, I'm pretty sure that for high end PCBs, even the most expensive auto-routers produce crap compared to an experienced designer. But, they do provide a lot of helpful tools (e.g. checking on impedance matching) that would be painful for the designer to constantly calculate.

    One approach is to hand route the critical signals (multi-gigahertz, analog, high power, etc), and then let the auto-router handle the rest.

    Another analogy is software optimization — sure you can try to find all kinds of little speed ups (like CAE optimization), but most of the time if you need a big speedup, you'll have to change your approach (algorithms). I guess this ties back to Shyamal's point — it's best to have engineers who can evaluate various basic approaches at the start, instead of always starting with one approach, detailing it, then doing FEA and optimization.

  • Agree. Multiple start points would be the best thing, since they will lead to multiple end solutions from which you can pick the best one. However, in cases where we don’t have that kind of luxury, maybe the software could be smartened up a bit to test multiple scenarios for us.
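
    Something like the following sketch, where the same local optimization is simply run from several starting designs and the best outcome kept. The objective function here is a toy with two local minima, standing in for an FEA-driven cost:

    ```python
    # Multi-start idea: run the same local optimization from several initial
    # "designs" and keep the best result. Toy 1-D objective, not real FEA.

    def objective(x):
        return x**4 - 3 * x**2 + x          # has two local minima

    def gradient(x):
        return 4 * x**3 - 6 * x + 1

    def local_optimize(x, rate=0.01, steps=2000):
        for _ in range(steps):
            x -= rate * gradient(x)          # simple gradient descent
        return x

    starts = [-2.0, 0.0, 2.0]                # several initial designs
    results = [(local_optimize(x0), x0) for x0 in starts]
    best_x, best_start = min(results, key=lambda r: objective(r[0]))
    print(f"best design {best_x:.3f} (found from start {best_start}), "
          f"objective {objective(best_x):.3f}")
    ```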

  • Mook

    Deelip, you're on the right track in recognizing the importance of FEA in the overall design, but the solution imo is to make FEA more integrated in the design early on, instead of treating it as a non-integral “specialty” discipline as it's often considered now.

    Regarding the possibility of FEA-driven geometry optimization, the parameters and choices would seem to approach infinity depending on the design situation, making FEA geometric optimization impractical. If you moved something in the optimization, that change could bump into or interfere with other parts in the assembly or outside of the assembly, or change overall functionality, or make the part less constructable from a manufacturing standpoint, or make it less aesthetic. The potential problems are endless. I don't see how anyone could possibly add enough “intelligence” to the FEA software to account for all that. Deleting parts in an optimization may pass the stress test, but may introduce a myriad of other problems unrelated to FEA. Perhaps if you were to limit the FEA optimization to very specific types of designs, then maybe some geometric optimization could take place within those specific applications, but I don't see it being feasible on the kind of scale that you're suggesting.

  • Thanks.

  • Shyamalroy

    The best time to establish boundary conditions is in the pre-modeling stage of design. In this stage conflicts are the easiest to detect and the least expensive to fix, and it is easier to perform “what if” analysis of design options.

    In mechanical design geometry and mathematical issues are inseparable. A CAD model from any vendor includes negligible mathematical value – it just represents half the description of the design challenge. To “retrofit” the mathematical aspects of the design after the model has been built limits the user's flexibility to get the best possible design.

    The best designs occur when careful consideration is given to the mathematical aspects of the design before committing to modeling.

    To make FEA more powerful is akin to a story I heard – two people started a business buying tomatoes for $1 a pound in Los Angeles and selling them for 75 cents a pound in San Francisco. Two months later they had a meeting to find out why they were losing money and decided to buy bigger trucks!

  • Mook,

    I get your point about trying to search an infinite number of solutions. That’s not the idea here. This is not like creating a chess game on a computer where the software needs to drill down to each and every possible branch and leaf of a decision tree. The software can make educated guesses as to which few paths it should follow. The idea is not to arrive at the ultimate best solution, but just to see if it can find one that is better than the one originally created by the user.

    Another thing. Say you have a plastic part and have not yet designed the ribs that it obviously needs to give it the required strength. You could apply the loads, and an algorithm using best practices and knowledge based engineering techniques could come up with the rib network. It would help to do this in the context of the assembly so that interference with other parts is checked as well. I understand that this would not work well for external facing surfaces because the part needs to look good. But then nobody puts ribs on external faces, right?
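
    A rough, hedged sketch of what such a rule-based rib generator might look like. The numbers are illustrative moulding rules of thumb, not recommendations, and the functions work on plain numbers rather than a CAD kernel, but they show the shape of the idea: encode the best practices, propose a rib network, and reject anything that interferes with neighbouring parts.

    ```python
    # Illustrative only: encode a few rules of thumb, lay ribs across an
    # internal panel, and skip keep-out zones where other parts sit.
    WALL = 2.5                                     # nominal wall thickness, mm

    RULES = {
        "rib_thickness": 0.6 * WALL,               # thinner than wall to limit sink marks
        "rib_height":    3.0 * WALL,
        "max_spacing":   10.0 * WALL,              # at most this far apart
    }

    def propose_ribs(panel_length, keep_out_zones):
        """Place ribs along an internal panel, skipping keep-out (interference) zones."""
        spacing = RULES["max_spacing"]
        positions = []
        x = spacing
        while x < panel_length:
            blocked = any(lo <= x <= hi for lo, hi in keep_out_zones)
            if not blocked:
                positions.append(round(x, 1))
            x += spacing
        return [{"x": p,
                 "thickness": RULES["rib_thickness"],
                 "height": RULES["rib_height"]} for p in positions]

    # Panel 120 mm long, with a boss between 45 mm and 55 mm that ribs must avoid.
    for rib in propose_ribs(120.0, keep_out_zones=[(45.0, 55.0)]):
        print(rib)
    ```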

  • Deelip… I've heard this same argument for a number of years. Valid question. I think what you are suggesting has a place in the simulation world. But what you are suggesting is a fairly significant departure from what simulation is all about and, as others have noted, more of a KBE function tying in simulation data.

    Many have griped that vendors are making FEA/CFD too easy and making it dangerous. Silly, IMHO. Think of what “customer problem” the simulation market is trying to address. The problem is that physical prototyping is too expensive, takes too long and often doesn't provide insight into why something failed or a look “inside” the model at possible causes. That is precisely what simulation software addresses. It allows a faster, cheaper and more insightful way to “test” your designs. It can provide far more info than many physical tests. Does it require a bit of know-how to interpret the results? Of course it does. It's called engineering and design; simulation is just a tool in the toolchest. Not much different than if your company purchased a wind tunnel or a thermal chamber or a physical test bench. It requires some know-how to operate it correctly and understand what info you are getting, and it's up to you, the engineer/designer, to take that info and improve your design, “test” it again and iterate.

    So.. I'm not disagreeing with you that providing suggestions and having options to provide suggestions on alternatives would be cool. But, how about designers and engineers being trained to use the tools (digital or physical) that are available out there to design better products?

  • Mook

    Deelip, I like your chess example.. it raises a fair point. Yes, for some designs that have limited design parameters which can be clearly defined and lend themselves to an FEA optimization, I could see benefit. The problem is, with most real world designs, the number of possibilities far exceeds that of the most advanced computer chess programs, since in addition to all the many FEA decisions (element types, BCs, material properties, meshing, optimization parameters, load limits on connecting parts/equipment, failure modes etc.), we also need to consider countless possibilities involving constructability, interferences, availability of materials and standard parts, aesthetics, manufacturing procedures already in place, ease of maintenance/repair and on and on and on. Tons of FEA and non-FEA considerations to be accounted for. To take your example, who says ribs are never applied on external faces? Sometimes yes, sometimes no, depending on what you're designing. And the FEA shape optimization needs of a designer of plastic parts would be significantly different from those of a designer of pressure vessels, which are way different from the needs of a designer of wellhead drilling devices or limb prosthetic designers or bridge engineers or whatever. FEA is used in so many, many different industries with huge differences in design requirements.

    Simple rules for certain aspects of the design such as spacing or thickness or even materials, sure, those optimization parameters could probably work to some extent in many industries. But to take a simple example, FEA of a plate to see if you could get away with a thinner design: even in that simple example, maybe localized cover plates or thicker edges would do the trick instead of thinning the entire area… or maybe cover plates are not feasible for whatever reason. I still think there are too many possibilities for a “generalized” FEA shape finder, since design requirements vary too much between different industries and even vary significantly among vendors in the same industry.

    FEA has the potential to drive many designs and should be implemented early-on in designs where FEA is applicable. Why not do the initial modeling in FEA and export to CAD instead of vice versa? Geometry generation tools in many FEA programs are quite capable of doing that for many designs. In a lot of situations, designers are literally “flying blind” in preliminary designs without FEA results to guide their decisions.

  • Mook,

    Good points about the number of factors affecting which parameters can and cannot be adjusted. However as I was reading your comment it occurred to me that you must have already segregated these parameters and factors into important and unimportant (or some other classification) depending upon what you were designing. The same could be done in the software itself. As a first step the user could tell the software what kind of part he is designing. The same rules of thumb that engineers use for each kind of part could be encoded into the software and those rules would be the guiding logic for the software to go down various decision paths or stay away from them. My point is that software will not use the same logic for designing welded joints and plastic parts. So you really don't need to come up with one huge generalization for all kinds of FEA analysis. That would be next to impossible.

    As every engineer knows so well, huge problems usually become manageable when broken down to smaller ones.
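
    In code, the partitioning I have in mind could be as simple as a rule book keyed by part type. Everything below is illustrative, but it shows how the optimizer's allowed moves and mandatory checks would differ between, say, a plastic part and a welded frame:

    ```python
    # Illustrative sketch: each part type registers its own rules of thumb and
    # the geometry moves the optimizer is allowed to try. The point is only
    # that the logic is partitioned per part type rather than generalised.

    RULE_BOOK = {
        "plastic_part": {
            "allowed_moves": ["add_rib", "remove_rib", "thin_wall"],
            "checks": ["sink_marks", "draft_angle", "assembly_interference"],
        },
        "welded_frame": {
            "allowed_moves": ["resize_member", "delete_member", "add_gusset"],
            "checks": ["weld_access", "fatigue_at_joints", "assembly_interference"],
        },
        "pressure_vessel": {
            "allowed_moves": ["change_shell_thickness"],
            "checks": ["code_compliance", "nozzle_reinforcement"],
        },
    }

    def plan_optimization(part_type):
        """Return the decision paths the optimizer may follow for this part type."""
        if part_type not in RULE_BOOK:
            raise ValueError(f"no rule set encoded for part type '{part_type}'")
        rules = RULE_BOOK[part_type]
        return rules["allowed_moves"], rules["checks"]

    moves, checks = plan_optimization("plastic_part")
    print("moves the software may try:", moves)
    print("checks it must run on every candidate:", checks)
    ```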

  • Ken

    Unfortunately in the mechanical world, the variability of how parts are used and their intended function is so broad, I would expect it would be impractical to set up a simulation to account for every variation and still provide an adequate solution. I expect it would mean that for FEA to do as you proposed, not only would the engineer need to specify stress based loads and constraints, they would also need to specify every other design constraint that must be adhered to as well such as volume constraints (intruding into other parts, air flow obstruction, heat retention), weight restrictions, cost restriction, alternative materials & manufacturing method options, aesthetics, and a host of other constraints. Now all of a sudden you've added administrative overhead to manage all of these variables as a company (read “additional head count to do the same job”) because you won't get rid of the engineer who is ultimately responsible for making the right decision.

  • You know how computing power doubles every 9 months (Moore's law, although I have badly mis-stated it)? Most FEA analysts are unimpressed, because even at that pace we still won't see the type of problems we want to solve become solvable in our lifetime.

    The second problem is that the FEA method is an approximate method and 99.99% of all results are wrong. The real challenge for an analyst is knowing how wrong. This is not something that is easy to address with software, and it is what makes the analyst (or the training Derrek mentioned above) so valuable.

    Mark

  • Nobody is getting rid of the engineer at all. Just that the software will actually be doing more to solve the real problem, which is arriving at a better fail safe part or assembly and not just a part or assembly that does or does not fail.

  • Mook

    Although I'm sure you meant the 99.99% wrong in FEA analysis to be in a specific context, it's hugely misleading to make that statement in a general context, which you did.

  • Mook,

    Sorry, I'm not quite sure what you mean. All I meant was that errors of 1%, 5%, 20%, even 200% in the results should not surprise anyone.

    Too many people see a pretty colored plot and think that is the right (as in “exact”) answer. A good analyst can give you a feel for how accurate that answer really is. And even results that are off by 200% can still be valuable.

    FEA is incredibly useful but it's not like CAD where you have 8 or more digits of precision.

  • Mook

    Please define “wrong” in a general FEA context.

  • Greg

    Since it is ultimately a physical test that validates an aeroplane's design, why does it matter who presses the solve button on an early FEA?

  • Greg

    I agree with your point, Mook. In fact the instances of the software getting it wrong are quite rare indeed, i.e. software bugs that cause incorrect or misleading results. The VAST majority of errors are by the user, either on the input (again, what are those loads? how are you constraining that part? can you really ignore that fillet?) or on interpretation of the output.

  • Yeah, it looks like I fumbled the ball trying to make my point. When I teach FEA to CAD folks, I try to make the point that the precision you get in CAD should not be expected in FEA.

    In CAD, a number that is not spot on is likely wrong. In FEA, you need to think it through a bit more.

  • Mook

    Precision comparisons between FEA and CAD seem misleading imo. High precision CAD models and drawings of buildings or parts can be built that immediately fail in use. That's where FEA can help in some applications. High precision drawings or CAD models don't mean squat if the design is no good.

    Although CAD models are typically more detailed, they don't model and dimension everything, only what's “needed” for procurement, manufacturing or construction. FEA models include what's needed as determined by the engineer.. a subjective judgement based on experience. A foundation or piece of equipment or even an entire connected assembly may be simplified using joint springs or nonlinear links. And it often makes sense to model certain elements centerline only using infinitely thin line/stick finite elements rather than shells or solids since line elements are not sensitive to meshing for static loads. FEA optimization would have to be very specific and limited in order to add value.