
Self-randomization in MD Nastran

It is not necessary to explain why the inclusion of scatter and uncertainty in FE models (and any other numerical models, for that matter) is important. It is impossible to talk of robust design or risk assessment and management if elements of uncertainty are not taken into account. In the past, uncertainty was accounted for via safety factors. Today, thanks to advances in software and hardware technology, uncertainty may finally enter a numerical model directly, in the way it manifests itself in nature, for example via Probability Distribution Functions (PDFs), and not through the "back door" in the form of safety factors. In fact, in the mid 1990s the first stochastic meta-application, ST-ORM, was launched, enabling large industrial problems to be solved efficiently using Monte Carlo Simulation methods. The tool allowed the user to plug random values of variables into the input files of a given FEM solver and to repeat this process efficiently until a desired statistical relevance in the results was reached. Typically, jobs on the order of one hundred samples were sufficient to reveal new and unexpected behaviour. An emblematic result was achieved in 1996 with the BMW 3-Series model in a frontal crash simulation with PAM-Crash, in which pronounced and unexpected clustering of the behaviour pointed to a hidden bifurcation in the design. Numerous similar examples have been obtained and documented in the literature. However, one major limitation of first-generation tools like ST-ORM was that they required a great deal of user interaction in order to define a stochastic problem. In cases with hundreds of variables, the user was forced to spend hours interacting with the meta-application.
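The workflow described above can be pictured with a minimal sketch of such a first-generation, input-file-based Monte Carlo loop. Everything in the sketch is a hypothetical stand-in rather than ST-ORM itself: the solver command ("my_solver"), the template file and its placeholders, the assumed normal PDFs and the extract_response() parser are illustrative only.

import random
import subprocess

def extract_response(result_file):
    # Hypothetical parser: read a single scalar response from the solver output.
    with open(result_file) as f:
        return float(f.read().split()[-1])

# Input deck template containing placeholders such as {thickness} and {modulus}.
template = open("model_template.dat").read()
responses = []

for sample in range(100):  # jobs of about one hundred samples were typically enough
    # Draw random values from assumed PDFs (here normal, ~5% coefficient of variation).
    values = {"thickness": random.gauss(2.0, 0.1),
              "modulus": random.gauss(210e3, 10.5e3)}
    deck = template.format(**values)
    with open(f"run_{sample}.dat", "w") as f:
        f.write(deck)
    subprocess.run(["my_solver", f"run_{sample}.dat"], check=True)
    responses.append(extract_response(f"run_{sample}.out"))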

Today, with the new self-randomization feature available in MD Nastran r2 and co-developed by MSC Software and Ontonix, the situation changes dramatically. The term "self-randomization" implies that, contrary to conventional practice in defining a stochastic simulation, the solver takes over from the user the burden of "plugging in" random values for selected fields in the input deck. But why introduce uncertainty directly from inside the solver? There are three good reasons:


- FE models must be realistic if CAE is to flourish. The inclusion of uncertainty dramatically boosts the realism of computer models, and such models deliver a wealth of responses which deterministic models cannot. However, it is necessary to randomize all the possible variables in an FE model, not just a reduced set as has been done in the past. Complete randomization can only be performed efficiently and reliably by the solver itself.

- Uncertainty stems from physics. Contemporary deterministic solvers account for only a part of the physics, typically in the form of the equations F = K x and F = m a, where the symbols have their generally accepted meaning. Uncertainty accounts for a huge chunk of the physics, which is neglected if these equations contain no elements of uncertainty.

- The availability of powerful, low-cost computing makes it now possible to move away from surrogate-based approaches, such as the Response Surface Method. The further growth of CAE will not stem from building ever finer meshes and then funnelling them through a surrogate.

In order to understand better the rationale behind self-randomization, it is important to appreciate the difference between the conventional approaches to uncertainty – based on the Response Surface Method (RSM) – and direct full FE model-based Monte Carlo Simulation, as advocated by Ontonix.

The RSM was introduced in the 1980s and follows the scheme outlined below:

1. Call the solver using an a-priori defined sampling scheme (typically defined via Design Of Experiments, DOE, or other methods).

2. Build a response surface using the DOE tables and the solver results at the DOE points.

3. Use the generated response surface, typically a multi-dimensional polynomial, to replace the solver. The advantage of doing this is that CPU consumption for the RSM is negligible compared to typical FEM solver execution times.

4. Execute a Monte Carlo Simulation, replacing the solver with the response surface and introducing uncertainties (PDFs) into the coefficients and/or variables of the response surface (a toy sketch of this workflow is given below).
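To make the scheme concrete, here is a toy sketch of the four steps under simplifying assumptions that are not part of the article: the "solver" is a cheap stand-in function of two variables, the DOE is a small full factorial, the response surface is a quadratic polynomial fitted with NumPy, and the input PDFs are assumed normal.

import itertools
import numpy as np

def solver(x1, x2):
    # Stand-in for an expensive FE analysis returning one scalar response.
    return 3.0 * x1 ** 2 + 2.0 * x1 * x2 - x2 + 5.0

# 1. Call the "solver" on an a-priori DOE (here a 3 x 3 full factorial).
levels = [-1.0, 0.0, 1.0]
doe = np.array(list(itertools.product(levels, levels)))
y = np.array([solver(x1, x2) for x1, x2 in doe])

# 2. Build the response surface: a quadratic polynomial in two variables.
def basis(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

A = np.array([basis(x1, x2) for x1, x2 in doe])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3. and 4. Replace the solver with the surface and run a cheap Monte Carlo
# simulation by sampling the (assumed) input PDFs.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.3, size=(100_000, 2))
mc = np.array([basis(x1, x2) for x1, x2 in samples]) @ coeffs
print("surrogate MC mean / std:", mc.mean(), mc.std())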

The advantages of the RSM are:

- Very fast: it executes in fractions of a second, compared to hours for FE solvers.

- It may be used for optimisation with numerous algorithms.

- It may be used for Monte Carlo Simulation with a very large number of samples.

The disadvantages of the RSM are:

- The RSM cannot show discontinuities (these cannot be captured by the DOE if they are not known a priori). Direct solver-based Monte Carlo Simulations conducted over the past decade show that such discontinuities are quite frequent and account for numerous important phenomena. Their understanding is key for engineers who pursue robust designs.

- The RSM typically introduces a 5-10% approximation error the moment it is used as a surrogate for the solver. One may ask, then, what the point is of building very refined FE models if the added accuracy is ultimately wiped away by the RSM.

- The RSM cannot deliver outliers. These too, like discontinuities, are paramount to a better understanding of a given system, as well as to robust design.

- The RSM is a poor substitute for an FEM solver in a general sense. Monte Carlo Simulations based on full FEM models show that the density in the resulting multi-dimensional sets of points, which represent the solution, exhibits local fluctuations. Information on these density fluctuations is lost when the data is projected onto a given response surface.

- The computational cost of the RSM depends on the number of variables. This means that in order to reduce this cost one is forced to cut variables, and doing so a priori, i.e. without knowing which variables are truly important, is quite dangerous.

- The philosophy of the RSM induces a dangerous form of thinking, namely that it is possible to simply "stack" or superimpose processes in an almost "linear" fashion: first the DOE points are defined, then the RSM is built, then uncertainty is introduced into the RS coefficients, then an MCS is run, and so on. The order of operations may not, in general, be changed freely. It is not the same to introduce identical PDFs into the RS or into the solver directly; the results are in most cases totally different (a toy illustration follows below).
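The first and last points can be illustrated with a toy example, not taken from the article: a stand-in response with a hidden discontinuity is propagated once directly and once through a quadratic response surface fitted on a small DOE, using the same input PDF in both cases.

import numpy as np

def solver(x):
    # Stand-in response with a hidden bifurcation at x = 0.5.
    return np.where(x < 0.5, 1.0 + 0.2 * x, 3.0 + 0.2 * x)

rng = np.random.default_rng(1)
x_mc = rng.normal(0.45, 0.1, 50_000)  # the same PDF is used in both runs

# Direct "solver"-based Monte Carlo Simulation.
direct = solver(x_mc)

# RSM route: 5-point DOE, quadratic fit, Monte Carlo on the surrogate.
x_doe = np.linspace(0.0, 1.0, 5)
coeffs = np.polyfit(x_doe, solver(x_doe), 2)
surrogate = np.polyval(coeffs, x_mc)

print("direct    mean / std:", direct.mean(), direct.std())
print("surrogate mean / std:", surrogate.mean(), surrogate.std())
# The direct run produces two clusters (a bimodal histogram) and outliers;
# the surrogate is smooth and unimodal, hiding the discontinuity.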

All FE solvers are based on numerous assumptions and hypotheses. An FE model which correctly reproduces 90-95% of a given structural problem is regarded as an excellent model. In most cases, however, the situation is more embarrassing, and the "level of trust" of an FE model is very rarely quantified. Moreover, in many cases huge computing power is used to "solve accurately" such approximate problems. As if that were not enough, the RSM adds further uncertainty, on the order of 5-10% in the best of cases. Furthermore, it is not uncommon to see the RSM employed in Monte Carlo Simulations in which hundreds of thousands of samples are executed, precisely because the RSM is computationally so cheap. The paradox goes unnoticed in the majority of cases: high precision is sought with a numerical model which, in the best of cases, "misses" 10-15% of the physics. What alternative is available? Direct solver-based Monte Carlo Simulation. With such an approach, the credibility of the overall result depends on the quality of the FE model alone.

Invoking self-randomization in MD Nastran is very simple. All the user needs to do is introduce the following statement into the Case Control Deck:

STOCHASTICS = ALL

This statement "sprinkles" default uncertainties on all P, C, M cards, as well as SPCD and load cards, thereby accomplishing a full randomization of the input deck. In many cases this may mean tens of thousands of stochastic entries. In practical terms this means that the same input deck, executed twice, will yield different results. Of course the process may be repeated automatically, using one of the numerous multi-run environments available on the market, until the results are statistically significant.
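As a rough sketch of such an automated multi-run loop, the following assumes (and these are assumptions, not documented behaviour) that the solver is launched from the command line as "nastran", that the deck model.bdf already contains STOCHASTICS = ALL, and that a read_response() parser for the .f06 output is written for the response of interest; in practice one of the commercial multi-run environments would take care of these details.

import shutil
import subprocess

def read_response(f06_file):
    # Hypothetical parser: extract one scalar response from the .f06 output.
    # The implementation depends on which result is being monitored.
    ...

responses = []
for run in range(100):  # repeat until the results are statistically significant
    shutil.copy("model.bdf", f"model_run{run}.bdf")
    subprocess.run(["nastran", f"model_run{run}.bdf"], check=True)
    responses.append(read_response(f"model_run{run}.f06"))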

The two plots at the top refer to the same problem, executed with MD Nastran performing all the randomization (left) and with MSC Robust Design (right) for verification purposes. In the latter case, the user had to define all the random variables manually (i.e. the STOCHASTICS = ALL option was switched off).
