
Newsletter - Simulation Based Engineering & Sciences

Trapped-vortex approach for syngas combustion in gas turbines Extreme Ensemble under Uncertainty - Optimization of a Formula 1 tire brake intake

Fluid Refrigerant Leak in a Cabin Compartment: Risk Assessment by CFD Approach Research of the Maximum Energy Efficiency for a Three Phase Induction Motor

Year 9 - n°2 - Spring 2012

International CAE Conference 21-22 October 2012 Reducing Fuel Consumption, Noxious Emissions and Radiated Noise by Selection of the Optimal Control Strategy of a Diesel Engine




Flash
While the warm summer waves surround us, the teams of EnginSoft and the company's global Network are preparing one of the major annual CAE summits: the International CAE Conference, which will take place from 22nd-23rd October 2012 in Pacengo, Lazise (Verona) - Italy. With this Newsletter, we extend a warm invitation to our readers to join us for a most productive and creative get-together in Northern Italy! The Conference, which has become a brand in itself over the past 28 years, will present the state of the art of CAE in diverse industries. Under a networking "umbrella", the program will feature the latest innovations in the use of simulation and engineering analysis, as well as the various initiatives of EnginSoft and its international partners. Speakers from leading companies in product development will demonstrate how the use of CAE enhances efficiency and delivers ROI. A small foretaste is presented in this Newsletter, which, as you may note, has a slightly different appearance, nine years after its first edition. To reflect our international network, our role and culture, EnginSoft has also

renewed the layout of its website and brochures. We hope that you like our new look, and we welcome your feedback! This issue reports on ENEA's trapped vortex approach for syngas combustion. We look at how the optimal control strategy of a diesel engine helps to reduce fuel consumption, emissions and radiated noise. Fabrizio Mattiello of Centro Ricerche Fiat outlines the Center's risk assessment with CFD. We hear how optimization can perfect the design of a high-performance engine's exhaust pipes and a steel case hardening process. This issue also presents Rossi Group's optimized motor, which fulfills the criteria for the IE3 premium efficiency class. Stanford University, the University of Naples and EnginSoft Americas illustrate their work on a three-dimensional geometry using an extreme ensemble. We learn about high pressure die casting, one of EnginSoft's key expertise areas, and the importance of structural CAE for the development of today's appliances. ANSYS and EnginSoft maintain excellent collaborations with the food & beverage industries. For this issue, we interviewed Massimo Nascimbeni from Sidel, one of the world's leading groups for packaging solutions for liquid foods. Our software news highlights ANSYS 14 and the ANSYS product family, HP and NVIDIA Maximus, and their benefits for ANSYS Mechanical. We introduce EUCOORD, a web-based collaborative solution for FP7 EU Project Management, and the new Research Projects that EnginSoft supports in 2012. We are delighted to announce the opening of EnginSoft Turin, the launch of EnginSoft's Project Management Office, our membership of AMAFOND and our cooperation with Flow Design Bureau (FDB) in Norway. The Japan Association for Nonlinear CAE informs us about a new framework for CAE researchers and engineers. To hear more and to discuss with the experts of CAE and Simulation, please join us on 22nd and 23rd October - we look forward to welcoming you to Lazise!

Stefano Odorizzi Editor in chief

Ing. Stefano Odorizzi EnginSoft CEO and President




Sommario - Contents

CASE STUDIES
6  Trapped-vortex approach for syngas combustion in gas turbines
10  Extreme Ensemble under Uncertainty
17  High Pressure Die Casting: Contradictions and Challenges
22  Reducing Fuel Consumption, Noxious Emissions and Radiated Noise by Selection of the Optimal Control Strategy of a Diesel Engine
26  Fluid Refrigerant Leak in a Cabin Compartment: Risk Assessment by CFD Approach
32  Numerical Optimization of the Exhaust Flow of a High-Performance Engine
35  Multi-Objective optimization of steel case hardening
38  Research of the Maximum Energy Efficiency for a Three Phase Induction Motor by means of Slots Geometrical Optimization

CAE WORLD

42  CAE structural applications in the household appliances sector

INTERVIEW

44  EnginSoft interviews Massimo Nascimbeni from Sidel

HARDWARE UPDATE

46  Productivity Benefits of HP Workstations with NVIDIA® Maximus™ Technology

SOFTWARE UPDATE

48  A look at the main new features concerning the integration of the low-frequency ANSOFT products into the ANSYS Workbench 14 platform

RESEARCH & TECHNOLOGY TRANSFER

52  Updates on the EASIT2 project: educational base and competence framework for the analysis and simulation industry now online
54  EnginSoft in £3m EU Partnership to Optimise Remote Laser Welding
56  DIRECTION: Demonstration of Very Low Energy New Buildings

ENGINSOFT NETWORK

58  EnginSoft and Flow Design Bureau (FDB) launch collaboration
59  Explicit dynamics: new EnginSoft competence center in Turin
60  Launch of the EnginSoft Project Management Office

The EnginSoft Newsletter editions contain references to the following products which are trademarks or registered trademarks of their respective owners:

Forge and Coldform are trademarks of Transvalor S.A. (www.transvalor.com)

ANSYS, ANSYS Workbench, AUTODYN, CFX, FLUENT and any and all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries in the United States or other countries. [ICEM CFD is a trademark used by ANSYS, Inc. under license]. (www.ansys.com)

LS-DYNA is a trademark of Livermore Software Technology Corporation. (www.lstc.com)

modeFRONTIER is a trademark of ESTECO srl (www.esteco.com)

Grapheur is a product of Reactive Search SrL, a partner of EnginSoft (www.grapheur.com)

Flowmaster is a registered trademark of Mentor Graphics in the USA. (www.flowmaster.com)

SCULPTOR is a trademark of Optimal Solutions Software, LLC (www.optimalsolutions.us)

MAGMASOFT is a trademark of MAGMA GmbH. (www.magmasoft.de)


For more information, please contact the Editorial Team



JAPAN COLUMN

61  The Japan Association for Nonlinear CAE: a New Framework for CAE Researchers and Engineers

EVENTS

64  EnginSoft becomes a member of AMAFOND, the Italian national association of suppliers of machines and materials for foundries
64  METEF Foundeq 2012: EnginSoft satisfied with its participation in the exhibition

66  Constructive Approximation and Applications
66  Graz Symposium Virtual Vehicle
66  International modeFRONTIER Users' Meeting 2012
67  Event Calendar


Newsletter EnginSoft Year 9 n°2 - Summer 2012
To receive a free copy of the next EnginSoft Newsletters, please contact our Marketing office at: newsletter@enginsoft.it
All pictures are protected by copyright. Any reproduction of these pictures in any media and by any means is forbidden unless written authorization by EnginSoft has been obtained beforehand. ©Copyright EnginSoft Newsletter.

Advertisement For advertising opportunities, please contact our Marketing office at: newsletter@enginsoft.it

EnginSoft S.p.A.
24126 BERGAMO c/o Parco Scientifico Tecnologico Kilometro Rosso - Edificio A1, Via Stezzano 87 - Tel. +39 035 368711 • Fax +39 0461 979215
50127 FIRENZE Via Panciatichi, 40 - Tel. +39 055 4376113 • Fax +39 0461 979216
35129 PADOVA Via Giambellino, 7 - Tel. +39 049 7705311 • Fax +39 0461 979217
72023 MESAGNE (BRINDISI) Via A. Murri, 2 - Z.I. - Tel. +39 0831 730194 • Fax +39 0461 979224
38123 TRENTO fraz. Mattarello - Via della Stazione, 27 - Tel. +39 0461 915391 • Fax +39 0461 979201
10133 TORINO Corso Moncalieri, 223 - Tel. +39 011 3473987 • Fax +39 011 3473987
www.enginsoft.it - www.enginsoft.com - e-mail: info@enginsoft.it

COMPANY INTERESTS

PAGE 26 FLUID REFRIGERANT LEAK IN A CABIN COMPARTMENT: RISK ASSESSMENT BY CFD APPROACH

EnginSoft GmbH - Germany
EnginSoft UK - United Kingdom
EnginSoft France - France
EnginSoft Nordic - Sweden
Aperio Tecnologia en Ingenieria - Spain
www.enginsoft.com
Cascade Technologies www.cascadetechnologies.com
Reactive Search www.reactive-search.com
SimNumerica www.simnumerica.it
M3E Mathematical Methods and Models for Engineering www.m3eweb.it

ASSOCIATION INTERESTS
NAFEMS International www.nafems.it • www.nafems.org
TechNet Alliance www.technet-alliance.com

RESPONSIBLE DIRECTOR
Stefano Odorizzi - newsletter@enginsoft.it

PAGE 42 CAE STRUCTURAL APPLICATIONS IN THE INDUSTRIAL APPLIANCES SECTOR

PRINTING
Grafiche Dal Piaz - Trento

The EnginSoft NEWSLETTER is a quarterly magazine published by EnginSoft SpA


Authorization of the Court of Trento n° 1353 RS, dated 2/4/2008

PAGE 10 EXTREME ENSEMBLE UNDER UNCERTAINTY

ESTECO srl www.esteco.com
CONSORZIO TCN www.consorziotcn.it • www.improve.it



Trapped-vortex approach for syngas combustion in gas turbines
The so-called trapped vortex technology can potentially offer several advantages for gas turbine burners. In the systems experimented with so far, this technology is mainly limited to the pilot part of the whole burner. The aim of the work we present here is to design a combustion chamber completely based on the trapped vortex principle, investigating the possibility of establishing a diluted combustion regime when syngas is used as fuel. This article presents some results obtained by a 3D CFD analysis, using both RANS and LES approaches.
Introduction
A key issue in combustion research is the improvement of combustion efficiency to reduce fossil fuel consumption and carbon dioxide emissions. Researchers are involved in the development of combustion technologies able to deliver energy savings with low pollutant emissions. There are two main differences between syngas and natural gas combustion. For the same power, the fuel mass flow must be 4-8 times higher than for natural gas, due to the lower calorific value. Premixed combustion of natural gas and air is one of the most commonly used methods for reducing NOx emissions, by maintaining a sufficiently low flame temperature; this technique poses some problems with syngas because of the significant presence of hydrogen and the consequent danger of flashback in the fuel injection systems. In the case of a non-premixed diffusion flame, diluents such as nitrogen, carbon dioxide and water, or other techniques, such as MILD combustion, can be employed to lower flame temperatures and hence NOx. The systems developed so far use combustion in cavities as pilot flames for premixed high speed flows. The goal here is to design a device operating entirely on the principle of trapped vortices which, thanks to its intrinsic ability to improve the mixing of hot combustion gases and fresh


mixture, provides a prerequisite for diluted combustion and, ultimately, for a MILD combustion regime. The trapped vortex technology offers several advantages for a gas turbine burner:
1. It is possible to burn a variety of fuels with medium and low calorific values.
2. NOx emissions reach extremely low levels without dilution or post-combustion treatments.
3. It provides extended flammability limits and improves flame stability.
MILD combustion is one of the most promising techniques proposed to control pollutant emissions from combustion plants. It is characterized by a high preheating of the combustion air and a massive recycling of burned gases before reaction. These factors lead to high combustion efficiency and good control of thermal peaks and hot spots, lowering thermal NOx emissions. Two aspects are crucial for MILD combustion. First of all, the reactants have to be preheated above the self-ignition temperature. Secondly, the reaction region has to be entrained by a sufficient amount of combustion products. The first requirement ensures high thermal efficiency, whereas the latter allows flame dilution, reducing the final temperature well below the adiabatic flame temperature. In this way, reactions take place in a larger portion of the domain, in the absence of ignition and extinction phenomena, due to the small temperature difference between burnt and unburnt gases; as a result, a flame front is no longer identifiable, which is why MILD combustion is often denoted as flameless combustion. Another advantage is that the temperature homogeneity reduces material deterioration. Because of the limited temperature, NOx emissions are greatly reduced and soot formation is also suppressed, thanks to the lean conditions in the combustion chamber,



due to the large dilution level and the large CO2 concentration. The introduction of MILD technology in gas turbines is of great interest because it is potentially able to answer two main requirements:
1. A very low level of emissions.
2. An intrinsic thermo-acoustic stability (absence of humming).


CFD models
The simulations, performed with the ANSYS FLUENT code, have been carried out according to steady RANS and LES approaches. The models adopted for chemical reactions and radiation are the EDC, in conjunction with a reduced CO/H2/O2 mechanism made up of 32 reactions and 9 species, and the P1 model, respectively. NOx have been calculated

Air tangential mass flow [kg/s]: 0.01238
Air tangential flow velocity [m/s]: 75
Air vertical mass flow [kg/s]: 0.00592
Air vertical flow velocity [m/s]: 62
Fuel mass flow [kg/s]: 0.00462
Fuel mass flow velocity [m/s]: 61
Air/Fuel global: 3.96
Air/Fuel primary: 1.28

Table 1 - Boundary conditions for the reference case.

Prototype description
The TVC project concerns gas turbines which use annular combustion chambers. The prototype to be realized, for simplicity of design and measurement, consists of a linearized sector of the annular chamber with a square cross-section of 190x190 mm (fig. 1). The power density is about 15 MW/m3 bar. The most obvious technique to create a vortex in a combustion chamber volume is to introduce one or more tangential flows. In this case two flows promote the formation of the vortex, while other streams of air and fuel, distributed among the tangential ones, feed the "vortex heart". The air flows placed in the middle provide the primary oxidant for the combustion reaction, while the tangential ones provide the air excess and cool the walls and the combustion products, in analogy to what happens in traditional combustion chambers, in which this process occurs in the axial direction. The primary and the global equivalence ratio ((Fuel/Air)/(Fuel/Air)stoich) were

Fig. 1 - Burner geometry.

equal to 1.2 and 0.4, respectively. Given the characteristics of the available test rig, the prototype will be tested under atmospheric pressure conditions, but the combustion air will be at a temperature of 700 K, corresponding to a compressor pressure of about 20 bar, in order to better approximate the real operating conditions. The syngas will have the following composition: 19% H2, 31% CO, 50% N2, with an LHV of 6 MJ/kg. The reference case boundary conditions are reported in table 1.
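As a back-of-the-envelope consistency check (a sketch of my own, not part of the original study), the air/fuel ratios in Table 1 and the equivalence ratios quoted above can be cross-checked as follows.

```python
# Check between the Table 1 mass flows and the equivalence ratios quoted in the text,
# using phi = (Fuel/Air) / (Fuel/Air)_stoich = (Air/Fuel)_stoich / (Air/Fuel).

m_air_tangential = 0.01238      # kg/s
m_air_vertical = 0.00592        # kg/s (primary air)
m_fuel = 0.00462                # kg/s

af_global = (m_air_tangential + m_air_vertical) / m_fuel    # ~3.96, as in Table 1
af_primary = m_air_vertical / m_fuel                        # ~1.28, as in Table 1

af_stoich = 0.4 * af_global            # implied by the global equivalence ratio of 0.4
phi_primary = af_stoich / af_primary   # ~1.24, close to the quoted primary value of 1.2

print(f"A/F global = {af_global:.2f}, A/F primary = {af_primary:.2f}")
print(f"implied stoichiometric A/F = {af_stoich:.2f}, phi primary = {phi_primary:.2f}")
```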

in post-processing. In order to save computational resources, the simulations have been conducted on only one sector (1/3) of the whole prototype shown in figure 1, imposing a periodicity condition on the side walls. A structured hexahedral grid with a total of about 2 million cells has been generated.
Results
A big effort has been made to properly modulate the flow rates, velocities, momentum and minimum size of the combustion chamber. It is clear that the placement of the inlets plays a key role in the formation of an energetic vortex able to properly dilute the reactants and create a residence time long enough to favor complete combustion. The resulting configuration establishes a perfect balance between the action of the tangential flows, which tend to generate the vortex, and the action of the vertical flows, which tend to destroy it (fig. 2). In this sense, it is worth pointing out that it is especially the tangential flow further from the outlet which is the most effective: the negative effects on vortex location and size resulting from a reduction of its strength, compared with the other inlets, have been evident. Even the distance between the two vertical hole rows has been properly adjusted. In fact, the upper tangential stream flows between the two vertical rows on the opposite side and, if the available space is insufficient, it does not remain attached to the wall and the vortex is destroyed. In principle, the significant presence of very reactive hydrogen can produce elevated temperatures and fast reactions, especially near the fuel inlets. For this reason, the inlet velocities are sufficiently high to generate a fast rotating vortex and hence rapid mixing. A further increase in fuel injection velocity has a negative influence on vortex




shape and position, without slowing the reaction or reducing temperature peaks. The aim of the LES simulation has been to analyze the unsteadiness of the system. The vortex appears very stable in the cavity, i.e. trapped: no vortex shedding has been noted. Compared to axial combustors, the minimum space required for the vortex results in an increase in volume and a reduction in power density. On the other hand, the carbon monoxide content, with its slow chemical kinetics compared to natural gas, requires a longer residence time and a bigger volume. It can be observed that the high temperature zones (fig. 2) are concentrated in the vortex heart. While the hydrogen content is rapidly consumed, the carbon monoxide lasts longer and accumulates, given that the amount of primary air is less than the stoichiometric value. The mean LES and the steady RANS fields have provided very similar results. It is also interesting to evaluate the residence time inside the chamber: in the central part of the chamber, the residence time ranges from 0.02 to 0.04 s.

Fig. 2 - Temperature (K) field on the central section plane: RANS (left) and instantaneous LES (right).

The establishment of a MILD combustion regime depends especially on a sufficient internal exhaust gas recirculation which, associated with a good degree of mixing with the reactants, represents a necessary condition for MILD combustion. In order to quantify the degree of mixing, the following variable has been mapped:

MIX = |H2 - H2,mean| + |H2O - H2O,mean| + |CO2 - CO2,mean| + |O2 - O2,mean| + |CO - CO,mean| + |N2 - N2,mean|

If all the species involved were perfectly mixed, MIX would be zero everywhere. In practice, the closer MIX is to zero, the better reactants and products are mixed. If the zones immediately downstream of the inlets are neglected, MIX assumes very low values in the chamber. This supports the fact that the vortex is able to produce the expected results. The recirculation factor, i.e. the ratio between the recirculated exhaust and the fresh mixture introduced, is about 0.87, while the exhaust composition (expressed as fractions) is 0.19 CO2, 0.05 H2O, 0.005 CO, 0.039 O2 and 0.713 N2.

Fig. 3 - OH mass fraction on different planes.
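As a minimal illustration of how such a mixedness indicator can be evaluated in post-processing, the sketch below computes MIX cell by cell from exported species fields; the array names and the random input data are purely illustrative and not taken from the ENEA model.

```python
import numpy as np

# Minimal post-processing sketch (not the authors' code) of the mixedness indicator
# MIX defined above, evaluated cell by cell from species fields.

rng = np.random.default_rng(0)
species = ["H2", "H2O", "CO2", "O2", "CO", "N2"]
n_cells = 10_000

fields = {name: rng.random(n_cells) for name in species}   # hypothetical species fields

mix = np.zeros(n_cells)
for name in species:
    y = fields[name]
    mix += np.abs(y - y.mean())      # |Y_i - Y_i,mean|, summed over species

print(f"mean MIX = {mix.mean():.3f}, max MIX = {mix.max():.3f}")
```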

Fig. 4 - Maximum temperature vs compressor ratio.



In order to determine where reactions are concentrated, it is useful to analyze the distribution of radical species, such as OH. The fact that the radicals are not concentrated in a thin flame front, but are well distributed in the volume (fig. 3), is evidence of a volumetric reaction regime. A sensitivity analysis has been conducted by varying the boundary conditions around the reference values previously reported in table 1. The quality of the different cases was judged in terms of pollutant emission indices (g pollutant/kg fuel), in particular for CO and NOx. CO is an indicator of incomplete combustion: its presence in the exhaust is favored by low temperature and lack of oxygen. NOx, on the contrary, is favored by high temperature and oxygen abundance. A 30% increment of the tangential flow velocity causes a faster rotation of the vortex, but the recirculation ratio increases only to 0.92. A 30% decrement of the equivalence ratio (leaner combustion) causes a reduction in temperature and a subsequent increase in unburnt species, especially CO. NOx emissions are in general negligible for all the cases mentioned.

Fig. 5 - EICO (CO emission index) vs compressor ratio.
Fig. 6 - EINOx (NOx emission index) vs compressor ratio.
Fig. 7 - EICO and EINOx vs equivalence ratio, operating pressure 20 bar.

If the operating pressure is increased from 1 bar to 10-20 bar at constant geometry and inlet velocities, the mass flow rates, and hence the burner power, increase by about 10-20 times, even if the specific power density (MW/m3 bar) remains constant. The temperature and pollutant trends for these cases are reported in figures 4-6. The maximum and average temperatures in the chamber increase almost linearly, while EICO increases and EINOx decreases as the pressure increases. For each operating pressure, it is then possible to identify an optimal equivalence ratio, at the intersection of the two curves in figure 7, where the major pollutants are kept down at the same time.
Scaling
One of the aims of this work has been to verify whether, when varying the prototype dimensions, its behavior remains unchanged in terms of fluid dynamics and chemistry. The scaling method based on constant velocities has been chosen for this purpose, because all the physics of the system rests on a fluid dynamic equilibrium, as discussed above. If all sizes are halved, the volume is reduced by a factor of 0.5x0.5x0.5=0.125, while the areas are reduced by a factor of 0.5x0.5=0.25. The mass flow rates are then scaled by a factor of 0.25, and therefore the power density (power/volume) increases by a factor of 0.25/0.125=2. The simulations show that the behavior of the burner remains unchanged in terms of velocity, temperature, species and the other fields. It can be concluded that, if the overall size of the burner is reduced, the power density increases in proportion to Power^(-0.5).
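The constant-velocity scaling argument can be condensed into a few lines; the sketch below simply restates the factors quoted above for a generic length ratio (it is a restatement of the reasoning, not code from the study).

```python
# With inlet velocities fixed, the mass flow (and hence the power) scales with the
# inlet areas (L^2), while the chamber volume scales with L^3.

def scale_factors(length_ratio):
    area = length_ratio ** 2            # inlet areas -> mass flow -> power
    volume = length_ratio ** 3          # chamber volume
    return {"power": area, "volume": volume, "power_density": area / volume}

f = scale_factors(0.5)                  # halving all dimensions, as in the article
print(f)                                # {'power': 0.25, 'volume': 0.125, 'power_density': 2.0}

# The power density factor equals the power factor raised to -0.5:
assert abs(f["power_density"] - f["power"] ** -0.5) < 1e-12
```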

Conclusions
A gas turbine combustion chamber prototype has been designed based on the trapped vortex technique. A preliminary optimization of the geometry has led to a prototype in which the actions of the different flows are in perfect balance, and a vortex filling the entire volume has been established. The large exhaust recirculation and the good mixing determine a diluted combustion regime, which helps to keep the flame temperature down. A sensitivity analysis has made it possible to determine the optimal operating conditions for which the simultaneous lowering of the major pollutant species is achieved. In addition, a scaling study has been conducted, which demonstrated that the prototype keeps its characteristics unaltered if its dimensions are halved or doubled.

A. Di Nardo, G. Calchetti, C. Mongiello
ENEA - Italian National Agency for New Technologies, Energy and Sustainable Economic Development



Extreme Ensemble under Uncertainty
The development of robust design strategies coupled with detailed simulation models requires the introduction of advanced algorithms and computing resource management tools. On the algorithmic side, we explore the use of simplex-based stochastic collocation methods to characterize uncertainties, and multi-objective genetic algorithms to optimize a large-scale, three-dimensional geometry using a very large number (a so-called "extreme ensemble") of CFD simulations on HPC clusters. In this article, we concern ourselves with the optimization under uncertainty of a Formula 1 tire brake intake to maximize cooling efficiency and to minimize aerodynamic resistance. The uncertainties are introduced in terms of tire deformation and free stream conditions. The simulation environment Leland has been developed to dynamically schedule, monitor and steer the calculation ensemble and to extract runtime information as well as simulation results and statistics. Leland is equipped with an auto-tuning strategy for optimal load balancing and fault tolerance checks to avoid failures in the ensemble.
Introduction
In the last few years, clusters with 10,000 CPUs have become available, and it is now feasible to design and optimize complex engineering systems using computationally intensive simulations. This development highlights the need to create resource managers that deliver cost-effective utilization with fault tolerance. The BlueGene/L cluster with 65,536 nodes was designed to have less than one failure every ten days. In fact, this cluster, and others like it, experience an average of one processor failure every hour. In light of this, it is necessary to study, develop, and continually improve strategies for the efficient completion of large simulations. Theoretical work published in the literature suggests that advanced algorithms might be available, although they have only been demonstrated using test functions on a small number of compute nodes.


The design process involves running an extreme number of large computations, or "extreme ensemble" (in the range of thousands), in order to create a robust solution that will remain optimal under conditions that cannot be controlled ("uncertainties"). We call this process optimization under uncertainty. The ensemble is a list of runs generated by the optimization and uncertainty analysis algorithms that is dynamic in nature and not deterministic. This means that the number of additional simulations depends on the results of the prior converged simulations. In this article, we explore the computational design of a Formula 1 tire and brake assembly using large-scale, three-dimensional Reynolds-Averaged Navier-Stokes simulations on a high performance computing cluster. The purpose of designing the brake duct is to increase the amount of air captured by the duct while minimizing the total drag of the tire. This multi-objective optimization problem is tackled using a genetic algorithm which produces a Pareto front of best solutions. An uncertainty analysis of 4 specific points on the Pareto front (minimum drag, maximum cooling, best operating point or trade-off, and baseline F1 tire geometry) is summarized in the results section of this paper. Some of our future work will include a study on how uncertainties can be invasively incorporated in the optimization procedure, producing a probabilistic Pareto front rather than analyzing the sensitivity of the deterministic Pareto due to uncertainties. For such a study, approximately 400 simulations have to be performed per optimization cycle (i.e. generation). When the results of these 400 simulations are analyzed, an additional list of 400 simulations, each with a unique range of input parameters, will be created for the next generation of the optimization process. The values of these input parameters are not known a priori. The optimization procedure needs to account for uncertainties arising from variability in flow conditions as well



Fig. 1 - Leland flowchart

as from variability in the flexible tire geometry. This complex baseline geometry consists of 30 million mesh cells. In order to generate an optimal design under uncertainty, the mesh is deformed locally, creating 5000 unique simulations to perform. Each simulation (or realization) will be run on our in-house cluster using 2400 cores; the full design process should take approximately 2 weeks to complete. The second part of this article is dedicated to the development of a software platform able to reduce the total time needed to carry out an engineering design process such as the one described above. Leland, the simulation environment we have developed, allows us to schedule the resources, to monitor the calculation ensemble and to extract runtime information as well as simulation results and statistics on the fly. Leland is equipped with an auto-tuning strategy for selecting an optimal processor count. Moreover, an implemented fault tolerance strategy ensures that a simulation or processor stall is detected and does not impact the overall ensemble finish time. The results of this study document the actual computational time savings achieved through the efficient use of resources with Leland, as opposed to submitting individual jobs on the cluster, one at a time, using traditional queue managers (e.g. Torque, SLURM, etc.).
Robust Design Algorithm
The impact of uncertainties in the robust design process is characterized using the Simplex Stochastic Collocation (SSC) algorithm, which combines the effectiveness of random sampling in higher dimensions (multiple uncertainties) with the accuracy of polynomial interpolation. This approach is characterized by a superlinear convergence behavior which outperforms classical Monte Carlo sampling while retaining its robustness. In the SSC method, a discretization of the space spanned by the uncertain parameters is employed, and the simplex

elements obtained from a Delaunay triangulation of the sampling points are constructed. The robustness of the approximation is guaranteed by using a limiter approach for the local polynomial degree based on the extension of the Local Extremum Diminishing (LED) concept to probability space. The discretization is adaptively refined by calculating a refinement measure based on a local error estimate in each of the simplex elements. A new sampling point is then added randomly in the simplex with the highest measure and the Delaunay triangulation is updated. The implementation of advanced algorithms to improve the scalability of Delaunay triangulation in higher dimensions, in order to circumvent the curse of dimensionality, has not been fully investigated as part of this study. There are proofs in the literature that show that Delaunay triangulation can achieve linear scaling with higher dimensions. In this work, we analyze a nontrivial multi-objective problem within which it is not possible to find a unique solution that simultaneously optimizes each objective: when attempting to improve an objective further, other objectives suffer as a result. A tentative solution is called non-dominated, Pareto optimal, or Pareto efficient if an improvement in one objective requires a degradation of another. We use the NSGA-II algorithm to obtain the non-dominated solutions, and we then analyze the more interesting solutions on the deterministic Pareto set in the presence of uncertainty. Our goal here is to prove the importance of considering the variability of several input conditions in the design process. For all these solutions, the SSC is used to obtain a reconstruction of the statistical moments of the objective functions, refining the simplexes until an accuracy threshold is reached.
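The refinement loop at the heart of a simplex-collocation approach can be sketched in a few lines. The sketch below is only a schematic illustration under simplifying assumptions: the toy response function and the "response spread times simplex volume" refinement measure are my own stand-ins for the LED-based local error estimate of the actual SSC algorithm.

```python
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

# Triangulate the samples in the uncertain-parameter space, rank the simplices by
# an error-like measure, then add a random point inside the worst one and
# re-triangulate. Not the authors' implementation.

rng = np.random.default_rng(1)

def response(x):
    # Hypothetical output quantity as a function of two uncertain inputs.
    return np.sin(3.0 * x[..., 0]) + x[..., 1] ** 2

def simplex_volume(verts):
    d = verts.shape[0] - 1
    return abs(np.linalg.det(verts[1:] - verts[0])) / factorial(d)

pts = rng.random((8, 2))          # initial samples in a 2D unit hypercube
vals = response(pts)

for _ in range(20):               # refinement iterations
    tri = Delaunay(pts)
    measures = [np.ptp(vals[s]) * simplex_volume(pts[s]) for s in tri.simplices]
    worst = tri.simplices[int(np.argmax(measures))]
    w = rng.dirichlet(np.ones(len(worst)))     # random barycentric coordinates
    new_pt = w @ pts[worst]                    # random point inside the worst simplex
    pts = np.vstack([pts, new_pt])
    vals = np.append(vals, response(new_pt))

print(f"{len(pts)} samples after adaptive refinement")
```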



Dynamic Resource Manager - Leland
The structure of Leland is based on a workflow through I/O sub-systems that represent the software applications (i.e. Sculptor, Fluent, Tecplot, Matlab, etc.) involved in the process. This environment is designed to run natively on any high-performance computing (HPC) system, by integrating with the job-submission/queuing system (e.g. Torque). Moreover, it does not require continuous management: once the analysis is initiated, multiple simulations are submitted and monitored automatically. In Leland, a job is an instance of the entire multi-physics simulation, which might include grid generation, mesh morphing, flow solution and post-processing. The main objective of Leland is to set up a candidate design as a job, to manage it until it is completed and to gather the relevant results that are used to inform the optimization under uncertainty process. ROpt (robust optimum), shown in Figure 1a, is the engine behind this design environment.

Given the design and/or uncertain input variables, ROpt continuously generates new design proposals (samples) based on the evolutionary strategy and/or the analysis of the uncertainty space, until a convergence criterion is met. The Job Liaison, shown in Figure 1b, defines the characteristics of each single job and continuously monitors the progress of the simulations until completion, in order to communicate the objective evaluations back to ROpt. It is the job of this module to continuously monitor for faults, stalls, or errors, to ensure that the total runtime is not detrimentally affected by processor/memory failure. The Job Submission engine, shown in Figure 1c, ensures that the correct number of jobs is always running on the cluster. The variables (number of cores, number of jobs, etc.) from the input file that are used to initialize the runs are dynamic, which means that they can be edited on the fly and the system will respond accordingly.

Leland also has the ability to dynamically select the optimal number of processors to run per realization. This is achieved by auto-tuning. The user selects an optimal window of cores to use per realization prior to launching the full ensemble. The auto-tuning algorithm then samples the space by using a unique number of cores per realization in the ensemble. Once two or more realizations are complete, the auto-tuning algorithm can start to construct an application-specific speed-up curve (Figure 2). Speed-up is defined as the total time required to finish the simulation using 1 processor divided by the total time required to finish the simulation using p processors (Equation 2):

t_total(p) = t_serial + t_parallel / p + t_comm(p)    (1)
Speed-up(p) = t_total(1) / t_total(p)    (2)
Efficiency(p) = Speed-up(p) / p    (3)

Fig. 2 - Sample HPC simulation diagnostics: (a) total time required to complete a simulation as a function of the number of processors (left), speed-up curve (middle) and efficiency curve (right); (b) number of simulations that would be completed in a 24 hour window with 1000 available processors using exactly p processors for each simulation

The speed-up curve in Figure 2 was generated by artificially replicating an HPC simulation. The time required to complete an HPC simulation is primarily a function of three factors: i) the portion of the code that is not parallelizable (t_serial in Equation 1), ii) the portion of the code that is parallelizable (t_parallel in Equation 1) and iii) the communication time between CPUs (t_comm in Equation 1). The serial portion of the code in the example illustrated in Figure 2 is constant (5000 seconds) and not a function of the number of processors allocated to the job. The time required to complete the parallel portion of the code in the same example is 5 million seconds divided by the number of processors used. Finally, there will always be some latency between CPUs, characterized by the communication time between nodes. The linear penalization used in this example is 40 seconds per processor, but the latency slowdown could also be a more complex function related to the specific application.

Linear speed-up, also referred to as ideal speed-up, is shown as the green dotted line in the middle plot of Figure 2. An algorithm has linear speed-up if the time required to finish the simulation halves when the number of processors is doubled. It is common for fluid dynamic simulations to experience speed-down; this occurs when the total time required to finish the simulation actually rises with an increasing number of processors. Leland has the ability to recognize the point at which speed-down occurs (at about 400 processors in Figure 2) and never uses more than this number of processors. The rightmost plot in Figure 2 shows the efficiency curve (defined by Equation 3) for this artificial HPC simulation. The efficiency typically ranges between 0 and 1, estimating how well utilized the processors are compared to the effort wasted in synchronization and communication. It is clear from this plot that the highest efficiency occurs with the lowest number of processors.
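The reasoning above can be reproduced with the quoted example numbers. The following sketch assumes the simple cost model of Equations 1-3 with the linear communication penalty described in the text; it recovers a speed-down point close to the roughly 400 processors mentioned for Figure 2.

```python
import numpy as np

# Cost model with 5000 s serial part, 5,000,000 s parallel part and a 40 s per
# processor communication penalty. A restatement of the reasoning, not Leland code.

t_serial, t_parallel, t_comm_per_proc = 5_000.0, 5_000_000.0, 40.0

def t_total(p):
    # Equation 1 with a linear communication penalty, as described in the article.
    return t_serial + t_parallel / p + t_comm_per_proc * p

procs = np.arange(1, 2001)
speedup = t_total(1) / t_total(procs)      # Equation 2
efficiency = speedup / procs               # Equation 3

p_best = int(procs[np.argmin(t_total(procs))])
print(f"speed-down sets in beyond ~{p_best} processors")   # ~354, close to the ~400 of Fig. 2
print(f"efficiency at 1 processor: {efficiency[0]:.2f}")
```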



This speed-up curve will guide Leland's auto-tuning algorithm in assigning the optimal number of cores per realization (which may not be in the user's original window). Since an ensemble of this size takes more than a few weeks on a large cluster, multiple job submissions need to be made to the local queuing system. These jobs are typically limited to 24 hour runtimes (or a wall clock time of 24 hours). Thus, it is essential that the auto-tuning algorithm recognizes how many hours remain prior to the job terminating due to the wall clock limit, and tries to increase the number of cores to finish as many realizations as possible within the available time frame.

Application Description
Leland is used here to optimize the shape of an F1 tire brake duct (magenta color in Figure 3(b)), taking into account the geometrical uncertainties associated with the rotating rubber tire and uncertain inflow conditions. The objectives are to minimize the tire drag [N] while maximizing the captured mass flow [kg/s] needed to cool the brake assembly. A computational mesh consisting of 30 million elements is considered for a fully detailed 3D wheel model (Figure 4). The simulations that require geometrical modification (for optimization under uncertainty) are created using Sculptor, a commercial mesh deforming software from Optimal Solutions. The software is used to generate multiple CFD mesh model variants, while keeping CAD and grid generators out of the design process loop, thus substantially saving design time and costs. The generated models are then used to compute the air flow around the tire geometry with a parallel CFD solver (Fluent v12.1.4). It is important to closely monitor the skewness of the elements in Sculptor to ensure grid quality: if the deformation in Sculptor is too large, the CFD solver will diverge. The boundary conditions, computational setup and experimental comparison for this case are outlined in separate studies.

Fig. 3 - Front right tire of the Formula 1 race car used in this study: (a) outer view of tire; (b) inner view of tire, showing the green airfoil strut used to secure the tire to the experimental wind tunnel facility and the outer brake duct (magenta) used to cool the brake assembly

Fig. 4 - Four different views showing the Formula 1 tire mesh: (a) isometric view of the ground plane showing the contact patch; (b) streamwise cut plane showing the mesh inside the rotor passages; (c) spanwise cut plane showing the full brake assembly; (d) top view of a plane cutting through the center of the tire

Optimization Variables
A local mesh morphing software, Sculptor (v2.3.2), was used to deform the baseline Formula 1 brake duct (Figure 3). Specific control volumes were used to deform the brake duct in three dimensions, namely i) width of opening (Figure 5(a)), ii) height of opening (Figure 5(b)) and iii) protrusion length (Figure 5(c)). Each design variable was allowed to change by up to 1 cm, as depicted in Figure 5.

Uncertain Variables
Multiple uncertain variables were tested to determine their sensitivity with respect to the output quantities of interest, using a DOE (design of experiments) approach. Some of the uncertain variables were based on the inflow conditions (i.e. yaw angle, turbulent intensity, turbulent length scale) while others were based on geometric characteristics of the tire (i.e. contact patch details, tire bulge radius, camber angle). Figure 6 shows 9 geometric modifications which were performed. Each subfigure shows the minimum, baseline F1 tire geometry, and maximum deformation for each uncertain variable. From the results of a purely one-dimensional perturbation analysis, the turbulence length scale (in the range of 0 m to 2 m) results in less than a 0.1% difference in both the mass flow rate through the brake duct and the overall drag on the

(a) Brake duct width

(b) Brake duct height

(c) Brake duct length

Fig. 5 - Brake duct optimization variables



tire. Conversely, both the mass flow rate and the tire drag are very sensitive to the turbulence intensity. The mass flow rate decreased by 7.8% compared to the baseline (less cooling) with 40% turbulence intensity, and the tire drag increased by 7.2% with 40% turbulence intensity. This analysis confirms that the car performance decreases with "dirty" air compared to "clean" air. The sensitivity of the output quantities of interest to the tire yaw angle is reflected in the first row of Table 1. The remaining rows in Table 1 show the sensitivity of the mass flow rate and drag force to geometric characteristics, specifically the contact patch, tire bulge radius, tire compression, and brake duct dimensions. In the end, the three most sensitive uncertain variables, namely tire contact patch width, tire yaw angle, and turbulence intensity, were selected for the optimization under uncertainty study. The tire contact patch width was able to expand and contract by up to 1 cm, the tire yaw angle varied between -3 and +3 degrees, and the turbulence intensity varied between 0% and 5%.

Fig. 6 - Subset of uncertain variables tested for sensitivity in output quantities of interest

Results
Formula 1 engineers are primarily interested in three factors related to tire aerodynamics: i) overall tire lift and drag, ii) cooling performance of the brakes, and iii) how the tire air flow affects downstream components (wake characteristics). All three factors are tightly coupled, which makes the design quite complicated, especially when uncertainty in the flexible tire walls and upstream conditions can negatively affect the car performance. Figure 7 illustrates the wake sensitivity caused by flow traveling through the tire hub and exiting from the outboard side of the tire. If the flow of air is not allowed to pass through the tire hub (see top left and bottom left images in Figure 7), there is no mass efflux from the outboard side of the tire and the wake is quite symmetric about the wheel centerline. The wake is dominated by a counter-rotating vortex pair and both the inboard (left) and outboard (right) vortices are of similar size. Alternatively, if the flow of air is allowed to pass through the tire hub, the inboard (left) vortex becomes larger than the outboard (right) vortex, causing wake asymmetry (see top right and bottom right images in Figure 7).

Table 1 - Mass flow rate into the brake duct and drag force on the tire sensitivity for 9 uncertain variables and 3 design variables
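The perturbation analysis summarised in Table 1 boils down to a simple one-at-a-time loop; the sketch below shows the bookkeeping, with run_cfd() as a hypothetical stand-in for the mesh-morphing and solver chain (it is not a real API, and the placeholder numbers are not the study's data).

```python
# One-at-a-time (OAT) perturbation loop: perturb each uncertain variable
# individually and record the relative change of the two outputs.

baseline = {"yaw_angle": 0.0, "turb_intensity": 0.05, "patch_width": 0.0}
perturbations = {"yaw_angle": 3.0, "turb_intensity": 0.35, "patch_width": 0.01}

def run_cfd(params):
    # Placeholder returning normalised (mass_flow, drag) values.
    return {"mass_flow": 1.0 - 0.2 * params["turb_intensity"],
            "drag": 1.0 + 0.2 * params["turb_intensity"]}

ref = run_cfd(baseline)
for name, delta in perturbations.items():
    out = run_cfd(dict(baseline, **{name: baseline[name] + delta}))
    changes = {k: 100.0 * (out[k] - ref[k]) / ref[k] for k in ref}
    print(name, {k: f"{v:+.1f}%" for k, v in changes.items()})
```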


The results of the single-parameter perturbations indicated previously show that the mass flow rate through the brake duct and the tire drag force are more sensitive to the brake duct width than to the brake duct height or length (within the 1 cm range of deformation considered). The physical explanation of this result becomes evident when visualizing iso-contours of turbulent kinetic energy around the tire. Figure 8 presents the difference between a low-width configuration (top) and a high-width configuration (bottom): the larger width of the brake duct causes a larger separation region immediately behind the brake duct, in addition to higher turbulence levels in the shear layer immediately behind the inboard back edge of the tire.



The Pareto frontier showing the optimal brake duct designs under no uncertainty is illustrated in Figure 9. Ten generations, which equates to 450 simulations, were needed to eventually construct the Pareto frontier. Further details about the optimization strategy can be found in Table 2. This table reports the settings of the NSGA-II algorithm adopted to drive the main phases of the genetic algorithm: selection (e.g. mating pool, parent sorting) and reproduction (e.g. crossover and mutation). Leland was used to handle the job scheduling and management and, as a result, the time required to complete the 450 simulations was 2 days, compared to about 4 days without using Leland, which requires submitting jobs manually to the job queuing system using a constant number of processors.

Table 2 - Multi-objective optimization strategy

Among the Pareto set (see Figure 9), the design that achieves the highest mass flow rate is shown in blue and the design that achieves the lowest overall drag on the tire is shown in magenta. The green design is labeled as the trade-off design, since this design tries to achieve the highest mass flow through the inlet of the brake duct while minimizing the total drag on the tire. The baseline geometry, reported in red, was shown not to be on the Pareto front in the deterministic setting.

Fig. 7 - Wake sensitivity (shown by streamwise x-velocity contours for a plane located 1.12 wheel diameters downstream from the center of the tire) for a simplified tire with wheel fairings (top left), baseline F1 tire (top right), baseline F1 tire with blocked hub passages (bottom left), and simplified tire with artificial mass efflux from the blue segment (bottom right)

In the previous results, once the tire configuration and the other input conditions are specified, the solution is uniquely determined without vagueness. On the other hand, when uncertainties are present, the results have to be expressed in a non-deterministic fashion, either probabilistically or as ranges of possible outcomes. The approach we followed here using the SSC is strictly non-intrusive, in the sense that the existing tools are used without modifications, but the solutions - or, more precisely, their probability distributions - are constructed by performing an ensemble of deterministic analyses. Further details about the uncertainty quantification strategy can be found in Table 3.

The variability of the four geometries described above (namely trade-off, highest mass flow, lowest drag, and baseline), as a result of the uncertainties in the tire yaw angle, turbulence intensity, and contact patch width, is presented in Figure 10. The variability of the minimum drag design is the highest, as illustrated by the spread of the magenta dots, followed by the maximum mass flow design shown by blue dots, the trade-off design presented by green dots and the baseline design reflected by red dots. The colored dots in this figure represent the mean probabilistic values and the black lines represent 1 standard deviation of the probabilistic distribution. It is evident in this figure that the optimal designs, on average, move away from the Pareto frontier, decreasing the overall performance of the racing car.

Fig. 8 - Turbulent kinetic energy contours for the minimum drag configuration (top) and maximum cooling configuration (bottom)
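The deterministic Pareto front in Figure 9 is, by definition, the set of non-dominated designs; a generic sketch of that filtering step (with random numbers standing in for the actual CFD objective values) is shown below.

```python
import numpy as np

# Non-dominated filtering: minimise drag, maximise brake-duct mass flow
# (handled as minimising its negative). Not the NSGA-II implementation used in
# the study, just the dominance test itself.

rng = np.random.default_rng(2)
drag = rng.random(450)               # objective 1: to be minimised
mass_flow = rng.random(450)          # objective 2: to be maximised

objs = np.column_stack([drag, -mass_flow])

def non_dominated(points):
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if keep[i]:
            dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
            keep &= ~dominated
    return keep

front = non_dominated(objs)
print(f"{int(front.sum())} non-dominated designs out of {len(objs)}")
```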




Fig. 9 - Deterministic Pareto front (left); the green, blue, magenta, and gray brake ducts in the subfigure on the right correspond to the trade-off, max cooling, minimum drag, and baseline configurations respectively

Table 3 - Uncertainty quantification strategy

A similar conclusion can be drawn when we look at the probability density of the drag force and the brake mass flow (Figure 11). The former shows a large excursion of both the position of the peak and the support, while the latter is only marginally affected. This directional sensitivity under uncertainty with respect to the drag force might suggest that only the drag minimization needs to be treated as a probabilistic objective, while the brake mass flow optimization can be handled using conventional (deterministic) optimization. However, since the solutions identified above move away from the deterministic Pareto, the optimization process cannot be decoupled from the uncertainty quantification process. We plan to tackle the joint problem in a future study.

Fig. 10 - Pareto frontier for the F1 wheel assembly showing the variability of the minimum drag (magenta), baseline (red), trade-off (green), and maximum cooling (blue) designs to uncertainty in the inflow conditions and flexible tire geometry.

Fig. 11 - PDFs of the output quantities of interest used for this study: (a) drag force on tire; (b) mass flow through the tire inlet brake duct

Conclusions
In this work, we introduced an efficient method to perform massive ensemble calculations with application to a complex Formula 1 tire assembly optimization case. Special attention has been placed on the creation of an effective resource manager to handle the large number of computations that are required. Since the geometrical uncertainties associated with rubber tires and the inflow uncertainties associated with upstream "dirty" air have an impact on the dominating solutions, their presence has to be taken into account in the design process. The next step of this study is to consider the presence of uncertainties invasively in the optimization procedure, generating a probabilistic Pareto front rather than analyzing the sensitivity of the deterministic Pareto due to uncertainties.

Acknowledgments The authors would like to acknowledge first and foremost Sculptor Optimal Solutions, specifically Taylor Newill and John Jenkins, for their generous support, training, and licenses. The authors wish to thank Dr. J. Witteveen for providing the initial version of the Simplex Stochastic Collocation algorithm and Steve Jones and Michael Emory for helping with the resource allocation manager. The authors also thank Toyota Motor Corporation - F1 Motorsports Division for providing the original geometry used in this study.

John Axerio-Cilies, Gianluca Iaccarino Stanford University Giovanni Petrone University of Naples “Federico II” Vijay Sellappan EnginSoft Americas



High Pressure Die Casting: Contradictions and Challenges
This article is aimed at offering an overview of the current status of High Pressure Die Casting (HPDC) technology, highlighting both the critical aspects and the potential advantages. Specific attention is paid to the quality requirements of end-users, the achievable production rates, process monitoring and control, and the European and worldwide scenarios. This overview leads to the identification of the 6 most relevant challenges for the HPDC industry: "zero-defect" production, real-time process control, understanding the role of the process variables, process optimisation, the introduction of R&D activities and the dissemination of knowledge about HPDC technology.
Introduction
The High Pressure Die Casting (HPDC) process is particularly suitable for high production rates and is applied in several

industrial fields; in fact, about half of the world production of light metal castings is obtained by die casting. In the HPDC of aluminium alloys, "cold chamber machines" (Fig. 1) are used: the alloy is molten and kept in a crucible, from which a dosing system loads the injection chamber. The main steps of the process are the filling of the cold chamber, the injection into the die, and the extraction. HPDC is a complex process, not only due to the phase transformation the metal undergoes when solidifying in the die. In fact:
• high pressure die castings are produced by pouring liquid metal into a metallic shot sleeve;
• a steel piston accelerates quickly and transports the metal into a steel die, resulting in metal velocities between 100 and 200 kilometres per hour;
• the subsequent extremely short fill time of 50 to 100 milliseconds guarantees the perfect filling of small sections in the die, such as ribs, before metal solidification;
• when the metal solidifies, the volume diminishes, leaving shrinkage in the casting;
• the process tries to overcome these physical phenomena by pressing liquid metal into the die using a couple of hundred atmospheres.
While scrap rates in other production lines, such as machining, are measured in ppm (parts per million), in HPDC, due to the complexity of the process, casting scrap rates lie in the range of percent (parts per hundred).

Fig. 1 - Schematics of HPDC process

HPDC is widely employed, for instance, in automotive components (about 60% of light alloy castings in this field are made by HPDC - a rough estimation could be 80-100 kg of HPDC components in the "average EU car"), but the amount of scrap is sometimes "embarrassing": it is not so rare to have 5 to 10% of scrap, due to different kinds of defects, in almost all cases detected during or after the final operations (e.g. machining or painting).



The Contradictions of the HPDC Process and the Related Challenges
When the HPDC process is approached, at least six contradictory features can be immediately identified:
1. The final application fields of HPDC products increasingly require enhanced quality, reliability and safety, BUT, among the foundry & forming technologies, HPDC is certainly the most critical one in terms of defect generation and poor reliability of products.
2. HPDC is a manufacturing route which becomes highly advantageous when elevated production rates are required, BUT, at present, quality control is performed at the end of the production stage, with no real-time correction of critical parameters.
3. A variety of HPDC process parameters is measured today and used for defect detection, BUT the quality-relevant process parameters are monitored by the individual control systems of the equipment and no interfacing to the resulting part quality takes place.
4. The HPDC process is highly automated and extended use of process simulation techniques is made by the companies, BUT the in-field HPDC process setup and optimisation is still based on the specific experience and skills of a few people.
5. Specific studies have demonstrated that HPDC companies are innovation-sensitive, with investments mainly directed at increasing production capacity (27%), efficiency (22%) and product quality (13%), BUT, within SMEs, the internal R&D potential of HPDC companies is relatively limited, and innovation requires multi-disciplinary integration and cooperation (which are quite unusual for SMEs).
6. HPDC products are typically addressed to large industries, for assembling components, systems and machines to be used in widely extended markets, BUT 90% of EU light alloy HPDC foundries are SMEs, with an average number of employees going from 20 (IT, PT) to 40 (SE, FI, DK), 80 (DE, ES), up to 140 (NO).

Fig. 2 - Defects classification and origin

It seems that each attractive aspect of HPDC is counterbalanced by technical or structural difficulties, leading, in the opinion of the authors, to six inter-related challenges which must be faced by HPDC-oriented companies:
1. leading HPDC to a "zero-defect environment";
2. by introducing real-time tools for process control;
3. by monitoring and correlating all the main process variables;
4. by making the process setup and optimisation a knowledge-based issue;
5. thanks to multi-disciplinary R&D activities;
6. impacting on the HPDC foundries scenario.
In the following paragraphs, each of the six challenges mentioned above will be described, both in terms of current technologies and approaches and of future and innovative solutions.

Leading HPDC to a "Zero-Defect Environment"
Due to the extreme conditions to which the molten alloy (injection speed up to 200 km/h, solidification in a few seconds) and the die (contact with a molten alloy at more than 700°C and, after 30-40 seconds, with a sprayed lubricant at room temperature) are subjected, to the difficult-to-keep-constant process parameters and to the lack of interaction among the process control units, HPDC can be considered a "defect-generating process". Not only is an average of 5-10% scrap consequently produced, but the kind, size and criticality of the defects also vary. According to a recently proposed defects classification for HPDC components, 9 sub-classes of defects (leading to more than 30 specific defect types) can be identified, as summarised in Table 1, together with their estimated percentage of occurrence.

Table 1 - HPDC defects classification and possibility of prediction, validation and monitoring
Defect sub-class | % of occurrence | Predictable by simulation? | Experimental validation by | It can be monitored by
Shrinkage defects | 20 | only partially | X-rays, light microscopy | temperature, pressure, metal front sensors
Gas-related defects | 15 | no | X-rays, light microscopy, blister test | air pressure, humidity
Filling related defects | 35 | yes | Optical inspection, pressure tests | air pressure, metal front sensors, temperature
Undesired phases | 5 | no | light microscopy, SEM | shot chamber sensoring
Thermal contraction defects | 5 | yes | Optical inspection, light microscopy | temperature
Metal-die interaction defects | 5 | only partially | Stereo microscopy, SEM | temperature, ejection force
Out of tolerance | 5 | by advanced simulation | Visual inspection | geometry measures
Lack of material | 5 | yes | Visual inspection | geometry measures
Excess of material, flash | 5 | by advanced simulation | Visual inspection | geometry measures
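Represented as data, the classification of Table 1 lends itself to simple consistency checks and filtering; the structure below is an illustrative encoding (the field names are mine, not part of the original article).

```python
# Illustrative lookup structure for the Table 1 classification, e.g. to filter
# which defect sub-classes can be predicted by simulation.

defects = {
    "Shrinkage defects":             {"occurrence": 20, "simulation": "only partially"},
    "Gas-related defects":           {"occurrence": 15, "simulation": "no"},
    "Filling related defects":       {"occurrence": 35, "simulation": "yes"},
    "Undesired phases":              {"occurrence": 5,  "simulation": "no"},
    "Thermal contraction defects":   {"occurrence": 5,  "simulation": "yes"},
    "Metal-die interaction defects": {"occurrence": 5,  "simulation": "only partially"},
    "Out of tolerance":              {"occurrence": 5,  "simulation": "by advanced simulation"},
    "Lack of material":              {"occurrence": 5,  "simulation": "yes"},
    "Excess of material, flash":     {"occurrence": 5,  "simulation": "by advanced simulation"},
}

assert sum(d["occurrence"] for d in defects.values()) == 100   # percentages add up

predictable = [name for name, d in defects.items() if d["simulation"] != "no"]
print(predictable)
```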





Each stage of the conventional HPDC process can generate these defects, which at first-level classification fall into three categories (surface defects, internal & surface defects, geometrical defects), as shown in Fig. 2. All these defects are classified in further detail, their morphology and origin are well known and, in some cases, an advanced engineering tool such as process simulation can be used for their prediction (see Table 1). Such tools have been experimentally validated with reference to specific sub-classes of defects. Some of the process parameters and the experimental variables which possibly generate defects can be monitored by dedicated sensors and devices. One of the shortcomings of the HPDC process is that the overall complexity of the process is not handled in one system: machine controls measure machine parameters, furnace temperatures are controlled separately, lubrication systems only control lubricant pressures and application times, and so on. Within the die, where the solidification of the metal takes place and the final part quality is determined, nearly no process parameter is measured, controlled or related to the final part quality. The challenge of leading HPDC to a "zero-defect environment" requires advanced engineering tools capable of managing the complexity of the process. Key-variable identification, knowledge of the variables-defects relationships and the implementation of real-time sensor devices to monitor these variables must be managed by these tools. They must also have the ability to integrate all this information and to carry out re-active strategies to instantaneously "balance" the process in view of zero-defect production.

Introducing Real-Time Tools for Process Control
It is well known that HPDC, whose typical cycle time ranges from 40 to 60 seconds, is particularly suitable when high production rates are required. The high costs associated with HPDC tools/dies are recovered when at least 5,000-10,000 castings per year are produced (Fig. 3). If this manufacturing context is considered, it is clear that quality investigations (carried out on a statistical basis or on the whole production) are a critical issue, and must be performed as close as possible to the moment at which defects are generated. In the actual HPDC production, defect detection is usually carried out at the end of the manufacturing cycle, often only by visual inspection or with a delay which can no longer be accepted, as it strongly affects costs and delivery time. Advanced sensors applied to the HPDC process will allow, first of all, the continuous control of the process itself, recording the evolution of all the variables during each manufacturing cycle, in order to identify all the deviations from the optimal set-up. Obviously, completely new hardware & software solutions are needed, to be incorporated into the die to achieve a "reactive control" of quality.

Monitoring and Correlating all the Main Process Variables
Today's state of the art is that HPDC machines are equipped with sensor systems which allow basic process data, such as hydraulic pressures or piston velocities, to be measured. If the fill time varies only in the range of milliseconds, there will be an impact on defect location. If, however, the fill time lies in the range of milliseconds, the piston speed should be controlled one order of magnitude faster, which is not the case in state-of-the-art equipment. Until today, other relevant issues are not taken into consideration at all, although each impacts casting quality: thermal loads change the geometry of the shot sleeve and lead to delayed fill, and the liquid metal cools in the shot sleeve, leading to pre-solidification. Taking into account the actual HPDC process architecture (Fig. 4):
• in Product and Process design, only the mechanical performance of the component is considered, neglecting the real behaviour of the diecast alloy, which also results from the presence and characteristics of defects and imperfections;
• in the Overall Casting process, there is a very partial control of the pre-casting stages (melting, degassing, pouring, etc.), focussed on the set-up of the alloy degassing, furnace level and cup movement to pour the shot chamber. On the other hand, the HPDC machine set-up is actually the "best" controlled system, but it simply "tries" to replicate the input set-up, with the real shot profile visualised after casting, allowing only "post-mortem" evaluations;
• in the Die Management, the temperature and all the other parameters of the conditioning media (water, oil) are set up and controlled, but correlations with the final quality of castings are performed only when defect causes have to be identified, while no action to prevent defect formation is carried out;

Fig. 4 - Actual HPDC process architecture

Fig. 3 - Competitiveness area for different casting processes of Al alloys



• in Post-Casting Operations (including machining, heat and surface treatments), the main variables are certainly recorded (e.g. in the movement of robots and cutting equipment; the actual programming method allows a position-time control), but their set-up is usually defined a priori, without considering the possible generation of defects.
From the described situation, the need for an effective integration of the design and manufacturing chain with the following HPDC components becomes evident:
• sophisticated and accurate results interpretation to estimate the die and process output as well as the scale and morphology of the defects;
• the use of advanced sensors to monitor the level of hydrogen in the alloy, the temperature and volume in the furnace and the temperature and volume in the pouring cup;
• the use of real-time sensors to control position, acceleration and velocities of the plunger and to correlate them to product quality, on the basis of meta-models of the machine behaviour, defining the robustness of the process and die design and the range of quality tolerances;
• specific sensors (temperature, pressure, humidity, air pressure) dedicated to the real-time monitoring of the thermal-mechanical behaviour of the die, including special reaction devices to modify the process in view of "zero defects" self-adaptation by active gate section variation or venting valve modification;
• prediction of the durability of the die by simulating the deterioration mechanisms;
• control of die surface lubrication by temperature, flux and direction sensors, as well as thermo-regulation by changes in temperature, flux and medium consistency;
• efficient thermo-regulation by temperature control of the heating/cooling media and by timed activation/deactivation, to optimise the heat balancing of the die during production and/or in the warm-up phase.

Making the Process Set-up and Optimization a Knowledge-Based Issue
The HPDC process only seems to be a "fully" automated production process that promises repeatability, high production rate and automation in every phase of the production cell. In practice, the IT control of the process is attributed to a single mechanism with active synchronisation, but excluding any interaction with product quality and equipment performance. The result is that the process set-up and its cost optimisation are essentially based on the skills and competencies of a few persons. In other words, it can be said that HPDC technology is more a personal asset than a company asset. This strongly impacts production costs, whose quantification becomes uncertain and variable. Furthermore, a traditional quality management perspective is employed in HPDC, assuming that failure costs decrease as the money spent on appraisal and prevention increases, and that there is an optimum amount of quality effort to be applied in any situation, which minimizes the overall investment.
As quality effort is increased, the costs of providing this effort – through extra quality controllers, inspection procedures and so on – become higher than the benefits achieved. However, the costs of errors and faulty products decrease thanks to these investments. Furthermore, in this traditional perspective, the inevitability of errors is accepted, failure costs are generally underestimated ("reworking" defective products, "re-serving" customers, scrapping materials, loss of goodwill, warranty costs, wasted management time, loss of confidence among operations processes), and prevention costs are considered inevitably high. The importance of quality to each individual operator is not assessed, and preventing errors is not considered an integral part of everyone's work. Training, automatic checks, anything which helps to prevent errors occurring in the first place, are often considered simply as costs.

The breakthrough expected for the HPDC process is the change from a simple mechanism input set-up to dynamic total quality management. This implies that the set-up process is accelerated and that a continuous cost optimisation activity will run autonomously, in parallel to the process itself. The Total Quality Management (TQM) method, which stresses the relative balance between different types of quality cost rather than looking for "optimum" levels of quality effort, must be implemented in HPDC foundries. The TQM approach emphasizes prevention, to stop errors happening in the first place, rather than placing most emphasis on appraisal. The more effort that is put into error prevention, the more internal and external failure costs are reduced. Then, once confidence has been firmly established, appraisal costs can be reduced. Eventually, even prevention costs can be decreased in absolute terms, though prevention remains a significant cost in relative terms. Initially, total quality costs may rise as investment in some aspects of prevention, mainly training, is increased; however, a reduction in total costs will quickly follow. Process data will be organised thanks to Statistical Process Control (SPC) and Control Charts, to see whether the process looks as though it is performing as it should, or whether it is going "out of control". An equally important issue to take into account is whether the variation in the process performance is acceptable to external customers. This will depend on the acceptable range (called the specification range) of performance that will be tolerated by the customers.
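As a minimal sketch of the control-chart logic mentioned above (hypothetical variable names and values, not data from any foundry), monitoring a shot-by-shot process variable reduces to computing a centre line and control limits and flagging the shots that fall outside them:

import numpy as np

def control_limits(samples, n_sigma=3.0):
    # Shewhart-style limits for a monitored variable, e.g. peak cavity pressure per shot
    samples = np.asarray(samples, dtype=float)
    centre = samples.mean()
    sigma = samples.std(ddof=1)
    return centre - n_sigma * sigma, centre, centre + n_sigma * sigma

# Illustrative peak in-cavity pressure values [bar] recorded over 12 shots
shots = [412, 418, 409, 421, 415, 430, 411, 419, 427, 508, 416, 414]
lo, mid, hi = control_limits(shots)
out_of_control = [i for i, x in enumerate(shots) if not lo <= x <= hi]
print(f"limits: {lo:.1f} / {mid:.1f} / {hi:.1f}, suspect shots: {out_of_control}")

In a real set-up the limits would be established on an in-control baseline and then applied to new shots as they are produced; the point here is only that the check is simple enough to run in real time.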



Multi-disciplinary R&D Activities
The fact that several disciplines and competencies are needed to run the whole HPDC design and manufacturing chain is obvious: process metallurgy, machine design, automation, numerical simulation, heat transfer, furnace and die design and construction, and many others are involved. The actual problem is that the approach to HPDC is typically mono-disciplinary: dies are designed and built for productivity, but not interfaced with control systems; lubrication is carried out for die safety and cycle time, without taking into account metallurgical quality; high productivity is always targeted, but a real cost analysis is never carried out. The level of interaction among the disciplines playing a role in the HPDC field is still poor and limited: each of them is certainly introducing relevant innovations which, however, are in most cases not integrated or are fully separated. The impact of a specific innovation on productivity and costs is experienced directly in-field, at high cost and with no consistent background. Although innovation is strongly needed by HPDC foundries (and they know this very well), they have no scientific, technological and organisational background to develop and apply it. These foundries are very limited by their SME structure, which intrinsically stops multi-disciplinary innovation, which, on the other side, is the key for survival.

Methods and tools for the integration of different technologies, disciplines and competences are strongly needed, to lead the HPDC manufacturing sector towards a more knowledge-based and interactive approach, leading to the optimal use of personal and company resources. Only an integrated use of each personal skill and expertise, in order to have a complete quality management of the HPDC process and products, will assure the development of a "new generation" of HPDC foundries.

Fig. 6 - Weight savings and market penetration for Al alloys in automotive applications

Impacting on the HPDC Foundries Scenario
The relevant figures about the situation of EU HPDC foundries are the following:
• there are more than 2000 light alloys foundries in Europe (Table 2);
• they are basically SMEs, with an average number of employees around 50 (though most of them have less than 20 employees);
• the end users of diecast products are the transport industry (60%), mechanics (7%), electro-mechanics (9%) and civil engineering (20%), with a growing trend in automotive and transport (Fig. 5), supported by the reduction achievable in fuel consumption and emissions (Fig. 6);
• the production, due to the well-known effects of the crisis, was strongly reduced in 2008 and 2009, with a partial recovery in 2010 (see also Table 3);
• almost 30% of HPDC machines were installed in foundries more than 25 years ago, and are thus very close to obsolescence;
• HPDC technology makes use, in almost all cases, of recycled (the so-called "secondary") alloys, with relevant savings in terms of energy and natural resources.

         Italy   Germany   France   Poland   UK
2008     960     346       335      245      236
2009     920     344       319      n.d.     220

Table 2 - Number of non-ferrous foundries in Europe (Al alloys foundries are about 80% of them)

Country    2006      2007      2008      2009      2010*
Germany    773,000   882,000   802,000   560,000   759,000
Italy      897,000   912,000   820,000   560,000   727,000
Spain      129,000   125,000   110,000   81,000    94,000
Sweden     55,000    57,000    51,000    31,000    38,000

Table 3 - Production of Al alloys castings in European countries (values in tons): about 60% of the production is obtained by HPDC (*estimated)

For EU HPDC foundries and their survival of the crisis, competitiveness, efficiency and innovation will be crucial. Of key importance in the next years will be the implementation of focussed dissemination activities for the personnel of the companies involved in the HPDC design and manufacturing chain. The concepts of multi-disciplinary integration and effective interaction for quality management need to be shared and "absorbed" by everyone in the chain. Performing these actions will allow HPDC foundries to achieve a more mature and efficient approach to address large end-users, and to exploit their relevant potential.

Acknowledgements
This article is the result of a survey carried out by the authors with key people in the HPDC field. In particular, we would like to thank: Jeorg and Uwe Gauermann (Electronics GmbH), Lothar Kallien (GTA, Aalen University), Marc Schneider (MAGMA GmbH), Lars Arnberg (NTNU, Trondheim), Aitor Alzaga (Tekniker), Luca Baraldi and Flavio Cecchetto (MOTUL).

Fig. 5 - Data on Aluminium alloys use in automotive

For more information:
Franco Bonollo, Giulio Timelli - Università di Padova - DTG, Vicenza, Italy
Nicola Gramegna - EnginSoft, Padova, Italy
info@enginsoft.it




Reducing Fuel Consumption, Noxious Emissions and Radiated Noise by Selection of the Optimal Control Strategy of a Diesel Engine
Despite recent efforts devoted to the development of alternative technologies, it is likely that the internal combustion engine will remain the dominant propulsion system for the next 30 years and beyond. Due to more and more stringent emission regulations, methods and technologies able to enhance the performance of these engines in terms of efficiency and environmental impact are strongly required. Our present work focuses on the development of a numerical method for the optimization of the control strategy of a diesel engine equipped with a high pressure injection system, a variable geometry turbocharger and an Exhaust Gas Recirculation (EGR) circuit. In this article, we present a preliminary experimental analysis for the characterization of the considered six-cylinder engine under various speeds, loads and EGR ratios. The fuel injection system is separately tested on a dedicated test bench, to determine the instantaneous fuel injection rate for different injection strategies. The collected data are employed for tuning proper numerical models, able to reproduce the engine behavior in terms of performance (in-cylinder pressure, boost pressure, air-flow rate, fuel consumption), noxious emissions (soot, NO) and radiated noise. In particular, a 1D tool is developed with the aim of characterizing the flow in the intake and exhaust systems and predicting the engine-turbocharger matching conditions, including a short-route EGR circuit. A 3D model (AVL Fire™) is assessed to reproduce in detail the in-cylinder thermo-fluid-dynamic processes, including mixture formation, combustion, and main pollutants production. An in-house routine, also validated against available data, is finally developed for the prediction of the combustion noise, starting from the in-cylinder pressure cycles.


Obviously, data exchange between the codes is foreseen. The overall numerical procedure is first checked with reference to the experimentally analyzed operating points. The 1D, 3D and combustion noise models are then coupled to an external optimizer (modeFRONTIER™) in order to select the optimal combination of the engine control parameters, improving the engine performance while simultaneously minimizing noise, emissions and fuel consumption. Under the hypothesis of a pilot-main

Table 1 - Engine data

Fig. 1 - Architectural layout of the engine



injection strategy, a multi-objective optimization problem is solved through the employment of a genetic algorithm. Eight degrees of freedom are defined, namely start of injection, dwell time, energizing time of pilot and main pulses, EGR valve opening, throttle valve opening, swirl level, and turbine opening ratio. As this work shows, nonnegligible improvements can be gained, depending also on the importance given to the various objectives. EXPERIMENTAL ANALYSIS The considered engine is an in-line six-cylinder turbocharged Diesel engine, equipped with a common rail fuel injection system (CR-FIS). The main engine characteristics data, together with its architectural layout, are reported in Table I and Figure 1, respectively. A comprehensive characterization of the engine behavior in terms of energy conversion performance, noxious emissions and radiated noise is obtained under different operating conditions. A very important issue, in a 3D simulation, is the correct specification of the fuel injection profile. In fact, the latter has a great impact on the spray development, air mixing, and fuel impingement/evaporation. A characterization of the hydraulic behavior of the six-hole injector is made by measuring the instantaneous mass flow rate on a dedicated test bench under different injection strategies and rail pressures. However, since the tested points cannot cover all the engine working conditions, a procedure is developed to gain parameterized injection mass flow rates, starting from the available ones, and only using, as input variables, data relevant to rail pressure, energizing current, and total amount of injected fuel. 1D MODEL A one-dimensional simulation code is employed to initially predict the performance of the considered engine. The 1D code solves the mass, momentum and energy equations in the ducts constituting the intake and exhaust system, while the gas inside the cylinder is treated as a zero-dimensional system. 3D MODEL The multidimensional modeling of the in-cylinder processes characterizing the operation of the considered engine, is realized within the AVL Fire™ software environment. It is conceived to obtain a rather large database of results to be processed by means of a multiobjective optimization tool. Therefore, the engine model is built by introducing simplifying assumptions allowing cost-


effective solutions in terms of amount of time needed for the computation of each engine operating condition. The simplifications are properly chosen, depending on factors including the required level of accuracy and the available computing power, but also on the basis of the authors' knowledge of similar engines and of high pressure injection systems for Diesel fuel. COMBUSTION NOISE ESTIMATION PROCEDURE In order to understand the effects of the combustion characteristics on the overall engine noise, an empirical model is developed for the engine under test, whose results are compared with those experimentally measured through the employment of a noise meter instrument. The overall noise is predicted starting from the in-cylinder pressure data. The procedure is suitable to be implemented within the optimization process. OPTIMIZATION The previously described 1D, 3D and combustion noise models constitute the basis for the optimization of the control strategy of the considered engine. A reference operating condition, corresponding to the experimental case measured at 1500 rpm, 4 bar brake mean effective pressure (BMEP), is chosen (overall mass of injected fuel Mf = 13.64 mg/cycle/cylinder). The logical development of the optimization problem within the modeFRONTIER™ environment is explained in Figure 2. Basing on the values of 8 degrees of freedom (start of injection, dwell time, energizing time of pilot and main pulses, EGR valve opening, throttle valve opening, tumble port valve opening, and turbine opening ratio), a Fortran routine firstly computes the coordinates of the 8 points (xi, yi, i=1,8) defining the overall injection profile, for the assumed constant amount of total injected fuel, Mf. The injection profile, together with the values of other control parameters, are written in the input files of both 1D and 3D codes, which are then sequentially executed. In particular,

Fig. 2 - Workflow of the optimization problem developed in modeFRONTIER environment



the 1D code computes the engine-turbocharger matching conditions and passes to the 3D code the cylinder-by-cylinder averaged pressure, temperature and composition at Inlet Valve Closing (IVC), together with the estimated swirl level. At the end of each 3D run, a proper script routine extracts the 3D computed pressure cycles and gives them as input to the Matlab™ procedure predicting the overall combustion noise. Simultaneously, the NO and soot levels at the end of the 3D run are returned to the optimizer. During the optimization loop, the 3D code is executed over the closed-valve period, while the 1D code gives information on the mass exchange phase. This allows the whole pressure cycle and the related Indicated Mean Effective Pressure (IMEP) to be reconstructed. The BMEP is then obtained thanks to a mechanical loss correlation. As a consequence, the brake specific fuel consumption (BSFC) can be estimated as:

where:
• Mf: mass of injected fuel
• BMEP: brake mean effective pressure
• V: engine displacement
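Written out with the symbols above, and only as a sketch of the relationship rather than the exact expression used by the authors, the estimate can be read as (z denotes the number of cylinders, an added symbol, since Mf is defined per cycle and per cylinder):

$$ \mathrm{BSFC} \;=\; \frac{z\,M_f}{\mathrm{BMEP}\cdot V} $$

i.e. the fuel mass injected per engine cycle divided by the brake work delivered per cycle (BMEP times displacement), conventionally converted to g/kWh.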

As a next step, a multi-objective optimization is defined to simultaneously search for the minimum BSFC, the minimum soot, the minimum NO and the minimum overall noise. To solve this problem, the MOGA-II algorithm is utilized, which belongs to the category of genetic algorithms and employs a range adaptation technique to overcome time-consuming evaluations. As usual with multi-objective optimization problems, a multiplicity of solutions is expected, belonging to the so-called Pareto frontier. Figure 3 displays a bubble chart of the optimization process. It highlights the well-known trade-off between NO and soot emissions. Each bubble is colored proportionally to the BSFC level, while the bubble size is proportional to the externally radiated Sound Pressure Level (SPL). This representation allows the best efficiency solutions (dark blue points) to be easily located. These, however, also produce considerable NO emissions and a generally high radiated noise. As expected, it is not possible to identify a unique optimal solution; a different choice must be made depending on the importance assigned to the various objectives. As an example, Table II reports the performance of the solutions with, respectively, low NO emissions (#081), acceptable radiated noise (#112), low soot (#263) and low fuel consumption (#529). The latter parameter is here considered the most important one. Each solution, in fact, provides a substantial BSFC improvement with respect to the initial reference case (#000). Table II also highlights the best and worst objective levels, which are illustrated in blue and red colors.

Fig. 3 - Bubble chart of the optimization process
Table II - Values of the objective parameters in selected optimized solutions

Figure 4 compares the pressure cycles and noxious emissions of the selected optimized solutions. A very advanced injection process with a small pilot is specified in solution #081; it achieves the lowest NO emission, even lower than in the reference #000 case. A low BSFC is attained in this case as well. The low NO emission is determined by the occurrence of a premixed low-temperature combustion (LTC), diluted by the presence of a high residuals concentration. Soot emission is comparable with the reference case. A slight delay of the injection process, coupled with a greater pilot injected fuel quantity (#529) and a reduced EGR amount, produces a strong NO increase, but also provides the best result in terms of fuel consumption, with a reduced soot emission. A further soot reduction is obtained with a single-shot injection and a negligible EGR rate in solution #263. In this case, however, both NO emission and radiated noise reach very high values. Solution #112, finally, presents a more delayed injection and a very low EGR value. This is the way to obtain a reduced noise level, while maintaining the fuel consumption very low. In this case, unfortunately, the NO emission is much higher than in the reference case.
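As a minimal sketch of the non-dominance criterion that defines such a Pareto frontier (generic, purely illustrative objective values, not the computed engine outputs), a design belongs to the frontier if no other design is at least as good in every objective and strictly better in at least one:

def pareto_front(designs):
    # designs: list of tuples of objective values, all to be minimized
    front = []
    for i, a in enumerate(designs):
        dominated = any(
            all(bk <= ak for ak, bk in zip(a, b)) and any(bk < ak for ak, bk in zip(a, b))
            for j, b in enumerate(designs) if j != i
        )
        if not dominated:
            front.append(a)
    return front

candidates = [
    (245, 0.8, 3.1, 92),   # (BSFC, soot, NO, noise) - illustrative values only
    (252, 0.5, 2.4, 95),
    (240, 1.1, 4.0, 90),
    (250, 0.9, 3.5, 93),   # dominated by the first candidate
    (260, 0.4, 2.2, 97),
]
print(pareto_front(candidates))

Algorithms such as MOGA-II essentially evolve the population towards this non-dominated set instead of collapsing the objectives into a single weighted function.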





The presented results highlight the difficulties encountered in realizing an improvement in all the considered objectives. The best compromise solution is probably obtained in solution #081, which confirms the potential of the recently proposed innovative combustion modes. A non-negligible penalty on radiated noise, however, has to be paid.

Fig. 4 - Comparison of pressure cycles and noxious emissions in selected optimized solutions

The above discussion demonstrates the difficulties in identifying an optimal set of parameters able to comply with so many conflicting needs, even when a modulated injection process is considered and a high number of degrees of freedom is available for the engine control. The best compromise is definitely accomplished through increased EGR, boost and swirl levels and takes advantage of an early injection strategy. Although the relationship between the overall engine behavior and the control parameters is roughly expected, the proposed methodology offers the advantage of quantifying the estimated improvements, taking into account mutual dependencies and cross-correlation effects. In addition, the selection of the optimal solution can be carried out through a standardized set of designer preferences, depending on the importance assigned to the single objectives. These may vary in the different operating conditions, for instance by focusing the attention on the noxious emissions at low load, while improving the performance and reducing the noise emission at high load.

CONCLUSION
This article presents integrated numerical techniques aimed at characterizing the behavior of a six-cylinder Diesel engine equipped with a CR-FIS, from both an energetic and an environmental point of view. In particular, a 1D code is employed to evaluate the overall engine performance by varying the control parameters that affect the engine-turbocharger matching conditions and the EGR level. The classical limitations of this kind of numerical approach, i.e. the absence of a description of the mixture formation and emission production processes, are overcome through the utilization of a 3D CFD code. This allows the effects of the injection strategy, EGR and swirl levels on the in-cylinder pressure cycle and the main pollutants formation to be explored. The results of the fluid-dynamic analyses also allow the prediction of the radiated noise, based on the structural characteristics of the engine under consideration. The coupling of these different procedures with an optimization code is realized to analyze very different operating conditions. The developed procedure is able to select a proper combination of 8 control parameters, identifying the best compromise solution between the predefined objectives of low fuel consumption, low soot and NO emissions and low radiated noise. The presented methodology has the main advantage of simultaneously considering many different aspects in the engine tuning process, and highlights the difficulties in the selection of the optimal solution, due to the presence of conflicting needs. Nevertheless, it is shown that it is possible to improve the experimentally defined set point, mainly in terms of fuel consumption and NO emission, with a non-negligible penalty, however, on the radiated noise (solution #081).

The optimal setting of the control parameters confirms the trend of some current research in the field, going towards operating conditions characterized by increased EGR and swirl levels, together with the arrangement of an early injection strategy. These settings, in fact, promote a typical low temperature combustion, where NO emission can be minimized without significant detrimental effects on soot.

In the future, a further validation of the 3D code, together with the inclusion of a kinetic scheme, will allow the recently proposed combustion modes to be analyzed more directly and the methodology to be extended to multiple operating conditions and objectives (CO and HC production). In this way, a variable combustion mode can be realized, depending on the working conditions and the selected objectives.

Daniela Siano - Istituto Motori CNR, Italy
Fabio Bozza - Università di Napoli Federico II, Italy
Michela Costa - Istituto Motori CNR, Italy

For more information:
Francesco Franchini, EnginSoft
info@enginsoft.it




Fluid Refrigerant Leak in a Cabin Compartment: Risk Assessment by CFD Approach
In the framework of the greenhouse gas emissions reduction initiatives, the European Union (EU) requires that air-conditioned vehicles sold in EU countries use refrigerants with a Global Warming Potential (GWP) of less than 150, starting from January 1st, 2011 for new vehicle types and for all vehicles by 2017 (regulation 2006/40/EC). The air conditioning systems of passenger cars, trucks and other vehicles currently use the hydrofluorocarbon R-134a as refrigerant, which has no effect on the atmospheric ozone layer but has a GWP of 1430. The Honeywell-DuPont fluid refrigerant named HFO-1234yf is, at the moment, the first option to replace R-134a. Critical open issues of this fluid are:
• it is flammable (R12 classified), even if its flammability is low (ASHRAE A2L);
• when burning, or in contact with hot surfaces, it produces toxic gases (HF);
• high cost.
The present work focuses on evaluating the risk of ignition associated with a leak of the new refrigerant fluid into the cabin, due to an evaporator fault in case of a vehicle crash or of heat-exchanger corrosion. A 3D model of a B-segment passenger car cabin, HVAC module and ducts has been analyzed. By means of a CFD approach, the refrigerant concentration has been identified for different working conditions.

Activity objective
The purpose of the work is to predict the refrigerant leakage distribution into the passenger compartment, both in case of a vehicle crash and in case of evaporator corrosion, in several working conditions, to identify the potential risk of refrigerant ignition in the cabin.


For these types of damage the leak diameter has a standard size:
• evaporator damage due to a vehicle crash => Φ = 6.35 mm leak;
• evaporator damage due to corrosion => Φ = 0.5 mm leak.
With the CFD approach it is possible to identify the refrigerant flammable concentration within the passenger cabin.

Cabin model
The selected car cabin is a B-segment vehicle with 2 m3 of internal air, the typical size of a small-medium European vehicle. This low air volume sets the worst condition, because the refrigerant concentration will be greater than in a higher-volume cabin with the same refrigerant leak rate. The geometry includes the following objects:
• full cabin model with 1 and 4 passengers;
• HVAC (Heat Ventilation and Air Conditioning) module with condensate drain box;
• complete air distribution system: ducts, vent outlets with vanes, floor ducts and outlets.

Fig. 1 - CAD of passengers, cabin, HVAC, ducts and outlets




Starting from the CAD evaporator surface, the effective one is defined: the real opening area resulting from the difference between the CAD surface and the closed surface of the radiator pipes and fins. This area (green in fig. 3) is very important to define the correct air velocity from the evaporator once the air mass flow is known, because the air velocity is the main driver of the refrigerant diffusion.

Fig. 2 - HVAC simplified, rear floor ducts and outlets

The condensate drain box is taken into account to consider the potential discharge of air and leaked refrigerant through this outlet. The 6.35 mm and 0.5 mm leaks are placed in the middle of the effective evaporator surface.

Cabin air exhaust
The cabin ventilation valves are modelled with three areas:
• the position of the areas is coherent with the real air path between the cabin and the boot compartment;
• the size of the areas has been defined from experimental data to reproduce the cabin fluid-dynamic resistance value: 105 Pa @ 350 m3/h.

Fig. 3 - evaporator and leak geometry details

Mesh model
The mesh model has been realized in the ANSYS ICEM CFD environment using a tetrahedral mesh. The mesh size differs for the various components, with a high density inside the HVAC to allow the code to simulate correctly the refrigerant transport by the air. Six different mesh models have been built for the different geometry configurations: vent, floor and bi-level air distribution with 1 and 4 passengers. The mean mesh size is about 7 million cells.

Fig. 4 - cabin ventilation valves

Geometry
Figure 1 shows the cabin components taken into account.

HVAC and floor ducts
The HVAC has been simplified with respect to the original geometry because the air mass flow is known, so it is not necessary to introduce the blower and the evaporator with their performance curves (fan pressure and evaporator pressure drop). Therefore the evaporator has been defined with its surface towards the cabin only. Moreover, the air mixing valve is set to maximum cold for the simulation, so the heater and its duct portion inside the HVAC are not considered.

Simulation conditions
The mesh model has been solved with the ANSYS CFX-13 numerical code, using 8 parallel processors. The physical conditions set are the following:
• transient simulation;
• thermal energy;
• turbulence model = k-ε;
• buoyancy model;
• fluid model = variable composition mixture.

Fig. 5 - surface mesh of vent duct and front floor outlets




Fig. 6 - surface mesh of HVAC with rear floor ducts and rear floor outlet

Fig. 7 - surface mesh with 4 passengers

The fluid components are defined by means of variable mass fractions (Variable Composition Mixture). In the solution of a multicomponent simulation, a single velocity field is calculated for each multicomponent fluid. Individual components move at the velocity of the fluid of which they are part, with a superimposed drift velocity arising from diffusion. The transient model is used to take into account the real amount of refrigerant fluid in the air conditioning system and its discharge into the cabin, setting the correct discharge time in the code. After the complete refrigerant release into the cabin, an additional transient time of 30 minutes is simulated to understand the diffusion of the refrigerant in the cabin itself.

Leak rate calculation – Payne model
The aim of the model is to generalize the correlation for refrigerant mass flow through a short tube orifice used in a vapor compression cycle. Moreover, the model has to correlate the mass flow rate of several different refrigerants into a single closed-form equation, capable of predicting the mass flow rate over a wide range of conditions. The Payne model adopted predicts both single-phase and two-phase mass flow rates. It is a semi-empirical model, based on non-dimensional groups built from the geometrical parameters of the orifice (such as the hole diameter) and from the thermo-fluid-dynamic characteristics of the fluid (such as the critical pressure and temperature, and the density). The basic equation of the model for a single-phase flow is:
where:
• G [kg/s] is the fluid mass flow
• ρf [kg/m3] is the density of the fluid
• Pc [Pa] is the critical pressure
• f(.) [-] is the correlation
• π1,…, πn are the non-dimensional parameters
• a1,…, an are the coefficients of the correlation
For the two-phase flow, the calculation requires the evaluation of the single-phase flow at saturated upstream conditions; the basic equation for this condition is:
where:
• C(.) is the single- versus two-phase flow correction
• tp1,…, tpn are the non-dimensional parameters
• b1,…, bn are the coefficients of the correlation
• sp and tp indicate the single- and two-phase flow, respectively
For the present application, the hole on the evaporator has been assumed to be equivalent to an orifice; applying the Payne model, the estimated mass flows are reported in Table 1.

                             Ambient @ 56°C                          Ambient @ 15°C
                             AC on @ sat 4°C   AC off @ sat 56°C     AC on @ sat 4°C   AC off @ sat 15°C
hole 0.5 mass flow [g/s]     0.05              0.09                  0.02              0.02
hole 6.35 mass flow [g/s]    7.61              13.95                 3.48              4.02

Table 1 - fluid refrigerant mass flow

In particular, at 15 °C ambient temperature a leakage condition of two-phase refrigerant fluid has been defined, with a quality of 0.7 liquid and 0.3 vapor, both for driving and parking conditions; whereas at 56 °C ambient temperature the refrigerant leakage is in the vapor phase only.

Fluid models
For the gas-gas model, the fluid components are defined by means of variable mass fractions (Variable Composition Mixture): additional transport equations are applied by the code. For the liquid-gas model, the liquid evaporation model is a model for particles with heat and mass transfer. The model uses two mass transfer correlations depending on whether the droplet is above or below the boiling point. The boiling point is determined through the Antoine equation, a vapor pressure equation that describes the relation between vapor pressure and temperature for pure components, which is given by:
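A representative form consistent with the three constants named just below (shown as a sketch, since the exact expression implemented in the solver may differ in normalisation) is:

$$ p_{\mathrm{sat}} \;=\; p_{\mathrm{ref}}\,\exp\!\left( A - \frac{B}{T + C} \right) $$

with p_ref a reference pressure and T the particle temperature.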




where A is the Antoine Reference State Constant, B is the Antoine Enthalpic Coefficient and C is the Antoine Temperature Coefficient. The particle is boiling if the vapor pressure, psat, is greater than the ambient gas pressure, pambient. Figure 8 shows the Antoine equation curve extracted from the refrigerant fluid data.

To verify the difference between the gas and the liquid leak, an additional test has been done using a reduced mesh model with the HVAC only, in front floor distribution. The boundary conditions are listed in Table 2, where ACH means air changes per hour.

                       Parking mode @ 56 ˚C    Parking mode @ 15 ˚C
Mass flow [g/s]        vapour leak             4.02, liquid-vapour leak (quality 0.7 liq., 0.3 vap.)
Leak diameter [mm]     6.35                    6.35
Air mass flow [ACH]    2.5 @ 56 ˚C             2.5 @ 15 ˚C
Air distribution       Floor                   Floor
Simulation time [s]    32.26                   32.26

Table 2 - boundary conditions to compare/verify the vapour and liquid-vapour models

Figure 9 shows the difference between the gas and the liquid ejection:
• the gas leak is already more diffused than the liquid near the leak hole;
• the liquid jet has a greater penetration towards the ducts, and therefore towards the cabin, in spite of a lower mass flow.
This means that it is possible to have a similar amount of flammable refrigerant fluid in the passenger compartment for a vapor and a liquid leak with different leak rates, as long as the air mass flow and the air distribution are the same.

Leak rate:
• 0.09 g/s (hole 0.5 mm diameter) @ 56°C ambient
• 13.95 g/s (hole 6.35 mm diameter) @ 56°C ambient
• 0.02 g/s (hole 0.5 mm diameter) @ 15°C ambient
• 4.02 g/s (hole 6.35 mm diameter) @ 15°C ambient
Thermodynamic conditions (fluid in saturation conditions @ 56 and 15°C):
• 100% vapor @ 56°C ambient
• 70% liquid, 30% vapor @ 15°C ambient
Air exchange rate:
• 1.0 air changes per hour @ T ambient
• 2.5 air changes per hour @ T ambient
Air path: dashboard outlets; floor outlets
Passengers: driver only; 4 passengers
Number of simulations: 32

Table 3 - number of simulations and boundary conditions in the parking situation

Leak rate:
• 0.05 g/s (hole 0.5 mm diameter) @ 56°C ambient
• 7.61 g/s (hole 6.35 mm diameter) @ 56°C ambient
• 0.02 g/s (hole 0.5 mm diameter) @ 15°C ambient
• 3.48 g/s (hole 6.35 mm diameter) @ 15°C ambient
Thermodynamic conditions (fluid in saturation conditions @ 56 and 15°C):
• 100% vapor @ 56°C ambient
• 70% liquid, 30% vapor @ 15°C ambient
Air exchange rate:
• 2.5 air changes per hour @ T ambient
• 50 l/s @ 4°C
• 130 l/s @ 4°C
Air path: dashboard outlets; floor outlets; bi-level outlets
Passengers: driver only; 4 passengers
Number of simulations: 72

Table 4 - number of simulations and boundary conditions in the driving situation

Fig. 8 - Antoine equation curve for the fluid evaporation model

Fig. 9 - comparison between gas and liquid refrigerant leak in a simplified model

Configuration of the simulations
The refrigerant charge considered is equal to 450 grams for the selected vehicle (mean optimal charge for A- and B-segment Fiat vehicles). Simulation phases:
1. first phase (duration depending on the leak rate) => full refrigerant charge release;
2. second phase (additional 30 minutes) => refrigerant diffusion into the cabin.
The simulation conditions analyzed are summarised in Tables 3 and 4.

Results analysis
The ANSYS CFD code allows the transport and the diffusion of a gas into the air of which it is a part to be simulated, something impossible to capture by experimental tests because of the small number of detector sensors and their intrusive geometry. At the same time it is possible to simulate the evaporation and the transport, with diffusion, of a liquid within the air.
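The duration of the first phase follows directly from the charge and the leak rate; as a quick check (illustrative script using the 6.35 mm leak rates reported above, not part of the original toolchain):

charge_g = 450.0  # refrigerant charge for the selected vehicle [g]
leak_rates_g_per_s = {
    "driving @ 56C": 7.61,
    "parking @ 56C": 13.95,
    "driving @ 15C": 3.48,
    "parking @ 15C": 4.02,
}
for case, rate in leak_rates_g_per_s.items():
    print(f"{case}: full charge released in {charge_g / rate:.2f} s")

These values reproduce the discharge times quoted in the boundary conditions below (about 59, 32, 129 and 112 seconds, respectively).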



Design of experiments
The total number of simulations planned is 104. After the first simulations it became clear that the configurations with a 0.5 mm leak hole are not dangerous: there is no risk of reaching the refrigerant flammable concentration in the cabin compartment (both in driving and in parking conditions, at ambient temperatures of 15 and 56 °C). This is the effect of the very low refrigerant mass flow; the flammable volume concentration therefore remains very close to the evaporator (inside the HVAC). Due to these results, some numerical experiments in these conditions have not been performed. The configurations with a 6.35 mm leak hole are potentially dangerous, depending on whether the HVAC blower is switched on or off:
• blower on: there is no risk, both with the minimum (50 l/s) and with the maximum air mass flow produced;
• blower off: with the air moving in natural convection mode (1 ACH and 2.5 ACH) there is an amount of refrigerant fluid at flammable concentration in the cabin.

Results
The main results are presented in the next sections. The simulation results are very similar for one and four passengers, therefore only the four-passenger configurations and only the significant results are shown. For the simulations with the 0.5 mm leak hole only one configuration is described, because the other ones give the same non-dangerous results. The results are presented as the refrigerant volume fraction distribution between the lower and upper flammability levels of the reference refrigerant fluid.

Driving mode @ 56 ˚C
Boundary conditions:
• refrigerant mass flow: 7.61 g/s, vapor leak
• leak diameter: 6.35 mm
• air mass flow: 2.5 ACH @ 56 ˚C, 50 l/s @ 4 ˚C, 130 l/s @ 4 ˚C
• air distribution: bi-level, vent, floor
• 4 passengers
• time to discharge: 59.13 s


In any configuration with a 0.5 mm leak hole (evaporator corrosion) the refrigerant flammable concentration remains very close to the evaporator, thus it is not dangerous for the occupants, even with the air in natural convection mode.

Driving mode @ 15 ˚C
Boundary conditions:
• refrigerant mass flow: 3.48 g/s, liquid-vapor leak (quality 0.7 liq., 0.3 vap.)
• leak diameter: 6.35 mm
• air mass flow: 2.5 ACH @ 15 ˚C, 50 l/s @ 4 ˚C, 130 l/s @ 4 ˚C
• air distribution: bi-level, vent, floor
• 4 passengers
• time to discharge: 129.3 s



Parking mode @ 56 ˚C
Boundary conditions:
• refrigerant mass flow: 13.95 g/s, vapor leak
• leak diameter: 6.35 mm
• air mass flow: 1.0 ACH @ 56 ˚C, 2.5 ACH @ 56 ˚C
• air distribution: vent, floor
• 4 passengers
• time to discharge: 32.26 s


Conclusions
The ANSYS CFD code allows the transport and the diffusion of a gas within the air of which it is a part to be simulated, something impossible to capture by experimental tests because of the small number of detector sensors and their intrusive geometry. With the code it is possible to identify the volume location of the flammable fluid concentration and to propose corrective actions. The analysis, within the considered perimeter and test cases, allows us to state that, if the flammable fluid is adopted as refrigerant in a conventional direct-expansion automotive air conditioning system:
Leak due to corrosion
a) no zones form in the passenger compartment where the refrigerant concentration is within the flammability region.
Leak due to an evaporator crash
b) no zones form in the passenger compartment where the refrigerant concentration is within the flammability region when the blower is active (air flow higher than or equal to 50 l/s);
c) small volumes (front lower part) do form in the passenger compartment where the refrigerant concentration is within the flammability region if the blower is off and the air moves in natural convection mode only (ACH lower than or equal to 2.5).

Parking mode @ 15 ˚C
Boundary conditions:
• refrigerant mass flow: 4.02 g/s, liquid-vapor leak (quality 0.7 liq., 0.3 vap.)
• leak diameter: 6.35 mm
• air mass flow: 1.0 ACH @ 15 ˚C, 2.5 ACH @ 15 ˚C
• air distribution: vent, floor
• 4 passengers
• time to discharge: 111.94 s

Further comments on case c):
• if the vehicle is parked with the A/C off, it is not probable that the evaporator develops a dramatic fault such as a welding problem;
• if the evaporator crash is the consequence of a vehicle accident, the engine compartment will most likely be seriously damaged, and the fluid will therefore discharge completely into the ambient due to the collapse of the refrigerant pipes;
• if the crash is the consequence of an accident, there will likely be additional ventilation due to the cabin collapse.
Therefore case c), the only critical one, is not easy to encounter in a real application.

Fabrizio Mattiello
Centro Ricerche Fiat s.c.p.a. - Thermo-fluid-dynamics and Air Conditioning Senior Specialist, Orbassano (Turin), Italy




Numerical Optimization of the Exhaust Flow of a High-Performance Engine An automotive catalytic converter is a crucial yet sensitive component of a vehicle’s exhaust system; its impact on the overall design is considerable. In order to achieve high efficiency and durability, good uniformity of the flow inside the monolith and low pressure gradients are indispensable, both depend heavily on the engineering design of the exhaust pipes. Many of today’s high-performance engines rely on exhaust systems with a minimum pressure drop. A joint approach based on 3D CFD and optimization tools can help engineers to significantly improve the design of the component and to meet different objectives: high engine performance, high emission reduction and long durability of the catalytic converter. This article describes how the embedded CAD instruments of the CFD code Star-CCM+ were linked with and led by modeFRONTIER to modify the geometry of an exhaust system. Using this approach, the above mentioned objectives could be achieved. The numerical methodology was developed and validated on a real exhaust system and delivered both: increased catalytic converter efficiency and decreased pressure drop.

Fig. 1
Fig. 2
Fig. 3

Aim of the work
The optimal design of an exhaust system is based on a trade-off between the efficiency and the durability of the catalytic converter and the efficiency of the exhaust system; the latter is required in order to minimize the engine pumping loss. Over the past few years, a correlation between numerical 3D CFD results and experimental results has been developed which allows the exhaust system of an engine to be properly designed. This approach is based on CFD simulations and ensures the durability of the catalytic converter. In this context, we should note that the optimization of the exhaust flow requires a large number of simulations in order to determine a good trade-off between the different project targets. Engineers typically spend considerable time modifying the geometry and setting up different calculations. Therefore, the aim of this work was to develop an automated process in which modeFRONTIER would drive all the operations required for the geometry modeling and the set-up of the CFD. The process finally allowed the design of the component to be improved and all the objectives to be fulfilled.





Geometry modeling
Exploiting the Star-CCM+ embedded CAD features, the connecting pipe between the exhaust manifold and the monolith was modeled directly inside the CFD code, without the use of any external CAD software, by lofting three different sketches, as illustrated in fig. 1. It then became possible to modify the connecting pipe simply by adjusting the shape of the sketches and the position of some control points (represented in yellow in fig. 2), or the position of the plane onto which the sketches have been drawn.

Outputs of the calculation
The work was performed for an 8-cylinder engine. The right exhaust system comprised four primary pipes, hence four different simulations were needed for each design. In order to evaluate the accuracy of each design, three different parameters were recorded for each simulation:
• DeltaP, which represents the pressure drop between the inlet and the outlet section; it has to be as low as possible in order to minimize the engine pumping loss;
• Gamma Factor, which is a uniformity index. It was recorded on plane C25, and it has to be higher than 0.85 in order to ensure a proper exploitation of the catalyst inside the honeycomb structure;
• Pgrad, which is the radial pressure gradient. It was recorded on plane C1, and it has to be lower than 20 mbar/mm in order to ensure the integrity of the honeycomb structure.

Optimization methodology
Fig. 4 shows the modeFRONTIER layout. As can be seen, the displacements of the sketches and of the control points represent the input variables of the optimization process. The objectives of the calculation are: to minimize the pressure drop for each primary pipe, to minimize the mean radial pressure gradient and to maximize the mean uniformity index. Several constraints have been adopted to discard the designs with a radial pressure gradient higher than 20 mbar/mm and with a Gamma factor lower than 0.85.

In order to limit computational costs, starting from a random DoE of about 20 designs, the response surfaces of the objectives have been generated as functions of the input variables using an RBF interpolation algorithm. This allowed a virtual optimization to be performed by using the MOGA-II genetic algorithm. In a final step, the best designs were validated with real CFD simulations.

Fig. 4
Fig. 5
Fig. 6

Fig. 7
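For reference, a commonly used definition of such a flow uniformity index on a sampling plane, given here as a generic sketch and not necessarily the exact formulation adopted in Star-CCM+, is the following:

import numpy as np

def uniformity_index(velocities, areas):
    # gamma = 1 - sum(|v_i - v_mean| * A_i) / (2 * v_mean * sum(A_i))
    # velocities: face-normal velocity per face of the plane; areas: face areas
    v = np.asarray(velocities, float)
    a = np.asarray(areas, float)
    v_mean = np.sum(v * a) / np.sum(a)   # area-weighted mean velocity
    return 1.0 - np.sum(np.abs(v - v_mean) * a) / (2.0 * v_mean * np.sum(a))

# A perfectly uniform velocity field gives gamma = 1.0
print(uniformity_index([10.0, 10.0, 10.0], [1.0, 1.0, 1.0]))

A value close to 1 therefore indicates that the flow reaches the monolith evenly, which is what the 0.85 constraint on the Gamma factor enforces.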



In fig. 5, the results of the optimization process in terms of Gamma, Pgrad and DeltaP are shown. Fig. 6 presents all the suitable real designs, out of which the solution with the lowest mean pressure drop was chosen.

Best design: Connecting pipe geometry
The geometry of the connecting pipe between the exhaust manifold and the monolith was modified considerably by the optimization. The resulting geometry is clearly visible in fig. 7; it looks as if it had been created by a human operator.

Best design: Engine efficiency
The re-shaping of the connecting pipe assured a significant decrease of the pressure drop for each primary pipe flow. In fact, compared to the original layout, the mean pressure drop of the final design was about -14.72% lower, which should lead to a decrease of the engine pumping loss and enhanced engine performance.

Delta P [bar]      C1        C2        C3        C4
Baseline design    1497      1293      1331      1625
Final design       1368      1103      1150      1268
Variance           -8.62%    -14.69%   -13.60%   -21.97%

Best design: Catalytic converter efficiency and durability
The best design also greatly improved the efficiency and the durability of the monolith, both important factors for a high-performance engine. In particular, each primary pipe flow proved to be capable of satisfying both the gamma factor constraint and the radial pressure gradient constraint. Both are mandatory to obtain the catalytic converter manufacturer's approval. The recorded decrease of the mean radial pressure gradient was about 54.84%, while the mean gamma factor increased by about 11.98%.

Gamma factor       C1        C2        C3        C4
Baseline design    0.80      0.75      0.77      0.79
Final design       0.87      0.87      0.88      0.86
Variance           +8.75%    +16.00%   +14.29%   +8.86%

Pgrad [mbar/mm]    C1        C2        C3        C4
Baseline design    38.45     32.07     22.35     26.63
Final design       11.93     17.38     12.00     11.11
Variance           -68.97%   -45.81%   -46.31%   -58.28%

Fig. 8 - Current geometry / Optimized geometry

Conclusion
Using modeFRONTIER it became possible for the authors to develop a workflow into which all geometry modeling operations and CFD simulations could be integrated and automated. This approach also allowed the response surface method in modeFRONTIER to be exploited, to perform a virtual optimization with the MOGA-II genetic algorithm. In this way, a suitable design for the connecting pipe could be defined by carrying out only a low number of real CFD simulations. The developed methodology proved to be very effective in reducing computational costs and operator time. The best design delivered an increase of the mean gamma factor of about +11.98%, a decrease of the mean radial pressure gradient of circa -54.84% and a reduction of the mean pressure drop of approximately -14.72%.

Prof. Ing. Gian Marco Bianchi, Ing. Marco Costa, Ing. Ernesto Ravenna
University of Bologna, Italy

For more information:
Francesco Franchini, EnginSoft
info@enginsoft.it




Multi-Objective optimization of steel case hardening Steel case hardening is a thermo-chemical process largely employed in the production of machine components to solve mainly wear and fatigue damage in materials. The process is strongly influenced by many different variables, such as steel composition, carbon and nitrogen potential, temperature, time and quenching media. In the present study, the influence of these parameters on the carburizing and nitriding quality and efficiency was evaluated. The aim was to streamline the process by numerical-experimental analysis to define optimal conditions for the product development work. The optimization software used was modeFRONTIER, with which a set of input parameters was defined and evaluated on the basis of an optimization algorithm that was carefully chosen for the multi-objective analysis. Introduction For the deep analysis of industrial processes which depend on different parameters, the use of computational multiobjective optimization tools is indispensable. modeFRONTIER is a multidisciplinary and multi-objective software written to allow easy coupling to any computer aided engineering (CAE) tool. modeFRONTIER refers to the so-called “Pareto Frontier”, the ideal limit beyond which every further implementation compromises the system, in other words, the Pareto Frontier represents the set of best possible solutions. The complex algorithms in modeFRONTIER are able to spot the optimal results, even if they are conflicting with each other or belonging to different fields. Optimization achieves this goal by integrating multiple calculation tools and by providing effective post-processing tools. The more accurate the analysis, the more complex the later design process can be. The modeFRONTIER platform allows the organization of a

wide range of software and easy management of the entire product development process. modeFRONTIER's optimization algorithms identify the solutions which lie on the trade-off curve, known as the Pareto Frontier: none of them can be improved without prejudicing another, and together they represent the best achievable compromises.
Generally speaking, optimization can be either single-objective or multi-objective. An attempt to optimize a design or system with only one objective usually entails the use of gradient methods, where the algorithms search for either the minimum or the maximum of an objective function, depending on the goal. One way of handling multi-objective optimization is to incorporate all the objectives (suitably weighted) into a single function, thereby reducing the problem to a single-objective optimization again. However, the disadvantage of this approach is that the weights must be provided a priori, which can influence the solution to a large degree. Moreover, if the goals are very different in substance (for example: cost and efficiency), it can be difficult, or even meaningless, to try to produce a single all-inclusive objective function. True multi-objective optimization techniques overcome these problems by keeping the objectives separate during the optimization process. We should keep in mind that in cases with opposing objectives (e.g. when we try to minimize a beam's weight and its deformation under load), there will often be no single optimum, because any solution will be just a compromise.
(Project co-funded by the EU – ERDF under the Apulia Region Operational Programme 2007-2013, Axis I, Line 1.1, Action 1.1.2, "Aiuti agli Investimenti in Ricerca per le PMI".)
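To make the two approaches concrete, the sketch below (illustrative only, not modeFRONTIER code, and with invented weight/deflection pairs) contrasts a weighted-sum scalarization with a plain Pareto-dominance filter for two objectives that are both minimized:

```python
# Illustrative sketch: weighted-sum scalarization vs. a Pareto-dominance filter
# for two objectives to be minimized (e.g. a beam's weight and its deflection).

def weighted_sum(objectives, weights):
    """Collapse several objectives into one value; the result depends on the
    a-priori choice of weights, which is the drawback noted in the text."""
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    """True if design 'a' is at least as good as 'b' in every objective and
    strictly better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated designs: the discrete Pareto Frontier."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Hypothetical (weight [kg], deflection [mm]) pairs for a few candidate beams.
designs = [(12.0, 3.1), (9.5, 4.8), (15.0, 2.2), (11.0, 3.0), (9.5, 5.5)]
print(pareto_front(designs))                                     # the trade-off set
print(min(designs, key=lambda d: weighted_sum(d, (0.7, 0.3))))   # one weighted pick
```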


Fig. 1 - workflow description of carburizing and nitriding

The role of the optimization algorithm here is to identify the solutions which lie on the trade-off curve, the Pareto Frontier; all of them share the characteristic that none of the objectives can be improved without prejudicing another. High performance computing nowadays provides us with accurate and reliable virtual environments to explore several possible configurations. In real applications, however, it is not always possible to reduce the complexity of the problem and to obtain a model that can be solved quickly. Usually, every single simulation can take hours or even days. The amount of time needed to run a single analysis does not allow us to run more than a few simulations, so other, smarter approaches are needed. These factors led to a Design of Experiments (DOE) technique to perform a reduced number of calculations. Subsequently, these well-distributed results can be used to create an interpolating surface. The surface represents a meta-model of the original problem and can be used to perform the optimization without computing any further analyses.
The use of mathematical and statistical tools to approximate, analyze and simulate complex real-world systems is widely applied in many scientific domains. These kinds of interpolation and regression methodologies are now becoming common in engineering as well, where they are also known as Response Surface Methods (RSMs). Engineers are very interested in RSMs for their computational work because they offer a surrogate model with a second generation of improvements in speed and accuracy in computer aided engineering. Once the data has been obtained, either from an optimization or a DOE, or from data import, the user can turn to the extensive post-processing features in modeFRONTIER to analyze the results. The software offers a wide-ranging toolbox that allows the user to perform sophisticated statistical analysis and data visualization.
Design of Experiments (DOE) is a methodology that maximizes the knowledge gained from experimental data. It provides a strong tool to design and analyze experiments, eliminates redundant observations and reduces the time and resources needed to run experiments. DOE is generally used in two ways. Firstly, it is extremely important in experimental settings to identify which input variables most affect the experiment being run. Since it is often not feasible in a multi-variable problem to test all combinations of input parameters, DOE techniques allow the user to extract as much information as possible from a limited number of test runs. However, if the engineer's aim is to optimize a design, he or she will need to provide the optimization algorithm with an initial population of designs from which the algorithm can "learn"; in such a setting, the DOE is used to provide the initial data points. Secondly, exploration DOEs are useful for getting information about the problem and about the design space. They can serve as the starting point for a subsequent optimization process, as a database for response surface training, or for verifying the response sensitivity of a candidate solution. Such an analysis has been performed in order to define the behavior of different industrial steels after nitriding and carburizing. These treatments are fundamental in industrial applications and, in many cases, require thorough control of the processing parameters in order to achieve the best mechanical and microstructural properties of the components.
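As a minimal illustration of a DOE table, the sketch below builds a full-factorial design over three carburizing parameters; the parameter names and levels are assumptions for the example, not the levels used in the study, and in practice a reduced DOE would be preferred when runs are expensive:

```python
# Minimal DOE sketch: a full-factorial design over three hypothetical
# carburizing parameters (assumed levels, for illustration only).
from itertools import product

carbon_potential_pct = [0.8, 1.0, 1.2]      # assumed levels
temperature_C        = [880, 920, 960]      # assumed levels
time_h               = [2, 4, 6]            # assumed levels

doe_table = list(product(carbon_potential_pct, temperature_C, time_h))
print(f"{len(doe_table)} runs")             # 27 runs for a 3^3 full factorial
for run_id, (cp, T, t) in enumerate(doe_table, start=1):
    print(f"run {run_id:02d}: Cp={cp}%  T={T}°C  t={t}h")
```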

Experimental and numerical procedure
From a database of experimental results, computational models (so-called virtual n-dimensional surfaces) have been developed. The models are able to reproduce the actual process in the best possible way, and the analysis has made it possible to optimize the output variables. The applied method is Response Surface Methodology (RSM). RSM has been used for the creation of the meta-models that simulate the actual process, using physical laws with appropriate coefficients to be calibrated. The RSM method consists of creating n-dimensional surfaces that are "trained" on the actual inputs and outputs. The surfaces are built on large experimental data sets and are able to provide output values that reflect the real carburizing and nitriding processes. In the validation phase, some experimental conditions were left out and the RSM was "trained" only on the remaining input conditions; the numerically calculated output was then compared with the experimental output, measuring the Δ error. The training and validation phase represents the Design of Experiments (DOE).
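The train-and-validate idea behind the RSM meta-models can be sketched as follows; the inputs and hardness values below are invented for illustration and do not come from the study:

```python
# Sketch of the RSM train/validate idea: fit a quadratic response surface on
# most of the (hypothetical) experimental points, predict a held-out point,
# and measure the delta error.
import numpy as np

def quadratic_features(X):
    """[1, x1, x2, x1^2, x2^2, x1*x2] for a two-input response surface."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Hypothetical data: inputs = (carbon potential, temperature), output = surface hardness.
X = np.array([[0.8, 880], [0.8, 920], [0.8, 960],
              [1.0, 880], [1.0, 920], [1.0, 960],
              [1.2, 880], [1.2, 920], [1.2, 960]], dtype=float)
y = np.array([690., 670., 650., 730., 710., 690., 760., 745., 725.])  # invented values

train = np.arange(len(X)) != 4          # hold out one point for validation
coef, *_ = np.linalg.lstsq(quadratic_features(X[train]), y[train], rcond=None)

y_pred = quadratic_features(X[~train]) @ coef
delta = y_pred - y[~train]
print(f"predicted {y_pred[0]:.1f}, measured {y[~train][0]:.1f}, delta error {delta[0]:+.1f}")
```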


Fig. 3 - input-output matrixes for the carburizing and nitriding process.

that are "trained" based on the actual input and output. The surfaces are predicated on large experimental data sets and are able to provide the output numbers that reflect the real process of carburizing and nitriding. In the validation phase, they have been included in the RSM, “trained” only according to the remaining input conditions. The numerically calculated output has been compared with the experimental output, measuring the Δ error. The phase of training and validation represents the Design of Experiment (DOE). The carburizing and nitriding processes that have been evaluated by the analysis with modeFRONTIER are summarized in the workflow shown in Fig. 1. The workflow is divided into data flow (solid lines) and logic flow (dashed lines). Their common node is the computer node in which the physical and mathematical functions representing the carburizing and nitriding processes have been introduced. In the data flow, all input parameters are included and optimized in the numerical simulations and in the corresponding output. Results and discussion When we analyze the carburizing output, we can observe that the microhardness and carbon concentration in steels are strongly dependent on the carbon potential. At the same time, they depend differently on the carburizing temperature as a function of the steel composition and on the carburizing time (Figure 2a). For the nitriding process, we can observe that the microhardness in steels is strongly dependent on the nitriding temperature; however, it dependents differently on the nitrogen potential as a function of the steel composition and of the nitriding time (Figure 2b). The input-output matrix is reported in Fig. 3. It shows the weight of each single input parameter on the output results. It is clear how the carbon concentration on the surface (C[0]) is strongly dependent on the carbon potential, then on the carburizing time and finally on the carburizing temperature. The same behavior can be observed for the carbon concentration at 0.5 mm from the surface (C[3]), with a stronger dependence on the carburizing time. The surface hardness (H[0]) is strongly and directly dependent on the carbon potential while it is inversely

The surface hardness (H[0]) is strongly and directly dependent on the carbon potential, while it is inversely proportional to the carburizing temperature. The hardness at 0.5 mm from the surface (H[3]) is also directly related to the carburizing time. Surface residual stresses (szz[0]) are strongly dependent on the quenching temperature, while residual stresses at 0.5 mm (szz[3]) also depend on the carburizing time and temperature. As far as the hardening of the nitriding steels is concerned, the nitrogen concentration on the surface (N[0]) is strongly dependent on the nitrogen potential and time. The nitrogen concentration at 0.2 mm from the surface (N[5]) is directly dependent on the heat-treating temperature before nitriding and inversely proportional to the nitriding temperature and nitriding potential. The surface hardness (H[0]) is strongly dependent on the nitriding temperature, then on the nitriding time, then on the nitriding potential and the heat-treating temperature. The microhardness at 0.2 mm from the surface (H[5]) depends on the same parameters with almost the same weight. The residual stresses on the surface (szz[0]) depend on the nitrogen potential, while the residual stresses at 0.2 mm from the surface (szz[5]) depend on the nitriding temperature and the heat-treating temperature.
Conclusions
The case hardening of different steels has been studied in thorough experimental investigations. We have evaluated the different input parameters that affect the treatments, and we have analyzed the corresponding mechanical and microstructural results. The data obtained have been used to build a database which was then analyzed with modeFRONTIER. In this way, it has become possible to identify the optimal processing windows for the different steels and to evaluate the weight of the different input processing parameters on the corresponding mechanical and microstructural properties.
For more information:
Vito Primavera, EnginSoft - info@enginsoft.it
Pasquale Cavaliere, Università del Salento - pasquale.cavaliere@unisalento.it


Research of the Maximum Energy Efficiency for a Three Phase Induction Motor by means of Slots Geometrical Optimization
Rossi Group from Modena is one of Europe's largest industrial groups for the production and sale of gear reducers, gear motors, electronic speed variators and electric brake motors. A few years ago, Rossi Group decided to rely on numerical simulation in order to increase the value of its electric motors. Their simulation work focused on achieving maximum energy efficiency for a three-phase induction motor by geometrical optimization of the rotor and stator slots. RMxprt, one of the former Ansoft software tools, which has recently been incorporated into the ANSYS Workbench interface, was used for the optimization work at Rossi.

Fig. 1 - Efficiency classes for 50 Hz 4-pole motors (IEC 60034-30:2008)

Background
The topic of energy efficiency is becoming more and more crucial for the design of electric motors, particularly when we consider that two thirds of the total energy consumed by industry drives electrical motors. Until recently, higher efficiency has mainly been achieved either by increasing the motor's overall dimensions or by changing the materials. Owing to the standard manufacturing processes applied by lamination makers, little time or effort has so far been invested in optimizing the geometry of the slots.

Table 1 - Efficiency classes (IEC 60034-30)
IE3   Premium Efficiency
IE2   High Efficiency
IE1   Standard Efficiency

Recently, Rossi Group has decided to place more emphasis on the geometrical optimization of the slots with the aim


Fig. 2 - CAD geometry of the motor (left) and its representation in RMxprt (right).



Fig 3 - Rotor and stator slots, parameters are depicted.

to enhance the efficiency of its motors without increasing their overall dimensions or changing the materials used.
Efficiency classes
Electric motor technologies have advanced continuously over the last two decades. Still today, AC induction motors are the mainstream products sold in most industrial markets. As far as energy efficiency is concerned, a global standardization has only recently been established. The new IEC classification harmonizes the regional and national standards that have been in use so far. The recently introduced standard IEC 60034-30 defines energy efficiency classes for single-speed, three-phase cage-induction motors with 2, 4 or 6 poles, 0.75 to 375 kW, below 1000 V, 50 and 60 Hz (Table 1). Figure 1 reports the efficiency versus rated power for each of the three efficiency classes, with regard to the three-phase induction motor types used in the optimization analysis described in this article.
RMxprt characteristics
RMxprt offers a machine-specific, template-based interface that allows users to easily enter design parameters and calculate critical performance data, such as torque versus speed, power loss, flux in the air gap, and efficiency. The interface also allows users to determine lamination and winding schemes. Performance data is calculated using a combination of classical electrical machine theory and a magnetic circuit

approach. The software uses an improved Schwarz-Christoffel transformation to compute the field distribution in both uniform and non-uniform air gaps, and applies Gaussian quadrature to treat the equivalent surface current of permanent magnets. The leakage field and the corresponding inductance are derived from an analytical field computation. Using a model based on distributed parameters, RMxprt takes skin effects into account and is able to calculate 3D end effects. Thanks to a very efficient relaxation-based iterative technique, the calculation of the saturation coefficients in the equivalent circuit is fast and accurate. This allows saturation effects to be taken into account efficiently for all supported types of electrical machines. The ability to build completely parametric models and the presence of an embedded optimizer allow RMxprt to speed up the design and optimization of rotating electrical machines.
Motor data
The motor data are entered in the RMxprt pre-processing by means of dedicated sheets. The model is completely described in the RMxprt interface: geometrical data, winding arrangement, rating plate and performance data, along with the iron core material characteristics (B-H and B-P curves).

Fig. 4 - Rotor and stator geometry of the original model (left) and the optimized model (right).

Table 2 - Comparison between the motor data sheet, the RMxprt model and the optimized model

                     Motor data sheet   RMxprt model   Optimized model
Stator Ohmic Loss    232.3 W            222.5 W        167.4 W
Rotor Ohmic Loss     179.4 W            187.7 W        137.9 W
Iron Core Loss       95.0 W             89.7 W         117.9 W
Frictional Loss      18.0 W             18.0 W         18.4 W
Stray Loss           59.8 W             60 W           60 W
Total Loss           584.5 W            577.9 W        501.6 W
Input Power          4584.5 W           4577.9 W       4501.6 W
Power Factor         0.82               0.85           0.83
Torque               26.62 Nm           26.65 Nm       26.34 Nm
Efficiency           87.2%              87.4%          88.85%
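As a quick check, the efficiencies in Table 2 follow directly from the loss breakdown: the rated output power is the input power minus the total losses.

```python
# Efficiency recomputed from the Input Power and Total Loss rows of Table 2.
models = {
    "Motor data sheet": (4584.5, 584.5),
    "RMxprt model":     (4577.9, 577.9),
    "Optimized model":  (4501.6, 501.6),
}

for name, (input_W, losses_W) in models.items():
    output_W = input_W - losses_W                 # 4000 W rated output in all three cases
    efficiency_pct = 100.0 * output_W / input_W
    print(f"{name:16s}: {efficiency_pct:.2f}%")   # about 87.25%, 87.38%, 88.86%
```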

Figure 2 illustrates the CAD geometry of the motor and its representation in RMxprt. The performance evaluated by RMxprt matches the test bench results: in particular, RMxprt estimates an efficiency of 87.4%, while at the test bench the motor reaches an efficiency of 87.2%. The latter value does not allow the Rossi Group designers to register the model in the premium efficiency class, and consequently the motor belongs to the IE2 class. The validated RMxprt model is used to run an optimization analysis in order to increase the efficiency of the motor. The optimization analysis is possible because the model built in RMxprt is fully parametric; it is performed with Optimetrics, the optimization tool embedded in RMxprt. Figure 3 shows the rotor and stator geometry; the geometry parameters managed by the optimization analysis are reported as well.
Optimization analysis
Optimetrics steers the optimization process by means of a genetic algorithm, choosing the parameter values with the aim of matching the output objectives. The optimization process is structured in the following manner:

INPUT
1. Geometrical parameters of the stator and rotor slots
2. Wire diameter
3. Conductors per slot
OUTPUT
1. Efficiency parameter
2. Slot Fill Factor parameter
CONSTRAINT
1. Parallel stator and rotor teeth
OBJECTIVE
1. Maximization of the efficiency
2. Conservation of the SlotFillFactor parameter value

The parameters used are:
• 11 slot geometric dimension parameters (these parameters are made dependent in order to achieve a parallel-tooth configuration in the rotor and stator)
• winding arrangement parameters: conductors per slot and wire strand diameter.
To optimize the system efficiently, the defined objectives are grouped by means of weights into a single-objective cost function, which is subsequently minimized.
Optimization results
More than 2000 designs have been evaluated. The cost function is minimized at Id 1627. Figure 4 shows a comparison of the original and the optimized geometry. In Table 2, the pertinent results of the original and optimized models are compared with the data sheet characteristics. The geometrical optimization results in a better exploitation of the lamination and in larger stator/rotor slots able to accept more copper/aluminum, thus significantly reducing the losses of the active parts of the motor. The optimized model reaches an energy efficiency of 88.85%, which allows the motor to be registered in the IE3 premium efficiency class.
Conclusions
• RMxprt provided the necessary tools to build a model that completely describes the performance of the motor under optimization, with minor errors.
• The optimization analysis achieved the goal of improving the energy efficiency class of the motor from IE2 to IE3.
• Rossi Group, the manufacturer of the motor, is now preparing the prototype with the rotor and stator slot geometry evaluated by RMxprt.

The prototype is currently under construction.

For more information: Emiliano D’Alessandro, EnginSoft info@enginsoft.it


Structural CAE Applications in the Household Appliance Sector
Particularly in recent years, the household appliance sector can be considered strongly innovation-oriented, driven by ever increasing competition, the importance of economic factors, and the growing attention to energy saving and environmental impact. The average end customer is becoming more aware and demanding, paying greater attention to aspects such as price, ease of use, energy consumption and aesthetics. The current economic situation has also affected consumer spending habits, leading people to weigh purchases of durable goods, which include household appliances, more carefully. As far as energy and environmental aspects are concerned, regulatory constraints are also becoming increasingly important; in particular, the new European regulations impose more and more requirements on energy performance and on low-environmental-impact design (eco-design). In this context, CAE analyses, whether structural, fluid dynamic or optimization-oriented, make a decisive contribution to reaching the targets that manufacturers and the market demand. This is achieved through targeted analyses that verify, in a "virtual" way, various aspects of the production process and of the operation of the appliance: from structural strength against the design loads, combined with the need to reduce weight and material usage, to CFD studies of the influence of various parameters on system performance.

Fig. 1 - Prediction of the acoustic radiation of a washing machine
Fig. 2 - Multibody simulation of a transport homologation test of a washing machine

EnginSoft has contributed, and continues to contribute, decisively to the spread of CAE analyses in this sector, being a key partner of the major Italian and international manufacturers (including Merloni, Indesit, Candy and Electrolux). EnginSoft's activities range from the cold sector (refrigerators, freezers) to washing (washing machines, dryers, dishwashers) and cooking (hobs, built-in ovens). Considering, for example, the design of washing machines and washer-dryers, structural CAE applications range from kinematic and dynamic analyses of the entire structure (multi-body analyses), aimed at optimizing the supports and determining the forces transmitted between components, to strength verifications of cabinets and drums. As far as structural strength is concerned, a particularly important phase is transport. During transport, heavy appliances (washing machines, but also refrigerators) can suffer significant impacts; for this reason, new designs must pass standardized tests before obtaining approval. Impact simulation, preferably supported by experimental measurements, makes it possible to verify at the design stage the correct sizing of the structure and of the packaging, and to evaluate possible alternatives quickly.



Fig. 3 - Analysis of the injection molding process of a washing machine component

Washing machine drums can also be analyzed against dynamic and thermal loads, taking into account possible material nonlinearities (e.g. polymers). For polymer components (for example washing machine tubs), the manufacturing process phase (injection molding) is also decisive for the final performance. Process simulations are routinely carried out on these components in order to study the correct filling and cooling of the mold, the presence of possible structural weak points such as weld lines, post-molding deformations (which can compromise the correct assembly of the components) and residual stresses. Another field of interest is the prediction of the fatigue life of the structurally most critical components (for example the drum spiders).
As for the cold sector (refrigerators), in addition to the transport tests mentioned above, the usefulness of simulation shows in the study of in-service deformations, of the structural strength of the couplings, and in thermo-structural analyses. In particular, finite element thermal analysis can be aimed at optimizing the insulation thicknesses, with the goal of offering the user the maximum possible space while keeping the energy class of the refrigerator unchanged. Such analyses can be carried out by combining the ANSYS software with multidisciplinary, multi-objective optimization programs such as modeFRONTIER. EnginSoft also has simulation experience in the production process of refrigerator liners, obtained by thermoforming of polymers. These analyses capture the thickness distribution over the liner, in particular at the critical details, and structural analyses can then be performed on the liner taking into account the thicknesses obtained from the process analysis.

Fig. 4 - Thermo-structural analysis of a refrigerator cabinet
Fig. 5 - Optimization of the cooling system of a refrigerator

In the hot sector as well (ovens, hobs), thermo-mechanical analyses are of fundamental importance for investigating the temperatures and the deformation and stress states reached by the various components. Excessive temperatures can cause degradation of some materials. For hobs, high-temperature operation, combined with the need to optimize the material thicknesses employed, can lead to excessive thermally induced deformations of the hob itself. CAE analyses (nonlinear, if necessary, to account for large deformations) make it possible to analyze various configurations quickly and to apply the necessary corrections. Another field of analysis, still less widespread today but no less important (also because of growing regulatory requirements), is acoustic simulation, such as the prediction of the acoustic radiation of a washing machine under operating conditions. Starting from the evaluation of the vibrational response (dynamic analyses), it is possible to predict the sound levels and the emitted acoustic power.

Fig. 6 - Dynamic analysis of an oven

In summary, in an ever more demanding market, where reaching ever higher quality standards is essential alongside other requirements such as reducing time-to-market and saving energy and materials, the contribution of virtual prototyping becomes of fundamental importance. In this field, EnginSoft has for decades positioned itself as the partner of choice for companies innovating their design process through CAE technologies, and stands at the forefront in facing future challenges together with its customers.
For more information: Sergio Sarti, Maurizio Facchinetti, EnginSoft - info@enginsoft.it


EnginSoft interviews Massimo Nascimbeni from Sidel
Sidel is one of the world's leaders in solutions for packaging liquid foods including water, soft drinks, milk, beer and many other beverages, and is one of Tetra Laval's three industry divisions, along with Tetra Pak and DeLaval. The Sidel Group, consisting of more than 5,000 employees in 32 sites over five continents, designs, manufactures, assembles, supplies and sells complete packaging lines for liquid foods packaged in three main categories: glass bottles, plastic bottles and drink cans. Following a detailed assessment of customer needs, Sidel is able to design complete solutions covering all the steps of liquid food packaging, such as processing, bottle design and molding, blowing, filling and rinsing, washing and pasteurizing, up to wrapping, palletizing and logistics management. Research and development is fundamental for all these aspects and is the key to improving the quality of the final product as well as to increasing line productivity, which involves challenges such as increasing the production rate, sustainability and safety, or reducing costs and maintenance. Mr. Massimo Nascimbeni is based at Sidel's Parma site as a Simulation Engineer, focused on developing filling technologies.
What is the role of virtual prototyping in Sidel?
The first step to improve our products and our production lines is to gain knowledge of the physics involved. For years we have based this process on our field experience and on our testing facilities, but if you want to go deeper, to really understand certain phenomena involved for example in the filling process, and to be able to control and improve the process, you need simulation tools. Modeling your process and product gives you the ability to design and


manufacture in reduced time and at lower costs by speeding up the "experience process": this is a requirement for any company that wants to stay competitive in the near future.
Why have you decided to rely on EnginSoft support?
EnginSoft has been a reliable partner since the start-up phase, when you have to assess the costs and benefits of introducing simulation into your company and into your design and development process. In this critical phase you have to compare different simulation tools and decide between them, you have to build a new methodology, and you are asked to deliver concrete results that give evidence of the advantages of simulation.



EnginSoft's support has been significant thanks to the long and consistent experience EnginSoft has gained in supporting companies in several industries, not just in distributing software tools. The EnginSoft engineers and I have spent a lot of time discussing and working together on specific issues that were relevant to the company and that were also good test benches for building the simulation approach to our products. On top of this solid base, ANSYS and Flowmaster provide mature and robust software which has allowed us to meet our design objectives with tools that are integrated in our product development process.
What are the objectives of simulation in Sidel?
Simulation activities can be performed at different stages and with different degrees of detail. We apply simulation from the early phases of a project, when you deal with general aspects and do not need to go into detail, but you have to make important decisions about the right direction to take. Moreover, simulation is fundamental to study in detail, and to meet, the functional requirements of the single components that are the core of the filling machines and that affect the reliability and efficiency of the entire system. A hierarchical approach has been implemented coupling 3D (ANSYS) and 1D (Flowmaster) tools. ANSYS CFD is usually applied to characterize and optimize the behavior of parts and components of the filling machine. This detailed information is then transferred to global models developed in Flowmaster using a 1D approach. This, for example, gives fast answers about the behavior of the machine under different operating conditions or when one component is changed. In this way we are able to study and improve our products and processes at different levels, with a considerable reduction of physical prototypes and tests. Sidel, with EnginSoft's support, has been working on implementing an effective, efficient and robust design process based on simulation. At Sidel we think that improving the performance and the reliability of our filling systems is essential to stay one step ahead of competitors.
For more information: www.sidel.com


EnginSoft and ANSYS in the food & beverage industry
EnginSoft works with the world's leading players in the food & beverage industry. For these companies, food safety, process robustness and productivity are the most important factors in their research and development. Food safety for the consumer is the first priority. Bottles, packaging and machines have to be washed and, in some cases, also sterilized. Moreover, aseptic or non-oxidant conditions have to be assured for perishable food. All these issues can be addressed, and food safety can be guaranteed, with the support of simulation. Computational fluid dynamics can give insight into all these processes, to study for example the interaction of chemical species with packaging, machines and food. Thermal management is another relevant topic for many applications; it involves several disciplines, from electromagnetics to thermo-fluid dynamics. Process robustness is another key factor that the leading companies in these sectors, which use industrial lines to fill bottles and packaging with their products, have to guarantee to their customers. Each component, the whole line and the process itself have to be reliable and robust. Machines have to function in completely different operating conditions (from polar to equatorial regions), for diverse products (carbonated drinks, highly viscous liquids, sometimes containing solid parts), supervised by different staff. Here, simulation can help foresee the machines' behavior in normal or critical scenarios, thus reducing failures and maintenance time. Last but not least, productivity has to be pushed to the limit. Hence, every step has to be optimized in terms of efficiency, which means reducing time while keeping process quality at the maximum level. Performance involves mechanical, thermal, electromagnetic and fluid dynamic aspects. The ANSYS technologies and EnginSoft's experience in engineering simulation build a synergy to cover multidisciplinary applications and to give simulation its proper role, which is to support and drive the design process ahead of physical prototyping, thus reducing development time and costs.

For more information: Massimo Galbiati, EnginSoft info@enginsoft.it


Productivity Benefits of HP Workstations with NVIDIA® Maximus™ Technology

In today's competitive manufacturing environment, getting to market faster provides a huge financial advantage. Technology that can cut the long wait times that might otherwise force engineering to put critical decisions on hold can be an important investment in any successful design project. A typical example is the time spent waiting for results from a conventional workstation used for engineering computations, to find out whether a design can withstand the expected structural loads and heat: a common bottleneck of simulation-driven product design. Industry leaders HP and NVIDIA have joined to create a new class of workstations that deliver the highest levels of parallel productivity, letting design and engineering applications such as CAD modeling, photorealistic rendering and CAE simulation all run at the same time. Built for high-end visualization and computing, the HP Z820 workstation equipped with NVIDIA Maximus technology is powered by a combination of NVIDIA graphics processing units (GPUs): the Quadro GPU for visual interactivity and the Tesla GPU for massively parallel computations.

Today's conventional design or engineering workstation contains a multicore CPU and a professional GPU (such as NVIDIA's Quadro). In the HP Z820 Maximus system, however, the combined horsepower of dual processors, a Quadro GPU and a Tesla GPU is available to enable concurrent design and simulation. Previously, engineers could only perform rendering and simulation tasks one at a time, because these tasks tend to consume all the CPU cores available in a system and slow down the workstation. With a Maximus-class workstation, engineers can perform all these tasks in parallel, without suffering from a system slowdown or software performance degradation.
NVIDIA Maximus Features and Benefits for ANSYS Mechanical
In November 2010, ANSYS introduced support for CUDA with the release of ANSYS Mechanical 13, which included static and dynamic analysis using either the iterative PCG/JCG or the direct sparse solvers for SMP parallel. Later, with the release of ANSYS 14 in December 2011, performance enhancements were made along with an extension to distributed parallel. Parallel ANSYS Mechanical on a Maximus-configured workstation requires the addition of an ANSYS HPC Pack license to extend the base configuration of 2 cores to 8 cores; GPU use is included in the price. Once implemented, a typical ANSYS simulation can be accelerated by about 4x over the base 2-core use. Examples are provided below of several product design and engineering companies that have proven the benefits of HP Z800-series workstations configured with NVIDIA Maximus technology.

NVIDIA Maximus features and their benefits to ANSYS Mechanical:
• Quadro 6000 GPU: GPU for accelerated visualization tasks of pre- and post-processing for daytime use, and accelerated computing for overnight use.
• Tesla C2075 GPU: GPU for maximum performance of accelerated computing at all times, for static and dynamic analysis, PCG and sparse solvers, SMP and DANSYS.
• Single Unified Driver: intelligently allocates visualization and compute tasks to the proper GPU, to ensure sufficient GPU (and CPU) resource utilization.


Parametric Solutions: Maximus Provides Productivity Boosts and Cost Cuts
Parametric Solutions (PSi) provides engineering product development services, with its primary specialization in gas turbine applications for power generation and aviation design. Much of PSi's project work requires the use of ANSYS software for complex analysis and design: thermal,



structural, fluid dynamics, vibration, kinematic synthesis and optimization, and the coupling of these disciplines in fully integrated simulations. PSi works with large ANSYS simulations, such as the structural analysis of a turbine blade, that might contain up to 8 million degrees of freedom (DOF). Before Maximus, the solving requirements for a typical simulation were such that design and analysis could not be conducted on the same system. The ANSYS analyses alone would require use of the entire system for 8 to 12 hours or more, and days if engineers wanted to conduct interactive design on the system at the same time. NVIDIA's Maximus proved to be an extremely powerful technology for their operations, with a remarkable 2x boost in ANSYS Mechanical performance and increases in productivity for all their ANSYS interactive and compute-intensive processes. These gains were achieved while continuing to do other work, such as CAD modeling, simultaneously on a single Maximus-powered workstation.

ANSYS Mechanical 14 performance on V14sp-5 for the Intel Xeon E5-2687W / 3.1 GHz CPU and Tesla C2075 GPU
ANSYS Mechanical analyses of a shrouded blade and disk configuration (Images courtesy of Parametric Solutions)

Liquid Robotics: Revolutionizing Ocean Research with Maximus
Liquid Robotics' ocean research robot, the Wave Glider, is a solar- and wave-powered autonomous unmanned marine vehicle that offers a cost-effective way to gather ocean data for commercial and governmental applications. Critical ocean data can help manage the impact of climate change on fish populations, provide earthquake monitoring and tsunami warning, monitor water quality following an oil spill or natural disaster, forecast weather, and assess the placement of wind- or wave-powered energy projects, to name just a few major applications. The Wave Glider is designed to operate continuously, without intervention, for months or even a year at a time, and ANSYS software is used to simulate complex mechanical designs such as structural assemblies or the reduction of hydrodynamic drag.

The Liquid Robotics Wave Glider ocean robot (Images courtesy of Liquid Robotics)

In the past, doing simulation or rendering required the complete computational power of Liquid Robotics' systems, but HP workstations with NVIDIA Maximus have changed the way its engineers operate. A limited number of engineers are now able to do multiple tasks at once, which has transformed the engineering workflow. The design began with years of research requiring millions of dollars, but now, in just a few weeks, design changes can be made that incrementally increase performance and reduce costs.

Astrobotic Technology: Maximus Helps Ignite New Era of Moon Exploration
Astrobotic develops lunar landers, rovers, and other space robotic technology for lunar surface exploration and imaging. The potential of the moon to contain enough oxygen, rocket fuel and other materials to build and fuel what is needed to go to Mars and other planets is the motivation for Astrobotic's technology development. Robots could be used to set up a moon base, extract materials, and even assemble equipment. Remote robots working autonomously obviously require extreme precision in design and direction, and ANSYS Mechanical was deployed on HP Maximus-configured workstations.

Astrobotic's lunar lander and ANSYS Mechanical simulation (Images courtesy of Astrobotic Technology)

Before using HP and NVIDIA Maximus technology, each step of Astrobotic's engineering process, whether 3D design, analysis or rendering, completely consumed their systems. For ANSYS analysis, that also meant restricting models to about 500 K degrees of freedom (DOF) for reasonable turn-around times. With Maximus, the ANSYS models were refined with up to 3,000 K DOF to capture simulations more completely and in less time, all without interrupting other applications on the workstation. Contact HP or NVIDIA today to learn more about the benefits of Maximus for your ANSYS workflow.


A look at the main new features concerning the integration of the low-frequency ANSOFT products into the ANSYS Workbench 14 platform

Fig. 1 - Field (left) and circuit (right) coupling in ANSYS 14.

With the release of ANSYS 14, important new features are also introduced for the suite of low-frequency ANSOFT products, in particular Maxwell 15 and Simplorer 10. This article summarizes some of these new features regarding the integration of these technologies into the WB2 interface of ANSYS 14.
Field coupling and circuit coupling
Figure 1 shows the state of the art of the field coupling and system (circuit) coupling currently available in ANSYS 14. Regarding field coupling, Figure 1 highlights how Maxwell can exchange data with the traditional ANSYS technologies, both in the structural-mechanical domain and in CFD.


The type of data exchanged depends on the physics being analyzed. Depending on the field solution requested (magnetic or electric) and on the solver used (static, harmonic or transient), Maxwell computes both electromagnetic forces and dissipated power; these field solutions can be transferred to the ANSYS structural and thermal analyses respectively, for the static, harmonic and dynamic analysis environments alike. In the same way, the power losses can be exported to a Fluent CFD model. A sequential multiphysics analysis is thus carried out between Maxwell and the ANSYS simulation environments; the import/export of the data needed for the multiphysics analysis takes place entirely in the ANSYS Workbench interface and does not require dedicated scripts.
As far as system analysis is concerned, the ANSYS reference software is Simplorer. Simplorer can integrate the main ANSYS technologies into a circuit schematic, typically through two techniques: co-simulation and model order reduction. Co-simulation (co-operative simulation) is a simulation methodology that allows individual components to be simulated simultaneously by different software tools; this technique allows the two software packages to exchange their respective solutions in a collaborative and synchronized way.



The Model Order Reduction (MOR) technique is a discipline of system and control theory that studies the properties of dynamic systems in order to reduce their complexity while preserving their input/output behavior. Using this technique, lumped-parameter models extracted from finite element models built, among others, in ANSYS Mechanical, ANSYS Fluent and ANSYS Icepak can be transferred into the Simplorer simulation environment.

The remainder of this article examines the main new features concerning field and system coupling in ANSYS 14, in particular:
1) Bidirectional thermal coupling between Maxwell and Fluent.
2) Bidirectional structural coupling between Maxwell and ANSYS Mechanical.
3) Simplorer-Fluent co-simulation.

Bidirectional Maxwell-Fluent coupling
The thermal coupling already available between Maxwell 2D/3D and the ANSYS Mechanical simulation environment now becomes available for the Fluent technology as well. The Maxwell electromagnetic calculation provides as output the dissipated power per unit volume. Fluent reads the dissipated power as input and computes the temperatures as output of the CFD analysis. These temperatures can then be passed back to Maxwell for a new electromagnetic solution. For the new Maxwell solution to be meaningful, suitable temperature-dependent material properties must have been defined. The process ends when the difference between the temperatures computed in two successive iterations is negligible. Figure 2 shows the scheme that implements this analysis in ANSYS Workbench.

Fig. 2 - Bidirectional thermal coupling between Maxwell and Fluent.

Leaving aside the structural part, discussed below, Figure 2 shows how a sequential multiphysics analysis is set up between the Maxwell and Fluent simulation environments. The models are prepared separately in the two environments, so the node-element models used in Maxwell and in Fluent are different. The load transfer is set up manually in the Workbench interface and is carried out through interpolation.
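The iterative two-way thermal exchange described above can be pictured with the conceptual sketch below; solve_maxwell_losses and solve_fluent_temperatures are hypothetical placeholders for the two solvers (in Workbench the data exchange is handled by the platform itself), and the demo surrogates at the end are invented:

```python
# Conceptual sketch of the two-way Maxwell-Fluent thermal iteration described
# above; the solver functions are hypothetical placeholders, not tool APIs.

def coupled_thermal_loop(solve_maxwell_losses, solve_fluent_temperatures,
                         initial_temperature_C=20.0, tol_C=0.1, max_iter=20):
    temperature = initial_temperature_C
    for iteration in range(1, max_iter + 1):
        # "Maxwell": electromagnetic solution with temperature-dependent materials
        loss_density = solve_maxwell_losses(temperature)
        # "Fluent": CFD solution driven by the dissipated power
        new_temperature = solve_fluent_temperatures(loss_density)
        if abs(new_temperature - temperature) < tol_C:   # negligible change: stop
            return new_temperature, iteration
        temperature = new_temperature
    raise RuntimeError("coupling did not converge")

# Tiny self-consistent demo with made-up surrogate behaviour:
losses = lambda T: 1.0e5 * (1.0 + 0.004 * (T - 20.0))   # losses rise with temperature
temps  = lambda q: 20.0 + q / 4.0e3                      # hotter with more dissipated power
print(coupled_thermal_loop(losses, temps))               # converged temperature, iterations
```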

Maxwell-ANSYS structural coupling
Maxwell computes the electromagnetic forces. These forces typically fall into two different groups:
1. The surface force density represents the magnetic reluctance force and is generated at the interface between materials with different magnetic permeability. This type of force must be considered whenever the model contains materials with a magnetic permeability much greater than 1.
2. The volume force density represents the Lorentz force and must be considered for current-carrying conductors immersed in a magnetic field.
Maxwell exports both of these force categories to the static, harmonic and dynamic structural simulation environments, and reads back the deformed mesh produced by the structural simulation. The analysis described is set up in ANSYS Workbench with the scheme shown in Figure 3.

Fig. 3 - Scheme of the bidirectional Maxwell-ANSYS Mechanical structural coupling.



Simplorer-Fluent co-simulation
One of the most significant new features in ANSYS 14 concerning system simulation is the co-simulation technique that can be implemented between Simplorer 10 and Fluent. One of the fields in which this integration is most significant is the thermo-electrical analysis of battery packs. Typically, a CFD analysis must be set up to compute the temperatures of interest, reading as input both the mass flow rate of the coolant, possibly temperature-controlled, and the electrical power dissipated by the batteries.
For this specific case, the co-simulation technique makes it possible to set up a dynamic analysis with the following characteristics:
• Simplorer and Fluent run their respective models together.
• Simplorer acts as the master, Fluent as the slave.
• At each time step of the dynamic analysis, Simplorer passes the values of the thermal power generated by the batteries and of the coolant flow rate to the Fluent simulation; Fluent passes the computed temperatures of interest back to Simplorer.
• In Simplorer there is no limit to the complexity of the modeling, either of the electrochemical battery models or of the control circuit.
Figure 4 shows a schematic example of such an analysis. In this case the Fluent model consists of a single battery cell. Simplorer interfaces with the Fluent model through the definition of input and output parameters for the quantities of interest.

Fig. 4 - Simplorer: Simplorer-Fluent co-simulation scheme.
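A conceptual, self-contained sketch of the master/slave exchange just described; the electro-thermal law and the lumped CFD surrogate are invented placeholders and do not represent the actual Simplorer or Fluent models:

```python
# Conceptual sketch of a master/slave co-simulation time loop: at every time
# step the "system" side sends battery heat and coolant flow, and reads back
# the cell temperature computed by a (here, lumped and invented) CFD surrogate.

def cosimulate(t_end_s=600.0, dt_s=1.0):
    cell_temperature_C = 25.0
    history = []
    for step in range(int(t_end_s / dt_s)):
        t = step * dt_s
        # -- master (system model): battery heat and temperature-controlled coolant flow --
        heat_W = 400.0 + 100.0 * (cell_temperature_C > 35.0)   # made-up electro-thermal law
        coolant_kg_s = 0.02 if cell_temperature_C < 35.0 else 0.05
        # -- slave (CFD surrogate): new cell temperature from heat and coolant flow --
        cooling_W = 800.0 * coolant_kg_s * (cell_temperature_C - 20.0)
        cell_temperature_C += dt_s * (heat_W - cooling_W) / 500.0   # lumped thermal mass
        history.append((t, cell_temperature_C))
    return history

print(f"final cell temperature: {cosimulate()[-1][1]:.1f} °C")
```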

Full geometric integration
The ANSYS Workbench 14 interface achieves full integration between the ANSYS geometric models and the former ANSOFT low- and high-frequency software (Maxwell and HFSS). By ANSYS geometric models we mean the geometries created with the proprietary ANSYS geometric modelers (DesignModeler and SpaceClaim), as well as geometries loaded into the interface using the numerous plug-ins available for almost all commercial CAD systems and for the main neutral geometry formats.
In particular, any parametric geometry imported or created in the Workbench interface can be exported to Maxwell 15 or HFSS 14, still in parametric form, just as a geometry created in Maxwell 15 or HFSS 14 can be used in all the simulation environments available in ANSYS without losing the parameter information. As an example, Figure 5 illustrates how three parametric geometry formats, coming from different sources (ProE, Maxwell and DesignModeler), can be transferred through the interface to the HFSS high-frequency simulator.

Figure 5: Example of ANSOFT-ANSYS geometric integration in the Workbench interface.

For more information: Emiliano D'Alessandro, EnginSoft - info@enginsoft.it





Updates on the EASIT2 project: educational base and competence framework for the analysis and simulation industry now online
Project objectives and background
The EASIT2 project is a Leonardo da Vinci European Union co-funded project, part of the European Vocational Training Action Programme. Leonardo da Vinci projects are aimed at designing, testing, evaluating and disseminating innovative vocational training and lifelong learning practices, and at promoting innovation in training as well as in methodologies, contents and products. EASIT2 is coordinated by the Department of Mechanical Engineering of the University of Strathclyde (Glasgow, UK), and is partnered by EnginSoft, NAFEMS, EoN, EADS, Renault, GEOFEM, Nokia, Nevesbu, TetraPak, AMEC and Selex Galileo. The aim of the EASIT2 project is the development of three distinct, but functionally related, competence-based tools:
• the educational base, a database of "standard" analysis and simulation competencies;
• the competence framework, a software system that enables companies and motivates individuals to verify, track, develop and attest competencies in the field;
• the new NAFEMS competence-based registered analyst scheme, which will foster a transparent and independent certification of individual analysis and simulation competencies.
A guide through the analysis and simulation knowledge: the EASIT2 Educational Base
At the foundation of the project there is the EASIT2 educational base, intended as guidance for those engaged in continuing professional development, both at a personal and at an organisational level. The educational base is a database of competence statements covering most of the spectrum of analysis and simulation competencies: a competence statement is a sentence or paragraph that captures a specific competence,


for example “List the various steps of the analysis and simulation process”, or “Explain why strains and stresses are generally less accurate than displacements for any given mesh of elements, using the Displacement FEM”. Statements express what an analyst should be able to do: the emphasis on doing is what distinguishes the EASIT2 competence based approach from existing approaches based

The structure of the EASIT2 project, showing the interdependence between the 3 competence-based tools.



on more intangible ideas related to educational aims, such as, for example, a list of training objectives or a course syllabus. The EASIT2 educational base contains over 1400 competence statements, subdivided into 23 competence areas, starting from basic topics such as "Introduction to Finite Element Analysis" and covering more specialized topics, such as "Materials Modelling, Characterization and Selection", "Fatigue and Fracture", "Nonlinear Geometric Effects", "Computational Fluid Dynamics", etc. In general, each competence area contains from 50 to 100 statements. Inside a competence area, competence statements are presented in an order that generally reflects increasing competence, that is, basic competencies are presented at the top of the list, while higher-level competencies are presented at the bottom. Statements thus ideally guide the learner from mere knowledge to more actionable abilities.
Furthermore, each competence statement includes information regarding the level of the competence, relative for example to the European Qualification Framework (EQF) level. The educational base can be used, for example, for educational purposes: every statement is linked to appropriate educational resources, such as books, papers, codes of practice, etc., that will help an engineer gain the appropriate competence. Each statement and educational resource included in the educational base underwent a peer review process during the EASIT2 project, and it is anticipated that the educational base itself will be further enriched and improved after the end of the project. A public version of the educational base will be released over the next months.

Areas of competence currently covered by the EASIT2 educational base:
Finite Elements Analysis; Mechanics, Elasticity and Strength of Materials; Materials for Analysis and Simulation; Flaw Assessment and Fracture Mechanics; Fatigue; Nonlinear Geometric Effects and Contact; Beams, Membranes, Plates and Shells; Dynamics and Vibration; Buckling and Instability; Composite Materials and Structures; Creep and Time-Dependency; Plasticity; Thermo-Mechanical Behaviour; Computational Fluid Dynamics; Electromagnetics; Fundamentals of Flow, Heat and Mass Transfer; Multi-body Dynamics; Multi-physics; Multi-Scale Analysis; Noise and Acoustics; Optimization; Probabilistic Analysis; Simulation Management.

The EASIT2 project partners

Capturing analysis and simulation competencies: the EASIT2 Competence Framework
Around the educational base, the EASIT2 competence framework has been built. The competence framework is a secure web-based system, designed to be accessed over the Internet or a company Intranet. The competence framework enables individuals or company staff to actually record their competencies in a relational database. In fact, the competence framework integrates with the educational base, allowing its users to verify, attest and track their competence. For the individual user, the EASIT2 competence framework helps track learning progress and guide further learning: the user can authenticate into the system, navigate the educational base and the related educational resources, assess his or her own competencies in the various areas of competence available, and generate his or her own individual competence report. For organisations, the competence framework is designed to provide an open and highly customisable system: it allows the definition of groups of users, enabling companies to define and track competence development for both individual employees and teams. Furthermore, the educational base itself is customisable, and can be modified or extended, for example by adding additional competence areas or statements of specific interest. The competence framework is designed to be capable of interfacing with existing staff development or human resource management systems; it is envisioned that the ability it provides to define various levels and subsets of competencies will be useful for planning technical careers.
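Purely as an illustration of the kind of relational record such a framework might keep (this is not the actual EASIT2 schema, and the EQF level shown is an assumed value), a minimal data-model sketch:

```python
# Hypothetical illustration of records for competence statements and for a
# user's self-assessment against them; not the actual EASIT2 database schema.
from dataclasses import dataclass

@dataclass
class CompetenceStatement:
    area: str            # one of the 23 competence areas
    text: str            # what the analyst should be able to DO
    eqf_level: int       # indicative European Qualification Framework level

@dataclass
class Assessment:
    user: str
    statement: CompetenceStatement
    attained: bool

statement = CompetenceStatement(
    area="Introduction to Finite Element Analysis",
    text="List the various steps of the analysis and simulation process",
    eqf_level=4,         # assumed value, for illustration only
)
report = [Assessment(user="j.smith", statement=statement, attained=True)]
print(sum(a.attained for a in report), "competence(s) attained")
```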

Enabling transparent analysis and simulation qualification attestation: the new NAFEMS competence-based registered analyst scheme
The project has developed, and is currently testing, a new competence-based registered analyst scheme, derived from the points-based scheme currently offered by NAFEMS. The new scheme will retain much of the sound set-up of the present NAFEMS scheme, such as the requirements for workplace experience and product and industry sector knowledge, but will also make use of the EASIT2 educational base and competence framework. In the new competence-based registered analyst scheme currently being tested, analysts will be required to access the competence framework, attest their own competencies, and produce their own individual competence report. The report will then be attached to the new registered analyst application, and the information provided in the individual competence report will be used for the assessment procedure.
Conclusions
The topic of the certification of competencies is a key issue in the European debate over the relationship between the

Research & Technology Transfer


54 - Newsletter EnginSoft Year 9 n°2 rapid change of the technical knowledge and the development of the job market. Over the last years, the European Commission introduced a number of tools, such as the Europass and the certificate supplement, aimed at improving the transparency and transfer of competencies and qualifications as part of the Bologna process. The EASIT2 project will contribute a new set of tools, specifically designed for the analysis and simulation industry, that fit ideally into this paradigm shift from a curriculum based qualification attestation, to a more sound and transparent competence based certification. EnginSoft values the EASIT2 project as strategic, and therefore joined the project as a core partner to contribute more closely to the project steering. EnginSoft has an active role in the development of the innovative competence based tools, and specifically it is in charge of the development of the competence framework and is leading the competence based registered analyst scheme workpackage. For more information on the project please visit its website: http://www.easit2.eu

The EASIT2 competence framework home page

To contact the author: Giovanni Borzi - EnginSoft info@enginsoft.it

EnginSoft in £3m EU Partnership to Optimise Remote Laser Welding

The EU has awarded a £3.35m research grant to a consortium involving EnginSoft to develop a technique for optimising the use of Remote Laser Welding (RLW) in assembly processes. The consortium is led by Warwick Manufacturing Group (WMG) at the University of Warwick, and also involves Jaguar Land Rover, Comau, Stadco, Precitec and several important academic institutions including Politecnico Milano, the University of Molise, Ulsan NIST, the University of Patras, Lausanne Polytechnic and SZTAKI Budapest.
This is an exciting project for the participants, of course - but why is it important for the EU? Vehicle assembly is a complex process involving the joining of many subsystems by a variety of methods. For many years, resistance spot welding has been a key technology, with a welding head simultaneously bringing together metal components (typically steel) whilst passing an electrical current to locally re-melt the material and form a mechanical connection. This is far from the only method of body assembly, however: recent years have seen an increasing use of techniques such as gluing and riveting. In each case, the ability to determine the best joining method for an assembly process is critical to assembly efficiency, and therefore vital to the competitive position of many companies within Europe. Designing an efficient assembly process is frequently far from trivial - each method has its own limitations, power requirements, cycle times and so forth, so determining the best configuration for an assembly sequence is a complex procedure.



That is where the "Remote Laser Welding System Navigator for Eco & Resilient Automotive Factories" project enters the picture. Remote Laser Welding is a promising and relatively new joining method in which an intense laser beam is focused onto the material to be joined from one side only; local re-melting and fusing to the underlying material then takes place. This can be very rapid, and since the beam can be manoeuvred from place to place with small angular adjustments of the welding head, there is a great opportunity for rapid cycle times at the welding stations. However, there are also challenges to be met - in particular, the weld locations must be "visible" to the welding head, and the gap tolerances on the parts being fixed must be well controlled.
The aim of the RLW project is to provide a software tool that will enable the process designer to develop an optimal configuration for the use of such processes. At the highest level, it will take a series of conventional assembly workstations and consider all the different ways in which they could be combined to make use of RLW techniques. This system level will propose an RLW-efficient assembly process, leaving parts of the assembly sequence which are unsuitable for RLW unchanged, but introducing RLW where it is most effective. This may involve the introduction of additional workstations or processes that are necessary for the optimum use of RLW, as well as the combining of workstations where RLW is able to perform the work of multiple original locations. At a deeper level, the project will assist in developing individual workstations.

Fig. 2 - Remote laser welding uses a robot to direct a laser beam to a location on one surface of a part, fusing it to an underlying part by the local melting of the part materials

Here, the whole workstation process must be addressed: gap tolerances at the assembly interfaces must be managed by part screening and appropriate fixturing, and the process parameters for the laser and the geometrical manipulation of the parts and the welding head must be defined. This detailed model will also define the control requirements and calculate the process timings and energy requirements needed to refine the higher-level definition of the assembly system. Finally, the software should assist the designer in developing components that are suitable for efficient laser welding by providing appropriate feedback on the part properties. The project will therefore be an invaluable tool in the development of state-of-the-art assembly processes using RLW technology, and thereby play an important role in maintaining Europe's competitive advantage in systems assembly. A toy sketch of the system-level selection idea is given below.
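The article does not disclose how the RLW Navigator's system level actually works. The following is only a toy sketch, under the assumption (not stated in the article) that each workstation can be described by a cycle time per candidate joining method plus a flag for RLW feasibility, and that the tool simply picks the cheapest feasible method per station; station names, times and the greedy rule are all hypothetical:

# Toy illustration of system-level joining-method selection.
# All station names, cycle times and the greedy selection rule are
# hypothetical; the real RLW Navigator is far more sophisticated.

stations = [
    # (station name, {method: cycle time in seconds}, RLW feasible?)
    ("underbody_front", {"spot_weld": 42.0, "rlw": 18.5}, True),
    ("door_inner",      {"spot_weld": 30.0, "rlw": 12.0}, True),
    ("roof_bow",        {"spot_weld": 25.0, "adhesive": 40.0}, False),
]

def plan_assembly(stations):
    """Pick, for each station, the feasible method with the lowest cycle time."""
    plan, total = [], 0.0
    for name, methods, rlw_ok in stations:
        candidates = dict(methods)
        if not rlw_ok:
            candidates.pop("rlw", None)   # RLW not applicable: keep original methods
        method = min(candidates, key=candidates.get)
        plan.append((name, method, candidates[method]))
        total += candidates[method]
    return plan, total

plan, total = plan_assembly(stations)
for name, method, t in plan:
    print(f"{name}: use {method} ({t:.1f} s)")
print(f"Estimated total cycle time: {total:.1f} s")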

Fig. 1 - Effective remote laser welding relies on carefully controlling the gap between the parts and the control of the robot-mounted laser

For more information: David Moseley, EnginSoft UK info@enginsoft.com




DIRECTION: Demonstration of Very Low Energy New Buildings
How can the overall energy efficiency of a building be enhanced in order to achieve a primary energy consumption lower than 60 kWh/m2 per year? This four-year EU-funded project (Framework Programme 7 - FP7), launched at the end of January 2012, aims at creating a framework for the demonstration and dissemination of very innovative and cost-effective energy efficiency technologies for the achievement of very low energy new buildings. DIRECTION proposes five main advances beyond the state of the art in the following areas:
• Energy efficiency measures: energy consumption reductions of more than 50%.
• Low-energy buildings: CO2 emission reductions of more than 60%.
• Modeling and simulation.
• Building monitoring.
• Standards & regulations implemented by European and national policy-makers.

An important impact for the building sector is expected in the following four main areas:
• Energetic: drastic energy consumption reduction.
• Environmental: significant CO2 emissions reduction.
• Building sector business: encouraging a mass market for very low energy new buildings, facilitating understanding among all stakeholders.
• European policies: contributing to boost the implementation of standards and regulations.

Building and construction engineers, architects, energy researchers, IT specialists and public authorities will work together in order to show how this ambitious goal can be reached. Local, national and European stakeholders, including public authorities, users and citizens at large, will be kept up-to-date about the progress and the outcomes of the demonstration.

Demonstration activity
Based on the analysis of suitable energy efficiency technologies and their technical and economic viability, the demonstration activity will be deployed at three new buildings, in which a set of very innovative measures such as constructive elements for energy optimization, highly efficient energy equipment and advanced energy management will be applied:
• Spain – Valladolid: CARTIF III new building. This research centre will host offices and test facilities (industrial activities).
• Italy – Bolzano: New Technology Park of Bolzano. This building will host different stakeholders (enterprises, research institutes and public entities) sharing the common goal to develop and deploy energy efficiency solutions.
• Germany – Munich: Nu-Office. This office building, located in Domagkstrasse, will be built to "Sustainable Building" standards as non-private housing.

EnginSoft's Contribution
EnginSoft will be in charge of software development and will be mainly committed to Building Modeling & Process Simulation to improve the design of the energy efficiency solutions. The objective is to optimize the performance of the building as a whole and not only its single components, accompanying the building with modeling and simulation throughout its life cycle, from design through construction to operation. Software interfaces will be developed to integrate the simulation tools necessary to perform the envisaged numerical analyses.
• Building Information Modeling (BIM) will gather all the information on the building, taking into account the complex interactions and interdependencies and allowing information to be transferred without loss from one actor in the design and construction process to the next.
• Integrated design will be strongly supported by the developed building models and by dynamic simulation at the different stages of the process.
• Continuous commissioning will be integrated with modeling and simulation, and will guarantee a smooth transition from the design to the operation phase.
• Simulation-aided automation and model-based control will allow the full potential of energetically optimized building operation to be tapped.
• The building model will support the evaluation of the single energy saving measures.
• Optimization analyses, based on dynamic simulation, will be split as follows: building envelope, equipment and HVAC concept, building energy management system (BEMS).
• A black-box model based on experimental data, obtained by means of suitable meta-modeling algorithms, will be developed for the building and for single parts/components (a surrogate-model sketch is given below).
More information is available at: www.direction-fp7.eu
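The project description does not specify which meta-modeling algorithms will be used. As a minimal sketch of the black-box idea only (the data values, variable names and the simple least-squares model below are assumptions for illustration), one could fit a linear surrogate of daily heating energy as a function of measured outdoor temperature and solar irradiation:

import numpy as np

# Hypothetical measured data for one building (illustration only):
# mean outdoor temperature [°C], daily solar irradiation [kWh/m2],
# and metered daily heating energy [kWh].
t_out = np.array([ 2.0,  5.0,  8.0, 11.0, 14.0, 17.0])
solar = np.array([ 0.8,  1.2,  2.0,  2.8,  3.5,  4.1])
q_heat = np.array([310., 255., 190., 130.,  80.,  35.])

# Black-box surrogate: q_heat ≈ a + b*t_out + c*solar, fitted by least squares.
X = np.column_stack([np.ones_like(t_out), t_out, solar])
coef, *_ = np.linalg.lstsq(X, q_heat, rcond=None)
a, b, c = coef
print(f"q_heat ≈ {a:.1f} + {b:.1f}*t_out + {c:.1f}*solar")

# The surrogate can then stand in for the detailed dynamic simulation,
# e.g. to estimate heating demand for a new set of boundary conditions.
t_new, s_new = 6.0, 1.5
print("predicted heating energy:", a + b * t_new + c * s_new, "kWh/day")

In practice such a fitted model would be validated against further measurements before being used inside a building energy management system.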

Other Partners of the Consortium
SPAIN
• Centro tecnológico CARTIF - Research centre, project coordinator
• 1A Ingenieros - Engineering, architecture and technical consultancy
• Dragados - System integration, building contractor
GERMANY
• NUOffice Domagk Gewerbepark - Project developer specialized in commercial real estate
• Fraunhofer Institute for Building Physics
• Facit - General contractor
ITALY
• EURAC Research (EURopean Academy) - Research centre
• CL&aa Claudio Lucchin & architetti associati - Architecture studio
• Provincia Autonoma di Bolzano Alto Adige - Government partner
BELGIUM
• Youris.com - Media agency


Dissemination of the results of the BENIMPACT Project

EnginSoft and the Istituto per Geometri Pozzo of Trento win the "Tu Sei" 2012 competition
The "Tu Sei" project, now in its fourth edition, originated from a Protocol promoted by Confindustria Trento and signed by the Autonomous Province of Trento to reduce the distance between the real world and school activities through creative and engaging programmes, fostering a positive approach of students towards the world of business in order to ease their future entry into employment. The aim is to encourage young people to approach the world of work with responsibility and initiative. This year 21 projects were presented, with the participation of:
• 3 comprehensive institutes (lower secondary schools);
• 12 upper secondary schools;
• about 600 students;
• 27 companies.
Winning partnerships of the "Tu Sei" 2012 award:
• Istituto Comprensivo Tione (first cycle) of Tione with Girardini Srl of Tione
• Istituto Tecnico per Geometri "A. Pozzo" of Trento with EnginSoft SpA of Trento
• CFP Istituto Pavoniano Artigianelli of Trento with SILVELOX SpA of Castelnuovo Valsugana
The closing ceremony was held on 5 May 2012 at the Melotti auditorium of the MART in Rovereto. On this occasion the students presented their projects and the three winners were awarded: one for the lower secondary schools and two, ex aequo, for the upper secondary schools.
The EnginSoft project with the Istituto per Geometri Pozzo of Trento
About ten fifth-year students of the Istituto per Geometri "A. Pozzo" of Trento collaborated in the creation of a database on the materials used for building envelopes and attended a training course on the software tools developed in BENIMPACT. They then used some of these tools to model the dynamic energy behaviour of a building, to be presented at their final state examination.
For further information: Angelo Messina, EnginSoft info@enginsoft.it




EnginSoft and Flow Design Bureau (FDB) launch collaboration
Best-in-class computer aided engineering (CAE) software and consultancy services for the Offshore and Oil&Gas industries in Norway

Despite the fact that the renewable energy sectors are gradually growing and becoming more important for the future of our energy supply and our planet, oil and gas will continue to be the dominant energy source for several decades to come. This article highlights Norway as a responsible supplier of petroleum. Today, Norway is one of the world's leading producers of fossil fuels, as well as one of the largest exporters of natural gas and oil. Over the past 50 years, the country has developed an advanced petroleum industry that today also encompasses some of the front runners in subsea technologies, which are considered vital for the future supply of fossil fuels. Up to 20 years ago, Norwegian oil companies and most of their suppliers and service providers concentrated their efforts on the Norwegian continental shelf. In the recent past, however, Norwegian-headquartered Oil&Gas related industries can be found all over the world. In fact, small and medium-sized Norwegian companies seek to compete for market share internationally, and they are succeeding. EnginSoft and Flow Design Bureau (FDB) are well aware of this scenario, so much so that they have decided to sign a collaboration agreement for computer aided engineering (CAE) software sales and consultancy for the Offshore and Oil&Gas industries in Norway.
FDB is a technology development and consultancy company that specializes in fluids engineering and heat transfer for the energy sectors and businesses. FDB has a track record of delivering high quality consultancy services to Norwegian businesses in the Oil&Gas field using computational fluid dynamics. EnginSoft is an engineering software and consultancy organization with the know-how and resources to provide services based on a large variety of CAE software for fluid-dynamic, mechanical, structural, electromagnetic and process simulations in the aerospace, automotive, chemical and Oil&Gas industries. Based on its own competencies developed over the past 25+ years and on its ambitions for international growth (EnginSoft supports business operations in France, the UK, Germany, Spain and Sweden, as well as in the USA, in the Houston area, and with Stanford University), EnginSoft has identified the Oil&Gas sector as a core business for its activities, into which it can deliver services for most of the required engineering applications. In this light, EnginSoft and FDB agreed that by combining efforts both companies can benefit and grow their business. EnginSoft now has a partner in Norway through which the Norwegian Oil&Gas industry can be approached, while FDB can count on EnginSoft to offer services and to participate in multidisciplinary engineering projects. Eventually, the goal of the collaboration is to formalize the partnership, and for FDB to become a member - a node - of EnginSoft's Network of engineering companies.
For more information: Livio Furlan, EnginSoft info@enginsoft.it




Explicit dynamics: EnginSoft's new competence centre in Turin
Since January 2012, EnginSoft's new Competence Center, based in Turin, has been operational, specializing above all in structural analysis, fluid dynamics and explicit dynamic simulation. Highly topical, the so-called "explicit" simulation codes, widespread and well established first and foremost for passive safety analyses, lend themselves to many industrial applications. Originally developed to reduce costly experimental product verification campaigns in the Automotive and Aerospace sectors (imagine the resources needed for a crash test!), they are nowadays applied in all industrial sectors, including electronics. The mobile phone we own, which has become an everyday object, was most likely designed, developed and refined also through a virtual drop test campaign, even before drop tests with physical prototypes. In general, in all cases of strong nonlinearities (displacements, deformations and contacts) and fast dynamics (collisions, impacts, explosions), the explicit approach is the technically and economically ideal solution for a faithful representation of the physical or multiphysics phenomenon.
The team of engineers of the Turin Competence Center has competence and wide experience in the development of advanced Passive Safety applications such as:
• innovative restraint systems (seat, belts, pretensioners, load limiters);
• airbags with latest-generation opening, inflation and activation logics;
• product-process coupling (e.g. stamping, welding and crash);
• simulation of innovative materials;
• multi-objective optimization.
As one might expect for engineers trained in an industrial area that is traditionally strongly "Automotive oriented", methods, tools and experience were originally developed and refined precisely to address the issues of cars and vehicles in general. However, the topics of the automotive world, apparently different from what other industrial sectors require, present many analogies and affinities from the point of view of numerical simulation. Indeed, very often the need to raise product quality, safety and reliability, set against the relentless drive to reduce time and costs, stimulates the creativity of EnginSoft engineers towards the development of innovative methodologies that lead to product and/or process innovation. A concrete example is the article published in the previous issue of the Newsletter concerning the reduction of the packaging of an INDESIT refrigerator. Specific topics addressed by EnginSoft in which explicit solvers are employed include: large deformations of hyperelastic materials, damage and failure phenomena, impacts and collisions of all kinds (e.g. misuse in the machinery sector), multibody and elasto-kinematic systems, tank design through the FSI (SPH) approach, thermo-structural analyses of gaskets, etc. Not infrequently, explicit and implicit analyses are coupled, where necessary, in order to raise the quality and representativeness of the model. The Turin Competence Center is responsible for the promotion, sale and specialist technical support to users, including training, of the LS-DYNA solver, which is nowadays the most widespread software solution in the world for explicit dynamic analysis.
For further information: Alfonso Ortalda, EnginSoft info@enginsoft.it

EnginSoft joins the Unione Industriale di Torino
On 13 April, EnginSoft's Piedmont office joined the Unione Industriale di Torino. This membership reflects EnginSoft's growing interest in territorial associations of this kind, in the conviction that it is important to join local industrial communities and to understand their specific development issues and requests for innovation. In an increasingly global market, understanding the social and economic fabric in which our various competence centres operate is an essential asset and a winning tool to make our companies competitive internationally. EnginSoft is also a member of the Unione Industriali di Trento, whose president, Ing. Paolo Mazzalai, is also president of SWS Group, the holding company of the group to which EnginSoft belongs, and of the Unione Industriali di Brindisi. For more information: www.ui.torino.it




Launch of EnginSoft's Project Management Office
The growing complexity of EnginSoft - a consequence of the constant growth of the group's customer portfolio and turnover, of the range of products and services offered to the market, and of the number of employees and company offices, also following the internationalization process started in 2006 - has led to a considerable increase in the number and importance of the internal and external projects that the company handles. Projects, which increasingly represent standard practice in organizations, are by their nature temporary initiatives; they involve a cross-functional work team and are often aimed at delivering products, services or organizational changes that are crucial for the innovation and competitiveness of companies. Project management, the operational management discipline that coordinates resources to achieve predefined objectives within controlled times and costs, is therefore a fundamental organizational asset for companies. Consistently with this vision, EnginSoft's management believes that project management requires specific organizational support: EnginSoft has therefore begun setting up its corporate Project Management Office (PMO). The PMO organizational model, widely proven and adopted internationally, is effective in promoting the adoption and development of good project management practices in the company: these are fundamental to guarantee the achievement of the objectives that individual projects set themselves and for the correct management of the quality of the results and of the risks that every project carries with it. Furthermore, the PMO is a response to the growing need for integrated project management: in this sense it becomes a fundamental tool to effectively support the strategic decisions of company management. The PMO fits ideally into EnginSoft, which is already defined by its own quality management system as a "matrix" organization, and is therefore already organized by processes and, secondarily, by projects. EnginSoft is, moreover, culturally able to sustain the organizational change that the establishment of the PMO entails, because it is led by a highly motivated management team focused on achieving results, where project management is, at the same time, a daily need and a daily practice.
EnginSoft's Project Management Office will therefore report directly to the Management, with the following responsibilities:
• assisting the Management in project management, with reference to individual significant projects, both in preliminary strategic evaluations and in operational management; in particular, the PMO will guarantee the alignment between corporate strategic choices and project objectives;
• providing methodological support to project management, including the definition of working standards and internal training; for this purpose the PMO will refer to the methodology developed by the PMI, Project Management Institute;
• providing consultancy and support to the company's Project Managers;
• building, keeping up-to-date and monitoring the corporate project portfolio;
• integrating the various company functions so as to facilitate the link between processes and projects, the communication between the actors involved and the management of the project portfolio.
The initial and preferred areas of intervention of the EnginSoft PMO will be:
• support to new business initiatives;
• development of business initiatives already launched (spin-offs, start-ups, joint ventures);
• facilitation of cross-functional projects (of a technical nature);
• methodologies and best practices.
Through the formalization of the corporate PMO, EnginSoft intends to actively align itself with international best practices and standards, adapted and functional to its own value proposition and market, pursuing the vision of a multinational group based on specific competencies and coordinated in a unified way by a distributed management, along precise directions identified by the General Management.
For more information: Giovanni Borzi, EnginSoft info@enginsoft.it



The Japan Association for Nonlinear CAE: a New Framework for CAE Researchers and Engineers
The joint industry-academia CAE association "The Japan Association for Nonlinear CAE" (JANCAE) is a nonprofit organization whose main activity is the "Nonlinear CAE training course", introduced in December 2001 by the founder, Professor Noboru Kikuchi of the University of Michigan, USA. As of January 2012, it is organized by its chairperson, Associate Prof. Kenjiro Terada of Tohoku University, 10 executive board members and 49 staff members from universities and companies. Over the past 10 years, a total of 460 teachers and 3300 participants have joined the training courses, which have been held 20 times since their introduction. Additionally, JANCAE organizes committees for specific themes and special seminars, and aims at achieving maximum benefit for real CAE business by referring to participants' opinions and feedback, and by developing future training, seminars and planning accordingly.

The purpose of JANCAE
The change in the needs of CAE users, the expansion of the fields in which CAE is used, the rapid development of computer technologies, and the obvious trend that nonlinear simulation is "a must" are clear facts today. The movement from linear to nonlinear is regarded as a step up to the second generation of CAE. However, there is a major hurdle with which past experience cannot help: CAE is not just about using CAE software. It means creating correct models, choosing appropriate analysis methods, evaluating the results output by the software properly, and giving feedback to the design team after understanding the surrounding environment and the possible phenomena. This is a real requirement from industry. However, there is a gap between this clear "requirement from industry" and the general "university curriculum".

To fill this gap, it cannot be said often enough that we rely on the backing of academia and the support of the software vendors. It would be dangerous to ignore this gap while the usage of nonlinear CAE increases in industry. This being the situation, JANCAE started its activities - as an initial target - by establishing a forum, including CAE training courses, which offers participants the opportunity to learn nonlinear CAE intensively, to work hard and to learn from each other. Of course, it is impossible to fill all gaps just with this training course. For this reason, the long-term target of JANCAE is to establish a new framework for CAE researchers and engineers. (JANCAE website: http://www.jancae.org)

Nonlinear CAE training course
A major activity of JANCAE is the CAE training course, which has gathered a total of 3300 participants over its past 20 editions, and about 150 to 200 at each of the recent courses. Fig. 1 shows the participants' classification by industry.

Fig 1 - Participants classification by industry



The numbers reflect the overall industrial structure of Japan, with its major sectors: automotive/automotive components, electrical equipment and general machinery; beyond these, participants from a variety of fields, such as materials/chemicals, steel/metal and civil engineering/building, also attended the training courses. Attendees from research/academia and software vendors account for about 10%. Each training course runs over 4 days, and the courses are held twice a year. Despite the fact that the program always starts on the weekend, attendance is very good and many students attend repeatedly. It is very important and recommended to regularly attend different courses, because CAE has changed a lot over the last 10 years and evolves constantly. The program of the training courses is structured in a first and a second half, of 2 days each. The first half covers the basics: the lessons teach the fundamentals of each topic, such as material models, elements and analysis methods. The second half focuses on applications and covers the main themes of each course, such as coupled fields and multiscale analysis. Fig. 2 shows the training course topics from 2001 to 2010. The courses are conducted by professional researchers and industry experts. For sound practical experience in CAE, it is necessary to understand coupled fields at various levels. As can be seen in Fig. 2, the curriculum is well thought out, so that the participants can learn a wide variety of what CAE covers today. Aside from classroom lectures, the courses also provide time for hands-on training in the review and discussion parts of the program.

Fig. 2 - The nonlinear CAE training course curriculum from 2001 to 2010

Committee activities
Independently from the CAE training course, which mainly consists of classroom lectures, JANCAE organizes "The Material Modeling Committee" as a practical approach to the study of nonlinear materials. The Committee was originally established in 2005 as "The Rubber Committee". In the following years, its research activities have diversified into all material nonlinearity topics, including metal plasticity. Within the Committee, members learn about typical nonlinear material modeling by studying the basic theory of the constitutive equations, material testing methods, and how to handle test data.
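The article does not give the committee's actual equations. Purely as an illustration of the kind of constitutive model studied for rubber-like materials, a widely used hyperelastic form is the two-parameter Mooney-Rivlin strain-energy function (the symbols follow common textbook notation and are not taken from JANCAE material):

W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}\,(J - 1)^2

where \bar{I}_1 and \bar{I}_2 are the first and second invariants of the isochoric left Cauchy-Green deformation tensor, J is the volume ratio, and C_{10}, C_{01}, D_1 are material constants typically fitted to uniaxial, biaxial and planar test data; setting C_{01} = 0 recovers the simpler neo-Hookean model.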

Moreover, in collaboration with universities, material manufacturers and software vendors, JANCAE has developed a major rubber material database and simulation templates, which have in the meantime been uploaded to its web site. The association is now also developing a material model subroutine which can be used with different commercial FEM codes - a topic introduced in the EnginSoft Newsletter 2011 Winter Edition.

Interview
I had the pleasure to conduct the following interviews with the executive board members of JANCAE.


What changes do you see in the use of CAE in Japan?
H. Takizawa, PhD, Mitsubishi Materials Corporation, says: In the past, engineers who wanted to use numerical simulation had to program it themselves. Nowadays CAE software is commonly used in manufacturing companies, also because the general purpose codes, which are mainly developed in western countries, have become widespread. Well-established graphical user interfaces and visualization tools make it possible to simulate many different problems and obtain results quickly. Capabilities like these help engineers in many companies to reduce the time and cost of development cycles. However, this environment, which delivers results very easily, has also brought a negative aspect: we tend to lose our readiness to think about the object of the analysis from different angles and to understand it deeply. Indeed, just getting results is not enough; we should extract the necessary information from the simulation results and reflect it in the design. I have the impression that, in some recent CAE work, the quantitative goal of substituting prototype testing with simulation for cost reduction has become dominant. I fear that this has weakened our sense of value in seeing, and understanding deeply, the reality of interest that cannot be seen in prototype testing. I strongly believe that we need to get back to such a sense of value.

What are the main characteristics of the way CAE is used in Japan, and what kind of CAE tools are required?
Y. Umezu, JSOL Corporation, says: For operations within the organization which require low dependence on individual abilities, CAE managers seek to automate CAE tools as much as possible, even if this requires considerable investment. To do this, scripting capabilities that make it easy to customize the tools to their own operational processes are required. On the other hand, to take full advantage of individual abilities, CAE tools are used to check the effect of one's own ideas (design changes, countermeasures, etc.) rather than to look for new ideas, because users know the risks of using CAE as a black box. In such cases, CAE tools which show the effect of an idea through differences in the results are required. That is called the KAIZEN style.



What do you expect from CAE and its surrounding environment as a contribution to the growth of the Japanese manufacturing sectors?
T. Kobayashi, Mechanical Design & Analysis Co., says: Japanese manufacturers have been good at making high added-value products with fewer components (think, for example, of cameras and motorcycles). Nowadays, CAE is aimed at reasonably accurate simulations of large assembled products; this trend could be called emulation rather than simulation. I think that not only emulation, but also methods which close in on the essentials more intuitively (for example First Order Analysis), are suited to Japanese engineers.

Ten years after the establishment of JANCAE, how do you think it has contributed to manufacturing in Japan, and what are the remaining challenges?
K. Terada, PhD, Tohoku University, says: There are always conscientious CAE engineers who are aware of the importance of understanding both the basic principles and the software usage for simulations. Although the activities of JANCAE were somewhat exploratory and immature, I believe they have helped to satisfy the engineers' desire to learn. At the same time, we succeeded in creating a unique community that enables them to share information, mindsets and a sense of value with other participants and instructors, by endorsing their awareness of the problems mentioned above. It seems that such accomplishments are well recognized and are steadily being taken over by the younger generation. However, the activities necessarily have limitations, because all the JANCAE staff are volunteers. We are not satisfied with the current situation and would like younger supporters, especially staff from academia, to be involved and proactive in expanding our activities. Creating such an environment and organization is an urgent problem to be solved.

Conclusions
The environment around nonlinear CAE will continue to change in the future. A polarization between "concentrated CAE for structures" and "distributed CAE for each person" can now be observed in industry. What should be learned is different in the two cases, and better skills will be required in many different situations. JANCAE will be a new framework to raise both the overall level of CAE users and their individual skills - both increase when we work with conscientious people.
This article has been written in collaboration with the Japan Association for Nonlinear CAE.
Akiko Kondoh, Consultant for EnginSoft in Japan

easy and efficient coordination of your FP7 EU research projects - 24 hours a day, from any platform

www.EUCOORD.com

powered by

European Projects Coordination Tool info@eucoord.com



METEF Foundeq 2012: EnginSoft satisfied with its participation in the exhibition
EnginSoft's stand at the international exhibition dedicated to the world of aluminium and metal-working machinery met with approval and captured the attention of visitors.

METEF Foundeq, the international expo dedicated to the aluminium and non-ferrous metals production chain, is one of the most important events of international appeal dealing with innovative techniques and technologies for the foundry industry. The event, held for the first time at the Verona exhibition centre from 18 to 21 April 2012, recorded 15,000 visitors and an increase both in the professional operators present and in the presence of foreign buyers: a positive result, despite the difficult economic situation. Since the first edition, EnginSoft has taken part in the event with its own exhibition space. This year, the presentation on a rotating platform of several castings, optimized with the MAGMA5 foundry process simulation software, attracted great interest. Visitors could appreciate components of high mechanical quality, such as the swingarm of a Ducati motorcycle, the bedplate of a twelve-cylinder Ferrari engine or a steering head tube, displayed next to the simulation technologies that contribute to the perfect realization of these masterpieces of Italian engineering - components made possible thanks to the passion of foundries such as Perucchini and GFT and of pattern shops such as CPC. EnginSoft also presented the cores used in the production of these components: examples that demonstrate the potential of MAGMA Core and Mold, an innovative software product that finally makes it possible to simulate the core production process. The module offers an easy and intuitive working model that helps to thoroughly understand the core production process and allows all its phases to be analysed: an indispensable tool for identifying the best design strategies for the definition of core boxes. The International Forum on Structural Components by High Pressure Die Casting (HPDC) - prospects and challenges of the high pressure die casting industry - organized by AIM - Die Casting Study Group and Alfin-Edimet Spa in cooperation with Amafond, Assofond, Assomet, Cemafon and Caef, was also held within the exhibition. The forum was coordinated by our own Piero Parona, president of the Die Casting Study Centre of AIM, Associazione Italiana Metallurgia.

EnginSoft becomes a member of AMAFOND, the Italian association of suppliers of machines and materials for foundries
AMAFOND is the Italian Association of Suppliers of Machines, Products and Services for the Foundry industry. Founded in 1946, it is a point of reference for operators in the sector, offering technical, regulatory, economic and legislative services. Since its constitution it has been a member of the Union of Commerce and Services of the province of Milan. AMAFOND is also a founding member of CEMAFON, the European committee of manufacturers of foundry machines and plants. The purpose of the Association is to coordinate, protect and promote the technical and economic interests of the foundry machines and products sector and, more generally, of all suppliers to the metallurgical industries. There are about seventy member companies, divided into four large groups: products; machines and plants; furnaces for melting and heat treatment; and die casting. These companies represent a turnover of about 1 billion euro, with an export share of more than 60%, and employ almost 5000 people. EnginSoft has joined AMAFOND to strengthen and further develop its contacts with the owners and decision makers of Italian foundries, and to promote EnginSoft internationally as a service provider, thanks to AMAFOND's institutional presence at the main international trade fairs and the distribution of the directory of its member companies. For further information: www.amafond.com


The Fisker Karma 2012 Electric Luxury Car will be showcased at the CAE Conference 2012
Henrik Fisker will be the Keynote Speaker of the Automotive session

CAE Poster Award
The CAE Poster Award is an EnginSoft initiative which is part of the program of the International CAE Conference 2012, taking place in Lazise (Verona), Italy, from 22 to 23 October. The CAE Poster Award is a competition dedicated to the best posters that show original and relevant CAE applications. It is part of the EnginSoft CAE Culture Promotion Program to improve the correct use of simulation tools, both in industry and academia, and to foster the growth of the CAE analysts' community. The Poster Award is divided into two categories:
- industry: all types of companies are welcome to take part;
- academia: students, graduate students and researchers are welcome to take part.
A Scientific Committee will select the 10 best posters for each category, and the first three posters will be awarded during the Conference evening on 22 October. Participation in the competition is free and includes a free pass to the International CAE Conference 2012 for the nominated authors. Deadline for poster submission: 28 September 2012.
For more details, please visit: www.caeconference.com



International modeFRONTIER Users’ Meeting 2012

From innovative wind generators to the calibration of a diesel oxidation catalyst and the optimal assembly of a genome: ESTECO successfully celebrated the fifth edition of the International modeFRONTIER Users' Meeting on 21st and 22nd May 2012 in Trieste, Italy. Speakers and participants travelled from different parts of the world to share their experiences and knowledge, to address some of the most relevant issues of the sector and to learn about the potential, the latest uses and the applications of the software.
In his welcome speech, Prof. Carlo Poloni, President of ESTECO, introduced the meeting's primary theme, collaboration, and highlighted how nowadays sharing knowledge and resources ranks high as a key success factor in many companies. Technology can be of paramount importance in the process of cooperation, for example in the application of multidisciplinary methodologies such as the ones supported and enhanced by ESTECO technology.
The guest of honor was David Edward Goldberg, Director of the Illinois Genetic Algorithms Laboratory (IlliGAL) and professor at the Department of Industrial and Enterprise Systems Engineering (IESE) of the University of Illinois. As a leading expert on genetic algorithms, Prof. Goldberg focused in his speech on the concepts of "innovation" and "collaboration" from the perspective of genetic algorithms (GAs), search procedures based on the mechanics of natural selection and genetics. "GAs may be thought of as computational models of innovation," said Prof. Goldberg, "but may also be thought of as inducing a kind of collaborative process between human and machine, and also as models of certain kinds of social systems."
Among other specialists, Prof. Alberto Tessarolo showed the results of recent work completed with Ansaldo Sistemi Industriali on the application of modeFRONTIER as an aid to the design of innovative wind generators of different sizes and conceptions. Prof. Tessarolo explained how genetic optimization techniques are applied to the design of complex systems and components in industrial applications. He outlined how, in this particular example, interfacing electromagnetic and thermal finite-element computation programs in the modeFRONTIER multi-objective constrained optimization environment made it possible to identify the most promising design configuration in the absence of previous industrial experience with electric machines of similar characteristics.
Mr. Luciano Mariella from Ferrari GeS unveiled encouraging results from an innovative project for the London 2012 Olympic Games, carried out by Ferrari Gestione Sportiva and C.O.N.I., aiming at improving the hydrodynamic performance of the rudder of the K1 and K2 kayaks.


The optimization loop was focused on CFD simulations to properly evaluate the hydrodynamic performance of the rudder and the fin, and it identified new "optimum" shapes which were built and tested by Italian national athletes in preparation for the upcoming Olympic Games.
Several other experts from leading companies and prominent international research centres presented exciting results on application cases using the modeFRONTIER software across a wide range of high-tech industrial sectors during the two busy days. For more information, please visit: um12.esteco.com
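Prof. Goldberg's description of GAs as search procedures based on natural selection can be illustrated with a deliberately minimal sketch. The test function, population size and operators below are arbitrary illustration choices, unrelated to anything presented at the meeting or to modeFRONTIER's own algorithms:

import random

# Minimal genetic algorithm minimizing a toy function; all settings are
# arbitrary illustration values.
def fitness(x):
    return sum(v * v for v in x)          # sphere function: minimum at the origin

def evolve(dim=5, pop_size=30, generations=60, mut_sigma=0.3):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection: pick the better of two random parents, twice
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            # uniform crossover followed by Gaussian mutation
            child = [random.choice(pair) for pair in zip(p1, p2)]
            child = [v + random.gauss(0, mut_sigma) for v in child]
            new_pop.append(child)
        # elitism: keep the best individual found so far
        new_pop[0] = min(pop, key=fitness)
        pop = new_pop
    return min(pop, key=fitness)

best = evolve()
print("best individual:", [round(v, 3) for v in best], "fitness:", round(fitness(best), 4))

Selection, crossover and mutation are the "mechanics of natural selection" Goldberg refers to; industrial optimizers add constraint handling, multiple objectives and far more refined operators on top of this basic loop.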

Graz Symposium Virtual Vehicle
EnginSoft Germany participated as an exhibitor in the 5th edition of the Graz Symposium Virtual Vehicle on April 17th and 18th 2012. Organised by the competence centre "Das virtuelle Fahrzeug Forschungsgesellschaft mbH", it hosted around 150 participants from companies such as Daimler and Audi, who came together to present and discuss processes, methods, tools and best practices in interdisciplinary vehicle development. EnginSoft Germany showcased how multiple disciplines driving a vehicle's development, such as structural analyses and aerodynamic simulations, can be connected through the workflow automation and design optimization software modeFRONTIER in order to enhance virtual vehicle development. www.gsvf.at/cms/

Constructive Approximation and Applications
EnginSoft will participate in and sponsor the 3rd Dolomites Workshop on Constructive Approximation and Applications (DWCAA12). The conference will take place in Alba di Canazei (Trento, Italy) from September 9 to 14, 2012. DWCAA12 aims to provide a forum for researchers to present and discuss ideas, theories and applications of constructive approximation, and will host several interesting keynote speakers and presentations. EnginSoft will sponsor the Best Paper Award for young researchers. This award (worth up to 500 euro) will be given for the best paper submitted for the proceedings of the conference by either a student, a PhD student or a postdoc. In case of more than one author (with all co-authors in one of the above categories), the amount will be divided equally among the co-authors. The selection of the best paper will be made by the Scientific Committee of DWCAA12. Further information on the themes and sessions of the conference and the Call for Papers can be found at: http://events.math.unipd.it/dwcaa2012/ By participating in DWCAA12, EnginSoft is supporting new studies on approximation and the importance that these techniques have in tackling real industrial problems.



Event Calendar

ITALY

International CAE Conference, 22-23 October 2012 - www.caeconference.com - CALL FOR PAPERS IS OPEN!
_________________________________________________
19-21.06.2012 - Training course "La simulazione elettromagnetica di apparati elettrici - caratterizzazione avanzata dei materiali magnetici", Padova, at the EnginSoft Competence Center
26.06.2012 - Workshop "Massimizzare le performances nei processi fusori della ghisa e dell'acciaio per ottenere getti di qualità superiore", Padova, at the EnginSoft Competence Center
27.06.2012 - Workshop "Simulazione dei Processi di Stampaggio a Caldo di Metalli non Ferrosi: Nuovi Sviluppi, Vantaggi e Prospettive", Brescia, at API
04.07.2012 - Workshop "Sperimentazione virtuale come strumento strategico per la competitività delle aziende", Altavilla Vicentina (VI), at CUOA
11.07.2012 - Workshop "Sperimentazione virtuale nella Dinamica della frattura e caratterizzazione dei materiali", Torino, at the FIAT Lingotto Training Centre
18.07.2012 - Workshop "Progettare strutture e componenti con materiali compositi - scelta dei materiali e simulazione", Bergamo, at the EnginSoft Competence Center
18.07.2012 - Workshop "Lo stato dell'arte delle tecnologie di simulazione dei settori strategici: Oil&Gas, Power e Chemical", San Donato (MI), at the Hotel Crown Plaza
For more information and details about the events: www.enginsoft.it/eventi - eventi@enginsoft.it

FRANCE
6-7.06.2012 - NAFEMS French Conference, Paris
27-28.06.2012 - TERATEC Forum, Paris - meet us at the EnginSoft booth
13-14.11.2012 - Virtual PLM, Reims - EnginSoft will be exhibiting and presenting

GERMANY
24-25.05.2012 - CST European User Conference, Mannheim - EnginSoft presented "Microwave Bandpass Filter Multi-Objective Optimization using modeFRONTIER & CST MWS"
18-22.06.2012 - Achema Conference 2012, Frankfurt/Main
25-26.06.2012 - FLOW-3D European Users Conference, Munich
03.07.2012 - JMAG Users Conference, Frankfurt/Main
24-26.10.2012 - ANSYS User's Conference & CADFEM Users Meeting, Kassel
10.2012 - GT-Power User's Conference, Frankfurt/Main

UK
modeFRONTIER Workshops at the University of Warwick: 13 June, 18 July, 5 September, 16 October, 8 November, 10 December
30-31.05.2012 - NAFEMS UK Conference - EnginSoft gave a presentation
04.07.2012 - modeFRONTIER for InfoWorks CS Workshop, new interface demonstration day, University of Warwick
11.07.2012 - NAFEMS - Using Variability in Simulation: A Practical Workshop, Teddington - EnginSoft will be attending

SPAIN
10.05.2012 - NAFEMS Awareness Seminar on Numerical Methodologies and Modeling of Coupled Systems, Madrid

For more information: www.enginsoft.com



INTERNATIONAL CAE CONFERENCE 2012
22-23 October 2012 - Lago di Garda
Hotel Parchi del Garda, Via Brusá, località Pacengo, Lazise (VR) - Italy - Tel. +39 045 6499611 - www.hotelparchidelgarda.it

The new voice of CAE: join us at the International CAE Conference from the 22nd-23rd October. Key industry leaders, solutions and insight; further your professional experience and expand your industrial network.

Special Guest: Professor Parviz Moin, Professor of Mechanical Engineering at Stanford University and worldwide expert in fluid dynamics.

INTERNATIONAL CAE CONFERENCE - INFOLINE: info@caeconference.com - Tel. +39 0461 915391
www.caeconference.com

