

Newsletter EnginSoft - Year 8 - Openeering Special Issue




Welcome

With this special edition of the Newsletter, EnginSoft welcomes Openeering, a new business unit dedicated to open source software for engineering and applied mathematics.

“We think that the existence of Free and Open Source software for numerical computation is essential. Scilab is such software.” This is what Dr. Claude Gomez, CEO of Scilab Enterprises, claims. In his article, Dr. Gomez gives an insight into Scilab software, its future and the development model. Gomez explains that even if Scilab is free and open source, it is developed in a professional way with a Consortium of more than twenty organizations that are supporting and guiding Scilab’s future and promotion.

The original idea of Openeering dates back to 2008, when it became clear that, at least in some application areas, open source software had reached a good quality level and that it was worth our time and resources to investigate it further. Since then, many things have happened: an internal project was started, a dedicated team was selected and trained, significant benchmarking took place, and a brand new marketing approach was imagined, planned and pursued.

The Openeering name comes from the words Open Source and Engineering, much like EnginSoft comes from the words Engineering and Software. "Open source" and "business" look, at first glance, an impossible pair. Software vendors may associate "open source" with "no revenues". Software customers may associate it with "no support".

Not only do we believe this is not necessarily true, but, on the contrary, we think that open source and commercial software will both play a role in the future CAE and PIDO market. For EnginSoft this vision of the future is of course challenging, and we believe that competencies are the most important weapon we and our customer companies need to win this challenge. The business model we pursue is well described by Giovanni Borzi in a dedicated article of this newsletter.

Scilab is the open source software with which Openeering is starting its activity. In the last years, EnginSoft has become a member of the Scilab Consortium and, more recently, started a collaboration with Scilab Enterprises as a Scilab Professional Partner. This new partnership certifies that EnginSoft has a team dedicated to providing Scilab education and consultancy services to industry.

In the last months, various real world engineering applications have come to life using Scilab. In the following pages, the reader can find some of them. For example, the reader can see how Scilab can be used to develop a finite element solver for stationary and incompressible Navier-Stokes equations or for thermo-mechanical problems.

But Scilab is not only for finite element solvers. In this Newsletter, Scilab proves to be extremely versatile, being able to manage multiobjective optimization problems, text classification with self organizing maps (SOMs) and even to contribute to weather forecasting.

To support the initiative of Openeering, a new website has been created: www.openeering.com contains useful tutorials and real-case application examples, together with the Openeering Scilab education and training calendar. We take the opportunity to invite you to follow our new publications on the website. We welcome feedback, ideas and hints to improve the quality of this brand new website, with the idea that it should be, first and foremost, a website to support our customers.

Together with the Openeering team, I hope you will enjoy reading this first dedicated Newsletter.

Ing. Stefano Odorizzi
EnginSoft CEO and President
Editor in chief



Contents

SCILAB ENTERPRISES NEWS
6   Scilab: The Professional Free Software for Numerical Computation

OPENEERING
9   Why Openeering

SCILAB CASE STUDIES - FEM APPLICATIONS
11  Scilab Finite Element Solver for stationary and incompressible Navier-Stokes equations
16  A simple Finite Element Solver for thermo-mechanical problems
21  A Simple Parallel Implementation of a FEM Solver in Scilab
26  The solution of exterior acoustic problems with Scilab

DATA MINING
31  An unsupervised text classification method implemented in Scilab
37  Weather Forecasting with Scilab

OPTIMIZATION
41  Optimization? Do It with Scilab!
46  A Multi-Objective Optimization with Open Source Software

EDUCATION AND TRAINING
51  Scilab training courses by Openeering




Newsletter EnginSoft - Openeering Special Issue
To receive a free copy of the next EnginSoft Newsletters, please contact our Marketing office at: newsletter@enginsoft.it
All pictures are protected by copyright. Any reproduction of these pictures in any media and by any means is forbidden unless written authorization by EnginSoft has been obtained beforehand. ©Copyright EnginSoft Newsletter.

Openeering Team
Massimiliano Margonari - Business Development - m.margonari@openeering.com
Silvia Poles - Product Manager - s.poles@openeering.com
Giovanni Borzi - Project Manager, PMP® - g.borzi@openeering.com
www.openeering.com - powered by EnginSoft

Advertisement
For advertising opportunities, please contact our Marketing office at: newsletter@enginsoft.it

EnginSoft S.p.A.

24126 BERGAMO - c/o Parco Scientifico Tecnologico Kilometro Rosso - Edificio A1, Via Stezzano 87 - Tel. +39 035 368711 • Fax +39 0461 979215
50127 FIRENZE - Via Panciatichi, 40 - Tel. +39 055 4376113 • Fax +39 0461 979216
35129 PADOVA - Via Giambellino, 7 - Tel. +39 49 7705311 • Fax +39 0461 979217
72023 MESAGNE (BRINDISI) - Via A. Murri, 2 - Z.I. - Tel. +39 0831 730194 • Fax +39 0461 979224
38123 TRENTO - fraz. Mattarello - Via della Stazione, 27 - Tel. +39 0461 915391 • Fax +39 0461 979201
www.enginsoft.it - www.enginsoft.com - e-mail: info@enginsoft.it

COMPANY INTERESTS

Openeering - Open Source Engineering - A Scilab Professional Partner
Via della Stazione, 27 - 38123 Mattarello di Trento

CONSORZIO TCN
38123 TRENTO - Via della Stazione, 27 - fraz. Mattarello
Tel. +39 0461 915391 • Fax +39 0461 979201
www.consorziotcn.it - www.improve.it

EnginSoft GmbH - Germany
EnginSoft UK - United Kingdom
EnginSoft France - France
EnginSoft Nordic - Sweden
Aperio Tecnologia en Ingenieria - Spain
www.enginsoft.com

ASSOCIATION INTERESTS
NAFEMS International - www.nafems.it - www.nafems.org
TechNet Alliance - www.technet-alliance.com

RESPONSIBLE DIRECTOR
Stefano Odorizzi - newsletter@enginsoft.it

PRINTING
Grafiche Dal Piaz - Trento

The EnginSoft NEWSLETTER is a quarterly magazine published by EnginSoft SpA

Authorization of the Court of Trento No. 1353 RS, dated 2/4/2008

ESTECO srl
34016 TRIESTE - Area Science Park • Padriciano 99
Tel. +39 040 3755548 • Fax +39 040 3755549
www.esteco.com



Scilab: The Professional Free Software for Numerical Computation

Numerical Computation: a Strategic Domain
When engineers need to model, simulate and design complex systems, they usually use numerical computation software: this kind of tool is needed because of the complexity of the computations they have to make, and it can also be used for plotting and visualization. From the software it is also possible to generate code to be embedded in real systems. With the increasing capabilities of computers, using parallelism, multicore processors and GPUs, simulating very complex systems is now possible, and numerical computation has been applied to many domains where it could not previously be used efficiently. So, today major scientific challenges can be tackled in Biology, Medicine, Environment, Natural Resources and Risks, and Materials. Numerical computation is also increasingly used in industry and service sectors such as Energy, Defense, Automotive, Aerospace, Telecommunications, Finance, Transportation and Multimedia. So, numerical computation software is strategic software in strategic sectors and domains. It is also used in Education and Research. For all these reasons, we think that the existence of Free and Open Source software for numerical computation is essential. Scilab is such software.

What is Scilab?
Scilab is software for numerical computation which can be freely downloaded from www.scilab.org. Binary versions are available for Windows (XP, Vista and 7), GNU/Linux and Mac OS X. Scilab has about 1,700 functions in many scientific domains such as:
• Mathematics.
• Matrix computation, sparse matrices.
• Polynomials and rational functions.
• Simulation: ODE and DAE.
• Classic and robust control, LMI optimization.
• Differentiable and non-differentiable optimization.
• Interpolation, approximation.
• Signal processing.
• Statistics.
It also has 2-D and 3-D graphics, with animation capabilities. For problems where symbolic computation is needed, such as mechanical problems, a link with the Computer Algebra System Maple is available.

In domains where modeling, simulation and control of dynamical systems are needed, the block diagram modeler and editor Xcos, a module which comes with Scilab, can be used (see below). It is very important that the use of such software be as simple as possible: engineers, for instance, do not have the time to learn a complicated language and syntax. So, Scilab has a powerful programming language well adapted to mathematical and matrix computation, which is the basis of the applied mathematics domain. In Figure 1 we can see a typical session with matrix computation:

Figure 1: Simple matrix computation with Scilab.
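Figure 1 is reproduced as a screenshot in the printed newsletter; purely as an illustration, a session of the kind it shows could look like the following lines (the matrix and vector values are invented for the example):

// Define a small matrix and a right-hand side (illustrative values)
A = [2 1 0; 1 3 1; 0 1 2];
b = [1; 2; 3];
x = A \ b          // solve the linear system A*x = b
e = spec(A)        // eigenvalues of A
d = det(A)         // determinant
disp(A*x - b)      // residual, numerically zero

Each command can be typed directly in the console, and omitting the final semicolon makes Scilab echo the result.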

Graphics are of paramount importance for the interpretation and diffusion of results. In Scilab it is easy to plot 2-D and 3-D curves. Graphs are composed of objects with properties which can be modified independently, using Scilab commands or a graphics editor. An example is given in Figure 2. To make writing programs easier, a program editor aware of the Scilab syntax is integrated into Scilab. It provides automatic indentation, syntax matching, completion, Scilab execution, fast access to the on-line help of functions, and more. As Scilab is open source, it is very simple for the user to access interactively, from the editor, the source code of all the functions written in Scilab code: this is a very efficient way of modifying Scilab and of finding good programming examples.



At the console level, a variable browser and an editor of previous commands are also available. On-line help is available for each Scilab function, with examples which can be executed directly in the Scilab console. After using Scilab for a while, the user ends up with many windows to manage. For this purpose Scilab has a docking system which allows gathering all the windows in a single frame. This can be seen in Figure 2, where the console is on the left and, on the right, the corresponding program and graphics windows.

Figure 2: Docking of console, editor and graphics windows.

In fact, Scilab is made of libraries of computation programs written in C and FORTRAN, linked to an interpreter which acts as an interface between the programs and the user by means of Scilab functions. On top of the whole system, a light and powerful graphical user interface allows the user to use Scilab easily. A large number of Scilab functions are themselves written in Scilab. The Scilab internals are summarized in Figure 3.

Figure 3: Scilab internal components.

As the figure shows, a user can extend Scilab by adding what we call "External Modules": the user only has to add FORTRAN, C, C++ and/or Scilab code, together with the corresponding on-line help, and to link it interactively with Scilab. So, Scilab is really an open system. Conversely, Scilab can be used by other programs as a calculation engine. Tools for making external modules are available from the Scilab web site, where a forge can be used. A major improvement of the latest releases of Scilab is the ATOMS system, which allows downloading and installing external modules, available on the Scilab web site, directly from within Scilab: a lot of external modules are already available, and users can easily create their own ATOMS modules. Everything about ATOMS can be found at atoms.scilab.org.

What is Xcos?
Scilab has powerful solvers for explicit (ODE) and implicit (DAE) differential equation systems. So, it is possible to simulate dynamical systems, and the Xcos module, which is integrated into Scilab, is a block diagram graphical editor which can represent most hybrid dynamical systems (with continuous and discrete time components and conditioning events). The model of a dynamical system is built by gathering blocks copied from various palettes and by linking them. The connections between the input/output ports model the communication of data from one block to another, while the connections between the activation ports model the communication of control information. Many different clocks can be used in the system. Like Scilab, Xcos is an open system and the user can build blocks and gather them into new palettes. An example of a system that we want to control with a hybrid observer is given in Figure 4: we can see two asynchronous clocks and two super blocks containing the other blocks representing the system and the estimator.

Figure 4: System to be controlled with an observer.
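The same ODE and DAE solvers that Xcos relies on can also be called directly from the Scilab language; as a minimal, purely illustrative sketch (the damped oscillator and its coefficients are invented for the example):

// Damped oscillator written as a first-order system: y(1) = position, y(2) = velocity
function dy = rhs(t, y)
    dy = [y(2); -0.5*y(2) - 4*y(1)];
endfunction

y0 = [1; 0];                // initial condition
t  = linspace(0, 20, 500);  // output times
y  = ode(y0, 0, t, rhs);    // numerical integration with Scilab's ode solver
plot(t, y(1,:))             // position versus time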

Scilab Future
Scilab has to evolve in order to keep pace with the fast evolution of the world of computers and with the ever growing needs of the users of numerical computation, mainly engineers in strategic domains. For that, the strategic roadmap of Scilab can be summarized in four important points:


1. High Performance Computing: using multicore processors, GPUs and clusters for making parallel computations. For that, new parallel algorithms have to be developed and the Scilab language and interpreter have to be adapted.
2. Embedded systems: generating code from Xcos and from Scilab code to embed into devices, cars, planes… Today this point joins the preceding one because of the new multiprocessor embedded chips.
3. Links with other scientific software, free or not. This is an important point to make Scilab a numerical platform that can be used in coordination with other specialized software. Scilab already has such links with Excel, LabVIEW and modeFRONTIER.
4. Dedicated professional external modules.

For points 1 and 2, a brand new Scilab kernel has been developed, which will combine improved performance and memory management together with an adaptation to parallelization and code generation. This is Scilab 6, which will be released at the beginning of 2012 and will be a major evolution of Scilab.

Scilab Development Model
Scilab is software coming from research. It was conceived at INRIA (the French Research Institute for Computer Science and Control, www.inria.fr), and a consortium has been created to take charge of the development and the promotion of Scilab. 24 organizations are members of the Scilab Consortium:

Figure 5: Members of the Scilab Consortium.

The Scilab Consortium is hosted by the DIGITEO Foundation until mid-2012 (www.digiteo.fr). The development of Scilab within the Consortium is carried out by a dedicated team working full time on Scilab. So, even if Scilab is Free and Open Source, and benefits from the help of its community of users for its development, it is developed in a professional way, in order to become The Professional Free Software for Numerical Computation.

Now that Scilab can be used in a professional way both by Industry and Academia, delivering support, services, dedicated professional versions and external modules for Scilab is a necessary requirement for the use of Scilab. This is the reason why the Scilab Enterprises company has been created to take charge of the whole Scilab operation: development of the free Scilab software and delivery of services. The corresponding development model is summarized in Figure 6.

Figure 6: Scilab Enterprises operation.

Dr. Claude Gomez
CEO, Scilab Enterprises
claude.gomez@scilab-enterprises.com

Dr. Claude Gomez graduated in 1977 from the École Centrale de Paris. He received a Ph.D. degree in numerical analysis in 1980 at Orsay University (Paris XI). He was a senior research scientist at INRIA (the French National Institute for Research in Computer Science and Control). He began working on the numerical analysis of partial differential equations; his main topics of interest then became the links between Computer Algebra and Numerical Computation, and he wrote the Macrofort Maple package for generating complete FORTRAN programs from Maple. He has been involved in the development of the scientific software package Scilab since 1990. He is co-author of the Metanet toolbox for graph and network optimization, and he wrote a Maple package for generating Scilab code from Maple.

He is co-author of a book in French about Computer Algebra (Masson, 1995), editor and co-author of a book in English about Scilab (Birkhäuser, 1999), and co-author of a book in French about Scilab (Springer, 2002). He led the Scilab Research and Development team from its creation in 2003 at INRIA, and was the Director of the Scilab Consortium from its integration into the DIGITEO foundation in 2008. He is now the CEO of Scilab Enterprises and is thus in charge of all Scilab operations.



Why Openeering

EnginSoft and Openeering
Over its 25 years of operations, EnginSoft's activities and continuous growth have always been driven by a strong entrepreneurial spirit able to read market signals and, sometimes, to anticipate them. This attitude, combined with the belief that, to provide real value to our customers, competencies are more important than software, has been at the basis of many strategic choices made by the company over the years. Today EnginSoft welcomes Openeering, a new business unit dedicated to open source software for engineering and applied mathematics. The idea of Openeering dates back to 2008, when an EnginSoft technical team was selected internally to start benchmarking open source software. Since then, various real world engineering applications have come to life using the Scilab software, some of them presented in this newsletter. Furthermore, EnginSoft has become a member of the Scilab Consortium and, more recently, a Scilab Professional Partner, dedicated to providing Scilab education and consultancy services to industry.

The Openeering name comes from the words Open Source and Engineering, much like EnginSoft comes from the words Engineering and Software. In the company, the word goes round that we took the "eering" that EnginSoft dropped, and reused it. More seriously, you may be wondering about the Open Source business model that Openeering is aiming at, and how it relates to EnginSoft.

Open Source business model
Open source is a software licensing model that has been widely adopted in several areas of business. This model is generally based on free software and on the availability of the software source code, which can thus be examined, for example for educational purposes, or modified in order to improve existing functionalities, add new ones, or adapt it to specific needs.

"Open source" and "business" look, at first glance, an impossible pair, leading necessarily to an oxymoron. Software vendors may associate "open source" with "no revenue". Software customers may associate it with "no support". This is not necessarily true, and several successful businesses are currently based on open source software. What is really peculiar to open source, indeed, is the shift of business focus from licensing intellectual property (commercial) to selling added value services (open source).

It has to be clarified that open source software is free of charge, but this does not mean that it is unlicensed: on the contrary, open source software comes with a license that clearly sets out the rights and duties of the licensee. Several open source licenses are available, such as the GPL license in its various versions, or the Apache license, each one suitable to specific business scenarios. For example, the Scilab 5 software is governed by the CeCILL open source license, which was introduced in order to provide an open source license better conforming to French law, with the aim of keeping it compatible with the more popular GPL license.

Why is open source attractive? First of all, open source is attractive to companies because it carries the promise of lower costs. Not only does open source software take away the fixed cost of licensing and maintenance fees, but it also enables companies to maximise productivity by installing the software when they need it, where they need it, and in as many copies as needed, for example to accommodate usage peaks or training sessions. Companies will also find open source software attractive because of the availability of a better "ecosystem" of service providers. In fact, most of the open source software value is provided by companies and consultants operating in this ecosystem. The ecosystem grows around well managed open source software initiatives because the absence of license fees lowers the barrier to software adoption, stimulating a greater number of service companies to adopt it: as an effect, a company needing specialised services will generally have more choices available as an open source user than as a closed source one.


Furthermore, open source service providers can provide not only education, training and support, but also more specialised services, such as customisation, that are seldom available with commercial, closed source software.

Quality and open source software
Quality is probably the greatest challenge that open source software has to win. Fortunately, the times of sloppy software quality and poor development management are behind us. Today, successful open source software is associated with a company or consortium that takes care of quality, not only during software development and the integration of third-party contributions, but also by defining a clear and effective market strategy, development roadmap (including release scheduling) and technical objectives. An example of successful quality management in the open source software business is the Linux Ubuntu distribution, which is characterised by a clearly defined roadmap for the releases (one every six months), the availability of long term support releases, an extremely active community and the possibility of purchasing commercial support from the mother company, Canonical Ltd.

With a similar approach, the Scilab Consortium was founded in 2003 by INRIA (the French national institute for research in computer science and control), and joined the Digiteo Foundation in 2008. The Scilab Consortium plays a fundamental role in the Scilab development: monitoring the quality and organizing contributions to the code, keeping Scilab aligned with industry, research and education requirements, organizing the community of users, maintaining the necessary resources and facilities, and associating industrial and academic partners in a powerful international ecosystem. As a result, the latest Scilab release was developed by a dedicated team working full time.

Return on investment in Open Source software
The economic advantage of Open Source is self evident when we try to compare the annual cumulated costs of the Scilab software with those of a closed source competitor, in the case of an industry that is already a customer of the competitor's closed source software. Using simple, realistic assumptions, such as that the Open Source software needs in-depth initial training and additional initial costs for the migration from the closed source competitor, and not taking into account any Open Source advantage that cannot be immediately estimated, such as a productivity increase, our conclusion is that under almost any condition the investment in Open Source software will repay itself in less than two years. The calculation details are available on the www.openeering.com website.

The role of EnginSoft - Openeering
Partner companies have an important role in the open source business model. As previously mentioned, most of the value for open source software users, especially for industry users, is created by partner companies. We believe that EnginSoft, as a leading European engineering software and services provider focusing on technical competencies and building long term, excellent relationships with customers, is perfectly placed to partner with Scilab Enterprises to bring the related education and services to the market.

To support the initiative a new website has been created, www.openeering.com, where useful resources are published, together with the Openeering Scilab education and training calendar.

Giovanni Borzi
Project Manager, PMP®
info@openeering.com



Scilab Finite Element Solver for stationary and incompressible Navier-Stokes equations

Scilab is an open source software package for scientific and numerical computing developed and freely distributed by the Scilab Consortium. Scilab offers a high level programming language which allows the user to quickly implement his/her own applications in a smart way, without strong programming skills. Many toolboxes, developed by users all over the world and made available through the internet, represent real opportunities to create complex, efficient and multiplatform applications. Scilab is often regarded almost as a clone of the well-known MATLAB®; actually, the two technologies have many points in common: the programming languages are very similar (despite some differences), they both use compiled versions of numerical libraries to make basic computations efficient, they offer nice graphical tools, and more. In brief, they adopt the same philosophy, but Scilab is completely free. Unfortunately, Scilab is not yet widely used in industrial areas where, on the contrary, MATLAB® and MATLAB SIMULINK® are the best known and most frequently used tools. This is probably due to the historical advantage that MATLAB® has over all its competitors: launched on the market in the late 70s, it was the first software of its kind. However, we have to recall that MATLAB® has many built-in functions that Scilab does not yet provide; in some cases, this could be decisive. While the number of Scilab users, their experience and their investments have grown steadily, the author of this article thinks that the need to satisfy a larger and more diverse market has also led to faster software developments in recent years.

As in many other cases, marketing also played a fundamental role in the diffusion of the product. Scilab is mainly used for teaching purposes and, probably for this reason, it is often considered inadequate for the solution of real engineering problems. This is absolutely false and, in this article, we will demonstrate that it is possible to develop efficient and reliable solvers with Scilab, also for non trivial problems. To this aim, we choose the Navier-Stokes equations to model a planar, stationary and incompressible fluid motion. The numerical solution of such equations is actually considered a difficult and challenging task, as can be seen by reading [3] and [4], just to provide two references. If the user has a strong background in fluid dynamics, he/she can obviously implement more complex models than the one proposed in this document using the same Scilab platform. Anyway, there are some industrial problems that can be adequately modeled using these equations: heat exchangers, boilers and more, just to name a few possible applications.

The Navier-Stokes equations for the incompressible fluid
The Navier-Stokes equations can be derived by applying the basic laws of mechanics, such as the conservation and continuity principles, to a reference volume of fluid (see [2] for more details). After some mathematical manipulation, the user usually reaches the following system of equations:

∇·U = 0,
ρ (U·∇) U = −∇P + ∇·(µ ∇U),      (1)
ρc (U·∇) T = ∇·(k ∇T)

Fig. 2 - The benchmark problem of a laminar flow around a cylinder used to test our solver; the boundary conditions are drawn in blue. The same problem has been solved using different computational strategies in [6]; the interested reader is addressed to this reference for more details.



Fig. 3 - The two meshes used for the benchmark. On the top the coarse one (3486 unknowns) and on the bottom the finer one (11478 unknowns).

which are known as the continuity, the momentum and the energy equation respectively. They have to be solved in the domain Ω, taking into account appropriate boundary conditions. The symbols ∇· and ∇ are used to indicate the divergence and the gradient operator respectively, while U, P and T are the unknown velocity vector, pressure and temperature fields. The fluid properties are the density ρ, the viscosity µ, the thermal conductivity k and the specific heat c, which, generally speaking, could depend on temperature. We have to remember that in the most general case other terms, such as heat sources or body forces, could be involved in equations (1); they have been neglected in the present case.

For the sake of simplicity we consider all the fluid properties as constant and, as mentioned before, only two dimensional domains. The former hypothesis represents a very important simplification, because the energy equation completely decouples and can therefore be solved separately, once the velocity field has been computed using the first two equations. The latter hypothesis can be easily removed, with some additional programming effort. A source of difficulty is given by the first equation in (1), which represents the incompressibility constraint. In order to satisfy the inf-sup condition (also known as the Babuska-Brezzi condition) we decide to use six-noded triangular elements: the velocity field is modeled using quadratic shape functions, with two unknowns at each node, while the pressure is modeled using linear shape functions, with only three unknowns at the corner nodes. For the solution of the equations reported in (1) we decide to use a traditional Galerkin weighted residual approach, which is not ideally suited to convection dominated problems: it is actually known that when the so-called Peclet number (which expresses the ratio between the convective and diffusive contributions) grows, the computed solution suffers from a non-physical oscillatory behavior (see [2] for details).

Fig. 4 - The sparsity pattern of the system of linear equations that has to be solved at each iteration for the solution of the first model of the channel benchmark (3486 unknowns). It has to be noted that the pattern is symmetric with respect to the diagonal, but unfortunately the matrix is not. The non-zero terms amount to 60294, leading to a storage requirement of 60294 x (8 + 2 x 4) = 965 Kbytes, if double precision arithmetic is used. If a full square matrix were used, 3486 x 3486 x 8 = 97217568 bytes (about 97218 Kbytes) would be necessary!
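Incidentally, this is exactly the kind of saving offered by Scilab's sparse matrices. A small, purely illustrative sketch of how a sparse matrix can be built from its non-zero entries and used in a direct solve (the values have nothing to do with the benchmark):

// Build a 3x3 sparse matrix from (row, column) positions and values - illustrative only
ij = [1 1; 1 2; 2 2; 3 1; 3 3];   // positions of the non-zero entries
v  = [4; -1; 5; -2; 3];           // corresponding values
A  = sparse(ij, v, [3 3]);        // only the non-zeros are stored
b  = [1; 2; 3];
x  = A \ b;                       // direct solution of the sparse system
disp(full(A))                     // convert back to a dense matrix just for display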

The same problem appears when dealing with the energy equation (the third in (1)), whenever the convective contribution is sufficiently high. This phenomenon can be ascribed purely to a deficiency of the numerical technique. For this reason, many workarounds have been proposed to deal correctly with convection dominated problems; the best known are surely the streamline upwinding schemes, the Petrov-Galerkin and least squares Galerkin approaches, and other stabilization techniques. In this work we do not adopt any of these techniques, knowing that the computed solution with a pure Galerkin approach will be reliable only in the case of diffusion dominated problems.



Fig. 5 - Starting from top, the x and y components of velocity, the velocity magnitude and the pressure for Reynolds number equal to 20, computed with the finer mesh.


Fig. 6 - Starting from top, the x and y components of velocity, the velocity magnitude and the pressure for Reynolds number equal to 20, computed with the ANSYS-Flotran solver (2375 elements, 2523 nodes).

As already mentioned, it could in principle be possible to implement any such technique to improve the code and make the solution process less sensitive to the nature of the flow, but this is not the objective of this work.

It is fundamental to note that the momentum equation is nonlinear, due to the presence of the advection term ρ(U·∇)U. The solution strategy adopted to deal with this nonlinearity is probably the simplest one and is usually known as the recursive (or Picard) approach. An initial guess for the velocity field has to be provided, and a first system of linear equations can be assembled and solved. Once the linear system has been solved, the newly computed velocity field can be compared with the guess field: if no significant differences are found, the solution process can be stopped; otherwise a new iteration has to be performed, using the velocity field just computed as the guess field. This process usually leads to the solution within a reasonable number of iterations, and it has the advantage that it can be implemented very easily. There are certainly more effective techniques, such as the Newton-Raphson scheme, but they usually require computing the Jacobian of the system, and hence more time for their implementation.

Laminar flow around a cylinder
In order to test the solver just written with Scilab, we decided to solve a simple problem which has been used by different authors (see [3], [6] for example) as a benchmark to test different numerical approaches for the solution of the incompressible, steady and unsteady, Navier-Stokes equations. The problem is drawn in Figure 2, where the geometry and the boundary conditions can be found. The fluid density is set to 1 and the viscosity to 10^-3. A parabolic (Poiseuille) velocity profile in the x direction is imposed on the inlet, as shown in equation (2):

(2)

with Um = 0.3; a zero pressure condition is imposed on the outlet. The velocity in both directions is imposed to be zero on the other boundaries. The Reynolds number is computed as Re = (Ū D)/ν, where the mean velocity at the inlet (Ū = 2Um/3), the circle diameter D and the kinematic viscosity ν = µ/ρ have been used.

In Figure 3 the adopted meshes are drawn. The first has 809 elements and 1729 nodes, for a total of 3486 unknowns, while the second has 2609 elements and 5409 nodes, for a total of 11478 unknowns.
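The Picard strategy described above can be summarized by the following skeleton, where assemble_navier_stokes is a hypothetical placeholder for the assembly routine (the actual solver code is not reported in this article) and the tolerance and iteration cap are illustrative:

// Picard (recursive) iterations for the nonlinear system - sketch with hypothetical helpers
ndof    = 3486;            // number of unknowns (coarse mesh of the benchmark)
maxiter = 50;              // illustrative iteration cap
tol     = 1e-6;            // illustrative convergence tolerance
U       = zeros(ndof, 1);  // initial guess for the unknowns
for it = 1:maxiter
    [A, b] = assemble_navier_stokes(U);  // hypothetical assembly of the linearised system
    Unew = A \ b;                        // solve the linear system
    if norm(Unew - U) / max(norm(Unew), %eps) < tol then
        U = Unew;
        break;                           // no significant change: stop
    end
    U = Unew;                            // otherwise iterate with the new field
end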


The computations can be performed on a common laptop PC. In our case, the user has to wait around 43 [sec] to solve the first mesh, while the total solution time is around 310 [sec] for the second model; in both cases 17 iterations are necessary to reach convergence. The largest part of the solution time is spent computing the element contributions and filling the matrix: this is mainly due to the fact that the system solution invokes taucs, a compiled library, while the matrix fill-in is done directly in Scilab, which is interpreted and not compiled, leading to a less performing run time. The whole solution time is however always acceptable, even for the finest mesh.

The same problem has been solved with ANSYS-Flotran (2375 elements, 2523 nodes) and the results can be compared with the ones provided by our solver. The comparison is encouraging, because the global behavior is well captured even with the coarser mesh. Moreover, the numerical differences registered between the maximum and minimum values are always acceptable, considering that different grids are used by the solvers. Two other quantities have been computed and compared with the analogous quantities proposed in [6]. The first one is the recirculation length, i.e. the region behind the circle where the velocity along x is not positive, whose expected value is between 0.0842 and 0.0852; the coarser mesh provides a value of 0.0836 and the finer one a value of 0.0846. The second quantity which can be compared is the pressure drop across the circle, computed as the difference between the pressures in (0.15; 0.20) and (0.25; 0.20); the expected value should fall between 0.1172 and 0.1176. In our case, the coarser mesh gives 0.1191 while the finer one gives 0.1177.

Table 1 - The results collected in [3] have been reported here and compared with the analogous quantities computed with our solver (Scilab solver). A satisfactory agreement is observed.

The cavity flow problem
A second standard benchmark for incompressible flow is considered in this section: the flow of an isothermal fluid in a square cavity with unit sides, as schematically represented in Figure 7. The velocity field has been set to zero along all the boundaries, except for the upper one, where a uniform unit horizontal velocity has been imposed. In order to make the problem solvable, a zero pressure has been imposed at the lower left corner of the cavity. We would like to refer the interested reader to [3], where the same benchmark problem has been solved. Some comparisons between the position of the main vortex obtained with our solver and the analogous quantity computed by different authors and collected in [3] have been made and are summarized in Table 1. In Figure 8 the velocity vector (top) and magnitude (bottom) are plotted for three different cases; the Reynolds number is computed as the inverse of the kinematic viscosity, the reference length, the fluid density and the velocity all being set to one. As the Reynolds number grows, the center of the main vortex tends to move towards the center of the cavity.

Fig. 7 - The geometry and the boundary conditions of the second benchmark used to test the solver.

Fig. 8 - The velocity vector (top) and the velocity magnitude (bottom) plotted superimposed on the mesh for Re=100 (left), Re=400 (center) and Re=1000 (right). The main vortex tends to the center of the cavity as the Reynolds number grows and secondary vortexes appear.

Thermo-fluid simulation of a heat exchanger
The solver has been tested and it has been verified that it provides accurate results for low Reynolds numbers. A new problem, perhaps more interesting from an engineering point of view, has been considered: let us imagine that a warm water flow (density of 1000 [kg/m3], viscosity of 5·10^-4 [Pa·s], thermal conductivity of 0.6 [W/m°C] and specific heat of 4186 [J/kg°C]) with a given velocity enters a sort of heat exchanger where some hot circles are present. We would like to compute the outlet fluid temperature, imagining that the flow is sufficiently slow to allow a pure Galerkin approach.



In Figure 9 the mesh for this model is drawn, together with some dimensioning: we decided to consider only the upper part of the heat exchanger, in view of the symmetry with respect to the x-axis. The mesh contains 10673 nodes, leading to 22587 velocity and pressure nodal unknowns and 10302 nodal temperature unknowns. The symmetry conditions are simply given by imposing a homogeneous vertical velocity and thermal flux on the boundaries lying on the symmetry axis. The horizontal inlet velocity follows a parabolic law which goes to zero on the boundary and assumes a maximum value of 1·10^-3 [m/s] on the symmetry axis. The inlet temperature is 20 [°C] and the temperature of the circle surfaces has been set to 50 [°C]. The outlet pressure has been set to zero in order to get a unique solution.

As explained above, the velocity and pressure fields can be computed first, and the energy equation can then be tackled in a second phase to compute the temperature at each point. The fluid velocity magnitude is drawn in Figure 10 and the temperature field in Figure 11.

Fig. 9 - The heat exchanger considered in this work. The symmetry axis is highlighted in blue and some dimensioning (in [cm]) is reported.
Fig. 10 - The velocity magnitude plotted superimposed on the mesh.
Fig. 11 - The temperature field. It can be seen that the inlet temperature is 20 [°C] and the circles temperature is 50 [°C], while the outlet temperature varies from a minimum of 32.60 [°C] up to a maximum of 44.58 [°C].

Conclusions
In this work we have shown how to use Scilab to solve complex problems in an efficient manner. In order to convince the reader that this is feasible, a solver for the Navier-Stokes equations, for incompressible and stationary flow, has been implemented using the standard tools provided in Scilab. Three examples have been presented, and some comparisons with results provided by commercial software and available in the literature have been performed in order to test the solver. It is worth mentioning that a certain background in finite element analysis is obviously mandatory, but no advanced programming skills are necessary to implement the solver.

References
[1] http://www.scilab.org/ to have more information on Scilab.
[2] Gmsh can be freely downloaded from: http://www.geuz.org/gmsh/.
[3] J. Donea, A. Huerta, Finite Element Methods for Flow Problems, (2003) Wiley.
[4] J. H. Ferziger, M. Peric, Computational Methods for Fluid Dynamics, (2002) Springer, third edition.
[5] R. Rannacher, Finite Element Methods for the Incompressible Navier-Stokes Equations, (1999), downloaded from http://ganymed.iwr.uni-heidelberg.de/Oberwolfach-Seminar.
[6] M. Schafer, S. Turek, Benchmark Computations of Laminar Flow Around a Cylinder, downloaded from http://www.mathematik.uni-dortmund.de/de/personen/person/Stefan+Turek.html.

About Scilab and EnginSoft
Scilab is a free open source software with a GPL compatible licence. EnginSoft France supports the Scilab Consortium as a member from industry with a strong background in R&D and educational initiatives for CAE. Based in Rocquencourt, near Versailles/Paris, the Scilab Consortium currently includes 19 members (both industrial and academic). Scilab's Research and Development Team implements the development and promotional policies decided by the Consortium. Over the years, contributions have been numerous on projects such as gfortran, matio, lapack, hdf5, jhdf, jgraphx, autoconf, automake, libtool, coin-or, getfem, indefero, kdbg, OpenMPI, Launchpad... The Scilab Consortium R&D Team also collaborates with many packagers of GNU/Linux, Unix and BSD distributions (Debian, Ubuntu, Mandriva, Gentoo, Redhat, Fedora, OpenSolaris...) in order to help them provide Scilab in their distributions in the best possible way. To communicate with Scilab, and for more information, please visit: www.scilab.org

Contacts
For more information on this document please contact the author:
Massimiliano Margonari - Openeering
info@openeering.com



A simple Finite Element Solver for thermo-mechanical problems

In this paper we would like to show how it is possible to develop a simple but effective finite element solver to deal with thermo-mechanical problems. In many engineering situations it is necessary to solve heat conduction problems, both steady and unsteady state, to estimate the temperature field inside a medium and, at the same time, compute the induced strain and stress states. To solve such problems many commercial software tools are available. They provide user-friendly interfaces and flexible solvers, which can also take into account very complicated boundary conditions, such as radiation, and nonlinearities of any kind, to allow the user to model reality in a very accurate and reliable way.

However, there are some situations in which the problem to be solved requires simple and standard modeling: in these cases it could be sufficient to have a light and dedicated piece of software able to give reliable solutions. Moreover, two other desirable features of such software could be the possibility to access the source, to easily program new tools, and, last but not least, to have a cost- and license-free product. This turns out to be very useful when dealing with the solution of optimization problems. Keeping these considerations in mind, we used the Scilab platform and Gmsh (which are both open source codes: see [1] and [2]) to show that it is possible to build tailored software tools, able to solve standard but complex problems quite efficiently. Of course, to do this it is necessary to have a good knowledge basis in finite element formulations, but no special programming skills, thanks to the ease in developing code which characterizes Scilab.

In this paper we first discuss the numerical solution of the parabolic partial differential equation which governs the unsteady state heat transfer problem, and then a similar strategy for the solution of elastostatic problems is presented. These descriptions are absolutely general and represent the starting point for more complex and richer models. The main objective of this work is certainly not to present revolutionary results or new super codes, but simply to show that in some cases it can be feasible, useful and profitable to develop home-made applications.

Table 1 - A simple comparison between commercial and in-house software is made in this table. These considerations reflect the author's opinion, and the reader may well disagree: the discussion is open.

Flexibility
• Commercial codes: it strongly depends on the code. Commercial codes are thought to be general purpose, but they can rarely be easily customized.
• In-house codes: in principle the maximum flexibility can be reached with a good organization of the programming. Applications tailored to a specific need can be written.

Cost
• Commercial codes: the license cost strongly depends on the code. Sometimes a maintenance fee has to be paid to access updates and upgrades.
• In-house codes: no license means no costs, except those coming from the development.

Numerics and mathematics knowledge required
• Commercial codes: no special skills are required, even if an intelligent use of simulation software requires a certain engineering or scientific background.
• In-house codes: a certain background in mathematics, physics and numerical techniques is obviously necessary.

Programming skills
• Commercial codes: usually no skills are necessary.
• In-house codes: it depends on the language and platform used and also on the objectives that drive the development.

Performance
• Commercial codes: commercial codes use the state of the art of high performance computing to provide very efficient applications to the user.
• In-house codes: the performance strongly depends on the way the code has been written.

Reliability of results
• Commercial codes: usually commercial codes do not provide any warranty on the goodness of results, even though many benchmarks are given to demonstrate the effectiveness of the code.
• In-house codes: a benchmarking activity is recommended to debug in-house codes and to check the goodness of results. This could take a long time.

The thermal solver
The first step is to implement a numerical technique to solve the unsteady state heat transfer problem described by the following partial differential equation:

ρc ∂T/∂t − ∇·(k ∇T) = f      (1)

which has to be solved in the domain Ω, taking into account the boundary conditions, which apply on different portions of the boundary (Γ = ΓT U ΓQ U ΓC). They can be of Dirichlet, Neumann or Robin kind, expressing a given temperature, a given flux or a convection condition with the environment:

(2)

where n is the unit normal vector to the boundary and the over-lined quantities are known values at each time.



Figure 1 - In view of the symmetry of the pipe problem we can consider just one half of the structure during the computations. A null normal flux on the symmetry boundary has been applied to model symmetry as on the base line (green boundaries), while a convection condition has been imposed on the external boundaries (blue boundaries). Inside the hole a temperature is given according to the law described on the right.

The symbols ∇· and ∇ are used to indicate the divergence and the gradient operator respectively, while T is the unknown temperature field. The medium properties are the density ρ, the specific heat c and the thermal conductivity k, which could depend, in the general case, on temperature. The term f on the right hand side represents all the body heat sources and could depend on both space and time. For the sake of simplicity we imagine that all the medium properties are constant; in this way the problem turns out to be linear, dramatically simplifying the solution.

For the solution of the equation reported in (1) we decide to use a traditional Galerkin residual approach. Once a discretization has been introduced, we obtain the following expression in matrix form:

[C]{dT/dt} + [K]{T} = {F}      (3)

where the symbols [.] and {.} are used to indicate matrices and vectors. A classical Euler scheme can be implemented. If we assume the following approximation for the first time derivative of the temperature field:

({T}n+1 − {T}n)/Δt = (1 − θ){dT/dt}n + θ{dT/dt}n+1      (4)

with θ ∈ [0,1] and Δt the time step, we can rewrite, after some manipulation, equation (3) as:

([C] + θΔt[K]){T}n+1 = ([C] − (1 − θ)Δt[K]){T}n + Δt((1 − θ){F}n + θ{F}n+1)      (5)

It is well known (see [4]) that the value of the parameter θ plays a fundamental role. If we choose θ = 0, an explicit time integration scheme is obtained: the unknown temperature at step n+1 can be explicitly computed starting from already computed or known quantities. Moreover, the use of a lumped finite element approach leads to a diagonal matrix [C]; this is a desirable feature, because the solution of equation (5), which passes through the inversion of [C], reduces to simple and fast computations. The gain is even more evident if a nonlinear problem has to be solved, when the inversion of [C] has to be performed at each integration step. Unfortunately, this scheme is not unconditionally stable: the time integration step Δt has to be smaller than a threshold which depends on the nature of the problem and on the mesh. In some cases this restriction could require very small time steps, leading to long solution times. On the contrary, if θ = 1, an implicit scheme comes out of (5), which can be specialized as:

([C] + Δt[K]){T}n+1 = [C]{T}n + Δt{F}n+1      (6)

In this case the matrix on the left also involves the conductivity contribution, which cannot be diagonalized through a lumped approach, and therefore the solution of a system of linear equations has to be computed at each step.
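In Scilab the two limiting cases of the scheme can be written in a few lines. The sketch below is only illustrative: it assumes that the capacity matrix C (lumped, hence diagonal, in the explicit case), the conductivity matrix K, the load vector F, the initial field T0, the time step dt and the number of steps nsteps have already been built elsewhere.

// Time marching for C*dT/dt + K*T = F  -- illustrative sketch
// Explicit Euler (theta = 0): with a lumped, diagonal C the update is trivial
T = T0;
for n = 1:nsteps
    T = T + dt * (C \ (F - K*T));   // inverting the diagonal C is cheap
end

// Implicit Euler (theta = 1): the constant matrix is built once and reused
A = C + dt*K;                        // symmetric positive definite
T = T0;
for n = 1:nsteps
    T = A \ (C*T + dt*F);            // unconditionally stable update
end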

Figure 2 - Temperature field at time 30 The ANSYS Workbench (left) and our solver (right) results. A good agreement can be seen comparing these two images.


As shown in Figures 2 and 3, there is a good agreement between the results obtained with ANSYS Workbench and our solver.

Figure 3 - Temperature field in the point P plotted versus time. The ANSYS Workbench (red) and our solver (blue) results. Also in this case a good agreement between results is achieved.

The system matrix is, however, symmetric and positive definite, so a Choleski decomposition can be computed once and for all, and at each integration step only the backward substitution, which is the least expensive part from a computational point of view, has to be performed. This scheme has the great advantage of being unconditionally stable: this means that there is no restriction on the time step to adopt. Obviously, the larger the step, the larger the errors due to the time discretization introduced in the model, according to (4). In principle all the intermediate values of θ are possible, considering that the stability of the Euler scheme is guaranteed for θ ≥ 1/2, but the most used versions are usually the fully explicit or the fully implicit one.

In order to test the goodness of our application we have performed many tests and comparisons. Here we present the simple example shown in Figure 1. Let us imagine that a fluid flows in a long circular pipe with a temperature which changes with time according to the law drawn in Figure 1, on the right. We want to estimate the temperature distribution at different time steps inside the medium and compute the temperature at the point P. It is interesting to note that for this simple problem all the boundary conditions described in (2) have to be used. A unit density and specific heat have been taken for the medium, while a thermal conductivity of 5 has been chosen for this benchmark. The environmental temperature has been set to 0 and the convection coefficient to 5.

Figure 4 - The holed plate under tension considered in this work. We have taken advantage of the symmetry with respect to the x and y axes to model only a quarter of the whole plate. Appropriate boundary conditions have been adopted, as highlighted in blue.
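The "factorize once, back-substitute many times" strategy mentioned above maps directly onto Scilab's sparse Cholesky routines; a minimal sketch, assuming a sparse symmetric positive definite matrix A and a matrix b whose columns are the right-hand sides of the successive steps:

// Factorise the sparse SPD matrix once...
Achol = chfact(A);
// ...then each time step only needs a cheap forward/backward substitution
for n = 1:nsteps
    T = chsolve(Achol, b(:, n));   // b(:, n): right-hand side of the current step
end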

The structural solver
If we want to solve a thermo-structural problem (see [3] and the references reported therein) we obviously need a solver able to deal with the elasticity equations. We focus on the simplest case, that is, two dimensional problems (plane strain, plane stress and axi-symmetric problems) with a completely linear, elastic and isotropic response. We have to take into account that a temperature field induces thermal deformations inside a solid medium. Actually:

(7)

where the repeated index i indicates that no shear deformation can appear, and T_REF represents the reference temperature at which no deformation is produced inside the medium. Once the temperature field is known at each time step, it is possible to compute the induced deformations and then the stress state. For the sake of simplicity we imagine that the loads acting on the structure are not able to produce dynamic effects and therefore, if we neglect the body force contributions, the equilibrium equations reduce to:

(8)

or, with the indicial notation,

The elastic deformation ε can be computed as the difference between the total and the thermal contributions:

(9)

which can be expressed in terms of the displacement vector field u as:

or, with the indicial notation

(10)

A linear constitutive law for the medium can be adopted and written as:

(11)

where the matrix D is expressed in terms of the constants μ and λ, which describe the elastic response of the medium. Finally, after some manipulation involving equations (9), (10) and (11), one obtains the following governing equation, which is expressed in terms of the displacement field u only:

(12)

As usual, the above equation has to be solved together with the boundary conditions, which typically are of Dirichlet (imposed displacements on Γu) or Neumann kind (imposed tractions on Γp):

(13)
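For reference, in the isotropic case the constitutive matrix D of equation (11) has a simple closed form as a function of the Young modulus and the Poisson ratio; the following sketch reports the standard textbook expressions (it is not code taken from the solver described here):

// Isotropic constitutive matrix D for 2D problems (standard textbook expressions)
function D = elastic_matrix(E, nu, plane_stress)
    if plane_stress then
        D = E/(1-nu^2) * [1  nu 0;
                          nu 1  0;
                          0  0  (1-nu)/2];
    else  // plane strain
        D = E/((1+nu)*(1-2*nu)) * [1-nu nu   0;
                                   nu   1-nu 0;
                                   0    0    (1-2*nu)/2];
    end
endfunction

D = elastic_matrix(1.0, 0.3, %t);   // unit Young modulus and nu = 0.3, as in the benchmark below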



involving a plate of unit thickness under tension with a hole, as shown in Figure 4. A unit Young modulus and a Poisson coefficient of 0.3 have been adopted to model the material behavior. The vertical displacements computed with ANSYS and our solver are compared in Figure 5: it can be seen that the two colored patterns are very similar and that the maximum values are very closed one another (ANSYS gives 551.016 and we obtain 551.014). In Figure 6 the tensile stress in y-direction along the Figure 5 - The displacement in y direction computed with ANSYS (left) and our solver (right). symmetry line AB is reported. It can be seen The maximum computed values for this component are 551.016 and 551.014 respectively. that there is a good agreement between the results provided by the two solvers. Thermo-elastic analysis of a pressure vessel In the oil-and-gas industrial sector it happens very often to investigate the structural behavior of pressure vessels. These structures are used to contain gasses or fluids; sometimes also chemical reactions can take place inside these devices, with a consequent growth in temperature and pressure.

Figure 6 - The y-component of stress along the vertical symmetry line AB (see Figure 4). The red line reports the values computed with ANSYS while the blue one shows the results obtained with our solver. No appreciable difference is present.

For this reason the thin shell of the vessel has to be checked taking into account both the temperature distribution, which inevitably appears within the structure, and the mechanical loads. If we neglect the holes and the nozzles which could be present, the geometry of these structures can be viewed, very often, as a solid of

As for the case of the thermal solver, many tests have been performed to check the accuracy of the results. Here we propose a classical benchmark involving a plate of unit thickness under tension with a hole, as shown in Figure 4. A unit Young modulus and a Poisson coefficient of 0.3 have been adopted to model the material behavior. The vertical displacements computed with ANSYS and with our solver are compared in Figure 5: it can be seen that the two colored patterns are very similar and that the maximum values are very close to one another (ANSYS gives 551.016, we obtain 551.014). In Figure 6 the tensile stress in the y-direction along the symmetry line AB is reported. It can be seen that there is a good agreement between the results provided by the two solvers.

Figure 5 - The displacement in y direction computed with ANSYS (left) and our solver (right). The maximum computed values for this component are 551.016 and 551.014 respectively.

Figure 6 - The y-component of stress along the vertical symmetry line AB (see Figure 4). The red line reports the values computed with ANSYS while the blue one shows the results obtained with our solver. No appreciable difference is present.

Thermo-elastic analysis of a pressure vessel
In the oil-and-gas industrial sector it is very common to investigate the structural behavior of pressure vessels. These structures are used to contain gasses or fluids; sometimes chemical reactions can also take place inside these devices, with a consequent growth in temperature and pressure. For this reason the thin shell of the vessel has to be checked taking into account both the temperature distribution, which inevitably appears within the structure, and the mechanical loads. If we neglect the holes and the nozzles which could be present, the geometry of these structures can very often be viewed as a solid of revolution. Moreover, the applied loads and the boundary conditions reflect this symmetry, and therefore it is very common, when applicable, to calculate a vessel using an axi-symmetric approach.

In the following we propose a thermo-mechanical analysis of the vessel shown in Figure 7. The fluid inside the vessel has a temperature which follows a two-step law (see Figure 7, on the right) and a constant pressure of 1 [MPa]. We would like to know the temperature reached on the external surface and the maximum stress inside the shell, with particular attention to the upper neck. We imagine that the vessel is made of a common steel and that it has an external thermal insulating cover: the relevant material properties are listed in Table 2.

Figure 7 - A simple sketch illustrates the vessel considered in this work. The revolution axis is drawn with the red dashed line and some dimensioning (in [m]) is reported. The nozzle on top is closed thanks to a cap which is considered completely bonded to the structure. The nozzle neck is not covered by the insulating material. On the right the fluid temperature versus time is plotted. A pressure of 1 [MPa] acts inside the vessel.

Table 2 - The thermal and the mechanical properties of the materials involved in the analysis.
Material   | Density [kg/m3] | Specific heat [J/kg°C] | Thermal conductivity [W/m°C] | Young modulus [N/m2] | Poisson ratio [-] | Thermal expansion coeff. [1/°C]
Steel      | 7850            | 434                    | 60.5                         | 2.0·10^11            | 0.30              | 1.2·10^-5
Insulation | 937             | 303                    | 0.5                          | 1.1·10^9             | 0.45              | 2.0·10^-4

When dealing with a thermo-mechanical problem it can be reasonable to use two different meshes to model and solve the heat transfer and the elasticity equations. Actually, in the first case we are usually interested in accurately modeling the temperature gradients, while in the second case we would like to have a reliable estimate of the stress peaks, which in principle could appear in different zones of the domain. For this reason we decided to allow the use of different computational grids: once the temperature field is known, it is mapped onto the structural mesh, giving our solver better flexibility. In the case of the pressure vessel we decided to use a uniform mesh within the domain for the thermal solver, while we adopted a finer mesh near the neck for the stress computation.

In Figure 8 the temperature field at time 150 [s] is drawn: on the right a detail of the neck is plotted. It can be seen that the insulating material plays an important role: the surface temperature is actually kept very low. As mentioned above, a uniform mesh is employed in this case. In Figure 9 the radial (left) and the vertical (right) deformed shapes are plotted. In Figure 10 the von Mises stress is drawn and, on the right, a detail in proximity of the neck is proposed: it can be easily seen that the mesh has been refined in order to better capture the stress peaks in this zone of the vessel.

Figure 9: The radial (left) and vertical (right) displacement of the vessel.

Figure 10: The von Mises stress and a detail of the neck, on the right, together with the structural mesh.

Conclusions
In this work it has been shown how it is possible to use Scilab to solve thermo-mechanical problems. For the sake of simplicity the focus has been placed on two-dimensional problems, but the reader should remember that the extension to 3D problems does not require any additional effort from a conceptual point of view. Some simple benchmarks have been proposed to show the effectiveness of the solver written in Scilab. The reader should have appreciated the fact that industrial-like problems can also be solved efficiently, as demonstrated by the complete thermo-mechanical analysis of a pressure vessel proposed at the end of the paper.

References
[1] http://www.scilab.org/ for more information on Scilab
[2] Gmsh can be freely downloaded from: http://www.geuz.org/gmsh/
[3] O. C. Zienkiewicz, R. L. Taylor, The Finite Element Method: Basic Concepts and Linear Applications (1989), McGraw Hill.
[4] M. R. Gosz, Finite Element Method. Applications in Solids, Structures and Heat Transfer (2006), Taylor & Francis.
[5] Y. W. Kwon, H. Bang, The Finite Element Method using Matlab (2006), CRC Press, 2nd edition.

For more information:
Massimiliano Margonari - Openeering
info@openeering.com




A Simple Parallel Implementation of a FEM Solver in Scilab

Nowadays many simulation packages can take advantage of multi-processor/multi-core computers in order to reduce the solution time of a given task. This not only reduces the annoying delays typical in the past, but allows the user to evaluate larger problems, to perform more detailed analyses and to explore a greater number of scenarios. Engineers and scientists who are involved in simulation activities are generally familiar with the term “High Performance Computing” (HPC), which has been coined to indicate the ability to use a powerful machine to solve hard computational problems efficiently. One of the most important keywords related to HPC is certainly parallelism. The total execution time can be reduced if the original problem can be divided into a number of subtasks which are then tackled concurrently, that is in parallel, by a number of cores. To take full advantage of this strategy three conditions have to be satisfied: first, the problem we want to solve has to exhibit a parallel nature or, in other words, it should be possible to reformulate it as smaller problems, which can be solved simultaneously and whose solutions, opportunely combined, give the solution of the original large problem. Secondly, the software has to be organized and written to exploit this parallel nature, so the serial version of the code typically has to be modified where necessary. Finally, we need the right hardware to support this strategy. Of course, if one of these three conditions is not fulfilled, the benefits can be poor or, in the worst case, even non-existent. It is worth mentioning that not all the problems arising from engineering can be solved effectively with a parallel approach: some of them have an associated numerical solution procedure which is intrinsically serial.

One parameter which is usually reported in the technical literature to judge the quality of a parallel implementation of an algorithm or a procedure is the so-called speedup, defined as the ratio between the execution time on a single-core machine and the same quantity on a multi-core machine, S = T1/Tp, p being the number of cores used in the computation. Ideally, we would like a speedup equal to the number of cores: unfortunately this does not happen, mainly (but not only) because some serial operations have to be performed during the solution. In this context it is interesting to mention Amdahl’s law, which bounds the theoretical speedup that can be obtained, given the fraction of serial operations f ∈ [0,1] that has to be performed during the run. It can be written as:

S = 1 / ( f + (1 − f)/p )

It can be easily understood that the speedup S is strongly (and badly) influenced by f rather than by p. If we imagine having an ideal computer with an infinite number of cores (p = ∞) and an algorithm with just 5% of operations that have to be performed serially (f = 0.05), we get a speedup of 20 at most. This clearly means that it is worth investing in algorithms rather than simply increasing the number of cores… Some criticism has been leveled at this law in the past, saying that it is too pessimistic and unable to correctly estimate the real theoretical speedup; in any case, we think that the most important lesson to learn is that a good algorithm is much more important than a good machine.
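The bound above is easy to explore numerically; the tiny Scilab sketch below (ours, not part of the original code) evaluates it and reproduces the 5%-serial example quoted in the text.

// Amdahl's law: theoretical speedup for serial fraction f and p cores
function S = amdahl(f, p)
    S = 1 ./ (f + (1 - f) ./ p);
endfunction

disp(amdahl(0.05, %inf));          // -> 20, the limit mentioned above
disp(amdahl(0.05, [2 4 8 16]));    // attainable speedup for a few core counts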

As said before, many commercial packages have offered the possibility to run parallel solutions for many years. With a simple internet search it is quite easy to find benchmarks which advertise the high performance and high speedup obtained using various architectures and solving different problems. All these noticeable results are usually the outcome of very hard code-implementation work. Probably the most widely used communication protocols for implementing parallel programs, through opportunely provided libraries, are MPI (Message Passing Interface), PVM (Parallel Virtual Machine) and OpenMP (Open Multi-Processing); there certainly are other protocols, and also variants of the aforementioned ones, such as MPICH2 or HP-MPI, which have gained the attention of programmers for some of their features. As the reader has probably noticed, all the acronyms listed above contain a letter “P”. With a bit of irony we could say that it always stands for “problems”, in view of the difficulties that a programmer has to tackle when trying to implement a parallel program using such libraries. Actually, the use of these libraries is often a matter for expert programmers only, and they cannot be easily accessed by engineers or scientists who simply want to cut the solution time of their applications.

In this paper we would like to show that a naïve but effective parallel application can be implemented without a great programming effort and without using any of the above mentioned protocols. We used the Scilab platform (see [1]) because it is free and it provides a very easy and fast way to implement applications; on the other hand, the fact that Scilab scripts are essentially interpreted, and not compiled, is paid for with code that is not fast in an absolute sense. It is however possible to rewrite all the scripts in a compiled language, such as C, to get a faster run-time code. The main objective of this work is actually to show that it is possible to implement a parallel application and solve large problems efficiently (i.e. with a good speedup) in a simple way, rather than to propose a super-fast application. To this aim, we chose the stationary heat transfer equation written for a three-dimensional domain, together with appropriate boundary conditions.


A standard Galerkin finite element procedure (see [4]) is then adopted and implemented in Scilab in such a way as to allow a parallel execution. This represents a sort of elementary “brick” for us: more complex problems involving partial differential equations can be solved starting from here, adding new features whenever necessary.

The stationary heat transfer equation
As mentioned above, we decided to consider the stationary and linear heat transfer problem for a three-dimensional domain Ω. Usually it is written as:

k ∇²T + f = 0   in Ω    [1]

together with Dirichlet, Neumann and Robin boundary conditions, which can be expressed as:

T = T̄ on Γ_T,    −k ∂T/∂n = q̄ on Γ_q,    −k ∂T/∂n = h (T − T_env) on Γ_c    [2]

The conductivity k is considered as constant, while f represents an internal heat source. On some portions of the domain boundary we can have imposed temperatures T̄, given fluxes q̄ and also convection with an environment characterized by a temperature T_env and a convection coefficient h. The discretized version of the Galerkin formulation for the above equations leads to a system of linear equations which can be shortly written as:

[K]{T} = {F}    [3]

The matrix of coefficients [K] is symmetric, positive definite and sparse; this means that a great amount of its terms are identically zero. The vectors {T} and {F} collect the unknown nodal temperatures and the nodal equivalent loads. If large problems have to be solved, it immediately appears that an effective strategy to store the matrix terms is needed. In our case we decided to store in memory the non-zero terms row by row in a single, opportunely allocated vector, together with their column positions: in this way the terms can also be accessed efficiently. We decided not to take advantage of the symmetry of the matrix (actually, only the upper or the lower part could be stored, requiring only half as much storage) in order to simplify the implementation a little. Moreover, this allows us to potentially reuse the same pieces of code, without any change, for the solution of problems which lead to a non-symmetric coefficient matrix.

The matrix coefficients, as well as the known vector, can be computed in a standard way, performing the integration of known quantities over the finite elements in the mesh. Without any loss of generality, we decided to only use ten-noded tetrahedral elements with quadratic shape functions (see [4] for more details on finite elements). The solution of the resulting system is performed through the preconditioned conjugate gradient method (PCG) (see [5] for details). In Figure 1 the pseudo-code of a classical PCG scheme is reported: the reader should observe that the solution process requires, first, the product between the preconditioner and a given vector (*) and, secondly, the product between the system matrix and another known vector (**). This means that the coefficient matrix (and also the preconditioner) is not explicitly required, as it is when using direct solvers, and it need not be directly computed and stored. This is a key feature of all iterative solvers and we can certainly take advantage of it when developing a parallel code.

Fig. 1 - The pseudo-code for a classical preconditioned conjugate gradient solver. It can be noted that during the iterative solution it is required to compute two matrix-vector products involving the preconditioner M (*) and the coefficient matrix K (**).

The basic idea is to partition the mesh in such a way that, more or less, the same number of elements is assigned to each core (process) involved in the solution, in order to have a well-balanced job and therefore to fully exploit the potential of the machine. In this way each core fills a portion of the matrix and is able to compute some of the terms resulting from the matrix-vector product, when required. It is quite clear that some coefficient matrix rows will be split over two or more processes, since some nodes are shared by elements sitting on different cores. The number of overlapping rows strongly depends on the way we partition the mesh: the ideal partition produces the minimum overlap, leading to the smallest number of non-zero terms that each process has to compute and store. In other words, the efficiency of the solution process can depend on how we partition the mesh. To solve this problem, which is really a hard one, we decided to use the partition functionality of gmsh (see [2]), which allows the user to partition a mesh using a well-known library, METIS (see [3]), explicitly written to solve this kind of problem. The resulting mesh partition is certainly close to the best one and our solver uses it when spreading the elements over the parallel processes. An example of a mesh partition performed with METIS is plotted in Figure 3, where a car model mesh is considered: the elements have been drawn with different colors according to their partition.
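To make the row-partitioned matrix-vector product concrete, here is a minimal Scilab sketch (our illustration, with hypothetical variable names): each process owns a block of rows of K stored in compressed-row form and computes its share of K*x, which the master then stacks together.

// Sketch: matrix-vector product for one process owning a block of rows of K.
// The local rows are stored in compressed-row form:
//   val(k) : k-th local non-zero value,  col(k) : its column index,
//   ptr(i) : index in val/col where local row i starts (ptr(nloc+1) closes it)
function y = local_matvec(val, col, ptr, x)
    nloc = length(ptr) - 1;
    y = zeros(nloc, 1);
    for i = 1:nloc
        for k = ptr(i):(ptr(i+1)-1)
            y(i) = y(i) + val(k) * x(col(k));
        end
    end
endfunction

// The master simply concatenates the partial results:
// y = [y_proc1; y_proc2; ...; y_procN];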



This kind of partition is obviously suitable when the problem is run on a four-core machine. As a result, we can imagine that the coefficient matrix is split row-wise and each portion is filled by a different process running concurrently with the others: the matrix-vector products required by the PCG can then also be computed in parallel by different processes. The same approach can obviously be extended to the preconditioner and to the postprocessing of the element results. For the sake of simplicity we decided to use a Jacobi preconditioner: this means that the matrix [M] in Figure 1 is just the main diagonal of the coefficient matrix. This choice allows us to trivially implement a parallel version of the preconditioner, but it certainly produces poor results in terms of convergence rate: the number of iterations required to converge is usually quite high and could be reduced by adopting a more effective strategy. For this reason the solver will hereafter be referred to as JCG rather than PCG.
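For reference, the serial skeleton of such a Jacobi-preconditioned conjugate gradient is sketched below in Scilab (our own simplified version of the scheme in Figure 1, not the article's actual code): the solver only needs a routine returning K*x, which is exactly the operation delegated to the parallel processes.

// Sketch of a Jacobi-preconditioned conjugate gradient (JCG).
// matvec : function returning K*x,  d : main diagonal of K (Jacobi preconditioner)
function [x, iter] = jcg(matvec, d, f, tol, maxit)
    x = zeros(f);            // initial guess
    r = f - matvec(x);       // residual
    z = r ./ d;              // apply the Jacobi preconditioner (*)
    p = z;
    rz = r' * z;
    for iter = 1:maxit
        Kp    = matvec(p);   // matrix-vector product (**), done in parallel in the solver
        alpha = rz / (p' * Kp);
        x     = x + alpha * p;
        r     = r - alpha * Kp;
        if norm(r) / norm(f) < tol then break; end
        z     = r ./ d;
        rznew = r' * z;
        p     = z + (rznew / rz) * p;
        rz    = rznew;
    end
endfunction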


All the models proposed in the following have been solved on a Linux 64-bit machine equipped with 8 cores and 16 Gb of shared memory. It has to be said that our solver does not necessarily require such powerful machines to run: the code has actually been written and run on a common Windows 32-bit dual-core notebook.

A first benchmark: the Mach 4 model
A first benchmark is proposed to test our solver: we downloaded from the internet a funny CAD model of the Mach 4 car (see the Japanese anime Mach Go Go Go), produced a mesh of it and defined a heat transfer problem including all kinds of boundary conditions. The problem has no physical nor engineering meaning: the objective here is to have a sufficiently large and non-trivial model to solve on a multicore machine, to compare the results with those obtained with a commercial software and to measure the speedup factor.

Table 1: Some data pertaining to the Mach 4 model.
n° of nodes                        511758
n° of tetrahedral elements         317767
n° of unknowns                     509381
n° of nodal imposed temperatures     2377

A brief description of the solver structure
In this section we would like to briefly describe the structure of our software and highlight some key points. The Scilab 5.2.2 platform has been used to develop our FEM solver: we only used the tools available in the standard distribution (i.e. avoiding external libraries) to facilitate the portability of the resulting application and, eventually, to allow a fast translation to a compiled language.

A master process governs the run. It first reads the mesh partition, organizes the data and then starts a certain number of parallel slave processes, according to the user request. At this point the parallel processes read the mesh file and load the information needed to fill their own portion of the coefficient matrix and of the known vector. Once the slave processes have finished their work, the master starts the JCG solver: when a matrix-vector product has to be computed, the master process asks the slave processes to compute their contributions, which are then appropriately summed together by the master. When the JCG reaches the required tolerance, the postprocessing phase (e.g. the computation of fluxes) is performed in parallel by the slave processes. The solution ends with the writing of the results to a text file.

A communication protocol is mandatory to manage the run. We decided to use binary files to broadcast and receive information from the master to the slave processes and conversely. The slave processes wait for the binary files and then read them: once the task (e.g. the matrix-vector product) has been performed, they write the result in another binary file which is read back by the master process. This way of managing communication is very simple but certainly not the best from an efficiency point of view: writing and reading files, even binary ones, can take a non-negligible time. Moreover, the speedup is certainly badly influenced by this approach.
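As an illustration of this file-based protocol, the sketch below (ours; file names and the polling strategy are assumptions, not the article's actual implementation) shows how a master could send a vector to a slave and wait for the answer using Scilab's binary I/O primitives.

// Sketch of the file-based master/slave exchange (hypothetical file names)
function write_vec(fname, x)
    fd = mopen(fname, "wb");
    mput(length(x), "i", fd);    // vector length as a 32-bit integer
    mput(x,         "d", fd);    // then the values as doubles
    mclose(fd);
endfunction

function x = read_vec(fname)
    fd = mopen(fname, "rb");
    n  = mget(1, "i", fd);
    x  = mget(n, "d", fd);
    mclose(fd);
endfunction

// Master side: broadcast the current vector, then wait for the slave's partial product
p = rand(100, 1);                          // example vector to send
write_vec("p_to_slave1.bin", p);
while isempty(fileinfo("y_from_slave1.bin")) do
    sleep(10);                             // poll every 10 ms until the answer file appears
end
y1 = read_vec("y_from_slave1.bin");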



Fig. 3 - The Mach 4 mesh has been divided in 4 partitions (see colors) using the METIS library available in gmsh. This mesh partition is obviously suitable for a 4 cores run.

In Table 1 some data pertaining to the mesh is reported. The same mesh has been solved with our solver and with ANSYS Workbench for comparison purposes. In Table 2 the time needed to complete the analysis (Analysis time), to compute the system matrix and vector terms (System fill-in time) and to solve the system with the JCG are reported, together with their speedups. The termination accuracy has always been set to 10^-6: with this set-up the JCG performs 1202 iterations to converge.

Table 2: Mach 4 benchmark. The table collects the times needed to solve the model, to perform the system fill-in and to solve the system through the JCG. The speedups are also reported in the right part of the table.
n° of cores | Analysis time [s] | System fill-in time [s] | JCG time [s] | Analysis speedup | System fill-in speedup | JCG speedup
1           | 6960              | 478                     | 5959         | 1.00             | 1.00                   | 1.00
2           | 4063              | 230                     | 3526         | 1.71             | 2.08                   | 1.69
3           | 2921              | 153                     | 2523         | 2.38             | 3.12                   | 2.36
4           | 2411              | 153                     | 2079         | 2.89             | 3.91                   | 2.87
5           | 2120              | 91                      | 1833         | 3.28             | 5.23                   | 3.25
6           | 1961              | 79                      | 1699         | 3.55             | 6.08                   | 3.51
7           | 1922              | 68                      | 1677         | 3.62             | 7.03                   | 3.55
8           | 2093              | 59                      | 1852         | 3.33             | 8.17                   | 3.22

Fig. 2 - The speedup values collected in Table 2 have been plotted here against the number of cores.

It immediately appears that the global speedup is strongly influenced by the JCG solution phase, which does not scale as well as the fill-in phase. This is certainly due to the fact that during the JCG phase the parallel processes have to communicate much more than during the other phases: a guess solution vector actually has to be written at each iteration, and the result of the matrix-vector product has to be written back to the master process by the parallel runs. The adopted communication protocol, which is extremely simple and easy to implement, shows here all its limits. However, we would like to underline that the obtained speedup is more than satisfactory. In Figure 4 the temperature field computed by ANSYS Workbench (top) and the same quantity obtained with our solver (bottom), working on the same mesh, are plotted.

A second benchmark: the motorbike engine model
The second benchmark involves the model of a motorbike engine (also in this case the CAD file has been downloaded from the internet) and the same steps already performed for the Mach 4 model have been repeated. The model is larger than before (see Table 3) and it can be seen in Figure 6, where the grid is plotted. It has to be mentioned, however, that conceptually the two benchmarks have no differences; the main concern was, also in this case, to have a model with a non-trivial geometry and boundary conditions. The final termination accuracy for the JCG has been set to 10^-6, reaching convergence after 1380 iterations. Table 4 is analogous to Table 2: the time needed to complete the different phases of the job and the analysis time are reported, as obtained for runs performed with an increasing number of parallel processes. Also in this case, the trend in the reduction of time with the increasing number of cores seems to follow the same law as before (see Figure 5).

Table 3: Some data pertaining to the motorbike engine model.
n° of nodes                        2172889
n° of tetrahedral elements         1320374
n° of unknowns                     2136794
n° of nodal imposed temperatures     36095

Fig. 4 - Mach 4 model: the temperature field computed with ANSYS Workbench (top) and the same quantity computed with our solver (bottom). No appreciable differences are present.


n° of cores | Analysis time [s] | System fill-in time [s] | JCG time [s] | Analysis speedup | System fill-in speedup | JCG speedup
1           | 33242             | 2241.0                  | 28698        | 1.00             | 1.00                   | 1.00
2           | 20087             | 1116.8                  | 17928        | 1.65             | 2.01                   | 1.60
3           | 14679             | 744.5                   | 12863        | 2.26             | 3.01                   | 2.23
4           | 11444             | 545.6                   | 9973         | 2.90             | 4.11                   | 2.88
5           | 9844              | 440.9                   | 8549         | 3.38             | 5.08                   | 3.36
6           | 8694              | 369.6                   | 7524         | 3.82             | 6.06                   | 3.81
7           | 7889              | 319.7                   | 6813         | 4.21             | 7.01                   | 4.21
8           | 8832              | 275.7                   | 7769         | 3.76             | 8.13                   | 3.69

Table 4: Motorbike engine benchmark. The table collects the times needed to solve the model (Analysis time), to perform the system fill-in (System fill-in) and to solve the system through the JCG, together with their speedup.




Fig. 5 - A comparison between the speedups obtained with the two benchmarks. The ideal speedup (the main diagonal) is highlighted with a black dashed line. In both cases it can be seen that the speedups follow roughly the same linear trend, reaching a value between 3.5 and 4 when using 6 cores. The performance deteriorates drastically when involving more than 6 cores, probably because the machine on which the runs were performed has only 8 cores.

Fig. 6 - The motorbike engine mesh used for this second benchmark.

The run with 8 parallel processes does not perform well because the machine has only 8 cores and we start up 9 processes (1 master and 8 slaves): this certainly wastes performance. In Figure 7 a comparison between the temperature field computed with ANSYS Workbench (top) and with our solver (bottom) is proposed. Also on this occasion no differences are present.

Conclusions
In this work it has been shown how it is possible to use Scilab to write a parallel and portable application with a reasonable programming effort, without involving hard-to-use message passing protocols. The three-dimensional heat transfer equation has been solved through a finite element code which takes advantage of the parallel nature of the adopted algorithm: this can be seen as a sort of “elementary brick” for developing more complicated problems. The code could be rewritten in a compiled language to improve the run-time performance; the message passing technique could also be reorganized to allow faster communication between the concurrent processes, possibly also involving different machines connected through a network.

Fig. 7 - The temperature field computed by ANSYS Workbench (top) and by our solver (bottom). Also in this case the two solvers lead to the same results, as can be seen by looking at the plots.

Stefano Bridi is gratefully acknowledged for his precious help.

References
[1] http://www.scilab.org/ for more information on Scilab.
[2] Gmsh can be freely downloaded from: http://www.geuz.org/gmsh/
[3] http://glaros.dtc.umn.edu/gkhome/views/metis for more details on the METIS library.
[4] O. C. Zienkiewicz, R. L. Taylor, (2000), The Finite Element Method, volume 1: the basis. Butterworth-Heinemann.
[5] Y. Saad, (2003), Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM.

For more information on this document please contact the author: Massimiliano Margonari - Openeering info@openeering.com



The solution of exterior acoustic problems with Scilab

The boundary element method (BEM for short) is a well-established numerical method for the solution of boundary integral equations, known within academia since the Seventies. It has been proposed to solve a wide range of engineering problems, from structural mechanics up to fluid mechanics, with alternating success (see [1], [2] and the bibliography reported there for more details on the BEM). This method has often been compared to the finite element method (FEM), underlining from time to time the “pros” or the “cons”. The finite element method is credited with a relatively “simple and scalable” mathematical approach, which in practice means the capability to model a wide range of problems, including multiphysics and nonlinearities of any kind. On the contrary, the BEM is generally considered to be based on a much more difficult mathematical framework, which leads to a non-trivial implementation. Moreover, the range of applications which can be efficiently tackled with the BEM, and where its benefits are evident, is definitely smaller than for the FEM. These are probably some of the reasons why the BEM has not gained the attention of the software houses which develop commercial simulation software for engineers. Some notable exceptions are however present in the scenario; one example is given by software dedicated to the solution of acoustic and vibro-acoustic problems. The reason is quite simple: it has been clearly shown that the BEM makes it possible to solve acoustic problems, above all those involving unbounded domains, in a smarter and, probably, more efficient way than the FEM.

One of the most limiting aspects of the BEM is that, N being the number of unknowns in the model, the computational cost and the data storage grow quadratically with N (in short, O(N²)) in the best case. This obviously represents a tremendous limit when one tries to use this technique to solve industrial-like models, which are usually characterized by “large” Ns. In the last decade a new approach, the fast multipole method (FMM), has been

applied to the traditional BEM to mitigate this limit: the resulting approach yields a computational cost and a storage requirement which grow as O(N log(N)), which obviously makes the BEM much more appealing than in the past. Implementations of this approach into commercial software are very recent. It is interesting to visit the websites of LMS International [5], FFT [6] and CYBERNET SYSTEMS [7] just to get an idea of what the market offers today.

In this paper we present an implementation of the traditional collocation approach for the solution of exterior acoustic problems in Scilab. The idea here is to show that it is possible to solve non-trivial problems in a reasonable time or, better, in a time compatible with an industrial context. To improve the global performance of the first version of the application, which had been fully written in the Scilab language, two steps have been performed. Firstly, the most time-consuming pieces of code have been rewritten in C, compiled and linked directly into Scilab; the solution time, which was really high, has been reduced considerably thanks to this change. Finally, another step has been performed: some OpenMP directives (see [8]) have been introduced in the C routines in order to parallelize the execution and allow the simulation to run on a multicore shared-memory machine. With this improvement we finally get a consistent reduction of the solution time.

Figure 1: The OpenMP logo. The OpenMP® API specification for parallel programming.

Figure 2: The picture schematically shows a typical exterior acoustic problem. A closed and vibrating body, bounded by a surface Γ, is embedded in an unbounded domain Ω where the pressure wave can propagate. The aim is to predict the pressure and the velocity field on Γ and, potentially, in any point of Ω.

It is important to remember that, theoretically, there is no limit to the number of threads that can be activated during a run by the OpenMP directives: the real limit is set by the hardware used to run the simulation.

A brief summary of the boundary element method (BEM) in acoustics
The Helmholtz equation is the basis of linear acoustics. It can be used to model the propagation of harmonic pressure waves and it can be written as:



∇²p + k²p = −q   in Ω    (1)

where p is the complex pressure amplitude, q represents the volume sources and k is the wave number. In this work we imagine that the domain Ω is unbounded and that appropriate boundary conditions (Dirichlet, Neumann and Robin) are prescribed on the boundary Γ (see Figure 2). Equation (1) can be solved numerically by means of the FEM, which inevitably requires the introduction of a non-physical cut in the unbounded domain Ω, with a consequent violation of the Sommerfeld radiation condition at infinity (see [2]). This means that the numerical model could suffer from undesired spurious wave scattering, therefore producing wrong results. In the literature (see [1] and [2]) we can also find an integral version of the Helmholtz equation, which is the starting point for the BEM. Specifically, for exterior problems it is:

(2)

The boundary points x and y are called the collocation and the field point respectively. The kernel function G is known as the fundamental solution, and it gives the pressure field induced by a point load in an infinite medium. The pressure and the velocity fields are p and v; c(x) is a term which depends on the solid angle of the boundary at x, but it always assumes the value 0.5 if the boundary is smooth. One interesting feature of equation (2), which is fully equivalent to equation (1), is that it only involves integrals over the boundary Γ: this implies a sort of “reduction of dimensionality”, since the problem is shifted from the domain Ω to the boundary Γ. Moreover, the Sommerfeld radiation condition is naturally satisfied by (2) (see [2]), leading to a correct modeling of the condition at infinity.
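For the three-dimensional Helmholtz problem the fundamental solution mentioned above is the free-space Green's function; the small Scilab sketch below evaluates it (using the e^{+ikr} time-harmonic convention, which is an assumption on our part, since the article does not state its convention).

// Free-space Green's function of the 3D Helmholtz equation:
// G(x,y) = exp(i*k*r) / (4*%pi*r), with r = |x - y|
function G = green3d(x, y, k)
    r = norm(x - y);
    G = exp(%i * k * r) / (4 * %pi * r);
endfunction

// Example: kernel value between a collocation and a field point
disp(green3d([0;0;0], [1;0;0], 2*%pi*100/340));  // k = 2*pi*f/c, f = 100 Hz, c = 340 m/s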


Once the boundary has been discretized (we always adopt six-noded triangular elements), equation (2) can be “collocated” at each node x of the grid: this produces a system of linear equations which involves a fully populated, non-symmetric, complex coefficient matrix, as well as a known vector. Usually, equation (2) has to be solved as many times as the number of frequencies we want to analyze. Very often the boundary conditions applied to Γ are all of Neumann kind, that is, the normal velocity on the boundary is imposed, usually coming from a previously performed structural analysis. Unfortunately, the exterior Neumann problem and the interior Dirichlet problem are self-adjoint: this means that, if equation (2) is used as it is, some non-physical singularities in the response could appear in proximity of the natural frequencies of the vibrating body. In the literature many numerical treatments have been proposed to overcome this non-physical behavior. We decided to adopt the CHIEF technique (see [2]), which simply consists in collocating the integral equation at a certain number of fictitious points randomly positioned inside the body. The number of resulting equations becomes larger than the number of unknowns and therefore a least squares solver is used (a sketch of this choice is given below). If the CHIEF technique is not adopted, a classical direct or iterative solver can be used. Once the pressure and the velocity fields are known on the boundary Γ, it is possible to compute the same quantities at any desired point x in the domain Ω, simply by collocating equation (2) there and taking c(x) equal to one. Hereafter, this last phase of the analysis will be referred to as postprocessing.
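The solver-selection logic just described can be summarized, in a purely illustrative Scilab form (A, b and the use of lsq/backslash are our assumptions, not the article's code), as follows: with CHIEF the rectangular system is solved in the least-squares sense, otherwise the square system is solved directly.

// Sketch: choosing the solver according to the CHIEF option.
// A : complex coefficient matrix (rectangular if CHIEF points are added)
// b : known vector
use_chief = %t;

// small random example, just to make the sketch runnable
n = 50; m = n + 16;                        // 16 extra CHIEF equations
A = rand(m, n) + %i*rand(m, n);
b = rand(m, 1) + %i*rand(m, 1);

if use_chief then
    p = lsq(A, b);             // least-squares solution of the overdetermined system
else
    p = A(1:n, :) \ b(1:n);    // square system: direct solution
end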

The implementation in Scilab
One of the most computationally expensive parts of a BEM code is usually the one where the boundary integrals (see equation (2)) are computed. Unfortunately, such integrals can exhibit a singular behavior, with different degrees of singularity, when y approaches x. The effective numerical evaluation of these integrals has been a subject of discussion for many years; today we are able to estimate weakly singular, strongly singular and hypersingular integrals with sufficient accuracy (see [3] and the references therein), but the required computations are really time consuming, even when such algorithms are coded in an optimized way. The first version of the acoustic solver has been written completely in the Scilab scripting language and some profiling has been done, using simple benchmarks, to find out the most expensive parts of the code. One valuable feature of Scilab is that integrated tools for easily profiling the scripts are available; these tools can be used iteratively in order to get a final, optimized code in a short time. Thanks to the Scilab profiling tools, the most important bottlenecks in a function can be easily found and fixed. In Figure 3 the results obtained for a given monitored function are plotted in terms of number of calls, complexity and CPU time.

Figure 3: An example of the profiling results obtained running a simple benchmark.
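For readers who want to reproduce this workflow, a minimal profiling session could look like the sketch below (the function name is made up, and we assume the add_profiling/showprofile utilities of the Scilab 5 distribution).

// Sketch: profiling a user function in Scilab (hypothetical function my_fillin)
function K = my_fillin(n)
    K = zeros(n, n);
    for i = 1:n
        for j = 1:n
            K(i, j) = 1 / (abs(i - j) + 1);   // dummy kernel evaluation
        end
    end
endfunction

add_profiling("my_fillin");   // instrument the function
K = my_fillin(300);           // run the benchmark
showprofile(my_fillin);       // per-line call counts, complexity and CPU time
// plotprofile(my_fillin);    // same information in graphical form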


This activity has been fundamental to find the most important bottlenecks in the application and to remove them. As expected, we discovered that the fill-in phase, that is, the portion of the code where the coefficient matrix is filled, is characterized by a very high computational time. As mentioned above, the first step taken to improve the performance has been to rewrite in C the functions which compute the integrals, and to compile and link them inside Scilab. This operation, which did not require a long time (it essentially consists in translating a function from one language to another), allowed us to reduce the run time considerably.

Table 1: The table summarizes the number of boundary unknowns and the field points used in the postprocessing in the three meshes chosen to test the acoustic solver and its scalability.

We finally have an optimized serial version of the acoustic solver. In order to further improve the performance of our code we decided to use the OpenMP API, which allows the development of parallel applications for multicore shared-memory platforms (see [8]). The fill-in phase in a boundary element code exhibits a parallel nature; it actually consists of two nested cycles, usually the inner one over the elements and the outer one over the boundary nodes. We decided to parallelize the inner loop, for the sake of simplicity, by inserting some OpenMP directives inside the C routines written for the serial version. The same has also been done for the postprocessing phase, dedicated to the computation of the pressure level at some points located in the unbounded domain. Also in this case we managed to reduce the solution time, running our benchmarks on a multicore Linux machine. For the solution of the linear system of equations we ran either a least squares solver or an iterative (GMRES) solver, depending on whether the CHIEF technique is in use or not. We decided to adopt the Jacobi preconditioner for the iterative solver, the coefficient matrix being diagonally dominant, obtaining in this way satisfactory convergence rates (see [4]).

Table 2: The table summarizes the results obtained for the three meshes (A, B and C) of different sizes used to test the performances of our acoustic solver. The fill-in, the postprocessing and the global solution time are reported, together with their speedup. The computation for a single frequency is considered here.

Figure 5: The solution time speedup obtained with the three models considered in the benchmark.

The pulsating sphere: a first benchmark
One of the most used benchmarks in acoustics is the pulsating sphere in an unbounded domain. For this problem a closed-form solution is also available (see equation (3)) and therefore it is often used to prove the correctness of a specific solver or technique. The pressure level at a distance r from the center of a sphere of radius R can be computed as:

(3)

where V is the uniform radial velocity of the sphere, k the wave number, and ρ and c the fluid density and the speed of sound in the fluid respectively.

Figure 4: The pressure level on the sphere surface obtained at different frequencies. The exact solution (red) is compared with the numerical solution obtained with (green) and without (blue) the CHIEF technique.

We decided first to test the quality of our results: in Figure 4 we compare the exact solution (see equation (3)) with the numerical solution obtained with our solver, with and without adopting the CHIEF technique.




Figure 6: The pressure level computed with the model C.

It can be seen that in both cases the exact solution is well captured, except, as expected, in proximity of the resonance frequencies of the sphere falling inside the considered frequency range. A step of 5 [Hz] has been adopted to span the frequency range, and 16 randomly generated points inside the sphere volume have been used when adopting the CHIEF approach.

Secondly, we generated three different models and ran them on a Linux machine to check the speedup performance of our application. In Table 2 the results in terms of time and speedup for the different phases of the run are collected. The Fill-in column reports the time needed to fill the linear system (matrix and known vector), the Postprocessing column reports the time required for the estimation of the pressure level at the points in the domain, while the Solution column reports the run time. Looking at the results contained in Table 2 and Figure 5, it immediately appears that the speedup deteriorates very quickly as the number of threads increases, no matter the model size. This can probably be ascribed to the fact that, as the model size increases, the influence of the serial portions of the code becomes non-negligible with respect to the parallel ones. The speedup obtained for the postprocessing phase is definitely better than the one obtained for the fill-in phase; this is probably due to our naïve OpenMP implementation, but also to the much more complex task to be done. Even though the solver does not scale extremely well, it has to be noted that the run time is always acceptable and compatible with an industrial context.

The radiated noise from an intake manifold
We present here the results of an acoustic analysis of an intake manifold with a non-trivial geometry, to show that our solver is also able to tackle industrial-like problems. In Figure 7 the surface mesh adopted in the analysis is plotted. The mesh is made of 12116 surface collocation nodes and 8196 quadratic triangular elements, and 9905 points have been positioned to estimate the pressure level all around the collector. Usually, a structural analysis is performed first to estimate the dynamic response of the structure under examination and, in a second step, the velocity vectors normal to the boundary of the structure are applied as boundary conditions to a previously prepared acoustic mesh. This “acoustic” mesh should have an adequate number of elements/nodes to capture the desired frequencies accurately, but without compromising the computational cost too much. Some commercial software packages have utilities that help the user in generating such a mesh and in applying automatically the boundary conditions coming from the structural mesh.

Figure 7: The surface mesh of the manifold. All the corners and sharp edges in the original CAD model have been filleted in order to have a smooth surface: in this case the c(x) term in equation (2) can always be set to 0.5.


This is a key feature which obviously makes the simulation process much easier and faster. In our case we simply decided to modify the original CAD geometry by removing sharp corners and edges, knowing that this is probably not the best way to deal with the issue. With the modified geometry we can produce a sufficiently smooth mesh, but at the cost of a larger number of elements with respect to a more sophisticated technique. The size of the final acoustic model is however affordable. We decided to apply different values of normal velocity to some patches of the boundary as Neumann boundary conditions, keeping in mind that the obtained results do not have any engineering meaning.


We spanned the frequency range from 50 [Hz] up to 500 [Hz] with a step of 5 [Hz]. In Figure 8 the computed pressure, expressed in [dB], is reported for the frequency of 100 [Hz], while in Figure 9 the acoustic pressure [Pa] is shown on a vertical plane when the manifold vibrating frequency is 500 [Hz]. Similar outputs are obviously available for all the computed frequencies. In Figure 10 we report the pressure versus frequency registered at three different points located in space (A, B and C), as reported in Figure 9. The analysis ran on an 8-core Linux machine and took roughly 8 hours and 50 minutes.

Conclusions
In this document we showed that it is possible to build an efficient solver for exterior acoustic problems in Scilab. Taking advantage of the possibility to add interfaces to external compiled libraries, we decided to improve the computational performance of our application by rewriting in C the most expensive portions of it. Moreover, the OpenMP API has been used to parallelize these pieces of code and in this way allow the simulation to run on multicore shared-memory platforms, reducing, once again, the solution time.

Figure 8: The acoustic pressure expressed in [dB] estimated around the manifold when the vibrating frequency of the manifold is 100 [Hz].

References
[1] M. Bonnet, Boundary Integral Equation Method for Solids and Fluids, Wiley (1999).
[2] S. Kirkup, The Boundary Element Method in Acoustics: a development in Fortran, Integrated Sound Software (1998).
[3] M. Guiggiani, A. Gigante, A general algorithm for multidimensional Cauchy principal value integrals in the boundary element method, Journal of Applied Mechanics, vol. 112, 906-915 (1990).
[4] Y. Saad, M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7, pp. 856-869 (1986).
[5] http://www.lmsintl.com/
[6] http://www.fft.be/
[7] http://www.cybernet.co.jp/waon/english/
[8] http://openmp.org/wp/

Contacts For more information on this document please contact the author: Massimiliano Margonari - Openeering info@openeering.com

Figure 9: The acoustic pressure expressed in [Pa] estimated around the manifold when the vibrating frequency of the manifold is 500 [Hz]. Points A, B and C, are located along an horizontal radial axis (see Figure 10).


Figure 10: The acoustic pressure in [dB] versus the frequency registered in the points A (red), B (green) and C (blue) (see Figure 9).




An unsupervised text classification method implemented in Scilab Text mining is a relatively new research field whose main concern is to develop effective procedures able to extract meaningful information - with respect to a given purpose from a collection of text documents. There are many contexts where large amounts of documents have to be managed, browsed, explored, categorized and organized in such a way that the information we are looking for can be accessed in a fast and reliable way. Let us simply consider the internet, which is probably the largest and the most used library we know today, to immediately understand why the interest around text mining has increased so much during the last two decades. A reliable document classification strategy can help in information retrieval, to improve the effectiveness of a search engine for example, but it can be also used to automatically understand whether an e-mail message is spam or not. The scientific literature proposes many different approaches to classify texts: it is sufficient to perform a web search to find a large variety of papers, forums and sites discussing this topic. The subject is undoubtedly challenging for researchers who have to consider different and problematic aspects emerging when working with text documents and natural language. Usually texts are unstructured, they have different lengths and they are written in different languages. Different authors means different topics, styles, lexicons, vocabularies and jargons, just to highlight some issues. One concept can be

expressed in many different ways and, as an extreme case, also the same sentence can be graphically rendered in different ways: You are welcome! U @r3 w31c0m3!

This strategy can be used to cheat the less sophisticated e-mail spam filters, which are probably unable to correctly categorize the received message and remove it; some of them are based on simple algorithms which do not consider the real meaning of the message but just look at the words inside, one at a time. The search for an exhaustive and exact solution to the text mining problem is extremely difficult, or practically impossible. Many mathematical frameworks have been developed for text classification: naïve Bayes classifiers, supervised and unsupervised neural networks, learning vector machines and clustering techniques are just a short - and certainly not complete - list of possible approaches which are commonly used in this field. They all have both advantages and disadvantages. For example, some of them usually ensure good performance but have to be robustly trained in advance using predefined categories; others do not require a predefined list of categories, but they are less effective. For this reason the choice of the strategy is often tailored to the specific categorization problem that has to be solved. In spite of their differences, all text categorization approaches have however a first common problem to solve: the text has first to be processed in order to extract the main features contained inside. This operation erases the “superfluous” from the document, retrieving only the most relevant information: the categorization algorithm will therefore work only with a series of features characterizing the document. This operation has a fundamental role and it can lead to unsatisfactory results if it has not been conducted in an appropriate way. Another crucial aspect of data mining techniques is the postprocessing and the summarization of the results, which have to be read and interpreted by a user. This means that even the fastest and most effective data mining algorithm is useless if improperly fed, or if its results cannot be represented and interpreted easily.

Fig. 1 - This image has been generated starting from the text of the EnginSoft Flash of the Year 7 n°1 issue and the tool available in [4].


Our personal interest in these techniques was born some weeks ago when reading the last issue of the EnginSoft newsletter. In a typical newsletter issue there are usually many contributions of different kinds: you have probably noticed that there are papers presenting case studies coming from several industrial sectors, there are interviews, corporate and software news and much more. Sometimes there are also papers discussing topics that are “strange” for the CAE community, as this one may probably seem to be. A series of questions came out. Does the categorization used in the newsletter respect a real structure of the documents, or is it simply due to an editorial need? Can we imagine a new categorization based on other criteria? Can we discover categories without knowing them a priori? Can we finally have a representation of this categorization? And finally, can we gain a deeper insight into our community?

We decided to use the EnginSoft newsletters (see [3]) and extract from them all the articles written in English, starting from the first issue up to the last one. In this way we built the “corpus”, as the set of text documents to be considered is usually called by the text mining community. The first issues of the newsletter were almost completely written in Italian, but English contributions occupy most of the pages in the later years. This certainly reflects the international growth of EnginSoft. The corpus was finally composed of 248 plain text documents of variable length. The second step we performed was to set up a simple text mining procedure to find out possible categorizations of the corpus, taking into account two fundamental aspects: first, the fact that we do not have any a-priori categorization and, secondly, the fact that the corpus cannot be considered “large” but is, on the contrary, probably too poor to give clear and robust results. We finally decided to use an unsupervised self organizing map (SOM) as a tool to discover possible clusters of documents. This technique has the valuable advantage of not requiring any predefined classification and of allowing a useful and easily readable representation of a complex dataset through some two-dimensional plots.

The preprocessing of the corpus
It is easy to understand that one of the difficulties that can arise when managing text, looking at one word at a time and disregarding for simplicity all the aspects concerning lexicon, is that we could consider as “different” words which conceptually have the same meaning. As an example, let us consider the following words which can appear in a text; they can all be summarized by a single word, such as “optimization”: optimization, optimizing, optimized, optimizes, optimality. It is clear that a good preprocessing of a text document should recognize that different words can be grouped under

a common root (also known as stem). This capability is usually obtained through a process referred to as stemming and it is considered fundamental to make the text mining more robust. Let us imagine to launch a web search engine with the keyword “optimizing”: we probably would like that also documents containing the words “optimization” or “optimized” are considered when filling the results list. This probably because the true objective of the search is to find out all the documents where optimization issues are discussed. The ability of associating a word to a root is certainly difficult to codify in a general manner. Also in this case there are many strategies available: we decided to use the Porter stemming approach (it is one of the most used stemming technique for processing English words: see the paper in [5]) and apply it to all words composed by more than three letters. If we preprocess the words listed above with the Porter stemming algorithm the result will be always the stem “optim”. It clearly does not have any meaning (we cannot find “optim” in an English dictionary) but this does not represent an issue for us: we actually need “to name” in a unique way the groups of words that have the same meaning. Another ability that a good preprocessing procedure should have is to remove the so-called stop words, that is, all the words which are used to build a sentence in a correct way, according to the language rules, but that usually do not significantly contribute to determine the meaning of the sentence. Lists of English stop words are available on the web and they can be easily downloaded (see [2]): they contains words such as “and”, “or”, “for”, “a”, “an”, “the”, etc… In our text preprocessor we decided to also insert a procedure that cuts out all the numbers, the dates and all the words made of two letters or less; this means that words such as “2010” or “21th” and “mm”, “f”, etc… are not considered. Also mathematical formulas and symbols are not taken into consideration. Collect and manage information The corpus has to be preprocessed to produce a sort dictionary, which collects all the stems used by the community; then, we should be able to find out all the most interesting information describing a document under examation in order to characterize it. It’s worth mentioning that the dictionary resulting from the procedure described above using the EnginSoft newsletters is composed of around 7000 stems. Some of them are names, surnames and acronyms such as “CAE”. It immediately appears necessary to have a criterion to judge the importance of a stem in a document within a corpus. To this purpose, we decided to adopt the so–called tf-idf coefficient, term frequency – inverse document frequency, which takes into account both the relative frequency of a stem in a document and the frequency of the stem within the corpus. It is defined as:




where the subscripts w and d stand for a given word and a given document respectively in the corpus C, made up of N documents, while n_ij represents the number of times that word i appears in the j-th document. This coefficient allows us to translate words into numbers.

In Figure 2 the corpus has been graphically represented by plotting the matrix containing the non-zero tf-idf coefficients computed for each stem, listed in columns as they appear while processing the documents, listed in rows. The strange profile of the non-zero coefficients in the matrix is obviously due to this fact: it is interesting to see that the most used stems appear early on while processing the documents, and that the rate of dictionary growth, that is the number of new stems added to the dictionary by new documents, tends to gradually decrease. This trend does not depend, on average, on the order used in document processing: the resulting matrix is always denser in the left part and sparser in the lower-right part. Obviously, the top-right corner is always void.

Fig. 2 - A matrix representation of the non-zero tf-idf coefficients within the corpus. The matrix rows collect the text files sorted in the same order as they are processed, while the columns collect the stems added to the dictionary in the same order as they appear while processing the files.

The matrix in Figure 2 represents a sort of database which can be used to accomplish a document search according to a given criterion; for example, if we wanted to find the most relevant documents with respect to the "optimization" topic, we should simply look for the documents corresponding to the highest tf-idf of the stem "optim". The results of this search are collected in Table 1, where the first 5 documents are listed. In Table 2 we list the stems which register the highest and the lowest (non-zero) tf-idf in the dictionary, together with the documents where they appear. More generally, it is interesting to see that high values of tf-idf are obtained by words that appear frequently in a short document but are hardly used in the rest of the corpus (see the acronym "VPS"). On the contrary, low values of this coefficient are obtained by words that are common in the corpus (see "design") and infrequently used in long documents.

Document title | Published in the Newsletter | tf-idf of stem "optim"
The current status of research and applications in Multiobjective Optimization. | Year 6, issue 2 | 0.0082307
Multi-objective optimization for antenna design. | Year 5, issue 2 | 0.0052656
Third International Conference on Multidisciplinary Design Optimization and Applications. | Year 6, issue 3 | 0.0050507
modeFRONTIER at TUBITAK-SAGE in Turkey. | Year 5, issue 3 | 0.0044701
Optimal Solutions and EnginSoft announce Distribution Relationship for Sculptor Software in Europe. | Year 6, issue 3 | 0.0036246
Table 1 - The results of the search for "optimization" in the corpus using the tf-idf coefficient.

Stem | Document title | Published in the Newsletter | tf-idf
VPS (max) | VirtualPaintShop. Simulation of paint processes of car bodies. | Year 2, issue 4 | 0.0671475
design (min, non-zero) | Combustion Noise Prediction in a Small Diesel Engine Finalized to the Optimization of the Fuel Injection Strategy | Year 7, issue 3 | 0.0000261
Table 2 - The stems with the maximum and the minimum (non-zero) tf-idf found in the corpus, together with the titles of the documents where they appear.

In Figure 3 the histogram of the tf-idf coefficient and the empirical cumulative distribution function are plotted. It can be seen that the distribution is strongly skewed towards low values: there are many stems that are largely used in the corpus and therefore have very low values of tf-idf. For this reason the logarithmic scale is preferred, in order to have a better representation of the data.
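As a rough illustration of how such a matrix can be produced, the short Scilab sketch below computes one common variant of the tf-idf coefficients starting from a term-count matrix. The variable names and the toy data are hypothetical, and the exact normalization used for this study may differ.

// counts(i,j): number of times stem j appears in document i (toy data: 3 documents, 3 stems)
counts = [3 0 1; 0 2 0; 1 1 4];
N = size(counts, 1);                                   // number of documents in the corpus
nstems = size(counts, 2);                              // number of stems in the dictionary
tf = counts ./ (sum(counts, "c") * ones(1, nstems));   // term frequency, normalized per document
df = sum(bool2s(counts > 0), "r");                     // number of documents containing each stem
idf = log(N ./ df);                                    // inverse document frequency
tfidf = tf .* (ones(N, 1) * idf);                      // tf-idf matrix, documents in rows
disp(tfidf);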

A text classification using Self Organizing Maps
Self Organizing Maps (SOMs) are neural networks introduced by Teuvo Kohonen (we refer the interested reader to [6] for a complete review of SOMs). One of the most valuable characteristics of such maps is certainly the fact that they allow a two-dimensional representation of multivariate datasets which preserves the original topology; this means that records which are close in the original space tend to remain close when projected onto the two-dimensional domain. For this reason they can be used to navigate multidimensional



Fig. 3 - The histogram (left) and the empirical cumulative distribution (right) of the tf-idf. The distribution clearly has a high skewness: the large majority of stems have a low tf-idf. For this reason the logarithmic scale has been used in the graphs.

datasets and to detect groups of records, if present. A second interesting characteristic of these maps is that they are based on unsupervised learning: this is the reason why such maps are sometimes said to learn from the environment. They do not need any imposed categorization or classification of the data to run; they simply project the dataset "as it is". The mathematical algorithm behind these maps is not really difficult to understand and therefore not difficult to implement; however, the results have to be graphically represented in such a way that they can be easily accessed by the user. This is probably the most difficult task when developing a SOM: fortunately Scilab has a large set of graphical functions which can be called upon to build complex outputs, such as the one in Figure 6. A common practice is to use a sort of honeycomb representation of the map, where each hexagon stands for a neuron: colors and symbols are used to draw a result (e.g. a dataset component or the number of records in a neuron). The user has to set the dimensions of the map, choosing the number of neurons along the horizontal and the vertical directions (see Table 3, where the setup of our SOM is briefly reported) and the number of training cycles that have to be performed. Each neuron has a prototype vector (that is, a vector with the same dimension as the designs in the dataset) which should be representative, once the net has been trained, of all the designs pertaining to that neuron. Certainly the easiest way to initialize the prototypes is to choose random values for all their components, as we did in our case. The training consists of two phases: the first one is called "rough phase", the second one "fine tuning"; they usually have to be run with slightly different setups to obtain the best training but, operationally, they do not present any difference. During the training a design is submitted to the net and assigned to the neuron whose prototype vector is closest to the design itself; then, the prototypes of the neurons in the neighborhood are updated through an equation which rules the strength of the changes according, for example, to the training iteration number and to the neuron distances. During a training cycle all the designs have to be passed to

the net, following, for example, a different order of submission each time, to ensure a more robust training. A large variety of updating rules is available in the literature and can be adopted according to the specific problem. We decided to use a Gaussian training function with a constant learning factor which is progressively damped with the iteration number. This leads to a net which progressively "freezes" into a stable configuration, which can be seen as the solution of a nonlinear projection problem of a multivariate dataset onto a two-dimensional space. At the end of the training phase, each design in the dataset has a reference neuron and each prototype vector should summarize as well as possible the designs in its neuron. For this reason the prototype vectors can be thought of as a "summary" of the original dataset and used to graphically render information through colored pictures. One of the most frequent criticisms of SOMs that we hear within the engineering community is that these maps do not provide, as a result, any number, but rather colored pictures that only "gurus" can interpret. All this, and the fact that results often depend on the guru who reads the map, confuses engineers. We are pretty convinced that this is a misconception; these maps, and consequently the colored pictures used to present the results, are obtained with a precise algorithm, such as those used in other fields. As an example, let us remember that even the results coming from a finite element simulation of a physical phenomenon are usually presented through a plot (e.g. stress, velocity or pressure fields in a domain), that they can change as the model setup changes (e.g. mesh, time integration step) and that therefore they always have to be interpreted by a skilled engineer. We submitted the dataset with the tf-idf coefficients and ran a SOM training with the setup summarized in Table 3. To prevent stems with too high or too low values from playing a role in the SOM training, we decided to keep only those belonging to the interval [0.0261, 2.6484]·10^-3. This interval has been chosen starting from the empirical cumulative distribution reported in Figure 3, looking for the tf-idf values corresponding to the 0.1 and the 0.8 probability respectively. In this way the extremes, which could be due, for example, to spelling



mistakes, are cancelled out from the dataset, ensuring a more robust training. The dictionary decreases from 7000 to around 5000 stems, which are considered to be enough to describe the corpus exhaustively, keeping very common words while preserving the peculiarities of the documents.

Grid: number of horizontal neurons = 15 | number of vertical neurons = 15 | grid initialization = random | scaling of data = no
Training: training = sequential | sample order = random | learning factor = 0.5 | training function = gaussian
Rough phase: nCycles = 50 | iRadius = 4 | fRadius = 1
Fine phase: nCycles = 10 | iRadius = 1 | fRadius = 1
Table 3 - The setup used for the SOM training phase. See [6] for an exhaustive description of these parameters.

Once the SOM has been trained (in Figure 4 the quantization error versus the training iteration is drawn), we decided to use the "distance matrix" as the best tool to "browse" the results. The so-called D-matrix is a plot of the net where the color scale represents the mean distance between a neuron's prototype vector and those of its neighbors (red means "far", blue means "close"). In this way, with just a glance, one can understand how the dataset is distributed on the net and also detect clusters of data, if any. This graphical tool can also be enriched with additional information, plotted together with the color scale, making it possible to represent the dataset in a more useful way. An example of these enriched versions is given in Figures 5 and 6.

Fig. 4 - The quantization error plotted versus the number of training iterations. This gives us a measure of the goodness of the map training.

Looking at the plot of the D-matrix reported in Figure 5, one can conclude that there are mainly two large groups of papers (the two blue zones), which are however not sharply separated, and that there are many outliers. It is not easy to identify other clusters of papers in a unique way, since the distance between neurons' prototypes outside the blue zones is too high. The size of the white diamonds superimposed on the neurons is proportional to the number of documents pertaining to each neuron: it is clear that many files fall into one of these two groups.

Fig. 5 - The D-matrix. The white diamonds give evidence of the number of files pertaining to each neuron. The colormap represents the mean distance between a neuron's prototype and the prototypes of the neighboring neurons. Two groups of documents (blue portions) can be detected.

Looking at the map drawn in Figure 6, we can try to understand the main subject discussed by the papers in these two groups. We decided to report the stems which gain the highest tf-idf in the prototype vectors, providing in this way two "keywords" that identify the papers falling in each neuron. In the first group, positioned in the upper-left part of the map, there are certainly documents discussing EnginSoft and the international conference. Documents discussing optimization and computational fluid dynamics belong to the second group, positioned in the central-lower part of the net; actually, stems such as "optim" and "cfd" often gain the highest tf-idf there.

Fig. 6 - The D-matrix. For each neuron the first two stems with the highest tf-idf, as given by the prototype vectors, are reported, in the attempt to highlight the main subject discussed by the articles falling in each neuron.
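For readers who would like to reproduce a similar map, the following sketch shows the kind of update step described above (best matching unit, Gaussian neighborhood, damped learning factor). It is only an illustration written for this article: the decay laws, the variable names and the grid layout are assumptions, not the actual script used for this study.

// W: (nNeurons x nVars) matrix of prototype vectors; pos: (nNeurons x 2) neuron coordinates on the grid;
// x: one design (1 x nVars); t: current training iteration; nCycles: total number of iterations.
function W = som_update(W, pos, x, t, nCycles)
    alpha  = 0.5 * exp(-t / nCycles);                        // damped learning factor (illustrative law)
    radius = max(1, 4 * (1 - t / nCycles));                  // neighborhood radius shrinking from 4 to 1
    d = sum((W - ones(size(W, 1), 1) * x).^2, "c");          // squared distance of x from every prototype
    [dmin, bmu] = min(d);                                    // index of the best matching unit
    g = exp(-sum((pos - ones(size(pos, 1), 1) * pos(bmu, :)).^2, "c") / (2 * radius^2));  // Gaussian weights
    W = W + ((alpha * g) * ones(1, size(W, 2))) .* (ones(size(W, 1), 1) * x - W);         // pull prototypes towards x
endfunction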



Fig. 7 - The contributions by Stefano Odorizzi (left), by Akiko Kondoh (middle) and by Silvia Poles (right) as they fall in the SOM (see white diamonds).

It is interesting to see some of the relations and links that appear in the net. For example, the lower-right corner is occupied by documents mainly discussing laminates and composite materials; going up in the net, following the right border, we meet papers on casting and alloys and then the Turkish corner at the top, where the contributions by Figes have found a place. Moving to the left we meet stems such as "technet", "allianc" and "ozen", which remind us of the great importance that EnginSoft gives to international relationships and to the "net". We also find several times "tcn", "cours" and "train", which is certainly due to the training activities held and sponsored by EnginSoft in the newsletter. In the upper-left corner the "race" stem can be found: the competition corner, we could say, because contributions coming from the world of racing (by Aprilia, Volvo and others) fall here. Figure 6 certainly gives us a funny but valuable view of our community.

Another interesting output which can be plotted is the position that the documents written by a given author assume in the net. This could be useful to detect common interests between people in a large community. This kind of output is summarized in Figure 7 where, from left to right, the positions of the documents by Stefano Odorizzi, by Akiko Kondoh and by Silvia Poles are reported. It can be seen that our CEO's contributions, the "EnginSoft Flash" at the beginning of all the issues, fall in the first group of documents, where EnginSoft and its activities are the focus. Akiko's contributions are much more spread over the net: some of them fall in the lower-left portion, which could be viewed as the Japanese corner, some others between the two main groups. Finally, we could conclude that Silvia's contributions mainly focus on PIDO and multi-objective optimization topics.

In Figure 8 the prototype vector of a neuron in the first group of documents is drawn. On the right side of the picture the first 10 stems which register the highest values of tf-idf are reported. These stems could be read as keywords that concisely define the documents falling in the neuron.

Fig. 8 - The prototype vector of the pointed neuron in the net: the tf-idf is plotted versus the stems in the dictionary. On the right the first 10 highest tf-idf stems are displayed. The horizontal red line gives the lowest tf-idf registered by the 10th stem.

Conclusions
We have considered the English articles published in the old issues of the EnginSoft newsletter and preprocessed them adopting some well-known methodologies in the field of text mining. The resulting dataset has been used to train a self organizing map; the results have been graphically presented and some considerations on the document set have been proposed. All the work has been performed using Scilab scripts, expressly written to this aim.

References
[1] http://www.scilab.org/ for more information on Scilab.
[2] http://www.ranks.nl/resources/stopwords.html for an exhaustive list of the English stop words.
[3] http://newsletter.enginsoft.it/ to download the pdf version of the EnginSoft newsletters.
[4] http://www.wordle.net/ to generate funny images starting from text.
[5] http://tartarus.org/~martin/PorterStemmer/def.txt
[6] http://www.cis.hut.fi/teuvo/

Contacts
For more information on this document please contact the author:
Massimiliano Margonari - Openeering
info@openeering.com



Weather Forecasting with Scilab

The weather is probably one of the most discussed topics all around the world. People are always interested in weather forecasts, and our life is strongly influenced by the weather conditions. Let us just think of the farmer and his harvest, or of the happy family who wants to spend a weekend on the beach, and we understand that there could be thousands of good reasons to be interested in knowing the weather conditions in advance. This probably explains why, normally, the weather forecast is the most awaited moment of a television newscast. Sun, wind, rain, temperature... the weather seems to be unpredictable, especially when we consider extreme events. Man has always tried to develop techniques to master this topic, but practically only after the Second World War have the scientific approach and the advent of the media allowed a large diffusion of reliable weather forecasts. To succeed in forecasting, it is mandatory to have a collection of measurements of the most important physical indicators which can be used to define the weather at some relevant points of a region at different times. Then, we certainly need a reliable mathematical model which is able to predict the values of the weather indicators at points and times where no direct measurements are available. Nowadays, very sophisticated models are used to forecast the weather conditions, based on registered measurements such as the temperature, the atmospheric pressure, the air humidity and so forth. It is quite obvious that the larger the dataset of measurements, the better the prediction: this is the reason why the institutions involved in monitoring and forecasting the weather usually have a large number of stations spread over the terrain, opportunely positioned to capture relevant information. This is the case of Meteo Trentino (see [3]), which manages a network of measurement stations in the Trentino region and provides daily weather forecasts. Among the large amount of interesting information we can find on their website, there are the temperature maps, where the predicted temperature at the terrain level for the Trentino province is reported for a chosen instant. These maps are based on the set of measurements available from the

stations: an algorithm is able to predict the temperature field at all the points within the region and, therefore, to plot a temperature map. We do not know the algorithm that Meteo Trentino uses to build these maps, but we would like to set up our own procedure able to obtain similar results. To this aim, we decided to use Scilab (see [1]) as a platform to develop such a predictive model and Gmsh (see [2]) as a tool to display the results. Probably one of the most popular algorithms used to interpolate data in the geosciences domain is Kriging (see [5]). This algorithm has the notable advantage of exactly interpolating known data; it is also able to capture potentially non-linear responses and, finally, to provide an estimation of the prediction error. This valuable last feature could be used, for example, to choose in an optimal way the position of new measurement stations on the terrain. Scilab has an external toolbox available through ATOMS, named DACE (which stands for Design and Analysis of Computer Experiments), which implements the Kriging algorithm. This obviously allows us to implement our procedure more rapidly, because we can use the toolbox as a sort of black box, avoiding in this way spending time implementing a non-trivial algorithm.

The weather data
We decided to download from [3] all the available temperatures reported by the measurement stations. As a result we have 102 formatted text files (an example is given in Figure 1) containing the maximum, the minimum and the mean temperature with a timestep of one hour. In our work we only consider the "good" values of the mean temperature: there is actually an additional column which contains the quality of the reported measure, which can be "good", "uncertain", "not validated" or "missing".

Fig. 1 - The hourly temperature measures for the Moena station: the mean, the minimum and the maximum values are reported together with the quality of the measure.
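As a small illustration of this filtering step, the sketch below loads one station file and keeps only the rows whose quality flag is "good". The file name, the separator and the column layout are assumptions made for this example, not the actual format of the Meteo Trentino files.

// Read one hourly station file and keep only validated mean temperatures.
txt = mgetl("temperature_moena.txt");                 // one line per hourly record (hypothetical file)
Tmean = [];
for k = 2:size(txt, "*")                              // skip the header line
    fields = tokens(txt(k), ";");                     // assumed fields: timestamp, mean, min, max, quality
    if fields(5) == "good" then
        Tmean = [Tmean; strtod(fields(2))];           // store the "good" mean temperatures only
    end
end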


The terrain data
Another important piece of information we need is the "orography" of the region under exam. In other words, we need a set of triplets giving the latitude, the longitude and the elevation of the terrain. This last information is mandatory to build a temperature map at the terrain level.

Fig. 2 - An example of the DTM file formatted to the ESRI standard. The matrix contains the elevation of a grid of points whose position is given with reference to the Gauss Boaga Roma 40 system.

To this aim we downloaded the DTM (Digital Terrain Model) files available in [4] which, summed all together, contain a very fine grid of points (with a 40 meter step both in latitude and longitude) covering the Trentino province. These files are formatted according to the ESRI standard and they refer to the Gauss Boaga Roma 40 system.

Set up the procedure and the DACE toolbox
We decided to translate all the terrain information to the UTM WGS84 system in order to have a unique reference for our data. This operation can be done just once and the results stored in a new dataset to speed up the following computations. Then we have to extract from the temperature files the available data for a given instant, chosen by the user, and store them.

Fig. 4 - 6th May 2010 at 17:00. Top: the predicted temperature at the terrain level using Kriging. The temperature follows very closely the height above sea level. Bottom: the temperature map predicted using a linear model relating the temperature to the height. At a first glance these plots could appear identical: this is not exactly the case, as slight differences are present, especially in the valleys.

Fig. 3 - The information contained in one DTM file is graphically rendered. As a result we obtain a plot of the terrain.

With these data we are able to build a Kriging model, thanks to the DACE toolbox. Once the model is available, we can ask for the temperature at all the points belonging to the terrain grid defined in the DTM files and plot the obtained results. One interesting feature of the Kriging algorithm is that it is able to provide an expected deviation from the prediction. This means that we can have an idea of how reliable our prediction is and eventually estimate a possible range of variation: this is quite interesting when forecasting an environmental temperature.
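As an illustration of this step, the sketch below fits a Kriging model on a handful of made-up station records and evaluates it on two terrain points. It assumes that the Scilab DACE toolbox exposes the dacefit/predictor interface of the original DACE library (regpoly0 regression, corrgauss correlation); the coordinates, temperatures and parameter bounds are purely illustrative.

// S: station coordinates (easting, northing, elevation); T: measured temperatures (toy values).
S = [663200 5120400  980; 667900 5131200 1550; 671400 5118800  320; 659800 5126500 2010];
T = [12.4; 8.1; 16.9; 4.6];
theta0 = [1 1 1]; lob = [1e-3 1e-3 1e-3]; upb = [20 20 20];            // correlation parameter bounds
[dmodel, perf] = dacefit(S, T, regpoly0, corrgauss, theta0, lob, upb);  // fit the Kriging model (assumed API)
Xgrid = [665000 5125000 750; 669000 5122000 1200];                      // two points taken from the DTM grid
[Tpred, mse] = predictor(Xgrid, dmodel);                                // predicted temperature and error estimate
disp([Tpred sqrt(mse)]);                                                // value and expected deviation per point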



Fig. 5 - 6th May 2010 at 17:00. The measured temperatures are plotted versus the height above sea level. The linear regression line, plotted in red, seems to be a good approximation: the temperature decreases by 0.756 [°C] every 100 [m] of height.


Some results
We chose two different days of 2010 (the 6th of May at 17:00 and the 20th of January at 08:00) and ran our procedure to build the temperature maps. In Figure 5 the temperatures measured on the 6th of May are plotted versus the height above sea level of the stations. It can be seen that a linear model can be considered a good fit for these data: the temperature decreases linearly with the height by around 0.756 [°C] every 100 [m]. For this reason one could be tempted to use such a model to predict the temperature at the terrain level: the result of this prediction, which is reported in Figure 4, is only as accurate as the linear model's ability to capture the relation between the temperature and the height. If we compare the results obtained with Kriging with this last approach, some differences appear, especially down in the valleys: the Kriging model seems to give more detailed results.
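For completeness, such a linear fit can be obtained directly in Scilab with reglin; the elevations and temperatures below are invented values used only to show the call.

// Least-squares fit of temperature versus station elevation: T = a*h + b.
h = [320 980 1550 2010];                       // station elevations [m] (toy values)
T = [16.9 12.4 8.1 4.6];                       // measured temperatures [deg C] (toy values)
[a, b] = reglin(h, T);                         // slope a and intercept b
mprintf("lapse rate: %6.3f degC per 100 m\n", a*100);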

If we consider the 20th of January, the temperature can no longer be computed as a function of the terrain height only. It immediately appears, looking at Figure 8, that there are large deviations from a pure linear correlation between the temperature and the height. The Kriging model, whose result is drawn in Figure 7, is able to capture also the local positive or negative peaks in the temperature field, which cannot be predicted otherwise. In this case, however, it can be seen that the estimated error (Figure 9) is larger than the one obtained for the 6th of May (Figure 6): this suggests that the temperature field is in this case much more difficult to capture correctly.

Fig. 6 - 6th May 2010 at 17:00. The estimated error in predicting the temperature field with Kriging is plotted. The measurement stations are reported on the map with a code number: it can be seen that the smallest errors are registered close to the 39 stations while, as expected, the highest estimated errors are typical of zones where no measure is available.

Fig. 7 - 20th January 2010 at 08:00. Top: the predicted temperature with the Kriging model at the terrain level is plotted. Globally, the temperature still follows the height on the sea level but locally this trend is not respected. Bottom: the temperature map predicted using a linear model relating the temperature to the height.


Conclusions
In this work it has been shown how to use Scilab and its DACE toolbox to forecast the temperature field starting from a set of measurements and from information regarding the terrain of the region. We have shown that the Kriging algorithm can be used to get an estimated value and an expected variation around it: this is a very interesting feature which can be used to give a reliability indication of the prediction.

Fig. 8 - 20th January 2010 at 08:00. The measured temperatures are plotted versus the height above sea level. The linear regression line, plotted in red, indicates that the temperature decreases by 0.309 [°C] every 100 [m] of height, but it does not seem to be a good approximation in this case: there are actually very large deviations from the linear trend.

This approach could also be used with other atmospheric indicators, such as the air pressure, the humidity and so forth.

References
[1] http://www.scilab.org/ for more information on Scilab.
[2] Gmsh can be freely downloaded from http://www.geuz.org/gmsh/
[3] The official website of Meteo Trentino is http://www.meteotrentino.it, from where the temperature data used in this work have been downloaded.
[4] The DTM files have been downloaded from the official website of Servizio Urbanistica e Tutela del Paesaggio, http://www.urbanistica.provincia.tn.it/sez_siat/siat_urbanistica/pagina83.html.
[5] Søren N. Lophaven, Hans Bruun Nielsen, Jacob Søndergaard, DACE - A Matlab Kriging Toolbox, available from http://www2.imm.dtu.dk/~hbn/dace/dace.pdf.

Fig. 9 - 20th January 2010 at 08:00. The estimated error in predicting the temperature field with the Kriging technique is plotted. The measurement stations are reported on the map with a code number: it can be seen that the smallest errors are registered close to the 38 stations.

Contacts
For more information on this document please contact the author:
Massimiliano Margonari - Openeering
info@openeering.com


Fig. 10 - 20th January 2010 at 08:00. The estimated temperature using Kriging: a detail of the Gruppo Brenta. The black vertical bars report the positions of the meteorological stations.



Optimization? Do It with Scilab!

Several times in this Newsletter we have written about the importance of optimization in companies' daily activities. We never miss the opportunity to stress the importance of optimization and to explain how optimization can play a significant role in the design cycle. When we talk about optimization, we always refer to real-life applications, as we know that our readers are interested in methods and software for solving industrial cases. Particularly, we refer to problems where multiple and nonlinear objectives are involved. In this article we will introduce you to Scilab1, a numerical computing environment that should be considered as a powerful multiobjective and multidisciplinary optimization software. Scilab is a high-level matrix language with a syntax that is very similar to MATLAB®. Exactly as MATLAB® does, Scilab allows the user to define mathematical models and to connect to existing libraries. As for MATLAB®2, optimization is an important topic for Scilab. Scilab has the capabilities to solve both linear and nonlinear optimization problems, single and multiobjective, by means of a large collection of available algorithms. Here we present an overall idea of the optimization algorithms available in Scilab; the reader can find some code that can be typed and used in the Scilab console to verify the potential of this numerical computing environment for solving very common industrial optimization problems3.

Linear and nonlinear optimization
As our readers may already know, "optimize" means selecting the best available option from a wide range of possible choices. Doing this as a daily activity can be a complex task as, potentially, a huge number of choices

should be tested when using a brute force approach. The mathematical formulation of a general optimization problem can be stated as follows:
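The formula itself did not survive the page layout; a standard statement, consistent with the symbols described just below (k objectives, n variables, a domain S and constraint functions g), is the following. The notation is ours.

\min_{x \in S} \; \bigl( f_1(x), \ldots, f_k(x) \bigr), \qquad x = (x_1, \ldots, x_n), \qquad \text{subject to } g_j(x) \le 0, \; j = 1, \ldots, m.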

(x1, …, xn) are the variables, the free parameters which can vary in the domain S. Any time that k>1, we speak about multiobjective optimization.

Graphical methods
Scilab is very helpful for solving daily optimization problems, even simply by means of a graphical method. For example, suppose that you would like to find the minimum point of the Rosenbrock function. The contour plot can be a visual aid to identify the optimal area. Start up Scilab, copy the following Scilab script and you obtain the plot in Figure 1.

function f=rosenbrockC(x1, x2)
    x = [x1 x2];
    f = 100.0*(x(2)-x(1)^2)^2 + (1-x(1))^2;
endfunction

xdata = linspace(-2,2,100);
ydata = linspace(-2,2,100);
contour(xdata, ydata, rosenbrockC, [1 10 100 1000])

The contour plot can be the first step towards finding an optimal solution. However, solving an optimization problem by means of graphical methods is only feasible when we have a limited number of input variables (2 or 3). In all other cases we need to proceed further and use numerical algorithms to find solutions.

Fig. 1 - Contour plot (left) and 3D plot (right) of the Rosenbrock function. With this chart we can identify that the minimum is in the region of the black contour with label 1. This means that a good starting point for further investigations could be x=(0.5,0.5).


Optimization algorithms
A large collection of different numerical methods is available for further investigations. There are tens of optimization algorithms in Scilab, and each method can be used to solve a specific problem according to the number and smoothness of the functions f, the number and type of variables x, and the number and type of constraints g. Some methods can be more suitable for constrained optimization, others may be better for convex problems, others can be tailored for solving discrete problems. Specific methods can be useful for solving quadratic programming, nonlinear problems, nonlinear least squares, nonlinear equations, multiobjective optimization, and binary integer programming. Table 1 gives an overview of the optimization algorithms available in Scilab. Many other optimization methods are made available by the community every day as external modules through the ATOMS Portal, http://atoms.scilab.org/. To show the potential of Scilab as an optimization tool, we can start from the most used optimization

function: optim. This command provides a set of algorithms for nonlinear unconstrained and bound-constrained optimization problems. Let's see what happens if we use the optim function on the previous problem:

function [f, g, ind] = rosenbrock(x, ind)
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
    g(1) = -400.*(x(2)-x(1)^2)*x(1) - 2.*(1.-x(1));
    g(2) = 200.*(x(2)-x(1)^2);
endfunction

x0 = [-1.2 1];
[f, x] = optim(rosenbrock, x0);
// Display results4
mprintf("x = %s\n", strcat(string(x)," "));
mprintf("f = %e\n", f);

If we use x0=[-1.2 1] as the initial point, the function converges easily to the optimal point x*=[1,1] with f=0.0. The previous example calculates both the value of the Rosenbrock function and its gradient, as the gradient is required by the optimization method. In many real-case applications the gradient can be too complicated to compute, or simply not available because the function is not known and is available only as a black box from an external function calculation. For this reason, Scilab has the ability to compute the gradient using finite differences by means of the function derivative or the function numdiff. For example, the following code defines a function f and computes its gradient at a specific point x.

function f=myfun(x)
    f = x(1)*x(1) + x(1)*x(2);
endfunction

x = [5 8];
g = numdiff(myfun, x)

Fig. 2 - Convergence of the Nelder-Mead Simplex algorithm (fminsearch function) on the Rosenbrock example.

These two functions (derivative and numdiff) can be used together with optim to minimize problems where the gradient is too complicated to be programmed. The optim function uses a quasi-Newton method based on the BFGS formula, which is an accurate algorithm for local optimization. On the same example, we can even apply a different optimization approach, such as the derivative-free Nelder-Mead Simplex [1] that is implemented in the function fminsearch. To do that we just have to substitute the line:

[f, x] = optim(rosenbrock, x0);

with

[x, f] = fminsearch(rosenbrock, x0);

Table 1 - This table gives an overview of the optimization algorithms available in Scilab and of the type of optimization problems which can be solved. In the constraints columns, the letter "l" means "linear". For the problem size, s, m and l indicate small, medium and large respectively, meaning fewer than ten, tens, or hundreds of variables.

This Nelder-Mead Simplex algorithm, starting from the same initial point,



converges very closely to the optimal point, precisely to x*=[1.000022 1.0000422] with f=8.177661e-010. This shows that the second approach is less accurate than the previous one: this is the price to pay in order to have a more robust approach that is less influenced by noisy functions and local optima. Figure 2 shows the convergence of the Nelder-Mead Simplex method on the Rosenbrock function. It is important to say that, in the previous example, the function is given by means of a Scilab script, but this was only done for simplicity. It is always possible to evaluate the function f as an external function, such as a C, Fortran or Java program, or an external commercial solver.

Parameter identification using measured data
In this short paragraph we show a specific optimization problem that is very common in engineering. We demonstrate how fast and easy it can be to perform a parametric identification for nonlinear systems, based on input/output data. Suppose, for example, that we have a certain number of measurements in the matrix X and the values of the output in the vector Y. Suppose that we know the function describing the model (FF) apart from a set of parameters p, and we would like to find out the value of those parameters. It is sufficient to write a few lines of Scilab code to solve the problem:

//model with parameters p
function y=FF(x, p)
    y = p(1)*(x-p(2)) + p(3)*x.*x;
endfunction

Z = [Y; X];

//The criterion for evaluating the error
function e=G(p, z)
    y = z(1); x = z(2);
    e = y - FF(x, p);
endfunction

Fig. 3 - Optimization of the Rosenbrock function by means of a Genetic Algorithm. Initial random population is in yellow, final population is in red and converges to the real optimal solution.

//Solve the problem giving an initial guess for p
p0 = [1; 2; 3];
[p, err] = datafit(G, Z, p0);

This method is very fast and efficient: it can find parameters for a high number of input/output data. Moreover, it can take into consideration parameters' bounds and weights for the points.

Evolutionary Algorithms: Genetic and Multiobjective
Genetic algorithms [2] are search methods based on the mechanics of natural evolution and selection. These methods are widely used for solving highly non-linear real-life problems because of their ability to remain robust even against noisy functions and local optima. Genetic algorithms are largely used in real-world problems as well as in a number of engineering applications that are hard to solve with "classical" methods. Using the genetic algorithms in Scilab is very simple: in a few lines it is possible to set the required parameters such as the number of generations, the population size and the probabilities of cross-over and mutation, as the sketch below shows. Fig. 3 shows a single objective genetic algorithm (optim_ga) on the Rosenbrock function: twenty initial random points (in yellow) evolve through 50 generations towards the optimal point, and the final generation is plotted in red.
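A minimal set-up could look as follows; it reflects our recollection of the optim_ga interface and of the parameter list used by the Scilab genetic algorithm module, so treat the exact argument order and parameter names as assumptions and check the Scilab help before use.

// Sketch: minimize the Rosenbrock function with the genetic algorithm optim_ga.
function y = rosenbrock_ga(x)
    y = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
endfunction

PopSize = 20; NbGen = 50; PMut = 0.1; PCross = 0.7; Log = %T;
ga_params = init_param();                               // container for the GA parameters
ga_params = add_param(ga_params, "minbound", [-2; -2]); // lower bounds of the search domain
ga_params = add_param(ga_params, "maxbound", [ 2;  2]); // upper bounds of the search domain
ga_params = add_param(ga_params, "dimension", 2);       // number of design variables
[pop_opt, fobj_opt] = optim_ga(rosenbrock_ga, PopSize, NbGen, PMut, PCross, Log, ga_params);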

Multiobjective
Scilab is not only for single objective problems: it can easily deal with multiobjective optimization problems. Just to list one of the available methods, Scilab users can take advantage of NSGA-II. NSGA-II is the second version of the famous "Non-dominated Sorting Genetic Algorithm" based on the work of Prof. Kalyanmoy Deb [3].

Fig. 4 - ZDT1 problem solved with Scilab's NSGA-II optimization algorithm. Red points on the top represent the initial population, black points on the bottom the final Pareto population. The solid line represents the Pareto frontier which, in this specific example, is a continuous convex solution.


NSGA-II is a fast and elitist multiobjective evolutionary algorithm. Figure 4 shows a multiobjective optimization run with NSGA-II using the test problem ZDT1. The test problem states:

function f=zdt1(x)
    f1_x1 = x(:,1);
    g_x2 = 1 + 9 * ((x(:,2)-x(:,1)).^2);
    h = 1 - sqrt(f1_x1 ./ g_x2);
    f(:,1) = f1_x1;
    f(:,2) = g_x2 .* h;
endfunction

With ZDT1 we want to minimize both f1 and f2: this means that we are dealing with a multiobjective problem. With these problems, the notion of optimal solution changes. A multiobjective optimization does not produce a unique solution but a set of solutions. These solutions are named non-dominated5 or Pareto solutions, and the set of solutions can be called the Pareto frontier. Figure 4 shows the solutions given by Scilab's NSGA-II optimization algorithm for the ZDT1 problem. Red points on the top are the initial random population, black points on the bottom the final Pareto population. The solid line represents the actual Pareto frontier which, in this specific example, is a continuous convex solution and is known. In this example, the concept of Pareto dominance is clear. Red points on the top are dominated by black points on the bottom, because red points are worse than black points with respect to both objectives f1 and f2. On the contrary, the black points on the bottom do not dominate each other, and we may say in this case that all the black points represent the set of efficient solutions. To understand the importance that Scilab gives to multiobjective optimization, we can even note that it has an internal function named pareto_filter that is able to filter the non-dominated solutions out of a large set of data.

X_in = rand(1000,2);
F_in = zdt1(X_in);
[F_out, X_out, Ind_out] = pareto_filter(F_in, X_in)
drawlater;
plot(F_in(:,1), F_in(:,2), '.r')
plot(F_out(:,1), F_out(:,2), '.b')
drawnow;

The previous code generates 1,000 random input values, evaluates the zdt1 function and computes the non-dominated solutions. The last four lines of the code generate the chart in Figure 5, with all the points in red and the Pareto points in blue.

Solving the cutting stock problem: reducing the waste
The cutting stock problem is a very common optimization problem in industries and it is economically significant. It consists in finding the optimal way of cutting a semi-processed product into different sizes in order to satisfy a set of customers' demands by using the material in the most efficient way. This type of problem arises very often in industries and can involve a variety of different goals, such as minimizing the costs, minimizing the number of cuts, minimizing the waste of material and consequently the costs, and so on.

Fig. 5 - The zdt1 function evaluated on 1,000 random points. Blue points are the non-dominated Pareto solutions. The code for selecting the Pareto solutions is reported in the text. The main function to be used is "pareto_filter".

Whatever the target is, it is always true that small improvements in the cutting layout can result in remarkable savings of material and a considerable reduction in production costs. In this section we will show how to solve a one-dimensional (1D) cutting stock problem with Scilab. Solving a 1D cutting stock problem is less complex than solving a 2D cutting stock problem (e.g. cutting rectangles from a sheet); nevertheless it represents an interesting and common problem. The 1D problem can arise, for example, in the construction industries, where steel bars are needed in specified quantities and lengths and are cut from existing long bars with standard lengths. It is well known that cutting losses are perhaps the most significant cause of waste. Suppose now that you are working for a company producing pipes that usually have a fixed length, waiting to be cut. These tubes are to be cut into different lengths to meet customers' requests. How can we cut the tubes in order to minimize the total waste? The mathematical formulation of the 1D cutting stock problem can be:
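The formula was lost in the page layout; a standard statement of the model, written so that it matches the symbol definitions given just below, is:

\min_{x} \; \sum_{i} c_i \, x_i \qquad \text{subject to} \qquad \sum_{i} a_{ij} \, x_i \ge q_j \;\; \text{for every length } j, \qquad x_i \ge 0 \text{ and integer,}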

where i is the index of the patterns, j the index of the lengths, and xi is the number of times pattern i is cut (the decision



variables), while ci is the cost of pattern i. A=(aij) is the matrix of all possible patterns and qj are the customers' requests. The value aij indicates the number of pieces of length j obtained from one pipe cut according to pattern i. The goal of this model is to minimize the objective function, which consists of the total cost of the cutting phase. If ci is equal to 1 for all the patterns, the objective corresponds to the total number of pipes required to fulfil the requirements.

Let's make a practical example and solve it with Scilab. Suppose that we have 3 possible sizes, 55 mm, 26 mm and 24 mm, into which we can cut the original pipe of 100 mm. The possible patterns are:
1. one cut of type one, one of type two and zero of type three [1 1 0]
2. one cut of type one and one of type three [1 0 1]
3. two cuts of type two and two of type three [0 2 2]
These patterns define the matrix A. Then we have the costs, which are 4, 3 and 1 for patterns 1, 2 and 3 respectively. The total requests from the customers are: 150 pieces of length 55 mm, 200 pieces of length 26 mm and 300 pieces of length 24 mm. To solve this problem in Scilab we can use this script:

//patterns
aij = [1 1 0; 1 0 1; 0 2 2];
//costs
ci = [4; 3; 1];
//requests
qj = [150; 200; 300];
xopt = karmarkar(aij', qj, ci)

Running this script with Scilab you obtain xopt=[25, 125, 87.5]; this means that, to satisfy the requests while minimizing the total cutting cost, we have to cut 25 pipes with pattern (1), 125 pipes with pattern (2) and 87.5 pipes with pattern (3). We show here a simple case with only three different requests and three different patterns. The problem can be much more complicated, with many more options, many different dimensions, costs and requests. It may include the maximum number of cuts on a single piece, and it may require a bit of effort in generating the list of feasible patterns (i.e. the matrix A). All these difficulties can be coded with Scilab, and the logic behind the approach remains the same. The previous script uses Karmarkar's algorithm [4] to solve this linear problem. The result is not an integer solution, hence we need to approximate it, because we cannot cut 87.5 pipes with the third pattern. This approximated solution can be improved with a different optimization algorithm, for example by evaluating


the nearest integer solutions or by using a more robust genetic algorithm. But even if we stop at this first step and simply round off the solution, we obtain a good reduction of waste.

Conclusions
As the solution of the cutting stock problem demonstrates, Scilab is not just an educational tool but a product for solving real industrial problems. The cutting stock problem is a common issue in industries, and a good solution can result in remarkable savings. In this article, however, we presented only a small subset of the possibilities that a Scilab user has for solving real-world problems. For the sake of simplicity, this paper shows only very simple functions, used for the purpose of making a short and general tutorial. Obviously these simple functions can be substituted by more complex and time-consuming ones, such as FEM solvers or other external simulation codes. MATLAB® users have probably recognized the similarities between the commercial software and Scilab. We hope all other readers have not been scared by the fact that problems and methods have to be written down as scripts. Once the logic is clear, writing scripts can turn out to be an agile and exciting activity.

For more information and for the original version of the Scilab scripts:
Silvia Poles - Openeering
info@openeering.com

References
[1] Nelder, John A.; Mead, R. (1965). "A simplex method for function minimization". Computer Journal 7: 308-313.
[2] Goldberg, David E. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, 1989.
[3] Srinivas, N.; Deb, Kalyanmoy. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2:221-248, 1994.
[4] Karmarkar, Narendra (1984). "A New Polynomial Time Algorithm for Linear Programming", Combinatorica, Vol. 4, nr. 4, p. 373-395.

Notes
1. Download Scilab for free at http://www.scilab.org/
2. MATLAB is a registered trademark of The MathWorks, Inc.
3. Contact the author for the original version of the Scilab scripts.
4. The symbol "//" indicates a comment.
5. By definition, we say that design a dominates design b if [f1(a) <= f1(b) and f2(a) <= f2(b) ... and fk(a) <= fk(b)] for all the f, and [f1(a) < f1(b) or f2(a) < f2(b) ... or fk(a) < fk(b)] for at least one f.



A Multi-Objective Optimization with Open Source Software

Sometimes it happens that a small-to-medium sized firm does not benefit from the advantages that could be achieved through the use of virtual simulation and optimization techniques. In many cases this represents a great limitation in the innovation of products and processes, and it can lead, in a very short time, to a complete exclusion from the market and to an inevitable end. Nowadays it is mandatory to reduce the time-to-market as much as possible, while always improving the quality of products and satisfying the customer needs better than the competitors. In some fields it is a question of "life or death". In our opinion, the main reasons that limit or, in the worst case, prevent the use of virtual simulation and optimization techniques can be grouped into three categories:
1. These techniques are not yet sufficiently known and the potential users do not have great confidence in the results. Sometimes physical experimentation, guided by experience matured through many years of practice, is thought to be the only possible way to proceed. This is actually wrong in the great majority of cases, especially when new problems have to be solved. A change of vision is the most difficult but essential step to take in this context.

Feature | Commercial | Open Source
License | Many possibilities are available | GNU license largely used, or similar versions with some restrictions
Development | Continuous improvement and a clear guideline | Left to the community
Available features | State of the art | It strongly depends on "who" leads the development. Sometimes, very advanced features can be available.
Technical support | Usually the distributor offers technical support | Usually no support is available, but in some cases forums can help
Usability | Easy-to-use and smart GUIs | Some effort could be required of the user
Customization | Only in some cases | If the source code is available, the possibility of customization and development is complete
Table 1: The table compares some key features that characterize commercial and open source software, according to our opinion.

2. Adequate hardware facilities, considered necessary to perform an optimization, are not available and therefore the design time becomes too long. We are convinced that, in many cases, common personal computers are enough to efficiently solve a large variety of engineering problems. So this second point, which is often seen as an enormous obstacle, has to be considerably downsized.
3. The simulation software licenses are much too expensive given the firm's financial resources. Even though the large majority of commercial software houses offer a low-cost first-entry license, it is not always immediately evident that these technologies are not just an expense, but rather a good investment.

As briefly stated above, the second point often does not represent a real problem; the most important obstacle is summarized in the first point. People actually find it hard to leave a well-established procedure, even if obsolete, for a new one which requires a change in the everyday way of working. The problem listed in the third point can be solved, when possible, by using open source (see [1]), free and also home-made software. With an accurate search on the internet, it is possible to find many simulation software systems which are freely distributed by the authors (under GNU license in many cases). Some of them also exhibit significant features that are usually thought to be exclusive to commercial software. As usual, when adopting a new technology, it is recommended to consider both the advantages and the disadvantages. We have compared in Table 1 some aspects that characterize the commercial and the open source codes and which should be considered before adopting a new technology.

Open source codes are usually developed and maintained by researchers; contributions are also provided by advanced users all over the world or by people who are supported by research projects or public institutions, such as universities or research centers. Unfortunately, this can lead to a discontinuous improvement, not driven by a clear guideline, but rather left to the free contributions given by the community. On the contrary, commercial software houses drive the development according to well-known roadmaps which generally reflect specific industry trends and needs. Commercial software is usually "plug-and-play": the user just has to install the package and start using it. On the contrary - but not always - open source software could require some skill and effort in compiling the code or adapting a package to a specific system configuration. Software houses usually offer technical support to the customer, which can be, in some cases, really helpful to make the investment profitable. An internet forum, when it exists, is often the only way to get support for a user of an open source code. Another important issue is the usability of the simulation software, which is mainly given by a user-friendly graphical interface (often referred to as a GUI). The commercial software usually has sophisticated graphical tools that allow the user to manage and explore large models in an easy and smart way; the open source codes rarely offer a similar suite of tools, and they usually have simpler and less easy-to-use graphical interfaces. The different magnitude of the investment can explain all these differences between the commercial and the open source codes.

However, there are some issues that can make the use of a free software absolutely profitable, even in an industrial context. Firstly, no license is needed to run simulations: in other words, no money is needed to access the virtual simulation world. Secondly, the use of open source software allows one to break all the undesired links with third-party software houses and their destiny. Thirdly, the number of simultaneous runs that can be launched is not limited, and this could be extremely important when performing an optimization process. Last, but not least, if the source code is available all sorts of customizations are possible.

The results of a structural optimization, performed using only open source software, are presented in this paper. We decided to use Scilab (see [2]) as the main platform to drive the optimization process through its genetic algorithm toolbox. For the solution of the structural problem, presented in the following, we adopted two packages: the first one is Gmsh (see [3]), to manage a parametric geometry and mesh it; the second one is CalculiX (see [4]), an FEM solver. It is important to remember that this choice is absolutely not mandatory, but is strictly a matter of taste.

The structural optimization problem
In this work a C-type press is considered, like the one shown in Figure 1. This kind of geometry is preferred to other ones when the force that has to be exerted by the hydraulic cylinder is not very high, usually not greater than roughly 200 [Ton]. The main advantages of this type of press are essentially the relatively low weight and volume of the machine and the possibility of accessing the workbench from three sides. The dimensioning of the lateral C-shaped profiles is probably one of the most challenging phases of the design process; the large majority of the weight and cost, for the mechanical part at least, is actually concentrated there. Consequently, a good designer looks for the lightest profile which is able to satisfy all the structural requirements. Moreover, an economical configuration is also desired, in order to reduce the production cost as much as possible.

Fig. 1 - An example of C-type press. The steel C-shaped profile which will be optimized in this work is highlighted with a red line.

Fig. 2 - The C-shaped plate geometry has been modeled using the dimensioning principle drawn in this picture, together with the thickness TH of the plate.

When optimizing, the designer should also take into account some aspects which are not strictly related to structural issues but are nevertheless important or, in some cases, fundamental to reach an optimal design. These aspects could be related to the availability of materials and components on the market, technical norms that have to be satisfied, marketing indications and more. In our case the steel plate supplier, for example, can provide only the configurations collected in Table 2.

Plate thickness [mm] | Plate max dimensions [m] | Available steel codes
20, 30 | Vertical < 4, Horizontal < 2 | A, B, C
40, 50 | Vertical < 3, Horizontal < 2 | A, B
Table 2: The table collects some limitations in the steel plate provision.

Steel code | Young modulus [MPa] | Maximum stress / Yield limit [MPa] | Cost [$/Kg]
A | 210000 | 200 (220) | 1.2
B | 210000 | 300 (330) | 2.3
C | 210000 | 600 (630) | 4.0
Table 3: The maximum desired stress, the yield limit and the cost per unit weight for the three available steels.

It is clear that an optimization process that does not take these requisites into consideration could lead to configurations which are optimal only from a theoretical point of view, but which cannot be practically implemented. For example, a very light configuration is



Fig. 3 - A possible version of the C-shaped plate meshed in Gmsh.

Fig. 4 - The CalculiX GraphiX window, where the von Mises stress for the C-shaped plate is plotted.

not of practical interest if it requires a steel characterized by a yield limit greater than 600 [MPa]. For this reason all the requirements collected in Tables 2 and 3 have been included, in order to drive the optimization algorithm towards feasible and interesting solutions. Moreover, it is required that the hollow in the plate (H2-max(R1,R2) x V2, see Figure 2) is at least 500 x 500 [mm], to allow the positioning of the transversal plates and the hydraulic cylinder. Another technical requisite is that the maximum vertical displacement is less than 5 [mm], to avoid excessive misalignments between the cylinder axis and the structure under the usual operating conditions. This limit has been chosen arbitrarily, in the attempt to exclude the designs that are not sufficiently stiff, taking into account, however, that the C-plate is part of a more complex real structure which will be much stiffer than what is calculated with this simple model. A designer should recognize that the solution of such a problem is not exactly trivial. Firstly, it is not easy to find a configuration which is able to satisfy all the requisites listed above; secondly, it is rather challenging to obtain a design that minimizes both the volume of the plate (the weight) and the production cost.
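Later in the article these requirements are folded into the genetic algorithm with a penalty approach; a toy illustration of what such a penalised evaluation could look like is sketched below. The function name and the weights are invented for this example and do not correspond to the actual scripts used for the study.

// Sketch: penalise the two objectives when a design violates the structural requirements.
// s_allow is the allowable stress of the chosen steel; 5 mm is the displacement limit quoted in the text.
function [cost_p, vol_p] = penalised_objectives(cost, vol, svm, dmax, s_allow)
    p = 0;
    if svm  > s_allow then p = p + (svm - s_allow) / s_allow; end   // relative stress violation
    if dmax > 5       then p = p + (dmax - 5) / 5;            end   // relative displacement violation
    cost_p = cost * (1 + 10 * p);                                    // penalised production cost (weight is arbitrary)
    vol_p  = vol  * (1 + 10 * p);                                    // penalised volume (weight is arbitrary)
endfunction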

The open source software for the structural analysis
Gmsh has been used as a preprocessor to manage the parametric geometry of the C-shaped plate and to mesh it in batch mode. Gmsh has the ability to mesh a non-regular geometry using triangular elements; many controls are available to efficiently define the typical element dimension, the refinement depth and more. It is a very powerful software tool which is also able to manage complicated three-dimensional geometries and efficiently mesh them using a rich element library. The mesh can be exported to a formatted text file where the nodes and the element connectivities are listed together with some useful information related to the so-called physical entities, previously defined by the user; this information can be used to correctly apply, for example, boundary conditions, domain properties and loads to a future finite element model. The CalculiX finite element software has been used to solve the linear elastic problem. Also in this case a batch run is available; among the many interesting features that this software offers are the easy input text format and the ability to perform both linear and non-linear, static and dynamic analyses. CalculiX also offers a pre- and post-processing environment, called CalculiX GraphiX, which can be used to prepare quite complicated models and, above all, to display results. These two software tools are both well documented and some useful examples are provided for new users. The input and output formats are, in both cases, easy to understand and manage. In order to make the use of these tools completely automatic, it is necessary to write a procedure that translates the mesh output file produced by Gmsh into an input file readable by CalculiX. This translation is a relatively simple operation and it can be performed without great effort using a variety of programming languages: a text file has to be read, some information has to be captured and then rewritten into a text file following some simple rules. For this purpose, a simple executable file (named translate.exe) has been compiled, and it will be launched whenever necessary. A similar operation also has to be performed in an optimization context to extract the interesting quantities from the CalculiX result file and rewrite them into a compact and accessible text file. As before, an executable file (named read.exe) has been produced to read the .dat results file and write the volume, the maximum vertical displacement and the nodal von Mises stresses corresponding to a given design into a file named results.out. Many other open source software codes are available, both for the model setup and for its solution. Also for the results visualization there are many free tools with powerful features. For this reason the interested reader


Newsletter EnginSoft Year 8 - Openeering Special Issue -

can imagine the use of other tools to solve this problem in an efficient way.

49
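Such a chain can be driven directly from Scilab through system calls. The following is a minimal, hypothetical sketch of this automation, not the script actually used in this work: the template and job file names (plate_template.geo, plate.geo, mesh.msh, model), the Gmsh and CalculiX command lines and the argument order assumed for translate.exe and read.exe are illustrative and would have to be adapted to the real setup; only the helper names and results.out come from the description above.

// Hypothetical sketch: drive one design through the
// Gmsh -> translate.exe -> CalculiX -> read.exe chain from Scilab.
// All file names and command lines below are illustrative assumptions.
function res = run_design(x)
    // x: the eight geometric parameters [H1 H2 V1 V2 V3 R1 R2 TH] in mm
    names = ["H1" "H2" "V1" "V2" "V3" "R1" "R2" "TH"];

    // 1. Prepend the parameter values to a parametric Gmsh geometry template
    geo = [];
    for i = 1:8
        geo = [geo; names(i) + " = " + string(x(i)) + ";"];
    end
    geo = [geo; mgetl("plate_template.geo")];   // hypothetical template file
    mputl(geo, "plate.geo");

    // 2. Run the tools in batch mode, in the right order
    deletefile("results.out");                  // remove results of a previous design
    unix("gmsh -2 plate.geo -o mesh.msh");      // mesh the geometry
    unix("translate.exe mesh.msh model.inp");   // Gmsh mesh -> CalculiX input deck
    unix("ccx model");                          // linear elastic solution
    unix("read.exe model.dat results.out");     // extract the quantities of interest

    // 3. Load volume, maximum vertical displacement and maximum von Mises stress,
    //    returning [] if the chain failed and no results file was produced
    if isempty(fileinfo("results.out")) then
        res = [];
    else
        res = fscanfMat("results.out");
    end
endfunction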

The optimization process driven by Scilab
The genetic algorithm toolbox, by Yann Collette, is available in the standard installation of Scilab and it can be used to solve the multi-objective optimization problem described above. This toolbox is composed of some routines which implement both a MOGA and an NSGA2 algorithm, together with a version of the operations that have to be performed when running a genetic algorithm, that is the encoding, the crossover, the mutation and the selection. These routines are extremely flexible and they can be modified by the user according to his or her own needs, since the source code is available. This is exactly what we have done, modifying the optim_moga.sci script to handle the constraints (with a penalty approach) and to efficiently manage the infeasible designs (i.e. all the configurations which cannot be computed); we have then redefined coding_ga_binary.sci to allow the discretization of the input variables as listed in Table 4. Other small changes have been made to the routines to perform some marginal operations, such as writing partial results to a file.

When the genetic algorithm requires the evaluation of a given configuration, we run a Scilab script which prepares all the text files needed to perform the run and then launches the software (Gmsh, CalculiX and the other executables) through calls to the system, in the right order. The script finally loads the results needed by the optimization. It is important to highlight that this script can easily be changed to launch other software tools or to perform other operations whenever necessary.

In our case, eight input variables are sufficient to completely parameterize the geometry of the plate (see Figure 2): the lower and upper bounds, together with the steps, are collected in Table 4. Note that the lower bound of variable V2 has been set to 500 [mm] in order to satisfy the constraint on the minimum vertical dimension required for the hollow. We can use a rich initial population (200 randomly generated designs), considering the fact that a high number of them will violate the imposed constraints or, worse, be unfeasible. The following generations will however consist of only 50 designs, to limit the optimization time. After 50 generations we obtain the results plotted in Figure 5 and collected in Table 5, where the two Pareto (optimal) solutions are reported. We finally decided to choose, between the two optimal solutions, the configuration with the lowest volume (named "A" in Table 5).

Table 4: The lower and upper bounds together with the step for the input variables.

Variable   Lower bound [mm]   Upper bound [mm]   Step [mm]
H1         250                1500               5
H2         500                1500               5
V1         250                1500               5
V2         500                1500               5
V3         250                1500               5
R1         50                 225                5
R2         50                 225                5
TH         20                 50                 10

Table 5: The optimal solutions.

Design  H1    H2    V1    V2    V3    R1    R2    TH    Cost   max vM         max vertical   Volume
name    [mm]  [mm]  [mm]  [mm]  [mm]  [mm]  [mm]  [mm]  [$]    stress [MPa]   displ. [mm]    [mm^3]
A       670   665   575   500   490   165   110   20    1304   577.7          4.93           3.53·10^7
B       1155  695   725   545   840   185   165   30    1097   199.8          1.73           1.06·10^8

Fig. 5 - The Cost of the computed configurations plotted versus the Volume. Red points stand for the feasible configurations, while the blue plus signs indicate the configurations that do not respect at least one constraint. The two green squares are the Pareto (optimal) solutions (A and B in Table 5).
Fig. 6 - The vertical displacement for design A.
Fig. 7 - The von Mises stress for design A.
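To make the penalty approach described above more concrete, the following is a simplified Scilab sketch of a two-objective function of the kind the genetic algorithm evaluates for each design, reusing the run_design sketch given earlier. It is not the modified optim_moga.sci used in this work: the penalty weight, the multiplicative penalisation and the placeholder cost model are illustrative assumptions; only the 600 [MPa] steel limit, the 5 [mm] displacement limit and the 500 [mm] hollow requirement come from the text.

// Simplified sketch of a two-objective function (volume, cost) with penalised
// constraints; the weight and the cost model are illustrative assumptions.
function f = plate_objectives(x)
    res = run_design(x);                 // sketch from the previous listing
    if isempty(res) then                 // configuration could not be computed
        f = [1e12 1e12];                 // treat infeasible designs as very poor
        return
    end
    volume = res(1); displ = res(2); stress = res(3);

    // Purely illustrative placeholder for the cost of the rectangular blank
    // (in the article the cost follows the rules collected in Tables 2 and 3)
    cost = 1e-6 * x(1) * x(2) * x(8);

    // Penalty terms: stress above the strongest admissible steel (600 MPa),
    // vertical displacement above 5 mm, hollow height V2 below 500 mm
    w = 1e3;
    p = w * max(0, stress/600 - 1)^2;
    p = p + w * max(0, displ/5 - 1)^2;
    p = p + w * max(0, 500/x(4) - 1)^2;

    f = [volume*(1 + p), cost*(1 + p)];  // the two objectives to be minimised
endfunction

A function of this kind is then minimised by the (modified) optim_moga routine, with the bounds and steps of Table 4 handled through the redefined coding_ga_binary.sci.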

In Figures 6 and 7, the vertical displacement and the von Mises stress are plotted for the optimal configuration named "A" (see Table 5). Note that during the optimization the maximum value of the von Mises stress computed at the finite element Gauss points has been used, while Figure 7 plots the von Mises stress extrapolated by CalculiX to the mesh nodes; this is the reason why the maximum values are different. However, they are both less than the yield limit corresponding to the steel type C, as reported in Table 3. Another interesting consideration is that the Pareto front in our case consists of just two designs: this shows that the solution of this optimization problem is far from trivial.

The design of the C-shaped plate can be further improved. If more generations were run with the optimization algorithm, better solutions could probably be found, but we feel that the improvements that might be obtained in this way would not justify the additional computations. Substantial improvements can, however, be achieved in another way. If we look at the von Mises stress distribution drawn in Figure 7, we note that the corners of the plate do not have a very high stress level and should not influence the structural behavior very much. A new design can therefore be tested, cutting the corners of the plate; for the sake of simplicity we decided to use four equal cuts of horizontal and vertical dimensions equal to H1/3, starting from the corners. The results are drawn in Figures 8 and 9, which can be compared with Figures 6 and 7. As expected, there is a reduction in volume with respect to the original design, but no significant variations are registered in the other outputs. This corroborates the idea that the cut dimensions can be excluded from the set of input variables, since the outputs do not strongly depend on them; this leads to a simpler optimization problem.

Fig. 8 - The vertical displacement for the modified design.
Fig. 9 - The von Mises stress for the modified design.

Table 6: The modified design. It can be seen that there is an interesting reduction in the volume with respect to the original design (the "A" configuration in Table 5), while the other output quantities do not present significant variations.

Horizontal and vertical length of cuts    Cost   max vM         max vertical   Volume
starting from the corners [mm]            [$]    stress [MPa]   displ. [mm]    [mm^3]
H1/3                                      1304   581.3          4.80           3.33·10^7

The cost does not change; actually, it represents the cost of the rectangular plate needed to produce the C-shaped profile. Other configurations with a lower volume could probably be found with some more runs; however, the reader has to consider that these improvements are not really significant in an industrial context, where it is probably much more important to find optimal solutions in a very short time.

Conclusions
In this work it has been shown how open source software can be used to solve a non-trivial structural optimization problem. Some aspects which characterize commercial and open source software have been stressed, in order to help the reader make his or her own best choice. We are convinced that there is not a single right solution, but rather that the best solution has to be found for each situation. Whatever the choice, the hope is that virtual simulation and optimization techniques are used to innovate.

References
[1] Visit http://www.opensource.org/ for more information on open source software.
[2] Scilab can be freely downloaded from: http://www.scilab.org/
[3] Gmsh can be freely downloaded from: http://www.geuz.org/gmsh/
[4] CalculiX can be freely downloaded from: http://www.calculix.de/

Contacts
For more information on this document please contact the author:
Massimiliano Margonari - Openeering
info@openeering.com



Scilab training courses by Openeering
Openeering offers Scilab training courses designed with industry in mind. Openeering trainers have the highest level of competence: not only are they skilled Scilab software trainers but, most importantly, they are senior engineers and mathematicians with deep knowledge in the use of engineering and scientific software, who know how to apply Scilab to industry-level case studies. Openeering offers two sets of Scilab courses: scheduled standard courses and tailored, custom courses. Standard courses are provided on a regular basis, at different locations across Europe, and aim to provide a standard education path at introductory and specialized levels. Great care has been put into designing the course syllabi and training materials. The Openeering training course calendar is continuously updated and is available on the openeering.com website.

SCILAB-01: Introduction to Scilab
The course has two main objectives. At the end of the course, the trainee will be able to use Scilab effectively. The trainee will also be aware of potential Scilab applications and of strategies to solve various common industry problems, such as the cutting stock problem, data fitting, Monte Carlo analysis and Six Sigma.

SCILAB-02: Optimizing with Scilab
At the end of the course the trainee will be able to apply Scilab to common optimization problems. The course will provide the theoretical basis, such as how to correctly analyze and model single- and multiobjective optimization problems, as well as strategies to build real-world optimization applications with Scilab. Participation in SCILAB-01 is strongly recommended.

Openeering custom courses are tailored taking into account the initial competencies of the trainees and the specific customer needs. Custom courses can be provided at EnginSoft or at the customer's premises.

Country   Location                   Date                         Course
Italy     Trento, Bergamo, Padova    March 20 and 21, 2012        SCILAB-01-IT - Introduzione a Scilab
                                     March 22, 2012               SCILAB-02-IT - Ottimizzare con Scilab
                                     June 12 and 13, 2012         SCILAB-01-IT - Introduzione a Scilab
                                     June 14, 2012                SCILAB-02-IT - Ottimizzare con Scilab
                                     September 11 and 12, 2012    SCILAB-01-IT - Introduzione a Scilab
                                     September 13, 2012           SCILAB-02-IT - Ottimizzare con Scilab
Germany   Frankfurt am Main          October 3 and 4, 2012        SCILAB-01-EN - Introduction to Scilab
                                     October 5, 2012              SCILAB-02-EN - Optimizing with Scilab


