Introduction to Metaheuristic Algorithms
1.1 WHAT IS A METAHEURISTIC ALGORITHM?
Engineering problems from different domains, such as mechanical design engineering, truss structure design, manufacturing and combinatorial optimisation, require various analytical and experimental processes in order to build a mathematical model. Such models include several interdependent parameters and variables, as well as linear and nonlinear constraints. Most of these problems have discrete or mixed design variables. These variables have a limited search space, due to which the solution may get stuck in local minima. Such problems are difficult and cumbersome to solve using traditional gradient-based optimisation methods. Hence, researchers are motivated to apply Artificial Intelligence (AI)-based stochastic optimisation techniques. However, these stochastic optimisation techniques cannot necessarily solve all types of problems; problem-specific algorithms need to be developed, and these are referred to as heuristic algorithms. Heuristic algorithms require several modifications in order to make them appropriate for solving problems from different domains; these modified algorithms are referred to as metaheuristic algorithms. Metaheuristic algorithms such as the Genetic Algorithm (GA) (Goldberg, 1989), the Particle Swarm Optimisation (PSO) algorithm (Eberhart and Kennedy, 1995) and
the Ant Colony Optimisation (ACO) algorithm (Colorni et al., 1991) have shown their applicability in solving problems from different areas. AI-based stochastic optimisation techniques are classified as follows: a) bio-inspired, b) swarm intelligence-based and c) physics and chemical system-inspired. These techniques are also referred to as nature-inspired techniques. The detailed classification of these techniques is illustrated in Figure 2.1 (taken from Kumar et al., 2018).
1.2 DESIGN VARIABLES
Design variables play an important role in the optimisation process. They are classified as follows: a) continuous variables, b) discrete variables and c) integer variables. Let xi be any design variable to be selected from defined range bounds, where Lb is the lower bound and Ub is the upper bound. The design variable is referred to as continuous if it may take any value within the range bounds Lb ≤ xi ≤ Ub. Discrete variables are selected from a finite set of values. For example, let set A consist of finite/discrete values, e.g., A = {0.1, 0.3, 0.5, 0.7, 0.9}, from which the feasible design variables are selected to minimise/maximise the cost function; such variables are referred to as discrete variables. An integer variable, as the name implies, is an integer value selected from the defined range bounds. Several engineering problems have both discrete and continuous design variables; such problems are referred to as mixed design variable problems.
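A minimal sketch of the three variable types is given below. The bounds, the discrete set A and the variable names are illustrative assumptions, not values taken from this book.

```python
import random

# Illustrative bounds and discrete set (assumptions for this sketch only).
Lb, Ub = 0.0, 1.0                      # range bounds for a continuous variable
A = [0.1, 0.3, 0.5, 0.7, 0.9]          # finite set for a discrete variable

x_continuous = random.uniform(Lb, Ub)  # any value with Lb <= x <= Ub
x_discrete = random.choice(A)          # value restricted to the set A
x_integer = random.randint(1, 10)      # integer value within defined bounds

# A mixed variable problem simply combines such variables in one design vector,
# e.g. x = [x_continuous, x_discrete, x_integer].
```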
1.3 CONSTRAINT HANDLING
When constraints occur in engineering problems, it is necessary to include supporting techniques within the optimisation methods. Several constraint handling techniques, such as penalty-based, probability-based and feasibility-based techniques, have been proposed to date. As the choice of penalty parameter necessitates a significant number of preliminary trials, a parameter-less approach, referred to as a niched penalty function approach, was proposed by Deb and Agrawal (1999). In this approach, a feasible solution was selected based on three criteria: accepting a feasible solution rather than an infeasible one, accepting the better fit of two feasible solutions and accepting the infeasible solution with the smaller constraint violation. These three rules were later referred to as feasibility-based rules and were used as a constraint handling technique (Deb, 2000). A probability distribution-based constraint handling technique was proposed by Kulkarni and Shabir (2016) and Kulkarni et al. (2016) to bias the fitness function towards feasibility. However, the mathematical construction of this approach is problem dependent and needs to be generalised.
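The three feasibility-based rules can be expressed compactly as a pairwise comparison for a minimisation problem. The sketch below is a generic illustration of Deb's (2000) rules; the helper names and the way the violation is summed are assumptions made here for clarity.

```python
# Minimal sketch of the feasibility-based rules (Deb, 2000) for comparing two
# candidate solutions of a minimisation problem with constraints g_j(x) <= 0.
def total_violation(g_values):
    """Sum of violations of inequality constraints g_j(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(f1, g1, f2, g2):
    """Return True if solution 1 is preferred over solution 2."""
    v1, v2 = total_violation(g1), total_violation(g2)
    if v1 == 0 and v2 == 0:      # both feasible: prefer the better objective
        return f1 <= f2
    if v1 == 0 or v2 == 0:       # a feasible solution always beats an infeasible one
        return v1 == 0
    return v1 <= v2              # both infeasible: prefer the smaller violation
```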
1.4 OVERVIEW OF COHORT INTELLIGENCE (CI) ALGORITHM
An AI-based optimisation technique, referred to as Cohort Intelligence (CI), was proposed by Kulkarni et al. (2013). It was motivated by the social behaviour of cohort candidates. The term ‘cohort’ refers to a group of learning candidates cooperating, competing and interacting with one another in order to achieve and improve their individual goal; this goal is inherently common to all the candidates. In the course of learning through interaction and competition, a candidate may follow some other candidate. This may result in the improvement of its own behaviour as well as that of the overall cohort. The cohort is considered successful when the behaviour of every candidate has saturated and does not improve further. The methodology of the CI algorithm incorporated with the static penalty function (SPF) approach is applied to solve several structural and engineering problems that have discrete, integer and mixed variables. These problems are classified into two domains: truss structure design problems with discrete variables and mechanical design engineering problems with mixed variables. The truss structure problems considered here are the 6-bar, 10-bar (2 cases), 25-bar (2 cases), 38-bar, 45-bar, 52-bar and 72-bar (2 cases) problems. The mechanical design engineering problems considered here are the reinforced concrete beam design problem, the stepped cantilever beam design problem, the welded beam problem (two cases), the speed reducer problem, the pressure vessel design problem, the helical compression spring problem, the multi-clutch brake problem, the I-section beam problem, the cantilever beam problem and the compound gear design problem. Furthermore, 17 linear and nonlinear integer variable test problems (linear, nonlinear, global, convex and monotonous functions) are also considered. The constraints involved in these problems are handled using the static penalty function (SPF) approach. A round-off integer sampling approach is devised for handling the discrete variables. The results are compared with those obtained using the Multi Random Start Local Search (MRSLS) method proposed in Kulkarni et al. (2016). The performance of the CI algorithm is also evaluated for two parameters (the sampling space reduction factor (R) and the number of candidates (C)) using two illustrative examples, one from each problem domain (see Section 3.3). To overcome the limitation of the SPF approach that the choice of penalty parameter (θ) necessitates a significant number of preliminary trials, as in Kale and Kulkarni (2018), the Self-Adaptive Penalty Function (SAPF) approach is proposed. The effect of the SAPF approach on outcomes such as the penalty function, constraint violations and the pseudo objective function is thoroughly discussed. Additionally, in order to overcome the limitation of the CI-SAPF approach (i.e., setting the sampling space reduction factor R) discussed in Section 3.3, it is hybridized with Colliding Bodies Optimisation (CBO) and referred to as CI-SAPF-CBO. The proposed CI-SAPF and CI-SAPF-CBO are tested by solving 10 discrete truss structure problems, 11 mixed variable design engineering problems and 17 discrete variable test problems (linear, nonlinear, global, convex and monotonous functions).
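To fix ideas, the generic static penalty idea referred to above converts the constrained objective into an unconstrained pseudo objective by adding θ times the constraint violations. The sketch below uses a common quadratic penalty form with a fixed θ; it is an illustration of the general SPF idea, not necessarily the exact formulation used in this book.

```python
# Hedged sketch of a generic static penalty function (SPF) pseudo objective:
# f_pseudo(x) = f(x) + theta * sum_j max(0, g_j(x))^2, for constraints g_j(x) <= 0.
def pseudo_objective(f, g_list, x, theta):
    violation = sum(max(0.0, g(x)) ** 2 for g in g_list)
    return f(x) + theta * violation

# Example (assumed): minimise f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e., x >= 1).
f = lambda x: x ** 2
g = lambda x: 1.0 - x
print(pseudo_objective(f, [g], 0.5, theta=1000.0))   # infeasible point, heavily penalised
print(pseudo_objective(f, [g], 1.2, theta=1000.0))   # feasible point, no penalty
```

The drawback noted in the text is visible here: the quality of the search depends on choosing θ well, which typically requires preliminary trials.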
Previously (Kaveh and Mahdavi, 2014; Kale and Kulkarni, 2018), CBO was applied to solve certain truss structure and engineering design problems. In this book, CBO is also applied to solve other problems from similar domains and is considered here in order to compare the performance of the individual algorithms. The CI-SPF, CI-SAPF, CI-SAPF-CBO and CBO algorithms were applied to two real-world applications from the manufacturing engineering domain: (i) the multi-pass turning problem for the minimisation of unit production cost and (ii) the multi-pass turning problem for the minimisation of production time. The solutions obtained from the proposed techniques are compared with those of other contemporary techniques discussed in the literature.
1.5 ORGANISATION OF THE BOOK
This book is organised as follows:
A detailed literature review of nature-inspired optimisation techniques, including their classification, is given in Chapter 2. The review emphasises the classification of socio-inspired optimisation techniques and considers the pros and cons of various constraint handling methods.
In Chapter 3, the CI and MRSLS algorithms are incorporated with SPF and are discussed in detail, along with the solutions to discrete and mixed variable problems from the truss structure and design engineering domains. In order to handle discrete variables, a round-off integer sampling approach is also proposed. The CI-SPF and MRSLS approaches are validated by solving the 6-bar truss structure problem, the pressure vessel design engineering problem and two linear and nonlinear test problems. The discussion of the results includes comparisons between these approaches and, additionally, those of other nature-inspired optimisation techniques. In addition, the CI algorithm is analysed by varying the number of candidates C and the sampling space reduction factor R while solving the discrete variable 10-bar truss structure problem and the mixed variable stepped beam design engineering problem.
In Chapter 4, the proposed SAPF approach is discussed in detail. The approach is validated by solving the problems discussed in Chapter 3. Using the SAPF approach, the impact of the penalty parameter on the behaviour of the function value, constraint violations and pseudo objective function is analysed.
As discussed in the literature review, nature-inspired optimisation algorithms are governed by computational parameters. The CI algorithm also requires two parameters to be tuned: the number of candidates C and the sampling space reduction factor R. The fine-tuning of such parameters necessitates several preliminary trials. In order to reduce the dependence on the sampling space reduction factor R, the intrinsic properties of CI and CBO are combined to form a new hybrid metaheuristic algorithm, CI-SAPF-CBO (see Chapter 5). For the validation of the proposed CI-SAPF-CBO algorithm, one problem from each of the domains discussed in Chapter 3 is solved and the results are compared to those obtained using other contemporary techniques.
In Chapter 6, the applicability of the CI-SPF, MRSLS, CI-SAPF and CI-SAPF-CBO techniques to problems from different domains (9 discrete variable truss structure problems, 10 mixed variable design engineering problems and 15 discrete variable test problems (linear, nonlinear, global, convex and monotonous functions)) is compared to other contemporary techniques. The formulations of these problems are given in the Appendix.
In Chapter 7, the applicability of the proposed CI-SPF, CI-SAPF and CI-SAPF-CBO to real-world applications from the manufacturing engineering domain is discussed in detail. Problems on the multi-pass turning and multi-pass milling processes are solved. A comparison of the results with those obtained using other contemporary techniques is also presented.
REFERENCES
Colorni, A., Dorigo, M. and Maniezzo, V. (1991) ‘Distributed optimization by ant colonies’, ECAL91 European Conference on Artificial Life, Paris, pp. 134–142.
Deb, K. (2000) ‘An efficient constraint handling method for genetic algorithms’, Computer Methods in Applied Mechanics and Engineering, Vol. 186, Nos. 2–4, pp. 311–338.
Deb, K. and Agrawal, S. (1999) ‘A niched-penalty approach for constraint handling in genetic algorithms’, in Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA-99), pp. 235–243.
Eberhart, R. and Kennedy, J. (1995) ‘A new optimizer using particle swarm theory’. In: MHS’95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, IEEE, Nagoya, pp. 39–43.
Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley.
Kale, I.R. and Kulkarni, A.J. (2018) ‘Cohort intelligence algorithm for discrete and mixed variable engineering problems’, International Journal of Parallel, Emergent and Distributed Systems, Vol. 33, No. 6, pp. 627–662.
Kaveh, A. and Mahdavi, V.R. (2014) ‘Colliding bodies optimization: A novel metaheuristic method’, Computers and Structures, Vol. 139, pp. 18–27.
Kulkarni, A.J. and Shabir, H. (2016) ‘Solving 0–1 knapsack problem using cohort intelligence algorithm’, International Journal of Machine Learning and Cybernetics, Vol. 7, No. 3, pp. 427–441.
Kulkarni, A.J., Durugkar, I.P. and Kumar, M. (2013) ‘Cohort intelligence: A self supervised learning behavior’, Systems, Man, and Cybernetics (SMC), IEEE International Conference, pp. 1396–1400.
Kulkarni, A.J., Baki, M.F. and Chaouch, B.A. (2016) ‘Application of the cohort intelligence optimization method to three selected combinatorial optimization problems’, European Journal of Operational Research, Vol. 250, No. 2, pp. 427–447.
Kumar, M., Kulkarni, A.J., Satapathy, S.C. (2018) ‘Socio evolution and learning optimization algorithm: A socio-inspired optimization methodology’, Future Generation Computer Systems, Vol. 81, pp. 252–272.
CHAPTER 2
Literature Survey on Nature Inspired Optimisation Methodologies and Constraint Handling
2.1 CLASSIFICATION OF NATURE INSPIRED OPTIMISATION TECHNIQUES
The classification of nature inspired optimisation techniques is presented in Figure 2.1 (taken from Kumar et al., 2018). The details are described as follows:
FIGURE 2.1 Classification of Nature-Inspired Optimisation Algorithms (Source: Kumar et al., 2018). The figure divides nature-inspired algorithms into bio-inspired algorithms (evolutionary algorithms such as the Genetic Algorithm, Evolution Strategies, Genetic Programming, Evolutionary Programming and Differential Evolution, along with the Artificial Immune System, Bacteria Foraging and many others), swarm intelligence algorithms (Ant Colony, Cat Swarm Optimization, Cuckoo Search, Firefly Algorithm, Bat Algorithm), cultural/social algorithms (SA) and physical/chemical system-based algorithms (Simulated Annealing, Harmony Search).

Bio-inspired Techniques: The bio-inspired metaheuristic Genetic Algorithm (GA) was proposed by John Holland in the 1960s (Holland, 1975) and is based on Darwin’s theory of evolution, i.e., the survival of the fittest. The GA was further developed by David Goldberg and his team (Goldberg, 1989). The algorithm relies on three important biological operators, mutation, crossover and selection, which enable it to approximate better quality solutions. Evolutionary intelligence is a set
of evolutionary strategies, such as evolutionary programming, differential evolution and neuro-evolution (Michalewicz et al., 1996). Such strategies are inspired by biological evolutionary mechanisms such as reproduction, mutation, recombination and selection. In evolutionary programming (Fogel et al., 1966), a stochastic finite state machine approach was used to evolve the generations, with mutation being the key operator utilised. Furthermore, a heuristic evolution strategy was proposed by Rechenberg (1971). This technique depends on both the mutation and selection search operators. The mutation strength is governed by adapting an individual step size for each coordinate and the correlations between coordinates, using a covariance matrix adaptation approach.
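The three GA operators named above (selection, crossover and mutation) can be illustrated with a minimal real-coded GA. The population size, rates and the sphere test function below are illustrative assumptions, not settings used in this book.

```python
import random

# Minimal real-coded GA sketch: tournament selection, arithmetic crossover
# and Gaussian mutation, minimising the sphere function (an assumed test case).
def ga(n_pop=30, n_dim=5, n_gen=100, pc=0.9, pm=0.1, lb=-5.0, ub=5.0):
    f = lambda x: sum(xi ** 2 for xi in x)            # objective to minimise
    pop = [[random.uniform(lb, ub) for _ in range(n_dim)] for _ in range(n_pop)]
    for _ in range(n_gen):
        new_pop = []
        while len(new_pop) < n_pop:
            # tournament selection of two parents
            p1 = min(random.sample(pop, 2), key=f)
            p2 = min(random.sample(pop, 2), key=f)
            # arithmetic crossover
            if random.random() < pc:
                a = random.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            # Gaussian mutation, clipped to the bounds
            child = [min(ub, max(lb, xi + random.gauss(0, 0.1)))
                     if random.random() < pm else xi for xi in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)

print(ga())
```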
Swarm Intelligence Techniques: Inspired by the flocking of birds and the behaviour of schools of fish, the stochastic Particle Swarm Optimisation (PSO) model was proposed by Eberhart and Kennedy (1995). Based on the intelligence exhibited by a swarm, several other techniques have also been proposed. These include Ant Colony Optimisation (ACO) (Colorni et al., 1991), the Firefly Algorithm (FA) (Yang, 2010a) and the Bat Algorithm (BA) (Yang, 2010b). The ACO algorithm models the foraging behaviour of ants, in which they search for the shortest path between a food source and their nest. This is possible because the ants deposit pheromones, which enables each ant to follow the trails laid by the others. In the FA, all the fireflies are unisex and are attracted towards every other firefly based on the intensity of their flash signals. Fireflies are attracted to areas of higher intensity (brightness) and move towards better search spaces by decreasing the distance between them. In a similar way to the FA, the Bat Algorithm is based on the echolocation behaviour of bats, which emit ultrasonic pulses of varying frequency. The performance of these swarm algorithms is very similar to Multi Agent Systems (MAS), where all the agents work as a group to achieve the best possible outcome. These metaheuristics have proven to be highly capable of solving complex problems with both linear and nonlinear constraints (Gandomi et al., 2011; He and Wang, 2007; Kaveh and Talatahari, 2009).
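A minimal sketch of the particle-level velocity and position updates underlying PSO is shown below. The inertia weight w and acceleration coefficients c1 and c2 are standard PSO parameters; their values, and the sphere test function, are illustrative assumptions.

```python
import random

# Minimal PSO sketch: each particle is pulled towards its personal best (pbest)
# and the global best (gbest), with inertia w on its previous velocity.
def pso(n_particles=20, n_dim=2, n_iter=100, w=0.7, c1=1.5, c2=1.5, lb=-5.0, ub=5.0):
    f = lambda x: sum(xi ** 2 for xi in x)            # sphere function (assumed)
    X = [[random.uniform(lb, ub) for _ in range(n_dim)] for _ in range(n_particles)]
    V = [[0.0] * n_dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    gbest = min(pbest, key=f)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(ub, max(lb, X[i][d] + V[i][d]))
            if f(X[i]) < f(pbest[i]):
                pbest[i] = list(X[i])
        gbest = min(pbest, key=f)
    return gbest

print(pso())
```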
Socio-inspired Techniques: The main socio-inspired optimisation techniques are Probability Collectives (PC) (Wolpert and Tumer, 1999), Symbiotic Organism Search (SOS) (Cheng and Prayogo, 2014) and the Socio Evolution and Learning Optimisation (SELO) algorithm (Kumar et al., 2018). The classification is illustrated in Figure 2.2 (taken from Kumar et al., 2018).

FIGURE 2.2 Classification of Socio-Inspired Algorithms (Source: Kumar et al., 2018). The figure groups cultural/social algorithms (SA) into those based on socio-political ideologies (Ideology Algorithm, Election Algorithm, Election Campaign Optimization Algorithm), sports competitive behaviour (Soccer League Competition Algorithm, League Championship Algorithm), social and cultural interaction (Cohort Intelligence, Teaching Learning Based Optimization, Social Group Optimization, Social Learning Optimization, Cultural Evolution Algorithm, Social Emotional Optimization, Socio Evolution and Learning Optimization) and colonisation (Society and Civilization Optimization Algorithm, Imperialist Competitive Algorithm, Anarchic Society Optimization).

The PC is a distributed and decentralised approach defined in the framework of Collective Intelligence (COIN). It decomposes the entire system into subsystems and treats them as a group of learning, rational and self-interested agents, i.e., as a MAS. The SOS models the symbiotic interaction
strategies that independent agents (organisms) use to survive in the ecosystem. Several political election-based socio-algorithms, such as the Ideology Algorithm (IA) (Teo et al., 2017), the Election Algorithm (EA) (Emami and Derakhshan, 2015) and Election Campaign Optimisation (ECO) (Lv et al., 2010), have also been proposed. ECO models the social behaviour of voters when a candidate attempts to obtain maximum support from them. Based on the positions of the candidate and the voters, both global and local voters are considered, and a uniform distribution process is used to identify the supported focus of the candidate. The EA is based on the process of candidate promotion during an election campaign: candidates follow a series of steps to reinforce their positive image and compete with one another to increase their popularity. Additionally, candidates who have similar ideologies form alliances to increase the chance of success of the united party. With similar motivation, the IA was proposed by Teo et al. (2017). This algorithm emphasises the behaviour of political parties aiming to improve their rank.
The League Championship Algorithm (LCA) (Kashan, 2009) is inspired by distinct features of sporting activity and models the social tendencies of sporting competition in a league. Using a similar approach to the LCA, the Soccer League Competition (SLC) algorithm was proposed by Moosavian and Roodsari (2014); it is based on the interaction of players during a soccer match. Socio-inspired algorithms are also classified according to cultural interactions, such as Teaching Learning Based Optimisation (TLBO) (Rao, 2011), which depicts the process of outcome-based education and models the influence of the teaching process on student outcomes. The Social Group Optimisation (SGO) (Satapathy and Naik, 2016) and Social Learning Optimisation (SLO) (Liu et al., 2016) algorithms are based on the propagation of human knowledge in a learning society/group to solve complex engineering problems. The Cultural Evolution Algorithm (CEA) (Kuo and Lin, 2013) is inspired by the evolution of social species; it adopts strategies such as group consensus, individual learning, innovative learning and self-improvement to evolve. The Social Emotional Optimisation (SEO) algorithm (Xu et al., 2010) is another swarm-based socio-inspired metaheuristic, which simulates an individual who wishes to achieve a higher status in society and whose decisions are guided by their emotions. The level of emotion is based on an index (supporting parameter) that controls the individual's current behaviour; the society decides whether this behaviour is better or worse, and the emotion index value is adjusted accordingly. SELO models the social learning of humans organised as families in a society, i.e., the social evolution and learning of parents and children who constitute a family. Individuals organised as family groups (parents and children) interact with one another and with other distinct families to attain predefined individual goals.
The Society and Civilization Optimisation (SCO) algorithm (Ray and Liew, 2003) is inspired by the social behaviour seen among individuals in a society. The individuals in a society interact with one another to improve their overall behaviour, and a cooperative interaction among such societies represents a civilization. The Imperialist Competitive Algorithm (ICA) (Atashpaz-Gargari and Lucas, 2007) simulates the socio-political behaviour of imperialist nations that compete to take possession of weaker colonies or empires. This imperialist competition enhances the power of the stronger, more successful imperialist empires, whilst weaker empires gradually collapse, leading to a state of convergence. Another optimisation algorithm, which draws inspiration from commonly observed human behaviours in which greed and disorder are used to achieve goals, is referred to as the Anarchic Society Optimisation (ASO) algorithm (Ahmadi-Javid, 2011).
2.2 BACKGROUND OF THE COHORT INTELLIGENCE ALGORITHM
Using a similar approach to the socio-inspired techniques discussed earlier, the Cohort Intelligence (CI) algorithm was proposed by Kulkarni et al. (2013). It is motivated by the social learning behaviour of candidates, such as following, interacting, cooperating and competing with every other candidate in the cohort. Initially, the CI algorithm was tested on unconstrained benchmark test examples (Kulkarni et al., 2013). It was then implemented for constrained problems and applied to solve the combinatorial NP-hard 0-1 Knapsack problem with the number of items varying from 4 to 75 (Kulkarni and Shabir, 2016). The constraints involved in this problem were handled by a problem-specific probability-based constraint handling technique. The algorithm yielded competitive results compared to integer programming solutions. This approach was also applied to real-world combinatorial problems from the healthcare and logistics domains, as well as to large-sized complex problems from the Cross Border Supply Chain domain (Kulkarni et al., 2016a), the Traveling Salesman Problem (TSP) (Kulkarni et al., 2017) and several other benchmark problems (Kulkarni et al., 2017). The algorithm performed significantly better than both integer programming and other problem-specific heuristic techniques. Krishnasamy et al. (2014) modified the CI algorithm by incorporating a mutation mechanism, which helped to expand the sampling space by introducing diversity and, additionally, avoided premature convergence. The modified CI was compared with the original CI for solving several clustering problems. In addition, it was hybridized with a K-Means algorithm, which also exhibited superior performance. Gaikwad et al. (2015) proposed a Modified Analytical Hierarchy Process (MAHP) combined with GA and CI to identify a suitable level of sugar in ice cream for diabetic patients. Static and dynamic penalty function constraint handling approaches were incorporated into CI (CI-SPF and CI-DPF, respectively) for solving several test problems and manufacturing engineering problems (Kulkarni et al., 2016c). The CI-SPF was adopted for solving complex problems from the truss structure and mechanical engineering domains (Kale and Kulkarni, 2018). Furthermore, Patankar and Kulkarni (2017) introduced seven variations of the CI algorithm. The variations are associated with the choices candidates make when selecting other candidates from which to learn certain characteristics. They are labelled as: follow best, follow better, follow worst, follow itself, follow median, follow roulette wheel selection, and alienation and random selection. The algorithm was tested on several unimodal and multimodal unconstrained problems. Moreover, CI was also applied to securing secret messages using steganography (Sarmah and Kulkarni, 2017, 2018). Three cases of the shell-and-tube heat exchanger design problem were also solved for the minimisation of cost, obtaining significantly better results as compared to other contemporary algorithms (Dhavle et al., 2016).
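The core CI mechanism described above, candidates following one another (here via roulette-wheel selection) and resampling within a shrinking interval around the followed behaviour, can be sketched as follows. The reduction factor r, the candidate count and the test function are illustrative assumptions; this is a simplified sketch of the general mechanism, not the authors' exact implementation.

```python
import random

# Hedged sketch of a CI-style iteration: follow a candidate chosen by
# roulette-wheel selection, then resample within a shrinking neighbourhood.
def cohort_intelligence(f, lb, ub, n_candidates=5, n_iter=100, r=0.95):
    n_dim = len(lb)
    width = [ub[d] - lb[d] for d in range(n_dim)]      # current sampling interval widths
    X = [[random.uniform(lb[d], ub[d]) for d in range(n_dim)] for _ in range(n_candidates)]
    for _ in range(n_iter):
        fitness = [f(x) for x in X]
        # roulette-wheel probabilities favour candidates with lower cost
        inv = [1.0 / (1e-12 + fi) for fi in fitness]
        probs = [v / sum(inv) for v in inv]
        new_X = []
        for _ in range(n_candidates):
            followed = random.choices(X, weights=probs, k=1)[0]
            # sample within the neighbourhood of the followed candidate's behaviour
            cand = [min(ub[d], max(lb[d],
                        random.uniform(followed[d] - width[d] / 2,
                                       followed[d] + width[d] / 2)))
                    for d in range(n_dim)]
            new_X.append(cand)
        X = new_X
        width = [w * r for w in width]                  # shrink sampling interval by factor r
    return min(X, key=f)

print(cohort_intelligence(lambda x: sum(xi ** 2 for xi in x), [-5, -5], [5, 5]))
```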
As with other nature-inspired methods, the performance of CI deteriorates when constraints are involved. To date, a dynamic penalty function (DPF) approach has been proposed (Kulkarni et al., 2016b); however, the choice of penalty parameter requires a significant number of preliminary trials of the algorithm. A probability-based constraint handling approach was also proposed by Kulkarni and Shabir (2016); this approach is problem specific and may become tedious as the number of constraints increases. A review of various constraint handling approaches is provided in the next section.
2.3 LITERATURE REVIEW ON CONSTRAINT HANDLING TECHNIQUES
The algorithms discussed in Section 2.1 have been incorporated with constraint handling techniques such as penalty function methods, probability distribution-based approaches and feasibility-based rules. Penalty function techniques handle constraints by penalising the objective function with a certain penalty value, thereby converting the constrained problem into an unconstrained problem. As the choice of a suitable penalty parameter necessitates a significant number of preliminary trials, a parameter-less approach, referred to as a niched penalty function approach, was proposed by Deb and Agrawal (1999). In this approach, a feasible solution was selected based on three criteria: accept a feasible solution rather than an infeasible one; accept the better fit of two feasible solutions; and accept the infeasible solution with fewer constraint violations. These three rules were later referred to as the feasibility-based rule and were used as a constraint handling technique (Deb, 2000). A probability distribution-based constraint handling technique was proposed to bias the fitness function towards feasibility (Kulkarni and Shabir, 2016; Kulkarni et al., 2016b). However, the mathematical construction of this approach is problem dependent and needs to be generalised.
The penalty function approach has been widely used due to its simple construction and ease of implementation. Several penalty-based constraint handling techniques have been proposed so far, such as the barrier (death) penalty function approach, which is based on the elimination of infeasible solutions (Luenberger and Ye, 2016), the exact penalty function (Homaifar et al., 1994) and the dynamic penalty function (Joines and Houck, 1994), which is based on setting the value of the penalty parameter and its multiplication factors (penalty reduction or expansion factors) to penalise the objective function. Other techniques have also been proposed, such as the annealing penalty function (Michalewicz and Attia, 1994; Carlson and Shonkwiler, 1998), which is based on the idea of Simulated Annealing (SA), and the adaptive penalty function (Gen and Cheng, 1996; Hadj-Alouane and Bean, 1997; Smith and Tate, 1993; Yokota et al., 1996), which aims to eliminate the setting of a penalty parameter required by other penalty function approaches. In the penalty-based segregated GA (Le Riche et al., 1995), a distinct penalty parameter was set for different evaluated fitness functions. These techniques have been successfully employed with nature-inspired optimisation techniques to deal with linear and nonlinear constrained optimisation problems. They are simple and easy to apply to a wide variety of constrained optimisation problems (Yu et al., 2010; Li et al., 2011); however, as the number of constraints increases, their performance degenerates (Luenberger and Ye, 2016). Additionally, an exact penalty approach was adopted by Shin et al. (1990), Wu and Chow (1995) and Azad et al. (2013) for nonlinear optimisation problems with discrete design variables. For every independent problem, several preliminary trials were required to set an appropriate penalty parameter (Homaifar et al., 1994; Morales and Quezada, 1998). A similar approach was adopted for the FA (Gandomi et al., 2011) and for the CI algorithm with the static penalty function approach (CI-SPF) (Kale and Kulkarni, 2018) when solving discrete and mixed variable problems with linear as well as nonlinear constraints from the engineering design and truss structure domains. However, it was noticed that the selection of the penalty parameter becomes tedious as the number of constraints increases.
Kannan and Kramer (1994) adopted an augmented Lagrange multiplier approach (that of Viswanathan and Grossmann, 1990) incorporated with a dynamic penalty function method for solving discrete and mixed variable problems from the design engineering domain. In this approach, the penalty parameter was multiplied by a suitable factor and iteratively penalised the objective function. A generalised Hopfield network using an extended penalty function approach was proposed by Shih and Yang (2002) to solve nonlinear engineering problems with discrete and mixed variables. In these methods, the penalty parameter was initialised to an arbitrary value (0 or 1) and then updated iteratively with an incremental multiplication factor. A limitation observed was that, if the multiplication factor is too high, the objective function value may become unstable and stuck in a local minimum. In a similar way to the dynamic penalty function approach, Curtis and Nocedal (2008) introduced a flexible penalty function to handle nonlinear constraints. Here, the penalty parameter was chosen from a prescribed interval rather than being fixed at a single value, which strongly influenced the convergence.
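The dynamic penalty idea discussed above, gradually increasing the pressure on constraint violations as the search proceeds, can be sketched as follows. The multiplicative update rule and the constants are illustrative assumptions, not a specific published formulation.

```python
# Hedged sketch of a dynamic penalty: the penalty parameter theta starts small
# and is multiplied by a factor c every iteration, so violations are penalised
# more heavily in later iterations.
def dynamic_penalty(f, g_list, x, iteration, theta0=1.0, c=1.05):
    theta = theta0 * (c ** iteration)              # penalty grows with iteration
    violation = sum(max(0.0, g(x)) ** 2 for g in g_list)
    return f(x) + theta * violation

# Example (assumed): minimise f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0
print(dynamic_penalty(f, [g], 1.5, iteration=1))    # mildly penalised early on
print(dynamic_penalty(f, [g], 1.5, iteration=100))  # heavily penalised later
```

As the text notes, the quality of the result still depends on the initial parameter and multiplication factor, both of which usually require preliminary trials.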
Nanakorn and Meesomklin (2001) proposed an adaptive penalty function approach in which a modified binary scaling technique was employed to scale the fitness value. The fitness was scaled relative to three reference values: the minimum fitness value, the average fitness of all feasible solutions and the best feasible value. Every infeasible solution was then penalised such that the best infeasible solution had a scaled fitness equal to ϕ times the average fitness value; the parameter ϕ needed to be set based on preliminary trials. Broyden and Attia (1984) proposed a smooth sequential penalty function incorporated with a quasi-Newton approach and combined with orthogonal transformations based on the constraint Jacobian. A non-stationary multi-stage penalty function approach was implemented by Parsopoulos and Vrahatis (2002) and was then followed by Coath and Halgamuge (2003), who added a feasibility preservation method for solving nonlinear problems.
Using an evolutionary algorithm, Michalewicz et al. (1996) and Coello (2000) proposed an approach in which the penalty function is split into two distinct parts, the sum of the constraint violations and the number of violated constraints, with a penalty applied to each part individually. For this approach, independent weighting factors needed to be chosen, which increased the number of working parameters; in addition, several preliminary trials were required to set suitable penalty parameters for both parts. Nie (2006) proposed a semi-penalty approach that combined the qualities of the Sequential Quadratic Programming (SQP) method and the Sequential Penalty Quadratic Programming (SPQP) method, in which equality and inequality constraints receive distinct treatment.
An external penalty function scheme was adopted in a different manner by Hasançebi and Azad (2015), where a relaxation strategy was incorporated into the Adaptive Dimensional Search (ADS) method. In this strategy, a penalised infeasible solution was allowed to dominate, i.e., an infeasible solution could be selected, in order to escape from local minima. Once the solution had saturated, the intensity of the penalty parameter was reduced by multiplying it by a reduction factor. After every Stagnation Escape Period (SEP), the solution was recalculated using the updated penalty parameter and then compared with the previous saturated solution. If the recalculated solution was worse than the previous solution, the original penalty parameter was restored to recalculate the solution and the process continued in search of the best solution. In order to update the penalty parameter, an additional multiplication factor needs to be included; this may require additional time to set, which tends to increase the overall computational cost. Furthermore, the feasibility-based constraint handling approach proposed in Deb (2000) was later implemented by Bansal et al. (2009), Kaveh and Talatahari (2009) and Kulkarni and Tai (2011). It was further modified by Kulkarni et al. (2016a), where a worse solution was accepted after a stagnation period, which helped the algorithm jump out of local minima. It was successfully applied to solving problems from the design engineering and truss structure domains.
2.4 CONCLUSION
This chapter has provided a detailed literature survey on nature-inspired optimisation techniques and constraint handling techniques. The classification of nature-inspired optimisation techniques was presented. The socio-inspired approach is one of the subdomains of nature-inspired optimisation techniques; hence, its classification was also presented and the literature discussed in detail. In this book, constrained problems from various domains are considered in order to validate the proposed techniques (presented in Chapters 3, 4 and 5); the merits and demerits of various constraint handling techniques have also been discussed.
REFERENCES
Ahmadi-Javid, A. (2011) ‘Anarchic society optimization: A human-inspired method’, Evolutionary Computation (CEC), 2011 IEEE Congress, New Orleans, pp. 2586–2592.
Atashpaz-Gargari, E., and Lucas, C. (2007) ‘Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition’, Evolutionary Computation (CEC), 2007 IEEE Congress, Singapore, pp. 4661–4667.
Azad, S.K., Hasançebi, O., Azad, S.K. and Erol, O.K. (2013) ‘Upper bound strategy in optimum design of truss structures: A big bang-big crunch algorithm based application’, Advances in Structural Engineering, Vol. 16, No. 6, pp. 1035–1046.
Bansal, S., Mani, A. and Patvardhan, C. (2009) ‘Is Stochastic ranking really better than feasibility rules for constraint handling in evolutionary algorithms?’, Proceedings of World Congress on Nature and Biologically Inspired Computing, pp. 1564–1567.
Broyden, C.G. and Attia N.F. (1984) A Smooth Sequential Penalty Function Method for Solving Nonlinear Programming Problems, Springer.
Carlson, S.E. and Shonkwiler, R. (1998) ‘Annealing a genetic algorithm over constraints’, SMC’98 Conference Proceedings, IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218), Vol. 4, pp. 3931–3936.
Cheng, M.Y. and Prayogo, D. (2014) ‘Symbiotic organisms search: a new metaheuristic optimization algorithm’, Computers and Structures, Vol. 139, pp. 98–112.
Coath, G. and Halgamuge, S.K. (2003) ‘A comparison of constraint-handling methods for the application of particle swarm optimization to constrained nonlinear optimization problems’, Evolutionary Computation, Vol. 4, pp. 2419–2425.
Coello, C.A.C. (2000) ‘Use of a self-adaptive penalty approach for engineering optimization problems’, Computers in Industry, Vol. 41, pp. 113–127.
Colorni, A., Dorigo, M. and Maniezzo, V. (1991) ‘Distributed optimization by ant colonies’, ECAL91 European Conference on Artificial Life, Paris, pp. 134–142.
Curtis, F.E. and Nocedal, J. (2008) ‘Flexible penalty functions for nonlinear constrained optimization’, IMA Journal of Numerical Analysis, Vol. 28, No. 4, pp. 749–769.
Deb, K. (2000) ‘An efficient constraint handling method for genetic algorithms’, Computer Methods in Applied Mechanics and Engineering, Vol. 186, Nos. 2–4, pp. 311–338.
Deb, K. and Agrawal, S. (1999) ‘A niched-penalty approach for constraint handling in genetic algorithms’, Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA-99), pp. 235–243.
Dhavle, S.V., Kulkarni, A.J., Shastri, A. and Kale, I.R. (2016) ‘Design and economic optimization of shell-and-tube heat exchanger using cohort intelligence algorithm’, Neural Computing and Applications, Vol. 30, No. 1, pp. 111–125.
Eberhart, R. and Kennedy, J. (1995) ‘A new optimizer using particle swarm theory’, in MHS’95. Proceedings of the Sixth International Symposium on micro Machine and Human Science, IEEE, Nagoya, pp. 39–43.
Emami, H. and Derakhshan, F. (2015) ‘Election algorithm: A new socio-politically inspired strategy’, AI Communications, Vol. 28, No. 3, pp. 591–603.
Fogel, L.J., Owens, A.J., Walsh, M.J. (1966) Artificial Intelligence through Simulated Evolution, John Wiley.
Gaikwad, S.M., Joshi, R.R. and Kulkarni, A.J. (2015) ‘Cohort intelligence and genetic algorithm along with AHP to recommend an ice cream to a diabetic patient’, International Conference on Swarm, Evolutionary, and Memetic Computing, pp. 40–49.
Gandomi, A.H., Yang, X-S. and Alavi, A.H. (2011) ‘Mixed variable structural optimization using firefly algorithm’, Computers and Structures, Vol. 89, Nos. 23–24, pp. 2325–2336.
Gen, M. and Cheng, R. (1996) ‘A survey of penalty techniques in genetic algorithms’, Proceedings of International Conference on Evolutionary Computation, IEEE, pp. 804–809.
Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley.
Hadj-Alouane, A.B. and Bean, J.C. (1997) ‘A genetic algorithm for the multiplechoice integer program’, Operations Research, Vol. 45, pp. 92–101.
Hasançebi, O. and Azad, S.K. (2015) ‘Adaptive dimensional search: A new metaheuristic algorithm for discrete truss sizing optimization’, Computers and Structures, Vol. 154, pp. 1–16.
He, Q. and Wang, L. (2007) ‘An effective co-evolutionary particle swarm optimization for constrained engineering design problem’, Engineering Applications of Artificial Intelligence, Vol. 20, No. 1, pp. 89–99.
Holland, J.H. (1975) Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, University Michigan Press.
Homaifar, A., Lai, S.H.Y. and Qi, X. (1994) ‘Constrained optimization via genetic algorithms’, Simulation, Vol. 62, No. 4, pp. 242–254.
Joines, J. and Houck, C. (1994) ‘On the use of non-stationary penalty functions to solve non-linear constrained optimization problems with gas’, Proceedings of the First IEEE International Conference on Evolutionary Computation, pp. 579–584.
Kale, I.R. and Kulkarni, A.J. (2018) ‘Cohort intelligence algorithm for discrete and mixed variable engineering problems’, International Journal of Parallel, Emergent and Distributed Systems, Vol. 33, No. 6, pp. 627–662.
Kannan, B.K. and Kramer, S.N. (1994) ‘An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design’, Journal of Mechanical Design, Vol. 116, No. 2, pp. 405–411.
Kashan, A.H. (2009) ‘League championship algorithm: A new algorithm for numerical function optimization’, IEEE International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, pp. 43–48.
Kaveh, A. and Talatahari, S. (2009) ‘A particle swarm ant colony optimization for truss structures with discrete variables’, Journal of Constructional Steel Research, Vol. 65, pp. 1558–1568.
Krishnasamy, G., Kulkarni A.J. and Paramesran, R. (2014) ‘A hybrid approach for data clustering based on modified cohort intelligence and K-means’, Expert Systems with Applications, Vol. 41, pp. 6009–6016.
Kulkarni, A.J. and Shabir, H. (2016) ‘Solving 0–1 knapsack problem using cohort intelligence algorithm’, International Journal of Machine Learning and Cybernetics, Vol. 7, No. 3, pp. 427–441.
Kulkarni, A.J. and Tai, K. (2011) ‘A probability collectives approach with a feasibility-based rule for constrained optimization’, Applied Computational Intelligence and Soft Computing, Article ID 980216.
Kulkarni, A.J., Durugkar, I.P. and Kumar, M. (2013) ‘Cohort intelligence: A self supervised learning behavior’, Systems, Man, and Cybernetics (SMC), IEEE International Conference, pp. 1396–1400.
Kulkarni, A.J., Kale, I.R. and Tai, K. (2016a) ‘Probability collectives for solving discrete and mixed variable problems’, International Journal of Computer Aided Engineering and Technology, Vol. 8 No. 4, pp. 325–361.
Kulkarni, A.J., Baki, M.F. and Chaouch, B.A. (2016b) ‘Application of the cohort intelligence optimization method to three selected combinatorial optimization problems’, European Journal of Operational Research, Vol. 250, No. 2, pp. 427–447.
Kulkarni, O., Kulkarni, N., Kulkarni, A.J. and Kakandikar, G. (2016c) ‘Constrained cohort intelligence using static and dynamic penalty function approach for mechanical components design’, International Journal of Parallel, Emergent and Distributed Systems, Vol. 33, No. 6, pp. 1–19.
Kulkarni, A.J., Krishnasamy, G. and Abraham, A. (2017) Cohort Intelligence: A Socio-Inspired Optimization Method, Springer.
Kumar, M., Kulkarni, A.J., Satapathy, S.C. (2018) ‘Socio evolution and learning optimization algorithm: A socio-inspired optimization methodology’, Future Generation Computer Systems, Vol. 81, pp. 252–272.
Kuo, H.C. and Lin, C.H. (2013) ‘Cultural evolution algorithm for global optimizations and its applications’, Journal of Applied Research and Technology, Vol. 11, No. 4, pp. 510–522.
Le Riche, R., Knopf-Lenoir, C. and Haftka, R.T. (1995) ‘A segregated genetic algorithm for constrained structural optimization’, Proceedings of the Sixth International Conference on Genetic Algorithms, pp. 558–565.
Li, B., Yu, C.J., Teo, K.L. and Duan, G.R. (2011) ‘An exact penalty function method for continuous inequality constrained optimal control problem’, Journal of Optimization Theory and Applications, Springer, Vol. 151, pp. 260–291.
Liu, Z.Z., Chu, D.H., Song, C., Xue, X. and Lu, B.Y. (2016) ‘Social learning optimization (SLO) algorithm paradigm and its application in QoS-aware cloud service composition’, Information Sciences, Vol. 326, pp. 315–333.
Luenberger, D.G., and Ye, Y. (2016) ‘Penalty and barrier methods’, in Linear and Nonlinear Programming, Springer, Vol. 228.
Lv, W., Liu, Z., Zhang, X., Luo, S. and Cheng, S. (2010) ‘Election campaign algorithm’, 2nd International Asia Conference on Informatics in Control, Automation and Robotics, Vol. 2, pp. 71–74.
Michalewicz, Z. and Attia, N. (1994) ‘Evolutionary optimization of constrained problems’, Proceedings of the Third Annual Conference on Evolutionary Programming, World Scientific, pp. 98–108.
Michalewicz, Z., Dasgupta, D., Le Riche, R. and Schoenauer, M. (1996) ‘Evolutionary algorithms for constrained engineering problems’, Computers and Industrial Engineering Journal, Vol. 30, No. 4, pp. 851–870.
Moosavian, N. and Roodsari, B.K. (2014) ‘Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks’, Swarm and Evolutionary Computation, Vol. 17, pp. 14–24.
Morales, A.K. and Quezada, C.V. (1998) ‘A universal eclectic genetic algorithm for constrained optimization’, In Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing, Vol. 1, pp. 518–522.
Nanakorn, P. and Meesomklin, K. (2001) ‘An adaptive penalty function in genetic algorithms for structural design optimization’, Computer and Structures, Vol. 79, pp. 2527–2539.
Nie, P.Y. (2006) ‘A new penalty method for nonlinear programming’, Computers and Mathematics with Applications, Vol. 52, pp. 883–896.
Parsopoulos, K. and Vrahatis, M. (2002) ‘Particle swarm optimization method for constrained optimization problems’, Intelligent Technologies Theory and Applications: New Trends in Intelligent Technologies, Vol. 76, No. 1, pp. 214–220.
Patankar, N.S., Kulkarni, A.J. (2017) ‘Variations of cohort intelligence’, Soft Computing, Vol. 22, No. 6, pp. 1731–1747.
Rao, R.V. (2011) Advanced Modeling and Optimization of Manufacturing Processes, Springer.
Ray, T. and Liew, K.M. (2003) ‘Society and civilization: An optimization algorithm based on the simulation of social behavior’, IEEE Transactions on Evolutionary Computation, Vol. 7, No. 4, pp. 386–396.
Rechenberg, I. (1971) ‘Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution’, PhD thesis. Reprinted by Frommann-Holzboog (1973).
Sarmah, D.K. and Kulkarni, A.J. (2017) ‘Image steganography capacity improvement using cohort intelligence and modified multi-random start local search methods’, Arabian Journal for Science and Engineering, pp. 1–24.
Sarmah, D.K. and Kulkarni, A.J. (2018) ’JPEG based steganography methods using cohort intelligence with cognitive computing and modified multi random start local search optimization algorithms’, Information Sciences, Vol. 430, pp. 378–396.
Satapathy, S. and Naik, A. (2016) ‘Social group optimization (SGO): A new population evolutionary optimization technique’, Complex & Intelligent Systems, Vol. 2, pp. 173–203.
Shih, C.J. and Yang, Y.C. (2002) ‘Generalized Hopfield network based structural optimization using sequential unconstrained minimization technique with additional penalty strategy’, Advances in Engineering Software, Vol. 33, No. 7–10, pp. 721–729.
Shin, D.K., Gurdal, Z. and Griffin, O.H. (1990) ‘A penalty approach for nonlinear optimization with discrete design variables’, Engineering Optimization, Vol. 16, No. 1, pp. 29–42.
Smith, A. and Tate, D. (1993) ‘Genetic optimization using a penalty function’, Proceedings of the Fifth International Conference on Genetic Algorithms, Morgan Kaufmann, pp. 499–503.
Teo, T.H., Kulkarni, A.J., Kanesan, J., Chuah, J.H. and Abraham, A. (2017) ‘Ideology algorithm: A socio-inspired optimization methodology’, Neural Computing and Applications, Vol. 28, No. 1, pp. 845–876.
Viswanathan, J. and Grossmann, I.E. (1990) ‘A combined penalty function and outer-approximation method for MINLP optimization’, Computers and Chemical Engineering, Vol. 14, No. 7, pp. 769–782.
Wolpert, D.H. and Tumer, K. (1999) ‘An introduction to collective intelligence’, Technical Report, NASA ARC-IC-99-63, NASA Ames Research Center.
Wu, S.J. and Chow, P.T. (1995) ‘Steady-state genetic algorithms for discrete optimization of trusses’, Computers and Structures, Vol. 56, No. 6, pp. 979–991.
Xu, Y., Cui, Z. and Zeng, J. (2010) ‘Social emotional optimization algorithm for nonlinear constrained optimization problems’, Swarm, Evolutionary, and Memetic Computing (SEMCCO 2010), Lecture Notes in Computer Science, Springer Berlin Heidelberg, Vol. 6466, pp. 583–590.
Yang, X.S. (2010a) ‘Firefly algorithm, stochastic test functions and design optimisation’, International Journal of Bio-inspired Computation, Vol. 2, No. 2, pp. 78–84.
Yang, X.S. (2010b) ‘A new metaheuristic bat-inspired algorithm’, Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74.
Yokota, T., Gen, M., Ida, K., and Taguchi, T. (1996) ‘Optimal design of system reliability by an improved genetic algorithm’, Electronics and Communications in Japan (Part III: Fundamental Electronic Science), Vol. 79, No. 2, pp. 41–51.
Yu, C., Teo, K.L., Zhang, L. and Bai, Y. (2010) ‘A new exact penalty function method for continuous inequality constrained optimization problems’, Journal of Industrial and Management Optimization, Vol. 6, No. 4, pp. 895–910.