Artificial Intelligence: A Guide to Intelligent Systems



Figure 8.14 Initial and final membership functions of the ANFIS: (a) initial membership functions; (b) membership functions after 100 epochs of training

The ANFIS has a remarkable ability to generalise and converge rapidly. This is particularly important in on-line learning. As a result, Jang’s model and its variants are finding numerous applications, especially in adaptive control.

8.5 Evolutionary neural networks

Although neural networks are used for solving a variety of problems, they still have some limitations.

One of the most common is associated with neural network training. The back-propagation learning algorithm, often used because it is flexible and mathematically tractable (provided the transfer functions of the neurons are differentiable), has a serious drawback: it cannot guarantee an optimal solution. In real-world applications, the back-propagation algorithm may converge to a set of sub-optimal weights from which it cannot escape. As a result, the neural network is often unable to find a desirable solution to the problem at hand.

Another difficulty is related to selecting an optimal topology for the neural network. The 'right' network architecture for a particular problem is often chosen by means of heuristics, and designing a neural network topology is still more art than engineering.

Genetic algorithms are an effective optimisation technique that can guide both weight optimisation and topology selection. Let us first consider the basic concept of an evolutionary weight optimisation technique (Montana and Davis, 1989; Whitley and Hanson, 1989; Ichikawa and


