Figure 11 A control system with cascade control, feedback and feed-forward. The main controller generates a reference signal for the secondary controller, which the secondary controller uses to generate a control signal for the system being controlled.

A very common controller today is the PID controller, which is based on the feedback concept described above. PID stands for proportional, integral and derivative, which refers to how the control signal depends on the control error. Making the control signal proportional to the control error means that the controller responds to a growing control error by increasing the magnitude of the control signal. The integral part means that the control error is integrated over time and added to the control signal; this is useful since it eliminates static (steady-state) control errors. The derivative part of the controller looks at the derivative of the control error. This makes it possible to "predict" the future behaviour of the system and act accordingly in order to prevent the control error from growing.

Many of today's control strategies, such as PID, LQ (linear quadratic) and LQG (linear quadratic Gaussian), assume that the system to be controlled is linear. This is not actually true for many systems, but often it does not matter, since the system can be assumed to be linear close to some working point of interest, or it can sometimes be linearised by applying an inverted version of the non-linearity. Other control strategies may not be inherently dependent on a system's linearity, but may instead be heavily dependent on accurate models. An example of such a control strategy is MPC (model predictive control).
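The three parts described above can be sketched as a minimal discrete-time PID update. The gain names (kp, ki, kd) and the fixed sample time dt are conventional assumptions for illustration, not taken from the text.

```python
class PID:
    """Minimal sketch of a discrete-time PID controller (fixed sample time)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0    # accumulates the control error over time
        self.prev_error = 0.0  # used to approximate the error derivative

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # integral part: integrates the error, removing static errors
        self.integral += error * self.dt
        # derivative part: rate of change of the error, used to "predict" its growth
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # control signal: weighted sum of the proportional, integral and derivative parts
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a loop, `update(setpoint, measurement)` would be called once per sample and its return value applied as the control signal to the plant.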
2.5.3 Extremum Seeking Control
For many processes the control objective is to maximise or minimise an output. The problem then becomes finding the reference value of some input for which the output is optimal. When the system is non-linear, it can be difficult to find this optimal value, especially if no model of the system is available. If the optimum remains constant over time, it is usually sufficient to run an experiment to find the optimal value, which can then be set as a fixed reference value for the process. However, the optimum is not always constant, and then a single experiment will not suffice since the optimum will change over time. There are a number of controllers dealing with this problem, commonly referred to as extremum seeking controllers. Some are based on known trajectories of the optimum; others require less knowledge of the system. The latter typically perturb the control signal in some way while analysing the output in order to find information about the extremum point. One strong point of these controllers is that they usually make relatively few assumptions about the system to be controlled (Ariyur and Krstić 2003, 3).
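A minimal sketch of such a perturbation-based scheme: a small sinusoidal dither is added to the input, the measured output is correlated with the same sinusoid to estimate the local gradient, and the input estimate climbs that gradient. All parameter names and values here are illustrative assumptions, not taken from the cited text.

```python
import math

def extremum_seeking(f, theta0, a=0.2, omega=5.0, wl=1.0, k=0.5,
                     dt=0.01, steps=20000):
    """Perturbation-based extremum seeking for an unknown objective f."""
    theta = theta0  # current estimate of the optimising input
    grad = 0.0      # low-pass-filtered gradient estimate
    for i in range(steps):
        t = i * dt
        s = math.sin(omega * t)
        y = f(theta + a * s)               # probe the objective, no model needed
        grad += dt * wl * (y * s - grad)   # demodulate and low-pass filter
        theta += dt * k * grad             # gradient ascent toward the extremum
    return theta
```

For example, `extremum_seeking(lambda x: -(x - 2.0) ** 2, theta0=0.0)` should climb toward the maximum at x = 2 without any model of the objective; for minimisation the sign of the adaptation gain k would be flipped.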
Published on Nov 9, 2011