
The Fallacy of Optimization
Optimal solutions are fragile, and should therefore not be pursued. Why is this true? Imagine that you do have a system that is optimal, i.e. it offers the best possible characteristics in terms of performance (mass, cost, etc.). Non plus ultra. Now, because we don't know exactly all the characteristics of the environment in which it functions, e.g. wind velocity or outside temperature, and because these characteristics change constantly and randomly (that is, they are stochastic), we cannot expect the system to remain optimal for more than a fraction of a second. Once the environment differs even slightly from the one for which the system was designed, the system is no longer optimal and delivers worse performance. The inconvenient thing about environments is that they change all the time. The stability of an ecosystem stems from a continuous interplay of change and adaptation. This is what Nature is all about.
But let's continue. Imagine that we have indeed managed to place our system at the very peak of performance. It is impossible to go any higher; the system is beyond improvement. What, then, is the most likely path our system can take if it occupies the highest possible peak of performance? Clearly, the only way to go from a peak is down. Therefore, the most probable outcome is that you will get less performance than expected. Always. Why is this so? Because the ideal of perfection, or optimality, rests on the faulty assumption of determinism: that everything is exactly known, and that nothing changes, nothing mutates. Many people over the ages have learned the bitter lesson that in Nature the only constant is change. So determinism induces excessive optimism. You think you have peak performance, and in reality you don't; you will always have less. The peak, never attainable, is a kind of fetish, pursued in a manic fashion, while life carries on in parallel, at slightly lower and more realistic altitudes.
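The "only way off a peak is down" argument can be made concrete with a toy Monte Carlo sketch (the performance function and all numbers here are hypothetical, chosen only to illustrate the point): if a design sits exactly at a peak, any random deviation in operating conditions can only reduce performance, so the average realized performance is strictly below the nominal optimum.

```python
import random

# Hypothetical 1-D performance landscape with its peak (value 1.0) at x = 0.
# Any deviation from x = 0, in either direction, reduces performance.
def performance(x):
    return 1.0 / (1.0 + 20.0 * x**2)

# Let the operating point jitter randomly around the nominal design,
# mimicking a changing, stochastic environment.
rng = random.Random(0)
n = 100_000
realized = [performance(rng.gauss(0.0, 0.2)) for _ in range(n)]
average = sum(realized) / n

assert performance(0.0) == 1.0   # optimal on paper
assert average < 1.0             # but the realized average is always less
```

No realization ever exceeds the nominal value, and the expected value sits strictly below it, which is exactly the "excessive optimism" of the deterministic view.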
Now, let's look at the system itself. The assumption of optimality rests on a further assumption, namely that we can physically manufacture a system that has been numerically designed to be optimal. Two logical faults appear here. First, we cannot manufacture anything exactly, because of the existence of tolerances. Second, the distance between a numerical model and reality is never zero. There are many engineers who ignore the existence of tolerances (for example, those who land with a fresh PhD right in front of an FE code and a computer screen). Those same people often have so much blind and unfounded confidence that their model represents reality perfectly that model quality and confidence are rarely questioned. The question is, again, a bit philosophical: what sense does it make to seek peak performance in an abstract world of computer models, when we know that models are not perfect (in fact, they are still quite far from reality)? And even if they were, we would not be able to manufacture these perfect systems, since tolerances will always interfere.
An evident example of how our culture battles with itself on this slippery terrain is economics. Our economy is fragile because we seek optimality. Where? Maximum profit, with minimum risk, minimum investment, in the shortest possible time. People want no risk! Corporations want customers before a product exists, demand sales forecasts, and would preferably skip R&D investment altogether. This stretching of everything to the limit has an equivalent in mathematics, known as the mini-max approach. Clearly, all of this is not possible. The economy is subject to laws, just like physical systems; reality and desires have to meet somewhere. The fragility into which the economy has been pushed is reflected in ever more frequent market crashes and the recessions that follow. Market crashes, even minor ones, relax the internal tensions built up by unrealistic desires, expectations, excessive optimism and false promises of profits that do not always materialize. A boiling liquid, with bubbles that form and inevitably burst. The point that many miss, and this applies to Nature in general, is to participate, not to control. To understand, not to force. Fitness, not specialization, is the best defence against uncertainty and complexity. Is this so difficult to grasp?
The same applies to engineering systems, which have nowadays evolved to impressive levels of complication, sophistication and performance. At the same time, however, they can fail with surprising ease, frequently due to very simple and trivial causes. A very small change somewhere can lead to catastrophic collapse. This is the basic characteristic of non-linear systems. A fact of life. Our models must be able to reflect this kind of behaviour if they are to serve any purpose. How can a simplistic polynomial response surface reflect anything beyond what was pre-packaged into it? Millions of finite elements end up squeezed into a hollow polynomial world, which readily offers the comfort of placebo-generating optimality. Optimality on paper. In physics, if you come up with a theory or a claim, you must design a repeatable experiment that can be performed elsewhere to sustain it. Do we do this in engineering? Does anybody prove the optimality of their design via experimentation?
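The response-surface point can be sketched in a few lines (the response function, sample points and the location of the "collapse" are all hypothetical, chosen only for illustration): a quadratic surrogate fitted through a few well-behaved samples is, by construction, blind to a sudden nonlinear drop that lies beyond them.

```python
import math

# A hypothetical nonlinear response with a sudden, buckling-like collapse
# near x = 0.8, modelled as a steep sigmoid step.
def response(x):
    return 1.0 - 0.2 * x - 0.8 / (1.0 + math.exp(-200.0 * (x - 0.8)))

# Build a quadratic "response surface" through three samples that all
# happen to lie before the collapse (Lagrange interpolation).
xs = [0.0, 0.3, 0.6]
ys = [response(x) for x in xs]

def surrogate(x):
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Inside the sampled region the surrogate looks excellent...
assert abs(surrogate(0.4) - response(0.4)) < 0.01
# ...but it is completely blind to the collapse just beyond it.
assert abs(surrogate(0.9) - response(0.9)) > 0.5
```

The polynomial can only report back what was pre-packaged into it: smooth behaviour between the samples, and nothing about the cliff next door.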
A natural means of sidestepping optimality and designing robust systems directly is via Monte Carlo techniques. However, these methods are incredibly complex, highly non-intuitive, terribly expensive, difficult to understand and certainly politically incorrect! The days of the Inquisition are not over yet! Oh, and one last word of caution: if you think like computers, you will be replaced by computers.
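Despite the irony above, the core of the Monte Carlo approach to robust design fits in a few lines. A minimal sketch, with an entirely hypothetical performance landscape: instead of maximizing nominal performance f(x), maximize the average of f over random scatter in x, representing environment changes and manufacturing tolerances.

```python
import random

# Hypothetical landscape: a narrow peak at x = 0 (nominal value 1.0) and a
# broad, slightly lower shoulder near x = 2 (nominal value 0.85).
def f(x):
    return max(1.0 / (1.0 + 50.0 * x**2),
               0.85 / (1.0 + 0.2 * (x - 2.0)**2))

def robust_score(x, sigma=0.25, n=5000, seed=7):
    # Monte Carlo estimate of expected performance under random scatter.
    rng = random.Random(seed)
    return sum(f(x + rng.gauss(0.0, sigma)) for _ in range(n)) / n

candidates = [i / 10.0 for i in range(0, 31)]     # coarse design sweep

nominal_best = max(candidates, key=f)             # picks the fragile peak
robust_best = max(candidates, key=robust_score)   # picks the broad shoulder

assert nominal_best == 0.0
assert robust_best > 1.0
```

The deterministic criterion chooses the sharp peak; the Monte Carlo criterion deliberately settles for the "slightly lower and more realistic altitude" of the broad shoulder, which is the whole point of robust design.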