Introduction
When planning this book, in which I want to refute the idea that there can be processes in machines comparable to mental processes in humans, i.e. that machines can be human-like, I first thought to take typical examples of computer functions allegedly comparable to human mental functions, compare the two, and then decide whether it makes sense to regard computer functions as really human-like. But I soon realized that my task would be comparable to that of Sisyphus. With the latest computer accomplishment popping up on a monthly if not weekly basis, I would have to start pushing the rock uphill anew every month or week, with no end in sight. The book would have to be rewritten at least once a year.

Machines with human-like mental functions have now been predicted for many decades (since the advent of electronic computers), and I am afraid this will go on for many decades more unless the scientific community realizes that there are qualitative differences between human mental functions and functions running on computers, differences that make quantitative comparisons (as with, for example, intelligence) meaningless. So this book will be about the a priori grounds on which computers cannot be human-like.

When, long ago, I read Weizenbaum’s comment that to compare the socialization of a machine with that of a human is a sign of madness, I immediately felt that this was the ultimate comment possible and did not expect Weizenbaum to explain why (which he did not bother to do anyway). But since the insight into this madness has not befallen the scientific community nearly half a century later, that “why” must finally be delivered. In the AI community, a comparison between the socialization of a human and that of a computer is not seen as mad.
This is so not because obvious similarities could be shown to exist between the two (there aren’t any; there is just a faint analogy in that both are somehow affected by their environment) but simply because this way of speaking (and thinking) has become quite