
Christoph Burkhardt

Don't be a Robot

Seven Survival Strategies in the Age of Artificial Intelligence




Midas Management Verlag St. Gallen • Zürich



Introduction

HOW ROBOTS BECAME HUMAN
It's Evolution, Stupid
Your Next Big Idea
You Can't Stop the Robots
How They Think

HOW HUMANS BECAME ROBOTS
We are Obsessed with Learning
We are Obsessed with Tools
How We Think
In Love with Predictability
Standardized and Normalized

WHAT ROBOTS WILL DO NEXT
Inevitable Technologies
Cognification
Human Interfaces

WHAT HUMANS WILL DO NEXT
Automation
Connection
Four Directions
How Do You Know You're a Robot?

Don't be a Robot
Forget Occupation: It's About Tasks
Forget Creativity: It's About Change

IDENTITY
PURPOSE
CURIOSITY
AMBIGUITY
ATTENTION
CONNECTION
TRUST


Introduction

May 2015. I have just invested in a smart scale to go with my fitness tracker. The tracker gives me my heart rate and steps as well as an indicator of how well I slept; the scale gives me my weight and body fat percentage. All data points are automatically synchronized and show up in the app I use on my phone to analyze my status quo. After several months this data set shows some interesting correlations between workout patterns and sleep patterns, and I start learning about what is good for me based on my own data. I love it; it is amazing, and it works.

June 2017. My internet finally gets upgraded to the fastest speed available in San Francisco. I receive a new router and everything works, except for my smart scale, which will not even tell me my weight anymore. Not only did it stop being smart; it stopped being a scale. After searching online I find other people with similar issues and learn that the recommendation is to call the hotline. I hate hotlines. But I want my scale to work. So I call and, as expected, the call goes somewhere far outside the United States. I speak with Jenny, and I highly doubt that this is her real name. Jenny is following a script; I can hear her reading from her screen, asking me questions to determine the cause of the problem. A lot of the questions sound like she expects me to be the problem.

"Have you checked the batteries of the scale?" Jenny asks. "They might be empty." Slightly annoyed, I reply that the app her company developed shows that the batteries are full and that I exchanged them just a few days earlier. "Okay, batteries are fine…" Jenny continues, without any emotion, and I can hear her click her mouse before she moves on to ask whether the scale has been underwater or has fallen from some height. Not only am I rolling my eyes at how ridiculous this question seems, but I'm secretly wondering how Jenny's script came together.
Why would they ask this if it had not happened with some other customer before? Now I'm rolling my eyes at the weirdness of humans in general.

After thirty minutes of more or less diagnostic questioning, Jenny determined that she would send me a new scale. To keep it short, the new scale did not respond the way the old one had. This time, though, I knew that calling the hotline would probably not solve the issue. After Googling a little more outside the official FAQs of the developer, I found the source of the problem. The company had stopped developing the scale, and it only worked with an older wifi standard that my new router did not support. I fixed the problem by buying an old router online for next to nothing and installing a second wifi network just for my scale, and it worked. We might laugh about how ridiculous it is to set up an outdated network to make an outdated piece of technology work, but this is literally what so many of my clients and the corporations they work for face all the time when dealing with new technologies.

I want to talk about a different takeaway from this story, though. I want to talk about Jenny. There is a big problem with Jenny: Jenny is a robot. And yet, she is a human being. Jenny followed a script like a robot would, she showed no emotions like a robot would, and she did not connect with me outside the problem she tried to solve, just like a robot would. But Jenny is no robot; she is a human being. And I know she is a human being. Yet she acts like a robot.

The reason behind this book, and, to me, the most fascinating paradox of our times, has to do with Jenny and her many colleagues: humans who act like robots. How did we end up in a world in which humans behave like robots while robots become more and more like humans? The title of this book does not hide my message to Jenny and everybody else working, acting, behaving or thinking like a robot.
To survive in the age of artificial intelligence, we need to focus less on being like a robot. But that alone is not enough: we need to understand how to be more human, and for that we need to know what it means to be human. That is what this book is about.




It’s Evolution, Stupid

Computers make excellent and efficient servants, but I have no wish to serve under them. ~ Mr. Spock

Something happened, something big. Over thousands of years we became the humans we are today. Homo sapiens: to our current knowledge, the most intelligent species on this planet. But are we the most intelligent species, period?

We are standing at the brink of a massive paradigm shift. A shift so fundamental, so far-reaching and transformative that we cannot even begin to understand what is going to happen to us and our intelligence. We have already developed artificial intelligence; smart robots with surprisingly human traits are running through many homes and even more factories. Industrialized manufacturing and the use of machines to replace physical labor used to be a breakthrough of historic proportions; now that breakthrough fades into just another stepping stone in human development. It fades because of an unfolding advent of machine intelligence that will transform far more than how we work in factories and businesses around the world. For the first time, we have created a tool that might surpass our own intelligence. What some researchers refer to as the singularity might happen in our lifetime. To some, this development is scary; to others, it is fascinating. And most humans do not yet realize the extent of the consequences of this dramatic shift.

If you think the last fifty years of technological development were revolutionary, wait for the next fifty years to turn your world upside down. And if you thought the speed of change we see today was already breathtaking, be prepared: we have not seen anything yet. We are facing the most transformative change in about 10,000 years. Industrialization and globalization, the connectedness of minds and machines in the worldwide web, and the use of data as a new currency are mere precursors of what is going to happen next. We will no longer be the only species using reason, experience and intelligence to make sense of our world. Maybe we should rethink calling it our world anyway.

I am asking you, for now, to think big. Let's get the bigger picture, a good understanding of the driving forces behind the curtain, before we look at what is actually happening around this data-powered paradigm shift in intelligence.

Here is an interesting fact that we are eager to forget, or at least ignore most of the time: humans have not always been humans. If we go back just a few thousand years, to the point when we started occupying most of the land mass on this planet, we were a very different species. (Actually, there was more than one human species.) We looked different, we relied on our hunting and gathering skills, we formed small groups to ensure protection and survival, and we communicated very differently than we do today. The way we used to live has changed so drastically that we have a hard time imagining how life might have been at the time. The way we connect with each other, and the incredible number of connections we have learned to handle, has turned our social lives upside down. And ultimately we have changed the way we think, over and over again. Being busy thinking about what to eat, and how to protect ourselves from adverse weather and hostile animals while looking for food, left us little time to go on vacation or travel at all. Our minds were busy helping us survive. We are no longer forced to spend much time thinking about food. It is exactly this last change, the way we think, that I am most interested in.
All that we do follows what we think, so it seems worth looking at how we actually think today to understand how we came to invent and develop robots with artificial intelligence that might ultimately surpass our own minds' capabilities.

How often do you think about the fact that we were all fish at some point? Well, not us directly, but our ancestors. Isn't this a strange thought? Life evolved out of the water; before mammals could occupy land, their ancestors were living under water. It is very hard to imagine that fish at some point turned into birds and humans, isn't it? What would people have thought about this crazy idea to leave the ocean and occupy land? Maybe it is good that there was nobody to comment on this development at the time. It happened without a plan, without a goal, and without the idea of an end result.

The reason it is so strange to think about fish becoming birds is that we treat fish and birds as very different categories, each with a unique set of features. Humans love categories: they simplify our lives, they organize our environment, and they make abstract reasoning possible. Categorical thinking becomes very apparent when some of the features in a category don't really match. For example, we think of a penguin as a bird even though it cannot fly, while we would not think of penguins as fish, even though they certainly spend a lot of time underwater.

I have always found categories a particularly interesting field to explore. They organize our world, and they change very slowly. They are such an essential cognitive mechanism that if we want to understand how robots think, we need to explore which of our categories are stable and which are changing. And when categories change, big things happen.

Now, humans think in categories because they are very useful. It is simply adaptive to think in categories, so we learned to do it everywhere, with everything. If we did not have categories, we would have to identify every bird we see as something entirely new, and we would not be able to call them "birds" as a group. We would also not know what a human is or what a robot is. If we put a robot next to a bird, and another robot next to another robot, we would not know how to group the robots together; we would not know that they are different from the bird. Categories help us think in abstract terms rather than in concrete examples of a category.
So yes, categories are absolutely necessary and an inevitable part of human thinking. Despite some misguided attempts to fight stereotypes (pretty much another word for categories) by avoiding categorical thinking altogether, stereotypical categories exist and persist because we cannot switch off the mechanism of abstraction, even when it leads to false conclusions. These mechanisms are part of who we are. They constitute how we operate. We cannot change them by thinking differently. We can only change the categories, not the categorical thinking behind them.

If we want to understand what it means to be human, we need to understand where the fundamental differences between the two categories of humans and machines lie. How are they different? How do we know a robot is a robot? What exactly is a machine? And how are we so sure we are not machines? To investigate these questions, we need to understand how we come to define ourselves as a category. What is it really that makes a human being different from a machine? Is there really that much of a difference? Or is this just another mind trick we use to protect our existing categories? Let's see.

Let's examine how categories are formed. When we compare a fish to a human, a human to a bird, and then a bird to a fish, we will find very different features of each category that we use to explain the difference. Fish live under water; humans live on land. Birds fly; humans walk. Birds eat fish; humans eat fish. Humans eat birds. Whatever we see as crucial for a member of the category "bird" or "fish", it does not threaten the category we know as "human." Even though the fish, the bird, and the human are part of the same evolutionary chain, they are distinct enough from each other for us to group them in very different categories.

Here is the analogy to robots: we can no longer easily differentiate them from humans based on some of the features we have used for thousands of years. They walk like us, they talk like us, and they look like us. When we see a bird next to a fish, we can pinpoint all the obvious differences. When we compare a human and a robot, we see quite a number of similarities.
For many people, these similarities often outweigh the differences, which naturally makes the category of "robots" a threatening, destructive force to the category of "humans." Since we do not (yet) accept robots as equals, we are under pressure to define the differences between us in the most obvious way possible. As we struggle to do so, robots become more and more human. Nearly every month we see new skills, from understanding and using language to communicate, to deep-learning systems that master games and plan their behavior. Robots come closer and closer to being human at an incredibly fast pace.

Imagine the most human-like robot you have ever seen, maybe one of the almost perfect robotic replicas that try to imitate a particular human in every move. Now, combine this robot with the smartest chatbot we have today to simulate natural language in human conversations. Finally, add language production that does not sound like a machine and, boom, we are very close to passing the Turing test. In this test, you sit across from an artificial being (our robot) and try to tell whether you are talking to a human or a machine. If you cannot tell the difference, the robot has passed the Turing test. If it is indistinguishable from a human, would we call it human? Would we grant it the same rights? Probably not. But on what basis?

To understand this shift in detail, we have to ask why robots became human at all. Why did we build them as copies of ourselves? Once we understand that, we will be able to see what is going to happen next in our evolution. So how did we get to where we are? We are born into the status quo of a world that bombards us with questions we cannot answer.

The Evolution Paradox

One of the most dangerous, and yet most powerful, ideas the human mind has created in its 70,000-year history on this planet is the idea of magical interference from outside forces. Every time we cannot explain the sources and reasons behind a phenomenon around us, we apply magical thinking. We do this for no other reason than mere desperation. We believe because our abilities to investigate are limited. Sometimes we simply cannot know the answer to a question. For example, we do not know why we are here. Since we cannot answer this question (yet), we make use of belief systems to justify why we are here.

The human mind has evolved to answer all questions. Why this is the best thing that could ever happen to us, we will see later. For now, whatever belief system you apply to answer a question, we have to be very aware that as humans we are not made to leave questions unanswered. We cannot accept that there are questions we have no way of answering (right now). When we face the limits of what we know, rather than accepting those limits, we make up a story that serves as a temporary fix to answer the question. These temporary fixes take the shape of religious beliefs, supernatural explanations, and paranormal activities, but also more mundane forms such as urban myths, beliefs about nutrition and exercise, or the idea that Oprah would make a great president of the United States. Here is the problem with this type of thinking:


When, over time, beliefs are reinforced by the fact that the questions behind them still cannot be answered (such as the question of why we are here), the made-up stories become convictions. At that point it becomes virtually impossible to break them. Even the most convincing evidence is then not evidence enough to give up a belief system. What starts as a lack of knowledge and an inability to answer a question turns into a pseudo-answer that satisfies our need to know just enough to end all investigation into the truth, or into different versions of what an answer might look like. We stop exploring and start justifying. We enter the post-factual world.





The great paradox of evolution lies in the fact that it is evolution which got us to reject the theory of evolution. Hardly any other theory has met more resistance than this powerful explanation of how we became who we are today. Because our belief systems have hardened into convictions over time, many people still struggle to accept evolution as a fact despite overwhelming evidence. Indeed, the evidence for evolution is so extensive that dismissing it as "just a theory" misses the point. Intelligent design, on the other hand, has yet to deliver any scientifically sound evidence.

How can this type of thinking be a good thing at all? Why would it be beneficial for Homo sapiens to have this kind of belief? Why can we not simply move on and accept facts as facts? The answer is quite tricky, but there is one that does not require you to believe. So hold on; we are getting there.

If you take the magical thinking we use to explain the unexplained and combine it with the simplifying logic of categorical thinking, you get a powerful mix that is responsible for most of what makes us human. That is why it is so crucial to understand why we do what we do, in our thoughts as well as in our actions. Now more than ever, we need to acknowledge the workings of the human mind as they are and not be blinded by wishful thinking about being more rational than we actually are. Any false and oversimplified explanation of what makes us human will only drag us further down into living with machines that outperform us on every level. Now is the time to rethink what it means to be human: to rethink the skills that make us stronger, deepen the capabilities that make us different, and understand our minds so we can extend their powers rather than limit ourselves in irrational fights against technological changes that can no longer be stopped or avoided. It is beyond question that we will live in a world run in large part by artificial intelligence. The question is not how we avoid such a world; the question is how we want to live in this world, how we want to be human in this world.


Your Next Big Idea

The most unfortunate lack in current computer programming is that there is nothing available to immediately replace the starship’s surgeon. ~ Mr. Spock

Take these two fundamental human processes, categorical and magical thinking. An evolutionary process got us to develop this way of dealing with the world because it was adaptive to do so; we then used these cognitive tools to create new ideas, come up with innovations, and ultimately make progress on a global scale. In other words, we need to look at evolution as the process that made us human. Evolution created the way we think, and the way we think created robots that can think. And now evolution will be responsible for the next leap in intelligence: the evolution of non-human intelligence. So we need to look at how evolution does this in order to know where this is going. How will evolution shape artificial intelligence? How will evolution force us to adapt in the age of smart machines?

The Evolution of our Ideas

Evolution is not only about species and organic systems developing biologically from one generation to the next; it is a process that we can see in action everywhere. From individuals and companies to societies, from pop music to business models, from technologies to preferences, everything is evolving.

Many people do not realize that we are still evolving biologically. We are not done; today's human being is not the final model, free of any further need to adapt. Yet our biological evolution is too slow for us to observe. We simply do not see the tiny changes that happen from one generation to the next. What we do see, though, is our social evolution: how our organizations and institutions change, the way our political systems operate, and the way we see the roles of state, government, and citizens. We see how the music of the sixties is different from what we listen to today, but at the same time we can hear the connection. Music obviously relies on existing material to create new pieces.

Our ideas evolve. Within the corporate world, this happens in the shape of new products and services, but also in terms of economic shifts, new business models, new platforms, and cultural movements. The link between the evolution of ideas and the evolution of humans lies in the social realm. We need to realize that outside the evolution of ideas, which so many people contribute to, there is no other process of creation. Everything new, everything innovative, every paradigm shift in our cultural lives is based on the evolution of ideas. The individual mind in this game is at once the driver of ideas and hardly necessary to make the process happen. This surprises many innovation strategists, because it means that the individual with the greatest ideas is not really necessary, and certainly not sufficient, for breakthrough changes. Ideas exist outside human minds. It does not feel this way, but we do not really have ideas; rather, we work with ideas. They are not ours. This point is crucial to keep in mind if we want to understand how robots became human. Ideas are independent of their human hosts. We do not create, own, or store ideas. We share them. This is a very important difference that, for many organizations, makes all the difference between being innovative and being stuck.

Look at your parent generation and your generation: you are exemplars of the same species.
While you will certainly find differences between the two generations, you will probably not assume that there was a categorical shift between them, making one human and the other something other than human. If you go back in time, far back, even further, the first mammals did not have much in common with us humans today. Yet if you go back generation by generation, there will never be a point at which you can say that here was a jump big enough to justify calling one generation one species and the next a different one. The clear-cut difference between the categories we apply is only possible because we do not zoom in and compare two adjacent generations; instead we compare generations that are hundreds of steps apart. We see the difference between early mammals and us very clearly, but only because we ignore all the connecting mammals in the chain.

And here is the point: the same is true for all our ideas. Between an idea that a human shares and the next generation of this idea, there will be hardly any difference. You know how this feels when it happens. Someone shares an idea, and if you listen, your mind immediately starts to come up with variations and mutations of this idea. Your mind creates the next generation. This process is of course much faster than biological mutation, but it is not fundamentally different from it. So the idea someone shared and yours are only different to a very small degree. Yet the first generation of this idea is supposed to be owned by the first human and the next generation by you? Before your mind starts telling you that this can't be true, since your idea is obviously different from the first, just imagine this process rippling through thousands of evolutionary generations within a few hours, with a group of over a hundred people adding their mutations. I go through this process with my clients regularly, and within just a few (quite exhausting) hours, the hundreds of useful innovative ideas that make it to the actual project kick-off phase are no longer similar to the original ideas we started with. Yet nobody will be able to tell who came up with the resulting ideas.
And that is because the ideas themselves went through thousands of evolutionary steps to get where they are. They did that by utilizing all the brains and minds in the room. We do not have great ideas; great ideas have us.
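This generational picture can be made concrete with a toy simulation. The sketch below is purely illustrative (the seed phrase, the single-character mutation rule, and the generation count are invented for the example): each generation differs from its parent by at most one character, yet after hundreds of generations the "idea" no longer resembles the original, and no single step can be singled out as the moment of invention.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

LETTERS = "abcdefghijklmnopqrstuvwxyz "

def mutate(idea: str) -> str:
    """Change one character: a tiny variation, like a mind riffing on what it just heard."""
    i = random.randrange(len(idea))
    return idea[:i] + random.choice(LETTERS) + idea[i + 1:]

def generations(seed_idea: str, steps: int) -> list:
    """Let the idea ripple through many minds, one small mutation per generation."""
    history = [seed_idea]
    for _ in range(steps):
        history.append(mutate(history[-1]))
    return history

history = generations("ship fresh food in cooled boxes", 500)

# Adjacent generations are nearly identical (at most one character apart)...
diff_one_step = sum(a != b for a, b in zip(history[0], history[1]))
# ...but the final generation shares little with the original idea.
diff_total = sum(a != b for a, b in zip(history[0], history[-1]))
```

No single call to `mutate` is a creative leap, which is the point of the passage: the distance only becomes visible when you compare generations that are hundreds of steps apart.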




I am fully aware that this is quite a strange view of how we innovate, but it is crucial if we want to understand how we are different from the robots we create, and ultimately to recognize that we can be so much more if we become better hosts for ideas and let go of being robots. The real task is not to become a better creator of ideas. Our job will be to become a better platform for ideas. Organizations need to make sure they are platforms where the evolution of ideas can take place fast and with many minds in the mix. If we accept the separation of ideas from the humans who have them, we can move on to look at how robots became human.

Nobody Invents Anything New

Whenever we adopt an idea or feel we have invented something new, we walk through a door. Every evolutionary step, every generation of an idea, provides these doors. When we actually walk through a door, we have accepted the underlying idea of that door. After going through it, though, we see a new set of doors, maybe three or four to open next, doors that we had not seen before and did not know were options. Now that we have walked through the first door, we see the next set of doors and pick one to walk through. The concept of "the adjacent possible"1 describes the map of doors the current reality can lead to if we walk through the first door. Steven Johnson notes that the adjacent possible "captures both the limits and the creative potential of change and innovation".2 The limits are set by what we can see from the status quo. We simply do not know what the next door holds for us; we can only know by opening it. The creative potential, on the other hand, is set by our capability to open the doors and keep exploring what is behind the next one. Indeed, we are afraid at times, since a closed door could hide danger. Yet our drive for progress leads us to open another door at every turn. And the further we walk through door after door, the further we move away from where we started.

Here is how some of the robots in our life have become what they are today, and what they might become after we open the next door. Of course, there is a long list of evolutionary steps leading up to the personal computer, but let's start from there. Do you remember the first time you opened Excel on your computer? How would you describe the experience of using the software from today's perspective? Did it feel natural? Was it easy to use? I did not think so when I used it first. There was a lot to learn.




● Most technological systems (for the sake of categorical simplicity, we'll call them all "robots") have for a long time forced us humans to adjust to whatever the logic of the technology required. We literally had to learn to use the software, meaning we had to form new neural pathways in our brains to adapt to the software's requirements.

● After the first generation of interfaces became more visually appealing, they started mimicking inanimate objects in our environment. The desktop looked like a desk, files looked like paper files, and folders would hold files, just like in the real world. The process of duplicating a file would even be called "copying" it.

● Some time later, interfaces started animating and anthropomorphizing inanimate objects. Now paperclips started talking (though Microsoft's Clippy was hardly of any help when working on a document), and they started bouncing around. When you threw something in the trash folder, an invisible hand would crumple the paper and throw it in the basket (which resembled the basket next to your real-world desk).

● Then we changed the interaction itself, adding voice commands and touch screens to the mouse and keyboard as input sources. As we went mobile with our personal computers (sort of robots, too), software had to adjust to smaller screens and different functionality. For the first time, interfaces started adjusting to humans, rather than humans having to adjust to the requirements of software.



D O N ’ T B E A R O B OT



Next, our robots learned to talk to us and to take commands in natural language, which is transforming right now how we interact with our computers and devices. Chatbots learned to mimic us and to hold conversations the way a human being naturally would. In the near future, robots will connect and exchange information to serve us without us doing anything. Think of a robot sitting in every conference room, displaying information that it thinks might be relevant to the current conversation in the room. When the conversation is about a sales report, the report will be right there, ready for you to look at. If the conversation is about a marketing campaign including a YouTube clip your competitor made, the clip will be ready to play without your doing anything.

The doors we walk through open up opportunities and challenges to us. We cannot know for sure what is behind the next door. We also cannot know how many doors there are. But without going through them, we will not be part of creating our future. It will simply happen to us and that is not a good idea. Robots became human because we as humans like to interact with humans. We designed them step by step to serve our needs. The ideas behind them went through hundreds of evolutionary generations before they became what they are today.

How to Become a Platform for Ideas

Whether you want to turn yourself, your team, or an organization into an effective hub for innovation and progress, you will need to invest in the same strategies. And many of them don’t exactly come naturally to us:

1. Stop wondering whether an idea is good or not so good. Evaluating ideas while you are creating an evolution around them is a waste of time. Your impulsive judgment will probably perform some sort of evaluation no matter what. You might not actually be able to stop this judgmental inner voice, but you can certainly choose not to listen to it.

2. Take whatever you like and ignore the rest. Whatever the reason is for your liking an idea or concept someone came up with, take it, reuse it, rethink it, work with it, but resist the temptation to immediately discuss the ideas you did not like. In a room full of people who do not really know why they like or dislike what they hear, a discussion about the “why” is a waste of energy.

3. Make other ideas your own and let the same happen to your ideas. Birds did not invent wings; evolution did. You do not invent ideas; evolution does. Let go of ownership. Play your part, contribute, improve, adjust, change, turn upside down, but do not fight for your ideas just because you think they are yours. Fight for the ones that are good, particularly when they are not your own.

4. Enjoy the process rather than the outcome. Of course it is satisfying to see an idea become a success, but you will last much longer and become a much better platform if you care about the process and not the outcome. Work on the ideas around you because working on them speeds up the evolution, not because you are looking for that one big hit. In many cases, companies need to change incentives so that they do not rely heavily on the outcome but rather reward the process. Yes, when it comes to progress, having tried is more important than being successful. Unsuccessful ideas inspire successful ones; that is why they need to get out there.

5. Connect people and their ideas. Be strategic about whom you want to meet to create new ideas. The more people who realize that you are a connector, the more people will want to use your platform to get connected.




You Can’t Stop the Robots

It is curious how often you humans manage to obtain that which you don’t want. ~Mr. Spock

Do you remember when the first phones without a cord made it to the mass market, and some people celebrated their new freedom while others were very skeptical about potential health risks? I grew up in a household of the latter kind. I remember very well when my family was supposed to get our first cordless phone. My mom strongly opposed the idea of having invisible waves go through the house that could potentially hurt our bodies and brains. I don’t think the opposition was driven by a concrete fear of something we could grasp, but rather by a technology that we, frankly, did not understand. But this is how humans operate. We are afraid, attempt to make up our minds, resist for a while, and then give up under the pressure of convenience. So, as we have every time since, we (despite being late to the party) jumped on the bandwagon and got our cordless phones, and so far nobody has gotten sick. A couple of years later we had the same discussion when the first Wi-Fi stations came to homes and replaced those long and annoying LAN cables. At the time, my parents were building a house in Germany. Concerned about the effects of even more invisible information pathways through the air and the obvious yet invisible exposure we all would have to confront, they actually went through the enormous effort of bringing ethernet cables to every room in the house, just to get Wi-Fi in the house a few years later. Again, a potential risk factor first stopped progress before surrendering to convenience.

I find it fascinating that resistance to technology for legitimate reasons is, in most cases, overcome not by good reasoning and convincing arguments on the technology side, but by the adopters themselves. The more users make use of a technology, the higher the pressure on the rest to go along. Equally important, the more convenience a technology delivers compared to the status quo, the faster the adoption. In other words, it is not important whether a technology really proves that it is not harmful. What matters is how many people a technology can convince to adopt it. The critical mass of adopters ultimately determines whether we adopt too. A lesson I learned from my family’s resistance, as well as from many of my clients trying to transform their businesses to meet digital standards, is about the way we use our energy to make technology less harmful and more useful. Because here is the fact: by resisting technological change, none of us did anything to stop its implementation, nor did we make it potentially less harmful, nor did we implement it in a way that added more value than it had originally offered. And that is a real issue when it comes to intelligent machines and smart robots. We resist them right now in many areas of life. Many people do not want companies like Amazon or Apple listening to their conversations 24/7. Yet it is likely that we will not stop technologies that are already doing this. We will not stop technology with artificial intelligence from entering our homes and offices, our schools, and government institutions. We will—and here is where things get real—we will not make these technologies less harmful or intrusive by resisting them, and ultimately we will not make them better, apply them to more important problems, or build better cases for their use, because we are busy fighting them. With my corporate clients this leads to real problems, problems that ultimately put the survival of companies at risk.
Being late to the game is not a problem; not contributing to better use cases is. In a world of accelerating innovation driven by technologies, there is simply no time to be wasted on fighting change that will inevitably hit




every one of us. If you spend your energy resisting what is inevitable, rather than using it to contribute to the progress of all by developing better use cases for new technologies, you will likely be kicked out of the game, and rightly so: in terms of business evolution, your business is no longer adaptive enough to justify its existence. Everybody deserves the chance to contribute to the changes that technologies cause. Decision-makers are in charge of making sure that resistance to those changes does not cloud their judgment about useful ways to implement technology to create value. Today, more than at any point in the past 200 years, we have the chance to create value for humans on our planet at a scale unprecedented in our history. Sleepy decision-makers are already missing many opportunities to do so. We need to start thinking bigger. Our survival depends on it.

Surrendering to Convenience

Technology offers incredible opportunities to improve lives and, in fact, to drive progress for all. But the way we handle technology misses most of the opportunities we could seize. We have three fundamentally different ways to deal with new technologies:

1. We resist them. In most cases, people ignore new technologies as long as possible if they have doubts or fears surrounding them (like my mom, for example). This resistance escalates to open objection and attempts to veto implementation. In the workplace we sometimes even see acts of sabotage. Usually, resistance to technologies is followed by surrendering to convenience. If you resisted email, then after too many people started using it, it simply became inconvenient not to use it. Resisting electricity, the internet, cars, or platform ideas such as Uber, Airbnb, or Alibaba will not change how they operate, but it will make your resistance inconvenient for you. It’s part of their strategy, and it usually works.


2. We reduce doubts. By looking for ways to make technology less harmful in its potential risk to our health or, even more so, to our business processes (think cybersecurity), we open a back door to accepting new technologies despite their potential risks. By addressing the resulting risk rather than the technology directly, we open another door to unanticipated potential. With many of my clients, the introduction and implementation of a new technology in one business unit sparked the interest of other business units to try it in a different context. The starting point is hardly ever the final way in which the technology is applied to solve problems.

3. We build a case. This is a less natural and sometimes counterintuitive thing to do. When a new technology arrives, we do not need to implement it the way it comes. The real challenge is to identify what is, in your own opinion, the best use case. When Amazon’s Echo Dot, with Alexa as its voice, flooded US households, I could see the resistance among my European clients. Understandably, due to a history of government overreach and spying, their feelings around privacy and security were different from those of Americans. Yet there is a place for Alexa in Europe, as we found out in several workshops with one of Europe’s largest corporate facility management companies. A zero-touch interface with voice control and natural language capabilities can add enormous value to the management of buildings and the efficiency of operations (a point my German clients, in particular, like a lot). What did not work for them was the case that Amazon had built around Alexa. The benefit of having her at home to control the music and the lights did not outweigh the potential risk of having her listen 24/7 to everything being said in their private homes. As you can probably already guess, resistance is the natural human response to new technologies, and it always has been.
Imagine a farmer trying to convince a hunter-gatherer to build a house and farm at home rather than hunting for food all day long. The




doubts about this massive shift in behavior must have been incredibly strong around 12,000 years ago. But even more recently, when electricity came to manufacturing plants and factories, resistance was massive and doubts were widespread. It is electricity, after all, that we are talking about; it potentially kills people. Why would you want that at home? Reducing doubts is the job of innovators and, honestly, most of them do a lousy job at it. It is part of being at the forefront of technology to get the rest of the pack on board before you move on. You have a responsibility. We are seeing a massive economic gap between people who understand and have access to technology and those who do not. And this social and economic gap is widening. It is the responsibility of those leading these changes to take the rest with them on the journey. It is not the responsibility of the rest to catch up. That being said, there is no excuse for anybody with access not to look into and understand all the technologies out there. Finally, building good cases by asking better questions is what will really drive progress. Once you understand what a technology can do, you will find ways to utilize its power in ways that make lives better, benefit many people economically, or solve problems many people actually have. The reason why people will always introduce and use technologies, whether they have positive or negative consequences, is twofold:

1. It only takes a few people to bring a new technology to life. Sometimes even a single developer can introduce something extraordinary that has dramatic consequences for our lives. In most cases, though, it is a small group of people who create a new technology. Think of Blockchain as a technology. It is a system, an idea with many parts and features. It was introduced in a scientific paper published by a mysteriously secretive author (or several).
Bitcoin, the most prominent case for Blockchain, helped spread the idea because the technology could be applied to a problem that people cared about.


2. The second reason for the inevitability of new technologies is that, as in the case of Bitcoin, the people introducing a technology are hardly ever the same people deciding on its use cases. In other words, when a new technology is born, its creators are, in most cases, not able to predict where or how someone is going to use it. So the number of people it takes to introduce a technology is very small, while it would take many people to really stop a technology through regulation. For this reason we have AI and all the other technologies in the world, no matter how much we resist. We can avoid the products they come packaged in, as in the case of Amazon’s Alexa. You have the choice not to buy it, or to pull the plug. But you cannot stop the technologies behind Alexa. They are here now. And they are here to stay. The real question is how you want to make use of the technology, rather than wasting your energy on fighting it.

How to Build Better Cases

To really unleash the power of AI and other new technologies, we need to build cases that combine the benefits of the technology with a purpose and function we agree with. Here is how to start:

1. Define your purpose statement. What is your company, your product, or your service ultimately here for? What is the ultimate goal? For a pharma giant, this might be to eliminate a disease or to make people healthier with better nutrition. For a car manufacturer, this might be to bring people from A to B as safely as possible. It might also be to make people enjoy every ride. For you personally, it might be to contribute to the knowledge base of humankind or to support a certain vision for the world that you find appealing.

2. Redefine the question you are trying to answer. A good way to start is by asking “How are we (you and the people around you) going to…?” You can collect as many questions as you want. The more diverse your set of questions, the better the result will be for your answers.

3. Allow for an evolution in your answers. Never answer a question directly with the first possible solution and stop there. Answer every one of your questions with at least five possible scenarios. Again, the more diverse these answers are, the better your result will serve your purpose.

4. Now look at available technologies that serve your purpose by helping to answer your questions. Only select technologies that really help to answer your question. Don’t include technologies just because you find them interesting. If new technologies don’t solve a real problem, then you don’t have a case.

5. Define whose problem you are solving with the answer to your question. Who is your answer most valuable to? Who needs this answer and, ultimately, the case you are building? Who is it most relevant for? What objections might they have? And why would they love your solution?


How They Think

Logic is the beginning of wisdom, not the end. ~Mr. Spock

Artificial Intelligence has many names, and there is, from time to time, quite a bit of confusion as to what we are actually talking about, which is why this brief overview of buzzwords will attempt to create some clarity, even if it is just for the purpose of this book. What we expect every AI system to be able to do is ultimately to represent the real world in some way: to sense it, classify it, apply reason and planning, and communicate in natural language. Let’s take a look at Machine Learning as the most developed Artificial Intelligence application today. Two major approaches have to be seen separately: Supervised Learning and Unsupervised Learning. A third approach, reinforcement learning, in which a machine learns from trial, error, and reward, is very similar to how humans learn—but more about that a little later.

Supervised Learning “Supervised” means that a machine learns from examples; it learns by recognizing patterns that help to classify an object or to make a prediction about a future state or event. Classification and prediction are major intentions behind the development of smart machines. Classifying objects and events is what makes AI very valuable in diagnostics and medicine. A supervised learning process could include the symptoms of hundreds of thousands of patients and their respective diagnoses by doctors. The classifying machine uses these cases as examples to learn which symptoms are linked




to which diagnosis. Since this form of supervised machine learning is able to read unstructured information to make a judgment about what category a case belongs to, it is also being used in the field of law. Because it can read case files in natural language and understand the verdict by a judge, it can learn, over the course of reading hundreds and thousands of court rulings, how humans apply the rule of law. Another application of classification in supervised learning is fraud detection to protect against identity theft. By learning from many examples, the smart machine can identify suspicious activity and alert users to possible identity theft. A famous task to give to a supervised learning machine is the classification of images, and here we see the major differences between approaches in Artificial Intelligence. We will come back to images later when we look at how deep learning works. For now, we only need to remember that not all image recognition is real recognition. Take millions of pictures of cats and dogs and let a supervised learning machine look at all of them, while telling the machine which pictures have a cat in them, which have dogs, which have neither, and which have both. After many, many rounds of learning, the machine will be able to tell whether a picture has a cat in it or not. The way the machine figures this out is not by a human pointing at the cat and saying “cat,” like humans do with their kids. The machine only knows that some pictures are labeled “cat.” From there it starts looking at the pixel level of the image, identifying colors, contrast, edges, shapes, and so on. It is important here to stress that the machine does not understand what a cat is. It only learns to tell pictures with a cat apart from other pictures. Technically, the machine does not need to know what a cat is to recognize one.
While this seems okay when we are talking about identifying cats, it might turn out to be a very different discussion when we talk about medical diagnoses and the rule of law.
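To make the idea concrete, the core of learning from labeled examples can be sketched in a few lines of code. This is a deliberately simplified, hypothetical nearest-centroid model on toy "images" (just lists of pixel brightness values), not any particular production system; real image classifiers use far richer features and models:

```python
# A minimal sketch of supervised classification. The machine is never
# told what a cat *is* -- it only sees examples labeled "cat" or
# "not cat" and learns a statistical summary of each label.

def train(examples):
    """Average the pixels of all examples per label (a nearest-centroid model)."""
    centroids = {}
    for pixels, label in examples:
        sums, count = centroids.setdefault(label, ([0.0] * len(pixels), 0))
        centroids[label] = ([s + p for s, p in zip(sums, pixels)], count + 1)
    return {label: [s / n for s in sums] for label, (sums, n) in centroids.items()}

def classify(model, pixels):
    """Predict the label whose centroid is closest to the new image."""
    def distance(centroid):
        return sum((p - c) ** 2 for p, c in zip(pixels, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Labeled training data: bright toy images are "cat", dark ones "not cat".
training = [
    ([0.9, 0.8, 0.9], "cat"),
    ([0.8, 0.9, 0.7], "cat"),
    ([0.1, 0.2, 0.1], "not cat"),
    ([0.2, 0.1, 0.3], "not cat"),
]
model = train(training)
print(classify(model, [0.85, 0.9, 0.8]))  # a bright, unseen image -> prints "cat"
```

Notice that nothing in the code refers to cats as such: swap the labels and the same code "recognizes" anything, which is exactly the point made above.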


The other big field of supervised learning deals with predictions. Artificial Intelligence relies on statistical modeling for many of its processes, so it is not surprising that predictions are among the most convincing use cases of machine learning. In situations in which an enormous number of factors have to be considered to make a good prediction about what is going to happen next, machine learning can quickly outperform humans due to its ability to deal with unstructured or incomplete data and its never-ending patience to learn. In this way supervised learning can predict life expectancy, it can predict stock-market movements, and of course weather forecasting is also a field for machine learning. In each of these cases the benefit of machine learning is its ability to look at patterns in data sets on a very large scale. Supervised learning means learning from examples that no human has ever explained to the machine. Again, this is very different from parents teaching their kids by pointing and naming. The machine learns about patterns by itself. If you show the machine only pictures of cats playing with a tiny red ball and label every one of those pictures “cat,” the machine will not be able to know that the red ball is not part of the cat. As a consequence, it will likely misclassify a cat that is not playing with a red ball as “not a cat.” How would the machine know? Nobody has taught the machine what a cat is. All we did was show it pictures. Unsupervised learning, on the other hand, goes one step further. “Unsupervised” means that the system learns to cluster data and abstract from them. For that, the machine does not need a human to label the cat. The unsupervised learning machine will look at all the data and identify patterns by clustering them. In other words, the machine will look at pictures of cats and try to figure out what makes them similar to or different from each other.
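The clustering idea can also be sketched in code. Below is a deliberately simplified, hypothetical k-means routine on one-dimensional data (imagine each picture reduced to a single feature such as average brightness); real systems cluster across thousands of dimensions, but the logic is the same: no labels, only grouping by similarity.

```python
# A minimal sketch of unsupervised clustering (a simplified k-means).
# No labels are given: the machine only groups similar values together.

def kmeans(values, centers, rounds=10):
    """Repeatedly assign each value to its nearest center, then move the
    centers to the mean of their assigned values (empty clusters are
    dropped for simplicity)."""
    for _ in range(rounds):
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centers = [sum(vs) / len(vs) for vs in clusters.values() if vs]
    return sorted(centers)

data = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]        # two natural groups
centers = kmeans(data, centers=[0.0, 1.0])
print([round(c, 3) for c in centers])           # prints "[0.15, 0.85]"
```

The algorithm discovers that the data fall into a "dark" group and a "bright" group without ever being told that those groups exist.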
While supervised learning means training the machine in a manner similar to how we train a child, unsupervised learning would be like leaving a child who has never played with Lego bricks in his or her life in a




room with nothing but a box of Legos and the task of building a spaceship out of the bricks. You can already see how these two types of learning imply very different training methods and fields in which they succeed. Cluster analyses shine in targeting specific customers and segmenting a market; they also form the basis of recommender systems. These systems are able to work with very little input from a customer who does not know exactly what he or she is looking for, and they still make pretty good suggestions. We will see some examples of this soon. All the applications that we are looking at right now show that smart machine learning, whether supervised or unsupervised, can help humans do their jobs rather than replace humans in their jobs. Yes, we have not talked about deep learning yet, and this is where the lines get blurry. Still, artificial intelligence in its most successful applications makes humans better at what they are good at doing. To me this is a fundamentally good sign. At the same time, it shows the path into the future by underlining the importance of human-machine collaboration rather than human-machine replacement. In a fascinating study by researchers at MIT, a prediction market for the outcomes of football games was set up in which participants could bet money on certain outcomes. It’s a market because bets are bought and sold, while prices depend on the beliefs of the participants in the market. Prediction markets can be stunningly accurate at predicting events because they leverage the power of collective intelligence. Every individual in a prediction market knows something about the outcome. No individual knows enough to really predict the final outcome, but when predictions are traded, the information underlying each prediction becomes public knowledge. A prediction market is thereby better at predicting events such as the outcomes of football games than are the individuals in the market.
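The aggregation logic behind such a market can be sketched with a toy calculation. The numbers below are invented for illustration (they are not from the MIT study), and a simple average is only a crude stand-in for a real market price, but it shows the mechanism: forecasters who err in different directions cancel each other out.

```python
# A toy illustration of why aggregating predictions helps. Each
# forecaster's estimate of a game's win probability is off in a
# different direction; their average lands closer to the truth
# than any individual estimate.

truth = 0.70                      # the "true" win probability (made up)
forecasts = {
    "human A": 0.55,              # too pessimistic
    "human B": 0.90,              # too optimistic
    "machine": 0.62,              # close, but biased low
}

combined = sum(forecasts.values()) / len(forecasts)
errors = {name: abs(p - truth) for name, p in forecasts.items()}
errors["combined"] = abs(combined - truth)

# The aggregate beats every individual forecaster on this toy data.
print(min(errors, key=errors.get))  # prints "combined"
```

Real prediction markets weight beliefs through prices rather than a flat average, but the cancellation of individual biases is the same effect at work.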
But what about machines?


Smart machines are awesome at predicting based on statistical analyses and the likelihoods of certain events. So what happens when you let machines trade their predictions in a prediction market, based on what each individual machine calculates as likely? They are also pretty good at predicting collectively. But the most surprising finding of the MIT study came about when humans and machines started trading their predictions and expectations in the same market. This market of combined human-machine effort outperformed the human-only as well as the machine-only market. Since this is a very abstract example of what human-machine collaboration can look like, I want to share two cases in which the link between humans and robots creates real-world value in day-to-day settings. First, meet Fin—your 24/7 personal assistant that is half robot, half human.

Fin

Fin is a personal assistant who can also perform many business-related tasks. Assistants that use software to support their work, as well as assistants that are located somewhere far away from you but respond in real time, are not new. What is new is that Fin is technically not human. You text Fin your request, and it acts and responds like a personal assistant would, at a fraction of the cost. Here are some examples of requests people are making right now. If you want to see in real time what people are requesting, just check out fin.com/feed. “Yo Fin. I’m just curious, can I get a large limo today for a 2h party in it? There’re 12-13 of us. Are there limos that big?” Yes, you might think that this is something you could just Google yourself and book online. And that is exactly the point. You will not do tasks like these in the future. Systems like Fin will. How are they doing this, though? Think about the task. Fin needs to know what a limo is (which is not a word people use all the time, and you would also not find it easily in written text). “Finding a large limousine” is




a very ambiguous search term. You, of course, know that the next sentence refers to the actual size needed. But for Fin to understand, it takes a lot of reasoning. “There’re 12-13 of us.” Twelve or thirteen of what? Who is us? “Are there limos that big?” You understand the task is not to answer the question of whether they exist, but to actually find one. For Fin, this is not that clear. After all, the request ends with this question. But it is very likely that the user here would not be satisfied with the answer “Yes, there are limos that big.” What makes Fin so special and different from many other systems is the collaboration between humans and robots. Many requests that reach Fin are full of ambiguities that only humans can decipher in context—contexts that Fin does not know naturally. Fin has never eaten at a restaurant, partied in a limousine, or made a dentist appointment. It takes a human to fully understand a human. This is why, behind Fin, there are real humans who help Fin understand; they do all the things that the robot cannot do yet. By helping each other out, humans can offer personal assistance at a low cost while Fin becomes smarter and smarter. It learns about humans and the contexts they are in. “Hey, Fin; I am trying to eat better. I want to start with nutrition. Is there an app out there that can help me understand how many calories I should eat? Can you help me find a website for better eating habits?” You notice already that users treat Fin like another human. Since you actually do not know whether you are talking to a robot or a human right now, this is not very surprising. If you look at interaction with other fully automatic systems like Alexa on Amazon’s Echo Dot or Siri on Apple devices, you notice similar behavior. The fact that a system understands and replies in human language makes us treat the system as more human. 
I recently had a conversation with a friend who builds robots to support people in physical activities such as carrying items and performing service tasks like picking up and cleaning things. In the research my friend is


doing, the company found that it pays off to make the robots as cute as possible. Not because we like to interact with cute or more human-like robots; the true reason lies somewhere else, and it reveals a lot about human nature. The reason it pays off to make robots look like humans is that humans treat them better when they are cute. Especially when the robot does not do what it is supposed to, people who get frustrated and angry become confrontational towards robots that are not sufficiently human or cute. If you want people to respect technology, make it look and act like a cute human. “Hey Fin! I visited a ritz carlton over 2 years ago and we had a poor experience there. I received points as a comp but I think they just expired. Can you call and check it out?” Fin does not stop at researching things for you. The more digital interaction we allow, the more a virtual assistant can do for us. When you look at this request, you can see that there is hardly any information in the request itself to help Fin figure out what to do. But—and this is the beauty of the system—Fin can go through your emails, find the booking, and contact the hotel. Of course intelligent algorithms can write emails; that is not new. They are able to make calls too. In this case, I assume that a human would ultimately make the call. But, first of all, this is going to change fairly soon, and secondly, the human who makes the call will not do the research to find out about the points, the hotel location, or anything around the case. In this division of labor, we see the next generation of automation. We do not replace occupations like the virtual personal assistant with robots. We replace many of the tasks we all do on a regular basis, such as researching stuff, dealing with customer service, booking tables or flights, doing taxes, and replying to emails.
The fact that many people would welcome help in these areas is sufficient proof to me that we will see these systems take over tasks in all areas of life in no time, probably around 2020-2025.




And once we get used to having systems like these around all day long, we will start using them for more and more tasks, in more specific and context-rich environments, as well as for very personal requests. Look at this request from a few minutes ago (as of this writing). Do you think Fin will know why someone would want to do this? The point here is that it will take a lot of human-level insight and context to understand why humans would want to do this. But it will not take much understanding to solve the problem and act on our behalf: “Can you find a place that can laser engrave some text onto the top sheet of a snowboard?”

Stitch Fix

And secondly, meet the company that knows what you will want to wear—hello, Stitch Fix. I learned about the company in detail when a friend switched from a very good position at a big American fashion brand to this startup, which is now valued at over two billion dollars. Stitch Fix works with over two thousand stylists all over the US, who all work from home to identify the perfect outfit for their clients and contexts. Customers first enter a set of preferences and create Pinterest boards with things they like. The data are very unstructured, which matches what we would expect from a creative field such as fashion. Machine Learning, Natural Language Processing, and Visual Data Learning are deployed to make sense of what the user might want to wear. A stylist looks at an interface providing him or her with detailed, but not always complete, information about the person and his or her preferences. Stitch Fix runs experiments not only on their AI system but also on their stylists. What if you see a picture of the customer? How does that influence your choice of clothes for this customer? Will having less information help you make better choices in some areas? Once the clothes are chosen, they get shipped to you; you keep what you like and only pay for what you decide to keep. According


to Harvard Business Review authors H. James Wilson, Paul Daugherty, and Prashant Shukla,3 the San Francisco-based fashion company Stitch Fix offers three lessons about how robots and humans create value when they work together:

1. Humans need to support artificial intelligence. It is too risky in business terms to lose the trust of customers because of the mistakes that inevitably happen when AI systems have only a limited understanding of what humans experience and prefer. Particularly when it comes to what to wear, misunderstandings might draw harsher responses if they come from a machine rather than from a human stylist. At the same time, AI systems can support stylists in their work and, rather than only augment their intelligence, actually create systems of augmented creativity in which stylists become better stylists thanks to better and more adaptive data about their clients. In turn, the AI learns from the stylists how to read fashion demands and, in effect, what looks good.

2. Machines augment intelligence and create interfaces for working humans that supercharge productivity and creativity.

3. It is not one AI-driven data tool but the combination of many that powers the extraordinary success of companies like Stitch Fix, which have understood that the interaction of humans and machines is where we will find the most convincing business cases in the near future.

There are several hidden lessons behind the stories of Fin and Stitch Fix. First of all, we learn that AI robots are not ready to do our jobs, but they are ready to take over many tasks, and they already outperform humans on some of them. We need to let go of the idea that the future of AI in our lives is black and white. AI systems will have a tremendous impact on our lives by replacing humans in specific tasks; they will not replace us completely in many jobs, but they will massively augment our intelligence in pretty much every job while also—and this is something most of




my clients are completely unaware of—dramatically increasing our creative output by augmenting our creativity.

Tools - Assistants - Peers - Managers

When we think about the augmentation machines have created in the past and keep in mind where human-machine interaction is probably heading next, we see certain patterns that clarify and demystify the doubts and fears around robots. We can compare these new technologies in terms of their computational basis (how they do what they do), their interfaces (how they communicate with us), and the value they create for us (how useful they are at doing what they do). Let us, for the moment, compare a traffic light, the software program Excel, and the chatbot run by the airline KLM.

Traffic lights seem like a pretty basic and quite simple technology but, of course, at some point they were as innovative and surprising as many robots are today. How does a traffic light do what it does? Yes, there are lights, and they are coordinated within an intersection, and across intersections with many traffic lights. This coordination must, for some time, have amazed people, and I am not sure many of us could explain exactly how it happens. When you look at current traffic models in bigger cities, though, you realize that the coordination of traffic lights is much more complex than just turning lights on and off within an intersection. They respond in real time to traffic on the streets; they are coordinated across the whole city and are essential to avoiding gridlock. The interface might be simple: they tell us what to do and coordinate our actions on the streets. But neither their basis nor their value is simple. They are absolutely essential for the survival of cities.

Excel, in comparison, certainly has a more complex interface, and its basis might also be more complicated to explain. Excel can


be found in almost all offices around the globe. It is still an industry standard for data processing and, without a doubt, has dramatically increased our business intelligence.

The KLM chatbot knows how to talk to airline passengers; it has access to real-time flight data and all the other information passengers typically look for around their travel experience. The chatbot communicates in natural language and lets users simply chat and ask the sorts of questions usually directed at customer reps. The technology behind chatbots is very complex and requires several technologies working together to make the outcome useful: the bot needs to understand questions and requests, find an answer or solution, reply in natural language, and do all of this in a way that satisfies a human who is used to talking to humans, not machines.

When you compare these examples, you see a massive shift in how technologies augment our intelligence, but also in how their interfaces have started matching our needs. While the traffic light has a very simple interface, we have no intuitive access to its meaning. We have to learn what a red light means and what it wants from us. Once we understand, we follow its instructions. We do not rely on the light’s ability to augment our own intelligence; rather, we trust that it is part of a system intelligence greater than what our own intelligence can process in the moment.

Excel, on the other hand, clearly helps us make better decisions by augmenting our intelligence directly. It does so not by telling us what to do but by outperforming us in computing power. Excel’s interface is not intuitive either; it takes us quite a while to learn how to augment our intelligence using its power.

Talking to a chatbot is different. We have intuitive access to natural language. We do not need to learn to navigate the interface; the interface is not outside our own normal experience.
It is a natural tool. At the same time, the tool itself outperforms us in many areas, such as speed, accuracy and reliability of information. What we see these days is a fascinating shift from technologies that we have to adapt to in order to make use of them towards




technologies that adapt to us. This shift is the basis for the development of human-like robots. It also explains how, at the same time, humans became more like robots. We will explore this interesting paradox next.


Christoph Burkhardt

DON‘T BE A ROBOT

Seven Survival Strategies in the Age of Artificial Intelligence MIDAS MANAGEMENT

Pub Date: 25th of May 2018
Pre-order here: www.bit.ly/robot_midas
Christoph Burkhardt, DON‘T BE A ROBOT
ISBN 978-3-03876-511-0, € 20.00
Midas Management, www.midas.ch


Reading Excerpt for "Don't be a Robot"  

Reading Excerpt from "Don't be a Robot" – Seven Survival Strategies in the Age of Artificial Intelligence. Written by Christoph Burkhardt....
