
Swathi Pottabathini and Trinisha Thomas
Philosophy Final Paper
Philosophy and Cognitive Science
Young India Fellowship
Ritwik Agarwal
18th June, 2017

Can we make machines understand language?

Philosophical questions such as "What is mind?", "Are mind and brain the same?" and "Can machines think?" have been doing the rounds in philosophical circles since the days of yore. To arrive at answers to such complex questions, philosophers began subdividing them into smaller domains. In doing so, one fundamental question they are trying to address is: "Is it possible for a machine to have a mind?" If yes, then society could advance not just technologically but also philosophically and spiritually by dissecting our understanding of the mind. One way of proceeding, as Chomsky suggests in his paper "Creative use of language", is to narrow the domain of enquiry down to linguistics. In this paper, we address the question "Can we make machines understand language?" by considering various pieces of philosophical literature and scientific evidence.

Understanding "understanding"

Language is essential to our very living and survival on this planet. Without language, not only would we not be able to communicate, we would probably not be able to think either. So, understanding a language furthers thinking, and in this context, understanding the meaning of "understanding" becomes imperative. The Oxford dictionary defines "understand", a two-place predicate used in sentences such as "You'll never be able to understand her" or "He can never understand mathematics", as the ability to "perceive the intended meaning of (words, a language, or a speaker)". For a machine to understand anything, considering only the input and the correctness of the output is not sufficient; as the definition suggests, the right perception of the intended meaning is also necessary. One is said to have understood if the meaning is clear to the person, which implies that the person has a conceptual grasp of the subject matter at hand.



For example, if one is given a watch, s/he should be able to recognize and internalize its structure and properties, such as the type of dial, the colour of the strap, the time being shown, and its brand, and be able to answer various questions related to it (The Understanding Machine 2016).

Another facet of understanding is related to intentionality, a concept that brings us closer to knowledge of the mind.

Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs. The puzzles of intentionality lie at the interface between the philosophy of mind and the philosophy of language... 'Intentionality' is a philosopher's word. It derives from the Latin word intentio, which in turn derives from the verb intendere, which means being directed towards some goal or thing (Jacob 2003).

Thus, thinking is the "essence of the mind", and language the "window to the mind" (Pinker 2007). For language to be understood, the faculty of language is an innate and necessary condition (Cowie 2008). In this paper, we argue that it is possible to develop the faculty of language in machines through coding, that is, through artificial intelligence (AI) that simulates the mental or cognitive states of the mind. To do so, we first examine the arguments proposed by Searle on this issue. Secondly, we consider various pieces of technological and empirical evidence from AI which suggest that Searle could be completely disproved in the future. Finally, we explore possible counterarguments to our own position and provide concluding remarks.

"Machines Can Never Understand Language"

In his paper "Minds, brains, and programs", Searle draws a clear distinction between strong AI and weak AI, in an age when technological and scientific might is being put into programming computers to emulate human cognitive capacities. Weak AI considers computers a crucial tool for pursuing the study of the mind. Strong AI, on the other hand, considers computers not just a tool to study the mind, but entities that may possess a mind of their own. Searle, a critic of functionalism (Levin 2004), argues that strong AI is not possible because even if machines are able to take input, process (or parse) it and provide relevant output, they never actually "understand" the process. He calls this the "Chinese Room Argument".



Searle asks us to imagine a scenario in which he is locked in a room and assigned the task of answering questions given in Chinese, a language of which he has no understanding. The only equipment he has is an instruction book listing rules, written in English, for manipulating Chinese symbols, and plenty of paper. He receives a paper with Chinese "squiggle squoggles" through the in-slot and is able to produce the appropriate Chinese "squoggle squiggles" as output by referring to the instructions in the book. This operation would be able to pass a "Turing Test" conducted in Chinese. However, Searle argues that just as he produced relevant output without any understanding of Chinese, any machine that parses input with the aid of a program (analogous to the rule book in our case) would generate correct output and pass the Turing test without any real understanding of the language; the imitation of behaviour does not equate to understanding.
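The force of Searle's point is easy to render in computational terms. The following minimal sketch, with an invented two-entry rule book standing in for Searle's instruction manual (the Chinese question-answer pairs are placeholders, not drawn from his paper), maps input symbols to output symbols by shape alone; nothing in the program represents what the symbols mean.

```python
# A minimal sketch of the Chinese Room as a program: a pure
# symbol-to-symbol lookup table. The question-answer pairs are
# invented placeholders standing in for Searle's rule book.

RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "天气很好。",    # "How's the weather?" -> "Very nice."
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book prescribes for the input.

    The string is manipulated purely by its shape ("squiggle
    squoggles"); no part of the program knows what it means.
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please say it again"

print(chinese_room("你好吗?"))  # looks competent, understands nothing
```

To an outside observer the replies may look competent enough to pass a Chinese Turing test, which is precisely Searle's complaint: correct output is no evidence of understanding.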

Machines might understand language: Towards Strong AI

With the developments in science and technology, neural networks have become the "talk of the town", with major firms investing a great deal of resources in them, be it Facebook's face recognition, Amazon's Alexa, Apple's personal assistant Siri, Google's voice recognition on phones or Microsoft's language translation (Metz 2017). However, scientists are yet to develop the ideal machine: one intelligent enough to hold a conversation as natural as a human's. Research on AI has progressed rapidly and has moved beyond hardcoding sets of rules and creating repositories that act as lookup tables; instead, it works through a constant feedback mechanism, much like the neural networks in the brain, that automatically learns the unsaid rules of a language and comes to grasp meaning. In AI parlance, this is "generative research". It is here that we believe the "Systems Reply" proposed against Searle's position is accurate, because the person is a product of the system, and the system shapes the understanding and meaning that an individual learns and hence attributes to the words and sentences s/he uses. By employing heuristics with pattern recognition, simple machine learning, rule-based expression matching and deep learning, AI scientists are trying to create a module that analyses and learns how to decode input and generate a response comprehensible to both human and machine, similar to the faculty of mind that humans possess.
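The shift described above, from hardcoded lookup tables to systems that learn regularities from feedback, can be made concrete with a deliberately small sketch. The training utterances and labels below are invented for illustration; the point is that the program is never given an explicit rule for what counts as a greeting, yet feedback alone pushes its word weights into a workable distinction, a crude analogue of a network learning the "unsaid rules" of a language.

```python
# A toy bag-of-words perceptron that learns to tell greetings from
# farewells purely from feedback, with no hand-written rules.
# Utterances and labels are invented for illustration.

from collections import defaultdict

TRAIN = [
    ("hello there", +1), ("hi how are you", +1), ("good morning", +1),
    ("goodbye now", -1), ("see you later", -1), ("bye for now", -1),
]

weights = defaultdict(float)  # one weight per word, all starting at zero

def score(utterance: str) -> float:
    return sum(weights[w] for w in utterance.split())

# Feedback loop: every misclassified utterance nudges its word weights.
for _ in range(10):                          # a few passes over the data
    for utterance, label in TRAIN:
        if score(utterance) * label <= 0:    # wrong or undecided
            for w in utterance.split():
                weights[w] += label          # classic perceptron update

print("greeting" if score("hi there") > 0 else "farewell")  # learned, not coded
```

Nothing in such a sketch settles the philosophical question, of course; it only shows that the regularities a system follows need not be written in by hand.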



Searle's response to this type of machine, however, is that it fundamentally operates on an algorithm that processes an input and produces an output; that is not necessarily understanding but merely the functioning of the algorithm that has been plugged into it. While this was true of first-generation chat bots, which responded only to well-formed sentences free of grammatical errors, research in AI today, using "training" algorithms, can recognize natural human language and react to almost any situation. The caveat lies in the training cycles: the reinforcement and generative learning on which these machines operate requires great amounts of time and data to reach decent accuracy, whereas in humans this is an innate faculty that grows almost organically.

Cleverbot, Cortana, Siri and Tay are well-known voice-activated AI technologies and chat bots, testimonies to the advancement of technology, delivering coherent conversations that mimic human interaction. One could, however, raise the argument that Tay, Microsoft's first public experiment with a Twitter bot, was a failure, as it had started to act like a feminist-bashing, racist xenophobe. Our counterargument is to ask whether one would then classify humans who are sexist, racist and xenophobic in their attitudes as 'non-understanding' creatures or merely as irrational. If the former were true, the premise that human understanding is the irrefutable basis of "understanding language" would fall through and be invalid; if the latter is true, then we would in fact be accepting that machines do understand language but might act in irrational ways, just as humans do.

While it is true that human intentionality lies behind the knowledge encoded in these algorithms, why should the ability of these algorithms to evolve not be acknowledged as analogous to how humans come to understand linguistic structures through experience? By insisting that the computational properties of the brain and mental states are essentially biological phenomena, Searle blatantly dismisses the possibility that a system capable of replicating the brain could mean anything at all, holding that it would remain meaningless owing to the sheer absence of life.

Could Searle be right?

Though AI is advancing in ways never imagined before, backed by strong research in linguistics, there are various arguments that favour Searle.



One such argument appeals to Cartesian duality, which holds that mind and body can exist on their own but are causally related; it points to the conclusion that unless a biological infrastructure similar to that of a human is in place, no machine will ever actually understand language. This is related to the "Robot Reply" and the "Brain Simulator Reply" in Searle's paper. Answering the former, he points out that even if sensory paraphernalia were attached to the machine, it would merely mean more input being fed into the Chinese Room, and would not imply intentionality as in the case of humans. This is well observed in existing humanoid robots, which aesthetically appear and mechanistically function like humans but cannot understand as humans do. In countering the biological simulation of neurons too, he points to the distinction between simulation and intentionality.

Furthermore, morphology and semantics are important components in understanding a language, as their collective usage generates meaning (Natural Language Processing 2016). Although language is one of the easiest things for humans to learn, conversations are known to be complicated because of ambiguity and because of the contexts in which words are used, even when the semantics of the language are correct. This marks a clear distinction between a machine learning and a machine truly understanding. At a recent AI conference, Richard Socher, Chief Scientist at Salesforce, gave an excellent example of ambiguity: "The question 'can I cut you?' means very different things if I'm standing next to you in line or if I am holding a knife" (Yao 2017). Will a machine be able to decipher the distinction expressed here the way an otherwise sound human could? When we argue for an "understanding machine", it should be one able to answer both open-ended and closed questions, just as humans do. Will technologists ever be able to develop a mechanistic module smart enough to answer questions like "Are you happy?" versus "What is happiness?" Also, while making conversation, there is a flow of ideas from one sentence to the next. Humans are capable of interconnecting these ideas and continuing a long conversation without having to repeatedly mention which object relates to which word. For example: "Rohan has gone to work. Do you think I can go meet him? The office is five blocks away from here." A modern chat bot's ability to continue a long conversation by building on earlier information is quite debatable in this context. A human could decipher these sentences to understand that the speaker is talking about meeting Rohan, and that Rohan's office is five blocks away from the speaker's house (Natural Language Processing 2016). There are two challenges the chat bot faces here: one of context and the other of ambiguity; not only must the machine infer from the other sentences which office is being referred to, it must also cope with the ambiguity that is characteristic of human conversations, as the sketch below illustrates.
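To make the difficulty concrete, the sketch below applies the naive heuristic "link a pronoun or definite noun phrase to the most recently mentioned compatible entity" to the Rohan example. The entity inventory and the heuristic are our own illustrative inventions; the heuristic happens to succeed here while knowing nothing about offices, distances or people, and it collapses as soon as two candidates compete.

```python
# A naive coreference heuristic for the Rohan example: attach "him"
# and "the office" to the most recently mentioned person. The entity
# inventory is a hypothetical stand-in for real-world knowledge.

DISCOURSE = [
    "Rohan has gone to work.",
    "Do you think I can go meet him?",
    "The office is five blocks away from here.",
]

KNOWN_PEOPLE = {"Rohan", "Priya"}  # hypothetical inventory of people

def resolve(discourse):
    last_person = None
    for sentence in discourse:
        words = sentence.rstrip(".?").split()
        for word in words:
            if word in KNOWN_PEOPLE:
                last_person = word  # recency is all this heuristic tracks
        if "him" in words:
            print(f"'him' -> {last_person}")
        if "office" in (w.lower() for w in words):
            print(f"'the office' -> {last_person}'s office")

resolve(DISCOURSE)
# Insert "Priya called." before the last sentence and the heuristic
# silently reassigns the office to Priya: recency is not understanding.
```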



Michael G. Dyer, Professor of Computer Science at UCLA, puts forward the "unsimulatable world" argument to further Searle's proposition. Since the brain is a complex organ that has yet to be fully studied, the argument runs, it is impossible to create a machine that fully simulates its workings, and thus understands language. At some point, AI scientists will give up, as they will not be able to simulate more than what has been discovered; the machine will never be developed until the mystery of the brain is unravelled (Dyer 1990).

Conclusion

According to Searle, the definition of understanding is very closely tied to intentionality, which he strongly believes is difficult, almost impossible, to code into machines unless we are able to build machines with the same "causal powers as brains". But with the advent of technology this parochial definition of understanding can be contested. Modern chat bots are still restricted in their ability to qualify as fully understanding language, and hence are not generic like humans in their ability to perform multiple activities. So it can be said that they are not yet equipped to be classified as strong AI, but one cannot negate the possibility of their developing into it in the near future. With concepts such as transhumanism, which aims to create an intermediary between the human and the post-human through technological advances, becoming popular movements, the hope for the complete materialization of strong AI lives on; its pros and cons would be an avenue for another paper altogether.



References

Cowie, Fiona. "Innateness and Language." Stanford Encyclopedia of Philosophy, 2008.

Dyer, Michael G. "Intentionality and computationalism: Minds, machines, Searle and Harnad." Journal of Experimental & Theoretical Artificial Intelligence 2, no. 4 (1990): 303-319.

Jacob, Pierre. "Intentionality." Stanford Encyclopedia of Philosophy. August 07, 2003. Accessed June 18, 2017. https://plato.stanford.edu/entries/intentionality/.

Levin, Janet. "Functionalism." Stanford Encyclopedia of Philosophy. August 24, 2004. Accessed June 18, 2017. https://plato.stanford.edu/entries/functionalism/.

Metz, Cade. "AI's Next Frontier: Machines That Understand Language." Wired. June 03, 2017. Accessed June 18, 2017. https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/.

"Natural Language Processing and Machine Learning: the core of the modern smart chatbot." LINKIT.

Accessed

June

18,

2017.

https://www.linkit.nl/knowledge-

base/228/Natural_Language_Processing_and_Machine_Learning_the_core_of_the_modern_sma rt_chatbot.

Pinker, Steven. The Stuff of Thought: Language as a Window into Human Nature. London: Penguin Books, 2010.



"The Understanding Machine: Can Intelligent Machines Understand Language?" The Oxford Philosopher.

May

03,

2016.

Accessed

June

18,

2017.

https://theoxfordphilosopher.com/2014/08/25/the-understanding-machine-can-intelligentmachines-understand-language/.

Yao, Maria. "4 Approaches To Natural Language Processing & Understanding." TOPBOTS. March 21, 2017. Accessed June 18, 2017. http://www.topbots.com/4-different-approaches-natural-language-processing-understanding/.
