Nico Wada, 260669707
PHIL 306
Friday, October 14, 2016

The Abstract and Impenetrable Nature of the Mind: Debunking Machine State Functionalism and Machine Intelligence

In "The Nature of Mental States," Hilary Putnam likens all organisms capable of feeling pain to computing machines in a theory known as Machine State Functionalism, in which the mind is comparable to a software application and the brain to a device's hardware. According to Putnam, an organism arrives at a particular mental state in a manner analogous to how a computer completes a function: by implementing complex processes through programmed mechanical principles. Machine State Functionalism allows for multiple realization, meaning that many distinct species with varied brain states can nevertheless share the same mental states under this theory. Because the theory supports multiple realization and defines a mental state as a functional translation of input into output, a Machine State Functionalist holds that a machine with correctly implemented software can be considered intelligent. In this paper, I will explore the relation between Machine State Functionalism and machine intelligence, drawing on John Searle's objection from the Chinese Room thought experiment. I will then argue against the possibility of machine intelligence and demonstrate that the computational description of mental states under Machine State Functionalism is a flawed and radically liberal understanding of what constitutes intelligence. While I hold that machine intelligence is impossible, a Machine State Functionalist must reason that machines can possess mental states and therefore be deemed intelligent. Putnam defines a mental state as "the state of receiving sensory inputs which play a certain role in the