
THE ‘HARD PROBLEM’ OF CONSCIOUSNESS

Marcus Abundis [1]

Abstract

To frame a useful model of information, intelligence, ‘consciousness’, or the like, one must first address a claimed Hard Problem (Chalmers, 1996) – the idea that such phenomena fall beyond all scientific thought. While the Hard Problem’s veracity is often debated, analogues to this claim arise elsewhere in the literature as a ‘symbol grounding problem’ (Harnad, 1990), ‘solving intelligence’ (Burton-Hill, 2016), Shannon and Weaver’s (1949) ‘theory of meaning’, and so on. Thus, the ‘issue of phenomena’, or innate subjectivity, still holds sway in many circles as ‘unresolved’. Direct analysis of the Hard Problem is also rare; researchers instead assert either that: 1) it is a patently absurd view unworthy of study, or 2) it marks an intractable issue defying such study – and both views offer little clarifying detail. A ‘Hard Problem debate’ thus endures. This essay takes a third approach: directly assessing the Hard Problem’s assertion contra natural selection in the formation of human consciousness. It examines Chalmers’s logic and evidence for this view, taken from his articles over the years. The aim is to frame a case where it then becomes possible to attempt resolving this ‘issue of phenomena’ (7 pages: 3,400 words).

Keywords: consciousness, hard problem, psychology, mind, evolution, natural selection, duality

INTRODUCTION – Statement of Problem and Proposed Approach

Philosopher David Chalmers (1996), in The Conscious Mind, is known for naming a Hard Problem in the study of consciousness. Such views go back to Anaxagoras’s Pre-Socratic notions of mind and matter, and to Descartes’s Meditations as a separation of ‘mind’ and ‘body’. The terms are seen to have interrelated functional roles, but where ‘mind’ has no specific physical identity, a material ‘body’ has a specific physical form. This supposed separation of a thinking world (mind) from a material world (body) presents, for Chalmers and others, an insoluble point – an explanatory gap ‘not open to investigation by the usual scientific methods.’ He claims a ‘reductive explanation of consciousness is [therefore scientifically] impossible’ (ibid., p. xiv).

Chalmers argues a general case against science in studying consciousness. He does this by asserting an abstract ‘logical supervenience’ over scientific ‘empirical supervenience’ (ibid., pp. xiv, 34-35) [2] – as innately different logical orders for mind and body. But Chalmers’s abstract general case cannot reasonably entail all of science, as science holds many specific-but-diverse, unfinished, and at times puzzling or even incompatible areas of study. For example, we lack final theories for biology and for gravity, quantum mechanics seems innately ambiguous, epigenetic roles cloud prior notions of DNA, dark matter and dark energy are wholly unexplained, and so on. Given this unfinished scientific variety, the only way to assess the Hard Problem’s merit is to gauge its claims against just one scientific view. This essay pursues such a study using Darwin’s theory of evolution by natural selection (EvNS) – often seen as our most successful scientific theory.

[1] Organizational Behavior (GFTP), Graduate School of Business, Stanford University (March 2011).

[2] Supervene – (of a fact or property) entailed by or dependent on the existence of another: mental events supervene upon physical events. Chalmers enlarges this definition beyond ‘fact or property’ to include conceptual/logical supervenience, and then gives primacy to that abstract view over empirical views.

If the Hard Problem is found to hold firm arguments contra EvNS, this would argue for its veracity and underpin an interesting discourse – and if not, the opposite is true.

Nothing in science as a whole has been more firmly established by interwoven factual information. Nothing has been more illuminating than the universal occurrence of biological evolution. Furthermore, few natural processes have been more convincingly explained than evolution by the theory of natural selection. (Wilson, 2009)

CHALMERS’S VIEW AGAINST A DARWINIAN FRAMEWORK – Review of Literature

EvNS is widely accepted, but Chalmers’s Hard Problem makes a specific claim against EvNS in the formation of human consciousness. He dismisses EvNS as having any place in the development of consciousness, claiming the ‘process of natural selection cannot distinguish between me and my zombie twin. [EvNS] selects properties according to their functional role, and my zombie twin performs all functions I perform just as well’ (emphasis added, Chalmers, 1996, pp. 120-21). A zombie is defined earlier as ‘physically identical to me . . . molecule for molecule . . . but lacking conscious experiences altogether . . . all is dark inside . . . empirically impossible’ (ibid., pp. 94, 96), thus setting up a ‘zombie twin versus David Chalmers’ functional thought experiment. Chalmers offers this functional-but-fictional account of his own consciousness to initiate a contra-scientific view developed throughout his book.

The above notes are the only express comments he offers on EvNS. He gives no other detail or outside references, but lays out this ‘zombie trope’ and proceeds from there. Chalmers is not alone in using zombie-like devices to portray problems of consciousness (Kirk, 2012; Deacon, 2011), so this ‘zombie’ may be a useful trope. Still, Chalmers’s use of zombies to rebut EvNS is singular in the literature. Also, this ‘empirically impossible’ zombie rebuttal of EvNS collides with a prior claim to not ‘dispute current scientific theories in domains where they have authority’ and to ‘take consciousness to be a natural phenomenon . . . under the sway of natural laws’ (Chalmers, 1996, p. xiii). Thus, early on, self-contradictory positions begin to arise in his volume.

Chalmers’s (1996) sparse comments on EvNS prompt further study of his volume for more detail. With no other notes on EvNS found therein, perhaps exploring the functional view he poses against EvNS can help. ‘Functionality’ arises early on, in Chapter 1.2, where Chalmers ties phenomenal events to ‘subjective conscious experience’, and ties ensuing psychological views of phenomenal events to ‘functionalism and cognitive science’. He then ponders ‘whether the phenomenal and psychological will turn out to be the same [practical] thing’ (ibid., p. 12). For example, in jointly considering phenomenal and psychological roles he names many likely coadaptive functions (e.g., pain, emotions, etc.). Chalmers states that ‘the co-occurrence of phenomenal and psychological properties reflects something deep about our phenomenal concepts’ of consciousness, and that it is practical ‘to say that together, the psychological and the phenomenal exhaust the mental . . . there is no third kind of manifest explanandum’ (ibid., p. 22). He plainly accepts causal/functional relations between the phenomenal and the psychological (ibid., pp. 13, 16-17, 21-22), thus implying a likely role for EvNS (i.e., selection for synchronous functioning of phenomenal sensory input, and efficient psychological views of that input). But soon after, Chalmers oddly insists on holding the two, the phenomenal and the psychological, apart.


He argues that this separation is somehow ‘logically possible’ (despite his earlier arguments) and therefore must be used in modeling consciousness. This forced conceptual separation of ‘phenomena and function’ (versus a plainly needed functional sensorium in EvNS), asserted via ‘logical possibility’, arises in Chapter 1.3. This sense of logical possibility is then what underpins his call for a functional Hard Problem, subtly displacing his earlier-stated claims of ‘logical supervenience’ (note 2). Further, Chalmers accepts that ‘functional analysis works beautifully in most areas of cognitive science’ and that many ‘mental concepts can be analyzed functionally’ (ibid., p. 42). But as ‘functionalist analysis denies the distinct’ separation of phenomenal from psychological (which his argument requires), it is ‘therefore unsatisfactory’ (ibid., p. 14) for studying consciousness. He states that ‘no matter what functional account of cognition one gives, it [still] seems logically possible’ to imagine a scenario requiring no conscious presence (emphasis added, ibid., p. 47). The model used most often to typify this logical possibility is his zombie twin (from above). Later, Chalmers baldly disavows any functional attempt to model consciousness, for to ‘analyze consciousness in terms of some functional notion is either to change the subject or to define away the problem’ (ibid., p. 105) – ignoring his own use of a functional view to justify his zombie-twin argument contra EvNS.

Leaving Chalmers’s (1996) major work behind, later essays continue in a non-Darwinian vein. In Strong and Weak Emergence, Chalmers (2006) argues that the emergence of high-level truths like consciousness is not conceptually/metaphysically reliant on low-level truths like survival (i.e., consciousness is ‘strongly emergent’). In contrast, he states that the arrival of EvNS as a natural dynamic (via genomic events) is unsurprising – weakly emergent – versus a claimed strong emergence for consciousness. But he again seems contradictory, where a ‘most salient adaptive phenomena like intelligence’ is a natural event (ibid., p. 253) – how this adaptive intelligence should differ from consciousness is unclear. He never defines intelligence or consciousness in a way that allows further analysis. Instead, he again references his zombie trope: ‘I have argued this position at length elsewhere’ (ibid., pp. 246-247, indicating Chalmers, 1996) – Strong and Weak Emergence is thus based mainly on points raised in his earlier volume.

Later, Chalmers (2010, p. 103) begins Consciousness and Its Place in Nature with the following: ‘Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature’ (emphasis added). Importantly, here Chalmers uses ‘common conceptions’ to impugn a ‘scientific conception’ of nature. Thus, Chalmers again asserts a dualist-conceptual view (a forced logical divide) to challenge basic notions of ‘natural order’ or EvNS. Also, his distrust of the current ‘science of consciousness’ (shared by others [Koch, 2012]) drives a call for ‘psychophysical laws’ as a regular feature of his work (Chalmers, 1996, 2006, 2010). His argued-for laws are posed as a needed alternative to EvNS or ‘natural order’. But those laws are never developed; they are only invoked as required to solve persistent questions on consciousness. Neuroscientist Christof Koch (2012, p. 124) sees Chalmers’s psychophysical laws as a ‘crude type of dual-aspect information theory’ – per Koch, set out in an appendix to Chalmers (1996) but, in fact, missing from any generally available edition of that volume.

There is something extremely puzzling about the claim that consciousness plays no evolutionary role, because it is obvious that consciousness plays a large number of such roles. (Searle, 1998, p. 63)

CHALMERS’S FRAMING – Discussion of Literature

Next, how should we relate the above notes on EvNS to grasping consciousness? Chalmers’s view is plainly that we cannot. To justify this ‘evolutionary exclusion’, his only evidence is a zombie – a fictional device from his only express comment on EvNS. So, if we are to accept that EvNS cannot help in understanding consciousness, we must accept a logical superiority (or ‘logical possibility’) of fictional devices over scientific theory. A question then arises: how well supported is that logical possibility? Chalmers embraces a line of thought that is, by his own words, ‘not defined in terms of deducibility in any system of formal logic’ or functioning (Chalmers, 1996, p. 35) – seeming to imply his claim is unsupported and arbitrary. Also, this fictional account presents an odd gambit for one wanting to conform to scientific thinking and natural laws (noted earlier). Lastly, his fictional foundation presents a catalogue of other issues. Apropos of Chalmers’s zombie, Frigg and Hartmann (2006, p. 11) note: ‘The drawback . . . is fictional entities are notoriously beset with ontological riddles. This has led many philosophers to argue that [such devices] . . . must be renounced.’ Yet other problems with zombie-like devices are noted elsewhere in the literature (Kirk, 2012). As an example, neuro-anthropologist Terrence Deacon (2011, pp. 40, 45) states that use of ‘homuncular representations’, which he equates to zombies, ‘can be an invitation for misleading shortcuts of explanation . . . although [the ‘map’] is similar in structure to what it represents it is not intrinsically meaningful . . . the correspondence . . . tells us nothing about how it is interpreted . . . Appealing to an agency that is just beyond detection to explain why something [works] is an intellectual dead end in science. It locates cause or responsibility without analyzing it.’

Regardless, Chalmers’s argument is a thought experiment, which must use fictional devices – and such devices may be useful. The problem is that his zombie trope is imprecisely drawn (re Frigg and Hartmann). For example, Chalmers defines his zombie as ‘lacking conscious experiences altogether . . . all is dark inside,’ while also claiming this ‘zombie twin performs all functions I perform just as well.’ Does Chalmers then envision his zombie in a functional role, writing a piece on consciousness ‘just as well as he’? And with what for content? It is empty inside. Such complex functioning cannot be explained absent any informational content, such as accrued life experiences and a fully functioning memory. Chalmers’s claimed fictional ontology for a fully functional zombie as ‘logically possible’ begins to sound glib.

As a further example of imprecision, Chalmers (1996, p. 35) posits a logically possible zombie by pondering what ‘would have been in God’s power (hypothetically!) to create, had he chosen to do so. God could not have created . . . male vixens’ because of obvious gender contradictions. But then Chalmers’s God is somehow unfazed by the contradictory ‘living death’ that typifies all zombies, and is blind to humanity’s gender-bending habits (e.g., hermaphrodites, transsexuals, transvestites, bisexuality, hijras, LGBT, etc.). Chalmers’s hypothetical claim of logical possibility is unfounded here, especially as a statement ‘in terms of its truth across all logically possible worlds’ and holding to a ‘notion of conceptual truth’ to support ‘[k]ey elements of [his] discussion’ (ibid., pp. 52-54).


Chalmers (1996, pp. 94-97) later elevates his zombie to something ‘quite unlike the zombies [of] Hollywood movies’, which have ‘significant functional impairments’: a ‘creature [that] is molecule for molecule identical . . . . He will certainly be identical to me functionally’ and ‘psychologically identical’ too, but lacking conscious experience – a phenomenal zombie. Chalmers then states: ‘the logical possibility [of such] zombies seems equally obvious’ as that of a mile-high unicycle. But his zombie-unicycle comparison ignores the functioning that makes a unicycle a unicycle – that it is a one-wheeled vehicle acrobatically ridden by a human. Chalmers’s mile-high unicycle is not logically possible, as the necessarily massive device [3] could not even be lifted, let alone mounted and ridden, by any one ordinary human. Chalmers sees the risk in mixing conceptual (zombie) and empirical (unicycle) metaphors, citing W. V. Quine, ‘who argued that there is no useful distinction between conceptual truths and empirical truths’ (ibid., p. 52). But he repeatedly ‘solves’ the issue by conflating fictional logical possibilities with natural facts and functioning – as with his zombie-unicycle exemplar. By one count, he holds psychological (conceptual) and phenomenal (empirical) views apart, but then ties his zombie concept to a ‘presumably empirical’ unicycle that is, in fact, another impossible fictional device. This is how Chalmers supports his zombie twin with accounts of ‘logical possibility’.

Beyond this imprecision, in the whole of Chalmers’s noted work he never defines consciousness. The closest he comes is this: ‘What is central to consciousness, at least in the most interesting sense, is experience. But this is not a definition. At best, it is a clarification . . . to define conscious experience in terms of more primitive notions is fruitless’ (Chalmers, 1996, pp. 3-4). So, when he deprives his zombie of consciousness, it is denied what, exactly? If this zombie has no functioning sensorium to support ‘experience’, how does it exist, eat, walk, or talk? Again, as implied by Frigg and Hartmann, even simple self-regulation cannot be explained absent a basic ontologic, epistemic, and operative framing – Chalmers’s offered account of consciousness is more ‘fully fictional’ than ‘fully functional’. As Chalmers never defines consciousness, so too he never defines his zombie in a delineated way that allows for clarifying arguments. He can make whatever ‘zombie claim’ he wishes with impunity. It is an argument impossible to refute, simply because it has neither bounds nor logic – it is untestable and unfalsifiable (Shuttleworth, 2008). Later, Chalmers even uses the phrase ‘has the consciousness of a zombie’ (Chalmers, 1996, p. 254). So a firm sense of what this zombie is, conscious or otherwise, seems absent.

The lack of even a working definition for consciousness deprives us of the taxonomy needed for practical study, taxonomic naming being prerequisite to any formal endeavor (Ford, 2007, p. 91). Some even think a precise naming of consciousness is unlikely (Greenfield, 2009; Koch, 2012). But if no formal name for consciousness is likely, there seems little point in studying ‘that which we cannot even name’.

[3] One mile of CroMoly 1.5" tubing, used in light-weight frame building, with a wall 0.035" or 0.25" thick, weighs 2,878 or 17,541 pounds respectively (OnlineMetals, 2012). Several such lengths would likely be needed.
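As a rough check on these figures, the minimal sketch below recomputes the tube weights from geometry alone, assuming a nominal density for 4130 (CroMoly) steel of about 0.283 lb/in³; the cited OnlineMetals values are presumably catalogue per-foot weights, so small differences are expected.

```python
import math

STEEL_DENSITY_LB_PER_IN3 = 0.283   # assumed nominal density of 4130 (CroMoly) steel
MILE_IN_INCHES = 5280 * 12

def tube_weight_lb(od_in: float, wall_in: float, length_in: float) -> float:
    """Weight of a round tube: annular cross-section area x length x density."""
    id_in = od_in - 2 * wall_in
    area_in2 = math.pi / 4 * (od_in ** 2 - id_in ** 2)
    return area_in2 * length_in * STEEL_DENSITY_LB_PER_IN3

for wall in (0.035, 0.25):
    weight = tube_weight_lb(1.5, wall, MILE_IN_INCHES)
    print(f'1.5" OD, {wall}" wall, one mile: {weight:,.0f} lb')
    # prints roughly 2,890 lb and 17,600 lb, in line with the cited 2,878 and 17,541 lb
```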

Ascribing such unnamable or opaque traits to consciousness is oddly reminiscent of a mystical, pre-scientific era. Further, it hinders, if not precludes, an honest appraisal of one’s basic framing of the matter. Finally, Chalmers never offers an alternative to EvNS. It is this complete lack of a practical framing for consciousness that, in fact, makes his Hard Problem hard. He denies us EvNS as a likely tool, and then offers no substitute – beyond his empty ‘psychophysical laws’. Yes, this is indeed a hard problem. His zombie trope abruptly plants us within a fictional terra incognita, an informationally spare state defined only by the author’s allowances. It is unclear how even Chalmers hopes to advance his own psychophysical laws in a serially coherent manner, as he never offers any detail.

CONCLUSION

Chalmers presents his thesis by stating that the ‘one technical concept that is crucial to [his argument] is that of supervenience’ (1996, p. xiv). But a close reading of his work shows that logical possibility is, in fact, his one key concept. Chalmers sees the risk in any such conceptual framing, noting that the ‘way in which conceivability arguments go wrong is by subtle conceptual confusion: if we are insufficiently reflective we can overlook an incoherence in a purported possibility . . . [revealing] that the concepts are being incorrectly applied’ (1996, pp. 98-99). But he seems blind to the incoherence and imprecision of his own logical possibilities.

The weakness of Chalmers’s logical possibilities makes his arguments arbitrary. Any number of equally fictional possibilities can be offered as counter-claims (e.g., Chalmers’s zombie could never function as claimed, as it would be constantly harried by flying monkeys). While such vistas may entertain, heavily fictionalized debate offers no basis for understanding or modeling challenging topics, let alone consciousness. Chalmers further acknowledges that there ‘is certainly a sense in which all these arguments are based on intuition, but [he tries] to make clear just how natural and plain these intuitions are, and how forced it is to deny them’ (1996, p. 110). But what is forced is the conceptual separation of logical and functional possibilities – where the only logic that view supports is a ‘dysfunctional logic’. Such misplaced arguments are called by some a FIST – a false implied supervenience thesis (McLaughlin & Bennett, 2011). Chalmers’s use of zombies, as Frigg and Hartmann suggest, offers only impossible riddles. His ‘zombie twin’ begins to sound more like a conjurer’s ploy of misdirection than a genuine effort at clarification. This intellectual sleight-of-hand diverts us from that which is right before us – EvNS’s likely relevance in framing questions of consciousness. Chalmers and others argue for zombies as logically possible (Kirk, 2012). But Chalmers’s zombies are by no means logically verifiable, or even roughly comparable to reality, as useful models require.

Still, this analysis does not deny the many factors Chalmers names as problematic in studying consciousness. Nor does it deny that ‘science’ (as presently grasped) can say little about consciousness – Chalmers’s main point. This critique only addresses Chalmers’s general case against EvNS, and thus asserts that the Hard Problem is not as irreducible as claimed.


The ‘next step’ – the use of EvNS to frame basic issues on consciousness, information, intelligence, and the like – is undertaken in other essays. Lastly, this paper covers only Chalmers’s view, as it is the best known of many such claims. A similar study is also possible for each of those other claims from other authors. For example, philosopher Daniel Dennett (1991) is often seen to argue a view opposite to Chalmers’s (i.e., ‘consciousness as illusion’). But Dennett’s view fails for the same reason as Chalmers’s – it too ignores the place of functionally differentiating sensoria and discrete perspectival uniqueness, key to EvNS. To study each such view from the last 2,500 years, going back to the Pre-Socratic Anaxagoras, is pointless if one’s true need is for one firm solution. For example, recent gains and issues in machine learning and artificial intelligence (LeCun et al., 2015; Rosa, 2017) make the need for practical solutions (instead of debate) more compelling and interesting. Thus, later essays address this need for useful models of intelligence, rather than continued analysis of prior debates.

REFERENCES

Burton-Hill, C. (2016). The superhero of artificial intelligence. The Guardian, 16 February 2016 [online]. Available at: <https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago> [Accessed 12 March 2017].

Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. New York, NY: Oxford University Press.

Chalmers, D. J. (2006). Strong and weak emergence, in Clayton, P., & Davies, P. C. W. (eds.) The re-emergence of emergence: The emergentist hypothesis from science to religion. Oxford, England: Oxford University Press.

Chalmers, D. J. (2010). The character of consciousness. Oxford, England: Oxford University Press.

Deacon, T. (2011). Incomplete nature: How mind emerged from matter. New York, NY: W. W. Norton.

Dennett, D. C. (1991). Consciousness explained. Boston, MA: Little, Brown and Co.

Frigg, R., & Hartmann, S. (2006). Fiction and scientific representation, in The Stanford Encyclopedia of Philosophy [online]. Stanford, CA: Stanford University. Available at: <http://plato.stanford.edu/entries/models-science/> [Accessed 1 January 2011].

Ford, D. (2007). The search for meaning: A short history. Berkeley, CA: University of California Press.

Greenfield, S. (2009). The neuroscience of consciousness, Towards a Science of Consciousness Conference, 10–14 June 2009, Hong Kong: Hong Kong Polytechnic University [online]. Available at: <http://thesciencenetwork.org/programs/raw-science/the-neuroscientific-basis-of-consciousness> [Accessed 1 January 2011].


Harnad, S. (1990). The symbol grounding problem. Physica D, 42: 335-346.

Kirk, R. (2012). Zombies, in The Stanford Encyclopedia of Philosophy (Spring 2012 Edition), Edward N. Zalta (ed.) [online]. Stanford, CA: Stanford University. Available at: <http://plato.stanford.edu/archives/spr2012/entries/zombies/> [Accessed 1 March 2012].

Koch, C. (2012). Consciousness: Confessions of a romantic reductionist. Cambridge, MA: MIT Press.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521: 436-444.

McLaughlin, B., & Bennett, K. (2011). Supervenience, in The Stanford Encyclopedia of Philosophy (Winter 2011 Edition), Edward N. Zalta (ed.) [online]. Stanford, CA: Stanford University. Available at: <http://plato.stanford.edu/archives/win2011/entries/supervenience/> [Accessed 30 November 2012].

OnlineMetals (2012). 4130 - Alloy tube round [online]. Seattle, WA: OnlineMetals.com. Available at: <http://www.onlinemetals.com/merchant.cfm?id=250&step=2> [Accessed 30 November 2012].

Rosa, M. (2017). AI Roadmap Institute [online]. Available at: <https://www.roadmapinstitute.org> [Accessed 12 March 2017].

Searle, J. R. (1998). Mind, language, and society: Philosophy in the real world. New York, NY: Basic Books.

Shuttleworth, M. (2008). Falsifiability [online]. Explorable.com. Available at: <http://explorable.com/falsifiability> [Accessed 10 February 2013].

Wilson, E. (2009). The four great books of Darwin, in National Academy of Sciences – Sackler Colloquium: In the Light of Evolution IV [online]. Available at: <http://sackler.nasmediaonline.org/2009/evo_iv/eo_wilson/eo_wilson.html> [Accessed 1 January 2011].

