On the Relation of Computing to the World


William J. Rapaport

Department of Computer Science and Engineering, Department of Philosophy, Department of Linguistics, and Center for Cognitive Science

University at Buffalo, The State University of New York, Buffalo, NY 14260-2500

rapaport@buffalo.edu

http://www.cse.buffalo.edu/∼rapaport/

April 21, 2015

Abstract

I survey a common theme that pervades the philosophy of computer science (and philosophy more generally): the relation of computing to the world. Are algorithms merely certain procedures entirely characterizable in an "indigenous", "internal", "intrinsic", "local", "narrow", "syntactic" (more generally: "intra-system") purely Turing-machine language? Or must they interact with the real world, with a purpose that is expressible only in a language with an "external", "extrinsic", "global", "wide", "inherited" (more generally: "extra-" or "inter-system") semantics?


1 Preface

If you begin with Computer Science, you will end with Philosophy.1

I was simultaneously surprised and deeply honored to receive the 2015 Covey Award from the International Association for Computing and Philosophy.2 The honor is due in part to linking me to the illustrious predecessors who have received this award, but also to its having been named for Preston Covey,3 whom I knew and who inspired me as I began my twin journeys in philosophy and computing.

1.1 From Philosophy to Computer Science, and Back Again

Contrary to the motto above, I began with philosophy, found my way to computer science, and have returned to a mixture of the two. Inspired by Douglas Hofstadter's review [Hofstadter, 1980] of Aaron Sloman's The Computer Revolution in Philosophy [Sloman, 1978], which quoted Sloman to the effect that a philosopher of mind who knew no AI was like a philosopher of physics who knew no quantum mechanics,4 my philosophical interests in philosophy of mind led me to study AI at SUNY Buffalo with Stuart C. Shapiro.5 This eventually led to a faculty appointment in computer science at Buffalo. (Along the way, my philosophy colleagues and I at SUNY Fredonia published one of the first introductory logic textbooks to use a computational approach [Schagrin et al., 1985].)

At Buffalo, I was amazed to discover that my relatively arcane philosophy dissertation on Alexius Meinong was directly relevant to Shapiro's work in AI, providing an intensional semantics for his SNePS semantic-network processing system (see, e.g., [Shapiro and Rapaport, 1987], [Shapiro and Rapaport, 1991]).6 And then I realized that the discovery of quasi-indexicals ('he himself', 'she herself', etc.; [Castañeda, 1966]) by my dissertation advisor, Hector-Neri Castañeda,

1 "Clicking on the first link in the main text of a Wikipedia article, and then repeating the process for subsequent articles, usually eventually gets you to the Philosophy article. As of May 26, 2011, 94.52% of all articles in Wikipedia lead eventually to the article Philosophy" (http://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy). If you begin with "Computer Science", you will end with "Philosophy" (in 12 links).

2 http://www.iacap.org/awards/

3 http://en.wikipedia.org/wiki/Covey_Award

4 "I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory" [Sloman, 1978, p. 5].

5 http://www.cse.buffalo.edu/∼shapiro/

6 I presented some of this work at CAP 1987.


could repair a "bug" in a knowledge-representation theory that Shapiro had developed with another convert to computer science (from psychology), Anthony S. Maida [Maida and Shapiro, 1982]. (This work was itself debugged with the help of yet another convert (from English), my doctoral student Janyce M. Wiebe [Rapaport et al., 1997].)

My work with Shapiro and our SNePS Research Group at Buffalo enabled me to rebut my Covey Award predecessor John R. Searle's Chinese-Room Argument [Searle, 1980] using a philosophical theory of what I call "syntactic semantics" [Rapaport, 1986b], [Rapaport, 1988], [Rapaport, 1995], [Rapaport, 2012].7 And both of these projects, as well as one of my early Meinong papers [Rapaport, 1981], led me, together with another doctoral student (Karen Ehrlich) and (later) a colleague from Buffalo's Department of Learning and Instruction (Michael W. Kibby), to develop a computational and pedagogical theory of vocabulary acquisition from context [Rapaport and Kibby, 2007], [Rapaport and Kibby, 2014].8

1.2 The Philosophy of Computer Science

All of this inspired me to create and teach a course on the philosophy of computer science [Rapaport, 2005b]9 and, now in retirement, to write up my lecture notes as a textbook [Rapaport, 2015].

The course and the text begin with a single question: What is computer science?10 To answer this, we need to consider a series of questions, each of which leads to another:

• Is computer science a science? (And what is science?) Or is it a branch of engineering? (What is engineering?) (Or is it a combination of these? Or perhaps something completely different, new, sui generis?)

• If it is a science, what is it a science of? Of computers? (In that case, what is a computer?) Or of computation? That question leads to many others:

• What is computation? What is an algorithm? Do algorithms differ from procedures? From recipes? (And what are they?) What is the (Church-Turing) Computability Thesis?11 What is hypercomputation? What is a computer program? This question also gives rise to a host of others:

7 I presented some of this work at IACAP 2009 and NACAP 2010.

8 I presented some of this work at NACAP 2006.

9 Presented at NACAP 2006 in my Herbert A. Simon Keynote Address, http://www.hass.rpi.edu/streaming/conferences/cap2006/nacp_8_11_2006_9_1010.asx

10 This is the first question, but, because there are two intended audiences—philosophy students and computer-science students—I actually begin with a zeroth question: What is philosophy?

11 I prefer this name over "Church's Thesis", "Turing's Thesis", or the "Church-Turing Thesis" for reasons given in [Soare, 2009, §§3.5, 12].



• What is an implementation? What is the relation of a program to that which it models or simulates? (For that matter, what is simulation? And can programs be considered to be (scientific) theories?) What is software, and how does it relate to hardware? (And can, or should, one or both of those be copyrighted or patented?) Can computer programs be (logically) verified?

• There are, of course, issues in the philosophy of AI (What is AI? What is the relation of computation to cognition? Can computers think? What are the Turing Test and the Chinese-Room Argument?).

• Finally, there are issues in computer ethics, but I only touch on two that I think are not widely dealt with in the already voluminous computer-ethics literature: Should we trust decisions made by computers? Should we build "intelligent" computers?

There are many issues that I don't deal with: the nature of information, the role of the Internet in society, the role of computing in education, and so on. But I have to stop somewhere!

My goal in the book is not to be comprehensive, but to provide background on some of the major issues and a guide to some of the major papers, and to raise questions for readers to think about (together with a guide to how to think about them—the text contains a brief introduction to critical thinking and logical evaluation of arguments). For a philosophy textbook, raising questions is more important than answering them. My goal is to give readers the opportunity and the means to join a long-standing conversation and to devise their own answers to some of these questions.

2 A Common Thread: Computing and the World

In the course of writing the book, I have noticed a theme that pervades its topics. In line with my goals for the book, I have not yet committed myself to a position; I am still asking questions and exploring. In this essay, I want to share those questions and explorations with you.

The common thread that runs through most, if not all, of these topics is the relation of computing to the world:

Is computing about the world? Is it "external", "global", "wide", or "semantic"?


Or is it about descriptions of the world? Is it, instead, "internal", "local", "narrow", or "syntactic"?

And I will quickly agree that it might be both! In that case, the question is how the answers to these questions are related.

This theme should be familiar to most of you; I am not announcing some newly discovered philosophical puzzle. But it isn't necessarily familiar or obvious to my book's intended audience, and I do want to recommend it as a topic worth pondering and worth discussing with students.

In this section, we'll survey these issues as they appear in some of the philosophy-of-computer-science questions of §1.2. In subsequent sections, we'll go into more detail. But I only promise to raise questions, not to answer them (in the course and the text: to challenge my students' thinking, not to tell them what to think).

Leslie Lamport recently said (quoting cartoonist Richard Guindon), "'Writing is nature's way of letting you know how sloppy your thinking is.' We think in order to understand what we are doing. If we understand something, we can explain it clearly in writing. If we have not explained it in writing, then we do not know if we really understand it" [Lamport, 2015, 38].12

2.1 Some Thought Experiments

Castañeda used to say that philosophizing must begin with data. So let's begin with some data in the form of real and imagined computer programs.

2.1.1 Rey's and Fodor's Chess and War Programs

Jerry Fodor, taking up a suggestion by Georges Rey, asks us to consider a computer that simulates the Six Day War and a computer that simulates (or actually plays?) a game of chess, but which are (are?) such that "the internal career of a machine running one program would be identical, step by step, to that of a machine running the other" [Fodor, 1978, p. 232].

A real example of the same kind is "a method for analyzing x-ray diffraction data that, with a few modifications, also solves Sudoku puzzles".13 Or consider a computer version of the murder-mystery game Clue that exclusively uses the Resolution rule of inference, and so could be a general-purpose propositional theorem prover instead.14

12 As E. M. Forster is alleged to have said, "How can I know what I think till I see what I say?" https://rjheeks.wordpress.com/2011/04/13/discovery-writing-and-the-so-called-forster-quote/

13 Cited in the biographical sketch for a book review by Veit Elser [Elser, 2012].

14 Thanks to Robin Hill, personal communication, for this example.


Similar examples abound, notably in applications of mathematics to science, and these can be suitably "computationalized" by imagining computer programs for each. For example,

Nicolaas de Bruijn once told me roughly the following anecdote: Some chemists were talking about a certain molecular structure, expressing difficulty in understanding it. De Bruijn, overhearing them, thought they were talking about mathematical lattice theory, since everything they said could be—and was—interpreted by him as being about the mathematical, rather than the chemical, domain. He told them the solution of their problem in terms of lattice theory. They, of course, understood it in terms of chemistry. Were de Bruijn and the chemists talking about the same thing? [Rapaport, 1995, §2.5.1, p. 63].

Other examples, concerning, e.g., the applicability of group theory to physics, are discussed in [Frenkel, 2013]. And [Halmos, 1973, p. 384] points out that once aspects of Hilbert spaces are seen to be structurally identical to aspects of quantum-mechanical systems, "the difference between a quantum physicist and a mathematical operator-theorist becomes one of language and emphasis only". Clearly, Eugene Wigner's classic paper [Wigner, 1960] is relevant here. The issues here are also akin to, if not the same as, those that motivate structuralism in the philosophy of mathematics (we'll return briefly to this in §5.2).

In these examples, do we have one algorithm, or two?15

2.1.2 Cleland's Recipe for Hollandaise Sauce

Carol Cleland offers an example of a recipe for hollandaise sauce [Cleland, 1993], [Cleland, 2002]. Let's suppose that we have an algorithm (a recipe) that tells us to mix eggs and oil, and that outputs hollandaise sauce.16 Suppose that, on Earth, the result of mixing the egg and oil is an emulsion that is, in fact, hollandaise sauce. And let us suppose that, on the Moon, mixing eggs and oil does not result in an emulsion, so that no hollandaise sauce is output (instead, the output is a messy mixture of eggs and oil).

Can a Turing machine make hollandaise sauce? Is making hollandaise sauce computable?

15 Compare this remark: "Recovering motives and intentions is a principal job of the historian. For without some attribution of mental attitudes, actions cannot be characterized and decisions assessed. The same overt behavior, after all, might be described as 'mailing a letter' or 'fomenting a revolution.'" [Richards, 2009, 415].

16 Calling a recipe an "algorithm", despite its ubiquity in introductory presentations of computer science, is controversial; see [Preston, 2013, Ch. 1] for some useful discussion of the non-algorithmic, improvisational nature of recipes.


2.1.3 A Blocks-World Robot

Consider a blocks-world computer program that instructs a robot how to pick up blocks and move them onto or off of other blocks [Winston, 1977]. I once saw a live demo of such a program. Unfortunately, the robot failed to pick up one of the blocks, because it was not correctly placed, yet the program continued to execute "perfectly" even though the output was not what was intended [Rapaport, 1995, §2.5.1].

Did the program behave as intended?

2.1.4 A GCD Program

Michael Rescorla [Rescorla, 2013, §4] offers an example that is reminiscent of Cleland's, but less "physical". Here is a Scheme program for computing the greatest common divisor (GCD) of two numbers:

(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))

Implement this program on two computers, one (M10) using base-10 notation and one (M13) using base-13 notation. Rescorla argues that only M10 executes the Scheme program for computing GCDs, even though, in a "narrow" sense, both computers are executing the "same" program. When the numerals '115' and '20' are input to M10, it outputs the numeral '5'; "it thereby calculates the greatest common divisor of the corresponding numbers" [Rescorla, 2013, p. 688]. But the numbers expressed in base 13 by '115' and '20' are 187₁₀ and 26₁₀, respectively, and their GCD is 1₁₀, not 5₁₀. So, in a "wide" sense, the two machines are doing "different things".
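
As a quick check of the arithmetic, here is a minimal sketch in Common Lisp (not Rescorla's own code; Common Lisp is used because its parse-integer accepts arbitrary radices, and the paper returns to that dialect in §3.4):

;; The same three-character numerals, read under two different radices,
;; denote different numbers, and those numbers have different GCDs.
(gcd (parse-integer "115" :radix 10)
     (parse-integer "20"  :radix 10))   ; => 5
(gcd (parse-integer "115" :radix 13)    ; "115" in base 13 is 1*169 + 1*13 + 5 = 187
     (parse-integer "20"  :radix 13))   ; "20" in base 13 is 2*13 + 0 = 26; (gcd 187 26) => 1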

Are these machines doing different things?

2.1.5 A Spreadsheet

I vividly remember the first semester that I taught a "Great Ideas in Computer Science" course aimed at computer-phobic students. We were going to teach the students how to use a spreadsheet program, something that I had never used! So, with respect to this, I was as naive as any of my students. My TA, who had used spreadsheets before, gave me something like the following instructions:


enter a number in cell_1;
enter a number in cell_2;
enter '= click on cell_1 click on cell_2' in cell_3

I had no idea what I was doing. I was blindly following her instructions and had no idea that I was adding two integers. Once she told me that that was what I was doing, my initial reaction was "Why didn't you tell me that before we began?".

When I entered those data into the spreadsheet, was I adding two numbers?

2.2 What Is Computer Science?

Science, no matter how conceived, is generally agreed to be a way of understanding the world. So, if computer science is a science—and, of course, just because it is called 'computer science' does not imply that it is a science!—then it should be a way of understanding the world computationally.

Engineering, no matter how conceived, is generally agreed to be a way of changing the world (preferably by improving it).17 So, if computer science is an engineering discipline, then it should be a way of changing the world (for the better?)—by implementing algorithms in computer programs that can have physical effects.

Computer science tries to do both: to understand the world computationally, and to change the world by building computational artifacts. In introductory classes, I offer the following definition:

Computer science is the scientific study18 of:

• what can be computed (narrowly, which mathematical functions are computable; widely, which real-world tasks are automatable [Forsythe, 1968]),

• how to compute such functions or tasks (how to represent the necessary information and how to construct appropriate algorithms),

• how to compute them efficiently,

17 "[S]cience tries to understand the world, whereas engineering tries to change it" [Staples, 2015, §1], paraphrasing Marx: "The philosophers have only interpreted the world, in various ways; the point is to change it" [Marx, 1845, Thesis 11]. Was Marx proposing a discipline of "philosophical engineering"?

18 Using the adjective 'scientific' instead of the noun 'science' neatly puts the science-vs.-engineering dispute in the background. After all, surely engineering is "scientific", even if it isn't a "science". And, whether or not computer science is a "science", it is surely a systematic, scientific field of study.


• and how to engineer computers that can implement those computations in the real world.

And here we glimpse the first strand of our thread: Is computation concerned with (a) the internal workings of a computer (both abstractly in terms of the theory of computation—e.g., the way in which a Turing machine works—as well as more concretely in terms of the internal physical workings of a physical computer)? Or (but not necessarily exclusively "or"!) with (b) how those internal workings can reach out to the world in which they are embedded?

And, here, it is also important to point out similarities with a classic issue in the philosophy of mind: How does the mind (or, more materialistically, the brain) reach out to "harpoon" the world?19

2.3 What Is a Computer?

Computer science might be the study of computers or of computation. If it is the science or even the engineering of computers, then we need to ask what a computer is. Is a computer an abstract, mathematical entity, in particular, a Turing machine (or a universal Turing machine)? (Or, for that matter, anything logically equivalent to a Turing machine, such as a λ-"calculator" or a recursive-function "machine"?) Or is it any physical, real-world object that implements a (universal) Turing machine? (Or both, of course.) If it is physical, which things in the world are computers (besides the obvious suspects, such as Macs, PCs, iPhones, etc.)? Notably, is the brain a computer? Is the solar system a computer (computing Kepler's laws)? Is the universe a computer?20

Here, we see another strand of our thread: Where should we look for an answer to what a computer is? Should we look narrowly to mathematics, or more widely to the real world? (And I won't even mention the related issue of mathematical Platonism, which might respond that looking "narrowly" to math is, indeed, looking even more widely than the merely real world.)

2.4 What Is Computation?

When we ask what computation is, things get more complex. Is computation "narrow", focusing only on the operations of a Turing machine (print, move) or on basic recursive functions (successor, predecessor, projection)? Or "wide", involving, say,

19 See, e.g., [Castañeda, 1989, p. 114, "The Harpooning Model"].

20 On the brain, see, e.g., [Searle, 1990]; on the solar system, see, e.g., [Copeland, 1996, §2], [Perruchet and Vinter, 2002, §1.3.4], [Shagrir, 2006, p. 394]; on the universe, see, e.g., [Weinberg, 2002], [Wolfram, 2002], [Lloyd and Ng, 2004].


chess pieces and a chessboard (for a chess program) or soldiers and a battlefield (for a wargame simulator)? Is it independent of the world, or is it world-involving?

Are algorithms purely logical? Or are they "intentional" [Hill, 2015] and "teleological" [Anderson, 2015]? Which of these two forms do they take?:

Do P

(where 'P' is either a primitive computation, or a set of computations recursively structured by sequence, selection, and repetition, i.e., a "procedure").

Or

In order to accomplish goal G, do P

If the former, then computation is "narrow"; if the latter, then "wide". (We'll return to this in §4.)

What is a procedure? Are recipes procedures? Is "Make hollandaise sauce" a high-level procedure call? If so, then computation is wide. (We'll return to this in §4.4.)

Is making hollandaise sauce Turing-machine computable? Or is it a task that goes beyond Turing machines? How does interactive (or oracle) computation relate to Turing-machine computation? Turing-machine computation seems to be "narrow"; the others, "wide". (We'll return to this in §6.)

2.5 What Is a Computer Program?

Is an algorithm an implementation of a function, a computer program an implementation of an algorithm, and a process (i.e., a program being executed on a computer) an implementation of a program? Implementation is a relation between something more or less abstract and something else that is more or less concrete (at least, that is less abstract). It is the central relation between abstraction and reality (as well as where science meets engineering).

Elsewhere, I have argued that implementation is most usefully understood as (external) semantic interpretation. More precisely, I is an implementation, in medium M, of "Abstraction" A iff I is a semantic interpretation or model of A, where A is some syntactic domain and M is the semantic domain [Rapaport, 1999, 128], [Rapaport, 2005a]. Typically (but not necessarily), I is a real-world system in some physical medium M, and A is an abstract or formal system (but both I and A could be abstract). The theme of the relation of computing to the real world is obviously related to this issue.

It has been claimed that (at least some) computer programs are theories [Simon and Newell, 1962, p. 97], [Johnson-Laird, 1981, pp. 185–186],


[Pylyshyn, 1984, p. 76]. (For the contrasting view, see [Moor, 1978, §4], [Thagard, 1984].) How do theories relate to the world? Do computer programs simulate the world? (We'll return to this a little bit in §7.)

Are computer programs software or hardware? Here we have a computational version of the mind-body problem. And it has legal ramifications in terms of what can be copyrighted and what can be patented.

Can computer programs be verified? This question concerns, at least in part, the correlation of a syntactic description (of the world) with a domain of semantic interpretation (i.e., the world being described). (We'll return to this in §7.)

It is time to delve into some of these.

3 Inputs, Turing Machines, and Outputs

Any machine is a prisoner of its input and output domains. [Newell, 1980, 148]

Let's begin at the beginning, with Turing machines. The tape of a Turing machine records symbols in its "cells", usually '0' or '1'. Is the tape the input-output device of the Turing machine? Or is it the machine's internal memory device?

Given a Turing machine for computing a certain mathematical function, it is certainly true that the function's inputs will be inscribed on the tape at the beginning of the computation, and the outputs will be printed onto the tape by the time that the computation halts. Moreover, the inscriptions on the tape will be used and modified by the machine during the computation, in the same way that a physical computer uses its internal memory for storing intermediate results of a computation. So it certainly looks like the answer to our questions is: both.

But, although Turing's a-machines were designed to simulate human computers,21 Turing doesn't talk about the humans who would use them. A Turing machine doesn't accept user-supplied input from the external world! It begins with all data pre-stored on its tape and then simply does its own thing, computing the output of a function and leaving the result on the tape. Turing machines don't "tell" anyone in the external world what the answers are, though the answers are there for anyone to read, because the "internal memory" of the machine is visible to the external world. Of course, a user would have to be able to interpret the symbols on the tape, and thereon hangs a tale.

21 That is, humans who compute. In some recent writings (see §4.2), they are called 'computors' [Sieg, 1994], so as to be able to use a word that sounds like 'computer' without the 21st-century implication that it is something like a Mac or a PC. I prefer to call them 'clerks'.


The question I want to look at here is whether the symbols on the tape are really inputs and outputs in the sense of coming from, and being reported to, the external world.

Are inputs and outputs an essential part of an algorithm? After all, the input-output interface "merely" connects the algorithm with the world. It may seem outrageous to deny that they are essential, but it's been done!

3.1 Are Inputs Needed?

It's outrageous, of course, because algorithms are supposed to be ways of computing mathematical functions, and mathematical functions, by definition, have inputs and outputs. They are, after all, certain sets of ordered pairs of inputs and outputs, and you can't very well have an ordered pair that is missing one or both of those.

A. A. Markov's informal characterization of algorithm has an "applicability" condition stating that algorithms must have "The possibility of starting from original given objects which can vary within known limits" [Markov, 1954, p. 1]. Those "original given objects" are, presumably, the input.

But Donald Knuth's informal characterization of the notion of algorithm has an "input" condition stating that "An algorithm has zero or more inputs" [Knuth, 1973, p. 5; my italics]! He not only doesn't explain this, but he goes on to characterize outputs as "quantities which have a specified relation to the inputs" [Knuth, 1973, p. 5]. The "relation" would no doubt be the functional relation between inputs and outputs, but, if there is no input, what kind of a relation would the output be in?22 Knuth is not alone in this: Juris Hartmanis and R. E. Stearns's classic paper on computational complexity allows their multi-tape Turing machines to have at most one tape, which is an output-only tape; there need not be any input tapes [Hartmanis and Stearns, 1965, p. 288].

One way to understand this is that some programs merely output information, such as prime-number generators. In cases such as this, although there may not be any explicit input, there is an implicit input (roughly, ordinals: the algorithm outputs the nth prime, without explicitly requesting an n to be input). Another kind of function that might seem not to have any (explicit) inputs is a constant function, but, again, its implicit input could be anything (or anything of a certain type, "varying within known limits", as Markov might have said).
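
As a small illustration of the "implicit input" idea (a minimal sketch, not drawn from any of the cited authors), consider a generator whose only "input" is the number of times it has already been called:

;; A zero-argument prime "generator": each call outputs the next prime.
(defun primep (n)
  (and (> n 1)
       (loop for d from 2 to (isqrt n)
             never (zerop (mod n d)))))

(let ((last-output 1))                 ; internal state standing in for the implicit ordinal
  (defun next-prime ()
    (loop do (incf last-output)
          until (primep last-output))
    last-output))

;; (next-prime) => 2, then 3, then 5, then 7, ...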

So, what constitutes input? Is it simply the initial data for a computation? Or is it information supplied to the computer from the external world (and interpreted or translated into a representation of that information that the computer can "understand" and manipulate)?

22 Is this a relation to a non-existent entity in a Meinongian sense? See [Grossmann, 1974, p. 109], [Rapaport, 1986a, §4.5].


3.2 Are Outputs Needed?

What about output? Markov, Knuth, and Hartmanis & Stearns all require at least one output. Markov, for example, has an "effectiveness" condition (using that term slightly differently from others, such as Church or Knuth) stating that an algorithm must "obtain a certain result".

But B. Jack Copeland and Oren Shagrir suggest that a Turing machine's output might be unreadable [Copeland and Shagrir, 2011, pp. 230–231]. Imagine, not a Turing machine with a tape, but a physical computer that literally prints out its results. Suppose that the printer is broken or that it has run out of ink. Or suppose that the programmer failed to include a 'print' command in the program. The computer's program would compute a result but not be able to tell the user what it is. Consider this algorithm from [Chater and Oaksford, 2013, p. 1172, citing an example from [Pearl, 2000]]:

1. input P

2. multiply P by 2; store in Y

3. add 1 to Y; store in Z

This algorithm does not appear to have an output. The computer has computed 2P + 1 and stored it away in Z for safekeeping, but doesn't tell you its answer. There is an answer, but it isn't output. ("I know something that you don't!"?)

So, what constitutes "output"? Is it simply the final result of a computation? Or is it some kind of translation or interpretation of the final result that is physically output and implemented in the real world? In the former case, wouldn't both of §2.1.4's base-10 and base-13 GCD computers be doing the same thing? A problem would arise only if they told us what results they got, and we—reading those results—would interpret them, possibly incorrectly.
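
Here is a minimal Common Lisp sketch of that situation (an illustration only, not Chater & Oaksford's or Pearl's code): the value 2P + 1 is computed and stored internally, but nothing is returned or printed:

(defvar *z* nil "Internal storage for the result; never displayed.")

(defun compute-but-do-not-tell (p)
  (let ((y (* 2 p)))     ; step 2: multiply P by 2; store in Y
    (setf *z* (+ y 1))   ; step 3: add 1 to Y; store in Z
    (values)))           ; return no values: there is an answer, but it isn't output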

3.3 When Are Inputs and Outputs Needed?

Here is Gualtiero Piccinini on this topic:

Do computations have to have inputs and outputs? The mathematical resources of computability theory can be used to define 'computations' that lack inputs, outputs, or both. But the computations that are generally relevant for applications are computations with both inputs and outputs. [Piccinini, 2011, p. 741, n. 11; my italics]

Or, as Allen Newell put it:


Machines live in the real world and have only a limited contact with it. Any machine, no matter how universal, that has no ears (so to speak) will not hear; that has no wings, will not fly. [Newell, 1980, 148]23

There you have it: Narrowly conceived, algorithms might not need inputs and outputs. Widely conceived, they do. Any input from the external world would have to be encoded by a user into a language "understandable" by the Turing machine (or else the Turing machine would need to be able to decode such external-world input). And any output from the Turing machine to be reported to the external world (e.g., a user) would have to be encoded by the Turing machine (or decoded by the user). Such codings would, themselves, have to be algorithmic.

In fact, the key to determining which real-world tasks are computable—one of the main questions that computer science is concerned with (§2.2)—is finding coding schemes that allow the sequence of '0's and '1's (i.e., a natural number in binary notation) on a Turing machine's tape to be interpreted as a symbol, a pixel, a sound, etc. A mathematical function on the natural numbers is computable iff it is computable by a Turing machine (according to the Church-Turing Computability Thesis); thus, a real-world problem is computable iff it can be encoded as such a computable mathematical function.
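
A minimal sketch of such a coding scheme (the particular encodings are arbitrary choices made only for illustration): one and the same bit string can be decoded as a number, as a character, or as a pixel intensity:

(let* ((bits "01000001")
       (n (parse-integer bits :radix 2)))   ; the "tape contents", read as a binary numeral
  (list n                                   ; interpreted as a natural number: 65
        (code-char n)                       ; interpreted as a character code (#\A in ASCII-based implementations)
        (/ n 255.0)))                       ; interpreted as an 8-bit gray level: ~0.25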

But it's that wide conception, requiring algorithmic, semantic interpretations of the inputs and outputs, that leads to various debates.

3.4 Must Inputs and Outputs Be Interpreted Alike?

There is another input-output issue, which has not been discussed much in the literature but which is relevant to our theme. [Rescorla, 2007, p. 254] notes that

Different textbooks employ different correlations between Turing-machine syntax and the natural numbers. The following three correlations are among the most popular:24

d1(\overline{n}) = n

d2(\overline{n+1}) = n

d3(\overline{n+1}) = n, as an input.

d3(\overline{n}) = n, as an output.

23 'Universal', as Newell uses it here, means being able to "produce an arbitrary input-output function" [Newell, 1980, 147].

24 [The symbol '\overline{x}' represents a sequence of x strokes, where x is a natural number. –WJR]


A machine that doubles the number of strokes computes f(n) = 2n under d1, g(n) = 2n + 1 under d2, and h(n) = 2n + 2 under d3. Thus, the same Turing machine computes different numerical functions relative to different correlations between symbols and numbers.
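
The quoted point can be made concrete in a few lines of Common Lisp (an illustrative sketch, with strokes modeled as a string of '/' characters rather than as a Turing-machine tape): the doubling operation is fixed, and the three numerical functions appear only under the three correlations:

(defun double-strokes (tape)                      ; the purely syntactic operation
  (concatenate 'string tape tape))

(defun strokes (n)                                ; build a string of n strokes
  (make-string n :initial-element #\/))

(defun f (n)                                      ; d1 for both input and output: f(n) = 2n
  (length (double-strokes (strokes n))))

(defun g (n)                                      ; d2 for both input and output: g(n) = 2n + 1
  (1- (length (double-strokes (strokes (1+ n))))))

(defun h (n)                                      ; d3: input read as under d2, output as under d1: h(n) = 2n + 2
  (length (double-strokes (strokes (1+ n)))))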

Let's focus on interpretations like d3 (we'll look at d1 and d2 in §5.2). This idea of having different input and output interpretations occurs all the time in the real world. (I don't know how often it is considered in the more rarefied atmosphere of computation theory.) For example, machine-translation systems that use an "interlingua" work this way: Chinese input is encoded into an "interlingual" representation language (often thought of as an internal, "meaning"-representation language that encodes the "proposition" expressed by the Chinese input), and English output is generated from that interlingua (re-expressing in English the proposition that was originally expressed in Chinese).25 Cognition (assuming that it is computable!) also works this way: Perceptual encodings into the "language" of the biological neural network of our brain surely differ from motor decodings. (Newell's above-quoted examples of hearing and flying are surely different.)

Consider, again, Rescorla's GCD program. And let us consider, not Scheme, but a Common Lisp version. The Common Lisp version will look identical to the Scheme version (the languages share most of their syntax), but the Common Lisp version has two global variables—*read-base* and *print-base*—that tell the computer how to interpret input and how to display output. By default, *read-base* is set to 10. So the Common Lisp read-procedure sees the three-character sequence '115' (for example); decides that it satisfies the syntax of an integer; converts that sequence of characters to an internal representation of type integer—which is represented internally as a binary numeral implemented as bits or switch-settings—does the same with (say) '20'; and computes their GCD using the algorithm from §2.1.4 on the binary representation. If the physical computer had been an old IBM machine, the computation might have used binary-coded decimal numerals instead, thus computing in base 10. If *read-base* had been set to 13, the input characters would have been interpreted as base-13 numerals, and the very same Common Lisp (or Scheme) code would have correctly computed the GCD of 187₁₀ and 26₁₀. One could either say that the algorithm computes with numbers—not numerals—or with base-2 numerals as a canonical representation of numbers, depending on one's view concerning such things as Platonism or nominalism. And similarly for output: The switch-settings containing the GCD of the input are then output as base-10 or base-13 numerals, as pixels on a screen or ink on paper, depending on the value of such things as *print-base*.

25 My former student Min-Hung Liao used SNePS for this purpose in [Liao, 1998]. For more on the history of interlinguas in computer science, see [Daylight, 2013, §2].


The point, once again, with respect to Rescorla's example, is that a single Common Lisp (or Scheme) algorithm is being executed by both M10 and M13. Those machines are different; they do not "have the same local, intrinsic, physical properties" [Rescorla, 2013, 687], because M10 has *read-base* and *print-base* set to 10, whereas M13 has *read-base* and *print-base* set to 13.26
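
Here is a minimal sketch of that point, using the standard Common Lisp special variables just mentioned (an illustration only; a real program would read from a stream rather than from strings):

(defun gcd-of-numerals (s1 s2 base)
  ;; Read the two numerals under the given radix, then take their GCD.
  (let ((*read-base* base))
    (gcd (read-from-string s1) (read-from-string s2))))

;; (gcd-of-numerals "115" "20" 10) => 5      ; M10's interpretation
;; (gcd-of-numerals "115" "20" 13) => 1      ; M13's interpretation

;; The same internal result can likewise be displayed under different output bases:
;; (let ((*print-base* 13)) (princ-to-string 187)) => "115"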

The aspect of this situation that I want to remind you of is whether the tape is the external input and output device, or is, rather, the machine's internal memory.27 If it is the machine's internal memory, then, in some sense, there is no (visible or user-accessible) input or output (§3). If it is an external input-output device, then the marks on it are for our convenience only. In the former case, the only accurate description of the Turing machine's behavior is syntactically in terms of stroke-appending. In the latter case, we can use that syntactic description, but we can also embellish it with one in terms of our interpretation of what it is doing. (We'll return to this in §5.2.)

4 Are Algorithms Teleological (Intentional)?

Let's begin untangling our thread with the question of whether the proper way to characterize an algorithm must include an "intentional or teleological preface", so to speak.

4.1 What Is an Algorithm?

The history of computation theory is, in part, an attempt to make mathematically precise the informal notion of an algorithm. Turing more or less "won" the competition. (At least, he tied with Church. Gödel, also in the race, placed his bet on Turing [Soare, 2009].) Many informal characterizations of "algorithm" exist (such as Knuth's and Markov's, mentioned in §3.1); they can be summarized as follows [Rapaport, 2012, Appendix, pp. 69–71]:

An algorithm (for executor E) [to accomplish goal G] is:

1. a procedure P, i.e., a finite set (or sequence) of statements (or rules, or instructions), such that each statement S is:

   (a) composed of a finite number of symbols (better: uninterpreted marks) from a finite alphabet

   (b) and unambiguous (for E—i.e.,

      i. E "knows how" to do S,
      ii. E can do S,
      iii. S can be done in a finite amount of time
      iv. and, after doing S, E "knows" what to do next—),

2. P takes a finite amount of time, i.e., halts,

3. [and P ends with G accomplished].28

26 I am indebted to Stuart C. Shapiro, personal communication, for the ideas in this paragraph.

27 [Dresner, 2003] and [Dresner, 2012] discuss this.



4.2 Do Algorithms Need a Purpose?

I think that the notion of an algorithm is best understood with respect to an executor. One machine's algorithm might be another's ungrammatical input [Suber, 1988]. We can probably rephrase the above characterization without reference to E, albeit more awkwardly, hence the parentheses around references to E.

But the present issue is whether the bracketed comments about the task to be accomplished are essential. My former student Robin K. Hill has recently argued in favor of including G, roughly on the grounds that a "prospective user" needs "some understanding of the task in question" over and above the mere instructions [Hill, 2015, §5]. Algorithms, according to Hill, must be expressed in the form "In order to accomplish G, do P", not merely "Do P".

Stephen C. Kleene's informal characterization of "algorithm" [Kleene, 1995, p. 18] begins as follows:

[A] method for answering any one of a given infinite class of questions ... is given by a set of rules or instructions, describing a procedure that works as follows. After the procedure has been described, [then] if we select any question from the class, the procedure will then tell us how to perform successive steps, ...

28 "[N]ote ... that the more one tries to make precise these informal requirements for something to be an algorithm, the more one recapitulates Turing's motivation for the formulation of a Turing machine" [Rapaport, 2012, p. 71]. A few other things to note: The characterization of a procedure as a set of statements (with the parenthetical alternatives of sequence, rules, and instructions) is intended to abstract away from issues about imperative/procedural vs. declarative presentations. Whether the "letters" of the alphabet in which the algorithm is written are considered to be either symbols in the classical sense of mark-plus-semantic-interpretation or else uninterpreted marks (symbols that are not "symbolic of" anything) is another aspect of our common thread. Talk of "knowing how" does not presume that E is a cognitive agent (it might be a CPU), but I do want to distinguish between E's (i) being able in principle (i.e., "knowing how") to execute a statement and (ii) being able in practice to (i.e., "can") execute it. Similarly, "knowing" what to do next does not presume cognition, but merely having a deterministic way to proceed from one statement to the "next". I am using 'know' and its cognates here in the same (not necessarily cognitive) way that AI researchers use it in the phrase 'knowledge base'.


Note that the procedure has a purpose: "answering any one of a given infinite class of questions". And the procedure depends on that class: Given a class of questions, there is a procedure such that, given "any question from the class," the procedure "enable[s] us to recognize that now we have the answer before us and to read it off".

David Marr analyzed information processing into three levels: computational (what a system does), algorithmic (how it does it), and physical (how it is implemented) [Marr, 1982]. I have never liked these terms, preferring 'functional', 'computational', and 'implementational', respectively: Certainly, when one is doing mathematical computation (the kind that Turing was concerned with in [Turing, 1936]), one begins with a mathematical function (i.e., a certain set of ordered pairs), asks for an algorithm to compute it, and then seeks an implementation of it, possibly in a physical system such as a computer or the brain (or perhaps even [Searle, 1982]'s beer cans and levers powered by windmills), but not necessarily (e.g., the functionality of an abstract data type such as a stack can be abstractly implemented using a list).29

In non-mathematical fields (e.g., cognition), the set of ordered pairs of input-output behavior is expressed in problem-specific language;30 the algorithmic level will also be expressed in that language; and the implementation level might be the brain or a computer. A recipe for hollandaise sauce developed in this way needs to say more than just something along the lines of "mix these ingredients in this way"; it must take the external environment into account. (We will return to this in §4.4, and we will see how it can take the external world into account in §5.4.)

Marr's "computational" level is rather murky. Frances Egan takes the mathematical-functional view just outlined.31 On that view, Marr's "computational" level is mathematical.

Barton Anderson [Anderson, 2015, §1], on the other hand, says that Marr's "computational" level

concern[s] the presumed goal or purpose of a mapping,32 that is, the specification of the 'task' that a particular computation 'solved.' Algorithmic-level questions involve specifying how this mapping was achieved computationally, that is, the formal procedure that transforms an input representation into an output representation.

On this view, Marr's "computational" level is teleological. In the formulation

29 This is one reason that I have argued that implementation is semantic interpretation [Rapaport, 1999], [Rapaport, 2005a].

30 Using an "external" or "inherited" semantics; these terms will be explained below.

31 [Egan, 1991, pp. 196–197], [Egan, 1995, p. 185]; cf. [Shagrir and Bechtel, 2015, §2.2].

32 Of a mathematical function?


"To accomplish G, do P", the "To G" preface expresses the teleological aspect of Marr's "computational" level; the "do P" seems to express Marr's "algorithm" level.

According to [Bickle, 2015], Marr was trying to counter the then-prevailing methodology of trying to describe what neurons were doing (a "narrow", internal, implementation-level description) without having a "wide", external, "computational"-level purpose (a "function" in the teleological, not mathematical, sense). Such a teleological description would tell us "why" [Marr, 1982, p. 15, as quoted in [Bickle, 2015]] neurons behave as they do.

Shagrir and William Bechtel suggest that Marr's "computational" level conflates two separate, albeit related, questions: not only "why", but also "what". On this view, Egan is focusing on the "what", whereas Anderson is focusing on the "why" [Shagrir and Bechtel, 2015, §2.2]. We will return to this in a moment.

Certainly, knowing what the goal of an algorithm is makes it easier for a cognitive-agent executor to follow the algorithm and to have a fuller understanding of what s/he is doing. I didn't understand that I was adding when my TA told me to enter certain data into the cells of the spreadsheet. It was only when she told me that that was how I could add two numbers with a spreadsheet that I understood.

Now, (I like to think that) I am a cognitive agent who can come to understand that entering data into a spreadsheet can be a way of adding. A Turing machine that adds or a Mac running Excel is not such a cognitive agent. It does not understand what addition is or that that is what it is doing. And it does not have to. However, an AI program running on a robot that passes the Turing test would be a very different matter; I have argued elsewhere that such an AI program could, would, and should (come to) understand what it was doing.33

The important point is that—despite the fact that understanding what task an algorithm is accomplishing makes it easier to understand the algorithm itself—"blind" following of the algorithm is all that is necessary to accomplish the task. Understanding the task—the goal of the algorithm—is expressed by the intentional/teleological preface. This is akin to dubbing it with a name that is meaningful to the user, as we will discuss in §5.3.

That computation can be "blind" in this way is what [Fodor, 1980] expressed by his "formality condition" and what Daniel C. Dennett has called

Turing's ... strange inversion of reasoning. The Pre-Turing world was one in which computers were people, who had to understand mathematics in order to do their jobs. Turing realised that this was just not necessary: you could take the tasks they performed and squeeze out the last tiny smidgens of understanding, leaving nothing but brute,

33 [Rapaport, 1988], [Rapaport, 2012]. See my former doctoral student Albert Goldfain's work on how to get AI computer systems to understand mathematics [Goldfain, 2006], [Goldfain, 2008].


mechanical actions. IN ORDER TO BE A PERFECT AND BEAUTIFUL COMPUTING MACHINE IT IS NOT REQUISITE TO KNOW WHAT ARITHMETIC IS. [Dennett, 2013, p. 570, caps in original]34

As I read it, the point is that a Turing machine need not "know" that it is adding, but agents who do understand adding can use that machine to add.

Or can they? In order to do so, the machine's inputs and outputs have to be interpreted—understood—by the user as representing the numbers to be added. And that seems to require an appropriate relationship with the external world. It seems to require a "user manual" that tells the user what the algorithm does in the way that Hill prescribes, not in the way that my TA explained what a spreadsheet does. And such a "user manual"—an intention or a purpose for the algorithm—in turn requires an interpretation of the machine's inputs and outputs.

But before pursuing this line of thought, let's take a few more minutes to consider "Turing's inversion", the idea that a Turing machine can be doing something very particular by executing an algorithm without any specification of what that algorithm is "doing" in terms of the external world. Algorithms, on this view, seem not to have to be intentional or teleological, yet they remain algorithms.

Brian Hayes offers two expressions of an algorithm that ants execute [Hayes, 2004]:

teleological version:

To create an ant graveyard, gather all your dead in one place.35

non-teleological version:

1. If you see a dead ant,36 and if you are not carrying one, then pick it up.

2. If you see a dead ant, and if you are carrying one, then put yours down near the other one.

As Hayes notes, the teleological version requires planning and organization skills far beyond those that an ant might have, not to mention conceptual understanding that we might very well be unwilling to ascribe to ants. More to the point,

34 See also the more easily accessible [Dennett, 2009, p. 10061].

35 This is a "fully" teleological version, with a high-level, teleologically formulated execution statement. A "partially" teleological version would simply prefix "To create an ant graveyard" to the following non-teleological version.

36 Note that testing this condition does not require the ant to have a de dicto concept of death; it is sufficient for the ant to sense—either visibly or perhaps chemically—what we would describe, de re, as a dead ant. For a computational theory of the de re/de dicto distinction, see [Wiebe and Rapaport, 1986], [Rapaport et al., 1997].


the ant needs none of that. The teleological description helps us describe and perhaps understand the ant's behavior; it doesn't help the ant.
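
The contrast in Hayes's two versions can be put in a few lines (an illustrative sketch, not Hayes's code; the two boolean parameters stand in for whatever sensing the ant actually does): the procedure mentions only local conditions and actions, and the graveyard appears nowhere but in a comment:

;; Non-teleological ant rule: the purpose ("to create an ant graveyard")
;; lives only in this comment, not in the procedure itself.
(defun ant-step (sees-dead-ant-p carrying-p)
  (cond ((and sees-dead-ant-p (not carrying-p)) :pick-it-up)
        ((and sees-dead-ant-p carrying-p)       :put-yours-down-nearby)
        (t                                      :keep-walking)))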

The same is true in my spreadsheet example. Knowing that I am adding helps me understand what I am doing when I fill the spreadsheet cells with certain values or formulas. But the spreadsheet does its thing without needing that knowledge.

And it is true for Searle in the Chinese Room [Searle, 1980]: Searle-in-the-room might not understand what he is doing, but he is understanding Chinese.37 Was Searle-in-the-room simply told, "Follow the rule book!"? Or was he told, "To understand Chinese, follow the rule book!"? If he was told the former (which seems to be what Searle-the-author had in mind), then, (a) from a narrow, internal, first-person point of view, Searle-in-the-room can truthfully say that he doesn't know what he is doing (in the wide sense). In the narrow sense, he does know that he is following the rule book, just as I didn't know that I was using a spreadsheet to add, even though I knew that I was filling certain cells with certain values. And (b) from the wide, external, third-person point of view, the native-Chinese-speaking interrogator can truthfully tell Searle-in-the-room that he is understanding Chinese. When Searle-in-the-room is told that he has passed a Turing test for understanding Chinese, he can—paraphrasing Molière's bourgeois gentleman—truthfully admit that he was speaking Chinese but didn't know it.38

Here is a nice description of computation that matches the Chinese-Room setup:

Consider again the scenario described by Turing: an idealized human computor39 manipulates symbols inscribed on paper. The computor manipulates these symbols because he [sic] wants to calculate the value some number-theoretic function assumes on some input. The computor starts with a symbolic representation for the input, performs a series of syntactic operations, and arrives at a symbolic representation for the output. This procedure succeeds only when the computor can understand the symbolic representations he manipulates. The computor need not know in advance which number a given symbol represents, but he must be capable, in principle, of

37 Too much has been written on the Chinese-Room Argument to cite here, but [Cole, 1991], my response to Cole in [Rapaport, 1990], [Rapaport, 2000], and [Rapaport, 2006, 390–397] touch on this particular point.

38 "Par ma foi! il y a plus de quarante ans que je dis de la prose sans que j'en susse rien, et je vous suis le plus obligé du monde de m'avoir appris cela." "Upon my word! It has been more than forty years that I have been speaking prose without my knowing anything about it, and I am most obligated to you in the world for having apprised me of that." (my translation) (http://en.wikipedia.org/wiki/Le_Bourgeois_gentilhomme).

39 See explanation of this term in §3.


determining which number the symbol represents. Only then does his syntactic activity constitute a computation of the relevant number-theoretic function. If the computor lacks any potential understanding of the relevant syntactic items, then his activity counts as mere manipulation of syntax, rather than calculation of one number from another. [Rescorla, 2007, pp. 261–262; my boldface, Rescorla's italics]

Without the boldfaced clauses, this is a nice description of the Chinese Room. The difference is that, in the Chinese Room, Searle-in-the-room does not "want to" communicate in Chinese; he doesn't know what he's doing, in that sense of the phrase. Still, he's doing it, according to the interpretation of the native speaker outside the room. I also claim that the boldfaced clauses are irrelevant to the computation itself; that is the lesson of Turing's "strange inversion".

These examples suggest that the user-manual/external-world interpretation is not necessary. Algorithms can be teleological, and their being so can help cognitive agents who execute them to more fully understand what they are doing. But they don't have to be teleological.40

4.3 Can Algorithms Have More than One Purpose?

In addition to being teleological, algorithms seem to be able to be multiply teleological, as in the chess-war example and its kin. That is, there can be algorithms of the form:

To accomplish G1, do P

and algorithms of the form:

To accomplish G2, do P.

40 This touches on another philosophical issue in cognitive science: the role of "embodied cognition". As [Adams, 2010, p. 619] describes it,

On a non-embodied approach, the sensory system informs the cognitive system and the motor system does the cognitive system's bidding. There are causal relations between the systems but the sensory and motor systems are not constitutive of cognition. For embodied views, the relation of the sensori-motor system to cognition is constitutive, not just causal.

Compare this paraphrase: On a non-teleological approach, inputs from the external world inform a Turing machine and outputs to the external world do the Turing machine's bidding. There are causal or semantic relations between the systems but the semantic interpretations of the inputs and outputs are not constitutive of computation. For teleological views, the relation to the semantic interpretations of the inputs and outputs is constitutive.


where G1 ≠ G2, and where G2 does not subsume G1 (or vice versa), although the Ps are the same. In other words, what if doing P accomplishes both G1 and G2? How many algorithms do we have in that case? Two? (One that accomplishes G1, and another that accomplishes G2, counting teleologically, or "widely"?) Or just one? (A single algorithm that does P, counting more narrowly?)

Were de Bruijn and the chemists talking about the same thing? On the teleological (or wide) view, they weren't; on the narrow view, they were. Multiple teleologies are multiple realizations of an algorithm narrowly construed: 'Do P' can be seen as a way to algorithmically implement the higher-level "function" (mathematical or teleological) of accomplishing G1 as well as G2. E.g., executing a particular subroutine in a given program might result in checkmate or winning a battle. Viewing multiple teleologies as multiple realizations (multiple implementations) can also account for hollandaise-sauce failures on the Moon, which could be the result of an "implementation-level detail" [Rapaport, 1999] that is irrelevant to the abstract, underlying computation.
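
Schematically (an illustrative sketch; the "scores" stand in for whatever evaluation the shared procedure actually performs), one narrowly described procedure can sit under two different teleological prefaces:

(defun do-p (options)
  ;; The shared, purely internal computation: return the option with the
  ;; highest numeric score.  Options are (label . score) pairs.
  (reduce (lambda (a b) (if (> (cdr a) (cdr b)) a b)) options))

;; "To win the chess game, do P":
(defun best-chess-move (move-scores) (do-p move-scores))

;; "To win the battle, do P":
(defun best-battle-order (order-scores) (do-p order-scores))

;; Both wrappers execute literally the same instructions; only the
;; interpretation of their inputs and outputs differs.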

4.4 What If G and X Come Apart?

What if "successfully" executing P fails to accomplish goal G? This could happen for external, environmental reasons (hence my use of 'wide', above). Does this mean that G might not be a computable task even though P is?

The blocks-world computer's model of the world was an incomplete, partial model; it assumed that its actions were always successful. I'll have more to say about partial models in §7. For now, the point is that this program lacked feedback from the external world. There was nothing wrong with the environment, as there is in the lunar hollandaise-sauce case; rather, there was incomplete information about the environment.

Rescorla's GCD computers do "different things" by doing the "same thing". The difference is not in how they are doing what they are doing, but in the interpretations that we users of the machines give to their inputs and outputs. Would [Hill, 2015] say that the procedure encoded in that Scheme program was therefore not an algorithm?

What is more central to the notion of "algorithm"?: All of parts 1–3 in our informal characterization in §4 ("To G, do P"), or just parts 1–2, i.e., without the bracketed goals (just "Do P")? Is the algorithm the narrow, non-teleological, "purposeless" (or non-purposed) entity? Or is the algorithm the wide, intentional, teleological (i.e., goal-directed) entity? On the narrow view, the wargame and chess algorithms are just one algorithm, the hollandaise-sauce recipe does work on the Moon (its computer program might be logically verifiable even if it fails to make hollandaise sauce), and the "two" GCD programs are also just one algorithm that


does its thing correctly (but only we base-10 folks can use it to compute GCDs).

On the wide view, the wargame and chess programs are two, distinct algorithms, the hollandaise-sauce recipe fails on the Moon (despite the fact that the program might have been verified—shades of the Fetzer controversy that we will discuss in §7!), and the Scheme program when fed base-13 numerals is doing something wrong (in particular, its "remainder" subroutine is incorrect).41

These examples suggest that the wide, goal-directed nature of algorithms teleologically conceived is due to the interpretation of their input and output. As Shagrir & Bechtel put it, Marr's "algorithmic level ... is directed to the inner working of the mechanism. ... The computational level looks outside, to identifying the function computed and relating it to the environment in which the mechanism operates" [Shagrir and Bechtel, 2015, §2.3].

There is a way to combine these insights: Hill's formulation of the teleological or intentional nature of algorithms had two parts, a teleological "preface" specifying the task to be accomplished, and a statement of the algorithm that accomplishes it. One way to clarify the nature of Marr's "computational" level is to split it into its "why" and its "what" parts. The "why" part is the task to be accomplished. The "what" part can be expressed "computationally" (I would say "functionally") as a mathematical function (possibly, but not necessarily, expressed in "why" terminology), but it can also be expressed algorithmically. Finally, the algorithm can be implemented. So, we can distinguish the following four Marr-like levels of analysis:

"Computational"-What Level:
    Do f(i) = o

"Computational"-Why Level:
    To accomplish G, do f(i) = o

Algorithmic Level:
    To accomplish G, do A_f(i) = o

Implementation Level:
    To accomplish G, do I(A_f)(i) = o

where:

• G is the task to be accomplished or explained, expressed in the language of the external world, so to speak;

41 At least as Rescorla describes it; it does the right thing on the Shapiro-Rapaport interpretation discussed in §3.4.


• f is an input-output function that accomplishes G, expressed either in the same language or perhaps expressed in purely mathematical language;

• A_f is an algorithm that implements f (i.e., it is an algorithm that has the same input-output behavior as f); and

• I is an implementation (perhaps in the brain or on some computer) of A_f.42

[Shagrir and Bechtel, 2015, §4] say that "The what aspect [of the "computational" level] provides a description of the mathematical function that is being computed. The why aspect employs the contextual constraints in order to show how this function matches with the environment." These seem to me to nicely describe the two clauses of what I call the "computational-why" level above.
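
To fix ideas, the four levels can be lined up for the GCD example that runs through this essay (an illustrative instantiation, not part of the Marr or Shagrir-Bechtel discussion):

;; "Computational"-what:  Do f(i) = o, where f is the mathematical GCD function.
;; "Computational"-why:   To find the greatest common measure of two quantities
;;                        (the task G, stated in external-world terms), do f(i) = o.
;; Algorithmic:           To accomplish G, do A_f, e.g., Euclid's algorithm:
(defun euclid (a b)
  (if (zerop b)
      a
      (euclid b (mod a b))))
;; Implementation:        To accomplish G, do I(A_f): the compiled form of the
;;                        definition above, running on some physical machine,
;;                        is one such I.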

And this raises another interesting question:

5 Do We Compute with Symbols or with Meanings?

5.1 Where Does Context Belong?

Analogous (if not identical) problems arise elsewhere: Is "3 is an integer power of 2" true? [Stewart, 2000, p. 98] (cf. [Rescorla, 2013]). Not in base-10 arithmetic. But it is true in the integers modulo 5.43

One can take two attitudes towards situations like this. First, one can say that the truth-values differ because the external context is different in the two cases, in much the same way that 'I like water' is true on Earth (where 'water' refers to H2O) but (say) false on Twin Earth (where 'water' refers to XYZ). Alternatively, one can say that the original proposition was not stated carefully enough: It should have been something like: "3₁₀ is an integer power of 2₁₀" or, perhaps, "In base 10, 3 is an integer power of 2" (with a corresponding pair of propositions for the Z/5 case).44 We should have incorporated the information about the external

42 [Egan, 1995, p. 187, n. 8], citing Colin McGinn, notes that even what I am calling the "computational"-what level can be phrased intentionally as, e.g., "To compute the Laplacean of a Gaussian, do f(i, o)." So perhaps there is a level intermediate between the what- and why-levels, something along these lines: "To accomplish A_f(i, o), do A_f(i, o)", where A is expressed in pure Turing-machine language. Note, too, that both clauses can vary independently: Not only can f implement many different Gs (as in the chess-wargame example), but G can be implemented by many different A_f s.

43 Because (2₁₀)^(1₁₀) = 2₁₀ < 3₁₀ < (2₁₀)^(2₁₀) = 4₁₀. But 2³ = 8 = 3 in Z/5.

44 There are many other options: "The number represented by the numeral '3' in base 10 is an integer power of the number represented by the numeral '2' in base 10". Or "The number canonically representable as {{{∅}}} (etc.)".


context into the proposition. In other words, the "external context" should have been "internalized".45 (We will return to this in §5.4.)

5.2 What Is This Turing Machine Doing?

Here is a question to consider: Are such propositions about numbers? Or are they about numerals? What do Turing machines compute with? For that matter, what do we compute with? This is not the place for us to get into a discussion of nominalism in mathematics, though it is worth pointing out that our common thread seems to lead us there.

Both Christopher Peacocke and Rescorla remind us that

A Turing machine manipulates syntactic entities: strings consisting of strokes and blanks. ... Our main interest is not string-theoretic functions but number-theoretic functions. We want to investigate computable functions from the natural numbers to the natural numbers. To do so, we must correlate strings of strokes with numbers.46 [Rescorla, 2007, p. 253]

Once again, we see that it is necessary to interpret the strokes.

Here is [Peacocke, 1999]'s example: Suppose that we have a Turing machine that outputs a copy of the input appended to itself (thus doubling the number of input strokes): input '/', output '//'; input '//', output '////', and so on. What is our Turing machine doing? Isn't "outputting a copy of the input appended to itself" the most neutral description? After all, that describes exactly what the Turing machine is doing, leaving the interpretation up to the observer. If we had come across that Turing machine in the middle of the desert and were trying to figure out what it does, something like that would be the most reasonable answer. Why someone might want a copy-appending Turing machine is a different matter that probably would require an interpretation of the strokes. But that goes far beyond what the Turing machine is doing.

As we saw in §3.4, Rescorla offered three interpretations of the strokes. Do we really have one machine that does three different things? What it does (in one sense of that phrase) depends on how its input and output are interpreted, that is, on the environment in which it is working. In different environments, it does different things; at least, that's what Cleland said about the hollandaise sauce recipe.

45 This is something that I have argued is necessary for computational cognition; see, e.g., [Rapaport, 2006, pp. 385–387], [Rapaport, 2012, §3].

46 Here, by the way, we have an interesting difference between Turing machines and their logical equivalents in the Computability Thesis, such as the λ-calculus or recursive-function theory, because the latter deal with functions and numbers, not symbols for them.


Piccinini says much the same thing:

In computability theory, symbols are typically marks on paper individuated by their geometrical shape (as opposed to their semantic properties). Symbols and strings of symbols may or may not be assigned an interpretation; if they are interpreted, the same string may be interpreted differently. ... In these computational descriptions, the identity of the computing mechanism does not hinge on how the strings are interpreted. [Piccinini, 2006, §2]

By 'individuated', Piccinini is talking about how one decides whether what appear to be two programs (say, one for a wargame battle and one for a chess match) are, in fact, two distinct programs or really just one program (perhaps being described differently). Here, he suggests that it is not how the inputs and outputs are interpreted (their semantics) that matters, but what the inputs and outputs look like (their syntax). So, for Piccinini, the wargame and chess programs are the same. For Cleland, they would be different. For Piccinini, the hollandaise-sauce program running on the Moon works just as well as the one running on Earth; for Cleland, only the latter does what it is supposed to do.

So, the question "Which Turing machine is this?" has only one answer, given in terms of its syntax ("determined by [its] instructions, not by [its] interpretations" [Piccinini, 2006, §2]). But the question "What does this Turing machine do?" has n + 1 answers: one syntactic answer and n semantic answers (one for each of n different semantic interpretations).

Here is a related issue, showing our thread running through action theory: Given a calculator that I use to add two numbers, how would you describe my behavior? Am I pushing certain buttons in a certain sequence? (A "syntactic", narrow, internal answer: I am "doing P".) Or am I adding two numbers? (A teleological, "semantic", wide, external answer: I am accomplishing G.) Or am I adding two numbers by pushing those buttons in that sequence? (A teleological (etc.) answer, together with a syntactic description of how I am doing it: I am accomplishing G, by doing P.) [Rapaport, 1990], [Rapaport, 1993]. This is the same situation that we saw in the spreadsheet example (and we will see it again when we discuss named subroutines in §5.3).

Clearly, in some sense, all of these answers are correct, merely (?) focusing on different aspects of the situation. But there is a further question: Why (or how) does a Turing machine's printing and moving thus and so, or my pushing certain calculator buttons thus and so, result in adding two numbers? And the answer to that seems to require a semantic interpretation. This is the kind of question that Marr's "computational" level is supposed to respond to.


If I want to know which Turing machine this is, I should look at the internal mechanism (roughly, Dennett's "design" stance [Dennett, 1971]) for the answer [Piccinini, 2006]. But if I'm interested in buying a chess program (rather than a wargame simulator), then I need to look at the external/inherited/wide semantics [Cleland, 1993].

Since we can arbitrarily vary inherited meanings relative to syntactic machinations, inherited meanings do not make a difference to those machinations. They are imposed upon an underlying causal structure. [Rescorla, 2014, p. 181]

On this view, the hollandaise-sauce-making computer does its thing whether it's on Earth or the Moon (whether its output is hollandaise sauce or not). Perhaps its output is some kind of generalized, abstract, hollandaise-sauce type, whose implementations/instantiations/tokens on the Moon are some sort of goop, but whose implementations/instantiations/tokens on Earth are what are normally considered to be (successful) hollandaise sauce. (I have been told that packages of cake mix that are sold in mile-high Denver come with alternative directions).

Here is another nice example:

a loom programmed to weave a certain pattern will weave that pattern regardless of what kinds of thread it is weaving. The properties of the threads make no difference to the pattern being woven. In other words, the weaving process is insensitive to the properties of the input. [Piccinini, 2008, p. 39]

As Piccinini goes on to point out, the output might have different colors depending on the colors of the input threads, but the pattern will remain the same. The pattern is internal; the colors are external, to use the terminology above. If you want to weave an American flag, you had better use red, white, and blue threads in the appropriate ways. But even if you use puce, purple, and plum threads, you will weave an American-flag pattern. Which is more important: the pattern or the colors? That's probably not exactly the right question. Rather, the issue is this: If you want a certain pattern, this program will give it to you; if you want a certain pattern with certain colors, then you need to have the right inputs (you need to use the program in the right environment). (This aspect of our thread reappears in the philosophy of mathematics concerning "structuralism": Is the pattern, or structure, of the natural numbers all that matters? Or does it also matter what the natural numbers in the pattern "really" are? For a survey, see [Horsten, 2015, §4].)


5.3 Syntactic Semantics

'Syntax' is usually used in the narrow sense of the grammar of a language, and 'semantics' is usually understood as the meanings of the morphemes, words, and sentences of a language. Following [Morris, 1938, pp. 6–7], I prefer to understand 'syntax' very broadly to be the study of the properties of, and relations among, the elements of a single set (or formal system), and 'semantics' very broadly to be the study of the relations between any two sets whatsoever (each with its own syntax).47 Syntax is concerned with "intra-system" properties and relations; semantics is concerned with "extra-system" relations (where the "system" in question is the "syntactic" domain), or, viewed "sub specie aeternitatis", it is concerned with "inter-system" relations (i.e., relations between two domains, one of which is taken as the syntactic domain and the other as a semantic domain).

So, one way to answer the questions at the end of §5.2 is by using an external semantic interpretation: These Turing-machine operations or those button presses (considered as being located in a formal, syntactic system of Turing-machine operations or button pressings) can be associated with numbers and arithmetical operations on them (considered as being located in a distinct, Platonic (or at least external) realm of mathematical entities).48 In the formulation "To G, do P", P can be identified syntactically (at the "computational-what" level), but G needs to be identified semantically—and then P can be interpreted semantically in G's terms (at the "computational-why" level).
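A sketch of how such an external ("inherited") interpretation works: the very same string-manipulating routine computes different number-theoretic functions under different mappings from stroke-strings to numbers. The two mappings below are illustrative choices of mine, not Rescorla's own three interpretations:

    def machine(tape: str) -> str:
        # The syntactic ("internal") description: output a copy of the input
        # appended to itself.
        return tape + tape

    # Two of many possible external ("inherited") interpretations of a stroke-string.
    denote_n         = lambda tape: len(tape)      # '///' denotes 3
    denote_n_minus_1 = lambda tape: len(tape) - 1  # '///' denotes 2

    tape_in = "/" * 3
    tape_out = machine(tape_in)

    # Under the first interpretation the machine computes n |-> 2n;
    # under the second, the very same strings compute m |-> 2m + 1.
    print(denote_n(tape_in), denote_n(tape_out))                  # 3 6
    print(denote_n_minus_1(tape_in), denote_n_minus_1(tape_out))  # 2 5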

But there is another way to answer these questions, by using an "internal" kind of semantics, the kind that I have called "syntactic semantics". Syntactic semantics arises when the semantic domain of an external semantic interpretation is "internalized" into the syntactic domain. In that way, the previous semantic relations between the two previously independent domains have become relations within the new unified domain, turning them into syntactic relations.49 Syntactic semantics is akin to (if not identical with) what Rescorla has called "indigenous semantics" [Rescorla, 2012], [Rescorla, 2014]. One possible difference between them is that my version emphasizes the importance of conceptual-role semantics (to be distinguished from, but including, inferential-role semantics) [Rapaport, 2002], whereas Rescorla's version emphasizes causal relations.50

47 So, the grammar of a language—syntax in the narrow sense—is the study of the properties of, and relations among, its words and sentences. And their referential meanings are given by a semantic interpretation relating the linguistic items to concepts or objects. For more details, see [Rapaport, 2012, §3.2].

48 Which realm, incidentally, has its own organization in terms of properties and relations among its entities, i.e., its ontology, which can be thought of as its own "syntax" [Rapaport, 2006, p. 392]. Similarly, the organization of the syntactic realm, in terms of the properties and relations among its entities, can be considered, at least in the mental case, as an "epistemological ontology" [Rapaport, 1986a]. Ontology is syntax (by the definition of 'syntax' given here). Relations between two domains, each with its own syntax (or ontology), are semantics.

49 [Rapaport, 1988], [Rapaport, 1995], [Rapaport, 2006], [Rapaport, 2012]; see also [Kay, 2001].

50 [Egan, 1995, p. 181]'s "structural properties" and [Bickle, 2015, esp. §5]'s description of "causal-mechanistic explanations" in neuroscience may also be "syntactic/indigenous" semantics.

Without going into details (some of which are spelled out in the cited papers), let me note here that one way to give this kind of semantics is in terms of (named) subroutines (which accomplish subtasks of the overall algorithm). We can identify collections of statements in a program that "work together", then package them up, name the package, and thus identify subtasks.

E.g., the following Logo program draws a unit square by moving forward 1 unit, then turning 90 degrees right, and doing that 4 times:

    repeat 4 [forward 1 right 90]

But Logo won't "know" what it means to draw a square unless we tell it that:

    to square
    repeat 4 [forward 1 right 90]
    end

Of course, the Logo program still has no understanding in the way we do of what a square is. It is now capable only of associating that newly defined symbol ('square') with a certain procedure. The symbol's meaning for us is its external semantics; the word's meaning (or "meaning"?) for the Logo program is its internal "syntactic semantics" due to its relationship with the body of that program. Another example is the sequence of instructions "turnleft; turnleft; turnleft", in Karel the Robot [Pattis et al., 1995], which can be packaged up and named "turnright":

    DEFINE-NEW-INSTRUCTION turnright AS
    BEGIN
        turnleft; turnleft; turnleft
    END

Notice here that Karel still can't "turn right" (i.e., 90 degrees clockwise); it can only turn left three times (i.e., 270 degrees counterclockwise).

There is a caveat: Merely naming a subroutine does not automatically endow it with the meaning of that name, as [McDermott, 1980] makes clear. But the idea that connections (whether conceptual, inferential, or causal) can be "packaged" together is a way of providing "syntactic" or "indigenous" semantics. If the name is associated with objects that are external to the program, then we have external/wide/inherited/extra-system semantics. If it is associated with objects internal to the program, then we have internal/narrow/syntactic/indigenous/intra-system semantics. Identifying subroutines is syntactic; naming them leads to semantics: If the name is externally meaningful to a user, because the user can associate the name with other external concepts, then we have semantics in the ordinary sense (subject to the caveat in the previous footnote); if it is internally meaningful to the computer, because the computer can associate the name with other internal names, then we have "syntactic" or "indigenous" semantics.
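In Python, the same move looks like this (a sketch of mine mirroring the Karel example, not code from the paper): the program's "understanding" of the new name is exhausted by the internal association between the name and its body.

    def turnleft():
        print("turning 90 degrees counterclockwise")

    # Package three left turns under a new name, as in the Karel example.
    # For the program, 'turnright' means nothing more than its body;
    # that this amounts to a right turn is the user's external interpretation.
    def turnright():
        turnleft()
        turnleft()
        turnleft()

    turnright()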

5.4 Internalization

Another way to get syntactic semantics is by "internalization". External semantic relations between the elements of two domains (a "syntactic" domain described syntactically and a "semantic" domain also described syntactically (or, if you prefer, "ontologically")) can be turned into internal syntactic relations ("syntactic semantics") by internalizing the semantic domain into the syntactic domain. After all, if you take the union of the syntactic and semantic domains, then all formerly external semantic relations are now internal syntactic ones.

And one way that this happens for us cognitive (and arguably computational) agents is by sensory perception, which is a form of input encoding. For animal brains, perception interprets signals from the external world into a biological neural network. For a computer that accepts input from the external world, the interpretation of external or user input as its internal switch settings (or inscriptions on a Turing-machine tape) constitutes a form of perception. Both are forms of what I am calling "internalization". As a result, the interpretation becomes part of the computer's or the brain's intra-system, syntactic/indigenous semantics. (See, e.g., [Rapaport, 2012] for further discussion and references.)

My colleague Stuart C. Shapiro advocates internalization in the following form:51

Shapiro's Internalization Tactic

Algorithms do take the teleological form, "To accomplish G, do P", but G must include everything that is relevant:

• To make hollandaise sauce on Earth, do P.

• To find the GCD of 2 integers in base-10, do Q.

• To play chess, do R, where R's variables range over chess pieces and a chessboard.

• To simulate a wargame battle, do R, where R's variables range over soldiers and a battlefield.

51 Personal communication. A similar point was made in [Smith, 1985, p. 24]: "as well as modelling the artifact itself, you have to model the relevant part of the world in which it will be embedded."

And the proper location for these teleological clauses is in the preconditions and postconditions of the program. Once they are located there, they can be used in the formal verification of the program, which proceeds by proving that, if the preconditions are satisfied, then the program will accomplish its goal as articulated in the postconditions. This builds the external world (and any attendant external semantics) into the algorithm. As Lamport has said, "There is no easy way to ensure a blueprint stays with a building, but a specification can and should be embedded as a comment within the code it is specifying" [Lamport, 2015, p. 41]. The separability of blueprint from building is akin to the separability of G from P; embedding a specification into code as (at least) a comment is to internalize it as a pre- or postcondition.52
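To make this concrete, here is a minimal sketch of mine (not the paper's) of Shapiro's tactic, with the goal "To find the GCD of 2 integers, do Q" internalized as runtime-checked pre- and postconditions; in genuine formal verification these clauses would be proved rather than checked at run time, and Euclid's algorithm is merely one choice for the narrow "do Q" part.

    def gcd(a: int, b: int) -> int:
        # Precondition (part of the internalized goal G): two positive integers.
        assert isinstance(a, int) and isinstance(b, int) and a > 0 and b > 0

        x, y = a, b
        while y != 0:          # the narrow "do Q" part: Euclid's algorithm
            x, y = y, x % y

        # Postcondition (the rest of G): x divides both inputs ...
        assert a % x == 0 and b % x == 0
        # ... and no larger number divides them both.
        assert all(a % d != 0 or b % d != 0 for d in range(x + 1, min(a, b) + 1))
        return x

    print(gcd(12, 18))   # 6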

As I suggested in §4.1, we can avoid having Cleland's hollandaise-sauce recipe output a messy goop by limiting its execution to one location (Earth, say) without guaranteeing that it will work elsewhere (on the Moon, say). This is no different from a partial mathematical function that is silent about what to do with input from outside its domain, or from an algorithm for adding two integers that specifies no particular behavior for non-numerical input.53 Another way is to use the "Denver cake mix" strategy: The recipe or algorithm should be expressed conditionally: If location = Earth, then do P; if location = Moon, then do Q (where Q might be the output of an error message).

A third way is to write an algorithm (a recipe) that tells us to mix eggs and oil until an emulsion is produced, and that outputs hollandaise sauce. On Earth, an emulsion is indeed produced, and hollandaise sauce is output. But on the Moon, this algorithm goes into an infinite loop; nothing (and, in particular, no hollandaise sauce) is ever output. One problem with this is that the "until" clause ("until an emulsion is produced") is not clearly algorithmic (cf. [Cooper, 2012, p. 78]). How would the computer tell if an emulsion has been produced? This is not a clearly algorithmic, Boolean condition whose truth value can be determined by the computer simply by checking one of its switch settings (that is, a value stored in some variable). It would need sensors to determine what the external world is like. But that is a form of interactive computing, to which we now turn.
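The conditional ("Denver cake mix") strategy and the difficulty with the third strategy can be sketched as follows; the names (location, emulsion_detected) are illustrative stand-ins of mine, not anything from Cleland's recipe.

    import itertools

    # "Denver cake mix" strategy: make the location an explicit condition.
    def hollandaise_conditional(location: str) -> str:
        if location == "Earth":
            return "hollandaise sauce"                       # do P
        if location == "Moon":
            return "error: recipe not guaranteed off Earth"  # do Q: an error message
        raise ValueError("location outside the recipe's stated domain")

    # Third strategy: "mix until an emulsion is produced". The until-test is not
    # an internal Boolean switch setting; deciding it needs a sensor, i.e.,
    # interaction with the external world.
    def hollandaise_until_emulsion(emulsion_detected) -> str:
        for _ in itertools.count():
            if emulsion_detected():          # on the Moon this never becomes true,
                return "hollandaise sauce"   # so nothing is ever output

    print(hollandaise_conditional("Moon"))
    print(hollandaise_until_emulsion(iter([False, False, True]).__next__))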

52 Note the similarity of (a) internalizing external/inherited semantics into internal/syntactic semantics to (b) the Deduction Theorem in logic, which can be thought of as saying that a premise of an argument can be "internalized" as an antecedent of the argument's conclusion: P ⊢ C ⇔ ⊢ (P → C).

53 "Crashing" is a well-defined behavior if the program is silent about illegal input. More "well-behaved" behavior requires some kind of error handling.


6 Interactive (Hyper?)computation

There are many varieties of hypercomputation, i.e., "the computation of functions that cannot be" computed by a Turing machine [Copeland, 2002, p. 461]. The notion is put forth as a challenge to the Computability Thesis. Many varieties of hypercomputation involve such arcana as computers operating in Malament-Hogarth spacetime.54 Here, I want to focus on one form of hypercomputation that is more down to earth. It goes under various names (though whether there is a single it with multiple names, or multiple its, is an issue that I will ignore here): 'interactive computation' [Wegner, 1995, p. 45], 'reactive computation' (Amir Pnueli's term; see [Hoffmann, 2010]), or 'oracle computation' (Turing's term; useful expositions are in [Feferman, 1992], [Davis, 2006], [Soare, 2009], and [Soare, 2013]).

Remember that Turing machines do not really accept input from the external world; the input to a function computed by a Turing machine is pre-stored, already printed on its tape; Turing machines work "offline". Given such a tape, the Turing machine computes (and, hopefully, halts). A student in an introductory programming course who is asked to write an interactive program that takes as input two integers chosen randomly by a user and that produces as output their GCD has not written a Turing-machine program. The student's program begins with a Turing machine that prints a query on its tape and halts; the user then does the equivalent of supplying a new tape with the user's input pre-stored on it; and then a(nother) Turing machine uses that tape to compute a GCD, query another input, and (temporarily) halt.

Each run of the student's program, however, could be considered to be the run of a Turing machine. But the read-loop in which the GCD computation is embedded in that student's program takes it out of the realm of a Turing machine, strictly speaking.

That hardly means that our freshman student has created a hypercomputer that computes something that a Turing machine cannot compute. Such interactive computations, which are at the heart of modern-day computing, were mathematically modeled by Turing using his concept of an oracle. Our freshman's computer program's query to the user to input another pair of integers is nothing but a call to an oracle that provides unpredictable, and possibly uncomputable, values. (Computer users who supply input are oracles!)
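A minimal sketch of the freshman's read-loop, with Python's input() standing in for the oracle query; each user response is an "oracle value" that was never pre-stored on any tape. (This is my illustration, not the student's actual assignment.)

    def gcd(a: int, b: int) -> int:     # the Turing-computable core
        while b != 0:
            a, b = b, a % b
        return a

    def interactive_gcd() -> None:
        # The read-loop: each prompt is, in effect, a query to an oracle (the
        # user), whose answers are not pre-stored on any tape.
        while True:
            line = input("two positive integers (blank line to stop): ")
            if not line.strip():
                break
            a, b = map(int, line.split())
            print("GCD:", gcd(a, b))

    # interactive_gcd()   # uncomment to run; the user then plays the oracle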

Now, many interactive computations can be simulated by Turing machines, simply by supplying all the actual inputs on the tape at the start. The Kleene Substitution Property55 states that data can be stored effectively (i.e., algorithmically) in programs; the data need not be input from the external world. A typical interactive computer might be an ATM at the bank. No one can predict what kind of input will be given to that ATM on any given day; but, at the end of the day, all of the day's inputs are known, and that ATM can be simulated by a Turing machine.

54 http://en.wikipedia.org/wiki/Hypercomputation

55 Also called the Kleene Recursion Theorem [Case, nd].
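By contrast with the read-loop above, once the day's inputs are known, the same core computation can be replayed "offline" from a stored list, which is, in effect, the Kleene-style simulation just described; the particular values below are illustrative.

    def gcd(a: int, b: int) -> int:
        while b != 0:
            a, b = b, a % b
        return a

    # At the end of the day, all of the day's requests are known. Replaying them
    # from a stored list needs no oracle: the data are, in effect, already "on the tape".
    days_inputs = [(12, 18), (100, 35), (81, 54)]   # illustrative values
    for a, b in days_inputs:
        print(f"GCD({a}, {b}) =", gcd(a, b))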

But that is of no help to anyone who wants to use an ATM on a daily basis. Computation in the wild must allow for input from the external world (including oracles).

And that is where our thread re-appears: Computation must interact with the world. A computer without physical transducers that couple it to the environment [Sloman, 2002, §5, #F6, pp. 17–18] would be solipsistic. The transducers allow for perception of the external world (and thereby for interactive computing), and they allow for manipulation of the external world (and thereby for computers—robots, including computational chefs—that can make hollandaise sauce). But the computation and the external-world interaction (the interpretation of the computer's output in the external world) are separable and distinct. And there can, therefore, be slippage between them (leading to such things as blocks-world and hollandaise-sauce failures), multiple interpretations (chess vs. wargame), etc.

7 Program Verification and the Limits of Computation

Let's consider programs that specify physical behaviors a bit further. In a classic and controversial paper, James H. Fetzer [Fetzer, 1988] argued to the effect that, given a computer program, you might be able to logically prove that its (primitive) instruction RINGBELL will be executed, but you cannot logically prove that the physical bell will actually ring (a wire connecting computer and bell might be broken, the bell might not have a clapper, etc.). Similarly, we might be able to logically prove that the hollandaise-sauce program will execute correctly, but not that hollandaise sauce will actually be produced. For Fetzer and Cleland, it's the bells and the sauce that matter in the real world: Computing is about the world.

Donald MacKenzie agrees [MacKenzie, 1992, p. 1066]:

... mathematical reasoning alone can never establish the "correctness" of a program or hardware design in an absolute sense, but only relative to some formal specification of its desired behavior.

Similarly, a formal proof of a theorem only establishes its truth relative to the truth of its axioms, not its "absolute" truth [Rapaport, 1984, p. 613].

MacKenzie continues:

Mathematical argument can establish that a program or design is a correct implementation of that specification, but not that implementation of the specification means a computer system that is "safe", "secure", or whatever. [MacKenzie, 1992, p. 1066]

There are two points to notice here. First, a mathematical argument can establish the correctness of a program relative to its specification, that is, whether the program satisfies the specification. In part, this is the first point, above. But, second, not only does this not necessarily mean that the computer system is safe (or whatever), it also does not mean that the specification is itself "correct". Presumably, a specification is a relatively abstract outline of the solution to a problem. Proving that a computer program is correct relative to—that is, satisfies—the specification does not guarantee that the specification actually solves the problem!

Lamport has recently said much the same thing:

A specification is an abstraction. It should describe the important aspects and omit the unimportant ones. Abstraction is an art that is learned only through practice. ... [A] specification of what a piece of code does should describe everything one needs to know to use the code. It should never be necessary to read the code to find out what it does. [Lamport, 2015, p. 39]

The goal of an algorithm can be expressed in its specification. This is why you wouldn't have to read the code to find out what it does. Of course, if the specification has been internalized into the code, you might be able to. But it's also why you can have a chess program that is also a wargame simulator: They might have different specifications but the same code.
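Here is a small sketch of mine (not from the paper or from MacKenzie) of the second point above: the code below satisfies its stated specification, yet the specification itself fails to capture the intended problem (sorting), because it omits the ordering requirement.

    from collections import Counter
    from typing import List

    def spec_satisfied(inp: List[int], out: List[int]) -> bool:
        # An (intentionally inadequate) specification of "sorting": the output
        # must be a rearrangement of the input. It omits the ordering requirement.
        return Counter(inp) == Counter(out)

    def pseudo_sort(xs: List[int]) -> List[int]:
        return list(xs)     # does nothing -- yet it satisfies the specification

    data = [3, 1, 2]
    result = pseudo_sort(data)
    print(spec_satisfied(data, result))   # True: correct relative to the spec
    print(result == sorted(data))         # False: the spec didn't capture the problem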

... what can be proven correct is not a physical piece of hardware, or program running on a physical machine, but only a mathematical model of that hardware or program. [MacKenzie, 1992, p. 1066]

This is the case for reasons that are related to the Church-Turing Computability Thesis: You can't show that two systems are the same, in some sense, unless you can talk about both systems in the same language. In the case of the Computability Thesis, the problem concerns the informality of the language of algorithms versus the formality of the language of Turing machines. In the present case, the problem concerns the mathematical language of programs versus the non-linguistic, physical nature of the hardware. Only by describing the hardware in a (formal, mathematical) language can a proof of equivalence be attempted. But then we also need a proof that that formal description of the hardware is correct; and that can't be had.

It can't be had, because, to have it, we would need another formal description of the hardware to compare with the formal description that we were trying to verify. And that leads to a Zeno- or Bradley-like infinite regress. (We can come "close, but no cigar".)

Brian Cantwell Smith has articulated this problem most clearly [Smith, 1985]. For him, computing is about a model of the world. According to Smith, to design a computer system to solve a real-world problem, we must do two things:

1. Create a model of the real-world problem.

2. Represent the model in the computer.

The model that we create has no choice but to be "delimited", that is, it must be abstract—it must omit some details of the real-world situation. Abstraction is the opposite of implementation. It is the removal of "irrelevant" implementation details. His point is that computers only deal with their representations of these abstract models of the real world. They are twice removed from reality.56

All models are necessarily "partial", hence abstract. But action is not abstract: You and the computer must act in the complex, real world, and in real time. Yet such real-world action must be based on partial models of the real world and inferences based on incomplete and noisy information (cf. Herbert Simon's notion of "bounded rationality" and the need for "satisficing" [Simon, 1996]). Moreover, there is no guarantee that the models are correct.

Action can help: It can provide feedback to the computer system, so that the system won't be isolated from the real world. (Recall our blocks-world program that didn't "know" that it had dropped a block, but "blindly" continued executing its program to put the block on another.) If it had had some sensory device that would have let it know that it no longer was holding the block that it was supposed to move, and if the program had had some kind of error-handling procedure in it, then it might have worked much better (it might have worked "as intended").

The problem, on Smith's view, is that mathematical model theory only discusses the relation between two descriptions: the model itself (which is a partial description of the world) and a description of the model. It does not discuss the relation between the model and the world; there is an unbridgeable gap. In Kantian fashion, a model is like eyeglasses for the computer, through which it sees the world, and it cannot see the world without those glasses. The model is the world as far as the computer can see. The model is the world as the computer sees it.

Both Smith and Fetzer agree that the program-verification project fails, but for slightly different reasons: For Fetzer (and Cleland), computing is about the world; it is external and contextual. Thus, computer programs can't be verified, because the world may not be conducive to "correct" behavior: A physical part might break; the environment might prevent an otherwise-perfectly-running, "correct" program from accomplishing its task (such as making hollandaise sauce on the Moon using an Earth recipe); etc.

56 "Human fallibility means some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular 'rare' scenario" [Newcombe et al., 2015, p. 67].

For Smith, computing is about a model of the world; it is internal and narrow. Thus, computer programs can't be verified, but for a different reason, namely, the model might not match the world.57 Note that Smith also believes that computers must act in the real world, but it is their abstract narrowness that isolates them from the concrete, real world at the same time that they must act in it.

The debate over whether computation concerns the internal, syntactic manipulation of symbols or the external, semantic interpretation of them is reminiscent of Smith's gap. As [Rescorla, 2007, p. 265] observes, we need a computable theory of the semantic interpretation function, but, as Smith observes, we don't (can't?) have one, for reasons akin to the Computability Thesis problem.

Smith's gap is due, in part, to the fact that specifications are abstractions. How does one know if something that has been omitted from the specification is important or not? This is why "abstraction is an art", as Lamport said, and why there's no guarantee that the model is correct (in the sense that it matches reality).

8 Conclusion

Where do we stand? First, I have not attempted in this overview to resolve these issues. I am still struggling with them, and my goal was to convince you that they are interesting, and perhaps important, issues that are widespread throughout the philosophy of computer science and beyond, to issues in the philosophy of mind, philosophy of language, and the ethics and practical uses of computers. But I think we can see opportunities for some possible resolutions.

We can distinguish between the question of which Turing machine a certain computation is and the question of what goal that computation is trying to accomplish. Both questions are important, and they can have very different answers. Two computations might implement the same Turing machine, but be designed to accomplish different goals.

And we can distinguish between two kinds of semantics: wide/external/extrinsic/inherited and narrow/internal/intrinsic/"syntactic"/indigenous. Both kinds exist, have interesting relationships, and play different, albeit complementary, roles.

57 Perhaps a better way of looking at things is to say that there are two different notions of "verification": an internal and an external one. For related distinctions, see [Tedre and Sutinen, 2008, pp. 163–164].


Algorithms narrowly construed (minus the teleological preface) are what is studied in the mathematical theory of computation. To decide whether a task is computable, we need to find an algorithm that can accomplish it. Thus, we have two separate things: an algorithm (narrowly construed, if you prefer) and a task. Some algorithms can accomplish more than one task (depending on how their inputs and outputs are interpreted by external/inherited semantics). Some algorithms may fail, not because of a buggy, narrow algorithm, but because of a problem at the real-world interface. That interface is the (algorithmic) coding of the algorithm's inputs and outputs, typically through a sequence of transducers at the real-world end (cf. [Smith, 1987]). Physical signals from the external world must be transduced (encoded) into the computer's switch-settings (the physical analogues of a Turing machine's '0's and '1's), and the output switch-settings have to be transduced (decoded) into such real-world things as displays on a screen or physical movements by a robot.

At the real-world end of this continuum, we run into Smith's gap. From the narrow algorithm's point of view, so to speak, it might be able to asymptotically approach the real world, in Zeno-like fashion, without closing the gap. But, just as someone trying to cross a room by only going half the remaining distance at each step will eventually cross the room (though not because of doing it that way), so the narrow algorithm implemented in a physical computer will do something in the real world. Whether what it accomplishes was what its programmer intended is another matter. (In the real world, there are no "partial functions"!)

One way to make teleological algorithms more likely to be successful is by Shapiro's strategy: Internalizing the external, teleological aspects into the pre- and postconditions of the (narrow) algorithm, thereby turning the external/inherited semantic interpretation of the algorithm into an internal/indigenous syntactic semantics.

What Smith shows is that the external semantics for an algorithm is never a relation directly with the real world, but only with a model of the real world. That is, the real-world semantics has been internalized. But that internalization is necessarily partial and incomplete.

There are algorithms simpliciter, and there are algorithms for accomplishing a particular task. Alternatively, all algorithms accomplish a particular task, but some tasks are more "interesting" than others. The algorithms whose tasks are not currently of interest may ultimately become interesting when an application is found for them, as in the case of non-Euclidean geometry. Put otherwise, the algorithms that do not accomplish tasks may ultimately be used to accomplish a task.

Algorithms that explicitly accomplish a(n interesting) task can be converted into algorithms whose tasks are not explicit in that manner by internalizing the task into the algorithm narrowly construed. This can be done by internalizing the task, perhaps in the form of pre- and postconditions, perhaps in the form of named subroutines (modules). Both involve syntactic or indigenous semantics.

As promised, I have raised more questions than I have answered. But that's what philosophers are supposed to do!58

58 I am grateful to Robin K. Hill and to Stuart C. Shapiro for discussion and comments on earlier drafts.

References

[Adams, 2010] Adams, F. (2010). Embodied cognition. Phenomenology and Cognitive Science, 9:619–628. DOI 10.1007/s11097-010-9175-x.

[Aizawa, 2010] Aizawa, K. (2010). Computation in cognitive science: It is not all about Turing-equivalent computation. Studies in History and Philosophy of Science, 41(3):227–236. Reply to [Piccinini, 2004].

[Anderson, 2015] Anderson, B. L. (2015). Can computational goals inform theories of vision? Topics in Cognitive Science. DOI: 10.111/tops.12136.

[Bickle, 2015] Bickle, J. (2015). Marr and reductionism. Topics in Cognitive Science. DOI: 10.1111/TOPS.12134.

[Case, nd] Case, J. (nd). Motivating the proof of the Kleene recursion theorem. http://www.eecis.udel.edu/∼case/papers/krt-self-repo.pdf. Accessed 20 March 2015.

[Castañeda, 1966] Castañeda, H.-N. (1966). 'He': A study in the logic of self-consciousness. Ratio, 8:130–157.

[Castaneda, 1989] Castaneda, H.-N. (1989). Direct reference, the semantics of thinking, and guise theory (constructive reflections on David Kaplan's theory of indexical reference). In Almog, J., Perry, J., and Wettstein, H., editors, Themes from Kaplan, pages 105–144. Oxford University Press, New York.

[Chater and Oaksford, 2013] Chater, N. and Oaksford, M. (2013). Programs as causal models: Speculations on mental programs and mental representation. Cognitive Science, 37(6):1171–1191.

[Cleland, 1993] Cleland, C. E. (1993). Is the Church-Turing thesis true? Minds and Machines, 3(3):283–312.

[Cleland, 2002] Cleland, C. E. (2002). On effective procedures. Minds and Machines, 12(2):159–179.

[Cole, 1991] Cole, D. (1991). Artificial intelligence and personal identity. Synthese, 88(3):399–417.

[Cooper, 2012] Cooper, S. B. (2012). Turing's titanic machine? Communications of the ACM, 55(3):74–83. http://cacm.acm.org/magazines/2012/3/146259-turings-titanic-machine/fulltext (accessed 26 February 2014).

[Copeland, 1996] Copeland, B. J. (1996). What is computation? Synthese, 108:335–359. Preprint (accessed 18 March 2014) at http://www.alanturing.net/turing_archive/pages/pub/what/what.pdf.

[Copeland, 2002] Copeland, B. J. (2002). Hypercomputation. Minds and Machines, 12(4):461–502.

[Copeland and Shagrir, 2011] Copeland, B. J. and Shagrir, O. (2011). Do accelerating Turing machines compute the uncomputable? Minds and Machines, 21(2):221–239.

[Davis, 2006] Davis, M. (2006). What is Turing reducibility? Notices of the AMS, 53(10):1218–1219. http://www.ams.org/notices/200610/whatis-davis.pdf (accessed 30 May 2014).

[Daylight, 2013] Daylight, E. G. (2013). Towards a historical notion of "Turing—the father of computer science". http://www.dijkstrascry.com/sites/default/files/papers/Daylightpaper91.pdf (accessed 7 April 2015).

[Dennett, 2009] Dennett, D. (2009). Darwin's 'strange inversion of reasoning'. Proceedings of the National Academy of Science, 106, suppl. 1:10061–10065. http://www.pnas.org/cgi/doi/10.1073/pnas.0904433106. See also [Dennett, 2013].

[Dennett, 2013] Dennett, D. (2013). Turing's 'strange inversion of reasoning'. In Cooper, S. B. and van Leeuwen, J., editors, Alan Turing: His Work and Impact, pages 569–573. Elsevier, Amsterdam. See also [Dennett, 2009].

[Dennett, 1971] Dennett, D. C. (1971). Intentional systems. Journal of Philosophy, 68:87–106. Reprinted in Daniel C. Dennett, Brainstorms (Montgomery, VT: Bradford Books): 3–22.

[Dresner, 2003] Dresner, E. (2003). Effective memory and Turing's model of mind. Journal of Experimental & Theoretical Artificial Intelligence, 15(1):113–123.

[Dresner, 2012] Dresner, E. (2012). Turing, Matthews and Millikan: Effective memory, dispositionalism and pushmepullyou states. International Journal of Philosophical Studies, 20(4):461–472.

[Egan, 1991] Egan, F. (1991). Must psychology be individualistic? Philosophical Review, 100(2):179–203.

[Egan, 1995] Egan, F. (1995). Computation and content. Philosophical Review, 104(2):181–203.

[Elser, 2012] Elser, V. (2012). In a class by itself. American Scientist, 100:418–420. http://www.americanscientist.org/bookshelf/pub/in-a-class-by-itself, http://www.americanscientist.org/authors/detail/veit-elser (both accessed 16 December 2013).

[Feferman, 1992] Feferman, S. (1992). Turing's 'oracle': From absolute to relative computability—and back. In Echeverria, J., Ibarra, A., and Mormann, T., editors, The Space of Mathematics: Philosophical, Epistemological, and Historical Explorations, pages 314–348. Walter de Gruyter, Berlin.

[Fetzer, 1988] Fetzer, J. H. (1988). Program verification: The very idea. Communications of the ACM, 31(9):1048–1063.

[Fodor, 1978] Fodor, J. A. (1978). Tom Swift and his procedural grandmother. Cognition, 6:229–247. Accessed 11 December 2013 from: http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/tomswift.pdf.

[Fodor, 1980] Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3:63–109.

[Forsythe, 1968] Forsythe, G. E. (1968). Computer science and education. Proceedings IFIP 68 Cong., pages 92–106.

[Frenkel, 2013] Frenkel, E. (2013). Love and Math: The Heart of Hidden Reality. Basic Books, New York.

[Goldfain, 2006] Goldfain, A. (2006). Embodied enumeration: Appealing to activities for mathematical explanation. In Beetz, M., Rajan, K., Thielscher, M., and Rusu, R. B., editors, Cognitive Robotics: Papers from the AAAI Workshop (CogRob 2006), Technical Report WS-06-03, pages 69–76. AAAI Press, Menlo Park, CA.

[Goldfain, 2008] Goldfain, A. (2008). A computational theory of early mathematical cognition. PhD dissertation (Buffalo: SUNY Buffalo Department of Computer Science & Engineering), http://www.cse.buffalo.edu/sneps/Bibliography/GoldfainDissFinal.pdf.

[Grossmann, 1974] Grossmann, R. (1974). Meinong. Routledge and Kegan Paul, London.

[Halmos, 1973] Halmos, P. R. (1973). The legend of John von Neumann. American Mathematical Monthly, pages 382–394. http://poncelet.math.nthu.edu.tw/disk5/js/biography/v-n.pdf, http://stepanov.lk.net/mnemo/legende.html (accessed 7 April 2014).

[Hartmanis and Stearns, 1965] Hartmanis, J. and Stearns, R. (1965). On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285–306.

[Hayes, 2004] Hayes, B. (2004). Small-town story. American Scientist, pages 115–119. https://www.americanscientist.org/issues/pub/small-town-story (accessed 14 March 2015).

[Hill, 2015] Hill, R. K. (2015). What an algorithm is. Philosophy and Technology. DOI 10.1007/s13347-014-0184-5.

[Hoffmann, 2010] Hoffmann, L. (2010). Amir Pnueli: Ahead of his time. Communications of the ACM, 53(1):22–23. http://cacm.acm.org/magazines/2010/1/55750-amir-pnueli-ahead-of-his-time/fulltext (accessed 4 March 2015).

[Hofstatder, 1980] Hofstatder, D. R. (1980). Review of [Sloman, 1978]. Bulletin of the American Mathematical Society, 2(2):328–339. http://projecteuclid.org/download/pdf_1/euclid.bams/1183546241.

[Horsten, 2015] Horsten, L. (2015). Philosophy of mathematics. In Zalta, E. N., editor, Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/philosophy-mathematics/#StrNom.

[Johnson-Laird, 1981] Johnson-Laird, P. N. (1981). Mental models in cognitive science. In Norman, D. A., editor, Perspectives on Cognitive Science, chapter 7, pages 147–191. Ablex, Norwood, NJ.

[Kay, 2001] Kay, K. (2001). Machines and the mind. The Harvard Brain, Vol. 8 (Spring). http://www.hcs.harvard.edu/∼hsmbb/BRAIN/vol8-spring2001/ai.htm (accessed 17 April 2015).

[Kleene, 1995] Kleene, S. C. (1995). Turing's analysis of computability, and major applications of it. In Herken, R., editor, The Universal Turing Machine: A Half-Century Survey, Second Edition, pages 15–49. Springer-Verlag, Vienna.

[Knuth, 1973] Knuth, D. E. (1973). The Art of Computer Programming, Second Edition. Addison-Wesley, Reading, MA.

[Lamport, 2015] Lamport, L. (2015). Who builds a house without drawing blueprints? Communications of the ACM, 58(4):38–41. http://cacm.acm.org/magazines/2015/4/184705-who-builds-a-house-without-drawing-blueprints/fulltext (accessed 18 April 2015).

[Liao, 1998] Liao, M.-H. (1998). Chinese to English machine translation using SNePS as an interlingua. http://www.cse.buffalo.edu/sneps/Bibliography/tr9716.pdf. Unpublished doctoral dissertation, Department of Linguistics, SUNY Buffalo.

[Lloyd and Ng, 2004] Lloyd, S. and Ng, Y. J. (2004). Black hole computers. Scientific American, 291(5):52–61.

[MacKenzie, 1992] MacKenzie, D. (1992). Computers, formal proofs, and the law courts. Notices of the American Mathematical Society, 39(9):1066–1069. Also see introduction by Keith Devlin, same issue, pp. 1065–1066.

[Maida and Shapiro, 1982] Maida, A. S. and Shapiro, S. C. (1982). Intensional concepts in propositional semantic networks. Cognitive Science, 6:291–330. Reprinted in Ronald J. Brachman & Hector J. Levesque (eds.), Readings in Knowledge Representation (Los Altos, CA: Morgan Kaufmann, 1985): 169–189.

[Markov, 1954] Markov, A. (1954). Theory of algorithms. Tr. Mat. Inst. Steklov, 42:1–14. Trans. by Edwin Hewitt, in American Mathematical Society Translations, Series 2, Vol. 15 (1960).

[Marr, 1982] Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman, New York.

[Marx, 1845] Marx, K. (1845). Theses on Feuerbach. https://www.marxists.org/archive/marx/works/1845/theses/theses.htm (accessed 14 March 2015).

[McDermott, 1980] McDermott, D. (1980). Artificial intelligence meets natural stupidity. In Haugeland, J., editor, Mind Design: Philosophy, Psychology, Artificial Intelligence, pages 143–160. MIT Press, Cambridge, MA. http://www.inf.ed.ac.uk/teaching/courses/irm/mcdermott.pdf.

[Moor, 1978] Moor, J. H. (1978). Three myths of computer science. British Journal for the Philosophy of Science, 29(3):213–222.

[Moor, 2003] Moor, J. H., editor (2003). The Turing Test: The Elusive Standard of Artificial Intelligence. Kluwer Academic, Dordrecht, The Netherlands.

[Morris, 1938] Morris, C. (1938). Foundations of the Theory of Signs. University of Chicago Press, Chicago.

[Newcombe et al., 2015] Newcombe, C., Rath, T., Zhang, F., Munteanu, B., Brooker, M., and Deardeuff, M. (2015). How Amazon Web Services uses formal methods. Communications of the ACM, 58(4):66–73. http://delivery.acm.org/10.1145/2700000/2699417/p66-newcombe.pdf (accessed 18 April 2015).

[Newell, 1980] Newell, A. (1980). Physical symbol systems. Cognitive Science, 4:135–183. Accessed 15 March 2014 from http://tinyurl.com/Newell1980.

[Pattis et al., 1995] Pattis, R. E., Roberts, J., and Stehlik, M. (1995). Karel the Robot: A Gentle Introduction to the Art of Programming, Second Edition. John Wiley & Sons, New York.

[Peacocke, 1999] Peacocke, C. (1999). Computation as involving content: A response to Egan. Mind & Language, 14(2):195–202.

[Pearl, 2000] Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge, UK.

[Perruchet and Vinter, 2002] Perruchet, P. and Vinter, A. (2002). The self-organizing consciousness. Behavioral and Brain Sciences, 25(3):297–388.

[Piccinini, 2004] Piccinini, G. (2004). The first computational theory of mind and brain: A close look at McCulloch and Pitts's 'Logical calculus of ideas immanent in nervous activity'. Synthese, 141:175–215. For a reply, see [Aizawa, 2010].

[Piccinini, 2006] Piccinini, G. (2006). Computation without representation. Philosophical Studies, 137(2):204–241. Accessed 29 April 2014 from: http://www.umsl.edu/∼piccininig/Computation_without_Representation.pdf.

[Piccinini, 2008] Piccinini, G. (2008). Computers. Pacific Philosophical Quarterly, 89:32–73.

[Piccinini, 2011] Piccinini, G. (2011). The physical Church-Turing thesis: Modest or bold? British Journal for the Philosophy of Science, 62:733–769.

[Preston, 2013] Preston, B. (2013). A Philosophy of Material Culture: Action, Function, and Mind. Routledge, New York.

[Pylyshyn, 1984] Pylyshyn, Z. W. (1984). Computation and Cognition: Towards a Foundation for Cognitive Science. MIT Press, Cambridge, MA. Ch. 3 ("The Relevance of Computation"), pp. 48–86, esp. the section "The Role of Computer Implementation" (pp. 74–78).

[Rapaport, 1981] Rapaport, W. J. (1981). How to make the world fit our language: An essay in Meinongian semantics. Grazer Philosophische Studien, 14:1–21.

[Rapaport, 1984] Rapaport, W. J. (1984). Critical thinking and cognitive development. American Philosophical Association Newsletter on Pre-College Instruction in Philosophy, 1:4–5. Reprinted in Proceedings and Addresses of the American Philosophical Association 57(5) (May 1984): 610–615; http://www.cse.buffalo.edu/∼rapaport/Papers/rapaport84-perryAPA.pdf (accessed 13 January 2015).

[Rapaport, 1986a] Rapaport, W. J. (1985-1986a). Non-existent objects and epistemological ontology. Grazer Philosophische Studien, 25/26:61–95.

[Rapaport, 1986b] Rapaport, W. J. (1986b). Philosophy, artificial intelligence, and the Chinese-room argument. Abacus: The Magazine for the Computer Professional, 3:6–17. Correspondence, Abacus 4 (Winter 1987): 6–7; 4 (Spring): 5–7; http://www.cse.buffalo.edu/∼rapaport/Papers/abacus.pdf.

[Rapaport, 1988] Rapaport, W. J. (1988). Syntactic semantics: Foundations of computational natural-language understanding. In Fetzer, J. H., editor, Aspects of Artificial Intelligence, pages 81–131. Kluwer Academic Publishers, Dordrecht, The Netherlands. Reprinted with numerous errors in Eric Dietrich (ed.) (1994), Thinking Machines and Virtual Persons: Essays on the Intentionality of Machines (San Diego: Academic Press): 225–273.

[Rapaport, 1990] Rapaport, W. J. (1990). Computer processes and virtual persons: Comments on Cole's 'Artificial intelligence and personal identity'. Technical Report 90-13, SUNY Buffalo Department of Computer Science, Buffalo. http://www.cse.buffalo.edu/∼rapaport/Papers/cole.tr.17my90.pdf.

[Rapaport, 1993] Rapaport, W. J. (1993). Because mere calculating isn't thinking: Comments on Hauser's 'Why isn't my pocket calculator a thinking thing?'. Minds and Machines, 3:11–20. Preprint online at http://www.cse.buffalo.edu/∼rapaport/Papers/joint.pdf.

[Rapaport, 1995] Rapaport, W. J. (1995). Understanding understanding: Syntactic semantics and computational cognition. In Tomberlin, J. E., editor, AI, Connectionism, and Philosophical Psychology, pages 49–88. Ridgeview, Atascadero, CA. Philosophical Perspectives, Vol. 9; reprinted in Toribio, Josefa, & Clark, Andy (eds.) (1998), Language and Meaning in Cognitive Science: Cognitive Issues and Semantic Theory, Artificial Intelligence and Cognitive Science: Conceptual Issues, Vol. 4 (New York: Garland): 73–88.

[Rapaport, 1999] Rapaport, W. J. (1999). Implementation is semantic interpretation. The Monist, 82:109–130.

[Rapaport, 2000] Rapaport, W. J. (2000). How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. Journal of Logic, Language, and Information, 9(4):467–490. Reprinted in [Moor, 2003, 161–184].

[Rapaport, 2002] Rapaport, W. J. (2002). Holism, conceptual-role semantics, and syntactic semantics. Minds and Machines, 12(1):3–59.

[Rapaport, 2005a] Rapaport, W. J. (2005a). Implementation is semantic interpretation: Further thoughts. Journal of Experimental and Theoretical Artificial Intelligence, 17(4):385–417.

[Rapaport, 2005b] Rapaport, W. J. (2005b). Philosophy of computer science: An introductory course. Teaching Philosophy, 28(4):319–341. http://www.cse.buffalo.edu/∼rapaport/philcs.html.

[Rapaport, 2006] Rapaport, W. J. (2006). How Helen Keller used syntactic semantics to escape from a Chinese room. Minds and Machines, 16:381–436. See reply to comments, in [Rapaport, 2011].

[Rapaport, 2011] Rapaport, W. J. (2011). Yes, she was! Reply to Ford's 'Helen Keller was never in a Chinese room'. Minds and Machines, 21(1):3–17.

[Rapaport, 2012] Rapaport, W. J. (2012). Semiotic systems, computers, and the mind: How cognition could be computing. International Journal of Signs and Semiotic Systems, 2(1):32–71.

[Rapaport, 2015] Rapaport, W. J. (2015). Philosophy of computer science. Current draft in progress at http://www.cse.buffalo.edu/∼rapaport/Papers/phics.pdf.

[Rapaport and Kibby, 2007] Rapaport, W. J. and Kibby, M. W. (2007). Contextual vocabulary acquisition as computational philosophy and as philosophical computation. Journal of Experimental and Theoretical Artificial Intelligence, 19(1):1–17.

[Rapaport and Kibby, 2014] Rapaport, W. J. and Kibby, M. W. (2014). Contextual vocabulary acquisition: From algorithm to curriculum. In Palma, A., editor, Castaneda and His Guises: Essays on the Work of Hector-Neri Castaneda, pages 107–150. Walter de Gruyter, Berlin.

[Rapaport et al., 1997] Rapaport, W. J., Shapiro, S. C., and Wiebe, J. M. (1997). Quasi-indexicals and knowledge reports. Cognitive Science, 21:63–107.

[Rescorla, 2007] Rescorla, M. (2007). Church's thesis and the conceptual analysis of computability. Notre Dame Journal of Formal Logic, 48(2):253–280. Preprint (accessed 29 April 2014) at http://www.philosophy.ucsb.edu/people/profiles/faculty/cvs/papers/church2.pdf.

[Rescorla, 2012] Rescorla, M. (2012). Are computational transitions sensitive to semantics? Australian Journal of Philosophy, 90(4):703–721. Preprint at http://www.philosophy.ucsb.edu/docs/faculty/papers/formal.pdf (accessed 30 October 2014).

[Rescorla, 2013] Rescorla, M. (2013). Against structuralist theories of computational implementation. British Journal for the Philosophy of Science, 64(4):681–707. Preprint (accessed 31 October 2014) at http://philosophy.ucsb.edu/docs/faculty/papers/against.pdf.

[Rescorla, 2014] Rescorla, M. (2014). The causal relevance of content to computation. Philosophy and Phenomenological Research, 88(1):173–208. Preprint at http://www.philosophy.ucsb.edu/people/profiles/faculty/cvs/papers/causalfinal.pdf (accessed 7 May 2014).

[Richards, 2009] Richards, R. J. (2009). The descent of man. American Scientist, 97(5):415–417. http://www.americanscientist.org/bookshelf/pub/the-descent-of-man (accessed 20 April 2015).

[Schagrin et al., 1985] Schagrin, M. L., Rapaport, W. J., and Dipert, R. R. (1985). Logic: A Computer Approach. McGraw-Hill, New York.

[Searle, 1980] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3:417–457.

[Searle, 1982] Searle, J. R. (1982). The myth of the computer. New York Review of Books, pages 3–6. Cf. correspondence, same journal, 24 June 1982, pp. 56–57.

[Searle, 1990] Searle, J. R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3):21–37.

[Shagrir, 2006] Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153:393–416. Preprint at http://edelstein.huji.ac.il/staff/shagrir/papers/Why_we_view_the_brain_as_a_computer.pdf (accessed 25 March 2014).

[Shagrir and Bechtel, 2015] Shagrir, O. and Bechtel, W. (2015). Marr's computational level and delineating phenomena. Cognitive Science, forthcoming. Preprint (accessed 23 March 2015) at http://philsci-archive.pitt.edu/11224/1/shagrir_and_bechtel.Marr's_Computational_Level_and_Delineating_Phenomena.pdf.

[Shapiro and Rapaport, 1987] Shapiro, S. C. and Rapaport, W. J. (1987). SNePS considered as a fully intensional propositional semantic network. In Cercone, N. and McCalla, G., editors, The Knowledge Frontier: Essays in the Representation of Knowledge, pages 262–315. Springer-Verlag, New York.

[Shapiro and Rapaport, 1991] Shapiro, S. C. and Rapaport, W. J. (1991). Models and minds: Knowledge representation for natural-language competence. In Cummins, R. and Pollock, J., editors, Philosophy and AI: Essays at the Interface, pages 215–259. MIT Press, Cambridge, MA.

[Sieg, 1994] Sieg, W. (1994). Mechanical procedures and mathematical experience. In George, A., editor, Mathematics and Mind, pages 71–117. Oxford University Press, New York. http://repository.cmu.edu/cgi/viewcontent.cgi?article=1248&context=philosophy (accessed 10 December 2013).

[Simon, 1996] Simon, H. A. (1996). Computational theories of cognition. In O'Donohue, W. and Kitchener, R. F., editors, The Philosophy of Psychology, pages 160–172. SAGE Publications, London.

[Simon and Newell, 1962] Simon, H. A. and Newell, A. (1962). Simulation of human thinking. In Greenberger, M., editor, Computers and the World of the Future, pages 94–114. MIT Press, Cambridge, MA.

[Sloman, 1978] Sloman, A. (1978). The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Humanities Press, Atlantic Highlands, NJ. http://www.cs.bham.ac.uk/research/projects/cogaff/crp/.

[Sloman, 2002] Sloman, A. (2002). The irrelevance of Turing machines to AI. In Scheutz, M., editor, Computationalism: New Directions, pages 87–127. MIT Press, Cambridge, MA. http://www.cs.bham.ac.uk/research/projects/cogaff/sloman.turing.irrelevant.pdf (accessed 21 February 2014). Page references are to the online preprint.

[Smith, 1985] Smith, B. C. (1985). Limits of correctness in computers. ACM SIGCAS Computers and Society, 14–15(1–4):18–26. Also published as Technical Report CSLI-85-36 (Stanford, CA: Center for the Study of Language & Information); reprinted in Charles Dunlop & Rob Kling (eds.), Computerization and Controversy (San Diego: Academic Press, 1991): 632–646; reprinted in Timothy R. Colburn, James H. Fetzer, & Terry L. Rankin (eds.), Program Verification: Fundamental Issues in Computer Science (Dordrecht, Holland: Kluwer Academic Publishers, 1993): 275–293.

[Smith, 1987] Smith, B. C. (1987). The correspondence continuum. Technical Report CSLI-87-71, Center for the Study of Language & Information, Stanford, CA.

[Soare, 2009] Soare, R. I. (2009). Turing oracle machines, online computing, and three displacements in computability theory. Annals of Pure and Applied Logic, 160:368–399. Preprint at http://www.people.cs.uchicago.edu/∼soare/History/turing.pdf; published version at http://ac.els-cdn.com/S0168007209000128/1-s2.0-S0168007209000128-main.pdf?_tid=8258a7e2-01ef-11e4-9636-00000aab0f6b&acdnat=1404309072_f745d1632bb6fdd95f711397fda63ee2 (both accessed 2 July 2014). A slightly different version appears as [Soare, 2013].

[Soare, 2013] Soare, R. I. (2013). Interactive computing and relativized computability. In Copeland, B. J., Posy, C. J., and Shagrir, O., editors, Computability: Turing, Gödel, Church, and Beyond, pages 203–260. MIT Press, Cambridge, MA. A slightly different version appeared as [Soare, 2009].

[Staples, 2015] Staples, M. (2015). Critical rationalism and engineering: Methodology. Synthese, 192(1):337–362. Preprint at http://www.nicta.com.au/pub?doc=7747 (accessed 11 February 2015).

[Stewart, 2000] Stewart, I. (2000). Impossibility theorems. Scientific American, pages 98–99.

[Suber, 1988] Suber, P. (1988). What is software? Journal of Speculative Philosophy, 2(2):89–119. http://www.earlham.edu/∼peters/writing/software.htm (accessed 21 May 2012).

[Tedre and Sutinen, 2008] Tedre, M. and Sutinen, E. (2008). Three traditions of computing: What educators should know. Computer Science Education, 18(3):153–170.

[Thagard, 1984] Thagard, P. (1984). Computer programs as psychological theories. In Neumaier, O., editor, Mind, Language and Society, pages 77–84. Conceptus-Studien, Vienna.

[Turing, 1936] Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Ser. 2, 42:230–265.

[Wegner, 1995] Wegner, P. (1995). Interaction as a basis for empirical computer science. ACM Computing Surveys, 27(1):45–48.

[Weinberg, 2002] Weinberg, S. (2002). Is the universe a computer? New York Review of Books, 49(16).

[Wiebe and Rapaport, 1986] Wiebe, J. M. and Rapaport, W. J. (1986). Representing de re and de dicto belief reports in discourse and narrative. Proceedings of the IEEE, 74:1405–1413.

[Wigner, 1960] Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications in Pure and Applied Mathematics, 13(1). https://www.dartmouth.edu/∼matc/MathDrama/reading/Wigner.html, http://www.maths.ed.ac.uk/∼aar/papers/wigner.pdf (both accessed 7 April 2015).

[Winston, 1977] Winston, P. H. (1977). Artificial Intelligence. Addison-Wesley, Reading, MA.

[Wolfram, 2002] Wolfram, S. (2002). A New Kind of Science. Wolfram Media.
