Hussein A. Abbass
Jason Scholz
Darryn J. Reid
Editors
Foundations of Trusted Autonomy
Studies in Systems, Decision and Control
Volume 117
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
The series "Studies in Systems, Decision and Control" (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control, quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output.
More information about this series at http://www.springer.com/series/13304
Hussein A. Abbass • Jason Scholz
Darryn J. Reid
Editors

Foundations of Trusted Autonomy
Editors

Hussein A. Abbass
School of Engineering and IT
University of New South Wales
Canberra, ACT
Australia

Jason Scholz
Defence Science and Technology Group
Joint and Operations Analysis Division
Edinburgh, SA
Australia

Darryn J. Reid
Defence Science and Technology Group
Joint and Operations Analysis Division
Edinburgh, SA
Australia
ISSN 2198-4182                    ISSN 2198-4190 (electronic)
Studies in Systems, Decision and Control
ISBN 978-3-319-64815-6            ISBN 978-3-319-64816-3 (eBook)
https://doi.org/10.1007/978-3-319-64816-3
Library of Congress Control Number: 2017949139
© The Editor(s) (if applicable) and The Author(s) 2018. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To a future where humans and machines live together in harmony.
Foreword
Technology-dependent industries and agencies, such as Defence, are keenly seeking game-changing capability in trusted autonomous systems. However, behind the research and development of these technologies is the story of the people, collaboration and the potential of technology.

The motivation for Defence in sponsoring the open publication of this exciting new book is to accelerate Australia's Defence science and technology in Trusted Autonomous Systems to a world-class standard. This journey began in July 2015 with a first invitational symposium hosted in Australia, with some of the world-class researchers featured in this book in attendance. Since that time, engagement across the academic sector both nationally and internationally has grown steadily. In the near future in Australia, we look forward to establishing a Defence Cooperative Research Centre that will further develop our national research talent and sow the seeds of a new generation of systems for Defence.

Looking back over the last century at the predictions made about general-purpose robotics and AI in particular, it seems appropriate to ask "so where are all the robots?" Why don't we see them more embedded in society? Is it because they can't deal with the inevitable unpredictability of open environments, and in the case of the military, situations that are contested? Is it because these machines are simply not smart enough? Or is it because humans cannot trust them? For the military, these problems may well be the hardest challenges of all, as failure may come with high consequences.

This book then, appropriately in the spirit of foundations, examines the topic with an open and enquiring flavour, teasing apart critical philosophical, scientific, mathematical, application and ethical issues, rather than assuming a stance of advocacy.
The full story has not yet been written but it has begun, and I believe this contribution will take us forward. My thanks in particular to the authors and the editors, Prof. Hussein A. Abbass at the University of New South Wales for his sustained effort and art of gentle persuasion, and my own Defence Scientist, Research Leader Dr. Jason Scholz and Principal Scientist Dr. Darryn J. Reid.

Canberra, Australia
April 2017

Dr. Alex Zelinsky
Chief Defence Scientist of Australia
Preface
Targeting scientists, researchers, practitioners and technologists, this book brings contributions from like-minded authors to offer the basics, the challenges and the state of the art on trusted autonomous systems in a single volume.

On the one hand, the field of autonomous systems has been focusing on technologies including robotics and artificial intelligence. On the other hand, the trust dimension has been studied by social scientists, philosophers, human factors specialists and human-computer interaction researchers. This book draws threads from these diverse communities to blend the technical, social and practical foundations of the emerging field of trusted autonomous systems.

The book is structured in three parts. Each part contains chapters written by eminent researchers and supplemented with short chapters written by high-calibre and outstanding practitioners and users of this field. The first part covers foundational artificial intelligence technologies. The second part focuses on the trust dimension and covers philosophical, practical and technological perspectives on trust. The third part brings about advanced topics necessary to create future trusted autonomous systems.

The book is written by researchers and practitioners to cover different types of readership. It contains chapters that showcase scenarios to bring to practitioners the opportunities and challenges that autonomous systems may impose on society. Examples of these perspectives include challenges in Cyber Security, Defence and Space Operations. But it is also a useful reference for graduate students in engineering, computer science, cognitive science and philosophy. Examples of topics covered include Universal Artificial Intelligence, Goal Reasoning, Human-Robotic Interaction, Computational Motivation and Swarm Intelligence.

Canberra, Australia          Hussein A. Abbass
Edinburgh, Australia         Jason Scholz
Edinburgh, Australia         Darryn J. Reid
March 2017
Acknowledgements
The editors wish to thank all authors for their contributions to this book and for their patience during the development of the book.

A special thanks goes to the Defence Science and Technology Group, Department of Defence, Australia, for funding this project to make the book public access. Thanks also are due to the University of New South Wales in Canberra (UNSW Canberra) for the time taken by the first editor for this book project.
Contents

1   Foundations of Trusted Autonomy: An Introduction ................ 1
    Hussein A. Abbass, Jason Scholz and Darryn J. Reid

Part I  Autonomy

2   Universal Artificial Intelligence ............................... 15
    Tom Everitt and Marcus Hutter

3   Goal Reasoning and Trusted Autonomy ............................. 47
    Benjamin Johnson, Michael W. Floyd, Alexandra Coman,
    Mark A. Wilson and David W. Aha

4   Social Planning for Trusted Autonomy ............................ 67
    Tim Miller, Adrian R. Pearce and Liz Sonenberg

5   A Neuroevolutionary Approach to Adaptive Multi-agent Teams ...... 87
    Bobby D. Bryant and Risto Miikkulainen

6   The Blessing and Curse of Emergence in Swarm Intelligence
    Systems ......................................................... 117
    John Harvey

7   Trusted Autonomous Game Play .................................... 125
    Michael Barlow

Part II  Trust

8   The Role of Trust in Human-Robot Interaction .................... 135
    Michael Lewis, Katia Sycara and Phillip Walker

9   Trustworthiness of Autonomous Systems ........................... 161
    S. Kate Devitt

10  Trusted Autonomy Under Uncertainty .............................. 185
    Michael Smithson

11  The Need for Trusted Autonomy in Military Cyber Security ........ 203
    Andrew Dowse

12  Reinforcing Trust in Autonomous Systems: A Quantum
    Cognitive Approach .............................................. 215
    Peter D. Bruza and Eduard C. Hoenkamp

13  Learning to Shape Errors with a Confusion Objective ............. 225
    Jason Scholz

14  Developing Robot Assistants with Communicative Cues for Safe,
    Fluent HRI ...................................................... 247
    Justin W. Hart, Sara Sheikholeslami, Brian Gleeson, Elizabeth Croft,
    Karon MacLean, Frank P. Ferrie, Clément Gosselin
    and Denis Laurandeau

Part III  Trusted Autonomy

15  Intrinsic Motivation for Truly Autonomous Agents ................ 273
    Ron Sun

16  Computational Motivation, Autonomy and Trustworthiness:
    Can We Have It All? ............................................. 293
    Kathryn Merrick, Adam Klyne and Medria Hardhienata

17  Are Autonomous-and-Creative Machines Intrinsically
    Untrustworthy? .................................................. 317
    Selmer Bringsjord and Naveen Sundar Govindarajulu

18  Trusted Autonomous Command and Control .......................... 337
    Noel Derwort

19  Trusted Autonomy in Training: A Future Scenario ................. 347
    Leon D. Young

20  Future Trusted Autonomous Space Scenarios ....................... 355
    Russell Boyce and Douglas Griffin

21  An Autonomy Interrogative ....................................... 365
    Darryn J. Reid

Index ............................................................... 393
Contributors
Hussein A. Abbass School of Engineering and Information Technology, University of New South Wales, Canberra, ACT, Australia
David W. Aha Navy Center for Applied Research in AI, US Naval Research Laboratory, Washington DC, USA
Michael Barlow School of Engineering and IT, UNSW, Canberra, Australia
Russell Boyce University of New South Wales, Canberra, Australia
Selmer Bringsjord Rensselaer AI & Reasoning (RAIR) Lab, Department of Cognitive Science, Department of Computer Science, Rensselaer Polytechnic Institute (RPI), Troy, NY, USA
Peter D. Bruza Information Systems School, Queensland University of Technology (QUT), Brisbane, Australia
Bobby D. Bryant Department of Computer Sciences, University of Texas at Austin, Austin, USA
Alexandra Coman NRC Research Associate at the US Naval Research Laboratory, Washington DC, USA
Elizabeth Croft Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
Noel Derwort Department of Defence, Canberra, Australia
Andrew Dowse Department of Defence, Canberra, Australia
Tom Everitt Australian National University, Canberra, Australia
Frank P. Ferrie Department of Electrical and Computer Engineering, McGill University, Montreal, Canada
Michael W. Floyd Knexus Research Corporation, Springfield, VA, USA
Brian Gleeson Department of Computer Science, University of British Columbia, Vancouver, Canada
Clément Gosselin Department of Mechanical Engineering, Laval University, Quebec City, Canada
Naveen Sundar Govindarajulu Rensselaer AI & Reasoning (RAIR) Lab, Department of Cognitive Science, Department of Computer Science, Rensselaer Polytechnic Institute (RPI), Troy, NY, USA
Douglas Griffin University of New South Wales, Canberra, Australia
Medria Hardhienata School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Justin W. Hart Department of Computer Science, University of Texas at Austin, Austin, USA; Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
John Harvey School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Eduard C. Hoenkamp Information Systems School, Queensland University of Technology (QUT), Brisbane, Australia; Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
Marcus Hutter Australian National University, Canberra, Australia
Benjamin Johnson NRC Research Associate at the US Naval Research Laboratory, Washington DC, USA
S. Kate Devitt Robotics and Autonomous Systems, School of Electrical Engineering and Computer Science, Faculty of Science and Engineering, Institute for Future Environments, Faculty of Law, Queensland University of Technology, Brisbane, Australia
Adam Klyne School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Denis Laurandeau Department of Electrical Engineering, Laval University, Quebec City, Canada
Michael Lewis Department of Information Sciences, University of Pittsburgh, Pittsburgh, PA, USA
Karon MacLean Department of Computer Science, University of British Columbia, Vancouver, Canada
Kathryn Merrick School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Risto Miikkulainen Department of Computer Sciences, University of Texas at Austin, Austin, USA
Tim Miller Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Adrian R. Pearce Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Darryn J. Reid Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia
Jason Scholz Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia
Sara Sheikholeslami Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
Michael Smithson Research School of Psychology, The Australian National University, Canberra, Australia
Liz Sonenberg Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Ron Sun Cognitive Sciences Department, Rensselaer Polytechnic Institute, Troy, NY, USA
Katia Sycara Robotics Institute School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Phillip Walker Department of Information Sciences, University of Pittsburgh, Pittsburgh, PA, USA
Mark A. Wilson Navy Center for Applied Research in AI, US Naval Research Laboratory, Washington DC, USA
Leon D. Young Department of Defence, War Research Centre, Canberra, Australia
Chapter 1
Foundations of Trusted Autonomy: An Introduction
Hussein A. Abbass, Jason Scholz and Darryn J. Reid

1.1 Autonomy
To aid in understanding the chapters to follow, a general conceptualisation of autonomy may be useful. Foundationally, autonomy is concerned with an agent that acts in an environment. However, this definition is insufficient for autonomy, because autonomy also requires persistence (or resilience) against the hardships that the environment imposes upon the agent. An agent whose first action ends in its demise would not demonstrate autonomy. The themes of autonomy then include agency, persistence and action.
Action may be understood as the utilisation of capability to achieve intent, given awareness.1 The action trinity of intent, capability and awareness is founded on a mutual tension illustrated in the following figure.

If "capability" is defined as anything that changes the agent's awareness of the world (usually by changing the world), then the error between the agent's awareness and intent drives capability choice in order to reduce that error. Or, expressed compactly, an agent seeks achievable intent.
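To make "an agent seeks achievable intent" concrete, the following is a minimal sketch in which the numeric gap between awareness and intent drives the choice of capability. The one-dimensional world, the error measure and all names are illustrative assumptions rather than anything taken from the chapter.

    # Toy sketch only: the gap between awareness and intent drives capability choice.

    def error(awareness, intent):
        """Mismatch between the perceived state of the world and the intended state."""
        return abs(intent - awareness)

    def choose_capability(awareness, intent, capabilities):
        """Pick the capability whose expected effect most reduces the error."""
        return min(capabilities, key=lambda c: error(c(awareness), intent))

    world, intent = 0, 10
    capabilities = [lambda w: w + 1, lambda w: w - 1, lambda w: w]  # nudge up, nudge down, do nothing

    for _ in range(12):
        awareness = world                       # sensing (perfect observation in this toy)
        capability = choose_capability(awareness, intent, capabilities)
        world = capability(world)               # acting changes the world, hence future awareness

    print(world)  # the agent has driven the world towards its intent (10)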
The embodiment of this action trinity in an entity, itself separated from the environment, but existing within it, and interacting with it, is termed an agent, or autonomy, or intelligence.

1 D. A. Lambert, J. B. Scholz, Ubiquitous Command and Control, Intelligent Decision Technologies, Volume 1, Issue 3, July 2007, Pages 157-173, IOS Press, Amsterdam, The Netherlands.

H. A. Abbass (B)
School of Engineering and IT, University of New South Wales,
Canberra, ACT 2600, Australia
e-mail: h.abbass@adfa.edu.au

J. Scholz · D. J. Reid
Defence Science and Technology Group, Joint and Operations Analysis Division,
PO Box 1500, Edinburgh, SA, Australia
e-mail: jason.scholz@defence.gov.au

D. J. Reid
e-mail: darryn.reid@defence.gov.au
© The Author(s) 2018
H. A. Abbass et al. (eds.), Foundations of Trusted Autonomy, Studies in Systems,
Decision and Control 117, https://doi.org/10.1007/978-3-319-64816-3_1
So it is fitting that Chapter 2 by Tom Everitt and Marcus Hutter opens with the topic Universal Artificial Intelligence (UAI): Practical Agents and Fundamental Challenges. Their definition of UAI involves two computational models, both Turing Machines: one representing the agent, and one the environment, with actions by the agent on the environment (capability), actions from the environment on the agent (awareness), and actions from the environment to the agent including a utilisation reward (intent achievement) subject to uncertainty. The "will" that underpins the intent of this agent is "maximisation of reward". This machine intelligence is expressible, astoundingly, as a single equation. Named AIXI, it achieves a theoretically optimal agent in terms of reward maximisation. Though uncomputable, the construct provides a principled approach to considering a practical artificial intelligence and its theoretical limitations. Everitt and Hutter guide us through the development of this theory and the approximations necessary. They then examine the critical question of whether we can trust this machine given machine self-modification, and given the potential for reward counterfeiting, and possible means to manage these. They also consider agent death and self-preservation. Death for this agent involves the cessation of action, and might be represented as an absorbing zero-reward state. They define both death and suicide, to assess the agent's self-preservation drive, which has implications for autonomous systems safety. UAI provides a fascinating theoretical foundation for an autonomous machine and indicates other definitional paths for future research.
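As a rough illustration of the agent-environment cycle that UAI formalises (a sketch of the setting only, not of AIXI, which is uncomputable), the toy loop below has an agent act, receive a reward from the environment, and update its estimates so as to accumulate reward. The bandit environment, the epsilon-greedy agent and all names are assumptions made for illustration.

    import random

    class ToyEnvironment:
        """A two-armed bandit standing in for the unknown environment."""
        def step(self, action):
            reward = 1 if (action == 1 and random.random() < 0.8) else 0
            observation = None          # no informative observation in this toy case
            return observation, reward

    class EpsilonGreedyAgent:
        """Estimates each action's value from its history and mostly acts greedily."""
        def __init__(self, actions, epsilon=0.1):
            self.epsilon = epsilon
            self.totals = {a: 0.0 for a in actions}
            self.counts = {a: 0 for a in actions}
        def act(self):
            if random.random() < self.epsilon or not all(self.counts.values()):
                return random.choice(list(self.totals))      # occasional exploration
            return max(self.totals, key=lambda a: self.totals[a] / self.counts[a])
        def learn(self, action, reward):
            self.totals[action] += reward
            self.counts[action] += 1

    env, agent = ToyEnvironment(), EpsilonGreedyAgent(actions=[0, 1])
    for _ in range(200):
        action = agent.act()
        _, reward = env.step(action)
        agent.learn(action, reward)   # the history of actions and rewards shapes future choices

    print(agent.counts)               # the rewarding action ends up chosen far more often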
In this action trinity of intent, capability, and awareness, it is intent that is in some sense the foremost. Driven by an underlying will to seek utility, survival or other motivation, intent establishes future goals. In Chapter 3, Benjamin Johnson, Michael Floyd, Alexandra Coman, Mark Wilson and David Aha consider Goal Reasoning and Trusted Autonomy. Goal Reasoning allows an autonomous system to respond more successfully to unexpected events or changes in the environment. In relation to UAI, the formation of goals and exploration offer the massive benefit of exponential improvements in comparison with random exploration. So goals are important computationally to achieve practical systems. They present two different models of Goal Reasoning: Goal-Driven Autonomy and the Goal Lifecycle. They also describe the Situated Decision Process (SDP), which manages and executes goals for a team of autonomous vehicles. The articulation of goals is also important to human trust, as behaviours can be complex and hard to explain, but goals may be easier because behaviour (as capability action on the environment) is driven by goals (and their difference from awareness). Machine reasoning about goals also provides a basis for the "mission command" of machines. That is, the expression of intent from one agent to another, and the expression of a capability (e.g. a plan) in return, provides for a higher level of control with the "human-on-the-loop" applied to more machines than would be the case of the "human-in-the-loop". In this situation, the authors touch on "rebellion", or refusal of an autonomous system to accept a goal expressed to it. This is an important trust requirement if critical conditions that the machine is aware of are violated, such as the legality of action.
The ability to reason with and explain goals (intent) is complemented in Chapter 4 by consideration of reasoning and explanation of planning (capability). Tim Miller, Adrian R. Pearce and Liz Sonenberg examine social planning for trusted autonomy. Social planning is machine planning in which the planning agent maintains and reasons with an explicit model of the humans with which it interacts, including the human's goals (intent), intentions (in effect their plans or, in general, capability to act), beliefs (awareness), as well as their potential behaviours. The authors combine recent advances to allow an agent to act in a multi-agent world considering the other agents' actions, and a Theory of Mind about the other agents' beliefs together, to provide a tool for social planning. They present a formal model for multi-agent epistemic planning, and resolve the significant processing that would have been required to solve this if each agent's perspective were a mode in modal logic, by casting the problem as a non-deterministic planning task for a single agent. Essentially, this treats the actions of other agents in the environment as non-deterministic outcomes (with some probability that is not resolved until after the action) of one agent's own actions. This approach looks very promising to facilitate computable cooperative and competitive planning in human and machine groups.
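The key move of folding other agents' possible choices into a single agent's own action model can be hinted at with a minimal one-step sketch. This is not the authors' formalism; the numeric state, the goal test and all names are illustrative assumptions.

    def plan_one_step(state, my_actions, other_responses, goal):
        """Return an action of mine that satisfies the goal under every possible response."""
        for my_action in my_actions:
            outcomes = [respond(my_action(state)) for respond in other_responses]
            if all(goal(outcome) for outcome in outcomes):   # robust to any other-agent choice
                return my_action
        return None                                          # no robust action exists

    # Toy setting: the state is a number, my goal is to keep it non-negative,
    # and the other agent may subtract up to 2 after I act.
    state = 1
    my_actions = [lambda s: s, lambda s: s + 1, lambda s: s + 3]
    other_responses = [lambda s: s - 2, lambda s: s - 1, lambda s: s]
    goal = lambda s: s >= 0

    chosen = plan_one_step(state, my_actions, other_responses, goal)
    print(chosen(state))   # 2: adding 1 keeps the goal satisfied whatever the other agent does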
Considering autonomy as will-driven (e.g. for reward, survival) from Chapter 2, and autonomy as goal-directed and plan-achieving (simplifying computation and explanation) from Chapters 3 and 4, what does autonomy mean in a social context? The US Defense Science Board2 signals the need for a social perspective:

it should be made clear that all autonomous systems are supervised by human operators at some level, and autonomous systems' software embodies the designed limits on the actions and decisions delegated to the computer. Instead of viewing autonomy as an intrinsic property of an unmanned vehicle in isolation, the design and operation of autonomous systems needs to be considered in terms of human-system collaboration.

2 U.S. Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems, July 2012, pp. 3-5.
The Defense Science Board report goes on to recommend "that the DoD abandon the use of 'levels of autonomy' and replace them with an autonomous systems reference framework". Given this need for supervision and eventual human-system collaboration, perhaps a useful conceptualisation for autonomy might borrow from psychology, as illustrated in the following figure.
Here, a popular definition3 of 'autonomy as self-sufficient and self-directed' is situated in a setting of social maturity and extended to include 'awareness of self'. Covey4 popularises a maturity progression from dependence (e.g. on parents) via independence to interdependence. The maladjusted path is progression from dependence to co-dependence. Co-dependent agents may function but lack resilience, as compromise to one agent affects the other(s), thus directly affecting their own survival or utility. For the interdependent agent cut off from communication, there is the fall-back state of independence.

3 J. M. Bradshaw, The Seven Deadly Myths of Autonomous Systems, IEEE, 2013.

4 S. R. Covey, The Seven Habits of Highly Effective People, Free Press, 1989.

So, if this might be a preferred trajectory for machine autonomy, what are the implications of a strong and independent autonomy? In Chapter 5, Bobby D. Bryant and Risto Miikkulainen consider a neuroevolutionary approach to adaptive multi-agent teams. In their formulation, a similar and significant capability for every agent is posed. They propose a collective where each agent has sufficient breadth of skills to allow for a self-organized division of labour so that it behaves as if it were a heterogeneous team. This division is dynamic in response to conditions, and the composition of autonomous agents occurs without direction from a human operator. Indeed, in general, humans might be members of the team. This potentially allows for massively scalable, resilient autonomous systems with graceful degradation, as losing any agent effects a loss of role(s) which might be taken up by any other agent(s), all of which have the requisite skills (capability). Artificial neural networks are used to learn teams, with examples given in the construct of strategy games.

Furthering the theme of social autonomy, in Chapter 6 John Harvey examines both the blessing and curse of emergence in swarm intelligence systems. We might consider agents composing a swarm intelligence as having "similar", ranging to identical, but not necessarily "significant", capabilities, with the implication that resilience is a property of the collective rather than the individual. Harvey notes that swarm intelligence may relate to a category within the complexity and self-organisation spectrum of emergence characterised as weakly predictable. Swarms do not require centralised control, and may be formed from simple agent interactions, offering the potential for graceful degradation. That is, the loss of some individuals may only weakly degrade the effect of the collective. These and other "blessings" of swarm intelligence presented by the author are tempered by the shortcomings of weak predictability and controllability. Indeed, if the agents are identical, systematic failure may also be possible, as any design fault in an individual is replicated. The author suggests that a future direction for research, related to the specification of trust properties, might follow from the intersection of liveness properties based on formal methods and safety properties based on Lyapunov measures. Swarm intelligence also brings into question the nature of intelligence. Perhaps it may arise as an emergent property from interacting simpler cognitive elements.
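As a toy illustration of collective behaviour emerging from simple local interactions without central control, and of graceful degradation when individuals are lost, consider agents that repeatedly average with their neighbours. This sketch and its names are assumptions made for illustration, not material from the chapter.

    import random

    def swarm_step(values, radius=1):
        """Each agent moves to the average of its local neighbourhood (no central controller)."""
        new_values = []
        for i in range(len(values)):
            lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
            neighbourhood = values[lo:hi]
            new_values.append(sum(neighbourhood) / len(neighbourhood))
        return new_values

    agents = [random.uniform(0, 10) for _ in range(20)]
    for _ in range(200):
        agents = swarm_step(agents)

    # Graceful degradation: removing a quarter of the agents leaves the remaining
    # collective behaving in essentially the same way.
    survivors = agents[:15]
    for _ in range(200):
        survivors = swarm_step(survivors)

    print(max(agents) - min(agents), max(survivors) - min(survivors))  # both spreads shrink towards zero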
If a social goal for autonomy is collaboration, then cooperation and competition (e.g. for resources) is important. Furthermore, interdependent autonomy must include machines capable of social conflict. Conflict exists where there is mutually exclusive intent. That is, if the intent of one agent can only be achieved if the intent of the other is not achieved. Machine agents need to recognise and operate under these conditions. A structured approach to framing competition and conflict is in games. Michael Barlow, in Chapter 7, examines trusted autonomous game play. Barlow explains four defining traits of games that include a goal (intent), rules (action bounds), a feedback system (awareness), and voluntary participation. Voluntary participation is an exercise of agency where an agreement to act within those conditions is accepted. Barlow examines both perspectives of autonomy for games and games for autonomy. Autonomous entities are usually termed AIs in games, and may serve a training purpose or just provide an engaging user experience. So, improving AIs may improve human capabilities. Autonomous systems can also benefit from games, as games provide a closed-world construct for machine reasoning and learning about scenarios.

These chapters take us on a brief journey of some unique perspectives, from autonomy as individual computational intelligence through to collective machine diversity.
1.2 Trust

Trust is a ubiquitous concept. We all have experienced it one way or another, yet it appears to hold so many components that we may never converge on a single, precise, and concise definition of the concept. Yet, the massive amount of literature on the topic is evidence that the topic is an important one for scientific inquiry.
The main contribution of this part of the book is to showcase the complexity of the concept in an attempt to get a handle on its multifaceted nature. This part of the book is a brief inquiry into the meaning of trust, how it is perceived in human-human interaction and in human-machine interaction, and attempts to confine the ambiguity of the topic through novel perspectives and scientifically-grounded opinions.

It initially sounded logical to us to start this part of the book with those chapters discussing trust in its general form before the chapters discussing the trusted autonomy literature. As logical as this idea may sound, it arguably biases a methodological treatment of trust in trusted autonomy.

The previous structure reflects the path that most research in the literature has been following. First, an attempt is made to understand the concept in the human social context; then this understanding is used to define what aspect of the concept can be mapped to the human-machine interaction context. Why not? After all, we would like the human to trust and accept the machine as part of our social system.

The previous argument is the strength and weakness of the rationale behind that logic. It is a strong argument when we investigate human-machine interaction, when trust in this relationship is only a means to an end. The ultimate end is that the human accepts the machine, accepts its decision, and accepts its role within a context.

However, this view falls short methodologically when studying trust in trusted autonomy. In the ultimate form of trusted autonomous systems, the parties of a trusting relationship are both autonomous; thus, both parties need to establish trust in themselves, and then in each other. If one party is a human and the other is a machine, the machine needs to trust the human (machine-human trust) and the human needs to trust the machine (human-machine trust). Therefore, to merely assume that the machine needs to respect what trust is in a human system limits our grasp on the complexity of trust in trusted autonomy.

The nature of trust in a machine needs to be understood. How machines can evaluate trust is a question whose answers need to stem from studies that focus on the nature of the machine.

We then decided to flip the coin in the way we structure this part of the book. We start the journey of inquiry with a chapter written by Lewis, Sycara and Walker. The chapter, entitled "The Role of Trust in Human-Robot Interaction", paves the way to understanding trust from a machine perspective. Lewis et al. present a thorough investigation of trust in human-robot interaction, starting with the identification of factors affecting trust as means for measuring trust. They conclude by calling for a need to establish a battery of tasks in human-robot interaction to enable researchers to study the concept of trust.

Kate Devitt, in her chapter entitled "Trustworthiness of Autonomous Systems", starts a journey of inquiry to answer three fundamental questions: Who or what is trustworthy? How do we know who or what is trustworthy? And what factors influence what or who is trustworthy? She proposes a model of trust with two primary dimensions: one related to competency and the second related to integrity. The author concludes the chapter by discussing the natural relationship between risk and trustworthiness, followed by questioning who and what we should trust.
Michael Smithson investigates the relationship between trust and uncertainty in more depth in his chapter entitled "Trusted Autonomy Under Uncertainty". His first inquiry, into the relationship between trust and distrust, takes the view that an autonomous system is an automaton and investigates the human-robotic interaction from this perspective. The inquiry into uncertainty leads to discussing the relationship between trust and social dilemmas, up to the issue of trust repair.

Andrew Dowse, in his chapter "The Need for Trusted Autonomy in Military Cyber Security", presents the need for trusted autonomy in the Cyberspace. Dowse discusses the requirements for trust in the Cyberspace by discussing a series of challenges that need to be considered.

Bruza and Hoenkamp bring the field of quantum cognition to offer a lens on trust in their chapter "Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach". They look into the interplay between System 1, the fast reactive system, and System 2, the slow rational thinking system. They discuss an experiment with images, where they found that humans distrust fake images when they distrust the subject of the image. Bruza and Hoenkamp then present a quantum cognition model of this phenomenon.

Jason Scholz, in his chapter "Learning to Shape Errors with a Confusion Objective", presents an investigation into class hiding in machine learning. Through class re-weighting during learning, the error of a deep neural network on a classification task can be redistributed and controlled. The chapter addresses the issue of trust from two perspectives. First, error trading allows the user to establish confidence in the machine learning algorithm by focusing on classes of interest. Second, the chapter shows that the user can exert control on the behaviour of the machine learning algorithm, which is a two-edged sword. It would allow the user the flexibility to manipulate it, while at the same time it may offer an opportunity for an adversary to influence the algorithm through class redistribution.
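To give a flavour of class re-weighting in general (a minimal sketch under the assumption of a plain weighted cross-entropy; it is not the chapter's confusion objective, and the function names and toy data are illustrative), scaling each class's contribution to the loss shifts where a classifier trained on it prefers to make its mistakes:

    import numpy as np

    def weighted_cross_entropy(logits, labels, class_weights):
        """Mean cross-entropy in which each example is scaled by the weight of its true class."""
        shifted = logits - logits.max(axis=1, keepdims=True)            # numerical stability
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        per_example = -log_probs[np.arange(len(labels)), labels]
        return float(np.mean(class_weights[labels] * per_example))

    # Toy 3-class problem: up-weighting class 2 makes its misclassifications more
    # expensive, so a network trained on this loss trades errors away from class 2
    # and onto the down-weighted classes.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(8, 3))           # stand-in for network outputs
    labels = rng.integers(0, 3, size=8)        # stand-in for true classes

    uniform = np.array([1.0, 1.0, 1.0])
    skewed = np.array([0.5, 0.5, 2.0])         # emphasise class 2
    print(weighted_cross_entropy(logits, labels, uniform))
    print(weighted_cross_entropy(logits, labels, skewed))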
The last chapter in this part showcases a few practical examples from work conducted at the University of British Columbia. Hart and his colleagues, in their chapter on "Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI", list examples of their work ranging from car door assembly all the way to the understanding of social cues and how these communicative cues can be integrated in human-robot interaction tasks.
1.3 Trusted Autonomy

Part III of the book has a distinctively philosophical flavour: the basic theme that runs through all of its chapters concerns the nature of autonomy, as distinct from automation, and the requirements that autonomous agents must meet if they are to be trustworthy, at least. Autonomy is more or less understood as a requirement for operating in complex environments that manifest uncertainty; without uncertainty, relatively straightforward automation will do, and indeed autonomy is generally seen here as being predicated on some form of environmental uncertainty. Part III is heavily concerned with the centre point of autonomy in terms of intrinsic motivation, computational motivation, creativity, freedom of action, and theory of self. Trustworthiness is largely seen here as a necessary but not sufficient condition for such agents to be trusted by humans to carry out tasks in complex environments, with considerable implications for the need for controls on agent behaviour as a component of its motivational processes.
Sun argues that agents need to have intrinsic motivation, meaning internal motivational processes, if they are to deal successfully with unpredictable complex environments. Intrinsic motivation is required under such conditions because criteria defining agent control cannot be specified prior to operation. The importance of intrinsic motivation in regards to the successful operation and acceptance by humans under conditions of fundamental uncertainty represents a challenge that requires serious redress of familiar but outdated assumptions and methodologies.

Furthermore, the ability to understand the motivation of other agents is central to trust, because having this ability means that the behaviour of other agents is predictable even in the absence of predictability of future states of the overall environment. Indeed, the argument is that predictability of the behaviour of other agents through understanding their motivations is what enables trust, and this also explains why trust is such an important issue in an uncertain operating environment.

The chapter presents an overview of a cognitive architecture, the Clarion cognitive architecture, supporting cognitive capabilities as well as intrinsic and derived motivation for agents; it amounts to a structural specification for a variety of psychological processes necessary for autonomy. In particular, the focus of the chapter in this regard is on the interaction between motivation and cognition. Finally, several simulations of this cognitive architecture are given to illustrate how this approach enables autonomous agents to function correctly.
Merrick et al.'s discussion of computational motivation extends a very similar argument, by arguing that computational motivation is necessary to achieve open-ended goal formulation in autonomous agents operating under uncertainty. Yet it approaches this in a very different manner, by realising computational motivation in practical autonomous systems sufficient for experimental investigation of the question. Here, computational motivation includes curiosity and novelty-seeking as well as adaptation, primarily as an epistemic motivation for knowledge increase.

Agents having different prior experiences may behave differently, with the implication that intrinsic motivation through prior experience impacts trustworthiness. Thus trust is a consequence of how motivational factors interact with uncertainty in the operating environment to produce an effect that is not present under closed environments containing only measurable stochastic risk, where essentially rationality, and thus trustworthiness, is definable in terms of an optimality condition that means that agents operate without much scope for exercising choice.

The chapter concludes that the empirical evidence presented is consistent with the thesis that intrinsic motivation in agents impacts trustworthiness, in potentially simultaneously positive and negative ways, because of the complex of overlapping and sometimes conflicting implications motivation has for privacy and security. Trustworthiness is also impacted by what combination of motivations the agents employ and whether they operate in mixed or homogeneous agent environments. Finally, if humans are to develop trust in autonomous agents, then agent technologies have to be transparent to humans.
General computational logics are used by Bringsjord and Govindarajulu as the basis for a model of human-level cognition as formal computing machines, to formally explore the consequences of autonomy for trust. The chapter thereby sets formal limits on trust very much akin to those observed for humans in the psychology literature, by presenting a theorem stating, under various formal assumptions, that an artificial agent that is autonomous (A) and creative (C) will tend to be, from the standpoint of a fully informed rational agent, intrinsically untrustworthy (U). The chapter thus refers to the principle for humans as PACU, and to the theorem as TACU. The proof of this theorem is obtained using ShadowProver, a novel automated theorem-proving program.

After building an accessible introduction to the principle, with reference to the psychology maintaining it for humans and empirical evidence for its veracity, the chapter establishes a formal version of the principle. This requires establishing formalisations of what it means to be an ideal observer, of what it means to be creative, and of what it means to be autonomous, as well as a formal notion of collaborative situations. The chapter describes the cognitive calculus DeLEL in which TACU is formalised, and the novel theorem prover ShadowProver used to prove the theorem.

More broadly, the chapter seeks not just to establish the theorem, but to establish the case for its plausibility beyond the specific assumptions of the theorem. Beyond the limitations of this particular formalisation (and the authors invite further investigation based on more powerful formalisations), the TACU theorem establishes the necessity of active engineering practices to protect humans from the unintended consequences of creative autonomous machines, by asserting legal and ethical limits on what agents can do. The preconditions of autonomy and creativity are insufficient; just as with humans, societal controls in the form of legal and ethical constraints are also required.
Derwort's concerns relate to the development of autonomous military command and control (C2). Autonomous systems in military operational environments will not act alone, but rather will do so in concert with other autonomous and manned systems, and ultimately all under broad national military control exercised by human decision-makers. This is a situation born of necessity and the opportunity afforded by rapidly developing autonomous technologies: autonomous systems, and the distributed C2 across them, are emerging as a response to the rapid increase in capabilities of potential military adversaries and the limited ability to respond to them with the development of traditional manned platforms.

The chapter outlines a number of past scenarios involving human error in C2, with tragic consequences, to illustrate the limitations of human decision-making, and plausible military scenarios in the not-too-distant future. There are no doubt risks involved with taking the human out of the decision-making in terms of responsibility, authority and the dehumanising of human conflict, yet any rational discussion on the use of autonomy in war and battle needs to also be moderated by due recognition of the inherent risks of having humans in the decision-making processes.

Autonomous systems are merely tools, and the cost of their destruction is merely counted in dollars. Therein lies a particular strength, for autonomous systems with distributed C2 have enormous potential to create and implement minimal solutions in place of the more aggressive solutions to tactical problems to which stressed humans are prone. Autonomy offers the potential to intervene in the face of unexpected circumstances, to de-escalate, and to improve the quality as well as the speed of military decision-making. Therein may lie its most serious potential for military operational use.
Young presents the application of autonomy to training systems and raises questions about how such systems will impact the human learning environments in which they are used. Chapter 19 explores this starting from the pivotal premise of traditional teaching, whereby the students must have trust in the teacher to effectively concede responsibility to the teacher. What does this mean if the teacher is a machine? The chapter seeks to explore what is possible with autonomy in the classroom, and what we might reasonably expect to be plausible.

A map is presented showing the interconnected functional components of a training system, including both those that are provided by human trainees and those that might be provided by machines. It includes the functions of the teacher and the learner, including the training topic and measurement of learning. The author presents three key drivers likely to determine the future of autonomous systems in training and education: autonomous systems development, training systems, and trust. Some of the functions required for a learning environment are already being provided by machines, albeit in relatively limited ways; the advance of autonomous systems technologies will expand the potential for delegating more of these functions to machines.

Trust is presented as a function of familiarity, which is consistent with the view of trust in some preceding chapters as requiring predictability of other agents' behaviours even within a complex environment that is inherently unpredictable. Trust is held to be central to learning, and trust through familiarity over time is the basis for exploring a number of future scenarios. The first revolves around the frustration that might be the result of the perceived artificiality of autonomous teachers, compounded by inconsistencies between different autonomous teachers over subsequent time periods. The second concerns the social dislocation and potential incompetence resulting from machines taking over simpler tasks from humans, thereby denying the humans knowledge of those tasks and thereby affecting the quality of higher-level human decision-making. The third is a scenario in which the machine responsible for teaching the human grows up with the human in a complex relationship marked by mutual trust, suggesting that the human's trust in the machine is symbiotic with the development of the machine's trust in the human.
Boyce and Griffin begin with an elucidation of the harshness and remoteness of space, marked by extreme conditions that can degrade or destroy spacecraft. Manoeuvres in orbits near Earth or other large objects are complex and counterintuitive. Gravitational fields are not uniform, interactions between multiple objects can produce significant errors, and space is becoming increasingly crowded, requiring the ability to conduct evasive actions in advance of potential collisions. Close human operation is inefficient and dangerous, mandating the use of autonomy for a wide range of spacecraft functions.
With increasing miniaturisation of spacecraft, traffic management and collision avoidance are becoming pressing problems driving greater degrees of spacecraft autonomy. Yet the lack of trust, ascribed to the limitations of automated code generation, runtime analysis and model checking for verification and validation of software that has to make complex decisions, is a large barrier to adoption of higher-level autonomy for spacecraft. Linked to this is the need for human domain experts to be involved in the design and development of software in order to build trust in the product.

The chapter concludes with some possible space scenarios for autonomy, the first of which might be achieved in the near future, involving greater autonomous analysis of information from different sources. The second concerns autonomy in space traffic management, linked to all spacecraft that have the ability to manoeuvre, that includes the decision-making and action currently undertaken by humans. The final scenario concerns distributed space systems that can self-configure with minimal human input, both to achieve capabilities not achievable using single large spacecraft and to respond to unexpected events such as partial system failure.
The final chapter presents a picture of autonomous systems development primarily from an economic point of view, on the basis that an economic agent is an autonomous agent; the difference being that economics is primarily concerned with analysing overall outcomes from societies of decision-makers while AI is squarely focussed on decision-making algorithm development. The connection between economics and AI is probably more widely understood in economics, which has long utilised and contributed, in turn, to the development of machine learning and automated reasoning methods, than it is in autonomy research. Thus the chapter treats autonomy as the allocation of scarce resources under conditions of fundamental uncertainty.

The main thrust of the chapter is an economic view of uncertainty, which distinguishes between epistemic uncertainty and ontological uncertainty, and its consequences for autonomy. Ontological uncertainty is the deeper of the two: epistemic uncertainty amounts to ignorance of possible outcomes due to sampling limits, while ontological uncertainty relates to the presence of unsolvable paradoxical problems; the chapter thus draws out the connection between the economic notion of ontological uncertainty and the famed incompleteness theorems of Gödel, the unsolvability of the Halting Problem of Turing, and the incompressibility theorems of Algorithmic Information Theory.
Drawing on both financial economics and macroeconomic theory, commonplace investment strategies are presented in the context of this notion of uncertainty, noting that, under conditions of ontological uncertainty, what might be seemingly rational for an individual agent in the short term need not be rational in the long term, nor from the perspective of the entire social enterprise. Certain well-known bond investment strategies, however, appear to have the potential to strike a healthy balance and yield desirable long-term properties for both the agent and the broader system of which it is a component, and thus may offer a basis for autonomous systems. Interestingly, implementing such a strategy in an agent seems to require a theory of self, to provide the kinds of motivational processes discussed in other chapters as well.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Part I
Autonomy
Chapter 2
Universal Artificial Intelligence
Practical Agents and Fundamental Challenges
Tom Everitt and Marcus Hutter

2.1 Introduction
Artificial intelligence (AI) bears the promise of making us all healthier, wealthier, and happier by reducing the need for human labour and by vastly increasing our scientific and technological progress.

Since the inception of the AI research field in the mid-twentieth century, a range of practical and theoretical approaches have been investigated. This chapter will discuss universal artificial intelligence (UAI) as a unifying framework and foundational theory for many (most?) of these approaches. The development of a foundational theory has been pivotal for many other research fields. Well-known examples include the development of Zermelo-Fraenkel set theory (ZFC) for mathematics, Turing machines for computer science, evolution for biology, and decision and game theory for economics and the social sciences. Successful foundational theories give a precise, coherent understanding of the field, and offer a common language for communicating research. As most research studies focus on one narrow question, it is essential that the value of each isolated result can be appreciated in light of a broader framework or goal formulation. UAI offers several benefits to AI research beyond the general advantages of foundational theories just mentioned. Substantial attention has recently been called to the safety of autonomous AI systems [10]. A highly intelligent autonomous system may cause substantial unintended harm if constructed carelessly. The trustworthiness of autonomous agents may be much improved if their design is grounded in a formal theory (such as UAI) that allows formal verification of their behavioural properties. Unsafe designs can be ruled out at an early stage, and adequate attention can be given to crucial design choices.
T. Everitt (B) · M. Hutter
Australian National University, Canberra, Australia
e-mail: Tom.Everitt@anu.edu.au

M. Hutter
e-mail: marcus.hutter@anu.edu.au
© The Author(s) 2018
H. A. Abbass et al. (eds.), Foundations of Trusted Autonomy, Studies in Systems,
Decision and Control 117, https://doi.org/10.1007/978-3-319-64816-3_2