Constantine Stephanidis

Masaaki Kurosu

Helmut Degen

Lauren Reinerman-Jones (Eds.)

HCI International 2020 – Late Breaking Papers

Multimodality and Intelligence

22nd HCI International Conference, HCII 2020 Copenhagen, Denmark, July 19–24, 2020

Proceedings

Lecture Notes in Computer Science 12424

Founding Editors

Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany

Juris Hartmanis
Cornell University, Ithaca, NY, USA

Editorial Board Members

Elisa Bertino
Purdue University, West Lafayette, IN, USA

Wen Gao
Peking University, Beijing, China

Bernhard Steffen
TU Dortmund University, Dortmund, Germany

Gerhard Woeginger
RWTH Aachen, Aachen, Germany

Moti Yung
Columbia University, New York, NY, USA

More information about this series at http://www.springer.com/series/7409

Constantine Stephanidis • Masaaki Kurosu • Helmut Degen • Lauren Reinerman-Jones (Eds.)

HCI International 2020 – Late Breaking Papers

Multimodality and Intelligence

22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020

Proceedings

Editors

Constantine Stephanidis
University of Crete and Foundation for Research and Technology – Hellas (FORTH)
Heraklion, Crete, Greece

Masaaki Kurosu
The Open University of Japan
Chiba, Japan

Helmut Degen
Siemens Corporation
Princeton, NJ, USA

Lauren Reinerman-Jones
University of Central Florida
Orlando, FL, USA

ISSN 0302-9743 ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-030-60116-4 ISBN 978-3-030-60117-1 (eBook)
https://doi.org/10.1007/978-3-030-60117-1

LNCS Sublibrary: SL3 – Information Systems and Applications, incl. Internet/Web, and HCI

© Springer Nature Switzerland AG 2020

Chapter "Reading Aloud in Human-Computer Interaction: How Spatial Distribution of Digital Text Units at an Interactive Tabletop Contributes to the Participants' Shared Understanding" is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). For further details see license information in the chapter.

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Foreword

The 22nd International Conference on Human-Computer Interaction, HCI International 2020 (HCII 2020), was planned to be held at the AC Bella Sky Hotel and Bella Center, Copenhagen, Denmark, during July 19–24, 2020. Due to the COVID-19 pandemic and the resolution of the Danish government not to allow events larger than 500 people to be hosted until September 1, 2020, HCII 2020 had to be held virtually. It incorporated the 21 thematic areas and affiliated conferences listed on the following page.

A total of 6,326 individuals from academia, research institutes, industry, and governmental agencies from 97 countries submitted contributions, and 1,439 papers and 238 posters were included in the volumes of the proceedings published before the conference. Additionally, 333 papers and 144 posters are included in the volumes of the proceedings published after the conference, as "Late Breaking Work" (papers and posters). These contributions address the latest research and development efforts in the field and highlight the human aspects of design and use of computing systems.

The volumes comprising the full set of the HCII 2020 conference proceedings are listed in the following pages, and together they broadly cover the entire field of human-computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas.

I would like to thank the Program Board Chairs and the members of the Program Boards of all Thematic Areas and Affiliated Conferences for their valuable contributions towards the highest scientific quality and the overall success of the HCI International 2020 conference.

This conference would not have been possible without the continuous and unwavering support and advice of the founder, conference general chair emeritus and conference scientific advisor, Prof. Gavriel Salvendy. For his outstanding efforts, I would like to express my appreciation to the communications chair and editor of HCI International News, Dr. Abbas Moallem.

July 2020

Constantine Stephanidis

HCI International 2020 Thematic Areas and Affiliated Conferences

Thematic Areas:

• HCI 2020: Human-Computer Interaction

• HIMI 2020: Human Interface and the Management of Information

Affiliated Conferences:

• EPCE: 17th International Conference on Engineering Psychology and Cognitive Ergonomics

• UAHCI: 14th International Conference on Universal Access in Human-Computer Interaction

• VAMR: 12th International Conference on Virtual, Augmented and Mixed Reality

• CCD: 12th International Conference on Cross-Cultural Design

• SCSM: 12th International Conference on Social Computing and Social Media

• AC: 14th International Conference on Augmented Cognition

• DHM: 11th International Conference on Digital Human Modeling & Applications in Health, Safety, Ergonomics & Risk Management

• DUXU: 9th International Conference on Design, User Experience and Usability

• DAPI: 8th International Conference on Distributed, Ambient and Pervasive Interactions

• HCIBGO: 7th International Conference on HCI in Business, Government and Organizations

• LCT: 7th International Conference on Learning and Collaboration Technologies

• ITAP: 6th International Conference on Human Aspects of IT for the Aged Population

• HCI-CPT: Second International Conference on HCI for Cybersecurity, Privacy and Trust

• HCI-Games: Second International Conference on HCI in Games

• MobiTAS: Second International Conference on HCI in Mobility, Transport and Automotive Systems

• AIS: Second International Conference on Adaptive Instructional Systems

• C&C: 8th International Conference on Culture and Computing

• MOBILE: First International Conference on Design, Operation and Evaluation of Mobile Communications

• AI-HCI: First International Conference on Artificial Intelligence in HCI

Conference Proceedings – Full List of Volumes

1. LNCS 12181, Human-Computer Interaction: Design and User Experience (Part I), edited by Masaaki Kurosu

2. LNCS 12182, Human-Computer Interaction: Multimodal and Natural Interaction (Part II), edited by Masaaki Kurosu

3. LNCS 12183, Human-Computer Interaction: Human Values and Quality of Life (Part III), edited by Masaaki Kurosu

4. LNCS 12184, Human Interface and the Management of Information: Designing Information (Part I), edited by Sakae Yamamoto and Hirohiko Mori

5. LNCS 12185, Human Interface and the Management of Information: Interacting with Information (Part II), edited by Sakae Yamamoto and Hirohiko Mori

6. LNAI 12186, Engineering Psychology and Cognitive Ergonomics: Mental Workload, Human Physiology, and Human Energy (Part I), edited by Don Harris and Wen-Chin Li

7. LNAI 12187, Engineering Psychology and Cognitive Ergonomics: Cognition and Design (Part II), edited by Don Harris and Wen-Chin Li

8. LNCS 12188, Universal Access in Human-Computer Interaction: Design Approaches and Supporting Technologies (Part I), edited by Margherita Antona and Constantine Stephanidis

9. LNCS 12189, Universal Access in Human-Computer Interaction: Applications and Practice (Part II), edited by Margherita Antona and Constantine Stephanidis

10. LNCS 12190, Virtual, Augmented and Mixed Reality: Design and Interaction (Part I), edited by Jessie Y. C. Chen and Gino Fragomeni

11. LNCS 12191, Virtual, Augmented and Mixed Reality: Industrial and Everyday Life Applications (Part II), edited by Jessie Y. C. Chen and Gino Fragomeni

12. LNCS 12192, Cross-Cultural Design: User Experience of Products, Services, and Intelligent Environments (Part I), edited by P. L. Patrick Rau

13. LNCS 12193, Cross-Cultural Design: Applications in Health, Learning, Communication, and Creativity (Part II), edited by P. L. Patrick Rau

14. LNCS 12194, Social Computing and Social Media: Design, Ethics, User Behavior, and Social Network Analysis (Part I), edited by Gabriele Meiselwitz

15. LNCS 12195, Social Computing and Social Media: Participation, User Experience, Consumer Experience, and Applications of Social Computing (Part II), edited by Gabriele Meiselwitz

16. LNAI 12196, Augmented Cognition: Theoretical and Technological Approaches (Part I), edited by Dylan D. Schmorrow and Cali M. Fidopiastis

17. LNAI 12197, Augmented Cognition: Human Cognition and Behaviour (Part II), edited by Dylan D. Schmorrow and Cali M. Fidopiastis

18. LNCS 12198, Digital Human Modeling & Applications in Health, Safety, Ergonomics & Risk Management: Posture, Motion and Health (Part I), edited by Vincent G. Duffy

19. LNCS 12199, Digital Human Modeling & Applications in Health, Safety, Ergonomics & Risk Management: Human Communication, Organization and Work (Part II), edited by Vincent G. Duffy

20. LNCS 12200, Design, User Experience, and Usability: Interaction Design (Part I), edited by Aaron Marcus and Elizabeth Rosenzweig

21. LNCS 12201, Design, User Experience, and Usability: Design for Contemporary Interactive Environments (Part II), edited by Aaron Marcus and Elizabeth Rosenzweig

22. LNCS 12202, Design, User Experience, and Usability: Case Studies in Public and Personal Interactive Systems (Part III), edited by Aaron Marcus and Elizabeth Rosenzweig

23. LNCS 12203, Distributed, Ambient and Pervasive Interactions, edited by Norbert Streitz and Shin'ichi Konomi

24. LNCS 12204, HCI in Business, Government and Organizations, edited by Fiona Fui-Hoon Nah and Keng Siau

25. LNCS 12205, Learning and Collaboration Technologies: Designing, Developing and Deploying Learning Experiences (Part I), edited by Panayiotis Zaphiris and Andri Ioannou

26. LNCS 12206, Learning and Collaboration Technologies: Human and Technology Ecosystems (Part II), edited by Panayiotis Zaphiris and Andri Ioannou

27. LNCS 12207, Human Aspects of IT for the Aged Population: Technologies, Design and User Experience (Part I), edited by Qin Gao and Jia Zhou

28. LNCS 12208, Human Aspects of IT for the Aged Population: Healthy and Active Aging (Part II), edited by Qin Gao and Jia Zhou

29. LNCS 12209, Human Aspects of IT for the Aged Population: Technology and Society (Part III), edited by Qin Gao and Jia Zhou

30. LNCS 12210, HCI for Cybersecurity, Privacy and Trust, edited by Abbas Moallem

31. LNCS 12211, HCI in Games, edited by Xiaowen Fang

32. LNCS 12212, HCI in Mobility, Transport and Automotive Systems: Automated Driving and In-Vehicle Experience Design (Part I), edited by Heidi Krömker

33. LNCS 12213, HCI in Mobility, Transport and Automotive Systems: Driving Behavior, Urban and Smart Mobility (Part II), edited by Heidi Krömker

34. LNCS 12214, Adaptive Instructional Systems, edited by Robert A. Sottilare and Jessica Schwarz

35. LNCS 12215, Culture and Computing, edited by Matthias Rauterberg

36. LNCS 12216, Design, Operation and Evaluation of Mobile Communications, edited by Gavriel Salvendy and June Wei

37. LNCS 12217, Artificial Intelligence in HCI, edited by Helmut Degen and Lauren Reinerman-Jones

38. CCIS 1224, HCI International 2020 Posters (Part I), edited by Constantine Stephanidis and Margherita Antona

39. CCIS 1225, HCI International 2020 Posters (Part II), edited by Constantine Stephanidis and Margherita Antona

40. CCIS 1226, HCI International 2020 Posters (Part III), edited by Constantine Stephanidis and Margherita Antona

41. LNCS 12423, HCI International 2020 – Late Breaking Papers: User Experience Design and Case Studies, edited by Constantine Stephanidis, Aaron Marcus, Elizabeth Rosenzweig, P. L. Patrick Rau, Abbas Moallem, and Matthias Rauterberg

42. LNCS 12424, HCI International 2020 – Late Breaking Papers: Multimodality and Intelligence, edited by Constantine Stephanidis, Masaaki Kurosu, Helmut Degen, and Lauren Reinerman-Jones

43. LNCS 12425, HCI International 2020 – Late Breaking Papers: Cognition, Learning and Games, edited by Constantine Stephanidis, Don Harris, Wen-Chin Li, Dylan D. Schmorrow, Cali M. Fidopiastis, Panayiotis Zaphiris, Andri Ioannou, Xiaowen Fang, Robert Sottilare, and Jessica Schwarz

44. LNCS 12426, HCI International 2020 – Late Breaking Papers: Universal Access and Inclusive Design, edited by Constantine Stephanidis, Margherita Antona, Qin Gao, and Jia Zhou

45. LNCS 12427, HCI International 2020 – Late Breaking Papers: Interaction, Knowledge and Social Media, edited by Constantine Stephanidis, Gavriel Salvendy, June Wei, Sakae Yamamoto, Hirohiko Mori, Gabriele Meiselwitz, Fiona Fui-Hoon Nah, and Keng Siau

46. LNCS 12428, HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality, edited by Constantine Stephanidis, Jessie Y. C. Chen, and Gino Fragomeni

47. LNCS 12429, HCI International 2020 – Late Breaking Papers: Digital Human Modeling and Ergonomics, Mobility and Intelligent Environments, edited by Constantine Stephanidis, Vincent G. Duffy, Norbert Streitz, Shin'ichi Konomi, and Heidi Krömker

48. CCIS 1293, HCI International 2020 – Late Breaking Posters (Part I), edited by Constantine Stephanidis, Margherita Antona, and Stavroula Ntoa

49. CCIS 1294, HCI International 2020 – Late Breaking Posters (Part II), edited by Constantine Stephanidis, Margherita Antona, and Stavroula Ntoa

http://2020.hci.international/proceedings

The full list with the Program Board Chairs and the members of the Program Boards of all thematic areas and affiliated conferences is available online at:

http://www.hci.international/board-members-2020.php

HCI International 2021

The 23rd International Conference on Human-Computer Interaction, HCI International 2021 (HCII 2021), will be held jointly with the affiliated conferences in Washington DC, USA, at the Washington Hilton Hotel, July 24–29, 2021. It will cover a broad spectrum of themes related to human-computer interaction (HCI), including theoretical issues, methods, tools, processes, and case studies in HCI design, as well as novel interaction techniques, interfaces, and applications. The proceedings will be published by Springer. More information will be available on the conference website: http://2021.hci.international/

General Chair

Prof. Constantine Stephanidis

University of Crete and ICS-FORTH

Heraklion, Crete, Greece

Email: general_chair@hcii2021.org

http://2021.hci.international/

Contents

Multimodal Interaction

Eye Movement Classification Algorithms: Effect of Settings on Related Metrics
Amin G. Alhashim

An Antenatal Care Awareness Prototype Chatbot Application Using a User-Centric Design Approach
Mohammed Bahja, Nour Abuhwaila, and Julia Bahja

A User-Centric Framework for Educational Chatbots Design and Development
Mohammed Bahja, Rawad Hammad, and Gibran Butt

CollegeBot: A Conversational AI Approach to Help Students Navigate College
Mohinish Daswani, Kavina Desai, Mili Patel, Reeya Vani, and Magdalini Eirinaki

User Expectations of Social Robots in Different Applications: An Online User Study
Xiao Dou, Chih-Fu Wu, Xi Wang, and Jin Niu

Creating Emotional Attachment with Assistive Wearables
Neda Fayazi and Lois Frankel

AuDimo: A Musical Companion Robot to Switching Audio Tracks by Recognizing the Users Engagement
W. K. N. Hansika, Lakindu Yasassri Nanayakkara, Adhisha Gammanpila, and Ravindra de Silva

Transmission of Rubbing Sensation with Wearable Stick-Slip Display and Force Sensor
Honoka Haramo, Vibol Yem, and Yasushi Ikei

Reading Aloud in Human-Computer Interaction: How Spatial Distribution of Digital Text Units at an Interactive Tabletop Contributes to the Participants' Shared Understanding
Svenja Heuser, Béatrice Arend, and Patrick Sunnen

Speech Recognition Approach for Motion-Enhanced Display in ARM-COMS System
Teruaki Ito, Takashi Oyama, and Tomio Watanabe

Individual's Neutral Emotional Expression Tracking for Physical Exercise Monitoring
Salik Ram Khanal, Jaime Sampaio, João Barroso, and Vitor Filipe

Exploring Pointer Assisted Reading (PAR): Using Mouse Movements to Analyze Web Users' Reading Behaviors and Patterns
Ilan Kirsh and Mike Joy

The Effects of Robot Appearances, Voice Types, and Emotions on Emotion Perception Accuracy and Subjective Perception on Robots
Sangjin Ko, Xiaozhen Liu, Jake Mamros, Emily Lawson, Haley Swaim, Chengkai Yao, and Myounghoon Jeon

Development for Tablet-Based Perimeter Using Temporal Characteristics of Saccadic Durations
Naoki Maeshiba, Kentaro Kotani, Satoshi Suzuki, and Takafumi Asao

Automatic Page-Turner for Pianists with Wearable Motion Detector
Seyed Ali Mirazimzadeh and Victoria McArthur

A Sociable Robotic Platform to Make Career Advices for Undergraduates
W. K. Malithi Mithsara, Udaka A. Manawadu, and P. Ravindra S. De Silva

Development and Evaluation of a Pen Type Thermal Sensation Presentation Device for SPIDAR-Tablet
Kaede Nohara, Yasuna Kubo, Makoto Sato, Takehiko Yamaguchi, and Tetsuya Harada

CountMarks: Multi-finger Marking Menus for Mobile Interaction with Head-Mounted Displays
Jordan Pollock and Robert J. Teather

Single-Actuator Simultaneous Haptic Rendering for Multiple Vital Signs
Juliette Regimbal, Nusaiba Radi, Antoine Weill–Duflos, and Jeremy R. Cooperstock

Development of an Interface that Expresses Twinkling Eyes by Superimposing Human Shadows on Pupils
Yoshihiro Sejima, Makiko Nishida, and Tomio Watanabe

MUCOR: A Multiparty Conversation Based Robotic Interface to Evaluate Job Applicants
H. A. S. D. Senaratna, Udaka A. Manawadu, W. K. N. Hansika, S. W. A. M. D. Samarasinghe, and P. Ravindra S. De Silva

Usability Evaluation of Smartphone Keyboard Design from an Approach of Structural Equation Model
Yincheng Wang, Junyu Huo, Yuqi Huang, Ke Wang, Di Wu, and Jibo He

Understanding Voice Search Behavior: Review and Synthesis of Research
Zhaopeng Xing, Xiaojun Yuan, Dan Wu, Yeman Huang, and Javed Mostafa

Evaluation of Speech Input Recognition Rate of AR-Based Drawing Application on Operation Monitor for Communication Support During Endoscopic Surgery
Takuto Yajima, Takeru Kobayashi, Kentaro Kotani, Satoshi Suzuki, Takafumi Asao, Kazutaka Obama, Atsuhiko Sumii, and Tatsuto Nishigori

TracKenzan: Digital Flower Arrangement Using Trackpad and Stylus Pen
Anna Yokokubo, Yuji Kato, and Itiro Siio

Mapping Between Mind Cybernetics and Aesthetic Structure in Real-Time EEG Art
Minli Zhang, Yiyuan Huang, Salah Uddin Ahmed, and Mohammad Shidujaman

User Experience Analysis for Visual Expression Aiming at Creating Experience Value According to Time Spans
Cairen Zhuoma, Keiko Kasamatsu, and Takeo Ainoya

AI in HCI

Arny: A Study of a Co-creative Interaction Model Focused on Emotion Feedback
Sarah Abdellahi, Mary Lou Maher, Safat Siddiqui, Jeba Rezwana, and Ali Almadan

Towards Intelligent Technology in Art Therapy Contexts
Woud AlSadoun, Nujood Alwahaibi, and Lean Altwayan

Explainable Classification of EEG Data for an Active Touch Task Using Shapley Values
Haneen Alsuradi, Wanjoo Park, and Mohamad Eid

SANDFOX Project: Optimizing the Relationship Between the User Interface and Artificial Intelligence to Improve Energy Management in Smart Buildings
Christophe Bortolaso, Stéphanie Combettes, Marie-Pierre Gleizes, Berangere Lartigue, Mathieu Raynal, and Stéphanie Rey

Safety Analytics for AI Systems
Yang Cai

Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach
Upol Ehsan and Mark O. Riedl

The Power of Augmented Reality and Artificial Intelligence During the Covid-19 Outbreak
Chutisant Kerdvibulvech and Liming (Luke) Chen

V-Dream: Immersive Exploration of Generative Design Solution Space
Mohammad Keshavarzi, Ardavan Bidgoli, and Hans Kellner

Usability in Mixed Initiative Systems
Sachin Kumarswamy

Human Versus Machine and Human-Machine Teaming on Masked Language Modeling Tasks
Ming Qian and Davis Qian

Using Artificial Intelligence to Predict Academic Performance
Arsénio Reis, Tânia Rocha, Paulo Martins, and João Barroso

Why Did the Robot Cross the Road?: A User Study of Explanation in Human-Robot Interaction
Zachary Taschdjian

Author Index

Multimodal Interaction

Eye Movement Classification Algorithms: Effect of Settings on Related Metrics

Amin G. Alhashim

The University of Oklahoma, Norman, OK 73069, USA
alhashim@ou.edu

https://about.me/am1ngh

Abstract. The basic building blocks of any eye tracking research are eye fixations. These eye fixations depend on finer data gathered by the eye tracker device, the raw gaze data. There are many algorithms that can be used to transform the raw gaze data into eye fixations. However, these algorithms require one or more thresholds to be set. Knowledge of the most appropriate values for these thresholds is necessary in order for these algorithms to generate the desired output. This paper examines the effect of a set of different settings of the two thresholds required for the identification-dispersion threshold type of algorithms, the dispersion and duration thresholds, on the generated eye fixations. Since this work is at its infancy, the goal of this paper is to generate and visualize the result of each setting and leave the choice to the readers to decide which setting fits their future eye tracking research.

Keywords: Eye tracking · Eye movement classification algorithms · Fixation duration metric

1 Introduction

It is almost impossible to find an eye tracking study that does not utilize eye fixations in one way or the other. Eye fixations can be used as a standalone metric or as a basis for other metrics such as fixation sequences (Holmqvist et al. 2015). These eye fixations are the result of a transformation process performed on the raw gaze data produced by the eye tracker devices. There are different categories of algorithms that can be used to do such a transformation (Duchowski 2007; Salvucci and Goldberg 2000). The focus of this paper is the identification-dispersion threshold (I-DT) category. This category of algorithms has two thresholds that need to be set: the dispersion threshold and the duration threshold, which are explained in Sect. 2.4.2.1 and Sect. 2.4.2.2, respectively. Salvucci and Goldberg (2000) and Blignaut (2009) suggested a dispersion threshold between 0.5° and 1.0°, while Jacob and Karn (2003) decided to fix it to 2.0°. Similarly, Salvucci and Goldberg (2000) and Jacob and Karn (2003) suggested a value between 100 ms and 200 ms for the duration threshold, while Nyström and Holmqvist (2010) suggested a shorter period between 80 ms and

150 ms. Table 1 summarizes the different values suggested by different researchers in the literature.

This paper explores the effect of some of the suggested values, beside others, for the dispersion and duration thresholds on the resulting eye fixations and scanpath sequences, without statistically testing for any significant difference between any of the different settings.

Table 1. Summary of some of the suggested values for the dispersion and duration thresholds of the identification-dispersion threshold (I-DT) algorithms in the literature

To explore the effect of the different settings of the dispersion and duration thresholds on the eye fixations generated by the I-DT algorithms, and then on the scanpath strings built based on the eye fixations, the process leading to the final result, the scanpath strings, is explained in detail. The process consists of four steps: (1) cleaning the raw gaze data packets generated by the eye tracker device (Sect. 2.4.1), (2) selecting/calculating the threshold values for the I-DT algorithm (Sect. 2.4.2), (3) computing the eye fixations using the I-DT algorithm, and (4) computing the scanpath strings based on the eye fixations (Sect. 2.4.3). The eye fixations and scanpath strings resulting from the different settings of the dispersion and duration thresholds are reported in Sect. 2.5. The paper ends with a discussion (Sect. 3) and a conclusion (Sect. 4).

2 Methods

2.1 Apparatus

A Tobii Pro TX300 (120 Hz) eye tracker system with a 24-inch monitor was used to collect the raw gaze data packets. The eye tracker was positioned below the monitor and was not in direct contact with the participant. Different values for the visual angle accuracy were tested for their effect on the eye fixations and the scanpath sequences, as described in Sect. 2.4.2.1. The distance between the participant's eyes and the eye tracker device is calculated dynamically and is also addressed in Sect. 2.4.2.1, along with the visual angle accuracy. The different thresholds that have been tested to define the eye fixations are discussed in Sect. 2.4.2.2.

2.2 Participants

One 35-year-old male student from the University of Oklahoma took part in the experiment. The participant does not have any visual impairments.

2.3 Procedure

The participant was asked to read a medical text that contains some difficult words (Fig. 1). This reading task was preceded by a calibration session for the user's vision. The participant was asked to move away from the eye tracker device once he finished reading the text, to avoid recording any gaze data packets unrelated to the experiment.

Fig. 1. The medical paragraph displayed to the participant

The vision of the participant was calibrated using the Matlab function, HandleCalibWorkflow, provided by Tobii Technology Inc. in the Matlab binding API (Tobii Technology Inc. 2015). This script is part of the RealtimeEyeTrackingSample example and was used without modification. At the end of the calibration process, a plot of the offset between each eye gaze data point and the calibration points is shown (Fig. 2). The participant was directed to press 'y' on the keyboard if the calibration process was successful, or 'n' to repeat the calibration. After successfully completing the calibration process and pressing 'y' on the keyboard, the text for the reading task was shown immediately, and the eye tracker device started recording the raw gaze data packets after pausing for 3 s. The raw gaze data packets can be obtained by calling the readGazeData function in the Matlab binding API.

Fig. 2. The offset between each left (shown in red) and right (shown in green) eye gaze data point and the calibration points of the participant (shown in blue) (Color figure online)

2.4 Data Analysis

2.4.1 Data Cleaning

Each raw gaze data packet holds 13 pieces of information about each eye. More information about each piece of the raw gaze data can be found in the Tobii Analytics SDK Developer Guide, available as a PDF file when downloading the Tobii Analytics SDK (Tobii Technology Inc. 2015) or directly via the Acuity website (Tobii Technology Inc. 2013). One of the pieces of information in the raw gaze data packet is the validity code. This code represents how confident the eye tracker system was when assigning a particular eye gaze data packet to a particular eye. This code ranges from 0 (most confident) to 4 (least confident).

A validity code of 0 assigned to a raw eye data packet captured at a specific point in time for the left and the right eye indicates that the eye tracker is very certain that each of the data packets is associated with the corresponding eye. On the other hand, if the validity code of the raw data packet of one eye is 0 and 4 for the other, the eye tracker detects an eye, and it is most probably the eye that has a validity code of 0.

The Data Validity Codes table in the Tobii Analytics SDK Developer Guide summarizes all the possible combinations of the validity codes of the two eyes. The table shows that a validity code of 1 for one eye is only possible with a validity code of 3 or more for the other eye. Also, the only possible combination with a validity code of 2 for one eye is a validity code of 2 for the other eye, which means that the eye tracker detected one eye but is not certain which eye, left or right, this raw gaze data packet represents.

Since we are interested in capturing the gaze of both eyes with a high level of certainty, any raw gaze data packet captured at any specific point in time that holds a validity code value that is not 0 for either eye will be discarded. The raw gaze data packets for both eyes, after filtering the invalid packets, will be combined into a single array of 26 columns; 13 identical columns for each eye.
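The filtering and merging step described above can be sketched in a few lines. This is an illustrative Python sketch, not the code used in the study (which worked through the Tobii Matlab binding); the packet layout and the position of the validity code within the 13 fields are assumptions made for the example.

```python
def combine_valid_packets(left_packets, right_packets, validity_index=12):
    """Merge per-eye raw gaze packets into 26-column rows, discarding any
    sample where either eye's validity code is not 0 (most confident)."""
    combined = []
    for left, right in zip(left_packets, right_packets):
        if left[validity_index] == 0 and right[validity_index] == 0:
            combined.append(list(left) + list(right))  # 13 + 13 columns
    return combined
```

A sample is kept only when both eyes carry validity code 0, matching the rule stated above; relaxing the condition (e.g., accepting 0/1 combinations) would keep more data at the cost of certainty.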

2.4.2ThresholdsSelection

InorderfortheI-DTalgorithmtorun,twothresholdsmustbeprovidedalongsidetherawgazedatapackets.Oneofthesethresholdsisthemaximumdispersion(Sect.2.4.2.1)andtheotheristheminimumduration(Sect.2.4.2.2).

2.4.2.1DispersionThreshold. Thedispersionthresholdrepresentsthemaximum distancethattheID-Talgorithmshouldtoleratewhencomputingtheeyefixationspoints.Thisdistancehasthesamemeasurementunitofthecoordinates oftheprovidedgazedata.Thevalueassignedtothisthresholddependsonthe visualerroranglethattheeyetrackersystemsuffersfrom.Tobii,intheProduct DescriptionofitsTobiiProTX300EyeTracker(TobiiTechnologyInc. 2010), reportsavisualerroranglebetween0.4◦ and0.6◦ fordifferentconditions.

This value assigned to the dispersion threshold means that if a group of raw gaze points is in close proximity to each other and the maximum distance between any two of them is less than this dispersion threshold value, then this group of raw gaze points is consolidated into a single point called an eye fixation, given that they satisfy the minimum duration threshold explained in Sect. 2.4.2.2.

The calculation of the dispersion threshold depends on two factors: the distance of the user from the eye tracker system and the visual error angle of the eye tracker system. The distance of the user from the eye tracker system can be elicited from the raw gaze data packets generated by the eye tracker system as the eye z-coordinate value in millimeters (more information can be found in the Tobii Analytics SDK Developer Guide). The average value of these z-coordinates for both eyes constitutes the user's eye distance from the eye tracker system.

The visual error angle reported in the Product Description document of the Tobii TX300 Eye Tracker (Tobii Technology Inc. 2010) is generally between 0.4° and 0.6°, depending on the lighting conditions. Visual error angles of 0.4°, 0.5°, and 0.6° were used to calculate the dispersion threshold.

The dispersion threshold (in millimeters) is calculated as follows:

dispersionThreshold = eyeDistance × sin(visualErrorAngle)

After plugging the eye distance, which is usually 650 mm, and the visual error angles into the equation, dispersion thresholds of 4.5 mm, 5.7 mm, and 6.8 mm were obtained. Figure 3 visually illustrates the calculation process.
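The calculation above can be reproduced directly; this small sketch evaluates the formula for the three visual error angles at the 650 mm eye distance stated in the text (the function name is illustrative).

```python
import math

def dispersion_threshold_mm(eye_distance_mm, visual_error_deg):
    # dispersionThreshold = eyeDistance * sin(visualErrorAngle)
    return eye_distance_mm * math.sin(math.radians(visual_error_deg))

for angle in (0.4, 0.5, 0.6):
    print(round(dispersion_threshold_mm(650, angle), 1))
# prints 4.5, 5.7, 6.8 -- matching the thresholds reported above
```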

Tobii provides the 2-dimensional eye gaze positions in what is called the Active Display Coordinate System (ADCS). This system encodes the x and y coordinates of the gaze position of both eyes using values between 0 and 1 (see Fig. 4). A value of (0, 0) indicates a gaze position at the top left corner of the active display area, which is the monitor in our case. On the other hand, a value of (1, 1) represents a gaze position at the lower right corner of the active display area. For more information about this system, refer to the Tobii Analytics SDK Developer Guide.

Since the dispersion threshold measurement system is different from the measurement system of the eye gaze positions, the eye gaze positions will be converted to the dispersion threshold measurement system, which is the millimeter system. To convert from the ADCS system, which is the eye gaze positions measurement system, knowledge of the size of the active display (the monitor) used in the study is necessary. The monitor attached to the Tobii TX300 Eye Tracker system measures 508 mm in length and 287 mm in width. The conversion from the ADCS system to the millimeter system is straightforward: the x-coordinate of each eye gaze position needs to be multiplied by the length of the screen, which is 508 mm, and the y-coordinate needs to be multiplied by the width of the monitor, which is 287 mm.
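The ADCS-to-millimeter conversion just described is a simple scaling; a minimal sketch, using the 508 mm × 287 mm monitor dimensions stated above (names are illustrative):

```python
SCREEN_LENGTH_MM = 508  # horizontal extent of the active display
SCREEN_WIDTH_MM = 287   # vertical extent of the active display

def adcs_to_mm(x_adcs, y_adcs):
    """Convert a normalized ADCS gaze position (0..1 in each axis,
    (0, 0) = top left, (1, 1) = lower right) to millimeters."""
    return x_adcs * SCREEN_LENGTH_MM, y_adcs * SCREEN_WIDTH_MM
```

For example, `adcs_to_mm(0, 0)` gives the top-left corner (0, 0) and `adcs_to_mm(1, 1)` gives the lower-right corner (508, 287), so the converted positions share the millimeter unit of the dispersion threshold.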

2.4.2.2 Duration Threshold. The duration threshold represents the minimum time (in milliseconds) after which a set of the raw gaze data packets, if they satisfy the maximum dispersion threshold, will be turned into an eye fixation point. The common duration threshold suggested by the literature ranges from 50 to 200 ms (Salvucci and Goldberg 2000; Jacob and Karn 2003; Blascheck et al. 2017; Nyström and Holmqvist 2010; Blignaut 2009). Four different duration thresholds, namely, 50, 100, 150, and 200 ms will be examined for their effect on

Fig. 3. Dispersion threshold calculations
Fig. 4. Different positioning systems

the eye fixations generated by the I-DT algorithm. Along with these common duration threshold values, two more values, 500 and 1000 ms, were investigated.

2.4.3 Finding Scanpath Strings

Finding the scanpath string for each combination of the dispersion and duration thresholds' values goes through two stages. The first stage entails determining the coordinates of the bounding boxes of each area of interest (AOI) for the sentences (Sect. 2.4.3.1) and words (Sect. 2.4.3.3). The second stage entails implementing a script that computationally decides to which AOI (or AOIs) each of the eye fixations belongs (Sect. 2.4.3.2). An eye fixation will belong to more than one AOI in case the AOIs are interleaving. In the case where no AOI interleaves with another, each eye fixation will fall into a single AOI.

2.4.3.1 Determining AOI Coordinates of Sentences. The easiest way to determine the coordinates of any AOI is to describe it as a set of rectangles (see Fig. 5). Doing so will ensure a smoother implementation of the algorithm that will find the scanpath string. Any rectangular AOI can be described by 2 points. The upper-left corner and the lower-right corner points are used here to describe each AOI. A single AOI could be described by multiple rectangles, as in the case of AOI A and AOI B in Fig. 5.

Fig. 5. Representing the AOIs of the sentences as a set of rectangles. An AOI can be a combination of multiple rectangles, like AOI A, which is a combination of two rectangles: the first rectangle has the coordinates (476, 155) and (1508, 262) and the second rectangle has the coordinates (476, 262) and (1096, 372)

2.4.3.2 Finding the Scanpath String of Sentences. After determining the coordinates of the rectangles that specify all the sentences' AOIs, a script will be executed to determine the scanpath string for each set of eye fixations. The ScanpathFinder script goes through the eye fixations one by one and determines to which AOI (or AOIs) each eye fixation point belongs. To determine whether an eye fixation point that is represented by (x, y) is within an AOI represented by (xul, yul) and (xlr, ylr), the following Boolean value is calculated:

d = (xul ≤ x ≤ xlr) ∧ (yul ≤ y ≤ ylr)

The value of d is true when the eye fixation point is inside the AOI or on its boundary, and it is false when the eye fixation point is outside the AOI. Figure 6 illustrates the idea pictorially.

Fig. 6. Determining if an eye fixation point is inside the AOI represented by (xul, yul) = (476, 262) and (xlr, ylr) = (1096, 372): (a) illustrates an eye fixation with coordinates (x, y) = (615, 325) that resides inside the AOI and (b) illustrates an eye fixation with coordinates (x, y) = (1200, 325) that resides outside the AOI
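The membership test and the resulting scanpath string can be sketched as follows. This is an illustrative Python rendition, not the paper's ScanpathFinder script itself; it also handles AOIs described by several rectangles, as in Fig. 5.

```python
def inside(fix, rect):
    """d is true when the fixation lies inside the AOI or on its boundary."""
    (x, y) = fix
    ((xul, yul), (xlr, ylr)) = rect
    return xul <= x <= xlr and yul <= y <= ylr

def scanpath_string(fixations, aois):
    """aois: mapping from an AOI label to its list of rectangles, since a
    single AOI may be described by multiple rectangles (AOI A in Fig. 5).
    A fixation contributes one label per AOI it falls into."""
    path = []
    for fix in fixations:
        for label, rects in aois.items():
            if any(inside(fix, r) for r in rects):
                path.append(label)
    return "".join(path)
```

Using the coordinates from Fig. 5 and Fig. 6, the fixation (615, 325) falls inside AOI A's second rectangle while (1200, 325) falls outside it, so only the first fixation contributes a label to the scanpath string.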

2.4.3.3 Determining AOI Coordinates of Words. In order to build the scanpath string at a lower level of granularity, i.e., at the word level, the bounding box for each word must be determined. Since there are 69 words in the medical paragraph (Fig. 1), it would be tedious and error-prone to manually find the coordinates of each word AOI. To make the process systematic and less susceptible to errors, the WordsBoundingBoxCoordinatesFinder script has been implemented. This Matlab script shows the stimulus, which is in our case the medical paragraph, in the same dimensions as it would appear to the participant. In order for the script to generate the coordinates of the AOIs of the words, the user of the script needs first to move the mouse cursor to the right and then to the left boundary of the stimulus and click. This step will record the x-coordinates of the left and right boundaries of the stimulus that will be used in the subsequent steps to determine the coordinates of the bounding boxes of those AOIs that fall on the left and right edges of the stimulus. Second, the user of the script is required to move the mouse cursor vertically and click on the boundary of each line, starting from the upper boundary of the first line and ending with the lower boundary of the last line. This step will record the y-coordinates of the bounding boxes that will be defined in the next step. Now that the left and right boundaries of the stimulus as well as the boundary of each row have been captured, the user of the script needs to go over the text and click on the imaginary middle vertical line boundary between each two words.

The imaginary middle vertical line boundary between any two words is shown as a dashed line in Fig. 7. The user of the script needs to hit Enter/Return after clicking on the boundary between the last two words in each line. When the user of the script reaches the last line and follows that with an Enter/Return, the script will generate the coordinates of the AOIs with their labels. The labels of these AOIs are based on earlier information provided to the script about the name of each paragraph AOI and the number of words it contains.
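The geometric step of turning the recorded clicks into word bounding boxes can be sketched as follows. This is a hypothetical Python reconstruction (the paper's script is in Matlab, and all names here are assumptions): given the stimulus's left and right x-boundaries, the y-coordinates of the line boundaries, and the clicked word-separator x-coordinates for each line, it emits one (upper-left, lower-right) rectangle per word.

```python
def word_boxes(left_x, right_x, line_ys, separators_per_line):
    """line_ys: y-coordinates of the line boundaries, top to bottom
    (n lines -> n + 1 values).
    separators_per_line: for each line, the x-coordinates clicked on the
    imaginary mid-vertical boundaries between consecutive words."""
    boxes = []
    for i, seps in enumerate(separators_per_line):
        top, bottom = line_ys[i], line_ys[i + 1]
        # Each word spans from one separator (or the stimulus edge) to the next.
        xs = [left_x] + sorted(seps) + [right_x]
        for j in range(len(xs) - 1):
            boxes.append(((xs[j], top), (xs[j + 1], bottom)))
    return boxes
```

A line with k separator clicks yields k + 1 word boxes, with the first and last boxes closed off by the stimulus's left and right edges recorded in the first step.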

2.4.3.4 Finding the Scanpath String of Words. Finding the scanpath string of the words follows exactly the same procedure mentioned in Sect. 2.4.3.2. However, the algorithm is provided with the coordinates of the bounding boxes of the words' AOIs instead of the bounding boxes of the sentences' AOIs.

2.5 Results

The process of determining the eye fixations from the raw gaze data has been tested on one participant, and the result is discussed in Sect. 2.5.1. The scanpath strings based on the sentence level are presented in Sect. 2.5.2, and those based on the word level are presented in Sect. 2.5.3.

Fig. 7. Illustration of the bounding boxes of the words' AOIs

2.5.1 Determining the Eye Fixations

The I-DT algorithm has been executed 12 times, using all the possible combinations of the selected values for the dispersion and duration thresholds. A reading task of a short medical paragraph (Fig. 1) that consists of a total of 69 words has been given to one participant to collect the raw gaze data for use in the I-DT algorithm.

Figure 8 shows the eye fixations generated by the I-DT algorithm using the different values for the duration threshold but keeping the dispersion threshold fixed at 0.4°. Similarly, Fig. 9 and Fig. 10 show the eye fixations generated by the I-DT algorithm with the different values of the duration threshold but fixing the dispersion threshold at 0.5° and 0.6°, respectively. Table 2 summarizes the number of eye fixations generated by the I-DT algorithm under the different settings. Generally, the number of eye fixations monotonically decreases as the duration threshold value increases. Across the different duration threshold values, the number of eye fixations also tends to be lower for higher dispersion threshold values, as shown in Fig. 11.

Table 2. Summary of the number of eye fixations generated by the I-DT algorithm under the different settings

Fig. 8. The eye fixations generated by the I-DT algorithm using a dispersion threshold value of 0.4° and the different duration threshold values

Fig. 9. The eye fixations generated by the I-DT algorithm using a dispersion threshold value of 0.5° and the different duration threshold values

Fig. 10. The eye fixations generated by the I-DT algorithm using a dispersion threshold value of 0.6° and the different duration threshold values
