

Aalborg University Copenhagen
Frederikskaj 12, DK-2450 Copenhagen SV

Semester: 3
Title: Interactive Hologram
Project Period: 9th September 2016 – 19th December 2016
Semester Theme: Visual Computing – Human Perception
Semester Coordinator: Sofia Dahl
Supervisor(s): Dan Overholt
Secretary: Lisbeth Nykjær
Project group no.: 308
Members: Anders Ravn Ipsen, Jan Kanty Janiszewski, Joakim Carlsen, Mikkel Thynov, Nicolai Grum, Lukas Aaron Jørgensen

Abstract:

This study examines whether single-user or multi-user interaction is better suited for a Dreamoc holographic display exhibit at the Viking Ship Museum. Interviews with the Dreamoc's creator, RealFiction, and with the Viking Ship Museum provided the knowledge needed to develop the prototype, while user experience and motivation were the central research topics informing the design decisions and the testing of the prototype. Single- and multi-user interactions were evaluated by visitors at the Viking Ship Museum. Although the results are not fully conclusive because of a small sample size, the data indicates that a collaborative, interactive use of a Dreamoc demonstrates its usefulness better than one designed for a single user. This study provides a basic framework for further development of future projects using similar technology and invites further research towards improving it.

Tobias Dalsgaard Larsen

Copyright © 2016. This report and/or appended material may not be partly or completely published or copied without prior written approval from the authors. Neither may the contents be used for commercial purposes without this written approval.

Copies:


Contents

1 Analysis  8
  1.1 Introduction  8
    1.1.1 Initial Problem Statement  8
  1.2 Analysis of the Dreamoc  9
  1.3 SOTA  11
  1.4 Viking Ship Museum  13
  1.5 Visitors at the VSM  14
    1.5.1 Expert Interview with Andreas Kallmeyer Bloch  14
    1.5.2 Relevant data gathered from VSM annual report  14
    1.5.3 Characteristics and capabilities  14
    1.5.4 Expert interview: Why do people visit the VSM?  15
    1.5.5 Sub Conclusion  15
  1.6 Motivation  17
    1.6.1 What is motivation?  17
    1.6.2 Intrinsic motivation  17
    1.6.3 Extrinsic motivation  17
    1.6.4 Sub Conclusion  18
  1.7 Interaction Tools  19
    1.7.1 Kinect vs Leap Motion Controller  19
    1.7.2 How Leap Motion works  19
    1.7.3 Muscle fatigue  19
    1.7.4 Sub Conclusion  20
  1.8 User Experience  21
    1.8.1 Natural User Interface  21
    1.8.2 Seven Laws of UI design  21
    1.8.3 Designing for multiple users  22
    1.8.4 Sub Conclusion  23
  1.9 Analysis Conclusion  24

2 FPS & Requirements  25
  2.1 FPS  25
  2.2 Requirements  25
    2.2.1 Non-Functional Requirements  25
    2.2.2 Functional Requirements  26

3 Methods  27
    3.0.1 Report Format  30

4 First Iteration  32
  4.1 Design  32
  4.2 Concept Development  33
  4.3 Concept description  34
  4.4 Scope  36
  4.5 Structure  37
  4.6 Paper prototype  38
  4.7 Final concept  39
  4.8 Skeleton  40
    4.8.1 The Seven Laws of UI  40
    4.8.2 Idle Mode  40
    4.8.3 Chopping wood  41
    4.8.4 Placing planks  41
  4.9 Surface  42
  4.10 Sub Conclusion  43
  4.11 Implementation  44
    4.11.1 Models and other visual representations  44
    4.11.2 Camera setup  44
    4.11.3 Colour Tracking  45
    4.11.4 Python  45
    4.11.5 Unity3D  47
    4.11.6 Log Animation  48
    4.11.7 Leap Motion  49
    4.11.8 Assemble Ship  49
  4.12 Usability test  51
    4.12.1 Theory  52
    4.12.2 Sampling  52
    4.12.3 Setting  52
    4.12.4 Test Procedure  53
    4.12.5 Tasks  54
  4.13 Test and Results  55
    4.13.1 Colour tracking  55
    4.13.2 Leap Motion  55
    4.13.3 Interaction  55
  4.14 Discussion  56
    4.14.1 Colour Tracking  56
    4.14.2 Leap Motion  56
    4.14.3 Interaction  56
    4.14.4 Sub Conclusion  56

5 Second Iteration  57
  5.1 Design  57
    5.1.1 Requirements  57
  5.2 Implementation  58
    5.2.1 Bonfire and Steam  58
    5.2.2 Removing log  58
    5.2.3 Axe size  59
    5.2.4 Sail on ship  59
    5.2.5 Ship Animation / Idle mode  59
    5.2.6 Highlights  59
    5.2.7 Introduction Video  60
    5.2.8 Sub Conclusion  60
  5.3 Final Evaluation  61
    5.3.1 Theory  61
    5.3.2 Observation  61
    5.3.3 Interview  62
    5.3.4 Sampling  62
    5.3.5 Setting  62
    5.3.6 Test Procedure  62
    5.3.7 Results  63
    5.3.8 Interview  63

6 Discussion  66
    6.0.1 Validity and Bias  67

7 Re-design  69
    7.0.1 Leap Motion  69
    7.0.2 Colour tracking  69
    7.0.3 Learning experience  69

8 Future works  70

9 Conclusion  71

10 References  72

Appendices  74

A RealFiction Interview  75
B Interview with Andreas K. Bloch  76
C Graph from the Viking Ship Museums Annual Report 2015  77
D The Viking Ship Museums Annual Report 2015  78
E Skeleton Of Each Step  79
F Consent Form  83
G Test Procedure for Usability Test  84
H Poster Guide for Usability Test  85
I Observation Sheet  86
J Observation Data  87
K Interview Data  88
L Online Questionnaire  90
M Observation Notes for Final Evaluation  91
N Interview Questions  97
O Poster Guide for Final Evaluation  98
P Test Procedure for final evaluation  99


1 | Analysis

1.1 Introduction

Holographic displays are currently a successful tool in marketing. They instantly grab the attention of potential customers and present the product with visual effects. This makes for a memorable experience for the viewers and therefore leads to a positive connection to brands using this marketing method (Appendix A). Dreamoc is a holographic display device produced by RealFiction. This type of holographic display is equipped with a digital screen that can display 3D computer graphics, and with the necessary depth cues it creates the illusion of 3D. The screen is reflected onto a pane of glass, which makes the content look like it is floating in mid-air (figure 1.1). It should not be confused with a real 3D hologram. Letting the customer interact with the content in the holographic display can take the experience to another level. Partners of RealFiction have experimented with such interaction, using smartphone apps, gaming controllers and gesture controls as input devices (Appendix A).

Figure 1.1: RealFiction holographic displays (http://www.realfiction.com)

Dreamoc can grab people's attention, provide a stunning visual experience and even create an illusion of an interactive 3D environment. This device has potential for far more than a display in the field of marketing.

1.1.1 Initial Problem Statement

How can user interaction enhance the experience of a holographic display?



1.2 Analysis of the Dreamoc

The Dreamoc is a holographic display that, through an optical illusion called "Pepper's Ghost", can create the effect of a 3D object floating in mid-air. The Pepper's Ghost illusion was used in theatres in the 19th century to create ghostly figures. The illusion works by splitting the stage into two rooms, one serving as the main stage and the other hidden from the audience. The hidden room is kept dark until the figure has to appear. A pane of glass is placed at an angle of 45 degrees where the ghostly figure will be. For the figure to appear at full effect, the light on the main stage is dimmed while the light in the hidden room is increased. This way the actor playing the ghost is reflected as a transparent being in the pane of glass and, to the audience, appears out of thin air (Pepper, 1890).

Although this type of illusion only creates a simulated, or pseudo, hologram, it can still be referred to as a hologram, as it gives the illusion of a 3D image that "floats" in mid-air. This illusionary technique has been given a more modern approach: a pyramid of glass, giving multiple sides of viewing, or a single pane of glass, giving a one-sided view, is placed at an angle of 45 degrees with an HD screen on top. The image on the HD screen is then reflected in the glass to give the illusion that there is actually an object or a scene in the middle of the glass pyramid, i.e. it looks like a hologram.

In a semi-structured interview with Clas Dyrholm (Appendix A), co-founder and current CEO of RealFiction, he explains that the Dreamoc holographic display is a success in the marketing world. The Dreamoc has been proven to increase sales of all kinds of products because it creates an extra interest among customers. The interesting question, however, is how exactly does it create this special interest? First of all, the holographic display is not something that people generally see very often. Customers are presented with a different experience than the usual 2D screen or posters. Another typical reaction is that people usually do not quite understand the holographic effect and begin to wonder whether it is real or not. This creates a kind of curiosity. These feelings ultimately create a bond and relation to what is presented, and this relation is what drives people to, in this case, buy the product. This is also referred to as sensory marketing, but the interesting part for this project is that the Dreamoc can create this kind of relation between a human being and an object. In the interview with Clas Dyrholm, different techniques were discussed on how to create the illusion of 3D in a Dreamoc holographic display:

1. Movement. Movement and rotation will trick the brain into perceiving depth in the scene. Even if text is presented, it should rotate a small amount to show that the letters themselves are built of more than one side.

2. Physical elements. Physical elements create depth just by being there. It is possible to place physical objects or products inside the pyramid and let the hologram display different content around the object. The user will clearly see the physical object being 3D and is therefore more easily convinced that the image on display is also 3D.

3. Light. The Dreamoc has a built-in light system. A fairly bright light setting will enhance the Pepper's Ghost effect, as the lines made by the edge of the screen, which are reflected in the glass, will fade. The viewer will then only see the content displayed, and cues to it actually being 2D are minimised.

4. Show the same on each side. A last recommendation from Dyrholm was to show the same effect on each side. This avoids a rough cut that would otherwise happen right when the view is switched from one side to another. If the same content is presented on all sides, the brain is actually more likely to believe that it is real. Though, he notes, it can make sense in some cases, if a story is told, to present different parts of the story on different sides.

The big problem with pseudo holograms like the aforementioned is that they do not evoke the same depth cues as a real 3D laser hologram (Geng, 2013). In order for a person to perceive an object as 3D, there are physiological and psychological depth cues that make us think that e.g. an image has depth (Zanker, 2010; Geng, 2013). Figure 1.2 does not present all existing cues, but the most relevant ones in this case. While pseudo holograms utilising the Pepper's Ghost illusion are very good at accommodating the psychological depth cues, making the object or scene in the hologram appear 3D, they do not accommodate the physiological depth cues, which are important in order to look realistic (Geng, 2013). The illusion makes it look like the object or scenery is within the glass pyramid, and it still looks much more realistic than a normal 2D screen, as the pyramid creates depth.

Figure 1.2: Physiological and psychological depth cues (Zanker 2010; Geng, 2013).

Following the techniques and recommendations from RealFiction together with the depth cues from Zanker, it is possible to create an illusion of 3D in the Dreamoc holographic display.
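The 45-degree reflection at the heart of the illusion can also be sketched numerically. The snippet below is an illustrative sketch only: the coordinate frame, the sample point and the `reflect` helper are our own assumptions for demonstrating the geometry, not details from RealFiction or Pepper.

```python
import math

def reflect(point, normal):
    """Reflect a 3D point across a plane through the origin with unit normal `normal`."""
    d = sum(p * n for p, n in zip(point, normal))
    return tuple(p - 2 * d * n for p, n in zip(point, normal))

# Assumed frame: x sideways, y up, z pointing away from the viewer into the
# display. A pane tilted 45 degrees between the downward-facing screen above
# and the viewer's line of sight has a unit normal halfway between +y and -z.
s = 1 / math.sqrt(2)
pane_normal = (0.0, s, -s)

# A pixel 1 unit above the pane centre is mirrored to (approximately)
# 1 unit *behind* the pane: the virtual image that appears to float
# inside the display.
virtual_image = reflect((0.0, 1.0, 0.0), pane_normal)
print(virtual_image)  # ~ (0.0, 0.0, 1.0)
```

The same mapping applies to every screen pixel, which is what gives the reflected image its apparent depth behind the glass.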



1.3 SOTA

The current holographic solutions have a huge impact on today's retail market. Holographic displays are mainly used by companies who want to stand out. They use products like the Dreamoc display from Real Fiction to increase sales by adding animations, 3D videos and similar effects to the products (Appendix A).

Real Fiction was founded in 2002 and can be considered an expert regarding state of the art for holographic displays. Clas Dyrholm explains different aspects where Real Fiction has experienced either success or problems regarding holographic solutions (Appendix A).

First of all, because the Dreamoc display is based on the Pepper's Ghost illusion (Pepper, 1890), the image might appear flat inside the display. The mind must be tricked, and Dyrholm describes different tools to create a better 3D illusion. These tools are described in the previous section (1.2 Analysis of the Dreamoc) and can be useful in this project when creating a holographic solution, since they have been used with great success.

Real Fiction strives to create simple and stable solutions with a user-friendly interface. As an example, Dyrholm mentions an application that provides interaction through a tablet. The tablet is used to change between different animations inside the Dreamoc with a simple interface. Simplicity and stability are important elements in order to create a successful product (Appendix A). Therefore a prototype could benefit from simple interaction methods. Additionally, getting the senses in play has proven to be a very powerful tool in marketing. Interaction with the product, either physically or digitally, greatly increases the chance of a sale. These elements of selling can be used in relation to the prototype of this project. Creating a solution that can provide an interactive experience with a product might lead to greater success. However, Real Fiction experiences problems when using interaction methods with the Dreamoc. A product which is not reliable and user friendly often results in a negative experience and frustration towards the product. Therefore it is important to ensure that the prototype is as reliable and intuitive as possible to create a better user experience.

Dyrholm (Appendix A) talks about different partners of Real Fiction working on different interactive solutions for the Dreamoc. Real Fiction themselves have made a mobile app that lets the user switch between different content in the hologram. While this will not make the user able to directly interact with the object in the hologram, it still gives the idea that the user can interact with the machine.

Dyrholm (Appendix A) has also seen successful Leap Motion projects, where the Leap Motion has been used as an interactive method with the object in the Dreamoc. Although it was not an essential part of the program, he can see potential in being able to touch and play with the object in the hologram. He says that it can be dangerous to use things like Leap Motion: as it does not work 100% of the time, the usability and user experience will be diminished, and it might have a negative influence on the user and cause frustration if the user struggles to use it.

Lastly, Dyrholm (Appendix A) talks about a German company who developed an Android-based game application for the Dreamoc with a wide catalogue of both single-player and competitive multiplayer games. This is made for e.g. companies who can then have users play one of the games where the product they are trying to sell might be in the game, and whoever gets the highest score wins the product in question.



1.4 Viking Ship Museum

The evolution of technology has greatly affected cultural institutions in Denmark, making them relevant and suitable as a testing environment for a prototype. The development of technology has led to a radical change in visitors' expectations when going to a museum (Christensen et al., 2009). Instead of attending exhibitions to see historical and cultural collections for public education and research purposes, the visitors have become users who want to interactively touch, feel and be a part of the learning environment (Rudloff, 2013; Kulturarvsstyrelsen & Kulturministeriet, 2011). This year, the Viking Ship Museum (the VSM) in Roskilde, Denmark, began a new initiative towards the use of technology to increase the overall experience of their museum. Thus the curator at the museum invited us to create a suitable exhibit that will fit the visitors' needs. The museum's main attraction is their daily outdoor activities, such as sailing in reconstructed Viking ships or working with wood to build shields or weapons from the time of the Vikings. These activities are very successful and attract many families, schools and international visitors. Unfortunately, poor weather conditions mean that the outdoor activities close during the autumn and winter period, which causes big challenges for the museum (Appendix B; Appendix C). The museum is forced to rely on its indoor exhibition in the winter, which currently consists of exhibits where the visitors are limited to looking and reading. This does not fit their needs and expectations for an interactive museum experience (Appendix B). In this environment the Dreamoc has the potential to become an interactive exhibit as part of the whole indoor VSM exhibition. Target group research is conducted in order to design the content and interaction specifically for this environment and its visitors.



1.5 Visitors at the VSM

This target group analysis identifies the users, their behaviour at the museum, their goals, and the context of how the Dreamoc can be a solution or rather contribute to the VSM's indoor exhibition. Information found through analysing the target group will be considered when forming requirements, as the content of a prototype should fit the testing environment. The following data gathering methods were used:

• Interview with museum expert and employee at the VSM, Andreas Kallmeyer Bloch.

• Relevant data gathered from the VSM's annual report.

1.5.1 Expert Interview with Andreas Kallmeyer Bloch

An unstructured expert interview was held with the museum inspector, Andreas Kallmeyer Bloch, who currently focuses on the development of exhibits using new interactive technology at the VSM. He has many years of experience at the museum and provided us with rich, qualitative information about the museum visitors, which is presented later in this target group section.

1.5.2 Relevant data gathered from VSM annual report

Every year the museum conducts quantitative research related to the museum. Everything from their cafe and outdoor exhibitions to visitors and the VSM's economic situation is discussed and collected into the report. The report is used here to gather the relevant data for understanding the target group.

1.5.3 Characteristics and capabilities

From the annual report made by the VSM (Appendix D), specific quantitative studies have been conducted to learn more about who their visitors are. In 2015, 146,891 people visited the VSM; adults made up 70.8% of the guests, while children made up 29.2%. The numbers clearly show a majority of adults visiting the VSM. Having said that, Bloch states that the VSM wishes to target all age groups, and therefore a holographic display exhibit should be available, interactive and usable for all ages. Moreover, first-time users should also be able to use it without any problems. First-time users would be people who have no experience with interactive holographic display technology. Since the Dreamoc is a relatively new technology, most of the visitors will be first-time users. Figure 1.3 shows certain groups of visitors attending the exhibitions from 2012 - 2015. 64.8% of the guests arrive from foreign countries. Bloch adds that they like to include English text for international visitors (Appendix B). The remaining 35.2% are Danish, and the museum wishes to have solutions that also include Danish (Appendix D). Bloch has experienced that Danish visitors feel annoyed or dissatisfied when attending Danish museums which do not support their language.
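As a back-of-the-envelope check, the percentages above can be converted into absolute visitor counts. This is our own quick calculation, not a breakdown taken directly from the annual report:

```python
# Quick breakdown of the 2015 visitor figures quoted above.
total_visitors = 146_891

shares = {
    "adults": 0.708,    # 70.8 % of guests
    "children": 0.292,  # 29.2 % of guests
    "foreign": 0.648,   # 64.8 % of guests
    "danish": 0.352,    # 35.2 % of guests
}

counts = {group: round(total_visitors * share) for group, share in shares.items()}
print(counts)  # roughly 104,000 adults, 42,900 children, 95,200 foreign, 51,700 Danish
```

Each pair of shares sums to 100%, so the rounded counts add back up to the reported total.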



Figure 1.3: Groups of visitors from 2010 - 2015

Another interesting key aspect to note is that visitors usually attend exhibitions in groups. Bloch identifies two main groups of visitors:

• Families, who are smaller groups of 2-5 people.

• Schools, which are larger groups of 20-30 people.

Since people arrive in groups, possible holographic solutions to create a better and more engaging experience at the museum could include a social experience. According to Bloch, it can be crucial to enhance the experience using social interaction at the museum (Appendix B). The Dreamoc has been used as a multiplayer platform (1.3 SOTA), and this could make for an interesting group activity. However, visitors might also come to the museum alone or browse alone. Therefore Bloch also wishes for a prototype which is usable for single users.

1.5.4 Expert interview: Why do people visit the VSM?

Bloch (Appendix B) confirms that with new technology able to provide interactive experiences for the visitors, a worldwide change has happened in visitor expectations when coming to a museum (1.4 Viking Ship Museum). According to Andreas, when people visit the VSM there are two key points that are important:

1. To learn something about the Vikings and their ships.

2. To gain interactive experiences.

Andreas elaborates that visitors come to learn about the Vikings, but an interactive activity should, according to him, rather inspire the visitors to learn more and be engaged in the time of the Vikings. Visitors are much more excited and engaged when they can be a part of the exhibition and somehow "get their hands on" it, either physically or digitally. In fact, interactive exhibits and activities are where the visitors spend most of their time when they come to the VSM (Appendix B). Likewise, Parry (2013, p. 252) suggests that interactivity, engagement and learning combined can create a good environment for participatory and/or collaborative activities with the purpose of enhancing the experience at exhibitions. Furthermore, Bloch has experienced that many interactive exhibitions that children like to interact with, most adults also like.

1.5.5 Sub Conclusion

Taking everything into consideration, the following requirements are necessary to fulfil in order to create an exhibit which can provide a good experience for visitors at the VSM:


• Usable for all ages (excluding babies).

• Understandable for all English and Danish language speakers.

• Possibly a group activity (more than one user at the same time).

• Visitors expect to learn something about Vikings and their ships.

• Inspire users to learn more about the Vikings.

• Easy to use, especially for first-time users.

• Making the holographic display interactive makes perfect sense in this environment.

As we do not expect one holographic display exhibit to be able to drastically change the overall museum experience, what is interesting to know is what defines a "good experience" at a single exhibit. Collecting the information in the above research, a "good experience" at a single exhibit at the VSM can be defined as:

• Content with relevance to the rest of the museum.

• Interactive activity.

With these specific requirements on how to create a good experience for visitors at the VSM, this will be combined with motivation theory on how to create good activity experiences in general. Throughout the report a good experience is therefore defined as "an experience that can provide new knowledge while making the visitors able to interact with an exhibit". Inspiring visitors to seek new knowledge and interact with an exhibit might counteract the lack of interactive exhibits which is seen in the winter period.


1.6 Motivation

Motivation is required to make visitors interact with the holographic display. Looking from a wider perspective, motivation is almost always a part of why people do as they do. When the holographic display has grabbed a visitor's attention, the person should be motivated to approach the exhibit and begin interacting. The visitor should also stay motivated to continue the activity until it is done. This section gives an overview of and discusses the different kinds of motivation, and reflects on how they should properly be implemented into a holographic display exhibit at the VSM. Furthermore, this report will use motivation as a tool for measuring experience.

1.6.1 What is motivation?

As there is a countless amount of research and perspectives on motivation, this report primarily follows the research of Edward L. Deci and Richard M. Ryan, whose work on motivation is widely respected in the psychological community. Motivation can be defined as "a state that directs an organism in certain ways to seek particular goals" (Cotman & McGaugh, 2014, p. 629). In this case, something that will direct the visitor to interact with the exhibit.

1.6.2 Intrinsic motivation

Intrinsic motivation is when one is self-determined to do an activity. Looking at the definition of intrinsic motivation, it is described as an activity which is done for the sole pleasure of the activity itself (Ryan & Deci, 1985). There are three key elements to achieve or enhance intrinsic motivation:

• Competence.

• Autonomy.

• Relatedness.

Competence is when one is capable of doing an action correctly. Intuitive controls and positive feedback on actions will enhance the user's feeling of self-competence (Ryan & Deci, 1985). A good feeling of self-competence for an activity can be enough to increase intrinsic motivation towards that specific activity. Autonomy is defined as "the perceived origin or source of one's own behaviour" (Ryan & Deci, 2002, p. 8). As the visitors choose themselves which exhibits they want to engage with, autonomy is naturally achieved. Relatedness is the enjoyment of relating to other people. Ryan & Deci (2000) also suggest that most activities that are interpersonal and create a feeling of relatedness play an important role in maintaining intrinsic motivation. Relatedness is very much linked to the fact that people visit the VSM in groups, which should be taken into consideration for the design by designing for interpersonal interaction (1.5.2 Relevant data gathered from VSM annual report).

According to Ryan & Deci (2000) there are two types of motivation: intrinsic motivation, "which refers to doing an activity for the inherent satisfaction of the activity itself" (p. 71), and extrinsic motivation, which refers to "the performance of an activity in order to attain some separable outcome" (p. 71).

1.6.3 Extrinsic motivation

Extrinsic motivation is the state when a person does an activity only to reach


a certain goal or reward (Ryan & Deci, 2000). In contrast to intrinsic motivation, the behaviour of someone who is extrinsically motivated can vary in many different directions. The different types of extrinsic motivation are described in detail in Ryan & Deci (2000, p. 61-62). The most relevant aspect of extrinsic motivation is that people can identify a goal which they also see a value in, and will thereby initiate the feeling of autonomy. This kind of extrinsic motivation is called identified regulation, and Ryan & Deci (2002) write: "Identification represents an important aspect of the process of transforming external regulation into true self-regulation." (p. 17) This kind of regulation towards a goal is what drives people in daily life, work, competitions and games. An interactive activity would require the visitor to identify and understand that engaging with the exhibit's content will provide a high value. Whether they get a reward or not does not seem to affect the self-determination.

1.6.4 Sub Conclusion

As museum visitors walk through the VSM they need to be motivated to engage with our interactive exhibit. The interaction should be intuitive (competence) and allow multiple people to interact at the same time (relatedness), which should form the foundation for people to engage. Providing a goal with a high value to the visitor could also increase motivation in general. The visitors expect to learn something when coming to the VSM (1.5.4 Expert interview: Why do people visit the VSM?), so this could be their goal, or part of their goal, for the interaction.



1.7 Interaction Tools

A way of interacting is needed to accommodate the needs of the Viking Ship Museum. Research on the state of the art and the target group reveals that interaction can be beneficial (1.3 SOTA; 1.4 Viking Ship Museum). This study will focus on touchless interaction through gestures. In order to track the gestures, an intelligent camera system is needed, for which there are currently two accessible state-of-the-art options: the Microsoft Kinect camera and the Leap Motion Controller (LMC).

1.7.1 Kinect vs Leap Motion Controller

Kinect's precision works best when the subject is at least 1.45 m away (Tao, 2013), making it excel at full or upper body recognition. The Kinect is not as flexible as the LMC, seeing that the Kinect has a minimum distance for optimal readings. Since a user has to be up close to experience the hologram properly, hand gestures are the more suitable interaction method. Kinect can also be programmed to read hand gestures, however the LMC is much more precise (Weichert et al., 2013), making it the optimal choice. Kinect would only be preferred if full body recognition were needed, as the LMC does not have the same range.

1.7.2 How Leap Motion works

The Leap Motion consists of two cameras and three infrared LEDs. The setup is able to track and focus on any object within an 80 cm range above the device. The area is limited to 80 cm because of the light range the LEDs can produce. The cameras only focus on what is lit up by infrared light, which removes most background clutter. Without background noise, the process of sending data from the Leap Motion controller to the computer is much faster (Colgan, 2014). The Leap Motion controller has its own local memory, which it uses to read the data from the sensors. It then performs any necessary resolution adjustments and applies advanced algorithms to the data, making it ready to be read by the software on the computer (Colgan, 2014). The computer software uses image processing to get rid of any leftover background clutter. After the noise removal, the software generates a 3D representation of the hands from the live feed gathered from the Leap Motion controller (Colgan, 2014).

1.7.3 Muscle fatigue

Gesture recognition requires three pieces: a camera, software able to recognise moving limbs, and the moving limbs themselves. Movement of limbs requires energy and makes them numb or weary if energy is not provided fast enough. Causes can be exhaustion, which is the result of short but intense usage of muscles, or muscle fatigue, which is the result of prolonged, less intense, but constant usage of muscles. Muscle fatigue is defined by Edwards (1981) as "a failure to maintain the required or expected force" (p. 1). Fatigue appears either when a muscle consumes energy too fast or when a failure in excitation occurs. Failure in excitation occurs when the muscle fibre does not receive enough stimuli at the fibre's synapse. Excitation is the stimulus a sensing cell needs before releasing a signal; an excited cell releases signals more frequently. Edwards (1981, p. 3) provides a graph illustrating the amount of ATP/energy muscles require, compared to how much energy is available and force used, described as maximum voluntary contractions, MVC (figure 1.4).

1.7.4 Sub Conclusion

An intelligent camera setup is needed to create a touchless, gesture recognition setup. Because of its small minimum range and high precision, the Leap Motion is the most flexible device for hand gesture recognition. Muscle fatigue is important to know about, because mid-air gestures are a fatiguing type of movement. Because of this, software utilising gestures as input should frequently give the user the possibility to enter a resting position, and should not require the user to interact for too long at a time, in order to minimise muscle fatigue.
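The resting-position guideline above can be sketched as a small timer that gates gesture input and periodically forces a pause. This is an illustrative sketch only, not part of the project's prototype: the class name and both time thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FatigueGuard:
    """Gates mid-air gesture input to limit muscle fatigue.

    Illustrative sketch; thresholds are assumed values, not from the report.
    """
    max_active_s: float = 25.0  # assumed: longest comfortable gesture burst
    rest_s: float = 5.0         # assumed: pause required before resuming
    active_time: float = 0.0
    rest_time: float = 0.0
    resting: bool = False

    def update(self, dt: float, hand_present: bool) -> bool:
        """Advance by dt seconds; return True if input should be accepted."""
        if self.resting:
            self.rest_time += dt
            if self.rest_time >= self.rest_s:
                self.resting = False
                self.active_time = 0.0
            return False
        if hand_present:
            self.active_time += dt
            if self.active_time >= self.max_active_s:
                # Prompt the user to lower their arm for a moment.
                self.resting = True
                self.rest_time = 0.0
                return False
        else:
            # Dropping the hand resets the activity budget.
            self.active_time = 0.0
        return True
```

An application loop would call `update` once per frame and, while it returns False, show a "rest" prompt instead of reacting to gestures.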


Figure 1.4: Muscle Fatigue


1.8 User Experience

When creating an interactive exhibit it is important to conceptualise and design with the experience of the users in focus. By following user experience guidelines, a product becomes intuitive and pleasurable to use. However, if the user experience is bad, the users will abandon it (Garrett, 2010).

1.8.1 Natural User Interface

To find out more about how to provide the visitors with a good user experience, knowledge was drawn from Wigdor and Wixon's (2011) book "Brave NUI World". When explaining what a NUI is, the authors write: "(..) goal is a product that creates an experience and context of use that ultimately leads to the user feeling like the pitcher atop the mound: completely comfortable, expert, and masterful—a virtuoso of the user experience. The goal is to achieve this from the very beginning, for complete novices, and to carry this feeling through as the users become experts." (Wigdor & Wixon, 2011, p. 12) The process explained in their work has been selectively broken down into elements which may be applied towards solving the problem statement. A natural user interface must:

• Create an experience that feels as natural to the novice as it does to an expert.

• Create an experience that is authentic to the medium, meaning it should not mimic the natural world, but fit the style of interaction.

• Build a user interface that considers context, using the right metaphors, providing visual indications, feedback and similar.

In the case of a gesture-based interface:

• Use the "No Touch Left Behind" rule: provide the users with visual responses to make the connection between cause and effect clear.

• To deal with the problem of the system misinterpreting gestures, use one of these methods: reserving a movement for starting to draw the gesture, using an invisible plane which, when crossed, works like a clutch, or combining gestures with voice.

1.8.2 Seven Laws of UI design

When designing an interface it is important to have the user's intentions in mind and, based on that, give them the options they need to reach their goal. Norman (1986) provides 7 stages of action, which have to be considered in order to make a design that lets users get what they expect. To complement the basic NUI design guidelines, the design of the application will be built upon the 7 Laws of User Interface Design (Vukovic, 2014), which were created based on Norman's 7 stages of action.

1. Law of clarity
People try to go around things they do not understand and tend to ignore them. It is important to avoid unintuitive UI elements, as users often will not bother trying to figure out how they work.

2. Law of preferred action
For the users to feel more comfortable, the preferred action should be clear, meaning the next step towards achieving what the user wants should always be obvious.

3. Law of context
The controls of an interface are always expected to be close to the object the user wants to control. If elements can be edited or manipulated in any way, the controls to do it should always be next to them.

4. Law of defaults
Defaults are powerful. Users rarely change the default settings. Vukovic (2014) notes that most people keep the default ringtone and background on their phones. It is safe to assume the defaults will not be changed, so it is important to make them as practical and useful as possible.

5. Law of guided action
It is more probable that users will do something they are asked to do than that they will do it on their own. If the designers expect users to do something specific, they should not hesitate to ask for it.

6. Law of feedback
A feeling of confidence with the UI makes the users feel in control and willing to use the product again. To achieve this, the users should always receive feedback about the results of their actions.

7. Law of easing
If the designers demand the user perform a complex action, it is important to break it down into smaller steps. This will save the user from being overwhelmed by the size of the task and give a sense of accomplishment when each step is completed.

1.8.3 Designing for multiple users

As the visitors of the VSM most often come in groups, the developed prototype should be able to provide the target group with a sociable, multi-user experience (1.5 Visitors at the VSM) as well as a single user experience. Wigdor and Wixon (2011) introduce 3 distinctive levels of task coupling:

• Highly coupled tasks: demanding the users to help each other to complete the tasks.

• Lightly coupled tasks: users perform different tasks to e.g. achieve a common result. An example of this could be a Chief of Fire and a Chief of Police managing different elements of a crisis.

• Uncoupled tasks: users use the same device, but for different purposes.

To properly design for users' expectations of multi-user experiences, Wigdor and Wixon (2011) write that the product must:

• Be tested with multiple users simultaneously interacting.

• Be designed for different types of task coupling.

And that it should:

• Enhance the experience when working in a group.

• Enable one user to enjoy the experience without others.

• Enable other users to join anytime without disrupting other users.

• Allow users to stop interacting without disrupting other users.

• Make system changes clear to all the users, e.g. when one user zooms the map, others should know how and why the zoom changed.

• Avoid audio feedback, as there is no mechanism for users to distinguish two simultaneous audio cues.

• Not attach shared controls to one side of the display; rather, allow the users to move the controls or have their own.

• Communicate ownership of content through location on the display.

1.8.4 Sub Conclusion

The general design should follow:

• The seven laws of user interface design, to create a base for a good user experience.

• Natural user interface guidelines, to make an intuitive interface.

• Gesture-based interface guidelines, to secure a pleasurable experience when interacting with gestures.

• The guidelines for designing for multiple users, to create an environment multiple people can play with at the same time.

Following these requirements should create a great user experience.



1.9 Analysis Conclusion

The Dreamoc is a holographic display made by RealFiction, which utilises a 3D illusion called "Pepper's Ghost". The Pepper's Ghost illusion does, however, lack depth cues (1.2 Analysis of the Dreamoc). Different techniques can be applied to give the illusion of 3D and accommodate a better perception of depth, and these are important for creating a better experience. RealFiction has experienced that interaction with a product, either physical or digital, is helpful. They have different partners developing state-of-the-art interaction methods combined with the Dreamoc, and among the solutions they have seen projects using Leap Motion as an interaction method. Research showed that a holographic display could fit in a museum context (1.4 Viking Ship Museum). The Viking Ship Museum, the VSM, faces a digital revolution and found it relevant to provide a test environment. An interview with the museum curator of the VSM provided knowledge and clarified ambitions for the development of the prototype. The main results from the expert interview (Appendix B) revealed that a prototype should include some kind of interaction. The visitors often arrive in groups, which indicates that the prototype should be usable by multiple users. However, the VSM stated that they also wished to include single user interaction in order to target all visitors. The visitors will not interact with the prototype unless they are properly motivated. Therefore, it was required that they felt competent in using the controls. Furthermore, relatedness has been researched to gain knowledge about social activity and how it affects motivation (1.6 Motivation). RealFiction describes the importance of interaction with a product (Appendix A). Devices like the Leap Motion or Kinect are appropriate state-of-the-art interaction methods which could be relevant to use in combination with the Dreamoc. The Leap Motion has a high precision in gesture interpretation, which makes it superior to the Kinect for this project (1.3 SOTA). However, when designing for gesture-based interaction, one must be aware of muscle fatigue (1.7.3 Muscle fatigue). The prototype should therefore not require the user to interact for long periods of time. An interactive prototype also requires designing for a good user experience. The design will follow the seven laws of user interface design as the base of a good user experience (1.8 User Experience). Research also showed that designing by the rules of natural user interfaces and gesture-based interfaces can provide the user with a better experience when interacting through gestures. Moreover, the prototype should be usable by multiple users, taking the design guidelines for multiple users into account during the design process (1.8 User Experience). All these criteria have to be taken into consideration when designing, implementing and evaluating, for the product to create a great interactive experience for the visitors of the VSM.



2 | FPS & Requirements

2.1 FPS

How is a museum exhibit experienced through a RealFiction hologram with single user interaction compared to collaborative interaction?

2.2 Requirements

The list is made of two kinds of requirements: functional and non-functional. The functional requirements include system functions and required features. Non-functional requirements include content and user-centred features.

2.2.1 Non-Functional Requirements

1. The prototype's content should be designed with the depth cues (1.2 Analysis of the Dreamoc) and presented with techniques and recommendations from RealFiction, to create an illusion of 3D and grab the attention of potential users (1.2 Analysis of the Dreamoc).

2. The prototype should be designed with muscle fatigue in mind (1.7.3 Muscle fatigue).

3. It should be intuitive to interact with the prototype.

4. The prototype should be intuitive and understandable for visitors of all ages (1.4 Viking Ship Museum).

5. The intuitive interaction should create a feeling of competence for the users (1.6.2 Intrinsic motivation).

6. The content should be understandable for English and Danish speakers.

7. It should teach or inspire users with content relevant to Vikings and their ships.

8. The experience should create a valuable goal for the user; according to the VSM research, a valuable goal for museum visitors would be a piece of information or an experience related to Vikings and their ships (1.6.3 Extrinsic motivation; 1.5.4 Expert interview: Why do people visit the VSM?).

9. The prototype should be designed by following the 7 Laws of UI Design (1.8.2 Seven Laws of UI design).


2.2.2 Functional Requirements

1. The prototype should be interactive.

2. The prototype should be usable for either a single person or multiple people interacting simultaneously (1.5.2 Relevant data gathered from VSM annual report; 1.8 User Experience).

3. The prototype should enable other users to join or quit anytime without disrupting other users.

4. The prototype should either make system changes clear to all users or not affect other users.

5. The prototype should allow users to have their own controls, separated from others (1.8.3 Designing for multiple users).



3 | Methods

An iterative design process will be used for the development of this prototype. Each iteration will consist of establishing requirements, designing alternatives, prototyping and evaluating, as seen in figure 3.1. The advantage of using an agile method like this is the evaluation happening during the development,

which can be used to get feedback on the current state of the prototype (Preece et al., 2015). This feedback helps in creating new requirements and constant improvement of a prototype, and leads to new design alternatives, which can then be implemented and tested once again.

Figure 3.1: Basic agile approach

This project is predicted to go through two major iterations before the final test:

1. A first iteration including concept development, ending in a high fidelity prototype with the possibility of testing usability.

2. A second iteration to improve on the high fidelity prototype, resulting in a final evaluation testing the final problem statement.

Each iteration will combine the agile method with Garrett's (2010a) 5 layer model. The layer process model merges


very well with the iterative process: the agile method provides an overall approach to the design process, while the layer process model provides a more specific approach, supplying tools for working in depth with each phase of developing a prototype. According to Garrett (2010a) a good user experience can be achieved by following five steps during the design process. Garrett shows a model (figure 3.2) that visually describes the connection between the different layers. The first steps are more abstract, whereas the last steps are more concrete.

1. The first layer of the model is strategy, where requirements are established based upon the user needs and the technical requirements.


2. The next layer of the model is scope, which is a list describing all the functions required to meet the user's needs.

3. After the scope layer, the structure layer needs to be planned. The structure should describe the interaction and information architecture of the design.

4. With the structure layer planned, the skeleton layer should be made. The skeleton layer is about the positioning and composition of the design content.

5. The last layer of the model is the surface, which is the creation of the visual design.


Figure 3.2: 5 steps for good user experience (Garrett, 2010b)

The model begins with defining requirements, goes through designing and ends with producing visuals. It can be used as a base when designing a prototype to cover every aspect of the design process. If the project is executed with an iterative approach, the model should be run through several times to refine the outcome.

Evaluation Methods

The last phase of every iteration in the agile method is evaluation, and as the goal of each iteration is different, so is the method used to evaluate.

Concept test

The goal is to make an obvious concept, in the sense that it should be intuitive to build the paper ship. The concept model



will be a "high-level description of how a system is organized and operates" (Johnson & Henderson, 2002, p. 26), done with a paper prototype. A presentation of the concept, along with a quick pitch and the paper prototype, will be used to see if the ideas are going in the right direction. Observing the participants try the paper prototype will generate data that tells whether the activity is intuitive.

Usability test

Several runs will be performed to test if the prototype can be used by anyone not familiar with the technology, as well as by people who have tried similar technologies before. This evaluation is designed as a formative evaluation, done in the early stages of the design and implementation process in order to make improvements to the system. The test focuses on three usability aspects: learnability, efficiency and errors. Learnability is important since it is expected that the majority of users will be first time users. RealFiction (1.3 SOTA) explains that interactive technology combined with the Dreamoc requires a stable and reliable solution, which is why efficiency and errors are important to test. The data from the test will be gathered through qualitative methods in the form of an observation and an interview. The

aforementioned methods will be used to improve the system and make it ready for the final evaluation.

Test of final problem statement

The test of the FPS will be held at the VSM using the visitors as subjects. The test will compare single user interaction to cooperative interaction with the holographic display. For gathering measurable quantitative data, a questionnaire will be used. The questions will be based upon the IMI (Self-determination Theory, 2016) to find out if there is a difference in motivation and experience between single and cooperative use. For collecting qualitative in-depth data, and to supplement the quantitative questionnaire, a semi-structured interview will be used. Observation will help to find information about the subjects' behaviour, struggles and enthusiasm when interacting with the prototype.

3.0.1 Report Format

Each movement through the agile method is visualised with a figure similar to figure 3.3, highlighted in a way that connects the iteration step (figure 3.1) to one or several layers of the 5 layer model (figure 3.2). The exception is evaluation, which will only be shown as an agile step, as it is unrelated to the 5 layer model.



Figure 3.3: A visualisation connecting each iterative step with a layer in the 5 layer model.



4 | First Iteration

4.1 Design

The requirements, final problem statement and design approach described in the methods will be the foundation of the design section. The five layer model (3 Methods) is used throughout the iterations as a guide to create good solutions that fit the requirements in the best way possible. The first layer of the five layer model, strategy, focuses on user needs and technical requirements for the design. These requirements were described in the analysis (2.2 Requirements) and divided into functional and non-functional. Based on the outcome of the strategy layer, the development of the concept could begin.



4.2 Concept Development

Figure 4.1: The concept development. The numbers denote the number of ideas at each stage

Several ideas (figure 4.1) were generated through card sorting, and an online whiteboard was used to categorise them. The categorisation was based on relevance to the FPS and whether the ambition level was realistic. The categories were: ridiculous, bad, possible, good and great ideas. Only the ideas judged possible or better were kept. The remaining 11 ideas were then combined, where possible, and sorted. A vote decided which ideas to keep and which to throw away, resulting in six ideas. This process was repeated and resulted in three main ideas, which were pitched to the collaboration partner at the VSM. After getting feedback, only two ideas seemed relevant: "ship in the storm", with a learning outcome about weather conditions and sailing, and "build a ship", which gives users knowledge about the process of reconstructing a Viking ship. The choice of "build a ship" was based on discussion and arguments: the idea has the opportunity to give visitors an experience similar to the interactive features the museum provides during summer (1.4 Viking Ship Museum).



4.3 Concept description

The prototype covers a reconstruction of a Viking ship. The VSM wishes to involve their audience as much as possible in the reconstruction process. Reconstruction is a big part of the museum's interactive activities during the summer, which it is well known for.

Figure 4.2: Storyboard of the initial concept

The concept covers some basic steps of reconstruction. Figure 4.2 shows a storyboard that illustrates what happens from the moment of grabbing attention to the end of the interaction. The prototype will attempt to catch attention by displaying an animation. Once visitors interact with the hologram, they will be in charge of building and reconstructing the Viking ship model. Tools will appear as guides along the path of the process. First, an axe will appear and be used to chop pieces for the ship. Second, a mop will appear as a tool to wet the planks so that they can be heated and bent under steam, ensuring that the planks are flexible and ready to put on the ship. Lastly, the ship will be assembled piece by piece until the model is complete. When the visitors are done, the final ship will be displayed with an animation showing the ship in its glory. A very important element of this process is collaboration. Throughout the building process the users (if more than one) have to collaborate to build the final ship: one person will be in charge of chopping and steaming, while the other will take care of assembling. The setup can be seen in figure 4.3. This can also be done by a single user, but it will require the user to shift position in order to follow the storyline.



Figure 4.3: Setup of the different screens in the prototype.



4.4 Scope

After the concept description, a list of functions was created in order to give an overview of what the prototype should be able to do (figure 4.4). The functions listed are scheduled to be implemented in the different iterations.

Figure 4.4: List of functions derived from strategy

The first requirements on the list refer to getting the user's interest (2.2 Requirements). The functions will use sensors to track when people are interacting with the prototype. The idle mode will be implemented by showing a looped animation of the ship together with an inviting animation. In active mode the sensors should catch input from users and apply it to digital object movement. The digital objects should interact correctly with each other and replicate the process of building a Viking ship. If users leave and new users appear, it should be possible to restart the building process from scratch. When the ship is finished, smaller details should be applied automatically and an animation of the sailing ship should begin. After a minute the prototype should go back into idle mode.



4.5 Structure

After figuring out what functions are needed, an interaction map was produced, telling how these features are related to each other.

Figure 4.5: Interaction map.

As seen in figure 4.5, the prototype starts in idle mode, where it shows an animation. When a person walks by the prototype, an animation should be displayed that shows the person what to do. The person then starts chopping the wood using the Leap Motion. After the wood has been chopped, the planks have to be steamed. When the planks have been steamed, they have to be placed in the right position. Here the player can move the plank, which will snap into position when it is close to the right position. These three steps are repeated until the ship is done. When the ship has been built, the prototype will show an animation where the ship is sailing. After the animation, the ship will be deconstructed and the prototype will return to idle mode. If the prototype is unused for a fixed amount of time, it will go into idle mode automatically. With the structure made, it was possible to make a small paper prototype to simulate the steps of figure 4.5.
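The interaction map described above can be sketched as a simple state machine. This is an illustrative sketch only: the prototype itself was built in Unity3D, and the state names, plank count and idle timeout below are assumptions.

```python
# Sketch of the interaction map (figure 4.5) as a state machine.
IDLE, CHOP, STEAM, PLACE, SAIL = "idle", "chop", "steam", "place", "sail"

class ShipBuilder:
    def __init__(self, planks_needed=3, idle_timeout_s=30.0):
        self.state = IDLE
        self.planks_needed = planks_needed
        self.planks_placed = 0
        self.inactive_s = 0.0
        self.idle_timeout_s = idle_timeout_s

    def on_user_detected(self):
        """A passer-by triggers the 'what to do' animation and chopping."""
        if self.state == IDLE:
            self.planks_placed = 0
            self.inactive_s = 0.0
            self.state = CHOP

    def on_step_done(self):
        """Advance chop -> steam -> place, looping until the ship is done."""
        self.inactive_s = 0.0
        if self.state == CHOP:
            self.state = STEAM
        elif self.state == STEAM:
            self.state = PLACE
        elif self.state == PLACE:
            self.planks_placed += 1
            if self.planks_placed == self.planks_needed:
                self.state = SAIL   # sailing animation, then deconstruct
            else:
                self.state = CHOP
        elif self.state == SAIL:
            self.state = IDLE       # deconstruct and restart

    def tick(self, dt):
        """Fall back to idle mode after a fixed period without input."""
        self.inactive_s += dt
        if self.state != IDLE and self.inactive_s >= self.idle_timeout_s:
            self.state = IDLE
```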



4.6 Paper prototype

The first prototype was a low fidelity prototype constructed of paper, paperclips and tape, as seen in figure 4.6.

Figure 4.6: Picture of the paper ship.

The paper ship was made of a front, seats, shields, a mast, a sail, a bottom and side planks. The bottom and sides were already attached to each other as the skeleton of the ship. The participants would then need to assemble the ship using gestures. Logs were made to simulate that the user had to chop the wood into the pieces of the ship, which had to be placed in the right order. With the paper prototype made, the concept was ready to be pitched for feedback. The pitch was held at a midterm presentation where students could try to build the ship and have the concept explained. To simulate the interaction with the Leap Motion, the participants did not touch the objects; instead a facilitator acted as an intermediary, moving and placing the objects. An unstructured observation indicated a lack of haptic or tactile feedback. It could be provided by building the ship out of Lego or coloured paper and then, through colour detection, tracking the colours of the physical objects to build the ship inside the hologram.



4.7 Final concept

The feedback from the pitch resulted in changes to the concept, mainly in the ways of interacting with the hologram. One of the two Leap Motion controllers was switched to a colour detection system that can track colour markers on a physical tool and then translate the data to move objects inside the hologram. The other Leap Motion will still be used for placing the objects.
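Colour-marker tracking of this kind can be illustrated with a minimal detector that finds the centroid of pixels inside a target colour range. This is a hedged sketch, not the project's implementation; the function name and colour ranges are assumptions.

```python
# Minimal colour-marker detector: the centroid of pixels whose RGB values
# fall inside a target range stands in for the marker position.
# Illustrative only; the prototype's actual colour detection is not
# specified in this detail.

def find_marker(image, lo, hi):
    """image: rows of (r, g, b) tuples; lo/hi: inclusive RGB bounds.
    Returns the (row, col) centroid of matching pixels, or None."""
    hits = [
        (y, x)
        for y, row in enumerate(image)
        for x, (r, g, b) in enumerate(row)
        if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]
    ]
    if not hits:
        return None
    cy = sum(y for y, _ in hits) / len(hits)
    cx = sum(x for _, x in hits) / len(hits)
    return cy, cx
```

Feeding successive camera frames through such a detector yields a 2D marker position that can be mapped onto the movement of an object inside the hologram; a robust implementation would typically work in HSV space to tolerate lighting changes.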



4.8 Skeleton

The layout of the prototype is very restricted as a result of the chosen medium. The Dreamoc is not able to display content on every part of every surface, as seen in figure 4.7. The front can display on the upper 160 mm of the glass across its full width, which disables the use of the last 50 mm at the bottom. From the side view, the whole 200 mm of height can display content, as well as 312 mm measured from the back, leaving 48 mm unavailable.

Figure 4.7: Dreamoc display measures
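The measures above imply a front glass height of 210 mm (160 mm usable plus 50 mm blocked) and a side depth of 360 mm (312 mm usable plus 48 mm unavailable). A small sketch of these inferred bounds, with hypothetical helper names:

```python
# Inferred Dreamoc display bounds from the stated measures. Helper names
# are hypothetical; only the millimetre figures come from the text.

FRONT_GLASS_H = 160 + 50  # mm: usable top band + blocked bottom band = 210
SIDE_DEPTH = 312 + 48     # mm: usable depth + unavailable depth = 360

def front_visible(y_from_top_mm):
    """True if a point this far below the top edge is on the front's usable band."""
    return 0 <= y_from_top_mm <= 160

def side_visible(depth_from_back_mm):
    """True if a point this far from the back is on the side's usable depth."""
    return 0 <= depth_from_back_mm <= 312
```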

4.8.1 The Seven Laws of UI

The placement of the content in the hologram should be decided based upon the seven laws of UI design together with the limitations of the Dreamoc. The purpose of this is to avoid any user confusion and make it clear what is happening and what the user is supposed to do (1.8.2 Seven Laws of UI design). The ship model will be visible at all times and will be placed higher up on the screens, since seeing the ship's progression during the reconstruction is an important feature. Every step from the interaction map (figure 4.5) has been assigned a layout in relation to the seven UI laws. The area where the layout is placed has been illustrated with dotted lines; however, the content might move out of the area during use. The area is mainly used as a starting point for placing the content of the design. The skeleton of each step in figure 4.5 can be seen in Appendix E. The first step, idle mode, can be seen in figure 4.8, while the rest can be found in the appendices. Furthermore, the figures and elements used in the skeleton are only stand-ins and will not necessarily be the same in the final design.

4.8.2 Idle Mode

When the prototype is in idle mode, the display will show an animation of a ship


as seen in figure 4.8. Idle mode is used to gather the attention of visitors passing by at the museum and to show when the user has completed assembling the ship.

The ship's placement should create an illusion of a hovering 3D object; therefore the screens will not show exactly the same content from each side.

Figure 4.8: Layout of the elements of the prototype in idle mode

4.8.3

Chopping wood

In the "chopping wood" scene in Appendix E, the user can see an axe that is highlighted to show the preferred action. Once the user has picked up the axe, the log should be highlighted for the same purpose. The two will be placed relatively close to each other to easily show the context of where it should be used (1.8.2 Seven Laws of UI design). Behind the log the user can see the other player placing objects onto the ship, because the user chopping needs to know the context of his work.

4.8.4 Placing planks

In Appendix E it is shown that in the object placing scene, the plank is lying in front of the ship. Behind the ship, the user can see the bonfire and log, while also being able to see what the other person is doing. This, again, is for understanding the context of the task.



4.9 Surface

To decide how everything should come together visually, a discussion was held, mainly about style. Because there was a possibility that a model would be provided by the museum, and because the prototype simulates the reconstruction of a ship similar to the ones found at the VSM, the style was decided to be as realistic as possible.



4.10 Sub Conclusion

The design should recreate the activity of building a ship, providing the museum guests with something that resembles the interactivity available in the summer periods. In order to be relevant to the museum's other exhibits, a replica of one of their currently displayed ships is going to be used as the reconstructed ship. In order to catch the attention of museum guests more effectively, an idle mode should be implemented.



4.11 Implementation

The main part of the prototype was implemented in Unity3D in C#, taking advantage of previous experience with this game engine. Other parts were done in Python and in modelling programs such as Blender and Sketchup.

4.11.1 Models and other visual representations

The Viking ship used in the prototype is a model provided by a curator of the VSM. The ship was divided in Blender, textured and assigned materials imitating real surfaces using Bitmap2Material and Adobe Photoshop, but also by using materials from free sources like substanceshare.com. The models of the bonfire, axe, steam and particle simulation of rain were downloaded from Sketchup's 3D Warehouse and the Unity3D asset store. The plank and log were modelled in Sketchup and assigned a material created in the same way as the materials for the ship.

4.11.2 Camera setup

There are four cameras in the prototype. Three of them were placed in a configuration seen in figure 4.9 so the objects are properly displayed in the hologram. One camera is behind the rest and covers the whole screen. It does not display anything apart from black colour, which is necessary to hide the white default background Unity provides.



Figure 4.9: Placement of the cameras in the prototype.

4.11.3 Colour Tracking

For users to be able to interact with the hologram through real objects, these objects need to be tracked. There are different ways to do this, but for this project it was decided to use colour tracking. Colour tracking was implemented with OpenCV, an image processing library, but since OpenCV is not freely available for Unity3D or for C#, it was done in Python. The Python distribution Enthought Canopy makes the OpenCV library easily accessible. Sending the data to Unity3D was implemented through a UDP server-client setup, where the Python program acts as the server, sending data, and Unity3D acts as the client, receiving the data.

4.11.4 Python

The following code examples are written in Python. In order to track the location of the objects, the centre of their mass is needed, which was found in the following way. In the examples, blue is used to illustrate the process. First the code converts the colours from the camera input from RGB to HSV. This makes it easier to adjust the tracking to different lighting settings without changing the hue (figure 4.10).

45


Figure 4.10: Converts from RGB to HSV

Then the colour ranges are defined to make it only track specific colours, though these have to be changed depending on the lighting in the room the prototype is placed in (figure 4.11).

Figure 4.11: Finds the upper and lower boundary of a colour, in this case, blue.

The image is then thresholded for each colour, so it only looks at each specific colour, which is then saved in a mask (figure 4.12).

Figure 4.12: Thresholds to only one colour

The mask containing a thresholded image is then searched for all the contours using a function called findContours, which saves them in an array (figure 4.13).

Figure 4.13: Finds all the contours in the array

The function (figure 4.14) then looks at all the contours, finds the biggest one, and calculates its centre of mass coordinates using a simple function called moments.


Figure 4.14: Function for finding the biggest contour and returning the x and y position.

The centre positions of the contours are then sent to Unity3D through a UDP server (figure 4.15).

Figure 4.15: Sends the input from the camera.

The code then does this for all the colours being tracked, sending the data to two different clients in Unity3D through different ports on the same IP. The two clients are used to keep the data from each colour separate.

4.11.5 Unity3D

To make Unity3D receive data from two different clients, a class called ReceiveData (figure 4.16), which takes a port number as an argument, was created. The class has an if statement for each port, which handles the data in the same way but saves it in different variables (xPos, yPos, xPos2, yPos2, etc.), making it possible to use the data in the hologram.



Figure 4.16: Receives the data from the Python program.
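The receiving side was written in C#; a minimal sketch of the same idea in Python is shown below. The function name, the port and the "x,y" payload format are assumptions standing in for the project's ReceiveData class.

```python
import socket


def receive_data(port, timeout=2.0):
    """Listen on a UDP port and return one "x,y" message as a float pair.

    A Python stand-in for the project's C# ReceiveData class: one
    listener is created per port, so the data coming from the two
    colour trackers can be kept apart and stored in separate variables.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("127.0.0.1", port))
    try:
        payload, _addr = sock.recvfrom(1024)
    finally:
        sock.close()
    x, y = payload.decode().split(",")
    return float(x), float(y)
```

Creating one such listener per port is what lets the two colour streams be sorted without any framing protocol on top of UDP.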

The variables were then used in an update function to transform the objects inside Unity3D (figure 4.17).

Figure 4.17: Rotates and moves the axe in accordance with the input.

4.11.6 Log Animation

When the log has been chopped enough, an animation that splits the log starts. This animation was done in the animation editor in Unity3D, using key frames, as in other animation programs such as Maya. When the log has been split, the newly formed plank moves over the fire to be steamed using the Lerp function (figure 4.18). Lerping is performed by providing three arguments to the Lerp function: the original position or rotation of the object, the target vector the function moves or rotates the object towards, and a time argument influencing the speed of the lerp. Implementation of the steam was left for another iteration due to its low priority at that time.

Figure 4.18: Lerps the log parts to new position.
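The arithmetic behind the three-argument Lerp described above is simple; a sketch (Unity's Mathf.Lerp and Vector3.Lerp behave this way per component, clamping the time argument to [0, 1]):

```python
def lerp(start, end, t):
    """Linear interpolation between start and end.

    t = 0 returns the original position, t = 1 the destination, and
    values in between move the object proportionally; t is clamped to
    [0, 1], as Unity's Lerp functions do.
    """
    t = max(0.0, min(1.0, t))
    return start + (end - start) * t
```

Calling this every frame with a growing t (for example elapsed time divided by a duration) produces the smooth movement of the plank towards the fire.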



4.11.7 Leap Motion

Besides colour tracking, the user needs a way to place the objects in the hologram. For this task it was decided to use Leap Motion, based on our research (1.3 SOTA). In order for Leap Motion to work in Unity3D, it was necessary to download the Orion beta software from Leap Motion and a Unity3D package for getting the Leap Motion hands into Unity3D. Lastly, for the hands to be able to grab objects, the prototype uses an asset module called Leap Motion Interaction Engine. The objects that the Leap Motion is supposed to interact with need an interaction behaviour script from Unity3D and an interaction manager in the hierarchy. Furthermore, a pair of brush hands should be placed with the Leap Motion controller. That will make every object with the interaction behaviour script movable. In the prototype, it is not the ship part the user can grab, but the parented cube in front of it (figure 4.19) that has the interaction behaviour script assigned. The box was also made bigger than the collider of the part, to make it easier to grab using Leap Motion.

Figure 4.19: A ship part and its handle.
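The idea of the enlarged handle can be illustrated with a simple containment test. This is a toy sketch: the axis-aligned box and the scale factor are assumptions, not the prototype's actual values.

```python
def inside_handle(point, centre, part_size, scale=1.5):
    """Return True if a pinch point lands inside the grab handle.

    The handle is modelled as a box parented to the ship part and scaled
    up relative to the part's own collider, so Leap Motion grabs are
    easier to land even when the hand tracking is slightly off.
    """
    half_extents = [s * scale / 2.0 for s in part_size]
    return all(abs(p - c) <= h
               for p, c, h in zip(point, centre, half_extents))
```

With a unit-sized part, a pinch 0.7 units off centre would miss the part's own collider (half extent 0.5) but still land inside the scaled-up handle (half extent 0.75).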

4.11.8 Assemble Ship

In order to assemble the ship, the user needs to pick up the parts by their parent object, called 'handle', and place them. Inside the ship model there is a box collider, which all parts need to hit in order to snap, lerping them smoothly towards their destination on the ship model. This was done in void OnTriggerEnter (figure 4.20), which activates when a rigidbody hits the collider.



Figure 4.20: Destroys the handle.

To prevent the part from being movable after being placed in the right position, a bool isHit turns true and the handle is destroyed. With the basic interaction implemented, the prototype was ready for a usability test, which would focus on the colour tracking and Leap Motion working as intended.
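The snap behaviour can be modelled as a small state machine. This is a sketch of the logic, not the actual C# script; the names follow the description above, and the instant snap stands in for the smooth lerp.

```python
class ShipPart:
    """Toy model of the OnTriggerEnter snap logic for one ship part."""

    def __init__(self, destination):
        self.destination = destination
        self.is_hit = False      # becomes True once the part has snapped
        self.has_handle = True   # the grabbable 'handle' parent object
        self.position = None

    def on_trigger_enter(self):
        """Called when the part's rigidbody hits the ship's box collider."""
        if self.is_hit:
            return
        self.is_hit = True       # the part can no longer be moved
        self.has_handle = False  # the handle is destroyed
        # In the prototype the part is lerped smoothly towards its
        # destination; here it simply snaps into place.
        self.position = self.destination
```

Destroying the handle rather than the part itself is what removes the grab affordance while leaving the placed geometry on the ship.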



4.12 Usability test

The high fidelity prototype will be evaluated through a usability test. The purpose of the test is to gain knowledge about the reliability and efficiency of the system. The usability test will mainly focus on learnability, efficiency and errors (figure 4.21). This test is the first step before the final evaluation.

Figure 4.21: Overview of the focus areas for the usability test.



4.12.1 Theory

This evaluation is designed as a formative evaluation, since the prototype will be evaluated early in the process in order to make improvements to the system. This type of evaluation is useful for detecting and eliminating early-stage problems (Bjørner, 2015). The data for this evaluation will be qualitative, collected through direct and non-participant observation, meaning that the observers will look directly at the participants' behaviour without interrupting during the test. This will give an idea of what works with the system and what does not. Since the data collected through observation will not provide specific feedback on how and why errors or problems occur, but rather what errors exist, a semi-structured interview will be conducted afterwards. This will provide rich qualitative data, which, together with the qualitative data from the observations, should lead to further improvements of the system. Holographic displays supported by Leap Motion and other interaction methods such as colour tracking are relatively new on the market (1.2 Analysis of the Dreamoc), which means the prototype will most likely meet many first-time users (1.5.3 Characteristics and capabilities). As mentioned before, the focus points for the test are learnability, efficiency and errors. Learnability refers to how long it takes for novice users to accomplish basic tasks the first time they encounter the design (Preece et al., 2015); in other words, how easy and intuitive the system is. Learnability will be estimated through observation, looking at how hard or easy it is for the users to accomplish the tasks of the program. Efficiency refers to the response time of the system. Using process-consuming methods, such as Leap Motion and sending processed webcam data through a client-server program, some latency might occur. In order to check whether this is a noticeable problem for the users, they will be asked in the interview if they experienced any delays or strange movements in the interactions; if not, it can be concluded that there are no major latency problems. Errors refer either to bugs in the system or to design implications that can cause trouble for the user. The complexity of the system requires the usability test to investigate the reliability of the prototype. This area covers what kinds of errors the users make.

4.12.2 Sampling

Initially the sampling group would have been found through non-probability quota sampling, by choosing certain pre-characteristics of the possible participants (Bjørner, 2015), making them as close to the visitors of a museum as possible. Due to lack of time related to problems found in various pilot tests, the sampling group was found through convenience sampling instead. The evaluation was run with 8 participants; the first four were expected to identify problems, while the last four were used for confirmation.

4.12.3 Setting

The usability test setting was a controlled environment. It took place in a closed room, lit only by a spotlight in order to keep a constant brightness level, since prototype observations revealed that brightness changes badly influence the performance of the colour tracking. During the test one facilitator was in charge of the test, acting also as an observer (figure 4.22). The holographic display was placed so that its three sides were easily accessible. One or two test participants were interacting with the holographic display using a Leap Motion controller and a webcam that can register the axe. A computer with its screen turned off was placed close to the holographic display in order to host the software needed for the testing and to connect the webcam and the Leap Motion controller to the holographic display. An external webcam was used to make the tracking camera receive feed from the best possible angle. An observer was assigned to each test participant and one cameraman was recording the session, as long as the participants had agreed to all the terms described in the consent form (Appendix F). Throughout the whole test, a maximum of three researchers were present.

Figure 4.22: Setting of the usability test.

4.12.4 Test Procedure

The test was planned carefully to minimise bias by making each test as similar as possible. Therefore a test procedure was created (Appendix G), which the facilitator and the researchers should follow for each test. The tasks were explained on a poster placed on the front of the hologram, relating to the tasks that can be performed on the specific side (Appendix H). When the test started, the facilitators handed out a "Consent form". Afterwards they acted as non-participatory observers and used an observation sheet (Appendix I) to get a clear overview of what data needed to be noted. Additionally, a cameraman recorded video from different angles for exam purposes, but only if the participants had agreed to be recorded in the "Consent form". To compensate for the learning effect, there were two versions of the test. At first, two participants tried the prototype together, after which they did it separately. This order was reversed for the second pair of participants and for every second pair thereafter.



4.12.5 Tasks

The task for the participants was to build a ship, either alone or by collaborating. The task consisted of two elements, and the user was measured in each: chopping pieces and assembling the ship. Before executing the usability test, a number of pilot tests were conducted to see whether there were any major problems with its structure.



4.13 Test and Results

After the usability test, the qualitative results obtained through observation (Appendix J) and interviews (Appendix K) were used to create a list of changes, to get an overview of what needed to be redesigned or implemented. The list was split into three sections: Colour tracking, Leap Motion and Hologram.

4.13.1 Colour tracking

Some participants felt that the axe in the hologram was too small to see, while others felt that the real axe, used for tactile feedback, was too small. Furthermore, a few people experienced that the axe started chopping by itself. A lot of the participants had problems realising the plank was ready and continued to chop it. This was a slightly bigger problem when they were doing it alone, as the wood was already chopped when they got back from the Leap Motion part of the prototype. One person said that the prototype was missing haptic feedback when he chopped the wood.

4.13.2 Leap Motion

Only a few of the participants had a hard time trying to grab the objects, whereas most of the participants had problems figuring out what and when they could grab on the Leap Motion side. The problem here was that they did not see which part was movable, and some of them even tried to grab and move the frame of the ship instead of the part.

4.13.3 Interaction

A problem that occurred when the participants tried the prototype alone was that it was hard to figure out that they had to move around the hologram to place the objects. In fact, only one person clearly read the signs before trying the prototype. Furthermore, one person commented that there should be less waiting time between the tasks, so that the wood-chopping side could continue chopping and did not need to wait for the other person. Another person mentioned that the area where the user could move the Leap Motion hands around in the hologram was too small compared to how much they moved in real life. After the test it was also very clear that the participants had a much easier time figuring out how to build the ship when they were collaborating, compared to the single user experience.



4.14 Discussion

The evaluation gave a lot of positive results, with the fact that all users quite quickly learned how to interact with the hologram.

4.14.1 Colour Tracking

As some participants found that the real axe tool was too small, a new axe would have to be created in order to accommodate this. One participant wanted haptic feedback when chopping with the axe, which could be achieved by placing a real log with a colour marker: when the axe got close to that marker, the virtual axe would chop. This would give haptic feedback, as the user would feel the axe hitting the log. The fact that the virtual axe sometimes chopped on its own was a problem caused by the calibration of the colour tracking, which caused problems at the time. This is fixed by calibrating every time the hologram is moved to a new location or if the light in the location changes. Another problem was that the participants had trouble knowing when they were done chopping the piece. This could be fixed by simply making the log disappear.

4.14.2 Leap Motion

A few people had trouble grabbing the objects with the Leap Motion. This can be optimised by making the grabbable area for the object bigger. Furthermore, some participants had trouble seeing which objects they could grab. This could be fixed by making a highlight around the ship part. Another comment was that the Leap Motion area was smaller in the hologram than in real life. This is due to the design of the Dreamoc and the fact that the Leap Motion hands need to be big in order to have a big interactive field.

4.14.3 Interaction

From the observation it was clear that the participants learned the system quicker if they tried the multi user experience first. To fix this, an animation or an introduction video could be presented to the users.

4.14.4 Sub Conclusion

The usability test gave a clear overview of the shortcomings of the prototype and brought it one step closer to being ready for the final evaluation. The next section starts a new iteration by forming the design requirements needed for implementing the changes.



5 | Second Iteration

5.1 Design

The results showed that there were some minor changes that needed to be made in the prototype after the usability test. However, because these are rather small visual and usability corrections, this iteration will start by proceeding straight to the requirements.

5.1.1 Requirements

Some of these changes are more important than others to implement before the final evaluation. Therefore only the most important changes were picked out and listed below:

• A bigger axe in the hologram, to make the axe more visible to the user.
• Implementation of highlights, to help the user know when something can be interacted with.
• Removing the log when it has been chopped, so the user will not be confused about whether they have chopped the wood or not.

Furthermore, there are some functions that were not implemented in the first iteration, which would be needed for the final evaluation. These are:

• An introduction video.
• The fireplace steaming animation.
• Making the object move from the fireplace to a place reachable by the Leap Motion side.
• A sail that is placed on the ship when it has been built.
• An idle mode with a sailing animation.


5.2 Implementation

5.2.1 Bonfire and Steam

While the side screens were used for chopping wood and placing ship parts, the big middle screen's bonfire was not working properly. The steam over the fireplace was implemented by setting a steam particles prefab (downloaded from the asset store) active at the proper time in the coroutine part of the code (figure 5.1), where other lerps and animations were placed.

Figure 5.1: Coroutine - activating steam prefab.
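Unity coroutines yield once per frame; the timing logic in figure 5.1 can be mimicked with a Python generator. This is an illustrative sketch with a hypothetical frame count, not the project's code.

```python
def steam_coroutine(scene, frames_to_wait=60):
    """Generator mimicking a Unity coroutine: yield once per 'frame',
    then activate the steam particles prefab at the proper time."""
    for _ in range(frames_to_wait):
        yield                       # one yield per rendered frame
    scene["steam_active"] = True    # like SetActive(true) on the prefab


def run_frames(coroutine, frames):
    """Drive the coroutine for a number of frames, like Unity's player loop."""
    for _ in range(frames):
        try:
            next(coroutine)
        except StopIteration:
            break
```

Placing the activation inside the same coroutine as the lerps and animations is what keeps the steam synchronised with the plank's movement.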

After the plank has been steamed, it is lerped again to its waiting position between the fire and the ship, from where the user can pick it up and place it on the ship.

5.2.2 Removing log

As the results from the usability test show, the remaining parts of the log, which were animated to fall apart revealing the plank, confused people into thinking they should still use the axe to chop. The simplest way to prevent this was to remove the remaining log parts after the animation had been played. This was done by disabling the game object right after it had been chopped off and enabling it again when the ship part had been placed.


5.2.3 Axe size

A comment from the usability test said that the axe in the hologram was too small and confusing to look at. This was fixed by scaling up the axe object inside Unity3D.

5.2.4 Sail on ship

In the prototype for the usability test, nothing happened when the user was done assembling the ship. A bool was created to check if the ship had been assembled (figure 5.2), and then, to indicate that the process is finished, the mast with sails, ropes and paddles drops down onto the ship using lerp. At the same time a coroutine starts, which loads the ship animation scene.

Figure 5.2: Makes the sail sink down onto the ship

5.2.5 Ship Animation / Idle mode

For the water and ship animations, open source code found on GitHub was used. Unity3D's standard assets provided water surface models and shaders, which, combined with the Java code from GitHub, created the effect of waves, which can be seen after building the ship in the prototype. For the ship to simulate movement on waves in the y axis, code was created which applied a force on the ship's position depending on the distance from the designated centre. For rotation on the other axes, lerping over time was used, using Time.deltaTime (figure 5.3).

Figure 5.3: Makes the final animation move from side to side.
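The bobbing described above amounts to a restoring force proportional to the distance from the designated centre; a sketch of one integration step (the stiffness and timestep constants are illustrative, not the prototype's values):

```python
def wave_step(y, vy, centre=0.0, stiffness=4.0, dt=0.02):
    """One physics step of the ship's vertical bobbing.

    A force proportional to the distance from the designated centre
    pulls the ship back; integrated over time (semi-implicit Euler,
    like a per-frame Unity update) this produces an oscillation on
    the y axis.
    """
    vy += stiffness * (centre - y) * dt   # restoring force updates velocity
    y += vy * dt                          # velocity updates position
    return y, vy
```

Called once per frame with dt as the frame time, this makes the ship overshoot past the centre and swing back, giving the wave motion.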

5.2.6 Highlights

The highlighting for the ship parts was made in ShaderLab, which is used to organise a shader structure. Using a free template, the highlight shader was created to fit the prototype's purpose (figure 5.4).



Figure 5.4: A ship part with the highlight.

5.2.7 Introduction Video

To prevent the users from being confused about what to do, an introduction was created. It presented the interaction methods that are used to interact with the hologram and showed their influence on objects inside the prototype.

5.2.8 Sub Conclusion

After analysing the data from the usability test and having the changes implemented, the prototype was ready for the final evaluation.



5.3 Final Evaluation

The final evaluation focused on the comparison between single user and multi user interaction. The comparison was based on the results of different Intrinsic Motivation Inventory items, which is elaborated further in the Theory section. The evaluation was trying to answer the final problem statement presented below:

"How is a museum exhibit experienced through a Real Fiction hologram with a single user interaction, compared to a collaborative interaction?"

5.3.1 Theory

Quantitative

The evaluation was designed as an experimental test, with the purpose of comparing two different variables. Each test was conducted on different participants to prevent them from gaining experience from previous interactions with the prototype. The advantage of this is that the order effect (Preece et al., 2015) will not bias the results. However, each person has their own individual understanding and preferences, which could influence the results significantly. To reduce this, a relatively large number of participants is needed, in the hope of sorting out underperforming or overperforming individuals (noise) by taking an average. The test was carried out through a field study located in the target group's natural environment. Answering the final problem statement required both qualitative and quantitative data gathering. Quantitative data was gathered through an acknowledged measurement device used to assess participants' subjective experience and motivation (Self Determination Theory, 2016). The device is called the Intrinsic Motivation Inventory (IMI) and was used through an online questionnaire in both Danish and English (Appendix L). The IMI has 7 subcategories and 45 items, which can be used in questionnaires. For the purpose of the current project, 4 of the subcategories were used: enjoyment/interest, perceived competence, effort/importance and value/usefulness. The enjoyment/interest subscale is used to assess intrinsic motivation and was important for analysing the experience between the two conditions. Perceived competence was considered relevant, as the test participants had had no chance to experience the prototype previously. Effort/importance was chosen to detect any differences in how much effort and engagement was put in when comparing the two conditions. Lastly, value/usefulness was considered important to gain knowledge of how well the content fitted the target group and the context of the VSM. From those four subcategories, 14 items were chosen in total, and together with 2 additional demographic questions they formed a quantitative questionnaire (Appendix L).

Qualitative

Apart from the quantitative questionnaire, two different qualitative methods were used to gather data:

5.3.2 Observation

The observation was planned as a direct, non-participatory observation, meaning that the participants remained uninterrupted during the test. To gather rich and informative data in the four different categories (Appendix M), firstly the participant trying the prototype was described. Secondly, the observer noted down technical issues. Thirdly, the behaviour and visible feelings of the participant were noted down. Lastly, additional observations, thoughts and other unexpected occurrences were documented.

5.3.3 Interview

Aside from observations, qualitative interviews were conducted (Appendix N). The interviews were semi-structured (Preece et al., 2015) and consisted of a few follow-up questions related to the IMI questionnaire. Single users were interviewed alone, while multi users were interviewed together.

5.3.4 Sampling

The sample group was found through probability sampling: the participants were randomly selected at the VSM. 19 participants were documented in total; 11 participants were multi users, while 7 were single users.

5.3.5 Setting

The test setting was located together with other exhibitions in the basement of the VSM due to colour tracking preferences. The holographic display was placed in the middle of the room to be easily accessible (figure 5.5). In the test room there were two researchers, the observer and the cameraman. In the room next to the test room a researcher in charge of questionnaires and an interviewer were located. The laptop was placed behind the holographic display to avoid distractions. In front of the Dreamoc there was a tablet with a video looping constantly. The video showed how to use and interact with the prototype. Above the tablet there was a poster (Appendix O), which is not represented in figure 5.5. The cameraman also functioned as a researcher recruiting participants, when he did not record. It was necessary since the basement exhibit was not clearly visible and often neglected by the visitors at the VSM.

Figure 5.5: The final evaluation settings.

5.3.6 Test Procedure

A test procedure for the final evaluation was described to secure a consistent procedure (Appendix P). The preparation of the testing environment took place one hour before the VSM opening hours. After finding a location with enough space and good light conditions, the prototype was prepared and calibrated, posters were placed, and the researchers went to their positions and got ready for the test. The test was performed on visitors interested in the prototype or gathered by the scout. Participants were not given information about the prototype apart from the video that was displayed on a tablet in front of the hologram. During the test session an observer took notes on the type of participants, the prototype's performance, his own thoughts and the participants' behaviours and expressions. In the first half of the day, a cameraman recorded footage that could prove useful for documentation. In the second half, the cameraman changed his role to scouting for participants, as fewer people were coming the later it got. After each participant finished using the prototype or wanted to leave, they were asked to fill in a questionnaire and to be interviewed.

5.3.7 Results

Qualitative data

Observation

While observing visitors at the VSM interacting with the prototype, specific events occurred regularly. Most participants starting as single users would gather someone to do the task at the opposite side of the holographic display, or someone would join in without being asked. Both the Leap Motion and the colour detection were sometimes unstable and caused confusion amongst the test participants. When everything worked as supposed to, the test went without any usability problems. Different age groups tried to tackle the few technical problems in different ways. Children would keep going and try any number of times until it worked for them. Young adults in their twenties would stop, think and then try again, but slower and more concentrated. Older participants above 40 would give up and state that they would like to stop testing. Presumably it took too much time to build the entire ship, and it appeared that participants wanted to finish what they started even though they seemed to lose some enthusiasm, possibly because of the repetitive content. All but a few participants seemed to be excited and put a lot of effort into accomplishing the given task. Most participants talked with the groups they arrived with while using the prototype.

5.3.8 Interview

Eight interviews were conducted to gather qualitative data, which indicated certain patterns. The interviewed people were not necessarily the same people who filled in the questionnaires, as some people only agreed to the qualitative part of the test. 14 of the interviewees had tried the prototype as multi users (interviewed in pairs), while the remaining one was a single user. All 14 multi users mentioned that they found the prototype fun, for different reasons. The majority described it as a new and different way of using technology. A few participants also mentioned that they found it fun to be able to collaborate with each other. Additionally, the answers indicated that around half of the participants interviewed experienced the ability to interact using their hands as fun. Some of the participants also explained that they found the prototype exciting and more engaging compared to other non-interactive exhibits.



On the contrary, the majority did not feel they learned much related to the VSM. Instead, most felt that they learned something about the technology and how to use it properly. It was mentioned that the prototype was missing certain aspects that could assist with learning; it was suggested that including either relevant text or videos would be beneficial. Most participants needed to figure out how the prototype worked, but they quickly understood what to do and how to do it. Therefore the majority described their own performance as good, and they felt satisfied. However, those describing that they did not feel very good at it had experienced certain usability problems. Some mentioned that the axe was unresponsive and that they did not know exactly how to use it. Some even gave up, as the axe did not act as they wanted it to. Additionally, a few experienced some smaller usability issues with the Leap Motion, as their hand disappeared or turned into the outline of the hand. The single user was very unsatisfied and found the experience weird and annoying. He was impatient and felt that it took too long before the plank arrived as a piece he could put on the ship. Lastly, the majority of participants mentioned that the prototype would fit well at the VSM. They found it relevant and felt that the content could be beneficial. Mostly it was described as a new and interesting way of mediating and gaining new knowledge in a more modern context.

Quantitative

The quantitative data results are split into two groups: single user results and multi user results. They are further split into four categories that represent certain aspects of the IMI.

Figure 5.6: Single user test results

Figure 5.6 shows that single users scored an average of 4.9 in enjoyment, which means that they enjoyed it slightly. The single users felt that they performed adequately at the activity, with an average of 5.0 in perceived competence. The table also shows that the single users did not put a great amount of effort into the activity, scoring 4.2. Furthermore, the single users scored an average of 4.1 in value, meaning that they felt it had relatively little value to them.



Figure 5.7: Multi user test results

In figure 5.7 it can be seen that multi users scored 5.7 in enjoyment, which means that they generally liked it. They also felt that they performed quite well at this activity, with a score of 4.6 in perceived competence. The multi users scored 4.5 in effort, meaning that they put a little more effort into doing the activity, while they scored 4.3 in value. This indicates that the multi users felt they did not learn much from the activity. The results found from the evaluation will be analysed and discussed in the following section.
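For clarity, the subscale scores reported above follow the IMI convention of averaging each category's 7-point Likert items, with reverse-scored items transformed as 8 minus the response (Self-determination Theory, 2016). A minimal sketch of this calculation, using illustrative numbers rather than the actual study data:

```python
# Sketch of how an IMI subscale score could be computed.
# Item indices and responses below are illustrative, not the study data.

def imi_subscale_score(responses, reversed_items=()):
    """Average a list of 7-point Likert responses into one subscale score.

    Reverse-scored items are transformed as (8 - response), per the
    standard IMI scoring instructions.
    """
    scored = [8 - r if i in reversed_items else r
              for i, r in enumerate(responses)]
    return sum(scored) / len(scored)

# Hypothetical participant: four enjoyment items, the last one reverse-scored.
enjoyment = imi_subscale_score([6, 5, 7, 3], reversed_items={3})
print(round(enjoyment, 2))  # mean of [6, 5, 7, 5] -> 5.75
```

The group averages in figures 5.6 and 5.7 are then the mean of these per-participant subscale scores.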



6 | Discussion

The purpose of the final evaluation was to answer the final problem statement: How is a museum exhibit experienced through a RealFiction hologram with a single user interaction, compared to a collaborative interaction?

Comparing the two can be done by looking at the IMI's first category, enjoyment and interest. Here the multi users scored 5.7, while single users scored 4.9, which indicates that the multi users in general enjoyed the activity more. This is further backed by the interviews, where 14 people described the activity as fun. Some participants even argued that the fun part was being able to interact with each other, so much so that single user interaction automatically turned into collaboration. The findings from the observation further support this claim, and it was also observed that the participants were excited to try the activity. Furthermore, it was noted that there was a lot of communication between multi users, creating a more social experience in the museum. This might have led to an enhanced experience for the multi users.

In the competence comparison, single users scored 5.0, where multi users scored 4.6, meaning that the single users felt they did better compared to the multi users. However, according to the IMI (Self-determination Theory, 2016), people tend to be modest about their own performance when they have to self-report. From the observation it was quite clear that people performed very well, hence the reason for them not giving themselves a high competence score must be problems with the system. This is further backed by the interviews, where some people reported problems with both the Leap Motion and the axe.

The importance and effort category tells how much effort the participants put into the activity. The single users scored 4.2, while multi users scored 4.5. The difference here is very small, meaning that the multi users only felt that they put a bit more effort into it compared to the single users. This might be due to the few challenging aspects in the prototype, or because of the very linear storyline, which might influence the perceived choice of the participants. However, this was not measured during the test and can only be seen as an assumption.

The last result from the quantitative data is value and usefulness. The multi users gave a score of 4.3, while single users gave it 4.1. The difference is so small that it hardly matters. This means that they generally did not feel like the activity had great value. Furthermore, the users felt they gained little to no knowledge of things related to the VSM; however, through observation and the interviews it was clear that they did learn the controls of the prototype fast.

An interesting aspect from the observation was that most single users turned into multi users without any interruption from the researchers (Appendix M). The data indicated that interacting as a single user felt unnatural to the visitors, since they arrive in groups and intuitively want to experience it together.

6.0.1 Validity and Bias

It is important to be aware of bias when looking at the results, to improve their validity (Bjørner, 2015).

Uneven amount of participants

A very big bias for the results is that the quantitative results cover 11 multi users and only 7 single users. The data will still give an idea of the result, but it is non-conclusive. For it to be more conclusive, there would have to be an equal amount of each. The same problem occurred for the observation and the interviews. For the interviews there was an even bigger bias, as only one single user was interviewed against 14 multi users. We became aware of this bias when it started to occur during the testing: most participants wanted to try it as multi users. To try to remove this bias by getting more single users, we told participants before they tried the prototype that it was a single user exhibit. This gave us some more single users, but not enough, as they naturally either joined in or got others to join in. It was difficult to avoid these differences, as the test was set in a natural environment where unexpected events occur.

Scouting participants

The reason for choosing a natural environment was to give the participants an experience as natural as possible. The problem this created was that the place that was best for the hologram was a place where not many visitors came by naturally. The curator of the VSM explained this as the result of most visitors not knowing they were allowed to go there. Because of this, a researcher was assigned the role of asking visitors whether they wanted to participate when no one came by. This might have created some bias, as some participants came by and tried it naturally, while others were scouted.

Negative participants

During the evaluation some participants had a very negative attitude, perhaps because they were not there of their own free will, but rather on a school trip, and did not want to be there. Participants who had a negative attitude towards being at the museum to start with might have biased the results. One participant in particular clearly had a negative attitude, which was very visible in his questionnaire answers, as he had given neutral answers to all questions. His results were therefore denoted as noise and removed from the questionnaire. However, there might have been more people with a similar attitude, which might have caused bias.

Interrupt and help elderly

It was planned that the participants would not be interrupted while using the prototype. During the evaluation there were 3 elderly people who had big problems using the prototype, so the observer had to interrupt and help them. This might have created some bias, since all other participants tried the prototype uninterrupted.

Questionnaire

As the VSM gets both Danish and international visitors, a Danish and an English questionnaire were made in order for everyone to be able to answer. This might have created some bias because of translation between the different languages. Another problem with the questionnaires is that the scale in the English questionnaire went from “Not true at all” to “Very true”, while the Danish one (translated) went from “Totally disagree” to “Totally agree”. This might have created a bias, as people might not have understood the two translations the same way and might have answered differently.

Possible wrong categorisation

Some users started as single users until someone joined in, making them multi users. Some of these people might still have categorised themselves as single users even though they should have been counted as multi users. This might have caused a bias, as some of the results from the single user section might actually belong in the multi user results.


7 | Re-design

The data gathered from the last evaluation, together with the discussion, creates an array of elements that need to be improved.

7.0.1 Leap Motion

Some observations mention that the Leap Motion hand would in some cases disappear or only show an outline of the hand. The reason for this problem is unknown at this point. In order to fix it, further research on the problem would be needed.

7.0.2 Colour tracking

During the evaluation there were some comments about the axe not being very responsive. The reason for this was colour decalibration caused by lighting, shadow and similar colours appearing in front of the camera. The solution is to calibrate the colour properly, by compensating for external brightness changes, or to find a totally new and more reliable tracking method, e.g. a sender and a receiver of infrared light, much like the Nintendo Wii Remote.

7.0.3 Learning experience

One of the most prominent results that both the quantitative and qualitative data showed was that participants lacked a personal or informational factor motivating them to perform this activity. Therefore, more information about how to reconstruct the ship should be presented before, during and after the activity.
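One way to compensate for brightness changes, as proposed above, is to classify marker pixels by hue and saturation in HSV space rather than by raw RGB values, since external lighting changes mostly affect the V channel. The sketch below is a minimal pure-Python illustration of this idea (a real implementation would use a vectorised library such as OpenCV); the function name and thresholds are illustrative, not part of the prototype:

```python
# Minimal sketch: match a colour marker by hue/saturation only, ignoring
# brightness (the V channel), so lighting changes do not decalibrate tracking.
import colorsys

def is_marker_pixel(r, g, b, target_hue, hue_tol=0.05, min_sat=0.4):
    """True if an 8-bit RGB pixel matches the marker colour.

    Brightness is deliberately ignored, so the same marker is matched
    in both bright and dim lighting.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Hue is circular: distance must wrap around 1.0 (red sits near 0 and 1).
    dist = min(abs(h - target_hue), 1 - abs(h - target_hue))
    return dist <= hue_tol and s >= min_sat

# A saturated red marker is matched whether brightly or dimly lit:
print(is_marker_pixel(220, 30, 30, target_hue=0.0))    # bright red -> True
print(is_marker_pixel(90, 12, 12, target_hue=0.0))     # dim red    -> True
print(is_marker_pixel(120, 120, 120, target_hue=0.0))  # grey       -> False
```

The marker position could then be estimated as the centroid of the matching pixels in each frame.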



8 | Future works

As not everything from the design was implemented to accommodate the final evaluation, and because of the results from the final evaluation, there are many additional elements needed for further development of the prototype.

Leap Motion

Right now the Leap Motion part of the prototype is not finely tuned. In the future, highlighting the place where the user has to place the ship part would benefit the final product and possibly motivate the user by challenging him a little bit with a slightly harder task to perform.

Restart Function

At the moment, the program has to be turned on and off manually in order for the program to restart. Ideally the program should restart itself after the ship has been built or when it has not been used for some time.
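The restart behaviour described above amounts to an idle timer checked once per frame. A small sketch, where `reset_scene`, `on_interaction` and the timeout value are hypothetical stand-ins rather than actual prototype code:

```python
# Sketch of the proposed restart function: restart the experience when the
# ship is finished, or after a period with no interaction.
import time

IDLE_TIMEOUT = 120.0  # seconds without interaction before auto-restart


class AutoRestart:
    def __init__(self, reset_scene, now=time.monotonic):
        self._reset = reset_scene      # hypothetical hook into the prototype
        self._now = now                # injectable clock, eases testing
        self._last_input = now()

    def on_interaction(self):
        """Call whenever the user chops wood or moves a ship part."""
        self._last_input = self._now()

    def update(self, ship_finished):
        """Call once per frame; restarts when finished or idle too long."""
        if ship_finished or self._now() - self._last_input > IDLE_TIMEOUT:
            self._reset()
            self._last_input = self._now()
```

Injecting the clock keeps the logic testable; in the prototype, `update` would be driven by the render loop and `on_interaction` by the axe and Leap Motion input handlers.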

Better Animation

The sailing animation after the ship has been assembled should have more effects, like water splashing against the ship and wind in the sail. It could provide a better illusion of depth and simulate the real behaviour of a ship on water.

Camera Sensor

A camera sensor that senses when a person walks by should be implemented as well. When a person walks by the hologram, an animation should play in order to catch the attention and interest of the visitor.

Introduction Animation

One of the UI laws states that the designer should not hesitate to ask the user to do something specific (1.8.2 Seven Laws of UI design). Therefore, an introduction animation should be implemented in the future that shows the user what to do and how to do it.

Another tracking form

The prototype utilises colour tracking in order to cut the wood. However, this is not the most efficient form of tracking, since there are a lot of variables, like people's clothing, lighting and weather, that can create a lot of noise in the data from the colour tracking program. Therefore, another form of tracking would possibly be more beneficial.

More Tasks

Some of the participants got a bit bored of doing the same two tasks over and over again. Therefore, in a finished product more tasks should be implemented for the user, so they have more jobs to do.



9 | Conclusion

During the summer, the Viking Ship Museum experiences a huge spike in visitors due to its popular outdoor activities. The main attraction is the reconstruction of the Viking ships that the museum finds underwater near the coasts. In the winter, however, the museum is forced to rely on its indoor exhibitions, which have not yet gained such great popularity (1.4 Viking Ship Museum). An interview with an expert from the museum (1.5.1 Expert Interview with Andreas Kallmeyer Bloch) revealed that, to compensate for the lack of physical interaction, the museum is preparing a whole new interactive indoor exhibition, and it invited us to collaborate towards creating a suitable exhibit. Since the museum visitors mostly arrive in groups, the exhibit should allow for both single and multi user interaction.

As the starting point of this research was to enhance the experience of using a Dreamoc hologram (1.1.1 Initial Problem Statement), the next step was to research unexplored ways of interacting with the given display. This concluded in using the Leap Motion, and by further developing the concept and researching UX, it became clear that one part of the prototype needed tactile feedback (4.2 Concept development) to provide a better user experience. Holding a physical object marked by two different colours and tracking them through image processing turned out to be the best possible solution for transferring the object data into the prototype.

To answer the previously formulated problem statement (2.1 FPS), the prototype went through a concept development process, an iteration finalised by a usability test, and an iteration with a prototype ready for a final evaluation. The final evaluation (5.4 Final Evaluation) compared the motivation of single and multi user participants and concluded that multi users enjoyed the experience more than single users. A possible reason for this is that collaboration and communication between the test participants enhanced their experience. The perceived value did not reach expectations for either group of visitors, meaning they did not realise that the content of the prototype had an educational purpose (6 Discussion). Moreover, almost all single users were quickly joined by other users. Although the results were not fully conclusive because of the recognised bias (6.0.1 Validity and Bias), the data indicates that a collaborative and interactive use of a Dreamoc proves its usefulness better compared to one designed for just a single user. With no previously documented and published research on an interactive Dreamoc hologram used for cooperation, this study provides a basic framework for further development of future projects using similar technology and invites further research towards improving it.



10 | References

Bjørner, T. (2015) Qualitative Methods For Consumer Research. Hans Reitzels Forlag.

Christensen, C., Hansen, J., Hansen, C. & Løssing, A. S. W. (2009) Digital museumsformidling – i brugerperspektiv. Copenhagen: The Heritage Agency of Denmark.

Colgan, A. (2014) How Does the Leap Motion Controller Work? Blog.leapmotion.com. Retrieved 29 September 2016, from http://blog.leapmotion.com/hardware-to-software-how-does-the-leap-motion-controller-work/

Cotman, C. W. & McGaugh, J. L. (2014) Behavioral neuroscience: An introduction. Academic Press.

Edwards, R. H. (1981) Human muscle function and fatigue. Human muscle fatigue: physiological mechanisms, 1-18.

Garrett, J. J. (2010a) The Elements of User Experience: User-Centered Design for the Web and Beyond. Pearson Education, New Riders.

Garrett, J. J. (2010b, December 26) The Elements of User Experience: User-Centered Design for the Web and Beyond (2nd Edition). New Riders.

Geng, J. (2013) Three-dimensional display technologies. Advances in Optics and Photonics, 5(4), 456-535.

Johnson, J. & Henderson, A. (2002) Conceptual models: Begin by designing what to design. Interactions, January/February, 25-32.

Kulturministeriet (2011) Udredning om fremtidens museumslandskab. Kulturstyrelsen, Kulturministeriet. Retrieved from: http://kum.dk/servicemenu/publikationer/2011/udredning-om-fremtidens-museumslandskab/ ISBN: 978-87-91298-82-0.

Norman, D. A. & Shallice, T. (1986) Attention to action. In Consciousness and self-regulation (pp. 1-18). Springer US.

Parry, R. (Ed.) (2013) Museums in a digital age. Routledge.


Pepper, J. H. (1890) The True History of the Ghost. London, Paris, New York & Melbourne: Cassell and Company.

Preece, J., Rogers, Y. & Sharp, H. (2015) Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons Ltd.

Rudloff, M. (2013) Det medialiserede museum: digitale teknologiers transformation af museernes formidling. Society of Media researchers In Denmark. ISSN: 1901-9726.

Ryan, R. M. & Deci, E. L. (2000) Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.

Ryan, R. M. & Deci, E. L. (2002) Handbook of self-determination research. University of Rochester Press.

Ryan, R. M., Connell, J. P. & Deci, E. L. (1985) A motivational analysis of self-determination and self-regulation in education. Research on motivation in education: The classroom milieu, 2, 13-51.

Self Determination Theory (2016, May 6) Intrinsic Motivation Inventory (IMI). Retrieved from: http://selfdeterminationtheory.org/intrinsic-motivation-inventory/

Tao, G., Chen, S., Tang, X. & Joshi, S. M. (2013) Adaptive control of systems with actuator failures. Springer Science & Business Media.

Vukovic, P. (2014, January 15) 7 Unbreakable Laws of User Interface Design. Retrieved from: https://99designs.dk/blog/tips/7-unbreakable-laws-of-user-interface-design/

Weichert, F., Bachmann, D., Rudak, B. & Fisseler, D. (2013) Analysis of the accuracy and robustness of the leap motion controller. Sensors, 13(5), 6380-6393.

Wigdor, D. & Wixon, D. (2011) Brave NUI World: Designing Natural User Interfaces for Touch and Gesture. Elsevier.

Zanker, J. M. (2010) Sensation, Perception and Action. Palgrave Macmillan.



Appendices



A | RealFiction Interview

See attached CD on the back of the book. Filename: RealFiction.m4a



B | Interview with Andreas K. Bloch

See attached CD on the back of the book. Filename: TheVSMInterview.m4a



C | Graph from the Viking Ship Museum's Annual Report 2015

The Viking Ship Museum's annual report 2015, p. 61.



D | The Viking Ship Museum's Annual Report 2015

Retrieved from: http://www.vikingeskibsmuseet.dk/fileadmin/vikingeskibsmuseet/_frontend_files_/documents/Om_museet/AArsberetninger/Aarsberetning-2015-Final-web.pdf



E | Skeleton Of Each Step









F | Consent Form A project by Group 308 Medialogy at Aalborg University Copenhagen

Consent Form

Welcome to the Ship Reconstruction Test! This is a usability test of a holographic display. The purpose of the test is to check how well the system functions and whether there are any problems with the software. The test can take up to 20 minutes, including an interview after the test. You have the right to withdraw from the study at any time. The collected data, including interview answers and observation notes, will be held confidential and used for exam purposes only. The test will be recorded on video and microphone. The group will use some of these recordings to make a video production that may be used as a CV for future employers. After the test there will be a short interview. When you are ready please tell the facilitator, and the test will begin.

Please check the boxes that reflect your wishes, sign and date the form below.

I agree to participate in the test.

I agree to being recorded



G | Test Procedure for Usability Test

Usability Test – Test Procedure

Before the test begins, the participants will be handed an instruction sheet (consent form). The test will be separated into two topics: Chopping wood and Assembling ship. Each of the topics will have its own area to gather information about. This is done to create a clear overview. Two people are needed for the test.

1. First the two will try the prototype together (we are aware of the learning effect).
2. Afterwards one of the participants will try the prototype as a single user, while the other waits. The participant who waits will be talking to a researcher without being able to see what is happening in the test.
3. Then the last participant, who only tried the multi user interaction, will try the prototype alone.

Every second test will be the opposite of the previous test. This means that the 2nd, 4th and 6th tests will take each participant through the single user interaction first, and then the multi user interaction last. Two observers will observe one participant each using the observation sheet (appendix I). The participant will not be interrupted during the test and the researchers will maintain the same tasks during all tests.

Tasks: The task for the participants will be to build a ship through collaboration. The participant will be asked to use the prototype uninterrupted until the task is finished. The task consists of two elements and the user will be measured in each task: Chopping pieces & Assembling ship.



H | Poster Guide for Usability Test



I | Observation Sheet

Chopping Wood

Assembling ship

Time spent pr task Number of Slips Number of Mistakes

Description of Errors (what, where, how, why)

Others




J | Observation Data

Observation notes

Group 1 Participant 1: It took some time to figure out what to do. Has a hard time grabbing. Waits with moving to other side after chopping, but does not chop like 2, though he does sometimes.

Group 1 Participant 2: Did not know where to hold axe at first. Keeps chopping until piece is ready for leap motion. Did not know what to grab when nothing was there. They talk a lot together when collaborating. Quickly understood the tasks.

Group 2 Participant 1: Keeps chopping. Do not understand what is happening on leap side.

Group 2 Participant 2: Had a hard time grabbing. Tried to grab ship instead of part.

Group 3 Participant 1: Did not keep chopping when alone. Understood task quickly. 2 Took some time before understanding what 1 did.

Group 3 Participant 2: Keeps chopping until piece is ready for Leap when alone.

Group 4 Participant 1: Keeps chopping wildly when alone. Does not see part on the other side at first. Leap Motion works quickly. Sometimes hard to grab.

Group 4 Participant 2: Moved with the plank. Hard time grabbing and placing. Did not know where to place. Understood second time.



K | Interview Data

Usability Interview Data:

Group 1 Participant 1: 1. Not at first. Couldn't see axe in scene. Needed to get started. Confused at axe moving on it. 2. Felt natural, tool small. Axe went off on its own. 3. Only with the axe, leap motion worked well. Didn't disturb. Measuring distance between leap. 4. Like two player. Didn't have to move around. 5. Fire confused a bit. 6. Yes. 7. Nice to see what partner did.

Group 1 Participant 2: 1. Needed to learn what to pinch. When start talking it worked. 2. Hard to grab things at times. 3. A bit jumpy with 2 hands. 4. Collaborative was funnier. Prefer to work with someone, and would do it with friends. 5. Fireplace seemed out of place. 6. Yes. 7. Chopping was not responsive at all times.

Group 2 Participant 1: 1. A bit challenging but understood quite fast. 2. Clunky, it moved on its own. 3. Again the axe moved on its own. Disturbed the experience of tool. 4. Multi was more fun. 5. Didn't understand the bonfire. 6. No. 7. Maybe some haptic feedback for the axe.

Group 2 Participant 2: 1. There was no indication, so was confused at first. 2. Okay, but hard to grab. 3. Leap worked okay, axe was weird. 4. Multi, it was annoying to move around the hologram. 5. Unintuitive to move around the hologram. 6. Once. 7. Make the axe better.

Group 3 Participant 1: 1. Easy to understand. 2. Unresponsive. Didn't get multiple chops. Motion natural. Went through. 3. Axe out of sync. Didn't really disturb. 4. Colab better, made more sense. 5. % 6. Yes. 7. Cool project, like different interactions.



Group 3 Participant 2: 1. Very simple and easy. 2. It was intuitive. But a small screen. 3. The skeleton hand. 4. A lot of walking and felt lonely, when doing it alone. Collaboration was best. 5. No. 6. Yes. 7. When doing the axe, had to do it slowly. Like to chop more wood, when alone. Boring to do alone.

Group 4 Participant 1: 1. Didn't see where wood went. Confused at first. 2. React slowly. 3. At first hard time telling the distance of leap. 4. More things to do in multi player would be nice. Liked multi. 5. Hand model changing. 6. Yes. 7. Fun, adaptable.

Group 4 Participant 2: 1. Read the sign, so easy. 2. It was not accurate. Fun to see the hand. Feeling the grab was not good. 3. No lag. 4. They were both fun. Single player was fun because doing both things. Multi player was fun because there was another. 5. Hand turn into stars. 6. Only at the demo. 7. Camera should be at a different angle.



L | Online Questionnaire



M | Observation Notes for Final Evaluation

Focus

Observations: Cooperation

Who?

-

Girl crowd, age 8-12, first participant(s)

Technical

-

Leap motion was hard to use but they did not give up.

Visible Feelings

-

They cooperated without the testers meddling. Very obsessed by the content, but had a hard time understanding what to do. Very energetic and curious.

-

It takes too much time to build the ship. A single boy is present and absorbs everything.

Own Thoughts

Focus

Observations: Cooperation

Who?

-

Girl crowd, age 8-12, second participant(s)

Technical

-

Leap motion went wireframe 5 times but the participant fixed it herself by waving her hand. A ship part placed itself rotated wrong.

Visible Feelings

-

The group almost fight in order to try the hologram. The teacher is unhappy because the children want to test the hologram instead of seeing the rest of the museum.

Own Thoughts

-

They were inspired to cooperate by the previous pair. Leap motion takes a long time compared to chopping wood.

Focus

Observations: Cooperation

Who?

-

Girl crowd, age 8-12, third participant(s)

Technical

-

The axe moves too slow as if it is unresponsive. The axe start chopping by itself.

Visible Feelings

-

Energetic as the rest of the girls.

Own Thoughts

-

They did not finish because of pressure from the teacher.



Focus

Observations: Cooperation

Who?

-

Girl crowd, age 8-12, fourth participant(s)

Technical

-

The axe is chopping by itself.

Visible Feelings

-

Energetic as the rest of the girls.

Own Thoughts

-

Overtook the previous girls unfinished ship and finished it.

Focus

Observations: Cooperation

Who?

-

Girl crowd + boy, age 8-12, fifth participant(s)

Technical

-

The axe moves too slow as if it is unresponsive. The axe start chopping by itself. Leap motion went wireframe

Visible Feelings

-

Very curious about the technical part and try to figure out how color tracking works. The boy: “It was mega fun to try and build that stuff”

-

The boy have observed everything and know what to do. The boy is good at using the leap motion. They had to leave before they were finished.

Own Thoughts

Focus

Observations: Single into Cooperation

Who?

-

Adult couple, age 40-60, sixth participant(s)

Technical

-

Explanation was needed and a hint towards the video tutorial. After the first piece the man started to use the leap and the woman chopped the wood. The axe chopped by itself. Leap motion went wireframe but worked.

Visible Feelings

-

A bit of annoyance from the man because the leap motion was unstable.

Own Thoughts

-

They speak together! The woman advise the man on what to do.



Focus Who?

Observations: Canceled -

Boy crowd, age 8-12, seventh participant(s)

-

Showed interest and wanted to try, but had to leave because of their teacher.

Technical Visible Feelings Own Thoughts

Focus

Observations: Single into Cooperation

Who?

-

Young guys, age 20-25, seventh participant(s)

Technical

-

Almost flawless

Visible Feelings

-

Very annoyed that he is not able to do a swear-related hand sign. Seems excited.

Own Thoughts

-

Focus Who?

He talked and commented a lot and asked about what we were doing and our study; he was told to wait until after the questionnaire. One of them stayed a while to figure out how it worked.

Observations: Canceled -

Woman, age 30+, eighth participant(s)

-

Showed interest but did not want to try.

Technical Visible Feelings Own Thoughts



Focus

Observations: Cooperation

Who?

-

Young guys, age 20-25, eighth participant(s)

Technical

-

Axe went crazy and chopped every piece automatic.

Visible Feelings

-

Very excited and find it entertaining.

Own Thoughts

-

Both participants looked a long time at the front screen. At last they moved to the sides.

Focus

Observations: Single

Who?

-

Boy crowd, age 20-25, ninth participant(s)

Technical

-

Axe chopping wood by itself.

Visible Feelings

-

Do not understand what to do, even after having it explained by the observer. To him both the video and the explanation make no sense.

Own Thoughts

-

Showed interest and wanted to try, but had to leave because of their teacher. He does not move around the display. This one went wrong.

-

Focus

Observations: Cooperation

Who?

-

Girls, age 8-12, tenth participant(s)

Technical

-

Axe do not respond very well. Axe chopping by itself.

Visible Feelings

-

Had a hard time understanding what to do. It was hard to use leap motion and did the grabbing gestures wrong.

Own Thoughts

-

The problems with the axe are annoying to everyone...



Focus Who?

Observations: Single -

Old man, age 60+, eleventh participant(s)

Visible Feelings

-

Do not understand what to do. Need a lot of help and instructions.

Own Thoughts

-

The prototype does not seem user friendly enough for everyone.

Technical

Focus

Observations: Single into Cooperation

Who?

-

Family, age mixed 15+, twelfth participant(s)

Technical

-

Axe chopping wood by itself.

Visible Feelings

-

Boy “Kinda cool :)”

Own Thoughts

-

Even though they started alone they started to cooperate and help each other. It seems unnatural to use the display alone.

-

Focus Who?

Observations: Single into Cooperation -

Couple, age 30-60, thirteenth participant(s)

Visible Feelings

-

Man “I am confused”. Do not understand what to do. They started to help each other instead of taking turns.

Own Thoughts

-

That man was so sceptical :/

Technical



Focus

Observations: Single

Who?

-

Father and daughter, age 10, fourteenth participant(s)

Technical

-

Axe chopping wood by itself.

Visible Feelings

-

A little bit of explanation was needed because the axe had chopped the wood by itself. Smiled less after the first 5 pieces.

Own Thoughts

-

Focus

The girl did everything herself! First successful single user. It seems very repetitive when only 1 person does it all.

Observations: Single / gave up

Who?

-

Old couple, age 60+, fifteenth participant(s)

Technical

-

Axe chopping wood by itself.

Visible Feelings

-

Do not understand what to do. Kept chopping when the axe had done everything itself. They watched the video more than 4 times.

Own Thoughts

-

They both read the poster! It was not intuitive to them what to do. They were very nice people.



N | Interview Questions Interview Guide Participant number:

Interviewer:

Date:

Participant 1 #

Questions

1.

Did you understand what you were supposed to do? - Was it difficult or easy to understand? - Did you feel confused at any point? Why and when?

2.

How was the experience of using a tool? - Did the motion you had to perform feel natural? Why, why not? - Did you experience any problems?

3.

Did the system feel responsive? - Any delay or jumps? - If yes: Did it disturb the experience? How? - Any other problems?

4.

How did you find the single player experience compared to the collaborative? - Did you like one better than the other? Why?

5.

Did you find any anomalies? (Something that felt out of place)

6.

Have you tried Leap Motion before?

7.

Any other comments?

Participant 2 #

Questions

1.

Did you understand what you were supposed to do? - Was it difficult or easy to understand? - Did you feel confused at any point? Why and when?

2.

How was the experience of using Leap Motion? - Was it intuitive? (did they know what to do straight away) - Was it difficult or easy to use? - Did you experience any problems?

3.

Did the system feel responsive? - Any delay or jumps? - If yes: Did it disturb the experience? How? - Any other problems?

4.

How did you find the single player experience compared to the collaborative? - Did you like one better than the other? Why?

5.

Did you find any anomalies? (Something that felt out of place)

6.

Have you tried Leap Motion before?

7.

Any other comments?




O | Poster Guide for Final Evaluation



P | Test Procedure for final evaluation



