


TABLE OF CONTENTS

1. Introduction
2. Importance of Physical and Digital Modelling
3. Tangible Media Interface - A Representation Platform
4. Examples of Interactive Architecture as Precedent Studies
5. Introduction of the Project and Site
6. Concept of the Project - Architecture That Senses and Responds
7. Initial Idea
8. Interaction List
9. Execution
10. Current Limitations of the Interactive Model
11. Comparison with PowerPoint
12. Conclusions
13. References and Bibliography



1. Introduction

Technological advances of the 21st century have superseded the conventional representation techniques of the architectural world. Not only do such methods save valuable time, but they also help present the ever-changing architectural synthesis in a more realistic and concise manner. The data available for these purposes is growing more complex, and techniques like Virtual Reality (VR) help represent it coherently to users. Above all, virtual reality is achieving a high degree of interactivity and photo-realism (Tanikawa, Hirota, et al., 2002). This project experimented with a new approach to building a virtual environment by integrating many different virtual environments, constructed from different types of datasets and rendering methods, and linking them with a physical model. It explored a new way of interconnecting the physical world with the digital world through Augmented Reality (AR): a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as video and graphics (Wikipedia, 2012).

2. Importance of Physical and Digital Modelling

Architecture is a multi-focal and complex cycle of designing, analysing, criticising and perfecting. An architect may resort to various forms of representation, such as 2D drawings, 3D models, physical models, rendered images and walkthrough videos, to convey concepts, depending on the complexity of the project's function and structure (Lennings, 2012). Physical models give a hands-on understanding of the design project and of the volumes of its spaces, as they are a 3D representation of the work. Digital models are faster to create and can vary in level of finesse; although they are viewed on a 2D screen, they give the user an understanding of materiality, volumes and visualisation using different swatches. Digital models are also more easily corrected and modified, though they require an in-depth understanding of software and a dependence on technology. Nowadays even the construction of physical models is becoming easier with the help of technological devices such as laser cutters and 3D printers. Hence, in the architectural world, neither of the two methods can be ignored, and at one point or another the importance of both is felt.



3. Tangible Media Interface - A Representation Platform

Inspired by the Tangible Media Group at MIT, led by Professor Hiroshi Ishii (Tangible Media Group, 2012), which combines physical form with virtual information, this technology was explored in the urban campus of the University of Sheffield. Tangible media is a sophisticated method of conveying information through the operation of both the physical and digital realms. Its advantages are manifold, especially for visually impaired users and for conveying information through interactive methods. The representation of real-time information is heightened through markers and direct human engagement.

4. Examples of Interactive Architecture as Precedent Studies

• The SENSEable City

Extensive research has been done into how real-time data generated by sensors, mobile phones and other ubiquitous technologies within cities can teach us about how cities are used and how new technologies redefine the urban landscape.

Figure 1, 2, 3: The SENSEable City Project (source: http://senseable.mit.edu/visual-explorationsurban-mobility/)



5. Introduction of the Project and Site

The project was introduced to the group in February 2012. The requirement was to develop a prototype incorporating a nexus of a physical model representation and a digital model through a tangible reality interface. The context suggested was the urban area surrounding the Western Bank Library and the Arts Tower in Sheffield. The purpose of this prototype was suggested as a medium of information to be revealed on the Open Day of the University of Sheffield; the target audience would therefore be prospective students, parents, current students, staff, visitors and so on. As the area suggested was large, the immediate response from the group was to concentrate on the buildings of the University alone. Hence, the extent of the site in question was chosen as shown in Fig. 4. The main focal points of the project were the Students Union building and the Arts Tower. Once the boundary of the project was confirmed, the group moved on to decide the concept of the project.

Figure 4: Suggested boundary of the prototype

Figure 5: 3D modelling of the Western Bank area in AutoCAD, and the extent of the boundary decided



6. Concept of the Project - Architecture That Senses and Responds

The concept of the project revolves around Augmented Reality: a direct or indirect view of the real-world environment, a physical interpretation, and a computer-generated graphical perception triggered by sensor input such as touch, sound or barcode/QR code.

Reasons for adopting AR:

• It minimises conventional information dissemination via brochures, flyers, catalogues, delivered speeches, etc. (for example, during the open day of the campus, freshers and visitors can easily get information).

• It is a more fun and interactive way of sharing information.

• It offers an enhanced sensory quality, useful for the physically impaired.

Figure 6: Diagrammatic representation of the concept

The scope of the project (see Fig. 7) was structured around: EVENTS, WAY-FINDING, BUILDING INFORMATION (departments) and a NOISE MAP. Due to lack of time and resources, however, the noise map around the Students Union could not be executed. The main focus was to make the end product user-friendly. The main users being students and visitors, the ideal location for the model would be the Students Union, where users can easily access and interact with it without any segregation between departments and visitors.

Figure 7: Scope of the prototype - interactions possible



Building Information: This model will provide information about the building, such as its history, the departments it houses, its year of completion and its floor area, with pictures of the building; for example, the Arts Tower (see Fig. 8). This section would contain details of each building floor, with details of occupancy, coursework, lecture/studio availability, events and so on. Using this, the departments can put up significant information that is common to all users of the floor or the building.

Figure 8: Information hierarchy

Way Finding: There are four primary elements of way finding: architectural, graphic, audible and tactile communication, all of which the group initially wanted to work on. However, due to time constraints, the group settled on using architectural representation and visual graphics to navigate the user to the destination. In architectural way finding, the architecture takes the form of a small-scale physical model and visual graphic representation through walkthroughs, along with audible instructions for people who are visually impaired.

Figure 9: Implementing digital tools in daily life for way finding. Source: http://www.arkhi-tekton.com/south_florida_environmental_graphic_design.html

Events: The prototype will be a platform that displays the University's events, thereby uniting all the departments on this portal. The interactions related to this will be explained further below.



7. Initial Idea

After ample brainstorming, it was decided that the physical model would act as a platform allowing a real-time connection of software applications while facilitating direct interaction between the physical and digital spaces of the campus, thereby presenting different information categories from different departments on one interface. With the assistance of sensors allocated at strategic points on the physical model, the user would interact with it, stimulating the sensors to provide the requested information.

Figure 10: Initial sketches

8. Interaction List

The prototype consists of two sets of projections: one made on a wall as a backdrop, and one made on the physical model as additional information overlaid onto it. The projections on the wall are primarily dynamic in nature, such as walkthroughs and way finding. The projection onto the physical model is static and two-dimensional, such as the display of building information or a planned route between two buildings.

Figure 11: Projecting the Urban Area from Google Earth onto the physical model

The different interactions are:

1. The aerial view of the complete site, taken from Google Earth, would be projected onto the physical model (see Fig. 11).

2. Way finding

• Markers carrying reacTIVision fiducial codes would be placed on the desired destination of travel on the physical model; the route is then displayed as a walkthrough on the wall and highlighted on the physical model (see Fig. 12).

Figure 12: Relationship between different media for way finding
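The marker-to-route link described above can be pictured as a simple lookup from a detected marker ID to a pre-recorded walkthrough. The sketch below is illustrative only: the marker IDs and route names are invented, not taken from the project's actual Processing script.

```python
# Hypothetical mapping from a fiducial marker ID (as reported by a tracker
# such as reacTIVision) to a pre-recorded walkthrough route. The IDs and
# route names here are invented for illustration.

ROUTES = {
    12: "route_union_to_arts_tower",
    17: "route_union_to_info_commons",
}

def on_marker_detected(marker_id: int) -> str:
    """Return the walkthrough to project for the detected marker,
    falling back to an idle events loop when the marker is unknown."""
    return ROUTES.get(marker_id, "idle_events_loop")

print(on_marker_detected(12))  # -> route_union_to_arts_tower
```

Keeping the routes in a table like this means new destinations only require a new marker and a new entry, without changing the detection code.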



3. Building information

• By placing the marker onto the building the visitor desires information about, the camera will detect it and trigger the information to be displayed on the physical and digital models (see Fig. 13).

Figure 13: Building information projected on the physical model

4. Events

• The projection on the physical model, when in hibernation mode (see Fig. 14), will display events or advertisements. For more detailed information on a particular event, the user can place the marker onto the highlighted portion of the model; this will trigger a link to the website, which will be projected on the wall.

Figure 14: Information on events being displayed on the physical model

9. Execution

Understanding the entire procedure was a gradual process, in which the group first tried to understand the interactions possible and then tried to incorporate them into a test model. As discussed earlier, the initial design relied on physical touch via sensors (see Fig. 15).

Figure 15: Initial stages of project execution - test phase

Physical Model Making:



The first step was to make the physical model at 1:500 scale out of a white material, foam board, so that the projection of the campus site could be made onto it. This was done in three days by the group, focusing only on the site terrain and the buildings of the campus.

Figure 16: Final physical model - 1:500 scale

Digital Model Making: The digital system would use the master plan with function buttons as the main menu. Each button would lead the user to the first layer of functions; after the user chose the building corresponding to the function needed, the second layer would be shown (see Fig. 17). Thus the main menu and the first layer are used for input, and the second layer for output. To realise this idea, the interface was divided into two parts: the master plan, and three buttons for information, way finding and events (see Fig. 18). These buttons were initially supposed to be triggered by pressure sensors in the physical model; in the digital system, the trigger was a mouse click on the buttons (see Fig. 19).

Figure 17: Sketch of the conception of the first interaction function
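The input/output layering just described - main menu selects a function, the first layer selects a building, the second layer produces the output - can be sketched as a small navigation function. This is a minimal illustration; the function names, building list and output string are assumptions, not the project's actual code.

```python
# Illustrative sketch of the two-layer menu logic: the main menu and the
# first layer collect input, and the second layer produces the output.
# All names below (FUNCTIONS, BUILDINGS, the output text) are assumptions.

FUNCTIONS = ["information", "way finding", "events"]   # main-menu buttons
BUILDINGS = ["Arts Tower", "Students Union", "Information Commons"]

def second_layer(function: str, building: str) -> str:
    """Output layer: combine the chosen function and building."""
    return f"Showing {function} for {building}"

def navigate(function_choice: int, building_choice: int) -> str:
    """Main menu (function) -> first layer (building) -> second layer."""
    function = FUNCTIONS[function_choice]
    building = BUILDINGS[building_choice]
    return second_layer(function, building)

print(navigate(0, 0))  # -> Showing information for Arts Tower
```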



The First Stage: Processing was used to develop the interface for the main menu and the first layer only. While making this concept model, it was discovered that using .png files as an additional transparent layer over the background could present different information in an easily executable way, and that using .gif files as animations in Processing would give a better graphic effect and user experience.

Figure 18: The first digital interaction concept run in Processing

The Second Stage:
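The transparent-overlay idea can be pictured as an ordered stack of named .png layers drawn over a fixed background, with individual layers toggled on or off per function. The following Python sketch models only the bookkeeping, not the actual Processing rendering; the file names and draw order are illustrative assumptions.

```python
# Illustrative model of the transparent-overlay approach: a fixed background
# plus an ordered stack of named .png layers, toggled per function.
# Layer names are assumptions, not the project's actual assets.

class LayerStack:
    def __init__(self, background):
        self.background = background
        self.layers = []   # list of (name, visible) pairs, in draw order

    def add(self, name, visible=False):
        self.layers.append((name, visible))

    def toggle(self, name):
        """Flip the visibility of one named layer."""
        self.layers = [(n, (not v) if n == name else v)
                       for n, v in self.layers]

    def draw_order(self):
        """Background first, then visible overlays in insertion order."""
        return [self.background] + [n for n, v in self.layers if v]

stack = LayerStack("masterplan.png")
stack.add("wayfinding_overlay.png")
stack.add("events_overlay.png")
stack.toggle("events_overlay.png")
print(stack.draw_order())  # -> ['masterplan.png', 'events_overlay.png']
```

In the real sketch, each frame would simply draw the images returned by `draw_order()` in sequence, so switching functions only changes which overlays are visible.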

The digital interactions utilised QR codes (see Fig. 20), with an extra camera to capture them. QR codes were applied to the visualised walkthrough of the campus, as they improved the user experience.

The Third Stage:

The third stage involved deciding the content of the interaction model and its mode of operation in Processing. Various levels of complexity of the digital model were tried in Sketchup, to maximise the appearance of the walkthrough in Quest3D (see Figs. 21 and 22). However, the more detailed version of the Sketchup model (shown in Fig. 21) exhibited many problems when exported into Quest3D (see Fig. 23), so the less complex version, shown in Fig. 22, was used.

Figure 19: Script of the first digital interaction concept

Figure 20: An example of a QR code. Source: http://www.fastcompany.com/1585822/business-card-just-scan-my-qr-code



Figure 21: One version of Sketchup model with complex detail

Figure 22: Another version of the Sketchup model, with less detail



While using Processing, some of the script from the second stage was retained and the files were continuously upgraded as more content and graphic effects were added to the interaction system. Many .png files with transparent backgrounds were used to provide a smooth and clear interface, and more script was added to improve the graphic effect, instead of using still images only, so as to highlight functions when needed.

The Final Outcome: During the test run of the prototype, some of the problems with the walkthrough in Quest3D were fixed (see Fig. 23). The keyboard served as the trigger for the main interactions. During the mini-conference, the following interactions were successfully demonstrated: the main interaction, showing the information of each building (see Fig. 25); way finding between the Students Union, the Arts Tower and the Information Commons (see Fig. 26); and the events happening on that day (see Fig. 27).

Figure 23: Using Quest3D

Figure 24: A scene in the walkthrough
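With the keyboard standing in for the planned QR-code triggers, the final prototype amounts to a key-to-interaction dispatch table. The sketch below illustrates that pattern; the specific key bindings and action names are assumptions, not the project's actual mapping.

```python
# Illustrative key-to-interaction dispatch table for the final prototype,
# where keyboard keys stood in for the planned QR-code triggers.
# The bindings below are assumptions, not the project's actual mapping.

KEY_BINDINGS = {
    "i": "show building information",
    "w": "start way-finding walkthrough",
    "e": "show today's events",
    "m": "return to main menu",
}

def handle_key(key: str) -> str:
    """Look up the interaction for a pressed key; ignore unbound keys."""
    return KEY_BINDINGS.get(key.lower(), "ignored")

print(handle_key("W"))  # -> start way-finding walkthrough
```

A table like this also makes the later limitation concrete: with 16 bound keys, users must memorise the mapping, whereas a physical QR cube makes each trigger self-describing.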



Figure 25: An example of showing building information

Figure 26: An example of way finding from the Students Union to the Information Commons

Figure 27: An example of events happening on 18th May 2012

10. Current Limitations of the Interactive Model

Currently, the main limitation of the model lies in its interaction. Owing to the group's limited scripting skill, the model can only be operated using the keyboard and mouse. Users may find it difficult to locate the function they want, as 16 keys on the keyboard and 1 mouse button are assigned to different functions and pieces of information. In the plan, however, the trigger for each interaction would be replaced by QR-code cubes engaging specific areas (see Fig. 28). This upgrade would profoundly improve the performance of the model and its convenience for users: a user could control the whole model with a single cube carrying a different QR code on each surface. As the main frame and the content have already been completed, and the QR codes were successfully used for another purpose, changing the script to use QR codes would be feasible and easy.

Figure 28: Using a QR-code cube to control the model
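The proposed cube can be thought of as a face-to-command table: whichever of the six faces the camera reads selects the command. This is a hypothetical sketch; the payload strings and command names are invented for illustration.

```python
# Illustrative sketch of the proposed QR-code cube: each of the six faces
# carries a distinct code, and whichever face the camera reads selects a
# command. Payload strings and command names are invented for illustration.

CUBE_FACES = {
    "QR:INFO":   "building information",
    "QR:WAY":    "way finding",
    "QR:EVENTS": "events",
    "QR:ZOOM":   "zoom to building",
    "QR:BACK":   "previous screen",
    "QR:HOME":   "main menu",
}

def on_qr_payload(payload: str) -> str:
    """Map a decoded QR payload to a model command (idle if unknown)."""
    return CUBE_FACES.get(payload, "idle")

print(on_qr_payload("QR:WAY"))  # -> way finding
```

Because the decoding step is isolated behind one function, the existing keyboard script could be migrated by swapping the key lookup for this payload lookup without touching the rest of the interaction logic.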

11. Comparison with PowerPoint

After the presentation, some questions arose from the audience about the difference between the interactive model and a PowerPoint file. This is an interesting and important question, as it showed that the audience was interested in such types of interaction, and it is necessary to understand the advantages of software such as Processing, Quest3D and Rhino over more conventional tools. The most basic difference is that in PowerPoint each page is individual and has no relationship to the others, whereas in Processing each interaction is rooted in another, so the information is networked. A comparison of the two presentation methods is made below (Table 1):

Table 1: Comparing the interactive model with PowerPoint

|                               | Interactive Model                                                              | PowerPoint   |
|-------------------------------|--------------------------------------------------------------------------------|--------------|
| Controlling devices           | Mouse/keyboard (QR-code cubes and gesture control could be achieved in future) | Mainly mouse |
| Sense of the scene            | Very good                                                                      | Normal       |
| Freedom of use (for users)    | Good                                                                           | Normal       |
| Freedom of use (for producers)| Good                                                                           | Normal       |
| Ease of use (for users)       | Good (could be very good using QR-code cubes or gesture control in future)    | Good         |
| Ease of use (for producers)   | Normal                                                                         | Good         |
| Expansibility                 | Good                                                                           | Bad          |
| Open source                   | Yes                                                                            | No           |

According to Table 1, the interactive model gives users a better experience and freer control; for the producers, writing the script is more difficult work, but the interactive model offers more potential for graphics and for cooperation with other software. In terms of control flow, the interactive model radiates out from each function, so users can easily jump from one function to another, whereas PowerPoint is more linear: users must go back to the main menu before entering another function. In the future, the control of the interactive model could be replaced by QR-code cubes or even gesture control, making the user experience much better than clicking links with a mouse in PowerPoint.

12. Conclusions

The platform created to link the digital and the physical could be used in education and could transform the way people are taught, enabling playful, participatory learning environments. For example, the platform could work successfully in museums and improve the way information is disseminated. The interactive model is more user-friendly and closer to a human way of thinking. Besides a better user experience, the interactive model has more potential: it can combine more functions and meet the demands of more kinds of people. The entire process was a learning experience, apart from our conventional ways of presentation. The group collectively understood the scope for applying this methodology further in studios as a presentation tool. The most challenging area was collating the information with the models. Specific shortcomings, such as lack of time and inadequacy in



the software skills of the team members led to the improvisation of certain aspects of the project, as seen on the day of the mini-conference. However, each of the team members has a good understanding of the entire process, of the potential the prototype could have in real life, and of its benefit to the University of Sheffield.

13. References and Bibliography

1. Augmented Reality. Available at: http://en.wikipedia.org/wiki/Augmented_reality. Accessed 21.05.2012.
2. Bullivant, L. (2007). Interactive Design Environments. London.
3. Khoo, C.K., Salim, F. and Burry, J. (2011). Designing Architectural Morphing Skins with Elastic Modular Systems. International Journal of Architectural Computing, 9(4), pp. 397-420.
4. Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, Mass.: MIT Press.
5. http://interactive.usc.edu/wp-content/uploads/2011/04/IOT_Interactive-Architecture.pdf. Accessed 21.05.2012.
6. http://www.interactive-environments.nl/. Accessed 21.05.2012.
7. http://www.arch-os.com/. Accessed 21.05.2012.
8. http://multi-science.metapress.com/content/0m23081747675l85/fulltext.pdf. Accessed 24.05.2012.
9. Interactive Architecture. Available at: http://www.interactivearchitecture.org/. Accessed 21.05.2012.
10. Bonsor, K. (2008). How Augmented Reality Works. Available at: http://www.howstuffworks.com/augmented-reality.htm. Accessed 24.05.2012.
11. Lennings, A.F. (2012). Using Physical Models in Design. Delft Spline Systems, The Netherlands.
12. QR Codes (2012). Available at: http://www.fastcompany.com/1585822/business-card-just-scan-my-qr-code. Accessed 03.06.2012.
13. Apelt, R., Crawford, J. and Hogan, D. (2007). Wayfinding Design Guidelines. Available at: http://www.constructioninnovation.info/images/pdfs/Publications/Industry_publications/CRC0002_CRC_Wayfinding_Guidelines.pdf. Accessed 22.05.2012.
14. Miller, S. (2005). Brainstorming Uses of Augmented Reality. Available at: http://www.ted.com/conversations/4804/brainstorming_uses_of_augmente_1.html. Accessed 24.05.2012.
15. Tangible Media Group (2012). Available at: http://tangible.media.mit.edu/. Accessed 21.05.2012.
16. Tanikawa, T., Hirota, K., et al. (2002). A Study for Image Based Integrated Virtual Environment. Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR'02). Available at: http://ieeexplore.ieee.org.eresources.shef.ac.uk/stamp/stamp.jsp?tp=&arnumber=1115092. Accessed 21.05.2012.
17. Werner, M. (2001). Model Making. New York, N.Y.: Princeton Architectural Press.


Interactive Urban Visualisation