Encode Engage




ENCODE ENGAGE



CONTENTS

Acknowledgments
Poster
Abstract
Thesis Question
Vocabulary
Research Essay
Methodology
Future Visions
Experiments
Prototypes
Final Proposal
Documentation
Fabrication
Exhibition
Conclusion
Bibliography
Appendix



ACKNOWLEDGMENTS

Thank you to my advisors and peers.

A special thanks to:

Zenovia Toloudi
Rob Trumbour
Alex Cabral
Jen Lee-Michaliszyn


POSTER


Ryan Kahen

ENCODE_ENGAGE

Component labels (1:1 Detail, Mother Component, paired with one of three Daughter Components):

Arduino Microcontroller
Electret Microphone Sensor
Passive Infrared (PIR) Motion Detection Sensor
5V DC Brushless Fan Actuator
Light Emitting Diodes (LEDs)
Lightweight Inflatable Fabric Skin
0.25 in. Plexi Glass Structure Ribs
0.25 in. Plexi Glass Structure Rings
Connection Wires

Arduino sketch (Mother Component):

// this constant won't change. It's the pin number
// of the sensor's output:
const int pingPin = 7;   // the pin that the sensor is attached to
int motorPin = 9;        // the pin that the fan is attached to
int speed = 0;
int led = 11;            // the pin that the LED is attached to
int brightness = 0;      // how bright the LED is
int fadeAmount = 5;      // how many points to fade the LED by

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(motorPin, OUTPUT);
  pinMode(led, OUTPUT);
}

void loop() {
  analogWrite(led, brightness);

  // change the brightness for next time through the loop:
  brightness = brightness + fadeAmount;

  // reverse the direction of the fading at the ends of the fade:
  if (brightness == 0 || brightness == 120) {
    fadeAmount = -fadeAmount;
  }
  // wait to see the dimming effect
  delay(100);

  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();
  delay(100);

  // run the fan when something is within the 36-inch trigger range
  if (inches <= 36) {
    analogWrite(motorPin, 255);
  } else {
    analogWrite(motorPin, 0);
  }
}

long microsecondsToInches(long microseconds) {
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second). This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}

2012

MArch

A mapping of the various states that occur with engaging the object.

1:4 Three States Diagram


Lightweight Inflatable Fabric Skin

0.25 in. Plexi Glass Structure Ribs

0.25 in. Plexi Glass Structure Rings

Light Emitting Diodes (LEDs)

Connection Wire

Daughter Component


ABSTRACT

Our current digital age has considerably affected the ways in which we operate as humans. Information technologies have increased the speed of our cities through the ways we access, share, and communicate data. The use of mobile technology, specifically the smart phone, has been a key component in the progression into this digital age. These technologies have become situated within our daily lives, causing a shift in the way we engage with both our space and one another. Through the study of and experimentation with sensorial technologies, this thesis looks to bridge the gap between the virtual and the physical. Our cities are embedded with sensing technologies, collecting environmental, social, and infrastructural data used to monitor our cities and ensure safety and efficiency. While these technologies are already situated within our urban fabric, we, as the users of the city, do not have a direct relationship with them. We become the observed rather than participants in our city. Rather than having our embedded technologies simply collect data, they can be used to create an environment that both recognizes and responds to us as users. Through a dialogue initiated by an input/output system, we can create a new relationship between people, technology, and architecture. Through the medium of installation, a new artificial atmosphere is created, encouraging curiosity, active participation, and exploration within the fabricated environment.


Fig. 1 Cirriform Future Cities Lab



Encode /en ‘kōd/ - To convert information to a digital form
Engage /en ‘gāj/ - To occupy, attract, or involve as if to capture interest or attention


RESEARCH ESSAY


We are living in an era where society is immersed in a digital world, where we perceive our surroundings by means of technology. The rapid development of technology has created a wave in our culture. Just as the automobile changed the way our world worked, through the implementation of new infrastructure and increased mobility, computers are creating a similar effect. The automobile increased the speed at which our world moved. In the same way, the computer is increasing the speed of our lives. We are easily able to access and share information as well as instantaneously communicate with one another. These advancements in technology, specifically mobile technology, have shaped the way in which we, the users, interact on a social level. “The computer no longer needs to adapt to the user because the opposite is true.”1 Knowing the advancements in technology, we need to ask ourselves how we can continue to build our urban context to create a response to the growth of our digital age. Just as we use technology to communicate with one another, we can use a similar means to communicate with our space, creating an architecture that responds to our social participation.

“The computer no longer needs to adapt to the user because the opposite is true.”

1 Conrad (p. 63)


Responsive Environments

The utilization of technology in our built context has allowed for the design of responsive environments. Lucy Bullivant defines responsive environments as “spaces that interact with the people who use them, pass through them or by them”.1 These become environments that engage the user, redefining their experience of the space. Through the use of technological elements, the interaction becomes that of a digital realm. This creates an understandable bridge between the virtual world and the physical. As the user senses or creates the input, they in return experience the output. “The power of the responsive environments in this book is precisely that they are not purely reactive or entirely predetermined. Both they and their users learn from experience and redefine their sense of place.”2 The back and forth dialogue between the user and the system is where the success of responsive environments lies. Datagrove by Future Cities Lab (Fig. 2) is an interactive installation designed for the Zero1 Biennial in San Jose, CA. “Datagrove thrives on information from its urban environment.”3 Collecting data from users and streaming Twitter feeds, Datagrove is a responsive architecture creating presence in both physical and virtual space. Through the need of an initial stimulus, whether environmental or human induced, the response becomes the result that is understood. These environments create a connection with the user as the user connects with the system.

1 Bullivant (p. 7)
2 Bullivant (p. 17)
3 Future Cities Lab


“Both they and their users learn from experience and redefine their sense of place.”

Fig.2 Datagrove Future Cities Lab


Interactive Technologies

When designing in a digital era, knowledge of technology is needed, including tools, fabrication methods, materials, and peripheral technologies. The use of this technology has begun to change the way architects and designers use and think about materials. Manufacturing tools, including computer numerical control (CNC), laser cutting, vacuum forming, and three-dimensional printing, allow rapid prototyping for testing and creating components. Interactive architecture is borrowing technologies from other fields. Sensors and actuators are applied to create a high-tech system, resulting in an interaction between the user and the architecture. “Currently a change is taking place in interactive media whereby increased emphasis is being placed on designing and creating interfaces, experiences, and software that are customizable, re-programmable, and adaptable.”1 These technological elements allow the designer to shape both the input and the output results. Through the use of software and programmable entities the designer has control over the system, creating interactions that speak to the digital age. Michelle Addington and Daniel Schodek, authors of Smart Materials and Technologies for the Architecture and Design Professions, speak of the multiple ways of achieving these systems. By understanding material properties and system capabilities, designers are able to push technology further to create a new interactive architecture.

Fig. 3 Light Drift Howeler + Yoon

1 Fox and Kemp


“The issue of controlling physical change is central to issues of design and construction techniques, kinetics, and maintenance, as well as issues of human and environmental information gathering.”2 Michael Fox and Miles Kemp’s Interactive Architecture introduces tools used in creating responsive systems. Elizabeth Diller and Ricardo Scofidio created an interactive wearable technology, Braincoat (Fig. 4), for their Swiss Expo Blur Building. The coat uses information from the wearer to create a new form of communication within the installation. Rather than direct face-to-face communication, light is used based on the compatibility of people, ranging from antipathy to affinity, determined by the initial questionnaire filled out upon entering the Blur Building. These designed systems need a way to receive and control the information from their context, whether environmental or social. The first part of the responsive system is the input data. Sensors are used to recognize the information and send it to the next part of the system. Sensors can be placed within two categories, contact based and non-contact based. Contact based sensors deal with direct information exchange, including touch, moisture, pressure, or wind. Non-contact based sensors read information based on presence. These include infrared, sonar, accelerometer, light, and microphones. This sensory information then needs to be processed through the microcontroller to create a response. Microcontrollers are similar to the computers we use every day, but rather than performing multiple tasks, a microcontroller is designed to do one task very well. “A micro controller is especially good at three things: receiving information from sensors, controlling basic motors and other kinetic parts, and sending information to other computers. They act as an intermediary between the digital world and the physical world.”3

2 Fox and Kemp (p. 73)
3 Fox and Kemp (p. 78)
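The intermediary role described above can be sketched in plain C++ as a sense–process–actuate loop. This is an illustration, not the thesis code: the hardware calls are stand-in function parameters, and the 36-inch trigger range simply echoes the prototype sketches later in this book.

```cpp
#include <functional>

// Decide the actuator state from a proximity reading (in inches).
// The 36-inch trigger range mirrors the prototype code in this book.
bool actuatorShouldRun(long inches, long triggerRange = 36) {
    return inches <= triggerRange;  // a person is close enough: respond
}

// One pass of the closed loop: read input, process, emit output.
// On real hardware, readSensor and driveActuator would wrap the
// microcontroller's analogRead/analogWrite calls.
void loopOnce(std::function<long()> readSensor,
              std::function<void(int)> driveActuator) {
    long inches = readSensor();            // input: sensor data
    bool on = actuatorShouldRun(inches);   // process: controller logic
    driveActuator(on ? 255 : 0);           // output: actuator command
}
```

Separating the decision logic from the hardware calls keeps each building block testable on its own, which is how the later experiments combine into larger systems.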

Fig. 4 Blur Braincoat Diller Scofidio


Experience of Technologies

As we enter the world of ubiquitous computing, we need to understand how these technologies will alter our experience of the natural and built environments. Erik Conrad, in his article Embodied Space for Ubiquitous Computing, speaks of how ubiquitous computing has grown exponentially. “The average American already owns twenty or more computers.”1 A computer in these terms is an object that contains information processing components, such as a television, microwave, or cell phone. We tend to think of these technologies as solely tools, but we need to understand their effects on our culture. As these technologies are built into our environments, these ubiquitous systems alter our social interactions; all interactions with computers are, at some level, social. Conrad mentions how we tend to think in a Cartesian way, meaning the properties of physical objects are quantifiable ones. However, the meanings we attribute to space are mainly based on qualitative, sensory experiences. This creates a duality between the qualitative and quantitative experience we have, forcing a pull between the two to understand our experiences of space. “Social space reconciles the physical and the mental, concrete and abstract, and if we consider all interactions with the computer systems ‘social’, then these interactions also have potential to be places where the physical and mental co-mingle.”2 These human-computer interactions can merge the two outlooks on space, creating a bridge between the sensory and the tangible, the virtual and the physical.

1 Conrad (pgs. 61-62)
2 Conrad (p. 63)


“If we consider all interactions with the computer systems ‘social’, then these interactions also have potential to be places where the physical and mental co-mingle.”

Fig. 6



THE NEW CITY

MEASURING THE CITY: The Infrastructure of Data Collection

With passive and active sensors embedded throughout its infrastructure, the city is already sentient. Cities are sensored up to the gills, monitoring traffic, people, and weather. Chicago is one of the most sensored cities.

Advanced Smoke Detectors sense smoke levels and temperature and communicate them to firefighters within 100 feet, via wireless sensors installed on the firefighters’ air tanks. In return, the firefighters are also tracked: the smoke detectors automatically map the position of firefighters within range and communicate their location to the on-call incident commander.

Fire Sensors for elevators are located on every floor and in the motor room.

Pavement Sensors keep track of road conditions. They are most often found on bridges, which have the tendency to freeze first.

Underwater Bridge Pier Sensors monitor the structural safety of bridges, especially older ones. Their use became widespread after the Interstate 35W bridge, which had only been visually inspected, collapsed over the Mississippi River in Minneapolis on Aug. 1, 2007.

Emergency Management & Communications Sensors detect abnormal biological, chemical, and radiological conditions.

Water Sensors, Carbon Dioxide Sensors, and Surveillance Systems monitor building occupancy, which can be used to optimize energy use. Entryway Sensors screen and record people entering the building.

Lighting Sensors turn lights off automatically in rooms where no motion is detected.

Triangulation System Sensors are crucial for any automatic door system.

Light Sensors turn street lights on as it gets darker.

Weather and Air Quality Sensors: Anemometers measure wind speed and pressure.

Accelerometers measure building movement.

Audio Sensors detect gunfire. Police are alerted and surveillance video can be immediately transmitted.

Traffic Controllers switch light signals when pavement sensors alert them that a vehicle is waiting.

Motion Detectors function as part of security systems.

Parking Sensors report how many parking spots are in use and charge for parking accordingly. As this data is released, apps are being developed to help drivers find parking spaces remotely.

Inlaid Pavement Sensors detect cars, motorcycles, and bicycles, and communicate with the traffic controllers wirelessly.

River Sensors measure water conditions, currents, and levels, and report them to the U.S. Geological Survey.

Image credit: Flickr user John W. Iwanski

Inaba_2012

Our urban environment is filled with sensors that record data and monitor our city. Yet for all the sensors in our city, none of them give anything back to the user through real-time interaction. These embedded sensors collect environmental information and traffic data and monitor people to ensure safety and efficiency in our cities. Looking at these embedded technologies, one must question how we can use these sensors to create a responsive architecture that not only recognizes us but also responds to us.



Conclusion

As technologies develop, professions adapt, utilizing these advancements in the progression of their field. Architecture, stuck in past ideologies, is still playing catch-up to these technologies. Few are taking strides forward, using new means of tooling and design methods. As technology progresses from past discoveries, architecture can advance, through the use of integrated technologies and new means of production, to situate itself within our digital age. In today’s society, the cultural norm is to have the latest trends in technology. With these rapid advancements, the demand and desire for technology is increasing. As more people own and use these technologies, they become ubiquitous. Because of this, our cities are becoming smart. Mobile phones offer information instantaneously in the palm of our hands, at any time and place. This use of technology is reshaping our culture in the way we perceive not only ourselves but also the space we occupy. By embedding our urban context with interactive technologies such as sensors, lighting, and kinetic systems, we can create an environment that recognizes and responds to our social participation. Through the design of responsive environments, architecture can embrace the technological movement and redefine itself in today’s culture.



In our current digital culture, how can we fabricate a new relationship between people, technology, and architecture?


[Mind map diagram. Branches and keywords: SMART PHONES, TABLETS, MOBILE TECHNOLOGY, MOBILITY, NETWORKING, SOCIAL MEDIA, CONNECTIVITY, INFORMATION, INTERNET; SOCIALLY RESPONSIVE ARCHITECTURE; CULTURE OF THE CITY, CULTURE OF THE PEOPLE; AWARENESS, INTERACTION, FUNCTION; FACTORS: SPACE, TIME, USE; PHYSICAL, NONPHYSICAL; SENSORY: VISUAL, TACTILE, AUDITORY; MOVEMENT, KINETIC ARCHITECTURE; CHANGE: FORM, SHAPE, CHARACTER; HUMAN: THOUGHT, PERCEPTION, MEMORY, USE; ENVIRONMENTAL: SITE, CONTEXT, URBAN, PUBLIC SPACE, BUILT; TECHNOLOGICAL: USE, DISPLAY, EASE.]

Mind Mapping

MIND MAPPING

Mind mapping was used for preliminary thesis topic discovery, as a method to organize thoughts and explore avenues of interest. By focusing on responsive architecture, I developed keywords that led to my further research and development of the topic. Some of these main keywords were connectivity, interaction, and mobility.



Lost in iPhone City

Our smart phones have become the new lens through which we view society. In the palm of our hands lies a mobile computer that allows information to be accessed at any moment. This smart technology is becoming the new way of controlling our social being, but will it be the new way in which we control our city?



kahendesign.wordpress.com


METHODOLOGY


My methodology, or design procedure, looked toward bridging the gap between our physical realm and our digital world. My initial interest in mobile smart technology and research into embedded technologies within our built environment led to an exploration of a new relationship between people, technology, and space. My methodology was a three-tiered system consisting of visions for a future city, explorations in technological systems, and prototyping these systems as responsive objects. The future city visions looked at different ways our space can sense us as the users and output a direct response. These visions tested interventions through scale, responses, and location. The technology-based experiments looked at the tooling used in creating a responsive object. A microcontroller was used as the means of creating a closed-loop system. Through a study of sensors and actuators, I was able to collect real-time data and translate it into an immediate response. Each of these tests became a building block, allowing for a library of systems that can be applied toward a larger, more complex system. These experiments were applied to fabricated prototypes, where they sensed specific data and created outputs of various responses, including light and movement. Similar to the encoded tests, each tier of my methodology acted as an item in a closed-loop system where each component would control the process of another.


Responsive System Diagramming

Through the study of responsive systems, set rules needed to be applied. To be considered an active feedback system, both input and output data need to be translated. Sensors are used in collecting the input data, while actuators, lights, or speakers can produce an output response. The means of collecting data needs to be predetermined in order to compute that information into a responsive outcome. Looking at different input scenarios, including communication data, human presence, or environmental data, can dictate an appropriate response, creating a dialogue with our space. Each scenario has its own corresponding sensory component, but the compilation of these sensors and output devices can create a dynamic system with visual, haptic, and spatial elements.
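The pairing of input scenarios with predetermined responses can be sketched as a simple mapping. This is a hypothetical illustration, not the thesis code; the names mirror the three intervention diagrams that follow.

```cpp
#include <string>

// The three input scenarios explored in the diagrams.
enum class Input { MobileData, Sunlight, Proximity };

// Each sensing scenario is paired with a predetermined response,
// so that collected data always translates into a known output.
std::string respond(Input in) {
    switch (in) {
        case Input::MobileData: return "light display";
        case Input::Sunlight:   return "sunshading";
        case Input::Proximity:  return "change in shape";
    }
    return "no response";  // unreachable with the inputs above
}
```

The point of the sketch is the rule the diagrams describe: the response must be decided before the data arrives, or the system cannot close its feedback loop.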


Intervention in Space

Data Sensing of Mobile Technologies

Intervention in Space

Environmental Sensing of Sunlight

Intervention in Space

Proximity Sensing of User Engagement

Intervention Responds with Light Display

Intervention Responds with Sunshading

Intervention Responds with Change in Shape


Intervention Responds with Change in Shape

A kinetic output response is applied to the system where proximity sensing is used to detect human presence. As people trigger the sensor, the component changes form, creating a new overhead condition within the given space.



Intervention Responds with Light Display

A light output is produced based on the input data of the users. Light is used as a means to communicate data visually within the system. As people inhabit the space, the illuminated response they create enlivens the space for them as well as others.



Intervention Responds with Sunshading

The responsive system can also produce indirect responses. As it changes shape, it can serve as a functioning system. The system has the possibility to act as shelter from the environment, creating sunshading or a rain screen.




Future Vision of Technology in Architecture

As we interact with our current mobile technology (smart devices), a future architecture will also create an integrated relationship between our technology and our space. An architecture that has the capability to recognize and respond fits within the constant flux of our current digital culture. The real-time change of spatial qualities, directed by people, creates a new experience of space. These responsive environments are meant to redefine our sense of place, both spatially and socially, through a playful construct. They have the ability to enhance the atmosphere of a space, creating a more dynamic public place within our cities.



This proposal looked to a suspended system along Winter and Summer Streets in Downtown Crossing. By developing a kinetic system, the form of each component would change based on the location of a person in space. Light is used in these representations to highlight the movement of the system. Responding to a person’s place, the system also works in a temporal manner. The light is delayed, leaving behind a moment in time even after the user has passed. The user not only can alter the shape of the system, but also leave a momentary instance in space. The system becomes a way to reshape our perception of the space based on human interaction.





Set within Dewey Square, in Boston’s Downtown District, this proposal looked at interactions with an architectural object. As the user engages the object, it would respond through the transformation between different states. Each state would correspond to a spatial quality fitting the needs of the social experience happening in the area. Beginning with the object in its first state, it would resemble a column or a pillar. As a person engages it, the object would draw itself up, creating a canopy system. This scenario in Dewey Square can create shelter for the open plaza, allowing for a destination in the busy downtown area.

Stages of Canopy Movement





A further investigation of the change in states of the object as users engage. This scenario continues to look at the original column-like state and its kinetic transformation as people inhabit the space around it. As people engage, the armatures grow outward, creating a space within the object itself. Rather than just a canopy system, the object can create enclosures to fit the needs of its users. Each object can respond to the density of people in real time, allowing for multiple social encounters to happen throughout the space, each with its own defined area of engagement.





Proximity_

By establishing a range for the sensor, the object creates its own “personal space”, initiating a response within the installation.


Haptic Sensing_

Using capacitive sensing technologies, physical engagement can be used to create a response. In this scenario, when the central core is touched, the user can drive the actuator, altering the size of their immediate space.

The Sensory Engagements

Using a series of sensors, users can engage with the installation. Based on both location in place and physical engagement, people can create a dialogue with their built environment through their interaction with the technologies.




A Study in Sensor Technologies

As part of my methodology, I used technology as a means for creating a response. A microcontroller was used as a tool in conducting a series of sensory input/output experiments. This study allowed for the exploration of emerging technologies as a new medium in the architectural field. Each experiment provided experience in wiring and coding technological components to physically create a system that responds. Each test became a building block, allowing this tool to be further investigated and applied to a more complex sensorial system.


Arduino Test_1 LED Blink_ Introduction to Arduino_initial understanding of coding and wiring. The script states that the LED will turn on for one second, then turn off for one second. This action is set on a loop, creating a consistent blinking of the light.


Arduino Test_2 Proximity Sensing_ Incorporating a PING))) sensor to create a feedback loop system. The script states that when the sensor detects an object within its set range, it initiates the response. This can be used as a means to detect the presence of a person as they engage an object.


Arduino Test_3 Passive Infrared Sensing (PIR)_ Incorporating a PIR sensor to create a feedback loop system. The script states that the PIR sensor will measure the change in heat levels in its range to initiate a response. This is used to detect motion in the space, triggering a response as movement occurs.


Arduino Test_4 Arduino Controlled Servo Motor_ Using a servo motor as an actuator to translate an output response. The script states that, based on the input data, the motor will turn between 0 and 180 degrees. This was used in creating a linear actuator to drive a system along the fabricated axis through its different states.
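The input-to-angle translation in this test can be sketched with the same integer arithmetic as the Arduino core's map() function. The 10-bit input range is an assumption for illustration, not taken from the thesis code.

```cpp
// Rescale a value from one range to another using integer arithmetic,
// as the Arduino map() function does.
long rescale(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Map a 10-bit analog reading (0-1023) onto the servo's 0-180 degree range.
long sensorToAngle(long reading) {
    return rescale(reading, 0, 1023, 0, 180);
}
```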


Arduino Test_5 Capacitive Sensing_ Using capacitive sensing to create a response based on a haptic input. The script states that as the set pad is touched, it will recognize a change in electrical current flow. If the flow is greater than a set amount, it will trigger the output response.


Arduino Test_6 LED Fade in Parallel_ Using multiple LEDs wired in parallel to fade in and out through various degrees of brightness. The script states that the LEDs will loop through a determined level of brightness. Beginning in an off state, the light will move through five states, turning brighter until it reaches a level of 50% brightness. This allows for a fading light that can be applied to a specific state of the object.
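The fade behavior described here can be sketched as a small state machine. The step size of 5 and ceiling of 120 (roughly 50% of the 8-bit PWM range) follow the fade code shown elsewhere in this book.

```cpp
// Step the PWM brightness up and down between off and a ~50% ceiling.
struct Fader {
    int brightness = 0;
    int fadeAmount = 5;       // points to fade by each step
    int maxBrightness = 120;  // roughly 50% of the 0-255 PWM range

    int next() {
        brightness += fadeAmount;
        // reverse the direction of the fading at the ends of the fade
        if (brightness <= 0 || brightness >= maxBrightness) {
            fadeAmount = -fadeAmount;
        }
        return brightness;
    }
};
```

On hardware, each returned value would be written out with analogWrite between short delays.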


Arduino Test_7 Arduino Controlled DC Fan_ Using a fan as the actuator to translate the output response. The script states that as the sensor is triggered, it initiates the fan to turn on. This begins to inflate the skin of the object. Using air as a means of changing state offers a softer atmospheric quality to the object rather than mechanized movement.


Arduino Test_8 Electret Mic_ Using an electret microphone to translate communication data into an output response. The script states that as the sensor collects sound data, if the level is greater than a determined amount, the output response is triggered. Here light is used to output the sound data, creating a pulsating response that reacts to speech happening around the mic.
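The sound-to-light translation can be sketched as a single function. The trigger level and the 10-bit input range are illustrative assumptions, not values from the thesis script.

```cpp
// Map a microphone reading onto an LED brightness: below the trigger
// level the light stays off; above it, brightness scales with loudness
// so the light pulses with the sound around the mic.
int soundToLed(int micReading, int triggerLevel = 200) {
    if (micReading <= triggerLevel) return 0;  // too quiet: LED off
    long span = 1023 - triggerLevel;           // usable input range
    long level = (long)(micReading - triggerLevel) * 255 / span;
    return level > 255 ? 255 : (int)level;
}
```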



Responsive System Prototyping

My methodology continued by applying the Arduino experiments to functioning prototypes. These prototypes became tools in exploring how these technology tests can be applied to an architectural setting. Through the development of these responsive objects, implementations and tectonics were discovered.


// this constant won't change. It's the pin number
// of the sensor's output:
const int pingPin = 7;
const int ledPin = 13;

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();
  delay(100);

  // turn the LED on when something is within the 36-inch trigger range
  if (inches <= 36) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}

long microsecondsToInches(long microseconds) {
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second). This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}


Responsive Prototype_1 The responsive system incorporated a proximity sensor and LED to create an interactive object. The PING))) sensor was given a set range that triggered the LED on as someone entered the determined distance. This was a test to observe how people interacted with both the object and the space around it as it responded to their location relative to it.



Prototype Study Models_



Prototype Study Models_


/*
 * Adapted from code by Tom Igoe
 * http://itp.nyu.edu/physcomp/Labs/Servo
 */

/** Adjust these values for your servo and setup, if necessary **/
int servoPin = 2;      // control pin for servo motor
int minPulse = 1170;   // maximum servo speed clockwise
int maxPulse = 1770;   // maximum servo speed anticlockwise
int turnRate = 75;     // servo turn rate increment (larger value, faster rate)
int refreshTime = 20;  // time (ms) between pulses (50Hz)

/** The Arduino will calculate these values for you **/
int centerServo;       // center servo position
int pulseWidth;        // servo pulse width
int moveServo;         // raw user input
long lastPulse = 0;    // recorded time (ms) of the last pulse

void setup() {
  pinMode(servoPin, OUTPUT);  // set servo pin as an output pin
  centerServo = maxPulse - ((maxPulse - minPulse) / 2);
  pulseWidth = centerServo;   // give the servo a stop command
  Serial.begin(9600);
  Serial.println("Arduino Serial Continuous Rotation Servo Control");
  Serial.println(" by Orfeus for GRobot.gr");
  Serial.println(" Press < or > to move, spacebar to center");
  Serial.println();
}

void loop() {
  // wait for serial input
  if (Serial.available() > 0) {
    // read the incoming byte:
    moveServo = Serial.read();

    // ASCII ',' is 44, ASCII '.' is 46 (the unshifted < and > keys)
    if (moveServo == 44) { pulseWidth = pulseWidth + turnRate; }
    if (moveServo == 46) { pulseWidth = pulseWidth - turnRate; }
    if (moveServo == 32) { pulseWidth = centerServo; }

    // stop servo pulse at min and max
    if (pulseWidth > maxPulse) { pulseWidth = maxPulse; }
    if (pulseWidth < minPulse) { pulseWidth = minPulse; }

    // Show me the keys I pressed
    //Serial.print("Key pressed: ");
    //Serial.println(moveServo);

    // print pulseWidth back to the Serial Monitor (comment out to undebug)
    Serial.print("Pulse Width: ");
    Serial.print(pulseWidth);
    Serial.println("us");
  }

  // pulse the servo every 20 ms (refreshTime) with the current pulseWidth;
  // this holds the servo's rotation and speed until told to do something else
  if (millis() - lastPulse >= refreshTime) {
    digitalWrite(servoPin, HIGH);   // start the pulse
    delayMicroseconds(pulseWidth);  // pulse width
    digitalWrite(servoPin, LOW);    // stop the pulse
    lastPulse = millis();           // save the time of the last pulse
  }
}


Responsive Prototype 2_ Utilizing a servo motor to raise and lower the structure, creating different states of space as it is engaged. This prototype was a test of both the technical components and the scale of the object.



Prototype 2 In Motion Drawings_ The responsive prototype was documented through a series of drawings highlighting its different states. This scenario tests the object as free-standing on a central column. Its movement is mapped as the armatures move from the column state to the canopy state.



Prototype 2 In Motion Drawings_ The responsive prototype was documented through a series of drawings highlighting its different states. This scenario tests the object as a suspended object. Its movement is mapped as the armatures move from its column state to the canopy state.


The Core_ This central system houses the 'brain' of the installation. The microcontroller, the motor, and the gear train are all housed within this system. The core also acts as the structure, rooting the installation in place and eliminating any dependency on a substructure or existing conditions. The core also allows for a new means of interaction: as the system moves in response to the user, it reveals the core. This new artifact can then become a tool of communication between the user and the architecture.

System Components_

The overall system works as one unit, but it is composed of several smaller systems working together: the central core, the structural elements, and the skin of the system.


Structural Elements_ As the system receives the input data from the sensor, it needs to turn that into an output response. The kinetics of the installation rely on components having the ability to move and reshape their structure. The skeletal members work as joints to create the movement between the object and canopy states. They can be further studied for how the joinery works and its dominance in the system, looking at the material and scale of components.

Armatures_

Option 1 uses long elements to achieve movement, resulting in a large canopy space.

Armatures_

Option 2 uses several small components to achieve movement, resulting in a more dynamic transition between states.

System Components_

The overall system works as one unit, but it is composed of several smaller systems working together: the central core, the structural elements, and the skin of the system.


The Skin_ The installation has a skin that wraps around the armatures. It is the element that works with the movement. As the system moves, the skin alters its appearance with the changes from one state to the other. Materiality is a key factor in this effect. Fabric is useful for its properties: it is lightweight and has the capability of distorting its original shape. The skin would also act as an open-loop system, integrating each component into one readable installation. As users engage with one object, they will affect not only their immediate space but also the space of surrounding objects.

System Components_

The overall system works as one unit, but it is composed of several smaller systems working together: the central core, the structural elements, and the skin of the system.


Scaled Space_ The frame that the armatures attach to can be scaled to create a space within the installation. This would reverse the response of the installation from creating a canopy as it is engaged to creating an enclosure as users are within its proximity. This situation deals with social interactions, creating scenarios of interaction within the boundaries set by the installation.

System Components_

The overall system works as one unit, but it is composed of several smaller systems working together: the central core, the structural elements, and the skin of the system.


const int pingPin = 7;  // the pin that the sensor is attached to
int motorPin = 9;       // the pin that the fan is attached to
int speed = 0;
int sensorPIN = 0;      // analog pin for the electret mic
int led = 11;           // the pin that the LED is attached to
int brightness = 0;     // how bright the LED is
int fadeAmount = 5;     // how many points to fade the LED by

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(motorPin, OUTPUT);
  pinMode(led, OUTPUT);
  digitalWrite(led, LOW);
}

void loop() {
  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();
  delay(100);

  // run the fan and LEDs at full power when someone is within 5 inches
  if (inches <= 5) {
    analogWrite(motorPin, 255);
    analogWrite(led, 255);
  } else {
    analogWrite(motorPin, 0);
    analogWrite(led, brightness);
  }

  // change the brightness for next time through the loop:
  brightness = brightness + fadeAmount;

  // reverse the direction of the fading at the ends of the fade:
  if (brightness == 0 || brightness == 120) {
    fadeAmount = -fadeAmount;
  }

  // wait 100 milliseconds to see the dimming effect
  delay(100);

  // flash the LED when the mic level passes the threshold
  if (analogRead(sensorPIN) > 600) {
    digitalWrite(led, HIGH);
  } else {
    digitalWrite(led, LOW);
  }
  // delay(250);
}

long microsecondsToInches(long microseconds) {
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second). This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}


Responsive Prototype_3 The responsive system compiled several experiments, including proximity sensing, LED outputs, and a DC fan actuator. This prototype tested the scale of the architectural object. At a smaller component scale, it was designed to be dispersed in a field condition. The model was designed to house the electronics within, driving the form to fill the hierarchical needs of each system. The brain (Arduino) was centralized within a bulge, the sensor was pulled down to reach out towards the people, while the fan was pulled up to draw in air to inflate.



Responsive Prototype_3 The LEDs were wired in parallel to incorporate multiple lights per system. The fade-in-and-out code was incorporated to highlight the resting state of the object. While no activity is being sensed, the lights fade in and out, resembling breathing. As the object senses a presence, it turns the LEDs to full brightness, as if it woke up.



Responsive Prototype_3 The responsive system tested a skin that would be inflated and deflated as the system sensed the presence of a user. Here the fan is being tested to inflate the bag.



Responsive Prototypes_ These prototypes were investigations of how sensory technologies can be embedded within architectural objects to create a responsive environment. Each test pushed the implementation of technology, adding complexity to each system in a way that produced a more dynamic outcome. The third prototype was expanded further by refining the technology within and testing it in a field condition. As each Arduino test acted as a building block for the next study, the prototypes too left the opportunity for expansion and further investigation of responsive technologies within architecture.


FINAL PROPOSAL


Applying Technologies to Architecture Based on the past prototypes exploring sensor technologies within a fabricated system, this final proposal was an extension of prototype 3. The proposal included multiple sensors used to collect data from different human engagements. The PIR sensor collected data based on location in space. An electret mic was used to translate speech into an output response. LEDs were used as an output, along with a fan actuator used to inflate and deflate the component's skin. The installation was tested as a field condition where multiple components were constructed to work as a unified system.


Passive Infrared (PIR) Motion Detection Sensor

Lightweight Inflatable Fabric Skin

0.25 in. Plexi Glass Structure Ribs

0.25 in. Plexi Glass Structure Rings

Arduino Microcontroller

Light Emitting Diodes (LEDs)

5V DC Brushless Fan Actuator

Connection Wires

Mother Component

Lightweight Inflatable Fabric Skin

0.25 in. Plexi Glass Structure Ribs

0.25 in. Plexi Glass Structure Rings

Light Emitting Diodes (LEDs)

Connection Wire

Daughter Component


Electret Microphone Sensor

Connection Wires

Passive Infrared (PIR) Motion Detection Sensor

with one of three Daughter Components

1:1 Detail Mother Component



A time lapse of the object as it inflates and deflates

1:1 Component in Motion


// this constant won't change. It's the pin number
// of the sensor's output:
const int pingPin = 7;  // the pin that the sensor is attached to
int motorPin = 9;       // the pin that the fan is attached to
int speed = 0;
int led = 11;           // the pin that the LED is attached to
int brightness = 0;     // how bright the LED is
int fadeAmount = 5;     // how many points to fade the LED by

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(motorPin, OUTPUT);
  pinMode(led, OUTPUT);
}

Initial Pin Inputs for each sensor and actuator

Pin Setup

void loop() {
  analogWrite(led, brightness);

  // change the brightness for next time through the loop:
  brightness = brightness + fadeAmount;

  // reverse the direction of the fading at the ends of the fade:
  if (brightness == 0 || brightness == 120) {
    fadeAmount = -fadeAmount;
  }

  // wait 100 milliseconds to see the dimming effect
  delay(100);

  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

Loop_Telling each component to react and what to react to

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();
  delay(100);

  // run the fan at full speed when someone is within 36 inches
  if (inches <= 36) {
    analogWrite(motorPin, 255);
  } else {
    analogWrite(motorPin, 0);
  }
}

long microsecondsToInches(long microseconds) {
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second). This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}

Arduino Code The code written to create the responses


1:4 Three States Diagram A mapping of the various states that occur when engaging the object.



PIR Sensor

Detection Range

Intimate Space

From Touching - 18”

Personal Space From 18” - 4’

Social Space From 4’ - 8’

Public Space Greater than 8’

Personal Space of the Object Using ranges based on the levels of human personal space, the object takes on a personal space of its own. Using the dimensions given, it can detect presence as one enters its personal space.



A study in Processing to test the variable pulses that can occur when paired with a microphone.

Translating Communication Using the electret microphone, the microcontroller can process the frequency of talking into a light pulse. Through this response, the installation creates a dialogue through the remapping of communication that occurs in physical space.



Attraction Point Selection

Grid Dimensioning

Attraction point Computation

Component Mapping

Parametric Organization Using a scripted definition, each mother object becomes an attraction point, allowing a series of daughter components to be arranged around it.



Fabrication Process_ Each component was 3D modeled using a radial rib structure. The ribs were nested and laser cut from 0.25 in. plexiglass, then constructed by slipping the pieces together at their notched joints. A series of three systems was constructed; each consisted of one main object and three corresponding daughter components.



Fabrication Process_ The Arduino brains were wired and encoded, then placed within the central portion of each main component. Every object received 4 LEDs wired in parallel to translate light responses across the field.



Fabrication Process_ Here one full system of the entire prototype installation is completed. The main central component houses the microcontroller, 4 LEDs, a PIR sensor, and a 5V DC fan. Three daughter components are wired to the main system and translate output responses triggered by the centrally housed sensors.



Final Exhibition_ An exhibition was curated as the final presentation of our thesis. My installation was displayed along with final drawings. The format of the exhibition was set in three acts with three students in each. It offered the opportunity both to display my work for a larger audience and to present my ideas, initiating a discourse on integrated technologies within architecture.






WHY? Why push for this technology in architecture?


DISCUSSION


This thesis looks toward a future architecture that pushes the use of technologies as they become embedded within our culture. We have grown dependent on our technology, specifically our mobile smart devices, and we now have immediate access to information in the palm of our hands. As these technologies continue to develop, they become more of a prosthetic to our bodies than an additive item. The real-time data that these devices offer raises the question of why our physical space does not respond to us in a similar fashion.

My research for this thesis looked to others who are designing responsive architectural interventions. Based on this research, I gained an understanding of how they incorporated technology into their architecture. Many projects are surface oriented, where a building façade, wall panel, or overhead system is the responsive element. Other implementations of responsive technologies are installations that become a spectacle. While some responsive systems are purely playful and evoke a sense of wonder in the space, others work towards functionality. These functional projects look towards environmental stimuli, including façade panels that react to sunlight or overhead canopies creating shelter from sun or rain.

My interests in responsive architecture lie outside of environmentally driven projects and towards an architecture that responds directly to people. Since people are the inhabitants of space, we should have the ability to communicate with our architecture and have it communicate back in real time. This thesis looked at the balance between playfulness and functionality, creating an awareness of these technologies in architecture to understand the possibilities they hold for our future spaces.

My experimentation throughout this thesis process allowed me to explore the many possibilities sensory technologies have in the future of architecture. I gained insight into how others approach this design problem, which informed my approach to how architecture can be responsive. I began by looking at the ways in which we use our smartphones, as individual objects that can act as controllers. I then moved towards looking at architecture as a machine that can perform with us. This development looked towards a softer approach to responsive architecture and began to add a lifelike quality to these installations. If we look towards architecture with embedded intelligence, it should be able to respond both to us and to other architecture. This thought process unveiled the ideas of biomimicry, where natural phenomena inform the characteristics of fabricated systems. This led to my interest in creating a fully 'living' architectural environment through the use of embedded technology and a thoughtful tectonic material relationship as a responsive system.


BIBLIOGRAPHY


Addington, Michelle, and Daniel Schodek. Smart Materials and Technologies for the Architecture and Design Professions. Oxford: Architectural Press, 2005.

Beesley, Philip. Kinetic Architecture & Geotextile Installations. Riverside Architectural Press, 2007.

Bullivant, Lucy. Responsive Environments: Architecture, Art and Design. London: V&A Publications, 2006.

Conrad, Erik. "Embodied Space for Ubiquitous Computing." Responsive Architecture: Subtle Technologies 2006. Riverside Architectural Press, 2006. 60-63.

Fox, Michael, and Miles Kemp. Interactive Architecture. New York: Princeton Architectural Press, 2009.

Inaba, Jeffrey. "Sensorial City." Adaptation: Architecture, Technology, and the City (2012): 22-23.

Shepard, Mark, ed. Sentient City. Cambridge: MIT Press, 2011.



Blog URL Kahen Design <http://kahendesign.wordpress.com/>

Precedent URLs
Future Cities Lab <http://www.future-cities-lab.net/>
Howeler and Yoon <http://www.mystudio.us/>
Diller and Scofidio <http://DSRNY.COM/>
Bjarke Ingels <http://big.dk/#projects>


APPENDIX




NeoPlayformZ_ As our final presentation, we put on an exhibition to display our thesis work. The exhibition was structured under the theme of a play, with three acts to organize the projects. Act I was titled Networked Terrains. This collection of projects centered around utopian architectural ideas ranging from wireless networks to rooftop inhabitation. Act II was titled Reactive Arrangements. These projects looked towards ways of making architecture transformable and restructured ways of thinking about spatial organization. Act III was titled Augmented Interludes. This collection of projects tested different environments that can be created through various mediums, including dream states, technology, and the dark. We constructed display panels to be arranged in specific configurations that corresponded with each act. The exhibition itself became a performance as each act was taken down and reassembled.



NeoPlayformZ Act III Augmented Interludes_ During this final act, Augmented Interludes, there are three explorations of heightened social experiences manifested through very different methodologies and resulting in intriguing and temporal experiences for the users. The first part, Architecture Asleep, uses a theoretical and scientific approach to define a waking architectural reality through the use of dream functions and elements. The second part, Encode_Engage, activates the users’ senses with a responsive and reflexive architecture that explores both tectonics and technology. The final part, Social Darkness, explores the social and sensorial benefits of engaging with people, food, and architecture in complete darkness. These three theses present new social and experiential architectures that specifically engage the users in a moment of time in order to heighten their sense of place and self.



Special Topics Studio Fall 2012_ This installation is a student-designed project for the fall 2012 Special Topics Studio. The work is the product of a collaboration between our studio class (Samantha Altieri, Viviana Bernal, Erblin Bucaliu, Katherine Bujalski, Brittany Carey, Kristen Giannone, Ryan Kahen, Mark Morin, Bao Nguyen, Samantha Partington, Charles Simmons, Liem Than, Robert Trumbour {instructor}, Alex Cabral). The installation is in response to an intensive 10-day travel component conducted at the start of the semester, including visits to design and fabrication studios in New York City, the landscape of Big Bend, Texas, and Marfa, Texas, to see the work of artist Donald Judd. The studio offered the opportunity to work at a one-to-one scale. Through prototyping and fabricating systems at full scale, we were able to encounter and solve problems not seen in previous studio courses. Working through the design schemes, we had the opportunity to focus on specific aspects of design. I dealt with components of lighting to be integrated within the systems, focusing on new fabrication techniques with the CNC machine and cast moldings. I also used generative design programs to create algorithmic solutions for the system at both the scale of an individual component and the populated field.



5/16” x 1-1/4” Galvanized Steel Carriage Bolt

1/2” x 3’ PVC Vertical Member

Metallic Two-Hole Strap (Fastened with 5/16” Galvanized Steel Carriage Bolt)

Steel Conduit Pipe (Flattened Ends)

Metallic Two-Hole Strap

1/2” x 10’ PVC Vertical Member

3/4” .020 Type 304 Stainless Steel Strapping

Resin Cast Light Casing

1/2” x 10’ PVC Vertical Member

PVC Piping

Halogen Light Bulb Wiring for Light Drilled Hole in PVC to House Wire

7/8” x 20” Steel Thinwall Conduit Pipe



Site Plan

