Presence and Absence of The Body




Ann Margaret Lin 14029438

Presence and Absence of the Body

A dissertation submitted in partial fulfilment of the requirements for the degree of MA Digital Architecture and Manufacture. London Metropolitan University. 2015





Acknowledgments

I would like to express my sincere gratitude to the Sir John Cass Faculty of Art, Architecture and Design and London Metropolitan University for letting me fulfil my dream of studying here. To my colleagues Ergys Peka, Jayden Ali, and Edoardo Perani, thank you for your technical support and for discussing ideas with me. To everyone in Unit 4, thank you for your assistance and suggestions throughout the year. To Philip Earley and Arrash Fakouri, I am grateful for all the tutorials and technical support. To the staff at Casswork, especially Anatol Just and Mathew Dart, thank you for all the support on the laser cutting, CNC, and 3D printing machines. To the staff at Digits2Widgets, especially Tom Mallinson, thank you for your assistance and suggestions on 3D printing. To my boyfriend, Andreas Schiedermeier, thank you for all the assistance and support throughout my project. I would like to thank my family for their support, especially my father, for all the help, patience, and Skype tutorials. Most of all, I would like to thank Jonas Lundberg and Nate Kolbe, my advisor and director, for their understanding, passion, patience, and encouragement, and for supporting me in pursuing my interests.


Abstract

Are we here? Or are we there? Or can we be here and there at the same time? Presence and Absence of the Body raises major questions about the concept of place. There is no doubt that the explosion of technology has distorted our understanding of time and space: we have found a way to be present at a place and, at the same time, absent from it. Our bodies employ sensory perception as a medium to perceive our environments; what would happen if we could cheat or control our perception by reprogramming the stimuli to our senses? Could we distort, diffuse, or teleport our body, and thereby redefine the relationship between body and space? My work consists of three parts: the new body, the other space, and the telecasted event. The new body is the postorganic body (Palumbo, 2000), the fusion of the organic body and the machine. Movements of the organic body are sensed, recorded, converted into signals, and transmitted to the machine. The other space is a place different from where the organic body is located. It is dissociated from the body, yet at the same time it responds to certain body movements, creating events which are then telecast back to the body's visual perception via a camera installed in the machine. By tracking the body's movement, projecting it onto the machine positioned in the other space, and experiencing events inside that space through an artificial eye (a camera), the body reaches its own spatial extension. KEYWORDS: IMU, robotics, postorganic body, perception


Contents

Introduction

Presence and Absence of the Body: the Other Space

Physical to Digital: Digitize the Body with IMU 6DOF Sensors

Digital to Physical: Stick Figure

Telecast Plan: Transmission of Visual Perception

Notation: the Trigger Move

Postscript

Bibliography


Introduction

Although scientists have already found ways to interfere with our perception, to a limited extent, using electromagnetic waves, the idea of releasing a body from its geophysical constraints and reconstructing it in different places is still a fantasy that exists only in sci-fi novels. Technology can detect the activities of a body and perform remote operations, yet combining these two existing techniques to diffuse our body and change the interrelation between body and space remains at an imaginative stage. However, as technology and science progress rapidly, we might wonder when our fantasy will come true, when we will live a totally different life with the aid of machines. The truth is that the fantasy has already started. Our understanding of time and space is different from what it was 30 years ago. Distance is no longer an objective quantity that can only be measured in meters or miles. Places are connected through signals and spaces are fused. In the near future, we might be able to teleport ourselves and move between places through our perceptions. To achieve such a goal, interdisciplinary cooperation is eagerly needed. Masamune Shirow's 1989 manga series Kōkaku Kidōtai: Ghost in the Shell is an example of perception teleportation. The ghost indicates the spirit, perception, and feelings that show us our existence; the shell indicates the organic or cybernetic body. This 1989 sci-fi series represents the body as a container that realizes its existence only when a spirit enters it. By leaving its own body and entering another, a spirit can diffuse between bodies


Shirow, M. (1989) Kōkaku Kidōtai: Ghost in the Shell.



and connect their thoughts. A more recent American film, Surrogates (2009), also depicts the atlas of a futuristic world in which humans live in isolation and interact through surrogate robots. Through electromagnetic waves, humans are able to experience the world via stimuli imposed on their senses. Yet these cases cover only part of our discussion: a new relationship between this new body and space is not involved, and space itself is still seen in a bygone way. Ubiq: a Mental Odyssey, Mathieu Briand's (2006) permanent exhibit for the 21st Century Museum of Contemporary Art, Kanazawa, Japan, is an example of how relationships between architectural space and human perception change when a visual perception exchange system is applied to visitors' experience. Yet Briand's case is about deconstructing the concept of the subject: visitors receive changes of vision passively, and their behavior makes no difference to what they receive. Studies of sensors have been made to improve a body's interaction with architectural space. With sensors, it is possible to detect a characteristic of the environment and produce a corresponding output. Kinect, a motion-sensing input device by Microsoft for computer gaming, released to developers for programming, is able to scan a body's motion with a projector, a camera, and a microchip. A combined use of Kinect and Grasshopper can track movements in two dimensions. However, for better resolution, the accelerometer and gyroscope, two of the main sensors used on an IMU board, are commonly used for navigating airplanes and can also track motions. Sebastian O. H. Madgwick's (2013) Ph.D. research at the University of Bristol successfully tracked footsteps with an x-IMU sensor. Studies in robotics were made in order to design a moving device, which would later respond to the sensors and be used for visual perception transplant.


Top: Briand, M. (2001) SYS*017.ReR*06/ PiG-EqN\ 5*8. [Installation]. Ateliers d'Artistes de la ville de Marseille/Marseille, 2001. Bottom: Madgwick, S. (2011) 3D Tracking with IMU.


Stelarc's (1980) mechanical human-like device Third Hand is an example of control and interaction with sensors. The installation is a responding device attached to the organic body and controlled via signals detected by sensors attached to other parts of the body. A similar concept is used again in the Open Bionics project, the 3D printed hand for amputees. Studies in virtual reality have also been carried out to achieve a better understanding of visual experience. Oculus Rift, the 3D eye goggles by Oculus VR, achieves a close imitation of our visual behavior. The same product is used in combination with robotic heads in an exploration surrogate project, DORA, a robotic head built by a team of students at the University of Pennsylvania (2015). The team attempts to achieve telepresence by mapping the goggle-wearer's head movement onto the robotic head for the robot to follow. Although the project did not say much about architectural space, it is an example of interdisciplinary cooperation in which a loop of interaction between visual perception, sensors, and robotics is created. Above all this boundless imagination and rapidly developing technology, I would like to ask: what is the connection between body and space? What is the essence of space, and what makes a space a space? If perception events can be shredded, transported, and restructured, if what we see, hear, or touch is from another space, where do we exist? My work experiments with the concept of teleporting the body by reconstructing perception, starting from our sense of vision. M. Metfessel's (1931) article in the New York World-Telegram, Eye-mindedness and Ear-mindedness, quantified the dominance of visual perception: "65% of the knowledge of the normal human being



Top: Stelarc. (1980) The Third Hand. Bottom: University of Pennsylvania. (2015) Oculus Rift and Robotic Heads: A Match Made in Geek Heaven.



is assimilated through his eyes." By transplanting the sense of vision to a machine, the project attempts to diffuse a great part of the body to another place and to experience interactions with a different time and space. This project can be briefly described as a loop: visual perception influences the movement of the body; the movement of the body creates events in another architectural space; a cyborg within that architectural space telecasts the vision of an artificial eye back to the organic body. Through extended senses and interactive feedback, the new body experiences an overlap of architectural spaces.





Presence and Absence of the Body: the Other Space

The human body is the origin of architecture. Different from nature, which existed long before humans, architecture appears because of the appearance of humans. Humans created architecture. Therefore, if we have to find the dominant force behind the appearance of architecture, we have to start from the body. Why do we have to start from the body? The Functionalists claim that "form follows function". Yet throughout the various theories in architectural history, form has never fallen out of step with function; it has never stopped working in coherence with the appearance, the mind, and the design of the body, in aesthetics, in operation, in every aspect. The function is the body, whatever purpose architecture is built to satisfy. The essence of elements, the essence of space, the essence of architecture, the essence of utilitas, venustas, and firmitas (Vitruvius, 1960), is the body. Speaking of the body refers not only to the shape of the body or the way the body functions; it also speaks of the thinking of the body, the perception of the body. Form, for instance, would not be the same if we were the size of a caterpillar. Circulation would no longer be described as horizontal and vertical if we had wings. If we could see through things, then "walls" might not be like the ones we know, if they still existed. The reason we have doors and windows and a horizontal floor to walk on is that we live in a body like this, as Louis Sullivan (1896) once said in his article The Tall Office Building Artistically Considered:


Top: Le Corbusier. (1887-1965) Le Modulor. Middle: Leonardo da Vinci. (1452-1519) The Vitruvian Man. Bottom: Le Corbusier. (1887-1965) Le Modulor.



It is the pervading law of all things organic and inorganic, of all things physical and metaphysical, of all things human and all things superhuman, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law.

But what is a body? How are the body and space interrelated? How will the evolution of the body, from time to time, affect the way architecture looks? Our bodies consist of concrete material: bones, muscles, skin, veins and nerves, organs and cells. But we also comprise more intangible things, such as perception and feelings. This is how we perceive our environment; this is how we feel our existence at a place. We see the world through vision; we observe it with smell, hearing, taste, and touch. Light strikes the retina of our eyes, thus we see shapes and colors. Smell is mediated by odor molecules, thus we smell flowers. Pressure waves vibrate our eardrums, thus we are able to hear the bird sing. However, what we see is not a fully objective world either. Perception is not a passive window; it is shaped by learning, attention, memory, and expectation (Gregory, 1987). We only see our environment in the way we can. Composition II in Red, Blue, and Yellow (Mondrian, 1930), for instance, has no meaning to a cat (cats only see limited colors). Aesthetics in perspective would never exist if our eyes were located on the sides of our faces, like a fish's. The sensorium is the medium, the bridge, that connects us to our environment and gives us the sense of presence. The explosion of science and technology has distorted our understanding of the body and released us from time and space. We have found a way to be present at a place and, at the same time, absent from it.



For example, when we start a Skype call through cameras installed in other places, our perceptions are extended. Our eyes are present in a space where our other organs are absent. Our visual perception and our organic body seem separated, yet they are not. Through our eyes, time is distorted, space is fused, our bodies achieve telepresence, and our minds escape their organic boundaries. The meaning of the body has changed. Our senses are no longer limited by the location or size of our body; they have been diffused into electric cables and wireless signals. We are experiencing space in a fusion of flesh and machine.

The body that ends at the furthest point of the radius of action of its sensors and remote-control devices, linking biological rhythms and a media universe crossed by information flows…Contact that is no longer simply that with the ground on which we rest our feet, but on the contrary, contact that stems from our possible emancipation from geophysical constraints. (Palumbo, 2000)

Maria Luisa Palumbo (2000), in her book New Wombs: Electronic Bodies and Architectural Disorder, described the cyborg as the technologically extended organism. She argued that this new body is no alien or replicated body, but instead a new "appearance of our incarnation" (Haraway, 1995, cited in Palumbo, 2000, p.23). If a sense can be planted somewhere else and we no longer receive signals or feelings from the space that contains our organic body, then where do we exist? Are we here? Or are we there? Or can we be here and there at the same time? While the body can be deconstructed into electromagnetic waves, while perception can be stimulated and decoded, while ghosts are no longer grounded by shells (Shirow, 1989)[1] and here and there can no longer be distinguished, will we interrelate with our surroundings in a different way? This



new body will ultimately lead to a new criterion of function, a new definition of time, space, and place, and, consequently, the emergence of a new architecture.

Project【1:1/ 1:10】

There are no ants the size of elephants. Physical demands change as the scale changes. This project aims to digitize the body and diffuse it into "the other space" through signals. A cognitive disconnect occurs between the sense of vision and the senses of hearing, touch, smell, and taste. A part of us is present in a 1:10 scaled model while the other parts of us are present in the so-called real world. The meaning of time, distance, and scale has changed. How will a new body settle in between two different places? By teleporting the sense of vision into this scaled space, I wish to raise the question of the presence and absence of the body and bring out the following question: how does space change as the body changes from 1:1 to 1:10?

1. Shirow, M. (1989) Kōkaku Kidōtai: Ghost in the Shell. Japan: Kodansha. The 1989 manga describes spirits as ghosts and bodies as shells.


Bone structure of an elephant and an ant.





Physical to Digital: Digitize the Body with IMU 6DOF Sensors

Since machines were introduced into human life, from remote controls to robot arms, our body has been digitized and processed in the form of data to some extent. Sensors are commonly used as a medium that interprets the characteristics of the environment for the digital world. Each type of sensor has its specific object of detection and decodes the environment for a different type of task. In this project, we are looking for a type of sensor that is able to detect the possible motions of a body. An IMU (Inertial Measurement Unit) is a system composed of an accelerometer and a gyroscope (gyro). The particular IMU 6DOF (6 Degrees of Freedom) board we use in this project comprises an ADXL345 accelerometer and an ITG-3200 gyro. An accelerometer is a device for measuring acceleration with the help of gravity. The operation of a hot-air-bubble type 3-axis accelerometer can be roughly expressed as a hot air bubble surrounded by thermal sensors. When the accelerometer experiences changes of acceleration, including those caused by gravity, which can happen in X or Y rotations, the hot air bubble leaves its balance position and approaches one or more of the sensors. This causes the sensors to output a different signal pattern, making it possible to detect acceleration changes or angular positions. A gyro is an angular rate sensor, classically composed of spinning wheels whose spin axes are free to assume any direction. It kicks in where an accelerometer leaves off, and the


Diagram: operating principle of the thermal-bubble accelerometer model. Four thermal sensors (X+, X-, Y+, Y-) surround the bubble; three states are shown: X and Y both horizontal, Y horizontal with X tilting clockwise, and Y horizontal with X tilting counterclockwise.


other way around: an accelerometer compensates when a gimbal-lock situation happens. Generally, the motions of a body can be roughly subdivided into translational motion and rotational motion. Although a 6DOF sensor is not ideally suited to detecting linear motions, it is still possible to do so by integrating afterwards. While a gyro is a pure angular rate sensor, an accelerometer detects both acceleration and rotation, which means the data it obtains is a sum of angular and translational motion. Theoretically, subtracting the gyro value from the accelerometer value will leave the data of pure linear motion. A 6DOF board comes with a fixed, pre-programmed script that can only be viewed read-only in the file library. The output of a 6DOF consists of three data types: the three-axis acceleration values (accX, accY, accZ), the three-axis angular rate values (gyroX, gyroY, gyroZ), and the three-axis Euler angle values (Euler0, Euler1, Euler2). Only the acceleration values are adopted directly from the accelerometer. The angular rate values and Euler angles are the results of complex mathematics involving quaternions (q). In these calculations, the angular rate result is rectified by acceleration and quaternion, and the Euler angle is the integral of the angular rate corrected by quaternion. However, the result of this complex calculation cannot be applied in our case. Tests carried out to find a regular pattern between each value and the actual rotation angle indicate an influential amount of offset and drifting: the Euler angle values are not consistent with the actual situation.

Correct Offsets and Drifts

The accumulation of data offsets leads to



Acceleration raw graph

Integral: acceleration to velocity to displacement



drifting. Visualized in Processing, drift accumulated over time can result in a false reading of the actual orientation. To fix the problem, we cancel out offsets with an average offset value to prevent drifting. The average offset value is the sum of the offsets over a certain time period divided by the number of samples:

OffsetAverage = (Offset0 + Offset1 + Offset2 + … + Offsetn) / n

Offset0 is then replaced by Offset0_new = Offset0 - OffsetAverage, and likewise for the other original offset values.

Correct Angular Data

Integrating the angular rate retrieved from the gyro over time yields the angular motion. The accumulated angular displacement is calculated by the trapezoidal rule over a sample time of 0.05 seconds. However, the ITG-3200 gyro is not stable enough to work alone: offsets and drifts occur, and the outcome is not accurate enough for further use. When an accelerometer is not placed horizontally on the X-Y plane, a vector projection of gravity is detected as acceleration on the tilted axes. For example, if a rotation around the Y axis happens and the X axis rotates from 0 to 90 degrees, the vector projection on X grows from 0 to g (g = 9.8 m/s²). The acceleration can be described as a = g sinθ. By obtaining the value of θ, we are able to calculate the angular position. However, this method can only be applied when the accelerometer has no linear motion. The combination of accelerometer and gyro working in tandem



Calibration of IMU 6DOF
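The calibration steps described in the text, average-offset cancellation, trapezoidal integration of the angular rate over 0.05 s samples, and the tilt angle recovered from a = g sinθ, can be sketched as follows. This is an illustrative Python sketch, not the project's actual LabVIEW/Arduino code; the function names are assumptions.

```python
import math

def average_offset(samples):
    """Average offset over a calibration window taken while the sensor is at rest."""
    return sum(samples) / len(samples)

def remove_offset(raw_values, offset_average):
    """Offset_new = Offset - OffsetAverage, applied to every sample."""
    return [v - offset_average for v in raw_values]

def angular_displacement(rates_deg_s, dt=0.05):
    """Trapezoidal-rule integration of gyro angular rate (deg/s), 0.05 s sample time."""
    total = 0.0
    for r0, r1 in zip(rates_deg_s, rates_deg_s[1:]):
        total += (r0 + r1) / 2.0 * dt
    return total

def tilt_angle(acc, g=9.8):
    """Angular position from the gravity projection a = g*sin(theta).
    Only valid while the accelerometer experiences no linear motion."""
    a = max(-g, min(g, acc))  # clamp to the valid domain of asin
    return math.degrees(math.asin(a / g))
```

With the board flat, tilt_angle(0.0) gives 0 degrees; fully tilted so that the full g is projected onto the axis, tilt_angle(9.8) gives 90 degrees.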





Quaternion calculation.





IMU 6DOF library scripts.





Test result of IMU 6DOF.



compensates for the pitfalls of each: the accelerometer is trusted when angular-only motions are performed, and the gyro is adopted when linear motions are involved. The concept of estimation is introduced into the algorithm shown in the diagram below. R represents the result of each stage: Racc(n) is the current angular position data from the accelerometer, Rgyro(n) the current angular position data from the gyroscope, Rest(n) the current estimated angular position, and Rest(n-1) the previous estimated angular position. A weighting of trust applied between Rest(n-1), Racc(n), and Rgyro(n) produces the final value Rest(n). This filters noise, smooths the signal, and iteratively corrects it to approach an accurate result. A threshold is also set up to avoid cumulative errors and drifting: random data floating inside the set range of values is cleared out. However, there is still one shortcoming of this method. Since rotation around the Z axis does not lead to an acceleration change on any axis, the detection of Z rotation can rely only on the gyro, which leads us back to the same predicament as before. Therefore, care is required when performing Z rotation with an IMU board.
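One update step of this weighted estimation can be sketched as a complementary-filter-style blend of the accelerometer and gyro paths. The trust weight, threshold value, and function name below are illustrative assumptions, not values taken from the project.

```python
def next_estimate(r_est_prev, r_acc, gyro_rate, dt=0.05,
                  trust_acc=0.02, threshold=0.05):
    """One update of the estimated angular position Rest(n).

    r_est_prev -- Rest(n-1), the previous estimate
    r_acc      -- Racc(n), angular position from the accelerometer
    gyro_rate  -- current angular rate from the gyro (deg/s)
    """
    # Rgyro(n): propagate the previous estimate with the integrated gyro rate.
    r_gyro = r_est_prev + gyro_rate * dt
    # Weighted trust between the accelerometer and gyro paths.
    r_est = trust_acc * r_acc + (1.0 - trust_acc) * r_gyro
    # Threshold: ignore small random floating to avoid cumulative drift.
    if abs(r_est - r_est_prev) < threshold:
        return r_est_prev
    return r_est
```

Iterating this update over incoming samples smooths noise while the threshold suppresses the slow drift described in the text; it also reproduces the stated limitation, since with no accelerometer contribution (as for Z rotation) only the gyro path remains.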

Diagram: estimation loop, in which Rest(n-1), Racc(n), and Rgyro(n) are weighted to produce Rest(n).


Translational Motion

Linear motions are calculated by canceling out the part of the value caused by angular motions: Racc(n) = Rest(n) + Rlin(n), where Rlin(n) represents the unknown current value of linear motion. Subtracting Rest(n) from Racc(n) gives the current linear acceleration, and therefore the linear movement by double integration. The finished sensor set includes an Arduino board, a Bluetooth module, an IMU 6DOF board, and a power supply. The signal of a 6DOF sensor is transferred wirelessly through Bluetooth using the TX/RX pins. The outcome of a sensor can be visualized and monitored interactively in LabVIEW, a system-design environment. On the MAIN canvas, the number of sensors and the baud rate for transfer are set up before running. The system can read up to three sensors at a time. The incoming COM port is selected manually to connect the 6DOF to the system. Connected in-port COM ports are shown at INPORTS in the selected order. Data read in via these COM ports are shown at INPUT: 6DOF in the data type: accX, accY, accZ, gyroX, gyroY, gyroZ. The out-port COM port is the port to which the data is transferred for further use. The output data is shown at OUTPUT: in the data type: (Sensor No.) linX, linY, linZ, angX, angY, angZ, where the first number is the sensor number and the rest are values of linear/angular motion. The linear X/Y/Z values divided by 100 are in meters; the angular X/Y/Z values divided by 100 are in degrees. RESET buttons



LabVIEW front panel: number of sensor inputs, input baud rate, in-port COM port, chosen COM port; data read in: (Sensor No.) accX, accY, accZ, gyroX, gyroY, gyroZ; out-port COM port to Stick Figure, output baud rate, out-port COM port to X-Y table; data output: (Sensor No.) linX, linY, linZ, angX, angY, angZ.



for g and acceleration are designed to correct tolerances manually. Signals read in from the 6DOF sensor are visualized on the canvases ax-ay, az-gx, and gy-gz as line charts. This redress of the IMU 6DOF board provides a more reliable method of tracking body movement than before. Yet a movement that is too slow cannot be detected by the sensor, due to the thresholding in the algorithm, and the multiple integrations in the process also lead to deviation. The unstable status of Z rotation remains the main problem to be conquered in the future.
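The output format described in the text can be decoded on the receiving side roughly as follows. This is a hypothetical parser sketch, the project's actual scripts live in LabVIEW and Arduino, and the function name is an assumption.

```python
def parse_output(line):
    """Decode '(Sensor No.) linX, linY, linZ, angX, angY, angZ'.

    As described in the text, linear values divided by 100 are in
    meters and angular values divided by 100 are in degrees.
    """
    head, rest = line.split(')', 1)
    sensor_no = int(head.lstrip('('))
    values = [float(v) for v in rest.split(',')]
    linear = [v / 100.0 for v in values[:3]]    # meters
    angular = [v / 100.0 for v in values[3:]]   # degrees
    return sensor_no, linear, angular
```

For example, the line "(1) 100, 0, 0, 9000, 0, 0" decodes to sensor 1, a 1 m displacement on X, and a 90 degree rotation about X.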



Top: A combined set of IMU 6DOF/Arduino Uno/Bluetooth. Bottom: Wire connection of the sensor set.




Digital to Physical: Stick Figure

Stick Figure is a lightweight mechanical device designed and programmed to act as a projection of the body and to work as the sense of sight in the parallel space. First, in order to represent the sense of sight, Stick Figure aims to capture the motions of the body above its shoulders synchronously and accurately. It is asked to nod, tilt, and shake, defined as pitch, roll, and yaw in flight dynamics, like a human head. Second, it requires a lightweight load and a foundation that allows it to move freely in the parallel space yet, at the same time, is stiff enough to carry a camera on top. Last, the way data is output from the 6DOF sensors has an impact on how this mechanical device can be controlled. To achieve synchrony and accuracy, Stick Figure is intended to require less complex computation, smoothing transmission. Therefore, to meet these many demands, the design of Stick Figure is divided into four related areas: main body design, circuit design, base design, and scripting and servos.

Main Body Design

The form of Stick Figure derives from the cervical spine. The cervical spine consists of 7 vertebrae, known as C1 - C7. These 7 vertebrae, together with the surrounding muscles, support the movement of the human head. If we analyze head movements in a Cartesian coordinate system with the X axis pointing to the right, the Y axis to the front, and the Z axis to the top: C1 and C2, the two topmost vertebrae, together perform rotation around the Z axis (shaking




or rotating); C2 - C7 perform rotation around the X axis (nodding) and the Y axis (tilting). Gaps between the vertebrae are filled and supported by muscles, veins, and nerves; this stabilizes the vertebrae and prevents our neck from dislocating while performing actions. With the same concept, the main body design of Stick Figure attempts to achieve the design goal, acting as a projection of head movement, through biomimicry. The prototype consists mainly of five "bones" and four pieces of sponge, arrayed in vertical order. The bones play the role of the cervical spine here: they give the model a certain size and shape and carry the pressure. The gaps in between are filled with pieces of sponge, which stabilize the bones by friction and make it easy for the components to stick together without limiting the extent of any action. A music wire that works as a bendable backbone goes through the parts, centering all the bones and sponge. The strength and elasticity of the music wire supports the tension of the prototype while bending. This head-mimicking device can therefore easily be bent, up to a 30 degree angle in every direction, by nylon filaments attached to four lightweight servo motors, and returns to its start position when the threads are loosened. On top of the topmost bone sits a continuous micro servo with a camera attached. This servo plays the role of the junction between C1 and C2; that is, it controls the head rotation around the Z axis. The camera therefore points in the direction we are looking. Components such as the bones and pulling wheels were 3D printed in nylon after the details were modified according to the material behavior. Compared to the study model, the 3D printed components have a more organic and curvy geometry for smoother functioning. Details such as the minimal thickness of the bones were increased from 1mm to 4mm to maintain rigidity.



Two ways of approaching the goal of design: geometry and biomimicry.



Design of Stick Figure.



Size of Stick Figure compared to an iPhone.



Robot neck study model.


3D printed robot neck components.


Circuit Design

To complete the device, a considerable number of wires is connected to the camera and servos. It is essential to have a circuit system that attaches well to the device and follows through all the movements without tangling or shedding. To make this easier to describe, we divide the device vertically, from top to bottom, into four layers: camera space, neck space, machine room, and electricity room. Apart from the power circuit of the camera, which stays in the camera space, all circuits, including the circuit of the continuous servo, pass through the machine room to the electricity room. Circuits that start from the camera space travel down through the neck space; hooks attached to the bones collect the wire and prevent it from impeding movement. The machine room contains the four servos that control the bending actions in the four cardinal directions. The electricity room is where all circuits gather at the Arduino board. A 9 volt battery is also placed in this room to supply the power for the Arduino. To minimize the size and simplify the device, we keep the circuit as short as possible by placing the machine room next to the electricity room.

Base Design

How the base is designed determines the mechanical and programming logic we use to make Stick Figure move. Previously, we discussed how Stick Figure captures rotational motions; here, we discuss how it captures translational motions. There are several methods to move the device, but choosing the best way of moving Stick Figure takes careful consideration of the data type we get from the sensor. An autonomous vehicle is a popular solution to achieve our goal. Such a robotic car is usually driven by continuous servos. Unlike position-control servos, continuous servos operate with speed control: they rotate fully forwards or backwards at a specified speed instead of moving to a position. However, the data we receive from the sensors are "positions". The difference between data types requires calculus to redress; this leads to more errors during the process and makes it impossible to capture our motions synchronously and accurately. Although not commonly used to carry cameras, an X-Y table is another method for our situation. Although the scope of activity is limited, it is a better solution if we ask for accuracy. X-Y tables provide horizontal translational motions for machines; for example, they are commonly used in CNC machines. Generally, they can be applied to processes that can be described as series of movements and operations. The translational motion is controlled along two axes driven by stepper motors. This type of motor operates through step counts, which shares the same data type as the sensors. In this case, we are using a NEMA 17 stepper motor, which has a 1.8 degree step angle, to drive the position. If R represents the radius of the pulley and S represents the scale of the Stick Figure model, the maximum error will be

maximum error = (1.8π R / 180) × (1/2) ÷ S

In our case, the radius of the pulley is about 6mm and the scale is 1/10. This gives us a maximum error of around 0.9mm, no more than 1mm. If we take an average person's stride length as approximately 762mm, the maximum error in tracking a person's motion with the X-Y table is about 0.1%.

Construction of the X-Y Table

Our X-Y table was first constructed with linear rails and linear


Composition of the project: the other space, Stick Figure, X-Y table.

Sections of Stick Figure and X-Y table.


Composition of Stick Figure: camera, neck, machine room, and electricity room.


ball bearing sliders. However, compared to the frictionless air bearings commonly used in CNC machines, ball bearings are not frictionless. The quality of manufacture and the weight of the load can easily increase friction between the rail and the slider and therefore cause distortion during operation. For this reason, we had to adjust our original plan slightly and replace part of the rails with castors.

Scripting and Servos

Scripting plays an important role as a medium of interpretation in the project. It transforms information collected from a physical environment and processes the data into different forms of signal to execute tasks. Here, the script reads in data via COM ports and directs it to an Arduino Uno board to control Stick Figure. Our scripting structure in Arduino is composed of two major parts: data import and motor control. The former executes Arduino code to import data from LabVIEW; the latter processes the data collected by the former and outputs commands to the servo motors. We will start our discussion with the latter. According to function type, the motor control section is divided into two branches: X-Y rotation and Z rotation. X-Y rotation is the bending action that mimics nodding and tilting. In the scripts, servoPinA, servoPinB, servoPinC, and servoPinD are four integer parameters used to control motors A, B, C, and D; motors A and C are a corresponding pair that controls X rotation, and motors B and D are the other corresponding pair that controls Y rotation. At the start of the script, we enable Arduino digital pins ~5, ~6, ~9, and ~10 for PWM (Pulse Width Modulation) signal output to the four servos. The Tower Pro SG90 9g servo motor we are using receives pulse widths between 500 and 2400 µs and



has a 180-degree rotation angle, where 500 µs refers to 0 degrees and 2400 µs to 180 degrees. Pulley wheels are attached to the motors as a medium for unit conversion between length and angle. We define 2300 µs as the maximum thread length, and therefore the maximum bending angle, and use it as a reference while we carry out the test of PWM signals and their corresponding bending angles. The results are as follows:

[Charts: PWM pulse width (µs, 0-2500) against bending angle (0-40 degrees) for servo pair A-C and servo pair B-D.]
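As a quick illustration of the linear relation quoted above, the SG90's pulse-width range can be mapped from angle to microseconds. This is a minimal sketch, assuming the idealized 500 µs = 0° and 2400 µs = 180° endpoints from the text; the function name is ours, not the project's.

```cpp
#include <cassert>

// Linear map from servo angle (degrees) to pulse width (microseconds),
// using the SG90 range quoted in the text: 500 us = 0 deg, 2400 us = 180 deg.
int angleToPulseUs(double angleDeg) {
    const double minUs = 500.0, maxUs = 2400.0, maxDeg = 180.0;
    return static_cast<int>(minUs + (maxUs - minUs) * angleDeg / maxDeg + 0.5);
}
```

For example, a 90-degree command maps to 1450 µs, the midpoint of the range.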


Transformation between PWM pulse width and tilting angle: in the stand-straight position both servos of a pair receive 1600 µs; at a 20° tilt angle the pulse widths shift to roughly 2000 µs and 1300 µs.


This diagram indicates the corresponding pulse widths of the relevant pairs at different bending angles. Among them, 0 degrees is the stand-straight position of Stick Figure. Based on the results, we denote the PWM value and its corresponding bending angle as the linear function f(x):

f(x) = PWMstr + m × (x / 100), where m = (PWMMax − PWMmin) / DegreeMax

PWMstr represents the pulse width when Stick Figure is standing straight. PWMMax and PWMmin represent the maximum and minimum pulse widths of the motor, reached at the maximum bending angles in opposite directions. DegreeMax represents the maximum bending angle. x is the expected bending angle of Stick Figure, updated via the sensors. m is the slope of each f(x). Since each motor has a slightly different range of pulse width, the steepness of each line differs. Here we get the functions A(x), B(x), C(x), and D(x) as:

A(x) = 1600 − 600 × x / 3500
B(x) = 1700 − 600 × x / 3500
C(x) = 1700 + 600 × x / 3500
D(x) = 1600 + 700 × x / 3500
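The four functions A(x)-D(x) can be sketched directly in code. This is a minimal illustration, assuming (as the x/100 scaling suggests) that x arrives as the bending angle multiplied by 100, so x/3500 equals (x/100)/35; the function names are ours.

```cpp
#include <cassert>

// Pulse-width functions for motors A-D, with x the expected bending angle
// scaled by 100 (an assumption inferred from the x/100 term in f(x)).
// Integer arithmetic mirrors typical Arduino usage.
long pulseA(long x) { return 1600 - 600 * x / 3500; }
long pulseB(long x) { return 1700 - 600 * x / 3500; }
long pulseC(long x) { return 1700 + 600 * x / 3500; }
long pulseD(long x) { return 1600 + 700 * x / 3500; }
```

Note that at the largest input, x = 3500 (35 degrees), motor D reaches 1600 + 700 = 2300 µs, which matches the maximum thread length defined earlier.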


Finally, we include A(x) and C(x) in the Arduino function "Bow_2_NS" to control bending movements around the x axis, and B(x) and D(x) in the Arduino function "Bow_2_EW" to control bending movements around the y axis.

Z rotation motor control relates to movements such as shaking and turning of the camera head. We declare the integer parameter servoPinE for pin ~3 on the Arduino at the start of the script to output a PWM signal to the motor. Owing to the characteristics of speed-control servos, PWM signals no longer bring the rotation to a certain angle; instead they command the motor to perform positive or negative rotation, or to pause. Each servo has a different pulse width at pause; for the one we are using here it is 1440 µs. When the pulse width is larger than 1440 µs, the servo rotates in the positive direction; when it is smaller than 1440 µs, the servo rotates in the negative direction. The operating speed depends on the difference between the operating pulse width and the pulse width at pause: the greater the difference, the faster the speed. Operations at different speeds over different distances have different accelerations and decelerations while starting and stopping. This unpredictable speed difference leads to errors when integrating speed to obtain position. To solve this issue, we use "for" loops to set up a "step unit" as a basis for measurement. The step unit is a concept of dividing a distance into steps of equal length. While a loop controls the servo to rotate continuously, a for loop divides the whole operation into even steps. It operates the servo at the same speed over the same distance each time a step is executed. Instead of rotating straight to the goal, the servo rotates a certain angle each time, with the same acceleration and deceleration, and "steps" to the goal position.



for (int i = 0; i < 2; i++) {
  digitalWrite(servoPin, HIGH);
  delayMicroseconds(1490);
  digitalWrite(servoPin, LOW);
  delay(30);
}

After several experiments on loop count (int i), speed (delayMicroseconds), and break time (delay), a for loop that runs twice at a pulse width of 1490 µs was considered the most accurate step unit. Based on our experiments, a reliable result for the average step angle came back as 2 degrees per step. To complete the task, we add another for loop j on top of for loop i. If for loop i represents a unit of step length, for loop j is the number of steps that an operation takes to reach its goal position.

for (long j = 0; j < loop_times; j++) {
  for (int i = 0; i < 2; i++) {
    digitalWrite(servoPinE, HIGH);
    delayMicroseconds(1490);
    digitalWrite(servoPinE, LOW);
    delay(30);
  }
}

Here, loop_times multiplied by i is the total angle which the servo is assigned to travel. While scripting, we assign a parameter delta_deg to loop_times to receive and process signals from the sensors. The relation between delta_deg, i, and loop_times can be described as:



loop_times = |delta_deg| / (i × 100)

Finally, we replace loop_times with delta_deg to complete the function of Z rotation.
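The loop_times relation can be sketched as a small helper. This is an illustration only, assuming delta_deg arrives from the sensor scaled by 100 (consistent with the x/100 scaling used for the bending functions); the function name is ours.

```cpp
#include <cassert>
#include <cstdlib>

// Number of step-unit iterations for the Z-rotation servo: the inner for
// loop runs i = 2 times per step, and delta_deg is assumed to arrive
// scaled by 100, hence loop_times = |delta_deg| / (i * 100).
long stepsForRotation(long delta_deg, long i = 2) {
    return std::labs(delta_deg) / (i * 100);
}
```

Integer division means any remainder smaller than one step unit is dropped, which bounds the residual error at roughly one step angle.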

Data import is the part of the script that connects the input of 6DOF sensor data to motor control. Following the previous chapter, each data set we receive from LabVIEW via COM5 contains a serial number at the start of the line, followed by six sensor values that represent position and rotation along each axis. To input the data to the script, a set is described parametrically as

(SenNo) px, py, pz, qx, qy, qz

SenNo is the serial number of the sensor, which is 0 in our case; px, py, pz are the scalar components of the position on the x, y, and z axes; qx, qy, qz are the scalar components of the angle rotated around the x, y, and z axes. Parameters carry the data from COM5 to the motors and are then replaced by the next set of data. In the complete script, the data import section functions as a gate, with motor control functioning as a substructure inside it.
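One way to read a line in this format is with a simple formatted scan. The sketch below is a hypothetical parser for the "(SenNo) px, py, pz, qx, qy, qz" layout described above; the struct and function names are ours, not the project's.

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical parser for one line arriving on the COM port, in the
// format "(SenNo) px, py, pz, qx, qy, qz". Returns true only when all
// seven fields are read successfully.
struct Frame {
    int senNo;
    float p[3]; // position components px, py, pz
    float q[3]; // rotation components qx, qy, qz
};

bool parseFrame(const char* line, Frame& f) {
    return std::sscanf(line, "(%d) %f, %f, %f, %f, %f, %f",
                       &f.senNo, &f.p[0], &f.p[1], &f.p[2],
                       &f.q[0], &f.q[1], &f.q[2]) == 7;
}
```

Checking the return value against 7 makes the gate behaviour explicit: a malformed line is rejected instead of partially updating the motors.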



Telecast Plan: Transmission of Visual Perception

There are several ways to telecast images from the other space back to our eyes, and each of them has to be weighed against parameters such as physical dimensions, cost, ease of programming, and value for further experiment. The two ends of our visual telecasting path are the camera (a.k.a. the artificial eye) and the eye goggles. The camera is the visual recording device carried on top of Stick Figure; it is the signal sender and the start of the route. The eye goggles are the visual receiver and the end of the route; they are the device carried by the organic body to receive constant signal input. The dimensions and weight of the camera interact directly with the size of "the other space" and the carrying device, Stick Figure. For example, the iPhone camera is a fast way of recording and telecasting video, but the size and weight of an iPhone might demand a stronger carrier, which would then affect the size, material, structure, and price of Stick Figure and, in turn, lead to a corresponding change in the size of "the other space". Also, different types of camera require different methods of signal transmission, and some require different kinds of by-product. Signals can be transmitted in two formats: analog or digital. The analog signal appeared earlier than the digital signal and is mostly used in security cameras. Analog video transmission requires an RCA


The telecast set, devices and wire connection: spy camera, camera video receiver, and 2D virtual glasses.


connector, which is known for its red, yellow, and white signal wires. The digital signal is a more modern method of transmitting signals. Since it is less affected by noise, it has replaced analog signalling and become more and more popular in modern life. Eye goggles first became commonly known through video gaming. Since Facebook acquired Oculus VR in 2014, 3D virtual reality goggles have officially replaced 2D virtual reality glasses and become the future. After choosing between the telecast options, a mini spy camera and 2D virtual eye goggles were adopted as the final solution. Benefiting from the mini spy camera's light weight and small size, we can reduce Stick Figure's required strength and weight to provide more fluent movement mapping. However, the mini spy camera requires analog signal transmission and therefore leads to 2D virtual goggles and a complex connection of signal wires. Regarding its development and value in the future, this might not be the best choice for further application.



Notation: the Trigger Move

As previously mentioned, the unstable status of the IMU sensor is a problem that remains to be solved. The problems can be divided mainly into two types:

The first type of problem happens when a sensor is attached to the organic body unstably or when the movement of the body is not smooth enough. Test results also indicate that a movement that is too slow is difficult for the sensor to detect, while a sudden movement greatly lowers the accuracy of the result. Since the result comes from multiple integration, deviation can accumulate, and there is no perfect solution to this. However, body installations could provide a smoother data output from the body by limiting its motions. Due to the soft and flexible nature of our skin, body installations designed according to the characteristics of the sensor can be applied as a joint between the skin and the sensor, providing a more rigid surface for the sensor to attach to.
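The accumulation of deviation under multiple integration can be made concrete with a small numerical illustration, not taken from the project: a constant accelerometer bias b, integrated twice, produces a position error of roughly b × t²/2, so drift grows quadratically with time even when the bias itself is tiny.

```cpp
#include <cassert>
#include <cmath>

// Illustrative only: double-integrate a constant accelerometer bias to
// show how a tiny error grows quadratically into position drift.
double positionDrift(double bias, double dt, int steps) {
    double v = 0.0, x = 0.0;
    for (int k = 0; k < steps; ++k) {
        v += bias * dt; // first integration: velocity error
        x += v * dt;    // second integration: position error
    }
    return x;
}
```

With a bias of 0.01 m/s² sampled at 100 Hz, the position error after ten seconds is already about half a metre, which is why the sensor data must be periodically corrected rather than trusted indefinitely.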

The second type of problem occurs when interactions happen between the new body and the other space. Due to the continuously drifting data, it is necessary to bring in the concept of sets. A set is a series of data that represents a certain type of movement. It can be recorded and memorized in LabVIEW as a basis for


Illustrations of Labanotation notation system

movement mapping. By using the concept of sets, it is feasible to have different movements recognized by matching them to tens or hundreds of known recorded data sets. The well-known Labanotation is an example of how movement can be divided, recorded (notated), copied, represented, or regrouped. The inevitable drift in the sensor data can then be corrected and reset by comparing the data to the best-matching set. Each set is given a function, which triggers a certain behavior in response.
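The matching step described above can be sketched as a nearest-neighbour search: compare the incoming data series with each recorded set and pick the closest one. This is a minimal illustration under our own assumptions (fixed-length series, squared Euclidean distance); the project itself may match sets differently.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Return the index of the recorded set closest to the live data series,
// by squared Euclidean distance. Assumes all series have the same length.
int bestMatchingSet(const std::vector<double>& live,
                    const std::vector<std::vector<double>>& sets) {
    int best = -1;
    double bestDist = INFINITY;
    for (std::size_t s = 0; s < sets.size(); ++s) {
        double d = 0.0;
        for (std::size_t k = 0; k < live.size(); ++k) {
            const double diff = live[k] - sets[s][k];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = static_cast<int>(s); }
    }
    return best;
}
```

Once the closest set is found, its associated function can be triggered and the drifting sensor values reset to that set's reference data.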


Postscript

The relationship between body and space has been discussed in many works via different media. The earliest known study of the human body was done by Vitruvius in De Architectura (1960), describing the proportions of the human figure as the model for architectonic proportions; it was later given more visual detail in Leonardo's Vitruvian figure. In the Renaissance, the idea of the body as a model of formal measurement was replaced by the idea of the body as a system of perception. This led to the conviction that architectural forms should be concordant with the laws of the senses rather than the proportions of the body. In the 20th century, the Bauhaus' performances explored the new mechanical space through a bio-mechanical body: the body that appears in the space is a body extended through space, a body where costume and scenery merge. Our century attempts to move beyond the body and its physical nature, to embody the projections of a virtual world and to explore the potential of a postorganic body. My work attempts to bring forward a dream that has occurred in novels, films, and other works, and has been discussed throughout history. In this new age, could technology change not just the form of space but the essence of space, the essence of the human body? A further application of this project, the Solar Villa project, will take place at team SOL_ID's 2015 Solar Villa show house in Cali, Colombia. The 2015 Solar Decathlon social housing project is, in overview, a scheme of future dwelling under limited


space and living conditions. The Solar Villa project seeks to extend the field of sensory perception and imaginations of living.



Bibliography

Crimando, J. (2008) Cervical Vertebrae Overview. Available at: http://www.gwc.maricopa.edu/class/bio201/vert/cerv.htm (Accessed: 2 September 2015).

Hoberman, D., Lieberman, T., Handelman, M. (Producers) and Mostow, J. (Director) (2009) Surrogates [Motion picture]. United States: Touchstone Pictures, Mandeville Film and Top Shelf Productions.

Madeddu, D. (2011) 'Kinect + Grasshopper Skeleton', Grasshopper, 11 September. Available at: http://www.grasshopper3d.com/video/kinect-grasshopper-skeleton (Accessed: 2 September 2015).

Eberly, D. (1999) Quaternion Algebra and Calculus. Available at: http://www.geometrictools.com/Documentation/Quaternions.pdf (Downloaded: 4 March 2015).

Gregory, R. L. (1997) Knowledge in Perception and Illusion. Available at: http://www.richardgregory.org/papers/knowl_illusion/knowledge-in-perception.htm (Accessed: 3 September 2015).

Jones, C. A. (2006) Sensorium: Embodied Experience, Technology, and Contemporary Art. Cambridge: The MIT Press.



Palumbo, M. L. (2000) New Wombs: Electronic Bodies and Architectural Disorders. Translated by Lucinda Byatt. Basel; Boston; Berlin: Birkhäuser.

Shirow, M. (1989) Kōkaku Kidōtai: Ghost in the Shell. Japan: Kodansha.

Sofge, E. (2015) 'Oculus Rift and Robotic Heads: a Match Made in Geek Heaven', Popular Science, 27 April. Available at: http://www.popsci.com/oculus-rift-and-robotic-heads-matchmade-geek-heaven-0 (Accessed: 2 September 2015).

Starlino (2009) A Guide to Using IMU (Accelerometer and Gyroscope Devices) in Embedded Applications. Available at: http://www.starlino.com/imu_guide.html (Accessed: 3 September 2015).

Wikipedia (no date) 'Cervical vertebrae'. Available at: https://en.wikipedia.org/wiki/Cervical_vertebrae (Accessed: 2 September 2015).



