
Prototyping Interactive Experiences 40060065 Edinburgh Napier University


Monster Circuits Project

Genmaze. Project


Table of Contents

The Module
Keeping Track
Documentation Booklet
Interaction

Processing
  Move to Dot Game
  Manipulating Pixels
  Audio
  Visualiser Project
  Turning Generative

Project 1: Objects D'Interface
  Interpretation
  Research
  Creation
  Coding
  Outcome and Testing

Arduino
  Exploring the Tutorials
  Morse Coding
  Kiltman

Project 2: Emotional Objects
  Manifesto
  Interpretation
  Brainstorming
  Choosing an Idea
  Creation
  Coding
  Environment
  Outcome


The Module

Excerpt: The Prototyping Interactive Experiences module will provide you with an introduction to the use of interactive software and hardware, basic electronics, and the development of skill and knowledge in the practice and visualization of interaction and experience design in various design genres. In the field of contemporary digital design it is essential that designers consider that their role exists in both the virtual and physical world. This module will give you a broad and detailed knowledge of both the tools and principles of experiential design for both the screen and physical space. The relationship between the user and object, audience and artwork, can form one memorable and impacting experience. To successfully produce an immersive designed experience we must employ more than just technical skills.

Analytical skill, visual understanding and practical knowledge will underpin the development of creative interactive, immersive and visual relationships in a range of design genres. The range of projects given will include the application of interactive, designed solutions to screen, space and place. To successfully complete this module you will need to install specific software which will be taught on this module: Processing and Arduino. This module is intended to develop the exploratory skill sets that interactive technology can offer within design. The focus is less on the technology itself and more on the designed use of it.


Keeping Track

It was important to log all development while producing any material or completing any research for this module. A personal blog, dealing specifically with this module, is available on tumblr. All my done, dusted, cropped, finalised work is available as portfolio pieces on my Behance, including projects completed for this module.


Documentation Booklet

This document will detail my personal development on this module from start to finish. Throughout the document, I will explain my processes, research, and development. The document will be punctuated with devices to give insight into my coding and research experiences, with the following signifiers:

INSPIRATION BREAK

“Inspiration Breaks” will give examples of practitioners and interesting tid-bits I have come across in my research, and my thoughts on them.

NITTY GRITTY

“Nitty Gritty” segments will show coding in the respective languages of Arduino and Processing, with annotated explanations.

VIDEO BREAK

“Video Breaks” will give examples of interesting videos I have come across that relate to the work I am doing.


Turbulence: Watercolor + Magic by Dr. Woohoo

by Desmond Paul Henry


Interaction

The purpose of this module is to make us think about our interaction with technology beyond the normative Human Computer Interaction description that now feels rather dated: how we emotionally view and use objects in our daily lives, and how we as designers can understand and use this technology to create meaningful and emotionally driven designs and products. We were asked to document examples of human-technology interactions in everyday life, to categorise and understand how we use technology and gain knowledge of the interaction experience.


The full video of these interactions is available on my tumblr blog.


Processing Processing is a programming language, development environment, and online community. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. Initially created to serve as a software sketchbook and to teach computer programming fundamentals within a visual context, Processing evolved into a development tool for professionals. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production.


At first glance, the Processing environment is deceptively simple: a window with a text editor and play and stop buttons. But there is a lot more to it than meets the eye.

As well as in-class lessons, there are great resources online for learning Processing and seeing what other people are doing with the technology.


Processing has a wide array of functions that can be used to produce visual outputs. Above is a simple example of a “switch” command. The ball on screen moves from one side of the screen to the other. Upon reaching the other side, the ball begins to travel back. This goes on infinitely as the switch goes from “0” (off) at the right, to “1” (on) at the left.
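The flip described above can be sketched as plain state-update logic (the class name, step size and field names here are my own, purely for illustration; in the real sketch this update would happen once per draw() frame):

```java
// Sketch of the direction-"switch" logic: the ball advances each frame
// and the direction flips whenever it reaches either edge of the screen.
class BounceSwitch {
    int x = 0;           // ball position
    int dir = 1;         // 1 = travelling right, -1 = travelling left
    final int width = 500;

    // advance the ball one frame; flip the switch at either edge
    int step() {
        x += dir * 5;
        if (x >= width || x <= 0) {
            dir = -dir;  // the "switch": on becomes off and back again
        }
        return x;
    }
}
```

Because the flip is the only branch, the ball travels back and forth forever, which matches the infinite bouncing behaviour described above.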


Processing allows for interaction between user and computer, which is why it is an essential element of this course and why we are using it to create visual designs. Here, depending on where you move your cursor on the page, a different image of a cat will appear. Delightful.


Here a simple game was made to roll a dice. The user interacted with the game using the mousepad, and a dice roll appeared on screen. A video of the testing of this game is available via the tumblr link opposite:


PImage img;
PImage imgb;


void setup() {
  size(500, 700);
  img = loadImage("greg.jpg");
  imgb = loadImage("greg2.jpg");
  frameRate(24);
}

I am interested in how I can manipulate pixels. Using some easy tutorials from the Processing website I quickly managed to create a “static TV blur”. A video of this is available via the tumblr icon.

void draw() {
  background(0);
  image(img, 0, 0, 500, 700);
  tint(200, 100, 100, 155);
  image(imgb, 0, 0, 500, 700);
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    float rand1 = random(255);
    float rand2 = random(255);
    float rand3 = random(255);
    color c = color(rand1, rand2, rand3);
    pixels[i] = c;
  }
  updatePixels();
}

Explanation: Images are loaded so we can manipulate the pixels. We create a for loop that fills each pixel in the array with a new random value. Then we update these pixels and display them as our output.


Tristan Bagot
French Digital Artist

Excerpt from Bagot’s “Boltzmann” project: We fear disorder and wish for order. We go after the trouble makers of society. We would rather have our bedroom, our office or our paperwork in a state of order rather than one of disorder. But are we really able to be objective about our notions of order and disorder? Can we find a way to generate ordered disorder and represent it visually?

The Boltzmann method answers this question by enabling the generation of an image from a single computer file with an algorithm based on random components. First of all, randomly generated images are arranged in an orderly fashion, then the file is converted to numbers using a cryptographic hashing function called SHA-1. This method lies at the border between determinism and randomness, and can then be applied to many fields: music artwork depending on the sound file, a book cover based on the text it contains, a t-shirt depending on the name of the person wearing it…



Tristan Bagot’s beautifully executed “Boltzmann” shows the potential for creating modern, sleek, generative art. Here, Bagot applies his algorithm to music, but what else is possible? Through what other automated processes can we create art? For me, Processing is most exciting for its potential to create generative art in a visually gratifying way. With Processing’s ability to pixel sort and draw based on a huge variety of inputs, from audio, visual and the tactile, what kind of generative art, if any, would I be able to produce?



Processing can input and output sound using a built-in library called “Minim” that uses JavaSound.

We can take the input of, for example, an mp3 and translate it graphically as sound waves, or use whatever information the mp3 gives us to create something else entirely.
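The core of drawing sound waves is a single mapping step: Minim hands back sample values between -1 and 1, and each sample becomes a vertical pixel position. A minimal sketch of that mapping (class and method names are my own, for illustration only):

```java
// Core of a waveform visual: map an audio sample in [-1, 1]
// to a vertical pixel position, centred on the middle of the screen.
class Waveform {
    static float sampleToY(float sample, int screenHeight) {
        float half = screenHeight / 2.0f;
        // +1 maps to the top (y = 0), -1 to the bottom, 0 to the centre line
        return half - sample * half;
    }
}
```

In a real Minim sketch, this mapping would be applied to each sample of the audio buffer per frame, with a line drawn between consecutive points.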



Visualiser

As a class, we were set the task of creating a “musical instrument” using Processing. I decided to interpret this brief in a different way and create a music visualiser. Excerpt from my tumblr blog: I set myself the task of creating more with audio. I am heavily interested in electronic music, and have friends who are keen DJs and musicians. While my flatmate, Patrick, was mixing, I thought: wouldn’t it be cool to make a visualiser for the screen in front of him? Using the Minim library again, I found the code needed to take a microphone input and output a graphical visualisation. After a good few hours of testing and iterations I was able to scale the visualisations to my needs. I also utilised the Video library to play vintage stock footage as background images. I was able to change what video was playing using key presses.



An example of the visualiser in action is available via the tumblr button.




A very popular option for designers using Processing is to integrate the Xbox Kinect using a special library so that you can manipulate visuals with 3D depth. This can produce some amazing visual effects, such as the one shown below in this example video, and opens up new possibilities for video processing. This is something I may explore in the future, but right now the Kinect is outwith my student budget.


As part of our module, we were set a one-day task of creating a “Micropet” in Processing. The idea was to explore the relationship that we have with technology. Could we have a real relationship with another being, and how can we transfer this digitally? What made millions of us love our Tamagotchis, and mourn them when they died? Our team made a simple, rather silly little game. Your goal was to take care of the insatiable “Baby the Cool Dog”. He followed your mouse around, and as he moved he got more tired or hungry. You had to take him to his bed in one corner to let him have a nap, or to his bowl in the other to feed him. A video of Baby the Cool Dog is available on my tumblr opposite.


Turning generative

Here, I manipulated the visualiser I created to turn it into a unique generative art creator. This was created almost by accident. Processing draws frame by frame in a loop, so by not playing a video in the background, the sound waves were layered over each other to create dazzling patterns.





Project 1: Objects D'Interface

From the brief: For this project you must investigate and critique how the design and production of modern technologies are addressing the relationship between humans and objects, or not. Your investigation will lead you to the design of an everyday technological gaming experience with a specific focus on the relationship, interaction and communication between, object and user, and, user and object. The responsive final output of the experience must be implemented using Processing. Your task within the design of your final outcome will be on both the game controller and also the game itself. You may choose to re-design an existing game or design a new game through Processing.

Your final designed outcome must consider not only the psychological relationship but also the physical relationship, meaning you must produce a physical interactive artefact of your design object. Key to the success of your outcome will be your investigation and development of your newly designed object. Test it, evaluate your results, revise your design and test it again. The rigidity of this phase in your process will allow you to find opportunity and insight in an already overcrowded world of consumable technologies. You must also closely consider the aesthetic design and finish of your outcome. The object should look and feel like a product. This is a prototyping stage output but card modelling should be implemented where appropriate.


Interpretation

Excerpt from my tumblr blog: I want to remove the typical computer inputs of the now rather dated “mouse and keyboard” and create a unique physical interface that requires the user to engage and put effort into their operations to receive an outcome in a fun and rewarding experience. From children’s colouring sheets in cheap restaurants to Stephen King’s ethereal and ominous garden maze in “The Shining”, mazes are universally understandable games of persistence and confusion. My plan is to create a physical maze through which a user must guide an object (a wireless mouse in disguise) to reach the end. In doing this, mouse inputs will be sent to Processing, which will use them to create generative art. The project hopes to be captivating, polished and capable of making its own array of infinite artworks.


Research

A maze is a tour puzzle in the form of a complex branching passage through which the solver must find a route. I ran paper-and-pen testing with mazes of varying difficulty: Easy, Intermediate and Hard. Two participants were asked simply to complete each maze with a pen, and were timed. The participants acted mainly as expected. The male participant showed more of a “bullrush” tactic, where he would instantly start moving forward without too much forethought. The female participant would carefully consider her route before neatly plotting it once she had deduced the best way to go.

This “bullrush” tactic may produce more unique generative artworks. Hopefully, the tactile and laborious nature of the final prototype will encourage this response. However, further testing with something that more closely resembles the physical result would be necessary to draw any conclusions.


What is most interesting is that the intermediate maze provided more of a challenge than the hard maze in both cases, despite seeming to follow a simpler, more straightforward route. It is clear from this that the complexity of a maze loses out to how cleverly routes are made in attempts to confuse and distract the user. It is essential to consider a maze design that is compact and challenging, yet manageable to create. Considering this will be a physical creation, I must consider the size implications of such a large maze because of the time-frame constraints of the project. This will be explored further until a final maze design is found acceptable and can be created in full.

Video and results are available on the tumblr link below.



I set off to create an initial prototype for Genmaze using thick card. Off the bat, I realised there may have been something that could be improved in the maze aspect of the game.

From my initial testing, the mazes had simply one entrance and one exit. There was truly only one correct path, and therefore variation in the path would have been minimal. One tester simply thought out the maze then completed it; there was only one correct answer, so she worked it out and completed it. This wouldn’t have created the most interesting generative art in practice.


Expanding on the maze idea, I decided to make three entrances and three exits, with no correct path. People will be given the freedom to choose where to enter and where to exit. This adds a great amount of potential to the user results. What will be the most popular route?

Why do people choose certain paths more than others? What trends do we see appearing? What happens if people go fast or go slow? What generative art will this produce?


Creating the physical maze proved more challenging than I anticipated. There were some mistakes in measurements, and hence when it came to “housing” the mouse, it got stuck between walls and could not fit through the gaps. Eventually I came to the conclusion that either the spaces between the boxes need to be bigger, or the mouse needs to be smaller.

Next I will create the prototype again with the correct proportions, so that testing people’s routes and reactions can be explored before further iteration begins.


One thing discussed with the tutor is how the mouse co-ordinate will be mapped to the maze board. This will require use of the “map” command in Processing to decide how many pixels wide and high this maze will represent on screen.
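Processing’s map() is a plain linear rescale, so the board-to-screen conversion discussed here comes down to one formula. A Java-style equivalent (the helper name and the example board and screen sizes are made up for illustration):

```java
// Plain-Java equivalent of Processing's map():
// linearly rescale value from [inLow, inHigh] to [outLow, outHigh].
class MapDemo {
    static float map(float value, float inLow, float inHigh,
                     float outLow, float outHigh) {
        return outLow + (value - inLow) * (outHigh - outLow) / (inHigh - inLow);
    }
}
```

For example, a mouse position 300 units across a 600-unit-wide board would land at pixel 540 on a 1080-pixel-wide screen.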


Jackson Pollock
American abstract expressionist

Could Genmaze be capable of producing some kind of vivid “digital abstract expressionism”, as seen here? Quite literally, the “mouse” in Genmaze will be a paintbrush, and the paint, and where it hits the page, will be the result of displacement, speed and time. I am sure beyond doubt that Processing is capable of making beautiful generative art, and I intend to educate myself on the digital theory and algorithms that can make it happen. In this digital era, how different is a cursor from a paintbrush? Paul Jackson Pollock (January 28, 1912 – August 11, 1956), known as Jackson Pollock, was an influential American painter and a major figure in the abstract expressionist movement. He was well known for his unique style of drip painting. (Wikipedia)



The style of Jackson Pollock’s paintings takes on characteristics that can be replicated on a screen. There are strokes of varying thickness, ellipses, points and lines. All of this is truly beautiful and programmable material. How much of this style do I want to take forward for consideration in my own generative art, and what new things does the digital bring? How would this look with transparency, moving parts and digital artefacts?


After ironing out the physical size problems with the maze, I created all remaining pieces of my maze and placed them on the maze board. I am not going to glue them down or cover them yet, as to do so would only restrict me if something were to change.


My problem now comes to the code. I have found that, because of the physical size of the maze, once the mouse moves outside the boundaries of my screen as set by Processing, it becomes unresponsive. I have not found out how I can “map” the entire physical board to the mouse; right now, if it moves a couple of inches it will reach the end of the screen boundaries and stop.

My current thought is to make my cursor speed super slow outside of the Processing software, or to use what small amount of mouse information I do have access to in order to create something.


Turning my attention to code, I began to explore what options I had available to me. The variables that I was able to work with were: Colour, Speed, Time, Shape, Direction, Transparency, Weight, Random, and Size.

In the example above, I endeavoured to replicate a “Jackson Pollock”-esque digital output using a very early prototype of the physical maze and a mouse.


NITTY GRITTY

The code opposite is an early test of Genmaze, and of what could be done using just mouse position and speed. Various pieces of geometry were produced under different circumstances to understand how this would output visually.

void setup() {
  size(displayWidth, displayHeight, P3D);
  rectMode(CENTER);
  background(255);
  strokeCap(SQUARE);
}

void draw() {
  smooth();
  pathR = centerR + R * sin(a);
  a = a + .03;
  pathG = centerG + G * sin(a1);
  a1 = a1 + .03;
  pathB = centerB + B * sin(a2);
  a2 = a2 + .03;
  noStroke();
  fill(pathR, pathG, pathB, 50);
  if (pmouseX > mouseX) {
    rect(mouseX, height/3, 250, 250);
  }
  if (pmouseX < mouseX) {
    ellipse(mouseX,


I followed this up by testing this code in the physical maze. I ran three tests, varying the speed and path taken through the maze, to produce three different outcomes. To the right is one example. More examples, and a video showing this physical test, are available on my tumblr blog below.



After being inspired by other artistic processing sketches, I decided to set about planning how I wanted the output of Genmaze to behave. It was important to me that the outcome of the aesthetic process was not ‘by chance’ and was instead designed.

I decided to expand on this idea and work with ‘quads’, which have four points. I wanted X to follow mouseX and then be mirrored at max height, and Y to follow speed and be mirrored at max width. After working out some maths and equations, I felt comfortable enough to take this into Processing to code out.



Turbulence: Water + Magic by Dr. WooHoo. Turbulence is relevant to the Genmaze. project. In Genmaze, the input is human, organic and physical, and the output is digital, arithmetical and planned; Turbulence is the opposite. Its input is computational and its output physical. But it is generative, and together with Genmaze it shows the scope for variety in generative art, and how different inputs and outputs change the look and feel of the art. Video below:



Putting my plan into action, I set about tinkering with the idea of a quad based on speed and mouse position. These are my results, and I am beginning to be happy with what is being produced on screen. This code will be brought forward for testing with the Genmaze model and tweaked until I am satisfied with the output.



The brief detailed that the outcome must “look and feel like a finished product”, so I endeavoured to detail my maze as much as my time constraints would allow. This involved painting, gluing and sanding to give a well-rounded and complete aesthetic. The mouse box was painted an easily identifiable red.



This is an extract from the final code of Genmaze. It is a compact piece of coding that contains only 39 lines. In brief, it takes the x co-ordinate of the mouse and the speed at which the mouse is moving through the maze. Using these two values, it draws a symmetrical “quad” shape at a low opacity to give the illusion that the shapes are layered. A quad is drawn every time the mouse is moved.

void setup() {
  size(displayWidth, displayHeight, P3D);
  rectMode(CENTER);
  background(255);
  strokeCap(SQUARE);
}

void draw() {
  smooth();
  pathR = centerR + R * sin(a);
  a = a + .03;
  pathG = centerG + G * sin(a1);
  a1 = a1 + .03;
  pathB = centerB + B * sin(a2);
  a2 = a2 + .03;
  noStroke();
  fill(pathR, pathG, pathB, 50);
  if (pmouseX > mouseX) {
    rect(mouseX, height/3, 250, 250);
  }
  if (pmouseX < mouseX) {
    ellipse(mouseX,
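One way to read the “symmetrical” behaviour described here is that each x position is paired with its reflection across the full screen width. A hypothetical helper sketching that mirroring (the names are my own and not from the final Genmaze code):

```java
// Sketch of the mirroring behind a symmetrical quad: a point at x is
// paired with its reflection about the screen's vertical centre line.
class QuadMirror {
    // x-coordinates of a quad built from mouseX and its mirror image
    static float[] quadXs(float mouseX, float screenWidth) {
        float mirrored = screenWidth - mouseX;
        return new float[] { mouseX, mirrored, mirrored, mouseX };
    }
}
```

Feeding these four x-values (with y-values driven by speed) into Processing’s quad() would produce a shape that always stays symmetrical about the middle of the screen.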


Genmaze. was now ready to be brought forward for user testing. The end goal for Genmaze. was to create an enjoyable interactive experience and explore the notion of “user-created” art. All artwork created was printed on the fly, and an “exhibition” of sorts was created on the board behind the physical maze.

Users were simply asked to guide the red box from one side of the maze to the other, any way they pleased.


Depending on their path, unique generative art was produced.

This final image was then printed out on the fly, twice. One copy was given to the user as evidence of their art; the other was stuck on the wall behind with all the other generated pieces.


The general user experience was positive. Users were delighted to receive a copy of their artwork, and became intrigued with how it was created. They also began to discuss what their artwork meant about them. One thing I had not considered until then is that the artwork could be read as reflecting the user themselves. If they completed the maze fast, which meant their lines were thick, did it mean they rush through things? Did people who took long routes, backing over the same route multiple times, show dedication, or were they trying to be “clever” with their input, thinking they were deceiving the code? As each piece had unique colours and line consistency, it was interesting to see users debate what their artwork meant about them.


The most popular entrance and exit, by far, were both the first. Could it be that going from first to first was the most natural path for most users? Evidently, the path taken actually had the least effect on the artwork produced, as all paths had a similar mixture of twists and turns. I do not think many conclusions can be drawn about user habits from these short tests.

I feel Genmaze. has the potential to be developed into a fuller product. Instead of a mouse, there could be motion sensors, giving a wider array of available data to create the artwork. The maze could use rigid plastic interchangeable parts to make different mazes instantly. The addition of an automated process for producing high-quality prints when the maze is completed would also be fantastic.

The consensus from the participants was that the maze was immediately understandable. Users felt like they had the freedom to complete the maze at their own pace, and were always happy with their outcomes, claiming that their own piece of generative artwork was definitely the best on the wall.


Genmaze. was completed and uploaded in a professional manner to my portfolio on Behance, where it is available to view along with a finished video that shows the process of Genmaze. on the user’s end.

Genmaze. ended up being an enjoyable and rewarding project, creating artwork that was strong enough to stand as pieces of art on their own and in a collage.


Abstract

Genmaze. was a two-week project in prototyping for interactive experiences. The goal was to create a physical experience that considered our relationship with technology through the context of play. Genmaze. was coded in the open-source language Processing to create unique pieces of generative art depending on how the user completed a physical maze.




Arduino Arduino can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators. The microcontroller on the board is programmed using the Arduino programming language (based on Wiring) and the Arduino development environment (based on Processing). Arduino projects can be stand-alone or they can communicate with software running on a computer (e.g. Flash, Processing, MaxMSP). The boards can be built by hand or purchased preassembled; the software can be downloaded for free.


At first glance, the Arduino coding environment looks like a replica of Processing, but it is altogether a different beast.

The first logical step in taming this beast was learning its language. I am no coder, nor do I ever intend to be one, but having an understanding means you can enhance your design outputs tenfold. That step began with the tutorial provided with the kit.


Working with the tutorials, I managed to combine a variety of operations and doodads that the Arduino Starter Kit came with to create a circuit that included LEDs, a servo motor, and a buzzer.

The servo was programmed to wind up to its maximum value, then wind back down. A buzzer was used to create tones from the current value of where the servo was in its cycle (from 0 to 299).


While the servo’s current value was increasing, the green LED was set to high voltage; when the servo’s value was decreasing, the red LED was set to high voltage. Eventually this created an all-flashing, all-singing, all-dancing little guy! A video of this is available on my tumblr blog.
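The wind-up/wind-down cycle described here is a small state machine: a value climbs to a maximum, the direction flips, and it winds back down, with the LEDs and tones keyed off the current value and direction. A sketch of that logic (the names are mine; the real version ran on the Arduino with delays between steps):

```java
// Sketch of the servo sweep: the value winds up to a maximum, then
// back down, flipping direction at each end of the cycle.
class ServoSweep {
    int value = 0;
    int dir = 1;          // 1 = winding up (green LED), -1 = down (red LED)
    final int max = 299;  // the cycle range used in the project

    int step() {
        value += dir;
        if (value >= max || value <= 0) dir = -dir;
        return value;
    }
}
```

On the Arduino, each step would also write the value to the servo, pick a buzzer tone from it, and light the LED that matches the current direction.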

This was only some of what the Arduino Starter Kit included. Other items included heat sensors, tilt switches, light-dependent resistors and so forth, giving a variety of things to play with out of the box.


Desmond Paul Henry
Artist

Henry’s artwork was some of the first to show what computers could bring to art, and what artists could do with computers. Henry was famed for his “drawing machines”, algorithmic robots of precision. They would start, run a course of code, and stop, resulting in these spirographical images. These are examples of the earliest time-based generative art.

Desmond Paul Henry (1921–2004) was a Manchester University Lecturer and Reader in Philosophy (1949–82) and was one of the first few British artists to experiment with machinegenerated visual effects at the time of the emerging global computer art movement of the 1960s (The Cambridge Encyclopaedia 1990 p. 289; Levy 2006 pp. 178–180). -Wikipedia

Work such as Henry’s raises questions that are still hotly debated in the art world. Who is the creator of this art if it is not done by the artist’s hands? The machine drew it; Henry only coded it. Genmaze will add another element to this mix by adding the user. So who is the artist? Is it me, the computer, or the user? Surely since I will have made the code, I am the owner. But after I hit play, I do not choose what comes on screen; the computer does. And even then, the computer is only following instructions from a user. Such questions are philosophical in nature, and have no correct answer, if any. Will I be creating art by proxy?



How would Arduino fare at making art? How could I wire up these electronics and circuits to make art for me? This has become quite an interest for me in considering Arduino. Although I may not have been able to explore this idea in this module, it is a train of thought I will keep as I develop my skills as an artist.


Morse Coding

In a take-home task, we were asked as a group to simply blink out our names. We then presented our blinking Arduinos to others in the class and worked out what the others said. Considering how old a technology Morse code now seems, it is quite strange to consider how far technology has come. Once a feat of technological brilliance, now a few students and a small piece of electronics can create what would once have been a complex system requiring engineers and professionals.
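The blinking itself boils down to a lookup from letters to dot/dash patterns. A minimal encoder covering the letters of “ACE-KENJURA” (illustrative only; the actual Arduino sketch timed an LED with delays rather than printing symbols):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal Morse encoder: turns a name into the dot/dash pattern
// an Arduino sketch would blink out, one letter at a time.
class Morse {
    static final Map<Character, String> CODE = new HashMap<>();
    static {
        CODE.put('A', ".-");   CODE.put('C', "-.-.");
        CODE.put('E', ".");    CODE.put('J', ".---");
        CODE.put('K', "-.-");  CODE.put('N', "-.");
        CODE.put('R', ".-.");  CODE.put('U', "..-");
    }

    // letters are separated by a space; unknown characters are skipped
    static String encode(String text) {
        StringBuilder out = new StringBuilder();
        for (char c : text.toUpperCase().toCharArray()) {
            if (CODE.containsKey(c)) {
                if (out.length() > 0) out.append(' ');
                out.append(CODE.get(c));
            }
        }
        return out.toString();
    }
}
```

On the Arduino, each '.' would become a short LED blink and each '-' a long one, with gaps between letters.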


NITTY GRITTY

Opposite is the morse code for the Arduino blinking out “ACE-KENJURA”.



We were set another one-day project exploring the function of the LDR (Light Dependent Resistor) within the Arduino kit. An LDR’s resistance varies with light, giving the Arduino a light reading to use in your code.

“Kiltman” was a tongue-in-cheek project. Some of the Chinese exchange students in our class were asking us about the Scottish kilt, so we designed a silly little project to show them what kilts were all about.


An LDR was set underneath a kilt which the user could flick up. When they flicked it up, the LDR received light and an LED underneath the kilt magically turned on! A video of users testing this is available on tumblr:
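The Kiltman behaviour reduces to a threshold test on the LDR reading. A sketch of that decision (the threshold value here is a guess for illustration; analogRead() on an Arduino returns a value from 0 to 1023):

```java
// Sketch of the Kiltman logic: when the LDR's analog reading rises
// above a light threshold (kilt flicked up), the LED turns on.
class Kiltman {
    static final int THRESHOLD = 600;  // illustrative cut-off, 0..1023 scale

    static boolean ledOn(int ldrReading) {
        return ldrReading > THRESHOLD;
    }
}
```

The Arduino loop would simply read the LDR each pass and set the LED pin high or low based on this test.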




Project 2: Emotional Objects

Excerpt from the brief: The human body is a complex system of surfaces and organs that can detect heat, light, sound, movement, and we respond to these stimuli, or inputs, based upon a contextual map of our surroundings, and in some cases our emotional state. As well as this, emotional stimuli can also elicit a physical response that for many of us is uncontrollable, for example blushing. Using the Arduino microprocessor, you are asked to design an object which exhibits human response characteristics to an input factor, based upon a specific human response to stimuli (see table below). You may choose one of the input response characteristics below or choose your own stimuli and response.

- A computer program for an Arduino microprocessor that is able to take this input and control an output device in a predetermined (by you) manner
- A sensor or sensors that are able to detect an input
- An appropriate output device
- An electronic circuit to enable the above, constructed on a breadboard
- A physical manifestation of your design that will contain the Arduino, breadboard and power supply. The design of the form is expected to be contextualised around your stimuli and response.



Manifesto

For this project, we were split into groups of two: one student from my course (Design & Digital Arts BDes) was paired with one student from Product Design BDes, the hope being that our combination of skills would produce a good outcome. The first task was to work out how we wanted to function as a team and what our goals were, and we produced a short manifesto.

Some well-known studios we decided we aligned with aesthetically were Studio Droog, and Dunne & Raby.

Team: Jura McKay and Kenneth McLean


Interpretation

Originally, we wanted a simple, fun, and minimalistic design that encapsulated one or more human emotions in a 10x10x10cm box. This idea progressed and changed over the course of the task, and eventually we decided on a fun-focused, tactile, product-driven box that had interchangeable parts and a variety of responses.


Brainstorming

To develop our ideas, we were tasked with creating a "Pecha Kucha" slide show. Pecha Kuchas are short presentations of 20 slides, each of which is shown for 20 seconds. The idea is to explore and talk about your ideas while a simple visual cue plays in the background, keeping you focused on what you are trying to get across rather than reading off a screen.

We initially researched various standard human responses to heat, light, and movement.



We began exploring how we could manifest these emotions into a small box and what elements of the Arduino kit could be used to replicate these emotions.

What inputs would we need? Could we replicate the input of "falling" with a tilt switch? Would its output be self-righting? Noise?

How does light affect us? We shield our eyes. How does heat affect us? Sweating, fainting. How does cold affect us? Shivering.


How would these objects look aesthetically? How would they move and operate?

What would the output of Shyness be? Embarrassment? Would the box turn red? Would it mumble noises?


The Idea

We decided that we didn't want to narrow down our range of emotions. We understood that sound could be the response to a variety of emotions: happiness produces an upbeat, chirpy noise; sadness could produce a sad, whimpering noise; anger could produce a low, aggressive noise. We needed a way to personify and enclose these emotions in a single box.


We decided on using the fun concept of "monsters" to personify our ambitions. We would have three monsters that could display a variety of emotions. To contextualise our design, each monster would have its own persona and therefore its own emotion. This emotion would be reflected in their design.


Creation

We began by sketching methods of housing the monsters in the 10x10x10cm cube.


A 3D model of the box was designed to show how our sliding mechanism would work. It was essential to have an easy and simple way to change emotion and monster, so each monster would be able to slide in and out of the box with ease.


Little Printer by Berg

As a team, we decided that personification was important to the success of an emotional object. Technology alone is emotionless; it shares no characteristics with a human. It is cold, metallic, and robotic. Little Printer by Studio Berg shows how to create emotional attachment with an object: by personification, with the hope that this will create connections with its owner; that we have to care for it, to love it.

Little Printer loves to receive direct messages. Send a message or photo from anywhere in the world and your Little Printer will print it immediately. Working late? Send the kids a goodnight kiss via Little Printer. Spotted something that will make them smile? Little Printer delivers photo messages too!



This product encapsulates many aspects of the brief, even going as far as having similar dimensions to those of the cube we are making. These characteristics are some that we wish to transfer to our own emotional object.


Three cartoon monsters were designed, in the hope that they would be fun, and that the user would find them lovable and sweet.






On the outside, our object was to be fun, emotional, and playful. Inside, though, it was undeniable that there was technology running all of these emotions. We decided to illustrate the circuit board within, to be shown whenever a monster wasn't present: the flip-side of the emotion. Without a monster slid into the box, it was emotionless and robotic.


Our prototype was made out of cardboard and glued together, ready to receive the Arduino and monsters that it was to house.


This is the circuitry of our project. It contains three LEDs, three LDRs and a buzzer. It runs in series along the breadboard and draws 5V from an external battery.

It is then covered with the circuit-board illustration, with holes cut in the appropriate places to give the impression that this is the inside of a circuit board. Depending on which LDRs are receiving light (each combination relating to an emotion), a little tune plays to match the mood of the monster.



void setup() {
  pinMode(9, OUTPUT);   // buzzer pin
  beep(50);             // beep() is a helper defined elsewhere in the sketch
  beep(50);
  beep(50);
  delay(1000);
  Serial.begin(9600);
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
  pinMode(ledPin3, OUTPUT);
}

If all LDRs are receiving light (no monster is inserted), the three LEDs flash in sequence to show that the board is in "stand-by" mode.

void loop() {
  LDRValue1 = analogRead(LDR1);
  LDRValue2 = analogRead(LDR2);
  LDRValue3 = analogRead(LDR3);
  Serial.println(LDRValue1);
  Serial.println(LDRValue2);
  Serial.println(LDRValue3);
  delay(50);

  // LDR 1, LDR 2 and LDR 3 are all receiving light
  if (LDRValue1 > light_sensitivity && LDRValue2 > light_sensitivity && LDRValue3 > light_sensitivity) {
    digitalWrite(ledPin1, HIGH);
    digitalWrite(ledPin2, LOW);
    // (listing truncated in the original booklet)
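The printed listing breaks off after the first branch, but the remaining logic follows the same shape: each monster covers a different combination of LDRs, so the pattern of lit sensors identifies which monster (if any) is in the box. A plain C++ sketch of that branching; the threshold value and the pattern-to-monster mapping here are illustrative, not the exact values from our sketch.

```cpp
#include <string>

// Identify the inserted monster from the three LDR readings.
// Each monster's body covers a different LDR, so the pattern of
// lit sensors identifies it. Threshold and mapping are assumptions.
const int light_sensitivity = 300;

std::string detectMonster(int ldr1, int ldr2, int ldr3) {
    bool lit1 = ldr1 > light_sensitivity;
    bool lit2 = ldr2 > light_sensitivity;
    bool lit3 = ldr3 > light_sensitivity;
    if (lit1 && lit2 && lit3)  return "standby"; // no monster: flash LEDs in sequence
    if (!lit1 && lit2 && lit3) return "happy";   // this monster covers LDR 1
    if (lit1 && !lit2 && lit3) return "sad";     // this monster covers LDR 2
    if (lit1 && lit2 && !lit3) return "angry";   // this monster covers LDR 3
    return "unknown";                            // ambiguous reading
}
```

On the Arduino, each branch would then call beep() with a different note sequence to play that monster's tune.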


Environment

The brief asked that the finished object be shown in the environment its emotional response relates to. The box was taken to a variety of environments where each emotion would be elicited. A video showing the finished object is available on tumblr:

A red light in traffic. Response: Anger


A lunch break. Response: Happiness



Studying all day. Response: Sadness




The monsters were then mounted onto a board and displayed at a mini-exhibition at Edinburgh Napier University. We admitted to some failures in this particular project. The monsters are a tongue-in-cheek response to the brief, meant to be light-hearted, fun and enjoyable.

This module has been an enjoyable and skill-expanding experience, and in particular has piqued a further interest in the design and implications of generative art with the aid of Processing.

However, the brief described an emotional object with an "auto-response" (an automatic human response), and I am not sure ours could be considered an "auto-response", as the response of the monsters has to be instigated manually and is not automatic.




Prototyping Interactive Experiences  

A development booklet produced at the completion of the "Prototyping Interactive Experiences" module at Edinburgh Napier University.
