


An Event and Conversation about Cross-Disciplinary Design Practices






SCDC Director
José Pinto Duarte

SCDC Advisory Committee
Felecia Davis
David Goldberg
Tawab Hlimi
Peter Lusch
Marcus Shaffer

Stuckeman School of Architecture and Landscape Architecture
Kelleann Foster, Director
Eliza Pennypacker, Landscape Architecture Department Head
Mehrdad Hadighi, Architecture Department Head

Open House and Symposium Organization
Felecia Davis
José Pinto Duarte
David Goldberg
Peter Lusch
Marcus Shaffer

SCDC Open House Poster Designs
Peter Lusch

Open House Student Moderators and Assistants
Ardavan Bidgoli
Vernelle Noel

Administrative Support
Rynne Crissinger
Barbara Cutler
Kendall Mainzer
Karen McNeal

Video and Photo
Scott Tucker

Web Design
Scott Tucker

This Publication

Editors
Felecia Davis
José Pinto Duarte

Student Assistant
Shivaram Punathambekar

Book Design
Shivaram Punathambekar

Cover Image Credit
Justine Holzman

Symposium Participants
Skylar Tibbits, MIT Self-Assembly Lab
Justine Holzman, University of Tennessee / Harvard GSD
Zachary Kaiser, Michigan State University

Open House Exhibiting Instructors
Felecia Davis
David Goldberg
Tawab Hlimi
Peter Lusch

Open House Exhibiting PhD Students
Ardavan Bidgoli
Eduardo Costa
Flavio Craveiro
Vernelle Noel
Clarissa Albrecht Da Silveira
Debora Verniz

Open House Exhibiting Masters Students
Zoe Bick
Shokofeh Darbari
Nasim Motalebi
Angelica Rocio Rodriguez Ramirez

ISBN: 978-1-941659-05-2


Foreword
Preface
Introduction

Symposium
Skylar Tibbits
Justine Holzman
Zachary Kaiser


Faculty Projects
SuperCart: Mobile Retail Organizer Application
Woven Water Filter
Responsive Textile Panels
Bromma Belvedere
Reverse Engineering A. William Hajjar's Air-Wall Project
Collaborative Design Studio (CoLab)

Student Work
Digital Technology Augmenting Expression and Expression of Green Urban Spaces
(Re)conceptualizing Wire-Bending in Design
Robotic Motion Grammar
Mass Customization of Ceramic Tableware
Understanding the Urban Structure of Informal Settlements
Automated Multi-Material Fabrication of Buildings
Mechanizing Rammed Earth

Notes from the Symposium

Foreword KELLEANN FOSTER, RLA, ASLA Director, Stuckeman School of Architecture and Landscape Architecture, Penn State


SCDC ‘16

The Stuckeman Center for Design Computing (SCDC) supports design research, theoretical investigations, and academic opportunities within the realm of design computing. Work conducted within the Center seeks to enrich understanding, student learning experiences, and scholarship with regard to cutting-edge technology, including the advancement of those technologies for the areas within the Stuckeman School: Architecture, Landscape Architecture, and Graphic Design.

The SCDC also has an outreach mission: to help others beyond the School appreciate the benefits of integration and collaboration centered on design computing. This year's symposium is an excellent representation of all these aspects of the SCDC, showcasing exciting work from current Stuckeman School faculty and students and from collaborators across the University, all inspired by featured speakers visiting from several U.S. universities.


Preface JOSÉ PINTO DUARTE Stuckeman Chair in Design Innovation & Director of the Stuckeman Center for Design Computing, Penn State Dr. José Pinto Duarte is the Stuckeman Chair in Design Innovation and director of the Stuckeman Center for Design Computing (SCDC). An accomplished scholar with a record of innovative leadership, Duarte guides the ongoing research and direction of the SCDC. After obtaining his doctoral degree from MIT, Duarte returned to Portugal where he helped launch groundbreaking, technology-oriented architecture degrees and programs in two different universities, as well as a digital prototyping and fabrication lab. Most recently, he served as dean of the Technical University of Lisbon School of Architecture (FA). Duarte has an impressive record of uniting academic research and industry, as well as fostering multi-national partnerships. He has served as president of eCAADe, a European association devoted to education and research in computer-aided architectural design. He also helped establish the MIT–Portugal program, and created the Design and Computation research group, which boasts a strong record of interdisciplinary and collaborative research efforts funded by the Portuguese Foundation for Science and Technology (FCT) and private companies.



I see design research as a way of tackling problems that affect the world today and, ultimately, of improving people's lives without depleting natural resources. In this regard, technology is a means to address such problems. The Stuckeman Center for Design Computing (SCDC), the School, and Penn State in general have the human and technological resources to contribute to finding meaningful solutions, and the goal is to take advantage of such resources, fostering interdisciplinary research and contributing to such a rich environment. In this context, the goal of the Stuckeman Center is to use computing and information technologies to develop innovative solutions, while contributing to society's understanding of these technologies in the production of built environments across scales, from territories to cities, to buildings, products, and interfaces.

The Stuckeman Center's mission is to become a design research and learning center of international relevance. This entails fostering an open and multidisciplinary culture of research that engages faculty, post-doctoral researchers, and graduate and undergraduate students. To accomplish this we seek to establish enduring collaborations on three levels: within the College of Arts and Architecture, which includes units in Music, Theatre, Visual Arts, Art History, and the Performing Arts; within the university, with other colleges, particularly in engineering and the humanities; and beyond the university, to form and maintain alliances with peer centers in academia and industry, both nationally and internationally.

The Stuckeman Center encompasses four laboratories: Immersive Environments, Remote Collaboration, Advanced Geometric Modeling, and Digital Fabrication, including architectural robotics. The Center is organized into three focus areas corresponding to the three units of the School (architecture, landscape architecture, and graphic design) and into research groups spanning the laboratories and the focus areas. The Stuckeman Center has several initiatives to improve the dissemination of its work to the academic community and to society at large. The SCDC Open House is one of them: a one-day event designed to communicate the value of the research taking place in the Center through demos, an exhibition, and talks on finished and ongoing projects. It is paired with the SCDC Flash Symposium, an event that takes place on the same day and aims to showcase, internally, cutting-edge research being developed at other institutions, promoting cross-fertilization and collaboration. This book documents the SCDC Open House and Flash Symposium that took place in 2016.


Introduction FELECIA DAVIS Assistant Professor of Architecture, Penn State

Felecia Davis is an Assistant Professor in the Stuckeman Center for Design Computing in the School of Architecture and Landscape Architecture at Pennsylvania State University and is the director of SOFTLAB@PSU. She completed her PhD in Design and Computation within the School of Architecture and Planning at MIT. She received her Master of Architecture from Princeton University and her Bachelor of Science in Engineering from Tufts University. At MIT she wrote a dissertation that develops computational textiles, textiles that respond to commands through computer programming, electronics, and sensors, for use in architecture. These responsive textiles, used in lightweight shelters, will transform how we communicate, socialize, and use space. Felecia has lectured, taught workshops, published, and exhibited her work in textiles, computation, and architecture internationally, including at the Swedish School of Textiles, Microsoft Research, and MIT's Media Lab. She has taught architectural design for over 10 years at Cornell University, and most recently taught design studios at Princeton University and the Cooper Union in New York. The architectural work of her studio, Felecia Davis Studio, has been selected as finalist work in several international architectural design competitions. She is currently working on a book titled Softbuilt: Networked Textile Architectures. More information about her work may be seen on her website, www.feleciadavistudio.com.



Design + Computing operates as both a title and an agenda for the second Stuckeman Center for Design Computing Flash Symposium and Open House. The agenda asks: how do designers use computational tools and methods to shape our environments? As hosts of this symposium we engage this question across the disciplines of architecture, landscape architecture, and graphic design, the disciplines that co-exist and cross-pollinate in the Stuckeman School. This year the Stuckeman Center was fortunate to have Skylar Tibbits from the Self-Assembly Lab at MIT contribute a keynote lecture, as well as lectures by Justine Holzman, an Assistant Professor in the Landscape Architecture Program at the John H. Daniels Faculty of Architecture, Landscape, and Design at the University of Toronto and a research associate for the Responsive Environments and Artifacts Lab at Harvard's Graduate School of Design, and Zachary Kaiser, an Assistant Professor of Graphic Design and Experience Architecture in the Department of Art, Art History, and Design at Michigan State University. Further information about each lecturer is available in the Bios section at the back of this book. Each lecturer demonstrates, through their own design work and by illuminating the work of others, ways of using computation. Each lecture has been transcribed and edited by me and the respective author for access in written format; you as the reader should therefore be aware that you are reading spoken words. By using transcription for its Flash Symposium and Open House, the Stuckeman Center can quickly give the research community and the general public access to these ideas in written form. In the paragraphs below you will find some of the salient points each guest lecturer contributed to our agenda.

Skylar Tibbits challenges the traditional design process, whereby a shape or idea is superimposed onto a material. In a traditional design process, material is shaped and collected into a useful product or space by a priori drawings and, later, brute force. His work challenges this paradigm by examining materials that assemble themselves, that have embedded properties or are coded to organize themselves into a useful shape, product, or space. He calls this self-assembly. Tibbits presents five ways in which self-assembly happens: polymorphism, where building blocks can come together in several different ways; growth and replication; programmable materials, which can be programmed to change shape or property, or to reconfigure themselves; 4D printing, where material is used like an active robot; and lastly, phase-change materials that go from one state to another, e.g., from solid to liquid. Tibbits shows how these methods can be a way for designers to save energy, material, and time by moving from programming machines to programming materials.

Justine Holzman's design work and lecture offer readers methods and case studies that frame ways to understand the embedding of responsive and sensing technologies in landscape design. In her lecture Holzman presents her own work and concepts from the book she co-authored with Brad Cantrell, Responsive Landscapes: Strategies for Responsive Technologies in Landscape Architecture (2016). She presents the idea of the synthetic as a way to understand the workings of dynamic material processes in landscape design. Holzman demonstrates materials as "active agents in design" in her own work with ceramics, which challenges landscape architecture's disciplinary traditions such as topographic drawings. She shows how technology acts as a mediator in landscape architectural design and discusses six methods that relate responsive technologies to landscape architectural design.

Zachary Kaiser questions the use of technology within the design process in his lecture for the symposium, titled "Computation as a Co-Conspirator in Resisting its Own Hegemony." Kaiser looks at the relationship between design and computing through epistemological and philosophical frameworks. He presents work he has done with his graphic design students as well as work he has developed alone and in collaboration with other artists for several different art galleries. In each project Kaiser demonstrates methods that use computational tools to resist the reification of computation in society. Through these works he gets his students, and the public exposed to his work in galleries, thinking specifically about how relying on computational tools changes the information we receive and thus our relationship to information.

Each of the three lecturers makes visible what is often invisible or unseen in the development of designs and the shaping of space using computation, materially, politically, and socially. Simply changing the viewpoint on methods of design yields new territory to help designers and others shape their environments in more sustainable ways. Tibbits reveals a new trajectory for how architects, designers, and others can think about building with less energy by shifting the method of working with materials from prescribing the way a material is used, for example through drawings, to working with energy embedded or coded into the material system. Holzman, at a larger scale of operation, engages material agency from the perspective of landscape architecture and reveals ways in which our current systems of addressing transforming, dynamic material systems can be reconsidered, revealing new relationships of people in the landscape. Holzman's synthetic work resists traditional landscape representation, opening a way for designers to grasp and understand their own inclusion in material processes. Kaiser's dynamic material is the study of human behavior in an environment now shaped by ubiquitous computation. He enlists computational tools and devices in his work to reveal to us our vulnerabilities.

There are many other emergent linkages and themes in this collection of lectures as they address the original question relating to design + computing. I invite you to read the collection of lectures to formulate your own linkages and themes.





Projects at Self-Assembly Lab / Massachusetts Institute of Technology



Skylar Tibbits is a co-director and founder of the Self-Assembly Lab housed at MIT's International Design Center. The Self-Assembly Lab focuses on self-assembly and programmable material technologies for novel manufacturing, products, and construction processes. Skylar is an Assistant Professor of Design Research in the Department of Architecture, where he teaches graduate and undergraduate design studios, and he co-teaches How to Make (Almost) Anything, a seminar at MIT's Media Lab, with Neil Gershenfeld. Skylar was recently named R&D Magazine's 2015 Innovator of the Year and a 2015 National Geographic Emerging Explorer; his honors also include 2014 Inaugural WIRED Fellow, 2014 Gifted Citizen, a 2013 Fast Company Innovation by Design Award, the 2013 Architectural League Prize, The Next Idea Award at Ars Electronica 2013, the Visionary Innovation Award at the Manufacturing Leadership Summit, and 2012 TED Senior Fellow, and he was named a Revolutionary Mind in SEED Magazine's 2008 Design Issue. Previously, he worked at a number of renowned design offices, including Zaha Hadid Architects, Asymptote Architecture, and Point b Design. He has designed and built large-scale installations at galleries around the world and has been published extensively in outlets such as the New York Times, Wired, Nature, and Fast Company, as well as in various peer-reviewed journals and books. Skylar holds a Professional Degree in Architecture with a minor in experimental computation from Philadelphia University. Continuing his education at MIT, he received a Master of Science in Design Computation and a Master of Science in Computer Science under the guidance of Patrick Winston, Terry Knight, Erik Demaine, and Neil Gershenfeld. He is also the founder and principal of SJET LLC, a multidisciplinary design practice initiated in 2007.




Thanks a lot for the generous introduction. Hopefully I can live up to all the kind words. It's obviously a pleasure to be here, as I have some connections to Penn State and it's great to see friends and family. I grew up outside of Philadelphia, so there's obviously that connection, and lots of people were Penn State fans, so it's exciting to be here. I'm going to try to show you some of the work that we do at the research lab and give you some context: how we got into it, what we're doing today, and where we think we're going with this.

I have a few main roles right now. One is that I teach full-time at MIT. A second is that I recently stepped into a role as Editor-in-Chief of the 3D Printing and Additive Manufacturing journal, which is quite unique in the sense that it's not for any one discipline. It's not an architecture journal, not materials science, not mechanical engineering. It's one topic, 3D printing and additive manufacturing of all sorts, but it crosses nearly every domain, so it's interesting to witness this at the end of its first year. But the main role that I'm going to talk about is running the research lab at MIT. The research lab is called the Self-Assembly Lab. It's situated within the International Design Center at MIT, and I'm also a faculty member in the architecture department. What's interesting here is that the International Design Center is not within one department, much like the journal: it has biologists, materials scientists, mechanical engineers, folks from all sorts of different schools at MIT, different backgrounds, undergrads, grads, researchers, etc., so it's a really interesting mix of different domains.

BACKGROUND The path that I took to arrive at our current research is through the world of design computation. I came to MIT after studying architecture, and I was researching how code became a new language to transform design. The first CAD tool, "Sketchpad," was invented at MIT in the early 1960s, and it has contemporary counterparts in sophisticated software tools and computational abilities that allow us to simulate complex scenarios, simulate physical phenomena, and analyze structural, environmental, or other conditions. It is quite clear at this point that code has become a new language for design. Code has allowed us to make things that we couldn't have thought of before and potentially make better design decisions.

There's obviously a growing body of research around this; nearly every domain has been radically changed by the influx of computation and software. There's a similar analogy in machines: the first CNC machine was invented at MIT in the 1950s, when they connected a router to a computer, and it has contemporary counterparts in 3D printers, laser cutters, water jets, CNC routers, and all sorts of tools that we use on a daily basis today. Again, nearly every industry has been changed by the link between code and machines. We explore the next step: how do we bring code to construction? We have all of these tools that enable us to design things in ways we couldn't have designed before, with machines that help us make things we couldn't have made before. But the way we put them together is still blood, sweat, and tears. We typically just slam these things together. There was this insight that we would spend weeks or months riveting an installation together in galleries all over the world, and we started thinking that there must be some other way. There must be a way that we could bring all of the elegance that we have in the computational world, all the elegance that we have with digital machines that enable us to produce things we couldn't have produced before. Could we find something similar in construction?

MATERIALITY + COMPUTATION The way that we do that is to look at materials themselves. We try to find ways to embed more information and greater capabilities in materials. Think about all of the capabilities that we've become used to in the digital world: you have error correction and reconfiguration; you have copy and paste; you have replication. We have all these amazing possibilities that we have become used to. But in materials and in construction, we don't have any of those things, and so we're trying to find ways to bring all of that programmability from the digital world into mundane, simple, everyday things like the physical materials that surround us, to allow us to have higher-performing products, better systems and environments, and better construction and manufacturing processes. We specifically look at these domains and these large-scale, messy processes. There's a famous Feynman quote: "there's plenty of room at the bottom." There's lots of research to be done at very, very small scales, and we're kind of the opposite of that. We're really interested in the lots of room left at the top of the scale. Humans obviously occupy a very large-scale world. If you go all the way down to quantum or nano scales, there are some amazing things happening, with lots and lots of research going in that direction, and if you go all the way up to larger scales, like astrophysics or geological scales, there are a lot of interesting things happening. I feel like we've come to a point where we feel pretty good about our skills at the human-scale world. We feel like we can build things: buildings, bridges, machines, products, etc. But there's lots of room to improve. We're really interested in the dirtiness, the messiness, and the inefficiencies here, and we try to find ways that materials can collaborate with humans and machines to make better things.


There must be a way that we could bring all of the elegance that we have in the computational world, all the elegance that we have with these machines that are enabling us to produce things that we couldn't have produced before. Can we find something similar in construction?

SELF-ASSEMBLY There are a few main scenarios or phenomena that we look at. The first one is self-assembly. Self-assembly is a phenomenon where disordered parts, physical things, components for example, can come together on their own without humans or machines. You can take a bunch of things, let's say bricks, or bolts, or connectors, and they should be able to assemble themselves simultaneously. That's a super weird thing. Almost nothing at the human scale does that, but almost everything outside of the human scale is built on this principle. If you think about how humans are built, how plants are built, if you think about everything in physics, everything in biology, everything in chemistry, everything in astrophysics, they are built on these principles. There are no planetary-scale printers. There are no sledgehammers and screwdrivers that make cells. There's no top-down construction at any scale other than the human scale. Why at the human scale have we become used to brute-force, top-down construction? There's top-down design, where we have a grand vision and want to make something happen, and there's top-down construction, where we force it to happen. We're not necessarily saying that is going to go out the window and we're going to completely change, but there must be opportunities to utilize bottom-up principles at larger scales. Probably the simplest example of self-assembly is a project where we collaborated with a molecular biologist, Arthur Olson, which contains a glass flask with components inside. You shake the flask hard and all the components break apart. And then you shake it randomly and they come back together. So there is random energy that builds non-random structures, which is really the fundamental aspect of self-assembly: it's not guided, and it's not taking the parts and placing them one by one. It's not based on a robot. It's noisy, random, human interaction. A little kid is just as good as the expert. You can be blindfolded and you're just as good, maybe better, than if you were looking at it. You can pick it up, not knowing anything, and in a few seconds to minutes get the thing to assemble. You don't even need to know what it is supposed to assemble, sort of the ship-in-the-bottle scenario. Rather, you can collaborate with this medium by just supplying energy. All of the instructions are in the parts themselves.

SYMMETRIC STRUCTURES In that example you have a bunch of small components; you shake them, and they come back together to make a dodecahedron structure. The dodecahedron is built the same way every single time. The parts end up in different positions, but the dodecahedron structure is exactly the same. Every component is exactly the same, and that essentially points towards a manufacturing scenario where you have a bunch of components, you want to build a final product, and you want to supply some amount of energy so that all of the components spontaneously come together. In this scenario, you don't want to make a bunch of different chairs; you want to make the same chair many times.

However, there is another perspective if you think about diamond versus graphite or graphene. Diamond and graphite are made of exactly the same thing, the exact same fundamental building block, but they have completely different characteristics. Diamond is obviously super strong, one of the hardest materials, and it's transparent. Graphite is super brittle, and it's black. How are these the same material? They are the same components coming together in completely different ways. Their properties are based entirely on how they combine, resulting in very different physical characteristics. We started to study this phenomenon, called polymorphism. We designed and built building blocks that come together in different ways. We built a project with a 500-gallon tank of water and hundreds of neutrally buoyant particles. They move around in the tank freely in all three dimensions. We included a number of pumps that we can control to supply different amounts of energy: you can change the intensity, you can change the frequency, you can have chaotic patterns or regular patterns, and the particles all eventually come together to build a single crystal. The crystal is unique in the sense that every time it builds, the shape will be different. Theoretically, there is a chance it will do exactly the same thing, but there are lots of different variables: the temperature of the water, the way you deposit the particles, the pattern of energy and turbulence in the water, all will produce different effects. The local structure, though, is exactly the same: a cubic lattice. Its components and their shape are not changing.
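As a thought experiment, the polymorphism idea above, identical building blocks whose global structure depends only on how they are allowed to bond, can be sketched in a few lines of code. This is purely an illustrative toy of our own, not the Lab's model; the `grow` function and its bonding rules are invented for the sketch.

```python
# Toy sketch of polymorphism (an illustration, not the Self-Assembly Lab's code):
# identical units assembled under different local bonding rules produce different
# global structures -- the same building block yielding a linear chain or a
# planar sheet, loosely analogous to the same carbon atoms forming either
# graphite sheets or a diamond lattice.

def grow(bond_offsets, n_units=9):
    """Greedily place identical units on a grid, bonding only at allowed offsets."""
    placed = [(0, 0)]
    occupied = {(0, 0)}
    i = 0
    while len(placed) < n_units:
        x, y = placed[i]
        for dx, dy in bond_offsets:
            site = (x + dx, y + dy)
            if site not in occupied and len(placed) < n_units:
                occupied.add(site)
                placed.append(site)
        i += 1
    return occupied

chain = grow([(1, 0)])            # one allowed bond direction -> linear chain
sheet = grow([(1, 0), (0, 1)])    # two allowed bond directions -> planar sheet

# The chain occupies a single row; the sheet spreads over several rows.
print(len({y for _, y in chain}), len({y for _, y in sheet}))  # prints: 1 3
```

The point mirrors the diamond/graphite observation: nothing about the individual unit changes between the two runs, only the rule for how units combine, yet the resulting aggregate "materials" have very different geometry.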

DIFFERENTIATED STRUCTURES Then we started to think about how you guide certain things to happen. I don't want to mandate certain things, but I want certain behaviors to emerge. If I created a vortex or a tower, or if I created lateral forces in the water tank, would sheets emerge, like graphene? How do you promote certain behaviors in a fairly complex environment? In the previous example every single component was the same. In the next project, every component is unique. We have different geometries and different lock-and-key mechanisms at the nodes. The units tumble around in this water tank, with pumps providing turbulence, and each component has to find exactly the right place in the global structure in order to make a chair. The point of the chair is that it is not symmetric in all axes; it is a differentiated structure. The chair has different types of units, and the units have to find exactly the right places. They can't just randomly connect somewhere, or else they won't ever make a chair. In this case, you can point towards a scenario of making arbitrarily complex things. If I come up with some design, perhaps I want it to be able to self-assemble, but that design may not be directly in line with how self-assembly works. So how do we find ways to build more and more complex, more and more useful things in our everyday environment? I'm really on a path of scalability, in the sense of making many components at the same time and also increasing the complexity of what's possible.


The other aspect of scale is quantity: you may want to build many of something, such that you have hundreds of thousands of particles. Alternatively, the scale can be very large. The next project used large-scale modules for self-assembly: 36-inch-diameter weather balloons floating around in a courtyard and assembling. Each of the nodes is based on a Velcro connection, so this is probably a good time to talk about how the parts come together. There are a few ingredients for self-assembly. One is geometry: different geometries produce different structures. The next ingredient is energy: we study different types of energy and different environments. The third ingredient is some form of connection. You can use magnets, as in some of the previous examples; you can use Velcro, surface tension, or many other connection mechanisms. You need some stickiness, and you need error correction. In this case the nodes are Velcro nodes with a sort of positive-negative form, because you connect the up and the down of the Velcro, which allows them to connect to other structures. Then, if they're more than 50% wrong, i.e., if they connect in the wrong way, they'll break off. If they're less than 50% wrong, the connection will be stronger and they'll stick. You can error-correct to make sure that you have the right structure.
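The agitation-plus-error-correction idea, random energy repeatedly bringing parts into contact, with only sufficiently well-aligned connections surviving the shaking, can be sketched as a small simulation. This is purely an illustrative toy, not the Lab's model; the function name, the alignment measure, and the 50% threshold semantics are our assumptions based on the description above.

```python
import random

# Illustrative toy, not the Self-Assembly Lab's model: random agitation brings
# free parts into contact with a growing structure. A connection that is more
# than 50% misaligned is weaker than the agitation and breaks off, while a
# better-aligned connection is stronger and sticks (error correction).

def agitate_and_assemble(n_parts=100, max_shakes=10000, threshold=0.5, seed=1):
    rng = random.Random(seed)
    assembled = 0
    for _ in range(max_shakes):
        if assembled == n_parts:
            break                     # structure complete
        # One shake: a free part lands on the structure with a random alignment.
        alignment = rng.random()      # fraction of the connector correctly mated
        if alignment >= threshold:
            assembled += 1            # good bond survives the agitation and sticks
        # else: bad bond shakes apart, and the part is free to try again
    return assembled

print(agitate_and_assemble())
```

No part is ever placed deliberately; the only control is the energy supplied and the threshold built into the connection, which is why, as in the flask demonstration, a blindfolded novice shaking at random does as well as an expert.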

There are 36-inch-diameter weather balloons in the courtyard outside of our lab, and we threw a party. We had friends come over and help push the balloons up in the air, sort of like at a concert, and we had large fans, so we think about this as a form of, perhaps, human-guided assembly. There's a spectrum between completely autonomous and completely controlled. In this case, you can get really good at how you throw them and where you put the fans, and you can start to find ways to subtly interact. Obviously, you can't assemble the balloons by hand, given they are 20 feet up in the air, but you can guide the patterns and energy, getting them stuck in a corner, for example, to promote certain things to happen. The idea here was a future scenario pointing towards construction in extreme environments: scenarios where it's difficult to build today, whether at super small scales or super large scales, in extreme environments like space or underwater, or in harsh environments where it's hard to get materials, hard to get equipment, and hard to get people. You can imagine a scenario where you can just deposit material and allow the materials to come together on their own. In the case of the balloons, they're filled with helium, and as the helium dies, you're left with a 10' x 10' x 10' space-frame structure that would be quite unwieldy to build otherwise. We are trying to show that you can also scale up: it's not just small desktop things, it's not just nanoscale; larger lightweight structures can self-assemble as well.

We then try to go beyond just assembly into growth and replication. In the next project, we used a shaking table. There were individual modules that were all the same, and we kept adding modules, essentially like adding food. These modules can grow into structures; they can encapsulate. They can create equilibrium states and grow into uncomfortable states, and you'll see that eventually they grow and divide, grow and divide. We have done this with tens of units and with a few hundred modules, trying to show probably the closest synthetic version of mitosis, or cellular growth and division. There's an entire field of research that looks only at replication: there's biological replication, and there's robotic replication, in which a robot picks up parts and builds another robot. There's just one other example that we know of that uses only simple materials, not biological materials and not robotic mechanisms, to demonstrate replication. This was Penrose's self-replicating wooden blocks in the 1950s. We are nodding towards that project, but there is an entire space of growth and replication research that can be shown with just simple everyday materials.

PROGRAMMABLE MATERIALS The second category of research is called programmable materials. The idea is that you want to program physical materials to change shape, to change property, or to reconfigure in some functional and highly useful way. Instead of components assembling something, what if you have a final product and you want it to transform in some functional way? There are a couple of ingredients for this. Everything that we know and everything that we interact with on a daily basis is based on material. Everything you’re wearing, everything you’re sitting on, everything around you and in this space is based on materials. The properties of those materials are normally how we choose to make a product. Maybe we want flexibility, breathability, waterproofing, strength - we want all these different characteristics. That’s usually how you start with a material choice.


We’re kind of flipping that and saying: perhaps you can start with an energy source, or you can start with the material, but then you need to think about what energy source will activate that material. An example: maybe the application has a lot of heat. We tend to fight excessive heat, whether in computers or planes or cars. Rather than fighting heat, can we utilize that heat as an activation source? What’s going to respond to heat? Metal, for example, or thermoplastics. Maybe your energy source is moisture, so then you might want to use a hydrogel, or you might want to use wood, which contains cellulose and responds to moisture. Then the way that you put those materials together - either combinations of materials, or the micro and macro structure of the material - will give you different transformations. You can combine these materials in ways to get folding, curling, stretching, or shrinking. That allows you to take everyday materials and make smarter, higher-performing systems.
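That selection logic - pick the activation energy the application already has, then a material that responds to it - can be sketched as a small lookup. The table entries below are illustrative pairings drawn from the examples in the talk, not an exhaustive materials database:

```python
# Hypothetical pairing table: activation energy -> responsive materials.
# Entries follow the examples in the talk (heat -> metal/thermoplastic,
# moisture -> hydrogel/wood); they are illustrative, not exhaustive.
ACTIVATION_PAIRS = {
    "heat": ["shape-memory metal", "thermoplastic"],
    "moisture": ["hydrogel", "wood (cellulose)"],
    "pressure": ["flexible composite"],
}

def candidate_materials(energy_source):
    """Return materials recorded (in this sketch) as responding to the
    given activation energy, or an empty list if none is known."""
    return ACTIVATION_PAIRS.get(energy_source, [])
```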

4D PRINTING Our most notable work to date is probably 4D Printing. We introduced this topic roughly three and a half years ago. 4D printing originally grew out of a conversation at a coffee shop - a person from Stratasys was sitting across from me, we just started talking, and I put it out there: why can’t we print robots - the reconfigurable, replicating, awesome robots that we know today? But I don’t want any of the motors, the wires, or the electronics. I just want the functionality of the robots. We had been doing a lot of research on robots, and they were expensive, they broke all the time, and no one could work on them. They required extensive assembly, so every time we wanted to work with a robot, it would take days or weeks to assemble the thing, and then it would break and take days or weeks to repair. We wanted to be able to easily produce robots, and so we started this collaboration looking at multi-material printing as a way to produce active robots without any of the downsides of robots. You could think about it like printing smart materials - a smart material that you can print, easily produce, and customize to do whatever you want. We deposit different materials at the same time. The black material is a rigid polymer; the rigid polymer acts like the brain - it’s the geometric information, so it is the backbone, structure, precision, and joints. It is all the information for what it is going to do. The other material is a hydrogel, and that hydrogel swells with moisture. That gives the activation energy to go from one shape to another in a precise way. We worked on all sorts of different joints to go from any one shape to any other shape - whether folding, curling, stretching, or shrinking, we tried different types of transformations in those material combinations. In terms of prototypes, we’ve shown single strands that can fold into cubes, single strands that fold into text like the letters “MIT”, surfaces that fold into truncated octahedra, surfaces like records that go into saddles and curved-crease origami, surfaces with double curvature for the textile, automotive, and apparel industries, and 50-foot strands for swimming pools that are like proteins you can swim with. In all of these prototypes, from small stuff to really big stuff, the goal is to go from any one shape to any other shape - from 1D to 2D, 2D to 3D, 1D to 3D, 3D to 2D. We want to make all these different configurations. This requires a series of logic modules with functional mechanisms - folding, curling, twisting, etc. - that you can then combine in different ways to produce geometric transformation.
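The talk does not give the joint mechanics, but a first-order estimate of how a rigid-polymer-plus-hydrogel bilayer curls when one layer swells is the classic Timoshenko bimetal-strip formula. The sketch below applies that formula under the assumption that the printed joint behaves like an ideal two-layer strip; real printed joints are more complex.

```python
import math

def bilayer_curvature(strain_mismatch, t1, t2, e1, e2):
    """Timoshenko bimetal-strip estimate of curvature (1/radius) for a
    two-layer strip whose active layer (e.g. a swelling hydrogel) gains
    `strain_mismatch` relative to the passive layer. t1, t2 are layer
    thicknesses; e1, e2 are their elastic moduli."""
    m = t1 / t2
    n = e1 / e2
    h = t1 + t2
    num = 6.0 * strain_mismatch * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

def fold_angle_deg(strain_mismatch, t1, t2, e1, e2, hinge_length):
    """Fold angle produced by a hinge of the given length at that curvature."""
    kappa = bilayer_curvature(strain_mismatch, t1, t2, e1, e2)
    return math.degrees(kappa * hinge_length)
```

For equal thicknesses and moduli the formula reduces to curvature = 1.5 × strain / total thickness, so a longer hinge at the same curvature simply folds further - which is how a single printed material system can target many different fold angles.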

HOW TO PROGRAM EVERY MATERIAL? After we introduced this work, one main question kept coming up. Lots and lots of different industries came to us and asked: can we use this for orthodontics? Can we use it for medical devices? Can we use it for shoes, clothing, sports cars, planes - every application you can think of?


They were all saying: we don’t necessarily want 3D printing, and we don’t want just plastics. Tell us how to do this with other industrial processes and other materials. We’re really interested in finding ways to use other industrial processes and materials. This has led to the broader project, which is what we call programmable materials. This is an ongoing line of research with many different companies and collaborators, from product designers to mathematicians, computer scientists, material developers, software companies, and many others. We have shown three main material types thus far. The first is composites - carbon fiber, Kevlar, fiberglass, etc. Textiles are the second type and wood is the third.

PROGRAMMABLE WOOD With wood, there’s a long history of active shape transformation. You can look at Japanese joinery, where the joints would swell based on moisture to make stronger and more precise connections. Or you can look at the Eames furniture, where moisture was used to activate the plywood to curl, forcing it into molds to get shapes that wood had never taken before. Or Achim Menges’ work at the University of Stuttgart, a more contemporary example using wood veneer that curls based on moisture. He showed that it can repeatedly open and close; they have had it on their rooftop for many, many years, opening and closing, predicting when it is going to rain. But there are two main challenges. The challenge in the Eames example is the amount of energy put into the system: they had to force the wood into place, get it wet with steam, and force it around the mold. The wood is not actually doing the work; it’s not self-transforming. The other constraint, in the ICD example, is that you’re constrained by the wood veneer pattern. We wanted all of the performance of the wood, but we wanted full customization in terms of design. I think about this as the forest problem. You don’t want to have to go to the forest and find the right tree that has the letters MIT in it, or whatever weird shape you want to make. You don’t want to be constrained by only what you can find. You want full design freedom with the performance of wood. The way we do that is we print wood. We have a filament that is essentially MDF - basically sawdust and plastic in a filament. You can change the amount of wood and use different types of polymers. You make a filament and then you extrude it. By printing the grain, you can control the types of transformation. The printed filament responds to moisture the same way wood normally does because of its cellulose composition. Depending on that grain, you get different types of curling, folding, or stretching - all sorts of different mechanical transformations. You can have wood that curls in different ways, wood that folds to 90 degrees, or wood that curls into sine waves and furniture-like shapes, fish scales, and anything you can imagine. You can have diverging and converging grains or spiraling grains.

The second material group, textiles, has a number of different ways to activate it, and one of them is based on a pre-stretch. You pre-stretch the textile and then you can bond, spray, or stitch - in this case, we’re printing - on top of it. You basically add a layer of constraint. In this case you’re printing a polymer on top of a pre-stretched textile, and depending on the shape of the pattern put down and the flexibility of the material, you get different types of transformation. If you pre-stretch the textile in a uniform way, you’re going to get uniform shrinkage; if you pre-stretch in one direction, you get single-direction shrinkage, so a circle becomes a saddle. You can get pleating and tufting and all sorts of complex detailing, produced in a single shot and self-transformed rather than by manual stitching and labor. We have done a number of projects based on that. A recent project is a table we did with a company called Wood-Skin - it takes on the Ikea challenge of flat-packed furniture: shipped flat, it transforms into a three-dimensional ottoman or table. You can stand on it after it simply jumps into shape. It’s made from pre-stretched textile laminated with wood.
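The pre-stretch rule of thumb (uniform stretch gives uniform shrinkage; one-direction stretch turns a disc into a saddle) can be sketched as a tiny decision function. The labels and tolerance below are illustrative, not measured behavior:

```python
def predicted_shape(prestretch_x, prestretch_y, tol=1e-6):
    """Encodes the rule of thumb from the talk for a constraint layer
    printed on a pre-stretched textile: equal biaxial pre-stretch gives
    uniform shrinkage, single-direction pre-stretch gives a saddle.
    Pre-stretch values are fractional elongations."""
    if prestretch_x < tol and prestretch_y < tol:
        return "no transformation"
    if abs(prestretch_x - prestretch_y) < tol:
        return "uniform shrinkage"
    return "saddle (single-direction shrinkage)"
```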

PROGRAMMABLE COMPOSITES We then did a project at the Design Museum in London with product designer Christophe Guberan on the future of the shoe, and we tried to tackle the challenge of the manual assembly of shoes. Almost everyone’s shoes are made from hundreds or maybe thousands of parts, with almost exclusively manual labor. We were really interested in finding a way that a single pre-stretched textile, with a deposited pattern, could jump into an entire shoe from a single sheet. Carbon fiber shows the other way that you can activate textiles. In this case, we used a flexible carbon fiber made by Carbitex - a composite that is fully cured and flexible - together with a heat-active polymer. Depending on the temperature, it’s going to expand or contract, and that allows it to fold and unfold, fold and unfold, repeatedly, with extreme speed. You can get curling and twisting, folding to any arbitrary angle. This allows carbon fiber to be super light and strong, but now it self-transforms. We worked on a number of different applications with the carbon fiber. One example is a super car by BAC, an automotive company - the world’s fastest street-legal super car. That’s interesting because a street-legal car has to deal with rain, so what they wanted was active aerodynamics. Active aerodynamics have been around for a long time in sports cars and racing cars, but it’s always electro-mechanical or pneumatic - lots of motors, gears, and electronics, which add weight, failure points, and assembly time. What BAC wanted is a single layer of carbon fiber that can essentially open and close. When moisture in the environment goes up because it’s going to rain, they want to reduce the aerodynamic efficiency, which adds drag; drag pushes down on the rear tires to give more traction. We developed a number of prototypes on single sheets of carbon fiber that could respond to moisture in the environment, opening and closing to change the aerodynamic efficiency of the car.

Another example we have recently worked on, still an ongoing collaboration, is with Airbus. They came to us with a challenge: the top of their engine has an air inlet; the inlet cools the engine, but it causes drag, which reduces the efficiency of the plane. Again, the default would have been an electromechanical flap. The problem is that the plane is already in production, and they can’t stop production to wire the control back to the cockpit. Extra components mean more likely failure, more inspection, more weight, and further reduced efficiency. Again, they wanted a single material that can resolve this - essentially a sensor and an actuator at the same time. We developed a carbon fiber composite that can open and close - initially based on temperature, since going from the ground to 30 thousand feet is quite a drastic temperature differential. But that depends on where you land: if you land in Antarctica versus Arizona, you have a different temperature. So we moved to pressure differential. When you’re driving down the highway and you open the window, things get sucked out - that’s pressure differential, and it is based on the speed of the plane. In wind tunnel testing, this component showed that it can transition between two states based on the speed of the plane.
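The pressure differential in question is just dynamic pressure, q = ½ρv², which depends on airspeed rather than on where you land. A sketch of a speed-thresholded two-state flap follows; the threshold is an illustrative number, not an Airbus figure:

```python
def dynamic_pressure(speed_ms, air_density=1.225):
    """q = 0.5 * rho * v^2 (Pa): the speed-dependent pressure a moving
    aircraft sees. air_density defaults to sea level (kg/m^3); it drops
    with altitude, which a fuller model would account for."""
    return 0.5 * air_density * speed_ms ** 2

def inlet_state(speed_ms, q_threshold=20000.0):
    """Hypothetical passive inlet flap that transitions between two
    states once dynamic pressure passes a design threshold (Pa)."""
    return "open" if dynamic_pressure(speed_ms) >= q_threshold else "closed"
```

Because q grows with the square of speed, a material tuned to snap over at one q value gives a clean two-state transition between taxi/takeoff speeds and cruise speeds.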

The next project is based on bi-axial braided structures - much like the textiles, but at a super large scale. It’s a 60-foot tower that can morph, open and close, transforming from a single tower structure to a stadium-like structure or a dome - all sorts of different compositions. It’s essentially a massive Chinese finger-trap. Because it is a bi-axial braided structure, it can expand, contract, and shrink in various dimensions - it’s really an amazing structure. It is a fully braided 60-foot tower that we’re working on at the moment.

PHASE-CHANGE STRUCTURES The last research topic that we work on is essentially phase-change structures. Not assembly, not self-transformation, but phase change. The idea is that you want something that goes from solid to liquid and liquid to solid, and the way we do that is through a system called granular jamming - when you go to the grocery store and buy coffee, it’s in a vacuum, packed into a solid brick. That’s granular jamming. Basically each grain or particle jams - it literally gets stuck and acts like a solid, so it’s as hard as a brick. When you release it, it flows like a liquid. That’s really interesting to material scientists and physicists right now, because you can reversibly go from solid to liquid and liquid to solid instantly, without temperature change. There are a lot of different applications for this - there’s a company making crash protection based on granular jamming. It also has a non-Newtonian property: when you play with play-dough and hit it with a hammer, it’s super hard, but when you push it with your thumb, it’s really soft. Or sand at the beach - if you punch it, it’s really hard, but if you brush it, it’s really soft. We do a lot of research around this topic.



We did a project at Storefront for Art and Architecture in New York with an artist, Lucy McRae. We essentially filled the entire gallery with a moon-bounce-like structure - a huge, inflatable space. On a ten-minute cycle, it jams and unjams. It goes from super rigid - a Styrofoam-like quality you can climb on - to inflatable. You can mold it and it will jam around you. The vision was that while all of our surfaces in the architectural environment are rigid, this project proposes morphable, soft, tunable surfaces. What if you could lean against the wall and it would envelop you, or turn into furniture, or mold around you to make a private space for a phone call? These phases can be fluid and flexible, jamming and unjamming whenever you need them to. We did a short science-fiction film with Lucy. I’ll show one last project on jamming - we were interested in jamming, but not the same jamming as in the previous project. We wanted to remove the bladder and the vacuum. The vacuum is a problem because of energy - you don’t want big vacuum pumps holding up your buildings. The bladder is a problem because if you puncture it, you don’t have jamming anymore. So we tried to find a way to have jamming with no bladder and no vacuum, and we came up with a process that’s essentially block printing. It’s basically loose rock and loose string - no adhesives or connectors - and the string promotes jamming: the string takes tension, the rocks take compression, and they jam.
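The jamming transition itself - loose grains locking solid past a critical density - can be sketched with a simple threshold model. The value φ ≈ 0.64 is the commonly cited random-close-packing fraction for equal spheres (a rough stand-in for real gravel or coffee grounds); the other numbers are illustrative:

```python
def is_jammed(packing_fraction, phi_jam=0.64):
    """Granular media act like a solid above a critical packing fraction;
    phi_jam ~ 0.64 is the random-close-packing value often quoted for
    equal spheres. Real grains and confinement shift this threshold."""
    return packing_fraction >= phi_jam

def vacuum_pack(loose_fraction=0.58, compaction=1.12):
    """Pulling a vacuum on the bag compacts the grains, pushing the
    packing fraction past the jamming threshold. The compaction factor
    is a made-up illustration; 0.74 is the ordered-packing ceiling."""
    return min(loose_fraction * compaction, 0.74)
```

Releasing the vacuum drops the packing fraction back below the threshold, which is the instant, temperature-free solid-to-liquid reversal described above.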

We did a project for the Chicago Architecture Biennial - essentially a really large-scale powder-based printer - where we layered rocks and string, rocks and string, over and over again within a large bounding box. When we removed the boundary, all of the rocks threaded with string jammed, and all of the rocks without string fell away. We were left with a big pile of loose rocks, and the only thing holding the structure together was the string promoting jamming - it makes a fully load-bearing structure. We put three tons on top to demonstrate the load capacity. More recently, we reversed it: at the end of the Chicago Biennial we essentially disintegrated it by winding it up - there’s a mechanism that spools up all of the rope in the structure. What we are interested in here is essentially reversible concrete: something you can pour like a liquid that instantly cures like a solid, fully load-bearing. Then you can reverse it, make it a liquid again, vacuum it up, move it somewhere else, and pour it again. We don’t have the time constraints of curing concrete, or the waste of concrete.


CONCLUDING THOUGHTS Where are we going with all of this craziness? We believe the state of the art today is manual assembly or automated robotics - if you want to make a car, a building, a shoe, they all require lots and lots of components, each with different material properties, coming together in super complex ways. The holy grail appears to be either more skilled labor or more precise robots - seemingly the only solutions for fighting this uphill battle in construction and manufacturing. However, we are trying to propose an alternative scenario, where it’s not just robots that are going to save us. Robots are great for precision and repeatability; they don’t get tired, for example. But materials are great at physics: materials occupy the physical world, they respond to physics and forces, and they can store information. Our research, and that of many other people, has shown that materials can act as sensors, actuators, and logic. These are the fundamental ingredients for robotics, and so what’s interesting is that materials can essentially be material robots in the physical environment. The environment communicates with these material robots and tells them something about the structure they’re trying to build. Here is a mundane scenario: you place a component, and that component knows about a lot of forces that you are not seeing. It can tell you about temperature gradients, about the loads it is feeling, about moisture - forces that the robot knows nothing about. The component can tell you things that you may not know, which then allows you to build better things on the fly. Or that component may permit you to build the same thing but allow it to perform in completely new ways.



We’re also seeing a shift in the way people think about construction, whether at large scales or very small scales - there’s lots of research on self-assembly of new material formations, self-assembly for space structures, and other scenarios like that. We believe today we are in an era where we program computers and machines, and tomorrow we will program matter itself.



Responsive Landscapes

JUSTINE HOLZMAN Justine Holzman is an assistant professor in the Landscape Architecture Program at the John H. Daniels Faculty of Architecture, Landscape, and Design at the University of Toronto. She is also a research associate for the Responsive Environments and Artifacts Lab at Harvard, working on coastal issues in Louisiana. She received an MLA from LSU and a Bachelor of Arts in Landscape Architecture from UC Berkeley. Her research recognizes the inherent responsive capabilities of landscape materiality and speculates on the development of synthetic ecologies dependent on responsive technologies for nuanced monitoring and material reconfiguration. She recently co-authored a book with Bradley Cantrell, Responsive Landscapes, framing a comprehensive view of interactive and responsive projects and their relationship to environmental space. Holzman pursues ceramic art alongside landscape architecture and is exploring digital and analog methods of making with ceramic material in relation to the built environment. Her work was recently exhibited at the Museum of Craft and Design in San Francisco as a part of “Data Clay: Digital Strategies for Parsing the Earth.”





Thank you so much for the introduction and thank you for having me - what a wonderful flash symposium to be a part of. I was also really excited to hear from the other speakers. I think Skylar Tibbits’ work was a really great set-up for what I’m going to talk about—the agency of materials within complex and dynamic landscapes. I’d like to speak a little bit about several strands of my research, my recent book, Responsive Landscapes, and discuss where the work is headed.

BACKGROUND My work within the field of landscape architecture is influenced by the fact that anthropogenic changes to the landscape now occur at a geologic rate. In the words of Elizabeth Ellsworth and Jamie Kruse, “we do not simply observe it as landscape or panorama, we inhabit the geologic” — shifting our perception of the landscape from a stage or backdrop for human activity to the more material and visceral qualities of that landscape and how it acts upon the environmental systems we rely upon. “Today, information and environmental technologies,” in the words of Jane Amidon, “have the potential to virally increase awareness of ecological states, to link people, place, productivity, and performance.” Additionally, our landscape is already filled with countless sensors and devices that continuously mediate our environments. As Geoff Manaugh writes, these “extraordinary instruments” have, “without pause, on every continent of the Earth and even on the bottom of the sea,” been recording and interpreting the world around them. From this set of ideas, I became really interested in how responsive technologies — both the sensing and physical infrastructures embedded in our environments — can be used as a tool for understanding and designing for the range of issues we now face.

MATERIAL AGENCY What struck me was the fact that in landscape architecture we work with dynamic materials within complex systems, providing a tremendous amount of potential to create changing and responsive conditions just based on the materials that we use — soils, hydrology, plant life, etc. Landscape materials have complicated interactions that can’t necessarily be planned for, but nevertheless occur. This series of images is from some research I did at Louisiana State University, where I was really trying to challenge the idea of landscape as surface. The project, Material Agency, examined the synthetic quality of material interactions. Taking the two definitions of ‘synthetic’ — (1) of, relating to, or produced by chemical or biochemical synthesis; especially, produced artificially; (2) devised, arranged, or fabricated for special situations to imitate or replace usual realities — I created a synthetic space (a laboratory) in my studio while relating those materials to the synthetic qualities of our material interactions within the landscape. From industrial mining and extraction operations, to the movement of material across vast landscapes, to the ways we chemically or biochemically shift materials into different states — both the macro and micro scales of material change have great significance for landscape change, intentional or unintentional. Because I didn’t have an actual landscape in which to investigate these ideas, I worked with ceramic materials, which I have years of experience with. I started out with a selection of materials that I refer to as agents — materials you would normally use in ceramics — only now I was going to use them in my studio to reveal their material qualities and agency. When working with ceramic materials, some material combinations work and some fail; ultimately you want your cup to be a cup when it comes out of the kiln. For my material tests, I used the shape of a traditional brick — commenting on the architectural building block — as a way to identify a change in materiality. I came up with a set of processes for adding in my set of material agents, taking care to measure specific quantities — prioritizing material interaction over the aesthetic final product. One of the materials I used was coffee grounds. As an organic material, it completely burns out in the firing, but leaves a remnant landscape in its place. In my process, I worked with materials that are typically used to mix different slips, clays, and glazes in ceramics — clays and fluxes. Clays are defined by particle size, how they were geologically formed, and the temperature required for them to chemically change states — become hard, or in some cases melt. After conducting my tests, I chose four for the base condition of the landscape and thought: now that I had observed these materials as an architectural building block, what did they potentially have to offer in a landscape condition? This is a reaction against the landscape as a topography of contours, which always shows a static, infinitely thin condition that completely removes the material quality of the landscape. We distributed materials across this landscape to see how they would perform and interact. The design drawing in this sense becomes a mapping of materials — a mapping of the distribution of materials. On the right slide is the ceramic object in a greenware state, on fire. I’m not just showing you the fun background stuff — I’ll show you the stuff you’re a little more interested in. I used a very small milling machine to make these little blocks



and I exaggerated the step-over to reveal the contour lines, so I could have something to show how the material was working against that concept. I made a plaster mold of it — I learned about this while doing a lot of ceramic slip casting, which is how much of the manufacturing of ceramics occurs; it has been around for a long time. The plaster absorbs the water out of the liquid clay slip and creates a shell where the slip touches the plaster, and then you pour out the liquid that remains inside. The longer you leave it in the mold, the thicker the walls get. Through these interactions, I was amazed to produce something that had the appearance of an unknown landscape, perhaps a moonscape — something otherworldly. Nevertheless, it had the kind of complexity that I was interested in. Zooming in, you can see all the interaction that took place before the firing and after the firing. The firing, to me, was a metaphor for a dynamic landscape process — the water moving across the landscape, or perhaps the introduction of chemical and physical processes. For me, this work illustrates the ability of materials to instigate and record change. I think that Skylar, in his lecture, created a great argument for the ability of material properties to be active agents in design.
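The slip-casting behavior described earlier, where longer time in the mold gives thicker walls, is diffusion-limited, so thickness grows roughly with the square root of time. A minimal sketch with an assumed rate constant (real values depend on the slip recipe and the dryness of the plaster):

```python
import math

def cast_wall_thickness(minutes_in_mold, k=1.2):
    """Diffusion-limited slip-cast wall growth: thickness ~ k * sqrt(t).
    k (mm per sqrt(minute)) is a made-up constant for illustration."""
    return k * math.sqrt(minutes_in_mold)
```

Quadrupling the time in the mold only doubles the wall thickness, which is why casters time molds rather than simply leaving them longer "to be safe."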

RESPONSIVE LANDSCAPES I’m now going to step into some work that I’ve done with Bradley Cantrell (Professor of Landscape Architecture), whom I worked with at LSU and the Harvard GSD. After teaching together and developing some related work, we began working on our recently published manuscript, Responsive Landscapes: Strategies for Responsive Technologies in Landscape Architecture, to build a framework and discuss a series of case studies that position the role and potential of responsive technologies in landscape architecture. Generally, I would say that we are much more familiar with the use of responsive technologies in architecture, as opposed to landscape architecture, in their ability to mediate public and private space with an amount of flexibility, but not necessarily with any adaptive or transformative capacity. We were lucky to get a foreword by Jason Kelly Johnson and Nataly Gattegno of Future Cities Lab; their description follows: “Responsive Landscapes engages a latent territory that, to date, has remained largely underexplored within the discipline of landscape architecture. Authors Cantrell and Holzman predict an emerging paradigm shift — where biology, intelligent machines, and systems will begin to productively coexist and co-evolve.”

Landscape architecture has really seen a paradigm shift in the last two decades, one that has required designers to respond to the kinds of dynamic and temporal changes that are occurring. We’re really interested in how you can combine this shift with the increasing accessibility of responsive technology, particularly in the maker culture of our time. I really want to highlight this quote by Carole Collet: “In times where the very concept of ‘nature’ is questioned not only in its philosophical dimension, but in the core of its biological materiality, we need to reconsider the interrelations between architecture and nature.” There’s really an opportunity, I would say, to use technology as a mediator. With that comes a lot of testing, experimentation, and working with, for instance, prototyping platforms like Arduino. As Lucy Bullivant says, “the technologies involved, of sensing, computation and display, are in rapid flux, so anachronistic solutions need to be robust; breakdowns are an occupational and institutional hazard, and new schemes are not foolproof … designers are extending the versatility of equipment for crafted responsive environments to enable different sensing modalities. The difference is that they customize what exists in order to achieve the right results.” Perhaps the most important aspect of working with responsive technologies is the feedback loop — created by our ability to sense and monitor the landscape (or some other phenomenon), process that information, and then respond with adjustments in a way that is dynamic. And that, I would say, is a really fundamental concept — in landscape architecture we are always adjusting to our changing environment. With that introduction, we used the following chapters to create a framework of six different methodologies for how you might approach responsive technologies in a way that is specific to the discipline of landscape architecture. I will quickly go through the six methods and a few of the case studies to provide some context. We were really lucky to have a great selection of built, temporary, and speculative case studies, as well as several interviews that are a bit more projective about the work of some of the contributors.
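The sense-process-respond feedback loop can be sketched as a minimal control loop. The water-level scenario, target, and gain below are hypothetical illustrations (not from the book): a gate releases water in proportion to how far the sensed level exceeds a target.

```python
def regulate_water_level(inflows, target=1.0, gain=0.6):
    """Minimal sense-process-respond loop: at each step the environment
    changes (an inflow arrives), the level is sensed, the error against
    the target is computed, and a gate responds by releasing water in
    proportion to that error. Units and numbers are illustrative."""
    level = target
    history = []
    for inflow in inflows:
        level += inflow                      # the environment changes
        error = level - target               # sense + process
        level -= max(0.0, gain * error)      # respond: open the gate
        history.append(level)
    return history
```

Each pass through the loop nudges the system back toward the target, which is the "always adjusting to our changing environment" idea in miniature.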

ELUCIDATE The first methodology, elucidate, portrays and brings clarity to ordinarily unseen and invisible phenomena through methods of visualization at the most basic level. Most projects exhibiting responsive technologies share this component; however, projects within this category draw specific attention to interpreting and visualizing imperceptible phenomena. In this example, things you can’t see, like the energy of wind, are elucidated through a temporary installation of 500 wind-powered lanterns composed as an exterior screen on the facade of Building 54 on MIT’s campus by Höweler + Yoon Architecture. Or in the case of Nuage Vert, translated as “Green Cloud,” a temporary installation designed by Helen Evans and Heiko Hansen elucidates the relationship between energy consumption in Helsinki and the outputs of the local coal-fired power plant—one concept in a series of environmental art installations and performances centered around air pollution and man-made clouds.7 In Confluence, SCAPE/LANDSCAPE ARCHITECTURE and The Living speculated on the possibility of designing a portal at Pittsburgh’s Point State Park, at the confluence of two rivers, to visualize water quality, fish flow, people flow, and river flow.

COMPRESS Next, compress investigates the manipulation of temporal scales to interpret and decipher change over time. Compress considers a type of scale that’s really difficult to comprehend and, through compression, remakes it into an understandable scale. Compression can be a valuable tool to address the “temporal shift” in ecological theory, which considers old conceptions of bounded sites as “part of a changing context in which trends cannot be exactly predicted, and surprises should be expected.” These changes cannot be determined solely by a velocity, but rather, recomposed as a narrative to extract meaning across multiple scales of time, particularly to create a relationship to human scales of time.
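As an illustration of the idea only (the talk does not specify any implementation), compressing a long temporal record can be as simple as averaging it into a handful of frames a viewer can absorb:

```python
# Illustrative sketch: compress a long time series into a small number of
# frames by averaging fixed-size windows -- one way to remake an
# incomprehensible temporal scale into a legible one. Data is invented.

def compress_series(readings, n_frames):
    """Average a long list of readings into n_frames summary values."""
    window = max(1, len(readings) // n_frames)
    frames = []
    for start in range(0, window * n_frames, window):
        chunk = readings[start:start + window]
        frames.append(sum(chunk) / len(chunk))
    return frames

# A year of daily readings compressed into 4 "seasonal" frames.
year = list(range(360))        # stand-in data
frames = compress_series(year, 4)
print(frames)                  # [44.5, 134.5, 224.5, 314.5]
```

The same windowing idea scales from a year of sensor readings to decades of hydrologic records; only the window size changes.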

DISPLACE Displace describes a removal and reconfiguration of information through physical and temporal displacement. Through the displacement and alignment of information, the connection between disparate moments can be heightened. In his talk, Skylar gave a really great example: the watch. The ways in which our conventions and ideas of time have altered space in regard to mobility have completely altered the landscape. I would also say that this corresponds to our ability

SCDC ‘16

to map and understand territory at larger scales and in aggregation. For instance, Datagrove, an interactive installation by Future Cities Lab, displays and whispers twitter messages (tweets), normally confined to personal feeds, in an actual, physical location in an effort to portray the collective voice of the city. Nicholas de Monchaux’s ongoing project, Local Code, inspired by Gordon Matta-Clark’s unfinished project, Fake Estates, proposes city-wide designs across underutilized or difficult-to-categorize parcels to showcase how city data, practices of mapping, and parametric design can be harnessed to approach issues such as stormwater management across large territories through many localized interventions. More broadly, displace points towards practices of simulation and deep learning across space and time as a method for designing within ecological and organizational systems.

CONNECT Connect facilitates direct interaction with the responsive system. The relationships it exhibits to the architecture, information, and feedback loop are apparent and easily interpretable. These projects provide a set of one-to-one relationships, where the inhabitant is cognizant of their connection and relationship to the overall system—a clear and visible feedback loop—you act with touch or movement (for example) and the installation responds with a noticeable change. I would argue that this is currently the most obvious form of responsive architecture—walking up to a wall and seeing that wall shift, seeing something physical or visual change in relation to your human movements and behaviors. MIMMI, a temporary installation by INVIVIA and Urbain DRC, displays the collective mood of the city as a misting mood ring suspended over a public gathering space. Aviary, designed by Höweler + Yoon and Parallel Development, creates a framework of light and audio poles that translates touch and movement into the sounds of instruments and local birds, creating a call-and-response performance of dance similar to birds within an aviary. Amphibious Architecture, designed by The Living and the Environmental Health Clinic, transforms the surface of the East River and the Bronx River in New York into a visual and SMS interface, displaying pollution and the movement of fish, and prompting visitors to connect via text message to the fish beneath the water.

AMBIENT Ambient considers interventions and installations presenting information precisely, repetitively, and over longer periods of time, requiring a learned relationship for translation and interpretation. We all learn about our environments in this way—we might think about the smell of the asphalt as the air becomes damp before it begins to rain. We inherently pick up environmental cues over time as a way of learning about our environment. You might think about the work of Philip Beesley, who is regarded for using responsive technologies to emulate living systems. Through complex assemblies of thousands of lightweight fabricated components, the installations perform as metaphorical biological systems through exchanges with participants and surrounding environments.


MODIFY The final piece of this framework is Modify. I would say that all of the categories really build up to Modify. It introduces projects that alter interaction and behavior as a response embedded within the feedback loop. The modification of behaviors expands the feedback loop beyond relationships between humans and landscape to incorporate other biotic and abiotic systems. A lot of the previous examples, though not all of them, were really about this flexibility—they might be picking up on something in the landscape, but they weren’t necessarily changing the landscape itself. Modify asks how we use responsive technologies to actively design the landscape. Architecture can move in relation to levels of carbon dioxide; we can begin to align our infrastructures to our habits and behaviors. We can think about how to mitigate algae blooms in the swamps of Louisiana. We can even think about building land formations, reclaiming territory, and reshaping large-scale landscapes.


In a way this is already happening, and I love returning to this project—it points towards a way of dealing with land building in Louisiana. So that about sums up the book. I’m really excited about this direction and the foundation it provides for the work, and we’ve already begun to put a lot of other work out there that points in several directions.

REBUILDING THE LOUISIANA DELTA This last project, and my work in general, has evolved out of my time spent living in coastal Louisiana. Coastal Louisiana is experiencing extreme land loss from sediment asphyxiation—the lack of sediment in the Mississippi River. Because the River has been controlled and constrained by levees, it is no longer able to distribute sediment and fresh water into the marshes of the Delta. Since the late 19th century, the dynamics of the River have been controlled very intensely, in a very top-down way, particularly after the application of the Project Design Flood by the US Army Corps of Engineers. The dams, locks, and control structures through the Mississippi Basin are orchestrated according to a very specific strategy for preventing flooding and supporting navigation. The dramatic and continued loss of wetlands is a problem not just for settlement but for industry, ecology, and navigation. The need to rebuild the wetlands and address the ecological vulnerability of the Delta, in this instance, is a project that many stakeholders can agree upon. The Coastal Protection and Restoration Authority (CPRA) is responsible for creating the Coastal Louisiana Master Plan, updated every 5 years. Their stance is that the river was once dynamic, it was leveed and controlled, and now, to address land loss, we need to develop methods of dynamic control. To show you what this landscape is like on the ground, here is a picture of my students and me on a balloon mapping trip to research the process of marsh creation and capture the “newest land in the Delta” at the Bayou Dupont long-distance sediment pipeline project. What you are seeing in the photographs is the process of dredging sand from a borrow pit in the middle of the Mississippi River channel; that sand is

then transported through a 12-mile-long steel pipe, with several relay boost stations, out to the marsh, where it is strategically deposited to build land. Using marsh buggies, the engineering team creates small levees and moves the terminus of the pipe around so that the sand can be distributed according to a set elevation. While the Delta needs to be built at a very large scale, it’s ultimately a product of very small particle interactions—the sediments suspended in the water drop out or get picked up as the water velocity changes, ultimately building land in some areas and carving paths in others. In order to rebuild this landscape, to build more marshes, scientists, engineers, and planners have been researching and designing ways to reconnect the river to its delta—to create controlled crevasses, intentional breaks in the river with operable gates. This structure is referred to as a sediment diversion. While there are currently no sediment diversions built on the river, there are places where the levee is intentionally cut (uncontrolled) to let water and sediment flow into the Delta. The crevasse (cut) you see here is on the west side of the Bird’s Foot Delta, the terminus of the Mississippi River. In these diagrams I made, you can observe that once there is a cut in the levee, water and sediment begin to flow through; there is a period of scour where land is lost, and then eventually landforms begin to emerge. Some shoaling occurs within the river because the velocity has been altered. However, this material can be dredged and redistributed into the area where you want to build land—this is called beneficial reuse. Over time, small delta formations surface, reminiscent of larger delta formations. The fact that material interactions resemble similar processes at both small and large scales has historically led to scientific practices of physical hydraulic modeling—the use of physical materials to computationally model past, current, or proposed hydrologic systems. These methods are used to study the movement and distribution of water and sediment in different scenarios with physical materials. Pictured here is the expert engineer Dr. Clint Willson working on the modeling of the lower Mississippi River. Currently, the CPRA is funding a 100x110’ physical model to begin to test the possibility of reconnecting the river to its delta through controlled sediment diversions—essentially responsive devices. The monitoring and surveillance of the delta is immense—it’s full of devices, whether it’s a coastal restoration and monitoring station, a USGS station, or a specific scientific project with an installation of sensors. There are obvious relationships between the sensors throughout the Delta, what our physical models and digital simulations tell us, and how we make decisions to modify the landscape. In my research, I’m very interested in methods of monitoring and the history of scientific modeling to inform the design of responsive landscapes.
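The particle-scale behavior described above, where sediment drops out in slow water and is entrained in fast water, is the kind of rule a hydraulic model exercises at scale. A toy sketch of that rule, with an assumed critical velocity and invented numbers, not a scientific model:

```python
# Toy deposition/erosion rule (not a scientific model): sediment settles
# where water slows below a critical velocity and is entrained where water
# runs faster. All values are hypothetical, in arbitrary units.

CRITICAL_VELOCITY = 0.5  # assumed threshold

def step(bed_heights, velocities, rate=0.1):
    """Advance the bed one step: deposit in slow water, erode in fast water."""
    new_bed = []
    for height, v in zip(bed_heights, velocities):
        if v < CRITICAL_VELOCITY:
            new_bed.append(height + rate)            # deposition: land builds
        else:
            new_bed.append(max(0.0, height - rate))  # erosion: scour
    return new_bed

# A levee cut: fast water near the crevasse, slack water in the marsh beyond.
bed = [1.0, 1.0, 1.0, 1.0]
velocity = [0.9, 0.7, 0.3, 0.1]  # slowing as the water spreads out
for _ in range(5):
    bed = step(bed, velocity)
print(bed)  # scour near the cut (about 0.5), land building beyond (about 1.5)
```

Even this crude rule reproduces the pattern in the diagrams: a period of scour near the cut, with landforms emerging where the water slackens.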

RESPONSIVE MODELING During my time working at the LSU Coastal Sustainability Studio—working with engineers, scientists, and planners and discussing the potential of dynamic control—I (along with my colleagues Bradley Cantrell and David Merlin) decided to prototype our own small responsive hydraulic model (non-scientifically). With some plywood, foam, a water pump, play sand, and walnuts, we designed a water table to test out some sensing instruments and responsive devices that might give us an understanding of the monitoring systems and infrastructure in the Louisiana landscape. We built a kind of floodgate that would begin to produce different conditions in the landscape through an orchestrated behavior. We monitored the model with a Microsoft Kinect; with that information, we used Grasshopper, Firefly, and Arduino to actuate our devices according to recorded change—in a very playful way that is more related to craft than to science.
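Speculatively, the control logic of such a model reduces to reading a depth grid, detecting change, and commanding an actuator. This sketch mocks the sensor frames as plain lists; the actual setup ran through a Kinect, Grasshopper, and Firefly, and the threshold and command names here are invented:

```python
# Speculative sketch of the responsive-model control logic: compare
# successive depth grids from a sensor (mocked here as flat lists standing
# in for Kinect frames) and decide whether to actuate the floodgate.
# The threshold and command strings are invented for illustration.

CHANGE_THRESHOLD = 0.2  # assumed: total depth change that triggers the gate

def total_change(previous_grid, current_grid):
    """Sum of absolute per-cell differences between two depth grids."""
    return sum(abs(a - b) for a, b in zip(previous_grid, current_grid))

def gate_command(previous_grid, current_grid):
    """Decide what to send the (hypothetical) gate controller."""
    if total_change(previous_grid, current_grid) > CHANGE_THRESHOLD:
        return "OPEN_GATE"  # e.g. a byte written over serial to an Arduino
    return "HOLD"

# Two mocked sensor frames: sediment has built up in two cells.
frame_a = [0.5, 0.5, 0.5, 0.5]
frame_b = [0.5, 0.7, 0.8, 0.5]
print(gate_command(frame_a, frame_b))  # OPEN_GATE
```

In the physical setup, the decision runs continuously, so the gate itself becomes part of the feedback loop rather than a one-time switch.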

I’ll just show a couple of slides of the work that I’m doing with Bradley Cantrell at Harvard in the responsive environments and artifacts lab, where we are working with a much more robust version of our initial responsive model to try and really investigate the potential for responsive models to inform and mediate what we have been referring to as responsive landscapes. Using this model, we are able to set inputs of water flow and the distribution of material to simulate abstract versions of different hydrologic landscape conditions. Working in this way is conducive to designing with change over time in a space that we can poke and prod and, in addition, measure. Part of this research is about setting up a type of system for monitoring the landscape and the representations and visual interfaces that would accompany it. It’s not just about working with the physical artifact; it’s also working through our methods of virtualization, abstraction, and representation—methods that might lead to ways in which we design the landscape to be sensed and the ways that we then begin to modify it. I think there’s a lot of potential for how we as designers think about the tools and the technologies we currently have within our reach, such that we can have a hand in the ways in which computational design is shaping our built environments. Design schools such as Stuckeman are making great progress in this area—thank you very much.



Computation as a Co-conspirator in Resisting its own Hegemony

ZACHARY KAISER Zach Kaiser is a designer, educator, scholar, and music producer. He is currently an Assistant Professor of Graphic Design and Experience Architecture in the Department of Art, Art History, and Design at Michigan State University. Zach’s work interrogates the preferred situations manifest in the designed world in which we live, asking for whom (or what) these situations are preferred and why. He seeks to offer his students and others the opportunity to question the hegemony of the preferred situations created for them, to problematize the designed systems and artifacts that privilege a certain worldview, and to propose alternative visions. Zach earned his MFA from the Dynamic Media Institute at the Massachusetts College of Art and Design in 2013. Prior to joining the faculty at Michigan State, Zach co-founded Skeptic, a Boston-based research and design collective. Zach has exhibited and lectured both in the U.S. and internationally, including appearances at the 2015 Cumulus conference in Mumbai, India; the IMPAKT Festival in Utrecht, The Netherlands; and Relating Systems Thinking and Design 3 in Oslo, Norway.




Hi everyone, I’m Zach. It’s a pleasure to be here. Thank you so much to the organizers of the Design Computing Symposium and the Stuckeman Center for inviting me. I’m honored to take part in this, and I hope that I can offer some interesting perspectives on the intersections of design and computing. Over the course of the next 40 minutes or so, I’m going to discuss something along the lines of what it said I’d discuss in the program: linking learning with computational design. However, we’ll take a few detours that expose the philosophical underpinnings of my work, and we’ll spend a fair amount of time unpacking our emerging reliance on computation and what that means in terms of human subjectivity—because that has a great deal to do with how we learn. Because much of my output tends towards the written word these days, you’ll have to endure some philosophizing, but it’s my hope that my enthusiasm or your coffee will keep you awake. This talk will progress through a couple different case studies but will situate those within the broader cultural frameworks that the projects themselves aim to address. There are a few important caveats—or I guess more like “things you should know”—about me and my work:

First: generally speaking, I don’t so much use computation as a means of generating design work as I use it to question our cultural reliance on it. If I use it in a “generative” fashion, it is in a partnership of sorts.

Second: I was trained as a graphic designer—an old-school graphic designer. Which means I learned how to make posters and books, not program computers. These days, I’m not a very sophisticated programmer, but I have begun to learn about computation from a variety of perspectives—both technical and philosophical. In this sense, I try to engage with computation on both epistemological and philosophical terms.

Third: what’s most interesting to me about computation is our relationship to it. My research questions the cultural privileging of computation in the first place, which looks at broader historical and cultural shifts within which our current understanding of computation (and ourselves) has emerged.

Fourth, and finally—and this is very important: while I might seem to be a “critic” or a “skeptic” (which was the name of the design studio I co-founded in Boston), this is absolutely NOT a polemic against technology or computation. I say this in part because technology is not singular—a monolithic entity with one goal or purpose. I also say this now because much of what I say will be, in some form or another, critical of different technologies.

OK, so let’s go back to the title of my talk, which is both intentionally provocative and also accidentally a mouthful: “Computation as a Co-Conspirator in Resisting its Own Hegemony.” Yikes. Over the course of this presentation, I’ll hopefully communicate what I mean by this. In a sense, I mean to say that I am aiming to demonstrate the use of computation to critique the cultural privileging of computation as an arbiter of meaning or value. And I’m going to talk about this from two different perspectives. The first perspective will consider the potential of computation to assist in learning experiences that transcend the privileging of metrics in today’s educational discourses. The second perspective will address what I call “symbolic subversion”: the way in which we can partner with computation to design experiences that call into question the hegemonic ideologies that privilege it in our everyday lives.

THE COMPUTABLE SUBJECT Today, we increasingly perceive ourselves as computable subjects—people whose existence is defined by data that can be measured. If it can’t be computed, it isn’t real. In his Critique of Everyday Life, Vol. 3, Henri Lefebvre suggests that the hegemonic ideology of late capitalism is “the ideology of the end of ideology,” or “the information ideology,” in which computation and bureaucratic technologies are perceived as neutral and, therefore, arbiters of truth. In order for us to take advantage of what these technologies purport to offer in their promotional discourses, everyday life must succumb to the “homogeny of fragmentation,” in which life is broken up into quantifiable units that have an established equivalence. Whatever you measure—steps taken via FitBit, grade point average, badges achieved in a MOOC, or pages read on your Kindle—all life, suggests the “information ideology,” should be subject to quantification and, implicitly, the market. Again, if it can’t be computed, it isn’t real.



The popularization of the quantified self points loudly to the increased privileging of computation as a neutral arbiter of “truth.” In his “Postscript on the Societies of Control,” Deleuze writes: The numerical language of control is made of codes that mark access to information or reject it. We no longer find ourselves dealing with the mass/individual pair. Individuals have become “dividuals,” and masses, samples, data, markets, or “banks”... The operation of markets is now the instrument of social control and forms the impudent breed of our masters. This ideology is liable to become only more culturally embedded as the internet of things becomes more pervasive. Ambient “intelligence” has already been designed into our refrigerators, mattresses, washing machines, and even egg containers. How telling that we call these products “smart.” The co-opting of language in order to reify an ideology that privileges the computable in the service of efficiency is not new. Indeed, Roger Salerno, in his book Landscapes of Abandonment, writes about Habermas, who noted “that ordinary language is being corrupted by the new scientism. Language is yielding to technocratic taxonomy... Knowledge has become increasingly dominated by the technocratic imperative that has commanded the modern communicative system.” The terming of these technologies as “smart” is a representation of the colonization of language by rationalist and positivist teleology.


Much of my work rests on the presumption that the privileging of the quantifiable over all else—and, furthermore, the belief that everything can BE or BECOME quantifiable—is problematic. It is not just problematic but is detrimental to the human psyche and the potentialities that lie within each and every one of us. A friend recently shared with me a meme in which a photo of a grumpy child appears underneath the text “I don’t think you understand. There is no point in walking if I don’t have my fitbit on.” Even though it is a meme, and, in that sense, a joke to some degree, it has a note of truth to it that rings loudly in my ears. Again, this is not to say that I think our technologies or computation itself are “bad.” What is dangerous is the subjectivity that we assume—the type of people we become—when we believe that everything can be subject to measurement and computation in the service of efficiency, productivity, or convenience. Such a subjectivity—a subjectivity of computability—is alienating and is the perfect kind of subject for a neoliberal capitalism that eschews any sort of deviation from the dictums of the free market. People who believe their very existence can be assigned a numerical value (in calories consumed, in steps taken, in likes from Instagram or Facebook, in SleepIQ, and any other of the myriad measurements to which we are subjected every day) are perfect as both consumers and laborers in such a society.

THE RISE OF THE NEOLIBERAL / MACHINE-READABLE STUDENT When I started grad school, I didn’t intend to become like this. I actually began my trajectory towards my concern for the state of our collective subjectivity by considering the design of learning experiences in relationship to the way I learned about music as a DJ and music producer. I had been making music with the help of my computer for a long time, but it was when I started to use a piece of software called Ableton Live that I first noticed a significant shift in how I made and performed music. Instead of being limited by how many records I could carry in a few crates to a club, or by how many tracks were on my studio tape recorder, with Live I could use as many songs, samples, tracks, and effects as I pleased. At first it was thrilling, but I began to see the “funk,” the imperfect spontaneity, dissipating from my music as my control became simultaneously more far-reaching and more granular. I realized that Live’s interface operates much like Vilém Flusser’s camera apparatus: I am a functionary controlling the apparatus via its exterior while it controls me via its interior. It guides my decision-making process, making certain things easier to do while curtailing my ability to do others. Because I saw that I needed it in order to manage the sheer quantity of music to which I had access, I began to rely on its capabilities. It was the things that were made easier by Live’s computational architecture that I began using most frequently. I originally thought that I was “overwhelmed” by “data,” and I drew relationships between the way I was trying to manage sound and the sorts of experiences students were having when they could access any piece of information at any time. While this analogy wasn’t entirely wrong, it

was missing a core component: the ideology of rationalism that drives a cultural bias towards computation as somehow more neutral or true. An overwhelming amount of data seemed to necessitate the help of computation, which begat more data, which necessitated more computation. I started to wonder about this. This led me to focus on the cultural reliance on our technologies as arbiters of meaning, particularly in the way we yield to technological interventions in establishing the means by which we design educational experiences and judge the efficacy and success of education.

It is attractive to think that numbers—grades or test scores—and checklists or credentials that might indicate employability—badges as well as skill or software competency lists—can accurately describe the entirety of a learning experience. It makes things efficient, easy, convenient. But in succumbing to the seduction of convenience, we give up a holistic, long-term view in favor of the short-termism of what we can analyze computationally right now. In my graduate thesis, I wrote that, “An increasing ability to handle vast amounts of information computationally becomes an attractive way to handle any type of information. The information we produce, then, begins to cater to the way it is parsed.” We develop more metrics, more tests, more scores. Indeed, education has become instrumentalized, merely a “mechanism for adapting students


to the market” All sanctioned learning must be computable and in being computable, is fundamentally connected with the market.

and meaningful connections between pieces of information and help prompt curiosity and deeper inquiry.”

The ability to think critically, to synthesize information from disparate sources into cogent arguments, to engage as a citizen in a participatory democracy in an ethical fashion, the development of a sophisticated sense of empathy and responsibility to others—I’m not entirely sure these things lend themselves to computability.

A user signs into the service and selects a “bank” of samples, which bears the name of a research project.

So I began experimenting with the development of tools that could use computation to resist this trend. I wanted to create experiments that could get students thinking creatively about connecting and synthesizing information, to cultivate a sensibility and posture of connection-making—a sort-of “remix” intuition. These experiments included a project about old samplers and how interfaces for learning might be designed to mimic some of the affordances of these music-making tools in order to provide limitations and force the user into creative uses and interpretations of research. Sampler is a conceptual prototype for a research tool and service aimed at high school students—a platform for rapid, improvisational exploration of connections between “samples” of research. It is based on the process I go through as a DJ to perform a set or create a song. “Sampler aims to foster an environment where limitations and improvisation can help learners find new 47

SCDC ‘16

She adds research to a given sample in the “bank” of samples for the project on which she’s working – through a simple browser extension she can add images, texts, sounds, and video to her “bank” of samples. She then uses her mobile device as an interface to “play” these “samples” of research, improvisationally connecting the research through a performative experience in which she “plays” these different research “samples” and verbally annotates the connections she notices. The process is intentionally loose and experimental.

She can go back at any time to these “research performances” and the voice annotations she’s made about the relationships between different sources during the different “research performances” she’s done. Again, this is all about cultivating a particular sensibility, a sensitivity to relationships and connection-making that transcends the metadata about a given piece of content. Maybe something comes out of it – maybe it doesn’t – in the end it’s not about “efficiency” or a checklist of skills – it’s about a sensibility and sensitivity to the possibility and potential latent within relationships and connection. The project currently exists as a proof-of-concept prototype, though, strangely, I’ve begun to receive inquiries from various educators about the project, and am considering returning to it in order to develop a deployable web-based version. Such a learning experience, however, doesn’t necessarily play well with the instrumentalized vision of education as preparation to participate in the market. It doesn’t have a quantifiable value that can be affixed to it. Sampler seeks to resist the prevailing tendency to evaluate the effectiveness of education as a market-driven

commodity and to develop new metrics by which to do so. This tendency, I suggest, grows out of the increasing power of computational tools to process large amounts of data, tools to which we are increasingly willing to give over our powers of synthesis and analysis.
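The workflow described above, banks of samples, browser-extension additions, and recorded "research performances" with voice annotations, implies a small data model. A hypothetical sketch, with all names invented rather than taken from the project:

```python
# Hypothetical data model for the Sampler workflow described above: a bank
# of research samples plus recorded "performances" in which samples are
# played in sequence and annotated. All names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Sample:
    kind: str     # "image", "text", "sound", or "video"
    source: str   # where the sample was clipped from

@dataclass
class Performance:
    played: list       # order in which samples were "played"
    annotations: list  # verbal notes on noticed connections

@dataclass
class Bank:
    project: str
    samples: list = field(default_factory=list)
    performances: list = field(default_factory=list)

    def add(self, sample):
        self.samples.append(sample)

    def perform(self, order, annotations):
        """Record one improvisational pass through the samples."""
        self.performances.append(Performance(order, annotations))

bank = Bank(project="river deltas")
bank.add(Sample("image", "aerial photo of the Bird's Foot Delta"))
bank.add(Sample("text", "article on sediment diversions"))
bank.perform(order=[0, 1], annotations=["both hinge on sediment supply"])
print(len(bank.performances))  # 1
```

Notably, the annotations are free text rather than scores or metadata, which is exactly the point: the record of a performance resists reduction to a metric.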

I guess what I’m arguing is that the deluge of data that we might feel we are experiencing only has certain properties that are captured, and it’s only those properties that we can study computationally—meaning that what can’t be studied computationally might get studied less and less. Or that what can’t be analyzed computationally gets left out, ignored, regardless of whether or not that data is meaningful to us as opposed to the tools we are using to analyze it. I think that a sense of possibility within connection-finding that is human-readable (as opposed to machine-readable) at a human scale is a trait worth cultivating—it’s one that privileges something different, something strange and magical, which, I believe, maybe naively, that people are.

SYMBOLIC SUBVERSION As I have done more research and writing on the type of subjects we become in a society that privileges computation as a neutral arbiter of value or meaning, the nature of my project work has shifted. I am using this term “symbolic subversion” to describe some of my recent projects for a couple reasons. First, most of these projects operate in “artspace” as opposed to “realspace.” Second, I’m not a hacker. Through my writing and projects, I hope to encourage discourse about the desirability of the computationalist trajectory on which our society is currently traveling.

“We are beginning to find ourselves in a world governed not by our interpretations of information, but rather the interpretation of information by algorithms that we developed, but of which we are neither the audience nor the users. We are blind to this shift because we increasingly, and unknowingly, embed this ideology into [everything] we design.”

The first project I’m going to discuss is called Whisper, which has been a multi-year collaboration between myself and a dear friend and colleague, Gabi Schaffzin.

SCDC ‘16

Whisper evolved out of my interest in—and concern with—the influence of inference and recommendation algorithms (like those used by Amazon or Netflix). Let’s say you search for something on Amazon or enter an address into Google Maps. Capturing that query and tracking the history of your search queries can help these companies infer things about you: what you like, where you typically go. It also makes your search results more convenient in future searches, so you have to look less hard for the things you really want. This seems fairly direct (regardless of whether or not we believe that it’s good or bad).

But systems of algorithmic inference and recommendation are rarely that straightforward. It just doesn’t make sense to use all that data they’re capturing about you to just “help” you… why not use it to help other people, too? That query that you typed into Amazon doesn’t just help Amazon help you. It helps them help other users who are algorithmically determined to be, in some way or another, “like you.” This means that we are inextricably linked in a database somewhere with people who have travel destinations similar to ours or purchase similar items. From these simple queries (aggregated with the simple queries of many other individuals), companies like Amazon begin to infer a great number of things about us, including, but not limited to, gender identity, age, children, religious beliefs, etc. All these “properties” of each of us are fed back to us through product recommendations. We purchase these products, and along with them, purchase an understanding of ourselves constructed by inferences made about us through the aggregation and analysis of the data given to Amazon by people we don’t know. In this way, suggests John Cheney-Lippold, we acquire algorithmic identities.

Now it’s possible that I’m giving Amazon too much credit here. None of us really build our sense of self on the stuff we order from Amazon. But it’s not just Amazon. It’s Facebook and Netflix and Google Maps and YouTube, and the list goes on and on.

How many of you have done those weird little Facebook quizzes, where it tells you your ideal road trip companion, or which Disney princess you’d be? My mom did this, and she was like, “wow, Facebook knows me so well…”


We are subject to systems of algorithmic inference and recommendation at nearly every turn in our daily lives, and this is only liable to become more so as the “Internet of Things” becomes more pervasive. Soon, while our decisions might still appear to be “ours,” even the smallest actions will have been inferred and recommended, if not outright predicted.

This has other implications, too. Because algorithmic systems of inference and recommendation influence how we operate in the world and how we conceive of ourselves, the information we produce begins to cater to the way in which it’s parsed. After a long conversation, my mom and I realized that Facebook “knows her so well” ONLY because she produces an expression of her identity that is easily analyzed by Facebook’s inference and recommendation algorithms. John Cheney-Lippold terms this kind of power and control “soft biopower.” He argues that as our devices and smart objects gather data based on our actions, they make inferences about us. These inferences are recorded in the databases of the companies whose services we use. These database entries are then cross-referenced with those of other individuals that the algorithm has determined to be—in some way or another—similar to you. Such cross-referencing dynamically creates categories of users, and helps the service recommend courses of action to you: who to follow or friend, maybe a movie to watch, or a shirt to purchase, or a time to wake up, or a temperature setting for your home. The subsequent actions you take, no doubt influenced by these recommendations (which are influenced by the “categories” developed in response to the actions of other users), are again logged in the database, helping the algorithmic system define and delineate the categories to which you belong. Because your actions, which are a response to the system, are used to help define the categories to which you belong (e.g., “people who watched this also watched…”), your actions also impact the courses of action recommended to others, and vice versa. In short, systems of algorithmic inference and recommendation configure “life by tailoring its conditions of possibility.”
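The “people who watched this also watched…” mechanic can be sketched, in deliberately toy form, as a co-occurrence count over viewing histories. Everything below (the function name, the sample histories) is invented for illustration; real services use far more elaborate models, but the feedback loop the talk describes is the same.

```python
from collections import Counter

def also_watched(histories, item):
    """Toy item-to-item recommender: rank everything else that appears
    in the viewing histories containing `item`, by co-occurrence count."""
    counts = Counter()
    for history in histories:
        if item in history:
            counts.update(h for h in history if h != item)
    return [title for title, _ in counts.most_common()]

histories = [
    {"The Trial", "Metropolis", "Brazil"},
    {"The Trial", "Metropolis"},
    {"Metropolis", "Brazil"},
]
print(also_watched(histories, "The Trial"))  # ['Metropolis', 'Brazil']
```

Note the loop: each choice a user makes gets appended to `histories`, reshaping the categories that drive the next round of recommendations, for that user and for everyone deemed similar.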


Whisper is a technology that began as a proposal for a tool to “intercept and scramble the data being transmitted from connected household objects in order to reintroduce surprise and serendipity into someone’s life,” a life that has been completely “algorithmically anticipated.”

Whisper is, in reality—as my colleague, Gabi Schaffzin, and I have described it:

… an interventionary artwork, in which a user approaches a device and tells it how he or she feels. The device then takes the user’s description of her feeling, scrambles it through an association algorithm, queries Amazon using the “scrambled” feeling, and orders a product.

…The Whisper object itself is a small, white, acrylic box, its smooth facade only broken by the affordance of a single red button, with a small microphone protruding from its left side. From the front, a sheet of receipt paper cascades to the gallery floor, displaying the process by which Whisper scrambles a user’s feelings as well as the product recommendation.

For example, a user would press the red button. The receipt printout would begin with the text “welcome to Whisper. Please tell me how you feel.” The user in this case says “happy.” The printout continues, reading “you said happy.” Then it prints out, “one moment please.” Then it says “one moment please. Scrambling algorithms takes time.” Then, “whisper thought of fortunate. Whisper thought of providential. Whisper thought of luck. Whisper thought of condition. Whisper thought of instruct.”
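The word-association scrambling narrated on the receipt (“whisper thought of fortunate… providential… luck…”) can be sketched as a random walk over a thesaurus. This is a minimal sketch: the toy synonym table below is invented and stands in for the live thesaurus service, and the final line merely names the store query rather than placing an order.

```python
import random

# Toy synonym table standing in for the cloud thesaurus service;
# these entries are illustrative, not real API responses.
TOY_THESAURUS = {
    "happy": ["fortunate", "glad", "lucky"],
    "fortunate": ["providential", "lucky"],
    "providential": ["opportune", "lucky"],
    "lucky": ["charmed", "fortunate"],
}

def scramble_feeling(feeling, hops=3, rng=random):
    """Walk up to `hops` steps of word association from the user's
    feeling, returning the trail of words Whisper "thought of"."""
    trail = [feeling]
    word = feeling
    for _ in range(hops):
        synonyms = TOY_THESAURUS.get(word)
        if not synonyms:
            break  # dead end in the toy table; a live API rarely has one
        word = rng.choice(synonyms)
        trail.append(word)
    return trail

def receipt(feeling, trail):
    """Render the printout in the voice of the gallery object."""
    lines = [
        "welcome to Whisper. Please tell me how you feel.",
        f"you said {feeling}.",
        "one moment please. Scrambling algorithms takes time.",
    ]
    lines += [f"whisper thought of {w}." for w in trail[1:]]
    lines.append(f'searching the store for "{trail[-1]}"...')
    return "\n".join(lines)

trail = scramble_feeling("happy", rng=random.Random(0))
print(receipt("happy", trail))
```

The point of the sketch is how little machinery it takes to drift from “happy” to an unrelated purchase: each hop is defensible, and the aggregate is absurd.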


Now, usually, I don’t talk about this part of the project, but it seemed appropriate here: the object has an Arduino and a Samsung Galaxy phone inside of it. The phone uses the Google voice API, and the query is triggered by the words “I feel,” only after the button is pushed. The query is then fed through a thesaurus API (also in the cloud). The transformation of the user’s feeling is then used to query Amazon, and the results are scraped and pushed to the receipt printer.

The receipt itself points towards the commodification of existence, the way in which everyday life begins to conform to the dictums of the market when all our actions are guided by systems of algorithmic inference and recommendation. And yet, what gets printed is atypical of a receipt: the user’s articulation of her feeling as well as the translational scrambling process. Emerging from this strange but seductively designed object is, then, an attempt to call into question the value of the technologies that themselves seduce us into believing they know us better than we ourselves do.

This might be asking a lot, but I think that when Whisper elicits a chuckle, or often a good belly laugh, it means someone is responding. And maybe that laughter means they are thinking: thinking about how things could be otherwise, or thinking about the scripted nature of their daily interactions with things that are supposed to make their lives easier.

LANGUAGE AND THE PRIVILEGING OF THE COMPUTATIONAL
The Allure of Convenience and the Role of Computation in Municipal Governance

I’ve also been thinking a great deal about bureaucratic technologies, especially in the context of the quest for efficiency and cost savings in cities like Detroit. I’m curious about what happens to people when we develop new metrics to feed into municipal inference and recommendation algorithms, subjecting qualitative experiences of communities to quantification. Again, the human-readable can become secondary to the machine-readable. This may soon be the case for garbage collection. Municipal codes, already difficult to decipher, may become exclusively computational codes, translated beyond human legibility and concealed behind proprietary algorithms owned by the private companies to which municipalities—in their “austerity urbanism,” their efforts to reduce costs and increase efficiency—turn for help. As municipal services become automated and privatized, the way in which basic services are delivered becomes further obscured, and we exchange this obfuscation for the comfort of convenience.

I have begun to explore this topic through the lens of experimental publishing and terms of use agreements: Terms of Use for Handling of Solid Waste and Prevention of Illegal Dumping is a book masquerading as a municipal handbook of the future. Inside its bland cover is a confusing and, at times, comedic work of ‘cut-up’ style poetry created by software that combines the Detroit municipal codes for refuse disposal with the iTunes Terms of Use and Google’s Terms of Service.
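The cut-up software behind the book can be sketched as a phrase-level shuffle of two source texts. This is a toy sketch: the two snippets below are placeholders I wrote for illustration, not the actual Detroit refuse code or the iTunes/Google terms.

```python
import random

def cut_up(source_a, source_b, rng=random):
    """Burroughs-style cut-up: split both sources into sentence-level
    phrases and shuffle them together into a single text."""
    phrases = [p.strip()
               for text in (source_a, source_b)
               for p in text.split(".")
               if p.strip()]
    rng.shuffle(phrases)
    return ". ".join(phrases) + "."

# Placeholder snippets, invented for this sketch.
code = "Refuse shall be placed at the curb. Containers must be covered."
terms = "You agree to the terms of service. Your data may be shared."
print(cut_up(code, terms, rng=random.Random(4)))
```

The comedy of the book comes from exactly this operation: legal-bureaucratic sentences remain individually plausible while the shuffled whole becomes unreadable, mirroring the agreements we click through.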

While the cover presents the book as an artifact from a “smart city”—with its simplicity and language seeming to conform to our ideas about municipal efficiency-enhancing measures—the interior confounds the viewer with jumbled language that is vaguely reminiscent of the legal documents to which they readily click “agree” when trying to launch iTunes or use one of their Google apps. The discord created by this juxtaposition is intended to cause the viewer to reflect on the implementation of computational systems that purport to be neutral and convenience-enhancing.

But this project spawned all sorts of other text projects, none of which have yet been finished, but which I feel are pertinent to this discussion. For example, last fall I started working on a project about the role of the promotional apparatuses that different companies use to try to convince us to adopt their technologies. The term “smart” is, of course, central to such discourse, but there is a certain challenge posed to any kind of resistance by the simple fact that our “futurological discourses” are shaped by those who are considered “expert” enough to have developed these technologies in the first place. They—along with the computational products and services they promote to us—wield disproportionate influence over the way we talk about the near future.


It’s as if we are handed down these truisms by different companies. FitBit says that it measures your activity and gives you a score. Well, does it? I mean, maybe. But no, they respond, of course it does—that’s just how it is, and this score precisely reflects your activity. It couldn’t be otherwise, because their proprietary algorithms and the data they capture say so.

So I started thinking about “truisms,” and of course if you went to art school, the first person you think of when you think of truisms is Jenny Holzer. So I started trying to find a way to write my own truisms using the vocabulary of the companies who control how we talk about the near future. But who else controls how we talk about the near future? I mean, yes, definitely tech companies and tech pundits, but also those who dole out advice to tech companies—big consultancies like McKinsey and IBM. So, I mean, if McKinsey and IBM are giving advice to FitBit and Apple and other cool tech companies, which are then writing algorithms to make recommendations to us on how to live our lives, why can’t they give us advice, too?

So I started working on a project called “Oraclear,” an “oracle” of sorts that helps you think, clearly. It writes truisms for us, using the language out of which our visions for the future are constructed: language from white papers by McKinsey and IBM, and language from the promotional materials for companies like Apple or FitBit. It’s kind of like a magic 8-ball or fortune cookie, combined with business consulting, marketing, and Brian Eno’s oblique strategies. Which is how I feel the inference and recommendation algorithms embedded in our smart objects function, anyway. You simply open the app, shake your phone, and receive a “truism,” constructed using similar grammatical structures to those that Jenny Holzer uses, but using language from those who construct our cultural visions of the future. The application will also query Google and return the first search result to you, in case you are curious about what the advice is implying. You might get something a little more obtuse, like this: “All predictions are economically semiautonomous.” Or something maybe a little more direct, like this: “A little adaptability can produce everything.”
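A generator in the spirit of Oraclear can be sketched as template filling: Holzer-style declarative frames populated with corporate-futurist vocabulary. Both the templates and the word lists below are invented stand-ins, not text actually mined from McKinsey or IBM white papers.

```python
import random

# Hypothetical Holzer-style sentence frames and a toy vocabulary.
TEMPLATES = [
    "All {plural} are {adverb} {adjective}.",
    "A little {singular} can produce {outcome}.",
    "{plural} must be {adjective} to matter.",
]
WORDS = {
    "plural": ["predictions", "disruptions", "synergies"],
    "singular": ["adaptability", "agility", "insight"],
    "adverb": ["economically", "strategically", "digitally"],
    "adjective": ["semiautonomous", "scalable", "frictionless"],
    "outcome": ["everything", "value", "alignment"],
}

def truism(rng=random):
    """Pick a random frame and fill each slot with a random word."""
    template = rng.choice(TEMPLATES)
    sentence = template.format(**{k: rng.choice(v) for k, v in WORDS.items()})
    return sentence[0].upper() + sentence[1:]

print(truism(random.Random(7)))
```

The design choice matters more than the code: because every slot is filled from the same closed vocabulary, the output always sounds authoritative and never says anything, which is the critique.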

The notion of the computable subject and the ideology of computation support a Taylorization of everyday life and of society as a whole. It renders existence as a quest for efficiency and productivity. Such a society might be considered largely “bureaucratic,” with technologies constructing specific forms of social relations that cater to competition and the market, while subduing other forms of sociality. While various technologies foster and facilitate connection in different ways (think Facebook and FaceTime), the fact that every moment of a life can be quantified and subjected to interpretation and manipulation by the market leads to a precarity, and consequently an anxiety, that is a defining feature of our age. In his book Landscapes of Abandonment, Roger Salerno suggests that much of Kafka’s writing expresses the alienation wrought by an over-reliance on various technologies as neutral mechanisms of control and arbiters of truth or value. He argues that Kafka’s writing expresses a certain “brutality” when technological progress serves “the interests of a bureaucratic civilization.” This is illustrated in nightmarish fashion in Kafka’s short story, “In the Penal Colony.” The story, says Salerno, “displays the scientific wonders of The Harrow—a technologically elegant instrument of gruesome mechanical torture and death. Its operator views the device as both efficient and pristine—a groundbreaking innovation by which to punish those who might shirk their duty, no matter how inconsequential that duty might be.” The Harrow carves the prisoner’s sentence into him, killing him as it does so. The Harrow is used on all the inmates who do anything judged by the Operator of the Harrow himself to be worthy of its implementation—anyone can be found guilty at any time. The Operator of the Harrow has an unwavering faith in its original creator, the old Commandant, and believes in the vision of “justice” put forth by the old Commandant and his creation.


Now, your “smart” objects don’t operate on you in quite the same way, of course. But the enthusiasm of the operator to continue using the Harrow, despite the absurdity of its imposition of “justice,” mirrors something of the enthusiasm some have for the potential of the “Internet of Things”—today’s “smart” objects. While this connection might seem tenuous or ridiculous, it’s not about a correlation between the technologies: it is about a correlation between behaviors that reflect individual subjectivities. And conversely, it’s about the way in which these subjectivities emerge in behavior that results in the kind of precarity that produces our collective anxiety.

So how do we grapple with this sense of precarity and with the anxiety that it creates, the fragmented, computable, manipulable selves that we’ve become? YouTube, of course. We pacify ourselves with YouTube videos in an unconscious effort to grapple with the alienation produced by our computable subjectivities. We console ourselves with cat videos, gameplay trailers, sports highlights, ASMR videos. In a recent project that is still very much in progress, I have begun to explore this connection between Kafka’s work and contemporary experiences of computationally-mediated subjectivity. It is an artist’s book that meditates on the role of YouTube and of inference and recommendation as an “apparatus” in a computational-bureaucratic society of alienation.

I wrote a script that splits Kafka’s story into individual phrases separated by specific punctuation. The script then queries YouTube with each of those phrases and returns the thumbnail of the first search result. I wrote another script that automatically builds an InDesign document with those thumbnail images placed into it, and composes the images on spreads in the order in which they were queried, creating a different composition on each spread. I have gone back into the document and begun to insert the punctuation that split the phrases being queried in the first place, hoping that the piece begins to read almost as a narrative of images. Effectively, it is Kafka’s story translated into YouTube video thumbnails.
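The phrase-splitting step of the script just described can be sketched with a regular expression, and YouTube’s predictable per-video thumbnail URL shows how an image can be derived once a search result’s video id has been scraped. The search and the InDesign automation are omitted here; the sample sentence is a paraphrase of the story’s opening line, and `video_id` is a placeholder.

```python
import re

def split_into_phrases(text):
    """Split a story into phrases at punctuation marks, dropping
    empty fragments, so each phrase can be used as a search query."""
    return [p.strip() for p in re.split(r"[.,;:!?]", text) if p.strip()]

def thumbnail_url(video_id):
    """YouTube serves a predictable thumbnail image per video id; the
    script fills `video_id` from the first search result for a phrase."""
    return f"https://img.youtube.com/vi/{video_id}/hqdefault.jpg"

opening = "It is a remarkable piece of apparatus, said the officer."
print(split_into_phrases(opening))
```

Splitting on commas as well as sentence-final punctuation is what gives the book its density: a single Kafka sentence yields several queries, and therefore several images per spread.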

If you’ve read “In The Penal Colony” and crack open a book that has that as its title, you might be surprised and intrigued to see what is clearly a series of YouTube video thumbnail images. At the same time, by explicitly linking the story with imagery from YouTube, you—the viewer—might begin to triangulate the connection, the problematic that the project seeks to expose.

CRITIQUE & CONCLUDING THOUGHTS
I believe that the cultural privileging of computation as a neutral arbiter of meaning, value, and truth is fundamentally problematic because of the kind of subjects we become when we accept this ideology. Furthermore, we are rarely given the choice or offered the opportunity to deliberate the desirability of the tools and technologies that prop up this ideology, because it is embedded in the products and services we use every day, and given credence by the promotional discourses that position it as convenience- and efficiency-enhancing. It is an ideology that justifies itself through its reification in society and is therefore hegemonic. My research and creative practice use computation as a means to ask questions about this ideology, as a means to critique it and to resist the subjectivity that results from it. In a sense, I guess computation in different ways has been a collaborator and co-conspirator, with agency and power that, in some ways, matches my own.

I believe that the use of computation is not by nature equivalent to an acceptance of an ideology that privileges it over all else, meaning that the use of computation in everyday life does not necessitate that human subjectivity be shaped by quantifiability, computability, and the market. The use of computation in everyday life does not mean that existence needs to be defined by an arbitrary maximization of efficiency, with our corporeal selves and bodily needs as some sort of physical “limit” to be overcome in the quest for an increase in “efficiency” or “productivity.” But, I suggest, this is happening. And it’s changing how we view ourselves and each other—a subjectivity and relationality predicated on arbitrary numeric indicators derived from the market. I hope that my work functions as a critique that also, through an attempt to negate what exists, opens new possibilities. As Lefebvre argues, only through critique, through what he terms the “negative,” can we envision possibility and potential outside of the teleology of technological positivism that dominates everyday discourse. Indeed, he writes that “the negative moment creates something new… it summons and develops its seeds by dissolving what exists.”

At the same time, Lefebvre also argued that the everyday is precisely the site at which we must struggle against the hegemonic “information ideology,” because everyday life is the space most recently colonized by this ideology. In this sense, I believe that the gallery space and the academy are places that my work must transcend. At the same time, with an increasing instrumentalization of higher education to serve the market, these problematiques must be posed here as well—in spaces that have long been sites of experimentation and resistance to the dogma of the “practical.” Regardless, my work has yet to really heed Lefebvre’s call, to truly operate within the everyday, within spaces that are not rarefied or already primed for such work.



I’m only just beginning this line of inquiry, and with every passing day there seems to be more work to do. It’s really been an honor to share what I’ve done thus far with you, and I am grateful for the opportunity.


Thank you.





Supercart: Mobile Retail Organizer Application Peter Lusch, Assistant Professor of Graphic Design, Penn State Akid Alias, Researcher, Penn State

Woven Water Filter Felecia Davis, Assistant Professor of Architecture, Penn State Shokofeh Darbari, Master of Architecture Candidate, Penn State Nasim Motalebi, Master of Architecture Candidate, Penn State Angelica Rocio Rodriguez, Master of Architecture Candidate, Penn State

Responsive Textile Panels Felecia Davis, Assistant Professor of Architecture, Penn State Niloufar Kiourmarsi, Master of Architecture Candidate, Penn State

Bromma Belvedere: Projecting the Performance of Noise Abatement Landscape Infrastructure


Tawab Hlimi, Visiting Assistant Professor of Landscape Architecture, Penn State

Reverse Engineering: Negotiating Design Projection with Performance via Stormwater Modeling Tawab Hlimi, Visiting Assistant Professor of Landscape Architecture, Penn State

A. William Hajjar’s Air-Wall Project: Studying an Early Example of Double-Skin Façades Ute Poerschke, Associate Professor of Architecture, Penn State Henry Pisciotta, Associate Librarian, Penn State Moses Ling, Associate Professor of Architectural Engineering, Penn State David Goldberg, Practitioner Instructor of Landscape Architecture, Penn State Laurin Goad, PhD. in Art History Candidate, Penn State Mahyar Hadighi, PhD. in Architecture Candidate, Penn State

Collaborative Design Studio (CoLab) Nathaniel Belcher, Professor of Architecture, Penn State Ross Weinreb, Instructor of Architecture, Penn State David Goldberg, Practitioner Instructor of Landscape Architecture, Penn State Shivaram Punathambekar, Master of Architecture Candidate, Penn State Rohini Raghavan, Master of Architecture Candidate, Penn State Jennifer Gong, Bachelor of Architectural Engineering Candidate, Penn State




SuperCart: Mobile Retail Organizer Application In collaboration with Akid Alias, Researcher, Graphic Design (class of 2016)

PETER LUSCH Assistant Professor of Graphic Design, Penn State Peter Lusch, assistant professor of graphic design, is a design educator and designer who is interested in environmental design, dimensional typography, experience design, video, lost history, design pedagogy, and installations. His work has been exhibited in the experimental design journal Margin, the Communication Arts Typography Annual, the Art Power International book Way of the Sign III, and at the Zaha Hadid-designed Eli and Edythe Broad Art Museum in East Lansing, Michigan. His professional work has earned regional and national awards. As a design educator, he has presented at the national design conference of AIGA, the professional association for design. He has also studied and worked internationally with Shanghai University in Shanghai, China. Peter holds a B.F.A. from Eastern Michigan University and an M.F.A. from Michigan State University.



TARGET CUSTOMER
Online shoppers and mobile-savvy in-store shoppers

CUSTOMER’S NEED & DESIRE
An organizer app that can find them the cheapest item, both locally in-store and online

OFFER
SuperCart provides an easy-to-use app that makes price comparisons for you and creates a more efficient shopping experience

DELIVERED BENEFITS
Time-effective and easy to use

ALTERNATIVES AVAILABLE
SuperCart addresses the current online retail situation, in which volumes of unorganized information from all over the Web make finding a great deal difficult


Woven Water Filter In collaboration with researchers Shokofeh Darbari, Nasim Motalebi and Angelica Rocio Rodriguez

FELECIA DAVIS Assistant Professor of Architecture, Penn State Felecia Davis is an Assistant Professor at the Stuckeman Center for Design and Computation in the School of Architecture and Landscape Architecture at Pennsylvania State University and is the director of SOFTLAB@ PSU. She is also a PhD candidate in the Design and Computation Group in the School of Architecture and Planning at MIT. She received her Master of Architecture from Princeton University, and her Bachelor of Science in Engineering from Tufts University. While at MIT she has been working on a dissertation which develops computational textiles or textiles that respond to commands through computer programming, electronics and sensors for use in architecture. These responsive textiles used in lightweight shelters will transform how we communicate, socialize and use space. Felecia is interested in developing computational methods and design in relation to specific bodies in specific places engaging specific social, cultural and political constructions.

The objective of this project is to design and develop a textile wearable, such as a cloth backpack, scarf, or other clothing accessory, that performs two critical functions. First, the textile wearable can warn people about specific toxins, harmful bacteria, or viruses in their drinking water; second, it can filter those contaminants out of the water to make it drinkable. The Wearable Water Filter is intended for the various regions of the globe where access to clean water is a problem.

The wearable is to be made of a lightweight, washable, woven fabric structure using yarns and fibers in the warp and weft threads that change color, shape, or density to warn people that certain toxins are present. The wearable will also be connected to a smartphone application to warn other people in the area about the toxicity of the water. A second function of the wearable is to filter and cleanse the water by using specific kinds of yarns that eliminate or reduce toxins, harmful bacteria, and viruses. The yarns used to make the wearable textile may also be coated with different finishes that can eliminate or neutralize these contaminants depending upon the region. The intention is for the Wearable Water Filter to be woven with yarns of different properties, and coated with different elements, depending upon the toxins, harmful bacteria, and viruses causing disease in specific regions. The nature and function of the wearable as an article of clothing may change depending upon the region and the cultural practices of its people.

This project has been supported by the 2015 College of Arts and Architecture Faculty Grant Program.




Responsive Textile Panels In collaboration with Niloufar Kioumarsi, Researcher, Architecture

The responsive textile panels project was developed to help us learn about people’s responses to computational textile expression. The question we have been interested in asking in this phase of the project is: what emotion gets communicated back from the textile expression? If designers could begin to understand what various textile expressions communicate, and what computational textiles communicate in transformation, then it would be possible to understand more clearly the role that the texture of a computational textile plays in communicating emotion through a computational object. Textiles could be used as a non-verbal way of communicating with people who are unaware of their emotions and use other cues to establish their relationship to the world at large. The project has been developed at the scale of a wall or wall panels, which could be understood as a wall or divider in a room. The completed project will consist of four wall panels, each of which would be actuated, or transform, in time.




Bromma Belvedere: Projecting the Performance of Noise Abatement Landscape Infrastructure TAWAB HLIMI Visiting Assistant Professor of Landscape Architecture, Penn State Tawab Hlimi is a Visiting Assistant Professor at the Pennsylvania State University. He was a practitioner of landscape architecture and an educator prior to joining the faculty. A graduate of the University of Toronto, his design, research, and teaching now overlap in the fields of ecology, infrastructure, and urbanism. Tawab has also held academic positions at the University of Illinois Urbana-Champaign as Visiting Designer in Residence. At Penn State, he teaches design studios on ecological infrastructure, and laboratories on landscape visualization and design communication.

Once peripheral, airport landscapes have fused with the expanding urban fabric of cities. The airport/city interface is an urban condition that brings to light the tension between local communities concerned about health impacts from noise pollution and a global economy driving the expansion of airport infrastructures and operations. The Bromma Kyrna neighborhood, interfacing with Bromma Stockholm Airport, is typical of this contested urban condition. Bromma Belvedere proposes to mediate this conflicting adjacency through an earthen berm that is both pragmatic and poetic, mitigating noise at ground level and revealing the sublime beauty inherent in the operational scale and technical ingenuity of the airport landscape from an elevation of 15 m above the runways. The design of the landform was generated through collaboration between the disciplines of landscape architecture and mechanical engineering. Through an iterative design process, a feedback loop between design projection and design performance was established, wherein metrics of performance derived via simulation in CADNA (Computer Aided Noise Abatement) software influenced successive design iterations, resulting in the final landform design, which is projected to significantly reduce noise pollution from aircraft jet engines to less than 30 dBA, an acceptable range for outdoor living areas. The performance of the landform is projected to increase over time through processes of ecological succession tending towards an evergreen forest of pine and spruce trees.




Reverse Engineering: Negotiating Design Projection with Performance via Stormwater Modeling

Reverse Engineering is a green infrastructural design strategy to resuscitate the channelized Boneyard Creek flowing through the twin cities of Champaign-Urbana, particularly the stretch of creek flowing through the engineering campus of the University of Illinois Urbana-Champaign (UIUC). The design projects the emergence of a watershed-wide network of green infrastructure (GI) reducing runoff volume and peak flows, thus rendering plausible the morphological transformation of the Boneyard Creek from hard engineered channel to terraced and soft riparian green infrastructure, facilitating the emergence of diverse native floral and faunal communities. To test this design projection, a particularly flood-prone subwatershed of the Boneyard Creek situated on the UIUC campus was isolated and modeled on the EPA’s Storm Water Management Model (SWMM) platform. A bioretention network receiving stormwater runoff from impervious surfaces was proposed for this subwatershed. Despite their low rate of infiltration, native clay soils were preserved in order to support indigenous floral and faunal communities. Under the SWMM simulation, it was discovered that the GI systems in the climatic context of Champaign-Urbana perform best for long-duration, high-frequency storms such as 2-year, 24-hour storms. In conclusion, metrics of design performance derived from modeling render plausible the vision of a high-performance creek corridor.




A. William Hajjar’s Air-Wall Project: Studying an Early Example of Double-Skin Façades Ute Poerschke, Associate Professor of Architecture, Penn State Henry Pisciotta, Associate Librarian, Penn State Moses Ling, Associate Professor of Architectural Engineering, Penn State David Goldberg, Practitioner Instructor of Landscape Architecture, Penn State Laurin Goad, PhD. in Art History Candidate, Penn State Mahyar Hadighi, PhD. in Architecture Candidate, Penn State

In the late 1950s and early 1960s, architect A. William Hajjar presented several designs with double-skin facades. He convinced the Pittsburgh Plate Glass Company (today PPG) to fund a four-story test building, with a two-story double-skin facade on all four sides, to test the potentials of this facade technology. The test building was erected at the Pennsylvania State University, and several set-ups and measurements were pursued during the following years. The project studies the anticipated benefits of the system as stated in the grant proposals, along with the measurements taken during several test phases. These will be compared with computational simulations in order to verify the validity of the developed facade and to provide insight into the reasoning behind the test runs.




Collaborative Design Studio (CoLab) Nathaniel Belcher, Professor of Architecture, Penn State Ross Weinreb, Instructor of Architecture, Penn State David Goldberg, Practitioner Instructor of Landscape Architecture, Penn State Shivaram Punathambekar, Master of Architecture Candidate, Penn State Rohini Raghavan, Master of Architecture Candidate, Penn State Jennifer Gong, Bachelor of Architectural Engineering Candidate, Penn State

The 2016 CoLab groups students from Architecture and Architectural Engineering in five-person interdisciplinary teams to design a new 65,000 GSF / 250-bed residence hall and site on Penn State's Behrend Campus. An additional team of two architecture students serves as sustainable site and landscape design consultants to all teams. While the project is academic, the program is real, and the students are engaged throughout the project with the professional design team and a sustainability consultant. A core research area of this year's CoLab was Virtual Reality (VR), which has immense latent potential in the Architecture, Engineering and Construction (AEC) industry. Some leading AEC firms have begun experimenting with VR as a presentation tool and for training purposes, but it can also be used by designers as a collaborative design development tool. Offering a higher degree of immersion than any other visualization tool, the VR process was used by the teams for clash detection and model checking, and also to inform decisions on the scale, proportion and enclosure of the built environment.




Digital Technology Augmenting Expression and Experience of Green Urban Spaces Clarissa Ferreira Albrecht, PhD. Candidate, Penn State (Re)conceptualizing Wire-Bending in Design: An Exploration of Craft, Computational Making & Digital Fabrication Vernelle A. A. Noel, PhD. Candidate, Penn State Robotic Motion Grammar Ardavan Bidgoli, PhD. Candidate, Penn State


Mass Customization of Ceramic Tableware Eduardo Castro E Costa, PhD. Candidate, University of Lisbon Understanding the Urban Structure of Informal Settlements: Combining Techniques of 3D Scan and Shape Grammar Debora Verniz, PhD. Candidate, University of Lisbon Automated Multi-Material Fabrication of Buildings Flavio Craveiro, PhD. Candidate, University of Lisbon Mechanizing Rammed Earth: Making New Earth Construction Viable in the U.S.A. Zoe Bick, M.Arch, Penn State




Digital Technology Augmenting Expression and Experience of Green Urban Spaces CLARISSA FERREIRA ALBRECHT PhD. Candidate, Penn State

More than 50% of the world's population currently lives in urban areas, and the transition from rural to urban areas is a continuing trend. Cities are therefore growing in geographical area, number of buildings, and population. In this context, green open spaces in cities are very important, as they improve the quality of life of urban dwellers and positively impact the ecology. But many people are still unaware of the benefits provided by green spaces and of the importance of nature conservation; they live indoors, away not only from nature but also from other people, desiring control, comfort, and independence (Pozo Gil, 2013; Bell, 2012).

Computational Design and Robotic Fabrication - ICD / ITKE Research Pavilion 2011 Courtesy of Institute of Computational Design (Prof. A. Menges)

The world needs cities with more green open spaces and people who are aware of the importance of these spaces, both for balancing the natural ecosystem and for themselves. People must feel that they are part of nature, as well as social and cultural beings. For this reason, green open spaces, as landscapes, need to address not only ecology but civilization and culture (Meyer, 2008). As we live in a culture that is both green and digital, green open spaces may incorporate digital technologies as a way of engaging users with the different aspects of their lives and culture in association with nature's presence. Accordingly, digital technology may augment the expression of the landscape and the experience of the user moving through that landscape in an urban context. Light Interactive Installation - Arborescence - Loop.ph


Responsive Installation - Light Drift - MY Studio Courtesy of Höweler + Yoon Architecture


(Re)conceptualizing Wire-Bending in Design: An Exploration of Craft, Computational Making & Digital Fabrication

Traditional wire-bending

VERNELLE A. A. NOEL PhD. Candidate, Penn State

This research aims to reinvent, challenge, and reconfigure the conception of wire-bending and dancing sculptures in the Trinidad Carnival to investigate how they may contribute to design computation and architecture. Through a series of case studies and projects, I explore traditional, computational, and digital processes in the design and fabrication of a mobile, lightweight structure built on traditional wire-bending principles. This project has implications for design both inside and outside the discipline of architecture. Wire-bending is a "specialized art, combining elements of structural engineering, architecture, and sculpture" to create two-dimensional (2D) and three-dimensional (3D) forms. The practice developed in the 1930s in the carnival of Trinidad and Tobago. In this craft, wire and other thin, flexible strands of material are bent with hand tools and assembled to create 2D and 3D structures. Although the practice is called wire-bending, wire is not the only material used; additional materials include aluminum flats, fiberglass, and PVC rods. This indigenous craft practice, however, is dying due to the slow rate of transmission of its knowledge, its labor-intensive nature, and the younger generation's love of technology (Noel, 2015).

Design chosen for development

Point Line Rapid Prototyping (PLRP)

3D printed output from the PLRP


Dancing sculpture in Trinidad Carnival made using wire-bending techniques

Full-scaled installation


Robotic Motion Grammar ARDAVAN BIDGOLI PhD. Candidate, Penn State

This research proposes a rule-based analytical and generative system that lets designers describe robotic characteristics and combine them with the design process, establishing an active dialogue between the robotic system and the design process. The grammatical system can codify design and making procedures to form a framework for understanding and controlling the robot's design space and for describing its complex geometrical vocabulary with simple rules. The research proposes a structure founded on the basics of shape grammar theory but tailored to fit the context of architectural robotics. The motion grammar seeks to address the 3D harmonic movements of a robot, its mounted tool, and the material choreographically, suggesting motion as a generative vehicle of exploration in both designing and making. To do so, the grammar was defined in terms of its basic vocabulary, morphemes, lexicons, and combination rules. The grammar defines the possible range of changes in the properties of directrices (morphemes) to generate basic motions (the lexicon) and categorizes the methods for combining these motions (rules) to generate more complex motions.



Two basic parameters can affect the form of directrices: first, the degree of the NURBS curves, and second, their relation in space. By changing the degree, they can shift from a line (or polyline) to a smooth curve; in space, they can be coplanar, parallel, intersecting, or skew. By changing the degree and position of a curve and the position of its control points, one can generate the basic elements of the grammar. In this motion grammar, a rule such as A ---> t(A) defines a movement in which A is the state of the robot-tool assembly in a specified pose and time, and t(A) represents the new state obtained by applying the transformation t to the original state. Motions can operate on each other to generate more complex motions based upon a set of defined combination rules. Sequential rules join two vocabulary members into a new motion. Members can be joined either by the generator or by the directrix. Sharing the generator results in a continuous motion, whereas sharing a directrix requires the robot to finish the first motion, find its initial pose for the second motion, and then perform it to complete the procedure.
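As a minimal sketch (not the authors' implementation), the rule A ---> t(A) can be modeled by representing the state A of the robot-tool assembly as a homogeneous 4x4 pose matrix and a motion as a transformation applied to it; the sequential combination rule then composes elementary motions. All function names here are illustrative.

```python
import numpy as np

def rotation_z(theta):
    """Elementary motion: rotation about the z-axis as a 4x4 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def translation(dx, dy, dz):
    """Elementary motion: translation as a 4x4 homogeneous matrix."""
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

def apply_rule(state, transform):
    """The rule A ---> t(A): the new state is transformation t applied to state A."""
    return transform @ state

def sequential(motions, initial=None):
    """Sequential combination rule: chain elementary motions into a more
    complex one, returning every intermediate pose of the tool assembly."""
    state = np.eye(4) if initial is None else initial
    trace = [state]
    for t in motions:
        state = apply_rule(state, t)
        trace.append(state)
    return trace
```

For example, `sequential([translation(1, 0, 0), rotation_z(np.pi / 2)])` yields the sequence of poses traversed by a translate-then-rotate motion.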


Mass Customization of Ceramic Tableware EDUARDO CASTRO E COSTA PhD. Candidate, University of Lisbon; Visiting Scholar, Penn State

This research is concerned with the mass customization of ceramic tableware. Its objective is to allow users to personalize their tableware sets, namely by determining the shape of their elements. Mass customization was anticipated as an evolution of mass production by Alvin Toffler (Toffler 1971; Toffler 1984). As a production paradigm, it combines elements of both craft and mass production. As in craft production, it features a high degree of flexibility in its processes; it builds to order rather than to plan, and it results in high levels of variety and personalization. As in mass production, mass customization generally produces in large quantities, has low unit costs, and may rely on automated production (Pine 1993). At a time when consumers are becoming ever more demanding and differentiation ever more important, a correct implementation of the mass customization paradigm can boost both customer satisfaction and profit (Bernard et al. 2012). Mass customization requires the articulation of design and production systems using new technologies (Duarte 2008). In the mass customization framework being developed, the design system sets the rules for defining the formal and decorative aspects of the tableware, using shape grammars and parametric modeling. The system foresees two levels of usage: one by designers, who define a space of possible design solutions, and another by end users, who may specify particular tableware sets within the space defined by the designers. The production system enables the materialization of the customized designs using digital fabrication technology.
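By way of illustration only (the actual framework uses shape grammars and parametric modeling in a dedicated design environment), a single parametric profile for a plate might expose a handful of parameters that designers bound and end users adjust; the function, its parameters, and the parabolic well shape below are all hypothetical.

```python
def plate_profile(radius, depth, rim_width, samples=20):
    """2D profile points (r, z) for a plate, meant to be revolved about
    the z-axis. radius, depth, and rim_width are the customization
    parameters an end user would adjust within designer-set bounds."""
    if not 0 < rim_width < radius:
        raise ValueError("rim must be narrower than the plate radius")
    well = radius - rim_width  # radius of the plate's central well
    points = []
    for i in range(samples + 1):
        r = well * i / samples
        z = depth * (r / well) ** 2  # parabolic well (an illustrative choice)
        points.append((r, z))
    points.append((radius, depth))  # flat rim at full depth
    return points
```

The designer's role corresponds to fixing the profile family and its bounds; the end user's role corresponds to choosing values within those bounds, after which the production system would revolve the profile and send the solid to digital fabrication.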




Understanding the Urban Structure of Informal Settlements: Combining Techniques of 3D Scan and Shape Grammar

Favela Dona Marta - case study of the research

DEBORA VERNIZ PhD. Candidate, University of Lisbon; Visiting Scholar, Penn State

The goal of this research is to understand the urban structure of informal settlements in order to develop a tool for sustainable urban planning. Many people hold negative opinions about informal settlements, but their inhabitants list several positive aspects: self-constructed spaces reflect the desires and lifestyles of their residents, take site features into account, and are affordable. The research focuses on the urban structure of a specific informal settlement used as a case study. It does so by developing new uses for two computational tools that have long been used in architecture and urbanism: 3D scanning, to survey existing structures, and shape grammars, to describe the bottom-up growth of informal settlements. 3D scanning techniques have been widely used in the restoration and conservation of built heritage; however, they are also suitable for collecting spatial data from large, complex areas such as those found in informal settlements.



Shape grammars were invented more than forty years ago. They provide the theoretical and technical apparatus for developing algorithms that describe design and building processes. Initially they were used in the generative specification of new designs in painting and sculpture and later on they were used to analyze existing designs in other fields like architecture and urban planning. In urban planning, recent research has targeted both generative and analytical grammars. This research aims to use grammars to understand the formal structure of informal settlements by finding compositional rules that describe the generation of their form, define principles for their requalification and, ultimately, develop guidelines for the design of planned, sustainable settlements in similar topographic conditions.

Images generated through photos to show VR approaches

Identification of pathways, green spaces, public spaces & transportation


Automated Multi-Material Fabrication of Buildings FLAVIO CRAVEIRO PhD. Candidate, University of Lisbon; Visiting Scholar, Penn State

The construction sector is under increasing pressure to reduce the use of resources, environmental impacts, and costs, but recent advances in technology have created opportunities to achieve such goals. This work proposes a new system based on digital design and manufacturing technologies that permits the efficient construction and operation of buildings. The system allows for the design and production of customized building elements made of heterogeneous composite materials, with performance optimized at each point. The spatial variation in material properties, namely in composition and microstructural gradient, can be customized to specific loading conditions. These functionally graded materials (FGM) replicate the behavior of natural materials, improving structural efficiency while reducing the amount of material used. The proposed system integrates structural analysis and parametric design models to provide a tool that optimizes the material composition of construction elements. The Finite Element Method (FEM), embodied in Simulation by Autodesk, is used to analyze the structural behavior of elements under varied loading conditions. The resulting data is then used by an algorithm developed in Grasshopper, a Rhino plugin, which defines distinct densities and material compositions according to structural requirements, taking into account that zones with higher Von Mises stress require stronger materials, thereby preventing structural failure through exceedance of the material's yield strength.
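The stress-to-material mapping described above can be sketched as a simple per-element rule; the safety factor, material names, and yield strengths below are illustrative placeholders, not values from the actual Grasshopper algorithm.

```python
def assign_material(von_mises, materials, safety=1.5):
    """Choose the weakest admissible material for one element: the
    lowest-yield material whose yield strength still exceeds the local
    Von Mises stress times a safety factor."""
    for name, yield_strength in sorted(materials.items(), key=lambda kv: kv[1]):
        if von_mises * safety <= yield_strength:
            return name
    # No material is strong enough: fall back to the strongest available.
    return max(materials, key=materials.get)

# Illustrative palette of printable materials and yield strengths in MPa.
palette = {"soft": 20.0, "medium": 45.0, "rigid": 80.0}

# Per-element Von Mises stresses, as exported from an FEM analysis (MPa).
stress_field = [5.0, 28.0, 51.0]
grading = [assign_material(s, palette) for s in stress_field]
```

Choosing the weakest admissible material at each element is what yields the graded composition: stiff, dense material only where stresses demand it, and lighter material everywhere else.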


Several concept models of building walls have been designed and optimized by importing CAD models into the simulation software, applying distributed loads, setting boundary conditions as fixed joints, and generating FEM meshes. A concept wall was selected and produced by additive manufacturing (AM), namely on a Connex3, a PolyJet 3D printer that uses inkjet technology to add successive layers of a liquid photopolymer cured by a UV lamp immediately after deposition. In this case study, two materials -- one transparent, representing the material, and the other opaque, depicting voids -- were used for better visualization of the spatial distribution and size of porosity. This control over geometry and material composition permits the creation of a graded material property distribution and increases the performance of building elements, which cannot be achieved using traditional CAD tools and conventional AM workflows.

Optimized fabrication strategy

A conceptual approach was developed to define the form and material composition of building elements according to functional requirements. These components are then fabricated using a multi-material printing method. Applying this FGM concept to lightweight building components will allow the production of resource-efficient graded components tailored to specific loading conditions, thereby minimizing waste generation, CO2 emissions, and resource consumption.

Conceptual wall physical prototype

Concept models of building walls


Mechanizing Rammed Earth: Making New Earth Construction Viable in the U.S.A. ZOE BICK M.Arch, Penn State

In the environment of construction-technology development circa 2016, common building materials such as wood, brick, masonry, and concrete – along with their associated tools and processing – are quickly evolving toward semi-automated and fully automated potentials in a post-industrial phase of invention and advancement. Recent work applying robotics to the building of steel bridges by MX3D and the use of a 3D printer to create walls and houses by Yingchuang New Materials are two of the many indicators of where the construction world is heading. On the other, more primitive end of the technology spectrum sit the tools and processes associated with rammed earth construction in the US. While these tools and methods remain "true" to a traditional, authentic manner of rammed earth construction (which does have a DIY value), they cannot meet modern demands for mass building in the US – demands largely governed by efficiency and economy and delivered by ever-evolving technologies. The high level of skilled labor and the high cost of formwork (design, materials, and assembly) associated with the traditional approach result in construction expenses that are prohibitively high, despite the economic accessibility of the raw material, which is local in the extreme. In addition to facilitating greater financial accessibility, the mechanization of rammed earth construction processes could yield a new perspective on the applicability and aesthetics of rammed earth as a contemporary building material. Introducing elements of the machine into the construction of rammed earth architecture would also allow a critical acceleration of the building process – an acceleration that introduces new potentials and improvements in construction safety, production quality, and the easing of environmental impact. In this thesis, three distinctly different forms of mechanized rammed earth are represented and explored through the development of three machines – Monument, Mass, and Needle. These three machines are all at their beginnings, and so currently live in the prototype stage. With the aid of the Stuckeman Center for Design Computing, the third machine, Needle, has gone through one iteration and moved back into digital representation and design.





Notes from the Symposium PETER LUSCH Assistant Professor of Graphic Design, Penn State

In his 1969 book, General System Theory: Foundations, Development, Applications, Austrian biologist Ludwig von Bertalanffy observed a shift in scientific attitudes and conceptions after World War II, and championed a trans-disciplinary approach to synthesize modern computational technology, cybernetics (the study of regulatory systems), and biology. Under a trans-disciplinary work model, each team member becomes sufficiently familiar with the concepts and approaches of his or her colleagues to blur disciplinary bounds and enable the team to focus on the problem as part of broader phenomena. Early in the text, Bertalanffy demonstrates an acute awareness of the complexities brought on by technological development and accepts the reality of managing these complexities by suggesting a re-orientation of scientific thinking (Lusch, 2015). Throughout the SCDC Design + Computation Symposium's robust afternoon of shared work and conversation, this notion of systemic trans-disciplinary approaches became a recurring theme in the work of, and between, our panel of scholars. Skylar Tibbits works with the Self-Assembly Lab at MIT, a site of trans-disciplinary activity between science, engineering, and design that seeks to identify where in the natural world self-assembly occurs, which tools can harness it, and at what scales it may be implemented. As a landscape architect, Justine Holzman works with computation within the natural world and utilizes the taxonomy of scientific fields as a means of publishing her research on physical models of water flow and sediment. As a graphic designer, Zachary Kaiser combines an interest in the philosophy of design with a technical knowledge of programming languages that allows him to work with developers in creating Web sites for the consumer market. Those fortunate enough to have attended the 2016 Symposium were treated to insightful talks by our three panelists and were participants in meaningful discussions with each other. The following are my notes from that afternoon, which are a humble attempt to capture the topical arc of this rich dialogue.

ON DEFINING COMPUTATION IN THEIR WORK After earning his professional degree in architecture, Skylar Tibbits developed an interest in design computation based on the inquiry of how code becomes a new medium that transforms design. Computational programming languages (code) and open-source software continue to give designers the means to produce work in ways they otherwise could not. The combination of code and machines has accelerated the democratization of tools (the CNC (computer numerical controlled) router and the 3D printer are but two examples), which has transformed nearly every industry. Tibbits applies his notion of computation to research at the Massachusetts Institute of Technology's Self-Assembly Lab (MIT-SAL), where he also serves as director. There he simulates complex scenarios and physical phenomena and analyzes structural and environmental scenarios in the built world. His keynote presentation this afternoon demonstrated three broad scenarios. The first was self-assembly, "A process by which disordered parts build an ordered structure through only local interaction." This is accomplished with select materials designed into strict geometric forms for smart interactions that are activated by passive energy sources. Next, programmable materials are "Physical, fabricated materials that have the ability to change precise form and/or function by design." This form of assembly contains the same ingredients as self-assembly, only this inquiry also studies how mechanics "embed information and capability" into the materials. The third scenario, granular jammable materials, are "Structurally disordered particles that can instantly and reversibly transition from a liquid-like state into a solid state." Here Tibbits' work involves replacing the current technology, which requires constant vacuum within a sealed flexible bladder, with tension from steel cables and compression of 3D-printed rock. Researchers like Justine Holzman work at the interstices between the extremities of the natural world and the limitations of the computational. She has conducted research in Responsive Modeling (2015) with Bradley Cantrell at the Responsive Artifacts and Environments Lab at the Harvard Graduate School of Design. This work uses physical models of deltaic landbuilding to study the effects of water flows on sediment in meandering river channels and backwater swamps. They are aided in modeling their substrates by CNC motors affixed with tooling made of laser-cut acrylic. In the field they are not otherwise capable of computing digitally what can be computed physically, because a computational algorithm does not yet exist to represent the complexities of water flow through sediment. For Holzman, computational agency requires a combination of physical model monitoring, distributed sensor networks, mapping, and mechanical processes such as scooping and weighing sediment. In addition to her fieldwork she has explored the subject of responsive spatial works built in the environment in the book Responsive Landscapes: Strategies for Responsive Technologies in Landscape Architecture (Routledge, 2016). Throughout this survey of technological applications from around the world that question the very concept of 'nature', broad categories such as "Compress" investigate the manipulation of temporal scales to interrupt and decipher change over time, while "Modify" introduces projects with the potential for responsive technologies to reshape environments. Zachary Kaiser defines computation from a historical point of view, that of computing as a means of calculation. His own research follows design theorist Herbert Simon's notion of
design as "Courses of action aimed at changing existing situations into preferred ones." Kaiser interrogates these 'preferred situations' in the designed world and inquires about the audience for which they are preferred, and why. He engages with computation on an epistemological level so that, as a designer, he may engage in dialogue with collaborators such as programmers. This is in reference to his professional practice as a Web designer with clients such as Signet Education, Tigers and Bears restaurants, and ISM travel and lifestyle marketing. His exhibition record includes art works exploring algorithmic data, such as Whisper (2013-15), which reinterprets a gallery-goer's spoken voice as a scrambled keyword search on Amazon.com. As a full-time professor in graphic design, Kaiser also investigates how automated processes such as scripting languages and software affect the creative agency of his students. This puts into question the capacity for the students' own making, and the means and media with which they can experiment and create. On the matter of creativity, he is generating discourse about speculative computational projects where, "The role of creativity is to study existing systems that lead to projects, which question those systems in the first place" (Kaiser, 2016).

ON THE CRITICALITY OF COMPUTATION The Q&A segment following the presentations by Holzman and Kaiser brought out wonderful threads of commonality between their respective disciplines, landscape architecture and graphic design. One such point was introduced by Mehrdad Hadighi, Chair of Integrative Design and Architecture Department Head, concerning their shared high level of criticality towards computation. Does this suggest a renaissance of materiality and physical models? Does it frame an unfavorable view of large-scale projects involving computation? Speaking generally, Holzman responded with emphasis on the
importance of a "critical eye," catalyzed by the rapid expansion of applied computation within her field. We may presume she is rejecting the imprudent 'armchair critic' view, which asserts criticality only toward the failures of computation, in favor of the 'embedded correspondent' view, where criticism is asserted in real time with the intent of advancing the discourse. In response to criticism of computation in general, she warned that with a "High level of ability with computation we risk over-asserting the agency that defeats how we think about common sense solutions." At a later point in the conversation we found a connection to this statement when Holzman paraphrased political theorist Jane Bennett's Vibrant Matter: A Political Ecology of Things (2010): landscape architecture attempts to design something intentional in the landscape, in a space where unintentional things occur. This notion has led Holzman toward an active inquiry into autonomous distributed technology in the landscape, and how such technologies can be embedded with a set of logics or behaviors (echoing the embedded-materials work of Tibbits). She questioned what the outcomes would be if humans were not originating the decisions. A concrete example offered by Holzman was the current paradigm in agricultural applications, where precise technologies are deployed for gains in efficiency and crop yields. Could we instead shape the landscape to mimic the former cattle drives on prairie landscapes (which did not utilize fencing) and apply that to a 'virtually fenced' agricultural landscape? Here Holzman returned to her notion of the 'common sense solution' as a means of rethinking livestock as ecological agents, using animal movements to bring animal decisions back into the process of farming, rather than human- or machine-imposed decisions.
Kaiser's approach, by contrast, stems from the philosophy of human subjectivity, specifically how the graphic design field interfaces explicitly with advertising and its implications for the
vernacular consumer culture. This has led him to an ongoing ethical inquiry into the advertising industry's relationship to technology and the likely implications of computation therein. He surmised that the interfacing of technology and graphic design happening now is unprecedented in the history of the field, and is precisely where applying criticality is important. He questioned the desirability of these points of interface between consumer culture and computation, suggesting that consumers assert their own criticality toward the deliverability of patented systems in the market. Kaiser's symposium presentation exemplified two such systems as targets of his criticism, known in the consumer vernacular as 'smart' appliances: the SleepNumber® bed and Pantelligent® cookware. Ultimately, by opening this discourse the consumer would hold greater influence over these systems (i.e., products), determining whether they are publicly acceptable, or even desirable. Later in the conversation he paralleled Holzman's criticism of computational limitations – albeit within graphic design – in relation to 'false validation' by eye tracking software (ETS) used in testing human-computer interactions, products, and video game designs with anticipated end users. In the case of testing for Web site optimization, he explained that ETS assumes that an end user who is optically engaged with visual elements on the site equates to the success of those elements and, ergo, the site itself. Yet ETS neglects the user's cognition during the testing process. Kaiser wondered about the danger that exists in the false validation of ETS, considering that test subjects may be thinking about something else entirely during the test. Or perhaps, cognitively, they are disengaged from the Web site altogether? The basis for research at the Self-Assembly Lab is itself overtly critical of the inefficiencies current in the built environment at the levels of manufacturing, construction, and infrastructure.
We should fully disclose that Tibbits was not a participant in the Q&A conversation
with Holzman and Kaiser; nonetheless, his insights from the keynote presentation given earlier in the afternoon are applicable here. Apropos of each speaker's respective desire to change current paradigms, Tibbits' work is about transfiguring a built environment of "Complex things, built with complex parts, assembled in complex ways" (Tibbits, TED 2011). Computation then provides the underpinning capability for the research and manufacture of nano-scale programmable adaptive materials. For instance, in the 4D-printed systems produced at MIT-SAL, multi-material parts arrive 'off the 3D printing bed' with the ability to transform from one shape to another – effectively building themselves. This system stems from an initial critique of robotics, which indeed provides repeatability and precision, but at the expense of wiring, complex electro-mechanical components, and frequent maintenance. Tibbits had previously addressed, in his 2011 TED Talk, a critique of the scalability of self-assembling 'robotic' components. The Macrobot and Decibot projects demonstrated how a complex network of mechanical sensors had to be embedded within every part of the assembly, not to mention the human involvement in physically constructing these robots (of eight and twelve feet in length). By shifting its thinking away from construction with robotics, MIT-SAL moved toward materiality capable of reconfiguration, replication, and self-correction. Tibbits' work is thus inherently a reductive design of the levels of complexity in current infrastructural systems. But the manufacture of components with self-assembly capabilities is not to be confused with the manufacture of 'smart things,' as Tibbits discussed in another TED Talk from 2013. It is, however, related to the topic of autonomous distributed technology previously mentioned by Holzman.
While 4D materials may appear to possess a high level of autonomy, these systems require a certain degree of human control too. The MIT-SAL Weather Balloon Project (2015), for instance, relies on a combination of mechanical energy (fans) and human-guided assembly to accomplish the construction of a large-scale frame structure. In this scenario, when the three-foot-diameter balloons are deflated from within a modular frame, a larger space-frame remains, a structure otherwise unwieldy to build or transport. This simulation also points to a future scenario of building in an extreme environment, such as outer space. The promise of self-assembly is a practical reduction of complexity as a means for implementation, at significant depth and breadth, in the industrial sector.

SOURCES

Holzman, Justine, Zachary Kaiser, and Skylar Tibbits. Stuckeman Center for Design Computing (SCDC) Design + Computing Symposium. Penn State University, 2016.

Lusch, Peter. "The Systems Thinkers: New Critical Competencies in Graphic Design." Proceedings of the University and College Designers Association (UCDA) Design Education Summit, May 18-19, 2015, South Dakota State University, Brookings, SD. 2015. 43. Print.

Tibbits, Skylar. "Can We Make Things That Make Themselves?" March 2011. Available at: www.ted.com/talks/skylar_tibbits_can_we_make_things_that_make_themselves?language=en. Accessed July 18, 2016.

Tibbits, Skylar. "The Emergence of '4D Printing.'" February 2013. Available at: www.ted.com/talks/skylar_tibbits_the_emergence_of_4d_printing?language=en. Accessed July 18, 2016.

This publication is available in alternative media on request. The University is committed to equal access to programs, facilities, admission, and employment for all persons. It is the policy of the University to maintain an environment free of harassment and free of discrimination against any person because of age, race, color, ancestry, national origin, religion, creed, service in the uniformed services (as defined in state and federal law), veteran status, sex, sexual orientation, marital or family status, pregnancy, pregnancy-related conditions, physical or mental disability, gender, perceived gender, gender identity, genetic information, or political ideas. Discriminatory conduct and harassment, as well as sexual misconduct and relationship violence, violate the dignity of individuals, impede the realization of the University's educational mission, and will not be tolerated. Direct all inquiries regarding the nondiscrimination policy to Dr. Kenneth Lehrman III, Vice Provost for Affirmative Action, Affirmative Action Office, The Pennsylvania State University, 328 Boucke Building, University Park, PA 16802-5901; Email: kfl2@psu.edu; Tel 814-863-0471. U.Ed. ARC 16-15

