Bika Rebek


Tools for Things: An illustrated study of software used in architectural design concerning its origins, technical fundamentals and associated discursive and theoretical questions

Advisor: Laura Kurgan




ABSTRACT

The premise of this book is to consider software, the mediating layer between designer and design, as a subject of study in its own right. My research shows that each software can be traced back to a formative discipline as well as a fundamental technical principle. These origins are situated in two major areas: visualization and industrial manufacturing. While most software is written by multiple authors over an extended period of time, specific figures have had particular relevance to its narrative in their role as inventors or early adopters, appropriating software for architectural purposes. These characters, along with the technical fundamentals, are an instrumental part of this analysis: they do not just affect technical aspects but are implicitly part of the theoretical discourse surrounding them. Starting from a conceptual positioning of tools in the design process, each chapter considers a software as a framework for discussing facets of the architectural discourse pertaining to it. Focusing on four design programs currently popular in architectural practice, Tools for Things weaves an intertwining narrative of disciplinary influence between architecture and other fields. The four programs are studied as separate entities to reveal their unique histories, characteristics and theoretical implications; a concluding chapter addresses their many overlaps and adjacencies, ultimately emphasizing the cross-connectedness of software through multi-platform workflows.


THANK YOU Greg Lynn Erik Carver Matt Choot Hani Rashid Stefan Ritter Marty Wood Jose Araguez Mark Wigley Ursula Rebek Felicity Scott Laura Kurgan Mark Wasiuta Helena Rebek Coco Huemer Valeria Meiller Marti Amargos Agustin Schang Alissa Anderson Sigmund Lerner Peter Ily Huemer Reinhold Martin Damjan Minovski Scott McCagherty Lise Anne Couture


INDEX

Abstract
Acknowledgments
Introduction: the tool as medium .... 1
1. AutoCAD: infinite scale and standardization .... 11
2. Rhinoceros: shipbuilding and flexibility .... 31
3. Photoshop: composite and collage .... 51
4. Maya: smooth transitions .... 77
Conclusion .... 99
Bibliography .... 103


1

Introduction

In order to look at software itself I have chosen four programs, to be analyzed from different perspectives: AutoCAD, Rhinoceros, Photoshop and Maya. The tools considered here were chosen according to a number of criteria: each is widely known and used in major architecture offices, and all are involved in the representation of conceptual ideas in the early stages of the design process. Another common trait lies in their difference: none of the programs was developed exclusively for architectural purposes, and they all originate from different fields. Architects tend to appropriate software from other disciplines, importing techniques and working methods along the way. The construction methods of large transportation vehicles, including ships, trains, cars and airplanes, have long been a source of inspiration for architects. Much early high-end software was designed for the complex requirements of manufacturing transportation devices, and some of the techniques developed there have become widely used in architectural design. The requirements of the shipbuilding industry in particular led to the use of NURBS geometry in Rhinoceros 3D, today widely used in architectural design.1 Vector representation was first used for military and civil engineering purposes2 and is today at the base of AutoCAD, an industry standard in architectural drafting.3 The second large group is the visualization and animation industry. The history of image-making in its broadest sense and architecture have common roots and share many tools, from the invention of perspective, to the technique of watercolor, to today's image-editing techniques in Photoshop. Special effects for movies have historically often

1 "We started the Rhino development in about 1992 as an AutoCAD plug-in to help a few of our marine design clients." Franco Folini, "An Interview with Robert McNeel, CEO of McNeel and Associates (Rhino 3D)", Novedge blog, March 14, 2007, accessed March 20th, http://blog.novedge.com/2007/03/an_interview_wi_3.html
2 David E. Weisberg, "Chapter 5: Civil Engineering Software Development at MIT" in The Engineering Design Revolution: The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, 2008, accessed April 28th, http://www.cadhistory.net/.
3 James Coppinger, CAD For The AEC World, About Tech, accessed April 28th 2015, http://cad.about.com/od/CAD_Basics/a/Cad-For-The-Aec-Industry.htm



been designed by architects, and today there is greater exchange than ever, with Maya and other animation software being used by both disciplines.4 A number of other fields have produced software used by architects but are not included here, for different reasons. Computational biology is an emerging branch producing work very useful for architectural design research, including a number of highly relevant algorithms.5 In terms of software, however, it has not produced notable products that are integrated into widely distributed design workflows in architecture. Another significant field that is bracketed out of this research is manufacturing software. While manufacturing techniques play a very important role in the design process, they usually do not allow for the ground-up creation of design concepts. Manufacturing software for milling, laser cutting or robotic fabrication always operates in relation to a physical output, and it therefore faces different constraints than the software discussed here. Other applied design disciplines, like fashion or industrial design, have been slower at appropriating digital technologies and manufacturing techniques in general. To this day, many fashion designers work with 3d-modeling experts rather than modeling themselves.6 Why, then, are architects so prone to adapting techniques from other fields? I would like to suggest two answers to that question. Architects have been involved with media creation themselves since the 1970s, and they have established a strong academic and professional support infrastructure for these activities.

A figure well known for combining computation and architecture early on is Nicholas Negroponte, the founder of the MIT Media Lab in

4 Erik Butka, Meagan Calnon & Kathryn Anthony, "Star" Architects: The Story of 4 Architects who Made It in Hollywood, published June 19th 2013, http://www.archdaily.com/388732/star-architects-the-story-of-4-architects-who-made-it-in-hollywood/
5 Schools like the AA in London in particular have been focusing on integrating genetic algorithms in architecture with an 'Emergent Technologies and Design' master. People working with genetic algorithms include Achim Menges, Alissa Andrasek and many others.
6 For example, fashion designer Iris van Herpen often works with architect Julia Koerner to create her intricate designs; see: Iris van Herpen, Julia Koerner and Materialise Reunite for the Biopiracy Collection, Materialise, March 6, 2014, accessed May 4th, http://www.materialise.com/press/iris-van-herpen-julia-koerner-and-materialise-reunite-for-the-biopiracy-collection



the 1970s.7 To this day the lab supports computational research and explores the opportunities provided by emerging technologies.8 A less known example, which I uncovered while researching this book, is the story of Bill Kovacs. Educated as an architect, he worked for many years in SOM's computer lab in Chicago in the 1970s before moving to Hollywood to work for the animation industry, eventually becoming the creative director behind Maya.9 These figures all combine technical with artistic skills, and they themselves design new software or other media. In terms of institutional support, the MIT Media Lab is a progenitor of this kind of effort. Now accompanied by countless institutions, including master programs, labs and workshops dedicated to computational culture in architecture, academia has created a support system for digital experimentation in architecture. Perhaps the most important reason for architects to use software more extensively than other fields is the nature of the work itself: while industrial design still relies heavily on the mass production of identical objects, buildings tend to be unique. In architecture, the ability to optimize workflows through software presents the opportunity for large financial benefits.

SOFTWARE STUDIES

Computing has pervaded the world we live in and has produced a number of emerging academic disciplines in the process. Software is itself a new kind of medium, with still largely undefined structures of critique and evaluation. A legacy of scholars and media theorists, however, has made significant contributions to software studies and associated fields, and the introduction below can deliver only an incomplete account. Scholars writing about software can be divided into three groups: media theorists including Marshall McLuhan and Friedrich Kittler, technical innovators such as Ivan Sutherland and Alan Kay, as well as a

7 Nicholas Negroponte, "The Origins of the Media Lab", in Jerry Wiesner: Scientist, Statesman, Humanist: Memories and Memoirs, 149-156, Cambridge, Mass.: MIT Press, 2003.
8 MIT Media Lab: Quick Facts, accessed May 8th 2015, http://www.media.mit.edu/about
9 David E. Weisberg, "Chapter 13: IBM, Lockheed and Dassault Systèmes" in The Engineering Design Revolution: The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, 335-37, 2008, accessed April 28th, http://www.cadhistory.net/.



younger generation of what I call 'generalists', including Lev Manovich and Jussi Parikka. Marshall McLuhan was the first to extensively study the history and theory of contemporary media. He analyzed a number of technologies of his time, such as the telephone or the typewriter, making the point that 'all media are fragments of ourselves extended into the public domain.'10 Even if McLuhan did not talk directly about software, his close attention to media technologies laid the foundation for media studies today. He famously coined the phrase "the medium is the message,"11 in reference to how the transmission of something becomes more important than the content of the message itself. Friedrich Kittler pronounced the death of writing concurrent with the birth of the digital in his essay "There Is No Software" in 1992.12 Kittler built upon McLuhan's arguments and extended them to include the new media forms emerging in the 80s and 90s. Critiquing the prescriptive determinacy of software, he described how the physical materiality of text became encoded in chips, zeroes and ones, or, in Kittler's own words: "The last historical act of writing may well have been the moment when, in the early seventies, the Intel engineers laid out some dozen square meters of blueprint paper in order to design the hardware architecture of their first integrated microprocessor."13 Ivan Sutherland, with his program 'Sketchpad', stands at the beginning of a long heritage of inventors: academics and scientists who have simultaneously designed and written about design software.14 He foresaw many of the graphic developments

10 Marshall McLuhan, Understanding Media: The Extensions of Man (Cambridge, Mass.: MIT Press, reprint 1994), 295.
11 Ibid., 7.
12 Friedrich Kittler, "There Is No Software," C-Theory: Theory, Technology, Culture, no. 32 (Oct. 18, 1995).
13 Ibid., 147.
14 Ivan Sutherland, Looking Back: The TX-2 Computer and Sketchpad, Lincoln Laboratory Journal, Volume 19, Number 1, 2012.



designers are familiar with today and conceptually laid the foundation for the computer as a design tool, as will be explicated further in the chapter on AutoCAD. Only a decade later, Alan Kay, working at Xerox PARC, became a key figure in developing what we know today as the graphical user interface. Defining the computer in 1977 as "a medium for expression through drawing, painting, animating pictures, and composing and generating music,"15 his team designed fundamental systems for communicating with computers. The 'Dynabook' included a keyboard, a mouse and a screen with multiple windows running simultaneously.16 Kay dreamt of a time when every person would have their own notebook and use it for creative purposes. He imagined scenarios of using it to produce music, animation, writing, drawing and painting, and even, specifically, architecture: "An architect might wish to simulate three-dimensional space in order to peruse and edit his current designs, which could be conveniently stored and cross-referenced."17 Nicholas Negroponte, founder of the MIT Media Lab, combined insights from his existing research and software development with a wide range of speculative ideas about the future of computing in his 1970 book "The Architecture Machine".18 He imagined computers as catalysts of social change, enabling every person to design their own home without the help of architects. Writing from an idealist perspective, he suggested accepting the computer as an intelligent partner in the design process, or even replacing architects entirely with his intelligent architecture machines: "Let us build machines that can learn, can grope, and can fumble, machines that will be architectural partners, architecture machines."19

15 Alan Kay and Adele Goldberg, "Personal Dynamic Media" in Noah Wardrip-Fruin and Nick Montfort, editors, The New Media Reader (Cambridge, Mass.: MIT Press, 2003), 393.
16 Ibid., 392-402.
17 Ibid., 402.
18 Nicholas Negroponte, The Architecture Machine: Toward a More Human Environment (Cambridge, Mass.: The MIT Press, 1970).
19 Ibid., 121.



The last group of media theorists I call the generalists, because they are positioned in a grey zone between being designers, programmers and theorists themselves. While not fully fledged developers, they are informed enough about technical aspects to write about them. Lev Manovich is the most prolific figure in the newly emerging field of software studies. Not only has he published numerous articles and books himself, he also runs a publishing series for software studies, supporting other authors. Manovich repeatedly points out the newness and importance of the field he has helped to found, and he formulates important questions: "If we don't address software itself, we are in danger of always dealing only with its effects rather than the causes: the output that appears on a computer screen rather than the programs and social cultures that produce these outputs."20 His methodology, however, does not fully live up to this statement. Software in the creative industries is addressed by him as a group of 'media applications'.21 Throughout his book Software Takes Command, Manovich lists 'media software' as: Word, PowerPoint, Photoshop, Illustrator, After Effects, Maya, 3ds Max, etc.22 This list is repeated multiple times throughout the book with minor variations. Even when talking about a specific software, as he does with Photoshop, his treatment is episodic, picking out particular aspects of the software to make a general point rather than studying the software itself. Manovich defined 'software studies' as a field of research, yet he fails to address individual software, instead analyzing media as one consistent group. Another generalist, the Finnish media theorist Jussi Parikka, noted the emergence of 'media archaeology', a field of study occupied with disappearing and obscure media types.23 Media archaeology, similarly to software studies, is far from being a defined discipline.

Analogue and digital artifacts from any field can be analyzed as long

20 Lev Manovich, Software Takes Command: Extending the Language of New Media (New York; London: Bloomsbury, 2013), 9.
21 Ibid.
22 Ibid., 14, 36, 51, 60, 136, 191, 259.
23 Erkki Huhtamo and Jussi Parikka, editors, Media Archaeology: Approaches, Applications, and Implications (Berkeley, Calif.: University of California Press, 2011), 3.



as they are in the domain of 'media'. As an example, Jussi Parikka wrote a book on computer viruses as a cultural phenomenon.24 As he describes the discipline: "Media archaeology rummages textual, visual, and auditory archives as well as collections of artifacts, emphasizing both the discursive and the material manifestations of culture."25 The present study might therefore not fall under this category, since each software is still in use. Yet the landscape of software is changing fast, and many of the formats used only ten years ago are out of date today. Architects have to face the question of how to preserve and collect digital design artifacts. A discussion on the preservation of out-of-date media formats began with a show at the Canadian Centre for Architecture titled "Archaeology of the Digital", which pointed out the problematics of the storage, display and presentation of projects designed on the computer.26 Greg Lynn, the curator of the show, emphasized the ephemerality of formats by including technical paraphernalia like the Amiga workstation in the show.27 Displaying the platforms rather than the tools themselves, the exhibition could not replicate the software used in the design of the projects. The ephemerality of storage and representation underlines how difficult it is for historians and critics to evaluate digitally designed projects. Furthermore, the curator in this case is perhaps too conveniently close to the content of the exhibition. An early adopter of digital techniques himself, and a good friend and colleague of all the people in the show, this group is writing its own history, establishing an intellectual monopoly on these early media. While they have certainly contributed immensely to this appropriation, there are many other segues and figures on the brink between analogue and digital techniques, some of which will be discussed in this book.

24 Jussi Parikka, Insect Media: An Archaeology of Animals and Technology (Minneapolis: University of Minnesota Press, 2010).
25 Erkki Huhtamo and Jussi Parikka, editors, Media Archaeology: Approaches, Applications, and Implications (Berkeley, Calif.: University of California Press, 2011), 3.
26 Greg Lynn, editor, Archaeology of the Digital (Canadian Centre for Architecture, Sternberg Press, 2014).
27 Archaeology of the Digital, http://www.cca.qc.ca/en/exhibitions/1964-archaeology-of-the-digital



THEORETICAL FRAMING

As the third part of this introduction I would like to draw attention to the larger conceptual framing of the topic, involving the question of how tools stand in relation to what is produced by and with them. As part of the intellectual legacy of thinking about tools and their role in our lives at large, Heidegger used the example of a hammer to illustrate how tools only become truly useful when in action.28 Heidegger's hammer, when only looked at rather than used, cannot be fully understood. Furthermore, in his definition a tool requires a 'usable material' in order to become a useful thing.29 A hammer without a nail is useless. When contemplating software, this relationship is different, because the 'usable material' and the tool are collapsed into one. For example, when shaping a polygon in Maya, the modification tools are embedded in the mesh logic itself. That means that all tools are subordinated to the logic of Heidegger's 'usable material'. Put differently, the usable material equals the basic technological premise of each software. Heidegger further characterized a tool as always being in relation with other tools: "There always belongs to the being of a useful thing a totality of useful things in which this useful thing can be what it is."30 If software is equivalent to an entire workshop, with thousands of hammers to choose from, it becomes the designer's task to know how to access the repository of these relations. Software becomes truly useful in reference to a totality of other software. For Bruno Latour, tools are part of an extended notion of design, encompassing writing code, tinkering, modeling, manufacturing and so on.31 Latour, like Kittler before him, notes that design objects are increasingly composed of layers of text and mediation. Latour took the key as an example in his essay 'The Berlin Key or How to

28 Martin Heidegger, Being and Time (Albany: State University of New York Press, 1996).
29 Ibid.
30 Ibid.
31 Bruno Latour, "A Cautious Prometheus? A Few Steps Toward a Philosophy of Design (with Special Attention to Peter Sloterdijk)" in Proceedings of the 2008 Annual International Conference of the Design History Society, 2-10.



Do Things with Words'.32 He illustrates how the key becomes entangled in social relations: "From being a simple tool, the steel key assumes all the dignity of a mediator, a social actor, an agent, an active being."33 The term design for Latour encompasses more than the mere formation of 'Gestalt'; it requires designers to understand the things they are working on as deeply embedded sociopolitical agents. Yet the old dichotomy between form and function is still convenient and sometimes useful. Andrew Witt, director of research at Gehry Technologies, sees a split in the architecture profession today between design expertise, pertaining to classical rules of proportion, program requirements and the like, and instrumental thinking, denoting a high skill level with computational tools.34 A dichotomy is established not between form and function but between design (form) and tools (function). While a generation of designers in the 90s did not see the tools as a separate discursive entity to be considered, a younger tech-savvy generation is satisfied with displaying the tools themselves as a mode of invention. To keep this technical virtuosity from thriving in an intellectual vacuum, another type of knowledge needs to become a crucial part of the design process. An understanding of the basic ideas and premises of the design tools one is operating with allows them to be combined seamlessly with technical or aesthetic expertise. This is not to say that every architect, critic and designer needs to have extensive technical knowledge. It is less about knowing specific tools in software than about understanding the underlying principles and questioning their role in design. Looking at software as a multilayered assembly of text, geometric principles and social relations helps to uncover the complex ways software is implicated in the design process today.

32 Bruno Latour, "The Berlin Key or How to Do Things with Words" in Matter, Materiality, and Modern Culture (London: Routledge, 1991), 10-21.
33 Ibid., 19.
34 Andrew Witt, "A Machine Epistemology in Architecture: Encapsulated Knowledge and the Instrumentation of Design," Candide. Journal for Architectural Knowledge No. 03 (12/2010), 37-88.



NOTES

A note on format

A fundamental quality of software is its continuous development and expandability. This poses a number of problems for considering software as a subject of study, since the analysis itself risks becoming outdated fairly soon. Similarly, the historical development of an incrementally changing subject is difficult to trace accurately. Because very few role models or examples of software analysis currently exist, new formats need to be designed, making the methods of investigation of at least as much interest as the topic itself, and effectively making this book a proposition on how to write about and critically analyze the development and use of software in architectural practice. Tools for Things is currently published as an online reader in PDF format and for print-on-demand services. As the field of software studies is still in development, the book itself is a perpetual work in progress.


AUTOCAD



AutoCAD is to drawing what typing or word processing is to writing.1

Figure 1.1. Sean O'Donnell, Shuttle drawing (1983)

1 Jessica Thurk and Gary Alan Fine, "The Problem of Tools: Technology and the Sharing of Knowledge," Acta Sociologica, Vol. 46, No. 2, The Knowledge Society (Jun. 2003), 107-117.



AutoCAD, as the most widely used software in the architectural profession, has become an industry standard.1 For many architecture offices it was the first software used in their practice, and it is the oldest of the examples discussed here. The technology used in AutoCAD was originally developed for civil and military engineering. The research leading to what we now know as CAD (computer-aided drafting) was originally sponsored by the military for the purpose of air defense systems. The SAGE Air Defense System was a building-sized computer used to calculate and represent airspace data and the paths of airplane attacks.2 From the very beginning, computers were used to represent large-scale operations at a much smaller scale. Orthogonality and arrays are the defining features AutoCAD is heavily biased towards, which explains its tendency towards standardized design practices. Suited to the documentation of any type of building, the design procedures in AutoCAD are bound to a logic of the industrial manufacturing of standardized types. With vast libraries of predefined modules and symbols online, AutoCAD enables the integration of elements assembled by thousands of anonymous authors. Replacing standardization manuals such as the Neufert or the Architectural Graphic Standards, the use of AutoCAD suggests a reconsideration of questions of standardization in contemporary practice.3 The history of Autodesk, the parent company of AutoCAD, now ranging over almost forty years, is a story of explosive growth only possible under the special circumstances of an emerging market. It is the American dream reimagined through the rise of personal computing. Autodesk today is the biggest software company catering to designers, engineers and architects. Next to its original product AutoCAD, it owns and develops Maya, 3ds Max, Revit, Alias and other widely used software packages.4

1 "AutoCAD", Wikipedia, accessed March 28th, 2015, http://en.wikipedia.org/wiki/AutoCAD
2 "Semi-Automatic Ground Environment", Wikipedia, accessed March 28th, 2015.
3 Nader Vossoughian, "Standardization Reconsidered: Normierung in and after Ernst Neufert's Bauentwurfslehre (1936)," Grey Room 54 (Winter 2014).
4 Products, Autodesk, accessed March 28th 2015, http://www.autodesk.com/products



DISCIPLINARY TRANSFER

Engineering and architecture are overlapping fields, and one could argue that AutoCAD is as much an architectural software as an engineering one. While this might be true today, the origins of the software are strongly rooted in the technical domain. To clarify, engineering in this context denotes in particular mechanical engineering, structural engineering and computer science. The pre-history of AutoCAD coincides with the history of computing, while a particular moment marks the beginning of software interfaces as we know them today. Ivan Sutherland's thesis work laid out basic principles of interface design still operable today. In 1960 Sutherland was a PhD student at MIT and interned at Lincoln Laboratory, which had a TX-2 computer, a large first-generation electronic digital computer.5 The only time the contested machine was free for the intern Sutherland to occupy was between 3 and 6 am, so it was during these hours that he developed and tested his now legendary program 'Sketchpad'. It was a drawing program in which users could sketch straight lines and circle arcs directly onto the screen with a light pen.6 Remarkably, the program was not only a predecessor of contemporary GUIs (graphical user interfaces) as well as CAD, but had a parametric component as well: once a shape was drawn, the geometry would adjust if one point was moved. "Sketchpad stores explicit information about the topology of a drawing. If the user moves one vertex of a polygon, both adjacent sides will be moved."7 The program only worked on one single computer, the TX-2 at MIT, yet Sutherland published his thesis in 1963, influencing the way digital interfaces would develop up to today.8 As Sutherland writes: "The sketchpad system makes it possible for a man and a computer to converse

5 Ivan Sutherland, Looking Back: The TX-2 Computer and Sketchpad, Lincoln Laboratory Journal, Volume 19, Number 1, 2012, 82.
6 Ivan Sutherland, Sketchpad, a man machine graphical communication system (New York: Garland Publishers, 1980), 53-62.
7 Ibid., 7.
8 Ivan Sutherland, Looking Back: The TX-2 Computer and Sketchpad, Lincoln Laboratory Journal, Volume 19, Number 1, 2012.
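The topological behavior Sutherland describes can be sketched in a few lines of present-day code. This is an illustrative reconstruction, not Sketchpad's actual implementation: a polygon is stored as a shared vertex table plus edges that reference vertices by index, so moving one vertex automatically carries both adjacent sides with it.

```python
# Illustrative sketch of explicit topology (not Sketchpad's actual code):
# edges reference vertices by id rather than storing coordinates, so a
# vertex move propagates to every side that touches it.

vertices = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (4.0, 3.0), 3: (0.0, 3.0)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # each side is a pair of vertex ids

def edge_endpoints(edge):
    """Resolve an edge to coordinates via the shared vertex table."""
    a, b = edge
    return vertices[a], vertices[b]

# Move vertex 1: both sides that reference it follow along automatically.
vertices[1] = (5.0, 1.0)
print(edge_endpoints(edges[0]))  # ((0.0, 0.0), (5.0, 1.0))
print(edge_endpoints(edges[1]))  # ((5.0, 1.0), (4.0, 3.0))
```

The design choice is the same one Sketchpad pioneered: geometry is stored as relations between named entities, not as independent strokes, which is what makes parametric adjustment possible.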


Figure 1.2. Ivan Sutherland working with Sketchpad, 1960s

Figure 1.3. Unknown, SAGE terminal screen, 1950s



rapidly through the medium of line drawings. Heretofore most interaction between man and computers has been slowed down by the need to reduce all communication to written statements that can be typed. In the past, we have been writing letters to rather than conferring with our computers."9 There was another concept, taken for granted today, that Sutherland implemented in his program. In a demo from 1964, Timothy Johnson from the design department at MIT demonstrates the use of Sketchpad. Johnson explains how the computer is seen as a very large piece of paper. The incredulous journalist asks the scientist: "I would like to ask you how big this piece of paper you keep referring to is, and how many pieces of paper you have available?"10 This question shows how unfamiliar the idea of scalable space was at the time. Johnson goes on to explain: "We regard this as a window that we can move over our paper, and enlarge the size of this window. Imagine the computer as a fixed sheet of paper behind this window; it scales approximately 2 miles on the sides."11 One of the first uses of this kind of system was in civil engineering, for drawings that are otherwise very hard to construct at real size. The CLM CEAL (Civil Engineering Automation Library) was used by a number of state highway departments and civil engineering firms.12 This also points to the origins of these technologies in defense systems, where the SAGE computer covered large extents of land to pinpoint airplane attacks. The engineers took creativity into account when designing the software, and it was important for them to be able to revise an existing drawing, because one would sometimes start drawing with only a vague idea. They anticipated that the computer would correct human sloppiness to create precision drawings.13 The idea

9 Ivan Sutherland, Sketchpad, a man machine graphical communication system (New York: Garland Publishers, 1980), 17.
10 Historical Perspective: "Computer Sketchpad" (MIT Lincoln Laboratory, 1964), accessed March 15th 2015, https://www.youtube.com/watch?v=USyoT_Ha_bA
11 Ibid.
12 David E. Weisberg, The Engineering Design Revolution: The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, 2008.
13 Historical Perspective: "Computer Sketchpad" (MIT Lincoln Laboratory, 1964), accessed March 15th 2015, https://www.youtube.com/watch?v=USyoT_Ha_bA


AutoCad

18

Figure 1.4. Creating an Aircraft Master Layout

Figure 2.3 Creating an Aircraft Master Layout

In addition to the difficulty of producing engineering drawings, the design ocess itself was complicated, particularly by the lack of computational machine learly remember one homework assignment in structural engineering in the late 50s. The problem was a fairly straightforward two-story building - perhaps thre four bays. Working with simply a pad of paper and a slide rule, the assignment ok most of a weekend. I didn’t learn much about structural design but it did arpen my arithmetic skills. Today, a student with a notebook computer can wor a building ten times as large and learn much more about what makes for a good sign by trying different size structural members and different arrangements of se components. Calculations were typically done with slide rules, electromechanical desk culators and handbooks of mathematical tables and engineering data. Many hnical calculations weregiant done which multiplication a Figure 1.5. Aconceptual, pieceusing of paper logarithms and the ability to zoom in andenabled out vision calculations to be done using addition and subtraction. The most popular ndbook for doing these calculations was first published in 1933 by Dr. Richard


19

AutoCad

TECHNICAL HISTORY AND FUNDAMENTALS was still however, that the drawing program would lead to one specific output, a final drawing, rather than a range of drawings or variation. Vectors are the first format that has been used for computer drafting. The term vector has a slightly different meaning in mathematics and physics versus computer science. Familiar to most people, a vector in mathematics called a ‘Euclidean vector’ is defined by length and direction.14 It is used to describe a translation rather than a specific geometric entity. The description of geometrical transformation through mathematical vectors is at the base of vector graphics. The term vector graphics encompasses any type of geometry, including points, lines, curves, shapes or polygons. 15 Geometry is stored as a set of instruction that can be saved in text format. One could imagine carrying a document that includes all instructions on how to make a certain drawing. It would enable anyone following the instructions to reproduce this very same drawing anywhere and it would look exactly the same. These instructions include the color and thickness for each line. Moreover, the drawing can be reproduced at any scale and it will not loose any information. Vector are more easily understood in contrast to pixels. Pixels are a flat field of dots or squares, each with information only about the properties of this particular square.16 (See Photoshop chapter) When the resolution of an image decreases there is less information available to describe that same image. This means that these grids are not infinitely scalable- a thumbnail image will not be effective on a billboard poster. A vector graphic however can be blown up to any imaginable scale. This system has other advantages. 
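Johnson's movable "window" over a two-mile sheet of paper, and the lossless scalability of vector instructions, can be sketched in a few lines of code. This is a hypothetical illustration of the principle, not the actual Sketchpad implementation; the names and data layout are invented for clarity:

```python
# A drawing is stored as instructions (vectors), not pixels: each entry
# records an operation and its coordinates in "world" space, which may
# span miles, as in Timothy Johnson's 1964 Sketchpad demo.
drawing = [
    ("line", (0.0, 0.0), (5280.0, 0.0)),        # a mile-long line
    ("line", (5280.0, 0.0), (5280.0, 5280.0)),  # and another at its end
]

def world_to_screen(point, window_origin, zoom):
    """Map a world coordinate into the movable, scalable 'window'."""
    wx, wy = point
    ox, oy = window_origin
    return ((wx - ox) * zoom, (wy - oy) * zoom)

def render(drawing, window_origin, zoom):
    """Project every stored instruction through the viewing transform.

    Zooming or panning changes only this transform; the stored
    instructions are untouched, so no information is ever lost."""
    return [(op, world_to_screen(a, window_origin, zoom),
                 world_to_screen(b, window_origin, zoom))
            for (op, a, b) in drawing]
```

The same drawing can thus be rendered as a thumbnail or at full scale simply by changing the zoom factor, which is exactly why a vector drawing, unlike a pixel grid, never degrades when enlarged.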
In his thesis on the Sketchpad program, Sutherland describes how Sketchpad stores not just the lines of a drawing but also the structure, or topology, of a drawing. This means that "a sketchpad drawing explicitly indicates similarity of symbols."17 The program therefore recognizes a certain combination of vectors as a specific shape. It is therefore possible to define and store relationships between sets of objects. In programming, 'object' is used to refer to many types of things, not necessarily just geometric entities: an object in a programming language can be anything, though in most cases it does refer to some physical object. The first object-oriented programming languages were invented to model real-world objects like airline trajectories, planetary systems or machines.18

Figure 1.6. Vectors come in many shapes and sizes

Today, the DXF format by AutoCAD is a standard format for saving and displaying vector graphics. There are two main formats for saving an AutoCAD file: as ASCII, standing for "American Standard Code for Information Interchange," or as binary code, meaning it is saved not as text but as a series of numbers.19 ASCII is a character encoding developed from telegraphic codes for short messages; an ASCII file is therefore a series of verbal instructions. Conceptually, both formats encode the aforementioned instructions. Even if the format is binary, the nature of the information is object-oriented, able to identify systems and to describe topological relationships.

14 Pottmann, Helmut and Asperl, Andreas and Hofer, Michael and Kilian, Axel and Bentley, Daril: Architectural Geometry, (Pennsylvania: Bentley Institute Press, 2007), 712-714
15 "Vector graphics" Wikipedia. Accessed March 28th, 2015, http://en.wikipedia.org/wiki/Vector_graphics
16 Pixel Definition, High Definition: A-Z Guide to Personal Technology, Accessed April 15th 2015, http://literati.credoreference.com.ezproxy.cul.columbia.edu/content/entry/hmhighdef/pixel/0
17 Ivan Sutherland, Sketchpad, a man machine graphical communication system (New York: Garland Publishers, 1980), 27
18 A Brief History of Object-Oriented Programming, Accessed on March 20th 2015, http://web.eecs.utk.edu/~huangj/CS302S04/notes/oo-intro.html
19 ASCII, Accessed on May 5th 2015, http://en.wikipedia.org/wiki/ASCII
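The idea of a drawing stored as verbal instructions can be made concrete with a minimal sketch of the ASCII flavour of DXF, which alternates group codes and values on separate lines (10/20 for a start point, 11/21 for an end point, 8 for the layer). This is a simplified illustration with invented helper names; real DXF files normally also carry HEADER and TABLES sections:

```python
def dxf_line(x1, y1, x2, y2, layer="0"):
    """Encode one LINE entity as DXF group-code/value pairs.

    Group code 8 names the layer, 10/20 hold the start point,
    11/21 the end point - a purely textual set of instructions."""
    return ["0", "LINE", "8", layer,
            "10", repr(float(x1)), "20", repr(float(y1)),
            "11", repr(float(x2)), "21", repr(float(y2))]

def dxf_document(entities):
    """Wrap a list of encoded entities in a minimal ENTITIES section."""
    codes = ["0", "SECTION", "2", "ENTITIES"]
    for entity in entities:
        codes += entity
    codes += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(codes)
```

Anyone (or any program) reading this text back can reconstruct the identical drawing at any scale, which is precisely the "document of instructions" described above.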

Figure 1.6. Autodesk founders



LAUNCH TRAJECTORY

The company history of Autodesk, the developer of AutoCad, now spanning more than three decades, is a story of explosive growth only possible under the special circumstances of an emerging market.20 It is not without its illustrious characters and extreme personalities. Less known than their contemporaries Bill Gates or Steve Jobs, figures like John Walker and Carol Bartz have in their own way redefined the digital landscape of today. Autodesk is today the biggest software company catering to designers, engineers and architects. Next to AutoCad it owns Maya, 3dsMax, Revit, Alias and more. Autodesk was founded by a group of programmers assembled by John Walker at his home in California in 1982. Together the members collected $59,000 to start the business, soon incorporated as Autodesk.21 A few years prior, the computer consultant Mike Riddle had started to develop a graphic design program called Interact in his free time.22 Riddle was not an architect, but he worked at the Frank Lloyd Wright Foundation in Scottsdale, Arizona at the time, giving him some insight into architectural drafting practices.23 When Walker started Autodesk at his house, Riddle was one of the assembled programmers. In the beginning, the company had a list of 15 different programs it proposed to develop. Riddle's Interact would become the most significant one by far.24 At the time, the mere fact that a company would make only software was new: hardware and software were usually produced and sold together and would often serve only one specific purpose. The idea of a universal system where one could

20 Weisberg, David E. The Engineering Design Revolution, The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, 2008
21 Weisberg, David E., "Chapter 8 Autodesk and AutoCAD" in The Engineering Design Revolution, The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, (self published online, 2008), 2
22 Donna Rosebaugh, DigiBarn Stories: Mike Riddle & the Story of Interact, AutoCAD, EasyCAD, FastCAD & more, Accessed March 5th 2014, http://www.digibarn.com/stories/mike-riddle/
23 Weisberg, David E., "Chapter 8 Autodesk and AutoCAD" in The Engineering Design Revolution, The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, (self published online, 2008), 1-21
24 Ibid.



use various software according to specialized needs was still new, because software could only run on very few specialized computers.25 One of the early challenges for AutoCAD was to function on multiple operating systems. After its modest beginnings Autodesk took off very quickly: in only two years sales amounted to over $1 million and the company had sold over 10,000 units of the new software.26 It was in this year that it ran its first double-spread color ad in Scientific American magazine (see ill. 1). From the beginning, Autodesk distinguished itself from other companies by encouraging other developers to create plug-ins. Soon there were hundreds of third-party plug-ins, including one developed by Bob McNeel, eventually leading to the development of Rhinoceros. (see Rhinoceros chapter) Additionally, Autodesk had an effective strategy for dealing with competitors: it bought them. Beginning in 1988 the company started acquiring other software companies, leading to the major conglomerate it is today. By the early 90s the company needed major restructuring. Despite its success it was, at heart, still a guerrilla company founded by an "unruly clique of programmers".27 An article of the time described the founders and company leaders as "contentious, eccentric free-thinkers who have had a way of devouring professional managers."28 Even if these descriptions might have been a little exaggerated, the company clearly took another course with the arrival of a powerful new manager: Carol Bartz. She was the CEO of the company for the next fourteen years, and it was under her tough leadership that the company reached its status as an industry leader.29 Today Autodesk as a company is changing again: it is trying to extend its relationship to the creative industry as a form of partnership, sponsoring small design-research companies like Mark Gage Architects or David Benjamin's The Living.30 Its business and marketing models have also changed: like Adobe, it is moving to a subscription-based model, allowing users to 'rent' software for a relatively low monthly fee. Building its future customers, Autodesk has been offering free student licenses for all its software since 2011. The company that started with a small drawing program now has a major monopoly on architectural design tools.

25 Weisberg, David E., "Chapter 8 Autodesk and AutoCAD" in The Engineering Design Revolution, The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, (self published online, 2008), 1-21
26 Ibid.
27 Zachary, Pascal. "'Theocracy of Hackers' Rules Autodesk Inc., A Strangely Run Firm." The Wall Street Journal, May 28, 1992.
28 Ibid.
29 Weisberg, David E., "Chapter 8 Autodesk and AutoCAD" in The Engineering Design Revolution, The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, (self published online, 2008), 1-21
30 Stott, Rory. ArchDaily: What Autodesk's Acquisition of "The Living" Means for Architecture, http://www.archdaily.com/522532/what-autodesk-s-acquisition-of-the-living

Figure 1.7. Scientific American Advertisement 1984



DISCOURSE AND EXAMPLES

The radical novelty of digital graphic displays lay in the concept of representing large scale on small screens and the ability to zoom in and out of images. To this day, when opening up AutoCad, one is welcomed by infinite space: placed on a grid, structures exist on an empty field. It suggests a very different way of working than, for example, Photoshop, where one is restricted from the beginning to a certain format, usually in accordance with a real-world paper format. It suggests that the program can draw anything at any scale. AutoCad enables standardization and mechanical reproduction. It works with predefined normative elements that are collected in catalogues and can be deployed as needed. The analogue counterpart and conceptual predecessor to the digital libraries are books like the Architectural Graphic Standards by Ramsey and Sleeper31 or the German catalogue of drawing standards, Ernst Neufert's Architects' Data.32 In an article on the standardization practices engendered through the use of Neufert, Vossoughian writes: "In broader terms, standardization must itself be understood as a process that transforms the subject and not just the object. Standardization participates in shaping our thoughts and not just our things."33 Vossoughian refers here to handbooks and manuals, yet I would extend this thought to software like AutoCad. Vossoughian goes on to argue that the handbooks "routinized the activities of the designer, enforcing time-saving habits."34 AutoCad takes this kind of standardization a step further. In the process of copying an element from a book like Neufert, the architect would need to take each measurement into account and adjust it to the current requirements while redrawing the entire design. Through the introduction of drafting software like AutoCad, new kinds of standardization tools emerged. Authored by anonymous designers without institutional authorization, plenty of files circulate on the web, allowing designers to choose from thousands of options for each building typology. As a 'cut and paste' method, CAD drawings become collages of pre-existing templates.35 The use of AutoCad is most efficient for companies working with minimal variation from project to project, as for example in suburban housing projects. As stated in an interview with 'Mercedes Homes', a major housing real estate company, "the turnaround for creating repetitive buildings is only two to three days"36 with the help of AutoCAD. Arrays, copies and blocks reinforce the use of standards and the repetition of the same element. The term norm derives from the Latin norma, the carpenter's square used to draw and measure out right angles on construction sites.37 There is therefore, from the onset, a strong association between rectangular grids and standardization. An underlying grid is at the base of AutoCAD by default; like starting on gridded sketch paper rather than a blank sheet, it presents a strong bias towards orthogonality from the outset. Furthermore, there are at minimum four default settings to make things straight: the grid, shift, ortho and the perpendicular snap. AutoCAD is ubiquitously used in architectural drafting, and notably the software is used even for very large and complex projects. Yet at the base of this versatile tool there is still traditional drafting practice with a strong bias towards normative design solutions.

31 Ramsey, Charles and Sleeper, Harold. Architectural graphic standards for architects, engineers, decorators, builders, and draftsmen, Facsim. of the 1932 ed., New York: Wiley; London: Chapman & Hall, c1990.
32 Ernst Neufert. Bau-Entwurfslehre: Handbuch für den Baufachmann, Bauherrn, Lehrenden und Lernenden, (Berlin: Bauwelt-Verlag, 1936)
33 Vossoughian, Nader. Standardization Reconsidered: Normierung in and after Ernst Neufert's Bauentwurfslehre (1936). Grey Room, 2014
34 Ibid.

Bika Rebek, Code and Norm, unpublished paper, Columbia University, 2015

36 Autocad Customer stories- Mercedes Homes. http://usa.autodesk.com/adsk/servlet/ item?siteID=123112&id=10155895 37

Online etymology dictionary: http://www.etymonline.com/index.php?term=norm
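The mechanics of blocks and arrays described above can be sketched in a few lines. This is an invented, simplified model of the concept, not AutoCAD's actual data structures or API: a block is a reusable group of primitives defined once, and an array stamps out translated copies on a rectangular grid, as when laying out repetitive suburban housing plans.

```python
def make_block(name, primitives):
    """A 'block': a named, reusable group of line primitives,
    each stored as (x1, y1, x2, y2)."""
    return {"name": name, "primitives": primitives}

def insert_block(block, dx, dy):
    """Return the block's primitives translated by (dx, dy)."""
    return [(x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for (x1, y1, x2, y2) in block["primitives"]]

def rectangular_array(block, rows, cols, col_spacing, row_spacing):
    """An ARRAY-style operation: rows x cols copies of one block.

    The element is drawn once and repeated mechanically - the
    standardization logic discussed above, in miniature."""
    copies = []
    for r in range(rows):
        for c in range(cols):
            copies.append(insert_block(block, c * col_spacing, r * row_spacing))
    return copies
```

Changing the block definition changes every copy at once, which is exactly why such tools privilege repetition of a normative element over one-off variation.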



Figure 1.7. History of Autodesk. A timeline from 1980 to 2015: 1982, founded by 14 members including Mike Riddle and John Walker; 1992, Carol Bartz becomes CEO (until 2006); 1994, John Walker leaves the company; 2002, acquisition of Revit; 2005, acquisition of Maya; 2006, acquisition of Alias. Products shown include AutoCad, 3dsMax, Alias, Maya and Revit, alongside Adobe Photoshop, Rhinoceros and Grasshopper.


BIBLIOGRAPHY

Atkinson, Paul. "A Bitter Pill to Swallow: The Rise and Fall of the Tablet Computer." Design Issues, Vol. 24, No. 4 (Autumn, 2008), pp. 3-25

Campbell-Kelly, Martin. "Not Only Microsoft: The Maturing of the Personal Computer Software Industry, 1982-1995." The Business History Review, Vol. 75, No. 1, Computers and Communications Networks (Spring, 2001), pp. 103-145

Coons, Steven A. "Computer-Aided Design." Design Quarterly, No. 66/67, Design and the Computer (1966), pp. 6-13. Published by: Walker Art Center

Johnston, George Barnett. Drafting Culture: A Social History of Architectural Graphic Standards. Cambridge, Mass.: MIT Press, 2008.

Kalisperis, Loukas N. and Groninger, Randal L. "CADD Utilization in the Architectural Design Process: Implications for Computer Integration in Practice." Journal of Architectural and Planning Research, Vol. 11, No. 2 (Summer, 1994), pp. 137-148

Kay, Alan and Goldberg, Adele. "Personal Dynamic Media" in Wardrip-Fruin, Noah and Montfort, Nick, editors: The New Media Reader, Cambridge, Mass.: MIT Press, 2003, p. 393.

Pressman, Andrew, editor-in-chief. Architectural Graphic Standards / authored by the American Institute of Architects, Hoboken, N.J.: John Wiley & Sons, c2007.

Ramsey, Charles and Sleeper, Harold. Architectural graphic standards for architects, engineers, decorators, builders, and draftsmen, Facsim. of the 1932 ed., New York: Wiley; London: Chapman & Hall, c1990.

Siegert, Bernhard. Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real, New York: Fordham University Press, 2015.



Sutherland, Ivan. Sketchpad, a Man Machine Graphical Communication System. New York: Garland Publishers, 1980

Weisberg, David E. The Engineering Design Revolution: The People, Companies and Computer Systems That Changed Forever the Practice of Engineering, 2008. Accessed April 28th. http://www.cadhistory.net/

Vossoughian, Nader. "Standardization Reconsidered: Normierung in and after Ernst Neufert's Bauentwurfslehre (1936)," Grey Room 54 (Winter 2014): 24-55

Sutherland, Ivan. Looking Back: The TX-2 Computer and Sketchpad. Cambridge, Mass.: MIT Press: Lincoln Laboratory Journal, Volume 19, Number 1, 2012

LIST OF ILLUSTRATIONS

Figure 1.1. Sean O'Donnell, shuttle drawing (1983)
Figure 1.2. Ivan Sutherland is shown working with Sketchpad, 1960s
Figure 1.3. Unknown, SAGE terminal screen, 1950s
Figure 1.6. Vectors come in many shapes and sizes
Figure 1.6. Autodesk founders
Figure 1.7. Scientific American Advertisement 1984
Figure 1.7. History of Autodesk



RHINOCEROS


Figure 2.1. Course in Airplane Lofting, Burgard High School, Buffalo, NY, USA, January 1, 1941



Naval architects have been developing sophisticated techniques for drawing the smooth curves needed for the construction of boat hulls since the Renaissance.1 A so-called loftsman used flexible strands of wood called splines, held in place by leaden weights called 'ducks'.2 The ducks were moved around to achieve the desired shape of the spline. In the 1940s, scientists started to analyze these techniques to formulate mathematical equations describing free-form curves.3 These equations are at the base of Rhino's NURBS modeling today, and they are still called splines. Bob McNeel, an accountant by profession, has now been the CEO of Robert McNeel & Associates for 35 years, navigating the company through its different phases. It is one of the few software companies of its kind that remains independent. It boasts countless plug-ins thanks to an open software development kit available for free to any programmer or developer. An active community of programmers and amateurs keeps developing further plug-ins for Grasshopper, creating specialized applications to work with external devices like Arduino and Kinect. While Rhino had a text-based scripting editor before Grasshopper was introduced, those scripts were predominantly used to execute repeatable tasks too laborious to draw by hand. With Grasshopper, a number of modules have entered the software that allow designers to construct tool-sets for managing data, to connect interactive information streams, and to produce parametric design solutions. At the end of this chapter I will argue that this transition enables another form of design thinking, embedding designs into information networks and increasing the quantifiable data on design performance. This development could be seen as positive in allowing objects to be evaluated in more complex ways during the design phase, but it risks over-quantification at the cost of accidental, immaterial and ephemeral design qualities.

1 Siegert, Bernhard. "Waterlines: Striated and Smooth Spaces as Techniques of Ship Design" in Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real, (New York: Fordham University Press, 2015), 147-163.
2 Pottmann, Helmut and Asperl, Andreas and Hofer, Michael and Kilian, Axel and Bentley, Daril: Architectural Geometry, (Pennsylvania: Bentley Institute Press, 2007), 256.
3 Schoenberg, I. J. (1946). Contributions to the problem of approximation of equidistant data by analytic functions, Parts A and B. Applied Mathematics 4, 45-99.



DISCIPLINARY TRANSFER

Naval design is equivalent to architecture in its role of enveloping the human body with a large-scale enclosure, while being directed toward a different primary element creating another set of constraints.4 Technical innovations and achievements have passed between the two fields over the millennia, and the tools of the trade have developed in parallel. Large ships especially have to fulfill the dual function of transportation and living quarters, often resembling buildings very closely except in the form of their hull. The form of a boat hull has special requirements: it needs to work with a number of different vectors and water forces in order to offer little resistance in the water.5 The ideal forms for gliding smoothly were developed over millennia through trial and error and intuition. Similarly to architecture, shipbuilding started as an autographic art, where the builders made all design decisions on site and design and construction were subsumed in one integrated process.6 Only in the fifteenth century did shipwrights develop techniques to represent aspects of the wooden boats prior to construction. To approximate the shape of their vessels, Portuguese shipwrights used pliable pieces of wood, meaning they effectively used the same material for construction as for representation, not yet completely detaching the drafting process from construction.7 In the sixteenth century, Royal Master Shipwright Mathew Baker is credited with being the first designer to draw a ship on paper.8 As with the invention of allographic notation attributed to Alberti, the design of ships was now dissociated from the actual construction, which required particular drafting devices to create the complex curvatures needed to represent ships to

4 Siegert, Bernhard. "Waterlines: Striated and Smooth Spaces as Techniques of Ship Design" in Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real, (New York: Fordham University Press, 2015), 147-163.
5 Raunekk, Basics of Ship Hull Design, 4/16/2009, Accessed March 20th, http://www.brighthubengineering.com/naval-architecture/32007-basics-of-ship-hull-design/
6 Ibid.
7 Ibid.
8 Ibid.



scale.9 One of these tools is called a spline, and it is at the basis of the modern computation of so-called NURBS curves.10 Before digital splines, their analogue counterparts were made out of flexible strands of wood held in place by leaden weights called 'ducks'.11 The ducks could be moved around to achieve the desired shape of the spline through the material tension of the wood. This technique, adopted from a traditional craft, is used to construct similar types of forms in computer modeling. The basic logic is derived from the analogue method, but there are some significant differences: it takes more effort and time to construct a wooden spline, and there are several limitations once it is built - the length of the wooden piece, the number of ducks available and, most importantly, a maximum bending of the wood before it reaches a breaking point. In the digital version, the material properties of the wood are simulated through a process of approximation between control points.12

Figure 2.2. A traditional spline with ducks

Digital splines in Rhino have retained some of the drawbacks of their analogue heritage. Even if it can be used to create very complex structures, Rhino designs require a degree of planning ahead and need to be rebuilt when significant changes to the geometry are required. Operations like filleting, chamfering or trimming work very well in Rhino, while more complex or free-form modeling is relatively constrained compared to polygon modeling (see Maya chapter). Akin to its origins, NURBS modeling works well for small adjustments, for example moving control points to adjust the smoothness - or, in the terminology of naval architecture, the fairness - of a curve, but it is not suitable for free-form sculpting or intuitive modeling where the outcome is not determined from the outset. When complexity is limited, as when modeling the surface of a boat hull, Rhino performs very well. A single shell can easily be modeled through lofting, and these forms can conveniently be put into production through integrated CAD-CAM technologies such as RhinoCAM.13 Rhino was originally developed as a plug-in for AutoCad, and it has maintained many of its properties. While Rhino works very well for creating and evaluating double-curved surfaces with a limited amount of topological complexity, it has inherently limited capability to preview geometry, color and materiality in real time.

9 Mario Carpo, The alphabet and the algorithm (Cambridge, Mass.: MIT Press, c2011)
10 Pottmann, Helmut and Asperl, Andreas and Hofer, Michael and Kilian, Axel and Bentley, Daril: Architectural Geometry, (Pennsylvania: Bentley Institute Press, 2007), 256.
11 Ibid.
12 Ibid.
13 RhinoCAM 2015: computer aided manufacturing inside Rhino, Accessed on April 5th, http://mecsoft.com/rhinocam-software/
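The lofting operation mentioned above - generating a surface by interpolating between section curves, as between the stations of a boat hull - can be sketched in its simplest, linear form. This is an invented illustration of the principle, not Rhino's or RhinoCAM's implementation, which supports higher-order blending:

```python
def loft(section_a, section_b, steps):
    """Linearly interpolate between two section curves (point lists of
    equal length), producing the rows of a simple ruled surface.

    Each row is one intermediate section; row 0 equals section_a and
    the last row equals section_b."""
    assert len(section_a) == len(section_b), "sections must match point-for-point"
    rows = []
    for k in range(steps + 1):
        t = k / steps  # blending parameter from 0.0 to 1.0
        rows.append([((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
                     for (ax, ay), (bx, by) in zip(section_a, section_b)])
    return rows
```

Because every intermediate section is derived from the two inputs, editing one section curve regenerates the whole shell - the single-surface workflow at which Rhino excels.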

Figure 2.3. Charles P. Kunhardt, inboard profile of the cutter yacht Yolande (1880)



THE DEFINING TECHNICAL ASPECT

A few basic terms of NURBS modeling are helpful to fully understand the core functioning of Rhino. Certain typologies are privileged when modeling in Rhino, and this is closely related to the mathematical description used for the geometry. One of the greatest advantages of NURBS is their precision: every point on a NURBS curve is exactly defined, allowing precise geometric operations and a direct workflow from design to production.14 Splines are a sub-group of NURBS, and their particular mathematical description originates from the 1940s, when mathematicians were trying to describe complex curvature by observing techniques used in the shipbuilding industry.15 NURBS curves in Rhino are still called splines, inheriting their terminology from their analogue counterparts.16 NURBS stands for 'Non-uniform rational B-spline', representing both curves and surfaces. Technically, NURBS are controlled by four factors: curve degree, control points, knots, and an evaluation rule, but usually only the curve degree and the control points are relevant for modeling. The degree of a curve can be any positive whole number, though in practice it rarely needs to be high.17 Straight lines are first degree, circles or conic sections like ellipses are second degree, and complex curvature is third degree or more. When a smooth curve is converted from degree 3 to degree 1, it loses its smoothness and becomes a chain of straight segments.18 NURBS can be modified with the help of control points: a number of points pulling the curve or surface towards them, acting like magnets. The handling of NURBS curves therefore requires a degree of intuition. Control points and knots are often confused, yet they control different aspects of a curve. Control points are used as grips on curves, and adding or deleting them will always change the shape of the curve. The degree of a curve determines the number of knots per curve span. By inserting knots the designer can constrain certain parts of a curve or create kinks. The NU in NURBS stands for non-uniform, meaning that the control points of a curve do not have to affect equal portions of the curve. R stands for rational, meaning that the weight of each control point can be adjusted. Circles and conic sections are the classic case that requires these rational weights: they can only be represented exactly when the control points carry unequal weights. Finally, BS stands for the basis spline functions defining the influence of each control point.19

Figure 2.4. Spline held in place by ducks

The story of Robert McNeel & Associates is an unusual one for a software company, reflected to this day in both its organizational structure and its product.

14 Pottmann, Helmut and Asperl, Andreas and Hofer, Michael and Kilian, Axel and Bentley, Daril: Architectural Geometry, (Pennsylvania: Bentley Institute Press, 2007), 276-278
15 Schoenberg, I. J. (1946). Contributions to the problem of approximation of equidistant data by analytic functions, Parts A and B. Applied Mathematics 4, 45-99.
16 Townsend, Alastair. "On the Spline: A Brief History of the Computational Curve" in International Journal of Interior Architecture + Spatial Design, Applied Geometries (Jonathon Anderson & Meg Jackson, 2014)
17 Pottmann, Helmut and Asperl, Andreas and Hofer, Michael and Kilian, Axel and Bentley, Daril: Architectural Geometry, (Pennsylvania: Bentley Institute Press, 2007), 276-278
18 Ibid.
19 Ibid.
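The "evaluation rule" among the four factors is, in most B-spline implementations, de Boor's algorithm, which blends neighboring control points through repeated weighted averages. The sketch below is a textbook version for scalar control points, included for illustration; it is not Rhino's actual code, and a full NURBS evaluator would additionally carry the rational weights:

```python
def de_boor(k, x, knots, control, degree):
    """Evaluate a degree-p B-spline at parameter x.

    k is the knot span with knots[k] <= x < knots[k+1]. Starting from
    the degree+1 control points that influence this span, each round
    replaces them with convex combinations until one point remains."""
    p = degree
    d = [control[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - knots[j + k - p]) / (knots[j + 1 + k - r] - knots[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# A clamped cubic with knot vector [0,0,0,0,1,1,1,1] and four control
# values is simply a cubic Bezier span, evaluated over span k = 3.
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
control = [0.0, 1.0, 1.0, 0.0]
midpoint = de_boor(3, 0.5, knots, control, 3)  # 0.75, the Bezier value
```

Note how the curve passes through the first and last control points (the clamped knots repeat degree+1 times) but is only pulled toward the middle ones - the "magnet" behavior described above.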

Figure 2.5. Three degrees of a spline: first degree, second degree and third degree splines
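The role of the rational weights can be checked numerically with the standard construction of a circular arc: a quadratic curve with control points (1, 0), (1, 1), (0, 1) and a middle weight of √2/2 traces an exact quarter of the unit circle. The sketch below uses the rational Bernstein form directly, as an illustration of the principle rather than any particular modeler's code:

```python
from math import sqrt

def rational_quadratic(t, points, weights):
    """Evaluate a rational quadratic Bezier curve at parameter t.

    Each Bernstein basis value is multiplied by its control point's
    weight, and the result is divided by the weighted basis sum -
    the R in NURBS in its simplest form."""
    basis = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]
    wsum = sum(w * b for w, b in zip(weights, basis))
    x = sum(w * b * p[0] for w, b, p in zip(weights, basis, points)) / wsum
    y = sum(w * b * p[1] for w, b, p in zip(weights, basis, points)) / wsum
    return x, y

# Quarter circle: equal end weights, reduced middle weight.
points = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, sqrt(2) / 2, 1.0]
```

With all three weights equal the same control points would give only a parabolic approximation; the unequal middle weight is what makes the conic exact.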



LAUNCH TRAJECTORY until today. Bob McNeel, an accountant by profession has now been the CEO of ‘RobertMcNeel & Associates’ for 35 years, navigating the company through its different phases. After selling accounting software to engineering and architecture firms for a few years McNeel bought two copies of AutoCad upon request of a client.20 Following the first sales in 1985 McNeel recognized the market potential of CAD applications and the company shifted focus completely to AutoCad. They started offering support and training as well as developing their own plug-ins. One of these side projects that began as a plug-in eventually turned into a standalone application called Rhino.21 There are a number of important factors in this early genesis: first the director of the company is not a software engineer or a programmer- McNeel is an accountant, making business decision and open to change direction and to integrate client requests into his software quickly. The second important factor is that Rhino emerged out a company reselling AutoCad- even if McNeel has long stopped selling AutoCad to focus solely on Rhino this fact is still apparent in the interface itself which makes it easy for Autocad users to switch between programs. In 1992 a company called ‘Applied Geometry’ approached McNeel to integrate a NURBS geometry library into AutoCad. Around the same time a marine client approached McNeel to help them to “send smoother curves from AutoCAD to their plasma cutter.”22 and the NURBS geometry library proved useful to create the complex shapes needed. The original impetus for the software therefore originated from a discipline for the longest time concerned with the construction of complex hull shapes. They needed a software that would allow them to design and manufacture double- curved surfaces and still be compatible with the industry standards. Around this time Bob McNeel hired the young programmer Michael Gibson as an

20 Franco Folini, "An Interview with Robert McNeel, CEO of McNeel and Associates (Rhino 3D)", Novedge blog, March 14, 2007, accessed March 13th 2015, http://blog.novedge.com/2007/03/an_interview_wi_3.html
21 Ibid.
22 Statement in interview with the author



DISCOURSE AND EXAMPLES

intern.23 Gibson, still in college at the time, had single-handedly developed a modeling software called Sculptura, and he was charged with integrating the NURBS library into a standalone application.24 Rhino was first released as a free beta version, allowing for months of corrections and feedback from the community. Without investing in promotions or advertisements, the company managed to have 100,000 beta sites within the first year of the free release. When the first official edition of Rhino shipped in 1998 at a price of $995, it immediately sold very well and was translated into Japanese and Korean right away.25 In a strategy that has not changed to this day, the company focuses on support and training instead of spending on merchandising and advertisements. Remarkably, even cracked copies of the software receive the same support as officially licensed copies. McNeel also does not track user behavior, nor does the company keep statistics on Rhino's user base. The policy is to let users develop their own plug-ins or approach McNeel with new requests.26 McNeel's openness extends to file formats and documentation: the Rhino SDK (software development kit) is available for free to any programmer or developer interested in creating plug-ins or extensions. As is explicitly stated on the website, they do not forbid the commercial use of their tools; they strongly encourage it.27 By now thousands of spin-offs and plug-ins have been built on that base, including a new NURBS modeler called 'Moment of Inspiration' (MoI) by Rhino's original developer Michael Gibson, who left the company after six years to build his own business.28 Clearly McNeel is very strategic about this public dissemination.
By making their formats transparent and accessible, Rhino is creating a bigger market

23 Franco Folini, "An Interview with Michael Gibson, MoI CEO and Founder", Novedge blog, July 02, 2008, accessed March 8th 2015, http://blog.novedge.com/2008/07/an-interview-wi.html
24 Ibid.
25 "The History of Rhino, Summary: A few interesting dates", accessed March 28th 2015, http://wiki.mcneel.com/rhino/rhinohistory
26 From an interview with the author
27 openNURBS Initiative, accessed March 28th 2015, https://www.rhino3d.com/opennurbs
28 Franco Folini, "An Interview with Michael Gibson, MoI CEO and Founder", Novedge blog, July 02, 2008, accessed March 8th 2015, http://blog.novedge.com/2008/07/an-interview-wi.html



for itself through dissemination rather than central control like the bigger companies. Slowly, large companies like Autodesk have realized the market advantage of this strategy: as of 2012, educational licenses of all Autodesk products are available for free online. Similar to its progenitor AutoCAD, Rhino is used by a diverse set of disciplines. Still, there are identifiable user groups, and certain types of geometry persistently reappear in projects that have been modeled with NURBS.

Fillet, Loft and Trim

Designs modeled in Rhino are based on surfaces rather than volumes, as in polygon modeling. Rhino also excels at boolean operations and at splitting and joining surfaces. Perhaps the most quintessential NURBS command is lofting, which, like the shipbuilding practice it comes from, is a simple method for producing smooth surfaces. To create a loft in Rhino, a minimum of two profile curves must first be defined. The loft command then connects these two or more curves with a surface. For best results the curves should be of roughly equal length and complexity. A recently built example is the Bombay Sapphire Distillery glasshouse by British designer Thomas Heatherwick. The two glasshouse structures have

Figure 2.6. Trim, Loft, Fillet



Figure 2.7. Iwan Baan, Bombay Sapphire Distillery by Thomas Heatherwick, 2014

Figure 2.8. Neil Denari, HL23 condominium, (2011)



been modeled as a series of lofts between two curves. The volume, constructed out of stainless steel frames and glass panels, is made of more than ten thousand unique elements, including over eight hundred custom double-curved glass elements.29 Even the structural elements dividing the glass panels from each other are derived from the Rhino modeling logic: they are the iso-curves of the lofted surfaces. Because a loft is always mediating between lines, it is a challenge to end a loft. This is also where most geometric imprecisions occur in this structure, at the point where the glass panels end and merge into a tangle of steel. In shipbuilding this kind of imprecision in form and structure would not be admissible, because it would not withstand the forces generated by flows of water. The second Rhino command of importance is the fillet. Used abundantly in industrial design, it has been extraordinarily successful in architecture as well. The fillet in Rhino creates an arc-shaped transition between straight lines or surfaces. It has the advantage of creating a sense of geometric continuity while remaining buildable with standard construction techniques, not necessarily requiring CNC machining. Neil Denari, inspired by industrial design production methods, made the fillet a signature element in many of his designs, including his HL23 condominium next to the High Line in New York City, pictured on the left.30 This particular example introduces another typical Rhino command: the trim. Trimming is the process of cutting and deleting a portion of a geometry. Here a combination of filleted surfaces intersected with volumes can be seen. Rhino is a highly flexible tool that can and has produced a wide variety of shapes and forms, but these three commands can be identified disproportionately often in offices using Rhino as an integral design tool.
29 Thomas Heatherwick, Bombay Sapphire Distillery, 2014, accessed March 10th 2015, http://www.heatherwick.com/distillery/
30 Francesc Salla, "How to build a bright building in a small plot with Rhino", April 2, 2015, accessed May 3rd 2015, http://blog.visualarq.com/2015/04/02/how-to-build-a-bright-building-in-a-small-plot-with-rhino/
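The lofting operation described above is, in its simplest form, a ruled surface: every point of the resulting surface is a linear blend of corresponding points on the two profile curves. The following is an illustrative sketch in Python with NumPy, not Rhino's actual Loft implementation, which builds full NURBS surfaces with many more options:

```python
import numpy as np

def ruled_loft(profile_a, profile_b, v):
    """Evaluate the simplest loft, a ruled surface, at parameter v.

    profile_a, profile_b: (n, 3) arrays of points sampled along the
    two profile curves. v = 0.0 returns the first profile, v = 1.0
    the second; intermediate values give the iso-curves in between.
    """
    profile_a = np.asarray(profile_a, dtype=float)
    profile_b = np.asarray(profile_b, dtype=float)
    return (1.0 - v) * profile_a + v * profile_b

# Two profile curves with equal point counts, as the text recommends:
bottom = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
top    = np.array([[0, 1, 2], [1, 1, 2], [2, 1, 2]])

mid = ruled_loft(bottom, top, 0.5)  # the iso-curve halfway up the loft
```

Evaluating the blend at fixed values of v yields exactly the iso-curves mentioned in the text, which the Heatherwick glasshouse exposes as structural members.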



Grasshopper

David Rutten started coding as an architecture student at TU Delft and later became the sole initial developer of a program called "Explicit History", the precursor of today's Grasshopper.31 His motivation for coding was a discontent with design being evaluated based on "emotions or philosophical ideas."32 He was eventually hired by Robert McNeel, but first developed the program in secret in his free time, only introducing it to the team at an advanced stage in 2008. The program is essentially a visual representation of coding: operations are represented by nodes that can be moved on a canvas with infinite scroll and connected with each other, creating a visual hierarchy of the script. Grasshopper allows for coding without actually writing code in text form, or, as David Rutten puts it: "All the steps are like lines in source code, except there is no code"33. This visual component makes it a great tool for teaching basic coding concepts to beginners. Moreover, Grasshopper allows for custom-coded nodes, so users can create their own plug-ins and tools with Python, a popular scripting language. Many professional and amateur coders share their plug-ins, so less experienced users can build a plethora of existing scripts into their workflow. A long chain of plug-ins, from AutoCAD, to Rhino, to Grasshopper, to small scripts with funny names like Kangaroo, Firefly or Centipede, extends the user's options. This way it becomes possible even for coding novices to create their own tools, rather than accept the existing toolset. Of course projects built with Grasshopper are also biased towards certain outcomes, but at the very least the software represents a first step in an emancipation from a predefined toolset towards specific tools designed by the architect for her purposes.
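Grasshopper's node-and-wire model can be sketched in a few lines of ordinary code: components are functions, wires pass values between them, and the definition is evaluated in dependency order. This is an illustrative sketch of the dataflow idea only, not the actual Grasshopper implementation or API:

```python
# A minimal sketch of Grasshopper-style dataflow: components are
# functions, wires connect outputs to inputs, and the graph is
# evaluated in dependency order. All names here are illustrative.

def evaluate(graph, sources):
    """graph: {node: (func, [input node names])}; sources: {node: value}."""
    values = dict(sources)

    def resolve(node):
        # A node's value is computed once, after its inputs resolve.
        if node not in values:
            func, inputs = graph[node]
            values[node] = func(*[resolve(i) for i in inputs])
        return values[node]

    for node in graph:
        resolve(node)
    return values

# Two "slider" sources feeding a small chain of components:
graph = {
    "circle_area": (lambda r: 3.14159 * r ** 2, ["radius"]),
    "scaled":      (lambda a, f: a * f, ["circle_area", "factor"]),
}
values = evaluate(graph, {"radius": 2.0, "factor": 10.0})
```

Changing a source value and re-running `evaluate` regenerates every downstream result, which is precisely the behaviour a Grasshopper slider exposes visually.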
This focus on tools, and specifically on the possibility to integrate and organize different forms of data into the design process, has brought about a renewed discussion of the role of tools in contemporary discourse.

31 "Architectural Association, lectures", accessed on May 12th, http://www.aaschool.ac.uk/VIDEO/lecture.php?ID=1212
32 Ibid.
33 Ibid.



The story of the key

The transition from the laborious construction of geometry, as inherited from drafting software like AutoCAD, to designs embedded in networks of information, as enabled by Grasshopper, is associated with a paradigm shift in the description of objects in architecture. Imagine a key built with regular operations in Rhino, using the aforementioned commonly used commands like fillet, loft and trim. That same key could be built in Grasshopper as a parametric system. It would now be tied into a number of contingencies and could be modified infinitely as needed. Bruno Latour took the key as an example in his essay 'The Berlin Key or How to Do Things with Words', on the different systems of access in Paris versus Berlin.34 Latour illustrates how the key becomes entangled in social relations: "From being a simple tool, the steel key assumes all the dignity of a mediator, a social actor, an agent, an active being."35 The term design now encompasses more than the mere formation of 'Gestalt'; it requires designers to understand the things they are working on as deeply embedded socio-political agents. Similarly, even if Grasshopper is more approachable than other coding environments, it still creates significant hurdles for those who want not just to replicate preexisting chains of commands but to find creative ways to make tools. Thinking of architecture as a social actor, however, can open up different modes of operation even without particular skill sets. In the end, the data afforded by Grasshopper is still relatively simple and difficult to tie into other aspects of a particular design. Relying on numerical input will produce more quantifiable results, but often at the loss of other aspects. It is the next challenge to combine these numerical qualities with traditional design criteria pertaining to simple objects.

34 Bruno Latour, "The Berlin Key or How to Do Things with Words" in Matter, Materiality, and Modern Culture (London: Routledge, 1991), p. 19
35 Ibid.
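The contrast drawn above, between a key modeled as a fixed shape and a key described as a parametric system, can be sketched in code. All names and dimensions here are hypothetical; the point is only that the geometry is derived from parameters and can be regenerated at will:

```python
# A sketch of the key as a parametric system rather than a fixed
# shape. Every attribute below is an illustrative assumption, not a
# real locksmithing standard.

from dataclasses import dataclass

@dataclass
class ParametricKey:
    blade_length: float   # mm
    bitting: tuple        # cut depths along the blade, in mm

    def cut_positions(self):
        """Evenly space the cuts along the blade, much as a
        Grasshopper definition might divide a curve."""
        n = len(self.bitting)
        spacing = self.blade_length / (n + 1)
        return [spacing * (i + 1) for i in range(n)]

key = ParametricKey(blade_length=30.0, bitting=(2.0, 3.5, 1.0, 2.5))
positions = key.cut_positions()   # regenerated whenever a parameter changes
```

Where the Rhino version would need to be remodeled by hand, here changing `blade_length` or `bitting` and re-evaluating is the entire edit.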



Figure 2.9. History of Robert McNeel and Associates

BIBLIOGRAPHY
Latour, Bruno. "The Berlin Key or How to Do Things with Words." In Matter, Materiality, and Modern Culture. London: Routledge, 1991.
Pottmann, Helmut, Andreas Asperl, Michael Hofer, Axel Kilian, and Daril Bentley. Architectural Geometry. Pennsylvania: Bentley Institute Press, 2007. 276-278.
Siegert, Bernhard. "Waterlines: Striated and Smooth Spaces as Techniques of Ship Design." In Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real. New York: Fordham University Press, 2015. 147-163.
Townsend, Alastair. "On the Spline: A Brief History of the Computational Curve." In International Journal of Interior Architecture + Spatial Design: Applied Geometries, edited by Jonathon Anderson and Meg Jackson, 2014.
Young, Michael. "Digital Remediation." Cornell Journal of Architecture 9, Winter 2013.



LIST OF ILLUSTRATIONS
Figure 2.1. Course in Airplane Lofting, Burgard High School, Buffalo, NY, USA, January 1, 1941
Figure 2.2. A traditional spline with ducks
Figure 2.3. Charles P. Kunhardt, inboard profile of the cutter yacht Yolande, 1880
Figure 2.4. Spline held in place by ducks
Figure 2.5. Three degrees of a spline
Figure 2.6. Trim, Loft, Fillet
Figure 2.7. Iwan Baan, Bombay Sapphire Distillery by Thomas Heatherwick, 2014
Figure 2.8. Neil Denari, HL23 condominium, 2011
Figure 2.9. History of Robert McNeel and Associates



PHOTOSHOP

Politics has become manifesto by Photoshop, seamless blueprints of the mutually exclusive. Rabbit is the new beef, Comfort is the new Justice.1

Figure 3.1. FELD- Studio for digital crafts, Photoshop Collage

1 Koolhaas, R. (2002). Junkspace. October, (100), 190



Photoshop is primarily a two-dimensional, inexact software, useful for the production of images rather than geometry, and therefore presents an unexpected tool to be discussed in terms of architectural design. There is hardly a creative field, however, that has not been transformed by it. In photography, and in visual culture in general, Photoshop has had an immeasurable effect since its inception in the late 80s. In architecture, Photoshop plays an important role on two fronts. First, it is used as a design tool to visually work out ideas on a two-dimensional plane through the collaging or superimposition of images. Since early modernism architects have used collage as a design method, combining photographic and fabricated visual material. With the introduction of Photoshop in the early 1990s, perfect transitions between disparate elements could be achieved with simple digital tools, and amateurs were able to produce photo-realistic composite images. Today, a younger group of architects, rejecting seamless continuity without the potential for friction, is returning to the collage as a political statement, an expression of the complexity of socio-political relationships. Second, Photoshop's most ubiquitous use is as a touch-up tool, to edit existing renderings and photographs. As a fabricator of an overly perfect synthetic reality, it is negatively associated with this process of embellishment: when something is photoshopped, it is not 'honest' or 'authentic'.1 This process happens at different stages of architectural production. Renderings are photoshopped to sell the product, and when the building is completed, photos of the building are edited again to match the fictitious perfection of the original selling image. The generation that grew up using Photoshop, so-called digital natives, does not remember a time when editing and creating images was purely analogue.
There are significant links, however, to the analogue world within the tools and structure of Photoshop. The original programmer, Thomas Knoll, was an avid photographer, familiar with darkroom procedures.2 The combination of this knowledge with technical expertise in image processing was at the foundation of Photoshop. Photoshop reflects a mixture of analogue and digital influences forming it

1 Gerry Badger, "It's Art, But Is It Photography? Some Thoughts on Photoshop" in The Pleasures of Good Photographs (New York: Aperture, 2010)
2 Michael Reichmann & Kevin Raber, interview with Thomas Knoll, "Thomas Knoll: The Story of Photoshop", June 2013, accessed March 3rd, https://vimeo.com/73949178



Figure 3.2. Basic elements of a darkroom

Figure 3.3. Burn and dodge



DISCIPLINARY TRANSFER

into the hybrid application it is today. Some tools emerge directly from darkroom routines used to develop photographs, and others exist exclusively in the digital domain. When using Photoshop, the distinction between these different tools is blurred. To fully understand the heritage of Photoshop, both traditional photography and computer science will be addressed here. The first set of knowledge, concerning the analogue legacy, is to be found in photographic development procedures. A darkroom is essentially a space that can be completely darkened, because any light would fog the sensitive gelatin emulsion of the photographic film.3 The work in the lab is divided into two parts: exposure and development. Exposure is the process of enlarging and burning an image from the photographic film onto photographic paper. At this stage all the information of the image is already contained on the white photographic paper, but it is invisible to the human eye. The image is then developed through a number of chemical baths.4 Dodging and burning are part of the exposure process. The longer an image is exposed, the darker it will be; covering up certain parts of the image will therefore make these parts lighter relative to the rest of the image. Dodging in analogue photography is achieved with masks: a piece of paper cut into the right shape and attached to a thin stick to hold it in position.5 For the reverse process, called burning, a picture is first given a base exposure. A second round of exposure is then applied with a mask in front of the image, revealing only the parts that are supposed to be darker.6 It took a lot of practice to achieve the right balance between light and dark areas with these time-consuming procedures. Another common Photoshop tool that has its origins in analogue technique is the brightness/contrast slider. In the darkroom, image contrast is controlled by several factors.
First, it depends on the type of photographic paper used: different

3 Dan Massey, Introduction to Black and White Photography: Darkroom Layout & Equipment for Black and White Photography, Fine Art Photography, http://www.danmassey.co.uk/
4 Ibid.
5 Dodging and burning, Wikipedia, accessed May 5th, http://en.wikipedia.org/wiki/Dodging_and_burning
6 Ibid.



grades of paper produce different contrast intensities. Secondly, contrast is affected by magenta filters held in front of the image during exposure.7 The more magenta the light contains, the stronger the contrast in the image. Many other darkroom procedures have been emulated by digital technology. From the few examples described above it can be seen how much time and care were necessary to produce effects that are controlled by a single slider today.
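That single slider compresses the darkroom labour described above into one per-pixel operation. The textbook linear form of a brightness/contrast adjustment, scaling values around mid-grey and then shifting them, can be sketched as follows (Photoshop's actual adjustment is more elaborate than this):

```python
import numpy as np

def brightness_contrast(image, brightness=0.0, contrast=1.0):
    """Textbook linear brightness/contrast: scale pixel values around
    the mid-grey point (127.5), then shift by a brightness offset.
    image: array of 0-255 values. A simplified sketch, not
    Photoshop's implementation."""
    img = image.astype(float)
    out = contrast * (img - 127.5) + 127.5 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

flat = np.full((2, 2), 100, dtype=np.uint8)
brighter = brightness_contrast(flat, brightness=30)   # 100 -> 130
punchier = brightness_contrast(flat, contrast=2.0)    # 100 -> 72
```

A contrast above 1.0 pushes values away from mid-grey, which is why the darker-than-mid-grey value 100 gets darker here, exactly the effect a photographer once achieved with paper grades and magenta filters.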

THE DEFINING TECHNICAL ASPECT

Photoshop is more than a literal translation of analogue systems into the digital: a number of processes are unique to the digital world, with the pixel being the most fundamental. Photoshop is based on raster images, meaning that images consist of thousands of little dots or squares whose color information composes the overall image. A pixel does not have a defined shape or size; it is relative to image size and resolution.8 It is better defined as the smallest picture element, no matter what size it is. This is also where the term pixel originates (pix [pictures] and el [element]).9 The first pixel image was produced by Russell Kirsch in 1957 at the National Bureau of Standards, a state-funded research agency.10 Kirsch created the first digital scan with a picture of his baby son at a resolution of 176 x 176 pixels. The square was a logical way of translating images into a computer language of zeros and ones. Today, at the age of 81, Kirsch is researching different ways of tiling pictures into mosaic patterns to store image information more efficiently.11 His idea of the square grid of pixels, however, has already pervaded a wide range of image processing systems, from scanners, printers and cameras to monitors. Pixels and the associated grid structures are deeply embedded in digital cultures. Working in Photoshop can therefore be interpreted as grid manipulation. Every image we see on a monitor is essentially composed of a flat grid of singular squares.

7 Cameron Knight, "Understanding Contrast Control in the Darkroom", 30 Jan 2014, accessed May 6th, http://photography.tutsplus.com/tutorials/understanding-contrast-control-in-the-darkroom--photo-17792
8 Pixel Definition, High Definition: A-Z Guide to Personal Technology, accessed April 15th 2015, http://literati.credoreference.com.ezproxy.cul.columbia.edu/content/entry/hmhighdef/pixel/0
9 Ibid.
10 Russell A. Kirsch, SEAC and the Start of Image Processing at the National Bureau of Standards, NIST virtual museum, last updated July 13, 2009, accessed April 24th 2015, http://museum.nist.gov/
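The grid interpretation can be made literal: a raster image is just an array of values, and editing it is slice assignment on that array. A minimal sketch with NumPy:

```python
import numpy as np

# A raster image is literally a grid of squares: here a 4x4 image
# with one colour channel, where every cell is a value from 0 to 255.
image = np.zeros((4, 4), dtype=np.uint8)

# "Editing" in a raster program is grid manipulation: painting a 2x2
# white block is just assigning values to a slice of the grid.
image[1:3, 1:3] = 255
white_pixels = int((image == 255).sum())
```

Every brush stroke, filter and selection in a raster editor ultimately resolves to operations of this kind on the underlying grid.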

11 Russell A. Kirsch, Precision and Accuracy in Scientific Imaging, Journal of Research of the National Institute of Standards and Technology, Volume 115, Number 3, May-June 2010

Figure 3.4. pixelated image in MacPaint (1984)



Resolution

There are a few basic terms regarding pixels that are important to how Photoshop processes images. Each image has a certain resolution: the number of pixels in the horizontal and vertical direction. This is the most important number in understanding the size and amount of detail in an image. A 300 by 300 pixel image is considered low resolution and cannot be zoomed into without revealing single pixels. A 5000 by 5000 image would be considered high-res; it is possible to zoom in very far without seeing any pixels. The resolution of an image corresponds roughly to its file size, though file size is also determined by color, amount of detail and layer structure.12 When printing an image, the required density of pixels per inch (PPI) depends on the size of the print. A 300 by 300 image on a 10-inch paper will have a density of 30 pixels per inch, a very low value. The 5000 by 5000 image printed at 10 by 10 inches will have a density of 500, considered very high. Optimal values for printing are considered between 72 and 300 pixels per inch.

12 Image size and resolution: About pixel dimensions and printed image resolution, Photoshop Help, accessed March 6th 2015, https://helpx.adobe.com/photoshop/using/image-size-resolution.html

Color

Figure 3.5. pixel grid

Figure 3.6. The first digital image, Russell Kirsch (1957)
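The print-density arithmetic from the Resolution section reduces to a single division; a sketch reproducing the two examples given there:

```python
def pixels_per_inch(pixels, inches):
    """Print density: image pixels along one edge divided by the
    printed size of that edge in inches."""
    return pixels / inches

low  = pixels_per_inch(300, 10)    # the low-resolution example: 30 PPI
high = pixels_per_inch(5000, 10)   # the high-resolution example: 500 PPI
```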



The second important concept in Photoshop is how pixel color gets determined. Photoshop has adopted the RGB standard, meaning that each color is defined by three values, representing the amount of red, green and blue. Each channel is assigned a value from 0 to 255.13 One analogy is to imagine three different filters letting light through: the lower the value, the less light is allowed to pass. When all three filters are at the lowest value of 0, the image will be black. A combination of high values for red and green and a low value for blue lets light pass only through red and green, resulting in yellow. Through this system of combinations a great variety of colors can be achieved: if each channel has 256 possible values, the multiplication 256 x 256 x 256 yields 16.7 million options for each pixel.

13 RGB color model, Wikipedia, accessed 8th of March 2015, http://en.wikipedia.org/wiki/RGB_color_model
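The filter analogy and the combinatorics above can be stated directly in code (the specific colour values are illustrative):

```python
# RGB as three filters: each channel is an intensity from 0 to 255,
# and a pixel's colour is the triple (r, g, b).
black  = (0, 0, 0)        # all three filters closed
yellow = (255, 255, 0)    # red and green pass, blue blocked

# 256 possible values per channel gives the figure quoted in the text:
combinations = 256 ** 3   # 16,777,216, roughly 16.7 million
```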

Figure 3.7. RGB color combinations



LAUNCH TRAJECTORY

Photoshop emerged from two disciplines: analogue photography and computer science. For these two disciplines to coalesce, several talents came together in one family. It began with Glenn Knoll, a professor at the University of Michigan, who built a darkroom in his basement.14 His older son, Thomas Knoll, was fascinated with photography and used his dad's basement to experiment with photographic techniques. As a second hobby he learned how to write simple programs in high school. Using timesharing computers he grasped the concepts quickly and was hired as a programmer at a local company while still a freshman in high school.15 His younger brother John Knoll was fascinated with visual effects in a different way: after seeing the first Star Wars movie he went to film school to become a camera operator. He landed his first job at the revered Industrial Light and Magic, the most prestigious film production company at the time, and, more importantly for John, the very production house that had worked on Star Wars, the very reason he wanted to work in the film industry. Today, John Knoll is the visual effects supervisor and chief creative officer of the company, fulfilling his childhood dream.16 It is his generation that witnessed, and was sometimes an instrumental part of, the transition from the analogue to the digital in the special effects industry. While John joined the film business, his brother started writing his PhD on computer vision at the University of Michigan, where his father taught. His thesis was concerned with the understanding of images, identifying objects and spatial relationships within them. For this purpose Thomas wrote image processing algorithms detecting edges in images, a tool that still exists in Photoshop as the 'find edges' filter. He wrote a few of these algorithms and showed them to his brother John, who saw potential applications in the motion picture industry.
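An edge detector of the kind described here can be sketched as a difference of neighbouring pixels: wherever intensity jumps, the difference is large. This is a deliberately minimal illustration; Photoshop's actual Find Edges filter uses richer convolution kernels:

```python
import numpy as np

def find_edges_1d(image):
    """A minimal edge detector: the absolute horizontal difference
    between neighbouring pixels is large wherever intensity changes
    abruptly. (Real filters convolve with 2D kernels.)"""
    img = image.astype(int)
    return np.abs(img[:, 1:] - img[:, :-1])

# A hard vertical edge between a dark and a bright region:
strip = np.array([[10, 10, 200, 200]], dtype=np.uint8)
edges = find_edges_1d(strip)   # responds only at the boundary
```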
Eventually it became too cumbersome to use each tool separately, so Thomas built an application combining several features into one interface. John kept asking for additional functionality and Thomas kept developing it, as a way of 'procrastinating' during his PhD.17 Thomas would never finish his PhD, but this side project would become something much bigger than many a dissertation. John, realizing the marketing potential of the software, approached a few companies, and in 1988 the program was adopted by Adobe. Thomas kept working on Photoshop single-handedly until the second version, and he is employed by Adobe to this day. When Photoshop first came out in 1990, digital printing and digital cameras had not entered the mainstream market, and only very specialized professionals could afford the hardware necessary to run Photoshop.18 In this sense the software was ahead of its time, and it took over a decade for other technologies to catch up and uncover the full potential of Photoshop's functionality. As the price of computing speed fell drastically, Photoshop developed further in conjunction with digital cameras and desktop printing and soon became more accessible to the mainstream market.19

14 Dreams from the digital darkroom — 25 years of Photoshop, Adobe Blogs, accessed March 3rd 2015, http://blogs.adobe.com/conversations/2015/02/photoshop-turns-25-qa-with-thomas-knoll.html
15 Michael Reichmann & Kevin Raber, interview with Thomas Knoll, "Thomas Knoll: The Story of Photoshop", June 2013, accessed March 3rd, https://vimeo.com/73949178
16 Ibid.

New business models

In the past few years Photoshop has adapted to new technologies again: it has changed its business model from expensive single licenses to a cloud-based subscription model, where users pay a relatively low monthly fee rather than a high starting price. Still regarded as a highly professional tool, Photoshop is hoping to attract more casual users by offering free applications with a limited palette of tools. Simple-to-use, pre-packaged image processing filters in social media software like Instagram offer fast and easy solutions for the general public, making it difficult for an accomplished but relatively complex software like Photoshop to compete. Adobe usually sells Photoshop as part of a bundle of different software, the 'Adobe Master Collection', which also contains Illustrator, InDesign, After Effects and more.
Because it is part of a collection, Photoshop can focus on being excellent at one specific type of task: editing pixel images. While there

17 Michael Reichmann & Kevin Raber, interview with Thomas Knoll, "Thomas Knoll: The Story of Photoshop", June 2013, accessed March 3rd, https://vimeo.com/73949178
18 Celebrating 25 years of Photoshop, Lynda.com, accessed March 7th 2015, http://www.lynda.com/25ps
19 Ibid.


63

Photoshop

Figure 3.8. Le Corbusier, Collage, 1962

Figure 3.9. Mies van der Rohe, "Ink and Photo Collage with Glass", 1960-63, Courtesy of MoMA



are a few vector tools in Photoshop, they are not nearly as comprehensive as those in Illustrator. The core structures of Illustrator and Photoshop are very different, and it is in the specific understanding of each format that designers can develop a smooth workflow between different programs.
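Underneath these workflows, blending raster layers of the kind this chapter discusses is per-pixel arithmetic. A minimal sketch of alpha blending, a simplified version of what layer opacity does in a raster editor (names illustrative, not Adobe's implementation):

```python
import numpy as np

def alpha_blend(foreground, background, alpha):
    """Classic 'over' compositing with a single opacity value: each
    output pixel is a weighted mix of foreground and background.
    alpha ranges from 0.0 (background only) to 1.0 (foreground only)."""
    fg = foreground.astype(float)
    bg = background.astype(float)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)

white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
mid_grey = alpha_blend(white, black, 0.5)   # every channel becomes 127
```

The seamless transitions the chapter goes on to discuss are this same weighted mix, applied with a spatially varying mask instead of one global alpha.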

DISCOURSE AND EXAMPLES

Photoshop as a design tool

As an extremely flexible design tool, Photoshop is tied into a rich history of photography and collage technique from analogue sources. Photoshop is essentially an image processing tool: an input image is manipulated to produce an output image.20 The manipulation process can be broken down into three different operations: combination, creation and filtering. The combining of discrete images includes collage, photo-montage and compositing techniques with roots in the visual arts, going back to early modernism. Secondly, the drawing tools embedded in Photoshop, like brushes and stamps, are creation tools, allowing users to create images without external sources; Photoshop also has a limited palette of basic shapes, and even some vector tools. The third operation involves different types of filters and masks, changing aspects like color and contrast or adding effects like noise or blur to the image. Filtering is the main technique that has enabled the smooth blending of images into composites with seamless transitions. In the Photoshop workflow all three manipulation techniques are used simultaneously. They serve here as a framework relating the software to its analogue sources, while speculating on their effects in the visual world Photoshop has enabled.

Collage

Architects have used collage techniques since early modernism, with Le Corbusier and Mies van der Rohe as the most notable protagonists of this generation. Le Corbusier's work in that respect was closer to the

20 Michael Reichmann & Kevin Raber, interview with Thomas Knoll, "Thomas Knoll: The Story of Photoshop", June 2013, accessed March 3rd, https://vimeo.com/73949178



Figure 3.10. Superstudio, Continuous Monument series, 1969

Figure 3.11. Ben Van Berkel, Caroline Bos: Move, Manimal 1999

Figure 3.12. R&sie(N) profile picture



Figure 3.13. Gordon Matta-Clark, Conical Intersect (1975)

Figure 3.14. Andrés Jaque / Office for Political Innovation, Contemporary Home Urbanism (2012)



artistic avant-garde at the time, namely Cubism, while Mies van der Rohe developed a more specifically architectural framework.21 The collages of Le Corbusier were purely graphical, while Mies van der Rohe constructed perspectives through drawing and used photographs of existing artworks to place them within his virtual spaces. His collages are more than a mere illustration of his two-dimensional plans. Rather, they become a design tool in their own right, testing the materiality and transparency of surfaces. From this practice of combining drawing and photography a long tradition has emerged, changing with each new technological opportunity. In the 1960s, when color photography became widely available and social utopias needed new modes of visual description, Archigram and Superstudio made use of this technique, combining images and drawings into elaborate collages.22 Superstudio used collage to overlay rigid, drawn, man-made, black-and-white grids with colorful, 'authentic' people and other props. A decade later, Rem Koolhaas continued this technique with the Voluntary Prisoners of Architecture and other dark urban fantasies.23 His work would influence a generation of younger designers who use collage as a catalyst for socio-political critique, a topic addressed later in this chapter.

Composite

In the late 90s, a number of architecture offices interested in computational design started experimenting with montage techniques and smooth transitions. UN Studio identified the Manimal as an icon of hybridization in their publication Move.24 The Manimal is a photoshopped composite of an ape, a snake, a man and a horse. Interested primarily in the technique by which it was produced, Ben van Berkel and Caroline Bos speculated on how this type of hybridization could manifest in architectural space. They imagined that an "intense fusion of construction, materials, circulation and programme spaces"25 would produce hybrid conditions akin to the Manimal, where the original sources could not be distinguished from one another. They also saw the creative process in a co-dependent relationship with the software used. Unaware of the particular history of Photoshop, they wrote: "The Manimal was produced by one artist, but looks like the product of a group, which in a way it is, since the anonymous software programmers who created Photoshop also have a large hand in the portrait."26 Far from anonymous, John and Thomas Knoll would probably take this credit as a compliment. The adverse side of hybridity is that nothing is differentiated anymore, especially when the work is oriented primarily towards formal expression. The fascination with visual continuity becomes so dominant that it comes at the cost of other factors. Of the same generation as van Berkel and Bos, the French architect François Roche and his partner Stéphanie Lavaux experimented with the idea of hybridity to question their own identities and ideas of collaboration. They merged their own and their team's faces into a single, uncanny face.27 Stating their aversion to "architects being treated as stars,"28 they created an image symbolizing shared and innominate authorship. This particular generation was also fascinated with animation software such as Maya, using three-dimensional morphing and animation techniques with a focus on generating form out of different vectors. (See the Maya chapter.)

Critical photomontage and documentary

Photoshop has had a profound effect on how we look at images, not in a literal way but on a metaphysical level. The difference is one of trust: consumers are well aware that most images are manipulated. To counteract this tendency, systems of authorization have developed, for example the Instagram hashtag #nofilter, indicating that an image has not been edited and is therefore authentic. Architects have also found countermeasures to emphasize the truthfulness of their images. One is a sheer quantity of images, which would make it impossible to modify each of them. Another is a conscious roughness, leaving image contrast or imperfections intact. An analog predecessor of the documentary collage technique are the photomontages of Gordon Matta-Clark, who produced complex perspectives of his own work from pictures taken from multiple angles.29 As a tool to document ideas, Photoshop would return in a different capacity in the work of a younger generation. Interested in contrast and the co-existence of multiple actors, these firms use Photoshop to think through complex assemblages. Andrés Jaque's Office for Political Innovation creates panoramas of the relationships between the different actors on a site when designing its projects. The assemblages produce readings that go beyond the juxtaposition of images on a plane: they become a testimony to the variety of human relationships and the way architecture connects politically and socially at all scales. For Jaque, the use of collage is more interesting than the composite, because the individual facets need to remain recognizable. Photoshop offers a new kind of toolset for collage here, bringing collage and composite closer together. While the work of architects like Jaque is not interested in seamless hybridization, the images are still very coherent, using filters and effects to unify the final image while keeping the original sources legible. In a tradition emerging from the long heritage of collaging techniques, ranging from Mies to Superstudio and Rem Koolhaas, Konstantinos Pantazis and Marianna Rentzou, who both worked at OMA, employ collage as a form of critique in their practice.

21 Jennifer Shields, Collage and Architecture (New York: Routledge/Taylor & Francis Group, 2014). 22 Ibid. 23 Rem Koolhaas and Bruce Mau, "Exodus, or the Voluntary Prisoners of Architecture: AA Final Project, 1972," in Small, Medium, Large, Extra-Large: Office for Metropolitan Architecture (New York: Monacelli Press, 1995). 24 Ben van Berkel and Caroline Bos, Move (Amsterdam: UN Studio & Goose Press, 1999). 25 Ben van Berkel and Caroline Bos, Move (Amsterdam: UN Studio & Goose Press, 1999). 26 Ibid. 27 Norman Kietzman, "François Roche – R&Sie(n) Architects," BauNetz, Swarovski edition, Crystal Talk 26, accessed March 6th 2015, http://www.baunetz.de/talk/crystal/ 28 Ibid.
Calling themselves 'Le Point Supreme', they define the role of collage as "deeply seminal" in their work: "We are interested in spaces of contradiction, disruptions of meaning, unexpected associations."30 Practices like the Office for Political Innovation or Le Point Supreme use Photoshop as a tool to create complex reflections of reality, with the ultimate aim of critically examining those conditions. 29 Jennifer Shields, Collage and Architecture (New York: Routledge/Taylor & Francis Group, 2014), 67. 30 Jennifer Shields, Collage and Architecture (New York: Routledge/Taylor & Francis Group, 2014), 134.



Since the introduction of Photoshop, an ongoing discussion has emerged in the visual world concerning what amount of retouching is admissible for images to be considered documentary rather than artistic. With architects engaging in speculative futures, the question has to be rephrased in terms of how much an image can and should depart from a photo-realistic illustration. In an article debating these definitions, the critic Gerry Badger comments on the use of Photoshop by photographers and where the accepted boundaries should lie. In his definition, Photoshop is admissible as a tool as long as it is used within "the normal bounds of controlling color balance, contrast, density- the kinds of controls that photographers have always exercised using traditional photo-printing methods, the controls used to produce a believable and visually well-balanced print."31 When employed this way, images edited with Photoshop can still be considered "honest photography" according to Badger. This is a fine line to walk: if a little editing is allowed, when does an image become dishonest? Even if architects are less concerned with this exact delineation, there is a discussion on the meaning and effects of the increasing photorealism in architectural image-making. One of the most acclaimed and expensive rendering studios in the world, the French company Luxigon, produces highly atmospheric, nearly realistic images. They use only basic rendering techniques, while the atmosphere and details are added in Photoshop, in an almost painterly 31 Gerry Badger, "It's Art, But Is It Photography? Some Thoughts on Photoshop," in The Pleasures of Good Photographs (New York: Aperture, 2010).

Figure 3.15. Collage vs Composite



process.32 As Eric de Broche des Combes says in an interview: "we're more falling into the '75% Photoshop – 25% pure rendering' type of firm. Photoshop is where the real thrill is, as well as most of the style." Their images are more about "conveying the spirit"33 of what a building will be than creating a literal representation. Spirit in this context could also mean an idealized version, and this is where dishonesty in architectural visualization becomes problematic. Because architecture consumers are so used to an idealized world, images are not only edited in the process of visualization but also for documentary purposes when a building is finished. Post completion, a work is re-staged, prepared to be photographed, and brought back to the perfection of the original selling image. Published online, this homogeneous, seemingly neutral, Iwan Baan-style image will be seen by bigger audiences than the building itself. Perhaps it is time to re-think the representation of finished buildings and use Photoshop more creatively here as well: not to fake or gloss over imperfections but exactly the reverse, to allow and maybe even emphasize the specific details and idiosyncrasies of a building rather than producing another perfectly clean and white-balanced image.

Photoshop in the building process

In the examples so far, Photoshop functions as a conceptual tool rather than as a design tool producing architectural elements to be translated into the physical world. One way Photoshop is often implicated in the design process is as a texturing tool, producing images to be printed on facades or abstracted as facade elements. In conjunction with 3d software it produces images that are then mapped onto geometry.
The Santa Caterina Market by Enric Miralles and Benedetta Tagliabue forms a colorful canopy composed of colored tiles, based on actual photographs of fruit and vegetables.34 Through a process of abstraction and filtering, the original content of the images is no longer recognizable but becomes an abstracted pattern.

32 Ronen Bekerman, "Interview with LUXIGON," April 4, 2011, RonenBekerman Architectural Visualization Blog, http://www.ronenbekerman.com/interview-with-luxigon/ 33 Kyle May, ed., CLOG: Rendering (Brooklyn, N.Y.: 2012), 127. 34 Santa Caterina Market renovation, Miralles Tagliabue EMBT, accessed March 28th 2015, http://www.mirallestagliabue.com/



Another relatively common way a Photoshop technique has found its way into contemporary design is through pixelation patterns. Low-res images expressed as tiles on a facade have seen a number of applications in recent years. They are popular because they can be produced with simple technical means and allow for different readings from near and far: the content of a pixelated image might only be legible at an appropriate distance, while producing abstract fields of color up close. Photoshop is a versatile tool implicated at various stages of the design and dissemination process of a building. With origins in photography and image processing, it brings to architecture a wide range of techniques that find new expressions with each generation.
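The near/far double reading comes from a simple operation: each facade tile takes the average value of the image region it covers. A minimal sketch in plain Python (the grayscale image and block size are illustrative stand-ins, not any particular facade project):

```python
# Sketch: reducing an image to a coarse "tile" grid by block averaging,
# the basic operation behind pixelated facade patterns.
# Grayscale values 0-255, image as a list of rows.

def pixelate(image, block):
    """Average each `block` x `block` region of a 2D grayscale image."""
    rows, cols = len(image), len(image[0])
    tiles = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            patch = [image[i][j]
                     for i in range(r, min(r + block, rows))
                     for j in range(c, min(c + block, cols))]
            row.append(sum(patch) // len(patch))
        tiles.append(row)
    return tiles

# A 4x4 gradient reduced to a 2x2 tile pattern:
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [64, 64, 128, 128],
       [64, 64, 128, 128]]
print(pixelate(img, 2))  # [[0, 255], [64, 128]]
```

At facade scale, each resulting tile value would be mapped to a panel color; the block size sets the distance at which the source image becomes readable.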

Figure 3.16. Santa Caterina Market tiling pattern, Miralles Tagliabue



Figure 3.17. Timeline of the Adobe Company, 1980-2015: Adobe founded by John Warnock and Charles Geschke; Illustrator; John and Thomas Knoll (ILM) develop the first version of Photoshop; PDF and Acrobat Reader; InDesign; shown alongside contemporaries AutoCad (Autodesk), Maya, and Rhinoceros and Grasshopper (McNeel)



BIBLIOGRAPHY

Massey, Dan. Introduction to Black and White Photography: Darkroom Layout & Equipment for Black and White Photography. Fine Art Photography, http://www.danmassey.co.uk/
Shields, Jennifer. Collage and Architecture. New York: Routledge/Taylor & Francis Group, 2014.
Koolhaas, Rem, and Bruce Mau. "Exodus, or the Voluntary Prisoners of Architecture: AA Final Project, 1972." In Small, Medium, Large, Extra-Large: Office for Metropolitan Architecture. New York: Monacelli Press, 1995.
Kirsch, Russell A. SEAC and the Start of Image Processing at the National Bureau of Standards. NIST virtual museum, last updated July 13, 2009, accessed April 24th 2015, http://museum.nist.gov/
Kirsch, Russell A. "Precision and Accuracy in Scientific Imaging." Journal of Research of the National Institute of Standards and Technology 115, no. 3 (May-June 2010).
Van Berkel, Ben, and Caroline Bos. Move. Amsterdam: UN Studio & Goose Press, 1999.
Manovich, Lev. "Inside Photoshop." Computational Culture: A Journal of Software Studies, 2011.



LIST OF ILLUSTRATIONS

Figure 3.1. FELD- Studio for digital crafts, Photoshop Collage
Figure 3.2. Basic elements of a darkroom
Figure 3.3. Burn and dodge
Figure 3.4. Pixelated image in MacPaint (1984)
Figure 3.5. Pixel grid
Figure 3.6. The first digital image, Russell Kirsch (1957)
Figure 3.7. RGB color combinations
Figure 3.8. Le Corbusier, Collage, 1962
Figure 3.9. Mies van der Rohe, "Ink and Photo Collage with Glass," 1960-63, Courtesy of MoMA
Figure 3.10. Superstudio, Continuous Monument series, 1969
Figure 3.11. Ben van Berkel, Caroline Bos: Move, Manimal, 1999
Figure 3.12. R&Sie(n) profile picture
Figure 3.13. Gordon Matta-Clark, Conical Intersect (1975)
Figure 3.14. Andrés Jaque / Office for Political Innovation, Contemporary Home Urbanism (2012)
Figure 3.15. Collage vs Composite

Figure 3.16. Santa Caterina Market tiling pattern, Miralles Tagliabue
Figure 3.17. Timeline of the Adobe Company



AutoCad



AUTODESK MAYA




Maya

"It is a medium, it is not a tool for us." Shajay Bhoosnan from Zaha Hadid Architects

Figure 4.1. Movie still from The Abyss, created with Alias software (1989)



AUTODESK MAYA

Younger, slicker and more intuitive than its competitors, Maya holds a particular place in the history of digital design. Arguably more than any other software, it spawned a theorization of its possibilities, explicitly by design thinkers such as Greg Lynn and his students Mark Gage and Hernan Diaz Alonso, and implicitly by theorists including Mario Carpo and Antoine Picon. By enabling architects to generate complex form through animation, Maya was instrumental in establishing software as a design tool rather than merely a drafting tool. As an inherently imprecise, intuitive tool, its strength is in the representation of complex form. It is used by designers to discover new techniques through a playful process in the search for happy accidents.1 Due to this experimental nature it has become popular with practices primarily interested in formal experimentation. A limitation of Maya is its imprecision: it can never drive a project beyond the preliminary design level. As other tools with more comprehensive toolsets for architectural design and production incorporate intuitive design workflows, Maya is losing its importance, becoming associated with willfulness and imprecision rather than revolutionary potential. Maya was originally developed by the movie industry starting in the mid 80s, as that industry had the budget and means to develop software for photo-realistic imagery.2 As both architects and movie-makers are essentially producers of synthetic reality, architects soon took an interest in these new representation techniques. Fundamentally, the relationship between these two industries is deeper than merely the use of the same tools: their stories intertwine on multiple levels.
The disciplinary transfer is exemplified by the figure of Bill Kovacs, who was educated and practiced as an architect for the first half of his career before entering the computer graphics business and eventually becoming the creative director of Maya.3 Today, Maya is being 1 Eric Owen Moss, "Interview: Hernan Diaz Alonso & Marcelo Spina," The Architectural Review, 26 April 2013, accessed March 3rd 2015, http://www.architectural-review.com/opinion/interview-hernan-diaz-alonso-and-marcelo-spina/8646930.article 2 Wayne Carlson, A Critical History of Computer Graphics and Animation, Section 14: CGI in the Movies (Columbus, Ohio: Ohio State University, 2007), accessed March 20th 2015, https://design.osu.edu/carlson/history/ID797.html 3 Tom Sito, Moving Innovation: A History of Computer Animation (Cambridge, Mass.: MIT Press, 2013), 187-189.



used in architecture schools around the world, making the professional transfer between the two disciplines uniquely fluid: it is not a rarity for architecture graduates to find employment in the motion picture industry.4 Still an industry standard in the film branch today, Maya stays operative in architecture as a sculpting tool for preliminary design ideas. Perhaps the theoretical reverberations of the moment when movie-production methods and architectural design collided will last longer than the use of the actual software. This chapter will look at the relationship between the movie industry and architecture, a dependency already evident in the genesis of the software.

DISCIPLINARY TRANSFER

The disciplinary transfer between the movie industry and architecture started long before the adoption of animation software. Before the digital, illusionist techniques were employed in movies to create imaginary worlds. Gathering techniques from various related disciplines, like stage design, photography and architecture, special effects teams used scale models and perspective drawings.5 The tools of the trade were therefore very similar to those of architectural professionals. In fact, in German a scenographer is alternatively called a "Filmarchitekt".6 Yet Maya is more than just the translation of all these analogue techniques into the digital; it includes a number of techniques native to the digital format, including morphing, polygon meshing patterns and scripting.

The omnipresent but invisible virtual camera

One generally invisible but conceptually significant element of Maya is the constant presence of a virtual camera. The viewport is not the equivalent of looking at

4 Erik Butka, Meagan Calnon & Kathryn Anthony, "'Star' Architects: The Story of 4 Architects who Made It in Hollywood," published June 19th 2013, http://www.archdaily.com/388732/star-architects-the-story-of-4-architects-who-made-it-in-hollywood/ 5 Richard Rickitt, Special Effects: The History and Technique (London: Aurum, 2006). 6 "Szenenbild," Wikipedia, last accessed March 12th 2015, http://de.wikipedia.org/wiki/Szenenbild



an object directly; rather, it is mediated by a virtual camera that allows the user to modify its settings: the viewing angle, focus, depth of field, safe frames and more. Some of these settings are helpful only for rendering purposes; others, like the focal length, will dramatically change the perception of a space or object. Changing the focal length allows a wider viewing angle to look at narrow spaces and experience interiors realistically. This is one of the big advantages of designing in 3d versus physical models: the ability to create realistic viewing angles in interiors. The virtual camera becomes the framing device for looking at the digital environment in Maya, and it delimits what is included in a scene and what is left out. During the rendering process, the lighting and texture are all oriented to work with the camera view. It is therefore helpful to set up the virtual camera first and then set up the lighting and texture to work with it. As technology advances it becomes increasingly easy and fast to preview light and materials in real time, and less knowledge of physical surface properties is required to create custom materials. Nevertheless, knowing the basics of their analogue counterparts is useful. Understanding how to position a camera, how to light a scene and how to make the right material choices are all analogue techniques originating from the film industry that have found their way into a digital translation.

Animation and its relationship to parametric modeling

If a model is constructed correctly, it can easily be modified in the design process. It is here that parametric processes and animation are very similar: in order to work, both have to follow hierarchical rules. The idea is essentially the same: building up a system with the ability to go back and modify the original structure later in time.
Animation software was the first to introduce into architecture the idea of reversible time, where one could produce a design and, rather than having to re-model or re-draw, build the model in anticipation of change. This allows for extremely fast cycles of iterative change during the design process.
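The shared hierarchical logic of rigs and parametric models can be sketched as a toy dependency graph in Python. The class names and the tower example are invented for illustration; they are not the API of Maya or of any parametric package:

```python
# Sketch of the "reversible time" idea shared by animation rigs and
# parametric models: downstream geometry is defined as a function of
# upstream parameters, so editing a parameter regenerates the model
# instead of requiring a re-draw.

class Param:
    """A driving value the designer can edit at any time."""
    def __init__(self, value):
        self.value = value

class Derived:
    """A value computed from upstream parameters on demand."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    @property
    def value(self):
        # Re-evaluated each time it is read, so upstream edits propagate.
        return self.fn(*(p.value for p in self.inputs))

# A tower whose total height depends on two driving parameters:
floor_height = Param(3.5)
floors = Param(10)
total_height = Derived(lambda h, n: h * n, floor_height, floors)

print(total_height.value)   # 35.0
floor_height.value = 4.0    # one upstream edit...
print(total_height.value)   # 40.0 ...propagates downstream
```

A real package adds caching and change notification, but the hierarchy (parameters upstream, geometry downstream) is the same structural rule both animation and parametrics depend on.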



Concurrent with the rise of animation software, the short architectural animation has seen an increase in popularity. This is due to several reasons. Animation shorts have the advantage of showing the design from multiple perspectives, helping users unfamiliar with architectural representation techniques to imagine the finished design. Secondly, the designers themselves are used to working in 3d space, so showing their work only in two dimensions or in the form of a physical model is necessarily a form of reduction. A movie becomes the closest translation of the designer's own experience of working with the 3d model. In large studios, movies are usually outsourced to offices dedicated to a specific new profession: in a reversal of the 'Filmarchitekt', today there is the architectural rendering and animation studio. Instead of architects designing sets for movies, animation studios are making movies to illustrate architectural design.

Figure 4.2. Bruce Baumgart, Rockets made with sweep primitives (1972)

TECHNICAL HISTORY AND FUNDAMENTALS

Meshes have been designed to approximate reality with a minimal amount of computational effort. To this day they are used in real-time applications like gaming or interactive environments. Polygons as a mathematical concept are relatively new: in 1972 the computer engineer Bruce Baumgart published a paper at Stanford on the 'winged edge polyhedron' representation, following his notion that polyhedra provide the proper starting point for building a 'physical world representation'.7 The goal in developing these geometric models was an accurate representation of the world, rather than to function as drawing or design tools like Sutherland's Sketchpad of 1963.8 Polygons were therefore never modeled after a phenomenon that existed in the real world but were meant for representational purposes from the very beginning.

Types of modeling in Maya

Maya has a myriad of features that go way beyond the scope of this book. Here, attention will be paid to the geometrical modeling aspects. Even within the category of 3d modeling we need to differentiate further: Maya supports many file formats, and it allows modeling in NURBS, subdivision surfaces and polygons. NURBS are dealt with in detail in the Rhino chapter and are not Maya's primary feature. Subdivision surface modeling is based on polygon modeling, with tessellation algorithms creating smoother surfaces. Polygon modeling is based on the manipulation of meshes. The smallest unit of a mesh is a vertex, defining a single point in Cartesian space. When two vertices are connected by a straight line they form an edge. Three or more edges that connect to form a surface are called a face. Several faces combined together form an object, or mesh.9 When modeling with polygons one usually starts with a solid object, most often a primitive like a cube or sphere. This type of modeling is also called box modeling. Starting from the primitive, a series of operations such as extrude, chamfer, inserting additional edge loops etc.
are applied to build up a form. 7 Bruce Guenther Baumgart, Geometric Modeling for Computer Vision (Stanford, California: Ph.D. dissertation, Stanford University, 1974). 8 Ibid. 9 Helmut Pottmann, Andreas Asperl, Michael Hofer, Axel Kilian and Daril Bentley, Architectural Geometry (Pennsylvania: Bentley Institute Press, 2007), 276-278.
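The vertex-edge-face hierarchy described above can be sketched as a minimal indexed mesh in Python. This is an illustration of the data structure, not Maya's internal representation; the cube stands in for the usual box-modeling primitive:

```python
# Minimal polygon-mesh sketch following the definitions above: a vertex
# is a point in Cartesian space, an edge joins two vertices, a face is a
# cycle of vertex indices, and a mesh is a set of faces sharing vertices.

# The 8 corners of a unit cube:
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Each face is a cycle of vertex indices; all quads, as box modeling favors.
faces = [
    [0, 1, 3, 2], [4, 6, 7, 5],  # x = 0 and x = 1 sides
    [0, 4, 5, 1], [2, 3, 7, 6],  # y = 0 and y = 1 sides
    [0, 2, 6, 4], [1, 5, 7, 3],  # z = 0 and z = 1 sides
]

def edges(faces):
    """Collect the undirected edges implied by the face cycles."""
    es = set()
    for f in faces:
        for a, b in zip(f, f[1:] + f[:1]):
            es.add(frozenset((a, b)))
    return es

# Euler's formula V - E + F = 2 holds for a closed box:
print(len(vertices), len(edges(faces)), len(faces))  # 8 12 6
```

Extrude, chamfer and edge-loop insertion are all rewrites of these two lists: they add vertices and re-index the face cycles.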



Figure 4.3. Transformation of a low-polygon mesh superimposed with different densities of mesh smoothing



Topology rules

Polygon meshes have to follow certain topology rules. A clean mesh is closed, meaning that all the vertices and edges are joined. Open edges are problematic in meshes because they suggest an impossibility in physical space: the infinitely thin edge. Another rule is that each face has a direction, and all the faces in a mesh should be oriented the same way, or the shading algorithms will not work and the face will appear black or transparent. The third rule concerns the number of edges a polygon should have: quad meshes are designed to work best with four edges per face. Similarly, each vertex should connect a total of four edges.10 Modeling software allows these rules to be broken with many commands, and a beginner will not be aware of the invisible limits between correct operations and rule-breaking. The mistakes only become apparent further down the production pipeline. For example, a mesh with problems such as open edges or double faces might look fine in the viewport but will not render correctly: the shading algorithms do not know how to deal with a physically impossible definition. Similarly, the mesh will encounter problems when going into physical production by means of 3d printing or CNC milling. These invisible rule-sets determine the aesthetic of designs modeled with polygons, as can be evidenced in a number of built examples. 10 Helmut Pottmann, Andreas Asperl, Michael Hofer, Axel Kilian and Daril Bentley, Architectural Geometry (Pennsylvania: Bentley Institute Press, 2007), 276-278.
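The first two rules can be written as mechanical checks on an indexed mesh. A sketch in Python, assuming faces are stored as vertex-index cycles; this illustrates the rules themselves, not Maya's actual cleanup command:

```python
# Checks for mesh topology rules: a "clean" mesh has every edge shared by
# exactly two faces (closed, no open edges), each shared edge traversed in
# opposite directions by its two faces (consistent orientation), and, for
# a quad mesh, four edges per face.

from collections import Counter

def directed_edges(face):
    """The directed boundary edges of one face cycle."""
    return list(zip(face, face[1:] + face[:1]))

def validate(faces):
    directed = [e for f in faces for e in directed_edges(f)]
    undirected = Counter(frozenset(e) for e in directed)
    # Closed: every undirected edge belongs to exactly two faces.
    closed = all(n == 2 for n in undirected.values())
    # Consistent orientation: two faces sharing an edge traverse it in
    # opposite directions, so no directed edge may repeat.
    oriented = closed and len(directed) == len(set(directed))
    all_quads = all(len(f) == 4 for f in faces)
    return closed, oriented, all_quads

cube = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
        [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
print(validate(cube))        # (True, True, True)
print(validate(cube[:-1]))   # one face removed: (False, False, True)
```

Deleting a single face breaks the first two checks at once, which is exactly why such defects surface only downstream, at render time or when 3d printing.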

Figure 4.4. Meshes with different densities



LAUNCH TRAJECTORY

Illustrating the close relationship between the disciplines, one of the central figures in the early development of Maya was an architect: Bill Kovacs had an undergraduate degree in architecture from Carnegie Mellon and a graduate degree in Environmental Design from Yale University.11 While at Yale he started working at the New York office of Skidmore, Owings & Merrill and later transferred to their Chicago office. SOM was one of the first practices to acquire its own computers and develop its own programs. In the early 1970s the Chicago office had to prepare the structural analysis for the Sears Tower and developed a custom program that helped automate some of the tasks.12 When Bill Kovacs joined 11 Tom Sito, Moving Innovation: A History of Computer Animation (Cambridge, Mass.: MIT Press, 2013), 187-189. 12 Kristine K. Fallon, "Early Computer Graphics Developments in the Architecture, Engineering, and Construction Industry," IEEE Annals of the History of Computing 20, no. 2 (April 1998).

Figure 4.5. Bill Kovacs and Douglas Stoker in SOM Chicago's computer room, circa 1977



in the mid 1970s, he was part of the team working on the computer graphics system in the office. In 1978 Kovacs left Chicago for Hollywood to work with the digital production company Robert Abel and Associates, where he worked on the first Tron movie. The movie flopped and the company collapsed in 1984, but Kovacs went on to found his own company, Wavefront Technologies, together with Mark Sylvester and Larry Barels.13 Wavefront developed a number of separate applications to model, paint and animate as the Advanced Visualizer suite. Within ten years the highly successful young company had a market value of $119 million. During that time many young companies were competing for an emerging market, and a similar company called Alias brought out competing products. Both of these companies would eventually be bought by Silicon Graphics and merged into Alias/Wavefront.14 It was on the basis of the combined knowledge of these two companies that Maya was created in 1998. By that time both the special effects and the architecture industry were prepared for a shift towards the digital, and the software quickly became an industry standard, widely distributed and recognized in the design field. In 1997 Bill Kovacs, who had started his journey in the computer labs of SOM, was honored with a Scientific and Engineering Academy Award for his creative leadership on Wavefront's Advanced Visualizer, the software that would later become part of the development of Maya.15 With some background on the origins and technical basis of Maya, the last section will deal with the reception and use of Maya within the architectural profession. No other software has spawned as much theoretical consideration, so the viewpoints discussed here present only a selection of the many voices. The concept of adaptable, infinitely variable form echoed deeply with the generation that first started using computers as a design tool, and Greg Lynn is the 13 Wayne Carlson, A Critical History of Computer Graphics and Animation, Section 14: CGI in the Movies (Columbus, Ohio: Ohio State University, 2007), accessed March 20th 2015, https://design.osu.edu/carlson/history/ID797.html 14 Ibid. 15 Times Staff and Wire Reports, "Bill Kovacs, 56; Shared an Oscar for Work in Computer Animation," Los Angeles Times, June 04, 2006, accessed May 3rd 2015, http://articles.latimes.com/2006/jun/04/local/me-passings4.3



Figure 4.6. Hernan Diaz Alonso, Helsinki library, 2012

Figure 4.7. Zaha Hadid Architects, Dongdaemun Design Plaza, rendering (c. 2007)

Figure 4.8. Zaha Hadid Architects, Dongdaemun Design Plaza (2014)

Figure 4.9. Greg Lynn, Embryological house (1998)



DISCOURSE AND EXAMPLES

most prolific designer to theorize this new interest. In his book Animate Form, published in 1999, he described how architecture was transitioning from a frame for actions to unfold within to a dynamic entity itself, shaped by various factors present in its context.16 Lynn made constant reference to the technical background he was using to conceptualize his ideas. In his book he grounds his argument within animation techniques: "Because splines are vectorial flows through sequences of points they are by definition continuous multiplicities rather than discrete entities."17 Despite this fascination with animation techniques and dynamics, Lynn did not argue for an actually moving architecture, but for topologically complex surfaces shaped by a set of dynamic vectors. The ability to generate surfaces by setting up animation parameters provided a lens for looking at the computer as a generator of form. In a video from 2006, Greg Lynn shows the way he modeled the Embryological House: he created six sliders representing morphed stages of a form and then produced even more variation by mixing the sliders.18 The Embryological House became a canonical example of an architecture following a basic prototype that allows for infinite variation through subtle changes in its form. Lynn's contribution is in emphasizing and demonstrating Maya as a design tool rather than merely a drafting tool. He is very clear on the heritage of the animation techniques he uses, and it is because of this intimate understanding of the software that Lynn could first theorize its effects. The pitfall of this approach is that Lynn was trying to produce rigorous methods and design proposals with an intrinsically inexact tool: in the end, Maya is primarily directed at a virtual reality, not meant to be produced in the real world. It has limited input and output channels for data from the physical world; it is a closed system that works best within its own parameters. Lynn's early ideas of adaptable form generated in Maya are illustrations of an idea that needs to be picked up with different software to become operable in the

16 Greg Lynn, Animate Form (New York: Princeton Architectural Press, 1999). 17 Greg Lynn, Animate Form (New York: Princeton Architectural Press, 1999), 23. 18 Excerpts from a working session with Greg Lynn and CCA curator Howard Shubert, recorded in Lynn's Venice, California, studio on 29-30 October 2007. Canadian Centre for Architecture, accessed May 8th 2015, http://www.cca.qc.ca/en/collection/6-greg-lynn-embryological-house



physical world. Interested in high -tech production methods Greg has found ways of producing unique samples of his design and some have gone into production at the scale of industrial design objects. It is left unanswered however how these new forms and shapes are in dialogue with the world at large without access to technology and sophisticated production methods. This idea of Maya as a tool to play with was taken to the level of a dogmatic position by the current director of the Southern California Institute of Architecture, Hernan Diaz Alonso. He has used Maya extensively in his mostly speculative practice and he has described his design process as follows: "Most of the time I grab the mouse and do something directly in the computer.. My way of working with the software was an embrace. The parametric software that I use is animation software. The parametric software in my office is not Rhino, it is Maya."19 At Sci Arc the relationship between the animation industry and the architects using the same software is reciprocal and there is a great interest and skill level at animation and film-making at the school. In fact, many of its graduates in architecture pursue careers in the special effects houses of Hollywood. These smooth transitions are certainly helped by the same software being used by both industries - Maya becomes a hinging point and connector between two fields. In a way Diaz Alonso is using the software more along the lines of what it does best- explorative, inexact form finding, without the ambition to imbibe it with exact and rigorous data and processes. Accordingly his architecture is also not really meant to be constructed in the physical world. It is a design constructed virtually to be displayed as an synthetic image, exactly what Maya was built for. The third position comes as a two-headed hydra with different yet related roles. 
Zaha Hadid and her business partner Patrik Schumacher lead the practice that has built the most significant examples of completed buildings based on geometry created in Maya, and they have developed an associated theory of Parametricism.

19 Eric Owen Moss, "Interview: Hernan Diaz Alonso & Marcelo Spina," The Architectural Review, 26 April 2013, accessed March 3rd 2015, http://www.architectural-review.com/opinion/interview-hernan-diaz-alonso-and-marcelo-spina/8646930.article



Claiming Parametricism to be the "great new style after Modernism,"20 Schumacher's ideas are nothing short of grandiose: he proposes that a formal vocabulary of fluid forms will pervade the built environment. Social and economic disparities are not part of this discourse; if anything, Schumacher believes that the social good is not a primary responsibility of architects.21 Equally, the rights of workers on construction sites are not the concern of the architect, according to Zaha Hadid.22 Not only is this focus on technique and formal expression unworkable for any kind of construction beside the high-end developments the office is engaged in, claiming such an absurd level of importance will eventually turn against formalism itself. The office of Zaha Hadid relies heavily on Maya as a sketching tool in the design phase. Schumacher said in an interview conducted by Autodesk: "We have this fluidity of corners in architecture, that was basically made possible by Maya."23 In the same series, an associate at the office described the use of Maya as follows: "It is a medium, it is not a tool for us. Playing with Maya gives you ideas."24 To speculate on the biases of the software it helps to analyze a recently completed project with regard to the modeling rules illustrated previously.
The Dongdaemun Design Plaza, designed in 2007, is entirely absorbed into one continuous and smooth surface; from the outside there is no distinction between floor, wall and ceiling.25 The large-scale shopping mall does not have any windows; the lighting is artificial, and even on the interior everything is fused into smooth and monolithic shapes. Described in modeling language, the shape is a continuous subdivision mesh over a low-poly frame. The panels of the facade are all equivalent to the subdivided polygon pattern. The cycle has come full circle: a mathematical definition designed to describe real life has become a formula figuring in physical space. The algorithms used by architects to simulate reality are now manifested by means of construction. In his recently published article on software monocultures, Mark Foster Gage noted the formation of clusters of architects who use specific softwares and discover certain techniques that become signature styles of their offices.26 Gage laments that "signatures are no longer determined by these historical architectural variables and instead, are now being largely governed by software selections and individual technical discoveries."27 His own practice is supported by Autodesk and many of his projects carry a clear Maya signature. In contrast to this position, I think software signatures are increasingly defined through custom workflows as part of a design process that switches fluidly between applications: rather than letting one rule-set define every aspect of a building, the designer regains agency by understanding the structure of the tools she is using. More than smooth forms, Maya brought something unprecedented into architectural discourse: the possibility to animate form. The idea of parametric, adaptable form became palpable for designers for the first time, and it spawned a still ongoing discourse on mutability in architecture. Modeling in Maya allows the designer to operate with different geometric principles, yet it is primarily a mesh modeling application, with strong biases towards certain topologies.

20 Patrik Schumacher, Parametricism as Style - Parametricist Manifesto, presented and discussed at the Dark Side Club, 11th Architecture Biennale, Venice 2008, accessed April 27th, http://www.patrikschumacher.com/
21 "Architecture is not Art," says Patrik Schumacher in Biennale rant, Dezeen, 18 March 2014, accessed May 7th 2015, http://www.dezeen.com/2014/03/18/architecture-not-art-patrik-schumacher-venice-architecturebiennale-rant/
22 Alissa Walker, "Architect Zaha Hadid Says 500 Worker Deaths Are Not Her Problem," Gizmodo, 26 February 2014, accessed March 5th, http://gizmodo.com/architect-zaha-hadid-says-500-worker-deaths-are-not-her-1531765457
23 Autodesk Customer Showcase: Zaha Hadid Architects, Patrik Schumacher and his team use Autodesk Maya software as a conceptual sculpting tool, accessed May 4th 2015, http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=13462298&linkID=13454855
24 Ibid.
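The relation between the coarse frame and the dense panel pattern can be sketched in a few lines of code. This is an illustrative reduction, not Maya's actual implementation: Maya's smooth meshes use Catmull-Clark subdivision, which additionally averages vertex positions, but the topological rule, every quad splitting into four at each level, is the same, and it is this rule that fixes the facade's panel layout. All function names here are hypothetical.

```python
# Illustrative sketch: one step of simple midpoint subdivision of a
# quad mesh. Catmull-Clark (used by Maya) also smooths vertex positions;
# topologically, both split every quad into four at each level.

def midpoint(p, q):
    """Average two points coordinate-wise."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def subdivide_quad(quad):
    """Split one quad (four corner points) into four smaller quads."""
    a, b, c, d = quad
    ab, bc, cd, da = (midpoint(a, b), midpoint(b, c),
                      midpoint(c, d), midpoint(d, a))
    center = midpoint(ab, cd)
    return [(a, ab, center, da), (ab, b, bc, center),
            (center, bc, c, cd), (da, center, cd, d)]

def subdivide_mesh(faces, levels=1):
    """Apply midpoint subdivision repeatedly to a list of quads."""
    for _ in range(levels):
        faces = [f for quad in faces for f in subdivide_quad(quad)]
    return faces

# A single face of the coarse "low-poly frame"...
frame = [((0, 0), (1, 0), (1, 1), (0, 1))]
# ...generates the panel pattern: each level quadruples the face count.
print(len(subdivide_mesh(frame, 1)))  # 4
print(len(subdivide_mesh(frame, 3)))  # 64
```

The low-poly frame thus acts as the design object, while the panelization follows deterministically from the subdivision rule.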
Polygon modeling was invented as a mathematical approximation, a way to simulate reality with limited access to computational speed. With architects using Maya in their designs, it has transitioned from a software used to simulate reality to an instrumental tool in the production of reality. Put another way, when algorithms designed to represent reality turn into design tools, they shape the appearance of the built environment.

25 Zaha Hadid Architects, Dongdaemun Design Plaza, accessed May 7th 2015, http://www.zaha-hadid.com/architecture/dongdaemun-design-park-plaza/
26 Mark Foster Gage, "Software Monocultures," in Composites, Surfaces, and Software: High Performance Architecture, ed. Greg Lynn and Mark Foster Gage (New Haven, Conn.: Yale School of Architecture, 2010)
27 Ibid.
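The trade-off behind polygon modeling, accuracy bought with polygon count, can be made concrete with a short calculation (an illustrative sketch, independent of any particular software; the function name is hypothetical):

```python
import math

# A circle of radius 1 approximated by an inscribed regular n-gon,
# the basic move of polygon modeling. More polygons buy a smaller
# deviation from the "real" curve, at the cost of more computation.

def max_deviation(n):
    """Largest gap between the unit circle and its inscribed n-gon.

    The gap is widest at each edge midpoint, which sits at distance
    cos(pi/n) from the center, so the deviation is 1 - cos(pi/n).
    """
    return 1 - math.cos(math.pi / n)

# Increasing the polygon count rapidly shrinks the visible error.
for n in (6, 24, 96):
    print(f"{n:3d} sides: deviation {max_deviation(n):.5f}")
```

On a slow machine the modeler keeps n low and accepts the faceting; with cheap computation the approximation becomes visually indistinguishable from the curve it stands in for.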

Figure 4.10. Still from mesh animation


Figure 4.11. Timeline of Maya, 1980-2015: Wavefront founded by Bill Kovacs, Larry Barels and Mark Sylvester; Alias Research founded by Bingham, McGrath, McKenna and Springer; ILM; Adobe (Photoshop); Autodesk (AutoCad, Maya); McNeel (Rhinoceros, Grasshopper).

BIBLIOGRAPHY

Baumgart, Bruce Guenther. Geometric Modeling for Computer Vision. Ph.D. dissertation, Stanford University, Stanford, California, 1974.
Fallon, Kristine K. "Early Computer Graphics Developments in the Architecture, Engineering, and Construction Industry." IEEE Annals of the History of Computing 20, no. 2 (April 1998).
Gage, Mark Foster. "Software Monocultures." In Composites, Surfaces, and Software: High Performance Architecture, edited by Greg Lynn and Mark Foster Gage. New Haven, Conn.: Yale School of Architecture, 2010.
Lynn, Greg. Animate Form. New York: Princeton Architectural Press, 1999.
Rickitt, Richard. Special Effects: The History and Technique. London: Aurum, 2006.
Sito, Tom. Moving Innovation: A History of Computer Animation. Cambridge, Mass.: MIT Press, 2013.



LIST OF ILLUSTRATIONS

Figure 4.1. Movie still from The Abyss, created with Alias software (1989)
Figure 4.2. Bruce Baumgart, rockets made with sweep primitives (1972)
Figure 4.3. Transformation of a low-polygon mesh superimposed with different densities of mesh smoothing
Figure 4.4. Meshes with different densities
Figure 4.5. Bill Kovacs and Douglas Stoker in SOM Chicago's computer room, circa 1977
Figure 4.6. Hernan Diaz Alonso, Helsinki library, 2012
Figure 4.7. Zaha Hadid Architects, Dongdaemun Plaza, rendering (c. 2007)
Figure 4.8. Zaha Hadid Architects, Dongdaemun Design Plaza (2014)
Figure 4.9. Greg Lynn, Embryological House (1998)
Figure 4.10. Still from mesh animation
Figure 4.11. Timeline of Maya


Conclusion

Timeline of the software discussed, 1980-2015: Autodesk, founded by 14 members including Mike Riddle and John Walker (AutoCad, 3dsMax; Carol Bartz CEO until 2006; acquisition of Revit in 2002; acquisition of Maya); McNeel, which started developing Rhino as a plug-in to AutoCad (Rhinoceros, Grasshopper, Kangaroo, GhPython); Wavefront, founded by Bill Kovacs, Larry Barels and Mark Sylvester; Alias Research, founded by Bingham, McGrath, McKenna and Springer (Maya); ILM; Adobe, founded by John Warnock and Charles Geschke (Photoshop, first developed by John and Thomas Knoll; Illustrator, InDesign, PDF, Acrobat Reader).

CONCLUDING REMARKS


The separation of the software into independent chapters allows each to be examined on its own, even though there are clearly many cross-connections. Each of the discussed software packages is likely to be used in an architecture office today, in complex workflows with many other applications and plug-ins. In the following I would like to mention a few cross-connections between software I came across while researching the topic, as well as discuss changes in distribution and retail strategies in the software market. While Maya was launched nearly a decade later than Photoshop, their development started around the same time. They share roots in the animation industry, specifically in one studio: Industrial Light and Magic. The very place where John Knoll, one of the two creators of Photoshop, worked would become crucial for the development of digital compositing in animation software, later used in Maya.1 In the 90s they became the primary tools employed by the first digital avant-garde. They enabled a level of hybridization and photorealistic composition previously unseen. AutoCad and Rhino also share many properties, especially evident in their interface design. In this case, one has branched off the other to become its own program. Allowing the development of various plug-ins has proven beneficial for both. Their relative openness and determination to become accessible and widely used has made AutoCad an industry standard and Rhino one of the most versatile programs in the design field. Because of their ubiquity the two programs are also the most resistant to change: they need to stay compatible with prior versions and keep the same structural logic to retain the broad user base that has gotten used to their way of operating. This makes every change and addition extremely slow and cumbersome.
Through the use of plug-ins like Grasshopper for Rhino, the programs can extend and combine their reliability and resistance to change with cutting-edge tools. Another practical aspect of the software is how these tools are acquired by students, designers and offices. In general, pirated versions of each software circulate on the web, which comes with some disadvantages: it is usually not the newest version, and it might include viruses and be buggy. Software companies have realized the widespread use of pirating, and sometimes they even passively support it, as in the case of Rhino. In the past few years, companies have started to embrace different pricing and marketing models to appeal to groups other than the large professional offices able to pay thousands of dollars for a single seat. A new tendency towards subscription models, embraced both by Autodesk and Adobe, allows users to rent the software for moderate monthly prices. Autodesk additionally offers student licenses completely free of charge.2 This is a major strategic shift, moving away from lawsuits over illegal use of the software, which used to be a major problem for the company (in 2004 Autodesk had made total recoveries of more than $60 million from lawsuits against pirated copies).3 The new strategy follows a larger trend: build a broad user base and over time convince people to move to a subscription-based model once they become regular users. I would like to turn to the question of how software is going to develop in the near future in relation to the architectural discipline. As the first generation of digital natives grows up, how are they going to use and teach the tools they have grown up with? One trend seems to be the atomization of software: while it used to be desirable to pack as much functionality as possible into one software package, newer applications tend to be lean. Despite faster computing speeds, which would enable increasingly complex software, most software will probably be simplified in its appearance and structure, for the sake of stability and easier learning. Software applications will migrate increasingly online, for several reasons. All data can be constantly backed up on global servers. Furthermore, companies can collect data on their users' behavior, either to benevolently improve performance, or for other purposes, like identifying pirated software. Most importantly, access to the internet allows global data to be integrated into software itself, in the form of plug-ins or live input from websites, or through interactive data produced by sensors.

1 Tom Sito, Moving Innovation: A History of Computer Animation (Cambridge, Mass.: MIT Press, 2013)
2 Autodesk software for students and educators, accessed May 2nd 2015, http://www.autodesk.com/education/free-software/all
3 Autodesk Reaches Settlements in Lawsuits Against Two Companies to Recoup More Than $260,000 in Software Piracy Losses, accessed May 2nd 2015, http://www.prnewswire.com/news-releases/autodesk-reaches-settlements-in-lawsuits-against-two-companies-to-recoup-morethan-260000-in-software-piracy-losses-73362547.html



Many more programs could be included here and overlaps between software analyzed, and I am planning to expand this research in the future. My research shows that software bias goes deeper than geometric bias alone. Different generations have a changing awareness of certain tools and make different use of the same tool. Software can be used intuitively, like a hammer, or thought of as the Latourian key, as a mediator and agent implicated in design in multifarious ways. It is the full spectrum of thinking of software as a multilayered assembly of geometric principles, social relations and associated discourses that helps uncover the complex ways software is implicated in the design process today.





GENERAL BIBLIOGRAPHY
For chapter-specific bibliographies please refer to individual chapters.

Abel, Chris. Architecture, Technology, and Process. Boston: Elsevier, Architectural Press, 2004.
Carpo, Mario. Perspective, Projections and Design: Technologies of Architectural Representation. London; New York: Routledge, 2008.
Carpo, Mario. The Alphabet and the Algorithm. Cambridge, Mass.: MIT Press, 2011.
Hansen, Mark. New Philosophy for New Media. Cambridge, Mass.: MIT Press, 2004.
Kandinsky, Wassily. Point and Line to Plane. New York: Dover Publications, 1979.
Klee, Paul. Pedagogical Sketchbook. New York: F. A. Praeger, 1953.
Latour, Bruno, and Peter Weibel, editors. Making Things Public: Atmospheres of Democracy. Cambridge, Mass.: MIT Press, 2005.
Manovich, Lev. Software Takes Command: Extending the Language of New Media. New York; London: Bloomsbury, 2013.
Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.
McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge, Mass.: MIT Press, reprint 1994.
Negroponte, Nicholas. Soft Architecture Machines. Cambridge, Mass.: MIT Press, 1975.
Negroponte, Nicholas. The Architecture Machine: Toward a More Human Environment. Cambridge, Mass.: MIT Press, 1970.
Scolari, Massimo. Oblique Drawing: A History of Anti-Perspective. Cambridge, Mass.: MIT Press, 2012.
Serraino, Pierluigi. History of Form*Z. Basel: Birkhäuser, 2002.
Siegert, Bernhard. Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real. New York: Fordham University Press, 2015.
Spiller, Neil, editor. Drawing Architecture (AD). Hoboken, NJ: Wiley, 2013.
Parikka, Jussi, and Erkki Huhtamo, editors. Media Archaeology: Approaches, Applications, and Implications. Berkeley, Calif.: University of California Press, 2011.
Pérez-Gómez, Alberto, and Louise Pelletier. Architectural Representation and the Perspective Hinge. Cambridge, Mass.: MIT Press, 1997.
Pottmann, Helmut, Andreas Asperl, Michael Hofer, Axel Kilian, and Daril Bentley. Architectural Geometry. Pennsylvania: Bentley Institute Press, 2007.
Witt, Andrew. "A Machine Epistemology in Architecture: Encapsulated Knowledge and the Instrumentation of Design." Candide: Journal for Architectural Knowledge, no. 3 (December 2010).
Zielinski, Siegfried. Deep Time of the Media: Toward an Archaeology of Hearing and Seeing by Technical Means. Cambridge, Mass.: MIT Press, 2006.


Check bikaa.net for updates on Tools for Things.

