Digital Play
Scott Penman
An undergraduate research thesis under the guidance of Professor Jassen Callender
Contents

Introduction
Abstract 7
Background 9
Need for Theory 10
Acknowledgements 11

Foundation
Resolve 15
Model 17
Play 19

Synthesis
Design Play 29
Digital Resolve 31
Digital Play 33
Abstract

Playfulness is the willingness to experiment. Play allows us to move non-linearly through the field of our ideas, fearless to test and discard different possibilities. This practice is incorporated as an integral part of both design education and the design process. For instance, the modeling process inherent in design is based on a critical flow of intent and effect, both from designer to model and in return upon the designer from the model. The model provides the designer with the mental and physical distance necessary for reflection and critique. Development of the idea occurs dynamically, driven by a balance of technical information, curiosity, and intuition, leaving the designer free to experiment within a delicate and ultimately subjective foreground-background field.

The digital environment offers a set of challenges to this notion. Computers are ubiquitous in design, having secured the role of necessary tool in visualization, transmittal, and fabrication. The power of the computer is in its ability to solve problems efficiently: it is binary, algorithmic, and linear. Value and intent are pre-programmed into the computer environment, to be compared against results in a pattern-matching process that is often complex enough to appear “intelligent.” At its core, however, these processes are ultimately reducible to the linear string of bits and governing rules that give them form. In the realm of the computer, grey area does not exist.

This non-playful approach is reflected in computer modeling environments. Where an analog model provides physical resistance, the digital one exhibits almost no inherent characteristics or haptic feedback – traits that are paramount in encouraging the designer to explore options and question perceptions.
Additionally, as with the majority of technology, the digital model encourages a “newer is better” type of linear thinking that is antithetical to the design process, as it associates development with optimization and places undue emphasis on end products. Current digital modeling environments prematurely assign value, relying on pre-programmed criteria and imposing structures that discourage associative thinking. The computer, as an incredibly powerful scientific tool, affords the designer the opportunity to offload much of the raw, linear work that occurs during design. Computers could theoretically encourage playfulness, as they are easily able to store iterations and powerfully augment the designer’s abilities. The interface, as the medium between man and model, exerts perhaps the greatest effect on his or her ability to be playful. This paper analyzes just what it means to be playful and seeks to identify the salient characteristics that will be useful in future development of the interface.
1. see “Appendix: Unoptimization,” p. 41.
Background
While my undergraduate training in architecture has immersed me in the world of design, my long-running obsession with technology both precedes and often supersedes it. Rather than separate the two fields, however, I find myself most interested in their overlap: human-computer interaction within architectural design. This growing bent originally led to investigations into gestural modeling, but very soon into this study I realized that my proposals lacked a thorough knowledge of the concepts I was addressing. I felt the need for an underlying theory to accompany my technical applications. In pursuit of such a foundation, I began a broad spectrum of research into a variety of fields, ranging from the historical development of technology to the growing complexity of modern society.1 While not necessarily scientific in nature, these studies helped give form to my approach, acting as a position statement from which I could relate my further investigations.

The current implementation and use of computers in the design environment are not entirely benign. The computer, mistaken for an infinite virtual space and consequently held parallel with imagination, falsely offers autonomous creative potential. It is entrusted with critical design decisions, causing a subsequent misappropriation of responsibility in the design process. The designer, meanwhile, is devalued, unable to match the high-speed calculation procedures the computer can perform. The computer has caused the designer to subscribe to a machine ideal, trading skill for speed and glorifying quantity just as much as, if not more than, quality. Finally, the process of design itself is reduced to a series of logical decisions about global optima, supplanting subjectivity with autonomous rationalism. Creativity, however, is not a process that can be bound by time constraints, and as such it does not operate on the same temporal scale as optimized computing. Faster is not always better.
Even though we may be able to extract twenty results from our computerized process where once we could only glean two or three, the fact remains that the reduction of design to a series of accelerated, objective questions and answers undermines the very human, subjective nature that bestows it with quality in the first place. Speed has become the focus of digital technology and is an increasing influence on the architectural design process. Creativity, even as it exists in a real, scheduled construction process, should not be overly constrained by such a focus. Additionally, our current attempts to utilize computers have inadvertently constructed barriers between designer and tool, forcing us to be “locked in” to certain methods of reduced communication and limited understanding. The computer offers us a powerful computational tool: to use it best, we must erase the boundaries, interfaces, and translators that slow and skew the transfer of information and intent. The pipeline of computer processing should be immersed in the network of creative design, preventing it from producing singular results that we mistakenly perceive as “final” and preventing us from relinquishing our roles as designers to limited algorithmic processes.
Need for Theory

Few things in history have developed as rapidly as digital technology. In less than a century, computers have gone from theoretical concept to near-ubiquity, exponentially shrinking in size while increasing in capability. The simultaneous advents of the internet and personal computing have connected the world in a way never before imaginable. Early thinkers in this field projected wild images of the future of a computer-driven society, some of them hitting much closer to the mark than others. Among these are two competing views of the relationship between man and machine that came out of the research of the 1960s. The first, Marvin Minsky’s “Artificial Intelligence,” has become the dominant trend of thought, encouraging us to consider the computer as synonymous with the brain (and perhaps even capable of one day surpassing it). A.I. has permeated our culture and become the driving force for much of the computer’s current scientific research.

Much less discussed was the view voiced by J.C.R. Licklider. In a paper published in 1960 entitled “Man-Computer Symbiosis,” Licklider presented the idea of “Intelligence Augmentation.” His view portrayed computers not as competing with humans but rather paired with them. In his own words, “Men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.”2 Licklider foresaw that creative thinking and mathematically based processing would proceed side by side, and that pure rationality alone would never suffice. While Licklider’s vision has started to gain applicable interest in recent years, A.I. still dominates current thought.3 Improvements in digital technology aim for faster and more capable processors, struggling to realize digital creativity instead of recognizing the important distinction between man and machine.
With the rate of technological development so blisteringly fast, however, theoretical differences such as these can have massive effects. There is a need for better theoretical clarity amidst the glitz and glamour of new technology. My investigation is grounded in the work of others: theories that are largely unrelated to, and often antecedent to, the digital world. They are concerned instead with some of the basic concepts that underlie our cognitive processes and interactions. Additionally, as this subject pulls from the overlap of several different fields, a clear understanding
2. Licklider, J. C. R., “Man-Computer Symbiosis,” IRE Transactions on Human Factors in Electronics HFE-1.1 (1960). 3. Sankar, Shyam, “The Rise of Human-Computer Cooperation,” TEDGlobal 2012, Sept. 2012.
4. Vesely, Dalibor, Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production, (Cambridge, MA: MIT, 2004), 282.
of terminology is needed in order to defend what may be seen as “less scientific” concepts. In Architecture in the Age of Divided Representation, Dalibor Vesely speaks of the schism between traditional and digital in ways that mimic this strict separation of poetry and pragmatism: “The theoretical basis of technology reveals the radical discontinuity between the modern and traditional way of making. It distances itself from practical knowledge, spontaneous creativity, and skill in a process dominated by new goals of economy, efficiency, and perfection of performance...”4 It is my intention to aid in the bridging of this divide.
Acknowledgements

This work could not have taken place without the guiding force and helpful input of my faculty committee. My discussions with them have been both critical and compelling, and I am thankful that they agreed to aid me throughout the semester. While my thoughts on this work have been affected and shaped by dozens of individuals, I would like to thank the committee in particular:
Jassen Callender David Lewis Andrew Tripp Rachel McCann John Poros Frances Hsu Justin Taylor
I would also like to thank Lawson Newman, Greg Watson, and Stacy Callender for their input during the semester. This is a preliminary investigation into a broad topic. The search does not end here, and I am excited about the potential it has for further inquiry. The computer has the potential to be a powerful and fascinating tool in the realm of architecture and beyond, and I am eager to uncover the opportunities it provides. I look forward to forthcoming developments in human-computer interaction, and I hope one day to be able to test my theories and contribute to this field. I am thankful for the opportunity to conduct this foundational research, and I hope that the results of this work can one day be as useful to others as they have been to me.
images of a simple branching algorithm (courtesy of Yannis Chatzikonstantinou)
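The images above were produced by a simple branching algorithm. To illustrate how deterministic such a generative rule is (identical inputs always yield an identical form, with no room for the grey area discussed in this thesis), here is a minimal sketch in Python. The structure, parameter values, and function names are illustrative assumptions of mine, not drawn from Chatzikonstantinou's work.

```python
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively generate line segments for a simple two-way branching tree."""
    if depth == 0:
        return
    # End point of the current branch.
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Each branch spawns two shorter children at fixed angular offsets.
    # The rule is fully deterministic: the same inputs always produce the same tree.
    branch(x2, y2, angle - 0.4, length * 0.7, depth - 1, segments)
    branch(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 100.0, 5, segments)
# A depth-5 run produces 2^5 - 1 = 31 segments.
```

Every choice here (branching angle, shrink factor, recursion depth) is fixed in advance; the algorithm can enumerate outcomes, but it cannot improvise.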
1. Teal, Randall, “Developing a (Non-linear) Practice of Design Thinking,” International Journal of Art & Design Education 29.3 (2010): 295. 2. Ibid., 296. 3. Bruner, Jerome S, On Knowing; Essays for the Left Hand, (Cambridge: Belknap of Harvard UP, 1962), 89-91. 4. Teal, “Developing,” 295.
5. Bruner, On Knowing, 89-90.
6. Teal, “Developing,” 296.
Resolve is the lack of play. Rather than characterize it as an absence of something, however, I shall endeavor to outline its inherent characteristics. Some of these will be mirrored in the definition of play; others are specific to resolve itself. While much of this interpretation is based on an article by architecture professor Randall Teal, this definition (along with the word’s variations: resolution, resolute, etc.) is intended to serve only the forthcoming conclusions.

Resolve is an approach that focuses on product and outcome. Rational in nature, it tends to manifest in a linear approach that echoes the scientific method underlying much of today’s primary and secondary education. Such a mindset reinforces the pursuit of the “intentional, logical, and measurable.”1 This causal process encourages forward movement and progressive stepping, typically organized in a broad-to-specific methodology that Teal describes as “tree-like.”2 Resolve pursues a known, or at least expected, goal, founded in intention and relying upon result in order to evaluate success and assign value.

A resolute mindset is also outer-directed, to borrow a term from cognitive psychologist Jerome Bruner.3 Students trained in scientific methods see education as “a passive absorption of facts.”4 From this is born a lack of real involvement and control, causing students to view failure – a necessary step in the development of design skills – as a negative consequence, or even punishment. This model of learning “runs counter to too many important phenomena of learning and development.”5 It is this avoidance, and even fear, of failure that contributes to the resolute view of work as a non-exploratory search for a successful outcome.
“Only seeing design from the perspective of a tree suggests that when things do not go as planned one has ‘failed’ and one must then ‘start over’...linear models assure that correct intentions lead to correct form.”6

In many ways, this resolute approach is echoed in the algorithmic and binary processes underlying computer technology. Computers rely upon an algorithmic understanding that is pass/fail. Criteria for outcome are pre-programmed as a series of binary patterns. The massive size of these patterns allows for a multitude of outcomes and options; inherently, however, information is bound to strict, externally decided categories. The attempt to codify language provides a good example of this application. While the robust and flexible nature of digital information allows for an almost limitless variety of input and storage, the resolution of the computer displays a rationale that often fails to incorporate (or “understand,” to reference the famous communication-based Turing test of machine intelligence) the nuances and embedded value inherent to human communication. Spanish sociologist Manuel Castells writes: “It is precisely because of the diversification, multimodality, and versatility of the new communication system that it is able to embrace and integrate all forms of expression...But the price
7. Castells, Manuel, The Rise of the Network Society, (Malden, MA: Blackwell, 1996), 405. 8. Teal, “Developing,” 295.
to pay for inclusion in the system is to adapt to its logic, to its language, to its points of entry, to its encoding and decoding.”7 Speaking again of design students, Teal references “mechanistic reductivism,” stating that such an approach is unable to utilize “a-rational knowledge, like the embodied, intuitive and emotional.”8 If there is a prime example of resolute behavior, it is the machine, and if there is a pinnacle of machine technology, it is the computer.
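The pass/fail evaluation described above can be made concrete with a small sketch. This is an illustrative example only; the criteria, names, and values are hypothetical, chosen simply to show the binary character of such judgments: an option either satisfies every pre-programmed criterion or it fails outright, with no partial credit and no grey area.

```python
# Outcomes are judged against pre-programmed, externally decided criteria.
# (These criteria and values are hypothetical, for illustration only.)
CRITERIA = {"max_cost": 100, "min_area": 50}

def passes(option):
    """Return True only if the option satisfies every criterion; otherwise False."""
    return (option["cost"] <= CRITERIA["max_cost"]
            and option["area"] >= CRITERIA["min_area"])

options = [
    {"cost": 90, "area": 60},  # meets both criteria
    {"cost": 90, "area": 45},  # area falls short: fails outright, no partial credit
]
results = [passes(o) for o in options]  # [True, False]
```

However elaborate the criteria become, the verdict remains a string of such binary comparisons; nuance that cannot be encoded as a threshold simply does not register.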
9. Black, Max, “Models and Archetypes,” Models and Metaphors; Studies in Language and Philosophy, (Ithaca, NY: Cornell UP, 1962), 224. 10. Ibid., 221.
11. McCullough, Malcolm, Abstracting Craft: The Practiced Digital Hand, (Cambridge, MA: MIT, 1996), 165.
12. Black, “Models,” 230, 232. 13. Ibid., 237. 14. Ibid., 233.
Models have a wide variety of applications and definitions, both within architectural design and outside of it. For the purposes of this research, models will be described both in terms of what they are like and what they do. Many of these concepts pull from the definitions outlined by the philosopher Max Black in his writings from the 1960s. Black describes several different types of models, occasionally listing the characteristics that connect the various types. Amongst these, he specifies that models are usually simpler and more abstract than that which they are representing.9 As representations, they are similar to the real thing either in terms of characteristics or relationships between parts. In order to serve as teaching tools, however, they are often slightly skewed, constructed to overemphasize certain attributes while minimizing others.10 This, of course, is not meant to be deceiving but rather illuminating (R. B. Braithwaite writes that “the price of the employment of models is eternal vigilance”).

Speaking specifically of physical models, architecture professor Malcolm McCullough describes the difference between artifacts and documents: “If a document depends more on a context for its meaning, an artifact may have obvious or intrinsic function or significance. [An artifact] is a unique accumulation of responses to material imperfections. It is a material record of tool usage. Something about its authorship or origins is inescapably bound up in it....Objects have a grain, as if they might as well have cracks and wrinkles.”11 Models, then, embody both externally imbued and inherent characteristics. They are both reference and identity, and the qualities that they present are just as important as the qualities that they are able to represent. Models help us understand something hard to grasp by relating it to something better known.
We draw inferences about one field of unfamiliar entities based on an understanding of a second field of familiar entities.12 Models help us to “see new connections” and details.13 In fact, “a promising model is one with implications rich enough to suggest novel hypotheses and speculations.”14 As mentioned previously, models either represent characteristics or relationships,
which undergo a translation from the model to the thing being modeled. They are designed to reproduce something, rather than replace it; “Analogue models furnish plausible hypotheses, not proofs.”15 As representations, models serve a critical role in the design process. In describing the complexity of this process and the difficulty in approaching such complexity, especially in light of today’s means-end mindsets, Teal borrows the term “rhizome” from the French philosopher Gilles Deleuze. “Thinking rhizomatically does not define a problem so that one can address it instrumentally; rather, one makes things to understand problems.”16 While this approach refers to design as a whole, McCullough describes a similar concept that ties it back to the process of modeling. Tools, he writes, help us inhabit a task, serving as a focus for the task at hand.17 “Thus if we could say that improvisation is a manner of inhabiting design worlds, we must note that those worlds are populated by evolving objects. You try using these elements and operators, and if you like them, you keep the results, and the object of your work evolves. You develop a style based on emphasizing particular processes, but you also try processes in response to the state of the artifact.”18 Echoing this back-and-forth process, Teal states that “acting rhizomatically says that knowledge of the problem arises by inhabiting an emerging solution, and as solutions are rendered, the solution gains clarity; simultaneously the solution evolves.”19 The model in the design process is not only useful: It is necessary. It serves as a form of container, allowing us to store, navigate, and reflect upon versions of our own thoughts. Models allow us to see things previously unseen, crystallizing our ideas into static representations and affording us the space necessary to be reflective and critical. Models aid our thinking by providing both intentional and unintentional results. 
The “grain” of a model is especially critical to the design process: it is this oppositional force that allows the designer to give form to and challenge her ideas. This definition stays true regardless of model type. Outlining the connection between model and metaphor, Black describes a scenario all too similar to today’s digital models: “Use of a particular model may amount to nothing more than a strained and artificial description of a domain sufficiently known otherwise. But it may also help us to notice what otherwise would be overlooked, to shift the relative emphasis attached to details - in short, to see new connections.”20
15. Black, “Models,” 223.
16. Teal, “Developing,” 301.
17. McCullough, Abstracting Craft, 61.
18. Ibid., 228-229.
19. Teal, “Developing,” 301.
20. Black, “Models,” 237.
21. Gadamer, Hans-Georg, Truth and Method, Trans. Joel Weinsheimer and Donald G. Marshall, (2nd ed. London: Continuum, 2004), 102-103. 22. Ibid., 104. 23. Ibid.
24. Ibid., 107.
25. Ibid., 105.
26. Vygotsky, L. S., Mind in Society: The Development of Higher Psychological Processes, Ed. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman, (Cambridge: Harvard UP, 1978), 93.
Although play is widely regarded as necessary to design education, it is a term that inherently resists strict scientific definition. The purpose of this clarification is not to provide an all-encompassing definition of play (and playfulness, etc.) but rather to identify key characteristics that can later be used for evaluative testing.

The philosopher Hans-Georg Gadamer writes extensively on the topic of play in his seminal work, Truth and Method. In it, he speaks of play’s immersive nature, going so far as to describe how play does not even require a player at all; the participant is secondary to the real subject, which is the play itself. “Play is not serious…play itself contains its own, even sacred seriousness. Yet, in playing, all those purposive relations that determine active and caring existence have not simply disappeared, but are curiously suspended…Play fulfills its purpose only if the player loses himself in play.”21

Gadamer also writes of play’s inherent aimlessness, noting especially the “to-and-fro movement that is not tied to any goal that would bring it to an end” and the “curious indecisiveness of the playing consciousness.”22 If play is aimless, it is far from a simple means-end solution; “rather, it renews itself in constant repetition.”23

Thirdly, Gadamer writes of the ever-present playing field. Whether in a simple game or a sacred ritual, the act of playing involves a set of rules that delimits the player’s field. Every act of play incorporates some notion of regulation or resistance, against which the player can test himself. “Thus the child gives itself a task in playing with a ball, and such tasks are playful ones because the purpose of the game is not really solving the task, but ordering and shaping the movement of the game itself.”24 Finally, Gadamer sums up many of these characteristics in a description of the relaxation that is evident in the player.
He notes that relaxation does not imply the absence of effort but merely the absence of strain. “The structure of play absorbs the player into itself, and thus frees him from the burden of taking the initiative…This is also seen in the spontaneous tendency to repetition that emerges in the player and in the constant self-renewal of play, which affects its forms.”25 Freed from exterior pressure and immersed in the task at hand, the player is allowed to explore options at will, consistently testing various ideas and continually choosing to explore further.

Often, play is discussed with reference to children. Educational and behavioral psychologists have provided fascinating insights into mental, social, and emotional development by studying the playfulness of the young. Much of this work stems from the Russian psychologist Lev Vygotsky, whose research revolved around the emergence of reason in children. In Mind in Society, Vygotsky defines play as an imaginary situation created by the child in order to satisfy unrealizable tendencies.26 Imagination thus holds a key role in the child’s development, as “creating an imaginary situation can be regarded as a
means of developing abstract thought.”27 It is not only the creative ability that is important to this, however. Vygotsky, as echoed by Gadamer, notes the “seriousness” with which children choose to establish and follow the rules of their imaginary situations. Play is thus transitional, as it is in play that children begin to act according to ideas (internal rules) and not things (external incentives).28 This subordination to manufactured regulation provides the greatest pleasure in play, and it is here that a child’s willpower is cultivated. At the same time, the self-directed nature of play is paramount to a child’s ability to learn and memorize. Jerome Bruner, whose psychological research of the 1960s focused on education, writes that “the persistence of the organized child stems from his knowledge of how to organize questions in cycles and how to summarize things to himself.”29 Organizing complex information according to one’s own cognitive processes makes that information easier to retrieve. Citing the research of Robert White, Bruner notes that this “competence-oriented learning” minimizes outside reward-based behavior and leads to what he calls “inner-directedness.”30 Acting according to one’s own internal motivation thus not only serves to teach children how to enjoy “playing the field” but also helps them develop a method of learning. As Vygotsky discusses, development lags behind the initial incidence of learning, only occurring through practice.31 Having learned how to internalize, understand, and retrieve information, children are able to “play” with it in their everyday lives, thus further developing their own understanding of the world. Play teaches children repetition and modulation, as well as how to develop complexity.32 This method of practicing, according to Bruner, bestows the child with a generalized problem-solving ability.33 As an inner-directed activity, play operates best when not exposed to external pressures.
Bruner writes that “much of the problem in leading a child to effective cognitive activity is to free him from the immediate control of environmental rewards and punishments.”34 Play, in other words, thrives when risk is not an issue. In discussing Vygotsky’s research, Bruner notes that a key point in a child’s development is the moment at which she is able to internalize information. Once this stage is reached, the child can process success and failure as integral and informative aspects of a developmental process, rather than strictly as reward and punishment.35 Failure becomes a cue for a change in direction and ceases to be simply an admonition. Much of this external influence is also reflected in the fact that discovery is itself the reward in a playful search.36 Rather than being rewarded by achieving some predetermined goal or acquiring a “correct” answer, playful children find joy in the mere act of discovering new things. Discovery is paramount to play, and it echoes play’s developmental nature since it is “in its essence a matter of rearranging or transforming evidence in such a
27. Vygotsky, Mind in Society, 103.
28. Ibid., 96-97.
29. Bruner, On Knowing, 86.
30. Ibid., 89-91.
31. Vygotsky, Mind in Society, 90.
32. Sennett, Richard, The Craftsman, (New Haven: Yale UP, 2008), 272-273. 33. Bruner, On Knowing, 94.
34. Ibid., 87.
35. Ibid., 90.
36. Ibid., 88.
37. Bruner, On Knowing, 82-83.
38. Sennett, Craftsman, 268-269.
39. Ibid., 277-278.
40. McCullough, Abstracting Craft, 225.
41. Sennett, Craftsman, 279.
way that one is enabled to go beyond the evidence so reassembled to new insights.”37

Playfulness and age are not directly correlated, however. While play changes form throughout an individual’s life (and is itself responsible for the change in that individual’s form), it still exists at every stage. In fact, playfulness once again serves a primary role in development when it comes to adults – this time in the more specific realm of craft. In The Craftsman, sociologist Richard Sennett writes that “the rhythm of routine in craftsmanship draws on childhood experience of play…Work and play appear as opposites if play itself seems just an escape from reality. On the contrary…play instills obedience to rules but counters this discipline by allowing children to create and experiment with the rules they obey. These capacities serve people lifelong once they go to work.”38

While Sennett draws distinctions between designers and craftsmen, I would like to highlight their similarities. Craftsmen are similar to designers in the sense that they are not focused on skill acquisition but rather the pursuit of curiosity. Both rely on play to engage, learn about, and test the problems they are faced with. The qualities that Sennett outlines as inherent to craftsmen are equally applicable to the realm of design.

The first quality that Sennett describes is the ability to localize, which “names the power to specify where something important is happening.”39 This involves “making a matter concrete” and thoroughly mirrors the designer’s choice to work with a particular medium. In choosing to employ a medium, the designer pursues characteristics and qualities that will be of use to the given situation. The designer must know what is useful and will provide insight. This is not a matter of knowing what the answer is but rather knowing what avenues of investigation may yield interesting results.
Years of practice build up a level of intuition that does not promise immediate results so much as provide the guiding framework for exploration and discovery – this is true for both craftsmen and designers. “Design is not only invention, but also sensitivity to a medium. Craft cannot be merely in service of technique, or of inappropriately conceived ends. The craftsman must begin to feel something about the artifacts, and only certain moves will feel right.”40 The intuitive ability to localize an issue affords the designer the freedom to explore around it. Once grounded, whether through the use of a particular medium or otherwise, the designer can move on to explore the issue’s possibilities.

The second quality that Sennett lists is the ability to question. Having grounded the issue, the craftsman must be able to reflect on its qualities. “’Questioning’ means, physiologically, dwelling in an incipient state; the pondering brain is considering its circuit options. This state makes neuronal sense of the experience of curiosity, an experience that suspends resolution and decision, in order to probe.”41 Curiosity
ink prints: playing within a given medium (courtesy of Matt Robinson)
42. Teal, “Developing,” 301.
43. Ibid., 299.
44. Sennett, Craftsman, 277.
45. Ibid., 279. 46. Teal, “Developing,” 295.
47. McCullough, Abstracting Craft, 225.
48. Pye, David, The Nature and Art of Workmanship, (Bethel, CT: Cambium, 1995), 31.
49. Sennett, Craftsman, 279.
50. Summers, David, “Facture,” Real Spaces: World Art History and the Rise of Western Modernism, (London: Phaidon, 2003), 102. 51. Teal, “Developing,” 301.
naturally aligns itself with playfulness, and the ability to question something relates directly to the imaginative quality of play. As noted, play is immersive, and immersion is key to proper questioning. Randall Teal, while writing in support of non-linear design, states that “knowledge of the problem arises by inhabiting an emerging solution, and as solutions are rendered, the solution gains clarity; simultaneously the solution evolves.”42 This immersed, dynamic state results in back-and-forth experimentation and active questioning that Teal calls “mapping.” In order to draw out the unknown, “mapping allows a myriad of reformations that arise through such participation in the conditions.”43 To be immersed is to occupy the problem-space directly, and such an approach is often key to discovering new possibilities.

Finally, Sennett discusses the ability of the craftsman to open up an issue, “expanding its sense.”44 “The capacity to open up a problem draws on intuitive leaps, specifically on its powers to draw unlike domains close to one another and to preserve tacit knowledge in the leap between them.”45 The development of quality design often involves the assimilation of “the complexities, accidents and flows that are basic to a dynamic and vital existence.”46 To incorporate such disparate forces into a synthetic and beautiful final result is extraordinarily difficult (hence the relegation of such an ability to the realm of craftsmen). The ability to do so is the result of developed intuition, and the moves that are made are not always planned.
“The master at play improvises.”47 In The Nature and Art of Workmanship, designer and master craftsman David Pye writes that the workman occasionally “admits to the work an element of the unaccountable and unstudied: of improvisation: either deliberately or because he has not the time or ability to prevent it.”48 Again, it becomes clear that to be skilled in a creative field does not mean knowing all the answers; instead it means knowing how to maneuver the situation and expand the question in order to expose possible solutions. This investigation of the possible indeed is central to the task of the designer, as well as the nature of play. “’Open up’ is intimately linked with ‘open to,’ in the sense of being open to doing things differently, to shifting from one sphere of habit to another.”49 Once free to move laterally, the designer can draw connections hitherto unseen. “Play not only involves the application to given materials of existing skills, it also involves their extension and development. It thus explores the absolutely possible, that is, what is able to be done given the means at hand.”50 Teal states that this ability to play with possibilities is not only beneficial, but necessary: “progressive experimentation is always necessary in moving beyond what we already know.”51 While Sennett stops at three, there is one more characteristic that I believe is inherent to craftsmanship (and reflected in playfulness). Pye writes: “the quality of the result is not pre-determined, but depends
on the judgment, dexterity and care which the maker exercises as he works. The essential idea is that the quality of the result is continually at risk during the process of making; and so I shall call this kind of workmanship ‘The workmanship of risk’.”52 The craftsman, along with the designer, absorbs the risk of failure as she works. This fearlessness is driven by the same force that drives play: an inner-directed drive that rewards itself in discovery and is free of external burden. Though the worker may be cognizant of external pressures and restrictions, her internal curiosity and willpower prove to be the stronger forces. While each of these approaches to play (children and craftsmen) has its individual characteristics, there exists a common ground between them. It is in this intersection that I seek to develop a definition of play. By comparing the two, I can distill a set of characteristics robust enough to apply to various scenarios, including those that don’t immediately conjure up impressions of playfulness. First, to be playful is to pursue possibility. As Summers notes, “play is permutational in the sense that it tends to realize all possibilities, and it is liminal in the sense that it seeks the limits of possibilities. Play precedes innovation.”53 The act of achieving such a feat without prior knowledge of the established limits requires a degree of experimentation; play is naturally both experimental and improvisational. “Play serves learning through experimentation without risk. Play often lacks any immediately obvious aim…Play’s endlessly variable series of awkward, exaggerated motions seeks out the approximate arena for later development of true competence.”54 The act of discovery is the actual reward for such a search, both for children and for craftsmen. 
“One knows they have got somewhere only by getting there.”55 It is this thorough engagement of the unknown that reveals just why play is so critical to innovation: “In general, we are very inclined as modern people to think of invention in terms of the analysis of problems and the devising of technological solutions…The opposite is perhaps more often the case, even in the modern world; people - individuals and groups - continually push at the boundaries of what is possible.”56 Second, play involves the pursuit of resistance (or, to use McCullough’s term, “grain”57). In children, this manifests in the intentional creation of rules. The imaginary situations that children fabricate hold no legitimate claim against them, but “to carry out the rule is a source of pleasure. The rule wins because it is the strongest impulse.”58 In craftsmen, this manifests in an open dialogue with a medium. The choice to engage a particular medium (in the sense of working style rather than simply physical material) is a choice to come up against certain rules and regulations. The craftsman knows that different materials and processes will yield different results, and the practiced pursuit of these various avenues will eventually yield an
52. Pye, Nature and Art of Workmanship, 20.
53. Summers, “Facture,” 106.
54. McCullough, Abstracting Craft, 223.
55. Teal, “Developing,” 299.
56. Summers, “Facture,” 106.
57. McCullough, Abstracting Craft, 165.
58. Vygotsky, Mind in Society, 100.
59. Bruner, On Knowing, 89-91.
60. Teal, “Developing,” 299.
61. McCullough, Abstracting Craft, 61.
62. Teal, “Developing,” 298.
intuition that will guide her future craft. Third, play is inner-directed. “Competence-oriented learning,” to borrow Robert White’s term, minimizes the external-reward system in favor of an internally generated force.59 The reward for this is no longer the value of the work but rather the pursuit of possibility and discovery; in children this involves discovering the nature of the world around them, while in craftsmen it involves investigating the possibilities of a particular project. In each case, willpower is generated not by fear of external punishment but by internal, playful curiosity. Fourth, play is immersive. The imagination of the child is not just fantastical to him: it is, in fact, his reality, constructed in order to help him abstract and understand the reality he can’t yet comprehend. In his essay about complexity, Teal does not seek to evade the myriad of forces inherent in the design process: rather, he proposes a full-on envelopment of all the issues at hand.60 McCullough argues for a similar method of immersion, discussing the benefits of tools in inhabiting various tasks.61 Immersion allows us to sift out a structure in which “complexities may be treated as complexities.”62 Finally, play is developmental. Bruner and Vygotsky go to great lengths to demonstrate just how critical play is to child development, from social settings to memory capacity. Play teaches the child how to learn and how to practice. Similarly, play helps the maker acquire skill and become a craftsman. In both cases, the individual learns to absorb complexity into the process and engage the problem at hand. Play allows us to move skillfully through the field of our ideas, fearless to test and discard different possibilities. To be playful is to inhabit the problem- and solution-space simultaneously.
1. Teal, “Developing,” 301.
2. Vesely, Divided Representation, 3.
3. Teal, “Developing,” 301.
4. Summers, “Facture,” 106.
5. Ibid.
6. Kolko, Jon, “Information Architecture and Design Strategy: The Importance of Synthesis during the Process of Design,” Web log post, Interaction Design and Design Synthesis.
Based on the prior research, playfulness can be summarized as our willingness to experiment. The results of a playful search display significant variety, as the goal is not “correctness” or “development” but rather the discovery of new possibilities. Play pursues resistance, whether in the form of physical, material grain or mental rules and regulations. Play is driven from within, even while we are immersed in the context of the “game.” In play, we are able to move non-linearly through the field of our ideas, fearless to test and discard different possibilities. To be playful is to develop an understanding of our surroundings, absorbing complexity and engaging whatever confronts us. As mentioned earlier, this concept is one that has been highlighted by many authors, and it is typically implemented as an integral part of design education. It is easy to conclude, then, that design inherently involves play. Creativity, one of the prized characteristics of the skillful designer, requires immersion in the design process. Creativity is ambiguous and complex, full of “relevant issues, forces, areas, effects, central to a particular evocation that the brief, site, client has put in motion.”1 It is a feature that requires skill that has been internalized to the point of being intuitive: “Even the most elaborate systems or most successful personal visions cannot replace the unity of the different levels of knowledge required for genuine creativity.”2 Designing creatively for complex situations involves recognition of the fact that a simple solutions-oriented approach can’t suffice. The issues at hand are engaged in order to uncover possibilities, not to find a single answer. 
Immersion in the design process is essential when approaching complex problems, since “the only way one can develop the familiarity necessary to truly understand any problem is by attempting to respond to what one thinks the problem to be…knowledge of the problem arises by inhabiting an emerging solution, and as solutions are rendered, the solution gains clarity; simultaneously the solution evolves.”3 The immersion offered by play provides this opportunity for creativity and innovation. As “the continual extension of the possible,” play allows the designer to move beyond her current scope of understanding. “Play precedes innovation.”4 Play’s focus on engagement rejects the idea of avoiding difficulty; in fact, the best way to understand and explore our possibilities is through play.5 If, according to Le Corbusier, creation is a patient search, then play affords us the ability to search in any and all directions, unconstrained by external criteria of time or value and free to learn from our discoveries. Playfulness is also important to the design process because the designer must be inner-directed and free to explore. Jon Kolko states that design is an open dialogue, emphasizing the need to embrace (rather than reduce) the complexity at hand.6 Design cannot be linear; Temple describes it as cyclical, a process of “making-thinking-doing-
reflecting.”7 A linear process presupposes that value increases as iterations increase, and that if the process fails, it must be restarted. Playfulness rejects both of these ideas. First, it embraces a willingness to experiment. Playful experimentation is not bound to a linear path; rather, it seeks all of the different possibilities available, often moving in obscure directions in order to arrive at unforeseen conclusions. Difference arises through search, and each difference is treated as a potential solution. Value is only added or subtracted well after the discovery is made. The playful individual is able to do this primarily because she is unburdened by the fear of failure, which highlights playfulness’s second key strength: To be playful is to demonstrate a willingness to restart – Gadamer’s “constant self-renewal.”8 The ability to fail – and, in turn, to process that failure as beneficial information rather than harsh criticism – is instrumental in allowing the designer to move forward following failure instead of pushing him to move backward (or shut down entirely). In playful design, the designer is rewarded by the act of discovery itself. Directed from within rather than from without, he seeks the full range of available possibilities, regardless of success rate. Decisions must ultimately be made about the value of different iterations, but that can only happen when different iterations are freely allowed to be brought to the table. Finally, design displays similar developmental qualities to play. Design pursues options through a constructive process that is rarely without some form of path or trail, with the key quality that this path is not always linear. The iterations of the design process are a form of development, even when intuition steps in and facilitates less than obvious jumps between versions. Design is an engagement of the problem, just as play is an engagement of an issue or a rule. 
Often, this takes place through modeling, as designers seek to test their ideas against a set medium and uncover new insights through the filter of a particular modeling process. Like play, design involves “setting off the playing field”9 such that the rules of the system may be explored, utilized, and pushed. Design involves engagement on all levels, from conceptual and physical to theoretical and programmatic. Both play and design attempt to tease an appropriate answer out of a set of questions too complex for one single solution, working through development and resistance to explore possibilities and satisfy curiosity.
7. Temple, Steven, “A BioExperimental Model for Learning Creative Design Practices That Supports Transformative Development in Beginning Design Students,” ArchNet-IJAR 4.2 (2010): 117.
8. Gadamer, Truth and Method, 105.
9. Ibid., 107.
10. Pye, Nature and Art of Workmanship, 20.
11. Teal, “Developing,” 296.
12. Ibid., 301.
13. Pye, Nature and Art of Workmanship, 36.
As outlined earlier, computers present us with resolute environments. The power of the computer is in its ability to solve problems directly: it is binary, algorithmic, and linear. As a computer runs a program, any failed step requires a reversion and re-evaluation; there is no “grey area.” Value and intent are pre-programmed into the computer environment, to be compared against results in a pattern-matching process that is often complex enough to appear “intelligent.” At its core, however, these processes are ultimately reducible to the linear string of bits and governing rules that give them form. In the realm of the computer, correct intention does in fact lead to correct form, as there is always some foreknowledge of what the best answer is going to be, or be like. Computers operate on processes of optimization, and as such they exhibit entirely resolute working environments. Any freedom or forgiveness in the system is merely a simulation of flexibility, because ultimately, the system seeks one “answer” out of the complexity at hand. In a resolute environment, value is constantly present, and the method for assigning it often follows a spatial or temporal structure. Newer is usually better, more developed or updated – terms echoed in the development of technology. David Pye, in discussing the nature of craft, identifies two types of workmanship: the workmanship of certainty, and the workmanship of risk. While the workmanship of risk aligns itself more with playful design in its focus on process rather than end product, the workmanship of certainty has ironed risk out of the process entirely. “In workmanship of this sort the quality of the result is exactly predetermined before a single salable thing is made.”10 Such resolute environments are not appropriate for design thinking. Resolute work is bent on the completion of a particular task, typically with a pre-set understanding of what answers are more valuable than others. 
While design often incorporates processes of direct problem-solving and scientific reasoning, it cannot thrive when the focus is unduly placed on a linear mode of thinking. Speaking of Deleuze’s philosophy, Teal says that “he sought alternatives to the limitations of binary thinking. This binary manner of seeing the world, long embedded in the way we think, impedes effective design thinking.”11 If design is understood to be problem-solving rather than problem-engagement, it falters, for it “is understood to be a simple intentional and causal process.”12 Pye’s “workmanship of certainty is, simply of its nature, incapable of freedom.”13 The freedom to move that is so critical to design, and so abundant in play, is not available when working resolutely, and thus not inherently provided by computers. While we typically don’t engage computers at the level of bits, or even at the level of computer code, we still encounter this resolute nature at the level of the computer interface. Early interfaces relied upon command lines in order to operate any task on the computer. The computer could not “interpret” what the operator had in mind; the
operator merely had to know what the accepted inputs were. Eventually, graphic user interfaces became the primary system of interaction, and operation of the machine became a process of navigating menus and selecting options. Although this type of interface remains dominant today, the operator is still forced to work within the constraints of the tool: the opportunity for discovery is limited by the forced selection of pre-defined tasks. Any outcome on the part of the computer must be pre-programmed as a possibility. The interface offers a limited, albeit robust, set of inputs and choices that leaves little room for lateral exploration. Quoting French philosopher Henri Bergson, Teal notes that “the human mind has strong tendencies toward being reductive, and that it ‘instinctively selects in a given situation whatever is like something already known.’”14 The menus present in computer software remove the burden of generating solutions by providing a set of “already known” answers. When the computer interface becomes too prominent, it delimits the field of our possibilities based on its own pre-programmed criteria. “The physical limitation of the laptop (the size), combined with the digital limitations of the software (the organizational schema), dramatically limits the designers’ ability to see the forest and the trees: they lose the ability to understand the research in totality and they are limited in their ability to freely manipulate and associate the content.”15 Finally, the designer must operate through the translating duo of mouse and keyboard in order to convey her intent. While this interface has been almost universally adopted and thoroughly adapted by computer users, it is but another mediating process the designer must consider while trying to represent an idea. This medium differs from the typical medium of the craftsman in that it is not chosen in order to bring value to the work. 
The keyboard and the mouse offer resistance to the designer, but it is rarely of benefit. The resolute work offered by the computer, whether at the level of bits, software interfaces, or hardware devices, is not playful in and of itself. Immersion in a computerized environment is therefore unable to harbor the creativity and freedom necessary for good design.
14. Teal, “Developing,” 296.
15. Kolko, “Information.”
16. McCullough, Abstracting Craft, 222.
17. Ibid., 157-158.
To be fair, there is nothing to say that a linear and resolute process cannot take place within design. The preceding argument is not an attempt to rule out computer-based environments from design altogether, but rather a warning against letting design exist within resolution, rather than vice versa. In fact, there are several characteristics of computers that offer an incredible amount of playful potential. Some of these qualities augment human ability, allowing the designer to continue to explore and push what is possible. Following a discussion of computer tools, McCullough highlights two such qualities: superhuman (simulated) strength and the ability to wield multiple tools.16 Computers also present information in an environment that lacks any built-in friction or physics. While the passing of bits is very much a physical process, the individual’s actual interaction with the computer occurs at the level at which the information is represented, which suppresses physicality and allows notions of virtuality to come to the forefront. With friction apparently eliminated, the designer can experiment with models and forms in ways impossible in the real world. Computer models also represent information in vector format, further negating the tricky limitations of physical scale. Digital models can be instantaneously explored as either object or environment – or as both at the same time. This is expanded by the speed at which computers operate. The computer’s ability to perform tasks at speeds far surpassing those achievable by humans drastically cuts down on many of the delays inherent in analog processes, theoretically allowing for more iteration per unit of time. Not only is the computer fast; it is precise, and this specificity can be of great use in various design processes. 
Licklider’s proposed separation of tasks was meant to highlight each party’s ability: if such “routinizable work” is offloaded to the computer, then the designer is free to concentrate more on strategy and play, rather than computation. Alongside augmenting the designer’s ability, the computer offers him an easy method of producing multiple iterations. The computer can produce perfect copies of information almost instantly, ensuring that the designer can enact changes at a whim without worrying about losing any previous versions of an idea. In fact, McCullough points out, this method of organizing and interpreting information is built into computer architecture. Unlike analog environments, digital information is not solely tied to individual instances. Software relies upon classes and patterns to control information, allowing the user to manipulate an entire class of models indirectly – perhaps an early description of parametric architecture.17 With such a structure inherent to computers, they are well primed to facilitate iterative generation and manipulation. These characteristics display a striking similarity to playfulness. Computers involve the exploration of possibility, allowing the user to pursue new territory. Physical forces are essentially removed, insulating
the designer from external strain. Iteration becomes easy and value is only artificially assigned by time stamps and model names. The computer offers an extension of the imagination, allowing the designer the opportunity to see things previously unseen. Computers are quick enough to provide live responses, mimicking the dynamic and open engagement of a medium. Computer models can certainly be immersive, and the augmentation of the designer’s abilities enables her to absorb complexity into her process. Finally, the computer allows us to develop our skill through repetition, practice, and production. The current use of computers as binary environments tends to hinder our ability to play. We become overly immersed in the computer’s resolute environment and begin to rely upon its answers as finalized end products. A tool, however, is supposed to help you inhabit the task, not inhabit the tool itself. The environment must always be larger than the tool. A binary process is one-dimensional, and as such it can certainly exist as a subset of a larger process – much the same way as a tool is used temporarily for a specific task, and then discarded. Playfulness within digital design is entirely possible if the computer is used for its strengths and not for its ability to assign value. Only when the computer is understood to be a resolute tool in a playful process can design benefit from Licklider’s vision of “Intelligence Augmentation.”
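McCullough’s observation that software controls information through classes and patterns can be made concrete with a small sketch. The following Python fragment is purely illustrative – the names ColumnFamily, place, and widths are invented for this example and are not drawn from any real CAD or BIM system. It shows, under those assumed names, how a single parametric rule held by a class of models can reshape every placed instance indirectly, which is the kind of class-level manipulation the passage above describes:

```python
# Hypothetical sketch: a "family" of column models sharing one parametric
# rule. Editing the rule on the family reshapes every instance at once,
# without touching any instance directly.

class ColumnFamily:
    """A class of column models governed by a shared height-to-width ratio."""

    def __init__(self, height_to_width: float):
        self.height_to_width = height_to_width  # the shared design rule
        self.instances = []                     # every column placed so far

    def place(self, height: float) -> None:
        """Record a new column instance with its own height."""
        self.instances.append({"height": height})

    def widths(self) -> list:
        # Each instance derives its width from the family's shared ratio,
        # so no instance stores a width of its own.
        return [c["height"] / self.height_to_width for c in self.instances]


family = ColumnFamily(height_to_width=8.0)
family.place(4.0)
family.place(6.0)
print(family.widths())          # widths derived from the 8:1 rule

family.height_to_width = 10.0   # one change to the class-level rule...
print(family.widths())          # ...propagates to every instance indirectly
```

The design choice worth noticing is the indirection: the instances never hold a width, only a height, so the “width” of the whole class can be revised in a single move – a toy version of the iterative, class-wide manipulation that parametric tools offer.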
Discussion During the final discussion with my committee, several points of debate came to the forefront. The first of these involved a discussion of the designer’s field of external limitations. The question was posed: What is the difference between being bound to and inhibited by external limitations and playful engagement with those same limitations? This relates closely to the topic of the designer’s freedom. If we are intentionally self-limited, what exactly does our freedom entail? As an early notion, perhaps we are talking less about the freedom to generate and more about the freedom to assign value - or, perhaps, the very freedom to choose limitation in the first place. This would reinforce the claim that the keyboard and mouse interfaces are not inherently beneficial modes of resistance, for their usage is not freely chosen. We must be free to choose how value is assigned, and thus choose our field of play. Whether this is true or not, it became clear in subsequent discussions that a closer reading of Gadamer’s “Truth and Method” should help further this inquiry. Gadamer writes extensively about the field, stating that play is more about the act itself than the players involved. The second major issue involved the strict separation of design and science advanced by the paper. By isolating resolute thinking and playful thinking, I drew a strong distinction between creativity and research, unwittingly adopting my previously criticized binary mindset. The two are not distinct fields but rather parts of the same continuum; to split them is to bypass the amazing potential of the intermediate steps. While the purpose of this work was never to omit resolute thinking from design, greater care is needed to emphasize play’s ability to move freely between mindsets and approaches. Third, a question was posed about the relationship of skill-based knowledge to creative architectural design. 
This relates specifically to the forecasted avenues of this work: it is one thing to identify the players, and another to discuss how those players play. Is the knowledge of a tool or a process needed to ensure skillful application of it in design? This also ties in closely with the topic of immersion, as it is immersion in the process that lends the designer the intuition needed to manipulate and consider complex topics, from siting strategies to the details of material construction. If skill-based knowledge does directly relate to creative design, then what are the effects of “handing over” many of the details of construction processes to computer programs? The committee also discussed the need for the research to be more specific. This is especially relevant if the work is to incorporate any evaluation or development proposal. The analysis at hand should not compare scientists and designers in general but rather discuss a specific subject, situation, and time. By limiting the research to a particular group and topic (e.g., “analyzing the role of playfulness in early architectural design students as they interact with computer design tools”), conclusions can be more substantial and proposed developments more focused.
1. Licklider, J. C. R., “Man-Computer Symbiosis,” IRE Transactions on Human Factors in Electronics HFE-1.1 (1960).
Has an over-abundance of complexity in our world led to increased levels of simplification that, efficient at first, have oversimplified us to a level of reduction? Has our striving for improvement manifested in a desire to become more like computers? Are we forgoing creativity in favor of hyper-rationalized decision-making processes? Speed has become everything, and in pursuit of that, we are accepting binary choices over our gradated value systems; our insistence upon a man-machine merger is causing us to model our own methods of thought on those of computers and believe in the process of optimization as a legitimate design method. The inherent differences in computers, however, render them inappropriate for tasks of creativity and design, while the nature of our brains simultaneously makes them ill-equipped for the type of rapid calculation so easy in computer systems. For decades, our model of “Artificial Intelligence” has quietly been leading us to perceive the computer as merely a simpler, quicker version of the human brain. As the computer’s development has accelerated, our own attempts to be more “efficient” have followed suit. Where, in our development, did we go wrong in our approach to computers? How can our control over computers maintain their role as tools? How can we work towards J.C.R. Licklider’s model of “Man-Computer Symbiosis,” or “Intelligence Augmentation,”1 rather than the inherently demeaning and de-humanizing vision of a superior computer intelligence? I intend to analyze the historical, societal, and psychological effects of technology in order to better understand just why this view has come about and how we can begin to counteract it. As designers, architects hold a prominent place in the testing ground for these new ideas. Perched on the balance between scientific reason and creativity, they are already called to mediate between speed-based calculation and the slow, heuristic search that is design. 
Nowhere should the clamor for proper computer implementation be heard louder than on this bridge between science and art.
time lapse of the construction of the Eiffel Tower, 1887 - 1889
Science and Style
1. Cole, Emily, The Grammar of Architecture, (Boston: Bulfinch, 2002), 126.
2. Ibid., 178.
3. Pacey, Arnold, The Maze of Ingenuity: Ideas and Idealism in the Development of Technology, (Cambridge, MA: MIT, 1992), 23.
4. Cole, Grammar of Architecture, 194.
5. Ibid., 200.
6. Pacey, Maze of Ingenuity, 69.
7. Moffett, Marian, Michael W. Fazio, and Lawrence Wodehouse, Buildings across Time: An Introduction to World Architecture, (3rd ed. Boston: McGraw-Hill, 2004), 379.
Modern thinking tends to draw a divide between the role of the scientist and the role of the designer. Artist versus mathematician, architect versus engineer - the two are often considered incongruous at best. This alienation has not always been the case, however; in fact, science has played a significant role in architectural design from antiquity to the modern day. The architect’s image has always incorporated at least some conception of technical knowledge, dating back to the role of the architect as master builder. Some of the largest architectural transformations have come from scientific research, whether it be developments in materials and engineering practices or the invention of new design tools. In the construction industry, the discovery of new materials and building practices, whether through research, accident, or exploration, has helped fuel the development of new and daring architectural styles. The use of fired brick and extremely durable concrete by the Romans “influenced the construction of both old and new building types,” from the Pantheon to the Colosseum.1 The rapid spread of Islam from the 7th to 12th centuries brought with it “a mature theory of structural mechanics, the pointed arch, vault, squinch, and dome.”2 Twelfth century cathedral building, which Arnold Pacey views as the beginning of the modern era of technology,3 only saw the achievement of “soaring space” due to the development of stone rib vaults.4 Gothic architecture, so signature in its use of verticality and light, was merely a synthesis of the latest construction techniques: the pointed arch, ribbed vault, and flying buttress.5 Other technical inquiries influenced architecture as well, as seen in the continuing interest in the control, use, and display of water. 
Alberti included information on the supply of water to drive mills in his De re aedificatoria, and Palladio’s villa designs of the 1500’s were “notable for the waterworks that supplied fountains in the garden and water for the kitchen...Water supply was, indeed, one of the branches of engineering most closely connected with architecture.”6 The Industrial Revolution highlighted this connection between science and architecture. “New materials, new technologies, and new systems of construction would radically alter traditional building forms and would make completely new building types possible,” leading to an explosion of new varieties and styles.7 The use of iron, perhaps the most important development to come out of the Industrial Revolution, changed the face of the construction industry. The ability to create steel frames in a variety of shapes and sizes freed buildings from reliance upon load-bearing walls; this, together with the inventions of air conditioning and the elevator, ushered in the skyscraper as a building type, revolutionizing our urban environments. Just as material science allowed architects and engineers to predict what would happen in their buildings (rather than reflect on past experience), developments in metalworking
mason’s geometrical drawing instruments, c. 13th century (Pacey 52)
the first computer mouse, invented by Douglas Engelbart, 1964
technical drawing instruments, c. 1900
8. Pacey, Maze of Ingenuity, 50.
9. Ibid., 78.
10. Moffett, Buildings across Time, 395.
offered them the physical strength to explore new formal territory. Materials and engineering are not the only scientific advancements that have influenced architectural design, however. The tools and techniques used by architects have also seen (and caused) significant changes. Thirteenth century architecture was marked by the advent of cathedral building and the Gothic style. Much of the design of these magnificent buildings can be traced back to the ‘ad quadratum design’ method employed during the time, which relied on a “system of construction lines based on the geometry of a square.” Indeed, in describing the techniques of the time, Pacey emphasizes the use of geometry, citing “clear evidence that Euclidean geometry rather than measurement was the basis of drawing.”8 Several hundred years later, Renaissance architecture adopted the techniques of proportion and scale as its driving forces. Alberti’s De re aedificatoria, published around 1450, drew much of its material from Vitruvius’ De architectura and highlighted this emphasis on proportion. The effect was so widespread that it showed up at all levels of design. Since the invention of the printing press “radically changed the rate at which ideas could spread,” new practices in architecture and engineering were no longer the sole property of the intellectual elite. The work of ordinary builders was just as heavily influenced.9 It wasn’t until the Industrial Revolution that drafting as we know it today became a specialized architectural process. The reason for this may be that during this period, architects began to use drawing techniques that allowed them to depict (on paper) even the most complex three-dimensional objects.10 Additionally, during the 1800’s, an explosion in print media and order catalogs flooded architects with a wide range of building components and styles previously unavailable to them, perhaps helping lead to the numerous styles developed in the years that followed. 
This architecture of choice accelerated even more during the 1860’s, when the invention of blueprinting allowed the humble drawing to be copied almost instantly. Over 100 years later, drafting entered the digital age, with Autodesk releasing its CAD software in 1982. Merely ten years later, three-dimensional modeling programs entered the scene, and in another short decade they would become so influential as to spawn their own styles of design. The development of these programs was followed by a proliferation of formal and geometrical experimentation, which was only achievable due to the computer’s ability to maintain and adapt consistent technical information about the building form as a whole. Analog methods have seen a decline since the advent of the computer. Whether this is good or bad, it is evident that digital technology has become the preferred medium for architectural exploration and representation. An architect’s job lies in the coaxing of his or her ideas into built form. To do so, those ideas must first be represented, and it is this step
that is most affected by the ongoing development of the architect’s toolset. Ideas have become easier to represent, freeing them from many of the constraints that once held them back. Where a drawing of a building used to be arduous and entirely manual, that drawing eventually became instantly copied, then instantly edited, and finally even instantly updated and generated. Once descriptive geometry permitted architects to represent their buildings entirely graphically, they were “relieved...of the necessity of daily attendance at the job site to direct progress of the work.”11 Similarly, the BIM software that is central to many of today’s firms attempts to predict the answers to many of the architect’s tectonic and structural considerations. Information is becoming increasingly embedded into the medium; issues are left up to the consideration of the tool, rather than the designer. The software also presents the user with an initially limited palette of options, just as the product magazines of the Industrial Revolution allowed architects to “shop” for building pieces. This host of catalogs and CAD blocks means less problem solving by the designer and more authority shifted to the tool itself. This development of digitized tools, along with the accompanying scientific developments, has led to an architecture less rooted in the physical world. Since the Middle Ages, the role of the architect has gradually distanced itself from that of the engineer, as the practices of mechanical and civil engineering became separate fields.12 Despite being continually influenced by science, architecture has found its validation in other venues, from the social causes that followed World War I to the variety of artistic movements of the 20th century. Formal exploration became increasingly easy, with scientific research leading to lighter, stronger, smaller materials. 
When the computer came onto the architectural scene in the 1980’s, it provided the perfect avenue for non-tectonic expression. Entire theoretical projects could be conducted with ease, bringing to life wild visualizations of impossible scenarios. Architecture could finally be both idea and representation, as malleable as an early sketch yet containing all of the information of a finished building. The tools of the designer are beginning to occupy an increasingly prominent role in the design process. Digital tools detach the designer from reality, providing an avenue for the expression of vague ideas while simultaneously “intelligently” filling in the gaps with pre-programmed details. The automation of this process is not inherently bad: the danger lies in the shortcut it offers to the inexperienced designer. The need for less consideration and decision-making on the part of the architect carries with it the danger of diminished and shifted responsibility. Considering the role that tools and methods have played in the history of architectural design, care should be taken to ensure that these tools do not contribute to any further separation of the architect from his or her inherent responsibilities.
11. Moffett, Buildings across Time, 395.
13. King, Ross, Brunelleschi’s Dome: How a Renaissance Genius Reinvented Architecture, (New York, NY: Penguin, 2001).
14. Pacey, Maze of Ingenuity, 57.
15. Brand, Stewart, How Buildings Learn: What Happens after They’re Built, (New York, NY: Viking, 1994), 3.
16. Castells, Rise, 35.
17. Pacey, Maze of Ingenuity, 49-51.
18. Castells, Rise, 37.
Throughout history, the march of technological innovation has impacted every aspect and tenet of our society and culture. Some of society’s greatest inventions - the printing press, the steam engine, electricity - can be seen as the heralds of a new age, or the harbingers of a time of great progress. Since the first patent was awarded to Brunelleschi in 1421 for a barge that hoisted and shipped marble,13 people have been associating inventions with individuals, and while the work of certain notable men and women should not be downplayed, this view of technological development as a series of isolated advancements distorts our understanding. Technological progress is as gradated as any other development, be it cultural, social, or economic, and it is much more accurately represented as a social process than as a simple series of itemized dates.14 Like any other process, it displays characteristics and tendencies, providing hints as to future developments and directions. But unlike any other process, technology has begun to sustain a consistent acceleration, obscuring the future of its development even while cementing its impact on our lives. In comparing Sullivan’s “form follows function” quote with Churchill’s “we shape our buildings, and afterwards our buildings shape us,” Stewart Brand writes in How Buildings Learn that neither is individually true. Rather, “first we shape our buildings, then they shape us, then we shape them again - ad infinitum.”15 Technology follows in the same vein: When viewed as a gradual process, the innovations that crop up throughout history provide insight into the situations that surrounded the development, while still representing the forward-looking set of new opportunities afforded to the societies in question. As Manuel Castells puts it in The Rise of the Network Society, “technological innovation is not an isolated instance. 
It reflects a given state of knowledge, a particular institutional and industrial environment, a certain availability of skills to define a technical problem and to solve it, an economic mentality to make such application cost-efficient, and a network of producers and users who can communicate their experiences cumulatively, learning by using and by doing.”16 Humans have used progress as both a counterpoint and a solution to their own limitations for millennia. In the 13th century, geometric principles were widely used in everything from the mapping of the oceans to the mapping of the human face.17 Two hundred years later, the printing press exponentially increased the individual’s access to information, marking the onset of an entire intellectual movement. The development of the steam engine in the 18th century was perhaps even more influential, as many scholars regard it as the single most important event contributing to the Industrial Revolution.18 However, technological change has never been simply a means to an end. From the religious zeal that drove the Gothic cathedrals to new heights to the celestial obsession that spurred the development of water-driven
Moore’s Law: transistor count will double every two years
19. Pacey, Maze of Ingenuity, 23.
20. Ibid., 65.
21. Castells, Rise, 36.
clocks and astrolabes, the effort put into this progress has always contained some symbolic meaning or level of mystery. Arnold Pacey remarks in The Maze of Ingenuity that the “cathedral crusade” of the 12th and 13th centuries marked the beginning of the modern era of technology, for it was in this time that “changes [were] deliberately made in order to approach some unrealized ideal that is always one step beyond what current techniques [made] possible.”19 Not only were the cathedral builders continually questioning: they were eternally spurred by a zeal that was built upon idealistic imagination, a spirit that ultimately encouraged creativity above and beyond reason or logic. Even into the Renaissance, an age characterized by reason and intellect, “what we now call technology still had something of the quality of magic.”20 While the “positive effects of new industrial technologies on economic growth, living standards, and the human mastery of a hostile Nature” are obvious,21 there remain other, less highlighted effects in this continuing climb. Analyzing the development of technology, both as a gradual process and as marked by important instances, reveals several growing tendencies that have begun to accelerate in recent decades. For instance, the introduction of the mechanical bobbin and flyer into the spinning wheel perhaps marks the beginning of the trend toward automation (defined by Merriam-Webster as “automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor”), a process that would see its greatest growth in the industrialization of the 19th and 20th centuries. Another example is standardization, which may have had its birth in the metal type used in the printing presses of the 15th century. 
Considering the explosion of the number of printing presses (over 1000 were produced in the half century following its introduction) and the cultural impact that these presses had, the concept of standardization may have gained a crucial early foothold in influencing further technological development. The printing press saw the onset of another major trend: mass production. What started with the spread of knowledge via literature eventually grew to nearly every industry, from food to automotive. These growing tendencies point toward a natural trend in technological development: the push for efficiency. In one respect, this matches up perfectly with the idea of self-improvement, as we are continually trying to find ways to better our situation, both for ourselves and for future generations. Automation and mass production are now conceivable at increasingly advanced levels of production - an entire branch of architectural style (prefabrication) has grown out of this seemingly economic model. Standardization has had similar influences, introducing codes and regulations into almost every stage of architectural design. What is perhaps most important about the last
fifty years of technological development, however, is the accelerated pace at which these processes are being implemented. Although steam power was central to the Industrial Revolution, it took nearly half a century for the technology to begin having any real effect. By contrast, several milestones of the computer revolution - namely, the invention of the transistor in 1947, the integrated circuit in 1957, and the microprocessor in 1971 - saw an explosion of development that has pushed the progress of computers forward at an astounding rate (Moore’s Law has actually predicted this relatively accurately, stating that the number of transistors on integrated circuits will double about every two years). Computers, once the size of gymnasiums, are now handheld devices, and rapidly approaching ubiquity. According to one popular saying, the technology used to land men on the moon is less powerful than that found in a child’s cell phone. Of course, this speed is seen most clearly in the advent of the internet, which has connected the world in a way never before imaginable. The tools and information available to individuals are now almost limitless. Castells states, “the feedback loop between introducing new technology, using it, and developing it into new realms becomes much faster under the new technological paradigm. As a result, diffusion of technology endlessly amplifies the power of technology, as it becomes appropriated and redefined by its users. New information technologies are not simply tools to be applied, but processes to be developed.”22 Efficiency is no longer one opportunity amongst a larger quest: it is instead its own self-perpetuating goal. Previously driven by religious zeal, commercial gains, or military aspirations, technological development is now fueled by its own internal ideal. As referenced earlier, the previous eras of technological change, as rational as they may have been, always contained an element of mysticism. 
The modern approach to technology, however, has lost much of this wonder. Scientific thought has become almost entirely self-referential, ridding itself of any correlated ideals held by separate pursuits and embodying instead a “materialistic, disenchanted rationalism” that has become socially expected.23 Spurred by the incredibly widespread Industrial Revolution and accelerated by a rapidly increasing world population, scientific development has adopted a model of science for science’s sake, and the digital medium has provided the perfect environment in which this model can thrive. “The current process of technological transformation expands exponentially because of its ability to create an interface between technological fields through common digital language in which information is generated, stored, retrieved, processed, and transmitted.”24 However, Castells later cautions, “but the price to pay for inclusion in the system is to adapt to its logic, to its language, to its points of entry, to its encoding and decoding.”25 In You Are Not a Gadget: A Manifesto, Jaron Lanier writes extensively of
22. Castells, Rise, 31.
23. Pacey, Maze of Ingenuity, 65.
24. Castells, Rise, 29.
25. Ibid., 405.
26. Lanier, Jaron, You Are Not a Gadget: A Manifesto, (New York: Alfred A. Knopf, 2010), 11.
the suppression of basic human nature inherent in this adaptation. In describing one of the first (and most foundational) computer operating systems, he states, “UNIX expresses too large a belief in discrete abstract symbols and not enough of a belief in temporal, continuous, non-abstract reality; it is more like a typewriter than a dance partner.”26 Lanier’s description of this issue reveals how unchecked digital development results in a form of reduction that cannot encompass our normal experience without some loss of quality. The speed and capabilities of technology have increased to such a state that we are no longer adapting the tools to meet our needs, but rather are beginning to adapt ourselves and our needs to an ideal model of machine-like efficiency.
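The pace described in this section is quantified by the Moore’s Law doubling cited earlier. As a purely illustrative sketch of what a strict two-year doubling implies, the following assumes a 1971 baseline of roughly 2,300 transistors (the Intel 4004, a figure added here for illustration and not taken from the text above):

```python
def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a transistor count under a strict two-year doubling (Moore's Law)."""
    doublings = (year - start_year) / 2
    return int(start_count * 2 ** doublings)

# Assumed baseline: ~2,300 transistors in 1971 (Intel 4004; an illustrative figure).
print(f"{projected_transistors(2300, 1971, 2011):,}")  # 20 doublings: 2,411,724,800
```

Forty years of doubling turns a few thousand transistors into a few billion, which is the right order of magnitude for processors of the early 2010s; the point is less the precise figure than the character of exponential growth the text describes.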
27. Castells, Rise, 362.
Merriam-Webster defines complexity as “the quality or state of being complex” and complex as “a whole made up of complicated or interrelated parts.” It is only after pursuing this all the way to the definition of complicated (“consisting of parts intricately combined”) that a reasonable idea of the term can be formulated. The word is relative, as evidenced by its evasive definition, and it naturally tends to become even more so when applied to understandings of societal interaction. At its most basic level, however, complexity is quantified in terms of the interrelation of parts. An increase in complexity, then, can be brought about in a number of different ways: namely, by increasing the quantity or the diversity (or both) of parts within a unit of space or time. Our lives are growing more complex. Between the approaching ubiquity of the digital screens that occupy so much of our days and the explosion of advertising in recent decades, the average individual in modern society can hardly escape sensory bombardment. Castells describes the media as “the almost constant background presence, the fabric of our lives,” citing that “in the US the average person is exposed to 1,600 advertising messages per day.”27 Our level of access to information has also mushroomed. Topics that once required extensive research, proper connections, and perhaps even physical travel now merely require an internet connection and a bit of discernment. A quick Google search of “digital technology” returns over two billion results in a fraction of a second. At the same time, our physical connection to information has increased. According to one survey, “Around 1900, the average distance travelled per person per day was roughly one kilometre. In the 1950s, mobility grew fast and increased to an average of more than ten kilometres a day. At the end of the last century, in the population as a whole it amounted to
28. Kliučininkas, Linas, Towards Sustainable Urban Transportation: Environmental Dimension, (Frankfurt, Germany: Peter Lang, 2012), 13.
29. Castells, Rise, 38.
35 kilometres per individual.”28 Such growth brings us into contact with a multitude of new people, places, and experiences, simultaneously increasing the amount of resources needed to satisfy such distances. Our attempts to ameliorate the complexity in our lives, however, merely tend to further complicate the situation: we tend to perpetuate our own loop of inputs and responses. This is not a recent phenomenon; it is rather an apparent side-effect of the “movement toward the expansion of the human mind.”29 The invention of the printing press (and the subsequent birth of mass communication) was a response to a growing demand for information and literacy. The machine served as far more than a simple solution, however, ushering in an age in which the thirst for new knowledge seemed unquenchable. When air conditioning and elevators found their way into buildings in the middle of the 20th century, the resulting densification and physical expansion created a host of new issues involving everything from construction techniques to occupant safety. As information about food production becomes available to us, the resulting demand for new and more complicated diets typically results in a host of new and more complex food products. The most evident example of this loop, however, has appeared in recent decades. The blazingly fast processing speed achieved by recent developments in computer technology has seen the advent of “big data” (and the subsequent need for bigger categorizations of bytes: the prefix “yotta-” was only added to the International System of Units in 1991), a term that references amounts of data so vast as to be essentially unmanageable using typical database tools. As we start to gain the ability to process such data, however, we redefine our understanding of “unmanageable” and “inconceivable.” The results of such data processing and the subsequent appearance of computerized “wisdom” spur the development of still faster processors. 
Despite this, the move towards increasing complexity is not leading us “up” the Wisdom-Knowledge-Information-Data pyramid developed in the 1980’s. Our lives are becoming populated with stupendous amounts of information, merely expanding the base of the pyramid. What was once filtered, processed, and handled by multiple parties before becoming publicized is now distributed instantly, unfiltered, and live. Transparency and instantaneous access are hallmarks of our society, demanded in everything from app development to government. The removal of such filters on the front end results in a massive flow of information in the system: We are entrusted with the decision-making process ourselves. Ultimately, however, we always seek to normalize this flow into recognizable “knowledge.” Following statistics about the amount of media we are exposed to each day, Castells notes that, contrary to the common assumption, “the barrage of advertising messages received through
the media seems to have limited effect.”30 We are, in fact, coping with the increasing amount of information (and thus the growing amount of time needed to process such information back up the pyramid). In order to do so, however, we must necessarily rely on increased processing speed. Every jump in information quantity, be it linear or exponential, correlates to a necessary jump in processing speed. We may be impacted by the same amount of information now as we were throughout antiquity, but that is only because we have become faster and more efficient at filtering out a majority of the noise. Computers do not draw any distinction between “Wisdom” and “Data,” other than what is pre-programmed by humans (even then, such a distinction is not actual “learning” but merely rote memorization, or pattern recognition). Digital technology is inherently entrenched in data. “Better” technology is not technology that can discern between different data values, but rather technology that can process more data faster. The digital age offers us the tools to process exponentially increasing amounts of information in our lives, but in order to deal with that data, it simultaneously demands that we recognize an inherent baseline value for it. Castells writes about this in terms of the homogenization of communication, stating that “All messages of all kinds become enclosed in the medium because the medium has become so comprehensive, so diversified, so malleable that it absorbs in the same multimedia text the whole of human experience, past, present, and future.”31 Put more simply, our once hierarchical understanding of information has suffered from “the digital flattening of expression into a global mush.”32 Where before a certain limited set of options was enough to choose from, we now have the ability (and thus, as we mistakenly perceive it, the need) to select from all available possibilities. 
Our progress towards more complexity demands more speed, reducing our inherent techniques of intuitive judgment and subjective evaluation into a series of streamlined, expedited, objective processes.
30. Castells, Rise, 362.
31. Ibid., 404.
32. Lanier, Not a Gadget, 47.
33. Collins, Harry, “Humans, Machines, and the Structure of Knowledge,” Stanford Humanities Review 4.2 (1995): 67-83.
34. Castells, Rise, 403.
Brains and computers share many similarities in the basic, core structure of their processing methods: Computers utilize the binary system of 1’s and 0’s to encode and organize information, relying on the incredibly rapid method of electrical communication to process data. Patterns of these two digits are compiled in incredibly long strings, providing the variety needed to encompass the numerous characters, numbers, and symbols that comprise human communication. Since the processing of these digits occurs at nearly the speed of light, the massively lengthy strings appear to be woven together into information that transcends its one-dimensional nature. At the most basic biological level, the neurons in the brain fire, or signal, electrochemical impulses, similar to these 1’s and 0’s. Because these impulses are electrochemical, however, their direct processing speed is on the one hand much slower (approximately 1,000 operations/second, as opposed to a computer processor’s 1,000,000,000 operations/second) but on the other hand much more modifiable. The dendrites in the brain’s neurons don’t only measure the modulation of these impulses; they also read the rate at which the impulses are firing. This combination of effects allows for an incredible multitude of different permutations and variations, incorporating multiple dimensions and characteristics of information. Essentially, while computers process issues as just 1 or 0, yes or no, white or black, the human brain expands those two options into so many varieties that the results become an entire gradient of possible responses. Harry Collins, professor at Cardiff University, expounds on this idea by explaining how knowledge is embedded in our methods of communication in a variety of ways. In addition to basic, symbol-based information, Collins writes of embodied knowledge, embrained knowledge, and encultured knowledge. 
He separates action into “regular action” and “behavior-specific action,” noting that the majority of our day-to-day decisions either follow or establish ‘rules’ and appear to us to be easy to understand. The hidden instructions inherent in this behavior, however, reveal that we “cannot encapsulate all that [we] know about [regular action] into a formula...What is more, what counts as following the rules varies from society to society and situation to situation.”33 Manuel Castells writes concerning communication that “all forms of communication...are based on the production and consumption of signs...It is precisely this ability of all forms of language to encode ambiguity and to open up a diversity of interpretations that makes cultural expressions distinct from formal/logical/mathematical reasoning.”34 The inherent “ambiguity” of our brain’s processing abilities, while restricting its speed, also lends it the ability to deal with high levels and multiple dimensions of complexity. To visualize what’s going on, picture the neurons in the brain as a network of thousands of interconnected nerves, whereas a
Euclid’s algorithm for the greatest common divisor of two numbers
computer’s processor is more of a pipeline. The brain can calculate many things at once, moving in multiple directions around a network of tens of thousands of connections, calling on intuitive and embedded information and interpolation to supplement the data presented to it. A computer, on the other hand, is (technically) limited to one task per processor. Advances in processing capabilities are beginning to see computers able to undergo parallel processing, but at the end of the day, computers still follow a more binary and deterministic process, as opposed to the graded and stochastic one carried out by the brain. No matter how technology continues to advance, the brain is understood as a much more subjective and evolutionary entity, where the computer is, no matter how complex, still deterministic. The computer draws conclusions to programmed questions, where the brain immerses itself in a field of search that often utilizes roundabout methods of questioning and exploration in order to arrive at a “result.” Humans have built-in common sense and intuition: Processes which, even if they are ultimately quantifiable as merely a massive collection of both pre-programmed and learned algorithms, we are far from being able to replicate in computer architecture. Whether the brain is just an incredibly advanced computer or guided and influenced by some non-mathematical, un-quantifiable set of expressions, suffice it to say that the computer of today operates and achieves results in an entirely different way than the human brain. The “results” achieved by the brain (or, to be more specific, its “design process”) often take into account so many factors and parameters that the outcome is far from some optimized result. 
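The deterministic, programmed questioning described here is exactly what the algorithm in the adjacent figure embodies: Euclid’s method for the greatest common divisor applies a single fixed rule over and over, and the same inputs always yield the same single answer. A minimal sketch (the choice of Python is mine, for illustration):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    while b != 0:
        a, b = b, a % b
    return a

# The fixed sequence (48, 18) -> (18, 12) -> (12, 6) -> (6, 0) always ends at 6.
print(gcd(48, 18))  # -> 6
```

There is no exploration or intuition anywhere in the procedure: each step is dictated entirely by the previous one, which is precisely the contrast the text draws with the brain’s roundabout field of search.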
Optimization brings with it some promise of an “architectural solution.” Some notion that, through the power of computing, we can “solve our problem.” Merriam-Webster defines an algorithm as a “step-by-step procedure for solving a problem or accomplishing some end, especially by a computer.” At first this sounds promising in the vast decision-making field of design. Optimization, however, is inherently inhuman. In mathematical terms, design is about local optima (as opposed to global optima), because design resides in a field that is entirely subjective. Global optima span all times and situations, but we design in scenarios that are much more fluid and temporally variant. What’s popular today might not be popular tomorrow, and what’s available here may not be available there. Optimization is flawed because it produces a static result, a snapshot answer: It once again predicts that there is a single style that humanity will cling to, and champion, and live with forever. It glorifies the end and seeks to reduce the process to the most efficient model possible. But why would we seek something so unchangeable? Design is subjective, and bound to a temporal landscape just as much as a physical one. Computers are not. 
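The local-versus-global distinction drawn above can be sketched with a toy hill-climbing search; the list of “fitness” values and the starting points below are invented purely for illustration:

```python
# Invented fitness landscape: a local peak at index 2 (value 5),
# the global peak at index 6 (value 9).
values = [1, 3, 5, 4, 2, 6, 9, 7]

def hill_climb(values: list, start: int) -> int:
    """Greedily move to a better neighbor until no neighbor improves; return the index."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = max(neighbors, key=lambda j: values[j])
        if values[best] <= values[i]:
            return i  # a local optimum: no adjacent option is better
        i = best

print(hill_climb(values, 0))  # -> 2: the search settles on the smaller peak
print(hill_climb(values, 4))  # -> 6: a different start finds the global peak
```

From one starting point the search never visits the global peak at all; the answer it reports depends entirely on where (and, by analogy, when) the search began, which is the “snapshot answer” problem in miniature.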
map of the internet (original graphic courtesy of the Opte project)
35. Lanier, Not a Gadget, 63.
The proper allocation of computer resources is not in the continued pursuit of computational creativity, but in the careful integration of (and restricted reliance on) pure processing. Computers offer us the ability to process information incredibly rapidly, but it is up to us to recognize this as merely another step in a live and fluid process, rather than a definite answer. “Computers have an unfortunate tendency to present us with binary choices at every level, not just at the lowest one, where the bits are switching.”35 It is up to us to avoid deferring to this yes/no procedural mindset and properly prioritize human creativity.
36. Grosz, Elizabeth, “Cyberspace, Virtuality, and the Real: Some Architectural Reflections.” Architecture from the Outside: Essays on Virtual and Real Space, (Cambridge, MA: MIT, 2001), 111. 37. Ibid., 109.
38. Castells, Rise, 404.
39. Cass, Stephen, “How Much Does the Internet Weigh?” Discover Magazine, 29 May 2007.
In her writing “Cyberspace, Virtuality, and the Real,” Elizabeth Grosz states that “the concept of virtuality has been with us a remarkably long time. It is a coherent and functional idea already in Plato’s writings, where both ideas and simulacra exist in some state of virtuality...since there has been writing...there has been some idea of the virtual.”36 She argues for an understanding of virtuality that is rooted in the real, as both an age-old concept and one “whose status as virtual requires a real relative to which its virtuality can be marked as such.”37 Manuel Castells describes the same issue in terms of communication. “Cultures are made up of communication processes. And all forms of communication, as Roland Barthes and Jean Baudrillard taught us many years ago, are based on the production and consumption of signs. Thus there is no separation between ‘reality’ and symbolic representation.”38 All too often, however, the term “virtual” is conceptualized as digital space. New developments in technology are pushing towards a realization of “virtual reality,” a substitution of spatial and sensory experience that would provide enough similarity to our own reality as to appear indistinguishable, if not superior. Virtual and digital are not synonymous, however, and though they may share several key qualities, it is important that a line be drawn between the two. Our mental substitution of digital as virtual is rooted in the incomprehensibly small size of digital data, especially when juxtaposed to the value that information provides. Consider, for example, the internet. Attempts to quantify the internet into an actual measurable weight prove tricky. Stephen Cass of Discover Magazine released an article describing his attempt to do so by uncovering just how much digital information comprised the internet, and then “weighing” that content. 
Cass’s calculation portrayed the internet as weighing a mere 0.2 millionths of an ounce, or about the weight of the smallest possible grain of sand.39 Another approach yields a much larger, albeit still
incredibly small, answer: by roughly calculating the energy needed to run all of the servers that make up the internet, Russell Seitz estimated that the combined total weight came to roughly fifty grams (less than two ounces), approximately the weight of one strawberry.40 Divided among the 75-100 million servers involved, such a number becomes relatively meaningless. Considering the physical volume of such information is perhaps even more difficult: while an electron’s mass can be reasonably estimated, its size is far less well defined. What this consideration does raise, however, is the issue of the storage devices associated with digital information. Storing, enclosing, managing, and networking digital content elevates the amount of space involved to the scale of the building, or, in the case of data giants like Google, the mega-warehouse. Though this volume is much more graspable, digital information still seems to lack any intuitive sense of size. On the one hand, the storage devices themselves are shrinking: the 100-gigabyte hard drive that used to occupy a significant portion of my desk can now be squeezed onto a flash drive and hidden inconspicuously on my keychain. On the other, the data centers that are so instrumental to the functioning of the internet, despite being sizable, critical, and incredibly complex buildings, are rarely (if ever) discussed. The very fact that the term “cloud” is used to reference these centers should hint at their intentionally intangible nature. The volume of digital information means nothing to us because we cannot evaluate it at the micro level, and we cannot rely on any understanding of it at the macro level. Another characteristic that obfuscates our prior considerations of information is the speed at which digital content travels. 
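Estimates of this kind are back-of-envelope arithmetic: count the electrons that physically represent the stored bits, then multiply by the electron’s mass. The sketch below is a minimal illustration of that logic; apart from the electron mass, every input (the stored-data total, the electrons-per-bit figure) is an assumption of mine, not a figure from Cass or Seitz.

```python
# Back-of-envelope sketch of a Cass-style "weight of the internet" estimate.
# All inputs except the electron mass are illustrative assumptions, not
# Cass's actual figures.

ELECTRON_MASS_KG = 9.109e-31  # CODATA value for the electron rest mass

def internet_weight_grams(stored_bytes, electrons_per_bit=40_000):
    """Mass of the electrons that physically represent the stored bits.

    electrons_per_bit roughly approximates the charge held in a
    DRAM-style storage cell; real devices vary enormously (assumed).
    """
    bits = stored_bytes * 8
    electrons = bits * electrons_per_bit
    return electrons * ELECTRON_MASS_KG * 1000  # kg -> grams

# Assume ~40 petabytes of data, in the spirit of Cass's estimate.
print(f"{internet_weight_grams(40e15):.2e} g")  # microgram territory
```

With these crude inputs the result lands in the micrograms, the same vanishing order of magnitude as Cass’s 0.2 millionths of an ounce; the point is less the exact figure than how absurdly small any such figure turns out to be.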
Whether the travel medium is the copper wires running out the back of your computer or the optical fibers lying on the ocean floor, data moves from server to server at speeds approaching the speed of light. In fact, any latency experienced by the end user has less to do with how far away the information originates than with the multitude of physical connections and complications that exist at either end. In some communication programs, it is even possible to see when your partner is composing a message, or to receive (instant) validation that he or she has read the one you just sent. Information and communication, even across great distances, are essentially instantaneous, rendering hopeless any attempt to discern the physical origin of content from travel times. Recent music software has shifted toward streaming content from the internet, erasing the boundary between local content and internet content and occasionally doing away with the local copy altogether. Just like fast food, access to digital information has become entirely relative: if it is not provided quickly enough, it becomes problematic, even though the
40. Seitz, Russell, “Weighing the Web,” Web log post, Adamant, 25 Oct. 2006.
41. Wray, Richard, “Internet Data Heads for 500bn Gigabytes,” The Guardian, 18 May 2009.
42. Grosz, “Cyberspace,” 109.
43. Ibid., 110. 44. Ibid., 109.
45. Ibid., 114.
46. Ibid., 111.
delayed time is still only a fraction of the time it would take to procure the content by traditional means. No matter how minuscule the actual internet may be, however, its breadth and scope seem infinitely large rather than infinitely small. An article published by the Guardian in 2009 estimated that, at almost 500 billion gigabytes (500 exabytes), the printed equivalent of the internet “would form a stack [of books] that would stretch from Earth to Pluto 10 times.” Not only that, but the internet is expanding at an incredible rate: the same article quoted the technology consultancy IDC as saying that this number would double within the next year and a half.41 With such wildly divergent considerations of size, it is no wonder that the digital content we create each and every day registers as entirely virtual, with no real dimensional quality. Grosz describes this shift in perception as “perhaps the most striking transformation effected by these technologies.”42 With no way to quantify digital content according to our traditional senses, and no conception of where that content originates, we are left with no choice but to relegate it to the “virtual.” Grosz goes on to say that “the virtual is the space of emergence of the new, the unthought, the unrealized.”43 Recognizing the effect of technological developments on society as a whole, she hints that the digital-as-virtual confusion is also rooted in novelty.44 She likewise paraphrases Marcos Novak in saying that “‘Cyberspace stands to thought as flight stands to crawling.’ In short, cyberspace is a mode of transcendence, the next quantum leap in the development of mind, as flying is a mode of corporeal transcendence of the bodily activity of walking.”45 The digital is the latest frontier, the newest form of experimentation and exploration, and with continuing developments pushing out new hardware and software every week, it has become remarkably good at maintaining its glamour. 
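The Guardian’s printed-equivalent comparison quoted above can itself be sanity-checked with rough arithmetic. In the sketch below every input is an illustrative assumption of mine (bytes of text per printed book, book thickness, a mean Earth-Pluto distance), not a figure from the article.

```python
# Rough check of the Guardian's "books stacked to Pluto" comparison.
# All inputs are illustrative assumptions, not the article's figures.

BYTES_PER_BOOK = 1e6       # ~1 MB of plain text per book (assumed)
BOOK_THICKNESS_M = 0.025   # ~2.5 cm per book (assumed)
EARTH_PLUTO_M = 5.9e12     # mean Earth-Pluto distance, ~39.5 AU

def pluto_trips(total_bytes):
    """How many Earth-Pluto spans a stack of the printed books covers."""
    books = total_bytes / BYTES_PER_BOOK
    stack_m = books * BOOK_THICKNESS_M
    return stack_m / EARTH_PLUTO_M

# 500 exabytes, the figure quoted from the Guardian article.
print(f"{pluto_trips(500e18):.1f} Earth-Pluto spans")
```

With these crude inputs the stack spans the Earth-Pluto distance a couple of times; the article’s factor of ten follows from different per-book assumptions. Either way, the comparison confirms the disorienting gap between the internet’s negligible mass and its astronomical printed extent.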
While both thought and computer virtuality may be connected in their mutual synthesis of “the new” and representation of the intangible, it is critical to realize that the latter does not occupy the same territory as the former. The first distinction, as outlined previously, is the finite nature of digital technology. In Grosz’s words, “the capacity for simulation clearly has sensory and corporeal limits that are rarely acknowledged, especially because the technology is commonly characterized as a mode of decorporealization and dematerialization.”46 The second distinction lies in the fact that digital space is not inherently generative. Whereas thoughts are born within our own heads, relative to our cumulative experiences and dependent upon more external and internal influences than we are able to measure, the output of computers relies entirely upon pre-programmed software. No matter how complex the algorithm or seemingly unpredictable the output, a computer’s generated results always originate in human input.
Jaron Lanier, one of the forefathers of virtual reality, admits an inherent danger in its development. “When my friends and I built the first virtual reality machines, the whole point was to make this world more creative, expressive, empathetic, and interesting. It was not to escape it.”47 Lanier’s manifesto argues that creativity is a quality that can’t (and shouldn’t) be approximated by computers. But with the computer holding “the promise of a perfect open-ended automatism, a nonstop variability untainted by false consciousness,”48 it seems that our misconception of digital as virtual is leading us to believe that the computer is a space of creative generation. Whereas the computer may provide a valuable tool in the creative process, it is critical that it be recognized as such - merely a tool - and never serve to supplant our own ability to understand value and assign judgment. “The attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates...Treating computers as intelligent, autonomous entities ends up standing the process of engineering on its head. We can’t afford to respect our own designs so much.”49
47. Lanier, Not a Gadget, 33.
48. Jones, Wes, “Big Forking Dilemma,” Harvard Design Magazine 32 (2010): 8.
49. Lanier, Not a Gadget, 36.
Discussion The issues raised in the preceding essays are by no means the only ones confronting designers today; nor are these issues solely driven by the causes brought forth. The world of design is affected by every aspect and tenet of society, no matter how large or small. The forces that have shaped architecture are many and varied; to try to organize them under a common or predictable structure is an exercise in futility. But just as the overall direction of architecture may not be entirely plottable, the influence of its constituent interests cannot be denied. Technology has always influenced architecture, and in recent decades that influence has accelerated dramatically. Speed does not always go hand in hand with design, however, and although the benefits of computer-aided design cannot be denied, the widespread use of this technology has also had several significant side effects: namely, skewed perceptions about the parties involved in the design process, as well as the process itself. First, the computer has taken on the role of designer, causing a subsequent misappropriation of responsibility in the design process. The computer, mistaken for an infinite virtual space and consequently held parallel with imagination, falsely offers autonomous creative potential. The computer’s process itself appears creative, alien as it is to the typical human way of thinking. Alongside this, the process remains cutting-edge and new, and thus it resists absorption into banality. The computer offers the ability to process massive amounts of data at super-human speeds, a characteristic that renders it “successful” where human capability falls short. The decisions made by the computer, products of an objective and rational process, are seen as autonomous and innocent. Where error has so long been the result of subjective decision-making, complete objectification seems to hold a promising advantage. 
Computers thus offer the seeds of error-free generation, and as such they have become more than tools. Computers have become significant players in the decision-making part of the process, at times absorbing much of the responsibility typically held by the designer. Second, the computer has caused the designer to subscribe to a machine ideal, trading skill for speed and glorifying quantity just as much as, if not more than, quality. Technological development has instilled in us an unavoidable desire for efficiency. Automation, standardization, and mass production have given us an insatiable thirst for speed, simultaneously defining all of our issues in terms of solvable processes. Computers are mistakenly considered “faster brains,” even though their method of processing is based purely on instructions and data, without any capacity to assign value or to judge by anything other than pattern recognition. But in a world of increasing complexity and an overabundance of data, speed has become necessary in order to cope. The next (mistaken) step is to
consider speed necessary to the creative process as well. The machine ideal born of the development and idolization of technology causes us to believe that our own methods are inferior, and that we “need” to reach “solutions” faster. Where the designer does still maintain a foothold in the design process, he or she is unduly burdened by the pressure of performing at the pace of the computer. Third, the computer has converted design into a process of optimization. Decisions are made just as the computer makes them: yes or no, black or white, one or zero. Architecture is seen as a “problem” to be “solved,” and design becomes a consideration of global optima and processed answers. This succumbing to computerized processes also does a disservice to our information, as the conversion of communication into digital formats always entails reduction: in order for the system to accommodate every bit of data, the data must be abstracted to some degree. Value is always lost, and even though we may be able to convert our architectural inquiry into an incredibly complex program that seems to embody all of the issues under consideration, we are still missing the point. Even if we can extract twenty results from our computerized process where once we could glean only two or three, the fact remains that the reduction of design to a series of objective questions and answers undermines the very human, subjective nature that bestows it with quality in the first place.
Bibliography Black, Max. “Models and Archetypes.” Models and Metaphors; Studies in Language and Philosophy. Ithaca, NY: Cornell UP, 1962. 219-43. Print. Brand, Stewart. How Buildings Learn: What Happens after They’re Built. New York, NY: Viking, 1994. Print. Bruner, Jerome S. On Knowing; Essays for the Left Hand. Cambridge: Belknap of Harvard UP, 1962. Print. Cass, Stephen. “How Much Does the Internet Weigh?” Discover Magazine. 29 May 2007. Web. 12 Nov. 2012. Castells, Manuel. The Rise of the Network Society. Malden, MA: Blackwell, 1996. Print. Collins, Harry. “Humans, Machines, and the Structure of Knowledge.” Stanford Humanities Review 4.2 (1995): 67-83. Print. Cole, Emily. The Grammar of Architecture. Boston: Bulfinch, 2002. Print. Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT, 1992. Print. Gadamer, Hans-Georg. Truth and Method. Trans. Joel Weinsheimer and Donald G. Marshall. 2nd ed. London: Continuum, 2004. Print. Glynn, Mary Ann, and Jane Webster. “The Adult Playfulness Scale: An Initial Assessment.” Psychological Reports 71.5 (1992): 83-103. Print. Grosz, Elizabeth. “Cyberspace, Virtuality, and the Real: Some Architectural Reflections.” Architecture from the Outside: Essays on Virtual and Real Space. Cambridge, MA: MIT, 2001. 109-117. Print. Jones, Wes. “Big Forking Dilemma.” Harvard Design Magazine 32 (2010): 8-17. Print. Kim, Mi Jeong, and Mary Lou Maher. “The Impact of Tangible User Interfaces on Designers’ Spatial Cognition.” Human-Computer Interaction 23.2 (2008): 101-37. Psychology and Behavioral Sciences Collection. Web. 17 Jan. 2013. King, Ross. Brunelleschi’s Dome: How a Renaissance Genius Reinvented Architecture. New York, NY: Penguin, 2001. Print.
Kliučininkas, Linas. Towards Sustainable Urban Transportation: Environmental Dimension. Frankfurt, Germany: Peter Lang, 2012. Print. Kolko, Jon. “Information Architecture and Design Strategy: The Importance of Synthesis during the Process of Design.” Web log post. Interaction Design and Design Synthesis. 2007. Web. 20 Jan. 2013. Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York: Alfred A. Knopf, 2010. Print. Licklider, J. C. R. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1.1 (1960): 4-11. Print. Maslow, A. H. “A Theory of Human Motivation.” Psychological Review 50.4 (1943): 370-96. Classics in the History of Psychology. Aug. 2000. Web. 19 Nov. 2012. McCullough, Malcolm. Abstracting Craft: The Practiced Digital Hand. Cambridge, MA: MIT, 1996. Print. Mistry, Pranav. “The thrilling potential of SixthSense technology.” Lecture. TEDIndia 2009. Mysore, India. TED. TED Conferences, LLC, Nov. 2009. Web. 9 Sept. 2012. Moffett, Marian, Michael W. Fazio, and Lawrence Wodehouse. Buildings across Time: An Introduction to World Architecture. 3rd ed. Boston: McGraw-Hill, 2004. Print. Pacey, Arnold. The Maze of Ingenuity: Ideas and Idealism in the Development of Technology. Cambridge, MA: MIT, 1992. Print. Pye, David. The Nature and Art of Workmanship. Bethel, CT: Cambium, 1995. Print. Sankar, Shyam. “The Rise of Human-computer Cooperation.” Lecture. TEDGlobal 2012. Edinburgh, Scotland. TED. TED Conferences, LLC, Sept. 2012. Web. 9 Sept. 2012. Seitz, Russell. “Weighing the Web.” Web log post. Adamant. 25 Oct. 2006. Web. 12 Nov. 2012. Sennett, Richard. The Craftsman. New Haven: Yale UP, 2008. Print.
Summers, David. “Facture.” Real Spaces: World Art History and the Rise of Western Modernism. London: Phaidon, 2003. 98-116. Print. Teal, Randall. “Developing a (Non-linear) Practice of Design Thinking.” International Journal of Art & Design Education 29.3 (2010): 294-302. Art & Architecture Complete. Web. 18 Jan. 2013. Temple, Steven. “A Bio-Experimental Model for Learning Creative Design Practices That Supports Transformative Development in Beginning Design Students.” ArchNet-IJAR 4.2 (2010): 116-138. Art & Architecture Complete. Web. 18 Jan. 2013. Vesely, Dalibor. Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production. Cambridge, MA: MIT, 2004. Print. Vygotsky, L. S. Mind in Society: The Development of Higher Psychological Processes. Ed. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman. Cambridge: Harvard UP, 1978. Print. Wray, Richard. “Internet Data Heads for 500bn Gigabytes.” The Guardian. Guardian News and Media, 18 May 2009. Web. 20 Nov. 2012.
Undergraduate research thesis at Mississippi State University.