
A Genetic Algorithm based Design Aid Applying Evolutionary Computing to the Design of Commercial Housing Layouts


Abstract

According to the CABE housing audit, the quality of housing being produced in the UK is currently less than acceptable. The complex designs of large housing developments, coupled with short timescales for design delivery, have a detrimental effect on design quality. This research investigates one possible way of counteracting the aforementioned effect. A thorough study of current and historical applications of computational design tools is carried out, looking in particular at the ways in which the technology is integrated into the design process. Following this, an initial model of a Genetic Algorithm based design aid is developed with the intention of increasing efficiency in the early stages of the design process. The program is subsequently tested in terms of its design optimisation abilities in its current state, as a way of proving the concept of the Genetic Algorithm. In addition, it is demonstrated to a panel of experts who give an opinion as to how the program might prove useful, and what potential modifications could make it more applicable to the field of mass housing design.


Acknowledgements

I would like to thank Professor Wassim Jabi for his invaluable support, without which the practical aspect of this research would have been entirely unachievable, and also Origin3 Studio Ltd for providing their expert opinion.


Contents

List of Illustrations
1.0 Introduction
2.0 Methodology
3.0 Literature Review
3.1 The Origins of Digital Design Aids
3.2 Strict Rule-Based Systems
3.3 Genetic Algorithms
3.4 The Application of Genetic Algorithms in Housing Design
4.0 Experiment and Analysis
4.1 Program Development
4.2 Case Study
5.0 Implementation in Practice
6.0 Conclusion
Bibliography
Appendix 1: Code for Random Placement of Rectangles within a Defined Area
Appendix 2: Survival Rate Experiment Data
Appendix 3: Mutation Rate Experiment Data
Appendix 4: Survival Rate Experiment 2 Data
Appendix 5: Expert Opinion Interview Transcription


List of Illustrations

Cover illustration and section dividers - Original images generated by the Author’s program
Fig. 1 - Statistics - CABE Housing Audit, p. 12
Fig. 2 - Statistics - CABE Housing Audit, p. 17
Fig. 3 - Statistics - RIBA, Improving Housing Quality, p. 8
Fig. 4 - Defining a Shape Grammar - Chouchoulas, Shape Evolution, p. 24
Fig. 5 - Shape Grammar - Kitchley and Srivathsan, Combining Shape Grammar and Genetic Algorithm..., p. 262
Fig. 6 - Genetic Algorithm flow chart - Chouchoulas, Shape Evolution, p. 22
Fig. 7 - Shape Grammar definition - Chouchoulas, Shape Evolution, p. 48
Fig. 8 - Shape Grammar definition - Chouchoulas, Shape Evolution, p. 48
Fig. 9 - Output Visualisation - Chouchoulas, Shape Evolution, p. 106
Fig. 10 - Results Sample - Narahara and Terzidis, Multiple-constraint Genetic Algorithm in Housing Design, p. 422
Fig. 11 - Output of Author’s program
Fig. 12 - Output of Author’s program
Fig. 13 - Output of Author’s program
Fig. 14 - Output of ‘House Packing’ script
Fig. 15 - Output of ‘House Packing’ script
Fig. 16 - Diagram adapted by Author, original image from Daniel Shiffman, The Nature of Code, p. 428
Fig. 17 - Screenshots of House Packing script being used in AutoDesk® 3DsMAX®
Fig. 18 - Housing Layout - By kind permission of Origin3 Studio Ltd
Fig. 19 - Housing Layout - By kind permission of Origin3 Studio Ltd
Fig. 20 - Output of ‘House Packing’ script
Fig. 21 - Original Graph by Author
Fig. 22 - Original Graph by Author
Fig. 23 - Original Graph by Author
Fig. 24 - Voronoi Diagram - http://blog.kleinproject.org/wp-content/uploads/2012/05/Voronoi.png - Accessed 21/01/13
Fig. 25 - Voronoi Diagram - http://lunarbovine.com/blog/wp-content/uploads/2011/07/VoronoiCanada.png - Accessed 21/01/13


Introduction


We are at a critical point in the development of the new British housing stock. The government has set a target to produce three million new homes by 2020. To meet this target, the housing industry will see a boom on a par with that of the inter-war period. “The suburb of 2010-20 would become as prevalent in the housing stock as the 1930s semi-detached. These new homes would have an equally dramatic effect on the psyche of a whole generation and the appearance of the entire nation.”1 It is therefore of the utmost importance that the quality of the British housing stock is addressed. “The housing produced in the first few years of this new century is simply not up to the standard which the government is demanding and which customers have a right to expect… The gap between aspiration and delivery needs to close as a matter of urgency”2 – Dr. Richard Simmons (Chief Executive, CABE).

In 2006, the CABE Housing Audit was carried out across the UK. Its aim was to give an accurate account of the state of the British housing stock. It rated schemes against the Building for Life criteria and labelled them as poor, average, good or very good. Across England, “only 18 percent – fewer than one in five – of developments could be classed as ‘good’ or ‘very good’”3, with the quality of 29% of developments being so poor “they simply should not have been given planning consent”4 (Figs. 1 & 2).

Common themes were identified across many of the poorer schemes. It was found that they failed to create a sense of place by not responding to their context or developing any kind of individual character. The layouts were often found to be poorly structured, with a lack of definition between the public and private realms. In addition, dominant roads accompanied by poorly integrated car parking frequently led to layouts dictated by highways design rather than by the buildings themselves.
The downfall of the modern housing development can be attributed in part to the mindset of the developer: “reward systems within the major volume house builders rarely recognise the quality of what is created by each regional team. Rather, performance is historically appraised on the basis of margins and completions”5. As a result, design quality is often sacrificed to boost performance. The CABE housing audit points out that “residents living in high-quality schemes value the benefits that good design can bring”6. It therefore seems absurd, in a tough economic climate where the “mortgage market is now only accessible to those on higher incomes and in more stable occupations”7, that many house builders are still ignoring the potential for good quality design to increase the sales of their homes. In addition, surveys have shown that “only a minority of active home buyers would consider buying a new home”8, referring to new homes as “featureless boxes with insufficient living space and small rooms”9. These assertions are well grounded: in 1996 “new homes in England, with a floor area averaging 76m2, were the smallest in Western Europe, some 20m2 (or 21%) smaller than the average”10. This is representative of a general trend of declining size within the UK housing stock (Fig. 3). It is therefore evident that a drastic rethink is required in order to raise the quality of our housing stock.

1. RIBA, Improving Housing Quality: Unlocking the Market (London: RIBA, 2009), p. 4.
2. CABE, Housing Audit: Assessing the design quality of new housing in the East Midlands, West Midlands and the South West (London: CABE, 2006), p. 3.
3. Ibid., p. 4.
4. Ibid., p. 4.
5. Ibid., p. 58.
6. Ibid., p. 6.

“When they had that lovely site to play with, there on the hill top: If they had put a tall column in the middle of the small market place, and run three parts of a circle of arcade around the pleasant space, where people could stroll or sit, and with handsome houses behind! If they had made big, substantial houses, in apartments of five or six rooms, and with handsome entrances… You may say the working man would not have accepted such a form of life: the Englishman’s home is his castle, etc, etc. – ‘my own little home.’ But if you can hear every word the next door people say, there’s not much castle… And if your one desire is to get out of your ‘castle’ and your ‘own little house!’ – well, there’s not much to be said for it.”11 - D.H. Lawrence

7. RIBA, Improving Housing Quality: Unlocking the Market (London: RIBA, 2009), p. 20.
8. Ibid., p. 1.
9. Ibid., p. 1.
10. Ibid., p. 7.
11. D.H. Lawrence, Late Essays and Articles (Cambridge: Cambridge University Press, 2004), p. 56.



Fig. 1 - Results from the CABE housing audit illustrating the abundance of average or poor housing developments

Fig. 2 - Distribution of housing quality between different built environments. Both rural and suburban are lacking any ‘very good’ developments.

Fig. 3 - Graph demonstrating the declining average size of the UK housing stock by age.



Both the quality of the product and the design of the layouts need to be reconsidered, with design principles taking a more fundamental role in the decision-making process. Currently, volume house builders deliver their product from a library of standard housetypes which are subtly tweaked depending on the development, but fundamentally the same houses are being distributed all over the country. The reluctance to change these housetypes comes from the fact that they have been cost-engineered down to the very last detail, with any changes having a detrimental effect on profit margins. A development comprised of individually designed housetypes will never be able to match the cost efficiency of a more standardised one, and as such there will always be an element of standardisation within volume housebuilding. The quality of these standard products can be improved by introducing more generous space standards and forcing developers to comply with more rigorous design criteria. This research, however, will focus on improving the quality of the housing layout rather than the quality of the houses themselves.

The basic structure of a housing layout is defined very early on in the design process; the majority of the time is then spent accommodating developer requirements and council space standards. These vary, but are almost always a set of clearly defined rules, such as minimum distances between houses or number of parking spaces per home. The requirements are simple enough to implement individually, but when combined they present the designer with an extremely complex puzzle. It is often the case that the designer will fail to meet all of the requirements, resulting in an excessive number of feedback loops in which faults within the design are slowly ironed out. This part of the design process is the most time consuming, and as such leaves the designer with little opportunity to refine the quality of the spaces being created.

The result is a development with which the developers and council are happy, as it ticks all of their boxes, but which often lacks good quality urban design. Blame cannot solely be placed on the house builders; the planners must also accept their responsibility to ensure the quality of these schemes, and there needs to be “more clarity over priorities, and planners need to be more empowered to block or shape developments around design criteria”12. Design codes for major developments go some way to achieving this, but in reality the design code will only be strictly enforced in specific portions of the site (often those which are most visible), and the rest of the development will consist of the standard ‘off the shelf’ product.

The purpose of this research is to develop a tool which will enable designers to contribute more to the qualitative value of the scheme. This will be achieved by increasing efficiency in the early stages of the design process, which often becomes time consuming due to an excessive number of feedback loops regarding compliance issues.

12. RIBA, p. 1.



Methodology


The research methodology will involve thorough historical research into the development of genetic algorithms and their current applications within design. As the author has no prior experience of programming, it will be important to develop an understanding of the language and the way in which programs are written and structured. This will be achieved by writing and testing small ‘sketch’ programs in a development environment called Processing.

The next stage of research will follow a Constructive Research Methodology. Constructive research requires the “building of an artefact (practical, theoretical or both) that solves a domain specific problem in order to create knowledge about how the problem can be solved (or understood, explained or modelled) in principle”13. In this case the artefact, or ‘Construct’ as it is otherwise known, will be a prototype program which forms a research test bed for the application of the genetic algorithm. The basic structure of a genetic algorithm will be taken from existing code and adapted to suit the purposes of this research. This code will then be combined with a modified packing algorithm developed by Professor Wassim Jabi14 to produce a prototype House Packing program. The prototype will be written in MaxScript and subsequently evaluated in terms of its usefulness as a design aid.

The evaluation process will involve demonstrating the program to a professional within the housing industry, who will give one opinion of its usefulness based upon how the program currently operates, and another based upon the potential of the program once it has been fully developed. It will also be interesting to find out whether the program is viewed as enabling the designer to apply more time to the qualitative values of the design, as is the aim of this research, or whether it is simply viewed as a way of speeding up the process, enabling designers to take on more projects and ultimately generate more income.
The research will then expand on its findings using Normative Theory to present an opinion on the potential for this program to be applied to real-life scenarios in practice, and how further development could expand its potential application.

13. Gordana Dodig Crnkovic, ‘Constructive Research and Info-computational Knowledge Generation’, in Model-Based Reasoning in Science and Technology, ed. by Lorenzo Magnani and others (Berlin: Springer, 2010), p. 362.
14. Wassim Jabi, Parametric Design in Architecture (Laurence King Publishing: forthcoming in 2013).



Literature Review


3.1 The Origins of Digital Design Aids

There has been extensive research into the field of Computer Aided Architectural Design (CAAD). However, there is a shared view among those involved in the development of this field that the vast majority of architectural practices currently use ‘CAD’ packages as little more than digital drawing boards. The packages are used as a means of representing an idea, and whilst they may increase efficiency in the production of drawings, they are not actively contributing to the design process. Indeed, John Frazer remarks that in the early days, “The dream of CAD (computer-aided design) was sold cheap as COD (computer-obstructed design)”15. Of course there are obvious exceptions to this rule, with the likes of Gehry, Hadid and Foster all employing an array of computational techniques in their design processes.

A certain stigma surrounds the application of generative computation within the architectural design process. Christopher Alexander stated that “A digital computer is, essentially, the same as a huge army of clerks, equipped with rule books, pencil and paper, all stupid and entirely without initiative, but able to follow exactly millions of precisely defined operations… In asking how the computer might be applied to architectural design, we must, therefore, ask ourselves what problems we know of in design that could be solved by such an army of clerks… At the moment, there are very few such problems.”16 This may have been true at the time the statement was made, but now such ‘problems’ are directly manifested in the process of Evolutionary Design.

Ulrich Flemming introduces some of the common fallacies that are often applied when computer-aided architectural design is criticised from a philosophical perspective. Of particular interest is a fallacy regarding the capabilities of computer-aided architectural design. He states that many people claim that computers are incapable of tackling design problems because design is subjective, or full of ill-defined problems, and computers are simply not capable of tackling such unquantifiable problems. He views this argument as fallacious because “it treats design as a monolithic, indivisible process that can only be supported in toto or not at all”17. Flemming argues that design is not, nor has ever been, a monolithic process, because in reality building design consists of a myriad of decisions made by many individuals from different professions. These individuals focus on small aspects of the design and use a variety of tools to get the job done. As such, the monolith of design is in fact shattered into a number of fragments. He goes on to state that the computer is just another tool which has been added to the architect’s toolbox, and that “if we view computers as tools rather than autonomous agents, many of the problems raised in philosophical critiques of computer-aided architectural design will simply go away.”18

There is also the fear that advancing the field of computer-aided architectural design further marginalises the role of the architect in the design process. However, this is based upon the assumption that developments in this field seek to produce some kind of autonomous, artificially intelligent computerised architect. If we recognise instead that we should be employing computers in the design process to carry out tasks quicker than we are able to, or tasks that we are completely incapable of performing, then it makes no sense to develop a kind of artificial intelligence based upon our own image. Therefore the goal of computer-aided design should be to “create programs that work very differently from people because if they work exactly like people in a specific capacity, they will not be able to do better than people in this capacity.”19

15. John Frazer, ‘Computing without Computers’, Architectural Design Special Issue: The 1970s is Here and Now, Issue 75 (2005), p. 43.
16. Christopher Alexander, ‘The Question of Computers in Design’, Landscape, Autumn 1967, pp. 8-12.
17. Ulrich Flemming and Carnegie Mellon University Engineering Design Research Center, ‘Get with the Program: Common Fallacies in Critiques of Computer-Aided Architectural Design’ (Carnegie Mellon University School of Architecture, Paper 74: 1994), p. 1.

18. Flemming, p. 3.
19. Ibid., p. 4.



3.2 Strict Rule-Based Systems

In the early days of CAAD, experiments were conducted which aimed to enable the computer to demonstrate intelligent thought through the use of language. The language did not have to be a spoken language, but could be a symbolic language such as a shape grammar. The shape grammar formalism was introduced in 1971 by George Stiny and James Gips20. A shape grammar, in its most simple definition, consists of a vocabulary of shapes, an initial shape and a set of substitution rules to be applied to the initial shape (Fig. 4). Essentially the computer is instructed to find the initial shape “in any of its isometric transformations, i.e. its scaled, translated, rotated, and reflected forms, or combinations of these”21, and replace it with a different shape from the shape vocabulary (with the same transformation applied to the replacement shape). By applying these rules recursively, the resultant designs are said to belong to a language.

‘Symbolic AI’ was created by programming computers with complex shape grammars. This enabled them to produce mock Palladian villas and fake Frank Lloyd Wright Prairie houses. The designs were so convincing that they could easily have been passed off as genuine. The key to their success was that they completely disregarded any “pragmatic or semantic meaning”22 and simply replicated a style. However, this provides more of a proof of concept than an actually useful tool. In the new generation of AI, the paradigm of intelligent behaviour has shifted from language to biological life: primarily neural networks and genetic algorithms23.

Kitchley and Srivathsan propose a method for replicating the organic structure of fishing settlements through the use of shape grammar and genetic algorithm. Shape grammar is used to extract the style and aspects of the specified structure, and a genetic algorithm is then applied in an evaluative role to seek out the optimal solution. It is interesting to see that Kitchley views this as a complete design solution: a fully automated design process. The shape grammar proposed aims to replicate the “distinct pattern of hierarchy of spaces”24 found within the settlements. It is made up of a number of rules which control the placement of the shape units, the paths and the communal spaces. At the same time it controls their

20. George Stiny and James Gips, ‘Shape Grammars and the Generative Specification of Painting and Sculpture’, in Proceedings of IFIP Congress 71 (Amsterdam: 1971), pp. 1460-1465.
21. Orestes Chouchoulas, Shape Evolution: An Algorithmic Method for Conceptual Architectural Design Combining Shape Grammars and Genetic Algorithms (published doctoral thesis, University of Bath, 2003), p. 25.
22. Tomor Elezkurtaj and Georg Franck, ‘Genetic Algorithms in Support of Creative Architectural Design’, in Architectural Computing from Turing to 2000 (eCAADe Conference Proceedings, Liverpool: 1999), p. 646.
23. Elezkurtaj, p. 646.
24. J. Jinu Louishidha Kitchley and A. Srivathsan, ‘Combining Shape Grammar and Genetic Algorithm for Developing A Housing Layout: Capturing the Complexity of an Organic Fishing Settlement’, in CAADRIA 2005 (CAADRIA Conference Proceedings, New Delhi: 2005), Vol. 2, p. 259.
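The recursive substitution at the heart of a shape grammar can be illustrated with a simple text analogue. The Python sketch below is purely illustrative (real shape grammars operate on geometry under isometric transformations, and the symbols and rules here are hypothetical); it shows how repeatedly replacing matched elements generates a whole language of configurations.

```python
# A minimal string-rewriting analogue of a shape grammar: each rule maps a
# symbol in the current "design" to a replacement, applied recursively.
# The symbols and rules here are hypothetical, purely to show the mechanism.

def apply_rules(design, rules, generations):
    """Rewrite every symbol that has a matching rule, once per generation."""
    for _ in range(generations):
        design = "".join(rules.get(symbol, symbol) for symbol in design)
    return design

# Hypothetical rule set: 'A' (a notional house unit) spawns a path 'p' flanked
# by two further units, while existing paths 'p' double in length.
rules = {"A": "ApA", "p": "pp"}

print(apply_rules("A", rules, 1))  # ApA
print(apply_rules("A", rules, 2))  # ApAppApA
```

Each pass applies every rule simultaneously, so a single seed symbol quickly produces a family of related configurations, which is exactly the combinatorial growth the grammar-based systems above exploit.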



Fig. 4 - A simple shape grammar rule and some of its resulting configurations

Fig. 5 - A more explicit shape grammar as defined by Kitchley and Srivathsan



relationship to the beach (an important feature of the fishing settlement). The number of different elements within the grammar would suggest the need for a very large number of rules. However, Kitchley’s grammar is defined by just 46 rules (Fig. 5). I would question how a shape grammar of so few rules could enable the program to capture the “physical and socio-cultural requirements”25 as Kitchley suggests it can. It would seem that perhaps the complexity of these settlements has been underestimated or over-simplified in the analysis and design of the shape grammar, and as such it fails to achieve one of its goals.

Gabriela Celani proposes a system based around the use of shape grammars as a generative tool. Her work “aims at developing a computer implementation for designing housing developments with a context-based approach”26. This context-based or ‘outside-in’ approach will consider “exterior forces from the built surroundings, but also from the natural site, as the main influences for developing the buildings’ siting and layout”27. Celani recognises the computer’s efficacy in performing repetitive, quantifiable tasks and develops a set of tools which allow the computer to carry out tasks that would otherwise be very time consuming if carried out by hand. One of these is the land parcelling tool, which enables users to input their desired land parcel size; the computer then divides the site into equally portioned pieces. This would prove very useful for irregularly shaped sites. Following the land parcelling, a shape grammar is implemented to control the distribution of houses into the newly generated plots. Celani’s shape grammar focuses upon the neighbourhood relationships between specified housing units. With careful consideration, a set of rules such as this would allow the user much greater control over the final aesthetic qualities of the layout.

It would seem that the end goal for this piece of research is to establish a fully automated housing design process, right down to the design of internal floor plans. Whilst this could allow for a much greater variety of housetypes to be implemented in a development, it is a little unrealistic, as there is no way that a development of unique housetypes will be able to match the cost efficiency of a development of more standardised units. It would seem therefore that perhaps the generative processes should stop at the level of house placement, and the focus of the designer’s efforts should then be to improve the standardised product.

25. Kitchley, p. 265.
26. Gabriela Celani and others, ‘Generative Design Systems for Housing: An Outside-in Approach’, in Digital Design: The Quest for New Paradigms (eCAADe Conference Proceedings, Lisbon: 2005), p. 503.
27. Celani, p. 503.
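The arithmetic core of a land parcelling tool of the kind Celani describes is simply finding the whole number of equal parcels closest to a desired parcel size. The Python sketch below illustrates that idea only; the function name and the figures in the example are hypothetical and are not taken from Celani's implementation, which subdivides actual site geometry.

```python
def parcel_site(site_area, target_parcel_area):
    """Divide a site into the closest whole number of equal parcels.

    Returns (number_of_parcels, actual_parcel_area). A real implementation
    would subdivide geometry; this sketch only handles the arithmetic that
    keeps every parcel equal on an irregular total area.
    """
    n = max(1, round(site_area / target_parcel_area))
    return n, site_area / n

# Hypothetical site: 4700 m² with a desired parcel of about 450 m².
n, area = parcel_site(site_area=4700.0, target_parcel_area=450.0)
# 4700 / 450 ≈ 10.4, so the site divides into 10 equal parcels of 470 m².
```

Because the parcel count is rounded to a whole number before dividing, every parcel comes out exactly equal, which is what makes the approach attractive for irregularly shaped sites.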



3.3 Genetic Algorithms

John Holland introduced Genetic Algorithms in the 1970s as a way of formally understanding biological adaptation in nature28. Genetic Algorithms today are used to find the optimal solution to a problem which has an almost infinite number of possible solutions; finding such a solution could take an infinite amount of time using a “brute force”29 algorithm (testing all possible solutions). They operate by analysing solutions and assigning each a score (a fitness function) according to how well it addresses the problem, thus narrowing down the possible solutions. The characteristics which make a solution individual are, just as in nature, known as phenotypes (the coded data structures of which are known as genotypes). The genetic algorithm then further mimics the biological process by effectively breeding phenotypes with one another: phenotypes with higher fitness scores are selected as ‘parents’, crossover and mutation operators are applied to their coding, and the resultant ‘offspring’ inherit the properties of their ‘parents’ that earned them high fitness scores.

The crossover operator essentially splices together the genotypes of two ‘parent’ solutions to produce offspring that combine the characteristics of both parents. It is also possible to perform asexual crossover, in which one individual cuts its chromosome and recombines it differently. This can be used to “help express non-inherited phenomena, even though the probability for mutation is generally kept low to guarantee hereditary continuity.”30

The mutation operator randomly changes parts of the offspring’s coding to ensure that each new phase of the evolution is slightly different from the last. The mutation rate is the rate at which parts of the offspring’s coding are randomly changed. “Selecting an appropriate rate is critical for successful optimization”31: a rate too low will prevent sufficient mutation for the optimisation process to diversify; a rate too high will destroy the accumulated fitness. “Interestingly, Mother Nature teaches us to select a rate of the order of 1:2.000. With a mutation rate of this order, optimization can be expected to be successful.” The process then repeats until the end conditions are met, i.e. the final output is of sufficient fitness to represent a viable solution to the problem at hand (Fig. 6).

28. John Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence (University of Michigan Press, Michigan: 1975).
29. Daniel Shiffman, The Nature of Code: Simulating Natural Systems with Processing (forthcoming 2013), p. 414.
30. Christian Derix, ‘Genetically Modified Spaces’, in Space Craft: Developments in Architectural Computing, ed. by David Littlefield (RIBA Publishing, London: 2008), pp. 46-53 (p. 47).
31. Elezkurtaj, p. 649.
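The cycle described above (score, select, splice, mutate, repeat) can be condensed into a short sketch. The Python below is a generic illustration of the mechanism, not the author's program, which used Processing and MaxScript: genotypes are simple bit lists and the toy fitness function just counts ones, so the population visibly converges on the all-ones optimum.

```python
import random

random.seed(1)  # fixed seed so the illustrative run is repeatable

GENOME_LENGTH = 20
POPULATION = 30
MUTATION_RATE = 0.01  # probability each gene flips; kept low, as the text notes

def fitness(genome):
    return sum(genome)  # toy objective: maximise the number of 1s

def select(population):
    """Tournament selection: the fitter of two random individuals is a parent."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_1, parent_2):
    """Single-point crossover splices the parents' genotypes together."""
    point = random.randrange(1, GENOME_LENGTH)
    return parent_1[:point] + parent_2[point:]

def mutate(genome):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Random initial population, then repeated select -> crossover -> mutate cycles.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(60):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION)]

best = max(population, key=fitness)
```

In a design application the bit list would instead encode layout parameters and the fitness function would score compliance, but the loop itself is unchanged; this separation of genotype, fitness and operators is what makes the technique so general.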



As already described, Genetic Algorithms must measure the fitness of a given solution in order to choose the best parent genotype. However, one of the main problems facing the application of Genetic Algorithms in architectural design is the difficulty in measuring the fitness of a given solution. The functionality of an architectural design can be assessed on so many different levels, and “If function resists being exhaustively described, the fitness of a design variant resists being measured in a reliable and reproducible way”32. The solution to this is to allow the algorithm to assess the design based upon its compliance with specific ‘hard’ criteria. This further reinforces the fact that these processes are not a kind of artificial intelligence capable of producing a final design, but instead a sophisticated assessment tool which should be used to help the architect make informed decisions.

32. Elezkurtaj, p. 649.
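Assessing fitness against 'hard' criteria can be made concrete with a small sketch. The Python below scores a candidate layout as the fraction of rules it satisfies; the specific rules and values (a 21 m spacing, 1.5 parking spaces per home) are hypothetical stand-ins for the kinds of developer and council requirements mentioned in the introduction, not criteria drawn from the literature.

```python
import math

# Fitness as compliance with 'hard' criteria: each rule is a measurable
# pass/fail check, and the score is the fraction of checks satisfied.
# The rule values below (21 m spacing, 1.5 spaces per home) are hypothetical.

def spacing_ok(houses, min_distance=21.0):
    """Every pair of houses (given as (x, y) centres) keeps a minimum distance."""
    return all(math.dist(a, b) >= min_distance
               for i, a in enumerate(houses) for b in houses[i + 1:])

def parking_ok(parking_spaces, homes, ratio=1.5):
    """Enough parking spaces are provided per home."""
    return parking_spaces >= ratio * homes

def fitness(houses, parking_spaces):
    checks = [spacing_ok(houses), parking_ok(parking_spaces, len(houses))]
    return sum(checks) / len(checks)

layout = [(0.0, 0.0), (25.0, 0.0), (0.0, 30.0)]
fitness(layout, parking_spaces=5)   # 1.0: both criteria met
fitness(layout, parking_spaces=2)   # 0.5: the parking rule fails
```

Because every criterion is a measurable check, the score is reproducible, which is precisely the property that subjective judgements of architectural quality lack.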



Fig. 6 - Flow chart illustrating the operation of a Genetic Algorithm



One such tool is proposed by Philippe Marin and others in ‘A Genetic Algorithm for use in Creative Design Processes’. This piece of research sought to create “digital tools inspired by genetic mechanisms capable of assisting and supporting design projects”33. They were looking in particular at the initial design phases, reinforcing their opinion that genetic processes should be applied as a design tool rather than as a finite solution. Their program was used to generate a number of unexpected forms based upon a “solar passive fitness condition”34; in other words, they were modifying an initial shape, or pattern, in accordance with its amount of solar exposure.

The experiment was carried out in 3DS Max® software, using MaxScript for scripting and encoding the genetic algorithm. ‘Shape exploration’ was carried out by applying five modifiers to an initial pattern, resulting in an array of unexpected configurations. These were bending, tapering, skewing, twisting and stretching. “The shape explorer takes the initial model as an input that triggers shape exploration, and automatically derives various shapes by simulating natural evolution through crossover and the mutation of genomes”35 – the basic operational method of a genetic algorithm. Each individual offspring of the original pattern was assessed upon its heating needs, with those with lower heating needs achieving a higher score. In their model, the user was able to exercise selection preferences by having the computer show the best solutions found within a few generations, and then continuing the process upon their selections. The end result may therefore not be the most technically efficient solution, but it enables a certain degree of compromise with the more subjective preferences of the user. This compromise between user- and program-based design input is explored in further detail in the following section.

33. Philippe Marin and others, ‘A Genetic Algorithm for use in Creative Design Processes’ (ACADIA Conference Proceedings, Minneapolis: 2008), p. 332.
34. Ibid., p. 334.
35. Ibid., p. 338.
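A genome for this kind of shape exploration might simply be the list of modifier amounts. The Python sketch below shows one plausible encoding with a uniform crossover over it; the modifier names, the [-1, 1] ranges and the per-gene inheritance scheme are all hypothetical assumptions for illustration, since Marin's study encoded actual 3ds Max modifiers via MaxScript.

```python
import random

random.seed(2)  # fixed seed so the illustrative run is repeatable

# Hypothetical genome for shape exploration: one normalised amount per modifier.
# The five names echo the modifiers listed in the text; the ranges are invented.
MODIFIERS = ["bend", "taper", "skew", "twist", "stretch"]

def random_genome():
    """A random individual: each modifier gets an amount in [-1, 1]."""
    return {m: round(random.uniform(-1.0, 1.0), 3) for m in MODIFIERS}

def crossover(parent_a, parent_b):
    """Uniform crossover: each modifier amount is inherited from one parent."""
    return {m: random.choice([parent_a[m], parent_b[m]]) for m in MODIFIERS}

child = crossover(random_genome(), random_genome())
```

With this representation the fitness function (here, simulated heating need) only ever sees five numbers, so the evolutionary machinery stays independent of the modelling software that applies the modifiers.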



3.1 The application of Genetic Algorithm in Housing Design Carlo Coppola and Alessandro Ceso believe that computer aided design should be integrated throughout the design process. They believe that it should be capable of “both the generation of the formal idea and the simultaneous study of technical feasibility“36 which will lead to “the definition of the design in its totality.”37 The model proposed seems to be a complex model of AI; capable of informing many of the designers’ decisions in regards to materials, shape, building method and even the functional or spatial relationship of parts of the design. The idea behind it is to present the designer with as much information as possible, allowing them to quickly make better decisions, based upon actual qualitative evidence. They go on to state that “none of this can replace the designer’s creativity”38 but it seems that there is a fine line to tread between providing useful feedback and potentially stifling the creativity of the designer. Design is an evolutionary process in which it is often more productive in the initial stages if a certain amount of freedom is retained by withholding certain ‘realities’ from the designer. Tomor Elezkurtaj and Georg Franck propose an interactive design system for architectural floor plans using “a combination of an evolutionary strategy (ES) and a genetic algorithm (GA)”39. Essentially the problem to be solved consists of fitting a number of rooms into an outline, whilst at the same time adhering to the functional requirements of the rooms. The fitness function of the proposed solution is expressed in terms of the proportions of the rooms and the ‘neighbourhood relations’ between them. One element of the design process, the proximal arrangement of spaces is a relatively simple task. The computer simply has to find the solution with the least overlap (between rooms) and overflow (of boundary). 
A Genetic Algorithm is then employed to change the functions of the rooms in order to achieve the optimal neighbourhood relationships. This system gives the majority of design control to the computer, as the final output will be so highly optimised that it will be difficult for the designer to make changes without reducing the efficiency of the design. The significance of this study lies in the way in which the computational design process is viewed as a complete solution, nullifying the role of the human designer.

36 Carlo Coppola and Alessandro Ceso, ‘Computer Aided Design and Artificial Intelligence in Urban and Architectural Design’, in Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process (eCAADe Conference Proceedings, Weimar: 2000), p. 301
37 Ibid., p. 301
38 Ibid., p. 302
39 Tomor Elezkurtaj and Georg Franck, ‘Geometry and Topology: A User-Interface to Artificial Evolution in Architectural Design’, in Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process (eCAADe Conference Proceedings, Weimar: 2000), p. 309
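For axis-aligned rectangles, the overlap-and-overflow measure described above can be sketched as follows. This is a purely illustrative Python fragment, not the authors' implementation; the rectangle representation and function names are assumptions of this sketch:

```python
def overlap_area(a, b):
    # a, b: (x, y, w, h) axis-aligned rectangles.
    # Overlap is the area of their intersection, 0 if they are disjoint.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

def overflow_area(room, boundary):
    # Area of the room lying outside the boundary rectangle:
    # the room's area minus the part of it inside the boundary.
    x, y, w, h = room
    return w * h - overlap_area(room, boundary)

def misfit(rooms, boundary):
    # A candidate plan is better the smaller this score: total
    # pairwise room overlap plus total boundary overflow.
    score = sum(overflow_area(r, boundary) for r in rooms)
    for i, a in enumerate(rooms):
        for b in rooms[i + 1:]:
            score += overlap_area(a, b)
    return score
```

A genetic algorithm would then favour candidate plans with lower misfit scores, driving overlap and overflow towards zero.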



Orestes Chouchoulas presents a system which combines shape grammars and genetic algorithms to produce concept-level designs for apartment buildings. Responding to criticism that the output of such programs is devoid of semantic meaning, Chouchoulas rightly points out the importance of considering this kind of programming as a tool, not some kind of mechanical architect. He states that “one must not confuse the method for the designer. Computers, usually the means by which algorithmic processes are employed, are merely tools”40, and indeed the choice of tools and methods is highly semantic in itself and has great impact on the finished design. In Chouchoulas’ eyes, algorithmic design can be seen as a methodology which relies on “quantifiable programmatic constraints to define form”41. He views his thesis, Shape Evolution, as a tool to be used in the early stages of design, not in proposing a final design solution. He states that “the process of design is about constraining choice and revealing the ultimate solutions by chiselling away unwanted options”42. Ultimately the tool aims to improve the efficiency of the highly complex, and often inefficient, architectural design process. A system such as this would drastically reduce the number of feedback loops between designer and client, and as such greatly improve the efficiency of the process. This is achieved by allowing the computer to solve the basic quantifiable elements of the design at an early stage. The increased efficiency of the design process means that “the tool could allow its users more time for creative exploration”43. Chouchoulas has an interesting way of viewing design which helps to explain the way in which genetic algorithms work. He views the design process as a search operation in which the designer is searching through a so-called “problem space”44.
Within this problem space there are an infinite number of potential solutions to the design problem, and the role of the genetic algorithm is to act as a kind of search engine, identifying the optimal solution. This problem space is further explained by Robert Woodbury, who states that there are two different design spaces:

- Implicit Design Space: the space which contains all of the possible solutions to a given problem (feasible, non-feasible, complete and incomplete).
- Explicit Design Space: a space within the Implicit Design Space consisting of the different paths of exploration the designer has already followed. “It gains its structure through the exploration behaviour of designers; especially through choices of strategies reflecting the limits of either computation, designer knowledge; or both.”45

40 Orestes Chouchoulas, Shape Evolution: An Algorithmic Method for Conceptual Architectural Design Combining Shape Grammars and Genetic Algorithms (published doctoral thesis, University of Bath, 2003), p. 8
41 Ibid., p. 4
42 Ibid., p. 3
43 Ibid., p. 10
44 Ibid., p. 6

Fig. 7 - The explicit shape grammar used in ‘Shape Evolution’. The white boxes represent the apartments, the blue boxes represent the balconies and the turquoise frames represent the circulation space - one rule of which is that it must be contiguous.

Fig. 8 - One potential configuration of the shape grammar

Fig. 9 - 3D visualisation of a solution generated using the shape grammar

Narahara and Terzidis explore the potential application of a Genetic Algorithm in the design of a housing project. The idea is that, with the increasing complexity of the formal manifestation and functional requirements of modern buildings, “Genetic algorithms offer an effective solution to the problem allowing multiple constraints to compete as the system evolves toward an optimum configuration that fulfils those constraints.”46 The experiment was the design of a 200-unit residential complex on a corner of two streets in an urban context. The genetic algorithm was capable of producing a solution based around multiple constraints. Three different types of housing units were included in the complex, the value of each being assessed individually based upon its solar exposure, view, and construction economic factors. The overall goal was to achieve all of the above for the lowest cost. The end result was a large tower block, with each floor having its own optimised façade configuration (Fig. 10). The focus of their research is high-rise, high-density housing developments, which they feel hold great potential for improvement. The argument is that, due to the comparably limited computational ability of the human designer, the tendency is to resolve the complexities of the design problem within one floor and then simply stack these floors up to reach the required number of units47. The proposed system would be capable of realising designs with much greater variation from floor to floor through its ability to approach the design at the scale of the entire building. This is an interesting concept to apply to the design of a low-density, low-rise development.
Being able to simultaneously evaluate the implications of a particular design decision upon every other house in the development is something that a human designer would never be capable of. The human process is more sequential: working from one side to the other, filling in the spaces and then going back and tweaking afterward. Being able to calculate this all at once would save an enormous amount of time. The authors place great emphasis on the potential for computational systems to “surpass a designer’s capabilities”48. Although true in terms of the ability to simultaneously handle and compute large quantities of data, there is still something to be said for the creativity of the human mind and its ability to embed deeper semantic meaning into a design. It would seem

45 Robert Woodbury, ‘Elements of Parametric Design’ (Routledge, Oxon: 2010), p. 277
46 Taro Narahara and Kostas Terzidis, ‘Multiple-constraint Genetic Algorithm in Housing Design’ (ACADIA Conference Proceedings, Louisville: 2006), p. 418
47 Ibid., p. 423
48 Ibid., p. 423



Fig. 10 - A sample of results from Narahara and Terzidis’ experiment



that the question should not be ‘is a computer better than a human?’, but rather ‘at what point does a design problem fall outside the realm of the human designer and require some form of computational input beyond simple representational tools?’. As the authors have stated, design problems are becoming increasingly complex, so it would seem that computational processing will become more and more a part of the architectural design process. A good example of this is the current need for large-scale housing developments. It is the opinion of this researcher that the scale and complexity of such schemes has already surpassed the capability of the human designer, and as such there is a detrimental effect on the quality of the developments.

‘Optimizing the Characteristic Structure’ presents an application of optimisation procedures to urban design in a way that goes beyond purely objective data. It claims to be capable of generating new patterns with an underlying logic based on successful existing urban areas. Essentially, it attempts to reproduce a set of specific characteristics, found in a built urban environment, that have been identified as ideal by Stephen Marshall in his book ‘Streets and Patterns’ (2005). The process consists of a shape grammar and a genetic algorithm. The shape grammar has been designed by deconstructing an area of urban fabric that represents the ‘characteristic structure’, and from it defining just four rules which make up its structure. Interestingly, the authors state that their system “does not have the pretension to be the sole means of design; it recognizes the fact that the use of computer programs in the design process cannot overshadow the designer’s intentions.”49 In other words, it encourages users to apply further changes to the results manually by “defining the likelihood with which certain rules may be applied”50.
One of the key points this study has shown is that “it is possible to combine the automated generation of optimized solutions with an active participation of the designer in the process (in other words, GA does not necessarily need to be used as a black box)”51. It is an interesting concept to allow user interaction to shape the results of the genetic algorithm, but the rules on which the genetic algorithm is based seem to be the weakest part of the process. Basing the core of the design methodology upon the subjective opinion of one individual will only produce a result optimal to their personal ideals. A shape grammar designed around more specific constraints, such as building regulations, could provide a more realistic optimal solution.

49 Gabriela Celani and others, ‘Optimizing the “Characteristic Structure”: Combining Shape Grammars and Genetic Algorithms to Generate Urban Patterns’, in Respecting Fragile Places (eCAADe Conference Proceedings, Ljubljana: 2011), p. 493
50 Ibid., p. 496
51 Ibid., p. 499



Christian Derix further explores this kind of user interaction within the genetic algorithm system, referring to it as “artificial selection”52, whilst the other scenario, in which the program searches within itself for the optimal solution, he refers to as “natural selection”53. In artificial selection, an external referee, such as a user, guides the development of the population. He admits that the speed of such a system is greatly compromised, as the program has to keep stopping to allow for the user’s input. In his Generative Heuristics experiment, Derix innovatively combines both natural and artificial selection. The user selects an individual from the generated population based on a “subjective visual judgement”54; this is then bred with a program-selected individual possessing a high fitness score. The resulting generation will contain elements of each ‘ideal’. This selection process does not necessarily produce the best individuals in subsequent generations, as the “subjectively favoured fitness may, in fact, be relatively poor”55, meaning that convergence towards an overall well-performing individual will take much longer. The benefit, however, is that subjective and qualitative values can be retained throughout the optimisation process. A potential problem with applying a genetic algorithm to a housing layout design is the need for a compromise between many parameters. Derix devises a way to approach this in an experiment called “Compromising the Masterplan”56. At the core of this experiment is a method of optimisation known as Pareto Optimisation. This allows for the optimisation of many different parameters by gradually adjusting individual parameters to achieve a global improvement. Eventually a state is reached where “every further alternation between the parameters’ values would lead to a degradation of any other parameter”57.
Allowing the program almost full control over all parameters takes the majority of the design process out of the hands of the designer. The system becomes so highly optimised that it becomes difficult to see where the input of the user could be useful. It is possible that combining his artificial selection process with Pareto Optimisation would allow the user to shape the design development, resulting in a much more complete design solution which encompasses both user-defined ideals and parametric optimisations.
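The core of Pareto Optimisation can be sketched as follows. This is a minimal illustrative Python fragment, assuming objectives that are all to be minimised; it is not Derix's implementation, and the function names are this sketch's assumptions:

```python
def dominates(a, b):
    # Solution a dominates b if a is no worse in every objective
    # and strictly better in at least one (objectives minimised).
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    # Keep only the non-dominated solutions: those for which no
    # objective can be improved without degrading another -- the
    # stable state Derix describes.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

Each element of the returned front represents a different compromise between the competing parameters; choosing between them is exactly where a subjective, user-driven selection could re-enter the process.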

52 Christian Derix, ‘Genetically Modified Spaces’, in Space Craft: Developments in Architectural Computing, ed. by David Littlefield (RIBA Publishing, London: 2008), p. 48
53 Ibid., p. 48
54 Ibid., p. 50
55 Ibid., p. 50
56 Ibid., p. 51
57 Ibid., p. 50



4.0 Experiment and Analysis


4.1 Program Development

This research proposes a computer program capable of operating as a design aid in the design of a housing development. The program will generate optimal configurations of house units based upon a user-defined ‘buildable area’ – thus keeping control of the design firmly within the hands of the designer. For the purposes of this research, ‘optimal’ will describe the maximum number of houses successfully plotted within a site boundary. The aim at this stage is simply to test the potential of such a system, not to produce a fully implementable program. Having no prior knowledge of computer programming, it was essential to develop a basic understanding of the fundamental principles behind what the research was trying to achieve. A program called ‘Processing’ was used as a software sketchbook to enable the testing of various ideas. Through this testing, certain core principles began to develop which helped to refine the previously ambitious scope of this research. As a starting point, the initial goal was to write a piece of code in Processing which would be able to randomly pack58 a specified number of units into a defined space. The first process the program had to perform was to generate randomly positioned units (houses) within a pre-defined area (the site). The ‘random()’59 function was used to allow the program to generate random x and y coordinates for each of the units. It allows the user to specify a range from which numbers are chosen at random; in the context of coordinates, this allows the user to specify a field within which the units will be placed (Fig. 11). Each of the house units was initially defined as a simple rectangle which would only be generated if the x and y coordinates fell within the range defined by the site boundary (blue square). The number of units generated could be specified by the user, but there was often a slight difference between the number specified and the number generated (Fig. 13).
The idea was that this program would be able to generate different configurations for the way in which the units were arranged. It would test these configurations using the basic criterion that ‘more units = a better solution’, then use a genetic algorithm to find the optimal solution. The problem with finding an optimal solution when the units are all in the same orientation is that the solution becomes very predictable. To avoid this, random rotation values were applied to each unit (Fig. 12). If the collisions could be calculated, it is clear that there would be a much greater variety of solutions generated when the units are rotated.

58 Benjamin Aranda and Chris Lasch, ‘Pamphlet Architecture 27: Tooling’ (Princeton Architectural Press, New York: 2006), pp. 22-25
59 Casey Reas and Ben Fry, Processing: A Programming Handbook for Visual Designers and Artists (MIT Press, Massachusetts: 2007), p. 127
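The behaviour of this first experiment can be sketched in Python (the original sketch was written in Processing, so names and dimensions here are illustrative only). A candidate whose randomly chosen coordinates would carry it outside the site is simply discarded, which is why fewer rectangles than requested may appear:

```python
import random

def random_rects(n, site_w, site_h, rect_w, rect_h, seed=None):
    # Generate up to n rectangles with random origins. A candidate is
    # kept only if it lies entirely within the site, so (as in the
    # early Processing experiment) fewer than n may be produced -- and
    # no collision testing between rectangles is performed.
    rng = random.Random(seed)
    placed = []
    for _ in range(n):
        x = rng.uniform(0, site_w)
        y = rng.uniform(0, site_h)
        if x + rect_w <= site_w and y + rect_h <= site_h:
            placed.append((x, y, rect_w, rect_h))
    return placed
```

A more robust version would instead re-generate a failed candidate until a valid location is found, which is the refinement discussed in the following section.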



Fig. 11 - Randomly positioned rectangles within a specified area.

Fig. 12 - Randomly positioned rectangles within a specified area with random rotation factors.

Fig. 13 - The program was set to generate 10 rectangles, however with each attempt a different number was generated.



The missing units were simply not being generated because they fell outside of the specified range, removing the user’s ability to control the number generated. The program needed to test each unit to establish whether it was outside of the specified area and, if it was, generate a new location for it and re-test it until it found a suitable location. The ability to do this would also have allowed the program to test for collisions between the units themselves, as they were frequently being plotted on top of each other. Without testing for collisions, the output of the program was somewhat meaningless, as it failed to demonstrate anything about the number of units which could be packed inside the area. In addition, it would not have been possible to change the shape of the site boundary, as the ‘random’ positioning of the units would need to lie within an orthogonal matrix of coordinates. At this point the research had reached a stage where it was being held back by the lack of programming knowledge. As such, it became necessary to rely on an external source for the code. Professor Wassim Jabi had written a MAXScript called “Circle Packing”60. This code calculated the number of randomly sized circles which could be packed into a user-defined polygon. The script contained a number of very useful features:

- It was capable of calculating collisions: not just between the generated units and the site boundary, but also between the units themselves.
- It allowed very simple user input in the definition of the polygon (outlining the site boundary).
- The polygon could be absolutely any shape, allowing the user to trace the outline of their site with ease.
- Being part of the Autodesk suite, it could export the generated layouts in a number of useful formats (.dwg, .dwf etc.).
- It gave the user the ability to define the number of units required and then gave a read-out of how many it was able to achieve.
- It automatically calculated the percentage coverage within the polygon, a feature which could prove very useful when calculating the densities of housing developments.

60 Jabi



However, there were some minor adjustments which needed to be made to the original code in order for it to be applicable to the subject matter. Firstly, and most obviously, the circles needed to be changed into orthogonal shapes and the variation in sizing needed to be removed. To make the model appear more realistic, and also to make the varying densities more perceivable, the circles were replaced with 3D cubes. Building upon the knowledge gained in the Processing experiments, a random rotation factor was then applied to each of the units. The way in which the script was coded meant that, at this point, it was producing homogeneous patterns as opposed to layouts – there was no sense of hierarchy or any kind of structure (Fig. 14). The script needed the ability to react to a road structure, around which the random generation of houses could take place. As a proof of concept for this idea, the houses within the generated layout which fell within a certain distance of a specified line vector, representing the road, were made to orient themselves along that line and change colour (Fig. 15).



Fig. 14 - Homogeneous random distribution of house blocks within a specified area.

Fig. 15 - House blocks within a specified distance of three pre-defined lines were made to turn blue and orient themselves to align with the axis of the lines (blue blocks).



A new script was then written which built upon the principles of the packing program and integrated them into a Genetic Algorithm. This ‘House Packing’ script enabled the user to define a series of buildable areas (and consequently define the road structure); the program then places houses around the perimeter of these areas and orients them to the nearest road. The goal of the program at this stage was to achieve the maximum number of houses possible within a given site. As such, the fitness of each individual is calculated based upon its proximity to its nearest neighbour. This factor of ‘remoteness’ ensures that the density of houses in subsequent generations increases. Parent phenotypes are chosen using a Weighted Roulette Wheel selection method61 (Fig. 16).
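One plausible reading of this ‘remoteness’ fitness can be sketched in Python. This is illustrative only – the exact formula used in the House Packing script is not documented here, and the function names are this sketch's assumptions:

```python
import math

def nearest_neighbour_distance(house, others):
    # Euclidean distance from one house (x, y) to its closest neighbour.
    return min(math.dist(house, o) for o in others if o is not house)

def remoteness_fitness(houses):
    # An individual is read as fitter the closer its nearest neighbour
    # is, so denser packings score higher across the population and the
    # average fitness rises as more houses are squeezed onto the site.
    return [1.0 / (1.0 + nearest_neighbour_distance(h, houses))
            for h in houses]
```

Under this reading, adding houses reduces typical nearest-neighbour distances, which is why the number of packed units and the average fitness score tend to rise together.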

Fig. 16 - Weighted Roulette Wheel selection method

This method places all phenotypes into an array; those with higher fitness scores are more highly weighted and as such are more likely to be chosen. The reason for doing this, as opposed to simply selecting the optimal phenotypes each time, is to maintain the diversity of the system. If only the best solutions were chosen each time, the results would become predictable, preventing the evolutionary system from yielding more interesting and unexpected results.
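The Weighted Roulette Wheel can be sketched in Python – an illustrative fragment in the spirit of Shiffman's description, not the script's actual code:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    # Each individual occupies a slice of the wheel proportional to its
    # fitness, so fitter phenotypes are more likely -- but never certain
    # -- to be picked as parents.
    total = sum(fitnesses)
    spin = rng.uniform(0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if spin <= cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Because a below-average phenotype can still be selected as a parent, the method preserves exactly the diversity described above.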

61 Daniel Shiffman, ‘The Nature of Code: Simulating Natural Systems with Processing’ (Forthcoming 2013), p. 414



The user interface allows the operator to control a number of factors (Fig. 17):

Phenotype Parameters:
- Min. Radius: the minimum separation distance between houses.
- Min./Max. Length: the range of lengths of the houses being generated.
- Min./Max. Width: the range of widths of the houses being generated.
- Dist. to Road: the distance from the road within which houses can be placed.

Population Parameters:
- Maximum: sets the maximum number of houses to be generated.
- Survival Rate (%): controls the proportion of the population which will survive into the next generation.
- Mutation Rate (%): controls the chance of an individual within the population being mutated.
- Max. Attempts: the number of attempts the computer is given to place the houses correctly.

The interface then gives the user a number of read-outs after each generation:
- Red Bar: charts the computer’s progress through its allocated number of attempts (Max. Attempts variable).
- Green Bar: gives a visual representation of the proportion of the Maximum population which has been successfully placed.
- Packed: tells the user how many units have been placed.
- Area Coverage: gives a figure for the % site coverage that the current population is achieving.
- Avg. Fitness: calculates the average fitness score within the current population.

Once the first generation has been created, the user can review the results, run the program again to create another generation, or alter the parameters before proceeding.
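The controls listed above might be grouped into a single settings structure. The sketch below is hypothetical: the field names mirror the interface labels, but the default values are illustrative only, not the script's:

```python
from dataclasses import dataclass

@dataclass
class PackingSettings:
    # Hypothetical grouping of the interface controls; defaults are
    # illustrative, not the House Packing script's actual values.
    min_radius: float = 1.0      # minimum separation between houses
    min_length: float = 8.0      # smallest house length generated
    max_length: float = 12.0     # largest house length generated
    min_width: float = 5.0       # smallest house width generated
    max_width: float = 8.0       # largest house width generated
    dist_to_road: float = 6.0    # band around roads where houses may sit
    maximum: int = 400           # target number of houses
    survival_rate: float = 60.0  # % of population carried forward
    mutation_rate: float = 40.0  # % chance an individual is mutated
    max_attempts: int = 1000     # placement attempts per generation
```

Keeping the parameters in one structure makes it straightforward to record which settings produced which generation when comparing runs, as in the survival-rate experiments that follow.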



Fig. 17 - Series of screenshots of the program in operation within the 3ds Max® operating environment:
- Top: buildable area is traced and ready for the first generation.
- Middle: first generation (note the red bar indicating the number of attempts made and the green representing the packed units).
- Bottom: second generation (note the increased number of individuals packed and the higher Avg. Fitness score).



4.2 Case Study

The program was then tested on the design for a housing development which has recently been granted planning permission (Figs. 18 & 19). For the purpose of this research, the ‘real’ design was used as a realistic framework onto which the program could be applied. The buildable area was traced in AutoCAD and imported into 3ds Max to provide the site boundary for the experiment. The network of roads and irregularly shaped plots of land creates a realistic challenge for the program. The two variables that control the effectiveness of the genetic algorithm are the Survival Rate and the Mutation Rate. To find optimal values for these, a series of evaluations was carried out, first testing the system at varying survival rates and then at varying mutation rates. The program was tested over five generations on the case study layout (Fig. 20). For each setting, the three read-outs – ‘Packed’, ‘Coverage Area’ and ‘Average Fitness’ – were recorded. This gave an indication of how well each rate was performing, though the most telling result was the average fitness score, as this is more directly linked to the efficacy of the algorithm. It would seem that the rate showing the greatest increase in average fitness over consecutive generations should be selected as the optimal setting. Over five generations this would appear to be the 100% survival rate. However, a survival rate of this magnitude stifles the genetic algorithm by preventing it from removing poor-performing individuals. As a result, the values for ‘Packed’ and ‘Coverage Area’ tend to peak very early, and more often than not exceed those achieved by lower survival rates, as the program attempts to squeeze more and more houses onto the site. The average fitness score, on the other hand, will usually remain relatively low, since the proposed solution still contains a number of poor-performing individuals.
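The interaction of the Survival Rate and Mutation Rate can be sketched as a single generation step. This is illustrative Python, not the actual script: the roulette-style survivor selection follows the description in the previous section, and `new_individual_fn` is a hypothetical hook standing in for the generation of replacement house placements:

```python
import random

def weighted_pick(items, weights, rng):
    # One roulette spin: an item's chance of being picked is
    # proportional to its weight (fitness).
    spin = rng.uniform(0, sum(weights))
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if spin <= cumulative:
            return i
    return len(items) - 1  # guard against floating-point round-off

def next_generation(population, fitness_fn, mutate_fn, new_individual_fn,
                    survival_rate=0.6, mutation_rate=0.4, rng=random):
    # Survivors are drawn by weighted roulette, so fitter individuals
    # are more likely (but not certain) to be carried forward; the
    # culled fraction is replaced with freshly generated individuals,
    # and each survivor may then be mutated. At survival_rate = 1.0 no
    # individual is ever culled, so improvement relies on mutation
    # alone -- the 'brute force' behaviour discussed below.
    pool = list(population)
    fits = [fitness_fn(p) for p in pool]
    n_survive = max(1, int(len(pool) * survival_rate))
    survivors = []
    for _ in range(n_survive):
        i = weighted_pick(pool, fits, rng)
        survivors.append(pool.pop(i))
        fits.pop(i)
    survivors = [mutate_fn(s) if rng.random() < mutation_rate else s
                 for s in survivors]
    fresh = [new_individual_fn() for _ in range(len(population) - n_survive)]
    return survivors + fresh
```

Running this step repeatedly, with fitness defined as in the House Packing script, is what produces the generation-on-generation trends recorded in the experiments.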
The results recorded in this early experiment appear contradictory, as the highest average fitness score is achieved by the 100% survival rate. The problem with such a high survival rate is that the algorithm relies entirely on the mutation of individuals to increase the average fitness. It is more of a brute-force, trial-and-error approach than systematically breeding a better solution. This method appears to work for this particular experiment, as the fitness score is so closely linked to the number of individuals placed (more individuals = lower factor of remoteness = higher fitness score). However, the purpose of this experiment is to test the potential of the genetic algorithm, so following a method which nullifies part of the breeding process would be contrary to that goal (see Appendix 2 for full data). It was found that the optimal Survival Rate was 60% and the optimal Mutation Rate 40% (Figs. 21 & 22); this coincides with Elezkurtaj and Franck’s experiments, which suggest a mutation



Fig. 18 - Layout plan of chosen case study.

Fig. 19 - Detail of layout plan showing the complexity and level of refinement achieved in a finished product.



Generation 1: Number Packed: 364; Area Coverage: 25.6163%; Average Fitness: 2.55555
Generation 2: Number Packed: 384; Area Coverage: 25.7101%; Average Fitness: 2.57347
Generation 3: Number Packed: 373; Area Coverage: 24.9811%; Average Fitness: 2.57506
Generation 4: Number Packed: 378; Area Coverage: 25.1686%; Average Fitness: 2.61124
Generation 5: Number Packed: 380; Area Coverage: 24.605%; Average Fitness: 2.62862

Fig. 20 - An example of the way in which five generations were used to test for the optimal settings of the House Packing script using the case study layout



Fig. 21 - The results of testing different Survival Rates over 5 generations, comparing Nº of Houses Packed, % Area Coverage and Average Fitness Score. 80% and 100% seem to perform better in this short test, however more extensive testing has proven that a more consistent increase in Average Fitness can be achieved using a 60% survival rate.

Generation 5 results by Mutation Rate:

Mutation Rate:     1%       20%      40%      50%      60%      80%      100%
Nº of Houses:      310      359      388      376      378      371      349
% Area Coverage:   19.4908  22.9101  25.1785  24.8587  25.892   25.6674  24.5487
Avg. Fitness:      2.56612  2.5958   2.68801  2.63123  2.57682  2.51705  2.48689

Fig. 22 - The results of testing different Mutation Rates over 5 Generations



rate of around 50%62. As previously mentioned, some of the results appeared contrary to this (Fig. 21); as such, more extensive testing was carried out using these settings for 15 generations to gain a more reliable picture of the effectiveness of the optimisation process. The results show a general positive trend in the ‘Average Fitness’ score and ‘Number of Packed Houses’, indicating that the optimisation process is functioning correctly (Fig. 23). Whilst the 100% survival rate exceeds the 60% rate on both ‘Number of Packed Houses’ and ‘Area Coverage’, the 60% rate demonstrates a greater level of ‘Average Fitness’ by the end of testing, and it is quite possible that, given a greater number of generations, it would exceed the 100% survival rate in all three measures. What is interesting is the amount of fluctuation between generations. These results indicate a pattern of 3-4 successive increases followed by a significant decrease, the magnitude of which reduces with each repetition. This is an indicator of how the genetic algorithm works. Over successive generations, the range of fitness scores will decrease as the individuals all become fitter (and the average fitness increases). In terms of the roulette wheel selection method, the result is that each individual has a much more equal chance of being in the percentage of the population which survives to the next generation (the survival rate). Conversely, each also has a more equal chance of being removed from the population. This is demonstrated in the results, where after 3 generations at a 60% survival rate, 40% of the relatively high-scoring population is replaced by new individuals with lower fitness scores.

The program is able to vary the size of the houses within what has been defined as a realistic range, based upon the house sizes found in the case study, resulting in a more realistic configuration of houses.
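The equalising effect of roulette selection described above can be illustrated with a toy calculation (the fitness values below are illustrative, not taken from the experiment):

```python
def wheel_shares(fitnesses):
    # Probability of each individual being picked in a single
    # roulette spin.
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

early = wheel_shares([1.0, 2.0, 3.0])   # a spread-out, early population
late = wheel_shares([2.5, 2.6, 2.7])    # a converged, later population
# Early on, the fittest individual is three times as likely to be chosen
# as the weakest; once fitness scores converge, the odds are nearly
# equal, so survival (and removal) approaches a uniform lottery.
```

This near-uniform lottery in converged populations is what produces the periodic drops in average fitness observed in the 15-generation test.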
However, it is unrealistic to compare the quantity of houses that the program places with the number achieved on the finished layout, as the latter represents hours of careful refinement (Fig. 19). The final form of this program is only intended to be used as an early-stage design aid, and any configurations generated would need to undergo the same process of refinement in order to be comparable. It is therefore possible that using a simpler case study in this experiment would have yielded results which better illustrated the function of the program. The large housing layout makes it difficult to keep track of the progress of individuals within the optimisation process; the pattern of houses created is so complex that the organisational principles become harder to see. It was, however, useful to use this particular example when gaining an expert opinion, as the professionals to whom the program was demonstrated were the original designers of this particular layout.

62 Elezkurtaj (1999), p. 649



[Fig. 23 comprises three charts: ‘Comparing Houses Packed’ (Nº Houses Packed vs. Generations), ‘Comparing % Area Coverage’ (% Area Coverage vs. Generations) and ‘Comparing Average Fitness Scores’ (Average Fitness vs. Generations), each plotting the 60% Survival Rate population against the 100% Survival Rate control over fifteen generations. The underlying data is reproduced in Appendix 4.]

Fig. 23- Testing the chosen Survival Rate (60%) against a control (100%) to prove that a certain amount of the population needs to be refreshed in order to breed the fittest population
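The selection mechanism being tested can be sketched as follows. This is a minimal illustration written in Python rather than the MaxScript of the prototype, with toy individuals and a hypothetical fitness function; it shows only the principle that a sub-100% survival rate keeps the fittest individuals while refreshing the rest of the population with new genetic material each generation:

```python
import random

def next_generation(population, fitness, survival_rate=0.6):
    """Keep the fittest fraction of the population and refresh the rest
    with new random individuals - a 100% survival rate would simply
    carry the whole population forward unchanged."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_keep = int(len(population) * survival_rate)
    survivors = ranked[:n_keep]
    # Refresh the remainder so that fresh genetic material
    # keeps entering the breeding pool
    newcomers = [random.random() for _ in range(len(population) - n_keep)]
    return survivors + newcomers

# Toy run: individuals are plain numbers and fitness is the value itself
pop = [random.random() for _ in range(10)]
for gen in range(5):
    pop = next_generation(pop, fitness=lambda x: x)
```

In the experiment above, the 100% control corresponds to `survival_rate=1.0`: no individuals are ever discarded, so the population cannot be improved by replacement.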



Implementation in Practice


The overall goal of the House Packing program was to redistribute the designer's time. Given the complexity of large housing developments, a large proportion of the available time is spent ‘Compliance Checking’ – ensuring that the design meets the specific requirements of building codes and the brief. The aim of the finished program is to shift the focus of that time to improving the qualitative values of the design. The finished program would be operated as a design aid for the early stages of the design process, with the intention of establishing a partnership between human and computational design. Different roles can be assigned to each side of the partnership.

User:

- Site Definition: specifying the extents of the buildable area.

- Housetype Mix and Quantity: defining the target number of houses to achieve and the different types of houses as specified by the client.

- Spatial Limitations: the user will have to set these parameters in order for the design to conform to strict space standards (e.g. minimum distances between houses).

- Qualitative Input: outside of the program, this is the main role of the user; to introduce an element of subjective evaluation into the design process.

Program:

- House Placement: locating the houses within the buildable area.

- Compliance Checking: checking each of the placed houses against the set of pre-defined rules set out by the user.

- Design Optimisation: finding the optimal configuration of house units within the buildable area.

This partnership is central to the way in which the program is intended to be applied: as an aid, not an autonomous design generator. In order to understand the realities behind the use of a program like this, it was necessary to gain the opinion of industry professionals and establish some idea of the potential application of the system.
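As a rough illustration of this division of labour, the following Python sketch (the prototype itself is written in MaxScript; the site dimensions, house footprint and separation rule here are hypothetical values) shows the program-side roles of House Placement and Compliance Checking operating on user-supplied parameters:

```python
import random

# User input: site definition and spatial limitations (hypothetical values)
SITE_W, SITE_H = 500, 500      # extents of the buildable area
HOUSE_W, HOUSE_L = 20, 20      # footprint of a single housetype
MIN_SEPARATION = 10            # minimum distance between houses

def compliant(x, y, placed):
    """Compliance Checking: the candidate must sit inside the site and keep
    the minimum separation from every house already placed. This is a
    deliberately simplified, axis-aligned test."""
    if not (0 <= x <= SITE_W - HOUSE_W and 0 <= y <= SITE_H - HOUSE_L):
        return False
    return all(abs(x - px) >= HOUSE_W + MIN_SEPARATION or
               abs(y - py) >= HOUSE_L + MIN_SEPARATION
               for px, py in placed)

def place_houses(target, attempts=5000):
    """House Placement: propose random positions and keep only those
    that pass the compliance check."""
    placed = []
    for _ in range(attempts):
        if len(placed) == target:
            break
        x, y = random.uniform(0, SITE_W), random.uniform(0, SITE_H)
        if compliant(x, y, placed):
            placed.append((x, y))
    return placed

layout = place_houses(target=100)
```

In the full system, Design Optimisation would then score each such layout (houses packed, area coverage) and evolve the population of layouts rather than accepting a single random pass.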



The House Packing script was demonstrated to a practice of experts in the field of commercial housing. They immediately felt that the focus of the program could be altered to make it more useful. Through an informal discussion, an opinion emerged that if the program is to generate a useful starting point for the design of housing layouts, there needed to be less randomness involved. However, this ‘randomness’ is what creates the most unexpected design solutions; making the program more predictable would defy the point of using an evolutionary system. When asked whether this program was likely to make the design process more efficient by giving over control of compliance checking to the computer, one expert remarked that “Programs like this aren’t doing that, programs like Revit already are, with clash detection etc. This [program] can only help you with randomly hitting a high number, it is not really doing it with any intelligence. To get the right answer would require such a complex set of parameters that it would be unachievable”63. The idea was that incorporating some kind of Genetic Algorithm into a Revit-based system would enable the program to place pre-defined blocks. The user could then apply specific sets of rules to individuals within their Revit ‘family’, making the placement of the units much less random. There are several issues which become evident with this suggestion. Firstly, the scripting methods currently available in Revit are far less sophisticated than MaxScript and as such would be incapable of performing complex tasks such as running a genetic algorithm. The Revit .NET API allows the user to program in any .NET-compliant language, but it would seem that it is mainly intended to be used to link to other externally written programs or to automate repetitive tasks.
There is also an ‘IronPython’ plugin for Revit which gives the user the ability to code in the Python programming language, and to create or edit scripts in ‘real-time’ without the need to restart the program. This holds slightly more potential, but is still not as immediately applicable as MaxScript. The suggestion to script in Revit, given that it currently would not be possible, demonstrates a bias towards that software. As such, the experts seemed somewhat blinkered to the potential of other systems. It was noted that currently “the program always falls down at the corner plots, it plots houses where the garden of the adjacent house would be”64. This can be attributed to the way that the program tries to fill as much of the perimeter of an area as possible. A solution was proposed that if the script could plot houses along a specified line or path, rather than over an area, these overlapping issues could be avoided. Placing the houses either side of a defined road centreline would enable the user to define the road structure and immediately see the implications on the design. Whilst this is a valid alternative method, the fact that the program is failing at the corner plots can be attributed more to the current lack of refinement in the coding. The experts seem to have analysed the software as a finished article rather than as the test of an idea. Also, in order to achieve this, the program would have to calculate the placement of houses in real-time as the user defines the structure. This would make the application of a genetic algorithm impossible and therefore defy the point of the program. However, if a genetic algorithm could be integrated into the design of the road layout, it could yield some interesting results. “The minimum distance between roads could be specified (establishing suitably sized buildable areas), then the program could be given the site boundary and allowed to generate a network of roads within it. What would be interesting is the effect that different shaped sites would have on the final layout. Giving it a rectangular site would most likely result in a grid, giving it a circular site could result in some kind of interesting polar structure”65. The genetic algorithm could measure the fitness of a solution by counting the number of generated areas which are above a certain width; it could also control how orthogonal the layout is by limiting the number of 90-degree angles within the design. Whilst this could be an interesting area of exploration, it represents the creation of a different program entirely. The design and positioning of roads is a separate subject with an equally complex set of parameters, and as such does not represent a more achievable alternative. The program proposed by the experts would be more of a pattern generator than an actual layout design tool. In addition to this, it was stated that the program could benefit from relaying more information to the user.

63 Appendix 5, p. 61
64 Ibid., p. 59
Currently it only presents information about the generation as a whole; if it were able to display information about the fitness of individuals, then the user would be able to visualise the selection process more clearly. For example, “if there was some kind of colour scale which went from red to green or some kind of traffic light system; so red, amber, green, to represent the fitness of each individual. That would make it much easier to see which individuals were failing”66. These very specific suggestions further reinforce the feeling that the program was being assessed as a finished article rather than as a work in progress. However, the idea did lead to an interesting general suggestion: that the program lacked user interaction. A model in which “after a generation you could select the individuals that you want to keep just by clicking on them”67 would facilitate a process of artificial selection similar to that proposed by Christian Derix68.

65 Appendix 5, p. 60
66 Ibid., p. 60
67 Ibid., p. 60
68 Derix, p. 48
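The suggested ‘traffic light’ display amounts to a simple mapping from an individual's fitness score onto a colour band. A minimal Python sketch might read as follows; the range bounds are assumptions, and in practice they could be taken from the current generation's minimum and maximum scores:

```python
def traffic_light(fitness, f_min=1.5, f_max=2.75):
    """Map a fitness score onto a red/amber/green scale so that
    failing individuals are immediately visible. The bounds f_min
    and f_max are hypothetical values."""
    span = (fitness - f_min) / (f_max - f_min)
    if span < 1/3:
        return "red"      # failing individuals
    elif span < 2/3:
        return "amber"    # middling individuals
    return "green"        # the fittest individuals
```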



It was eventually suggested that perhaps the focus of the program could be altered to make it more useful. It was proposed that “the house is actually the irrelevant part, what’s important is the plot.... The program has much more potential if the parameters drive plots, not houses”69. A land parcelling tool such as this could be applied at the very early stages of the design process, as proposed by Gabriela Celani70 and also by Frank Lloyd Wright in his land parcelling method for the ‘Galesburg Country Home Acres’. “It would enable the Architect to quickly and confidently go back to the client and say ‘We will be able to achieve ‘x’ amount of units on this site, and therefore it is worth buying the site’. This would be useful as initial assessments are time consuming and more often than not unpaid”71. However, the program would not be of any use on a completely empty site, because achievable densities can easily be calculated once the buildable area is known. The only place it would work is “on a site with an outline permission, as the densities are harder to calculate and the program could test whether the proposed road structure would work”72. Pursuing this line of research would require a drastic redesign of the program in its current form. The Packing function would need to be replaced with a subdivision function based around tessellation and Voronoi methods. Essentially, the Voronoi method of subdivision is a way of defining areas of space based upon their proximity to a particular point. It achieves this by firstly constructing a bisector – a line which is halfway between two points and perpendicular to the line that would connect them. It then defines a “Voronoi Cell”73, which is bounded by the intersection of these bisectors (Fig. 24). The process is repeated for every point in the set and the result is a honeycomb-like collection of tessellated shapes.
The Genetic Algorithm could be used to drive the positioning of these points, thus defining the size and shape of the areas being created (Fig. 25).

69 Appendix 5, p. 60
70 Celani, p. 503
71 Appendix 5, p. 60
72 Ibid., p. 60
73 Aranda, p. 77



Fig. 24- A simple Voronoi diagram which shows the intersecting bisectors, equidistant and perpendicular between any two adjacent points.

Fig. 25- A much more ‘Organic’ Voronoi diagram demonstrating the potential for this method of spatial subdivision to be used as a land parcelling tool.
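The nearest-point principle behind these diagrams can be approximated very compactly. The following Python sketch (a discrete grid approximation rather than the exact bisector construction, with hypothetical dimensions) assigns every location to its closest seed point; a Genetic Algorithm would then mutate the seed positions to drive the size and shape of the resulting parcels:

```python
import random

def voronoi_cells(seeds, grid_w, grid_h):
    """Assign every grid cell to the nearest seed point. Each group of
    cells sharing a seed approximates one Voronoi cell - here a land
    parcel - whose size and shape follow from the seed positions."""
    cells = {}
    for gx in range(grid_w):
        for gy in range(grid_h):
            nearest = min(range(len(seeds)),
                          key=lambda i: (seeds[i][0] - gx) ** 2 +
                                        (seeds[i][1] - gy) ** 2)
            cells[(gx, gy)] = nearest
    return cells

# A genetic algorithm would mutate and recombine these seed positions,
# scoring each layout by, say, the number of parcels above a minimum area
seeds = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(8)]
parcels = voronoi_cells(seeds, 50, 50)
```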



Conclusion


It is clear that a genetic algorithm based design aid holds great potential for increasing the efficiency of the commercial housing design process. The ability of this kind of software to generate design ideas whilst simultaneously complying with a plethora of constraints is something that the human designer is far less efficient at achieving. The most pressing question raised by this research is how the software should be integrated into the design process. The expert consultation exposed a desire amongst the designers to have a greater amount of input into the generation of a solution. This gave an interesting insight into the way that they feel about the software. One expert questioned the ability of software to replicate the “human ability to….make a subjective judgement”74, demonstrating a lack of trust in the system to generate a complete design solution. The designers want the software to carry out the time-consuming, menial tasks, enabling them to focus their time on the more skilled areas of design. This lack of trust also answers the question of the potential marginalisation of the architect's role through the advancement of digital design tools. The designers do not see the technology as marginalising their role; they feel that “If anything the development of IT in design has given more control back to the designers”75. This further reinforces the role of the program as a design aid, not as a complete design solution, and as such the suggested partnership between software and user would be its most likely application. This research has provided a useful starting point for the application of genetic algorithms in the field of commercial housing design. Whilst the application discussed may not be developed further in its current configuration, it has generated an invaluable body of knowledge which in turn has presented other interesting avenues of exploration within this topic.

74 Appendix 5, p. 60
75 Appendix 5, p. 61



Bibliography

Alexander, Christopher, ‘The Question of Computers in Design’ in Landscape, Autumn 1967, pp. 8-12

Alexander, Christopher, Notes on the Synthesis of Form (Harvard University Press, Massachusetts: 1968)

Alexander, Christopher, and others, A New Theory of Urban Design (Oxford University Press, New York: 1987)

Aranda, Benjamin, and Chris Lasch, Pamphlet Architecture 27: Tooling (Princeton Architectural Press, New York: 2006)

Besserud, Keith, and Joshua Cotten, ‘Architectural Genomics’ (ACADIA Conference Proceedings, Minneapolis: 2008), pp. 238-245

CABE, Housing Audit: Assessing the Design Quality of New Housing in the East Midlands, West Midlands and the South West (London: CABE, 2006)

Celani, Gabriela, and others, ‘Generative Design Systems for Housing: An Outside-in Approach’ in Digital Design: The Quest for New Paradigms (eCAADe Conference Proceedings, Lisbon: 2005), pp. 501-506

Celani, Gabriela, and others, ‘Optimizing the “Characteristic Structure”: Combining Shape Grammars and Genetic Algorithms to Generate Urban Patterns’ in Respecting Fragile Places (eCAADe Conference Proceedings, Ljubljana: 2011), pp. 491-500

Chouchoulas, Orestes, Shape Evolution: An Algorithmic Method for Conceptual Architectural Design Combining Shape Grammars and Genetic Algorithms (published doctoral thesis, University of Bath, 2003)

Choudhary, Ruchi, and Jeremy Michalek, ‘Design Optimization in Computer-Aided Architectural Design’ in CAADRIA 2005 (CAADRIA Conference Proceedings, New Delhi: 2005), Vol. 2, pp. 149-159

Coates, Paul, Programming Architecture (Routledge, Oxon: 2010)

Coppola, Carlo, and Alessandro Ceso, ‘Computer Aided Design and Artificial Intelligence in Urban and Architectural Design’ in Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process (eCAADe Conference Proceedings, Weimar: 2000), pp. 301-307

Crnkovic, Gordana Dodig, ‘Constructive Research and Info-computational Knowledge Generation’ in Model-Based Reasoning in Science and Technology, ed. by Lorenzo Magnani and others (Springer, Berlin: 2010), pp. 359-380

Derix, Christian, ‘Genetically Modified Spaces’ in Space Craft: Developments in Architectural Computing, ed. by David Littlefield (RIBA Publishing, London: 2008), pp. 46-53

Duarte, José P., ‘A Discursive Grammar for Customizing Mass Housing: The Case of Siza’s Houses at Malagueira’ in Digital Design (eCAADe Conference Proceedings, Graz: 2003), pp. 665-67

Elezkurtaj, Tomor, and Georg Franck, ‘Genetic Algorithms in Support of Creative Architectural Design’ in Architectural Computing from Turing to 2000 (eCAADe Conference Proceedings, Liverpool: 1999), pp. 645-651

Elezkurtaj, Tomor, and Georg Franck, ‘Geometry and Topology: A User-Interface to Artificial Evolution in Architectural Design’ in Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process (eCAADe Conference Proceedings, Weimar: 2000), pp. 309-312

Flemming, Ulrich, and Carnegie Mellon University Engineering Design Research Center, ‘Get with the Program: Common Fallacies in Critiques of Computer-Aided Architectural Design’ (Carnegie Mellon University School of Architecture, Paper 74: 1994)

Frazer, John, An Evolutionary Architecture (AA Publications, London: 1995)

Frazer, John, ‘Computing without Computers’, Architectural Design Special Issue: The 1970s is Here and Now, Issue 75 (2005), pp. 33-43

Holland, John, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence (University of Michigan Press, Michigan: 1975)

Jabi, Wassim, Parametric Design in Architecture (Laurence King Publishing: Forthcoming 2013)

Kitchley, J. Jinu Louishidha, and A. Srivathsan, ‘Combining Shape Grammar and Genetic Algorithm for Developing A Housing Layout: Capturing the Complexity of an Organic Fishing Settlement’ (CAADRIA Conference Proceedings, New Delhi: 2005), Vol. 2, pp. 259-265

Kobayashi, Yoshihiro, and Subhadha Battina, ‘Generating Housing Layout Designs: Fractals as a Framework’ (SIGraDi Conference Proceedings, São Leopoldo: 2004), pp. 221-223

Lawrence, D.H. (David Herbert), Late Essays and Articles (Cambridge University Press, Cambridge: 2004)

Marin, Philippe, and others, ‘A Genetic Algorithm for Use in Creative Design Processes’ (ACADIA Conference Proceedings, Minneapolis: 2008), pp. 332-339

Narahara, Taro, and Kostas Terzidis, ‘Multiple-constraint Genetic Algorithm in Housing Design’ (ACADIA Conference Proceedings, Louisville: 2006), pp. 418-425

Nilkaew, Piyaboon, ‘Assistant Tool for Architectural Layout Design by Genetic Algorithm’ (CAADRIA Conference Proceedings, Kumamoto: 2006)

Reas, Casey, and Ben Fry, Processing: A Programming Handbook for Visual Designers and Artists (MIT Press, Massachusetts: 2007)

RIBA, Improving Housing Quality: Unlocking the Market (London: RIBA, 2009)

Shiffman, Daniel, The Nature of Code: Simulating Natural Systems with Processing (Forthcoming 2013)

Stiny, George, and James Gips, ‘Shape Grammars and the Generative Specification of Painting and Sculpture’ in Proceedings of IFIP Congress 71 (IFIP Congress Proceedings, Amsterdam: 1971), pp. 1460-1465

Stiny, George, and James Gips, Algorithmic Aesthetics: Computer Models for Criticism and Design in the Arts (University of California Press, 1978)

Wellings, Fred, British House Builders: History and Analysis (Blackwell Publishing, Oxford: 2006)

Woodbury, Robert, Elements of Parametric Design (Routledge, Oxon: 2010)



Appendix 1: Code for Random Placement of Rectangles within a defined area
Written in the Processing Development Environment

//Defines the number of 'Houses'
int numHouses = 10;

House ho;

void setup(){
  size(700,700);
  smooth();
  ho = new House();
  //House width
  ho.w = 20;
  //House length
  ho.l = 20;
  //Random x and y positions for Houses
  ho.x = random(width);
  ho.y = random(height);
}

void draw(){
  background(0);
  noFill();
  stroke(0,152,173);
  strokeWeight(5);
  //Box which represents the site boundary
  int px = 100;
  int py = 100;
  int pw = 500;
  int pl = 500;
  rect(px,py,pw,pl);
  strokeWeight(1);
  stroke(255);
  fill(100,80);
  //Generates up to numHouses house units
  for (int i=0; i<numHouses; i++){
    /*Setting random x and y values for the house units again because otherwise
    they would all use the same randomly generated values, resulting in a kind
    of spiral pattern around one point*/
    ho.x = random(pw);
    ho.y = random(pl);
    pushMatrix();
    //Setting 0,0 to the x and y coordinate of the site boundary
    translate(px,py);
    /*Setting 0,0 to the x and y coordinate of each individual rectangle
    so that the units rotate around their own individual axes*/
    translate(ho.x,ho.y);
    rotate(PI/random(1));
    //Inverting the translation so that the units return to their original distribution
    translate(-ho.x,-ho.y);
    /*This ensures that only shapes which fall within the site boundary are drawn -
    the house length is subtracted from the boundary length to compensate for the
    rotation factor, which would otherwise cause the ends of the houses to lie
    outside the boundary*/
    if ((ho.x<=(pw-ho.l)) && (ho.x>=ho.l) && (ho.y>=ho.l) && (ho.y<=(pl-ho.l))){
      rect(ho.x,ho.y,ho.w,ho.l);
    }
    popMatrix();
  }
  noLoop();
  save("Random Rectangles.png");
}

class House{
  float x, y;
  float w, l, spacing;
}



Appendix 2: Survival Rate Experiment data

Nº of Houses

Survival Rate      20%      40%      60%      80%      100%
Generation 1       357      358      349      353      352
Generation 2       203      314      333      369      387
Generation 3       127      273      322      379      401
Generation 4        84      248      313      382      409
Generation 5        57      237      303      375      420

Area Coverage (%)

Survival Rate      20%      40%      60%      80%      100%
Generation 1   24.6354  24.4943  23.9321  24.3377  24.5392
Generation 2   13.8461  21.4224  22.2799  24.4633  26.9176
Generation 3   8.44702  18.1858  21.0064  23.6659  27.8521
Generation 4   5.97016  16.1799  20.2879  23.0715  28.1725
Generation 5    4.0283  15.0443   19.181   21.921  28.2595

Avg. Fitness

Survival Rate      20%      40%      60%      80%      100%
Generation 1   2.47831  2.52866   2.5115  2.55006  2.47511
Generation 2   2.13821  2.50169  2.52957  2.63963   2.5622
Generation 3   1.94458  2.45941  2.48235   2.6664  2.58221
Generation 4   1.69938  2.37155  2.55473  2.73318  2.58631
Generation 5   1.52044  2.31868  2.57371  2.72068  2.63928

[Charts: ‘Nº of Houses Packed’, ‘% Area Coverage’ and ‘Average Fitness Score’, each plotted against Generations for the five Survival Rates (20%-100%).]


Appendix 3: Mutation Rate Experiment data (at a Survival Rate of 60%)

Nº of Houses

Mutation Rate      1%      20%      40%      50%      60%      80%     100%
Generation 1      349      341      358      349      347      354      341
Generation 2      334      354      361      375      357      361      350
Generation 3      324      345      385      393      385      358      349
Generation 4      318      358      378      388      381      355      348
Generation 5      310      359      388      376      378      371      349

Area Coverage (%)

Mutation Rate      1%      20%      40%      50%      60%      80%     100%
Generation 1  23.4742  23.4871  24.4554  23.8587  24.1637  24.3561  23.9615
Generation 2  22.1158  23.8289  23.9906  24.9276  24.6716   25.196  24.4594
Generation 3  21.1463  23.1876  25.5393  26.1559  25.8958  25.1721  23.6584
Generation 4    20.01  23.2055  25.0884   25.563  26.0824  24.3297  23.9143
Generation 5  19.4908  22.9101  25.1785  24.8587   25.892  25.6674  24.5487

Avg. Fitness

Mutation Rate      1%      20%      40%      50%      60%      80%     100%
Generation 1   2.4848  2.46982  2.54961  2.52712  2.43806  2.45571  2.51283
Generation 2  2.48752  2.53035  2.57865   2.5973  2.44037  2.45817  2.46177
Generation 3  2.48529  2.49437  2.62952  2.62589  2.62595  2.55437  2.45659
Generation 4  2.57026  2.56703  2.55961  2.62488  2.59292  2.55489  2.46429
Generation 5  2.56612   2.5958  2.68801  2.63123  2.57682  2.51705  2.48689

[Charts: ‘Nº of Houses Packed’, ‘% Area Coverage’ and ‘Average Fitness Score’, each plotted against Generations for the seven Mutation Rates (1%-100%).]


Appendix 4: Survival Rate Experiment 2 data

At S.R 60% (Packed / % Area Coverage / Avg. Fitness)

Gen. 1    342   23.7698   2.47285
Gen. 2    377   25.8833   2.55832
Gen. 3    389   25.9944   2.64781
Gen. 4    391   25.5438   2.67804
Gen. 5    377   24.9001   2.55558
Gen. 6    384   25.2824   2.63713
Gen. 7    385   24.8903   2.61962
Gen. 8    383   24.711    2.60978
Gen. 9    391   25.6179   2.65257
Gen. 10   397   25.7874   2.66081
Gen. 11   384   24.4107   2.64617
Gen. 12   388   24.8247   2.6923
Gen. 13   392   25.3696   2.69712
Gen. 14   395   25.4285   2.68365
Gen. 15   396   25.1964   2.67124

At S.R 100% (Packed / % Area Coverage / Avg. Fitness)

Gen. 1    337   23.5309   2.41143
Gen. 2    389   26.5671   2.57182
Gen. 3    412   27.6455   2.64687
Gen. 4    411   27.2027   2.64597
Gen. 5    402   27.1076   2.63363
Gen. 6    399   27.0847   2.60325
Gen. 7    407   27.5794   2.6344
Gen. 8    421   27.7679   2.68792
Gen. 9    421   28.1948   2.69102
Gen. 10   419   27.3346   2.72871
Gen. 11   415   27.4582   2.67029
Gen. 12   424   27.5866   2.71171
Gen. 13   421   28.0077   2.67085
Gen. 14   410   27.2016   2.66895
Gen. 15   414   27.5955   2.65915

[Charts: ‘Comparing Houses Packed’, ‘Comparing % Area Coverage’ and ‘Comparing Average Fitness Scores’, each plotting the 60% Survival Rate population against the 100% Survival Rate control over fifteen generations.]


Appendix 5: Expert Opinion Interview Transcription

13th January 2013, 11:00 a.m.
Present: ‘D’ – David Rhodes; ‘R’ – Russ Green; ‘T’ – Tanveer Hussain

The meeting began with an explanation of genetic algorithms and a demonstration of the program.

D: It would be interesting to see an image of each iteration (generation) so that you could compare the way in which the program is progressing.

What problems do you see with the direction in which the program is heading?

R: The program needs to place houses based upon road centrelines rather than within a certain distance from the edge of a buildable area. The program always falls down at the corner plots, it plots houses where the garden of the adjacent house would be. You might have more success, instead of trying to plot houses within a buildable area, if your algorithm was able to plot houses along a specified line or path. You could make the program test for garden space as well as immediate overlapping using Revit. You can use it to make a family which contains the specific housetype, but also contains a garden area so that it would be clear, once a generation had been created, where the garden zones were overlapping.

What changes would you like to see in order for this to become a useful tool?

R: What would be more useful is if rather than placing randomly sized anonymous blocks, it was placing pre-defined blocks. You could then set up individual-specific rules such as minimum separation distances between each of the faces. In fact you don’t even need to go as far as establishing face based rules, the housetype block could include the garden area behind and for example the parking spaces in front.

T: What would be useful is if you were using the specific housetype blocks you could

59


establish a mix of housetypes which you want to reach by setting a percentage for each housetype. It may have been more useful to test the program on a less complex layout. D: It would be good if the program was able to give some kind of graphical representation of the fitness of each individual, then perhaps select those individuals which would progress into the next generation, just by clicking on them or something. T: The fitness of each individual would be good to have represented somehow. Maybe if there was some kind of colour scale which went from red to green or some kind of traffic light system; so red, amber, green, to represent the fitness of each individual. That would make it much easier to see which individuals were failing. D: It is that human ability to look at the relationship between two individuals and make a subjective judgement about what works and what doesn’t, which would be very hard to code. So having some kind of user interaction with the program could be very useful. How else could you see the program being applied? D: It would be interesting to apply the program to the generation of a road structure. So the minimum distance between roads could be specified (establishing suitably sized buildable areas), then the program could be given the site boundary and allowed to generate a network of roads within it. What would be interesting is the effect that different shaped sites would have on the final layout. Giving it a rectangular site would most likely result in a grid, giving it a circular site could result in some kind of interesting polar structure. It is almost as if there needs to be an initial separate application of the program which deals with the kind of work that the Urban Design Team carry out. This would generate a road structure which could then be fed into the next stage of the program which would generate the houses. 
It would enable the Architect to quickly and confidently go back to the client and say ‘We will be able to achieve ‘x’ amount of units on this site, and therefore it is worth buying the site’. This would be useful, as initial assessments are time consuming and more often than not unpaid.

R: I think it is more interesting to talk about this program as a tool to be used really early on in the design process. So, for example, a road layout is proposed and the program could help you very quickly see whether a particular block within that layout works. It wouldn’t work on an empty site, because if you know the area and so on, you already know the kinds of densities you are likely to achieve. It could, however, work on a site with an outline permission, as the densities are harder to calculate and the program could test whether the proposed road structure would work. The house is actually the irrelevant part; what’s important is the plot,



generating the plot boundaries. The program has much more potential if the parameters drive plots, not houses. It would be much more powerful to run Revit using some kind of genetic algorithm. Another thing that would be really interesting is to have a series of ellipses representing plots which dynamically orient themselves along a spline (representing the road) as it is drawn, enabling you to push and pull the structure around and immediately see the consequences in real time.

Would this program make the design process more efficient and give over control of compliance checking to the computer?

R: Programs like this aren’t doing that; programs like Revit, with clash detection and so on, already are. This can only help you by randomly hitting a high number; it is not really doing it with any intelligence. To get the right answer would require such a complex set of parameters that it would be unachievable.

I know that currently the program is not presenting a polished solution to a problem, but what I want to know is: would you think it a good thing, or a useful tool, to have a program which was capable of presenting a complete solution? Or would you see it as marginalising the role of the designer?

R: It wouldn’t marginalise the role of the designer. That is always said when new technologies are proposed, and it has proved not to be the case with every iteration of IT in design so far.

T: If anything, the development of IT in design has given more control back to the designers.
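Two of the ideas raised in the interview — a traffic-light colour for each individual’s fitness, and scoring a layout against a target percentage mix of housetypes — could be prototyped along the following lines. This is a hypothetical sketch only: the function names, thresholds, and housetype labels are illustrative assumptions and do not form part of the program demonstrated at the meeting.

```python
# Hypothetical sketch of two interview suggestions; thresholds and
# names are assumptions, not the dissertation's actual code.

def traffic_light(fitness, amber=0.5, green=0.8):
    """Map a normalised fitness (0.0-1.0) to a traffic-light colour."""
    if fitness >= green:
        return "green"
    if fitness >= amber:
        return "amber"
    return "red"

def mix_penalty(layout, target_mix):
    """Sum of absolute deviations between a layout's actual housetype
    percentages and a target mix (0.0 = the mix is met exactly)."""
    total = len(layout)
    penalty = 0.0
    for housetype, target in target_mix.items():
        actual = sum(1 for h in layout if h == housetype) / total
        penalty += abs(actual - target)
    return penalty

# Example: ten plots scored against a 60/40 target mix.
layout = ["2-bed"] * 6 + ["3-bed"] * 4
print(traffic_light(0.85))                                 # green
print(mix_penalty(layout, {"2-bed": 0.6, "3-bed": 0.4}))   # 0.0
```

In a fuller implementation the colour could be applied to each placed block in the model view, and the mix penalty folded into the genetic algorithm’s fitness function alongside the existing overlap tests.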


A Genetic Algorithm Based Design Aid  

Dissertation