Master of Architecture (M.Arch) Dissertation 2023/25
Gap in the Cloud
ARCHITECTURAL ASSOCIATION SCHOOL OF ARCHITECTURE
Taught Postgraduate School Programmes
PROGRAMME: Emergent Technologies and Design
YEAR: 2023-2025
COURSE TITLE: M.Arch Dissertation
DISSERTATION TITLE: Gap in the Cloud
STUDENT NAMES:
Burak Aydin [M.Arch.]
Mehmet Efe Meraki [M.Arch.]
Rushil Patel [M.Arch.]
DECLARATION: "I certify that this piece of work is entirely my/our own and that any quotation or paraphrase from the published or unpublished work of others is duly acknowledged."
DATE: 10th January 2025
ACKNOWLEDGEMENTS
Founding Director:
Dr. Michael Weinstock
Course Directors:
Dr. Elif Erdine
Dr. Milad Showkatbakhsh
Studio Tutors:
Paris Nikitidis
Felipe Oeyen
Dr. Alvaro Velasco Perez
Lorenzo Santelli
Fun Yuen
We extend our deepest gratitude to Course Directors Dr. Elif Erdine and Dr. Milad Showkatbakhsh, and to Founding Director Dr. Michael Weinstock, for their insightful and intellectually stimulating discussions. The tangential ideas arising from these conversations often led to equally, if not more, thought-provoking metaphorical reflections, encompassing a broader scope than the primary subject matter. We also wish to thank Alessio Erioli [Co-de-iT] for helping us envision alternative possibilities.
Our heartfelt thanks also go to our dedicated studio tutors and supportive colleagues, whose unwavering belief in the core principles of this dissertation fueled our confidence and determination. We are especially indebted to our families, whose continuous support and encouragement provided the foundation upon which this study was built. Lastly, we extend our sincere appreciation to our colleague, Prakhar Patle, for his steadfast assistance and camaraderie throughout this journey.
ABSTRACT
This thesis explores the interdependent relationship between data—its generation, storage, and consumption—and the utilization of space and energy within the urban fabric, focusing on London as a global data hub. Drawing on historical trends, current observations, and future projections, the study identifies a growing need to reimagine data centre typologies, which must eventually be re-integrated into the urban environments where information is produced, processed, and consumed. By challenging traditionally isolated yet highly embedded typologies, the study unfolds a material system that enables functional hybridization, cultivation for food production, and a context-aware space-making framework. These strategies collectively provoke a mutual, participatory integration of data infrastructures into the urban fabric.
The initial (M.Sc) phase investigated material systems and functional interdependencies to enable hybridization by repurposing excess heat. A Phase Change Material (PCM)-infilled Triply Periodic Minimal Surface (TPMS) panel system was developed, whose heat-retention performance passively regulates temperature and ensures thermal comfort for the enveloped agricultural function. Proof-of-concept experiments demonstrate the developed system's ability to form an energy loop, reducing external dependencies. Additionally, case studies and scenario-building exercises informed the spatial and functional relationships, identifying how environmental factors influence the system's performance.
The subsequent (M.Arch) phase examines space-making experiments via an automated assembly interpreter to optimize both functional and spatial distribution. Based on context-oriented demographic data, projected computational supply-demand trends, and material system metrics, capacity calculations determine what and how much to build from 2025 to 2040. Additionally, contextual influences derived from the immediate built environment are integrated into a site-specific vector-field optimization, answering where to build. These parameters ultimately inform a semi-modular building strategy, mediating between modularity and permanence for the required spatio-temporal flexibility. Enabled by the initial phase, the topological relationships among spatial nodes are optimized for improved functional distribution and clustering according to function-targeted fitness criteria, further influencing the configuration of architectonic elements. The resulting space-making framework, developed through successive multi-objective optimization cycles to enhance environmental, structural, and functional performance, is tested across multiple sites, highlighting its adaptability.
Together, these parallel sets of experiments establish a dynamic set of spectra for enhancing building performance and spatial quality, ensuring adaptability and responsiveness to the ever-changing demands of data, space, and energy. The re-positioning of data centres in the urban fabric, through a re-imagined typology, aims to transform today's unwieldy and isolated facilities into integral components of tomorrow's urban ecosystems.
TABLE OF CONTENTS
Introduction
From the earliest cave paintings onward, humankind has had an urge to deposit the information it receives and generates, and to transfer it tangibly to future receivers. This information has been embedded through various modes that have continuously evolved throughout history.
Following the invention of writing, and spanning the era of transition from tablets to books, libraries served as common repository hubs for organizing information. For millennia, paper and the printing press remained the main information storage systems.

The inventions of the transistor and the integrated microchip in the 1950s hinted at the coming digital age. The transition occurred in the late 1990s, when digital storage surpassed paper in cost-effectiveness for storing information.
This advancement enabled a global shift towards new ways to compute, store, and transfer information faster than ever.1 To manage the increasing resource and information traffic, new typologies, such as data centres, emerged in our built environments.
Additionally, the unprecedented power of information processing enabled us to decode humanity itself, leading to the recognition of DNA as arguably the most efficient medium of information storage.2 (DNA: The Ultimate Data-Storage Solution, Scientific American)
Portraying the evolution of information processing, storage, and transfer systems, it is apparent that humankind will continue its mission to process, store, and transfer information to future generations.

Tracing back to the fundamentals, the initial observation of this study is that, since antiquity, three fundamental concepts have been common throughout the evolution of informatics systems:
- Data,
- Space,
- Energy
All of which are necessary to reposit a unit of information, no matter what the medium is.
The complex relationship between these factors raises several questions:
- Is there a hierarchy between them?
- How do they depend on each other?
| DOMAIN |
[Fig. 01] Collected buzzwords around data-centric typologies (illustrated by the authors).
1.1_Data
1.1.1 - What is Data?
The concepts of information and data are often used interchangeably in colloquial language, but it is critical to differentiate them to reveal their multi-layered structure. 3
Although an ever-continuing semantic discourse seeks to dissect them in detail, the general framework of how information is derived and expanded is portrayed by the "DIKW (data-information-knowledge-wisdom)" pyramid, whose precise origin is uncertain, as Wallace stated.4

Involving processes such as abstraction, distillation, and interpretation, multiple layers of organization and meaning are added at each step up the pyramid. By interrelating and structuring multiple data, we produce "information". Patterns of information generate what is called knowledge, and with the ability to judge and execute in context, wisdom is reached.

Although variations have appeared over time, "data" has always set the base for all. It is recognized as the raw material derived from abstracting the surrounding environment using numbers, characters, bits, and symbols. Data can be stored in multiple analogue forms or encoded digitally as binary digits. In this manner, data can be interpreted as a building block: similar to bricks, data are commodities that gain value when they are used and/or stored.
[Fig. 02] Data - Information - Knowledge - Wisdom (DIKW) Pyramid (illustrated by the authors).
[Fig. 03] Data - Brick analogy and its continuous development diagram (illustrated by the authors).
Classifying data is a rather complex problem along multiple dimensions. Data captured as-is are considered "raw data", meaning they have not been "processed" by any means. We are exposed to both raw and processed data through multiple channels.

"Metadata" is data about data, while the "private" and "open data" discourses address which stakeholders may access, use, and share data. "Structured data" resembles the way we store information in physical libraries with books; "unstructured data" flows without a context (text, audio-visual material, logs, and more); and "semi-structured data" combines the two. This is not necessarily a negative feature, but owing to factors such as the internet and expanding social media, around eighty percent (80%) of flowing data are labeled as semi-structured.5
Though all these labels are rather relative, data can be both subjective and objective. In this manner, according to Kitchin, data can be characterized as "socio-technical assemblages".6
[Fig. 04] Various classifications of data (illustrated by the authors).
[Fig. 05] Various classifications of data (illustrated by the authors).
1.1.2 - How much Data do we generate?
To grasp the sheer magnitude of our expanding data landscape, let’s first reduce everything to a single letter, or a single “byte.” By this logic, one megabyte would be equivalent to a short book.
Now consider that, from the dawn of civilization until 2003, humankind collectively produced an estimated five exabytes of data. Today, however, we generate that same volume of five exabytes approximately every twenty minutes. To put this into perspective, it amounts to replicating the entire collection of the British Library—one of the world’s largest libraries—roughly 400,000 times over in less than half an hour.
Such exponential growth underscores how the production of data far outstrips our historical benchmarks, posing both unprecedented challenges and opportunities for how we store, process, and interact with this vast informational ecosystem.
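These figures can be sanity-checked with simple arithmetic (our own calculation, assuming decimal prefixes and the 400,000-library equivalence quoted above):

$$\frac{5\times10^{18}\ \mathrm{B}}{20\ \mathrm{min}} = \frac{5\times10^{18}\ \mathrm{B}}{1200\ \mathrm{s}} \approx 4.2\ \mathrm{PB/s}, \qquad \frac{5\times10^{18}\ \mathrm{B}}{4\times10^{5}\ \text{libraries}} \approx 12.5\ \mathrm{TB\ per\ library\ equivalent}$$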
[Fig. 06] Yearly distribution of data generated, projections, and analogous relationship (retrieved from the book The Dark Cloud: How the Digital World Is Costing the Earth).
[Fig. 07] Temporality of the data production, consumption practices and analogy to the British Library.
Processing this ever-growing volume of information is attributed to the so-called “cloud”—an apparently invisible phenomenon which, in reality, has a deeply rooted physical infrastructure elsewhere. In this sense, data truly matters, and it also possesses a tangible materiality that underpins its digital existence.
In this manner, data matter, and have matter!
1.2_Space
“In a dark, tepid room lies an array of blinking cuboid machines speaking in code. They compute, store, and transmit immortalized memory bytes - information of today’s mortal lives. In this dark, tepid room lie physical matter supporting virtual terrains, whose boundaries unremittingly expand, transcending the perimeters of the space.” – Tang Jialei (Harvard GSD)
1.2.1 - Physicality of Data
Digital information storage, like writing on paper, occupies physical space. It’s not the information itself that requires space, but the physical medium on which it’s stored. The more compact the writing, the more information can fit on the page, provided it remains legible. Similarly, on hard disks, information is stored magnetically, with tiny sections of the disk magnetized to represent binary data (1s and 0s).7
The evolution of data processors from their inception to the present day encapsulates a remarkable journey of technological advancement and miniaturization. This narrative begins in the 1940s with the advent of ENIAC (Electronic Numerical Integrator and Computer), the first electronic general-purpose computer. ENIAC, a large machine weighing approximately 30 tons and occupying 1,800 square feet, signified the dawn of the computing era. Despite its enormous size, it could perform only 5,000 operations per second, a minuscule fraction of contemporary standards.
[Fig. 08] Physicality of data.
[Fig. 09] Data processing apparatuses comparison, the first and the most up-to-date computer (images retrieved from https://penntoday. upenn.edu/news/worlds-first-general-purpose-computer-turns-75/ (left) and https://japan-forward.com/a-look-at-the-magic-behindfugaku-the-worlds-leading-supercomputer/ (right).
Similar to "information", which is embodied in objects, knowledge and know-how are embodied in persons and networks of humans. Humankind is limited in its capacity to acquire and reposit knowledge and expertise, which raises the need for the accumulation of information in the form of data.8
The processes to analyse, process, and store data heavily involve apparatuses that are entwined and embedded within the ever-growing infrastructures.
The past decades saw a relentless drive towards "miniaturization", as per Moore's Law. In the 1960s, Gordon Moore hypothesized that the number of transistors on a microchip would double roughly every two years, leading to exponential increases in computing power.9 This prediction has largely held, propelling us into an era where billions of transistors can be integrated onto chips smaller than a fingernail. For example, the Intel 4004, introduced in 1971, was the world's first microprocessor, containing 2,300 transistors and executing around 92,000 operations per second. In contrast, modern processors consist of 16 billion transistors and perform trillions of operations per second. This comparison illustrates the drastic advancement in processing power and efficiency over the past few decades. Despite the tremendous progress achieved through miniaturization, the pursuit of more powerful and efficient processors persists.
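The arithmetic of that doubling is easy to reproduce; the sketch below (our own illustration, not from a cited source) projects counts from the Intel 4004 baseline and shows that strict two-year doubling actually overshoots the ~16 billion figure above, i.e. the law has held only approximately in recent decades:

```python
# Projecting transistor counts under a strict Moore's Law doubling.
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Count assuming a doubling every `doubling_years` from the 4004."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2023):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
# 2023: ~154 billion under strict doubling, versus ~16 billion in practice.
```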
Despite this miniaturization, our demand is increasing enormously. That is why, from the first computer to the fastest, spatial requirements have stayed comparable.
Where multiple computers communicate is called a "Data Centre"
1.2.2_Edge - Cloud Data Centres
The data journey begins with connected devices such as smartphones and smart cars. Initially, these data are sent to edge data centres. These facilities are smaller and decentralized.10 Located close to end-users, they are positioned in the urban fabric to minimize latency and improve data-processing speed.

Their locations prioritize proximity to end-users, introducing a siting factor beyond miniaturization. From there, data progresses to large hyperscale data centres with vast data processing and storage capabilities. The increasing use of "smart" devices that demand rapid access to data
[Fig. 10] The continuous journey of Data (redrawn from thesis DataHub: Designing Data Centers for People and Cities, Harvard GSD) (ReIllustrated by the authors)
and its processing is driving research into new solutions for managing latency. Edge computing has emerged as a promising approach to meet the latency demands of next-generation 5G networks by positioning storage and computing resources closer to end-users.
1.2.3_Monolithic – Modular Data Centres
Whether located in remote or urban environments, data centres often manifest as large, static enclosures designed to house complex hardware. Evolving operational demands have prompted a shift toward modular and prefabricated approaches, which enable incremental expansion or downsizing to match fluctuating usage levels.

By employing standardized, repeatable components, these approaches minimize the risk of "overscaling", where a facility is built with significantly more capacity than needed, leading to underutilized hardware and wasted resources. Designing smaller, interchangeable modules instead of massive, monolithic infrastructures ensures that data centres can adapt more efficiently to changing computational loads and operational requirements.
[Fig. 11] Monolithic and Modular approaches (Illustrated by the authors)
1.2.4_Public – Private Data Centres
As data centres have grown in scale, direct human access to these facilities has steadily diminished, rendering them "black boxes" where the processes behind personal data remain largely invisible. This lack of transparency perpetuates uncertainty about how data are handled and where they are physically stored. By incorporating an inclusive interface, these typically closed-off spaces can offer a clearer glimpse into the data-processing infrastructure, fostering a more secure and equitable digital future.
To investigate opportunities for such inclusivity, multiple case studies from around the globe were collected and superimposed, revealing a consistent absence of an adaptable, context-informed space-making strategy. Existing models largely prioritize efficiency and security at the expense of accessibility or flexibility. The findings highlight the need for an approach that not only maintains robust operational standards but also adapts to evolving data demands—ultimately bridging the gap between technical necessity and broader awareness of data processes.
[Fig. 12] Data centres, developmental trajectory and reducing human accessible spaces (Re-Illustrated from Data-Polis)
1.3_Energy
With the interdependence of data and space established, how does the equation adapt when energy is brought into context? A space that hosts data also expects to be continuously fed with a source of energy. This energy is converted to and from different states; how does this conversion relate to the distribution and consumption of data and space?

For a unit of data (a bit), energy requirements outweigh spatial ones, as digital means have become the consensus for reaching and sharing information across public and private domains. Although the means to store data have become more efficient, the demand to process data is accelerating exponentially, keeping energy demands continuously in question.
1.3.1 - Data Centre - Definition and Core Components:
A data centre is a specialized facility designed to house an array of networked computer servers that store, process, and transmit data. These centres are equipped with redundant power supply systems, advanced cooling systems, and robust security measures to ensure continuous operation and data protection. At their core, data centres comprise servers, storage systems, networking infrastructure, and environmental controls, all functioning cohesively to support various applications and services. As we move to the Information Age, we’re now dealing with much more than just communication.
Programmatically, a data centre is divided into four main sections: IT load, power distribution and storage, cooling systems, and physical security. There may also be additional areas like small office spaces.
Data centres require significant energy and utility support, such as electricity and water, to operate. A large-scale data centre can cover 1.3 million square feet and use as much power as a medium-sized town. Like earlier infrastructure, data centres are essential and resource-intensive.
[Fig. 13] Components of a data centre, from raw information (input) to processed information (output).
1.3.2 - Energy Demanding Programmes
IT and cooling systems are among the most energy-intensive programs in conventional data centres, with cooling often consuming more energy than the IT hardware itself. As servers, storage, and networking devices generate significant heat during operation, maintaining optimal temperatures requires extensive cooling infrastructure, such as air conditioning, airflow management, and liquid cooling systems. These cooling demands typically represent the largest portion of a data centre’s energy consumption, surpassing even the power used to run IT equipment.
[Fig. 14] Energy demanding programmes of conventional data centres (generated by the authors)
[Fig. 15] De-constructing a data centre (generated by the authors)
[Fig. 16] Traditional Air Cooled DC Diagrams (retrieved and edited from https://journal.uptimeinstitute.com/alook-at-data-center-cooling-technologies/ )
Traditional Information Technology (IT) systems aimed to remove the excess heat from the computers by circulating a treated (filtered, temperature ensured) air around the room.
The electricity consumed by the computing hardware dissipates as heat due to resistances in the circuits. The resultant heat must be dissipated to ensure the required thermal conditions for the information technology equipment (ITE) to operate properly.
Focusing on the core IT stacks in data centres, a wide range of cooling technologies are utilised, employing air, water, or engineered fluids. In the context of this research, an overview of multiple solutions was conducted to determine the most convenient option offering both the required flexibility in scaling and the highest possible heat-reuse capacity.

As the power density of chips increased over time, air became a less viable medium for projected demands.
Within the scope of the research, various liquid-cooling solutions were dissected with regard to their scalability, additional spatial and infrastructural requirements, and heat-removal procedures.

Although many modifications multiply the available cooling solutions, the selected ones differ distinctly from one another and set the base for their typologies. Ranging from widespread air-cooling solutions to the latest immersion-cooling options, multiple solutions were compared on aspects such as scalability and modularity, spatial and infrastructural requirements, characteristics of the dissipated heat, and required integrated systems.

Immersion cooling is a type of liquid cooling in which server units are submerged in a cooling fluid held in specially designated tanks. Water-cooled server racks, on the other hand, resemble conventional rack-mount servers, but they are networked with water blocks and fluid-circulating tubing to aid heat dissipation. Because the liquid is in maximal contact with the generated heat, the thermal-conductivity advantage is fully exploited.
[Fig. 17] Comparison of cooling systems in Data Centres (generated by the authors)
Immersion-Cooling solutions in data centres provide PUE (Power Usage Effectiveness) values in a range of 1.02 to 1.04, portraying that they use up to 50% less energy than their traditional air-cooled counterparts while handling the same computational load. 11
Additionally, immersion-cooling solutions provide five times more power density per rack than traditional air-cooled solutions; they therefore fit more computational capacity into a smaller volume, far more efficiently.12 Within the scope of this research, the single-phase immersion cooling solution has been identified as the most feasible option due to its flexibility through modularity and its effective heat transfer to the submerged fluid. By circulating the heated fluid through a heat exchanger, the resulting heat is efficiently transferred and can be directed to where it is needed through additional material interventions.
[Fig. 20] Working principle of a Liquid-to-Air heat exchanger (retrieved from https://www.altexinc.com/case-studies/air-cooler-recirculation-winterization/).
A heat exchanger is a mechanical device designed to efficiently transfer thermal energy between two or more media at different temperatures without mixing them. It operates based on the principles of conduction and convection, enabling the transfer of heat through solid walls and fluid motion.

In data centres, heat exchangers are vital for maintaining the optimal operating temperatures of ITE spaces, enhancing energy efficiency, and contributing to sound thermal-management practices.
[Fig. 21] Immersion cooling infrastructure example without heat reuse (retrieved from https://pictures.2cr. si/Images_site_web_Odoo/Partners/Submer/2CRSi_ Submer_Immersion%20cooling%20EN_April_2023.pdf).
The input and output parameters of the heat exchanger are determined by the operational requirements of ITE spaces. Considering the utilization of Immersion Tank ITE Spaces, where high-performance computing, AI, and supercomputers generate substantial energy consumption, liquid cooling systems have demonstrated superior efficiency in managing the thermal loads of high-density data racks.
Liquid-to-air heat exchangers are considered best practice among liquid-based systems. Dry coolers, a type of air-to-liquid heat exchanger, transfer heat from the liquid to the surrounding air. In this system, warm liquid flows through a network of coils or tubes, while large fans blow ambient air over these coils. As the air absorbs heat from the liquid, the liquid is cooled before returning to the system. Typically, dry coolers dissipate heat directly into the air, but this heat can also be repurposed.
[Fig. 22] Types of Heat Exchangers (generated by the authors)
[Fig. 23] Heat Circulation of Immersion cooling system with liquid to air heat exchanger (generated by the authors).
[Fig. 24] Redrawn from (The Goldman Sachs Group, Inc,. AI/ data centers' global power surge and the Sustainability impact, 2024)
1.3.6 – Increasing Demand
The International Energy Agency (IEA) estimates that data centres, cryptocurrencies, and artificial intelligence (AI) consumed approximately 460 TWh of electricity globally in 2022, representing nearly 2% of the world's total electricity demand. Looking ahead, the energy demand of data centres is expected to grow significantly due to rapid technological advancements and the evolution of digital services. Goldman Sachs projects that by 2030, global electricity demand from data centres could range between 680 and 1,400 TWh, with a baseline estimate of around 1,050 TWh. This increase, ranging from an additional 130 to 850 TWh compared to 2024 levels, is comparable to the electricity consumption of countries like Sweden or Germany.13
Data centres require well-defined metrics to accurately measure performance and address inefficiencies. Power Usage Effectiveness (PUE) is a key ratio comparing the total energy consumed by the data centre facility to the energy used by the IT equipment. PUE is crucial for evaluating and enhancing the energy efficiency of data centres. By comprehending and optimizing PUE, data centre operators can minimize environmental impact and improve overall performance.
Total Facility Energy: This value encompasses all the energy utilized by the entire data centre, including:
IT Equipment: Servers, storage, network switches, and other computing hardware.
Mechanical Systems: Air conditioners, chillers, compressors, pumps, and other mechanical infrastructure.
Electrical Systems: UPS systems, power distribution units (PDUs), transformers, lighting, and miscellaneous loads.
IT Equipment Energy: This value refers to the energy directly consumed by the IT equipment for data processing, storage, and networking.
[Fig. 25] PUE values and their corresponding efficiency values. (retrieved from https://submer.com/blog/howto-calculate-the-pue-of-a-datacenter/).
[Fig. 26] Power Usage Effectiveness (PUE) calculation (generated by author)
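Expressed in code, the ratio defined above is a one-line function; the example values below are illustrative, not measurements from any specific facility:

```python
# Minimal sketch of the PUE ratio defined above (illustrative values).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

print(pue(1400.0, 1000.0))  # 1.40 -- near the industry average of 1.4
print(pue(1030.0, 1000.0))  # 1.03 -- within the immersion-cooled range cited earlier
```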
Improvements in reducing Power Usage Effectiveness (PUE) and enhancing energy efficiency in data centres are an increasingly significant focus. Hyperscale colocation campuses and many large new colocation facilities are being designed with PUE values significantly below the industry average of 1.4. For instance, Scala Data Centres is constructing its Tamboré Campus in São Paulo, Brazil, aiming for 450 MW with a PUE of 1.4. Cloud hyperscale data centres of companies like Google, Amazon Web Services, and Microsoft already report PUE values of 1.2 or lower at some sites.14
With certain design practices, PUE can be reduced, but reutilizing expelled heat can further lower the 'NET' PUE. Instead of wasting this heat, it can be redirected for beneficial uses like supplying district heating, or supporting agricultural practices. By repurposing this by-product, data centres can minimize their environmental impact while creating more sustainable, multifunctional developments that benefit local communities and industries beyond IT operations.
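One established formalization of this 'net' effect is The Green Grid's Energy Reuse Effectiveness (ERE) metric, which credits reused heat against the facility total:

$$\mathrm{ERE} = \frac{E_{\text{total facility}} - E_{\text{reused}}}{E_{\text{IT}}}$$

Unlike PUE, which by definition cannot drop below 1.0, ERE rewards redirecting expelled heat to uses such as district heating or cultivation.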
[Fig. 27] PUE Reduction via repurposing the excess heat (generated by author)
Just as district heating systems capture waste heat from industrial processes to warm buildings, reutilizing expelled heat from data centres can serve agricultural purposes, providing a consistent and sustainable alternative. Data centres generate large amounts of excess heat during their continuous operations, which can be repurposed to support controlled-environment agriculture. Unlike district heating, which often fluctuates with seasonal demand for heating homes and offices, agricultural heat reuse delivers consistent, year-round benefits. It can maintain optimal temperatures for indoor farming, greenhouses, and vertical cultivation, enabling stable crop production regardless of outdoor weather conditions.
[Fig. 28] PUE differences as per the repurposing the excess heat (generated by author)
[Fig. 29] Image courtesy of Solomon R. Guggenheim Museum (retrieved from https://metalocus.es/sites/default/files/metalocus_countryside_koolhaas_guggenheim_01.jpg).
In the past, agriculture was seen as separate from urban life, with farms confined to the outskirts of cities. Today, however, the relationship between the city and agriculture is evolving, with both increasingly conceived as interconnected systems. The merging of urban spaces and agricultural production opens up new possibilities for integrating food cultivation within the fabric of the city itself.
One such opportunity lies in the reutilization of waste heat from data centres to support controlled-environment agriculture. This integration of heat recovery into agricultural systems not only improves energy efficiency but also addresses food security challenges by promoting local, resilient food production. By transforming waste heat into a valuable resource, data centres can contribute to greener urban infrastructure, fostering sustainable practices that reduce emissions and enhance public engagement through urban farming activities.
Data centres contribute significantly to a city's electricity consumption. The majority of the energy they consume is converted into heat; current cooling technologies, both air- and liquid-based, typically release this heat into the atmosphere, representing a potential waste of energy. Additionally, the location of many data centres on the outskirts of cities poses challenges for digital infrastructure, particularly in terms of latency. As more urban services rely on real-time data processing, proximity to data centres becomes critical to ensure faster, more reliable connections.
Meanwhile, agriculture—an essential chain of production—primarily takes place on the outskirts of cities, requiring significant resources for transportation, storage, and distribution to urban areas.
[Fig. 30] Farming activities and data centres are isolated entities (generated by author)
Cities are living systems that consume resources, produce waste, and transform inputs (energy, water, materials) into outputs (goods, services, emissions). Effective heat recovery, and the hybridization of a programme that citizens can engage with, can significantly reduce the carbon footprint of data centres while contributing to the public good.15 What if we reframe data centres not as isolated entities but as productive nodes in the city's resource system? Or: can my data feed me?
Integrating urban farms with data centres can transform these typically secluded, inaccessible spaces into visible and functional parts of the
community, effectively breaking the notion of data centres as "hidden entities." By allocating space for community gardens, residents can engage directly with the facility, growing their own produce and fostering a connection to the site.16
These allocations serve as social hubs, promoting interaction and collaboration. Additionally, regularly scheduled farmers' markets within the integrated facility can showcase and sell produce grown on-site, drawing visitors and creating a lively, market-like atmosphere. This visibility and community engagement make the data centre an active, integral part of the urban environment.
[Fig. 31] Public integrated farming activity via repurposing excess heat from Data Centre (generated by author)
1.4_Intersections
The trilogy of these individual domains and their intersections hinted at the hidden potential of such a wide problem space. Through the literature and case studies, we questioned data centre building practices along with the potential value this typology can offer in various contexts.

Beyond the primary domains of data, space, and energy, contextual adaptation, waste-energy utilization, and spatial hybridization emerged as three interrelated tangents extracted from the study of their intersections.
[Fig. 32] data, space and energy (illustrated by authors).
[Fig. 33] Intersections of data, space and energy (illustrated by authors).
Since the data, space and energy demands of urban actors are influenced by the urban fabrics they share, the emerging problem space requires a contextual lens for further resolution.
[Fig. 34] Inferences from the intersection of data, space and energy (illustrated by authors).
1.5_Context
[Fig. 35] Global data centre distribution (retrieved from https://espace-mondialatlas.sciencespo.fr/en/topic-contrasts-and-inequalities/map-1C20-EN-locationof-data-centers-january-2018andnbsp.html)
Although data centres are globally distributed, they are predominantly clustered in North America (particularly in the USA) and Europe, with the highest concentration found in the United Kingdom. In Europe, most data centres are located in northern countries such as the Netherlands, Germany, and France, but the UK stands out with the greatest density. It may also be worthwhile to consider volume or other measures of data-exchange traffic: the largest exchanges outside the United States include Frankfurt, Amsterdam, London, Moscow, and Tokyo.
[Fig. 36] Data centre hubs in northern Europe (retrieved and redrawn from https:// www.datacentermap.com/united-kingdom/)
Currently, the UK operates approximately 350+ data centres, with 120+ of these situated in London. The demand for data processing and storage is surging due to the AI boom, driving significant growth and development of data centres in London.
1.6_Research Question
How can we generate a scalable data centre typology in the urban fabric of London by defining a public interface that utilizes excess heat for cultivation purposes?
Regarding London's challenging urban fabric and emerging needs, we refined this to ask: "How can we generate an adaptable and accessible data centre typology in London, utilizing its excess heat for cultivation?"
The question is addressed in two phases, with the material system (first phase) informing the space-making framework (second phase):

How can we generate an adaptable and accessible data centre typology in London that utilizes its excess heat for urban cultivation?
| METHODS |
2.1_ Data Sampling
The site selection method in this study uses data sampling by overlaying multiple maps to identify optimal locations for a data-processing typology that hybridizes a cultivation function. In this way, relevant maps are collected from various sources and evaluated based on their legend information.
A grid of sampling points across Central London is generated, and the input data from each map are extracted and juxtaposed. Each sampling point is attributed with the corresponding extracted values. Various maps, serving as source criteria, are weighted according to their relevance to the designated goals, adding up to a total value for each sample point. These goals aim to ensure the chosen locations enhance operational efficiency, community integration, and environmental benefits. These points are then used as center points for potential sites to be identified around them.
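A minimal sketch of this weighted-overlay logic is given below; the criterion names, weights, and random values are illustrative placeholders rather than the maps actually used in the study:

```python
# Weighted-overlay site sampling sketch (illustrative criteria and weights).
import numpy as np

# Each criterion map rasterized onto the same grid of sample points,
# normalized to [0, 1]; random values stand in for real map data.
rng = np.random.default_rng(0)
maps = {
    "fibre_proximity":   rng.random((100, 100)),
    "heat_demand":       rng.random((100, 100)),
    "land_availability": rng.random((100, 100)),
}
weights = {"fibre_proximity": 0.5, "heat_demand": 0.3, "land_availability": 0.2}

# Weighted sum per sample point; the highest-scoring points become the
# centre points of candidate sites.
score = sum(w * maps[k] for k, w in weights.items())
flat = np.argsort(score, axis=None)[-5:]
print("top candidate grid indices:", list(zip(*np.unravel_index(flat, score.shape))))
```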
2.2_ Computational Fluid Dynamics
Computational Fluid Dynamics (CFD) analysis combines data models to predict the performance of the input in terms of its response to fluid flow and heat transfer. It examines several fluid flow properties, including temperature, pressure, velocity, and density.17
This methodology is implemented in different scales to support different stages of the design development. The implementation scale varies from the urban context level of London to the local assembly level of the proposed material system.
When needed, Fast Fluid Dynamics (FFD) methodologies were also utilized to efficiently predict, compare, and contrast the performance of the input geometry while using less computing power.
[Fig. 37] Data sampling (generated by the authors)
[Fig. 38] Computational Fluid Dynamics Example (generated by the authors)
2.3_ Heat Transfer Mechanisms
Heat transfer mechanisms refer to the ways in which thermal energy moves within a medium, as well as from one medium to another, following the principles of thermodynamics. This research primarily draws on the second law of thermodynamics: during thermal contact, heat flows spontaneously from the hotter medium to the colder one until thermal equilibrium is achieved.19 Computational workflows incorporating specialized equations are used for a quick and accurate understanding of these mechanisms, enabling a performance-driven design process. The design criteria for the proposed material system depended heavily on the characteristics of the thermal energy within it, and design optimizations to facilitate this movement were implemented based on the outcomes of this analytical study.
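The equilibrium behaviour described above can be illustrated with a simple lumped-capacitance model (our own sketch with illustrative parameters, not the specialized equations of the actual workflow):

```python
# Two media in thermal contact exchange heat until equilibrium.
def equilibrate(T_hot, T_cold, C_hot, C_cold, UA, dt=1.0, steps=3600):
    """Lumped model: heat flow q = UA * (T_hot - T_cold), in watts.
    C_hot / C_cold are thermal capacitances in J/K."""
    for _ in range(steps):
        q = UA * (T_hot - T_cold)   # positive while the hot side is hotter
        T_hot -= q * dt / C_hot
        T_cold += q * dt / C_cold
    return T_hot, T_cold

# e.g. 50 C exchanger fluid against an 18 C enclosed air volume
print(equilibrate(T_hot=50.0, T_cold=18.0, C_hot=4.2e5, C_cold=1.2e5, UA=50.0))
```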
2.4_ Additive Manufacturing of Lattice Structures
This exploration included additive manufacturing of complex geometries such as minimal surfaces, triply periodic minimal surface (TPMS)-based lattice structures, and their compositions. TPMS exhibit minimal-surface properties in a periodic manner across three dimensions, posing considerable fabrication challenges for conventional methods.18 Lattice structures, made up of repeating unit cells, offer a combination of light weight and high strength.
The intricacy of these geometries makes additive manufacturing essential, as it eliminates the need for numerous unique formworks or jigs, increasing stability and performance while reducing fabrication time and material waste.
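As a reference for how such geometries are generated, the sketch below samples the standard trigonometric approximation of the gyroid; the surface is its zero level set, and a thin band around it yields a printable shell:

```python
# Sampling the gyroid implicit field on a voxel grid.
import numpy as np

def gyroid(x, y, z, cell=1.0):
    """sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0 defines the surface."""
    k = 2 * np.pi / cell
    x, y, z = k * x, k * y, k * z
    return (np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

pts = np.linspace(0.0, 2.0, 64)                  # two unit cells per axis
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
shell = np.abs(gyroid(X, Y, Z)) < 0.3            # thin shell around the zero set
print(f"shell occupies {shell.mean():.1%} of the volume")
```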
2.5_ Material Test: Phase Changing Materials
To facilitate hybridized cultivation, this study explored the potential integration of phase-changing materials (PCMs). The selection of suitable PCMs within specific temperature ranges involved a series of material experiments, guided by the heat characteristics obtained from relevant datasheets.
Additionally, the latent heat capacity, cycle stability, and volumetric changes during phase transitions of the selected materials are examined through material experiments using multiple data channels, including continuous logging thermometers and thermal imaging methods.
[Fig. 39] Fabricated set of TPMS-Based Lattice Structures (photograph by the authors)
[Fig. 40] Material Test: Phase Changing Materials. (photograph by the authors)
2.6_ Evolutionary Multi Objective Optimization
Evolutionary multi-objective optimization involves multi-criteria decision-making. Using principles of genetic evolution and an evolutionary multi-objective optimization engine, multiple solutions are generated, tested, and evaluated based on their performance against the specified objectives.20
The workflow for the design development phase required an evolutionary multi-objective optimization process to generate, compare, and contrast global assembly options in relation to the established and weighted objectives.
To achieve this, Wallacei, an evolutionary engine plug-in for Grasshopper 3D, was utilized. It also allows users to select, reconstruct, and output any phenotype from the population after the simulation is complete.
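At the core of any such engine is non-dominated (Pareto) filtering of the population; the sketch below shows that mechanism in isolation (our own illustration, not Wallacei's internal implementation):

```python
# Pareto filtering: keep solutions not dominated by any other.
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# e.g. phenotypes scored on (structural mass, solar obstruction)
scores = [(4.0, 2.0), (3.0, 3.0), (5.0, 1.0), (4.5, 2.5)]
print(pareto_front(scores))   # the last phenotype is dominated and drops out
```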
[Fig. 41] Evolutionary Multi-Objective Optimization Process (generated by the authors)
2.7_Volumetric Site Analysis
This method explored the simultaneous mapping of environmental conditions for the volumetric analysis of the selected site and its impact on early-stage design. The methodology is based on deconstructing the urban site into a volumetric grid of points. For each of these points, various physical properties, such as solar radiation, airflow, and visibility, are computed. Subsequently, interactive visualization techniques allow for the observation of the site at a volumetric, directional, and dynamic level, revealing information that is typically invisible.21
The research uses this analysis to identify potential areas for the deployment of built structures by examining the field within the volume and assessing the site's future potential for growth. This analysis is then layered with multiple objectives, providing a potential deployment field defined by a weighting system.
[Fig. 42] Volumetric Site Analysis: Process Diagram (generated by the authors)
2.8_Network Analysis – Shortest Path
To assess the topological conditions created by the assembly of components, Space Syntax and graph theory methodologies have been utilized at the assembly scale. The shortest walk refers to the distance cost between two line segments, weighted by three key factors: metric (least length), topological (fewest turns), and geometrical (least angle change).22
In this context, a further analysis of the clustering and access conditions was conducted using the "ShortestWalk" plugin23 within the Grasshopper environment of Rhino3D. This analysis aims to ensure overall accessibility to the main units by examining the relationships between the space-making units.
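A sketch of such a weighted search is given below; the blend weights and the toy graph are illustrative assumptions, not the ShortestWalk plug-in's internals:

```python
# Dijkstra-style shortest walk with metric, topological and geometrical costs.
import heapq

def edge_cost(length, turns, angle, w=(1.0, 0.5, 0.25)):
    """Composite of least length, fewest turns, least angle change."""
    return w[0] * length + w[1] * turns + w[2] * angle

def shortest_walk(graph, start, goal):
    """graph: {node: [(neighbour, length, turns, angle_change), ...]}"""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nb, length, turns, angle in graph.get(node, []):
            if nb not in seen:
                heapq.heappush(pq, (cost + edge_cost(length, turns, angle),
                                    nb, path + [nb]))
    return float("inf"), []

g = {"A": [("B", 5, 0, 0.0), ("C", 3, 1, 0.8)],
     "B": [("D", 2, 0, 0.0)],
     "C": [("D", 2, 1, 0.4)]}
print(shortest_walk(g, "A", "D"))   # ~ (6.3, ['A', 'C', 'D'])
```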
2.9_Shape Grammar
As "mission-critical" typologies, one key inference from the data centre case studies was their interdependent sub-cluster requirements, which enhance performance efficiency and regulate accessibility within the facilities. Within this typological context, a shape grammar approach is proposed to define a set of mutual rules while generating an adaptable configuration of these "local-scaled" units, which gather into sub-clusters at the "regional scale" and are ultimately assembled at the "global scale".

Shape grammars, being non-deterministic, provide users with various choices of rules and application methods at each iteration.24 This enables multiple potential outcomes as the generation proceeds. The growing set of relationships and complex interdependencies create a laborious process that is prone to errors if carried out manually. Accordingly, a shape grammar interpreter that automates the process is required.

In this context, the "Assembler" plug-in25 is utilised within the Grasshopper environment under Rhino3D. The plug-in's aim is defined as "distributing granular decision in an open-ended process," automating the task of determining "Which rule to apply" or, as stated, "Where do I add the next object?".
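The granular decision the plug-in automates ("which rule to apply?") can be caricatured in a few lines; the rule catalogue below is a hypothetical stand-in anticipating the functional chain developed later, not Assembler's actual data model:

```python
# Non-deterministic shape-grammar growth: pick one applicable rule per step.
import random

# rule: (required existing unit type, unit type added)
rules = [("ITE", "HEAT_EXCHANGER"),
         ("HEAT_EXCHANGER", "MATERIAL_SYSTEM"),
         ("MATERIAL_SYSTEM", "CULTIVATION"),
         ("CULTIVATION", "CULTIVATION")]

assembly = ["ITE"]                                   # seed unit
for _ in range(6):
    applicable = [r for r in rules if r[0] in assembly]
    if not applicable:
        break
    _, added = random.choice(applicable)             # the granular decision
    assembly.append(added)
print(assembly)   # one of many possible outcomes
```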
[Fig. 43] Network Analysis – Shortest Path (generated by the authors)
[Fig. 44] Space-making units and possible combinations (generated by the authors)
2.10_Finite Element Analysis
2.11_BESO Optimization
To assess and refine the proposed structural configuration, a finite element analysis (FEA) was first conducted to model the behavior of beams and columns under various loading conditions. By defining boundary conditions, load magnitudes, and material properties, the FEA provided insight into critical stress regions and overall deflection patterns. Subsequently, an automated cross-section optimization was performed, iteratively selecting and refining beam and column profiles in accordance with both load-bearing requirements and permissible deflection limits.
In parallel, BESO (Bi-directional Evolutionary Structural Optimization) beam-reduction methodologies, combined with the cross-section optimization studies and drawing upon evolutionary topology-optimization principles, were employed to systematically eliminate underutilized or minimally stressed members, thereby decreasing material usage and overall system weight. This sequential combination of FEA, cross-section optimization, and beam-reduction strategies ensured that the final design effectively balanced structural efficiency, weight minimization, and long-term adaptability.

In this context, the "Karamba3D" plug-in26 is utilised within the Grasshopper environment under Rhino3D.
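The BESO loop itself reduces to a simple iterative structure; the sketch below is an abstraction with a toy utilization function, not Karamba3D's implementation:

```python
# BESO-style member reduction: repeatedly drop the least-utilized members.
def beso(members, utilization, target_fraction=0.6, removal_rate=0.05):
    """members: list of member ids.
    utilization: callable returning {id: stress ratio}, re-evaluated each
    round (i.e. re-running the FEA on the reduced structure)."""
    target = int(len(members) * target_fraction)
    while len(members) > target:
        u = utilization(members)                       # FEA pass
        n = min(max(1, int(len(members) * removal_rate)),
                len(members) - target)
        members = sorted(members, key=u.get)[n:]       # drop least stressed
    return members

# toy model: a member's id doubles as its stress ratio
ids = list(range(100))
print(len(beso(ids, lambda ms: {i: i for i in ms})))   # -> 60
```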
| RESEARCH DEVELOPMENT |
3.1_Phase Changing Materials
3.1.1 - Need for a Phase Changing Material
As per the previously mentioned concept, a hybridization that utilizes the waste heat for cultivation has been proposed. However, computational loads fluctuate by up to 40% between day and night, resulting in varying amounts of released heat. Since the cultivation units require stable thermal conditions, a material system is needed to passively regulate heat transfer from the IT units to the agricultural areas.
For the development of the overall system, the inlet and outlet temperatures of the fluids in the dry cooling heat exchangers will be crucial. Since the project involves ITE Spaces, which consist of immersion cooling tanks, and Cultivation Units designed to accommodate crops and vegetation, accurately determining these temperatures is essential for advancing the design and conducting detailed physical experiments.
Immersion Cooling Inlet-Outlet Temperatures:
-Inlet Temperature: around 40°C
-Outlet Temperature: 50°C (10°C higher than the inlet temperature)
Farming Unit Inlet-Outlet Temperatures:
-Inlet Temperature: Typically, between 18°C and 24°C
-Outlet Temperature: It should stay within 2°C of the inlet temperature to ensure a stable growing environment.
The immersion cooling units need to lower the liquid temperature from 50°C to 40°C, while the cultivation space must stay between 18°C and 24°C. This thermal regulation can be achieved passively by a Phase Change Material (PCM), utilizing its latent heat storage capacity.
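As a rough sizing illustration (our own numbers; the latent heat is a typical datasheet value for calcium chloride hexahydrate, on the order of 190 kJ/kg), the PCM mass needed to buffer a given heat surplus purely through latent storage follows from Q = m·L:

```python
# PCM mass sizing from latent heat: Q = m * L.
LATENT_HEAT = 190e3            # J/kg, typical datasheet value for CaCl2.6H2O

def pcm_mass(surplus_kw, hours):
    """Mass absorbing `surplus_kw` of excess heat for `hours` at the
    phase-change plateau, ignoring sensible heat."""
    energy_j = surplus_kw * 1e3 * hours * 3600
    return energy_j / LATENT_HEAT

# e.g. a 40% day-night swing on a 10 kW heat-exchanger output, over 12 h
print(f"{pcm_mass(4.0, 12):,.0f} kg")   # ~909 kg
```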
[Fig. 45] PCM as passive thermal regulator (generated by author)
[Fig. 46] Excess heat released to atmosphere in current scenario (generated by author)
[Fig. 48] Excess heat released to agricultural unit without any regulation (generated by author)
[Fig. 47] PCM as passive thermal regulator (generated by author)
[Fig. 49] PCM selection chart (the base graph retrieved from https://thermalds.com/phase-change-materials/)
A PCM's performance is predominantly determined by its material property metrics, such as density, thermal conductivity, latent heat of fusion, and phase change temperature. Further influential aspects include cycling stability, toxicity and flammability, recyclability, and cost-effectiveness.27
PCMs are mainly categorized into "organic", "inorganic", and "eutectic" according to their respective properties. Each class has a distinct range of thermochemical characteristics and operating temperatures, making some more suitable for specific applications than others.
The surveyed graph portrays the melting-temperature ranges of various PCM kinds. Using the chart, "hydrated salts", "paraffins", and "fatty acids" were found to coincide with the melting-temperature range related to the heat exchanger input/output values.

Among these, salt hydrates, an inorganic class, were preferred due to their ease of maintenance, wide range of customization options, low volume change between phases, and non-flammable nature.
[Fig. 50] Filtered PCM options (generated by the authors)
Within the "salt hydrates" class, Calcium Chloride Hexahydrate (CaCl2.6H2O) was selected as the PCM due to its wide availability and overall fit to the expected criteria.

[Fig. 51] Selected PCM phase change temperature graph (retrieved from the article "Thermophysical parameters and enthalpy-temperature curve of phase change material(...)")28
[Fig. 52] Selected PCM through its multiple phases (photograph by authors)
3.1.3 - PCM Incorporation Techniques
[Fig. 53] Typical encapsulation layer diagram (generated by the authors)
There are two mainstream encapsulation techniques for PCMs. Micro-capsules are characterized as capsules with a diameter of less than 1 cm, while macroencapsulation refers to a broader range of applications, typically with a diameter of more than 1 cm.29
Respectively, macro-encapsulation of PCMs helps the system to: 1) prevent significant phase separation; 2) quicken the pace of heat transmission; and 3) give the PCM infill a self-supporting structure.
In complement to the intrinsic properties of PCMs, the capacity of energy economizing also relies heavily on the design of the structure, as well as the thickness and location of the respective enveloping PCM layer regarding the surrounding space within the envelope.30
Several studies indicate that the PCM layer should be positioned near the heat source.31 For cooling performance, the building element must have the PCM layer applied on the outside; conversely, it ought to be situated nearer the interior for heating purposes.32

In this manner, a PCM layer expected to harness the excess heat from the heat exchangers is envisioned as a porous panel system that allows the hot air to circulate around the PCM infill.
3.2_Introducing Triply Periodic Minimal Surfaces
[Fig. 54] TPMS surface ability to subdivide a volume into two equal parts (retrieved from https://blog.fastwayengineering. com/3d-printed-gyroid-heat-exchanger-cfd)
The encapsulation method and its material need to meet particular criteria to be compatible with the building materials concerning the PCM it encapsulates:33
1) forming a shell around the PCM;
2) preventing leakage of the PCM when it is molten;
3) performing as expected when encountering mechanical and thermal loads.
Examining the described parameters, an additive manufacturing potential for a macro-encapsulating shell was identified. The porosity attribute and the surface characteristics directed the focus towards experimenting with Triply Periodic Minimal Surfaces, with the attributes described in the figure above.
3.2.1 - TPMS-Based Cellular Structures - Surface Type
Triply Periodic Minimal Surfaces (TPMS) provide effective and passive ways to improve heat transfer performance.34 Their configurations, which include Schwarz-P, Diamond, Neovius, and Gyroid, offer a high surface-area-to-volume ratio and comparatively complex geometries. Because of these characteristics, TPMS are suitable for high-temperature and high-pressure settings. In addition to their thermo-hydraulic characteristics, TPMS outperform conventional systems in heat transfer efficiency and pressure-drop reduction.
Utilizing these surface definitions, various shell geometries have been tested to efficiently subdivide the volume into PCM infill and void spaces for hot air circulation. The resultant shells are attributed as TPMS-Based Porous Cellular Structures.
[Fig. 56] TPMS-based shell generation and respective Boolean operations illustrating solid-void conditions (generated by the authors)
A series of computational fluid dynamics experiments were conducted to analyse and observe how different surface types respond to the output air characteristics from the heat exchanger. The hot air values were extracted from data sheets of the respective heat exchanger modules.
It was observed that while the diamond and gyroid configurations perform better at redirecting the hot air, the pockets generated within the other two create potential heat traps.

Boundary condition: 11.35 m/s hot air out of the heat exchanger; surface types tested: Gyroid, Schwarz-P, Diamond, Neovius.
[Fig. 57] Outputs of the CFD Simulation (generated by authors)
[Fig. 58] Outputs of the CFD Simulation (generated by authors)
The achieved mathematical model and generative shell-generation pipeline allow multiple mathematical modifications, including blending multiple types and grading, so that several advantages can be combined.
Gyroid + Diamond (0.5X + 0.5Y)
[Fig. 59] Blended TPMS based cellular structure (generated by authors)
11.35 m/s hot air, out of the heat exchanger
[Fig. 60] Outputs of the CFD Simulation (generated by authors)
[Fig. 61] Graded TPMS based cellular structure (generated by authors)
How can the proposed material system be further customized in response to the space it inhabits?
[Fig. 62] Multiple TPMS based cellular structure examples (generated by authors)
Gathering these capabilities within the material system development, we questioned how the morphology of the shell can be customized in relation to the space it inhabits.

Accordingly, by combining a matrix of space and energy inputs with the related equations, an adaptive panel-configurator pipeline utilizing the blending and grading capabilities was achieved.
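A sketch of the blending and grading operations underlying such a configurator is given below; here the grading weight is driven simply by height, whereas the actual pipeline derives it from the space and energy input matrix:

```python
# Graded blend of two TPMS implicit fields (gyroid -> diamond along z).
import numpy as np

def gyroid_f(x, y, z):
    return (np.sin(x)*np.cos(y) + np.sin(y)*np.cos(z) + np.sin(z)*np.cos(x))

def diamond_f(x, y, z):
    return (np.sin(x)*np.sin(y)*np.sin(z) + np.sin(x)*np.cos(y)*np.cos(z)
            + np.cos(x)*np.sin(y)*np.cos(z) + np.cos(x)*np.cos(y)*np.sin(z))

def blended(x, y, z, t):
    """t in [0, 1]: 0 = pure gyroid, 1 = pure diamond (0.5/0.5 at t = 0.5)."""
    return (1 - t) * gyroid_f(x, y, z) + t * diamond_f(x, y, z)

pts = np.linspace(0, 4 * np.pi, 48)
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
T = Z / Z.max()                     # grading weight, height-driven here
shell = np.abs(blended(X, Y, Z, T)) < 0.3
print(f"shell fraction: {shell.mean():.1%}")
```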
[Fig. 63] Customized TPMS based cellular panel (generated by authors)
[Fig. 65] Customized TPMS based cellular panel (generated by authors)
[Fig. 66] Customized TPMS based cellular panel (generated by authors)
3.3_Experiment Setup
3.3.1 - Overview to Setup
The experimental setup was employed as a proof of concept for investigating the impact of Phase Change Materials (PCM) on temperature changes within spaces. This experiment was conducted in two stages to generate comparable data.
The setup comprises several components: the ITE Tank, Heat Exchanger, Fan, TPMS Regulator Volume, and Cultivation Space. These names reflect our conceptualization of the architectural spaces. To avoid ambiguity, the Cultivation Space can be referred to as the Void Space, and the IT Tank as the Water Tank. In this configuration, hot water circulates between the Heat Exchanger and the Water Tank. As the Heat Exchanger's temperature increases due to the circulating hot water, a fan transfers the heated air from the Heat Exchanger to the bottom surface of the TPMS Regulator Volume, from where it is circulated into the Void Space.
[Fig. 67] Space Subdivisions in Experiment Setup.
[Fig. 68] Physical Experiment
3.3.2 - Variables
The only independent variable in this experiment is the TPMS Regulator Volume, which is responsible for transferring heat from the Heat Exchanger to the Void Space. The temperatures measured during the experiment, excluding that of the Water Tank (Sensor 01), include those of the Fan (Sensor 02), TPMS Regulator Volume (Sensor 03), and Void Space (Sensor 04). Sensors 02, 03 and 04 are considered dependent variables. Control variables include fan speed, water velocity, the upper threshold of water temperature, as well as the conditions of the heater, pipes, cables, and the surrounding environment.
In the first stage of the experiment, a gyroid-type TPMS with a void subvolume was used. In the second stage, a gyroid-type TPMS with a sub-volume filled with PCM (Calcium Chloride Hexahydrate) was utilized. The phase change temperature of Calcium Chloride Hexahydrate is 30°C.
3.3.3 - Objective
The primary objective of this experiment was to assess the role of the TPMS gyroid surface in regulating the transfer of hot air from the heat exchanger to the Void Space (also referred to as the Cultivation Space). The focus of the study was on comparing the time required for the Void Space, when equipped with an empty gyroid surface versus a PCM-filled gyroid surface, to heat up and cool down. This comparison aimed to evaluate the impact of the PCM material on thermal regulation within the space.
[Fig. 70] Used materials in experiment setup.
[Fig. 69] Thermometer sensor placement in experiment setup.
[Fig. 71] Temperature measurement setup.
3.3.4 - Data recording
In the experiment, multiple recording devices were employed for real-time data collection. Temperature variations were continuously monitored using thermal cameras positioned at two distinct locations, while key areas of the experimental setup were measured using four temperature sensors connected to a digital thermometer. Sensor data was recorded at 5-second intervals, creating tabular datasets for analysis. These values were subsequently plotted on a two-axis graph for further examination. Additionally, timelapse photography captured the physical changes every 15 seconds, ensuring comprehensive documentation of the experiment.
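For reference, a minimal sketch of this kind of interval logging is given below; `read_sensor` is a hypothetical stand-in for the digital thermometer's own interface, and file names and durations are illustrative.

```python
import csv
import time

SENSORS = ["water_tank", "fan", "tpms_regulator", "void_space"]  # Sensors 01-04

def read_sensor(name: str) -> float:
    # Hypothetical stand-in: replace with the digital thermometer's own API.
    raise NotImplementedError

def log_temperatures(path="experiment.csv", interval_s=5, duration_s=3600):
    """Append one row with all four sensor readings every `interval_s` seconds."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s"] + SENSORS)
        start = time.time()
        while (elapsed := time.time() - start) < duration_s:
            writer.writerow([round(elapsed, 1)] + [read_sensor(s) for s in SENSORS])
            f.flush()  # keep the file readable even if the run is interrupted
            time.sleep(interval_s)
```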
[Fig. 72] Temperature change in the experiment.
[Fig. 73] Experiment Setup.
The target temperature range for the cultivation area was set between 25 and 28.5 degrees Celsius, slightly higher than the 18-24°C range required for the Cultivation Space. This is because the Phase Change Material (PCM) used, Calcium Chloride Hexahydrate, has a phase change temperature of 30°C; as the PCM with the lowest phase change temperature available on the market, it shifted the experiment’s operating temperature upwards. However, achieving the desired 18-24°C range is possible with PCMs that have lower phase change temperatures, which are commercially available. Additionally, the experiment served as a proof of concept rather than a direct application.
When comparing the temperatures during the heating experiments, it was observed that the setup with the PCM-containing surface took longer to heat up the Void Space compared to the setup without PCM. This indicates that the PCM effectively absorbs heat during the process, slowing the temperature increase in the Void Space and balancing fluctuations.
In the cooling experiments, the data collected indicates that the PCM continued to retain the heat it had absorbed. This retention of heat by the PCM contributed to regulating the temperature change in the Void Space.
The presence of PCM thus demonstrates its effectiveness in maintaining more stable temperature conditions within the Void Space.
[Fig. 74] Experiment result plotted on a graph.
| DESIGN DEVELOPMENT |
Enabled by the material system, the second phase investigates context-aware, adaptable, and accessible space-making.
4.1_Space Making
4.1.1 - Network Topology: Nodes and Linkages
The heat generated in the IT system is released via a heat exchanger and ultimately absorbed by the material insert, which regulates the agricultural unit. These essential connections form the smallest root network:
[ IT -> HEAT EXCHANGER -> MATERIAL SYSTEM -> AGRICULTURE ]
Ensuring this continuity requires adjacent surfaces that integrate architectural, material, and mechanical elements for functional connectivity. This minimal circuit reflects the overall system’s operation.
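To make the multiplication of this circuit concrete, the following minimal sketch encodes the root network as a directed graph and replicates it n times; networkx is assumed purely as a convenience and is an illustration, not the authors' toolchain.

```python
import networkx as nx

ROOT = ["IT", "HEAT_EXCHANGER", "MATERIAL_SYSTEM", "AGRICULTURE"]

def multiplied_network(n: int) -> nx.DiGraph:
    """Replicate the smallest root network n times as parallel directed chains."""
    g = nx.DiGraph()
    for i in range(n):
        nx.add_path(g, [f"{node}_{i}" for node in ROOT])
    return g

g = multiplied_network(3)
print(g.number_of_nodes(), g.number_of_edges())  # 12 nodes, 9 edges
```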
[Fig. 75] As n (the total number of units) increases, the smallest root network is correspondingly multiplied
4.1.2 - Network Multiplication
The fundamental circuit must be maintained and multiplied until both the current and projected computational demands, along with the associated agricultural capacity, are met—covering the next 15 years of the data centre’s operational lifespan without major refurbishment.
This expansion necessitates a complex array of node topologies and linkages to form the required networks. In this regard, cellular packing structures have been identified to accommodate these spatial demands effectively.
4.1.3 - Cellular Packing
In 3D space, to enable spatio-temporal shifts that maintain the network, we investigated cellular packing. Cellular packing refers to the systematic arrangement of discrete volumes in three-dimensional space that minimizes unused gaps while maximizing adjacency.35
By densely positioning cells, it is possible to streamline the overall structure while preserving the planar faces necessary for architectural-material inserts. These shared interfaces ensure both spatial continuity and functional adaptability, as they provide the surfaces required to incorporate system elements across the network. In this approach, various unit geometries were evaluated. Spherical packing did not generate a viable network structure in this context, as the necessary linkages depend on shared faces between cells.
While cubic packing provides such connections through orthogonal face arrays and rectangular connection cells, the truncated octahedron—one of the most efficient space-filling polyhedra—presents a triangular cell network that is twice as dense as the cubic arrangement.
The truncated octahedron—often called the Kelvin cell—is a space-filling polyhedron with fourteen faces: eight hexagons and six squares, all sharing the same edge length. Its name originates from a 19th-century problem posed by Lord Kelvin, which involved finding a configuration of equally sized cells that would minimize the total surface area.36
[Fig. 76] Transition from required network topology to cellular packing investigations (sphere - truncated octahedron - cube)
4.1.4 - Truncated Octahedra
Beyond its higher packing density, the truncated octahedron offers several other advantages. For one, it provides multiple planar faces (squares and hexagons) that facilitate robust connections and structural stability. This variety of face shapes also increases design flexibility, as each face can be adapted for different functional or infrastructural inserts. Furthermore, the geometry’s angular transitions enable efficient branching or segmentation of networks, allowing multiple adjacencies without compromising overall volume efficiency. The increased number of edges and faces—compared to simpler polyhedra—translates into greater modularity and the potential for more nuanced spatial organization, making the truncated octahedron a compelling choice for applications requiring a dense, face-sharing network.
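For completeness, the canonical construction of this geometry can be sketched as follows: the 24 vertices of the truncated octahedron are all permutations of (0, ±1, ±2), consistent with its six square and eight hexagonal faces. The short Python sketch below generates them.

```python
from itertools import permutations, product

def truncated_octahedron_vertices(scale=1.0):
    """All permutations of (0, +/-1, +/-2); the zero coordinate makes
    half of the sign combinations coincide, leaving 24 distinct vertices."""
    verts = set()
    for perm in permutations((0, 1, 2)):
        for signs in product((1, -1), repeat=3):
            verts.add(tuple(scale * s * c for s, c in zip(signs, perm)))
    return sorted(verts)

print(len(truncated_octahedron_vertices()))  # 24
```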
The truncated octahedron, in comparison to the other two dodecahedra (the rhombic and the elongated), comprises more polygonal faces, all with the same proportions. This not only increased its potential for creating spatial variations within the same volume but also allowed more permutations and combinations for orienting one face to another (either hexagonal face to hexagonal face or square face to square face).
This process of 'assembling' helped expand the spatial quality of the volume, which again depended on the face selected and its orientation in relation to the second face selected.
The Bisymmetric Hendecahedron, Sphenoid Hendecahedron, and Gyrobifastigium fulfilled the purpose of space-filling, but from the perspective of architecturally feasible space they comprised quite a few acute angles between their consecutive faces, resulting in volumes with many ‘corner-like’ spaces. On the other hand, the Rhombic Dodecahedron, Elongated Dodecahedron, and Truncated Octahedron comprised only right or obtuse angles between their consecutive faces, thereby serving as better options for space-filling in the context of architectural space-making.
Rhombic Dodecahedron
Truncated Octahedron
Elongated Dodecahedron
“The tetrakaidecahedron (Lord Kelvin’s “Solid”) is the most nearly spherical of the regular conventional polyhedra; ergo, it provides the most volume for the least surface and the most unobstructed surface for the rollability of least effort into the shallowest nests of closest-packed, most securely self-cohering, allspace-filling, symmetrical, nuclear system agglomerations with the minimum complexity of inherently concentric shell layers around a nuclear center.”
—R. Buckminster Fuller, Synergetics37
[Fig. 77] Truncated Octahedron generation approaches originating from a Cube
4.2_Catalogue Of Objects
4.2.1 - Space Making Objects
[Fig. 79] Proportionate scaling of truncated octahedron (illustrated by the authors).
Proceeding with the truncated octahedron, it was uniformly scaled along all three axes to allow for architectural intervention, not only in terms of the human scale but also considering the proportions of the selected heat exchanger, immersion tanks, hydroponic units, etc. Hence, the vertical span threshold was considered to be 7 metres.
[Fig. 80] Splitting of truncated octahedron (illustrated by the authors).
To allow for a larger accessible space, the truncated octahedron was divided into two parts by splitting it along its horizontal plane of symmetry. The resultant two parts, with their larger 'floor plates', when aligned with each other along their hexagonal or vertical square faces, provided seamless expansion of spatial quality. The topological relationship generated by this method of spatial expansion was ideal for a data centre proposal because not every space needs a direct spatial relationship. For example, there could be a justifiable relationship between the control centre space and the IT space (in terms of IT management systems) without an actual spatial connection.
[Fig. 81] Expansion of spatial quality and topological relationships (illustrated by the authors).
[Fig. 78] Space-making objects (illustrated by the authors).
The two parts of the truncated octahedron henceforth in this research are considered to be the preliminary objects of space-making. They would be the building blocks of the assembly process which would signify and allow implementation of programmes as well as their topological relationships with each other in the overall 'assemblage'.
4.2.2 - Design of Assemblage Objects
In these two parts of the truncated octahedrons, the internal spaces can be further adapted to meet the specific requirements of various programmes, allowing for greater flexibility in their usage. During the initial design and creation of these units, walls were intentionally excluded from the individual modules to ensure seamless connections between horizontally adjacent spaces. This approach promotes a more open and adaptable spatial arrangement that can be tailored to evolving needs.
To achieve seamless integration between these interconnected volumes — both spatially and in terms of infrastructure — the ceiling and raised floor heights have been carefully standardized across all units. This uniformity simplifies the coordination of structural elements and shared systems such as HVAC, lighting, and cabling, ensuring that services can flow uninterrupted throughout the interconnected spaces. By maintaining consistent vertical dimensions, the design also supports future modifications.
[Fig. 82] Two levelled truncated octahedron units
[Fig. 83] Units with cabling, HVAC and all Infrastructure
IT Unit
Heat Exchanger
Since the objects consist of half truncated octahedra, each half serves a different spatial function. While the objects are defined according to the programme, their layout and arrangement can be adjusted based on the module's location within the building to achieve more efficient spatial distributions. For example, the upper half of the units can accommodate around six immersion cooling tanks, whereas the bottom half can hold only two. However, when these units are combined, they create seamless and efficient spaces by changing only certain internal elements to optimize the overall spatial configuration.
In the heat exchanger units, most of the faces are considered open to allow air intake under different wind conditions.
[Fig. 84] IT Units
[Fig. 85] Heat Exchanger
Agriculture
Agricultural units are the spaces where cultivation takes place. To integrate such production within a highly technical environment, these units must be shielded from external environmental conditions and designed to operate entirely indoors. Various types of hydroponic panels enable different spatial configurations, allowing the cultivation process to adapt to specific spatial needs.
Control centres and offices are spaces dedicated to monitoring all IT units, power systems, and cooling infrastructure. These areas are equipped with advanced control panels and workstations, and they serve as the operational hub where technical staff are primarily based, ensuring that all systems function optimally.
[Fig. 86] Agricultural Units
[Fig. 87] Control Spaces/Offices
Control Centre / Office
[Fig. 88] Steps for regional assemblage example
When these spaces come together, they form larger, interconnected volumes that adapt and respond to their immediate surroundings. Additionally, through their relationship with the surrounding programmes, these spaces can either be merged or separated to suit changing needs.
However, to enable these conditions between the units, they must be defined in advance to allow for proper assembly. With the help of designated handles, each object is governed by specific rules that facilitate its integration into the global assemblage.
[Fig. 91] How to 'assemble'? How to describe an 'assembly'? (illustrated by the authors).
[Fig. 90] Space-making object 'kindA' with its circulation path and connecting plane types (generated by the authors).
[Fig. 92] Space-making object 'kindB' with its circulation path and connecting plane types (generated by the authors).
4.2.4 - Space Making Assembly
The halved truncated octahedra, when more than one in number, have the potential to 'assemble' amongst themselves when planes of the same face type are aligned on top of one another (hexagonal face to hexagonal face and square face to square face).
For example, consider two space-making objects from the selected set of four, 'kindA' and 'kindB'. Step I is to select which planes will be allowed to 'assemble' (one hexagonal face from each object). Step II is to establish which is the 'sender' object and which is the 'receiver' object ('kindA' and 'kindB' respectively). Steps III and IV are the outcomes of this process of 'assembly'.
To allow for a more systematic and justifiable assembly process, the population of space-making objects is executed under a set of 'rules'. There could be as many rules as required; hence each 'rule' had to have a specific nomenclature to allow easy evaluation. The spatial quality of the output depends on the geometry of the space-filling object as well as the selection of the planes and of the sender and receiver. Sometimes the spatial quality of these outputs is the same even with different 'rules', but their topological relationships are always different and unique.
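One way to make such a nomenclature machine-readable is sketched below; the grammar assumed here, sender|plane = rotation < receiver|plane % weight, is an interpretation of the rule strings listed later in this section and is illustrative only.

```python
import re
from dataclasses import dataclass

# Assumed reading: sender|plane = rotation < receiver|plane % weight
RULE = re.compile(r"(\w+)\|(\d+)=(\d+)<(\w+)\|(\d+)%(\d+)")

@dataclass(frozen=True)
class AssemblyRule:
    sender: str
    sender_plane: int
    rotation: int
    receiver: str
    receiver_plane: int
    weight: int

def parse_rule(text: str) -> AssemblyRule:
    m = RULE.fullmatch(text.strip())
    if m is None:
        raise ValueError(f"unrecognized rule: {text!r}")
    s, sp, rot, r, rp, w = m.groups()
    return AssemblyRule(s, int(sp), int(rot), r, int(rp), int(w))

print(parse_rule("kindA|0=0<kindB|1%1"))
```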
4.2.5 - Connecting Planes
With this methodology of assembling the space-making objects, each of the two objects was assessed, and the planes (faces of the space-filling object) which facilitated "space-making" during assembly were selected.
For example, in the space-making object 'kindA', there are a total of four connecting planes: three on three consecutive hexagonal faces and one on the octagonal face of the geometry, labelled as type 1 and type 2 respectively.
4.2.6 - Circulation Paths
In order to evaluate space across the entire 'assemblage', a 'circulation path' has been incorporated into each of these objects. When two or more space-making objects are assembled, these circulation paths assemble simultaneously, emphasizing the 'connectivity' of the space-making objects. The circulation paths generally terminate at the location of a connecting plane because there is a higher probability of the topology expanding its 'connectivity' network from that plane.
[Fig. 93] All heuristics with the defined set of space-making objects - Isometric View (generated by the authors).
All possible outputs are laid out after setting up the rules. Each rule is portrayed with its sender and receiver space-making objects to convey the spatial quality "if" that rule is implemented.
[Fig. 94] All heuristics with the defined set of space-making objects and their circulation paths - Isometric View (generated by the authors).
The spatial quality "if" that rule is implemented can also be interpreted through its circulation path. These circulation paths also represent the topology of the output assembly.
0. kindA|0=0<kindB|1%1
1. kindA|0=0<kindB|2%1
2. kindA|0=0<kindB|3%1
3. kindA|1=0<kindB|1%1
4. kindA|1=0<kindB|2%1
5. kindA|1=0<kindB|3%1
6. kindA|2=0<kindB|1%1
7. kindA|2=0<kindB|2%1
8. kindA|2=0<kindB|3%1
9. kindA|3=0<kindA|3%1
10. kindA|3=0<kindB|0%1
11. kindB|0=0<kindA|3%1
12. kindB|0=0<kindB|0%1
13. kindB|1=0<kindA|0%1
14. kindB|1=0<kindA|1%1
15. kindB|1=0<kindA|2%1
16. kindB|2=0<kindA|0%1
17. kindB|2=0<kindA|1%1
18. kindB|2=0<kindA|2%1
19. kindB|3=0<kindA|0%1
20. kindB|3=0<kindA|1%1
21. kindB|3=0<kindA|2%1
[Fig. 95] All heuristics with the defined set of space-making objects - Top View (generated by the authors).
4.2.10 - Need For A Field
Following the development of spatial objects and the definition of their sender-receiver relationships, we conducted multiple experiments by manually assembling the space-making objects. The primary objective was to explore how these spatial objects organize themselves under different conditions and to test the efficacy of the sender-receiver logic in guiding the assembly process. Initial observations indicated a wide range of organizational patterns, but the fact that the process was executed intuitively prompted a deeper investigation into the role of environmental directionality in achieving controlled and informed spatial configurations.
While these random organizations confirmed during preliminary testing that the sender-receiver relationship logic functions correctly, they also revealed a lack of consistent guiding factors, resulting in uncontrolled and unpredictable assemblies. This observation highlighted the need for an additional mechanism to direct the assembly process toward more meaningful and purposeful spatial organizations. This guide was referred to as a field.
In this experiment, a horizontal field was generated within a confined environment, defined by vectors oriented along the horizontal axis and further manipulated using a curve attractor to yield a non-uniform field. The assembling process was conducted under these conditions to assess how the spatial objects would respond to a unidirectional field.
The assembled objects predominantly exhibited horizontal growth, aligning with the directionality of the field vectors. Although some vertical growth occurred, it was primarily due to the inherent connections between different assembly objects. This vertical expansion was more evident when adjusting the growth parameters, such as increasing or decreasing the number of objects and observing their development over multiple steps.
[Fig. 96] Horizontal field influencing the assembly process (illustrated by authors).
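A minimal sketch of this first case is given below, assuming the field is a point grid carrying unit horizontal vectors whose scalar intensity decays with distance to a sampled attractor curve; the decay constant, grid dimensions, and curve are illustrative.

```python
import numpy as np

def horizontal_field(points, attractor_pts, decay=0.15):
    """points: (n, 3) field grid; attractor_pts: (m, 3) sampled curve.
    Returns unit horizontal vectors plus distance-based scalar weights."""
    vectors = np.tile(np.array([1.0, 0.0, 0.0]), (len(points), 1))
    # distance from every field point to its nearest sample on the attractor curve
    d = np.linalg.norm(points[:, None, :] - attractor_pts[None, :, :], axis=2).min(axis=1)
    scalars = np.exp(-decay * d)  # the field pulls harder near the curve
    return vectors, scalars

grid = np.mgrid[0:40:8, 0:40:8, 0:24:8].reshape(3, -1).T.astype(float)
curve = np.column_stack([np.linspace(0, 40, 20), np.full(20, 20.0), np.zeros(20)])
vecs, weights = horizontal_field(grid, curve)
print(weights.min(), weights.max())
```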
This experiment was similar to Case 1, apart from the fact that the field generated and tested was a vertical field.
As anticipated, the assembled objects exhibited mainly vertical growth with hints of horizontal growth, which was again due to the nature of connections between assembly objects. These two scenarios led to the interrogation of a hybrid multi-directional field.
[Fig. 97] Vertical field influencing the assembly process (illustrated by authors).
Building upon the first two cases, the third experiment introduced a multi-directional field encompassing horizontal, vertical, and diagonal vectors. The initial spatial object was placed within this environment to observe how the assembly process adapts to multiple directional influences.
Initially, the assembly growth followed a singular direction corresponding to the immediate field vectors. As the assembly expanded and crossed into regions influenced by the vectors in other directions, it began to exhibit growth patterns aligned with the new direction. This behaviour illustrates the assembly’s capacity to adapt to varying environmental cues, modifying its organizational structure in response to changes in field directionality. The experiment demonstrated that by designing fields with specific directional properties, we could effectively control and predict the assembly’s spatial organization.
The conducted experiments affirm the critical importance of directional fields in controlling the assembly process of spatial objects. The integration of environmental directionality transforms random and uncontrolled organizations into purposeful and adaptable spatial configurations. This approach not only validates the sender-receiver relationship logic but also extends its applicability by introducing an additional layer of control through environmental cues.
The necessity of directional fields raises pertinent questions regarding their development and the criteria used to define them.
[Fig. 98] Horizontal, vertical and diagonal field influencing the assembly process (illustrated by authors).
4.3_Environmental Conditions
The site boundary has been voxelized: divided into individual voxels, each measuring 8 × 8 metres. This segmentation is based on a uniform unit area, allowing for detailed spatial analysis. These voxels can subsequently be used in the process of field finding, enabling a more precise examination of spatial relationships and environmental influences across the site. This method enhances the ability to generate a refined and context-aware directional field within the spatial assemblage.
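A minimal voxelization sketch under these assumptions follows, treating the site boundary as a 2D polygon in plan and using matplotlib's Path purely as a convenient point-in-polygon test; the boundary coordinates are illustrative.

```python
import numpy as np
from matplotlib.path import Path

def voxelize_site(boundary_xy, cell=8.0):
    """Keep the centres of cell x cell plan voxels that fall inside the boundary."""
    poly = Path(boundary_xy)
    (xmin, ymin), (xmax, ymax) = np.min(boundary_xy, 0), np.max(boundary_xy, 0)
    xs = np.arange(xmin + cell / 2, xmax, cell)
    ys = np.arange(ymin + cell / 2, ymax, cell)
    centres = np.array([(x, y) for x in xs for y in ys])
    return centres[poly.contains_points(centres)]

site = np.array([(0, 0), (120, 0), (120, 80), (40, 80), (0, 40)])  # illustrative plot
print(len(voxelize_site(site)), "voxels of 8 x 8 m")
```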
4.3.1 - Logic of a 'Field'
The concept of a field in architecture describes a space of propagation and effects—a continuum where the focus is on relationships and interactions rather than discrete objects. As Sanford Kwinter38 noted in 1986, “It contains no matter or material points, rather functions, vectors, and speeds.” In the context of architectural assemblage, fields are conceived as systems that encode distributed environmental information, influencing the assembly process as it unfolds. They can guide the assemblage to follow the intensity of certain environmental signals—such as light gradients, acoustic properties, or thermal conditions—using scalar values. They may also prefer specific component orientations—such as aligning openings towards prevailing winds or views—via vector values. Additionally, fields can determine which subset of assembly rules to apply in certain regions of space by assigning weight values, thereby shaping the architectural outcome in response to contextual factors.
[Fig. 99] Logic of 'Field' (illustrated by authors).
[Fig. 100] Voxelization of an example site (illustrated by authors).
Each voxel is subdivided along each face's local cardinal directions to extract 18 candidate plane surfaces. The normals to these planes are used to test for directionality, thereby informing the vector direction of the field.
[Fig. 101] Logic of 'Field' on a voxel level by extraction and testing of planes and normals (illustrated by authors).
[Fig. 102] Logic of 'Field' on a site level by extraction and testing of planes and normals (illustrated by authors).
4.3.2 - Field On Site
Each voxel of the voxelized site is tested against site-specific environmental conditions. The fitness criteria, explained later in this section, assist in testing different orientations of each voxel, which multiplies the dataset for testing and extracting the most optimal planes. The well-performing planes are extracted to function as attractors for an empty field (a point grid with unassigned scalar and vector values).
The vector directions are informed by the directionality towards the well-performing planes, thereby increasing the probability of the assembly process occurring at the location of these planes.
The scalar value of the vector field is informed by the magnitude of the distance between the well performing planes and the empty field (point grid). This acts as a weight to promote the directionality of the vector field.
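Read together, these two rules can be sketched as follows, assuming the well-performing planes are represented by their centre points and that the scalar weight follows a simple inverse-distance law; both simplifications are assumptions for illustration.

```python
import numpy as np

def field_from_attractors(grid_pts, plane_centres):
    """grid_pts: (n, 3) empty field; plane_centres: (m, 3) well-performing planes.
    Vector = direction toward the nearest plane; scalar = inverse-distance weight."""
    diff = plane_centres[None, :, :] - grid_pts[:, None, :]   # (n, m, 3)
    dist = np.linalg.norm(diff, axis=2)                       # (n, m)
    nearest = dist.argmin(axis=1)
    rows = np.arange(len(grid_pts))
    directions = diff[rows, nearest]
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-9
    scalars = 1.0 / (1.0 + dist[rows, nearest])               # weight decays with distance
    return directions, scalars
```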
Plotting the data centres in London reveals that most data processing and edge computing happens in Central London. However, looking at the allotment distribution for cultivation per person, areas with high population density and office buildings cannot provide sufficient allotment areas.
Although these zones have high data processing and significant food consumption, they have low food production capabilities.
[Fig. 103] DC distribution and Area of Allotments (m2 per person) in Greater London, (redrawn from the article : Urban agriculture: Declining opportunity and increasing demand 39)
To analyze the city in greater depth, a three-stage, multi-layered analysis was conducted. The first step involved identifying zones where user profiles generate and consume the most data, focusing on population density and data-reliant sectors such as finance, computer programming, and information services. Next, these high-data-demand zones were evaluated based on their proximity to existing infrastructure. Finally, the zones were assessed and ranked according to the availability of agricultural allotment areas.
The final ranked clustering was determined by calculating the average of all evaluations and rankings across the three stages.
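A minimal sketch of this rank-averaging step is given below; the stage names and rank values are illustrative placeholders, not the study's measured data.

```python
# Rank per stage for each candidate zone (lower rank = better); values illustrative.
ranks = {
    "region_1": {"data_demand": 3, "infrastructure": 1, "allotment_gap": 4},
    "region_2": {"data_demand": 1, "infrastructure": 2, "allotment_gap": 1},
    "region_3": {"data_demand": 2, "infrastructure": 4, "allotment_gap": 2},
}

combined = {zone: sum(r.values()) / len(r) for zone, r in ranks.items()}
for zone, score in sorted(combined.items(), key=lambda kv: kv[1]):
    print(zone, round(score, 2))  # the lowest average rank wins
```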
[Fig. 104] London Region Map and Site selection steps
Broadband Coverage
Displays broadband service coverage across London, highlighting data-intensive locations.
These regions are ideal for establishing data centres, as high network bandwidth ensures reliable data transmission and supports the increasing demand for cloud services and connectivity. These locations are particularly suited for facilities handling real-time applications and large-scale data processing.
ICT Sector
Maps areas with high concentrations of computer programming and consultancy professionals.
Locating data centres near these hubs provides a technical advantage, as these professionals typically require large-scale, low-latency infrastructure to support software development, testing, and deployment. The demand for scalable and efficient computing resources makes these areas strategic for data storage and processing facilities.
Finance
Shows regions where financial services professionals are concentrated, excluding insurance and pension funding.
Data centres near these financial hubs must prioritize low-latency operations to support real-time transactions, analytics, and algorithmic trading. These locations also demand heightened security and compliance, making them critical for financial data processing and secure data storage.
Information Services
Highlights concentrations of individuals working in the information services sector, implying data-intensive activity in these areas.
Data centres in such regions can benefit from proximity to a high number of information service professionals, who require continuous access to cloud platforms and storage solutions. The density of digital activity in these areas also increases demand for fast, local data processing and storage solutions.
Scientific Research
Indicates areas where professionals in scientific research and development are concentrated.
Data centres situated in close proximity to these regions can support the vast computing power required for research activities, such as simulations, data modelling, and AI-driven experiments. These locations are also ideal for handling high-throughput data from research institutions, ensuring smooth operations for data-heavy workflows.
Population Density
Identifies areas with high population density, indicating higher data consumption.
Age
Similar to the Population Density map, the Age map highlights locations with a younger demographic, who typically consume more data compared to older populations.
All the introduced maps were juxtaposed to identify zones with higher demand. At this stage, clustering was applied to better define these zones. The clustering process grouped zones within a 1 km radius, as this range aligns with the coverage area for low-latency devices and processes.
[Fig. 105] IT Demand and Zoning Maps
After defining the clusters, the Electric Transmission Network map was overlaid to assess the proximity of each cluster to key energy infrastructure and rank them accordingly. The centre point of each 1 km radius cluster was used as a reference to measure the distance to the nearest electric transmission cables and substations, with clusters ranked based on their proximity to these energy sources.
[Fig. 106] Proximity to Infrastructure
In the final step, a similar process was followed as in the previous stage. The proximity of each cluster to the nearest allotment area was analyzed to identify zones that either lack access to allotment spaces or are situated farther away from them. This step helped to pinpoint areas with limited or no urban cultivation, highlighting zones with potential for future agricultural development.
[Fig. 107] Proximity to Agricultural Allotments
All the maps ranked in the previous stages were combined, and the average of their rankings was calculated to identify the optimal zones. These zones demonstrate high data-processing activity to meet demand, are in close proximity to the electric transmission network—minimizing infrastructure expansion costs—and are farther from existing agricultural allotment areas, indicating a need for public cultivation spaces. Region 2 in Islington emerged as the zone that best meets these criteria at the initial level of analysis. One of the vacant plots within this zone has been identified as a promising location for future development, offering the potential to address both data infrastructure demands and the need for public cultivation spaces.
[Fig. 108] Selected Zones for further development
Fitness Objectives of the Field
Criteria 1: Maximizing Wind Exposure
Fitness Criteria 1 targets the heat exchanger function by analysing the site for wind flow using Computational Fluid Dynamics (CFD) across multiple levels to generate 3-dimensional vectors. The analysis shows that wind flow is restricted at lower levels due to the presence of neighbouring buildings, while higher altitudes experience significant wind flow. This observation indicates that the spatial organization must be informed by CFD analysis to optimize the design. Allowing wind to flow through the site is essential for improving the ventilation of heat exchangers and regulating the temperature within cultivation units. Integrating CFD insights into the spatial design can enhance airflow and thermal management.
Criteria 2: Minimizing Self-Shading
Fitness Criteria 2 targets the overall envelope and the agriculture function by minimizing self-shading on the context for each unit within the global formation. It guides the assembly process to expand and cover as much of the site as possible without completely overshadowing the immediate context, helping to maintain the current nature of spaces in and around the site.
[Fig. 109] Visualisation of wind flow through the adjacent context (generated by authors).
[Fig. 110] Visualisation of occlusion tested on the voxel planes (generated by authors).
Criteria 3: Maximizing Solar Radiation
Fitness Criteria 3 targets the agriculture function by maximizing solar radiation. Simultaneously, it serves as a filter for vector directions where potential aggregations of cultivation units can be most effectively deployed. To enhance this process, a solar analysis was conducted on the site, adjusting the orientation of faces to maximize heat gain based on solar exposure. This approach ensures that the spatial organization is both energy-efficient and strategically aligned with the site's environmental conditions, supporting the optimal deployment of cultivation units.
[Fig. 111] Visualisation of solar radiation experienced on the voxel planes, on a 0-200 kWh/sq.m scale (generated by authors).
4.3.3 - Optimization of the Field
A multi-objective evolutionary algorithm was used to optimize the field against the aforementioned fitness criteria for 40 generations, each consisting of 20 individuals, resulting in a total pool of 800 individuals. Each individual was evaluated based on three fitness values.
A parallel coordinate plot analysis revealed that as generations progressed, fitness values for FC1 (maximize wind flow) and FC3 (maximize solar gain) improved, though with variability, while FC2 (minimize self-shading) maintained its average fitness value throughout the simulation. Based on the selection strategy, the top 10 solutions were chosen for the next phase of the process as potential field data, ensuring a balance between wind flow, self-shading, and solar radiation.
[Fig. 112] Selection pool of individuals extracted (generated by authors).
[Fig. 114] Selected field for the given context - Top View (generated by authors).
Within the selection pool, individual 14 of generation 36 depicted a balance between the fitness objectives. It was selected as the field for the given context, weighing 'FC1 - maximizing wind flow' as the preliminary selection criterion due to the requirement of placing heat exchangers in the later stages of the research.
[Fig. 113] Selected field for the given context - Isometric View (generated by authors).
4.4_Assemblage Simulation
4.4.1 - Environmental Setup
[Fig. 115] The context, vector direction of field and scalar values of field (left to right) (generated by authors).
The context-informed field environment has been established. The total number of space-making objects to simulate is dependent on the site conditions and requirements, and changes with the context.
4.4.2 - Assemblage Simulation
[Fig. 116] Steps of the assemblage simulation (generated by authors).
[Fig. 117] Assemblage generated for the t0 instance (generated by authors).
The simulation involves assembling the previously defined space-making objects, kindA and kindB, with the set of 22 rules, further guided by the scalar and vector values of the field.
The t0 instance is assembled and performs as a blank-canvas setup for the distribution of the functions required in a data centre.
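A single growth step of such a field-guided simulation might be sketched as follows, assuming each open connecting plane carries an applicable rule and that candidates are scored by the alignment of the plane normal with the local field vector, weighted by the field's scalar value; the data structures are simplified stand-ins for the actual interpreter.

```python
import numpy as np

def score_candidate(plane_origin, plane_normal, field):
    """field: callable mapping a point to (unit direction vector, scalar weight)."""
    direction, weight = field(plane_origin)
    alignment = float(np.dot(plane_normal, direction))  # -1 .. 1
    return weight * alignment

def next_placement(open_planes, field):
    """open_planes: iterable of (rule_id, plane origin, outward normal) tuples.
    Returns the rule whose placement best follows the local field."""
    scored = [(score_candidate(origin, normal, field), rule_id)
              for rule_id, origin, normal in open_planes]
    best_score, best_rule = max(scored)
    return best_rule, best_score
```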
4.4.3 - Generated Assemblage
[Fig. 118] Each space-making object marked (generated by authors).
The space-making objects are interpreted as point nodes. This helps quantify the location of a programme on site and within the assemblage, and supports calculating the performance of the programme in that specific location for multi-objective optimization in the later stages.
The data centre, being a mission-critical typology, requires certain programmes that are non-negotiable. The scope of this research allows the addition of auxiliary programmes alongside these mandatory programmes.
At the scale of this pilot plot, the core is crucial in dictating the programmatic layout within the assemblage, in addition to its structural importance.
The conventional practice of enforcing security in a data centre is still adopted, with the addition of supervised public circulation to support the given proposal.
Agriculture: Integrates controlled-environment farming systems that utilize repurposed heat to support sustainable food production.
IT: Houses critical computational infrastructure, enabling data processing and storage while generating heat as a byproduct.
Control Centre: Monitors and manages clusters comprising IT, heat exchangers, agricultural units, and auxiliary service spaces to ensure seamless operations.
Power Supply: Ensures consistent and reliable energy flow to all operational units, with built-in redundancy for uninterrupted functionality.
Circulation: Orchestrates the movement of resources, personnel, and energy flows, connecting private, semi-private, and operational zones.
Auxiliary Services: Includes cooling, water supply, waste management, and security systems, which support and sustain the entire ecosystem.
Heat Exchanger: Facilitates the transfer and redistribution of heat between IT units and adjacent agricultural functions, supporting energy efficiency.
Programmes are sequentially assigned to each of the space-making objects in the assemblage, prioritizing the requirements and placements of the programmes in order of hierarchy.
The volumetric average of the t0 assemblage is extracted. That point is considered the linkage to the core at that specific elevation. The nearest space-making objects at those linkages are assigned as the core of the structure.
All the space-making objects at the lowest level of the assemblage are directly assigned as an access-controlled ground level, integrated with a plaza for public interaction and a market for the agricultural produce.
A K-means clustering algorithm is utilized to cluster the remaining space-making objects into mini data centres. These form a decentralized network of data centres within the data centre, allowing for spatial and functional flexibility. The volumetric average of each cluster is considered a control centre unit, which oversees the functioning of that entire cluster.
With the control centres as primary nodes, Dijkstra's shortest path algorithm is utilized to establish a circulation system between the nearest core node and each control centre. At this point, the preliminary circulation network of the whole assemblage is established.
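The clustering and circulation steps can be sketched together as follows, assuming node positions as an array and an adjacency graph over shared faces; scikit-learn's KMeans and networkx's Dijkstra routine are convenience stand-ins for the tools actually used.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def cluster_nodes(positions, k):
    """positions: (n, 3) node points. Returns labels and, per cluster,
    the node nearest the cluster's volumetric average (its control centre)."""
    km = KMeans(n_clusters=k, n_init=10).fit(positions)
    centres = [int(np.argmin(np.linalg.norm(positions - c, axis=1)))
               for c in km.cluster_centers_]
    return km.labels_, centres

def circulation(graph: nx.Graph, core_node, control_centres):
    """Shortest route from each control centre to the nearest core linkage."""
    return {cc: nx.dijkstra_path(graph, core_node, cc) for cc in control_centres}
```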
All remaining space-making objects are tested for their proximity to the site boundary and to the nearest wind vector to assess their feasibility as heat exchangers. As per the required proportion of heat exchangers in each cluster, the most suitable locations are assigned as heat exchangers.
The volumetric average of all heat exchangers within a cluster, coupled with the maximum number of shared faces with a heat exchanger, helps pick out ideal agriculture unit locations. PCM panels are used at this interface between heat exchangers and agriculture units.
Since I.T. units are like black boxes, their location is easier to adapt in the overall assemblage. The volumetric average location of the heat exchangers is utilized to establish their location.
The volumetric average of the I.T. units is utilized to establish locations for the power supply units, thereby maintaining an efficient closed-loop system for computation.
Auxiliary maintenance services and facilities are assigned to the remaining space-making units in the assemblage.
4.4.6 - Optimization Of Programmatic Distribution
1.] Minimize Differences Between Cluster Sizes
FC1 aims to minimize the variation in the number of objects within clusters. This ensures a more equitable distribution of resources and balanced access to the core functions, promoting efficiency and operational fairness across all clusters.
2.] Maximize Wind Exposure for Heat Exchangers
FC2 seeks to maximize wind exposure at strategic locations to identify optimal placements for heat exchangers. This ensures effective channelling of ambient air, leveraging natural airflow to enhance thermal exchange and efficiency.
3.] Maximize Contact Area Between Heat Exchangers and Agriculture
FC3 focuses on maximizing the contact area between heat exchangers and agricultural units. This aligns directly with the developed material system, where adjacency and contact feedback inform the panel configurator designed during Phase 1. This optimization strengthens the integration of energy redistribution and agricultural productivity.
4.] Maximize Solar Radiation in Agricultural Spaces
FC4 targets the maximization of sunlight radiation in agricultural spaces to optimize conditions for crop cultivation, ensuring adequate exposure for healthy growth and productivity.
5.] Promote Dense Packing of IT Units
FC5 prioritizes dense packing of IT units within clusters, creating consolidated hall areas to improve spatial efficiency and reduce fragmentation. This approach also enhances operational management by facilitating better access and oversight from the control centre.
A multi-objective evolutionary algorithm is utilized to optimize the programmatic distribution within the assemblage for t0. The topology of each solution is displayed with its metrics to select the ideal solution for further resolution.
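One plausible selection sketch, once the five fitness values are tabulated per individual, is given below; the min-max normalization and equal weighting are assumptions mirroring the selection narrative rather than the documented procedure.

```python
import numpy as np

def select_best(fitness, minimize=(True, False, False, False, False)):
    """fitness: (n_individuals, 5) array of FC1-FC5 values.
    FC1 (cluster-size difference) is minimized; FC2-FC5 are maximized."""
    f = np.asarray(fitness, dtype=float).copy()
    for j, is_min in enumerate(minimize):
        col = (f[:, j] - f[:, j].min()) / (np.ptp(f[:, j]) + 1e-9)  # min-max normalize
        f[:, j] = 1.0 - col if is_min else col                      # higher = better
    return int(f.mean(axis=1).argmax())                             # equal weights
```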
4.5_Structural System
4.5.1 - Structural Performance Considerations
The structural system must allow incremental expansion over multiple years without disturbing the previously established network topology. Both spatially and in terms of linkages, it should preserve the shared faces between adjacent cells, ensuring the continuity of functional connections throughout the system.
[Fig. 119] Diagram portraying an example of the required expansion flexibility
Additionally, because of the heavy equipment and machinery loads, data centres require roughly three times the floor loading capacity per square metre of typical office buildings.40
[Fig. 120] Comparison of floor loading capacities per unit square meter according to the hosted functions.
4.5.2 - Structural System Approaches
Initial experiments began by examining a single cell and its typical assemblage with six neighboring cells. In Option A, the cell geometry is followed to create a truncated octahedron frame, replicating its edges as a closed cellular structure. In contrast, Option B involves extracting an orthogonal module within the same cell, serving as a modular, flexible enabler of future expansion.
Option A: Truncated Octahedra Frames
Option B: Orthogonal Open Frame Module
[Fig. 121] Generation of two structural system approaches for a typical assemblage configuration
While both methods preserve the topological relationships spanning the required floorplate configuration, the internal octahedron components in Option A obstruct human circulation on the highlighted faces. Consequently, Option B, with an orthogonal frame inside each cell, allows for a more open-plan configuration without blocking any faces.
Additionally, the truncated octahedron replica in Option A includes eighty percent more structural members, leading to a significant increase in joinery components and overall labor.
Option A: Truncated Octahedra Frames
Option B: Orthogonal Open Frame Module
To evaluate the performance of these options at the actual assembly scale and conditions, a series of Finite Element Analyses was conducted, with a common definition of the loading case combination (LCC) as follows:
- Gravity = load resulting from the mass
- Floor = typical floor plate loading of 15 kN/sqm, equally distributed across the floor plates
- Wind1 = 4 kN/sqm on the envelope of the building (East)
- Wind2 = 4 kN/sqm on the envelope of the building (West)
- LCC = Loading Case Combination
LCC = Gravity + Floor + (Wind1|Wind2)
[Fig. 122] Generation of the LCC, the loading condition prevalent throughout the structural analysis
Analysis revealed the inadequacy of Option B, the orthogonal system, primarily because the discontinuity of beams creates 3.5-metre-long protrusions throughout the structural frame. Meanwhile, the 80% increase in members observed in a basic example of Option A escalated to nearly five times as many when aggregated. Moreover, this type of cellular lattice structure does not allow for member reduction, triggering a domino effect in which the entire structural frame has to remain densely interconnected.
Option A: Truncated Octahedra Frames
Option B: Orthogonal Open Frame Module
[Fig. 123] Finite Element Analysis for both Option A, and Option B
4.5.3 - Exoskeleton Hybridization
[Fig. 124] Hybridization Sequence
To capitalize on the strengths of both Options A and B, a hybrid approach was adopted, combining the enveloping exoskeleton from Option A with the simpler orthogonal framework of Option B. As a result, the system achieved superior performance, reduced the overall member count, and introduced a more efficient internal layout, unlocking possibilities for further optimization through multi-objective evolutionary algorithms.
[Fig. 125] Finite Element Analysis for Option C : Hybridized Exoskeleton
Glu-lam (GLT):
The decision to use GLT (Glued Laminated Timber) for the orthogonal, internal structural members and steel for the exoskeleton is guided by fabrication and sustainability qualities:
Glulam excels in compressive and bending strength, making it a strong, sustainable choice for internal beams and columns. Its lighter weight per resisted load (compared to steel) can reduce overall foundation requirements. The orthogonal framework benefits from Glulam’s ease of on-site customization. With elements that can vary in length or cross section according to specific floor plans and loads, Glulam can be cut or joined using specific techniques and adhesives.
Using Glulam for the bulk of interior structural elements can reduce the building’s overall embodied carbon compared to an all-steel structure. Wood sequesters carbon during its growth phase, contributing to more favorable life-cycle assessments from product to post-occupancy stages.
Steel:
Steel’s high tensile strength makes it well-suited for an exoskeleton bearing significant lateral and dynamic loads. The exoskeleton often handles wind forces, seismic loads, and other external stresses, which steel can accommodate with relatively slender profiles and robust connections.
Using repetitive, standardized steel members in an exoskeleton enables efficient fabrication off-site and quick on-site assembly. Steel’s uniform consistency also lends itself to precise cutting and welding, ensuring reliable and consistent connections.
Steel performs well in external conditions with proper corrosion protection. For the interior framework, Glulam maintains its structural integrity in controlled environmental conditions when properly managed.
4.5.5 - M.O.E. Optimizations and Results
Cross-section optimization entails an iterative procedure that systematically selects optimal profiles for beams and columns, weighing both load-bearing capacity and, where necessary, maximum deflection limits. Meanwhile, evolutionary structural optimization (ESO), developed by Y.M. Xie and G.P. Steven, involves progressively pruning underutilized elements from a predefined structural volume based on load distribution analyses. Cross-section optimization for the beams and columns and BESO for beam reduction were combined under a sequential multi-objective evolutionary algorithm, refining both the topology and the cross-sectional attributes to balance priorities such as efficiency, deflection control, and material consumption, following these fitness criteria:
- FC1: Minimize Total Embodied Carbon Emissions (tonnes CO2e)
x = (Exoskeleton: steel factor (kg CO2e/kg) × mass)
y = (Orthogonal structure: GLT factor (kg CO2e/kg) × mass)
(x + y) / 1000 = total tonnes CO2e for an individual option; a worked sketch of this calculation follows the list of criteria below.
- FC2: Minimize Deflection (cm) Keeping deflection within acceptable limits ensures overall structural integrity and occupant comfort.
- FC3: Minimize the Number of Elements (#) Fewer elements mean fewer connections and potentially longer spans, enabling better open-plan schemes.
- FC4: Minimize Relative Length Differences in Orthogonal Members (cm)
This aims to standardize member lengths, reducing fabrication complexity and enhancing efficiency—much like using identical members in steel on the exoskeleton.
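As referenced under FC1, the calculation can be sketched as follows; the carbon factors are illustrative placeholders, not the values used in the dissertation.

```python
# Assumed A1-A3 carbon factors (kg CO2e per kg); placeholders, not the study's values.
STEEL_FACTOR = 1.5
GLT_FACTOR = 0.3

def fc1_embodied_carbon(steel_mass_kg: float, glt_mass_kg: float) -> float:
    x = STEEL_FACTOR * steel_mass_kg   # exoskeleton contribution
    y = GLT_FACTOR * glt_mass_kg       # orthogonal (GLT) contribution
    return (x + y) / 1000.0            # tonnes CO2e for an individual option

print(fc1_embodied_carbon(steel_mass_kg=80_000, glt_mass_kg=120_000))  # 156.0
```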
Generation size : 20
Generation count : 100
Total # of individuals : 2000
t(0) - 2025
Selected Individual (Equally weighted FCs) Gen. 87 - Ind. 02
t(3) - 2040 projection
The resulting configuration achieves a 45% reduction in embodied carbon during the A1–A4 product stages—before any building activity—while still meeting projected demand. The hybrid design merges complementary properties: steel’s high tensile strength and durability handle external forces, while GLT’s lower environmental impact and flexibility facilitate efficient interior spans. By balancing the temporary and the permanent, the system bolsters environmental and structural efficiency and resilience.
[Fig. 126] Comparison of Option A and Option C
Analysis of the FFD simulations confirms that even though the proposal’s envelope admits outside air only through the heat exchangers and agriculture units, the optimization of these entry locations enables natural airflow to circulate into the structure, where it is then tempered with the help of the PCM.
[Fig. 127] Elevational section of the site after FFD simulation (generated by authors).
[Fig. 128] Top section of the site after FFD simulation, velocity scale 0-5 m/s (generated by authors).
| DESIGN PROPOSAL |
The re-imagined, context-aware data center challenges the prevailing notion of inaccessible and opaque data facilities by repurposing its byproducts for communal benefit. This cyclic system disrupts the traditional barriers, inviting the public to integrate elements of their daily routines within a carefully curated blend of programs that span the public–private spectrum.
For instance, crops that have been cultivated on-site can be directly traced back to the data owners whose information helped grow them, mirroring the social engagement found in farmers’ markets. This approach encourages broader public participation and supports social sustainability.
The renewed typology remains visually and functionally attuned to its setting by varying transparency and porosity across the façade, echoing the arrangement of enclosed functions and defining access levels.
[Fig. 129] (Left) Street-side point of view
[Fig. 130] (Right) Interior Perspective from an Agriculture Unit
The spatial qualities of interior programs differ in volume, area, and degree of connectivity, further underscored by distinct access levels. In particular, the panel surfaces adjacent to the cultivation zones reinforce their relationship with the functions enabling agriculture. For instance, the proposed material system— TPMS-based, PCM-infilled panels— passively stabilizes internal thermal conditions while visually reflecting the level of computation activity through its varying transparency. (More computation increases heat storage, transitioning the material to a liquid and becoming transparent; reduced computation allows the release of heat, reverting it to a solid, opaque state.)
Additionally, the notion of computation itself is clarified and communicated by establishing a visual interface to the IT units (the true black boxes), rendering the data-processing aspects more tangible to occupants and visitors.
[Fig. 131] (Left) Street-side point of view
[Fig. 132] (Right) Interior Perspective from an Agriculture Unit
| DISCUSSION |
[Fig. 133] Illustration of a scaled doll-house version
5.1_Critical Reflections & Discussion
This research explores the intricate interdependence between data generation, storage, consumption, and the utilization of space and energy within the urban fabric, focusing on London’s role as a global data hub. By reimagining traditional data center typologies, the integration of these data processing facilities into the urban landscape is speculated, prompting the question, “Can the data I produce feed me?” This approach paves the way for a mutual integration of these embedded spaces with the public, through the development of a material system combined with a space-making strategy.
Developing a TPMS-based PCM-infilled panel system has proven to be a suitable approach, as demonstrated by the proof-of-concept material experiment with prototyped versions for continuous, passive, and responsive thermal regulation. However, the material system proof-of-concept model was tested only for several cycles of energy transfers; assessments of maintenance requirements for such materials could additionally deliver practical input in refining fabrication methodologies. Approximating the behavior of PCM materials necessitates complex and highly recursive mathematical modeling that often omits many factors. Therefore, the panel configurator pipeline requires an additional step to better assess performance and determine a “better” outcome. In this context, such improvement could only be achieved through numerous 1:1 prototypes, assembling a relevant dataset, and applying predictive machine learning methodologies. Developed material systems— both the TPMS shell fabrication at a 1:1 scale and its effect on the PCM infill— could provide a missing perspective on the longer-term implications of the proposed system behavior, enabling the framework to become more robust.
Additionally, the material system, representing a catalyst linkage between the topological nodes (heat exchanger and agricultural units) in the function distribution map, has not had its performance tested in combination with its imagined adjacent hybridized cultivation function. In this manner, a longer-term proof-of-concept model could have provided a deeper understanding of such hybridization’s material aspects. Within the scope of this dissertation, the finalized framework has been specifically contextualized to target London as a pilot city, given its position as a global data hub embedded in a challenging urban fabric. Yet, while the study carries significant potential, it also suffers from a limited opportunity to further develop the framework in two major directions.
The district ranking pipeline was developed to filter available plots and identify sites, and ended up serving as a tool for identifying the unit computation density distribution across London. It could benefit from a temporality dimension for predicting expansion and density changes of the ranked criteria amongst the compared districts.
First, multiple site conditions within London—or other urban contexts around the globe—could further establish a hierarchy in the sequential parametrization of the design process, from assessing needs to optimizing topological conditions, supported by the established network at each iteration. Accordingly, the framework could serve as a test-bed for a modular pipeline capable of incorporating, comparing, and contrasting diverse network analysis methods, matching them with respective functional programmes. Continuously plugging various analysis approaches in and out, then sorting and ranking these methods by their cross-functional performance, could refine the model by allowing a context-aware selection of analysis methods, potentially augmented by a tailored machine learning model. Furthermore, the rationalization of spatial organization within the proposed typology requires validation across multiple technical scenarios to reduce the high risk of bias, including emergency reactions and agility in responding to immediate changes in the imagined building. For such ambitions, the pilot test can initiate the creation of a comprehensive dataset built on this archetypical framework.
The structural configuration represents a hybridized format deviating from typical cellular geometry construction. The main intention behind reducing the capacity of a cellular structure originated from assessing and predicting the actual need and its respective stakes. The assessed demand, including the topological conditions and their expansion prediction, established with the developed approach, showed great potential to reduce the carbon footprint starting from the product phase of the utilized materials, combined with substantial weight savings and a significant potential reduction in investment costs.
Additionally, although the spatial demand approximation model draws from the data consumption patterns of its specific contextual members (users) in a temporal sequence from 2025 (t(0)) to 2040 (t(3)), the combination of land and infrastructure availability with these usage patterns informs the predictive analysis model. This model approximates the spatial qualities and quantities of both technical and social programmatic requirements, which in turn inform the developed assembly interpreter pipeline. The framework holds greater potential to enhance user engagement by enabling participation at multiple stages of the pipeline, thus supporting a multi-stakeholder decision-making scheme that is
uniquely relevant to such mission-critical typology design and construction norms—often prioritizing efficiency and control over ecological integration and ethical considerations. Beyond intrinsic testing and optimization, the environmental changes brought about by the proposal must also be explored.
Furthermore, such inclusive potential hints at issues of ownership and moves toward enhancing democratization, by equalizing participation and considering inputs from neighboring stakeholders who not only share the vicinity but also utilize the data center for their own data and computation practices. Combined with the investor and contractor, the proposed space-making pipeline can effectively mediate these inputs on a shared decision-making platform, enabled by a surgically “recoded” and flexible space-making approach to this previously unwieldy typology. Not only during design but also throughout the operational lifecycle, cooperative-minded strategies will enhance the adaptability of today’s unwieldy typology, steering it toward an evolutionary potential driven by decentralizing consumption and production practices for the collective benefit. This fosters direct community governance and personal data management, increasing public resilience through transparency—a communal benefit reminiscent of the blurred lines in data ownership discourses.
Nevertheless, the proposed typology also positions itself along the public-private spectrum, leaning toward the public by maximizing varied access interfaces and collective, productive practices. Comparable to London’s semi-accessible estate gardens, the edge data center’s availability—whether to its immediate surroundings or to more distant regions—will be mediated by distance and signal latency. Yet the perceived trustworthiness and security of data, space, and energy have the potential to prompt a wider audience to question their data practices, their material footprint, and their ramifications in the “gaps in the clouds” overhead.
In building awareness, scenario-building exercises and the testing of multiple interfaces and communication strategies could also shape the evolving narrative. Observations suggest that people are skeptical about these “black boxes” and are often unwilling to examine them more deeply due to missing foundational knowledge and uncertainties regarding architecture, energy usage, and data ownership. Different methods of engagement can provoke varied responses: visually, for instance, by indirectly gauging computational activity through the transparency of the material system, and more tangibly through the proposed hybridization outputs, such as crop quality and quantity, which ultimately correlate with data usage patterns. Both phases—material development as the first and space-making as the second—possess such potential and represent a crucial first step, offering prototypical achievements with ample room and multifaceted prospects for further improvement.
5.2_Conclusion
This research examines the critical interplay between data, space, and energy, from generation and storage to consumption, within the urban fabric, drawing out the need to re-imagine the data center: a black-box typology that must open up even as it occupies, shapes, and influences the computation practices of our daily lives and built environment. By situating these isolated data hubs back within the urban fabric, the thesis challenges conventional notions of data center construction practices from point zero. It envisions a hybridized system that fosters synergy among data processing, energy flow, and local food production, thereby transforming an often opaque, resource-intensive facility into a participatory infrastructure node that benefits surrounding communities to multiple extents.
A crucial component of this approach lies in the material system developed—a TPMS-based, PCM-infilled panel. Its proof-of-concept testing demonstrates the potential to passively regulate heat, thereby reducing external energy dependencies. Simultaneously, it catalyzes the rethinking of architectural assemblies and spatial configurations. The subsequent automation and assembly interpreter framework ensures that form and function dynamically respond to both real-time data demands and specific environmental contexts, aligning computational requirements with local site conditions. Over time, this adaptability offers the flexibility needed to accommodate shifting data densities and evolving urban scenarios.
Importantly, this thesis presents methods to integrate diverse stakeholders—ranging from investors and contractors to local communities—into the design and operational cycles of these new data center typologies. By embracing participatory strategies, the model fosters a more transparent understanding of energy use, data ownership, and environmental impacts, thus promoting broader acceptance and cooperative engagement. Although tested primarily in London, these strategies have wider implications for global urban centers, suggesting that agile, hybridized data architectures can help navigate rising computational demands while simultaneously contributing to local ecological, social, and economic objectives. Ultimately, the proposed framework stands as a foundational step toward a more responsible and resilient data-driven future, blending technological innovation with mindful urban stewardship.
[Fig. 134] Axonometric section portraying spatial continuity (illustrated by the authors).
BIBLIOGRAPHY
1. Vopson, Melvin M. “The World’s Data Explained: How Much We’re Producing and Where It’s All Stored.” The Conversation, May 4, 2021. http://theconversation.com/the-worlds-data-explained-how-much-were-producing-and-where-its-all-stored-159964.
2. Ionkov, Latchesar, and Bradley Settlemyer. “DNA: The Ultimate Data-Storage Solution.” Scientific American. Accessed September 19, 2024. https://www.scientificamerican.com/article/dna-the-ultimate-data-storage-solution/.
3. Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. London: SAGE Publications Ltd, 2014. https://doi.org/10.4135/9781473909472.
4. Wallace, Danny P. Knowledge Management: Historical and Cross-Disciplinary Themes. Libraries Unlimited, 2007, 1–14. ISBN 978-1-59158-502-2.
5. AltexSoft. “Structured vs Unstructured Data: What Is the Difference?” Accessed 2024. https://www.altexsoft.com/blog/structured-unstructured-data/.
6. Kitchin, Rob. The Data Revolution, 2014.
7. DarkAlman. “Digital Information ….” Reddit comment, r/explainlikeimfive, September 20, 2019. www.reddit.com/r/explainlikeimfive/comments/d6wnwt/eli5_why_does_virtual_data_take_up_physical_space/f0vxo8c/.
8. Hidalgo, César A. Why Information Grows: The Evolution of Order, from Atoms to Economies. New York: Basic Books, 2015.
9. Mills, Christian. “Notes on Chip War: The Fight for the World’s Most Critical Technology.” Christian Mills, August 28, 2024. https://christianjmills.com/posts/chip-war-book-notes/index.html.
10. Yeung, Tiffany. “What’s the Difference Between Edge Computing and Cloud Computing?” NVIDIA Blog, January 5, 2022. https://blogs.nvidia.com/blog/difference-between-cloud-and-edge-computing/.
11. Matsuoka, M., K. Matsuda, and H. Kubo. “Liquid Immersion Cooling Technology with Natural Convection in Data-Center.” In 2017 IEEE 6th International Conference on Cloud Networking (CloudNet). IEEE, 2017.
12. Kheirabadi, A., and D. Groulx. “Cooling of Server Electronics: A Design Review of Existing Technology.” Applied Thermal Engineering 105 (2016).
13. International Energy Agency (IEA). Electricity 2024: Analysis and Forecast to 2026. IEA, 2024.
14. Davis, Jacqueline. “Large Data Centers Are Mostly More Efficient, Analysis Confirms.” Uptime Institute Blog, February 7, 2024. https://journal.uptimeinstitute.com/large-data-centers-are-mostly-more-efficient-analysis-confirms/.
15. “Data Center Heat Recovery and Reuse | Danfoss.” Accessed 2024. https://www.danfoss.com/en/markets/buildings-commercial/shared/data-centers/heat-reuse/.
16. Ilieva, Rositsa, Nevin Cohen, Maggie Israel, Kathrin Specht, Runrid Fox-Kämper, Agnes Fargue-Lelievre, Lidia Ponizy, et al. “The Socio-Cultural Benefits of Urban Agriculture: A Review of the Literature.” Land 11 (April 23, 2022): 622. https://doi.org/10.3390/land11050622.
18. Tian, Lihao, Bingteng Sun, Xin Yan, Andrei Sharf, Changhe Tu, and Lin Lu. “Continuous Transitions of Triply Periodic Minimal Surfaces.” Additive Manufacturing 84 (2024): 104105. https://doi.org/10.1016/j.addma.2024.104105.
19. Levenspiel, Octave. “The Three Mechanisms of Heat Transfer: Conduction, Convection, and Radiation.” In Engineering Flow and Heat Exchange, 147–162. The Plenum Chemical Engineering Series. Boston: Springer, 1984. https://doi.org/10.1007/978-1-4615-6907-7_9.
21. Leidi, Michele, and Arno Schlüter. “Exploring Urban Space: Volumetric Site Analysis for Conceptual Design in the Urban Context.” International Journal of Architectural Computing 11, no. 2 (June 2013): 157–82. https://doi.org/10.1260/1478-0771.11.2.157.
22. “The Space Syntax Approach - Space Syntax,” June 7, 2018. https://spacesyntax.com/the-space-syntax-approach/.
23. Food4Rhino. “Shortest Walk.” Text, December 21, 2010. https://www.food4rhino.com/en/app/shortest-walk.
25. Co-de-iT. “Assembler.” C# repository, 2021; last updated February 23, 2024. https://github.com/Co-de-iT/Assembler.
26. Preisinger, C. “Linking Structure and Parametric Geometry.” Architectural Design 83 (2013): 110–113. https://doi.org/10.1002/ad.1564.
27. Memon, Shazim Ali. “Phase Change Materials Integrated in Building Walls: A State of the Art Review.” Renewable and Sustainable Energy Reviews 31 (March 1, 2014): 870–906. https://doi.org/10.1016/j.rser.2013.12.042.
28. Sutjahja, Inge, A. Silalahi, D. Kurnia, and Surjamanto Wonorahardjo. “Thermophysical Parameters and Enthalpy-Temperature Curve of Phase Change Material with Supercooling from T-History Data.” UPB Scientific Bulletin, Series B: Chemistry and Materials Science 80 (2018): 57–70.
29. Palacios, A., M. E. Navarro-Rivero, B. Zou, Z. Jiang, M. T. Harrison, and Y. Ding. “A Perspective on Phase Change Material Encapsulation: Guidance for Encapsulation Design Methodology from Low to High-Temperature Thermal Energy Storage Applications.” Journal of Energy Storage 72 (November 30, 2023): 108597. https://doi.org/10.1016/j.est.2023.108597.
30. Mukhamet, Tileuzhan, Sultan Kobeyev, Abid Nadeem, and Shazim Ali Memon. “Ranking PCMs for Building Façade Applications Using Multi-Criteria Decision-Making Tools Combined with Energy Simulations.” Energy 215 (January 15, 2021): 119102. https://doi.org/10.1016/j.energy.2020.119102.
31. Yu, Jinghua, Qingchen Yang, Hong Ye, Junchao Huang, Yunxi Liu, and Junwei Tao. “The Optimum Phase Transition Temperature for Building Roof with Outer Layer PCM in Different Climate Regions of China.” Energy Procedia, Innovative Solutions for Energy Transitions, 158 (February 1, 2019): 3045–51. https://doi.org/10.1016/j.egypro.2019.01.989.
32. Vukadinović, Ana, Jasmina Radosavljević, and Amelija Đorđević. “Energy Performance Impact of Using Phase-Change Materials in Thermal Storage Walls of Detached Residential Buildings with a Sunspace.” Solar Energy 206 (August 1, 2020): 228–44. https://doi.org/10.1016/j.solener.2020.06.008.
33. Sawadogo, Mohamed, Marie Duquesne, Rafik Belarbi, Ameur El Amine Hamami, and Alexandre Godin. “Review on the Integration of Phase Change Materials in Building Envelopes for Passive Latent Heat Storage.” Applied Sciences 11, no. 19 (January 2021): 9305. https://doi.org/10.3390/app11199305.
34. Celaya Granados, M. X. “Study of Triply Periodic Minimal Surfaces for Heat Transfer Applications.” Dissertation (MATVET Energiteknik), 2023.
35. “5.1: Crystal Structures and Unit Cells.” Chemistry LibreTexts. https://chem.libretexts.org/Courses/East_Tennessee_State_University/CHEM_3110%3A_Descriptive_Inorganic_Chemistry/05%3A_Structure_and_Energetics_of_Solids/5.01%3A_Crystal_Structures_and_Unit_Cells.
36. “Kelvin Truncated Octahedron – The Geometry of Thinking.” Accessed January 10, 2025. https://geometryofthinking.com/2023/08/19/the-kelvin-truncated-octahedron/.
37. Fuller, R. Buckminster. Synergetics: Explorations in the Geometry of Thinking. New York: Macmillan, 1975.
38. MIT Press. “Architectures of Time.” Accessed September 19, 2024. https://mitpress.mit.edu/9780262611817/architectures-of-time/.
39. Davies, Gareth, Graeme Maidment, and Robert Tozer. “Using Data Centres for Combined Heating and Cooling: An Investigation for London.” Applied Thermal Engineering 94 (October 1, 2015). https://doi.org/10.1016/j.applthermaleng.2015.09.111.
40. “How Data Centers Work | HowStuffWorks.” Accessed January 10, 2025. https://computer.howstuffworks.com/data-centers.htm.
LIST OF FIGURES
[Fig. 01] Collected buzzwords around data-centric typologies (illustrated by the authors).
[Fig. 02] Data - Information - Knowledge - Wisdom (DIKW) Pyramid (illustrated by the authors).
[Fig. 03] Data - Brick analogy and its continuous development diagram (illustrated by the authors).
[Fig. 04] Various classifications of data (illustrated by the authors).
[Fig. 05] Various classifications of data (illustrated by the authors).
[Fig. 06] Yearly distribution of data generated, projections and analogous relationship (retrieved from the book The Dark Cloud: How the Digital World Is Costing the Earth).
[Fig. 07] Temporality of the data production, consumption practices and analogy to the British Library.
[Fig. 08] Physicality of data.
[Fig. 09] Data processing apparatuses comparison: the first and the most up-to-date computer (images retrieved from https://penntoday.upenn.edu/news/worlds-first-general-purpose-computer-turns-75/ (left) and https://japan-forward.com/a-look-at-the-magic-behind-fugaku-the-worlds-leading-supercomputer/ (right)).
[Fig. 10] The continuous journey of Data (redrawn from thesis DataHub: Designing Data Centers for People and Cities, Harvard GSD) (Re-Illustrated by the authors)
[Fig. 11] Monolithic and Modular approaches (Illustrated by the authors)
[Fig. 12] Data centres, developmental trajectory and reducing human accessible spaces (Re-Illustrated from Data-Polis)
[Fig. 13] Components of a data centre.
[Fig. 14] Energy demanding programmes of conventional data centres (generated by the authors)
[Fig. 15] De-constructing a data centre (generated by the authors)
[Fig. 16] Traditional Air Cooled DC Diagrams (retrieved and edited from https://journal.uptimeinstitute.com/a-look-at-data-center-cooling-technologies/)
[Fig. 17] Comparison of cooling systems in Data Centres (generated by the authors)
[Fig. 20] Working principle of a Liquid-to-Air heat exchanger (retrieved from https://www.altexinc.com/case-studies/air-cooler-recirculation-winterization/).
[Fig. 21] Immersion cooling infrastructure example without heat reuse (retrieved from https://pictures.2cr.si/Images_site_web_Odoo/Partners/Submer/2CRSi_Submer_Immersion%20cooling%20EN_April_2023.pdf).
[Fig. 22] Types of Heat Exchangers (generated by the authors)
[Fig. 23] Heat Circulation of Immersion cooling system with liquid to air heat exchanger (generated by the authors).
[Fig. 24] Redrawn from The Goldman Sachs Group, Inc., AI/Data Centers' Global Power Surge and the Sustainability Impact, 2024.
[Fig. 25] PUE values and their corresponding efficiency values (retrieved from https://submer.com/blog/how-to-calculate-the-pue-of-a-datacenter/).
[Fig. 26] Power Usage Effectiveness (PUE) calculation (generated by author)
[Fig. 27] PUE Reduction via repurposing the excess heat (generated by author)
[Fig. 28] PUE differences according to the repurposing of the excess heat (generated by author)
[Fig. 29] Image courtesy of Solomon R. Guggenheim Museum (retrieved from https://metalocus.es/sites/default/files/metalocus_countryside_koolhaas_guggenheim_01.jpg)
[Fig. 30] Farming activities and data centres are isolated entities (generated by author)
[Fig. 31] Public integrated farming activity via repurposing excess heat from Data Centre (generated by author)
[Fig. 32] Data, space and energy (illustrated by authors).
[Fig. 33] Intersections of data, space and energy (illustrated by authors).
[Fig. 34] Inferences from the intersection of data, space and energy (illustrated by authors).
[Fig. 35] Global data centre distribution (retrieved from https://espacemondial-atlas.sciencespo.fr/en/topic-contrasts-and-inequalities/map-1C20EN-location-of-data-centers-january-2018.html)
[Fig. 36] Data centre hubs in northern Europe (retrieved and redrawn from https://www.datacentermap.com/united-kingdom/)
[Fig. 37] Data sampling (generated by the authors)
[Fig. 38] Computational Fluid Dynamics Example (generated by the authors)
[Fig. 39] Fabricated set of TPMS-Based Lattice Structures (photograph by the authors)
[Fig. 40] Material Test: Phase Changing Materials (photograph by the authors)
[Fig. 41] Evolutionary Multi-Objective Optimization Process (generated by the authors)
[Fig. 42] Volumetric Site Analysis Process Diagram (generated by the authors)
[Fig. 43] Network Analysis – Shortest Path (generated by the authors)
[Fig. 44] Space-making units and possible combinations (generated by the authors)
[Fig. 45] PCM as passive thermal regulator (generated by author)
[Fig. 46] Excess heat released to atmosphere in current scenario (generated by author)
[Fig. 47] PCM as passive thermal regulator (generated by author)
[Fig. 48] Excess heat released to agricultural unit without any regulation (generated by author)
[Fig. 49] PCM selection chart (the base graph retrieved from https://thermalds.com/phase-change-materials/)
[Fig. 50] Filtered PCM options (generated by the authors)
[Fig. 51] Selected PCM phase change temperature graph (retrieved from the article Thermophysical Parameters and Enthalpy-Temperature Curve of Phase Change Material (...))
[Fig. 52] Selected PCM through its multiple phases (photograph by authors)
[Fig. 53] Typical encapsulation layer diagram (generated by the authors)
[Fig. 54] TPMS surface ability to subdivide a volume into two equal parts (retrieved from https://blog.fastwayengineering.com/3d-printed-gyroid-heat-exchanger-cfd)
[Fig. 55] Selected TPMS types (generated by authors)
[Fig. 56] TPMS-based shell generation and respective Boolean operations illustrating solid-void conditions (generated by the authors)
[Fig. 57] Outputs of the CFD Simulation (generated by authors)
[Fig. 58] Outputs of the CFD Simulation (generated by authors)
[Fig. 59] Blended TPMS based cellular structure (generated by authors)
[Fig. 60] Outputs of the CFD Simulation (generated by authors)
[Fig. 61] Graded TPMS based cellular structure (generated by authors)
[Fig. 62] Multiple TPMS based cellular structure examples (generated by authors)
[Fig. 63] Customized TPMS based cellular panel (generated by authors)
[Fig. 64] Customized TPMS based cellular panel (generated by authors)
[Fig. 65] Customized TPMS based cellular panel (generated by authors)
[Fig. 66] Customized TPMS based cellular panel (generated by authors)
[Fig. 67] Space Subdivisions in Experiment Setup.
[Fig. 68] Physical Experiment Setup
[Fig. 69] Thermometer sensor placement in experiment setup.
[Fig. 70] Used materials in experiment setup.
[Fig. 71] Temperature measurement setup.
[Fig. 72] Temperature change in the experiment.
[Fig. 73] Experiment Setup.
[Fig. 74] Experiment result plotted on a graph.
[Fig. 75] Increases in n (total number of units) directly reflected in the multiplication of the smallest root network
[Fig. 76] Transition from required network topology to cellular packing investigations (sphere - truncated octahedron - cube)
[Fig. 77] Truncated Octahedron generation approaches originating from a Cube
[Fig. 78] Space-making objects (illustrated by the authors).
[Fig. 79] Proportionate scaling of truncated octahedron (illustrated by the authors).
[Fig. 80] Splitting of truncated octahedron (illustrated by the authors).
[Fig. 81] Expansion of spatial quality and topological relationships (illustrated by the authors).
[Fig. 82] Two levelled truncated octahedron units
[Fig. 83] Units with cabling, HVAC and all Infrastructure
[Fig. 84] IT Units
[Fig. 85] Heat Exchanger
[Fig. 86] Agricultural Units
[Fig. 87] Control Spaces/Offices
[Fig. 88] Steps for regional assemblage example
[Fig. 89] Regional Assemblage Example
[Fig. 90] Space-making object 'kindA' with its circulation path and connecting plane types (generated by the authors).
[Fig. 91] How to 'assemble'? How to describe an 'assembly'? (illustrated by the authors).
[Fig. 92] Space-making object 'kindB' with its circulation path and connecting plane types (generated by the authors).
[Fig. 93] All heuristics with the defined set of space-making objects - Isometric View (generated by the authors).
[Fig. 94] All heuristics with the defined set of space-making objects and their circulation paths - Isometric View (generated by the authors).
[Fig. 95] All heuristics with the defined set of space-making objects - Top View (generated by the authors).
[Fig. 96] Horizontal field influencing the assembly process (illustrated by authors).
[Fig. 97] Vertical field influencing the assembly process (illustrated by authors).
[Fig. 98] Horizontal, vertical and diagonal field influencing the assembly process (illustrated by authors).
[Fig. 99] Logic of 'Field' (illustrated by authors).
[Fig. 100] Voxelization of an example site (illustrated by authors).
[Fig. 101] Logic of 'Field' on a voxel level by extraction and testing of planes and normals (illustrated by authors).
[Fig. 102] Logic of 'Field' on a site level by extraction and testing of planes and normals (illustrated by authors).
[Fig. 103] DC distribution and Area of Allotments (m2 per person) in Greater London (redrawn from the article Urban Agriculture: Declining Opportunity and Increasing Demand)
[Fig. 104] London Region Map and Site selection steps
[Fig. 105] IT Demand and Zoning Maps
[Fig. 106] Proximity to Infrastructure
[Fig. 107] Proximity to Agricultural Allotments
[Fig. 108] Selected Zones for further development
[Fig. 109] Visualisation of wind flow through the adjacent context (generated by authors).
[Fig. 110] Visualisation of occlusion tested on the voxel planes (generated by authors).
[Fig. 111] Visualisation of solar radiation experienced on the voxel planes (generated by authors).
[Fig. 112] Pool of individuals extracted (generated by authors).
[Fig. 113] Selected field for the given context - Isometric View (generated by authors).
[Fig. 114] Selected field for the given context - Top View (generated by authors).
[Fig. 115] The context, vector direction of field and scalar values of field (left to right) (generated by authors).
[Fig. 116] Steps of the assemblage simulation (generated by authors).
[Fig. 117] Assemblage generated for the t(0) instance (generated by authors).
[Fig. 118] Each space-making object marked (generated by authors).
[Fig. 119] Diagram portraying the required expansion flexibility example
[Fig. 120] Comparison of floor loading capacities per square meter according to the hosted functions.
[Fig. 121] Generation of two structural system approaches for a typical assemblage configuration
[Fig. 122] Generation of the LCC, the loading condition prevalent throughout the structural analysis
[Fig. 123] Finite Element Analysis for both Option A and Option B
[Fig. 124] Hybridization Sequence
[Fig. 125] Finite Element Analysis for Option C : Hybridized Exoskeleton
[Fig. 126] Comparison of Option A and Option C
[Fig. 127] Elevational section of the site after FFD simulation (generated by authors).
[Fig. 128] Top section of the site after FFD simulation (generated by authors).
[Fig. 129] (Left) Street-side point of view
[Fig. 130] (Right) Interior Perspective from an Agriculture Unit
[Fig. 131] (Left) Street-side point of view
[Fig. 132] (Right) Interior Perspective from an Agriculture Unit
[Fig. 133] Illustration of a scaled doll-house version
[Fig. 134] Axonometric section portraying spatial continuity (illustrated by the authors).