Post-mining Palimpsest (MArch)



ARCHITECTURAL ASSOCIATION SCHOOL OF ARCHITECTURE

GRADUATE SCHOOL PROGRAMMES

PROGRAMME: EMERGENT TECHNOLOGIES AND DESIGN

YEAR: 2025-2026

COURSE TITLE: MArch. Dissertation

DISSERTATION TITLE: Post-Mining Palimpsest

STUDENT NAMES: Ajinkya Randive (M.Arch.), Luis Castro Aguilar (M.Arch.)

DECLARATION: “I certify that this piece of work is entirely my/our own and that any quotation or paraphrase from the published or unpublished work of others is duly acknowledged.”

SIGNATURE OF THE STUDENT:

Ajinkya Randive (M.Arch.)

DATE: 09 January 2026

Luis Castro Aguilar (M.Arch.)

Acknowledgements

The team would like to express sincere gratitude to everyone who supported and shaped the evolution of this thesis. We would especially like to thank Dr. Michael Weinstock, Dr. Milad Showkatbakhsh, and Dr. Anna Font, along with our tutors Felipe Oeyen, Paris Nikitidis, Abhinav Chaudhary, Dr. Alvaro Velasco Perez, and Danae Polyviou, for their guidance at every stage of the design process. Their distinct perspectives and critical discussions consistently pushed the thesis to deeper and more ambitious territory.

We are also grateful to the DPL team, Angel Lara Moreira, Alexander Krolak, and Henry Cleaver, for their expertise in material prototyping and their support with technical development.

Finally, we would like to acknowledge our peers, family, and friends for their constant encouragement, patience, and belief throughout this journey. This work would not have been possible without the collective contributions and understanding of everyone involved.

Abstract

This dissertation addresses the critical environmental degradation resulting from coal mining in Jharia, India. The region embodies the devastating legacy of extractive industries: land subsidence, extensive contamination of soils and aquifers, and the near-total loss of local ecosystems. The team focuses on the problem of abandoned coal-mining terrains and, from this, investigates the process of coal extraction. Each stage is analysed sequentially, from the initial deforestation and removal of the overburden, through the extraction itself, to the final abandonment of the excavated sites, explaining how each inflicts distinct and often irreversible damage. These operations have rendered large tracts of land unstable, uninhabitable and biologically sterile, creating complex environmental hazards.

The project begins in a landscape where ground, ecology and habitation have fallen out of sync. Jharia’s terrain, shaped by extraction and continual disturbance, becomes the site for rethinking how architecture might arise from processes of repair rather than imposition. Our research studies how stabilisation, ecological succession and spatial formation can be treated as a single evolving system instead of isolated interventions.

By reading the shifting environmental and biological gradients of the site, we construct a generative framework that allows built volume to develop only where long-term ecological performance can be sustained. Space is not predefined; it is negotiated by conditions of soil behaviour, microclimate, and the capacities of emerging vegetation. The resulting architecture does not arrive fully formed: it materialises gradually as the landscape begins to reorganise itself.

The work proposes a model for post-mining territories where ground recovery and architectural emergence are intertwined, offering a future in which settlement grows from the logic of a healing ecology.

Index

2.1 Coal Mines as a Global Phenomenon

2.1.1 Global Footprint of Coal Mining

2.2 Evolution of Coal Mining in Jharia

2.2.1 Introduction to the Mining Belt

2.2.2 Historical Evolution of Extractivism

2.3 Environmental Degradation and Pollutants

2.3.1 Air Pollutants

2.3.2 Geotechnical Instability

2.3.3 Soil and Water Pollution

2.3.4 Vegetation Loss

2.4 Coal Mining Processes

2.4.1 Mining Cycle in Jharia

2.4.2 Fractured Landscapes

2.4.3 Material Transport Dynamics

2.5 Socio-Economic Consequences

2.5.1 Living in the Mine Lands

2.5.2 Health Impacts

2.5.3 Economic Dependencies and Displacement

2.6 Building Cultures

2.6.1 Vernacular Building Methods

2.6.2 Building with Mining Materials

2.7 Global Lessons

2.7.1 Case Studies

2.7.2 Building with Robots

2.8 Discussion

2.8.1 Research Question and Design Intent

2.8.2 Annotated Bibliography and Visual Index

3.1 System Logic: From Environment to Data

3.1.1 Terrain and Data Patching

3.1.2 Agent-Based Environmental Simulation

3.1.3 Multi-Objective Optimisation

3.2 Ecological Intelligence Framework

3.2.1 Understanding the Plant Guild

3.2.2 Patch–Plant Matching via Proximity

3.3 Site Clustering Methodologies

3.3.1 Data Defining Construction Locations

3.3.2 K-Medoids Clustering Method

3.4 Material Logic: From Waste to Resource

3.4.1 Robotic Printing Strategies on Sloped Terrain

3.5 Stabilising Wall Logic: Generative Framework

3.5.1 Form-Finding Strategies

3.5.2 Finite Element Analysis for Wall Performance

3.5.3 Multi-Objective Structural Optimisation

3.6 Emerging points via Cellular Automata

3.6.1 Environmental Encoding within Voxel Matrix

3.6.2 Cellular Automata for Ecological Growth

3.7 Volumetric Logic: Data-to-Mass Encoding

3.7.1 Finite Element Analysis for the Geometry

3.7.2 Agent-Based Growth Simulation

3.7.3 Materialisation Using Volumetric Modelling Tools

Design Development

4.1 Environmental Encoding and Corridor Generation

4.1.1 5×5 Patch Framework

4.1.2 Slope Analysis

4.1.3 Subsidence Mapping

4.1.4 Rock and Ground Conditions

4.1.5 Attractor–Repulsion Logic

4.1.6 Agent-Based Simulation

4.1.7 Evolutionary Algorithms and Optimal Solutions

4.2 Ecological Seeding and Vegetation Logic

4.2.1 Remediation Process

4.2.2 Species Categorisation

4.2.3 Solar Radiation Analysis

4.2.4 Water Flow and Accumulation

4.2.5 Carbon Density Mapping

4.2.6 Species Grouping and Planting Logic

4.2.7 Ecological Corridor

4.2.8 Temporal Growth Phases (5–10–15–20 Years)

4.3 Site Clustering: Territory to Site

4.3.1 Stabilisation Logic

4.3.2 Risk Value Thresholds

4.3.3 Maximum Slope Constraints

4.3.4 Data Grouping and Clustering

4.4 From Data to Emergence: 3D Encoded Growth

4.4.1 Selection of the Stress Site

4.4.2 The Layered Terrain

4.4.3 Volume as a Question: Where and How Much Can We Build?

4.5 The Material Performance

4.5.1 Introduction

4.5.2 From Mining Waste to Printable Matter

4.5.3 Printability

4.5.4 Compression Test

4.5.5 Shear Test

4.5.6 pH Test

4.5.7 Water Retention Test

4.5.8 Optimal Material

4.5.9 Life Cycle Analysis

4.6 Stabilising Wall Intelligence

4.6.1 From Terrain to Surface: Defining the Base Geometry

4.6.2 Defining Structural Inputs: Piles and Columns

4.6.3 Material Behavior Simulation: Finite Element Deformation Analysis

4.6.4 Structural Intelligence Mapping: Stress Flow Lines

4.6.5 Multi-Objective Optimisation: Balancing Structure

4.7 Data and Ecological: Mass Formation

4.7.1 Simulating Agents from Support Points

4.7.2 Fields of Attraction and Repulsion

4.7.3 Agent-Based Volumetric Form

4.7.4 From Agents to Envelopes: Voxel-Based Mass Formation

5. DESIGN PROPOSAL

5.1 The System: Masterplan Development

5.2 Bio-Receptive Wall System

5.2.1 The Wall as a Remediation Agent

5.2.2 Adaptive Porosity and Plant Integration

5.2.3 Wall Construction: Robotic Fabrication

5.3 Mass Growth: Spatial Formation

5.3.1 From Agents to Envelopes: Voxel-Based Mass Formation

5.3.2 Structural Intelligence: Reading the Mass through Stress Mapping

5.3.3 Agent Simulation for Emergent Envelope

5.4 Multi-Phase Construction Strategy: Building with Time, Data, and Earth

5.4.1 Aggregated Layers: Printing → Casting → Modular Growth

5.4.2 Ecological Dominance in Early Phases

5.5 Environmental Data as Spatial Driver: Layered Environmental Logics for Spatial Output

5.6 Emergent Space Appropriation

5.6.1 Human–Transition–Plant Zonation Logics

5.6.2 Adaptive Appropriation of Ecologically Formed Spaces

6. CONCLUSION

6.1 Research Contributions and Architectural Implications

7. BIBLIOGRAPHY

8. APPENDIX


Introduction

The Jharia Coalfield has served as the heart of the Indian coal industry for over a century. Although it still contributes significantly to the national economy, indiscriminate mining has led to extensive environmental degradation. The landscape is characterised by abandoned open-pit mines where extraction has ceased, vast piles of discarded overburden, and ongoing active operations. This devastated terrain is, moreover, becoming a familiar condition in regions around the world marked by the effects of extractivism. The overlapping of past and present mining activity creates a compounded set of problems. Mining causes ground subsidence that destroys residential areas and emits toxic gases, posing a severe threat to the health of the local population. Concurrently, soil and groundwater are contaminated with acid mine drainage and heavy metals, jeopardising both ecosystems and drinking-water sources.

In Jharia, these problems are intensified by a monsoon climate and a complex geological structure that accelerate the spread of pollution, heightening risks such as water scarcity and waterborne disease. The Jharia Coalfield thus sits at the crossroads of contemporary post-extractivist debates.

These conditions place intense pressure on the local population. Government-mandated relocation policies often lack realism and sustainability, and displaced individuals eventually return to the mining areas, trapped in a vicious cycle of poverty and pollution. A self-reinforcing loop therefore links coal-mining labour, subsidence-damaged ground, scarce job opportunities, and hazardous working conditions. Mining in Jharia has not only displaced people physically but has also fragmented social structures, livelihoods, and cultural memory.

The aim is to investigate how architecture can become a medium for healing both spatially and socially, creating safe environments in which architecture responds not only to spatial issues but also to health and environmental ones.

Photograph by Vishal Kumar Singh, The Burning City: A Photographic Documentary on Jharia (India).

Domain

2.1 Coal Mines as a Global Phenomenon

2.1.1 Global Footprint of Coal Mining

2.2 Evolution of Coal Mining in Jharia

2.2.1 Introduction to the Mining Belt

2.2.2 Historical Evolution of Extractivism

2.3 Environmental Degradation and Pollutants

2.3.1 Air Pollutants

2.3.2 Geotechnical Instability

2.3.3 Soil and Water Pollution

2.3.4 Vegetation Loss

2.4 Coal Mining Processes

2.4.1 Mining Cycle in Jharia

2.4.2 Fractured Landscapes

2.4.3 Land Cover Mapping

2.5 Socio-Economic Consequences

2.5.1 Living in the Mine Lands

2.5.2 Health Impacts

2.5.3 Economic Dependencies and Displacement

2.6 Building Cultures

2.6.1 Vernacular Building Methods

2.7 Global Lessons

2.7.1 Case Studies

2.7.2 Building with Robots

2.8 Discussion

2.8.1 Hypothesis

2.8.2 Research Question and Design Intent

2.8.3 Bibliography

This chapter aims to establish a deep understanding of the area to be addressed, the mining processes in Jharia, and case studies that can help tackle the issue. First, we examine the complete mining cycle, from prospecting to abandonment, identifying its spatial, ecological, and social footprints. The chapter also examines the economic and social dynamics associated with extractivism, particularly those linked to the coal economy, job losses following mine closures, and the processes of displacement and territorial disarticulation that often follow the depletion of these systems. Indicators of environmental degradation, social exclusion, and economic stagnation are analysed in order to recognise the barriers in this specific context.

Finally, case studies are presented that have successfully converted post-extractive landscapes into regenerative, productive, cultural, or ecological systems. These precedents allow for the identification of relevant strategies, applied technologies, and governance models that serve as references for the proposed project.

Territories of Extraction

A GLOBAL COAL & MINING ATLAS

Fig. 1.1 : Global Territories of Coal Extraction

1. International Energy Agency (IEA). Coal Information 2023. Paris: International Energy Agency, 2023.

2. International Energy Agency (IEA). World Energy Outlook 2023. Paris: International Energy Agency, 2023.

3. World Bank. Coal Mine Closure: Opportunities and Challenges. Washington, DC: World Bank, 2018.

4. World Bank. Mine Closure and Rehabilitation in China. Washington, DC: World Bank, 2014.

5. United Nations Environment Programme (UNEP). Global Environment Outlook – GEO-6: Healthy Planet, Healthy People. Nairobi: UNEP, 2019.

6. Office of Surface Mining Reclamation and Enforcement (OSMRE). Abandoned Mine Land Inventory System (AMLIS). U.S. Department of the Interior. https://www.osmre.gov/programs/aml

7. Ministry of Coal, Government of India. Coal Directory of India. New Delhi: Ministry of Coal, latest edition.

8. Ministry of Coal, Government of India. Status of Abandoned and Discontinued Coal Mines. New Delhi: Ministry of Coal.

9. Department of Industry, Science and Resources. Mine Rehabilitation and Closure. Canberra: Australian Government.

10. United Nations Development Programme (UNDP). Environmental and Social Impacts of Coal Mining in Colombia. Bogotá: UNDP.

11. World Bank and United Nations Environment Programme (UNEP). Extractive Industries and Population Exposure. Washington, DC: World Bank, 2017.

12. Indian Council of Medical Research (ICMR). Health Impact Assessment in Coal Mining Regions of India. New Delhi: ICMR.

13. United Nations Department of Economic and Social Affairs (UN DESA). World Population Prospects 2023. New York: United Nations, 2023.

14. World Bank. World Development Indicators. Accessed 2024. https://data.worldbank.org

2.1 Coal Mines as a Global Phenomenon

Mining landscapes shaped by extractivist operations, whether for copper, gold, chromite, coal, or stone, span vast tracts of land and radically transform the earth’s surface. These operations often involve expansive open pits and large dumps of waste material, forming disrupted terrains of considerable scale. Globally, approximately 33,000 mines occupy land at a scale that reflects both their physical and environmental magnitude.1

Coal mining in particular has a century-long legacy, driven largely by industrial and energy needs. However, with growing awareness of global warming and of the high carbon emissions from coal combustion, stringent regulations have led to a contraction of the industry: of the coal mine sites tracked globally, about 4,732 remain active, while nearly 2,244 have been shut down or abandoned.1

The closure of mine sites has also triggered socio-economic consequences. These operations often employ thousands, many of whom are migrant or informal workers. The decline of mining in certain regions has resulted in unemployment, forced relocation, and

1. Global Energy Monitor, Global Coal Mine Tracker, 2025 release, accessed September 17, 2025, https://globalenergymonitor.org/projects/global-coal-mine-tracker/

Fig. 1.2 : Active and Inactive Coal Mines around the globe (adapted from Global Coal Mine Tracker, Global Energy Monitor, 2025).

a loss of community structures, especially where alternative livelihoods are limited. This presents a dual challenge: environmental degradation and social dislocation.

Amid global environmental crises and socio-economic transition, these landscapes call for new ways of imagining their future. What should happen to these vast, scarred terrains? Can they be remediated to restore ecological balance while also addressing the needs of affected communities? These lands pose challenges of socio-ecological justice, governance and accountability, economic alternatives, climate resilience, and landscape futures.

Active Mines
Inactive Mines

2.2 Evolution of Coal Mining in Jharia

2.2.1 Introduction to the Mining Belt

Coal mining, a global phenomenon, has concurrently served as an engine for industrial development and a source of irrevocable damage in specific regions. The Jharia Coalfield (JCF) in Jharkhand, India, stands as a prominent case study. Spanning a vast area of approximately 450 km², it is India’s primary source of coking coal¹. However, its economic significance is overshadowed by a severe environmental catastrophe, systematically accumulated over a century, which establishes Jharia as a critical subject for analysing the destructive outcomes of extractivism.

Fig. 1.3 : Map of BCCL Mining Areas, Urban Settlements, and Site Location within the Jharia–Dhanbad Region (author).
City / Mining areas

2.2.2 Historical Evolution of Extractivism

The history of Jharia clearly illustrates how extractivism has evolved through different modalities over time, degrading the region’s physical and social environment.

Phase 1: Underground Mining and the Onset of Disaster in the Colonial Era (1890s–1940s)

Following the establishment of the railway in 1894, Jharia’s mining history was dominated by the ‘pillar and gallery’ method of underground extraction. This method, aimed at maximising short-term output, fundamentally compromised the structural integrity of the mines through the excessive extraction of coal pillars, in disregard of safety regulations. Such indiscriminate and unscientific mining practices led to large-scale collapses in the 1930s, and the resultant underground fires were the prelude to a disaster that persists to this day².

Phase 2: The Shift to Opencast Mining and the Intensification of Environmental Degradation (1970s–Present)

After the nationalisation of the coal industry in the 1970s, opencast mining was introduced on a massive scale to meet India’s energy demands, altering the nature of extractivism. This approach entailed the complete removal of the surface, causing far more extensive and direct environmental destruction than before³. From this period, the underground fires became increasingly uncontrollable; it is reported that approximately 70 fires are currently active within the Jharia Coalfield, the highest number of any coalfield in India⁴.

The Legacy of Evolving Extractivism: A Compound Environmental Crisis

Thus, the evolution of mining practices in Jharia over a century has left a legacy of an unmanageable environmental disaster. Specifically, a feedback loop has been established wherein underground fires create voids by burning coal, which leads to land subsidence; the resulting surface fissures then supply oxygen that exacerbates the fires⁵. This geological instability has reached a critical level, threatening settlements and essential transport networks⁵. The vast waste dumps, air pollution, and contamination of soil and water generated in this process have permanently altered Jharia’s landscape, and are the direct cause of the Environmental Degradation and Pollutants to be discussed in the subsequent chapter.

¹ Anupal Jyoti Dutta et al., “Investigations of Geothermal Energy Production in Coal Fires Affected Jharia Basin, India,” Proceedings of the 49th Workshop on Geothermal Reservoir Engineering (Stanford University, 2024): 1, lines 12–13.

² B. P. Jha, “Unscientific Mining and Its Social Impact: A Case Study of Jharia Coalfield, India,” Rupkatha Journal on Interdisciplinary Studies in Humanities 3, no. 2 (2011): 253–255.

³ G.S. Saini, “Environmental impact of opencast and underground mining in Jharia coalfield, India,” Environmental Geology 20 (1992): 185–190.

⁴ Vamshi Karanam et al., “Multi‑sensor Remote Sensing Analysis of Coal Fire Induced Land Subsidence in Jharia Coalfields, Jharkhand, India,” International Journal of Applied Earth Observation and Geoinformation 102 (2021): 102439, p.2, lines 24–26.

⁵ Vamshi Karanam et al., “Multi‑sensor Remote Sensing Analysis of Coal Fire Induced Land Subsidence in Jharia Coalfields, Jharkhand, India,” International Journal of Applied Earth Observation and Geoinformation 102 (2021): 102439, p.2, lines 17–18.

⁶ Vamshi Karanam et al., “Multi‑sensor Remote Sensing Analysis of Coal Fire Induced Land Subsidence in Jharia Coalfields, Jharkhand, India,” International Journal of Applied Earth Observation and Geoinformation 102 (2021): 102439, p.2, lines 7–8.

Fig. 1.4 : Ground Subsidence Map. (adapted from R. S. Chatterjee et al., “Detecting, mapping and monitoring of land subsidence in Jharia Coalfield, Jharkhand, India by spaceborne differential interferometric SAR, GPS and precision levelling techniques,” 2015).

2.3 Environmental Degradation and Pollutants

This chapter investigates the environmental consequences of extractive activities, focusing on the degradation processes and pollutants associated with coal mining. Coal mining, especially in its open-pit form, leaves behind severely altered landscapes characterised by soil erosion, loss of vegetation, altered hydrology, and the accumulation of toxic substances.

We analyse the biogeochemical impact of these pollutants, exploring how they affect the earth’s capacity for regeneration and how they disrupt the development of healthy ecosystems. The chapter also examines the spatial distribution of pollution, the ecotoxicological risks to surrounding communities, and the legacy of environmental injustice often linked to extractive frontiers.

2.3.2 Geotechnical Instability

The pervasive ground subsidence and persistent underground fires in Jharia’s abandoned coal mines are caused by two interlinked processes triggered by desiccation, the severe drying of subsurface materials. First, prolonged desiccation eliminates the natural moisture that binds geological strata. Clay-rich sediments and coal rocks, deprived of water, shrink and fracture, creating fissures and voids that destabilise mine galleries and support pillars. Over time, the weakened ground collapses, causing surface subsidence that manifests as sudden sinkholes or gradual land depression.1

Simultaneously, desiccation renders the exposed coal seams highly combustible and pyrophoric: their increased porosity allows rapid oxidation, generating heat that can ignite the coal spontaneously at temperatures as low as 80°C. Friction from collapsing structures, residual electrical faults, or even small-scale landslides provide ignition sources. Once ignited, the fissures formed by subsidence act as ventilation ducts, channelling fresh oxygen to fuel the flames. This airflow transforms localised fires into self-sustaining infernos that spread through interconnected seams.2

In Jharia, over 77 active fire zones have perpetuated this cycle, with subsidence displacing communities and fires releasing toxic gases, especially in unsealed abandoned mines where desiccation remains unchecked.3 4 Under the Jharia Rehabilitation Plan, however, fire-suppression measures such as inert-gas injection and sand filling have reduced the fire-affected area from 17.32 km² across 77 active sites in 2009 to just 1.8 km² covering 27 sites in 2021.5

3. IEASRJ; Chapman University, “Underground Burning of Jharia Coal Mine (India) and Associated...”.

4. Global Bihari, “Considerable Reduction in Surface Fire Area in Jharia Claims Coal Ministry,” accessed June 2025, https://globalbihari.com/considerable-reduction-in-surface-fire-area-in-jharia-claims-coal-ministry/.

5. Ministry of Coal, Government of India. “Jharia Master Plan: Coal Ministry Efforts Bring Down Surface Fire Identified from 77 to 27 Sites.” Press Information Bureau press release, September 25, 2023. Accessed June 25, 2025.

6. R. S. Chatterjee et al., “Detecting, mapping and monitoring of land subsidence in Jharia Coalfield, Jharkhand, India by spaceborne differential interferometric SAR, GPS and precision levelling techniques,” Journal/Conference (2015), accessed September 17, 2025, via ResearchGate: https://www.researchgate.net/figure/Combined-land-subsidence-areas-in-Jharia-Coalfield-as-obtained-from-C-and-L-band-DInSAR_fig5_282245541

Fig. 1.5 : Vegetation density map. (author)

2.3.3 Soil & Water Pollutants & Vegetation Loss

Excavated pits collect water, and rainfall on the dumps carries pollutants to the adjacent low-lying lands and streams, which ultimately drain into the Damodar River.1 Studies by the Indian School of Mines show that the topsoil has low moisture content, poor pH balance, and depleted essential nutrients such as nitrogen, potassium, and phosphorus, making it unfit for farming or plant growth. A significant associated concern is Acid Mine Drainage (AMD), which occurs when sulphide minerals are exposed to air and water, forming sulphuric acid. The soil also carries traces of heavy metals and other elements, including arsenic, selenium, mercury, lead, sulphur, and fluorine.

The Damodar’s waters have been transformed into acidic, sludge-choked channels, with recorded pH levels as low as 2.5, making the water lethal to aquatic life and unusable for human activities.2 Coal fines create thick, black sedimentation that suffocates riverbeds, while heavy metals bioaccumulate in fish and agricultural produce.

Communities relying on the Damodar River face contaminated drinking water with arsenic concentrations documented at fifty times the World Health Organisation’s safety limits, alongside poisoned irrigation water that reduces crop yields and compromises food security.

A structured remediation cycle is essential to meet regulatory requirements, sustain ongoing mining operations, and repurpose abandoned sites. While high-intensity methods such as soil washing, chemical stabilisation, and ex-situ containment are resource-intensive, bioremediation and phytoremediation are preferred for their sustainability and ecological compatibility. This remediation-led approach points to a cyclical model of recovery and urban transformation, in which the landscape acts as a foundation enabling regenerative land use in post-industrial terrains.

1. V. A. Selvi et al., “Impact of coal industrial effluent on quality of Damodar river water,” Indian Journal of Environmental Protection 32, no. 1 (January 2012): 58–65

2. Abhay Kumar Singh et al., “Major Ion Chemistry, Weathering Processes and Water Quality Assessment in Upper Catchment of Damodar River Basin, India,” Environmental Geology 54, no. 5 (2008): 745–58.

Fig. 1.6 : Air Pollutant Concentration Map. (adapted from V. Saini, “Environmental impact studies in coalfields in India: A case study of the Jharia coalfield,” Renewable and Sustainable Energy Reviews 2016).

2.3.1 Air Pollutants

The most imminent issue is air pollution caused by underground fires in coal seams that have been burning for decades. In Jharia, the AQI (Air Quality Index) often exceeds 150–200, which falls in the “Unhealthy” range, driven in particular by high PM2.5 and PM10 levels from coal fires and dust.1 PM2.5 and PM10 denote particles of 2.5 micrometres and 10 micrometres or less, respectively; the former are small enough to penetrate the lungs and bloodstream, while the latter irritate the eyes and throat, both leading to health problems. From 2010 to 2013, asthma cases jumped by 92%, and surveys found 42% of residents suffering from chronic bronchitis or COPD.1 In addition, mining activities such as overburden dumping release fine particles that travel with the wind, carrying dust and trace metals. These risks highlight two architectural priorities for future construction: protecting against air pollution near affected sites and stabilising overburden dumps to limit pollutant spread, while also addressing how communities can safely co-exist with nearby pollution sources.

1. IEASRJ; Times of India.
2. V. Saini, “Environmental impact studies in coalfields in India: A case study of the Jharia coalfield,” Renewable and Sustainable Energy Reviews (2016), https://www.sciencedirect.com/science/article/abs/pii/S1364032115010424
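The AQI bands referred to above follow the standard US EPA classification. As a minimal illustrative sketch (the helper function and threshold table are our own, not part of the project toolset), the banding that places Jharia's typical readings in the "Unhealthy" range can be expressed as:

```python
# Illustrative sketch: mapping an Air Quality Index (AQI) reading to its
# category, assuming the standard US EPA breakpoints (0-50 Good, 51-100
# Moderate, 101-150 Unhealthy for Sensitive Groups, 151-200 Unhealthy,
# 201-300 Very Unhealthy, 301-500 Hazardous).
AQI_BANDS = [
    (50, "Good"),
    (100, "Moderate"),
    (150, "Unhealthy for Sensitive Groups"),
    (200, "Unhealthy"),
    (300, "Very Unhealthy"),
    (500, "Hazardous"),
]

def aqi_category(aqi: float) -> str:
    """Return the category label for an AQI value on the 0-500 scale."""
    for upper, label in AQI_BANDS:
        if aqi <= upper:
            return label
    return "Beyond AQI scale"

# Jharia's frequently recorded readings of 150-200 fall in the
# "Unhealthy" band once they exceed 150.
print(aqi_category(180))  # -> Unhealthy
```

The same table could later be swapped for India's CPCB bands, which use different labels and breakpoints.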

2.4 Coal Mining Processes

It is essential to understand how coal is formed and why its extraction is so important for industry, as this determines its location and the methods used to extract it. Coal is a sedimentary rock formed from the decomposition of plant material over millions of years. It is mainly composed of carbon (65–95%), along with hydrogen, oxygen, nitrogen, sulphur, and other minerals. Its formation began around 320 million years ago in swampy environments, where the lack of oxygen prevented normal decomposition. Instead, bacteria slowly broke down the plant matter, retaining its carbon content and enriching the surrounding environment.

As more layers of plant material accumulated, burial depth increased, raising pressure and temperature in the lower layers. This process transformed each layer differently depending on its depth, resulting in coal types of increasing carbon content and energy density. The four main types are peat, lignite, bituminous coal, and anthracite. Peat is the earliest stage and has a high moisture content. Lignite (25–30% carbon) is matt black and stains the skin. Bituminous coal (70–90% carbon) is the most commonly used for energy production, while anthracite (90–97% carbon) is the highest-quality coal. Because coal forms through this layered accumulation, it is found in stratified layers known as seams.

Coal is extracted mainly through open-pit mining and underground mining, depending on the depth of the seams. Open-pit mining is used when seams are shallow (up to 60–70 metres), allowing the removal of large amounts of soil and rock to access the coal directly. This method is more economical, safer, and more efficient, as it enables the use of heavy machinery and continuous operations, significantly reducing costs.

Underground mining is used for deeper seams and requires the excavation of tunnels, which increases costs, safety risks, and the need for ventilation and structural support. Although it has a smaller surface impact, it can cause subsidence and involves much higher investment due to complex tunnel networks and safety requirements. In Jharia, India, where this research is focused, open-pit mining predominates because many coal seams are shallow and easily accessible. Low operating costs and high coal demand have encouraged large-scale extraction. However, this method has severe environmental and social consequences, including air pollution from spontaneous coal fires, displacement of communities, and extensive landscape degradation.

2.4.1 Mining Cycle in Jharia

To develop a remediation strategy for open-pit mines, it is essential to understand the mining cycles in Jharia and their environmental consequences.

The process begins with exploration, where samples and geophysical surveys (seismic, electrical, magnetic, radiometric, and gravitational) are used to identify coal deposits. Once their presence is confirmed, extraction begins. In Jharia, open-pit mining is predominant because coal seams lie within 60 metres of the surface, making this method more profitable than underground mining. Large areas, typically 5 to 10 km² per mine, are deforested, after which the surface layer is fractured by controlled blasting and excavated using draglines.

After extraction, coal undergoes beneficiation in nearby plants, where it is crushed, separated from impurities, washed, and then transported by rail to steel and power plants.

This cycle causes severe environmental impacts, including irreversible biodiversity loss, overburden accumulation in the landscape, and wastewater containing heavy metals such as arsenic, lead, and pyrite discharged into tributaries of the Damodar River.

Air pollution from mining activities raises particulate-matter levels to three to four times India’s safe limits, while sulphur-rich dust contributes to atmospheric acidification. Combined with high annual rainfall (around 1,200 mm), this acidity accelerates the degradation of soils, water bodies, and vegetation far beyond active mining areas, with recorded rainfall pH values of 4.2–4.8, well below the national average.

Photograph from Vishal Kumar Singh, The Burning City: A Photographic Documentary on Jharia (India)
Fig. 1.7 : Mining Extraction in Jharia
Photograph from Vishal Kumar Singh, The Burning City: A Photographic Documentary on Jharia (India)
Fig. 1.8 : Existing Landscape

2.4.2 Fractured Landscape

Having explained how coal is formed and the processes involved in its extraction, the aim of this section is to provide a foundation for the reader to better understand the implications of this type of human activity.

First and foremost is the physical degradation of the affected area, mainly as a result of open-pit mining. The pits generated by this method of extraction are approximately 500 metres in radius and between 70 and 80 metres deep. These areas were once home to functioning ecosystems, with flora and fauna that can no longer be recovered. During coal mining operations, plant and animal species are lost, along with agricultural activities and other forms of local land use. The mining pits ultimately become physical barriers within the existing ecosystem.1

Moreover, the extraction process generates large quantities of debris, essentially the overburden removed to access the coal seams. It is estimated that for each mining pit, between 8 and 11 million cubic metres of infertile soil are displaced. This material is often left near the mining sites, becoming a permanent part of the altered landscape.
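The scale of these excavations can be checked with a simple order-of-magnitude calculation, idealising the pit as a vertical cylinder using the dimensions cited above (real pits are terraced, so this is an upper bound, not a measured figure):

```python
import math

def pit_volume_m3(radius_m: float, depth_m: float) -> float:
    """Excavated volume, idealising the pit as a vertical cylinder."""
    return math.pi * radius_m ** 2 * depth_m

low = pit_volume_m3(500, 70) / 1e6   # millions of cubic metres
high = pit_volume_m3(500, 80) / 1e6
print(f"approx. {low:.0f}-{high:.0f} million m3 per pit")
```

The result is in the tens of millions of cubic metres per pit, underscoring the sheer scale of ground disturbance involved.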

From a geotechnical perspective, the large-scale movement of earth and the accumulation of debris create unstable zones around the mining area. This leads to risks of landslides, ground subsidence, and rockfall, which make the area difficult and dangerous to access or repurpose. Another key concern is soil degradation. As mentioned earlier, above the desired coal seam lie layers of material with lower carbon content, many of which contribute to soil acidity once exposed. In other words, the debris is not simply soil; it also contains carbon-rich material which, through excavation and transport, contaminates the surrounding land.

This brings us to the issue of air pollution. When the land is disturbed, dust containing a high proportion of coal particles is released into the air, contributing significantly to local air pollution. Additionally, in many cases the coal is washed on-site using machinery brought in for this purpose. This cleaning process consumes large quantities of water, and the resulting wastewater is often discharged into nearby bodies of water — in this case, the Damodar River.

Amidst this degraded landscape, local inhabitants attempt to find economic opportunities linked to the coal industry. Over time, this has led to the emergence of informal settlements in and around mining zones. However, the living conditions in these areas are far from adequate.

1. Photographs in Figure 5.6 from Newsclick, “Jharia’s Coal Mining-Affected Families Continue to Live Atop Tinderbox,” by Ayaskant Das, January 13, 2022, https://www.newsclick.in/jharia-coal-mining-affected-families-live-atop-tinderbox

2. Business Insider, “Dhanbad, India: Coal Capital of the World,” by Sebastian Sardi, October 2018, https://www.businessinsider.com/dhanbad-india-coal-capital-of-the-world-2018-10

Fig. 1.9 : Diagram showing Chosen Abandoned Mine and Activities in the Vicinity. Photographs included from Newsclick (2022) and Business Insider (2018).
Fig. 1.10 : Land cover map drawn from data extrapolated from satellite imagery (author)

2.4.3 Land Cover Mapping

One such abandoned mine land near Jharia was identified: the area that was a pit in 2016 was backfilled so that excavation could shift to the location of the former dump. Its current abandoned state has led to waterlogging, and satellite imagery shows the transformation of the site and its impact on the immediate land cover.

Mapping the ground cover of the current site condition reveals the landscapes immediately around the pit, which include scrublands and dense ground cover around the water stream. Runoff from the dumps carries coal residues and other metals directly into the stream, which connects to the Damodar River. The built fabric is interspersed with mine lands, and dust and airborne pollutants become pressing issues.
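The land-cover classes described above (water, scrub, dense ground cover, bare spoil) can be separated to a first approximation with a vegetation index computed from Sentinel-2’s red (B04) and near-infrared (B08) bands. A minimal sketch with synthetic reflectance values; the class thresholds are illustrative assumptions, not calibrated values from this study:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index from surface reflectance."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def classify(v: np.ndarray) -> np.ndarray:
    # Illustrative thresholds only; a real map needs field calibration.
    classes = np.full(v.shape, "bare/spoil", dtype=object)
    classes[v < 0.0] = "water"
    classes[(v >= 0.2) & (v < 0.4)] = "scrub"
    classes[v >= 0.4] = "dense vegetation"
    return classes

# Synthetic reflectances for four pixels: water, spoil, scrub, dense cover
red = np.array([0.08, 0.20, 0.12, 0.05])
nir = np.array([0.04, 0.22, 0.20, 0.45])
print(classify(ndvi(red, nir)))
```

In practice the index would be computed per pixel across the whole Sentinel-2 scene and compared between acquisition dates to trace the land transition shown in the figures.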

Fig. 1.11 : Land transition over the given time period. Image modified from Sentinel-2 satellite imagery

2.5 Socio-economic Consequences

2.5.1 Living in the Mine Lands

The Jharia coalfields represent a spatial paradox: a site rich in national energy potential yet impoverished in the daily life of its people. Decades of unregulated extraction have produced enormous wealth for the state and industry, but at the cost of local livelihoods, land, and dignity.1 What emerges is a fractured socio-economic fabric where opportunity exists only in the service of coal, an extractive logic that translates directly into spatial neglect. Settlements near the mines are structurally informal and infrastructurally barren. Educational access is limited, mobility is constrained, and financial resilience is nearly non-existent. The people remain suspended in a mono-industrial economy without alternative pathways. Their informal economies (scavenging, day labour, and subsistence trade) operate in spaces never designed to support human activity.2

These are environments that cannot grow because the system itself denies growth. This has critical implications for architecture: the space of Jharia is not just under-designed; it is under-imagined. Addressing socio-economic paralysis demands new spatial logics that introduce redundancy, flexibility, and opportunity into otherwise dead-end conditions.

The architectural response must go beyond housing or infrastructure; it must actively re-script spatial economies, enabling communities to generate, share, and sustain value outside the shadow of extraction.

2.5.2 Health Impacts

Particulate matter exceeds safe limits, while subsurface coal fires release methane, sulphur dioxide, and carbon monoxide, turning daily life into a chronic health risk. Respiratory illnesses are widespread, alongside skin and eye diseases and reduced life expectancy. The built environment often intensifies exposure: housing is porous, poorly ventilated, and densely packed, with little separation from pollution sources. Public health infrastructure (clinics, clean water, and sanitation) is largely absent or overwhelmed. Rather than resisting instability, architecture can be conceived as adaptive: folding, fragmenting, and migrating in response to dynamic ground conditions. Designing with fire, rather than against it, reframes uncertainty as a design parameter rather than a constraint.

1. Government of India. Jharia Action Plan 2009. Ministry of Coal.

2. Malkhandi, Mita. “Displacement and Socio-Economic Plight of Tribal Population in Jharkhand with Special Reference to Jharia Coal Belt.” International Research Journal of Management Sociology & Humanity 9, no. 2 (2018): 96–105.

3. The Dark Earth: Coal Mining and Tribal Lives of Jharkhand. YouTube video, 12:34. June 22, 2024. https://www.youtube.com/watch?v=u1I2PFpHaYE.

Fig. 1.11 : Open Coal Fires in Informal Mining Landscapes
Photograph from Vishal Kumar Singh, The Burning City: A Photographic Documentary on Jharia (India)

2.5.3 Economic Dependencies

The economic structure of Jharia is fragile and singularly dependent on coal. Livelihoods revolve almost entirely around the mining economy, with most households tied to it either directly through formal employment or informally through contract labour, scavenging, or support services. This overreliance has produced a brittle ecosystem with volatile wages, unsafe working conditions, and little to no labour protections. Employment is often undocumented, hazardous, and exploitative, offering no path for upward mobility. As a result, families remain trapped in cycles of subsistence without access to diversified income streams.

This economic dependency is mirrored in the physical landscape. Informal worker settlements dominate, with little spatial planning or infrastructure to support alternative economies. There are no dedicated zones for education, skill development, light industry, or community-scale enterprise. Markets are scattered and undersupplied; repair workshops, fabrication spaces, and knowledge-sharing hubs are absent. In short, the city produces labour, not economic agency.1

As part of the design proposal, zoning for economic resilience will be introduced. This includes designated areas for micro-enterprise, vocational infrastructure, shared manufacturing, and community services: spaces intended not just to support living, but to enable working, learning, and evolving beyond extractive dependency.

2.5.4 Displacement and Dysfunction

Attempts to address displacement and economic recovery have largely failed. The Belgaria rehabilitation colony, established by BCCL (Bharat Coking Coal Limited), was intended to resettle families evicted from subsidence zones.2 However, it was designed with no regard for economic sustainability. Residents were relocated to peripheral land disconnected from job sites, markets, or transport. Basic services like water, drainage, and healthcare were either delayed or absent. Most critically, there was no provision for livelihoods, no economic zoning, no skill centres, no transport linkages. As a result, many families either returned to illegal mining areas or fell into deeper poverty. The site became spatially stable but economically void, a clear failure of planning that prioritised land clearance over human survival.

The spatial consequence of this economic fragility is a fractured urban form, where informal settlements sprawl chaotically around mining peripheries, with no dedicated space for trade, production, or growth. There are no mixed-use zones, no vocational corridors, and no public infrastructure to support alternative economies. This spatial vacuum reflects a larger planning neglect: the economy is treated as a byproduct of mining, not as a system to be cultivated.

In developing a new intervention at an abandoned mining pit, the objective is to learn directly from the failure of Belgaria. Economic zoning must be central to the spatial framework, not an afterthought. Designated zones for agro-based industries, fabrication units, markets, repair services, and vocational training can be embedded directly into the site. These zones would not just enable income; they would anchor people to place with purpose, offering alternatives to extraction through production, repair, reuse, and education. While architectural form alone cannot solve economic collapse, it can create the conditions for economic plurality to emerge.

2.6 Building Cultures

2.6.1 Vernacular Building Methods

The sectional drawing articulates how every layer and spatial shift is choreographed to work with the environment. The roof is lifted, allowing air to circulate beneath and draw heat upward and out through a simple yet powerful stack-effect mechanism. The thick earthen walls absorb heat during the day and release it slowly through the night, flattening diurnal temperature swings. Overhangs and shading devices extend protection far beyond the envelope. There is no mechanical intervention, and yet these houses remain liveable across extremes. Importantly, the section reveals how people use the space across the day: elders rest under shaded verandas while children play in the courtyard; clothes, crops, and utensils are set out to dry in solar-rich zones. This spatial choreography is responsive in real time, a kind of temporal programming of architecture that computational tools can simulate and optimise. The courtyard is not an aesthetic flourish; it is a thermal, social, and logistical hub.
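The stack-effect mechanism described above can be quantified with a standard building-physics approximation (not a measurement from this dissertation); the roof height, temperatures, and air density below are illustrative assumptions:

```python
def stack_pressure_pa(height_m: float, t_in_c: float, t_out_c: float,
                      rho_out: float = 1.2) -> float:
    """Stack-effect driving pressure (Pa) over a vertical height,
    using the ideal-gas approximation dp = rho * g * h * (Ti - To) / Ti."""
    g = 9.81                      # gravitational acceleration, m/s^2
    t_in_k = t_in_c + 273.15      # indoor temperature in kelvin
    return rho_out * g * height_m * (t_in_c - t_out_c) / t_in_k

# e.g. a 3 m lift under the roof, 35 C indoor air, 30 C outdoors (assumed)
print(f"{stack_pressure_pa(3.0, 35.0, 30.0):.2f} Pa")
```

The resulting pressure is well under one pascal, which is why vernacular stack ventilation depends on generous, unobstructed openings rather than on strong driving forces.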

Layer by layer, the material system is performing: compacted earth for base insulation and stability, jute and straw for tension and resilience, bamboo as structure and enclosure, and thatch or clay as thermal skin. These materials are locally sourced, recyclable, and low in embodied energy. More importantly, they are maintainable without machinery, relying on human labour, memory, and seasonal rhythm. The act of construction is embedded in social life: everyone builds, everyone repairs, and everyone understands the material’s behaviour. This is architecture as a distributed skill in a system where knowledge is communal, and upkeep is cyclical rather than outsourced. These methods offer a grounded precedent for developing material systems that are not only ecologically responsive but also socially embedded, scalable, and resilient by nature.

1. “Tribes of Jharkhand.” Uploaded by Daphneusms. Scribd. Accessed June 25, 2025. https://www.scribd. com/doc/139703199/Tribes-of-jharkhand.

2. Gautam, Avinash. Tribal Housing: A Case Study of Tribes in Jharkhand. M.Arch. thesis, Kansas State University, 2008.

3. Dutta, Pallabi, and Md. Mustafizur Rahman. “Learning from the Root – Integrating Tradition into Architecture towards a Self Subsistent Munda Community.” Conference paper, Khulna University Studies, Shahjalal University of Science and Technology, November 2022.

4. Krümmelbein, Julia, et al. “A History of Lignite Mining and Reclamation in Lusatia.” Canadian Journal of Soil Science 92, no. 1 (2012): 53–66.

Fig. 1.12: Sectional Perspective of Typical Vernacular Courtyard Houses (author)
Elders of the house use this area to rest, supervising the children at play.
Raised platform made of compacted earth

Uninterrupted, windowless mud wall, about 450 mm thick

Bamboo/wood post supporting the roof

Floor made of compacted earth

Used to dry clothes, crops, and eatables during the daytime.
Outdoor kitchen in summer
Thatched roof
Wooden/bamboo rafter
Tirpal (tarpaulin) sheet
Wooden/bamboo frame
Jute chatai (mat)
Raised plinth

2.7 Global Lessons

2.7.1 Case studies

Fly Ash Pressure Grouting: Mitigating Subsidence and Fires through Void Backfilling

In the room-and-pillar mines of Shaanxi, China, the injection of a fly ash slurry into subterranean voids has been shown to suppress ground subsidence by 75–80%.¹ The technique, which relies on precision control systems to manage the fill material, achieves a compressive strength of 1–2 MPa after 28 days, thereby stabilising the overlying strata. This has resulted in a significant reduction of maximum subsidence, from 2.0 m to 0.4 m.
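A quick consistency check on the figures reported for this case (a sketch; the percentage follows directly from the stated subsidence values):

```python
subsidence_before_m = 2.0   # maximum subsidence before backfilling
subsidence_after_m = 0.4    # maximum subsidence after backfilling
reduction_pct = (subsidence_before_m - subsidence_after_m) / subsidence_before_m * 100
print(f"{reduction_pct:.0f}% reduction in maximum subsidence")
```

The computed 80% reduction sits at the upper end of the 75–80% range cited in the source.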

Applicability to Jharia: This technology suggests a pathway towards establishing the fundamental ground stability required before installing any structures on the open-pit slopes. Filling the subterranean voids could contribute to reducing the risk of slope failure, which is exacerbated by the monsoon climate. Utilising the abundant local supply of fly ash in Jharia is a clear advantage, though a critical line of inquiry would be the development of a localised cement-fly ash composite capable of immobilising the region’s primary soil contaminants, namely arsenic and boron.

This case underscores the primacy of pre-emptive ground stabilisation. Filling subsurface voids prior to surface works proves a necessary precondition for subsequent design success. It also illustrates the circular use of local waste streams, indicating how Jharia’s fly ash can be repurposed as a core resource for a sustainable restoration model. Beyond void filling, the case points to a concrete research objective: development of tailored composites capable of immobilising site-specific contaminants such as arsenic and boron. In sum, the project frames geotechnical stabilisation and contaminant management as coupled tasks that can be addressed through locally sourced, performance-driven materials.

¹ Yue Jiang et al., “Mitigating Land Subsidence Damage Risk by Fly Ash Backfilling Technology,” Polish Journal of Environmental Studies 30, no. 1 (2021): 655–61.

² Guozhen Zhao et al., “Ecological Restoration of Coal Mine Waste Dumps: A Case Study of the Ximing Mine in the Arid Desert Region of Northwest China,” International Journal of Mining, Reclamation and Environment 37, no. 12 (2023): 833–55.

Ximing Mine, China: A Composite Barrier for High-Temperature Spoil Heaps

The high-temperature spoil heaps at the Ximing Mine (>180°C) were restored using a dual-defence system, which combines a structural-thermal barrier to block oxygen ingress with a biological cover for surface stabilisation.² Through this method, vegetation coverage was increased from 18% to 72% in two years, with long-term stability verified through computer simulations.

Applicability to Jharia: This case provides a direct conceptual model for a structure that can control Acid Mine Drainage (AMD) on the pit slopes. An impervious cap could prevent rainwater during the monsoon season from infiltrating the slopes and transporting pollutants into the Damodar River. An investigation into developing an eco-friendly mortar, blending Jharia’s local refractory clays and biochar as a substitute for concrete, could lead to a cost-effective solution that simultaneously forms a stable base for a vegetation structure and controls AMD generation.

The Ximing Mine demonstrates the value of a multifunctional composite barrier. A single system can deliver an impermeable cap that intercepts acid mine drainage while simultaneously providing a stable, bioreceptive substrate for vegetation establishment. The key lesson is that structural intervention should not end with engineering stability; it should be configured as an active substrate that initiates ecological recovery. This integrated stance directly informs the present research agenda, in which bio-receptive structures are conceived as living systems that host plants and microorganisms, enable filtration and metabolism, and thereby couple pollution control with long-term landscape rehabilitation.

³ Marta Muro Carbajal, “Restauración Geomorfológica e Hidrológica de la Escombrera del Cerco de San Teodoro (Almadén),” in VI Congreso Ibérico de la Ciencia del Suelo (2012).

⁴ Maximilian Schneider et al., “Modelling of the Acid Mine Drainage Generation in Lusatian Post-Mining Pit Lakes Using Sentinel 2 Data,” Minerals 13, no. 2 (2023): 271.

⁵ Peter Stanley et al., “Pit Lake Water Quality Prediction – A Global Review and a Lusatian Case Study,” in Proceedings of the International Mine Water Association Congress (2023): 338–45.

Barren spoil heap

Almadén, Spain: Geocell Stabilisation of Steep Slopes

At the Almadén mine in Spain, steep 60° slopes of contaminated spoil were stabilised using HDPE geocells.³ The essence of this technique is that the honeycomb-like cell structure confines the soil, preventing erosion, while each cell provides an individual substrate for vegetation to establish. The project not only reduced erosion and pollution but also transformed the site into a tourist asset generating €1.2 million annually.

Applicability to Jharia: Geocells offer a direct model for a structure that provides both slope stabilisation and a substrate for vegetation. This approach could be particularly effective in preventing the severe soil erosion caused by torrential monsoon rainfall. A promising direction for local adaptation would be to trial different infill mixtures within the cells, combining limestone to neutralise Jharia’s acidic soils with biochar, and planting deep-rooted, native species resilient to both monsoon and dry seasons to identify an optimal, climate-specific solution.

Almadén evidences the effectiveness of modular containment at slope scale. Honeycomb geocell systems show how small, repeatable modules can stabilise extensive unstable faces while distributing stresses and anchoring soils. For Jharia, the implication is clear: porosity within bio-receptive structures should operate as functional modules rather than passive voids, gripping substrate and cultivating plants. Each aperture can serve as an independent microhabitat with tailored soil mixes, moisture regimes and species, increasing biodiversity and redundancy. The design principle is therefore a modular ecology, where repeatable units produce large-scale stability and ecological gain through localised rooting, nutrient cycling and succession.


HDPE geocells

Lusatia, Germany: Regional-Scale Landscape Engineering

The 1,000 km² lignite mining area of Lusatia, Germany, was transformed into a sustainable landscape of lakes, forests, and farmland.⁴ The strategy’s core was to re-engineer the topography itself by regrading steep slopes to ensure inherent stability, and then covering the area with an engineered soil layer to restore the ecosystem. Consequently, the acidic soil (pH 3.4) was transformed into healthy land (pH 6.8) within five years.⁵

Applicability to Jharia: The approach taken in Lusatia presents a macro-scale alternative to installing individual structures, focusing instead on re-engineering the open-pit slopes’ topography. Regrading steep slopes is the most fundamental method of ensuring structural stability. This newly created, gentler landscape could provide the safest and most expansive foundation for subsequent revegetation efforts and large-scale architectural interventions.

Lusatia shifts the problem frame from adding structures onto risky ground to remaking the ground itself. Macro-scale terrain regrading that softens steep slopes provides the most robust foundation for all subsequent interventions. For Jharia, the lesson is to adopt a hybrid strategy: combine localised bio-receptive structures with targeted large-scale earthworks that remove hazards at source. Such reshaping improves geotechnical stability, attenuates runoff, reduces erosion, and simplifies downstream maintenance. Strategic recontouring thereby functions as primary risk abatement, upon which finer-grained ecological and programmatic layers can be deployed with greater durability, accessibility and long-term operational viability.


Fig. 1.17: Sectional Perspective of Typical Vernacular Courtyard Houses (author)

2.7.2 Building with Robots

The MaxiPrinter, developed by Constructions-3D, is a mobile large-scale 3D printing system designed for on-site construction using cementitious and earth-based materials. Unlike factory-bound gantry systems, the MaxiPrinter is conceived as a deployable construction tool, capable of operating directly within complex and hazardous terrains.

The system operates with a compact folded footprint of approximately 3.0 × 0.85 × 2.0 m, allowing it to be transported and assembled on site. Once deployed, it can print within a large working envelope, reaching up to roughly 9–12 m in plan and up to 7 m in height, depending on configuration. This enables the fabrication of full-scale architectural elements such as retaining walls, load-bearing envelopes, and infrastructural components without the need for prefabrication or heavy formwork.

Material is delivered through a continuous pumping system, enabling uninterrupted printing as long as supply is maintained. Typical productivity is reported at around 14 m² of single-wall surface per hour, making it suitable for rapid stabilisation works and structural interventions. Layer-by-layer deposition allows geometry to respond directly to site constraints, including slope, cracks, or uneven ground conditions.
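The reported productivity figure supports rough scheduling estimates. A minimal sketch, assuming the quoted 14 m² per hour rate and a hypothetical 9 m long, 3 m high single-skin wall (dimensions chosen for illustration, not taken from the manufacturer):

```python
def print_time_h(wall_area_m2: float, rate_m2_per_h: float = 14.0) -> float:
    """Hours of continuous printing for a given single-wall surface area."""
    return wall_area_m2 / rate_m2_per_h

# Hypothetical 9 m long, 3 m high single-skin retaining wall
print(f"{print_time_h(9 * 3):.1f} h of printing")
```

Under these assumptions a full-height wall segment prints in around two hours, which is the kind of throughput that makes phased, on-site stabilisation works plausible.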

In architectural and landscape applications, the MaxiPrinter has been positioned as a tool for reducing human exposure in hazardous environments while enabling materially efficient construction. Its compatibility with locally sourced mixes supports low-cost, low-carbon building strategies and makes it particularly relevant for post-industrial, remote, or environmentally sensitive sites.

Constructions-3D. “MaxiPrinter: Mobile Large Scale 3D Construction Printer.” https://www.constructions-3d.com/en/maxiprinter

Constructions-3D. MaxiPrinter Technical Brochure (PDF). https://geopolymerinternational.com/wp-content/uploads/2024/05/MAXIPRINTER-BROCHURE-2023-EN-WEB.pdf

2.8 Discussion

This analysis seeks to understand the comprehensive issues surrounding coal mining, approaching them from a critical perspective that promotes reader awareness. As has been demonstrated, the abandoned cavities resulting from open-pit mining have the main impact of fracturing the landscape, but they also generate profound socio-economic implications. The central objective of the project is to transform these extensive abandoned and degraded lands into an integrated cycle of environmental remediation and social recovery.

As a team, we understand that when intervening in these areas, we face two fundamental challenges. The first is the spatial scale: given that these mining cavities have a radius of approximately 500 metres, we propose categorising the intervention area according to its soil attributes, slopes, water flows and other relevant factors. This characterisation will allow us to define different areas of action, which will determine specific intervention strategies according to their categorisation.
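The categorisation by slope described above can be prototyped from a digital elevation model. A minimal sketch with a synthetic DEM; the class breaks and cell size are illustrative assumptions, not values from this thesis, and real zoning would also weigh soil chemistry, water flow, and subsidence risk:

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size_m: float) -> np.ndarray:
    """Per-cell slope angle from a gridded DEM via finite differences."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size_m)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def zone(slope_deg: np.ndarray) -> np.ndarray:
    # Illustrative class breaks only (assumed thresholds).
    z = np.full(slope_deg.shape, "buildable", dtype=object)
    z[slope_deg >= 15] = "terracing required"
    z[slope_deg >= 35] = "stabilisation only"
    return z

# Synthetic 4x4 DEM: a uniform ramp rising 5 m per 10 m cell (~26.6 degrees)
dem = np.outer(np.arange(4), np.ones(4)) * 5.0
print(zone(slope_degrees(dem, 10.0)))
```

Applied to a real DEM of the pit, each cell would carry a zoning label, giving the phased strategy a spatial map of where building, terracing, or stabilisation-only interventions are appropriate.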

The second challenge is the remediation of the contaminated landscape. To this end, knowledge of phytoremediation and soil restoration through strategic associations of plant and tree species is essential. The project will establish a phased remediation system to achieve this objective. As a team, we are aware that an initiative of this magnitude requires sequential and modular implementation, the phases of which will be detailed in later chapters.

In conclusion, this thesis poses a dual challenge: to establish a phased socio-environmental remediation cycle that allows the lost ecosystem to be recovered, transforming it into a productive and ecologically functional system.

2.8.1 Hypothesis

Strategic planting techniques and stabilisation methods, combined with bio-receptive structures, would transform a former mine and dump site: significantly reducing soil erosion, enhancing ground stability and soil fertility, activating agricultural use of the restored land, and ultimately driving regional economic recovery.

2.8.2 Research Questions

How can these abandoned landscapes be converted into productive systems that restore the environment and generate new housing and economic opportunities?

What phytoremediation strategies are effective in converting soils contaminated by mining oxidation processes into substrates suitable for the development of self-regulating living ecosystems?

How can an architectural landscape be designed that evolves over time alongside ecological remediation processes?

Which plant species are suitable given the geology and climate of the mining site?

Fig. 1.18 Proposed Remediation cycle (author)

2.8.3 Bibliography

1. The Coal Mining Life Cycle. Mining for Schools. Accessed June 25, 2025. https://miningforschools.co.za/lets-explore/coal/the-coal-mining-life-cycle.

2. Central Mine Planning & Design Institute (CMPDI). Annual Report. 2021.

3. Gupta, Shiv Kumar, and Kumar Nikhil. Ground Water Contamination in Coal Mining Areas: A Critical Review. 2016.

4. Jharkhand Pollution Control Board. Annual Report. 2022.

5. Central Institute of Mining & Fuel Research. Research Highlights. 2019.

6. Greenpeace India. Airpocalypse IV: Assessment of Air Pollution in Indian Cities. New Delhi: Greenpeace India, 2020. https://www.greenpeace.org/india/en/story/7764/airpocalypse-iv-assessment-of-air-pollution-in-indian-cities/.

7. United Nations Framework Convention on Climate Change (UNFCCC). The Paris Agreement. 2015. https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement.

8. NITI Aayog. India’s Updated Nationally Determined Contributions (NDCs). New Delhi: Government of India, 2021. https://www.niti.gov.in.

9. IEASRJ; Times of India. [No additional bibliographic details provided.]

10. Riyas, Moidu Jameela, Tajdarul Hassan Syed, Hrishikesh Kumar, and Claudia Kuenzer. “Detecting and Analyzing the Evolution of Subsidence Due to Coal Fires in Jharia Coalfield, India Using Sentinel-1 SAR Data.” Remote Sensing 13, no. 8 (2021): Article 1521. https://doi.org/10.3390/rs13081521.

11. IEASRJ; Chapman University. “Underground Burning of Jharia Coal Mine (India) and Associated…”. [No additional bibliographic details provided.]

12. Global Bihari. “Considerable Reduction in Surface Fire Area in Jharia Claims Coal Ministry.” Accessed June 2025. https://globalbihari.com/considerable-reduction-in-surface-fire-area-in-jharia-claims-coal-ministry/.

13. Ministry of Coal, Government of India. “Jharia Master Plan: Coal Ministry Efforts Bring Down Surface Fire Identified from 77 to 27 Sites.” Press Information Bureau press release, September 25, 2023. Accessed June 25, 2025. https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=1960543.

14. Selvi, V. A., et al. “Impact of Coal Industrial Effluent on Quality of Damodar River Water.” Indian Journal of Environmental Protection 32, no. 1 (January 2012): 58–65.

15. Sharma, A. K., B. P. Singh, and C. L. Prasad. Degradation of Soil Quality Parameters Due to Coal Mining: A Case Study of Jharia Coalfield. CORE PDF. Accessed June 25, 2025. https://core.ac.uk/download/pdf/188610081.pdf.

16. Saini, Varinder, R. P. Gupta, and Manoj K. Arora. “Environmental Issues of Coal Mining – A Case Study of Jharia Coal-Field, India.” Energy Procedia 90 (2016): 634–641. https://www.researchgate.net/publication/291102685_Environmental_issues_of_coal_mining_-_A_case_study_of_Jharia_coal-field_India.

17. Government of India. Jharia Action Plan 2009. Ministry of Coal.

18. Malkhandi, Mita. “Displacement and Socio-Economic Plight of Tribal Population in Jharkhand with Special Reference to Jharia Coal Belt.” International Research Journal of Management Sociology & Humanity 9, no. 2 (2018): 96–105.

19. The Dark Earth: Coal Mining and Tribal Lives of Jharkhand. YouTube video, 12:34. June 22, 2024. https://www.youtube.com/watch?v=u1I2PFpHaYE.

20. “Tribes of Jharkhand.” Uploaded by Daphneusms. Scribd. Accessed June 25, 2025. https://www.scribd.com/doc/139703199/Tribes-of-jharkhand.

21. Gautam, Avinash. Tribal Housing: A Case Study of Tribes in Jharkhand. M.Arch. thesis, Kansas State University, 2008.

22. Dutta, Pallabi, and Md. Mustafizur Rahman. “Learning from the Root – Integrating Tradition into Architecture towards a Self-Subsistent Munda Community.” Conference paper, Khulna University Studies, Shahjalal University of Science and Technology, November 2022.

23. Krümmelbein, Julia, et al. “A History of Lignite Mining and Reclamation in Lusatia.” Canadian Journal of Soil Science 92, no. 1 (2012): 53–66.

24. Lausitz & Central German Mining Company (LMBV). Mine Rehabilitation in Germany: Example LMBV. Senftenberg, 2023.

25. Ram, L. C., and R. E. Masto. “Fly Ash for Soil Amelioration.” Earth-Science Reviews 128 (2014): 52–74.

26. Jiang, Yue, et al. “Mitigating Land Subsidence Damage Risk by Fly Ash Backfilling Technology.” Polish Journal of Environmental Studies 30, no. 1 (2021): 655–661.

27. Mishra, D. P., and S. K. Das. “Physico-Chemical Properties of Talcher Fly Ash for Stowing.” Materials Characterization 61, no. 11 (2010): 1252–1259.

28. Siachoono, Stanford M. “Land Reclamation in Haller Park.” International Journal of Biodiversity and Conservation 2, no. 2 (2010): 19–25.

29. Trippe, Kristen E., et al. “Phytostabilisation of Acid Tailings with Biochar and Microbial Inoculum.” Applied Soil Ecology 165 (2021): 103962.

30. “Coal Story.” 88Guru. https://88guru.com/library/chemistry/coal-story.

31. “How Is Coal Made?” The Daily Eco. https://www.thedailyeco.com/how-is-coal-made-877.html.

32. “How Coal Mining Works.” BKV Energy. https://bkvenergy.com/learning-center/how-coal-mining-works/.

33. “The Coal Mining Life Cycle.” Mining for Schools. https://miningforschools.co.za/lets-explore/coal/the-coal-mining-life-cycle.

34. Singh, Abhay Kumar, G. C. Mondal, Suresh Kumar, T. B. Singh, B. K. Tewary, and A. Sinha. “Major Ion Chemistry, Weathering Processes and Water Quality Assessment in Upper Catchment of Damodar River Basin, India.” Environmental Geology 54, no. 5 (2008): 745–758.

35. Singh, Vishal Kumar. The Burning City: A Photographic Documentary on Jharia. India, n.d.

36. Global Energy Monitor. Global Coal Mine Tracker. 2025 release. Accessed September 17, 2025. https://globalenergymonitor.org/projects/global-coal-mine-tracker/

37. Saini, V. “Environmental Impact Studies in Coalfields in India: A Case Study of the Jharia Coalfield.” Renewable and Sustainable Energy Reviews (2016). https://www.sciencedirect.com/science/article/abs/pii/S1364032115010424

38. Chatterjee, R. S., Shailaja Thapa, K. B. Singh, G. Varunakumar, and E. V. R. Raju. “Detecting, Mapping and Monitoring of Land Subsidence in Jharia Coalfield, Jharkhand, India by Spaceborne Differential Interferometric SAR, GPS and Precision Levelling Techniques.” 2015. Accessed September 17, 2025. https://www.researchgate.net/figure/Combined-land-subsidence-areas-in-Jharia-Coalfield-as-obtained-from-C-and-L-band-DInSAR_fig5_282245541

39. Habib, Md Tariq, Saarthak Khurana, and Vivek Sen. Just Energy Transition: Economic Implications for Jharkhand. Climate Policy Initiative, December 28, 2023. Accessed September 17, 2025. https://www.climatepolicyinitiative.org/just-energy-transition-economic-implications-for-jharkhand/

40. Singh, Vishal Kumar. The Burning City: A Photographic Documentary on Jharia. India, n.d.

41. Global Energy Monitor. Global Coal Mine Tracker. 2025 release. Accessed September 17, 2025. https://globalenergymonitor.org/projects/global-coal-mine-tracker/

42. Saini, V. “Environmental Impact Studies in Coalfields in India: A Case Study of the Jharia Coalfield.” Renewable and Sustainable Energy Reviews (2016). https://www.sciencedirect.com/science/article/abs/pii/S1364032115010424

43. Constructions-3D. “MaxiPrinter: Mobile Large-Scale 3D Construction Printer.” https://www.constructions-3d.com/en/maxiprinter

44. Constructions-3D. MaxiPrinter Technical Brochure (PDF). https://geopolymerinternational.com/wp-content/uploads/2024/05/MAXIPRINTER-BROCHURE-2023-EN-WEB.pdf

iii methods

3.1 System Logic: From Environment to Encoded Data

3.1.1 Patch Extraction and Terrain Encoding

3.1.2 Agent-Based Environmental Simulation

3.1.3 Multi-Objective Optimisation

3.2 Ecological Intelligence Framework

3.2.1 Understanding the Plant Guild

3.2.2 Patch–Plant Matching via Proximity

3.3 Site Clustering Methodologies

3.3.1 Data Defining Construction Locations

3.3.2 K-Medoids Clustering Method

3.4 Material Logic: Waste as Constructive Resource

3.4.1 Soil and Aggregate Calibration

3.4.2 Robotic Printing Strategies on Inclined Terrain

3.5 Stabilising Wall Logic: Generative Framework

3.5.1 Base Geometry: Curved Surface for Analysis

3.5.2 Deformation Simulation (Material Response)

3.5.3 Stress Flow Lines (Karamba Analysis)

3.5.4 Porosity Zoning Based on Stress and Deformation

3.6 Emerging Points via Cellular Automata

3.6.1 Environmental Encoding within Voxel Matrix

3.6.2 Cellular Automata for Ecological Growth

3.7 Volumetric Logic: Data-to-Mass Encoding

3.7.1 Finite Element Analysis for the Geometry

3.7.2 Agent-Based Growth Simulation (Culebra)

3.7.3 Materialisation Using Volumetric Modelling Tools

Reimagining and remediating mined lands can be approached as a global problem, with solutions in governance or in low-effort restoration and planting measures; this research, however, aims to investigate and respond to the site's immediate, complex parameters. A thorough investigation of the complex land morphologies, and of remediation methods answering multiple environmental factors and social dependencies, requires a data-driven methodology that provides a bottom-up solution. This informs the climatic response, intervention strategies, building morphologies, planting strategies and spatial planning.

The research is conducted in two phases: the MSc phase focuses on network strategies, bioreceptive structure design and the development of bio-receptive materials, while the MArch phase expands these established strategies into the development of architectural typologies.

The research undertakes data collection at several scales to inform its experiments. Quantitative information about the altered minescapes and land morphology, together with spatial data, is procured using Geographic Information Systems (GIS) tools. Published environmental research, scientific studies, government plans and articles are sources for understanding social issues and environmental impacts. Local surveys and horticulture data help to identify the native plant types. Precedents of global mine sites and reclamation efforts guide the remediation strategies and serve as models for economic resilience and the project timeline.

Land stabilisation, settlement clustering, bioreceptive morphologies and materials, along with planting systems, are prioritised. Data from the environmental analyses informs the agent-based generative stabilisation network. Multi-objective optimisation methods are used to optimise networks, productive landforms and building morphologies. Planting methods are evaluated with ecological modelling tools to determine improved plant and soil conditions. Environmental optimisation and structural analysis are conducted at various scales using Finite Element Analysis (FEA) and environmental analysis tools. The allocation of built structures, planting patterns and settlement clusters onto the network is predicted using unsupervised machine-learning algorithms, and physical tests are conducted on the material experiments. In this way, the research employs multiple tools and methods to address building-morphology and landscape interventions.

3.1 System Logic: From Environment to Encoded Data

3.1.1 Patch Extraction and Terrain Encoding

This stage establishes the fundamental matrix on which all subsequent design decisions are based. The main objective was to create a logic that would allow site-specific environmental data to be assigned to discrete units within a geometry, enabling the terrain to become an active, data-responsive surface.

To achieve this, the intervention area was subdivided into a grid of patches, each serving as a container for coded environmental information. These patches contain parameters such as slope, pollution levels, exposure, and hydrological behaviour. The coding process transforms a neutral surface into a differentiated landscape through its data, allowing it to host various functions and behaviours according to ecological performance, which are vital for subsequent design decisions.
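This patch logic can be sketched as a minimal data structure. The attribute names and the `demo_sampler` below are hypothetical stand-ins for the project's actual GIS queries, not its real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    """A 5x5 m terrain unit acting as a container for encoded environmental data."""
    row: int
    col: int
    attributes: dict = field(default_factory=dict)

def build_patch_grid(n_rows, n_cols, sampler):
    """Subdivide the intervention area into a grid of patches and encode
    each one via a sampling function (e.g. a GIS raster lookup)."""
    grid = {}
    for r in range(n_rows):
        for c in range(n_cols):
            grid[(r, c)] = Patch(r, c, sampler(r, c))
    return grid

# Hypothetical sampler standing in for real GIS queries.
def demo_sampler(r, c):
    return {"slope": (r + c) % 45, "pollution": 0.1 * c, "exposure": 1.0}

grid = build_patch_grid(4, 4, demo_sampler)
```

Each patch then carries its coded parameters into the later simulation and optimisation steps.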

3.1.2 Agent-Based Environmental Simulation

Once the data had been encoded in the patch system, agent-based simulations were introduced to explore the intervention area. The agents were programmed to navigate the terrain according to forces of attraction and repulsion derived from the data of each patch. For example, agents may be attracted to low-risk, well-lit areas, while avoiding areas with high slope instability or toxicity.

Agent-based simulation has the function of assigning movement behaviours inspired by ecological logic and natural movement systems, allowing simulations to generate spatial trajectories that respond to coded environmental variables. Therefore, the simulation does not function as a shape generator, but rather as a decision-making tool that reveals circulation patterns, threshold zones, or spatial concentration guided by ecological logic and patch data.

The resulting trajectories converge into a single organic network encompassing the entire complex territory. The core of this method is to establish a generative process that reveals an optimised form, rather than to design the final form directly.
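The attraction–repulsion steering that drives the agents can be sketched as follows, assuming a simple inverse-square weighting; the actual Culebra behaviours are more elaborate:

```python
import math

def step_agent(pos, attractors, repulsors, speed=1.0):
    """Move an agent one step: sum inverse-square pulls toward attractors
    and pushes away from repulsors, then advance along the net direction."""
    fx = fy = 0.0
    for (ax, ay), w in attractors:
        dx, dy = ax - pos[0], ay - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += w * dx / d**2
        fy += w * dy / d**2
    for (rx, ry), w in repulsors:
        dx, dy = pos[0] - rx, pos[1] - ry
        d = math.hypot(dx, dy) or 1e-9
        fx += w * dx / d**2
        fy += w * dy / d**2
    norm = math.hypot(fx, fy) or 1e-9
    return (pos[0] + speed * fx / norm, pos[1] + speed * fy / norm)

# One agent pulled toward a high-priority patch at (10, 0).
p = (0.0, 0.0)
for _ in range(5):
    p = step_agent(p, attractors=[((10.0, 0.0), 1.0)], repulsors=[])
```

Traces of many such agents, accumulated over time, form the candidate corridors.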

3.1.3 Multi-Objective Optimisation

To refine and evaluate the outputs of the simulation, a multi-objective optimisation process was implemented using Wallacei. This tool enables the selection of optimal trajectories based on multiple performance criteria, such as minimising construction in high-risk zones while maximising environmental connectivity or defining terracing areas.

Wallacei does not propose a single fixed solution, but rather offers a spectrum of outcomes ranked by their performance against selected objectives. This method allowed the emergent paths and spatial strategies to be filtered and optimised, ensuring that subsequent design responses were both environmentally informed and computationally robust.
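The ranking logic behind such a spectrum of outcomes can be illustrated with a minimal Pareto-front filter; the two objectives shown are hypothetical placeholders for the project's criteria:

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every (minimised)
    objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset: the spectrum of trade-off optima
    that an evolutionary solver such as Wallacei exposes for selection."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Objectives (all minimised), e.g. (risk exposure, inverse-connectivity proxy).
sols = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5)]
front = pareto_front(sols)
```

Here (0.5, 0.5) is dominated by (0.4, 0.4) and drops out, while the three trade-off optima remain for the designer to choose between.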

Fig. 3.1 : Agents (Author)

3.2 Ecological Intelligence Framework

3.2.1 Understanding the Plant Guild

The intervention prioritises environmental recovery over the form of the building, which is why the ecological function of plants is vital. Plants are not considered simply decorative elements, but primary agents of soil remediation, capable of absorbing, stabilising, isolating or filtering contaminants embedded in the soil.

To this end, plants were classified according to their functional contribution to remediation. They are grouped into ecological guilds, species that share similar functions, such as phytostabilisation, phytoremediation, nitrogen fixation or biodiversity enhancement. Understanding the capabilities of each plant guild was essential in defining which species were suitable for different areas of the site.

3.2.2 Patch–Plant Matching via Proximity

Once the species had been identified, the challenge was to assign them to the appropriate areas. As mentioned above, the intervention area was divided into a grid of plots, each of which was treated as a data-receiving unit. Environmental variables were accurately extracted using digital tools and simulations. These included exposure to solar radiation, surface moisture, terrain slope, pollution zones (mapped from satellite data) and surface typology.

Once these variables were coded, the suitability of each plot was assessed. Rather than assigning vegetation arbitrarily, each plant species was matched to a specific plot based on the proximity between its ecological needs and the local conditions. This spatial logic allowed the project to create a responsive and active ecological layer that is not ornamental, but instrumental.
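The proximity-based matching can be sketched as a nearest-neighbour search in a normalised parameter space. The guild names, parameter keys, and values below are illustrative assumptions, not the project's actual species list:

```python
def match_species(patch, guilds):
    """Assign the guild species whose environmental needs lie closest
    (Euclidean distance in normalised parameter space) to the patch conditions."""
    keys = ("radiation", "moisture", "slope")
    def dist(needs):
        return sum((patch[k] - needs[k]) ** 2 for k in keys) ** 0.5
    return min(guilds, key=lambda g: dist(g["needs"]))["species"]

# Hypothetical guild definitions; all values normalised to 0-1.
guilds = [
    {"species": "phytostabiliser_grass",
     "needs": {"radiation": 0.9, "moisture": 0.3, "slope": 0.8}},
    {"species": "nitrogen_fixing_shrub",
     "needs": {"radiation": 0.5, "moisture": 0.7, "slope": 0.2}},
]
patch = {"radiation": 0.85, "moisture": 0.25, "slope": 0.75}
chosen = match_species(patch, guilds)
```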

In this way, vegetation becomes the first layer of the design: a system of planted agents that initiate the remediation process, preparing the ground for subsequent architectural interventions that arise from a site already in the process of recovery.

Fig. 3.2 : Ecological Process (Author)

3.3 Site Clustering Methodologies

3.3.1 Data Defining Construction Locations

As the project progresses through successive environmental analyses, new data layers are continuously added to enrich the definition of the terrain. At a specific point in the process, it becomes essential to identify areas with potential for architectural intervention. One of the most critical indicators introduced here is the risk value assigned to each patch. Although the architectural implications of these zones will be explored in later chapters, their role at this stage is tied to the logic of ecological remediation: areas with higher risk—such as unstable slopes or zones of high contamination—are also the ones that require greater stabilisation, and therefore may be structurally prioritised.

This perspective allows construction to emerge not arbitrarily, but in direct response to terrain fragility, aligning architectural action with ecological necessity. These high-risk patches become strategic candidates for intervention, not due to opportunity, but due to urgency of repair.

3.3.2 K-Medoids Clustering Method

Once the logic for architectural interventions had been defined, the scale of the terrain made it necessary to group the candidate patches into clusters, for which the K-Medoids clustering algorithm was used. Unlike centroid-based methods such as K-Means, K-Medoids selects actual data points, known as medoids, as representatives of each cluster. This distinction is fundamental in geospatial work, as it ensures that the resulting reference points correspond to actual, analysable plots within the terrain.

Each patch had been pre-coded with environmental risk values. These were used as inputs for the clustering process, along with spatial proximity as a covariate. Thus, the algorithm clustered the patches based not only on similar environmental conditions, but also on their geographical proximity, making the result statistically consistent and spatially relevant.

The result of this clustering method is a set of medoid plots that act as anchor points, nodes of great environmental importance and physical relevance. These points are not theoretical, but are based on the dataset, making them suitable for guiding future structural decisions or ecological interventions.
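A minimal sketch of the K-Medoids alternation, assuming a combined position-plus-risk distance (a simplification of the actual patch dataset):

```python
import random

def k_medoids(points, k, dist, iters=50, seed=0):
    """Minimal K-Medoids: medoids are actual data points, so each cluster
    representative is a real, analysable patch rather than an abstract centroid."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    for _ in range(iters):
        # Assign each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Re-pick each medoid as the member minimising intra-cluster cost.
        new = [min(idxs, key=lambda c: sum(dist(points[c], points[i]) for i in idxs))
               for idxs in clusters.values()]
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    return medoids

# Points combine (x, y, risk) so clustering respects both proximity and condition.
pts = [(0, 0, 0.9), (1, 0, 0.8), (0, 1, 0.85), (10, 10, 0.2), (11, 10, 0.25)]
d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
meds = k_medoids(pts, 2, d)
```

With two spatially separated risk groups, the two returned medoids are real points, one drawn from each group.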

Fig. 3.3 : K-Medoids Method (Author)

3.4 Material Logic: Waste as Constructive Resource

3.4.1 Soil and Aggregate Calibration

The formulation of the material system began with an exploration of the vernacular use of earth-based compounds in the Jharia region. Traditional construction methods based on mixtures of clay, local earth, and organic fibres were taken as a starting point. This vernacular logic was updated to create a printable compound by incorporating waste materials, specifically lime and fly ash, as stabilisers. These materials, both by-products of the mining industry, were selected for their cementitious properties and environmental relevance to the site.

To improve ecological performance, biochar and vermiculite were added, increasing porosity and water retention. These properties promote microbial activity and plant growth, aligning the material with long-term remediation goals. It was decided that the aforementioned components would generate an extrudable material, which will be detailed more precisely in the following chapters, and that other compounds would be added to achieve reliable extrusion. The resulting formulation achieved a balance between mechanical integrity and ecological receptivity, which is essential for the dual function of stabilisation and regeneration.

3.4.2 Robotic Printing Strategies on Inclined Terrain

Once the composition of the material had been optimised, attention turned to its behaviour during robotic manufacturing, specifically on sloping and unstable terrain. Initial tests were carried out using both syringe and auger extrusion methods to analyse flow consistency, accuracy under pressure and printability at different inclinations. A series of 1:10 scale prototypes were successfully manufactured, confirming the system's capacity for layered deposition.

To validate the material’s performance under site-like conditions, mechanical and environmental tests were conducted. Compression tests ensured that the printed elements could withstand the load. Shear tests evaluated the material’s stability against lateral forces, which are critical to prevent slippage on sloping terrain.

Together, these strategies demonstrate the feasibility of deploying robotic construction in high-risk post-mining landscapes. The method integrates waste reuse, digital manufacturing, and ecological performance, offering a material system that responds to terrain conditions and actively contributes to environmental recovery.

Fig. 3.4 : Material Process (Author)

3.5 Stabilising Wall Logic: Generative Framework

3.5.1 Base Geometry: Curved Surface for Analysis

This methodology starts by defining a continuous curved surface derived from contextual topographical and volumetric constraints. This surface is not intended as a definitive architectural form, but as a neutral geometry that acts as a stabilising base and transition between the land undergoing remediation and the emerging architecture. The basis of the design is a surface that adapts to the topography and to which materials will be applied. The curvature allows a multidirectional distribution of tension, which is essential for testing the response of heterogeneous materials in later stages.

3.5.2 Deformation Simulation (Material Response)

Once the analysis surface has been defined, a simulation is performed to check how it responds to stress, based on the parameters calibrated for the material system described in section 3.4. The analysis, performed with finite element tools, represents the intensity of the deformation using colour gradients, where the lighter areas indicate greater displacement. This deformation, based on the defined material, identifies the areas undergoing greater deformation, and therefore those requiring structural reinforcement or adaptation strategies.

3.5.3 Stress Flow Lines (Karamba Analysis)

Following the deformation simulation, a Karamba structural analysis is executed to extract the principal stress lines, those defining the primary tension and compression paths within the curved surface. These flow lines reflect the internal logic of force propagation, and become spatial inputs for further design. Instead of treating structure as imposed, these lines guide how the system self-organises according to stress hierarchies. The output provides the structural DNA upon which geometric differentiation can occur.

3.5.4 Porosity Zoning Based on Stress and Deformation

From the Karamba analysis, the principal stress lines are obtained as the trajectories of tension and compression across the curved surface. Rather than treating these flow lines as abstract data, they are used as generative guides to spatially differentiate the geometry. Specifically, the stress lines inform the density and distribution of porosity within the surface.

A Voronoi-based algorithm is implemented to map porosity zones: areas of higher stress concentration generate tighter, more compact cells, while zones under lower stress permit larger, more porous openings. The resulting porosity is not decorative; it is a direct response to material performance, structural performance and controlled permeability, based on the system's own structural logic.
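One simple way to realise this stress-dependent cell density is rejection sampling of the Voronoi seed points: denser seeds mean tighter cells. The stress field below is a hypothetical stand-in for the FEA output:

```python
import random

def porosity_seeds(stress_at, bounds, n_candidates=2000, seed=1):
    """Scatter Voronoi seed points with density proportional to local stress:
    high-stress zones receive more seeds (tighter, more compact cells),
    low-stress zones keep sparse seeds (larger openings).
    Rejection sampling: a candidate survives with probability equal to stress."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1) = bounds
    pts = []
    for _ in range(n_candidates):
        x = rng.uniform(x0, x1)
        y = rng.uniform(y0, y1)
        if rng.random() < stress_at(x, y):
            pts.append((x, y))
    return pts

# Hypothetical stress field: high at the left edge, decaying to zero at x = 1.
field = lambda x, y: max(0.0, 1.0 - x)
pts = porosity_seeds(field, ((0.0, 2.0), (0.0, 1.0)))
left = sum(1 for x, y in pts if x < 1.0)
right = len(pts) - left
```

The seed set would then be handed to a standard Voronoi routine to produce the actual cell pattern.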

Fig. 3.5 : Ecological Process (Author)

3.6 Emerging Points via Cellular Automata

3.6.1 Environmental Encoding within Voxel Matrix

As part of the volumetric environmental analysis, a three-dimensional voxel matrix is established over the intervention site, differentiating the space into uniform data voxels. Each voxel functions as a spatial information unit capable of storing multiple ecological parameters simultaneously. Through a series of environmental simulations, such as solar radiation mapping, slope inclination, vegetation type and pollution exposure indices, each voxel is assigned a composite ecological value.

The resulting dataset forms a spatial field with a large amount of data, in which environmental conditions are accurately expressed at the voxel level. This encoded volumetric grid allows for the computational evaluation of spatial differentiation and becomes the operational basis for the subsequent implementation of growth algorithms and ecological prioritisation strategies.
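The composite ecological value per voxel can be sketched as a weighted sum of min-max-normalised layers; the layer names and weights below are illustrative assumptions:

```python
import numpy as np

def composite_field(layers, weights):
    """Combine per-voxel environmental layers into one composite ecological
    value per voxel: each layer is min-max normalised, then weighted-summed."""
    total = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        a = layers[name].astype(float)
        rng = a.max() - a.min()
        total += w * ((a - a.min()) / rng if rng else a * 0.0)
    return total

# A 4x4x4 voxel matrix with two hypothetical layers.
shape = (4, 4, 4)
layers = {
    "radiation": np.random.default_rng(0).uniform(0, 800, shape),
    "pollution": np.random.default_rng(1).uniform(0, 1, shape),
}
score = composite_field(layers, {"radiation": 0.6, "pollution": 0.4})
```

With weights summing to one, the composite score stays in the 0-1 range, making voxels directly comparable.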

3.6.2 Cellular Automata for Ecological Growth

To simulate emerging spatial relevance, a custom-designed cellular automaton (CA) model is implemented, operating on the data encoded per voxel. The simulation begins by activating a series of seed voxels, selected by intervention priority according to the environmental values identified in the previous phase. These serve as initial triggers from which spatial propagation develops.

The CA works by iteratively evaluating the state of each voxel in relation to its adjacent neighbours, applying rule-based logic to determine persistence, elimination, or reproduction. These rules are based on ecological priorities: voxels surrounded by other high-yield units are more likely to remain active, while isolated or low-value voxels are suppressed. This logic mirrors biological systems in which local conditions influence the viability of growth.

Over multiple simulation cycles, the algorithm filters and refines a dynamic population of “voxels of interest”, spatial units repeatedly validated through environmental data and contextual consistency. These voxels represent areas of optimal ecological significance, where conditions favour future activation. The method allows for the emergence of a non-linear, self-organising spatial logic based on ecological performance and computational growth principles.
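A minimal sketch of such rule-based voxel logic, assuming 6-connected neighbourhoods and illustrative thresholds (note that `np.roll` wraps at the domain edges, which a production version would avoid by padding):

```python
import numpy as np

def ca_step(active, value, birth_thresh=3, survive_thresh=2, value_min=0.4):
    """One CA iteration over a 3D boolean voxel field: a voxel survives if
    enough active neighbours support it AND its own ecological value is high
    enough; an inactive voxel activates when surrounded by a dense cluster."""
    n = np.zeros_like(active, dtype=int)
    # Count the 6-connected active neighbours by shifting along each axis.
    for axis in range(3):
        for shift in (1, -1):
            n += np.roll(active, shift, axis=axis)
    born = (~active.astype(bool)) & (n >= birth_thresh) & (value >= value_min)
    survive = active.astype(bool) & (n >= survive_thresh) & (value >= value_min)
    return born | survive

rng = np.random.default_rng(2)
value = rng.uniform(0, 1, (8, 8, 8))
active = value > 0.95            # seed voxels: highest-priority units
for _ in range(5):
    active = ca_step(active, value)
```

Isolated voxels are suppressed (too few supporting neighbours), while dense high-value clusters persist, which is the filtering behaviour the method relies on.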

Fig. 3.6 : CA Process (Author)

3.7 Volumetric Logic: Data-to-Mass Encoding

3.7.1 Finite Element Analysis for the Geometry

This phase begins by defining a volumetric envelope that is evaluated using Finite Element Analysis (FEA) to simulate material behaviour under load conditions, extracting the primary stress lines. As in the earlier analysis of ground-stabilising structures, the output of this step is the generation of principal stress trajectories across the geometry, which will be used in later experiments as performance-informed spatial attractors. These vectors act as directional fields, guiding the subsequent stages of spatial logic and volumetric growth.

3.7.2 Agent-Based Growth Simulation (Culebra)

Once the stress field has been established, agent-based simulations are implemented to simulate the motion of autonomous agents operating around the defined volume. Unlike previous experiments where agents responded to points of attraction or repulsion, in this phase, the stress curves themselves act as continuous attractor paths. These guide the agents’ movement, resulting in complex trajectories that reflect both structural performance and spatial potential. The accumulated paths define a series of spatial lines that later serve as scaffolds for material growth.

3.7.3 Materialisation Using Volumetric Modelling Tools

The paths generated by the agents are then translated into physical mass using Dendro, a voxel-based volumetric modelling plugin. This step transforms behavioural simulations into inhabitable geometry, where mass is accreted layer by layer, responding to the orientation, density, and branching complexity of the agent trajectories. The voxelisation process allows for variations in thickness and porosity, depending on the local density of path intersections or directional overlap. This produces a spatial structure that is both materially expressive and geometrically adaptive, with emergent spatial logics. The resulting form is not predefined but discovered through an interplay between data, simulation, and material behaviour.
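The accumulation of agent paths into a density field, the basic data-to-mass move that the voxelisation performs, can be sketched as a simple point-count per voxel; this is a deliberate simplification of Dendro's volumetric modelling, with hypothetical path data:

```python
import numpy as np

def voxelise_paths(paths, shape, cell=1.0):
    """Accumulate agent trajectories into a voxel density field: where many
    paths overlap, density (and hence accreted mass thickness) increases."""
    density = np.zeros(shape)
    for path in paths:
        for x, y, z in path:
            i, j, k = int(x // cell), int(y // cell), int(z // cell)
            if 0 <= i < shape[0] and 0 <= j < shape[1] and 0 <= k < shape[2]:
                density[i, j, k] += 1
    return density

# Two hypothetical trajectories sharing one segment.
p1 = [(0.5, 0.5, 0.5), (1.5, 0.5, 0.5), (2.5, 0.5, 0.5)]
p2 = [(0.5, 2.5, 0.5), (1.5, 1.5, 0.5), (1.5, 0.5, 0.5)]
den = voxelise_paths([p1, p2], (4, 4, 4))
```

The shared voxel accumulates a density of 2, so thresholding or offsetting this field yields thicker mass exactly where trajectories converge.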

Fig. 3.7 : Volume process (Author)

iv research development

4.1 Environmental Encoding and Corridor Generation

4.1.1 5×5 Patch Framework

4.1.2 Slope Analysis

4.1.3 Subsidence Mapping

4.1.4 Rock and Ground Conditions

4.1.5 Attraction–Repulsion Logic

4.1.6 Agent-Based Simulation

4.1.7 Evolutionary Optimisation

4.2 Ecological Seeding and Vegetation Logic

4.2.1 Remediation Process

4.2.2 Species Categorisation

4.2.3 Solar Radiation Analysis

4.2.4 Water Flow and Accumulation

4.2.5 Carbon Density Mapping

4.2.6 Species Grouping and Planting Logic

4.2.7 Ecological Corridor

4.2.8 Temporal Growth Phases (5–10–15–20 Years)

4.3 Site Clustering: Territory to Site

4.3.1 Stabilisation Logic

4.3.2 Risk Value Thresholds

4.3.3 Maximum Slope Constraints

4.3.4 Data Grouping and Clustering

4.4 From Data to Emergence: 3D Encoded Growth

4.4.1 Selection of the Stress Site

4.4.2 The Layered Terrain

4.4.3 Volume as a Question: Where and How Much Can We Build?

4.4.4 Filtering the Volume

4.4.5 From Volume to Voxel

4.4.6 Cellular Ecologies: A Data-Driven Emergence

4.4.7 Points of Interest: From Voxels to Decision

4.5 The Material Performance

4.5.1 Introduction

4.5.2 From Mining Waste to Printable Matter

4.5.3 Printability

4.5.4 Compression Test

4.5.5 Shear Test

4.5.6 Optimal Material

4.5.7 Life Cycle Analysis

4.6 Stabilising Wall Intelligence

4.6.1 From Terrain to Surface: Defining the Base Geometry

4.6.2 Defining Structural Inputs: Piles and Columns

4.6.3 Material Behaviour Simulation: Finite Element Deformation Analysis

4.6.4 Structural Intelligence Mapping: Stress Flow Lines

4.6.5 Multi-Objective Optimisation: Balancing Structure

4.7 Data and Ecological Mass Formation

4.7.1 Simulating Agents from Support Points

4.7.2 Fields of Attraction and Repulsion

4.7.3 Agent-Based Volumetric Form

4.7.4 From Agents to Envelopes: Voxel-Based Mass Formation

4.7.5 Structural Intelligence: Reading the Mass through Stress Mapping

In highly unstable terrains like the Jharia mining region, design cannot begin with form, but with data. The extreme site conditions, including open-pit mines, waste rock piles, and underground combustion zones, present a hostile and constantly changing landscape, resistant to traditional intervention. These challenges require a methodology that interprets environmental risk not as an obstacle to design, but as its very source.

This chapter describes the development of a data-driven system capable of translating volatile environmental information into spatial and operational intelligence. A central aspect of this process is the disaggregation of the terrain into a network of fine-grained patches, where each 5×5 meter unit serves as a spatial container of coded environmental attributes. These patches, when analysed using computational tools, produce a landscape that can be simulated, queried, and ultimately transformed.

The experiments presented throughout this chapter operate at various scales. At a territorial scale, agent-based simulations respond to the attraction-repulsion fields derived from slope, subsidence, and geotechnical instability. At an ecological scale, layers such as solar radiation, carbon sequestration, and moisture accumulation begin to guide species selection and planting strategies. And at a site scale, clustering techniques and volumetric logic allow for the identification of areas with structural or programmatic potential. Instead of a top-down design process, the result is a field of latent possibilities, where geometry, material articulation, ecological rhythms, and spatial hierarchies are configured from environmental data. Within this framework, architecture is not imposed but rather discovered through simulation, translation, and iterative interaction with the site.

4.1 Environmental Encoding and Corridor Generation

4.1.1 5×5 Patch Framework

To begin translating the complex mining terrain of Jharia, comprising both an abandoned mine shaft and a spoil heap, into a computationally manageable system, it was subdivided into a grid of 5×5 meter plots, as indicated in the methodology. This grid allowed for the precise assignment of environmental and geological data to each unit. The mine shaft covers approximately 900×400 meters with a depth of 80 meters, while the spoil heap covers 400×400 meters with a height of 60 meters. These dimensions underscore the scale and magnitude of the intervention and the need to generate localised data for informed decision-making. Each plot functions as a spatial pixel, integrating specific information related to slope, subsidence, material composition, and altitude.

Fig. 4.1 : Site Patches (Author)
Fig. 4.2 : Environmental Data on Patches (Author)

4.1.2 Slope Analysis

With the patch framework established, a slope map was generated using Digital Elevation Model (DEM) data provided by the United States Geological Survey (USGS). This allowed the identification of high-risk areas prone to erosion or collapse, which are common in steep mining landscapes. Each patch was algorithmically assigned a slope value and encoded accordingly. This map later serves as a foundation for simulations and optimisation strategies, especially regarding walkability, stability, and potential for architectural anchoring.
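The per-patch slope computation from a DEM raster can be sketched with a finite-difference gradient; the plane-shaped demo DEM is of course illustrative:

```python
import numpy as np

def slope_degrees(dem, cell_size=5.0):
    """Per-cell slope in degrees from a DEM raster: the gradient magnitude
    of the elevation surface, sampled at the 5 m patch resolution."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical DEM: a plane rising 1 m per 5 m cell in x -> uniform slope.
x = np.arange(6) * 1.0
dem = np.tile(x, (6, 1))
s = slope_degrees(dem, cell_size=5.0)
```

Each resulting value would be written back into the corresponding patch's attribute set.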

4.1.3 Subsidence Mapping

InSAR satellite data was used to analyse long-term deformation patterns of the ground surface. These subtle movements—caused by underground coal fires and residual voids from past mining—are often invisible to the naked eye. Subsidence values were assigned to each patch to indicate ground instability, and this information became critical in assessing the structural and ecological viability of intervention zones. The subsidence mapping was integrated with the patch grid, producing a responsive layer of ground behaviour that became one of the core criteria in risk calculation.

4.1.4 Rock and Ground Conditions

A composite layer representing geological and soil conditions was generated through the analysis of geological survey maps, in-situ soil reports, and high-resolution satellite imagery. Each plot was classified according to soil type: bedrock, loose soil, or mixed composition. Rocky areas were identified as having lower viability for intervention due to the complexity of construction and poor root penetration. While geologically stable, they present logistical and ecological limitations.

Fig. 4.3 : Attractors-Repulsions (Author)

4.1.5 Attraction–Repulsion Logic

Prior to proceeding with agent simulations on the ground, an attraction–repulsion logic was applied to the aforementioned patch dataset to guide agent behaviour. This allowed simulations in which movement across the terrain responded directly to the coded ecological and geological information. Two types of attractor points were defined: first, areas of steep slope and subsidence, identified as requiring urgent attention and stabilisation; and second, the coordinates of adjacent settlements, included as social attractors, reinforcing the project's objective of ecological and social reconnection. Conversely, rocky patches acted as repulsion zones, discouraging passage or intervention due to their material limitations. This dual logic guided the emerging network resulting from agent movement across the terrain.

4.1.6 Agent-Based Simulation

Three behavioural models were combined in the agent simulation framework using Culebra:

Stigmergy allowed agents to deposit and follow virtual traces, encouraging the collective reinforcement of efficient paths.

Flocking simulated decentralised coordination among agents, ensuring coherent group movement across terrain.

Weaving Wandering balanced exploration and convergence, creating a mesh-like path network that maximised spatial coverage while enabling consolidation of directionality.

These behaviours interacted with the attraction–repulsion field to produce a self-organising system. The results were not pre-imposed geometries but emergent performative corridors, routes shaped by environmental pressure, data urgencies, and behavioural logic.
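The stigmergic reinforcement of paths can be sketched for a single agent on a pheromone grid; deposit rate, evaporation rate, and the exploration floor are illustrative parameters, not the Culebra settings:

```python
import numpy as np

def stigmergy_walk(pheromone, start, steps, deposit=1.0, evaporation=0.05, seed=3):
    """One agent walking a grid under stigmergic rules: it prefers the
    neighbouring cell with the strongest trace, deposits pheromone as it
    moves, and the whole field evaporates slightly each step, so frequently
    used routes are collectively reinforced into consolidated corridors."""
    rng = np.random.default_rng(seed)
    pos = start
    h, w = pheromone.shape
    for _ in range(steps):
        pheromone[pos] += deposit
        moves = [(pos[0] + di, pos[1] + dj)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if (di, dj) != (0, 0)
                 and 0 <= pos[0] + di < h and 0 <= pos[1] + dj < w]
        weights = np.array([pheromone[m] + 0.1 for m in moves])  # 0.1: exploration floor
        pos = moves[rng.choice(len(moves), p=weights / weights.sum())]
        pheromone *= (1.0 - evaporation)
    return pheromone, pos

field = np.zeros((20, 20))
field, end = stigmergy_walk(field, (10, 10), steps=50)
```

With many agents sharing one pheromone field, the traces self-reinforce into the mesh-like corridor network described above.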

Fig. 4.4 : Culebra on Site (Author)

4.1.7 Evolutionary Optimisation

The thousands of potential routes produced by agent-based modelling were fed into a Multi-Objective Optimisation (MOO) process. The goal is to transparently select, in a data-driven manner, the solution that best aligns with the project's core objectives. This allowed the team to evaluate solutions against six simultaneous criteria:

Maximise Terrain Stability: How many high-risk areas (attractors) the corridor passes through.

Maximise Terrace Area: The area of flat land available for agriculture or construction after the corridor is created.

Minimise Corridor Slope: A gentle slope for ease of pedestrian access and maintenance.

Minimise Total Length: The shortest path to reduce material use and construction costs.

Minimum and Maximum Travel Distance: To prevent agents from stagnating in one area, they were required to travel at least 40m.

The simulation explored 800 distinct solutions, balancing genetic diversity and convergence speed through controlled mutation and crossover rates. The Multi-Objective Optimisation identified the most efficient configurations through successive generations of solution refinement.
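The selection principle behind the MOO (run in the thesis with Wallacei) is Pareto non-dominance. A minimal sketch of that test, with all objectives recast as minimisation as the FO labels (e.g. 1/Stability, 1/Terrace Area) indicate:

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b in every
    (minimised) objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors: no other solution
    beats them on every criterion simultaneously."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

Over successive generations, the evolutionary solver breeds new candidates from this front, which is why trade-off objectives (such as FO3 and FO4 here) can be sacrificed while others improve.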

Analysis of the Evolution Process (Graph Interpretation)

Standard Deviation / KDE Graph: This graph illustrates the performance distribution of the solution population per generation. A leftward shift of the curve’s centre indicates performance improvement, a narrowing width signifies convergence, and a widening width suggests exploration is being maintained.

FO1 (1/Stability), FO5 (Total Length), FO6 (Slope): The centre of the distribution shifts distinctly to the left, showing clear performance improvement. Simultaneously, the distributions’ width is maintained or slightly expands, indicating that the algorithm is intentionally preserving the diversity of the solution set while improving performance.

FO2 (1/Terrace Area): A slight leftward shift and maintained width show that improvement is slow but diversity is preserved.

FO3 & FO4 (1/Travel Distance): The distribution centre shows little to no movement, or even a slight rightward shift. This implies that performance improvement for these two objectives is delayed or being sacrificed for other objectives, and the algorithm continues to explore solutions with diverse travel strategies.

Fig.4.6: Parallel Coordinate Plot of the Pareto optimal solution set resulting from the optimisation
Fig.4.7: Wallacei optimisation process showing Standard Deviation/KDE, Fitness Values, and SD/Mean Trendlines

Fig. 4.6 : Iterations Cluster
Fig. 4.8 : Iterations Cluster 04
Fig. 4.9 : Best Performing Iterations

4.1.8 Optimal Solutions

From this set of optimised solutions, Cluster 1 was selected for its excellent performance in the stabilisation metrics. Within the remediation process, soil stability is the most urgent priority, hence the choice of Cluster 1. Although its path length and coverage area were not the most compact, it was selected for its exceptional ability to intersect the most critical, high-risk zones, validating the resulting iterations as a resilience-focused strategy. The chosen iteration is not simply a geometric connection of critical points, but a solution that developed organically from the simulation of ecological behaviour and urgency.

4.1.9 Risk Corridor

Following agent-based simulation and multi-objective optimization, several trajectories emerged for each simulation within the intervention area. These trajectories, instead of forming a single line, manifested as a network of emergent paths, shaped by attraction–repulsion dynamics and informed by environmental urgency. Some were compact clusters of converging agents, indicating areas of strong directional consensus, while others were more isolated and diffuse.

To move toward the architectural intervention, the next step was to materialize this abstract network. Routes with higher agent density—where multiple trajectories overlapped—were merged to create denser and more coherent corridors. Conversely, weak or isolated routes were discarded, and the data was filtered to utilize the densest sets of trajectories. The refined network was then projected onto the grid of terrain patches, and all patches intersected by this optimized system were extracted for further analysis.

From these selected patches we defined what we have termed the Risk Corridor: a set of spatially continuous linear data that functions not only as a route but also as the operational backbone of the design proposal. This corridor inherits the data intelligence of each selected patch, enabling a parametric risk assessment for every patch along it.

Each segment of the Risk Corridor was assigned a revised risk score using the following formula:

Risk = S + U + A − 0.3 × R + 1.2 × (S × U)

Where:

S = Slope

U = Subsidence

A = Altitude

R = Presence of rock (acts as a risk-reducing factor)

1.2 × (S × U) = Interaction term that amplifies risk in areas where slope and subsidence coexist.

All variables were normalized between 0 and 1. The equation combines linear and interaction-based weights to reflect the individual and combined effects of environmental risks. For example, rock formation (R), although physically obstructive, contributes to stability and therefore has a negative weight. Meanwhile, the (S × U) term emphasizes areas of instability where erosion and collapse are likely to coincide. Based on the calculated risk values, the corridor was classified into three intervention zones:

Red Zone (Risk ≥ 0.7): These are the most hazardous areas, requiring urgent structural measures. Bioreceptive retaining walls and other stabilization systems, developed in later stages, are concentrated here to prevent further collapses and act as ecological and infrastructural anchors.

Yellow Zone (0.4 ≤ Risk < 0.7): These transition areas balance moderate environmental risk with architectural opportunities. They support hybrid programs such as terraced agriculture, lightweight structures, and social infrastructure integrated into stabilizing elements.

Green Zone (Risk < 0.4): Low-risk segments where intervention is minimal and focuses on ecological regeneration. These areas prioritize planting and vegetative restoration, functioning as biodiversity corridors and buffer zones for the more active areas.
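The scoring formula and zone thresholds above can be transcribed directly into code; the example inputs are hypothetical values, not measurements from the site.

```python
def risk_score(s, u, a, r):
    """Composite risk per corridor segment; all inputs normalised to [0, 1].
    Rock presence (r) reduces risk, while the s*u interaction term amplifies
    patches where slope and subsidence coexist."""
    return s + u + a - 0.3 * r + 1.2 * (s * u)

def zone(risk):
    """Classify a segment into the three intervention zones."""
    if risk >= 0.7:
        return "red"      # urgent structural stabilisation
    if risk >= 0.4:
        return "yellow"   # hybrid programmes, moderate risk
    return "green"        # minimal intervention, ecological regeneration

# Hypothetical steep, subsiding, mid-altitude patch with no rock:
# risk_score(0.8, 0.7, 0.5, 0.0) = 0.8 + 0.7 + 0.5 + 1.2*0.56 = 2.672 -> "red"
```

Note that, as written, the raw score for extreme patches can exceed 1, so only the ordering of segments matters for the zone cut-offs.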

In this way, the Risk Corridor becomes more than just a path: it is a performative field where data determines not only the location but also the intensity and type of intervention. Each segment becomes a strategic opportunity for spatial, ecological, and infrastructural planning. This integration of simulation, filtering, and risk mapping ensures that resources are directed where they are most needed, transforming the terrain from a hazard landscape into a framework for remediation and adaptive reuse.

Fig. 4.10 : Risk Corridor on Terrain

4.2 Ecological Seeding and Vegetation Logic

4.2.1 Remediation Process

In post-industrial or mining-damaged landscapes, the challenge of soil restoration is not only ecological but also technical. One of the most effective and low-impact methods for regenerating these lands is phytoremediation, a biological technique that uses living plants to extract, neutralize, stabilize, or volatilize contaminants present in the soil or groundwater.

Phytoremediation works through a set of distinct, root-driven strategies. Each of these mechanisms fulfills a specific environmental function:

Phytoextraction occurs when contaminants, especially heavy metals, are absorbed by plant roots and translocated to surface tissues, such as stems and leaves. Over time, these plants can be harvested, effectively removing the contaminants from the site.

Phytodegradation involves the breakdown of organic contaminants by enzymes secreted by the plant. These metabolic processes act both within the plant tissue and in the surrounding soil.

Phytostabilization prevents the migration of contaminants by immobilizing toxic substances in the root zone. Instead of extracting contaminants, plants bind them to the soil, preventing their spread.

Phytovolatilization converts some contaminants, such as selenium or mercury compounds, into gaseous forms that are then released into the atmosphere through the plant’s leaves in a less harmful state.

Rhizofiltration focuses on substances dissolved in water. The roots of certain plants can absorb contaminants from groundwater or surface runoff.

Rhizodegradation is facilitated by microbial communities within the soil surrounding the plant roots. These microbes break down organic contaminants into less toxic or inert forms, contributing to the biochemical recovery of the soil.

As visualized in the diagram, the interaction between roots and their associated microorganisms supports each of these functions. The idea is to introduce an effective soil remediation process that minimizes site disturbance while generating biomass that can be reused or carefully disposed of depending on contamination levels. To ensure the success of this process, the plants must be well-adapted to the site conditions. The remediation species must not only tolerate the toxins but also survive in the site’s microclimate. Factors such as sun exposure, water availability, and local soil composition directly influence plant viability; therefore, these variables are investigated in the following subchapters.

(1) https://www.nature.com/scitable/knowledge/library/phytoremediation-17359669

(2) https://www.mdpi.com/2079-7737/14/1/23

Fig. 4.11 : Remediation Process (Author)

4.2.2 Species Categorisation

To address the diverse ecological and structural conditions of the terrain, all plant species are categorised into four functional groups, each defined by its primary contribution to the recovery of the site: stabilisation, remediation, soil improvement, and biodiversity support. This classification facilitates a data-informed approach, ensuring each 5×5 m patch receives the right mix of ecological planting.

Stabilising Plants

Selected for their capacity to reinforce terrain stability, these species possess deep or fibrous root systems that anchor loose soils and reduce erosion risks. Acting as living meshworks, they are deployed in zones with high slope gradients or instability. Their root density and rapid growth make them ideal for initiating structural restoration. Studies confirm that root systems can account for up to 50% of sediment retention on degraded slopes (1).

(1) https://www.mdpi.com/2073-4441/17/16/2411

(2) https://www.mdpi.com/1999-4907/13/1/63

(3) https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2664.13625

Phytoremediation Plants

These species are selected based on their ability to extract, degrade, or immobilise pollutants. Hyperaccumulators absorb heavy metals, while others stimulate microbial degradation through rhizosphere interactions. These are introduced in patches with elevated contamination, as identified through carbon and risk mapping. Over time, these plants actively reduce pollutant concentration while contributing organic biomass to the soil (1)(2).

Fig. 4.12 : Stabilising & Remediation plants illustration (Author)

Soil Enriching Plants

This group focuses on restoring soil fertility through nitrogen fixation, biomass production, and organic matter cycling. Legumes and leafy species are prioritised in compacted or nutrient-poor patches, where they initiate substrate transformation. Their integration accelerates topsoil formation and microbial colonisation, supporting long-term vegetation succession (2).

Biodiversity Plants

Primarily native species that attract pollinators, birds, and insects, these plants restore ecological complexity and establish trophic networks. Though they may not directly stabilise or detoxify, their role is foundational for ecosystem resilience. High-diversity plantings are correlated with more robust recovery trajectories in disturbed environments (3). These species are applied across all zones to avoid monocultures and enhance overall ecosystem health. It is important to note that many plants serve multiple roles. However, this classification allows for structured ecological programming, enabling each patch to receive a custom species mix aligned with local data inputs.

Fig. 4.13 : Soil Enriching & Biodiversity plants illustration (Author)
Fig. 4.14 : Grouping of plants based on Climate data (Author)

4.2.3 Solar Radiation Analysis

Sunlight availability is a key environmental variable determining plant performance, influencing growth rates, water loss, and stress tolerance. A radiation map was generated from a digital elevation model, simulating seasonal solar exposure for each 5×5 m plot. Since the site is open and terraced by mining excavation, the radiation profile exhibits steep gradients differentiated by the stepped terrain. This radiation layer guided species allocation: high-radiation areas were assigned xerophytic species capable of withstanding direct sunlight and drought, while shade-tolerant species, such as young canopy trees or shrubs, were reserved for the slopes with lower insolation.

4.2.4 Water Flow and Accumulation

Jharia experiences heavy rainfall between May and October. Therefore, the hydrological behavior of the ground was evaluated: the movement of water down the slopes was simulated, generating a map that identifies both runoff corridors and areas of potential water accumulation. In response, moisture-adapted species are deployed according to local conditions. Areas of high accumulation receive wetland-suitable plants, such as rushes or grasses, which can stabilize and filter the flowing water. Conversely, dry ridges are planted with drought-tolerant species with deep or fibrous root systems to improve water absorption and prevent soil loss. The hydrological layer also informs the erosion control strategy: stabilizing species are prioritized in areas with concentrated runoff to reinforce the soil and reduce sediment production, since vegetation cover in high-flow areas prevents degradation of the topsoil.
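The downslope water simulation can be illustrated with a simplified flow-accumulation routine: each cell sends its flow to its lowest neighbour, and the count of upslope contributors marks runoff corridors and accumulation pockets. This is a stand-in sketch of the idea, not the Grasshopper simulation used in the thesis.

```python
def flow_accumulation(elev):
    """Single-direction runoff sketch on an elevation grid (indexed [y][x]):
    every cell routes its flow to its lowest strictly-lower 8-neighbour;
    acc counts how many cells (including itself) drain through each cell."""
    h, w = len(elev), len(elev[0])
    acc = [[1] * w for _ in range(h)]               # each cell contributes itself
    # process cells from highest to lowest so upstream flow arrives first
    order = sorted(((elev[y][x], x, y) for y in range(h) for x in range(w)),
                   reverse=True)
    for z, x, y in order:
        nbrs = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < w and 0 <= y + dy < h]
        lower = [(elev[ny][nx], nx, ny) for nx, ny in nbrs if elev[ny][nx] < z]
        if lower:                                    # pits keep their water
            _, nx, ny = min(lower)                   # deterministic tie-break
            acc[ny][nx] += acc[y][x]
    return acc
```

Cells with high accumulation values correspond to the runoff corridors where stabilising, water-filtering species are prioritised; pits with no lower neighbour mark potential accumulation zones.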

4.2.5 Coal Density Mapping

To complement the climatic and hydrological layers, a coal density map, tracing the mineral extracted by mining activity, was developed to assess the degree of soil degradation and contamination across the site. Using remote sensing data and soil sampling, coal content was estimated plot by plot. This layer visualizes the relative concentration of organic matter, with extremely high values corresponding to barren areas, primarily waste rock piles and exposed mine tailings, where the topsoil has been lost or severely disturbed.

While high organic carbon content can indicate nutrient deficiency, in this context it also serves as an indicator of the severity of contamination. Areas with unusually high carbon content, especially near abandoned pits or subsidence sites, are often correlated with residual hydrocarbons, buried waste, or chemically altered soils—signs of chronic contamination. Therefore, this map helps distinguish between barren areas and truly toxic areas requiring remediation.

Instead of allocating stabilization plants, these high-risk areas are prioritized for phytoremediation species, which are capable of extracting, neutralizing, or immobilizing contaminants. By overlaying the carbon density map with the previous layers, we identify intersections where contamination and instability exist, requiring a dual response. This layer thus helps to precisely focus detoxification efforts, acting as a diagnostic tool for deploying phytoremediators where they are most needed (2).

(1) https://www.mdpi.com/2073-4441/17/16/2411

(2) https://www.mdpi.com/1999-4907/13/1/63

Fig. 4.14 : Environmental Data on Terrain for Plant Growth (Author)

4.2.6 Species Grouping and Planting Logic

Once the stabilisation corridor has been defined, both spatially and functionally, the next step involves populating it with intelligent, ecologically responsive vegetation. Given the complexity of the terrain and the high resolution of the 5×5 m patch system, manual planting design becomes impractical. To resolve this, a custom C#-based planting algorithm was developed and integrated into the Grasshopper environment. Its purpose is to operationalise the guild strategy introduced earlier, turning environmental data into precise planting configurations.

The adjacent diagram visually explains how this logic unfolds. From the large-scale identification of the ecological corridor (bottom), the terrain is subdivided into 5×5 m patches. Each of these is further split into 3×3 planting cells, resulting in 1.6×1.6 m subplots. These become the smallest operational units, where species types are intelligently assigned based on the localised environmental conditions.

The algorithm operates through three key stages:

Stage 1: Suitability Filtering

In this phase, each patch within the corridor is read for its environmental data: slope angle, risk value, soil contamination, and presence of bedrock, as derived from the previously established datasets. Each species from the Jharia ecological database has a set of tolerances (e.g. maximum slope, pollution resistance thresholds). The algorithm cross-references patch data with these parameters and filters out any species that would not survive those conditions.

As a result, every patch is reduced from a large plant database to a customized shortlist of viable species, creating a foundation for precise planting.

Stage 2: Adaptive Guild Allocation

After filtering, the script evaluates the dominant environmental challenge in each patch. This may be high instability (e.g. steep slopes), contamination (e.g. heavy metal hotspots), or poor fertility (e.g. low carbon soils). Based on this diagnosis, the algorithm generates a guild recipe by weighting plant functions.

For instance:

If erosion is the major concern, over 60% of the 3×3 planting cells may be assigned to Stabilising Plants.

If contamination dominates, Phytoremediation Plants form the bulk of the patch.

If fertility is low, Soil Enriching Plants are prioritised. Each patch therefore receives a bespoke functional guild mix, engineered to respond to its primary constraint, while still integrating secondary support functions.

Stage 3: ‘Diversity First, Dominance Supplemented’ Assembly

Once guild ratios are defined, the algorithm selects species for each planting slot using a diversity-first logic: it fills as many of the 9 cells as possible with distinct species, aiming for high genetic and functional diversity to improve resilience against diseases, pests, and climatic variability.

If the number of viable species is insufficient to fill the patch, a dominance-supplemented rule is applied—filling remaining slots with the strongest available species within that guild, ensuring functional continuity and sufficient biomass density.
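The three stages can be condensed into a short sketch. The original algorithm is written in C# inside Grasshopper; the Python below approximates its logic with a placeholder species table whose tolerance values and guild weighting are illustrative assumptions, not the Jharia database.

```python
# Hypothetical species records: (name, guild, max_slope_deg, pollution_tolerance)
SPECIES = [
    ("Vetiveria zizanioides", "stabilising",      45, 0.8),
    ("Acacia auriculiformis", "stabilising",      35, 0.7),
    ("Ricinus communis",      "phytoremediation", 30, 0.9),
    ("Sesbania sesban",       "soil_enriching",   25, 0.6),
    ("Butea monosperma",      "biodiversity",     30, 0.5),
]

def plant_patch(slope, pollution, dominant_issue, cells=9):
    """Stages 1-3 for one 5x5 m patch split into a 3x3 grid of planting cells."""
    # Stage 1: suitability filtering against patch conditions
    viable = [s for s in SPECIES if slope <= s[2] and pollution <= s[3]]
    # Stage 2: adaptive guild allocation - the dominant constraint's guild
    # gets roughly 60% of the cells (illustrative weighting)
    lead_guild = {"erosion": "stabilising",
                  "contamination": "phytoremediation",
                  "fertility": "soil_enriching"}[dominant_issue]
    lead = [s for s in viable if s[1] == lead_guild]
    rest = [s for s in viable if s[1] != lead_guild]
    quota = round(cells * 0.6)
    # Stage 3: diversity first (distinct species), dominance supplemented
    picks = lead[:quota] + rest[:cells - min(quota, len(lead))]
    while len(picks) < cells and lead:
        picks.append(lead[0])     # top up with the guild's strongest species
    return [name for name, *_ in picks[:cells]]
```

Run for an erosion-dominated patch, the shortlist fills the nine cells with every viable species first, then repeats the lead stabiliser to maintain biomass density, mirroring the "diversity first, dominance supplemented" rule.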


4.2.7 Ecological corridor

The ecological corridor is not a collection of arbitrary paths, but a landscape created by the land itself: every contour, shadow, and scar is transformed into code, and every line of code comes to life. What began as a fractured, hostile, and unstable terrain after mining was gradually decoded through environmental sensing and computational logic. The data ceased to be abstract; it became the very ink with which the territory was redrawn. Every planting decision within this corridor is not the product of aesthetic speculation, but of strategic analysis. Sunlight traced the boundaries of the arid zones; the flow of water shaped retention patches; carbon gaps summoned regenerators; and subsurface risks anchored stabilizers. Layer by layer, the land became legible: its suffering mapped, its needs prioritized, and its potential activated. Thus, the intervention is not imposed; it informs, translates, and grows in dialogue with the place. In this way, the corridor becomes more than just a spatial solution; it becomes a living system for ecological repair, a model of intelligent design based on responsiveness, and a testament to how data, applied intelligently, can heal this forgotten territory.

References: (1) Liu, R., et al. “Designing for Successional Canopy Layers in Restored Ecosystems.” Ecological Modelling, 2020. (2) Glick, S., et al. “Vegetation Structure as a Driver of Remediation Effectiveness on Post-Mine Lands.” Restoration Ecology, 2019.

4.2.8 Temporal Growth Phases (5–10–15–20 Years)

Since the proposed ecological corridor is not only an environmental infrastructure but also a site for future architectural integration, it became essential to understand how the vegetation will evolve volumetrically over time. The aim is to create an architecture that can coexist with the growing biomass and even emerge from it. Therefore, the spatial design cannot be based solely on the location of plants in plan view (2D) but must also respond to canopy expansion, root depth, and the changing three-dimensional composition of the site over decades.

To address this, we developed a plant growth catalog that graphically represents the vertical and horizontal development of selected species over 5, 10, 15, and 20 years. This study focused on the most relevant species from each previously defined functional group: stabilizing plants, soil-enriching plants, phytoremediating plants, and biodiversity-enhancing plants. The diagram illustrates the projected canopy height and width along with the root structure, creating a visual index of spatial occupation over time.

This analysis revealed that species classified under Soil Stabilization and Enrichment tend to develop the largest volumes, both above and below ground. Understanding the growth patterns of these plants, such as Alstonia scholaris, Gliricidia sepium, and Acacia auriculiformis, allows us to simulate where architectural inserts could be placed, shaded, or adapted to this vegetation mass over time (1).

This growth logic will serve as a parameter in future generative models that simulate how architecture could be integrated into or emerge from the vegetation volumes. This catalog thus functions as a predictive tool, aligning long-term ecological performance with volumetric design criteria. The landscape becomes a co-designer, and the project begins to shift from the static creation of forms to the temporal shaping of space (2).

Fig. 4.17 : Species Chart (Author)
FAO (2016) – Trees, Forests and Air Pollution
CPCB India (2019) – Air Quality & Vegetation Mitigation Guidelines
ICFRE (2018) – Native Tree Species for Mine Reclamation in India
Fig. 4.18 : Corridor Data

4.3 Site Clustering: Territory to Site

4.3.1 Stabilisation Logic

The grouping process presented here is not an abstract zoning strategy, but rather a spatial logic based on the central premise of this project: stabilization through ecological and architectural interventions. This chapter describes how the previously extracted landscape-scale data is now translated into a buildable framework that identifies viable sites for future habitation. This phase connects the macro logic of land remediation with the micro decisions of site location, creating a coherent transition from the ecological system to the architectural foundation.

The defined patches will be classified in a way that proposes an adaptive pre-architectural framework that responds to the fragility of the land, environmental conditions, and social utility. Buildable zones will be determined based on how the data confirms both the need for and the feasibility of development. This logic is present in each subsequent phase, where stabilization is not only an outcome, but also a prerequisite for the implementation of buildable zones. The grouping mechanism, then, will seek to define where, when and how the interventions can materialize according to what the site requests.

4.3.2 Risk Value Thresholds

The previously established stabilization corridor offers a dynamic risk landscape, a constantly evolving field of urgency. This is no longer simply a spatial indicator, but a key determinant in the selection of architectural sites. High-risk zones (risk ≥ 0.6) are considered not as areas to be avoided, but as strategic sites for active intervention. In this new logic, architecture becomes a stabilizing infrastructure: walls, platforms, and structures that simultaneously support the terrain and house the program. Thus, buildable areas are not chosen arbitrarily, but are identified through the accumulation of risk data, functioning as the structural core where human activity can anchor the landscape.

4.3.3 Maximum Slope Constraints

Slope emerges as a secondary, but equally vital, indicator within this stabilization framework. Steep slopes correlate with both vulnerability to erosion and structural instability. By overlaying slope data onto the risk map, we identify complex risk zones requiring architectural intervention, where potential buildings take on a geotechnical role. The architectural response in these zones is twofold: habitation and reinforcement. Sloping terrains thus become opportunities for architectural integration, where the built form also acts as a soil retainer.

4.3.4 Data Grouping and Clustering

Based on the above, the concept of “Priority Intervention Zones” is introduced, where concentrations of instability offer the greatest potential impact for specific interventions. This stage evolves that logic toward a more refined grouping system, which seeks not only to stabilize risk but also to identify land suitable for the introduction of human settlements and productivity. The methodology consists of four stages:

Stage 1: Filtering of Buildable Land

The system retains only plots with slopes ≥10°, discarding flat and low-lying areas where water accumulation or instability could persist. Within this filtered set, the plots are divided into Red Zones (risk between 0.2 and 0.7), suitable for structural reinforcement.

Stage 2: Group Formation

Within the selected Red Zones, the plots are grouped into Foundation Groups according to strict parameters: elevation consistency (Z tolerance), to ensure realistic ground conditions, and spatial proximity. Priority is given to the highest-risk plots, aligning construction with the need for stabilization.

Stage 3: Assembly of Settlement Clusters

The Foundation Groups are aggregated using the K-Medoids algorithm. Because medoids are actual patch centres rather than abstract averages, each cluster centre remains an accessible point on the terrain, close to the resulting stabilization patches, guaranteeing that each cluster forms a connected settlement rather than isolated nodes.

Stage 4: Calculation of the Number of Settlements

Settlement density is calculated in direct proportion to the area of the local terraces (i.e., the negative area defined by the resulting corridor). The area is divided using a predefined metric (Area per Household) to obtain a population threshold for each group. This integrates productive capacity as a determining factor in spatial occupation. Thus, settlements are defined with a certain number of buildable plots that will become the emerging architecture.

The result of this data-driven clustering logic is a distributed network of architectural plots: buildable land units filtered, grouped, and validated using environmental intelligence. Each selected plot contains a narrative of the terrain: absorbed risk, stabilized slope, and projected productivity. It is important to note that this is also a turning point in scale. What has previously operated at a territorial level now shifts toward the localized precision of the architectural site. Each resulting plot is more than a static location; it is a patch of data within data. Each one is a combination of slope, carbon, moisture, solar exposure, and stabilization requirements. These environmental variables constitute the input conditions for the next phase, spatial experiments where matter, form, and growth are governed by ecological parameters, allowing architecture to emerge.
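Stages 3 and 4 of the clustering logic can be sketched with a minimal K-Medoids loop and the area-per-household division. The naive seeding, iteration cap, and 150 m² per-household metric below are placeholders, not values taken from the thesis.

```python
import math

def k_medoids(points, k, iters=20):
    """Minimal K-Medoids: cluster centres are always real patch centroids
    (never invented averages), so each centre stays on accessible ground."""
    medoids = list(points[:k])                    # naive seeding (placeholder)
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: math.dist(p, m))].append(p)
        # the member minimising total in-cluster distance becomes the medoid
        new = [min(ms, key=lambda c: sum(math.dist(c, q) for q in ms)) if ms else m
               for m, ms in clusters.items()]
        if sorted(new) == sorted(medoids):        # converged
            return medoids, clusters
        medoids = new
    return medoids, clusters

def households(terrace_area_m2, area_per_household_m2=150.0):
    """Stage 4: settlement capacity proportional to local terrace area.
    area_per_household_m2 is an assumed metric, not a thesis value."""
    return int(terrace_area_m2 // area_per_household_m2)
```

Feeding the Foundation Group centroids into `k_medoids` returns both the settlement anchors and their member plots; dividing each cluster's terrace area by the household metric then yields its population threshold.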

4.4 From Data to Emergence: 3D-Encoded Growth

4.4.1 Selection of the Stress Site

With all the corridor data layers and consolidated clustering strategies, a conscious decision was made to explore a specific cluster. It was chosen not for ease, but for intensity. This was an area with the most aggressive readings: high risk and unstable terrain. Thus, instead of avoiding the most difficult cluster, we used it as an opportunity to learn from the most hostile conditions. This selected cluster then became our living laboratory.

4.4.2 The Layered Terrain

We revisited our existing data—risk values, ecological corridor conditions, water flow simulations, solar exposure—and added the atmospheric pollution factor. We projected this information onto the corridor area within the selected cluster at higher resolution. The result was a mosaic of environmental forces: areas of intense radiation, pockets of stagnant water, zones with polluted air, and points of extreme vulnerability. This two-dimensional data became our primary language of the territory, now analyzed in greater detail. The next step was to unify the data resolution across the selected patches. Because the ecological planting data originated from dividing the 5 m × 5 m patches into 1.6 m × 1.6 m cells, this subdivision was applied to all the patches, providing more detailed information and allowing us to work at these new dimensions. Each data layer indicates patterns we should follow and interpret, so the next step was to combine this analysis.


In order to transition from broad-scale ecological analysis to targeted intervention, it became necessary to unify all environmental datasets under a single, high-resolution spatial framework. The original 5m x 5m grid used throughout the thesis for corridor-level planning was subdivided into 1.6m x 1.6m micro-patches. This allowed for a one-to-one projection of environmental variables, making each patch a precise, data-rich unit of design decision-making.

This reconfiguration was applied specifically to the selected cluster within the ecological corridor, allowing the full spectrum of previously generated data to be deployed at a finer granularity. Through this move, the thesis reaffirms its core methodology: every design operation must stem from environmental intelligence.

Five key environmental and ecological datasets were projected onto the redefined patch system, each extracted from earlier simulation pipelines:

1. Ecological Risk Index

Risk values derived from terrain instability, accessibility, and historical degradation were reapplied to the high-resolution grid. These readings identify zones requiring structural stabilisation, ecological buffering, or human exclusion.

2. Planting Logic (Soil-Based Prescription)

From the earlier soil analysis—interpreted through chemical, biological, and textural metrics—a planting suitability layer was developed. This logic acted as a prescriptive remediation protocol, indicating the most appropriate vegetation type for each patch. Instead of treating planting as generic, this data layer aligns soil health with ecological succession strategy, providing a data-led blueprint for regeneration.

3. Water Accumulation Zones

Using Grasshopper’s Anemone plugin, a looped runoff simulation was executed to trace micro-slope water trajectories across the terrain. Patches with a high density of intersecting paths were classified as likely water-accumulation zones within the selected corridor. This data is critical for steering foundation-heavy interventions away from these zones.

4. Atmospheric Pollution Overlay

Pollution heat maps, based on satellite imagery and emission reports, were simplified into zonal overlays. This 2D data was locally projected onto the cluster patches, revealing zones with persistent toxic air. From these maps we see that the whole area is polluted, but some zones more than others, and the most polluted area also overlaps with the areas of greatest subsidence. This validates the choice of the selected cluster based on its hostile reality.

5. Solar Radiation

Radiation analysis was applied to the corridor’s micro-topography, evaluating exposure during Jharia’s high-radiation months, March to May. Eastern and elevated patches received the highest radiation values, while shaded depressions offered thermally moderate zones. This metric is essential for bioclimatic design and user comfort. Once the micro-patch information was developed, all values were scaled between 0 and 1 across the patch grid to enable multi-criteria comparison and computational classification.


4.4.3 Volume as a Question: Where and How Much Can We Build?

Before any architectural form emerges, there must be a boundary, an envelope capable of responding to both the spatial intention and the environmental intelligence. In this phase, the challenge consisted of defining the boundaries, in volumetric terms, of the area to be intervened, using data. We needed to understand: where can we build and how much? And, even more importantly, how does the terrain respond?

To translate these questions into design logic, we constructed what we call the Volume of Interest, which we define as a three-dimensional spatial field configured entirely by the ecological and geological behavior of the site. To do this, we developed a formula for how each data point would be interpreted in each 1.6 × 1.6 m voxel.

The proposed formula was as follows:

H = If (Z>0.6, Z*12, Z*9) - Pow(A,2)*12 + (X*R)*6 + Pow(P,1.3)*5

Where:

Z = Air Pollution

A = Water accumulation

X = Radiation

R = Risk Values

P = Plant Type

This formula was used to analyze behaviors such as water accumulation, which pushes the form downwards, indicating areas likely to become saturated or unstable. Risk values modulate the vertical extent: high-risk areas result in lower heights, indicating the intensity of the required architectural response. The vegetation typology, specifically the presence of large species that stabilize or enrich the soil, expands the volume both upwards and outwards, reflecting future biomass and ecological density. The algorithm integrates these variables into a generative rule, where each 5 × 5 m plot, now subdivided into a 3 × 3 grid, contributes its environmental profile to the emerging form. Height is not arbitrary; it becomes a reading of urgency, resilience, and potential.

Therefore, the Volume of Interest is not a design envelope in the conventional sense, but a morphogenetic structure that suggests how far the architecture should extend, based on the site’s information system. As a final step, this volume of interest was filtered by patches of buildable potential: while the entire corridor generates a volume, it was filtered according to the clustering experiment developed earlier. This is where the architecture begins to emerge as a conversation with the data. This volume becomes the framework within which future design decisions will be made. It is not a container of forms, but a container of new information.
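The rule can be expressed directly as a small function. This sketch simply transcribes the published formula; the sample input values are assumed for illustration.

```python
def voxel_height(Z, A, X, R, P):
    """Height reading for one cell, transcribing
    H = If(Z>0.6, Z*12, Z*9) - Pow(A,2)*12 + (X*R)*6 + Pow(P,1.3)*5.

    Z: air pollution, A: water accumulation, X: radiation,
    R: risk value, P: plant type -- all normalized to 0..1.
    """
    base = Z * 12 if Z > 0.6 else Z * 9   # pollution term with 0.6 threshold
    base -= (A ** 2) * 12                 # accumulation pushes the form down
    base += (X * R) * 6                   # radiation and risk modulate extent
    base += (P ** 1.3) * 5                # large species expand the volume
    return base

# Assumed sample inputs for a single cell, for illustration only.
h = voxel_height(Z=0.8, A=0.2, X=0.5, R=0.5, P=0.4)
```

Because all inputs are normalized, the quadratic and 1.3-power exponents act as weightings: water accumulation penalizes height gently at low values and sharply near saturation, while plant type contributes almost linearly.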

4.4.4 Filtering the Volume

The volumetric system, initially generated through layered environmental data (stress maps, ecological attractors, soil simulations, and agent paths), defines a potential field of architectural emergence. Yet, potential is not obligation.

The diagram illustrates a critical methodological step, from volumetric possibility to strategic intervention. This is achieved by intersecting the global mass from the Volume of Interest with the high-resolution patch clusters previously classified according to environmental urgency. Only those patches flagged by the clustering algorithm as structurally viable and ecologically necessary are selected.

The volume is filtered not by arbitrary zoning, but by emergent ecological need. A voxel that once held values of humidity, light, or stress now also inherits the layered intelligence of its patch: its ecological role, its connectivity, its vulnerability.

In doing so, the design framework performs as a living system, data feeds data, patches inform volumes, and structure becomes the encoded memory of prior analyses. Filtering is not reduction; it is refinement. It ensures that the architecture which emerges is not just appropriate to place, but inseparable from the logic of its regeneration.

4.4.5 From Volume to Voxel

After constructing the Volume of Interest, the next step was to translate this form into a system capable of guiding design operations. Following the project’s methodological logic, that of allowing the analysis area to materialize into data, the volume likewise required subdivision. The solution was voxelization.

The entire volume was discretized into a 3D grid of voxels, each approximately 1.6 × 1.6 × 1.6 m. Unlike traditional modeling, voxelization was not a visual simplification, but a strategy to allow spatial units to carry data, just like the plots in the ecological corridor. Voxels became the smallest operational units, allowing for analysis within the volumetric space. Each voxel was populated with key layers of environmental information:

Risk Value:

Values calculated from slope, subsidence, and rock data were projected onto the voxels, assigning a vertical weight to areas of structural instability.

Air Pollution:

Concentrations of air pollutants were mapped in three dimensions, allowing the system to identify which voxels lie within columns of highly polluted air.

Plant Type Logic:

The dominant plant category in the corresponding land parcel (stabilizing, phytoremediating, enriching, or biodiversity-enhancing) was used to assign expected biomass and ecological function to adjacent voxels, affecting their growth margin.

Solar Radiation:

Insolation data were overlaid onto the voxel field to detect which volumes receive sufficient light for ecological compatibility.

Proximity to the ground on the Z-axis:

Voxels closer to the ground surface were marked as structurally relevant for loading or anchoring, while higher or lower voxels were assigned less value.

The voxel grid is no longer an empty space, but a scaffold with a high data density, where each cube contains a set of parameters that guide future decisions.

This voxelized model now becomes the basis for the next phase: a cellular ecology of decisions, where voxels interact, respond, and evolve.

Fig. 4.24: Cellular Automata Progression to Points of Interest

4.4.6 Cellular Ecologies: A Data-Driven Emergence

Building upon the previously established voxel framework, this phase implements an ecological cellular automaton to simulate which parts of the volume harbor the greatest environmental potential. Inspired by the methodology presented in Chapter 3, on methods, the automaton’s algorithm was adapted to operate within a 3D voxel array. The goal: to allow the data itself, rather than the designer, to select where environmental conditions converge for life and architectural emergence.

Each voxel contains five key parameters: site risk, air pollution, plant type, solar radiation, and vertical proximity to the ground. All values were normalized between 0 and 1, enabling a unified calculation. Thus, the sum of a voxel’s values ranged from 0 (environmentally unsuitable) to 5 (ideal conditions). Voxels with a total score above 3.5 were designated as ecologically favorable zones: strong points with optimal environmental characteristics.

However, the algorithm is not limited to isolated conditions; it evaluates each voxel relationally, considering not only its internal score but also the performance of its surroundings. Each voxel has up to 26 neighbors (fewer at the edges of the grid); thus, if the voxel under study has a total score below 3.5 but is surrounded by at least eight high-performing voxels, it is reclassified as a viable candidate, revived thanks to ecological proximity. What was “dead” on its own comes to life in context.

This logic creates a dynamic simulation in which each voxel is evaluated twice: first, by its own conditions, and second, by its relationship with others. In this way, the system recognizes ecological synergy: how local groups of high-quality voxels generate momentum for spatial occupation. It is not simply about identifying the best spots, but about observing how environmental quality propagates through space, allowing emergence to occur not by imposition, but by resonance.
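The two-pass evaluation can be sketched as follows. This is an illustrative reimplementation in Python/NumPy, not the original Grasshopper definition, and the toy grid values are assumed.

```python
import numpy as np

def activate(scores, strong=3.5, revive_neighbors=8):
    """Two-pass voxel evaluation.

    scores: 3D array of summed voxel parameters (range 0..5).
    Pass 1: voxels scoring above `strong` become active.
    Pass 2: weaker voxels with at least `revive_neighbors` active cells
    in their 26-cell Moore neighborhood are revived.
    """
    active = scores > strong
    padded = np.pad(active.astype(int), 1)       # zero border for edge voxels
    nx, ny, nz = scores.shape
    counts = np.zeros(scores.shape, dtype=int)
    for dx in range(3):                          # sum the 27 shifted copies...
        for dy in range(3):
            for dz in range(3):
                counts += padded[dx:dx + nx, dy:dy + ny, dz:dz + nz]
    counts -= active.astype(int)                 # ...then drop the self-count
    revived = (~active) & (counts >= revive_neighbors)
    return active | revived

# Assumed toy grid: a weak centre voxel surrounded by strong ones is revived.
scores = np.full((3, 3, 3), 4.0)
scores[1, 1, 1] = 2.0
result = activate(scores)
```

Running the first pass alone would discard the centre voxel; the relational second pass recovers it, which is the "resonance" behaviour described above.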

4.4.7 Points of Interest: From Voxels to Decision

The result of this ecological simulation is a cloud of activated voxels, a filtered dataset of environmental opportunities. These are the points that will be incorporated into the next design experiments. They are no longer abstract cubes of data, but points of interest, selected by an internal logic of performance, relationship, and resilience. Here, the volume of interest becomes a field of intention. Instead of prescribing where to build, the system suggests where conditions tend to develop. These points mark the beginning of future geometry, structure, and material articulation.

What was once a neutral mass is now a three-dimensional intelligent landscape. The following chapters will explore how these points translate into form, allowing matter, climate, and ecology to influence architecture at all scales.

1. Flora Faleschini et al., “Sustainable Mixes for 3D Printing of Earth-Based Constructions,” Construction and Building Materials 398 (2023): 132496, https://doi.org/10.1016/j.conbuildmat.2023.132496

2. Yameng Ji, Philippe Poullain, and Nordine Leklou, “The Selection and Design of Earthen Materials for 3D Printing,” Construction and Building Materials 404 (2023): 133114, https://doi.org/10.1016/j.conbuildmat.2023.133114

3. Tashania Akemah and Lola Ben-Alon, “Developing 3D-Printed Natural Fiber-Based Mixtures,” in ICBBM 2023: Bio-Based Building Materials, ed. Sofiane Amziane et al. (Cham: Springer, 2023), 555–572, https://doi.org/10.1007/978-3-031-33465-8_42

4. Ji, Poullain, and Leklou, “The Selection and Design of Earthen Materials.”

5. Faleschini et al., “Sustainable Mixes.”

4.5 The Material: Performance

4.5.1 Introduction

Mud has long served as a traditional construction material in the region, with local communities well-versed in its handling, application, and maintenance. Its widespread availability and cultural familiarity make it an ideal foundation for adaptive, site-specific building practices.

In this context, mud refers to a specially designed mix of local soils combined with stabilizers, fibers, and other additives specifically chosen for the site. These mixtures help with insulation, control moisture, adapt to different structures, and fit well with the environment. Materials such as fly ash, biochar, rice husk, and waste from local incinerators enhance the properties of the mud and support recycling by utilizing local waste.

Recent advancements in 3D printing with earthen materials have expanded the architectural potential of mud. Research suggests that properties such as granularity, plasticity, shrinkage, and rheological behaviour can be controlled to optimize extrudability and buildability.³ The inclusion of binders and natural fibres, such as straw, sisal, and hemp, enhances shape retention, increases tensile strength, and reduces cracking.⁴

In regions such as Jharia, these systems provide multiple benefits. They protect communities from persistent environmental hazards and support ecological restoration through carbon sequestration and the absorption of pollutants. As the construction sector strives to strike a balance between performance, climate resilience, and ethical material sourcing, reformulated earth-based composites are likely to play a pivotal role in fostering more regenerative and adaptive built environments.


Fig. 4.12: Soil samples of various mines in Jharia (adapted from Springer, 2021)

1. Maiti, S. K. Ecorestoration of the Coalmine Degraded Lands: Indian Scenario. New Delhi: Springer, 2021.

2. Singh, G., and B. K. Tewary. “Impact of Coal Mining and Mine Fires on the Local Environment in Jharia.” Environmental Monitoring and Assessment 186, no. 10 (2014): 5955–5964. https://doi.org/10.1007/s10661-014-3824-7

3. Sarkar, D., and R. Rano. “Soil Physical Constraints in Degraded Landscapes of Eastern India.” In Soil Degradation and Restoration in India, edited by A. Bandyopadhyay et al., 131–148. New Delhi: Springer, 2020.

4. Based on site soil testing conducted by the research team in Barrare, Jharkhand (2024).

5. Bhattacharya, P., and N. Chakraborty. “Soil Compaction and Water Stress in Coal Mining Regions of Jharkhand.” Indian Journal of Soil Conservation 45, no. 2 (2017): 112–120.

The land in Barrare, located adjacent to the Jharia coalfields, exhibits clear signs of ecological exhaustion.⁴ Decades of mining activity have stripped the topsoil of essential nutrients, resulting in high acidity (pH 4.5–5.5), low organic matter, and deficient levels of macronutrients such as nitrogen, phosphorus, and potassium.⁵ The texture class in this region largely aligns with sandy-skeletal and coarse-loamy profiles characterised by high gravel content and minimal clay, offering poor water retention and low binding capacity.⁵ This compromises both vegetative growth and foundational load-bearing potential for architecture.


1. Ananya Deshpande, Burning Ground: Infrastructure and Survival in Jharia’s Coalfields (New Delhi: Earthline Press, 2021), 88–90.

2. Chea, C. P., Y. Bai, X. Pan, and M. Arashpour. “An Integrated Review of Automation and Robotic Technologies for Structural Prefabrication and Construction.” Journal of Engineering Safety and Environment 2, no. 2 (2020): 81–98.

3. Ardiny, H. Functional and Adaptive Construction for Rescue: An Analysis of the Approach Using Autonomous Robots. EPFL, 2017.

Traditional construction methods fail in geologically unstable areas like Jharia, where shifting ground, toxic gases, and underground fires make sites unsafe for people and heavy equipment. In such conditions, conventional methods are not only inefficient but dangerous.¹ Workers face toxic fumes, unstable terrain, and extreme heat from coal fires. The challenges are both structural and human. A new building method is needed, one that keeps people out of harm’s way, adapts to shifting terrain, and ensures accuracy and repeatability.

Robotic-assisted construction offers such a path.² Using a compact, programmable robotic arm, modular components can be fabricated, assembled, and deployed. Mounted on a stable base or mobile unit, the robot builds shelters, walls, or remediation surfaces with precision even in chaotic ground.² This study proposes a robotic methodology that extrudes and assembles modular elements along programmed tool paths.³ A compact arm deposits material layer by layer under pre-calibrated settings.

The approach draws on precedents in remote fabrication, emergency sheltering, and digital earth-building.³ Its novelty lies in applying mud-based composites under extreme conditions. The goal is a safe, localized, and repeatable system that:

Removes human exposure to toxic or unstable sites

Delivers consistent, high-quality modules

Enables controlled use of experimental, bio-receptive materials

Functions on- or off-site, with simple transport and setup

The objective is to validate a robotic workflow using mud-based composites for thermal resistance and plant growth. It builds modular elements for rapid deployment and can be operated locally by a small, minimally trained team.

Additives and Stabilisers

To engineer a composite material capable of meeting both structural and ecological performance criteria, a suite of natural and industrial additives has been proposed. These stabilisers are introduced in varied combinations across test groups, each selected for its ability to enhance specific material properties while remaining contextually relevant to the Barrare site.

Lime (5%)

Lime acts as a chemical stabiliser, reducing shrinkage during the curing phase and significantly increasing water resistance. By promoting pozzolanic reactions within the soil matrix, it enhances long-term strength and dimensional stability.⁵ This is particularly important in Barrare, where high porosity and seasonal saturation can degrade untreated materials.

Fly Ash (10%)

Fly ash serves a dual role: mechanically, it increases density and fills voids in the mix, simulating the fine particle distribution found in stabilised soils.¹⁵ Symbolically and contextually, it integrates a post-industrial byproduct reflective of the region’s coal-burning legacy. This allows us to reframe pollutant residues as active architectural ingredients, converting waste into performance.

Biochar (5–10%)

Introduced as a functional carbon additive, biochar improves internal porosity and moisture buffering.¹¹ Its porous matrix fosters microbial growth, potentially transforming the material into a bioreceptive substrate. In future studies, this quality could support the colonisation of beneficial organisms or vegetation, linking architectural material with regenerative landscape processes.¹²

Hemp (3%)

Hemp fibres are incorporated for tensile reinforcement, helping to control micro-cracking during the drying phase and offering structural stability under thermal expansion.¹³ Their behaviour under strain mimics early fibre-reinforced systems and introduces a low-tech strategy for improving crack resistance, especially in large-scale printed or cast elements.

Compost (5%)

Compost is added not merely as filler but as a biological activator. It supports the emergence of living surfaces by supplying nutrients and microbial life to the material body.¹⁴ This opens the door to a new typology of living architecture, one that supports mosses, lichens, or microbial coatings, particularly in shaded, high-humidity conditions typical of the Jharia region.

By systematically varying these additives, we can begin to construct a material taxonomy where the mechanical, environmental, and ecological properties can be parametrically controlled. Each additive introduces a layer of responsiveness that can be tracked, simulated, and integrated into computational design workflows for adaptive architectural systems.¹⁵
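One way this taxonomy can be encoded for a computational workflow is as a simple lookup. The sketch below uses the nominal doses listed above; the effect tags are interpretive labels summarising the text, not measured coefficients.

```python
# Nominal doses from the text; effect tags are qualitative, interpretive labels.
ADDITIVES = {
    "lime":    {"dose_wt_pct": 5,       "effects": ["water_resistance", "shrinkage_control"]},
    "fly_ash": {"dose_wt_pct": 10,      "effects": ["density", "void_filling"]},
    "biochar": {"dose_wt_pct": (5, 10), "effects": ["porosity", "moisture_buffering", "bio_receptivity"]},
    "hemp":    {"dose_wt_pct": 3,       "effects": ["tensile_reinforcement", "crack_control"]},
    "compost": {"dose_wt_pct": 5,       "effects": ["nutrients", "microbial_activity"]},
}

def additives_for(effect):
    """List the additives whose qualitative profile includes `effect`."""
    return sorted(name for name, spec in ADDITIVES.items()
                  if effect in spec["effects"])
```

Such a structure lets a design script query the material system by desired behaviour, the first step toward the parametric control described above.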

1. Sherwood, P. T. Soil Stabilization with Cement and Lime. London: Transport Research Laboratory, 1993.

2. Siddique, Rafat. “Effect of Fly Ash on the Properties of Soil Stabilized with Lime.” Waste Management 24, no. 6 (2004): 583–589. https://doi.org/10.1016/j.wasman.2003.09.003

3. Lehmann, Johannes, and Stephen Joseph, eds. Biochar for Environmental Management: Science, Technology and Implementation. 2nd ed. London: Routledge, 2015.

4. Ziegler, C., and A. Battrick. “Living Architecture: Towards Sustainable, Responsive Building Skins.” Architectural Design 87, no. 2 (2017): 82–89. https://doi.org/10.1002/ad.2147

5. Balaguru, P. N., and S. P. Shah. Fiber Reinforced Cement Composites. New York: McGraw Hill, 1992.

6. Jones, Michael, et al. “Designing Living Architecture: Soil and Compost as Active Materials in Building Systems.” International Journal of Architectural Computing 18, no. 2 (2020): 133–150. https://doi.org/10.1177/1478077120926659

7. Oxman, Neri. “Material Ecology.” Journal of Design and Science 1 (2016). https://doi.org/10.21428/7e0583ad

4.5.3 Printability

The base mix was derived from vernacular practices in Jharia, where mud walls are common, reinterpreted as a system suitable for robotic arm extrusion with local resources.

Soil was sieved to ≤2 mm (80–85 wt.%) to ensure particle size uniformity and reduce clogging.² Cohesion was enhanced with 5–12 wt.% organic matter, while short straw fibres (2–3 wt.%) reduced drying shrinkage and improved tensile resistance.³ Water content (35–40 wt.%) was tuned to achieve shear-thinning during extrusion and rapid build-up on deposition.(6)

Stabilisation was achieved with 5–10 wt.% lime and 10–20 wt.% fly ash, acting as fine pozzolanic fillers and regulating pH.⁴ Lime raised pH above 12 to promote pozzolanic activity and early buildability, but dosage was restricted to avoid brittleness reported in high-lime mixes (Bhusal et al., 2023).(5)(6)

Biochar (3–10 wt.%) was incorporated to enhance moisture retention, surface roughness, and microbial colonisation.(7) Its pores act as micro-reservoirs, reducing evaporation and maintaining a wet microclimate favourable to moss and lichen growth, aligning with the project’s aim of creating a substrate that is both structural and environmentally active.(8)

Additive ratios strongly influenced performance. A small addition of 3 wt.% locust bean gum and 3 wt.% alginate improved stacking and green-phase stability of printed layers.

Hemp fibre served as the main reinforcement, following Akemah and Ben-Alon (2023), who showed plant fibres improve ductility and bridge cracks in printed earth.¹ Hemp played a dual role: bridging shrinkage cracks and redistributing tensile stresses, while also forming a lattice that resisted plastic deformation.² Dosage was carefully limited to avoid clustering and nozzle clogging seen in high-fibre mixes.³

Overall, the formulation combined up to 85 wt.% local soil with biopolymers, fibres, and stabilisers to yield a print-ready substrate. The mix was dimensionally stable, adhesive between layers, and ecologically responsive, providing the basis for further mechanical and bio-receptivity testing.
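The component ranges reported above can be collected into a simple screening routine. This sketch is illustrative: the candidate mix shown is hypothetical, and the routine only checks ranges, not interactions between components.

```python
# Component ranges (wt.%) reported in the printability study; water is
# dosed against dry mass, so the figures are not expected to sum to 100.
RANGES = {
    "soil":    (80, 85),
    "organic": (5, 12),
    "straw":   (2, 3),
    "lime":    (5, 10),
    "fly_ash": (10, 20),
    "biochar": (3, 10),
    "water":   (35, 40),
}

def out_of_range(mix):
    """Return the components of a candidate mix outside the tested ranges."""
    return {name: value for name, value in mix.items()
            if not RANGES[name][0] <= value <= RANGES[name][1]}

# Hypothetical candidate: everything in range except an over-dosed biochar.
candidate = {"soil": 82, "organic": 8, "straw": 2.5, "lime": 6,
             "fly_ash": 15, "biochar": 12, "water": 38}
issues = out_of_range(candidate)
```

A screen of this kind would flag mixes likely to suffer the clogging or brittleness failures reported in the cited literature before material is wasted on a print trial.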

1. This dual performance requirement is consistent with the sustainability and mechanical guidelines for earthen additive manufacturing outlined by Faleschini et al. (2023), who emphasise reducing binder related embodied carbon while retaining adequate mechanical performance.

2. Ji et al. (2023) highlight the importance of particle gradation in achieving both extrudability and buildability, noting that oversized particles disrupt layer continuity and increase the risk of nozzle blockages.

3. Akemah & Ben Alon (2023) report that natural fibres such as hemp improve tensile resistance and control shrinkage cracking in 3D printed earthen materials, but excessive fibre content can cause feed interruptions.

4. The fly ash–lime system follows the stabilisation principles discussed by Bhusal et al. (2023), where lime provides early alkaline stabilisation and fly ash contributes to long term strength through pozzolanic reactions.

5. While biochar is not commonly present in structural earth printing mixes, façade bioreceptivity studies show its capacity to retain moisture and host microbial growth, suggesting relevance to this project’s ecological aims.

6. Water content was informed by extrusion moisture ranges (20–25%) identified in Gomaa et al. (2019) and Ji et al. (2023) for stable thixotropic behaviour without slump.

4.5.4 Compression Test

Compression tests on samples 6–11 were conducted to test the impact of density, mass, and composition variation on the mechanical performance of bio-receptive earthen composites. Samples shared an identical raw soil foundation, stabilisers (lime, fly ash), and bio-receptive additives (biochar, hemp fibres, organic matter), but differed in proportions and water content to produce varying densities. The results reveal the compromises between structural capacity and ecological responsiveness in the system.

The densest sample, Sample 8 (1.170 g/cm³), showed intermediate strength with a failure load of 800 kg. Its density restricted pore space, which reduced microcracking under load but did not yield the highest strength. The opposite was observed for the sparsest sample, Sample 10 (0.655 g/cm³), which recorded one of the highest ultimate loads at 1100 kg. This suggests that voids and fibre presence, even while reducing density, better redistributed stress and delayed catastrophic failure. The outcome highlights the non-linear relationship between density and strength in fibrous soil composites.

The optimal overall performance was found in Sample 11, which involved a mid-density of 0.773 g/cm³ with a maximum load capacity of 1100 kg. Its balance of porosity and stiffness appears optimal to support load with the bio-receptiveness essential for long-term ecological growth. Conversely, certain samples such as Sample 7 (1.112 g/cm³) collapsed earlier under respective loads, confirming that higher compaction is not always synonymous with higher strength in mixtures incorporating organic fibres and biochar.

These results show that composite strength in earthen materials is not solely a function of density. Particle packing, fibre reinforcement, and internal porosity all contribute to performance. The findings also show that a material optimised for structural reliability, like Sample 11, can simultaneously be water-absorbent, pH-moderating, and supportive of biological growth without becoming unsafe under compression.

In the scenario of the Jharia site, where ecological recovery and ground instability are the most critical issues, such behaviour is particularly significant. It demonstrates that materials can be tuned not to reach a maximum density but to a balanced ratio of strength, resilience, and bio-receptivity, enabling construction on degraded ground while repairing it simultaneously.

Sample 6

Mass: 455 g

Height: 7 cm

Density: 1.022 g/cm³

Sample 7

Mass: 495 g

Height: 7 cm

Density: 1.112 g/cm³

Max Load: 750 kg

Sample 8

Mass: 521 g

Height: 7 cm

Max Load: 750 kg

Density: 1.170 g/cm³ (highest density overall)

Sample 9

Mass: 304 g

Height: 6 cm

Density: 0.796 g/cm³

Max Load: 900 kg

Sample 10

Mass: 250 g

Height: 6 cm

Max Load: 800 kg

Density: 0.655 g/cm³ (lowest density)

Max Load: 1100 kg

4.5.5 Shear Test

The shear test was carried out to evaluate the structural behaviour of the bio-receptive soil composite under lateral stress. On loading, the specimen resisted displacement with no visible cracking, an indication of excellent cohesion between the soil matrix and the fibrous additives. As the applied load increased, micro-fractures began to develop along the potential shear plane and grew continually until catastrophic slippage occurred. This transition marked the highest load-bearing capacity of the sample and was followed by a post-transition brittle failure.

Observation confirmed that the test specimen produced a relatively clean shear break rather than crushing or delamination, which means the mixture had formed a stable internal bond under compressive confinement but lost resistance once the shear plane was mobilized. Some fibre pull-out showed that organic matter contributed to micro-crack bridging, but not enough to halt the progression of failure.

In operational terms, this result implies that the composite resists shear stress at medium levels, similar to low-strength masonry mortars, and is consequently structurally stable for non-load-bearing printed walls or infill structures. More importantly, the sequential failure observed in video analysis helps calibrate the material model: strengths derived from the test serve as input parameters for robotic printing on slopes or uneven ground.

Generally, the shear test confirmed that the mix remains intact up to a critical point, beyond which brittle sliding dominates. This point, expressed as shear strength in MPa, provides the numerical value needed to compare the material with conventional masonry units and to optimize additive ratios in subsequent tests.

Sample 10

Mass: 295 g

Height: 9 cm

Density: 0.685 g/cm³ (lowest density)

Max Load: 900–1200 kg

Shear Strength: 3.2–4.2 MPa

Failure Mode: Clear central shear plane

Sample 11

Mass: 298 g

Height: 9 cm

Density: 0.773 g/cm³ (highest strength)

Max Shear Load: 950–1300 kg

Shear Strength: 3.4–4.6 MPa

Failure Mode: Diagonal split with shear

4.5.6 pH Test

To assess the material composite’s chemical stability and acid resistance, four selected samples were subjected to pH testing, which included Sample 8, Sample 9, Sample 10, and Sample 11. The test was conducted to simulate environmental conditions of acid exposure such as acid rain, microbial metabolites, or root exudates, which are the most critical conditions for assessing a material’s potential bio-receptivity. The ability of a surface to offer a stable pH environment is significant, as microbial colonization and biological activity are highly sensitive to variations in pH.

Approximately 20% vinegar, with a pH of 3.8, was applied to each dry sample to provide controlled acidity. Samples were incubated with the solution for 72 hours at room temperature. After incubation, the pH of the remaining solution was measured using a standard analog soil pH meter. Although not highly precise, this device is adequate to identify overall trends and compare buffering capacity between the materials.

The results indicated a clear buffering gradient. Sample 8 returned the most significant buffering activity, returning to a very close-neutral pH range of approximately 6.8 to 7.0.

This was because it contained high levels of organic matter in the form of alginate and locust bean gum, together with biochar and vermiculite, ingredients that have been shown to enhance both pH stabilization and microbial compatibility. Sample 9, with no buffering agents and a high fly ash concentration but minimal organic matter, showed the worst buffering capability, with pH values remaining in the acidic range of 5.0 to 5.8. Sample 10 improved on Sample 9, likely due to the presence of biochar and vermiculite, reaching a mid-range pH of 5.8 to 6.5. Sample 11 followed, with final pH values between 6.2 and 6.8. Although it did not achieve the full neutrality of Sample 8, it struck a good balance between material stability and buffering capacity.

These results indicate that acid-buffering capacity is directly influenced by the makeup of the mixtures. The combination of natural polysaccharides, biochar, and mineral admixtures such as vermiculite stabilized the pH of the samples. This is particularly significant for materials intended to support biological organisms or improve ecological integration, where a stable and benign chemical condition is necessary. Among the tested materials, Sample 8 was the most bio-compatible chemically, while Sample 11 offered an even balance of structural strength and an optimal microenvironment for the growth of living things.

4.5.7 Water Retention Test

The 24 h water retention values of Samples 6–11 demonstrate how the composition of additives and stabilisers directly regulates porosity and water behaviour. Sample 8 recorded the maximum value (~65 g) due to the synergistic action of 10% biochar and 5% vermiculite: the microporosity of biochar trapped capillary water while vermiculite swelled to hold interlayer water, creating an extremely absorptive matrix.

In contrast, Sample 9, containing 15% lime and fly ash as primary stabilisers, absorbed only ~12 g. The pozzolanic reaction closed void networks, forming a compact matrix with minimal pore connectivity. Samples 6 and 7, containing intermediate hemp fibre (2–3%) and lower amounts of lime/fly ash, maintained ~25–35 g, suggesting that fibres improved water distribution but the matrices were too dense for long-term storage. Samples 10 and 11, balanced mixtures with both mineral binders and trace organic additives, stabilized at ~15–20 g, a compromise between structural stability and environmental sensitivity.

The test confirms that bio-additives (vermiculite, hemp, biochar) modify porosity and retention, while mineral binders (fly ash, lime) increase density and prevent absorption. Optimum performance is obtained from a balance of these, such that composites maximize strength and durability or bio-receptivity and ecological activation, depending on design intent.

4.5.8 Optimal Material Mix

Comparative testing of Mixtures 6–11 showed how each additive reshapes the behaviour of mud in measurable and strategic ways. The most lime- and fly-ash-dense mixtures (Samples 8 and 10) cured quickly and showed the highest strength and stability values. The pozzolanic reactions chemically bound the soil particles to one another, resulting in dense, stone-like composites that resist compression and deformation. However, the same density reduced porosity, leading to weak water retention and poor ecological responsiveness. Conversely, mixtures with higher biochar and compost ratios (Samples 6 and 7) performed exceedingly well in water retention and pH neutralisation. The porous carbon structure of biochar and the organic content of compost created pathways for moisture, microbial activity, and chemical balance, but weakened the load-bearing capacity of the material.

The optimum was attained with Mixture 11, which balanced stabilisers and bio-based additives. It did not peak in any single aspect but possessed medium-to-high values across the board. The mixture was strong enough to retain its shape, porous enough to take in water, inert enough to sustain plant growth, and stable enough to survive curing without severe distortion. In the hostile environment of Jharia, where the ground is unstable and conditions are toxic, such equilibrium is more valuable than single-objective optimisation.

The most significant lesson from these tests is that additives are not inert fillers but active design tools. They shift the behaviour of mud towards either structural safety or ecological recovery, and the real opportunity lies in mediating between the two. The team found that material can no longer be thought of as a fixed constraint: it can be coded, calibrated, and made dependent upon context. The implication is not a one-mixture-fits-all solution; rather, Mixture 11 demonstrates the possibility of an adaptive equilibrium, a hybrid that is structurally resilient, ecologically sensitive, and can be installed by robotic fabrication on unstable ground. This is why the tests mattered: they took mud, an ancient medium, and turned it into a strategic agent for landscape and architectural remediation.

4.5.9 Life Cycle Assessment (LCA)

The Jharia earthen–bio-based system shows that soil, agricultural fibres, and industrial by-products can replace conventional brick and cement masonry with strong carbon performance. The life cycle assessment reports cradle-to-gate emissions of 27 kilograms of CO₂e per square metre of wall (A1–A3), placing it among the best-performing wall systems globally. Its footprint is about ten times lower than typical concrete or masonry, demonstrating the potential for major reductions in environmental impact.

The analysis highlights strengths and areas for improvement. Soil currently dominates the footprint because a conservative dataset uses energy-intensive dry sand as a proxy. In practice, raw excavated soil would yield much lower emissions, meaning reported results likely overestimate impact and the system’s true performance is even more favourable.

Hemp fibres account for just over twelve percent of emissions, from processing and transport, but this excludes the carbon stored in the fibres, which under long-term accounting rules could offset much of it. Biochar shows a similar pattern: while its direct footprint is small, its real value lies in long-term carbon storage. Both materials point to a future where bio-based inputs are not only functional but central to net-negative construction.

Binders remain the main hotspot, even with clinker reduced by half through fly ash and lime, confirming that cement chemistry is the biggest barrier to lowering embodied carbon. Alternatives such as calcined clays, expanded SCMs, or geopolymeric binders represent the next frontier.

From a life-cycle view, about ninety-five percent of emissions occur in sourcing and processing, with transport adding only four percent thanks to regional supply, while on-site emissions are negligible. This shows that material choice and production methods are far more decisive than logistics or assembly.
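The stage shares reported above can be reproduced arithmetically. The sketch below uses illustrative per-stage values chosen only to match the reported split (~95% sourcing and processing, ~4% transport, negligible on-site); they are assumptions, not figures from the actual LCA model.

```python
# Illustrative cradle-to-gate breakdown per m2 of wall (kg CO2e).
# Stage values are assumptions chosen to match the reported shares,
# not outputs of the thesis LCA model.
TOTAL = 27.0  # reported A1-A3 total, kg CO2e per m2 of wall

stages = {
    "sourcing_processing": 25.65,  # assumed: ~95% of total
    "transport":            1.08,  # assumed: ~4% of total
    "on_site":              0.27,  # assumed: ~1% of total
}

shares = {stage: value / TOTAL for stage, value in stages.items()}

for stage, share in shares.items():
    print(f"{stage}: {share:.0%}")
```

The exercise makes the argument of this section explicit: even doubling transport distances would barely move the total, whereas any change to sourcing and processing dominates the result.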

The study reveals not only efficiency but also potential. Even with conservative modelling, the system performs at the highest global level. If the lower-impact potential of raw earth and the carbon storage of hemp and biochar are credited, the Jharia mix could move from low-carbon to carbon-negative. This marks a shift toward construction that restores rather than depletes.

Together, the findings show that structural materials can be both robust and restorative. By using local soil, agricultural by-products, and industrial residues, Jharia offers a model for redefining construction in similar contexts. The approach cuts embodied carbon to a fraction of conventional levels while supporting regional circular economies, proving that the next generation of materials can be locally sourced, made from available resources, and still surpass global benchmarks.

Global warming t CO2e - Life-cycle stages


Global warming t CO2e - Resources types


Resultant Stress Lines

Displacement Analysis

4.6 Stabilising Wall Intelligence

This chapter presents the design logic behind the proposed stabilizing elements for the ecological corridor. The starting point was the identification of the grouped patches as potential areas for architectural intervention, as defined in Chapter 4.3. These resulting patches will be implemented using the logic of retaining walls. These walls will be living structures that stabilize the terrain, absorb stresses, and simultaneously provide conditions for plant colonization. Thus, the vegetation is not an added layer after construction, but rather a co-author of the design. The wall is configured as a porous scaffold, a tectonic and bioreceptive matrix that responds to the dynamics of the terrain, capable of managing structural loads and facilitating ecological processes at the same time.

The following sections describe, step by step, the parametric processes that enable this hybrid behavior. From the initial definition of the base surface, through simulations of deformation and internal stresses, to the recognition of structural trajectories, this chapter positions the wall not only as a technical artifact, but as a sensitive interface between ecology, topography and emerging architecture.

4.6.1 From Terrain to Surface: Defining the Base Geometry

The process begins by defining a continuous curved surface, extracted from the architectural patch clusters selected in previous chapters. This surface does not represent a final architectural form, but rather a performative base geometry that adapts to the topographic irregularities of the site. It acts as a stabilising intermediary between a volatile terrain and the architectural systems to come.

Curvature is used not only for spatial softness, but as a structural strategy: it distributes forces multidirectionally, allowing stress to diffuse in varied directions across the surface. This sets up the terrain for more complex simulations of material behaviour and future architectural integration.

4.6.2 Defining Structural Inputs: Piles and Columns

To simulate structural behaviour, an algorithm was developed to populate the surface with two sets of interventions. The first includes randomly distributed anchoring points: vertical piles injected into the terrain from beneath the surface, emulating substructural stabilisation by tying the wall to the ground. The second introduces randomised vertical loads of 30 kN onto the surface, representing potential architectural columns that will rise from this base in later phases.

Together, these structural inputs establish a hybridised model of support: anchoring below, loading above. This configuration provides the conditions for stress to emerge and be analysed, forming the basis for subsequent structural intelligence.
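The population step can be sketched as follows. This is a minimal stand-in for the actual parametric definition: the curved base surface is abstracted as a hypothetical height function, and the counts, extent, and seed are assumptions for illustration.

```python
import random

# Sketch of the structural-input setup: random anchoring piles beneath a
# base surface and random 30 kN point loads on top of it. surface_z is an
# assumed stand-in for the curved base geometry from the patch clusters.
random.seed(42)

def surface_z(x, y):
    # hypothetical gently curved surface
    return 0.05 * (x - 25.0) ** 2 / 25.0 + 0.02 * y

def populate(n_piles=20, n_columns=10, extent=50.0):
    piles, columns = [], []
    for _ in range(n_piles):
        x, y = random.uniform(0, extent), random.uniform(0, extent)
        piles.append((x, y, surface_z(x, y)))   # anchoring from below
    for _ in range(n_columns):
        x, y = random.uniform(0, extent), random.uniform(0, extent)
        columns.append({"pos": (x, y, surface_z(x, y)),
                        "load_kN": 30.0})       # vertical load from above
    return piles, columns

piles, columns = populate()
print(len(piles), "piles,", len(columns), "columns")
```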

4.6.3 Material Behavior Simulation: Finite Element Deformation Analysis

Using the available geometry and structural data, a Finite Element Analysis (FEA) simulation is performed using the calibrated material properties from Chapter 4.4. The objective is to observe how this curved base reacts to the combined forces of gravity, material stiffness, and architectural load.

The results reveal displacement gradients, where lighter areas represent greater deformation. These maps illustrate the surface stress landscape: where it bends, where it resists, and where intervention is required. While these results indicate points of failure, they also show areas with potential, that is, where the structure needs reinforcement or where the shape needs to be adapted.

4.6.4 Structural Intelligence Mapping: Stress Flow Lines

Following the deformation study, a Karamba structural analysis extracts the internal logic of force propagation. The result is a set of main tension lines: paths of tension and compression that trace the invisible anatomy of the structure. These lines act as generative tools: they guide the placement of future columns, suggest voids where mass can be reduced, and offer a logic of structural growth linked to performance rather than aesthetics.

4.6.5 Multi-Objective Optimisation: Balancing Structure

Since the placement of piles and columns could significantly influence the structural behavior of the retaining wall, a multi-objective optimization strategy was employed. This allowed for a computational exploration of multiple configurations to find those that best fit the architectural, ecological, and material criteria. Using Wallacei, an evolutionary solver, a series of generational iterations were executed to evaluate and select the optimal results based on the following objectives:

Minimize displacement: reduce structural deflection to improve overall stability, especially on terrain with irregular topography.

Minimize internal stress: reduce internal stresses in the surface geometry to ensure material longevity and prevent premature failure.

Minimize material volume: promote structural efficiency by reducing resource consumption and environmental impact.

Maximize distance between columns: spreading the supports distributes the architecture across the whole area of the retaining wall.

Maximize porosity: maintaining voids within the structural envelope fosters ecological interconnectivity.

Instead of producing a single, fixed solution, the optimization process yielded a family of high-performance geometries, each with a unique balance between formal articulation, force distribution, and ecological potential. Among these, the solution with the lowest displacement value was selected as the preferred candidate, reflecting the project’s primary concern: soil stabilization.
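The selection logic can be illustrated outside Wallacei with a standard Pareto-dominance filter. The candidate values below are invented for illustration; all objectives are expressed for minimisation, so the two "maximise" objectives are negated.

```python
# Sketch of the selection step: keep the non-dominated (Pareto-optimal)
# candidates, then pick the one with the lowest displacement. Candidate
# tuples are invented; Wallacei performs the real evolutionary search.
candidates = [
    # (displacement, stress, volume, -column_spacing, -porosity)
    (0.8, 120, 10.0, -6.0, -0.30),
    (0.5, 150, 12.0, -5.0, -0.25),
    (0.9, 110,  9.0, -7.0, -0.35),
    (1.2, 160, 14.0, -4.0, -0.20),  # dominated by the second candidate
]

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
best = min(pareto, key=lambda c: c[0])  # primary concern: soil stabilisation
print(len(pareto), "non-dominated candidates; selected:", best)
```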

From this selected configuration, the resulting tension lines become more than just structural diagrams; they serve as spatial guides indicating where the wall should thicken or open up. Areas of higher tension suggest zones requiring greater mass or lower porosity, while areas of lower tension can allow for greater openness.

This information will be used to calibrate the porosity of the resulting structure to allow for the integration of soil pockets, transforming the wall into an ecological substrate over time. The aim is for the wall to become both a retaining element and a host, a hybrid framework where architecture and ecology co-produce stability.

Attraction Points

Plants Retaining

4.7 Data and Ecology: Mass Formation

This chapter focuses on defining a volume containing architecture based on the experiments described above. The data from these previous experiments, stress lines, column locations, and vegetation information, will be used to define an architectural volume. We say "volume containing architecture" because the data tools and the agent system provide emergent results whose materialisation is explained later.

4.7.1 Simulating Agents from Support Points

As previously mentioned in the chapter on methodologies, agent simulation, as well as the data that guides their behavior, is crucial. Therefore, at this point in the thesis, we will define the agents that will be simulated within the space defined as the volume of interest. For this purpose, the starting points are the support locations defined by the optimization of the stabilizing walls. These points, abstracted as possible support locations, or the starting points of columns, will be the starting points for exploring the three-dimensional space.

4.7.2 Fields of Attraction and Repulsion

Just as the field experiment was conducted to define the ecological corridor, we need to define certain points to guide the behavior of the agents, that is, spatial fields: attractors and repellents. The attractors are based on Points of Interest extracted from the cellular automata simulation (see 4.4.7). These represent optimal microenvironments: areas validated by data with high ecological potential. The repellents, on the other hand, are the trees. Not because trees are inherently negative, but because the architectural mass must negotiate with living systems, rather than override them. Therefore, the algorithm moves the agents away from the trees while attracting them toward areas of ecological interest. This dynamic tension fosters negotiation between architecture and ecology.

4.7.3 Agent-Based Volumetric Form

Based on the defined volume of interest and the mapped ecological conditions, a three-dimensional agent-based simulation was implemented to explore potential spatial interventions. The agents were deployed from structural support points established in previous analyses.

These agents were guided by a dual system of forces: points of interest derived from cellular automata acted as attractors, while the presence of trees served as zones of repulsion—not as obstacles, but as vital biomass that had to be respected. As the agents navigated this field, their trajectories intersected, overlapped, and gradually forged a spatial pattern. What emerged was not a predefined form, but a mass informed by behavior. The volumetric field began to suggest where architecture could occupy the space and, even more importantly, where it should refrain. Thus, this mass is not configured by a formal intention, but by the negotiation between environmental conditions and structural origins.

4.7.4 From Agents to Envelopes: Voxel-Based Mass Formation

Fig 4.28 : Agents Movement for Form Finding

To materialize the emergent trajectories, the next step translated the agent paths into a spatial construct. A three-dimensional grid composed of 1.5 × 1.5 × 3 meter voxels was superimposed over the volume. Voxels whose centers were in close proximity to the agent paths were retained; others were filtered out.
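The retention test can be sketched as a distance query between voxel centres and sampled path points. The path points, grid size, and threshold distance below are illustrative assumptions; only the 1.5 × 1.5 × 3 m voxel dimensions come from the text.

```python
import math

# Sketch of the voxelisation step: a regular grid of 1.5 x 1.5 x 3 m
# voxels is tested against sampled agent-path points; only voxels whose
# centres fall within an assumed threshold distance of a path are kept.
VX, VY, VZ = 1.5, 1.5, 3.0   # voxel dimensions from the text (metres)
THRESHOLD = 2.0              # assumed proximity cutoff (metres)

path_points = [(3.0, 3.0, 1.5), (6.0, 4.5, 4.5), (9.0, 6.0, 7.5)]  # assumed

def voxel_centres(nx, ny, nz):
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                yield ((i + 0.5) * VX, (j + 0.5) * VY, (k + 0.5) * VZ)

def near_path(centre):
    return any(math.dist(centre, p) <= THRESHOLD for p in path_points)

kept = [c for c in voxel_centres(8, 6, 4) if near_path(c)]
print(len(kept), "voxels retained")
```

In the actual workflow the retained voxels are then unioned and smoothed into the continuous envelope described below.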

These selected voxels were then unified and smoothed into a continuous form—a coherent mass that functions as an architectural envelope. The geometry that emerged does not merely reflect potential occupation, but also internalizes the invisible logic of attraction and repulsion established by the surrounding ecology, in this case with the trees acting as repulsion points.

This envelope is not static. It adapts to the forest's presence, curves around patches of high ecological importance, and stretches to connect with structural supports.

4.7.5 Structural Intelligence: Reading the Mass through Stress Mapping

Once the envelope was defined, it underwent a structural analysis to assess its architectural feasibility. A Finite Element Analysis (FEA) was performed to simulate its behavior under tension, using the vertical supports inherited from the retaining wall as anchor points.

This simulation revealed a field of structural information (stress flows and force vectors) indicating how the envelope distributes the load internally. Color gradients marked the degrees of displacement, while primary stress lines traced the natural paths of compression and tension.

Thus, as with the experiment for the retaining wall, the stress lines are used to inform a potential emerging envelope. These curves become design data, indicating where material can be minimized, where porosity is structurally acceptable, and where reinforcement is essential.

Fig 4.29 : Agents Path for Form Finding

V. DESIGN PROPOSAL

5. DESIGN PROPOSAL

5.1 The System: Masterplan Development

5.2 Bio-Receptive Wall System

5.2.1 The Wall as a Remediation Agent

5.2.2 Adaptive Porosity and Plant Integration

5.2.3 Wall Construction: Robotic Fabrication

5.3 Mass Growth: Spatial Formation

5.3.1 From Agents to Envelopes: Voxel-Based Mass Formation

5.3.2 Structural Intelligence: Reading the Mass through Stress Mapping

5.3.3 Agent simulation for emergent envelope

5.4 Multi-Phase Construction Strategy: Building with Time, Data, and Earth

5.4.1 Aggregated Layers: Printing → Casting → Modular Growth

5.4.2 Ecological Dominance in Early Phases

5.5 Environmental Data as Spatial Driver: Spatial Analysis Layers

Environmental Logics for Spatial Output

This chapter marks a critical threshold in the thesis, where environmental intelligence becomes spatial matter. What began as simulations of stress, flow, and ecological potential now translates into a proto-architecture: not designed in the traditional sense, but developed through layered negotiations between performance, data, and nature.

Here, the stabilizing wall, formerly a digital map of tension lines, emerges as an anchored earth structure, 3D-printed to balance porosity and load, material logic and vegetation support. Around it, a volumetric mass grows not from arbitrary form-making, but from the interaction of ecological attractors, agent-based trajectories, and the soil's own prescription for its repair.

The voxel fields, previously carriers of ecological values, are now transformed into structural and spatial generators. They no longer merely inform; they construct. As internal data evolves, radiation, humidity, and other factors produce atmospheres, and how this materializes will be explained in this chapter.

The introductory diagram summarizes this change, from robotic soil stabilization to ecological planting and gradual spatial growth, culminating in a built environment that listens, adapts, and functions.

The System : Overview

5.2 Bio-Receptive Wall System

This chapter addresses the implementation of the retaining wall system within the ecological corridor. Conceived not as passive infrastructure, the wall becomes an active agent in the remediation process, stabilizing the terrain and supporting plant growth. It functions as a porous, structurally intelligent, and ecologically responsive scaffold, capable of supporting plants, anchoring roots, and serving as a foundation for future architecture.

5.2.1 The Wall as a Remediation Agent

The wall design stems from previous structural analyses, particularly the stress lines extracted through finite element analysis. These lines not only indicate where reinforcement is needed but also where structural lightness, understood as porosity, can be provided. The design thus interprets the data to determine where to solidify, where to penetrate, and where to support future life. This logic defines the wall not as a monolithic shell but as a dynamic interface between the terrain and the architecture. It emerges from the terrain, learns from it, and gradually becomes part of its restoration. The experiment also reveals the location of the supports, which materialize as pedestals awaiting the next layer of aggregation.

5.2.2 Adaptive Porosity and Plant Integration

An algorithm was developed to generate voids in the wall surface. These are not decorative perforations, but rather openings designed for the environment. Their size and location are determined by tension paths: areas with lower tension allow for larger voids, while areas with higher structural loads remain denser. These openings serve a dual purpose: structurally, they reduce material consumption and maintain performance. Ecologically, they allow for plant colonization and moisture penetration into the soil. The larger voids are intentionally placed to accommodate deep-rooted tree species that, over time, will stabilize the wall through their root systems. In this way, the wall and vegetation establish a reciprocal relationship: the plant roots reinforce the wall, while the wall nourishes the plants.
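The inverse relation between local tension and void size can be sketched as a simple mapping. The radius bounds and the linear falloff are design assumptions for illustration, not calibrated values from the thesis model.

```python
# Sketch of the adaptive-porosity rule: void radius shrinks as local
# tension rises, so heavily loaded zones stay dense while calm zones
# open up for planting. Radius bounds and the linear mapping are assumed.
R_MIN, R_MAX = 0.1, 0.9  # assumed void radii in metres

def void_radius(tension, t_min=0.0, t_max=1.0):
    """Map a tension value to a void radius (higher tension -> smaller void)."""
    t = (tension - t_min) / (t_max - t_min)
    t = min(max(t, 0.0), 1.0)  # clamp to the normalised range
    return R_MAX - t * (R_MAX - R_MIN)

print(void_radius(0.0))  # low tension: largest opening
print(void_radius(1.0))  # high tension: smallest opening
```

In the design, the largest resulting voids would be the ones reserved for deep-rooted tree species, the smallest for mosses and microbial life.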

5.2.3 Wall Construction: Robotic Fabrication

To materialize this hybrid element, the wall is conceived as a 3D-printed structure using the material resulting from previous experiments. Robotic fabrication allows for the precision necessary to follow the porosity logic derived from tension, while also enabling adaptability to diverse topographies. The printed layers serve both as structural support and as planting beds, forming the base upon which the architectural system emerges. The construction process becomes an act of stratified growth— robotic, material, and vegetal. The central ambition of the thesis, to dissolve the boundaries between structure and ecology, is also reflected in the construction proposal.

Pre-architecture volume

Primary Porous Patches

Secondary Porous Patches

Column casting Location

Fig 5.2 : Mass Materialisation
Fig 5.3 : Robotic Printing Illustration (Author)
Fig 5.4 : Plant Growth on Wall ( Author)
Voxelization of Agents Path

5.3.1 From Agents to Envelopes: Voxel-Based Mass Formation

To materialize the emergent trajectories, the next step translated the agent paths into a spatial construct. A three-dimensional grid composed of 1.5 × 1.5 × 3 meter voxels was superimposed over the volume. Voxels whose centers were in close proximity to the agent paths were retained; others were filtered out.

These selected voxels were then unified and smoothed into a continuous form—a coherent mass that functions as an architectural envelope. The geometry that emerged does not merely reflect potential occupation, but also internalizes the invisible logic of attraction and repulsion established by the surrounding ecology, in this case with the trees acting as repulsion points.

This envelope is not static. It adapts to the forest's presence, curves around patches of high ecological importance, and stretches to connect with structural supports.

Voxel Geometry
Fig 5.6 : Final Mass
Mesh Analysis

5.3.2 Structural Intelligence: Reading the Mass through Stress Mapping

Once the envelope was defined, it underwent a structural analysis to assess its architectural feasibility. A Finite Element Analysis (FEA) was performed to simulate its behavior under tension, using the vertical supports inherited from the retaining wall as anchor points. This simulation revealed a field of structural information (stress flows and force vectors) indicating how the envelope distributes the load internally. Color gradients marked the degrees of displacement, while primary stress lines traced the natural paths of compression and tension.

Thus, as with the experiment for the retaining wall, the stress lines are used to inform a potential emerging envelope. These curves become design data, indicating where material can be minimized, where porosity is structurally acceptable, and where reinforcement is essential.

Fig 5.10 : FEA on Envelope
5.11 : Agents Movement progression on E

5.3.3 Agent simulation for emergent envelope

Following the definition of the envelope through voxel aggregation and its subsequent structural evaluation, the next step sought to reactivate the spatial system using agents. This experiment emerged from the desire to enrich the existing voxel data by exposing it to new forces and behaviours, thus allowing emergent data to overwrite or enhance the initial parameters.

The logic of this simulation was rooted in structural intelligence. The stress lines previously extracted through Finite Element Analysis were not treated as passive outcomes, but rather as active attractor fields. Agents were deployed from the base columns—defined in the wall stabilisation experiments—and allowed to ascend along the stress trajectories, navigating around the outer shell of the envelope. These structural veins became behavioural guides for the agents, encouraging the agents to follow areas of heightened structural tension or compression.

Simultaneously, the envelope itself operated as a bounding container. Its form defined the spatial constraints within which the agents could operate. The paths generated by the agents, when viewed collectively, suggested a network of emergent circulation or spatial organisation—a kind of vascular system layered upon the architectural body.

This simulation was not purely formal. It served a deeper ecological and informational role. By intersecting these new agent paths with the existing voxelised system, the simulation produced new data layers. Voxels previously marked only by static environmental parameters—such as proximity to ecological points or pollution zones—now acquired behavioural overlays. These reflect how the architecture might be traversed, ventilated, or structurally stressed over time.

In essence, the agents operated as translators of structure into space, converting lines of force into trajectories of use. This re-animation of the voxel field allows for the architecture to evolve not just from terrain data, but from performance and behaviour. The results of this simulation will serve as inputs for future design experiments, informing the internal porosity, circulation potential, and ecological responsiveness of the envelope.

Fig 5.12 : Solar Radiation Analysis

5.4 Emergent Interior Spaces: A Reconfiguration of Data and Volume

Following the generation of a new envelope shaped by agent-based volumetric logic and stress-field validation, this chapter investigates how data previously used to shape the early design process is reprojected and updated within this evolved spatial mass. Rather than starting anew, this phase builds on inherited voxel logic, applying fresh simulations that allow environmental data to be reinterpreted through the geometry now formed.

5.4.1 Voxel Update: Environmental Parameters in the New Mass

To begin, all voxels belonging to the newly generated envelope were isolated and used as spatial units of analysis. Each voxel was reassigned environmental attributes through distinct computational simulations, carried out using a combination of algorithmic scripting and parametric tools:

Solar Radiation Analysis

Using the Ladybug plugin, radiation values were simulated based on the sun path during Jharia's peak sunlight months (March to May). The results highlighted a gradient, with the eastern face of the volume consistently receiving higher radiation levels. These values were recorded per voxel to identify which areas are most exposed to radiation.

Fig 5.13 : Z Height Directionality

Z-Height Directionality

Since these voxels had already undergone spatial filtering, a new vertical analysis was conducted to evaluate their elevation in relation to ground proximity. This parameter is critical in identifying which voxels are accessible from the terrain, ideal for entries, thresholds, and ground-adjacent programmes.

Fig 5.14 : Neighbourhood Density

Neighbourhood Density

Using native algorithms, the number of adjacent voxels was computed for each unit. Voxels located toward the core of the mass showed higher connectivity, while peripheral voxels exhibited lower values. This measure of "neighbourhood density" serves as a proxy for spatial enclosure, indicating where larger communal spaces or voids might naturally emerge.
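The adjacency count can be sketched as a 6-connected neighbour query over the voxel set. The tiny voxel cluster below is illustrative; the real computation runs over the full envelope.

```python
# Sketch of the neighbourhood-density measure: each voxel's connectivity
# is the count of its face-adjacent (6-connected) neighbours that also
# belong to the mass. The five-voxel cluster below is an assumed example.
voxels = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)}

OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
           (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def density(v):
    x, y, z = v
    return sum((x + dx, y + dy, z + dz) in voxels for dx, dy, dz in OFFSETS)

densities = {v: density(v) for v in voxels}
core = max(densities, key=densities.get)
print("most enclosed voxel:", core, "with", densities[core], "neighbours")
```

Voxels at the core of the cluster score highest, mirroring how interior voxels of the envelope read as more enclosed than peripheral ones.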

Fig 5.15 : Wind simulation

CFD Wind Simulation

A CFD (Computational Fluid Dynamics) simulation was applied using a volumetric geometry from agent paths surrounding the envelope, simulating airflow from east to west. This allowed the assignment of internal wind exposure per voxel. Average velocities inside the mass ranged between 3–4 m/s, compared with exposure at the outer borders of up to 9 m/s. This informed the potential for natural ventilation in specific zones.

Fig 5.12 : Daylight Penetration

Daylight Penetration

A custom ray-tracing simulation based on sky occlusion logic was run. Each voxel was subdivided into points, and daylight rays were cast to identify how many of these received direct light. Voxels were scored based on this exposure, offering insight into zones of natural brightness versus deeper interior cores.

Each data stream was normalised from 0 to 1 to ensure compatibility and cross-comparison in later analysis phases.
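The rescaling step is a standard min–max transform, sketched below. The sample radiation values are illustrative assumptions; the same function would be applied to each of the five data streams.

```python
# Sketch of the per-layer normalisation: each environmental data stream
# (radiation, height, density, wind, daylight) is rescaled to 0-1 so the
# layers become comparable for clustering. Sample values are assumed.
def normalise(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # a flat layer carries no signal
    return [(v - lo) / (hi - lo) for v in values]

radiation = [420.0, 610.0, 880.0, 1050.0]  # assumed per-voxel values
print(normalise(radiation))
```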

Fig 5.13 : Clustering of spaces

5.4.2 Gaussian Clustering: A Probabilistic View of Space

With five key data layers assigned to each voxel, the next step was to discover latent spatial patterns. Instead of relying on rigid predefined zones, the process embraced emergent categorisation using Gaussian Mixture Modelling (GMM). Unlike hard k-means clustering, this unsupervised method allows data points (voxels) to belong to multiple groups with probabilistic weightings.

This flexibility is essential in a project of this nature, where environmental behaviours do not conform to hard boundaries. By using GMM, the design methodology remains fluid and adaptable, revealing nuanced regions within the mass that share similar environmental conditions, yet retain overlap and ambiguity, a more accurate reflection of ecological and architectural realities.
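The probabilistic membership at the heart of GMM can be illustrated in one dimension. The two fixed components below are assumptions standing in for a fitted mixture; in practice a library routine (for example scikit-learn's GaussianMixture) would fit the means, variances, and weights over all five normalised layers.

```python
import math

# Minimal illustration of GMM soft assignment: each voxel value receives
# a probabilistic membership (responsibility) in every component, rather
# than a single hard label. The two components are fixed assumptions.
components = [
    {"weight": 0.5, "mean": 0.2, "std": 0.1},  # e.g. dim, enclosed voxels
    {"weight": 0.5, "mean": 0.8, "std": 0.1},  # e.g. bright, exposed voxels
]

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def responsibilities(x):
    likelihoods = [c["weight"] * gaussian_pdf(x, c["mean"], c["std"])
                   for c in components]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

print(responsibilities(0.5))   # ambiguous voxel: split membership
print(responsibilities(0.15))  # clearly belongs to the first component
```

A voxel midway between the two conditions retains partial membership in both clusters, which is exactly the overlap and ambiguity the text argues a hard partition would erase.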

5.4.3 From Clusters to Spatial Definition

Once clustered, each probabilistic group of voxels revealed distinct characters. These were not merely analytical results but became spatial prompts. To refine them further, a second layer of clustering was applied—this time spatially, based on each voxel’s XYZ coordinates.

Through this two-tiered clustering process, emergent interior spaces began to take shape. Groups of voxels with shared environmental properties and spatial adjacency were aggregated, forming loosely bounded volumes, each representing a new type of interior condition. These conditions, defined through data and not form-first logic, offered programmatic potential grounded in light, air, proximity, and ecological connectivity.

This process marks a reversal of typical architectural design. Here, form does not dictate function; rather, clusters of conditions determine potential forms. Interiority becomes an outcome of environmental logic, not its container.

Fig 5.14 : Agent movement on Geometry

5.4.1 Layered Aggregation: Printing, Casting, and Modular Growth

The construction process begins with the material articulation of the stabilizing base: the retaining wall, previously optimized for ecological porosity and structural behavior. This first layer is robotically printed, using the earth and clay composite derived from the materials research, and acts as ecological scaffolding and foundation anchor. The printed layer follows the trajectories of the stress lines and the logic of the voids, enabling plant colonization and soil retention from the outset.

Five years into the remediation process, with the terrain still considered too unstable for safe human intervention, a second robotic layer is deployed. This printed aggregation emerges from the previous structural wall and rises, connecting columns and patches in an architectural transition. The trajectories of the agents and the voxelized groups act as digital blueprints, determining not only where to deposit material but also where it must be removed to respect the growing biomass.

In the tenth year, once ecological activity has stabilized and some parts of the terrain allow for controlled human intervention, a layer of cast concrete is introduced. This component is not merely structural but also preserves the formal logic of the previous aggregations. The cast volumes continue to follow the agent-based trajectories and the emerging spatial zones defined by the data, resulting in cavities and thickened areas.

Finally, between years fifteen and twenty, the aggregation of modular earth blocks using local construction techniques is proposed. These units follow the same aggregation rules but introduce greater spatial definition, delimiting areas to create integrated environments within the ecological envelope. At all stages, care is taken not to eliminate existing vegetation; instead, the construction adapts its layout to accommodate plant life, thus fostering architectural ecologies.

Fig 5.16 : Year 15+ visual
Fig 5.17 : Agent movement on Geometry (section 2)

5.4.2 Ecological Dominance in Early Phases

In the early stages of the proposed construction strategy, architecture gives way to ecology. These foundational phases do not prioritize the complete architectural solution; instead, they serve as ecological incubators, designed to stabilize, protect, and care for the damaged terrain.

As mentioned earlier, robots become the primary agents of action during this period. Operating in areas too unstable, toxic, or fragile for human presence, they execute a carefully planned process. Each extruded path is based on previous experiments. The logic of porosity, derived from structural stress lines and environmental simulation, directly influences the robots’ footprint. The voids are not random; the larger cavities are positioned to accommodate the main tree species, providing space for roots to take hold deep within the structure. The smaller cavities house shrubs, mosses, fungi, and microbial life, transforming the wall into a living skin.

As the land gradually regenerates and regains structural stability, the architectural mass begins to grow in parallel, but it is never displaced. Instead, it integrates into the maturing ecosystem. In cross-sections, this coevolution becomes visible: trees grow through the interior courtyards, roots envelop the structural cores, and branches pierce soils and thresholds. The result is not architecture with vegetation, but architecture as vegetation. Spaces emerge where humans and plants coexist. Niches form that house both domestic and natural functions: stabilizing roots also define thresholds; trunks become informal partitions; upper canopies allow filtered light to penetrate deeper spaces. In this sense, architecture is no longer a finished object, but an evolutionary process that always begins with ecology and never ceases to respond to it.

Fig 5.18 : Growth aggregation (section 2)
Fig 5.19 : Year 15+ visual (section 2)

5.5 Environmental Data as a Spatial Factor

This chapter re-evaluates the environmental information within the new three-dimensional spaces generated by the emerging building envelope. As in the initial stages of the design process, the objective is, once again, to transform geometry into information and use it as an active tool for decision-making. The strategy consisted of spatially subdividing the spaces obtained from the clustering data simulation and evaluating them on five key variables: radiation, height along the Z-axis, connectivity, visibility, and humidity. This new interpretation transforms the architectural mass into an environmentally sensitive system.

All data were normalized (0-1) and assigned to the voxels through gradient visualizations. This made it possible to map how conditions such as heat, airflow, and density are distributed. The interior surface of the volume ceases to be a generic void: it becomes sensitive, differentiated, and programmatically diverse.
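The normalization step can be sketched in a few lines. The following Python fragment is illustrative only, not the thesis's Grasshopper implementation, and the sample radiation values are made up:

```python
def normalize(values):
    """Min-max normalize a list of readings to the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant field: map every value to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical per-voxel radiation readings, not thesis data
radiation = [120.0, 340.0, 560.0, 890.0]
print(normalize(radiation))  # each value rescaled into [0, 1]
```

The same rescaling would be applied independently to each data layer before assigning gradient colours to the voxels.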

5.5.1 Solar Radiation

The first experiment consisted of analyzing solar radiation levels within the new building envelope. The Ladybug radiation analysis was used, configured for the months of highest solar radiation in Jharia, March to May, reflecting the current conditions of a semi-arid subtropical climate. The analysis showed that east-facing facades receive the most radiation, while interior areas maintain low levels. This relationship is essential, since in climates like Jharia’s, the goal is not to maximize solar exposure, but to control it, as excessive heat gain negatively affects interior comfort.

Importantly, this experiment offers the first indication that the evolutionary process is yielding positive results: it produces spaces protected from the heat, reinforcing the idea of an architecture that passively adapts to its climate.

5.5.2 Volume-Based Spatial Weighting

In this rating stage, each new patch to be analyzed is identified as belonging to one of the spaces generated in Chapter 5.4. Each patch is then assigned a score based on the volume of the space to which it belongs: patches integrated into larger aggregate volumes receive higher scores, while those in smaller formations receive lower scores.

This index does not describe the surface patch itself, but acts as a spatial qualifier based on the scale of the environment of which it is a part. The scoring supports later interpretation: patch groups with the highest volume scores suggest potential for shared, multifunctional, or public programs, while those in smaller volumes could accommodate more private, compact, or residual uses.
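As a sketch of this volume-based weighting, the following Python fragment assigns each patch a normalized score from the volume of its parent space. Names, data structures, and values are hypothetical; the actual workflow runs inside Grasshopper:

```python
def volume_scores(patch_to_space, space_volumes):
    """Assign each patch a 0-1 score from the volume of its parent space."""
    vols = list(space_volumes.values())
    lo, hi = min(vols), max(vols)
    span = (hi - lo) or 1.0  # guard against all-equal volumes
    return {p: (space_volumes[s] - lo) / span
            for p, s in patch_to_space.items()}

# Hypothetical patches grouped into three emergent spaces
patch_to_space = {"p1": "A", "p2": "A", "p3": "B", "p4": "C"}
space_volumes = {"A": 900.0, "B": 250.0, "C": 120.0}
scores = volume_scores(patch_to_space, space_volumes)
# patches in the largest space ("A") receive the highest score
```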

5.5.3 Spatial Connectivity

To evaluate the connectivity of each space within the larger volumetric field, the central point of each voxel group (emergent space) was extracted. These points were used to create a network graph, where connections between groups were drawn according to their spatial proximity and unobstructed paths.

The script calculated how many connections (edges) each point had with other groups, generating a connectivity index for each space. A score was then assigned to the patches based on their spatial connectivity. This analysis differs from the volumetric analysis because a space may have a large volume without being well connected to other spaces, so the two scores can diverge.

The connectivity score does not measure density, but rather relational access. Highly connected groups are better integrated into the spatial system, making them ideal for circulation nodes or shared infrastructure. Groups with low connectivity are more isolated, offering potential for quiet zones, controlled-access environments, or transition thresholds.
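The degree-counting logic can be illustrated with a minimal Python sketch. The centroids, the distance threshold, and the assumption that proximity implies an unobstructed path are all simplifications of the graph construction described above:

```python
from itertools import combinations

def connectivity_index(centroids, max_dist):
    """Count proximity edges per space: centroids closer than max_dist
    (and assumed unobstructed) are treated as connected."""
    degree = {name: 0 for name in centroids}
    for (a, pa), (b, pb) in combinations(centroids.items(), 2):
        d = sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5
        if d <= max_dist:
            degree[a] += 1
            degree[b] += 1
    return degree

# Hypothetical centroids of four emergent spaces (x, y, z)
centroids = {"S1": (0, 0, 0), "S2": (3, 0, 0),
             "S3": (0, 4, 0), "S4": (20, 20, 0)}
print(connectivity_index(centroids, max_dist=6.0))
```

Here "S4" ends up isolated (degree 0), the kind of group the text flags as a candidate for quiet zones or transition thresholds.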

5.5.4 Visibility-Based Ventilation Indicator

In this study, a full Computational Fluid Dynamics (CFD) simulation was considered but ultimately discarded due to the complexity and resolution of the geometry and its application to voxels, which made the airflow simulation computationally unstable and unreliable.

Instead, a visibility-based approach was adopted to assess the ventilation potential of indoor spaces. Using the Ladybug Percentage View component, central points were defined within each space, and rays were projected outward in all directions to measure how “open” or “closed” each point was within the mass. The resulting visibility percentage for each point was interpreted as an indicator of potential ventilation access. This assumption is based on the following logic:

Areas of high visibility tend to be better connected to external surfaces and, therefore, more accessible to airflow.

Areas of low visibility tend to be more obstructed, suggesting deeper, more isolated interiors where ventilation might be limited or stagnant.

This method provides valuable spatial insight into where air might naturally flow more easily and where pockets of stillness or isolation might form.
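A simplified two-dimensional analogue of this ray-casting logic, written in Python with hypothetical circular obstacles (the thesis applies the Ladybug component to the full 3-D mass), conveys the openness measure:

```python
import math

def openness(point, obstacles, n_rays=36, reach=10.0, step=0.5):
    """Fraction of rays from `point` that travel `reach` units without
    entering any (centre, radius) obstacle: a stand-in for the
    view-percentage logic described above."""
    clear = 0
    for i in range(n_rays):
        ang = 2 * math.pi * i / n_rays
        blocked = False
        t = step
        while t <= reach:  # march along the ray in small steps
            x = point[0] + t * math.cos(ang)
            y = point[1] + t * math.sin(ang)
            if any(math.hypot(x - cx, y - cy) < r for (cx, cy), r in obstacles):
                blocked = True
                break
            t += step
        clear += not blocked
    return clear / n_rays

# An unobstructed point versus one partially hemmed in by a hypothetical mass
print(openness((0, 0), obstacles=[]))               # fully open
print(openness((0, 0), obstacles=[((2, 0), 1.0)]))  # partially blocked
```

High openness values stand in for good ventilation access; low values flag the deeper, more obstructed interiors.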

5.5.5 Relative Humidity Estimation

To approximate microclimatic humidity without installing on-site sensors, a composite index was developed combining two key environmental variables: solar radiation and airflow intensity.

Both data layers, ventilation (X) and radiation (Y), were normalized between 0 and 1 and then weighted to reflect their impact on evaporation conditions. The resulting formula was:

R = 1 − (0.6X + 0.4Y)

Where:

X = ventilation value (from Ladybug visibility/air opening)

Y = solar radiation value

R = estimated relative humidity score (from 0 to 1)

This model assumes that:

High radiation and high airflow indicate dry conditions (lower R).

Low radiation and low airflow suggest humid conditions (higher R).

This derived metric provides an indicator of relative humidity levels across the surface areas of spaces, guiding decisions regarding plant suitability, program placement, and user comfort. Plots with high R-values are considered microclimates with greater moisture retention, potentially ideal for moisture-loving vegetation or cooler spaces for shaded community programs.

Conversely, areas with low R-values are expected to be warmer and drier, more suitable for drought-tolerant or resilient plant species, or for programs requiring thermal activation or adequate ventilation.
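The composite index reduces to a one-line function. The following Python sketch evaluates it for two illustrative patches (the sample inputs are invented, not thesis data):

```python
def humidity_index(ventilation_x, radiation_y):
    """Composite estimate R = 1 - (0.6X + 0.4Y); both inputs are
    assumed to be normalized to the 0-1 range."""
    return 1.0 - (0.6 * ventilation_x + 0.4 * radiation_y)

# Sheltered, shaded patch: low airflow, low radiation -> high R (humid)
print(round(humidity_index(0.1, 0.2), 2))  # 0.86
# Exposed patch: high airflow, high radiation -> low R (dry)
print(round(humidity_index(0.9, 0.8), 2))  # 0.14
```

The two extremes match the model's assumptions: the still, shaded patch scores as a moisture-retaining microclimate, the exposed one as warm and dry.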

5.5.6 Environmental-Based Spatial Classification

The final step involves interpreting these data layers within interior spatial surfaces. Before proposing any speculative allocation program, it is essential to interpret and classify the climatic behavior of each patch.

To this end, a custom C# script was developed to classify each voxel (previously analyzed and assigned values for radiation, volume, connectivity, ventilation, and humidity) into functionally suggestive ecological categories. Rather than prescribing their use, this approach reveals latent spatial behaviors based on environmental performance. Thus, each patch is evaluated according to five normalized parameters (from 0 to 1), using a logical branching structure that classifies each one into one of three ecological categories:

ECO-A_Human

Ventilation > 0.5

Connectivity > 0.4

Humidity < 0.55

Volume > 0.5

These spaces are considered thermally ventilated, spatially connected, and volumetrically generous, potentially suitable for human occupancy.

ECO-B_Intermediate

Humidity between 0.40 and 0.75

Ventilation between 0.30 and 0.65

Radiation < 0.75

These spaces are located in ambiguous microclimatic zones, offering potential for flexible shared uses or transitional programs.

ECO-C_Plants

Humidity > 0.70

Ventilation < 0.40

Radiation < 0.60

These areas are defined as sheltered, humid, and low-light, making them more suitable for plant colonization, shaded resting areas, or passive ecological functions. Each group is automatically assigned a label, a distinctive color, and its own identification branch in the output data tree, making the result fully traceable and visually intuitive for diagrammatic representation.

The procedure concludes with a count of each category, providing a quick overview of the spatial distribution throughout the system. This classification is not a deterministic assignment of program, but a performative lens through which the latent capabilities of the space are observed. It also demonstrates how environmental data, when methodically stratified and computationally analyzed, can become a robust framework for adaptive planning and ecological design. Instead of manually selecting spaces based on intuition, this approach allows for evidence-based speculation, where the performance of each zone suggests its potential for humans, vegetation, or hybrid coexistence.
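The branching structure described above can be sketched as a small Python function. The thresholds are taken directly from the listed categories; the dictionary input format and the "Unclassified" fallback (for patches matching no category) are assumptions of this illustration, not features documented in the thesis's C# script:

```python
def classify_patch(p):
    """Classify a patch into one of the three ecological categories,
    using the thresholds listed above; all inputs are normalized 0-1."""
    if (p["ventilation"] > 0.5 and p["connectivity"] > 0.4
            and p["humidity"] < 0.55 and p["volume"] > 0.5):
        return "ECO-A_Human"
    if (0.40 <= p["humidity"] <= 0.75 and 0.30 <= p["ventilation"] <= 0.65
            and p["radiation"] < 0.75):
        return "ECO-B_Intermediate"
    if (p["humidity"] > 0.70 and p["ventilation"] < 0.40
            and p["radiation"] < 0.60):
        return "ECO-C_Plants"
    return "Unclassified"  # assumed fallback for out-of-range patches

# Hypothetical patch values, not thesis data
patch = {"ventilation": 0.7, "connectivity": 0.6, "humidity": 0.3,
         "volume": 0.8, "radiation": 0.5}
print(classify_patch(patch))  # ECO-A_Human
```

Note that the branch order gives ECO-A priority where thresholds overlap, consistent with a sequential logical branching structure.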

5.5.7 Propagating Methodologies

This thesis is not based on a single architectural object, but on a transferable system: a data-driven methodology that can be grown from the ground up and replicated wherever urgency and opportunity converge. After defining the main intervention corridor, previous chapters identified additional retaining walls for areas of high geotechnical vulnerability and ecological relevance. These areas, owing to the instability of their slopes or their environmental fragility, required not only structural support but also an ecological prescription.

Each new retaining wall becomes a testing ground for replicating the design logic. The same sequence is implemented: agent-based simulations generate responsive envelopes, which are populated with voxels. These voxels are enriched with local environmental data (radiation, humidity, connectivity, and spatial density), allowing emergent interiors to form in response to the specific conditions of the site. What emerges is a constellation of architectures, each genetically linked through the shared methodology yet shaped by the unique morphology of its ecological data. In this way, the system remains open and adaptable. The architecture does not impose a form; it listens, calculates, and responds.

The process validates the central ambition of the thesis: to construct a data-driven design methodology. While this methodology is reproducible, adaptable, and replicable, the results will differ because each location has different information, and the proposed design negotiates with that information.

Transition Zones Vegetation Zones

Emergent Architecture into selected Patches

5.6 Emergent Spaces Appropriation

The final architectural proposal is based on an emergent spatial system, the result of the interaction between environmental data, ecological processes, structural logic, and materiality. Within this framework, the appropriation of space is not conceived as an intentional or programmatic act, but rather as a gradual process, conditioned by environmental and temporal thresholds derived from the territory’s own behavior. These thresholds are constructed from the simultaneous reading of multiple environmental parameters used for their definition. Instead of operating in isolation, these values are combined to generate fields of environmental compatibility, capable of distinguishing zones with different levels of accessibility and habitability. The system does not classify space by function, but by performance, allowing each area to be evaluated according to its capacity to accommodate different users.

This logic is made explicit in the environmental zoning model of the previous chapter, where the superposition of data defines three distinct spatial states: predominantly ecological zones, transition zones, and zones with potential for human presence. These ranges establish quantifiable thresholds that determine not only whether a space can be occupied, but who can occupy it and under what conditions, shifting the decision from intentional design to a system of continuous environmental assessment.

The project thus shifts the traditional notion of habitability, understood as stable and permanent use, toward an understanding of space as a condition. Instead of responding to a pre-existing program, space is defined by its environmental state at a given moment.

In this way, spatial appropriation is not determined by direct human decisions, but by the environment’s capacity to authorize or restrict the presence of different agents. Instead of designing spaces for specific uses, the system defines fields of possibility, in which humans, plants, and other organisms occupy the territory in differentiated ways according to their environmental performance. Architecture becomes a late consequence of data-driven processes.

5.6.1 Human–Transition–Plant Zonation Logics

The zoning of the system is structured through conditions of access defined by the combination of environmental parameters. These values do not assign uses or programmes, but allow an understanding of how compatible each space is with different agents. The territory is therefore organised not by predefined functions, but by environmental performance and by its capacity to support different forms of presence.

The plant or animal zone constitutes the first phase of occupation within the system. These areas present high relative humidity, low direct solar exposure and stable microclimatic conditions, which support root development, water retention and biological activity in the soil. Remediation occurs through phytoremediation and phytostabilisation strategies, where selected plant species absorb contaminants, stabilise soil particles and reduce the mobility of toxic agents, gradually contributing to ground stabilisation.

Within this context, architecture is not conceived as habitable space, but as ecological infrastructure that supports and sustains environmental processes. Elements such as bio-receptive walls, porous surfaces and adaptive structural systems act as physical supports for plant life, assisting moisture retention, root growth and soil stabilisation. Architecture is therefore present as a discreet system whose primary role is to enable remediation, rather than to occupy the site.

The transition zone corresponds to areas where environmental conditions have partially improved, but are not yet stable enough for continuous human occupation. These spaces allow limited and temporary presence, mainly associated with movement, observation or maintenance, without consolidating permanent use.

The human zone is defined by the possibility of presence, rather than by a predefined use. When conditions of ventilation, radiation, humidity, connectivity and volume reach compatible values, the space becomes habitable for humans without requiring intensive occupation. Human presence may relate to activities such as environmental observation, research, maintenance, transit or temporary stays, allowing a direct relationship with the recovering landscape. This occupation remains subject to review, depending on the overall balance of the environment and its evolution over time.

Overall, this zoning establishes a logic of gradual appropriation, where space is no longer understood as a neutral container but as an active medium. Rather than imposing fixed limits, the system articulates spatial states capable of change, continuously adjusting to environmental conditions and territorial evolution.

5.6.2 Adaptive Appropriation of Ecologically Formed Spaces

The spaces are characterised by adaptive appropriation, as they are formed through ecological processes and continuous environmental evaluation. They are configured as open spatial structures, whose relevance lies not in a final form, but in their ability to respond to environmental transformations over time.

Human presence within these spaces is defined by conditions of environmental compatibility, such as adequate levels of humidity, solar radiation, ground stability and physical accessibility. Occupation occurs when these conditions allow a safe and nondisruptive interaction with the ecological system. Presence is therefore understood as temporary and situated, subject to change according to the behaviour of the territory and its environmental cycles.

This logic produces spaces that do not aim for comfort or domesticity, but for a conscious interaction with the environment. Architecture does not appear as a protected interior or a place to remain, but as a set of minimal spatial devices. Elevated walkways avoid direct contact with regenerating ground, punctual platforms allow brief pauses without disturbing the soil, paths guide movement without fixing it, and structural supports act as points for observation or temporary intervention. These architectural elements do not seek visual dominance; they function as interfaces between the human body and a landscape undergoing recovery.

Adaptive appropriation also includes the possibility of withdrawal. When environmental conditions change, for example due to increased humidity, vegetation recolonisation or the need for soil regeneration, space can return to a transitional or ecological condition without representing a failure of the project. This reversibility is part of the spatial logic and reinforces the idea that architecture does not fix the territory, but continuously adapts to it.

In this sense, architecture establishes the minimum conditions for certain spatial relations to occur when the environment allows them. Appropriation becomes a negotiated process between body, space and environment, where design accepts uncertainty and change as inherent qualities. This mode of occupation proposes a different relationship with space. Rather than colonising, domesticating or adapting the environment to the human body, it is the place and its conditions that determine when, how and for how long presence is possible. The result is an architecture in which the human is never the central focus, nor fully comfortable, but instead learns to adapt to a territory that remains active, changing and partially inaccessible.

CONCLUSION

6.1 Research Contributions and Architectural Implications

This research demonstrates how environmental data can operate as an active design driver, enabling an emergent architectural process in which form, space and occupation are not predefined, but produced through continuous evaluation. Rather than using data to optimise a fixed solution, the project uses data to generate conditions, allowing architecture to emerge as a consequence of ecological, structural and environmental interactions.

Within this framework, sharing ecologies does not imply harmonious coexistence or permanent balance, but a continuous process of negotiation. Vegetation alters the microclimate, humidity affects material behaviour, root growth interacts with structure, and human presence adjusts to these conditions rather than controlling them. Architecture is therefore no longer understood as a boundary between the natural and the artificial, but as part of a continuously evolving ecological continuum, shaped by data-driven feedback over time.

Human presence within this system is necessarily situated and conscious. The human does not occupy space as the primary user, but as one agent among others, adapting to existing conditions and shifting from a position of control towards one of observation, care and adjustment. Because spatial conditions are continuously evaluated through environmental data, certain spaces may be accessible at one moment and unavailable at another. Areas that allow access today may become inaccessible tomorrow due to vegetation growth, increased humidity or the need for soil regeneration. Rather than representing a limitation, this highlights the living and responsive character of the system and shifts architectural value away from permanence towards the ability to coexist with changing processes.

In highly altered contexts, such as post-mining landscapes, where human occupation has historically been intensive and extractive, the project proposes an alternative model of architectural engagement. Architecture does not define when or how space is occupied. Instead, the territory itself, through its ecological and environmental cycles interpreted via data, authorises, limits or suspends occupation. Human presence is therefore understood as temporary and non-sedentary, guided by the behaviour of the site rather than by the intention to dominate it. In this sense, architecture becomes the result of an emergent process, shaped by data, environment and time, rather than a fixed object imposed upon the landscape.

vii BIBLIOGRAPHY

1. The Coal Mining Life Cycle. Mining for Schools. Accessed June 25, 2025. https://miningforschools.co.za/lets-explore/coal/the-coal-mining-life-cycle.

2. Central Mine Planning & Design Institute (CMPDI). Annual Report. 2021.

3. Gupta, Shiv Kumar, and Kumar Nikhil. Ground Water Contamination in Coal Mining Areas: A Critical Review. 2016.

4. Jharkhand Pollution Control Board. Annual Report. 2022.

5. Central Institute of Mining & Fuel Research. Research Highlights. 2019.

6. Greenpeace India. Airpocalypse IV: Assessment of Air Pollution in Indian Cities. New Delhi: Greenpeace India, 2020. https://www.greenpeace.org/india/en/story/7764/airpocalypse-iv-assessment-of-airpollution-in-indian-cities/.

7. United Nations Framework Convention on Climate Change (UNFCCC). The Paris Agreement. 2015. https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement.

8. NITI Aayog. India’s Updated Nationally Determined Contributions (NDCs). New Delhi: Government of India, 2021. https://www.niti.gov.in.

9. IEASRJ; Times of India. [No additional bibliographic details provided.]

10. Riyas, Moidu Jameela, Tajdarul Hassan Syed, Hrishikesh Kumar, and Claudia Kuenzer. “Detecting and Analyzing the Evolution of Subsidence Due to Coal Fires in Jharia Coalfield, India Using Sentinel-1 SAR Data.” Remote Sensing 13, no. 8 (2021): Article 1521. https://doi.org/10.3390/rs13081521.

11. IEASRJ; Chapman University. “Underground Burning of Jharia Coal Mine (India) and Associated…”. [No additional bibliographic details provided.]

12. Global Bihari. “Considerable Reduction in Surface Fire Area in Jharia Claims Coal Ministry.” Accessed June 2025. https://globalbihari.com/considerable-reduction-in-surface-fire-area-in-jharia-claims-coalministry/.

13. Ministry of Coal, Government of India. “Jharia Master Plan: Coal Ministry Efforts Bring Down Surface Fire Identified from 77 to 27 Sites.” Press Information Bureau press release, September 25, 2023. Accessed June 25, 2025. https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=1960543.

14. Selvi, V. A., et al. “Impact of Coal Industrial Effluent on Quality of Damodar River Water.” Indian Journal of Environmental Protection 32, no. 1 (January 2012): 58–65.

15. Sharma, A. K., B. P. Singh, and C. L. Prasad. Degradation of Soil Quality Parameters Due to Coal Mining: A Case Study of Jharia Coalfield. CORE PDF. Accessed June 25, 2025. https://core.ac.uk/download/pdf/188610081.pdf.

16. Saini, Varinder, R. P. Gupta, and Manoj K. Arora. “Environmental Issues of Coal Mining – A Case Study of Jharia Coal-Field, India.” Energy Procedia 90 (2016): 634–641. https://www.researchgate.net/publication/291102685_Environmental_issues_of_coal_mining_-_A_case_study_of_Jharia_coal-field_India.

17. Government of India. Jharia Action Plan 2009. Ministry of Coal.

18. Malkhandi, Mita. “Displacement and Socio-Economic Plight of Tribal Population in Jharkhand with Special Reference to Jharia Coal Belt.” International Research Journal of Management Sociology & Humanity 9, no. 2 (2018): 96–105.

19. The Dark Earth: Coal Mining and Tribal Lives of Jharkhand. YouTube video, 12:34. June 22, 2024. https://www.youtube.com/watch?v=u1I2PFpHaYE.

20. “Tribes of Jharkhand.” Uploaded by Daphneusms. Scribd. Accessed June 25, 2025. https://www.scribd.com/doc/139703199/Tribes-of-jharkhand.

21. Gautam, Avinash. Tribal Housing: A Case Study of Tribes in Jharkhand. M.Arch. thesis, Kansas State University, 2008.

22. Dutta, Pallabi, and Md. Mustafizur Rahman. “Learning from the Root – Integrating Tradition into Architecture towards a Self-Subsistent Munda Community.” Conference paper, Khulna University Studies, Shahjalal University of Science and Technology, November 2022.

23. Krümmelbein, Julia, et al. “A History of Lignite Mining and Reclamation in Lusatia.” Canadian Journal of Soil Science 92, no. 1 (2012): 53–66.

24. Lausitz & Central German Mining Company (LMBV). Mine Rehabilitation in Germany: Example LMBV. Senftenberg, 2023.

25. Ram, L. C., and R. E. Masto. “Fly Ash for Soil Amelioration.” Earth-Science Reviews 128 (2014): 52–74.

26. Jiang, Yue, et al. “Mitigating Land Subsidence by Fly-Ash Backfilling.” Polish Journal of Environmental Studies 30, no. 1 (2021): 655–661.

27. Mishra, D. P., and S. K. Das. “Physico-Chemical Properties of Talcher Fly Ash for Stowing.” Materials Characterization 61, no. 11 (2010): 1252–1259.

28. Siachoono, Stanford M. “Land Reclamation in Haller Park.” International Journal of Biodiversity and Conservation 2, no. 2 (2010): 19–25.

29. Trippe, Kristen E., et al. “Phytostabilisation of Acid Tailings with Biochar and Microbial Inoculum.” Applied Soil Ecology 165 (2021): 103962.

30. Coal Story. https://88guru.com/library/chemistry/coal-story.

31. How coal is made? https://www.thedailyeco.com/how-is-coal-made-877.html.

32. How coal mining works. https://bkvenergy.com/learning-center/how-coal-mining-works/.

33. The coal mining life cycle. https://miningforschools.co.za/lets-explore/coal/the-coal-mining-life-cycle.

34. Singh, Abhay Kumar, G. C. Mondal, Suresh Kumar, T. B. Singh, B. K. Tewary, and A. Sinha. “Major Ion Chemistry, Weathering Processes and Water Quality Assessment in Upper Catchment of Damodar River Basin, India.” Environmental Geology 54, no. 5 (2008): 745–58; Singh, Vishal Kumar. The Burning City: A Photographic Documentary on Jharia. India, n.d.

35. Global Energy Monitor. Global Coal Mine Tracker. 2025 release. Accessed September 17, 2025. https://globalenergymonitor.org/projects/global-coal-mine-tracker/

36. Saini, V. “Environmental Impact Studies in Coalfields in India: A Case Study of the Jharia Coalfield.” Renewable and Sustainable Energy Reviews (2016). https://www.sciencedirect.com/science/article/abs/pii/S1364032115010424

37. Chatterjee, R. S., Shailaja Thapa, K. B. Singh, G. Varunakumar, and E. V. R. Raju. “Detecting, Mapping and Monitoring of Land Subsidence in Jharia Coalfield, Jharkhand, India by Spaceborne Differential Interferometric SAR, GPS and Precision Levelling Techniques.” 2015. Accessed September 17, 2025. https://www.researchgate.net/figure/Combined-landsubsidence-areas-in-Jharia-Coalfield-as-obtained-from-C-and-L-band-DInSAR_fig5_282245541

38. Habib, Md Tariq, Saarthak Khurana, and Vivek Sen. Just Energy Transition: Economic Implications for Jharkhand. Climate Policy Initiative, December 28, 2023. Accessed September 17, 2025. https://www.climatepolicyinitiative.org/just-energy-transition-economic-implications-for-jharkhand/

viii APPENDIX

C# Script for the Clustering Algorithm

1 Script Overview

This script runs in the Grasshopper environment and automatically selects buildable plots from terrain patch data, then clusters them into settlement-scale units. Using each patch’s risk score (Score) and slope angle (Angle), the script classifies patches into a Red Zone (unsuitable for building) and a Yellow Zone (buildable). Adjacent Yellow Zone patches are grouped to form Foundation units, which are subsequently aggregated into a small number of larger clusters using the K-Medoids algorithm. For each cluster, the potential household count is derived from the area of nearby agricultural terraces (Terrace Area). Final plot selection is performed by weighting risk score and proximity to the cluster medoid, producing a priority-ranked set of building sites. Outputs include selected plots, cluster geometry and all associated metadata, formatted for Grasshopper data trees.

2 Code Structure

The code consists of a main RunScript method that orchestrates the workflow, together with supporting classes for data handling and the core algorithms.

RunScript method

Input validation: checks for missing fields and malformed inputs.

Filtering and classification: filters patches to criteria and labels them as Red or Yellow Zones.

Adjacency grouping: merges adjacent Yellow Zone patches into Foundation groups.

K-Medoids clustering: aggregates Foundation groups into K clusters.

Household estimation and final site selection: assigns nearby Terrace Area to clusters, computes potential household numbers, and selects final building plots via a weighted evaluation of risk score and proximity to the medoid.

Outputs: emits selected sites, cluster information and diagnostic data to Grasshopper.
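The final weighted evaluation can be illustrated with a short Python sketch. The 0.6/0.4 weights, the input format, and the plot names are hypothetical stand-ins, not the script's actual parameters:

```python
def rank_plots(candidates, w_risk=0.6, w_prox=0.4):
    """Rank candidate plots by a weighted blend of (low) risk and
    proximity to the cluster medoid. Input: (name, risk, dist) tuples,
    where risk is normalized 0-1 and dist is distance to the medoid."""
    max_d = max(d for _, _, d in candidates) or 1.0
    scored = [(name, w_risk * (1.0 - risk) + w_prox * (1.0 - d / max_d))
              for name, risk, d in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical Foundations: (name, risk score, distance to medoid)
plots = [("F1", 0.2, 5.0), ("F2", 0.6, 1.0), ("F3", 0.3, 2.0)]
ranked = rank_plots(plots)
print(ranked)  # plots that are both safe and central rank first
```

Here "F3" wins the trade-off: moderately low risk combined with closeness to the medoid outscores the safest but most peripheral plot.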

Supporting classes and functions

PatchInfo, FoundationInfo: structured containers for single patches and grouped Foundations, storing geometry, Score, Angle and zone labels.

KMedoids, Cluster: implementation of K-Medoids clustering, ensuring medoids correspond to actual Foundation groups rather than abstract centroids.

GroupAdjacentPatches(): adjacency-driven search that builds Foundation groups from neighbouring Yellow Zone patches.

SelectTopFoundations(): ranks candidate Foundations within each cluster using a weighted combination of risk score and medoid proximity, returning the highest-priority building plots.
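For reference, the logic of the KMedoids class can be sketched in Python. This compact version uses deterministic first-k initialization and Euclidean distance; it is a simplification, not a transcription, of the C# implementation:

```python
def k_medoids(points, k, iters=50):
    """Minimal K-Medoids sketch: cluster centres are always actual input
    points, mirroring the constraint that medoids correspond to real
    Foundation groups rather than abstract centroids."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    medoids = list(points[:k])  # deterministic init for reproducibility
    clusters = {}
    for _ in range(iters):
        # Assignment step: each point joins its nearest medoid.
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: dist(p, m))].append(p)
        # Update step: the new medoid of each cluster is the member
        # minimizing the total distance to all other members.
        new_medoids = [min(members, key=lambda c: sum(dist(c, q) for q in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return medoids, clusters

# Two well-separated groups of hypothetical Foundation centroids (x, y)
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
medoids, clusters = k_medoids(pts, k=2)
print(sorted(medoids))
```

Because each medoid is a member of its own cluster, no cluster can empty out, and the loop terminates as soon as the medoid set stabilizes.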

using System;
using System.Collections;
using System.Collections.Generic;
using Rhino;
using Rhino.Geometry;
using Grasshopper;
using Grasshopper.Kernel;
using Grasshopper.Kernel.Data;
using Grasshopper.Kernel.Types;
using System.Drawing;
using System.Linq;
using Rhino.Geometry.Intersect;

/// <summary>

/// This class will be instantiated on demand by the Script component. /// </summary> public class Script_Instance : GH_ScriptInstance { #region Utility functions

/// <summary>Print a String to the [Out] Parameter of the Script component.</summary> /// <param name=”text”>String to print.</param> private void Print(string text) { /* Implementation hidden. */ } /// <summary>Print a formatted String to the [Out] Parameter of the Script component.</summary> /// <param name=”format”>String format.</param> /// <param name=”args”>Formatting parameters.</param> private void Print(string format, params object[] args) { /* Implementation hidden. */ } /// <summary>Print useful information about an object instance to the [Out] Parameter of the Script component. </summary>

/// <param name=”obj”>Object instance to parse.</param> private void Reflect(object obj) { /* Implementation hidden. */ } /// <summary>Print the signatures of all the overloads of a specific method to the [Out] Parameter of the Script component. </summary>

/// <param name=”obj”>Object instance to parse.</param> private void Reflect(object obj, string method_name) { /* Implementation hidden. */ } #endregion

#region Members

/// <summary>Gets the current Rhino document.</summary> private readonly RhinoDoc RhinoDocument; /// <summary>Gets the Grasshopper document that owns this script.</summary> private readonly GH_Document GrasshopperDocument; /// <summary>Gets the Grasshopper script component that owns this script.</summary> private readonly IGH_Component Component;

/// <summary>

/// Gets the current iteration count. The first call to RunScript() is associated with Iteration==0. /// Any subsequent call within the same solution will increment the Iteration count. /// </summary> private readonly int Iteration; #endregion

/// <summary>

/// This procedure contains the user code. Input parameters are provided as regular arguments, /// Output parameters as ref arguments. You don’t have to assign output parameters, /// they will have a default value.

/// </summary>
private void RunScript(
    List<Brep> Patches, List<double> Scores, List<double> Angles, List<Curve> TerraceAreas,
    double ZTolerance, int MinGroupSize, int MaxGroupSize, int ClusterCount,
    double SelectionWeight, int RandomSeed, double AreaPerHousehold,
    ref object RedZonePatches, ref object YellowZonePatches,
    ref object RetainingWallGroups, ref object RetainingWallIndices,
    ref object FoundationGroups, ref object FoundationGroupIndices,
    ref object ClusteredFoundations, ref object OwnedTerraceAreas,
    ref object SelectedBuildings, ref object SelectedBuildingIndices,
    ref object ClusterCenters, ref object Log)

{ // --- 1. Input Validation and Initialization ---

// This section checks if the provided inputs are valid and initializes data structures.

var log = new List<string>();

log.Add("=== Input Data Validation ===");
log.Add(string.Format("Input Patch Count: {0}", Patches != null ? Patches.Count : 0));
log.Add(string.Format("Input Score Count: {0}", Scores != null ? Scores.Count : 0));
log.Add(string.Format("Input Angle Count: {0}", Angles != null ? Angles.Count : 0));
log.Add(string.Format("Input Terrace Area Count: {0}", TerraceAreas != null ? TerraceAreas.Count : 0));
log.Add(string.Format("Target Cluster Count: {0}", ClusterCount));
log.Add(string.Format("Area per Household: {0}", AreaPerHousehold));
log.Add(string.Format("Min Group Size: {0}", MinGroupSize));
log.Add(string.Format("Max Group Size: {0}", MaxGroupSize));
log.Add(string.Format("Selection Weight (Score={0:P0} / Proximity={1:P0})", SelectionWeight, 1 - SelectionWeight));
log.Add(string.Format("Z-Value Tolerance: {0}", ZTolerance));
log.Add(string.Format("Random Seed: {0}", RandomSeed));

// Initialize outputs to clear any data from previous runs. RedZonePatches = new List<Brep>(); YellowZonePatches = new List<Brep>(); RetainingWallGroups = new DataTree<Brep>(); RetainingWallIndices = new DataTree<int>(); FoundationGroups = new DataTree<Brep>(); FoundationGroupIndices = new DataTree<int>(); ClusteredFoundations = new DataTree<Brep>(); SelectedBuildings = new DataTree<Brep>(); SelectedBuildingIndices = new DataTree<int>(); ClusterCenters = new List<Point3d>(); OwnedTerraceAreas = new DataTree<Brep>();

// Abort if essential data is missing. if (Patches == null || Scores == null || Angles == null || Patches.Count == 0)

{ log.Add(“ERROR: Input data is null or empty.”); Log = log; return;

}

// Abort if input list counts do not match. if (Patches.Count != Scores.Count || Patches.Count != Angles.Count)

{ log.Add(string.Format(“ERROR: The count of Patches ({0}), Scores ({1}), and Angles ({2}) do not match.”, Patches.Count, Scores.Count, Angles.Count)); Log = log; return;

}

// Abort if group size parameters are invalid. if (MinGroupSize > MaxGroupSize)

{ log.Add(“ERROR: MinGroupSize cannot be greater than MaxGroupSize.”); Log = log; return; }

// Combine all inputs into a single list of 'PatchInfo' objects for easier management.
var allPatches = new List<PatchInfo>();
for (int i = 0; i < Patches.Count; i++)
{
    allPatches.Add(new PatchInfo(i, Patches[i], Scores[i], Angles[i]));
}

// --- 2. Filter, Classify, and Analyze ---
// Filter for patches suitable for building (angle >= 10 degrees).
var buildablePatches = allPatches.Where(p => p.Angle >= 10.0).ToList();
log.Add(string.Format("Buildable patches with angle >= 10 degrees: {0}", buildablePatches.Count));

// Classify buildable patches into Red (high score) and Yellow (medium score) zones. var redPatches = buildablePatches.Where(p => p.Zone == ZoneType.Red).ToList(); var yellowPatches = buildablePatches.Where(p => p.Zone == ZoneType.Yellow).ToList(); log.Add(string.Format(“Red Zone Patches: {0}, Yellow Zone Patches: {1}”, redPatches.Count, yellowPatches.Count));

// Determine which patches are adjacent to each other for both zones. var redAdjacency = BuildAdjacencyList(redPatches); var yellowAdjacency = BuildAdjacencyList(yellowPatches); log.Add(“Adjacency analysis for Red/Yellow Zones complete.”);

// --- 3. Grouping ---

// Group adjacent patches into ‘retaining walls’ (Red Zone) and ‘foundations’ (Yellow Zone). var retainingWalls = GroupAdjacentPatches(redPatches, redAdjacency, MinGroupSize, MaxGroupSize, ZTolerance);

log.Add(string.Format(“{0} retaining wall groups created in Red Zone.”, retainingWalls.Count)); var foundations = GroupAdjacentPatches(yellowPatches, yellowAdjacency, MinGroupSize, MaxGroupSize, ZTolerance);

log.Add(string.Format(“{0} foundation groups created in Yellow Zone.”, foundations.Count));

// --- 4. K-Medoids Clustering ---

// Convert the foundation groups into ‘FoundationInfo’ objects for clustering. var foundationInfos = foundations.Select((group, index) => new FoundationInfo(index, group)). ToList();

// Check if there’s enough data to perform clustering. if (foundationInfos.Count == 0 || ClusterCount <= 0 || foundationInfos.Count < ClusterCount) {

log.Add(“WARNING: Not enough foundation groups to perform clustering.”); RedZonePatches = redPatches.Select(p => p.Geometry).ToList(); YellowZonePatches = yellowPatches.Select(p => p.Geometry).ToList(); RetainingWallGroups = ConvertToDataTree(retainingWalls); RetainingWallIndices = ConvertToDataTree(retainingWalls, true); FoundationGroups = ConvertToDataTree(foundations); FoundationGroupIndices = ConvertToDataTree(foundations, true); Log = log; return;

}

// Perform K-Medoids clustering to group foundations into settlements. var kmedoids = new KMedoids(foundationInfos, ClusterCount, RandomSeed); kmedoids.Run(); log.Add(string.Format(“Clustering of foundation groups into {0} clusters is complete.”, ClusterCount));

// --- 5. Calculate Households per Cluster based on Terrace Area ---
// This section assigns nearby terrace areas to each cluster and calculates how many
// households can be supported based on the total area.
var clusterTerraceAreas = new Dictionary<int, double>();
var ownedTerraces = new Dictionary<int, List<Brep>>();
for (int i = 0; i < kmedoids.Clusters.Count; i++)
{
    clusterTerraceAreas[i] = 0;
    ownedTerraces[i] = new List<Brep>();
}

if(TerraceAreas != null)

{ // Convert terrace curves to surfaces (Breps). var validTerraceBreps = new List<Brep>(); foreach(var curve in TerraceAreas)

{ if(curve == null || !curve.IsClosed)

{ log.Add(“WARNING: An input terrace curve was not closed and will be ignored.”); continue;

}

Brep[] breps = Brep.CreatePlanarBreps(curve, Rhino.RhinoDoc.ActiveDoc.ModelAbsoluteTolerance); if(breps != null && breps.Length > 0)

{ validTerraceBreps.Add(breps[0]); } else

{ log.Add(“WARNING: A terrace curve could not be converted to a surface and will be ignored.”);

}

}

// Assign each terrace area to the nearest cluster. foreach(var area in validTerraceBreps)

{ var areaProperties = AreaMassProperties.Compute(area); if(areaProperties == null) continue; Point3d areaCentroid = areaProperties.Centroid; double minDistance = double.MaxValue;

int closestClusterId = -1; for(int i = 0; i < kmedoids.Clusters.Count; i++)

{ var cluster = kmedoids.Clusters[i]; if(cluster.Medoid == null) continue; double dist = areaCentroid.DistanceTo(cluster.Medoid.Center); if(dist < minDistance) { minDistance = dist; closestClusterId = cluster.Id; }

}

if(closestClusterId != -1)

{ clusterTerraceAreas[closestClusterId] += areaProperties.Area; ownedTerraces[closestClusterId].Add(area); } } }

// --- 6. Finalize Output Data ---

// This section prepares the clustered and selected data for output. var clusteredFoundationsTree = new DataTree<Brep>(); var selectedBuildingsTree = new DataTree<Brep>(); var selectedBuildingIndicesTree = new DataTree<int>(); var clusterCenterPoints = new List<Point3d>(); var ownedTerracesTree = new DataTree<Brep>();

for (int i = 0; i < kmedoids.Clusters.Count; i++) { var cluster = kmedoids.Clusters[i]; if (cluster.Members.Count == 0) continue; var path = new GH_Path(i);

// Add all foundation geometries in the cluster to the output tree. var allGeometriesInCluster = cluster.Members.SelectMany(f => f.Patches.Select(p => p.Geometry)); clusteredFoundationsTree.AddRange(allGeometriesInCluster, path);

// Calculate the number of households this cluster can support. int householdsForThisCluster = 0; if(AreaPerHousehold > 0)

{ householdsForThisCluster = (int) Math.Floor(clusterTerraceAreas[cluster.Id] / AreaPerHousehold);

}

// Select the top foundations based on the household count and selection weight. var topFoundations = SelectTopFoundations(cluster, householdsForThisCluster, SelectionWeight); var selectedGeometries = topFoundations.SelectMany(f => f.Patches.Select(p => p.Geometry)); selectedBuildingsTree.AddRange(selectedGeometries, path); var selectedIndices = topFoundations.SelectMany(f => f.Patches.Select(p => p.OriginalIndex)); selectedBuildingIndicesTree.AddRange(selectedIndices, path); // Store the center point (medoid) of the cluster. if(cluster.Medoid != null)

{ clusterCenterPoints.Add(cluster.Medoid.Center); } // Store the terrace areas owned by this cluster. ownedTerracesTree.AddRange(ownedTerraces[cluster.Id], path); log.Add(string.Format(“ - Cluster {0}: Contains {1} foundation groups. Terrace Area: {2:F0} m^2. Calculated Households: {3}. Selected {4} for building.”, i, cluster.Members.Count, clusterTerraceAreas[cluster.Id], householdsForThisCluster, topFoundations.Count)); }

// --- 7. Set Outputs ---
// Assign all processed data to the component's output parameters.
RedZonePatches = redPatches.Select(p => p.Geometry).ToList();
YellowZonePatches = yellowPatches.Select(p => p.Geometry).ToList();
RetainingWallGroups = ConvertToDataTree(retainingWalls);
RetainingWallIndices = ConvertToDataTree(retainingWalls, true);
FoundationGroups = ConvertToDataTree(foundations);
FoundationGroupIndices = ConvertToDataTree(foundations, true);
ClusteredFoundations = clusteredFoundationsTree;
SelectedBuildings = selectedBuildingsTree;
SelectedBuildingIndices = selectedBuildingIndicesTree;
ClusterCenters = clusterCenterPoints;
OwnedTerraceAreas = ownedTerracesTree;

log.Add("=== Final Result Summary ===");
log.Add(string.Format("Red Zone Patches: {0}", redPatches.Count));
log.Add(string.Format("Yellow Zone Patches: {0}", yellowPatches.Count));
log.Add(string.Format("Retaining Wall Groups: {0}", retainingWalls.Count));
log.Add(string.Format("Foundation Groups: {0}", foundations.Count));
log.Add(string.Format("Clusters: {0}", kmedoids.Clusters.Count));
Log = log;
}

// <Custom additional code>

// Defines the classification zones for patches based on their score. public enum ZoneType { Red, Yellow, Green }

// A helper class to store all relevant information about a single patch. public class PatchInfo

{ public int OriginalIndex { get; private set; } public Brep Geometry { get; private set; } public double Score { get; private set; } public double Angle { get; private set; } public ZoneType Zone { get; private set; } public Point3d Center { get; private set; }

public PatchInfo(int index, Brep geometry, double score, double angle) { OriginalIndex = index; Geometry = geometry; Score = score; Angle = angle; Zone = ClassifyZone(score); var areaMassProperties = AreaMassProperties.Compute(geometry); Center = areaMassProperties != null ? areaMassProperties.Centroid : Point3d.Unset; } private ZoneType ClassifyZone(double score)

{ if (score >= 0.6) return ZoneType.Red; if (score >= 0.2) return ZoneType.Yellow; return ZoneType.Green; }

}

// A helper class to represent a group of patches that form a single foundation. public class FoundationInfo

{ public int Id { get; private set; } public List<PatchInfo> Patches { get; private set; } public Point3d Center { get; private set; } public double AverageScore { get; private set; }

public FoundationInfo(int id, List<PatchInfo> patches) { Id = id; Patches = patches; Point3d center = new Point3d(0, 0, 0); double totalScore = 0; foreach(var patch in patches) { center += patch.Center; totalScore += patch.Score; } Center = center / patches.Count; AverageScore = (patches.Count > 0) ? totalScore / patches.Count : 0; }

}

// --- Adjacency Analysis and Grouping Algorithms ---

// Builds a dictionary mapping each patch to a list of its adjacent neighbors. private Dictionary<int, List<int>> BuildAdjacencyList(List<PatchInfo> patches)

{ var adjacencyList = new Dictionary<int, List<int>>();

double tolerance = 0.001; foreach (var patch in patches)

{ adjacencyList[patch.OriginalIndex] = new List<int>();

} for (int i = 0; i < patches.Count; i++)

{ for (int j = i + 1; j < patches.Count; j++)

{ // Check for physical intersection between two patches. Curve[] intersectionCurves; Point3d[] intersectionPoints; bool intersection = Intersection.BrepBrep(patches[i].Geometry, patches[j].Geometry, tolerance, out intersectionCurves, out intersectionPoints); if (intersection && intersectionCurves.Length > 0) { adjacencyList[patches[i].OriginalIndex].Add(patches[j].OriginalIndex); adjacencyList[patches[j].OriginalIndex].Add(patches[i].OriginalIndex); }

}

    }
    return adjacencyList;
}

// Groups patches into connected components using a flood-fill (Breadth-First Search) approach.
private List<List<PatchInfo>> GroupAdjacentPatches(List<PatchInfo> patchesToGroup, Dictionary<int, List<int>> adjacencyList, int minGroupSize, int maxGroupSize, double zTolerance)
{
    var allGroups = new List<List<PatchInfo>>();
    var assignedPatches = new HashSet<int>();
    var patchMap = patchesToGroup.ToDictionary(p => p.OriginalIndex, p => p);
    // Sort patches to ensure a deterministic grouping order, starting from the same Z-level.
    var sortedPatches = patchesToGroup.OrderBy(p => Math.Round(p.Center.Z, 3)).ThenByDescending(p => p.Score).ToList();
    foreach (var startPatch in sortedPatches)

{ if (assignedPatches.Contains(startPatch.OriginalIndex)) { continue;

}

var currentGroup = new List<PatchInfo>(); var queue = new Queue<PatchInfo>(); queue.Enqueue(startPatch); assignedPatches.Add(startPatch.OriginalIndex); while (queue.Count > 0)

{ var currentPatch = queue.Dequeue(); currentGroup.Add(currentPatch); if (currentGroup.Count >= maxGroupSize)

{ // If the group reaches max size, add remaining items from queue and stop expanding. while(queue.Count > 0)

{ currentGroup.Add(queue.Dequeue()); } break;

} if (!adjacencyList.ContainsKey(currentPatch.OriginalIndex)) continue; var neighbors = new List<PatchInfo>(); foreach (var neighborIndex in adjacencyList[currentPatch.OriginalIndex]) { if (!assignedPatches.Contains(neighborIndex) && patchMap.ContainsKey(neighborIndex))

{ // Only add neighbors that are at a similar Z-height. if(Math.Abs(patchMap[neighborIndex].Center.Z - currentPatch.Center.Z) < zTolerance) { neighbors.Add(patchMap[neighborIndex]); } } }

foreach (var neighbor in neighbors)
{
    if (!assignedPatches.Contains(neighbor.OriginalIndex))
    {
        assignedPatches.Add(neighbor.OriginalIndex);
        queue.Enqueue(neighbor);
    }
}
}

// Only keep the group if it meets the minimum size requirement. if (currentGroup.Count >= minGroupSize)

{ if(currentGroup.Count > maxGroupSize)

{ allGroups.Add(currentGroup.GetRange(0, maxGroupSize)); } else

{ allGroups.Add(currentGroup);

}

}

} return allGroups;

}

// --- K-Medoids Clustering Algorithm ---
// Represents a single cluster, containing its central point (Medoid) and its members.
public class Cluster

{ public int Id { get; private set; } public FoundationInfo Medoid { get; set; } public List<FoundationInfo> Members { get; private set; } public Cluster(int id, FoundationInfo medoid)

{ Id = id; Medoid = medoid; Members = new List<FoundationInfo>();

} // Calculates the total distance from all members to the medoid. public double CalculateCost()

{ double cost = 0; foreach (var member in Members)

{ cost += member.Center.DistanceTo(Medoid.Center); } return cost;

}

}

// Implements the K-Medoids clustering algorithm. public class KMedoids

{ private readonly List<FoundationInfo> _foundations; private readonly int _k; private readonly int _randomSeed; public List<Cluster> Clusters { get; private set; } public KMedoids(List<FoundationInfo> foundations, int k, int randomSeed)

{ _foundations = foundations; _k = k; _randomSeed = randomSeed; Clusters = new List<Cluster>();

} public void Run(int maxIterations = 100)

{ if (_foundations.Count < _k) return; InitializeMedoids(); for (int i = 0; i < maxIterations; i++)

{ AssignToClusters(); bool changed = UpdateMedoids(); if (!changed) break; // Stop if medoids no longer change.

}

}

// Randomly selects the initial K medoids from the dataset.

private void InitializeMedoids() { var random = new Random(_randomSeed); var randomIndices = Enumerable.Range(0, _foundations.Count).OrderBy(x => random.Next()). Take(_k).ToList(); for (int i = 0; i < _k; i++) { Clusters.Add(new Cluster(i, _foundations[randomIndices[i]])); } }

// Assigns each foundation to the cluster with the nearest medoid.
private void AssignToClusters()
{
    foreach (var c in Clusters) c.Members.Clear();
    foreach (var f in _foundations)
    {
        Cluster nearest = null;
        double minDistance = double.MaxValue;
        foreach (var c in Clusters)
        {
            double d = f.Center.DistanceTo(c.Medoid.Center);
            if (d < minDistance) { minDistance = d; nearest = c; }
        }
        if (nearest != null) nearest.Members.Add(f);
    }
}

// Re-selects each cluster's medoid as the member with the lowest total
// distance to all other members; returns true if any medoid changed.
private bool UpdateMedoids()
{
    bool changed = false;
    foreach (var cluster in Clusters)
    {
        if (cluster.Members.Count == 0) continue;
        var best = cluster.Members.OrderBy(m => cluster.Members.Sum(o => o.Center.DistanceTo(m.Center))).First();
        if (best.Id != cluster.Medoid.Id) { cluster.Medoid = best; changed = true; }
    }
    return changed;
}
}

C# Script for the Planting Algorithm

1 Script Overview

This script operates within the Grasshopper environment and, using Ground data and a Plant database, automatically proposes a planting assemblage suited to the conditions of each target Surface. Environmental variables per Surface, including soil contamination, moisture retention and the presence of bedrock, are evaluated alongside plant tolerances and ecological functions such as stabilisation and detoxification. Based on this evaluation, high-suitability species are filtered, and the proportional mix of three functional guilds—Anchor, Detox (Detoxifier) and Builder— is adjusted dynamically to match local conditions. Each Surface is then partitioned into a 3 × 3 grid, selected species are allocated to cells, and colour and legend data are generated for visualisation.

2 Code Structure

The code comprises a RunScript method that executes the main logic, supported by lightweight data models and parsing utilities.

RunScript method

Execution proceeds in the following order: Data parsing: incoming raw strings are converted into lists of Plant and Ground objects.

Environmental suitability filtering: for each Surface, species that meet local constraints are shortlisted.

Plant-mix algorithm: guild proportions are determined from Surface attributes, for example slope and contamination, and a species combination is assembled accordingly.

Surface subdivision and assignment: each target area is divided into a 3 × 3 grid and the assembled mix is assigned to cells.

Outputs: subdivided patches, planting metadata and colour or legend information are emitted in Grasshopper data trees.

Supporting classes and functions

Plant class: defines species-level attributes such as Name, Synergy_Group, Strategic_Guild, and environmental tolerances.

Ground class: defines site attributes including solar exposure, slope, contamination indicators and bedrock flags.

Parsing utilities: ParsePlantData and ParseGroundData convert raw tabular strings into typed objects compatible with Grasshopper workflows.
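The two core moves described above, suitability filtering and dynamic guild proportioning, can be sketched compactly. The following Python fragment is an illustration of the logic only: the field names, tolerance comparisons, and mixing coefficients are our assumptions, not values taken from the script or the plant database:

```python
def filter_species(plants, ground):
    # Shortlist species whose tolerances meet the surface's conditions:
    # contamination within tolerance, enough moisture, and shallow roots
    # wherever bedrock is present.
    return [p for p in plants
            if p["contamination_tolerance"] >= ground["contamination"]
            and p["moisture_min"] <= ground["moisture"]
            and (not ground["bedrock"] or p["shallow_rooted"])]

def guild_mix(ground):
    # Shift guild proportions with local conditions: steeper slopes call
    # for more Anchor species, higher contamination for more Detox species;
    # Builder species take whatever share remains, then all normalise to 1.
    anchor = 0.3 + 0.4 * min(ground["slope"] / 45.0, 1.0)
    detox = 0.2 + 0.5 * ground["contamination"]
    builder = max(0.0, 1.0 - anchor - detox)
    total = anchor + detox + builder
    return {"Anchor": anchor / total, "Detox": detox / total, "Builder": builder / total}
```

On a steep, heavily contaminated surface this mix skews toward Anchor and Detox species and can drop Builders entirely, matching the dynamic adjustment the overview describes.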

using System; using System.Collections; using System.Collections.Generic; using Rhino; using Rhino.Geometry; using Grasshopper; using Grasshopper.Kernel; using Grasshopper.Kernel.Data; using Grasshopper.Kernel.Types;

using System.Drawing; using System.Linq; using Rhino.Geometry.Intersect;

/// <summary>

/// This class will be instantiated on demand by the Script component. /// </summary> public class Script_Instance : GH_ScriptInstance { #region Utility functions

/// <summary>Print a String to the [Out] Parameter of the Script component.</summary> /// <param name=”text”>String to print.</param> private void Print(string text) { /* Implementation hidden. */ } /// <summary>Print a formatted String to the [Out] Parameter of the Script component.</summary> /// <param name=”format”>String format.</param> /// <param name=”args”>Formatting parameters.</param>

private void Print(string format, params object[] args) { /* Implementation hidden. */ } /// <summary>Print useful information about an object instance to the [Out] Parameter of the Script component. </summary>

/// <param name=”obj”>Object instance to parse.</param> private void Reflect(object obj) { /* Implementation hidden. */ } /// <summary>Print the signatures of all the overloads of a specific method to the [Out] Parameter of the Script component. </summary>

/// <param name=”obj”>Object instance to parse.</param> private void Reflect(object obj, string method_name) { /* Implementation hidden. */ } #endregion

#region Members

/// <summary>Gets the current Rhino document.</summary> private readonly RhinoDoc RhinoDocument; /// <summary>Gets the Grasshopper document that owns this script.</summary> private readonly GH_Document GrasshopperDocument; /// <summary>Gets the Grasshopper script component that owns this script.</summary> private readonly IGH_Component Component; /// <summary>

/// Gets the current iteration count. The first call to RunScript() is associated with Iteration==0. /// Any subsequent call within the same solution will increment the Iteration count. /// </summary> private readonly int Iteration; #endregion

/// <summary>

/// This procedure contains the user code. Input parameters are provided as regular arguments, /// Output parameters as ref arguments. You don’t have to assign output parameters, /// they will have a default value.

/// </summary> private void RunScript(List<Brep> Patches, List<double> Scores, List<double> Angles, List<Curve> TerraceAreas, double ZTolerance, int MinGroupSize, int MaxGroupSize, int ClusterCount, double SelectionWeight, int RandomSeed, double AreaPerHousehold, ref object RedZonePatches, ref object YellowZonePatches, ref object RetainingWallGroups, ref object RetainingWallIndices , ref object FoundationGroups, ref object FoundationGroupIndices , ref object ClusteredFoundations, ref object OwnedTerraceAreas, ref object SelectedBuildings, ref object SelectedBuildingIndices, ref object ClusterCenters, ref object Log)

{ // --- 1. Input Validation and Initialization ---

// This section checks if the provided inputs are valid and initializes data structures. var log = new List<string>();

log.Add(“=== Input Data Validation ===”);

log.Add(string.Format(“Input Patch Count: {0}”, Patches != null ? Patches.Count : 0));

log.Add(string.Format(“Input Score Count: {0}”, Scores != null ? Scores.Count : 0));

log.Add(string.Format(“Input Angle Count: {0}”, Angles != null ? Angles.Count : 0));

log.Add(string.Format(“Input Terrace Area Count: {0}”, TerraceAreas != null ? TerraceAreas. Count : 0));

log.Add(string.Format(“Target Cluster Count: {0}”, ClusterCount)); log.Add(string.Format(“Area per Household: {0}”, AreaPerHousehold)); log.Add(string.Format(“Min Group Size: {0}”, MinGroupSize)); log.Add(string.Format(“Max Group Size: {0}”, MaxGroupSize)); log.Add(string.Format(“Selection Weight (Score={0:P0} / Proximity={1:P0}): “, SelectionWeight, 1 - SelectionWeight));

log.Add(string.Format(“Z-Value Tolerance: {0}”, ZTolerance)); log.Add(string.Format(“Random Seed: {0}”, RandomSeed));

// Initialize outputs to clear any data from previous runs. RedZonePatches = new List<Brep>(); YellowZonePatches = new List<Brep>(); RetainingWallGroups = new DataTree<Brep>(); RetainingWallIndices = new DataTree<int>(); FoundationGroups = new DataTree<Brep>(); FoundationGroupIndices = new DataTree<int>(); ClusteredFoundations = new DataTree<Brep>(); SelectedBuildings = new DataTree<Brep>(); SelectedBuildingIndices = new DataTree<int>(); ClusterCenters = new List<Point3d>(); OwnedTerraceAreas = new DataTree<Brep>();

// Abort if essential data is missing. if (Patches == null || Scores == null || Angles == null || Patches.Count == 0)

{ log.Add(“ERROR: Input data is null or empty.”); Log = log; return;

}

// Abort if input list counts do not match. if (Patches.Count != Scores.Count || Patches.Count != Angles.Count)

{ log.Add(string.Format(“ERROR: The count of Patches ({0}), Scores ({1}), and Angles ({2}) do not match.”, Patches.Count, Scores.Count, Angles.Count)); Log = log; return;

}

// Abort if group size parameters are invalid. if (MinGroupSize > MaxGroupSize)

{ log.Add(“ERROR: MinGroupSize cannot be greater than MaxGroupSize.”); Log = log; return; }

// Combine all inputs into a single list of ‘PatchInfo’ objects for easier management. var allPatches = new List<PatchInfo>(); for (int i = 0; i < Patches.Count; i++) { allPatches.Add(new PatchInfo(i, Patches[i], Scores[i], Angles[i])); }

// --- 2. Filter, Classify, and Analyze ---

// Filter for patches suitable for building (angle >= 10 degrees). var buildablePatches = allPatches.Where(p => p.Angle >= 10.0).ToList(); log.Add(string.Format(“Buildable patches with angle >= 10 degrees: {0}”, buildablePatches. Count));

// Classify buildable patches into Red (high score) and Yellow (medium score) zones. var redPatches = buildablePatches.Where(p => p.Zone == ZoneType.Red).ToList(); var yellowPatches = buildablePatches.Where(p => p.Zone == ZoneType.Yellow).ToList(); log.Add(string.Format(“Red Zone Patches: {0}, Yellow Zone Patches: {1}”, redPatches.Count, yellowPatches.Count));

// Determine which patches are adjacent to each other for both zones. var redAdjacency = BuildAdjacencyList(redPatches); var yellowAdjacency = BuildAdjacencyList(yellowPatches); log.Add(“Adjacency analysis for Red/Yellow Zones complete.”);

// --- 3. Grouping ---

// Group adjacent patches into ‘retaining walls’ (Red Zone) and ‘foundations’ (Yellow Zone). var retainingWalls = GroupAdjacentPatches(redPatches, redAdjacency, MinGroupSize, MaxGroupSize, ZTolerance);

log.Add(string.Format(“{0} retaining wall groups created in Red Zone.”, retainingWalls.Count)); var foundations = GroupAdjacentPatches(yellowPatches, yellowAdjacency, MinGroupSize, MaxGroupSize, ZTolerance);

log.Add(string.Format(“{0} foundation groups created in Yellow Zone.”, foundations.Count));

// --- 4. K-Medoids Clustering ---

// Convert the foundation groups into ‘FoundationInfo’ objects for clustering. var foundationInfos = foundations.Select((group, index) => new FoundationInfo(index, group)). ToList();

// Check if there’s enough data to perform clustering. if (foundationInfos.Count == 0 || ClusterCount <= 0 || foundationInfos.Count < ClusterCount) {

log.Add(“WARNING: Not enough foundation groups to perform clustering.”); RedZonePatches = redPatches.Select(p => p.Geometry).ToList(); YellowZonePatches = yellowPatches.Select(p => p.Geometry).ToList(); RetainingWallGroups = ConvertToDataTree(retainingWalls); RetainingWallIndices = ConvertToDataTree(retainingWalls, true); FoundationGroups = ConvertToDataTree(foundations); FoundationGroupIndices = ConvertToDataTree(foundations, true); Log = log; return;

}

// Perform K-Medoids clustering to group foundations into settlements. var kmedoids = new KMedoids(foundationInfos, ClusterCount, RandomSeed); kmedoids.Run(); log.Add(string.Format(“Clustering of foundation groups into {0} clusters is complete.”, ClusterCount));

// --- 5. Calculate Households per Cluster based on Terrace Area --// This section assigns nearby terrace areas to each cluster and calculates how many // households can be supported based on the total area. var clusterTerraceAreas = new Dictionary<int, double>(); var ownedTerraces = new Dictionary<int, List<Brep>>(); for(int i = 0; i < kmedoids.Clusters.Count; i++)

{ clusterTerraceAreas[i] = 0; ownedTerraces[i] = new List<Brep>();

}

if(TerraceAreas != null)

{ // Convert terrace curves to surfaces (Breps). var validTerraceBreps = new List<Brep>(); foreach(var curve in TerraceAreas)

{ if(curve == null || !curve.IsClosed)

{ log.Add(“WARNING: An input terrace curve was not closed and will be ignored.”); continue;

}

Brep[] breps = Brep.CreatePlanarBreps(curve, Rhino.RhinoDoc.ActiveDoc.ModelAbsoluteTolerance); if(breps != null && breps.Length > 0)

{ validTerraceBreps.Add(breps[0]); } else

{ log.Add(“WARNING: A terrace curve could not be converted to a surface and will be ignored.”);

}

}

// Assign each terrace area to the nearest cluster. foreach(var area in validTerraceBreps)

{ var areaProperties = AreaMassProperties.Compute(area); if(areaProperties == null) continue; Point3d areaCentroid = areaProperties.Centroid;

double minDistance = double.MaxValue; int closestClusterId = -1; for(int i = 0; i < kmedoids.Clusters.Count; i++)

{ var cluster = kmedoids.Clusters[i]; if(cluster.Medoid == null) continue; double dist = areaCentroid.DistanceTo(cluster.Medoid.Center); if(dist < minDistance)

{ minDistance = dist; closestClusterId = cluster.Id; }

}

if(closestClusterId != -1)

{ clusterTerraceAreas[closestClusterId] += areaProperties.Area; ownedTerraces[closestClusterId].Add(area); } } }

// --- 6. Finalize Output Data ---

// This section prepares the clustered and selected data for output. var clusteredFoundationsTree = new DataTree<Brep>(); var selectedBuildingsTree = new DataTree<Brep>(); var selectedBuildingIndicesTree = new DataTree<int>(); var clusterCenterPoints = new List<Point3d>(); var ownedTerracesTree = new DataTree<Brep>();

for (int i = 0; i < kmedoids.Clusters.Count; i++)

{ var cluster = kmedoids.Clusters[i];

if (cluster.Members.Count == 0) continue; var path = new GH_Path(i);

// Add all foundation geometries in the cluster to the output tree. var allGeometriesInCluster = cluster.Members.SelectMany(f => f.Patches.Select(p => p.Geometry));

clusteredFoundationsTree.AddRange(allGeometriesInCluster, path);

// Calculate the number of households this cluster can support. int householdsForThisCluster = 0; if(AreaPerHousehold > 0)

{ householdsForThisCluster = (int) Math.Floor(clusterTerraceAreas[cluster.Id] / AreaPerHousehold);

}

// Select the top foundations based on the household count and selection weight. var topFoundations = SelectTopFoundations(cluster, householdsForThisCluster, SelectionWeight); var selectedGeometries = topFoundations.SelectMany(f => f.Patches.Select(p => p.Geometry)); selectedBuildingsTree.AddRange(selectedGeometries, path); var selectedIndices = topFoundations.SelectMany(f => f.Patches.Select(p => p.OriginalIndex)); selectedBuildingIndicesTree.AddRange(selectedIndices, path); // Store the center point (medoid) of the cluster. if(cluster.Medoid != null)

{ clusterCenterPoints.Add(cluster.Medoid.Center);

}

// Store the terrace areas owned by this cluster. ownedTerracesTree.AddRange(ownedTerraces[cluster.Id], path); log.Add(string.Format(" - Cluster {0}: Contains {1} foundation groups. Terrace Area: {2:F0} m^2. Calculated Households: {3}. Selected {4} for building.", i, cluster.Members.Count, clusterTerraceAreas[cluster.Id], householdsForThisCluster, topFoundations.Count));

}

// --- 7. Set Outputs ---

// Assign all processed data to the component’s output parameters. RedZonePatches = redPatches.Select(p => p.Geometry).ToList(); YellowZonePatches = yellowPatches.Select(p => p.Geometry).ToList(); RetainingWallGroups = ConvertToDataTree(retainingWalls); RetainingWallIndices = ConvertToDataTree(retainingWalls, true); FoundationGroups = ConvertToDataTree(foundations); FoundationGroupIndices = ConvertToDataTree(foundations, true);

ClusteredFoundations = clusteredFoundationsTree; SelectedBuildings = selectedBuildingsTree; SelectedBuildingIndices = selectedBuildingIndicesTree; ClusterCenters = clusterCenterPoints; OwnedTerraceAreas = ownedTerracesTree;

log.Add("=== Final Result Summary ==="); log.Add(string.Format("Red Zone Patches: {0}", redPatches.Count)); log.Add(string.Format("Yellow Zone Patches: {0}", yellowPatches.Count)); log.Add(string.Format("Retaining Wall Groups: {0}", retainingWalls.Count)); log.Add(string.Format("Foundation Groups: {0}", foundations.Count)); log.Add(string.Format("Clusters: {0}", kmedoids.Clusters.Count));

Log = log; }

// <Custom additional code>

// Defines the classification zones for patches based on their score. public enum ZoneType { Red, Yellow, Green }

// A helper class to store all relevant information about a single patch. public class PatchInfo { public int OriginalIndex { get; private set; } public Brep Geometry { get; private set; } public double Score { get; private set; } public double Angle { get; private set; } public ZoneType Zone { get; private set; } public Point3d Center { get; private set; }

public PatchInfo(int index, Brep geometry, double score, double angle) { OriginalIndex = index; Geometry = geometry; Score = score; Angle = angle; Zone = ClassifyZone(score); var areaMassProperties = AreaMassProperties.Compute(geometry); Center = areaMassProperties != null ? areaMassProperties.Centroid : Point3d.Unset; } private ZoneType ClassifyZone(double score)

{ if (score >= 0.6) return ZoneType.Red; if (score >= 0.2) return ZoneType.Yellow; return ZoneType.Green; }

}
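The score thresholds used by `ClassifyZone` can be isolated as a tiny standalone sketch. The 0.6 and 0.2 cut-offs are taken directly from the method above; the Python form is only for illustration outside the Grasshopper environment:

```python
def classify_zone(score):
    """Map a patch suitability score to a hazard zone.

    Mirrors ClassifyZone above: >= 0.6 is Red, >= 0.2 is Yellow,
    anything lower is Green.
    """
    if score >= 0.6:
        return "Red"
    if score >= 0.2:
        return "Yellow"
    return "Green"
```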

// A helper class to represent a group of patches that form a single foundation. public class FoundationInfo { public int Id { get; private set; } public List<PatchInfo> Patches { get; private set; } public Point3d Center { get; private set; } public double AverageScore { get; private set; }

public FoundationInfo(int id, List<PatchInfo> patches) { Id = id; Patches = patches; Point3d center = new Point3d(0, 0, 0); double totalScore = 0; foreach(var patch in patches) { center += patch.Center; totalScore += patch.Score; } Center = center / patches.Count; AverageScore = (patches.Count > 0) ? totalScore / patches.Count : 0; } }

// --- Adjacency Analysis and Grouping Algorithms ---

// Builds a dictionary mapping each patch to a list of its adjacent neighbors. private Dictionary<int, List<int>> BuildAdjacencyList(List<PatchInfo> patches) { var adjacencyList = new Dictionary<int, List<int>>(); double tolerance = 0.001; foreach (var patch in patches)

{ adjacencyList[patch.OriginalIndex] = new List<int>(); } for (int i = 0; i < patches.Count; i++)

{ for (int j = i + 1; j < patches.Count; j++)

{ // Check for physical intersection between two patches. Curve[] intersectionCurves; Point3d[] intersectionPoints; bool intersection = Intersection.BrepBrep(patches[i].Geometry, patches[j].Geometry, tolerance, out intersectionCurves, out intersectionPoints); if (intersection && intersectionCurves.Length > 0) { adjacencyList[patches[i].OriginalIndex].Add(patches[j].OriginalIndex); adjacencyList[patches[j].OriginalIndex].Add(patches[i].OriginalIndex); } } } return adjacencyList;

}

// Groups patches into connected components using a flood-fill (Breadth-First Search) approach. private List<List<PatchInfo>> GroupAdjacentPatches(List<PatchInfo> patchesToGroup, Dictionary<int, List<int>> adjacencyList, int minGroupSize, int maxGroupSize, double zTolerance)

{ var allGroups = new List<List<PatchInfo>>(); var assignedPatches = new HashSet<int>(); var patchMap = patchesToGroup.ToDictionary(p => p.OriginalIndex, p => p); // Sort patches to ensure a deterministic grouping order, starting from the same Z-level. var sortedPatches = patchesToGroup.OrderBy(p => Math.Round(p.Center.Z, 3)).ThenByDescending(p => p.Score).ToList(); foreach (var startPatch in sortedPatches)

{ if (assignedPatches.Contains(startPatch.OriginalIndex))

{ continue;

} var currentGroup = new List<PatchInfo>(); var queue = new Queue<PatchInfo>(); queue.Enqueue(startPatch); assignedPatches.Add(startPatch.OriginalIndex); while (queue.Count > 0)

{ var currentPatch = queue.Dequeue(); currentGroup.Add(currentPatch); if (currentGroup.Count >= maxGroupSize)

{ // If the group reaches max size, add remaining items from queue and stop expanding. while(queue.Count > 0)

{ currentGroup.Add(queue.Dequeue()); } break;

} if (!adjacencyList.ContainsKey(currentPatch.OriginalIndex)) continue; var neighbors = new List<PatchInfo>(); foreach (var neighborIndex in adjacencyList[currentPatch.OriginalIndex])

{ if (!assignedPatches.Contains(neighborIndex) && patchMap.ContainsKey(neighborIndex)) { // Only add neighbors that are at a similar Z-height. if(Math.Abs(patchMap[neighborIndex].Center.Z - currentPatch.Center.Z) < zTolerance)

{ neighbors.Add(patchMap[neighborIndex]); } } }

foreach (var neighbor in neighbors)

{ if (!assignedPatches.Contains(neighbor.OriginalIndex)) { assignedPatches.Add(neighbor.OriginalIndex); queue.Enqueue(neighbor); }

}

// Only keep the group if it meets the minimum size requirement. if (currentGroup.Count >= minGroupSize)

{ if(currentGroup.Count > maxGroupSize)

{ allGroups.Add(currentGroup.GetRange(0, maxGroupSize)); } else

{ allGroups.Add(currentGroup); } } } return allGroups;

}
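Stripped of the Rhino geometry types, the flood-fill grouping above reduces to a breadth-first search with a Z-tolerance gate and group-size limits. The following Python sketch illustrates the idea; patch geometry is reduced to a per-index z value and a precomputed adjacency dictionary, and all names are illustrative rather than part of the component:

```python
from collections import deque

def group_adjacent(indices, adjacency, z, min_size, max_size, z_tol):
    """Group indices into connected components via BFS, linking only
    neighbours whose z-values differ by less than z_tol, and keeping
    groups within [min_size, max_size]."""
    groups, assigned = [], set()
    for start in indices:
        if start in assigned:
            continue
        group, queue = [], deque([start])
        assigned.add(start)
        while queue:
            current = queue.popleft()
            group.append(current)
            if len(group) >= max_size:
                group.extend(queue)  # flush remaining items, stop expanding
                queue.clear()
                break
            for n in adjacency.get(current, []):
                if n not in assigned and abs(z[n] - z[current]) < z_tol:
                    assigned.add(n)
                    queue.append(n)
        if len(group) >= min_size:
            groups.append(group[:max_size])  # trim any overshoot
    return groups
```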

// --- K-Medoids Clustering Algorithm ---

// Represents a single cluster, containing its central point (Medoid) and its members. public class Cluster

{ public int Id { get; private set; } public FoundationInfo Medoid { get; set; } public List<FoundationInfo> Members { get; private set; } public Cluster(int id, FoundationInfo medoid)

{ Id = id; Medoid = medoid; Members = new List<FoundationInfo>();

} // Calculates the total distance from all members to the medoid. public double CalculateCost()

{ double cost = 0; foreach (var member in Members)

{ cost += member.Center.DistanceTo(Medoid.Center); } return cost;

}

} // Implements the K-Medoids clustering algorithm. public class KMedoids

{ private readonly List<FoundationInfo> _foundations; private readonly int _k; private readonly int _randomSeed; public List<Cluster> Clusters { get; private set; } public KMedoids(List<FoundationInfo> foundations, int k, int randomSeed)

{ _foundations = foundations; _k = k; _randomSeed = randomSeed; Clusters = new List<Cluster>();

} public void Run(int maxIterations = 100)

{ if (_foundations.Count < _k) return;

InitializeMedoids(); for (int i = 0; i < maxIterations; i++)

{ AssignToClusters(); bool changed = UpdateMedoids(); if (!changed) break; // Stop if medoids no longer change.

}

}

// Randomly selects the initial K medoids from the dataset. private void InitializeMedoids()

{ var random = new Random(_randomSeed); var randomIndices = Enumerable.Range(0, _foundations.Count).OrderBy(x => random.Next()).Take(_k).ToList(); for (int i = 0; i < _k; i++)

{ Clusters.Add(new Cluster(i, _foundations[randomIndices[i]]));

}

}

// Assigns each foundation to the cluster with the nearest medoid. private void AssignToClusters()

{ foreach (var c in Clusters) c.Members.Clear(); foreach (var foundation in _foundations)

{ Cluster nearestCluster = null; double minDistance = double.MaxValue; foreach (var cluster in Clusters)

{ if(cluster.Medoid == null) continue; double distance = foundation.Center.DistanceTo(cluster.Medoid.Center); if (distance < minDistance)

{ minDistance = distance; nearestCluster = cluster; }

} if(nearestCluster != null)

{ nearestCluster.Members.Add(foundation); } } }

// Tries to improve the clusters by swapping the medoid with a non-medoid member. private bool UpdateMedoids()

{ bool medoidChanged = false; foreach (var cluster in Clusters)

{ if (cluster.Members.Count == 0) continue; double minCost = cluster.CalculateCost(); FoundationInfo bestNewMedoid = cluster.Medoid; foreach (var potentialMedoid in cluster.Members)

{ if (potentialMedoid.Id == cluster.Medoid.Id) continue; var originalMedoid = cluster.Medoid; cluster.Medoid = potentialMedoid; double newCost = cluster.CalculateCost(); if (newCost < minCost)

{ minCost = newCost; bestNewMedoid = potentialMedoid; medoidChanged = true; } cluster.Medoid = originalMedoid; } cluster.Medoid = bestNewMedoid; } return medoidChanged; } }
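Stripped of the Grasshopper types, the assign/swap cycle of the KMedoids class can be sketched as follows. This is a simplification for illustration, not a drop-in replacement: foundations are reduced to coordinate tuples and distance is plain Euclidean:

```python
import random

def kmedoids(points, k, seed=0, max_iter=100):
    """Minimal k-medoids: assign each point to its nearest medoid,
    then try every member as a replacement medoid and keep the
    cheapest, repeating until no medoid changes."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    rng = random.Random(seed)
    medoids = rng.sample(points, k)  # random initialisation, as above
    clusters = [[] for _ in medoids]
    for _ in range(max_iter):
        clusters = [[] for _ in medoids]
        for p in points:
            i = min(range(k), key=lambda i: dist(p, medoids[i]))
            clusters[i].append(p)
        changed = False
        for i, members in enumerate(clusters):
            if not members:
                continue
            cost = lambda m: sum(dist(p, m) for p in members)
            best = min(members, key=cost)
            if cost(best) < cost(medoids[i]):
                medoids[i] = best
                changed = True
        if not changed:
            break
    return medoids, clusters
```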

// --- Utility Functions ---

// Selects the best foundations from a cluster based on a weighted score of suitability and proximity to the center.

private List<FoundationInfo> SelectTopFoundations(Cluster cluster, int count, double weight)

{ if (cluster.Members.Count == 0 || cluster.Medoid == null)

{ return new List<FoundationInfo>();

}

// Normalize scores and distances to a 0-1 range for fair comparison. double maxScore = cluster.Members.Max(f => f.AverageScore); if (maxScore == 0) maxScore = 1.0; double maxDistance = cluster.Members.Max(f => f.Center.DistanceTo(cluster.Medoid.Center)); if (maxDistance == 0) maxDistance = 1.0; return cluster.Members.OrderByDescending(f => { double normalizedScore = f.AverageScore / maxScore; double distance = f.Center.DistanceTo(cluster.Medoid.Center); // Proximity is the inverse of distance. double normalizedProximity = 1.0 - (distance / maxDistance); // Calculate final priority score using the input weight. return (weight * normalizedScore) + ((1.0 - weight) * normalizedProximity); }).Take(count).ToList();

}
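The weighted ranking inside SelectTopFoundations can be sketched without Rhino types. Below, each member is a (score, point) tuple and the cluster medoid a coordinate pair; the normalisation and blending follow the method above, but the names are illustrative:

```python
def select_top(members, center, count, weight):
    """Rank candidates by a blend of normalised score and proximity
    to the cluster centre; weight=1 favours score, weight=0 proximity.
    members: list of (score, (x, y)) tuples."""
    def dist(p):
        return ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5
    max_score = max(s for s, _ in members) or 1.0  # avoid divide-by-zero
    max_dist = max(dist(p) for _, p in members) or 1.0
    def priority(m):
        s, p = m
        proximity = 1.0 - dist(p) / max_dist  # inverse of distance
        return weight * (s / max_score) + (1.0 - weight) * proximity
    return sorted(members, key=priority, reverse=True)[:count]
```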

// Converts a list of patch groups into a DataTree of Breps for Grasshopper output. private DataTree<Brep> ConvertToDataTree(List<List<PatchInfo>> groups) { var tree = new DataTree<Brep>(); for (int i = 0; i < groups.Count; i++)

{ var path = new GH_Path(i); tree.AddRange(groups[i].Select(p => p.Geometry), path); } return tree;

} // Converts a list of patch groups into a DataTree of Integers (original indices). private DataTree<int> ConvertToDataTree(List<List<PatchInfo>> groups, bool getIndices)

{ var tree = new DataTree<int>(); for (int i = 0; i < groups.Count; i++)

{ var path = new GH_Path(i); tree.AddRange(groups[i].Select(p => p.OriginalIndex), path); } return tree;

} // </Custom additional code> }

C# Script for Neighbors Information

1 Script Overview

This script constructs an adjacency-based neighbourhood structure from a set of 3D points (Pts). For each point in the dataset, the algorithm identifies nearby points according to predefined spatial tolerances in the horizontal plane (XY) and the vertical axis (Z). Neighbourhood relationships are evaluated using axis-aligned distance thresholds rather than Euclidean distance. A point is considered a neighbour if the absolute differences in the X and Y coordinates are less than or equal to MaxXY, and the absolute difference in the Z coordinate is less than or equal to MaxZ. This approach effectively defines a local three-dimensional bounding box around each point.

The resulting neighbourhood information is stored in a Grasshopper DataTree<int>, where each branch corresponds to a single point and contains the indices of all neighbouring points that satisfy the spatial criteria. This data structure is well suited for downstream operations such as proximity-based grouping, clustering, graph construction, region growth, or spatial analysis within parametric workflows.

2 Code Structure

The code is organised as a nested iteration that systematically evaluates point-to-point proximity and outputs the results in a Grasshopper-compatible data structure.

Initialisation

A DataTree<int> is initialised to store neighbour indices for each point in the input collection.

Outer Loop: Point Iteration

The outer loop iterates over all points in Pts. For each index i, the corresponding point ptA is selected and an empty list is created to store its neighbouring point indices.

Inner Loop: Neighbour Evaluation

A second loop iterates through all points in the dataset for index j. The case where i == j is explicitly excluded to prevent self-referencing.

For each candidate point ptB, the absolute coordinate differences are computed:

dx: difference along the X-axis

dy: difference along the Y-axis

dz: difference along the Z-axis

Neighbourhood Condition

A point ptB is classified as a neighbour of ptA if all of the following conditions are satisfied:

dx ≤ MaxXY

dy ≤ MaxXY

dz ≤ MaxZ

This condition defines an axis-aligned spatial filter that captures local proximity without the computational cost of full distance calculations.

DataTree Population

Once all neighbours of ptA have been identified, the list of neighbour indices is added to the DataTree under a path corresponding to the current point index (GH_Path(i)). Each branch therefore represents the neighbourhood of a single point.

Output

The completed DataTree is assigned to the output parameter Neighbors, making the neighbourhood relationships available for further processing within Grasshopper.
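The axis-aligned test is simple enough to express compactly outside Grasshopper. The sketch below substitutes plain tuples for Point3d and nested lists for the DataTree; it returns, per point, the indices accepted by the box filter:

```python
def box_neighbours(pts, max_xy, max_z):
    """For each point, collect indices of the other points that fall
    inside an axis-aligned box of half-widths max_xy (X and Y) and
    max_z (Z), matching the dx/dy/dz conditions described above."""
    out = []
    for i, (ax, ay, az) in enumerate(pts):
        out.append([j for j, (bx, by, bz) in enumerate(pts)
                    if j != i
                    and abs(ax - bx) <= max_xy
                    and abs(ay - by) <= max_xy
                    and abs(az - bz) <= max_z])
    return out
```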

/// <summary>

/// This class will be instantiated on demand by the Script component. /// </summary> public class Script_Instance : GH_ScriptInstance { private void RunScript(List<Point3d> Pts, double MaxXY, double MaxZ, ref object Neighbors) { var tree = new DataTree<int>(); for (int i = 0; i < Pts.Count; i++)

{ Point3d ptA = Pts[i]; List<int> neighbors = new List<int>(); for (int j = 0; j < Pts.Count; j++) { if (i == j) continue;

Point3d ptB = Pts[j];

double dx = Math.Abs(ptA.X - ptB.X); double dy = Math.Abs(ptA.Y - ptB.Y); double dz = Math.Abs(ptA.Z - ptB.Z);

if (dx <= MaxXY && dy <= MaxXY && dz <= MaxZ) { neighbors.Add(j); } } tree.AddRange(neighbors, new GH_Path(i)); }

Neighbors = tree; } }


C# Script for Ecological Cellular Automata

1 Script Overview

This script implements an environment-driven cellular automaton over a set of cells defined by boolean states (alive or dead), a neighbourhood DataTree (neighbors), and per-cell environmental data (envData). At each step, every cell inspects its neighbours, computing the average of their environmental values and counting how many are alive. Extreme environmental conditions override the life rules: an average at or below minEnv forces the cell to die, while an average at or above maxEnv forces it to become alive. Within the intermediate range, a living cell survives only if its number of living neighbours lies between surviveMin and surviveMax, and a dead cell is born when that number matches the birth threshold (birthEnv).

The simulation is iterated for a fixed number of steps, and the final list of cell states is returned through the output A. Coupling the classic survive/birth counting rules with environmental gating allows growth patterns to respond to site conditions rather than evolve in isolation.

2 Code Structure

Initialisation

The input states are copied into a working list, and the neighbors and envData trees are converted into nested lists for fast indexed access.

Simulation loop

For each of the steps iterations, a new state list is constructed. For every cell, the script accumulates the summed environmental values of its neighbours and counts how many of them are alive, skipping any out-of-range neighbour indices.

Update rule

The average neighbour environment is computed, and the next state is decided hierarchically: environmental death (avgEnv <= minEnv), environmental birth (avgEnv >= maxEnv), and otherwise the neighbour-count survive and birth rules.

Output

After the final iteration, the current state list is assigned to the output parameter A.

private void RunScript(List<bool> states, DataTree<int> neighbors, DataTree<double> envData, int steps, double minEnv, double maxEnv, int surviveMin, int surviveMax, double birthEnv, ref object A) { int count = states.Count; List<bool> currentStates = new List<bool>(states);

List<List<int>> neighList = new List<List<int>>(); for (int i = 0; i < neighbors.Branches.Count; i++) neighList.Add(new List<int>(neighbors.Branches[i]));

List<List<double>> envList = new List<List<double>>(); for (int i = 0; i < envData.Branches.Count; i++) envList.Add(new List<double>(envData.Branches[i])); for (int step = 0; step < steps; step++) { List<bool> nextStates = new List<bool>(); for (int i = 0; i < count; i++) { List<int> neigh = neighList[i];

double totalEnv = 0.0; int validN = 0; int aliveN = 0; foreach (int n in neigh) { if (n < 0 || n >= count) continue; double sum = 0; foreach (double v in envList[n]) sum += v; totalEnv += sum; validN++;

if (currentStates[n]) aliveN++; }

double avgEnv = (validN > 0) ? totalEnv / validN : 0.0; bool alive = currentStates[i]; bool next; if (avgEnv <= minEnv) { next = false; } else if (avgEnv >= maxEnv) { next = true; } else { if (alive) { next = (aliveN >= surviveMin && aliveN <= surviveMax); } else { next = (aliveN == birthEnv); } } nextStates.Add(next); } currentStates = nextStates; } A = currentStates; }
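A single update of the rule can be restated as a minimal Python analogue. Here each cell's environment is reduced to one scalar and neighbour indices are assumed valid; this sketches the rule itself, not the Grasshopper component:

```python
def ca_step(states, neigh, env, min_env, max_env,
            survive_min, survive_max, birth_n):
    """One step of the environment-gated life rule: the average
    neighbour environment forces death below min_env and birth above
    max_env; in between, classic survive/birth neighbour counts apply."""
    nxt = []
    for i, alive in enumerate(states):
        ns = neigh[i]
        avg = sum(env[n] for n in ns) / len(ns) if ns else 0.0
        alive_n = sum(1 for n in ns if states[n])
        if avg <= min_env:
            nxt.append(False)          # environment too hostile
        elif avg >= max_env:
            nxt.append(True)           # environment forces growth
        elif alive:
            nxt.append(survive_min <= alive_n <= survive_max)
        else:
            nxt.append(alive_n == birth_n)
    return nxt
```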

C# Script for Data Clustering

1 Script Overview

This script performs a rule-based spatial classification of points based on multiple environmental parameters provided through a DataTree. Each point is evaluated using five weighted criteria (radiation, neighbours, proximity, wind, and light), which together define qualitative spatial conditions.

Using predefined thresholds and weighted values, each point is assigned to a comfort or programme category such as High Comfort, Vegetation Zones, or Residual Space. These categories are associated with numeric group identifiers and colour codes, allowing the classification to directly drive visualisation and downstream spatial logic in Grasshopper.

In addition to per-point classification, the script aggregates points by category, reports diagnostic value ranges for key parameters, and outputs structured data suitable for clustering, colouring, labelling, and further analysis.

2 Code Structure

Parameter weighting

Scalar weights are assigned to each environmental parameter and applied directly to the input values to control their influence during classification.

Group definitions

Dictionaries map group names to numeric identifiers and display colours, ensuring consistency between classification logic and visual output.

Output initialisation

Lists and dictionaries are initialised to store per-point labels, colours, group indices, and clustered point collections.

Classification loop

Each branch of the input DataTree is evaluated individually. Invalid branches fall back to a default category, while valid data is weighted and classified using a hierarchical set of conditional rules based on adjusted parameter ranges.

Diagnostics and aggregation

Minimum and maximum values of key parameters are tracked and printed to the Grasshopper console. Points are simultaneously grouped by category, and a summary of active clusters is generated.

Final output assignment

All computed data is assigned to the component outputs for further visualisation and parametric processing.
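The weight-then-rules pattern described above can be condensed into a small Python sketch. The weights and the High Comfort / Vegetation / Medium Comfort / Gathering thresholds are transcribed from the listing; the dictionary keys are illustrative shorthand, not the component's parameter names:

```python
def classify(values, weights, rules, default):
    """Weight raw environmental values, then return the first matching
    rule's label; falls back to `default`, mirroring the hierarchical
    if/else chain in the script."""
    w = {k: v * weights[k] for k, v in values.items()}
    for label, predicate in rules:
        if predicate(w):
            return label
    return default

# Weights and rules transcribed from the listing (subset for illustration).
weights = {"rad": 0.9, "neigh": 1.4, "light": 1.2}
rules = [
    ("High Comfort", lambda w: w["neigh"] > 0.5 and w["light"] > 0.4 and w["rad"] < 0.7),
    ("Vegetation Zones", lambda w: w["rad"] > 0.45 and w["light"] > 0.35 and w["neigh"] < 0.5),
    ("Medium Comfort", lambda w: w["light"] > 0.35 and w["rad"] > 0.3),
    ("Gathering Zones", lambda w: w["neigh"] > 0.7),
]
```

Because the rules are tested in order, a point satisfying several conditions always takes the first matching label, exactly as in the C# chain.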

using System; using System.Collections; using System.Collections.Generic; using Rhino; using Rhino.Geometry; using Grasshopper; using Grasshopper.Kernel; using Grasshopper.Kernel.Data; using Grasshopper.Kernel.Types;

using System.Drawing; using System.Linq;

private void RunScript(List<Point3d> points, DataTree<double> dataTree, ref object clusters, ref object grouped, ref object colors, ref object labels, ref object averagesOut) { // Weights

double w_radiation = 0.9; double w_neighbours = 1.4; double w_proximity = 0.3; double w_wind = 0.5; double w_light = 1.2;

// Group dictionaries

Dictionary<string, int> groupIDs = new Dictionary<string, int>() { { "High Comfort", 0 }, { "Medium Comfort", 1 }, { "Vegetation Zones", 2 }, { "Gathering Zones", 3 }, { "Residual Space", 4 }, };

Dictionary<string, Color> groupColors = new Dictionary<string, Color>() { { "High Comfort", Color.Orange }, { "Medium Comfort", Color.YellowGreen }, { "Vegetation Zones", Color.ForestGreen }, { "Gathering Zones", Color.Goldenrod }, { "Residual Space", Color.Gray }, };

// Initialise outputs

List<string> labelsOut = new List<string>(); List<Color> colorsOut = new List<Color>(); List<int> groupedOut = new List<int>(); List<double> averagesOutTemp = new List<double>(); Dictionary<string, List<Point3d>> clusterGroups = new Dictionary<string, List<Point3d>>(); foreach (var name in groupIDs.Keys) clusterGroups[name] = new List<Point3d>();

// Debug: evaluate actual value ranges

double minRad = double.MaxValue, maxRad = double.MinValue; double minNeigh = double.MaxValue, maxNeigh = double.MinValue; double minLight = double.MaxValue, maxLight = double.MinValue; double minWind = double.MaxValue, maxWind = double.MinValue;

// Classification for (int i = 0; i < dataTree.BranchCount; i++) { var d = dataTree.Branch(i); if (d.Count != 5)

{ string fallback = "Residual Space"; // Default group for invalid branches ("Supportive Programmes" is not a defined key). labelsOut.Add(fallback); colorsOut.Add(groupColors[fallback]); groupedOut.Add(groupIDs[fallback]); clusterGroups[fallback].Add(points[i]); continue; }

// Apply weights

double rad = d[0] * w_radiation;

double neigh = d[1] * w_neighbours; double prox = d[2] * w_proximity; double wind = d[3] * w_wind; double light = d[4] * w_light;

// Store for diagnostics

minRad = Math.Min(minRad, rad); maxRad = Math.Max(maxRad, rad); minNeigh = Math.Min(minNeigh, neigh); maxNeigh = Math.Max(maxNeigh, neigh); minLight = Math.Min(minLight, light); maxLight = Math.Max(maxLight, light); minWind = Math.Min(minWind, wind); maxWind = Math.Max(maxWind, wind);

// Range-based classification

string group = "Residual Space"; // default

if (neigh > 0.5 && light > 0.4 && rad < 0.7) group = "High Comfort"; else if (rad > 0.45 && light > 0.35 && neigh < 0.5) group = "Vegetation Zones"; else if (light > 0.35 && rad > 0.3) group = "Medium Comfort"; else if (neigh > 0.7) group = "Gathering Zones"; else group = "Residual Space";

// Assign outputs

labelsOut.Add(group); colorsOut.Add(groupColors[group]); groupedOut.Add(groupIDs[group]); clusterGroups[group].Add(points[i]); }

// Console: display actual ranges

Print("Radiation: " + minRad.ToString("0.00") + " – " + maxRad.ToString("0.00"));

Print("Neighbours: " + minNeigh.ToString("0.00") + " – " + maxNeigh.ToString("0.00"));

Print("Light: " + minLight.ToString("0.00") + " – " + maxLight.ToString("0.00"));

Print("Wind: " + minWind.ToString("0.00") + " – " + maxWind.ToString("0.00"));

// Clusters: name + count

List<string> clusterLabels = new List<string>(); foreach (var kv in clusterGroups)

{ int id = groupIDs[kv.Key]; int count = kv.Value.Count; if (count > 0)

clusterLabels.Add(id.ToString() + " - " + kv.Key + " (" + count.ToString() + " pts)"); }

// Assign outputs

clusters = clusterLabels; grouped = groupedOut; colors = colorsOut; labels = labelsOut; averagesOut = averagesOutTemp; }

C# Script for Interior Spaces Classification

1 Script Overview


This Grasshopper C# script classifies a set of points into three ecological categories (ECO-A_Human, ECO-B_Intermediate, and ECO-C_Plants) based on five input parameters supplied per point via a DataTree<double>. Each point is assigned a group label, a corresponding colour, and a numeric group ID. In parallel, the script produces a simple cluster summary indicating how many points fall into each category.

The classification is rule-based and hierarchical: the “Plants” condition is tested first, then the “Intermediate” band, and finally a more permissive “Human” condition; otherwise the default remains ECO-A.

2 Code Structure

Group dictionaries (IDs and colours)

Two dictionaries define the mapping from group names to integer IDs and display colours. These are used consistently across all outputs.

Output containers

Lists are initialised to store per-point results: group labels, colours, and IDs. A DataTree<Point3d> is also created to store points under a path matching their group ID.

Classification loop

The script iterates over each branch of dataTree. If a branch does not contain exactly five values, it is skipped. Otherwise, the five parameters are read (radiation, volume, connectivity, ventilation, humidity) and evaluated using a non-overlapping conditional sequence:

ECO-C_Plants: high humidity, low ventilation, and limited radiation.

ECO-B_Intermediate: mid-range humidity and ventilation, with radiation below a cap.

ECO-A_Human: higher ventilation and connectivity, lower humidity, and higher volume (more flexible rule).

Per-point output assignment

For each point, the selected group label, colour, and numeric ID are appended to the output lists, and the point is added to the output tree under GH_Path(groupID).

Cluster summary

After classification, the script counts how many points belong to each group ID and generates a summary list in the format: GroupName (X pts).

Final outputs

The computed lists and summary are assigned to the Grasshopper outputs: labels, colors, grouped, and clusters.

using System; using System.Collections; using System.Collections.Generic; using Rhino; using Rhino.Geometry; using Grasshopper; using Grasshopper.Kernel; using Grasshopper.Kernel.Data; using Grasshopper.Kernel.Types;

using System.Drawing; using System.Linq;

/// <summary>

/// This class will be instantiated on demand by the Script component. /// </summary> public class Script_Instance : GH_ScriptInstance {

#region Utility functions

/// <summary>Print a String to the [Out] Parameter of the Script component.</summary> /// <param name="text">String to print.</param> private void Print(string text) { /* Implementation hidden. */ } /// <summary>Print a formatted String to the [Out] Parameter of the Script component.</summary> /// <param name="format">String format.</param> /// <param name="args">Formatting parameters.</param> private void Print(string format, params object[] args) { /* Implementation hidden. */ } /// <summary>Print useful information about an object instance to the [Out] Parameter of the Script component. </summary>

/// <param name="obj">Object instance to parse.</param> private void Reflect(object obj) { /* Implementation hidden. */ } /// <summary>Print the signatures of all the overloads of a specific method to the [Out] Parameter of the Script component. </summary>

/// <param name="obj">Object instance to parse.</param> /// <param name="method_name">Method name to reflect.</param> private void Reflect(object obj, string method_name) { /* Implementation hidden. */ } #endregion

#region Members

/// <summary>Gets the current Rhino document.</summary> private readonly RhinoDoc RhinoDocument;

/// <summary>Gets the Grasshopper document that owns this script.</summary> private readonly GH_Document GrasshopperDocument;

/// <summary>Gets the Grasshopper script component that owns this script.</summary> private readonly IGH_Component Component;

/// <summary>

/// Gets the current iteration count. The first call to RunScript() is associated with Iteration==0. /// Any subsequent call within the same solution will increment the Iteration count. /// </summary> private readonly int Iteration; #endregion

/// <summary>

/// This procedure contains the user code. Input parameters are provided as regular arguments, /// Output parameters as ref arguments. You don’t have to assign output parameters, /// they will have a default value.

/// </summary>

private void RunScript(List<Point3d> points, DataTree<double> dataTree, ref object clusters, ref object grouped, ref object colors, ref object labels)

{ { // 1. IDs and colours

Dictionary<string, int> groupIDs = new Dictionary<string, int>()

{ { “ECO-A_Human”, 0 },

{ “ECO-B_Intermediate”, 1 },

{ “ECO-C_Plants”, 2 }

}; Dictionary<string, Color> groupColors = new Dictionary<string, Color>()

{ { “ECO-A_Human”, Color.LightSkyBlue },

{ “ECO-B_Intermediate”, Color.Khaki },

{ “ECO-C_Plants”, Color.ForestGreen }

// Output lists

List<string> outLabels = new List<string>();

List<Color> outColors = new List<Color>();

List<int> outIDs = new List<int>();

DataTree <Point3d> outTree = new DataTree<Point3d>();

// 2. Classification (no overlap)

for (int i = 0; i < dataTree.BranchCount; i++)

{ var d = dataTree.Branch(i); if (d.Count != 5) continue;

double rad = d[0]; double vol = d[1]; double conn = d[2]; double vent = d[3]; double hum = d[4];

string group = “ECO-A_Human”; // default

// ����

ECO-C → Plants (high humidity, low ventilation) if (hum > 0.70 && vent < 0.40 && rad < 0.60)

{ group = “ECO-C_Plants”; } else

// �� ECO-B → Intermediate (WIDER RANGE) if (hum > 0.40 && hum < 0.75 && vent > 0.30 && vent < 0.65 && rad < 0.75)

{ group = “ECO-B_Intermediate”; } else

{ // �� ECO-A → Human (more flexible than before) if (vent > 0.50 && conn > 0.40 && hum < 0.55 && vol > 0.50) group = “ECO-A_Human”; } outLabels.Add(group); outColors.Add(groupColors[group]); outIDs.Add(groupIDs[group]);

GH_Path path = new GH_Path(groupIDs[group]); outTree.Add(points[i], path); }

// 3. Cluster summary List<string> summary = new List<string>(); foreach (var kv in groupIDs) { int id = kv.Value; int count = 0;

// Count how many points have that ID foreach (int g in outIDs)

{ if (g == id) count++; } summary.Add(kv.Key + “ (“ + count + “ pts)”); } labels = outLabels; colors = outColors; grouped = outIDs; clusters = summary; } } }

Planting Catalogue

Stabilising Plants

Plant species and their classification. (Various sources compiled by AI.)


Soil Enriching Plants

Plant species and their classification. (Various sources compiled by AI.)
