2025 ACSA AI Design Practices - Abstract Book

Session: Spatial Organization

Friday, September 26, 2025

Machine learning-based Unit Floor Plan Retrieval and Comparison: Unit Finder

The architectural design of multifamily housing involves an arduous, iterative process of laying out individual apartment units. Analyzing and comparing successful unit designs from prior projects is critical but often cumbersome, since knowing where to find a floor plan that succeeded in a particular situation, geometry, or typology requires deep familiarity with past work. Searching for these reference layouts is time-consuming, especially as the database grows, and manual searches reduce both the accuracy and the efficiency of the process. Several methods have been explored to speed up the retrieval of similar floor plans from databases, and machine learning methods can further improve this process by delivering results with higher accuracy. Although images and sketches provide useful features for analysis, they fall short in capturing internal structural and semantic information. Spatial features derived from vector-based representations of floor plans offer a more comprehensive understanding of structural details and relationships that images and sketches alone cannot fully convey. Here, we demonstrate a novel spatial feature-based unit plan retrieval framework [1-4] that enables intelligent retrieval and comparison of unit floor plans. A distinctive feature of our approach is its capability to analyze unit plans based solely on boundaries, enclosures, and door locations, match these architectural elements against existing plans in the database, and return similar plans enriched with detailed layout information. Our framework integrates seamlessly with architectural modeling software such as Revit and Rhino, facilitating smooth import and export functionality. Our approach leverages spatial features to extract detailed structural and granular information about floor plans, and we have adapted a k-nearest neighbor model [5-7] for efficient retrieval of similar floor plans from the database. Key innovations of our method include advanced retrieval capabilities, seamless Revit and Rhino integration, the ability to rate and tag floor plans, and a user-friendly interface. A filtering mechanism allows architects and designers to explore unit floor plans by parameters such as the number of bedrooms, number of bathrooms, unit type (rental or for sale), square footage, and a user-generated quality rating. This advanced filtering enhances decision-making, facilitating quick and precise selection of unit plans tailored to specific project requirements. Our framework also excels at predicting complex floor plans with multiple enclosures and curves, demonstrating its capability to analyze intricate plan structures effectively. This study was conducted at Handel Architects LLP; the data used in this study was collected and is owned by Handel Architects LLP. Finally, we introduce a user interface called Unit Finder, which provides seamless integration with Revit and Rhino, enabling efficient retrieval and export of similar plans back to the original project.
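
A minimal sketch of the retrieval step, assuming an illustrative feature schema (the framework's actual descriptors, derived from unit boundaries, enclosures, and door locations, are not published in this abstract); scikit-learn's NearestNeighbors stands in for the adapted k-nearest neighbor model:

    # Spatial-feature retrieval sketch: hypothetical feature vectors of
    # [area_sqft, bedrooms, bathrooms, boundary_vertices, enclosures, doors].
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import StandardScaler

    database = np.array([
        [650.0, 1, 1, 8, 4, 5],
        [980.0, 2, 2, 12, 6, 8],
        [1200.0, 3, 2, 16, 7, 10],
        [720.0, 1, 1, 10, 4, 6],
    ])

    scaler = StandardScaler().fit(database)            # normalize feature scales
    index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(database))

    # Query: a new unit boundary reduced to the same feature vector.
    query = np.array([[700.0, 1, 1, 9, 4, 5]])
    distances, neighbors = index.kneighbors(scaler.transform(query))
    print(neighbors[0])  # indices of the most similar stored unit plans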

Toward a Fallible Machine: Computational Design Beyond Optimization

This paper argues that positioning generative computational tools as active design partners introduces productive variability into architectural workflows, an approach that does not undermine the role of the designer but enhances it. Rather than seeking to optimize outcomes, this research embraces indeterminacy as a catalyst for novel spatial, material, and conceptual discoveries. Through projects exhibited in Exquisite Corpse: Dialogues in Material and Machine and the courses Artificial Assemblies and Dreamlands, the work explores how generative technologies reshape design methodologies, authorship, and the relationship between digital and physical expression. In these case studies, computational processes are employed in varying supplemental capacities, strategically testing their limitations and potential to reveal new design trajectories. The exhibition Exquisite Corpse investigates surrealist methodologies, merging deterministic computational frameworks with interpretive instability. Generative tools such as Stable Diffusion and ComfyUI are used in tandem with custom-coded algorithms and parametric modeling environments, creating workflows that intentionally destabilize conventional input-output expectations. These processes introduce elements of chance, distortion, and emergent form into architectural production. This approach reveals a productive tension between human-defined parameters and machine improvisation, challenging traditional notions of control, authorship, and intentionality in design. In parallel, the pedagogical methods developed in the courses Artificial Assemblies and Dreamlands encourage students to work with a broad set of generative tools, including Rhino, Grasshopper, MidJourney, Stable Diffusion, TouchDesigner, Arduino, and projection mapping techniques, to develop their own hybrid workflows. These tools are not used in isolation but in feedback-rich configurations that span digital drawing, physical computing, and real-time spatial interaction. Projects emphasize both digital-to-analog and analog-to-digital translation, foregrounding ambiguity, divergence, and the unexpected as vital aspects of design exploration. Case studies from these contexts demonstrate the capacity of generative tools to fundamentally alter architectural workflows, prompting a reconsideration of entrenched distinctions between conceptualization and fabrication. Digital outputs often yield unanticipated material textures, structural logics, and spatial configurations. Conversely, analog inputs (sensor data, environmental stimuli, physical artifacts) are reinterpreted through generative systems to produce speculative formal responses rooted in tangible conditions. Ultimately, this research proposes that productive misalignment between authorship and outcome offers an alternative model for architectural practice: one that privileges adaptability over control, dialogue over direction, and emergence over resolution. By treating generative tools as contingent, partial, and fallible (not idealized engines of efficiency), these projects cultivate a more situated and responsive form of computational design. Within this framework, technology becomes one actor in a broader ecology of practice, contributing to outcomes that reflect ambiguity, translation, and shared agency rather than closure or optimization. The stakes of this work extend beyond computational aesthetics.
As generative tools become increasingly integrated into architectural workflows, their influence on pedagogy, authorship, and creative culture becomes more urgent. These projects suggest that the most meaningful collaborations arise not from seamless automation, but from carefully designed systems that leave room for failure, negotiation, and purposeful divergence. The goal is not to perfect the machine, but to invite it into the conversation.

Hybrid Workflows: AI and Digital Sketching

Considering the recent advances of AI in the world of architecture practice, one can wonder how these new tools will impact a design process that until now has been very human-centric. Compared to previous technological innovations, one of the paradigm shifts brought about by AI has to do with control and making design decisions in an environment where a machine can produce a multitude of solutions from a given set of criteria. Should AI initiate a design process, or rather help refine ideas developed in a “traditional” way by designers? In that context we present a hybrid design methodology that integrates traditional tools such as handmade models or digital sketching with AI and aspires to be highly iterative. We postulate that AI can be used at different phases of a design process and help generate new and unexpected outcomes or help refine already developed ideas. The goal of this methodology is to identify what tools stimulate specific design thinking activities. In a third-year studio project, students used Midjourney, a prompt-to-image AI, to create inspirational graphics as a starting point for their design process. Then, using a selected image, they built a handmade concept model with chipboard. Once these basic components had been organized, the next phase consisted of sketching digitally over a photograph of the previously built model to start defining the building skin and its materiality. Students drew using the Morpholio Trace application with a pen and tablet. Finally, the digitally drawn building volumes were further rendered using an image-to-image AI application called Prome AI. The purpose of the AI tool was to produce realistically convincing iterations of a conceptual drawing and instill different levels of variation using filters. The AI-generated images were further edited using digital sketching. This multi-step methodology could then be repeated in different combinations, so that different design activities can inform one another. Traditionally, a successful design process is composed of conceptualization, development, and refinement phases, and these creative moments can be repeated in an iterative way to bring a project to a certain level of resolution. Within this established design methodology there is an opportunity for AI to play a valuable role, along with more traditional means of design thinking. We propose to show how AI can provide initial inspiration, be a tool for refinement, and bring alternative ideas along the way, all within a process that also integrates traditional tools and continues to place designers in a position to meaningfully control and evaluate their design.

Session: Positioning Histories

Friday, September 26, 2025

Sensus Artificialis / Augmented Architectural Historiography in the Age of AI

Eliyahu Keller, Technion, Israel Institute of Technology

Mark Jarzombek, Massachusetts Institute of Technology

Eytan Mann, Delft University of Technology

In his Critique of Judgement, Immanuel Kant presents the notion of the Sensus Communis as a “sense shared by [all of us], i.e., a power to judge that in reflecting takes account (a priori), in our thought, of everyone else's way of presenting [something], in order as it were to compare our own judgment [...]”. This reciprocal reasoning should be practiced according to three maxims: “to think for oneself; to think from the standpoint of everyone else; and to think always consistently”. Elsewhere, Kant notes that gatherings meant for discussions of taste should include no more than ten guests so that such communalities can be achieved in small doses. This type of micro-communal reasoning is at odds with how contemporary large language models (LLMs) operate. Drawing on vast corpora of digital traces, LLMs generate content through statistical pattern recognition rather than reflective exchange. Their outputs, though ostensibly dialogic, often flatten context into an aggregated mean. Yet both Kant’s Sensus Communis and generative AI share an ambition: to produce meaning that extends beyond the individual, whether through imagined universality or data-driven generality. This paper explores this tension, taking up the theme of Venice as envisioned through a fictional seminar featuring a set of historical and contemporary figures–in the form of AI-trained personalities–who were or are invested in the city of Venice: Veronica Franco, a 16th-century Venetian poet; Giovanni Battista Piranesi, the famous 18th-century architect; John Ruskin, the noted 19th-century writer; Deborah Howard, a living historian of Venice; and Carlo Ratti, director of the 2025 Venice Biennale. Carved from the slippery boundaries of the model’s training data, these personalities serve as an attempt to resist AI’s tendency toward generalization and instead insist on individuation. As the conversation unfolds, an LLM ‘listens in’ and generates images based on the conversation. The result is a collaboration between human agents (who happen to be scholars of architecture), an AI that serves as a type of design consultant, and a group of bots conducting a symposium inspired by a city both imagined and real. This work lies at the intersection of AI experimentation, pedagogy, critical historiography, and urbanism. Its aim is to soften the boundary between fact and fiction to enrich, reconsider, and even confuse the historiography of a place like Venice. Through listening and position-taking, the fictional dialogue generates new understandings not only of its content, but also of how such exchanges can shape our critical stance. Since the AI generates designs in real-time, listeners are invited into the familiar architectural mode: the sketch. Rather than deploying AI to reconstruct the past or to fabricate arbitrary speculative futures, we use it to complicate the historiographic horizon, and to explore how AI tools can be integrated into pedagogy about the city as both a historical and design artifact. The goal is not to define what Venice “is,” but to expose its multiplicity. In doing so, we resist AI’s tendency to flatten differences into consensus and instead foster a “common sense” grounded in friction, divergence, and interaction.
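
One hypothetical way such a symposium could be wired together, sketched with the OpenAI Python client; the abstract does not name its models or tooling, and the persona prompts and model choice below are invented:

    # Sketch (not the authors' implementation): each figure is a system-prompted
    # persona; a "listening" call condenses the transcript into an image prompt.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PERSONAS = {
        "Veronica Franco": "You are Veronica Franco, 16th-century Venetian poet.",
        "John Ruskin": "You are John Ruskin, 19th-century writer on Venice.",
    }

    transcript = ["Moderator: What is Venice, to each of you?"]
    for name, persona in PERSONAS.items():
        reply = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[{"role": "system", "content": persona},
                      {"role": "user", "content": "\n".join(transcript)}],
        )
        transcript.append(f"{name}: {reply.choices[0].message.content}")

    listener = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   "Condense this dialogue into one sketch-like image prompt "
                   "of Venice:\n" + "\n".join(transcript)}],
    )
    print(listener.choices[0].message.content)  # feeds an image generator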

Composite Histories: AI-Generated Comparative Imagery as a Pedagogical Tool

In recent years, artificial intelligence (AI) has become a prominent tool in architectural image generation, often employed to produce speculative and futuristic visions. While these explorations are visually compelling, they frequently prioritize novelty over historical content. This paper proposes a counterpoint: leveraging AI not for speculative futures, but for deepening engagement with architectural history through the creation of composite images derived from disciplinary precedents. By integrating Heinrich Wölfflin’s comparative method with contemporary AI capabilities, this project reimagines pedagogical tools in the architectural history classroom. The core of this research lies in constructing hybrid images from paired architectural precedents commonly featured in global architectural history survey courses. These pairings, such as the Suleymaniye Mosque and the Villa Rotunda or the Taj Mahal and the Pantheon, are traditionally used in side-by-side comparisons to highlight stylistic, cultural, and formal differences. This project instead collapses these comparisons into single, AI-generated composite images. These visual mashups may arguably serve as new pedagogical tools that engage students in a challenging prompt for visual analysis, moving beyond rote identification toward a more nuanced understanding of architectural principles, composition, and cultural context. Using curated datasets of historical architectural references, the project trains AI models to generate composite images that merge two distinct precedents into a unified visual field. The resulting images are intentionally ambiguous and visually provocative, designed to stimulate discussion and critical thinking. They invite students to identify and interpret the layered architectural elements, fostering visual literacy and interpretive skills essential to architectural education. This paper presents the methodology behind the image generation process, including prompt engineering, dataset selection, and iterative refinement. It also explores the pedagogical implications of these images: how might they be used in classroom discussions, exams, or as assignment prompts for student-generated images? Drawing on Whitney Davis’s argument for comparativism as an ethical imperative in art history, the paper positions these AI-generated composites as tools for inhabiting the diversity of architectural traditions. The act of visually merging disparate precedents becomes a metaphor for the comparative method itself: an analytical practice that, for Davis, acknowledges irreconcilable difference while seeking shared ground. In this way, the project not only revisits Wölfflin’s method for a digital generation but also contributes to a more inclusive and critically engaged architectural pedagogy.
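
The project's composites come from models trained on curated datasets; as a simpler open-source analogue, blending the text embeddings of two precedent prompts in Stable Diffusion (via Hugging Face diffusers) collapses a canonical pairing into a single image. The model ID and prompts below are assumptions for illustration:

    # Blend two precedents by averaging their CLIP text embeddings.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def embed(prompt):
        tokens = pipe.tokenizer(prompt, padding="max_length",
                                max_length=pipe.tokenizer.model_max_length,
                                truncation=True, return_tensors="pt")
        return pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

    a = embed("the Suleymaniye Mosque, exterior view, architectural photograph")
    b = embed("Palladio's Villa Rotunda, exterior view, architectural photograph")
    image = pipe(prompt_embeds=0.5 * a + 0.5 * b).images[0]  # composite image
    image.save("composite.png")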

Utopia’s Afterlife: Urban Visions Through Human and Machine Collaboration

‘The city is not the manifestation of some iron law but rather part of changing human culture and aspiration.’ - Kevin Lynch. (1) What is a city, and how have we dreamed it throughout history? Last summer, I devised a speculative “vertical theory” seminar to explore precisely this question: how have architects and planners imagined urban space, and how have those speculative blueprints taken shape (or faltered) in reality? Our journey began with early-twentieth-century avant-gardists and traced a continuous thread to today’s digital visions, uncovering along the way the persistent yearning for utopia. Drawing inspiration from the counter-mapping methods of the Radical Geographers in the United States during the 1960s and 1970s, our group turned its attention to the heavily infrastructural and manufacturing riversides and meadowlands of New Jersey and created a collection of snapshots of the future that combined AI software and individual sketching.

Mapping New Territories
In this first phase, students selected their own sites of intervention, from deindustrialized waterfronts to peri-urban green belts, and situated them within a lineage of manifestos and masterplans. By juxtaposing Ebenezer Howard’s Garden Cities (1898) and Le Corbusier’s Radiant City (1935) with their chosen locales, students learned to read both the physical terrain and its ideological sediment. They produced annotated site maps and comparative analyses, asking: how might Howard’s emphasis on nature or Le Corbusier’s obsession with order translate to our fragmentary, contested urban fabrics?

Future Ecosystems
Moving from critique to creation, students sketched initial proposals that treated the city as a living ecosystem. Students modeled stormwater management in reclaimed riverbanks, proposed vertical farms for heat-island mitigation, and calculated material flows for circular-economy districts. These rapid iterations became the seed for the AI hybrid designs at the end of the semester.

Utopia Proposal
In the final phase, the hand-drawn sketches, together with the textual prompts, were fed into MidJourney’s AI engine. By experimenting with image-to-image transformations and prompt-weighting strategies, students uncovered unexpected formal solutions: tower morphologies that echoed local vernaculars, aerial networks that responded to topographic constraints, and public-realm typologies attuned to climate variations. This workflow underscored that software is never neutral: its latent biases and pattern-recognition heuristics both reflect and refract cultural assumptions (2). Throughout the seminar, the dialectic between human creativity and machine assistance proved revelatory. Sketches established conceptual intentions; AI renderings expanded formal repertoires; and iterative feedback loops cultivated design narratives. By semester’s end, each student presented a comprehensive urban vision complete with critical essays, annotated diagrams, and poetic renderings. This paper reflects on the teaching methodology and outcomes of that course. The process also highlighted the challenges of using AI methods in informal areas due to the lack of up-to-date open data. Therefore, it is essential to refine the methodology for collecting and preparing data for future machine learning or spatial data science models, starting with street-level image collection, followed by image processing and qualitative analysis.
The objective is to teach students how to use these analyses to construct an argument that becomes the generator of design decisions.

Use of AI to Develop Critical Thinking in Architectural History: Research and Writing

At an early stage in undergraduate education, architecture students are still developing research and writing skills and critical thinking. The use of generative AI can hinder this development if not addressed properly, as it may compromise students’ intellectual autonomy and ethical reasoning. Awareness of the opportunities, limitations, and potential biases of ChatGPT is crucial for responsible adoption. This study presents an innovation project carried out in the 2023 and 2024 Fall Semesters. It focused on the critical and responsible integration of ChatGPT to enhance learning (reasoning for complexity), to raise awareness of its use, and to support the development of research and writing skills in a third-semester course on an architectural history context. Prior to the start of the research, in the first weeks of the course, an introductory session on the use of ChatGPT was conducted to establish a foundation for ethical and reflective AI use. The session included a general method for constructing prompts, based on the premise that a prompt is more robust if there is prior knowledge, which also helps identify the fallacies and inconsistencies that the AI may show. Students were required to use ChatGPT in two short activities intended to let them assess the ease or difficulty of constructing a prompt with and without prior research. This exercise aimed to increase awareness of how research preparation impacts AI performance and reliability. Bloom’s taxonomy was also incorporated to formulate prompts with varying levels of complexity. The intervention started when the students were required to use ChatGPT in specific sections of written research during two major assignments: a monograph and an essay. For each, they submitted:
- the prompt and the AI-generated response to the prompt;
- verification of the content provided by ChatGPT, corroborating the data with at least two academic sources;
- a brief description of the contributions and inconsistencies found when comparing the information from ChatGPT against academic sources; and
- a revised final prompt demonstrating what had been learned (learning progression).
The project was designed as a non-experimental study conducted with a regular undergraduate architecture course, less than a year after the launch of ChatGPT. Results indicate that students found the tool helpful for organizing ideas and exploring unfamiliar content. However, the process also highlighted the importance of verifying AI-generated information, particularly regarding citation accuracy and conceptual clarity. Students reported improved writing outcomes, greater awareness of source credibility, and enhanced understanding of the research process. These gains align with the project’s overarching goal: to foster the critical and constructive use of ChatGPT as a support tool in academic research, rather than a substitute for inquiry. By embedding AI use into a structured pedagogical process (grounded in metacognition, source triangulation, and reflection), this case study proposes a scalable and discipline-sensitive model for integrating generative AI in design education. The findings suggest that with intentional guidance, AI can become a tool that strengthens students’ critical engagement and authorship.

This presentation was removed.

Session: Imaging Workflows

Friday, September 26, 2025

Algorithmic Fracturing: New Aesthetic and Design Process through AI and Fabrication

This project introduces Algorithmic Fracturing, a speculative design methodology that leverages artificial intelligence (AI) not as a tool of optimization, but as a destabilizing agent in architectural thought. In response to the growing sameness of AI-generated imagery, the project embraces ambiguity, error, and material unpredictability as productive forces in design exploration. Rather than producing resolved buildings, it constructs recursive workflows translating between image, code, and material to generate novel architectural fragments. Rooted in Umberto Eco’s notion of the “open work,” this methodology treats architecture as a field of interpretation rather than resolution. The process begins with poetic text prompts input into Midjourney, crafted to provoke spatial incoherence and reject conventional architectural legibility. [Image 1: Prompt breakdown] A typical prompt reads: “A disjointed, incomprehensible [element], shaped by AI hallucinations and deep-learning errors. Erratic geometries misinterpret architectural logic… --chaos 80 --weird 2400 --style raw --no recognizable standard form…” The prompt structure is modular, changing only the [element] (e.g., wall, stair, column), while its parameters induce instability and openness. Words like “fractured,” “non-Euclidean,” and “misinterpreted” function as semantic disruptions, nudging the AI toward disobedience of structural and material norms. [Image 2: Comparison of prompts] Promising image fragments are selected for their instability, ambiguity, and interpretive potential. [Image 3: Annotated AI-generated fragment] Using Tripo AI, these fragments are translated into 3D meshes, resulting in erratic forms that challenge spatial legibility. [Image 4: Step-by-step development of “column” element across multiple recursive cycles] These forms are 3D-printed in PLA, where material logic introduces a second layer of distortion: sagging, surface anomalies, and incomplete resolutions become productive artifacts of digital-material tension. [Image 5: Printed PLA model showing deformations and anomalies] Rather than correcting these “errors,” the models are scanned back into digital space using photogrammetry (Scaniverse), generating point clouds that feed the next cycle of design. [Image 6: Point cloud from scan of 3D model] This recursive process, from image to mesh to material to scan, privileges evolution over refinement. The results are materially specific yet conceptually open fragments of architecture. One such fragment, a “column,” began as an AI-generated hallucination: neither classical nor structurally coherent, but a distorted mass shaped by algorithmic misreading. In translation, it became a vertically unstable figure, composed of compressed folds and broken geometries. Similarly, a “roof” emerged as a fractured shell, and a “wall” became a disrupted surface that challenges enclosure. [Image 7: Digital model for column, roof, and wall] Together, these fragments form a speculative tectonic ensemble where each part resists architectural convention. [Image 8: Diagram of recursive workflow] The project offers architects a framework to integrate failure, recursion, and instability into AI workflows, inviting design as a form of speculative discovery. [Image 9: Physical fabrication of multiple elements: columns, walls, stairs] By treating AI not as a representational tool but as a speculative collaborator, the architect shifts from author to curator.
[Image 10: Final exhibition] In doing so, Algorithmic Fracturing proposes not a product, but a procedure, one that generates open-ended architectural futures grounded in instability rather than control.
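
A schematic of the recursive loop in code; every function is a hypothetical stand-in, since Midjourney, Tripo AI, PLA printing, and Scaniverse are operated manually rather than through an API:

    # Recursive workflow sketch: image -> mesh -> material -> scan -> next cycle.
    ELEMENTS = ["wall", "stair", "column"]
    PROMPT = ("A disjointed, incomprehensible {element}, shaped by AI "
              "hallucinations and deep-learning errors. ...")

    def generate_image(prompt): ...        # text-to-image (Midjourney, manual)
    def image_to_mesh(image): ...          # image-to-3D (Tripo AI, manual)
    def print_and_scan(mesh): ...          # PLA print, then photogrammetry scan
    def select_fragment(point_cloud): ...  # curatorial judgment, not automation

    for element in ELEMENTS:
        artifact = generate_image(PROMPT.format(element=element))
        for cycle in range(3):             # evolution over refinement
            mesh = image_to_mesh(artifact)
            point_cloud = print_and_scan(mesh)
            artifact = select_fragment(point_cloud)  # seeds the next cycle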

Morphogenetic GANs: Uncovering Morphological Permanence and Adaptation via GAN Workflows

Contemporary AI techniques hold significant potential to revolutionize the study of urban form by enabling multi-layered analyses beyond traditional figure-ground mapping (i). This study asks: (1) Can CycleGAN-derived typologies uncover latent spatial logics within distinct urban fabrics of a single city? and (2) Do DCGAN-generated hybrids broaden morphological variability in ways that indicate enhanced resilience to ecological, social, and spatial disturbances? High-resolution satellite imagery and matching custom figure-ground plans were assembled for several typologically distinct urban fabrics. A CycleGAN was trained to perform unpaired image–map translation, using its cycle-consistency mechanism to ensure faithful, reversible encoding of urban morphology (ii, iii). From the decoded figure-ground outputs, we extracted each building segment’s footprint, edge complexity, and local density, forming a multidimensional feature set. A DCGAN was then trained on these features to synthesize “hybrid” morphologies that interpolate between, and extrapolate beyond, the original typologies. Its key contribution lies in operating within a structured latent space, enabling the generation of entirely new configurations that are not simple averages but speculative recombinations of spatial characteristics. These outputs reveal latent spatial logics: embedded syntheses of topographic patterns, structural rules, material attributes, and distributional densities inferred from the satellite data. Once generated, the hybrid typologies were re-translated using the trained CycleGAN, establishing a closed-loop workflow that enabled direct visual and quantitative comparison with real urban forms. This feedback loop allowed us to observe morphological shifts and decode emergent spatial reasoning shaped by the GAN’s latent structure. The hybrid morphologies exhibit a fluid, adaptive logic, challenging rigid planning frameworks and proposing alternative typologies that respect morphological permanence while supporting resilient, ecologically responsive urban development. By integrating CycleGAN’s reversible translation with DCGAN’s generative interpolation, we introduce an AI-grounded workflow that (a) reveals hidden spatial logics within a city’s varied fabric, (b) systematically expands morphological variability, and (c) produces hybrid typologies with suggestive resilience advantages. This research offers both a replicable methodological toolkit for AI in architectural and urban design and a conceptual foundation for embedding generative workflows into adaptive, ecologically responsive planning (v). Moreover, incorporating these AI-driven methods into design studios can enrich pedagogy by giving students hands-on experience with computational form-finding and resilience-focused experimentation.
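
A sketch of the footprint feature extraction described above, using OpenCV on a binary figure-ground image (buildings white, ground black); the exact feature definitions, such as edge complexity as an isoperimetric ratio, are assumptions rather than the authors' published formulas:

    import cv2
    import numpy as np

    fg = cv2.imread("figure_ground.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    built_ratio = np.count_nonzero(binary) / binary.size  # density proxy
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 10:                      # skip noise specks
            continue
        perimeter = cv2.arcLength(c, True)
        # Edge complexity: 1.0 for a circle, larger for jagged footprints.
        complexity = perimeter**2 / (4 * np.pi * area)
        features.append([area, complexity, built_ratio])

    X = np.array(features)  # multidimensional feature set for DCGAN training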

Beyond Images: Positioning Large Language Models as Collaborative Agents in Computational Design

This paper examines the evolving relationship between artificial intelligence and architectural design practice, tracing the shift from early generative adversarial networks (GANs) to contemporary large language models (LLMs). We argue that the pursuit of seamless, perfect, and photorealistic image generation is neither the sole nor the inevitable trajectory for AI in architecture. Such methods, while visually compelling, may be susceptible to derivative and predictable results, or limited applicability where relevant data is scarce or absent. Instead, we propose that positioning AI as a collaborative agent in the development of computational design strategies opens new, more robust directions for architectural innovation. This expanded approach has the potential to liberate practitioners from over-reliance on precedent, fostering advancements that draw on foundational scientific concepts and algorithmic exploration rather than recombination of existing solutions. Our research explores how LLMs offer opportunities beyond visual generation, particularly in democratizing computational design through code accessibility. Through two case studies, we investigate the potential range of this approach. At its most accessible, LLMs can bridge the gap between visual programming environments and text-based languages like Python and C#. For experienced computational designers, LLMs can assist in incorporating advanced mathematical and programmatic frameworks. The first case study is the development of a Grasshopper plug-in for landform analysis and generation. The author of this case study is an expert Grasshopper user but has only cursory knowledge of the C# language required to produce a plug-in. The second case study explores emerging multi-parameter topology optimization strategies that combine mechanical and thermal simulations. This case study is led by an expert Grasshopper user and proficient coder across both Python and C# development. The problem presented in the case study requires extensive use of partial differential equations and sparse matrix algorithms whose mathematical foundations are beyond the author’s knowledge. The methodologies for prompting, guiding, and debugging AI-generated scripts are systematically documented, with particular focus on the collaborative workflow between authors of differing coding expertise and a range of commercial LLMs. The findings are assessed for effectiveness, efficiency, and the qualitative experience of collaboration. In conclusion, our work advances an understanding of how AI’s role in architecture can extend well beyond automated image production to active participation in addressing the discipline’s material and structural realities, suggesting a shift towards AI as an interactive partner throughout the design process [1, 2].
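
To give a flavor of the second case study's territory, here is a minimal sparse-matrix solve of a steady-state diffusion problem (the standard 5-point Laplacian, assembled with SciPy): the kind of scaffold an LLM collaborator might produce, and a deliberately simplified stand-in for the authors' multi-parameter optimizer:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 50                                 # interior grid points per side
    h = 1.0 / (n + 1)
    # 2D Laplacian via Kronecker products (5-point stencil, zero boundary).
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))) / h**2

    b = np.ones(n * n)                     # uniform heat source term
    u = spla.spsolve(A.tocsr(), b)         # temperature field on the grid
    print(u.reshape(n, n).max())           # peak temperature near the center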

AI Tools and the Design Process: A Taxonomy and Evaluation of Formal/Graphic Tools

As artificial intelligence (AI) technologies rapidly evolve, their integration into architectural design practices remains uneven and often misunderstood (Bernstein, 2022). While text-based generative AI tools have seen widespread adoption, the use of formal or graphically oriented AI tools (those that generate images, spatial organizations, or construction documentation) remains in its infancy (Li et al., 2025). This paper addresses the urgent need to understand how these tools can be meaningfully integrated into the architectural design process. Design is inherently iterative, nuanced, and context-dependent (Soliman, 2017). For architects to remain active agents in shaping design, rather than passive curators of AI-generated outputs, we must develop a framework that clarifies how different AI tools align with various stages of the design process. This research proposes a taxonomy of formal AI tools, categorizing them by function and evaluating their potential roles, opportunities, and limitations within the design workflow. The taxonomy includes:
- Image Creation Tools (e.g., Midjourney, DALL·E, Adobe Firefly, RunwayML): useful for early-stage ideation but often producing overly resolved outputs that limit iterative exploration.
- Organizational Aids (e.g., TestFit, Giraffe, Maket.ai, Hypar): support programmatic and spatial planning, offering rapid feedback loops.
- Surface/Form Investigations (e.g., Evolve, Autodesk Forma, Rhino + Grasshopper with AI plugins): enable material and formal experimentation.
- Construction Documentation Tools (e.g., Swapp, Revit with AI extensions): assist in translating design intent into buildable documents.
- Analytical Tools (e.g., Cove.tool, Spacemaker AI, Autodesk Insight): provide performance feedback that can inform design decisions.
This paper evaluates how each category supports or constrains the iterative nature of design. It also explores the distinction between AI as a producer of complete design artifacts, where the architect acts as a critic or curator, and AI as a collaborative tool that supports open-ended, malleable design development. A key challenge addressed is the difficulty of controlled iteration: how to hold certain design parameters constant while varying others, a task with which current AI tools often struggle (Yoo et al., 2021). Recent research supports the growing potential of AI in design. For example, Dall’Asta and Di Marco (2024) demonstrate how machine learning can act as a creative partner in conceptual design phases. Pan and Zhang (2023) explore the synergy between AI and BIM for smart construction management. In educational contexts, AI-assisted design studios are reshaping how students engage with design tools and processes (Özorhon et al., 2025). These developments underscore the need for a structured understanding of AI’s role in design workflows. Through this taxonomy and evaluation, the paper aims to provide architects, educators, and researchers with a clearer understanding of how to strategically deploy AI tools in ways that enhance rather than replace human creativity. The goal is not to resist AI integration but to shape it, ensuring that these technologies serve the design process rather than dictate it.

Session: Mediating Cultures

Friday, September 26, 2025

Retraining the Machine: Emancipatory AI for Informality and Border Urbanism

This project examines how generative AI can be retrained to represent spatial realities that dominant architectural AI tools often overlook, mischaracterize, or erase: specifically informal, vernacular, and self-built housing in the Tijuana-San Diego border region. It emerges from the growing concern that existing generative AI platforms, trained on datasets reflecting Global North aesthetics, modernist typologies, and capital-driven priorities, reproduce a narrow architectural imaginary. Informality, improvisation, and territorial conflict across the Global South are often rendered inaccurately or aestheticized without nuance.

Motivation and Context: The Tijuana border region presents a complex interplay of architectural contradictions: platform capitalist infrastructure (e.g., Amazon, Foxconn) adjacent to informal housing, luxury high-rise towers displacing long-standing neighborhoods, and migration flows controlled by biometric surveillance and predictive border technologies. These conditions are rarely legible in AI-generated design outputs. When prompted to visualize housing or cities in Latin America, popular platforms (e.g., Midjourney) often generate images that are stereotypical, oversimplified, or falsely portray utopian or dystopian scenarios. This gap reveals a profound representational and epistemological limitation in AI’s current role in design.

Objective: This project investigates whether retraining or fine-tuning AI models (utilizing Stable Diffusion with LoRA and DreamBooth techniques) on site-specific data from Tijuana’s informal settlements can yield more accurate, situated, and politically informed architectural representations.

Challenges and Dilemmas: Key challenges included the lack of usable training data (informal urbanism is underdocumented, especially in structured formats); the ethical complexities of collecting, tagging, and reusing images of informal communities; and the steep technical learning curve associated with training AI models. There is also a more profound dilemma: how to retrain AI to be useful for spatial justice without reproducing extractive or exploitative relationships with the very communities it seeks to represent.

Explored Pathways: The project developed a workflow that includes field photography, image tagging using vernacular architectural taxonomies (e.g., reused materials, self-built additions, surface expressions), and prompt benchmarking against default AI outputs. Retrained visual models were then used to generate speculative imagery, which was analyzed in comparison to both living conditions and AI-native misrepresentations. These visual outputs are not design proposals but rather tools to expose bias and propose alternative representational logic.

Frameworks and Findings: The research engages with theories of decolonial AI, platform capitalism, and visual sovereignty in architectural media. Early results suggest that even small, retrained datasets can alter the aesthetic register and spatial grammar of AI outputs, enabling designers to perceive informal space not as a lack of design but as a complex, adaptive urban intelligence.

Transferable Knowledge: The approach offers a methodological model for other contexts where mainstream AI tools marginalize architectural knowledge. It contributes to growing efforts to critically reconfigure computational design in the pursuit of spatial justice, particularly in peripheral or contested urban environments.
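
A hedged sketch of the inference side of this workflow using Hugging Face diffusers (training itself would typically run through the library's DreamBooth/LoRA scripts); the checkpoint path and prompt are placeholders, not project artifacts:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Attach LoRA weights fine-tuned on tagged field photographs of
    # self-built housing (hypothetical local path).
    pipe.load_lora_weights("./lora-tijuana-informal")

    image = pipe(
        "self-built hillside housing, reused materials, incremental additions",
        num_inference_steps=30,
    ).images[0]
    image.save("retrained_output.png")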

Tokenizing African Dialects for Inclusive AI Urban Design: A Policy Framework for Equitable Smart City Development

Contemporary smart city developments across Africa and globally, including Eko Atlantic (Nigeria), Songdo (South Korea), and Masdar City (UAE), predominantly employ imported aesthetics and technocratic planning approaches that disconnect from local cultural contexts, ecological knowledge, and lived experiences. This disconnection manifests through Maslow's Hammer and Bandura's Social Learning Theory, where cities replicate standardized models while marginalizing vernacular architectural solutions often dismissed as "primitive." Despite their representativeness, institutions such as the Local Content Development and Management Commission, the Traditional Rulers Building, and the State Cultural Center have failed to proudly embody and project Ijaw identity, with implications for the city. Consequently, urban environments risk inflicting "moral injury" on places by transforming culturally rich spaces into homogenized environments that displace native landscape elements, cultural markers, and social affordances (Allison et al., 2024). This policy-focused research interrogates how Agentic AI can be redirected to empower inclusive urban design through systematic tokenization of African dialects and architectural knowledge systems. While no fully AI-designed cities currently exist, emerging Agentic AI capabilities (Meta's LLMs, OpenAI's GPT-4) in simulating design processes necessitate proactive policy development for equitable technology integration. Preliminary experimental engagement revealed significant AI limitations when prompted to design cities reflecting Nembe Ijaw culture (the Ijaw being Nigeria's fourth-largest ethnic group). However, a hybrid approach utilizing human-curated Nembe Ijaw architectural tokens (lexical, symbolic, and environmental elements) demonstrated rudimentary AI integration capabilities, indicating feasibility for cultural aesthetic learning through structured indigenous datasets. The theoretical framework encompasses five key principles: (1) Architecture as Language and Metaphor, positioning architecture as a cultural narrative communication tool for storytelling and heritage preservation; (2) Cultural Erosion in Contemporary Urbanism, where modern cities become aesthetically advanced yet culturally hollow (Allison et al., 2023); (3) Urbanism and SDGs alignment, advocating diversity preservation and inclusive urban futures requiring cultural input into AI systems; (4) Tokenization as Preservation and Value Creation, recognizing dialects and cultural data as tokenizable assets with preservation and economic potential (Tasca, 2023); and (5) Agentic AI and Local Knowledge Integration, enabling autonomous planning incorporating African design values through appropriate cultural tokens (Greyling, 2025). The research question addresses: What policy frameworks and technological strategies are required to tokenize African dialects and architectural heritage for effective integration into AI systems supporting culturally inclusive and sustainable smart cities? Methodology employs a two-phase hybrid approach. Phase one prompts generative AI models to visualize African cities, documenting limitations and misrepresentations. Phase two curates a comprehensive token library from Nembe Ijaw dialect and architectural heritage, integrating these tokens into prompt structures to assess cultural alignment improvements.
The proposed policy framework comprises three core components: (1) Linguistic and Architectural Tokenization Hubs for systematic cultural data collection and processing; (2) Agentic AI Ethics and Inclusion Guidelines ensuring equitable AI development practices; and (3) Smart City Cultural Equity Impact Assessment (CEIA) for evaluating cultural preservation in urban planning processes. This framework aligns with UN SDGs 9, 10, and 11 by promoting equitable digital transformation. Given Nvidia CEO Jensen Huang's (2025) projection of AI's $100 trillion global economic value by 2050, cultural inclusion in AI systems becomes essential for Africa's strategic participation in the emerging digital economy while preserving indigenous knowledge systems and cultural identity.
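
A minimal sketch of how phase two's token integration might be operationalized; the token entries below are invented placeholders for illustration, not verified Nembe Ijaw terms:

    # Human-curated token library assembled into a structured prompt.
    TOKEN_LIBRARY = {
        "lexical": ["<nembe:term-for-waterfront-dwelling>"],
        "symbolic": ["<nembe:canoe-prow-motif>", "<nembe:masquerade-pattern>"],
        "environmental": ["stilt construction over tidal creeks",
                          "mangrove-adapted drainage"],
    }

    def build_prompt(base: str, library: dict) -> str:
        """Append curated cultural tokens to a base design brief."""
        tokens = ", ".join(t for group in library.values() for t in group)
        return f"{base}. Cultural tokens: {tokens}"

    print(build_prompt("A riverside civic quarter for a Niger Delta city",
                       TOKEN_LIBRARY))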

Embodiment and Expression in Artificial Intelligence: Critical Interventions by Women Artists

Fifty Years of Artificial Intelligence (2007), edited by Max Lungarella, Fumiya Iida, Josh Bongard, and Rolf Pfeifer, commemorates the 50th anniversary of the 1956 Dartmouth Conference, widely regarded as the birth of artificial intelligence as a formal discipline. This volume reflects on AI’s evolution through an interdisciplinary lens, highlighting not only technological advancements but also the shifting epistemological and cultural frameworks that have shaped AI research. It positions AI as both a scientific and cultural phenomenon, emphasizing the importance of embodiment, cognition, and perception in understanding intelligence, human or artificial. Several contributors explore the intersection of creativity, aesthetics, and AI, arguing that intelligence involves more than logic and computation; it also encompasses interpretation, intuition, and emotional resonance, qualities traditionally linked to the arts. Discussions of robotics and sensory-motor experience often invoke performative and visual dimensions reminiscent of dance, theater, and visual art. The use of simulation, metaphor, and narrative is presented as central to AI research, suggesting that the aesthetics of form, movement, and expression play a crucial role in how intelligent systems are conceived, represented, and interpreted. This focus on aesthetics resonates strongly with the work of several women artists who engage critically and creatively with AI, technology, embodiment, and perception. Lynn Hershman Leeson, Rebecca Allen, Nina Sellars, Stephanie Dinkins, and Adrianne Wortzel exemplify this approach, treating AI not merely as a tool but as a medium and collaborator that probes questions of identity, communication, and the theatricality of intelligence. Hershman Leeson’s pioneering project Agent Ruby (2002) features an evolving AI chatbot that explores digital personhood through linguistic interaction, emotional tone, and performative expression. Rebecca Allen’s digital artworks translate human movement into graceful algorithmic forms, blending the organic and synthetic, while Nina Sellars’ collaborations, including The Blender (2005), use biological materials and conceptual frameworks to investigate human-machine hybridity on molecular and artistic levels. Stephanie Dinkins focuses on race, identity, and social justice through AI-driven dialogical works such as Conversations with Bina48, an AI modeled on a Black woman. Her art reveals how AI’s expressive forms are culturally conditioned and how they can be reclaimed to challenge prevailing narratives. Adrianne Wortzel’s installations, like Eliza Redux, revive early AI chatbots within performative contexts, exposing the emotional gaps and uncanny mimicry in human-machine interaction, while engaging audiences in reflections on agency and empathy. Together, these artists demonstrate that AI is fundamentally an aesthetic and affective field, where form, movement, and expression are inseparable from human values and identities. Their work underscores the necessity of integrating artistic inquiry into AI’s future, ensuring that the development of intelligent systems remains deeply connected to cultural meaning and human experience.

Decoding Industrial Park Urbanization: AI-Driven Spatial Analysis of Complex Urban Patterns in Southeast Asia

This paper focuses on the adoption of big data visual representation and AI-driven semantic interpretation to study complex urban patterns in industrial parks recently developed in Southeast Asia as part of a broader phenomenon of industrial delocalization. In these countries, industrial parks are the DNA of contemporary urbanization, generating urban mobilities, attracting capital and labor, and providing viable infrastructure for new urban hubs in remote places. Manufacturing buildings in these high-intensity productive spaces establish spatial relations with facilities fundamental to support not only modern production but also contemporary urban life. Such intertwined relations between production, logistics, research and entertainment generate hybrid urban structures that unveil industrial enclaves as urban infrastructure supporting diverse economic and social activities. However, despite their critical role in shaping contemporary urban phenomena, industrial parks remain largely overlooked in architecture and planning discourse. Conventional urban analysis tools struggle to capture the complex dynamics of such formal and informal urban systems. Even when cited in academic literature with labels such as "smart," "eco," or "innovative," their development rarely addresses intertwined relationships between global challenges and local community development. This research demonstrates how big data, particularly Location-Based Social Network (LBSN) data, multispectral satellite imagery analysis, and AI-powered interpretation systems can reveal these large-scale urban structures' ability to attract people, affect movement patterns, and generate economic and social impact. Using a comprehensive database of Southeast Asian industrial parks, the study analyzes 39 years of satellite imagery (1986-2025) combined with LBSN data to quantify spatial configuration patterns and socio-economic activities. The research introduces the Production Isolation Index, distinguishing between productive impervious surfaces (manufacturing, logistics, retail, leisure) and passive surfaces (parking, buffer zones), revealing spatial relationships between industrial function and urban integration. This methodology reveals meaningful differences and commonalities in global spatial design characteristics and local adaptations, assessing accessibility, circulation, programming, density and socio-economic differentiation. The core innovation lies in a scalable AI-powered spatial consultant system utilizing trained Natural Language Processing models that synthesize complex spatial datasets to provide real-time, evidence-based assessments. This system answers complex planning questions such as "Which parks demonstrate optimal walkability?" or "Identify parks with highest socioeconomic diversity potential" by correlating multi-dimensional spatial data into actionable insights. The AI consultant demonstrates three key capabilities: comparative analysis ranking parks across metrics with evidence-based justifications, predictive assessment of new developments, and dynamic learning that improves recommendations as new parks are analyzed. This capacity is especially vital where urban system complexity requires advanced analytical instruments capable of real-time decision support for planning scenarios.
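
One plausible reading of the Production Isolation Index, sketched in Python; the land-cover class codes and the formula itself are assumptions, since the abstract does not publish the index's definition:

    import numpy as np

    # Classified impervious-surface raster: codes 1-4 productive (manufacturing,
    # logistics, retail, leisure), 5-6 passive (parking, buffer zones), 0 pervious.
    landcover = np.random.randint(0, 7, size=(512, 512))  # stand-in for real data

    productive = np.isin(landcover, [1, 2, 3, 4]).sum()
    passive = np.isin(landcover, [5, 6]).sum()
    isolation_index = passive / (productive + passive)  # 0 = fully productive
    print(f"Production Isolation Index: {isolation_index:.2f}")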

Artificially Imagining the Threshold: A Typological Case Study of Generative AI Responses to Urban Segregation

The satellite image of Caracas, Venezuela, showing a clear boundary between the formally planned district of La Urbina and the sprawling informal settlement of Petare, remains one of the most powerful visual indictments of global urban segregation [1], [2], [3]. Profound spatial divides like these, resulting from complex socio-economic forces, represent a persistent challenge for planners and architects [4]. Today, the rapid emergence of generative artificial intelligence (GAI) introduces innovative tools into the design process that could support brainstorming and enrich dialogue regarding complex urban conditions [5], [6]. However, the potential of adapting GAI's image-filling and editing capabilities to urban fabric studies remains largely unexplored [6], [7], [8]. This study investigates whether contemporary generative AI tools, using satellite imagery alone, can produce visualizations that meaningfully inspire design strategies for these intricate urban contexts. To explore this, we employ an exploratory approach to examine how generative AI tools interpret and intervene in urban segregation scenarios, using the La Urbina–Petare divide as a representative example. This research methodology focuses on comparative analysis among leading GAI platforms. Starting with the iconic Caracas image, we remove central dividing elements, specifically the highway and topography separating formal and informal urban fabrics. This void then serves as a prompt for three distinct AI models: Midjourney, Adobe Firefly, and Leonardo AI, testing various settings where available. Utilizing generative fill and inpainting functionalities, each GAI platform is tasked with proposing new urban conditions for the intervening space. This process generates a rich visual corpus of architectural and urban proposals, subsequently categorized through qualitative typological analysis to identify recurring patterns and biases. Our findings reveal that despite their technical sophistication, the GAI models default to a limited set of strategies. We identify four primary typologies of AI-generated solutions: (1) The reinforced boundary, in which the AI substitutes the old barrier with new structures like monolithic walls, canals, or forests, thereby reinforcing segregation; (2) The monolithic mediator, proposing singular, large-scale mega-structures, such as utopian campuses or mega-buildings that represent top-down, often alienating interventions; (3) The fabric extension, where the AI extends the pattern of one urban condition into the other, effectively choosing sides rather than fostering integration; and (4) The algorithmic anomaly, producing abstract or surreal artifacts indicative of the AI’s failure to interpret the underlying urban context. The study identifies a limitation in current GAI models regarding their capacity to comprehend and interpret urban satellite imagery meaningfully. Based on this specific analysis, current GAI-assisted design practices appear oriented more toward coherent pattern extension than toward mediation or integration. Furthermore, two notable tendencies emerged: the repetition of generic urban scenarios common worldwide, and biases suggesting that AI models learn from their training data that separation is a fundamental organizing principle of urban environments. These findings raise concerns about relying solely on such tools for ideation in scenarios of urban segregation. Consequently, the indispensable role of human designers as critical curators within the design process is emphasized.
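
The three platforms were driven through their own interfaces; an open-source analogue of the generative-fill step, masking the removed highway and topography and inpainting with Stable Diffusion via diffusers, would look roughly like this (file names and prompt are placeholders):

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    satellite = Image.open("caracas_la_urbina_petare.png").convert("RGB")
    mask = Image.open("divide_mask.png").convert("L")  # white = area to fill

    result = pipe(
        prompt="continuous urban fabric stitching formal and informal "
               "districts together, aerial view",
        image=satellite,
        mask_image=mask,
    ).images[0]
    result.save("proposal.png")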

Session: Augmented Imaginaries

Friday, September 26, 2025

Do We Dream of Electric Intelligence? Generative Tools and Design Thinking

Few would argue that artificial intelligence and its associated tools have not shaken the design disciplines. Generative text, image, and video tools are actively disrupting established modes of architectural design, especially in academic settings where the canon is often hesitant to evolve. While AI technologies have disrupted established workflows, they also open new opportunities to expand design methodology, particularly within design pedagogy. This paper offers an exploration of generative AI as a pedagogical tool in the academic design studio. In an undergraduate architecture studio, students engaged directly with prompt engineering and text-to-image generation using AI tools such as ChatGPT and Midjourney. Rather than positioning AI as a shortcut or productivity enhancer, the studio framed these tools as speculative catalysts to provoke exploration, visual strategies, and design narratives. Studio projects began with the development of an “artistic subjectivity,” a fictional creative persona that guided conceptual exploration. These personas became lenses through which students crafted evocative prompts, refined them using AI text tools, and translated them into synthetic imagery via text-to-image generation. The resulting images were not treated as architectural proposals, but as visual design narratives, termed here compositional and cultural provocations, that students analyzed and interpreted to generate architectural ideas and opportunities for further design inquiry (Figures 1-3). This process emphasized opportunities for creative thinking, positioning AI as an open-ended design ally rather than a deterministic tool. The synthetic visuals served as launching points for drawing, material experimentation, programmatic studies, and spatial proposals (Figure 4). As students moved from AI-generated images to physical models and architectural drawings, they explored how synthetic content could shape narrative, atmosphere, and spatial form (Figure 5). Grounded in theories of posthumanism and digital culture, the studio examined how emerging technologies could productively disrupt traditional design workflows. It raised questions that remain open for pedagogical exploration: What kinds of architectural ideas become possible when the design process begins with narrative and image, rather than form and function? How might AI tools prompt new modes of thinking and making in the academic studio? What are the limitations of these tools? Ultimately, this paper argues that AI tools, when introduced with care, critique, and creativity, can empower student thinking, expand the range of architectural speculation, and reframe the studio as a space of conceptual openness. Drawing from specific examples of student work, it proposes that AI expands, rather than inhibits, architectural thinking. In doing so, it contributes to ongoing conversations about how emerging tools are not only reshaping the future of architectural practice, but also transforming how we teach design.

From Words to Forms: AI-Augmented Poetic Translation in Architectural Meanings

The act of translation between words and space emerges from the heart of architectural contemplation. This paper explores AI as a conceptual and creative tool for inspiring the poetic translation of architectural meaning through text-form interactions. It investigates how AI encourages architectural design, pedagogy, and research to translate historical, cultural, and aesthetic concepts into architectural semantic representations. The essence of translation emphasized by this study articulates theoretical and poetic embodiment, going beyond visual forms. The current design gap between textual expressions and architectural articulations requires the act of poetic translation, which presents a challenge for designers and early-stage architecture students. Drawing from hermeneutics, the paper argues that translation constitutes a creative and interpretive gesture equal in importance to original invention. In this light, AI functions as a thinking partner that participates in the translational process to seek the poetic opening. Jacques Derrida’s notion of différance, with its concept of deferral, suggests the postponed and unfixed meanings behind words. Rather than reproducing pre-existing literal structures and signs of language, the act of translation traces this delay and conveys the interpretative, metaphorical, and perceptual resonance of meanings and context. As a mediator of languages, translation involves intralingual and interlingual senses of interpretation with the potential of history, culture, and poetry. In Martin Heidegger’s phenomenology, language provides the semantic space and ground of poetry and thinking, as the house of Being, to reveal the world. Translation connects with the domain of language to enter the threshold of poetic openness and imaginative interpretation and to uncover the essence of the horizon. State-of-the-art AI tools engage with significant text-image representation, enabling the theoretical and design emergence of translation. Deep-learning neural networks coincide with the linguistic representation of dimensional semantic modeling to translate and interpret words into architectural language with context and spatial meanings. This paper employs machine-learning tools, specifically the Self-Organizing Map (SOM) and Stable Diffusion (SD), within a three-step design workflow: textual analysis, image generation, and design inspiration. Collected from historical, philosophical, and literary narratives, a text database is established to reflect abstract in-text architectural ideas, such as “poetic space.” The SOM trains on and maps a lower-dimensional representation of the text data, identifying hidden structures and similarities. After the text mapping, SD transforms the resulting textual clusters into new imagery representations with embedded architectural meanings and latent openings of poetry. The study then anticipates reinterpretation of the AI-generated images to stimulate design inspiration. This dual process highlights the dialogical relationship between humans and machines, where AI offers results and prompts for reflection, critical engagement, and creative speculation. This research proposes that AI exhibits itself as a design interlocutor that supports students, educators, designers, and researchers in navigating the poetic relationship between words and forms.
By framing architectural translation as an interpretative, creative, and reflective process rather than a technical conversion, the paper opens new possibilities for design and research pedagogies. In doing so, it invites us to reconsider how we teach, interpret, and design meanings in an age of algorithmic imagination.
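To make the three-step workflow concrete, the fragment below is a minimal sketch of the textual-analysis stage: short texts are embedded and mapped with a SOM so that a cluster's members can seed a Stable Diffusion prompt. The library choices (sentence-transformers, MiniSom), the grid size, and the training settings are illustrative assumptions; the paper does not specify its implementation.

```python
# Sketch of the SOM text-mapping stage described above. The embedding
# model, SOM dimensions, and training settings are illustrative assumptions.
from minisom import MiniSom                              # pip install minisom
from sentence_transformers import SentenceTransformer    # pip install sentence-transformers

texts = [
    "poetic space dwelling in the threshold of light",
    "the house of Being reveals the horizon of the world",
    "memory and shadow inhabit the attic and the cellar",
]
embed = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embed.encode(texts)                 # (n_texts, 384) numpy array

som = MiniSom(4, 4, vectors.shape[1], sigma=1.0, learning_rate=0.5)
som.train(vectors, num_iteration=500)

# Group texts by their winning SOM node to expose hidden similarities;
# a cluster's members could then form a combined Stable Diffusion prompt.
clusters = {}
for text, vec in zip(texts, vectors):
    clusters.setdefault(som.winner(vec), []).append(text)
for node, members in clusters.items():
    print(node, "->", " / ".join(members))
```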

Ghostwritten: On Writing, AI, and Architectural Education

When I grade student work in Architectural Discourses, a Writing-Intensive Course (WIC) in our second-year architecture curriculum, I often feel as though I’m responding to a machine. The students are present, but the voice is not theirs. I’m left offering feedback on AI-generated content, syntactically clean but semantically hollow, unable to engage the student as a thinking author. This shift is not hypothetical. It is widespread and, at present, largely unregulated. In a discipline that often blends design with discourse, the question of how to teach writing in architecture is both crucial and unresolved. What are the writing conventions appropriate to our field, and how do we teach them to undergraduates when generative AI can mimic tone, structure, and citation? In Architectural Discourses, students develop a critical review of a single architectural exhibition across sixteen weeks, building their argument in response to weekly lectures, curated readings, and discussions. The course is designed to foster deep disciplinary engagement, one that is interpretive, iterative, and reflective. But in practice, students increasingly circumvent that process. They submit work that cites readings they clearly haven’t read and rely on language patterns common to AI writing: generalized praise, empty contrasts, and vague declarations of “challenging convention.” Rather than frame this as a disciplinary crisis, I treat it as a pedagogical turning point. This paper asks: How might AI be addressed in architectural writing instruction without defaulting to prohibition or passive resignation? What does it mean to write in architecture, not just about it? And how can WIC pedagogy evolve to preserve student voice, cultivate disciplinary fluency, and meet this new threshold in technological mediation? The paper begins with a critical history of the WIC designation, tracing its emergence through the Writing Across the Curriculum (WAC) and Writing in the Disciplines (WID) initiatives of the 1970s–90s. It then grounds that lineage within the specific context of architectural education, drawing on theory and criticism to reflect on architecture itself as a form of writing. From there, I turn to recent classroom experiences, the erosion of shared policies around AI-generated work, and the broader precarity of architectural criticism amid the predominance of screen-based discourse. As the course coordinator, a design critic, and former editor of an architectural journal, I conclude with a draft AI policy and curricular strategy that introduces AI as both a tool and a topic of architectural discourse. Rather than gatekeeping the writing process, the goal is to reframe authorship as a responsibility not to originality, but to discernment, judgment, and voice.

AI, Quantum Physics and 18th Century Revolutionary Drawings: Projecting the Past in Multiple Futures

“Not only is the universe stranger than we think, it is stranger than we can think.” – Werner Heisenberg. As part of my evolving design studio teaching methodology, architectural drawings from the revolutionary period in France, Italy, and England serve as literal sites for architectural production. These magnificent works by visionaries such as Claude-Nicolas Ledoux, Jean-Jacques Lequeu, Étienne-Louis Boullée, and Giovanni Battista Piranesi are not merely historical artifacts; they are quantum fields of potentiality, interrogated to project their latent possibilities into future architectural paradigms. Drawing inspiration from quantum mechanics, we treat these drawings as superpositional states, existing simultaneously in multiple configurations until observed through our creative interventions, collapsing into new forms of architectural reality. This approach allows us to “split the universe,” generating parallel worlds of design that resonate with the probabilistic nature of quantum systems. Our method can be likened to a quantum entanglement of genres, where the theoretical frameworks of Mikhail Bakhtin (his concepts of heteroglossia, trans-linguistics, the chronotope, and the carnivalesque) act as additional inputs into our projective process. Just as quantum particles interact across vast distances, these diverse intellectual strands resonate with the architectural drawings, creating a rich, dialogic matrix for generating novel spatial narratives. In my graduate-level design studio, I have led numerous projects, both individually and collaboratively with students, that leverage these quantum-inspired methodologies. By integrating artificial intelligence tools such as Grok (created by xAI), Midjourney, and other generative platforms, we introduce new variables into the design process, akin to quantum perturbations that destabilize classical systems and open a multiverse of possibilities. These AI tools serve as observational instruments, measuring the superpositional states of our inputs (historical drawings, Bakhtinian theory, and quantum metaphors) and collapsing them into tangible outputs. Artificial intelligence, while powerful, is not the sole driver of these explorations. Its role is akin to a quantum computer simulating complex systems: it amplifies our ability to navigate the probabilistic landscapes of our methodology but remains subordinate to the human imagination. Our work transcends the deterministic outputs of AI, embracing the uncertainty principle of quantum mechanics, where the act of creation inherently alters the system being created. The results of this methodology are varied and pique curiosity. Superimposed upon these timeless drawings, our projects manifest as new worlds, animated by the quantum interplay of past and future. We have produced films that capture the temporal collapse of centuries, tone poems and 3D prints that resonate with the vibrational frequencies of quantum states, and what we call “drawdels,” a term borrowed from Thom Mayne of Morphosis to describe a dynamic where drawings and models intertwine. In essence, our work positions architectural production as a quantum experiment, where historical drawings are not static relics but active participants in a multiverse of creative potential. By interrogating these drawings through the lenses of quantum mechanics, Bakhtinian theory, and AI-driven exploration, we are not only bending genres but also bending the very fabric of architectural reality. The presentation in Boston will be profusely illustrated, as evidenced in the graphic submissions accompanying this abstract.

Session: Sensation & Access

Friday, September 26, 2025

Striking a Pose: Using Pose Estimation Enhanced Computer Vision AI Models to Measure Public Life

Understanding the physical and social dynamics that contribute to successful public spaces is a central challenge for urban planners and designers. As William H. Whyte observed, “The best way to know how people are using a space is to simply watch them.” His study, The Social Life of Small Urban Spaces (1980), pioneered the use of systematic observation and emerging technology to quantify human activity in public spaces. Traditional methods such as direct observation, interviews, and recorded video analysis (Lynch, 1964; Whyte, 1980; Gehl, 2011) have long provided valuable insights into human behavior in urban settings. However, these approaches are time-consuming, labor-intensive, and often constrained to specific time frames, limiting their scalability. The increasing accessibility of open-source computer vision (CV) algorithms has opened new possibilities for automating the observation of public life. Yet despite these advancements, most existing CV sensors remain designed for transportation-related applications, such as tracking vehicle and pedestrian flow, rather than capturing the more nuanced ways people engage with public space. A key limitation is the inability of many current models to accurately detect activities like sitting or socializing. Additionally, much of the urban-focused CV research relies on static street view imagery (SVI), which, while useful for large-scale assessments of the built environment, fails to capture the dynamic and temporal qualities of public life. In other words, such studies analyze existing, often outdated images rather than live or up-to-date visual data. To address these limitations and build on prior research (Williams et al., 2019), this project developed the PLSK, an open-source tool specifically designed to measure public life. Tested in a public space improvement project in Sydney, Australia, PLSK was benchmarked against traditional human observations and a commercially available transportation-focused CV sensor, the Vivacity sensor. The results showed that PLSK not only provided greater accessibility and accuracy in tracking pedestrian movement, sitting, and social interactions but also enabled a data-driven approach to evaluating how design interventions shape public behavior. By offering a scalable, open-source alternative to existing methods and enhancing the computer vision model with pose detection, PLSK empowers architects, urban designers, and planners to test the efficacy of their designs in situ, gaining timely, actionable insights into how people engage with and inhabit public space.
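The abstract does not detail PLSK’s classification logic, but pose-enhanced pipelines of this kind typically reduce to heuristics over detected body keypoints. The fragment below is a minimal, illustrative sketch of such a heuristic, assuming COCO-style keypoint output from an upstream pose estimator; the joint indices follow the standard COCO layout, while the ratio threshold is an assumption.

```python
# Illustrative posture heuristic over COCO-style keypoints (x, y, score).
# PLSK's actual sitting/standing logic is not specified in the abstract;
# this only sketches how a pose-enhanced CV sensor could distinguish them.
COCO = {"left_hip": 11, "right_hip": 12, "left_knee": 13,
        "right_knee": 14, "left_ankle": 15, "right_ankle": 16}

def classify_posture(keypoints, ratio_threshold=0.35):
    """keypoints: list of 17 (x, y, score) tuples from a pose estimator."""
    y = lambda name: keypoints[COCO[name]][1]
    hip = (y("left_hip") + y("right_hip")) / 2
    knee = (y("left_knee") + y("right_knee")) / 2
    ankle = (y("left_ankle") + y("right_ankle")) / 2
    leg_span = ankle - hip                 # image y grows downward
    if leg_span <= 0:
        return "unknown"
    # Seated figures show a small hip-to-knee drop relative to the whole
    # leg; standing figures place the knee roughly halfway down the leg.
    return "sitting" if (knee - hip) / leg_span < ratio_threshold else "standing"
```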

Exploring Generative AI Workflows for Public-Sector Urban Design Visualization

This project investigates the potential of accessible generative AI tools to augment early-stage urban design visualization workflows in public-sector contexts. It will be undertaken as part of a design guidelines and form-based code study for a major urban waterfront district in the United States, commissioned by a government development agency. The project team, composed of two School of Architecture faculty members with part-time commitment, will explore whether recent advances in AI-based image synthesis can enable rapid, effective visualization of massing-based urban scenarios without the overhead of conventional high-end rendering pipelines. A basic 3D massing model of the study area will be developed using GIS and modeling software. Rather than investing in detailed model refinement or advanced game engine workflows, the team will test a series of generative AI workflows layered onto base massing views. Tools such as Midjourney, Magnific.ai, ComfyUI, Kling, and Runway will be explored in combination with controlled base imagery and iterative prompt strategies. The aim is to produce visual outputs that can facilitate more immediate understanding and productive dialogue among client stakeholders and the broader community. This work builds on emerging precedents in the field, ranging from research center explorations to expert design studio practices. In contrast, the present project will operate under the typical constraints of many public-sector design efforts: limited team size, partial time commitment, and reliance on off-the-shelf tools. The objective is not to replicate state-of-the-art results, but to evaluate what level of visual fidelity and communicative value can be achieved within these common practice realities. The project will aim to demonstrate that AI-enhanced imagery, even when derived from simple massing models, can enhance the clarity, appeal, and communicative power of early urban design visualizations. The process is also expected to surface important limitations and risks, including challenges in visual control, representation accuracy, and interpretive bias. By documenting this case study, the authors intend to contribute practical insights to the growing discourse on integrating generative AI into urban design and architectural practice. The findings are expected to inform how accessible AI tools can support more inclusive and responsive public design processes, particularly in settings where conventional visualization resources are constrained.

Fostering Community Engagement through Immersive Visualization of Urban Animal Lives

The 'Understanding Animals' project originates from a recognition that animal behavior and human-animal relationships are essential elements in the design of inclusive and ecologically resilient urban environments. A key objective of the project has been to visually communicate patterns of animal behavior and relationships between people and animals, making animal lives more visible and better understood in design discourse and in discussions with community partners. Participatory design and community engagement projects in architecture can leverage immersive technologies such as 3D interactive environments and virtual reality (VR) to increase accessibility and inclusivity, employing game engines like Unreal and Unity to enable community stakeholders to experience and interact with proposed designs or to communicate design knowledge. Studies have shown that these methods can improve comprehension of the spatial and aesthetic impacts of design (1), support participatory engagement (2), and enhance empathy for the experience of non-human animals (3). Our project explores the interaction between humans and artificial intelligence (AI) with the goal of integrating knowledge about animal behavior and needs into an AI-informed design process. We have employed predictive and generative AI methods in the classification of image data (figure 1: bounding boxes), the interpretation of patterns of animal behavior (figure 2: graph of sightings over time; figure 3: close encounters between people and animals and among animals), and the production of synthetic 3D environments for immersive visualization and communication (figure 4: Blender workflow for creating synthetic animal images). This approach aims to provide a deeper understanding of the social and ecological dimensions of animal lives and human-animal relationships, and to offer a framework for engaging with community partners around design with animals. In this work we build on research investigating the benefits of immersive virtual environments for embodied environmental understanding (4) and the use of AI to convincingly simulate animal behavior (5). In the data collection phase of the project, we employed a multi-method approach spanning ecological, social, and design data. Methods included automated photographic documentation of urban animals, a survey of community experiences and attitudes toward wild urban animals, and terrestrial LiDAR scanning of woodlands as the basis for building interactive 3D environments. Machine learning played a crucial role in processing over one million captured images to identify and classify animals, and AI integrated in a game engine has been used to establish an AI-mediated relationship with animals in a simulated 3D world based on procedural (computer-generated) and LiDAR-scanned urban woodland environments. This paper presents initial outcomes of our work with immersive visualization and the integration of AI models in the 3D environment to communicate experiential qualities of animal lives in the urban environment. Starting with rigged 3D models of non-human animals, we developed methods for realistically animating the models and introducing AI models as avatars in the 3D environment. The aim of this work has been to provide a more intuitive and immediate method for communicating complex ecological and social data with community partners, contributing to a deeper understanding of animal lives and human-animal relationships in the urban context.
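The abstract does not name the model used to process the million-plus camera images. As a minimal sketch of that filtering stage, the fragment below uses an off-the-shelf COCO-trained detector from torchvision as a stand-in; the animal category list, score threshold, and folder name are illustrative assumptions.

```python
# Sketch of a camera-trap filtering pass using an off-the-shelf detector.
# The project's actual classifier is unspecified; a COCO-trained model is
# used here as a stand-in, and the threshold/categories are assumptions.
from pathlib import Path
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]
ANIMALS = {"bird", "cat", "dog", "horse", "sheep", "cow", "bear"}

@torch.no_grad()
def animal_detections(image_path, min_score=0.7):
    img = read_image(str(image_path))        # uint8 CxHxW tensor
    pred = model([preprocess(img)])[0]       # boxes, labels, scores
    return [(labels[int(l)], s.item(), b.tolist())
            for b, l, s in zip(pred["boxes"], pred["labels"], pred["scores"])
            if s >= min_score and labels[int(l)] in ANIMALS]

for path in Path("camera_trap_images").glob("*.jpg"):
    hits = animal_detections(path)
    if hits:                                  # keep images containing animals
        print(path.name, hits)
```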

AI-ACCESSIBILITY: reimagining the future of multi-sensory space-making

More than 1.3 billion people globally report living with a disability, including 61 million adults (26% of the national population) in the United States. For individuals with impairments of vision, mobility, and cognition, who may also be aging in place, living independently, or engaging in self-care, among the greatest obstacles to meaningfully participating in society is the built environment’s unresponsiveness to alternative sensory abilities and modes of interaction. Since the adoption of the Americans with Disabilities Act (ADA) in 1990, the prevailing strategy for accessible space-making has been the retroactive application of bare-minimum, code-compliant “solutions” designed primarily for technologically unaided navigation. Considering architecture’s charge as a full-body, multi-sensory practice, and that more than 90% of disabled individuals now use enhanced assistive devices, including AI-empowered smartphones, to interact with the built environment, we maintain that these solutions are both inadequate and outdated. In the 2024-2025 academic year, an interdisciplinary research team including faculty and student researchers from the schools of architecture, computer science, and human sciences at [affiliation placeholder] was awarded internal and external funding to design and test building elements which maximize the functionality of artificially intelligent navigation aids for individuals with disability. Building on their prior research, which focused on the development of an AI-enhanced mobile navigation app for individuals with vision impairment, the team has partnered with disability leadership from local and national non-profits, government agencies, and private companies to assess and improve real-time wayfinding, spatial perception, and multi-sensory interaction in high-impact settings. This paper will present the methods and outcomes of the team’s research, elucidating our approach to community-engaged scholarship and evaluating the accessibility of our design proposals based on user feedback. Our methods, which are strongly aligned with ACSA’s focus on social equity, emphasize the inclusion of historically underrepresented communities across multiple constituencies of disability via participatory workshops and post-use surveys, and the inclusion of scholars and students with disability on our team to help shape and evaluate the impact of our research. The first round of prototypes was developed in collaboration with the vision-impaired membership of the [partner placeholder] lodge, a private nature resort for sighted and visually-impaired vacationers, whose principal accommodation is a paved walking path and half-mile-long stainless steel guiderail used by guests to independently traverse the property and experience its wild woodlands and lakeshore. The design team reimagined the multi-sensory potential of a memory garden adjacent to this path by testing and installing inclusive and AI-accessible finishings and features, including grooved surfaces and high-contrast edging, 3D-printed planters for aromatic plants with embedded QR codes for digital recognition, and ergonomic handrail additions including legible Braille lettering. The second round of prototypes is currently being developed with [partner placeholder], a state agency providing mobility training and environmental assessments to individuals with vision impairment.
Five commonly-challenging spatial experiences – a residential walkway / threshold, hospital corridor, retail bay, sidewalk / crosswalk, and exhibition / tourist space – will be optimized for non-visual AI navigation and featured as immersive exhibits at [affiliation placeholder’s] premier exhibition space this spring.

Session: Materializing Possibilities

Friday, September 26, 2025

Phygital Fabrications: Weaving AI and Material Logics into Architectural Pedagogy

Artificial Intelligence (AI) has become common as a design generation and ideation tool. Although it still faces many constraints in creating full-fledged architectural designs, it is easy to chat with to come up with ideas and to generate glossy design images. Yet many of these fail to be fully functional designs. With smaller, more focused architectural material designs, however, AI has the potential to develop patterns and directions fully enough to produce actual products. This research is explored through an undergraduate architectural design seminar course titled Phygital Fabrications. The course positions AI not as an end, but as a means for testing and generating ideas of pattern and material logic, pursued through the translation from digital to physical forms. The term “phygital” underscores this hybrid approach, emphasizing the iterative relationship between AI generation and the physical fabrication of materials. Students in this course engaged with AI-driven image and text generation tools, such as DALL-E text-to-image diffusion models as well as other pattern classifiers, to develop textile patterns that have a historical basis while also building on new aesthetic logics and experimentation. These patterns are then realized through hand weaving, and later with digital fabrication tools such as CNC weaving and 3D printing. Throughout, AI is treated not as the final output but as inspiration for physical creation. The focus is on materiality and on reinterpreting design as a bottom-up process, attending to material logic and the physicality of parts to whole. This emphasizes the importance of material-driven design and how it can further ideas of spatial quality and architectural design. The course focuses on textiles as a means to think about patterns, assembly, and surfaces, drawing parallels between these assemblies and larger architectural tectonic design thinking. Students learn through the making and manipulation of patterns, not just generating images with AI but creating instructions for fabrication and assembly. With AI used as an inspirational tool rather than a final result, the physical fabrics produced are the tangible outcomes. This opens discussion about authorship, material agency, and the digital versus physical worlds. As the use of AI continues to expand, it is necessary to confront how it can begin to translate into our physical environment. Although technology enables ever-faster production, Phygital Fabrications works between the digital and physical worlds to advocate a slower back-and-forth process and a more embodied, materially aware approach to situating AI in our built environment.

AI & Found Objects: Fabrication Experiments from Generative AI’s Misinterpretations of Materials

Currently, generative AI image models lack an understanding of material properties, making it inherently difficult and problematic to translate their 2D image outputs into physically fabricatable 3D objects. This project addresses this challenge by (1) analyzing the ways in which generative AI systematically misrepresents physical materials and (2) proposing an alternative design workflow that uses these misinterpretations as generative catalysts for design and fabrication. In contrast to the standard human-AI co-design workflow that begins with text/image input and results in a finalized 2D/3D output to be fabricated as accurately as possible, this methodology positions generative AI as a conversational partner with which the human designer negotiates in real time, clearly acknowledging its lack of disciplinary understanding of material tectonics. The workflow uses a live webcam connected to a real-time AI image generator, in front of which the designer manipulates scraps of found objects as physical models. The designer then reacts in real time to the AI’s generative video-like outputs as part of an iterative design process that the author can revise, depending on their desired output and the constraints of CNC-milled wood fabrication. This live AI video methodology is demonstrated through a case study of the ideation-to-fabrication of a furniture design: a 3-way transformable bench. The study unfolds in three phases: Investigate – a systematic analysis of AI’s tendencies in material (mis)perception through case studies using KREA.ai's real-time generation, documenting the types of discrepancies between AI outputs and real-world material constraints. Ideate – the designer uses a webcam connected to an AI image generator, in front of which the designer manipulates physical models and scraps of wood and cardboard, and observes in real time the AI tool’s generative video-like outputs. This real-time feedback loop is used to design and fabricate a furniture piece, a transformable wooden bench. Physicalize – four versions of the wood bench design are fabricated as a four-part installation that juxtaposes AI-generated forms with tactile material assemblies. Using an adaptation of Greimas’ semiotic square, the four corners of the square map a dialogue between human and AI: the top-left shows "found objects" assembled into a bench, the top-right presents a physicalization of KREA.ai’s interpretation, the bottom-left shows a human-crafted bench (albeit while raising questions about the term "human"), and the bottom-right synthesizes both into a hybrid form, fabricated as a physically transformable bench. This project contributes to the current discourse on generative AI’s potential role in responsive design by challenging the current human-AI design workflow paradigm in which AI is used to immediately provide finalized design solutions. Recognizing some of the problematic aspects of such an approach, this alternate methodology positions humans and generative AI as separate participants in a cyclical design process, which helps clarify agency. By utilizing Greimas' semiotic square as a brainstorming tool, the project demonstrates how AI-human interaction could benefit from clearer identification of the “roles” and “tendencies” of AI in design workflows. The real-time feature, adapting dynamically in response to designer input, reinforces the importance of iteration in the human design process.
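KREA.ai is operated through its own real-time interface in this project. As a minimal, illustrative sketch of the capture loop such a workflow rests on, the fragment below grabs webcam frames with OpenCV and routes them through a hypothetical send_frame() stub that stands in for the generator; the stub and prompt text are assumptions, not a real API.

```python
# Minimal sketch of the live webcam loop described above. KREA.ai is used
# through its web interface in the project; send_frame() below is a
# hypothetical placeholder, not a real API call.
import cv2  # pip install opencv-python

def send_frame(frame_bgr, prompt):
    """Hypothetical hook: forward a frame + prompt to a real-time image
    generator and return the generated image (stubbed: echoes the input)."""
    return frame_bgr

cap = cv2.VideoCapture(0)          # default webcam
prompt = "wooden transformable bench, CNC-milled plywood"
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    generated = send_frame(frame, prompt)
    cv2.imshow("AI reinterpretation", generated)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to stop
        break
cap.release()
cv2.destroyAllWindows()
```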

Ornamental AI: Generative Imagery and Code as Procedural Surface Data for Digital Fabrication

This work investigates a novel pipeline that harnesses generative AI imagery as high-resolution texture maps to enrich digital fabrication workflows, specifically FDM 3D printing, clay extrusion printing, CNC milling, and robotic milling. While generative design has produced countless form-finding strategies and speculative visualizations, less attention has been paid to the translation of richly detailed AI-generated ornament into physical artifacts. By treating AI imagery not as end renders but as actionable surface data, we unlock new potentials for custom ornamentation that respond to context, material behavior, and fabrication constraints [1]. The methodology begins with prompt-driven generation of texture patterns using diffusion-based models. AI-generated imagery is repurposed as displacement maps to introduce complex three-dimensional ornamentation within Rhinoceros 3D and Blender environments [2]. By extracting luminance data from generative texture outputs, we translate visual information into surface offsets that drive both additive (3D printing and clay extrusion) and subtractive (robotic milling and CNC milling) fabrication processes. The paper details the end-to-end methodology, from map preparation through mesh processing, and concludes with strategies for adapting the workflow via AI-assisted code in Python. The study illustrates the workflow through three case studies: (1) a 3D-printed ceramic tile featuring a biomorphic relief derived from “organic texture” prompts; (2) a PLA façade panel whose layered deposition emulates variable natural forms; and (3) a robot-milled log sample bearing generative patterns. In each case, we evaluate fidelity to the original AI imagery and the material performance of the resultant outcome. This approach advances AI-assisted ornament in two key ways. First, it reframes generative imagery as procedural surface data rather than static visuals, bridging the gap between digital creativity and physical making. Second, by embedding Python-driven G-code generation into the loop, it offers an extensible framework for designers to customize deposition strategies, tool-path behaviors, and material responses, particularly advancing morphogenetic design [3]. The work further extends the use of these textures by converting them into machine-readable G-code through a custom scripting toolkit. For FDM and clay printers, the height map translates directly into variable layer thickness or deposition rates, pushing toward functionally graded materials [4]; for robotic and CNC milling, the normal map informs tool-path offsets and traversals to sculpt relief patterns. This seamless chain from text prompt to fabricated relief enables designers to iterate on ornament at the speed of imagination rather than manual surface modeling. To streamline iterations between generative prompts and physical artifacts, experimental Python scripts were created for both Rhino’s Python interface and Blender’s Python API. Parametric exposure of key parameters (e.g., displacement amplitude, mesh density) enables rapid exploration of ornamental variations, effectively collapsing the design-fabrication feedback loop. The research contributes documentation for AI-to-digital-fabrication workflows and demonstrates how generative design can be reimagined as a driver of ornamental innovation in architecture and product design.
By situating texture-map generation at the core of fabrication, this work expands the vocabulary of digital ornament and offers new pathways for integrating AI creativity into physical form. Such a framework has potential to democratize complex surface ornamentation, making it accessible to practitioners across scales and disciplines.
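As a minimal sketch of the luminance-extraction step described above, assuming a plain Pillow/NumPy environment (the paper’s actual scripts target Rhino’s Python interface and Blender’s Python API), the fragment below converts an AI-generated texture into per-pixel surface offsets; the file name and maximum relief depth are illustrative.

```python
# Sketch of the luminance-to-displacement conversion described above.
# The texture file name and relief depth are illustrative assumptions.
import numpy as np
from PIL import Image  # pip install pillow numpy

def luminance_to_heights(texture_path, max_depth_mm=3.0):
    """Convert an AI-generated texture into per-pixel surface offsets (mm)."""
    img = Image.open(texture_path).convert("L")    # "L" = 8-bit luminance
    lum = np.asarray(img, dtype=np.float32) / 255.0
    return lum * max_depth_mm   # brighter pixels displace outward the most

heights = luminance_to_heights("ai_texture.png")
print(heights.shape, heights.min(), heights.max())
```

From here, the height field can drive a displaced mesh in Rhino or Blender, or feed variable-layer deposition for the additive processes the paper describes.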

This Presentation was removed

This Presentation was removed

Session: Locating Ethics

Friday, September 26, 2025

Training Architect-Curators: Rethinking Iteration and Environmental Ethics in the Age of AI

Generative AI (genAI) introduces radical shifts in how architects design and in the roles we play as designers. The rapid production of data and images using genAI pushes architects to adopt new skills, and this prodigious output has measurable environmental costs. These factors provoke novel approaches to pedagogy to arm future architects with the curatorial skills and ethical foundations needed to navigate these issues. Funded by a campus teaching innovation grant in 2024, our team engaged in a multifaceted research project examining emergent tools, practices, and pedagogies. The focus of our study was the design of two one-week teaching modules that integrated text-to-image genAI tools into studios for 150 undergraduate and graduate students alongside 14 faculty and 3 graduate research assistants. The research and teaching modules yielded a rich array of student work, faculty insights, survey feedback, and data. The production of over 3,450 design iterations in one week demonstrated how AI tools rapidly expanded students' design possibilities. The heightened pace of production raises questions about the quality of images produced and the preparedness of students to function as discerning curators of their image stockpiles. In this paper, we posit novel pedagogical approaches for future studios that stem from our Fall 2024 case study. First, the new capacity for high-volume iteration demands that we train students as architect-curators: designers who can critically navigate, filter, and synthesize vast quantities of generated imagery. Architecture students’ skills must expand from simply those of a sole author who “invents” original images to include a) the ability to contextualize and critique vast sums of images against a backdrop of histories of architecture and theories of image-making, and b) the skills of sorting, analyzing, and evaluating images as inputs to training models as well as products resulting from genAI design processes. Second, architectural pedagogy must confront the material realities of incorporating AI into design processes. The environmental externalities of genAI, such as high energy consumption, carbon emissions, and water usage, are not abstract concerns.1 These metrics can become part of how iteration with genAI is taught and evaluated. Just as students are trained to consider the embodied carbon of materials or the energy performance of buildings, they should also understand that iteration through genAI incurs environmental costs that must be accounted for and fed back into the design process. Using our two initial 2024 teaching modules and the resultant creative production as a case study, this paper re-evaluates our pedagogical approaches to integrating genAI tools in the design studio through new lines of inquiry into image curation and environmental externalities. In doing so, we speculate on pedagogical approaches and studio structures that expand beyond our original teaching modules with new knowledge to better prepare students for the challenges and opportunities of iterating as architect-curators in the age of genAI.
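As a minimal sketch of the kind of iteration-cost accounting the paper proposes students perform, the fragment below tallies energy, carbon, and water for a batch of generations. The per-image and per-kWh coefficients are illustrative placeholders for teaching, not measured values, and should be replaced with provider- or model-specific estimates.

```python
# Illustrative genAI iteration-cost ledger for studio use. All three
# coefficients below are placeholder assumptions, not measurements.
ENERGY_WH_PER_IMAGE = 3.0   # assumed Wh per text-to-image generation
GRID_KGCO2_PER_KWH = 0.4    # assumed grid carbon intensity (kg CO2 / kWh)
WATER_L_PER_KWH = 1.8       # assumed data-center water use (L / kWh)

def iteration_footprint(num_images):
    """Return the estimated footprint of a batch of image generations."""
    kwh = num_images * ENERGY_WH_PER_IMAGE / 1000.0
    return {"kWh": kwh,
            "kgCO2": kwh * GRID_KGCO2_PER_KWH,
            "liters_water": kwh * WATER_L_PER_KWH}

# The two teaching modules produced over 3,450 iterations in one week:
print(iteration_footprint(3450))
```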

Augmenting Studio Critique: A Framework for AI-Mediated Reflection in Architecture Education

Responding to inconsistent feedback and stretched faculty resources, this research introduces an AI-mediated critique system integrated into multiple architectural design studios in a collegiate setting. An AI critique agent is trained on a faculty-curated canon of critique frameworks, ranging from spatial-ordering manuals to visual-rhetoric guides, and functions as a conversational guide delivering critical feedback in established design vocabulary.1,2 Rather than generating design solutions, the agent responds to hand sketches, physical models, and digital technical drawings with text-based feedback grounded in the frameworks chosen for the specific studio. This feedback prompts brief self-reflection moments that can surface design opportunities more rapidly. Prototype demonstrations and comparative analyses revealed three outcomes. First, the AI delivered consistent, discipline-specific critiques that reduced variability in studio reviews. Second, student selection of curated texts and the agent’s quick “pause-predict-ponder” prompts fostered deeper engagement with evaluative frameworks and encouraged iterative self-explanation.3 Third, supplementing rather than replacing human critique freed instructors to focus on higher-order guidance and individualized mentorship.4 Drawing on student surveys, focus-group reflections, and side-by-side comparisons of AI and faculty comments, this study shows how a well-curated textual corpus can bridge traditional and AI-supported feedback, providing an intermediate layer of critique when instructors are not available and scaffolding targeted prediction and self-explanation before formal faculty review. Participants reported greater clarity in design expectations, valued seeing frameworks applied directly to their own work, and appreciated the small, structured reflection moments that made those guidelines easier to grasp and revisit at their own pace. By making tacit criteria explicit, delivering rapid, small-scale reflective prompts, and embedding brief self-explanation cues, the agent cultivates structured self-assessment. It also liberates instructors to dedicate their expertise to deep mentorship and high-impact interventions, elevating the studio to a more reflective, concept-driven design culture.

Ethics as a Design Principle: Shaping AI Integration in Architecture and Design Education

As artificial intelligence becomes increasingly embedded in architectural workflows and design pedagogy, educators face a critical juncture: how to integrate these tools in ways that support creativity, uphold ethical responsibility, and reflect the cultural and disciplinary values of the design field. This presentation, developed through collaborative work with the national EDUCAUSE Working Group on Generative AI resulting in the forthcoming 2025 report "Ethics is the Edge," responds directly to the conference theme by proposing a framework for ethical AI practices rooted in design education and studio culture. Inspired by the 1979 Belmont Report's approach to human research ethics, our framework offers eight core principles: beneficence; justice; respect for autonomy; transparency and explainability; accountability and responsibility; privacy and data protection; nondiscrimination and fairness; and assessment of risks and benefits, adapted specifically for higher educational contexts. In architecture and design, these principles take on unique dimensions: How do we preserve creative authorship when AI co-generates designs? What constitutes equity when AI tools for visualization and modeling require expensive subscriptions? How do we maintain cultural currency when AI systems are trained on potentially biased datasets that may not represent some architectural traditions? Our research reveals that architecture and design schools face distinct challenges: the tension between computational efficiency and craft-based learning; the risk of homogenizing design solutions through AI pattern recognition; and the potential erosion of critical thinking when AI tools provide instant solutions. Yet we also identify opportunities: AI can democratize access to advanced visualization tools, enable new forms of collaborative design, and support sustainable design practices through complex environmental analysis. The presentation will share concrete scenarios from architecture and design education, including the use of generative AI in studio critiques, where algorithmic suggestions may constrain student creativity; the deployment of AI-powered assessment tools that struggle to evaluate culturally specific design approaches; and the integration of AI in community-engaged design processes, where transparency and consent become critical. We argue that in creative fields, ethical AI adoption requires special attention to preserving human agency, cultural diversity, and the collaborative student-mentor relationship that defines design education. The paper proposes an Institutional AI Ethical Review Board (AIERB) model adapted for design schools, emphasizing interdisciplinary dialogue between technologists, designers, ethicists, and community stakeholders. We present a pragmatic approach that moves beyond compliance to foster innovation aligned with design education's core values. This includes strategies for faculty development, student empowerment, and institutional governance that protect creative exploration while leveraging AI's transformative potential. For architecture and design education, this means developing AI practices that enhance human creativity, celebrate cultural context and innovation, and strengthen the relationships at the heart of design learning. The presentation will conclude with actionable recommendations for design educators, administrators, and students seeking to shape a more transparent and creative AI future. Note: Grammarly and Claude Opus 4 were used to edit and clarify this text.

Don't steal my joy: Integrating ethical AI into design education while prioritizing play and discovery

“You have to be in a state of play to design. If you’re not in a state of play you can’t make anything.” – Paula Scher. Artificial Intelligence, when applied at any stage of the design process, can either narrow or expand learning gaps. It can streamline rapid iteration or limit opportunities for discovery. It can enable designers to create high-quality designs with limited resources, or it can restrict opportunities for experimentation and skill development. It can either enhance or dull the joy inherent to creating “good” design. As design educators developing ethical use guidelines, case studies, sample projects, and teaching tools across a multidisciplinary public university program, we share insights from our own experimentation with AI, exploring how to integrate emerging technologies while preserving discovery, play, and experiential learning. A primary concern for these instructors, each with their own professional practice and area of expertise in Industrial, Graphic, and Interaction Design, is to maintain the play and experimentation inherent to a productive design process. Each of them, in their own design discipline, has experimented alongside students with new generative AI tools such as Midjourney, Krea, Vizcom, and Runway. They are mid-stream in developing case studies, sample projects, and ethical constraints that they will continue to test with students, practitioners, and other faculty through ongoing co-design sessions. And while they are prioritizing the most fundamental concepts of ethics, equity, and critical thought, they are also prioritizing what no one has asked them to: joy and play. In this presentation and paper, these collaborators explore the following questions through concrete examples from their work serving the student body at one of the US’s most diverse public universities, where many students are the first in their family to pursue an undergraduate degree. When discovery and surprise are facilitated through hands-on manipulation of design materials and translation across media, be they digital or physical (think: hand sketch to digital and physical models, or detailed manipulation of pixel, point, line, and plane), (how) can those moments be preserved and prioritized when AI significantly streamlines these parts of the creative process? What is the right balance in design education? As we train students to use AI tools that make them competitive in the job market, how can (and should) this coexist with the slower, play-like aspects of design that build creative confidence, originality, and process-oriented thinking? And what about the fun? Design is not the most lucrative career choice, and often it is not supported by families sending the first in their families to college. The joy and satisfaction of developing one’s own skills and style through experimentation is what keeps many students and practitioners dedicated to improving their practice. (How) can this element of joy survive the advent of AI? This interdisciplinary team will share stories, insights, and concrete examples of their ongoing work, providing inspiration for others to prioritize joy, play, and experimentation in their own increasingly tech-fueled processes and pedagogies.

Session: Light Projections

Saturday, September 27, 2025

A Framework for a Qualitative and Quantitative AI Tool for Automated Floorplan Generation

Ahmed Meselhy, Virginia Tech

James R. Jones, Virginia Tech

Amal Almalkawi, Virginia Tech

Rayane Alhajj, Virginia Tech

Artificial Intelligence (AI) tools for automated floorplan generation can produce a wide range of design options in a short time; consequently, AI has the potential to streamline the traditional design process. Current AI tools primarily focus on quantitative aspects, such as optimizing physical performance, spatial efficiency, and energy performance, while often overlooking qualitative factors essential to user-centric design, such as daylight and views. Daylight plays a pivotal role in enhancing the quality of space and promoting health and well-being. Research indicates that higher daylight levels contribute to environmental satisfaction, increased productivity, improved vital body functions, enhanced circadian rhythms, and overall well-being. This research introduces an innovative knowledge framework for an AI-driven floorplan generation tool that prioritizes the qualitative aspects of daylight. It enables architects to generate design iterations that enhance occupants' spatial experience by integrating daylight and views. The study employs a four-step methodology for developing the new AI floorplan generation tool, including a literature review to examine qualitative factors of daylight and views, the establishment of a logical argument for their integration into the design process, an analysis of AI engines, and the Delphi method for determining consensus on the proposed framework among knowledgeable stakeholders.

A Generative-AI Workflow for Daylighting Optimization in Schematic Design

Rapid decision-making under uncertainty is a common feature of early-stage architectural design, particularly when including performance-based objectives like daylighting (Østergård et al., 2015). Although quantitative measurements like Useful Daylight Illuminance (UDI) and Spatial Daylight Autonomy (sDA) are provided by modeling tools like Honeybee and Radiance, converting these into workable design changes remains an expert-dependent and challenging task (Rane et al., 2023). This research investigates the application of Large Language Models (LLMs) such as GPT-4 (Ahn et al., 2023) as generative co-designers that decipher daylight simulation outputs and provide material- and geometry-based design enhancements. The goal is to explore whether LLMs can bridge the gap between simulation output and creative decision-making using natural language. The suggested process combines a prompt-engineered generative AI layer with environmental simulation tools, summarized in the following steps. First, a set of early-phase design variants (single-zone office spaces) was created in Rhino and Grasshopper. The key design parameters included window-to-wall ratio (WWR), building orientation, and geometric configuration (e.g., rectangular, L-shaped). Second, each model was evaluated using Honeybee and Radiance to compute daylighting performance metrics, including UDI (100–2000 lux), sDA (300 lux for ≥50% of the occupied time), and DGP. These metrics served as quantitative indicators of visual comfort and daylight sufficiency. Third, simulation outputs were exported from Grasshopper as tabular data only (CSV format). While no image-based data was used in this phase, screenshots of floor-level illuminance distributions were used for internal validation but not parsed by the LLM. Fourth, for prompt engineering and LLM interaction, a structured text prompt was developed for each design case, summarizing UDI, sDA, and contextual details (e.g., orientation, target use, shading constraints). These prompts were sent to the LLM via Python (OpenAI API), and the model returned qualitative design feedback, such as reducing south-facing glazing, increasing window depth, or introducing shading fins. Lastly, the design recommendations were qualitatively assessed for architectural viability, clarity, and alignment with the original daylighting goals. The evaluation followed a three-criterion framework: (1) relevance to simulation metrics, (2) architectural feasibility (practical implementation within schematic constraints), and (3) consistency with the intended function and comfort targets of the space. Selected suggestions were re-implemented in the Grasshopper model and re-simulated using Honeybee to determine whether they led to measurable improvements in daylighting performance. Unlike typical daylight optimization workflows, this approach does not rely on optimization algorithms to find optimal solutions. While multi-objective optimization via evolutionary algorithms is well-established, this study positions the LLM as an intelligent front end capable of rapidly generating high-quality, interpretable suggestions, bypassing the need for exhaustive parametric search. The results demonstrate the feasibility of using LLMs to support early-stage design decisions through contextual, low-friction feedback (Xu et al., 2025). This method connects generative design thinking and quantitative simulation, enabling architects to interact with performance data through natural language.
The paper discusses the implications of incorporating LLMs into sustainable design workflows, identifying potential areas for further research, such as rule improvement, user-in-the-loop co-design, and BIM integration.
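As a minimal sketch of the prompt-assembly and feedback step described above: the CSV column names, prompt wording, file name, and model choice below are illustrative assumptions, since the study's exact template and settings are not given in the abstract.

```python
# Sketch of the CSV-to-prompt-to-LLM loop described above. Column names,
# prompt wording, and the model string are illustrative assumptions.
import csv
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def daylight_feedback(row):
    """row: one Grasshopper-exported design variant (column names assumed)."""
    prompt = (
        f"A single-zone office ({row['geometry']}, facing {row['orientation']}, "
        f"WWR {row['wwr']}) scores UDI(100-2000 lux) = {row['udi']}% and "
        f"sDA(300 lux, 50% occupied hours) = {row['sda']}%. "
        "Suggest material- and geometry-based changes to improve daylighting "
        "while limiting glare."
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

with open("variants.csv") as f:   # hypothetical name for the Grasshopper export
    for row in csv.DictReader(f):
        print(daylight_feedback(row))
```

Suggestions returned this way would then be re-modeled and re-simulated in Honeybee, closing the loop the abstract describes.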

Transcending AI Representation, A Performance-Driven Integration of AI in Architectural Design Processes

The rise of generative artificial intelligence (AI) and diffusion models has redefined architectural representation, augmenting aesthetic speculation and architectural form variability [1], while simultaneously destabilizing established design epistemologies by foregrounding questions of design agency and the ontological status of the image in performance-driven architectural design production. While these AI models introduce novel aesthetic imaginaries, they remain detached from aspects of building physics as well as the climatic and contextual logics essential to environmental performance evaluation. Their visually compelling outputs lack fidelity, often producing representational illusions, or hallucinations, rather than performative accuracy. By ignoring site-specific parameters such as orientation and latitude, as well as spatial parameters such as material reflectance, AI-generated designs compromise the accuracy of environmental evaluation; there is a need to integrate the data-driven feedback loops that are fundamental to evaluating design strategies. Thus, AI-generated design must be reframed not as an autonomous outcome, but as a speculative scaffold requiring rigorous pre- and post-processing through validated simulation methods to ensure performative consideration. Central to this inquiry is the question: how can a method framework be developed for designers to follow, in which AI-driven design is informed by the environmental logics of daylighting, and how can the framework sustain successful concept development throughout the generative process? In response, this work leverages diffusion models, strategically integrating them into a hybrid design workflow in which validated daylight simulations are incorporated at different design phases. In the proposed framework, design generation is grounded in climate and context specificity, with geometrically and materially detailed 3D models serving as the analytical counterpoint to AI-generated designs. Daylight, as a fundamental form-giver in architecture [2], holds immense potential to inform generative design exploration in early phases, and thus was used as an apparatus for space-making. Within this framework, “GEN-Daylight,” daylight simulation serves as a design tool alongside the generative AI tools Lookx.ai [3] and Midjourney [4]. Both AI platforms support multi-modal inputs including text, sketches, orthographic drawings, and snapshots of massing models. The method framework enables dynamic interaction between designers and these model outputs across analytical, exploratory, and development design stages. In particular, Lookx.ai provides access to pre-trained models alongside custom model training and fine-tuning, further supporting customization and representation objectives. The framework was implemented and critically evaluated within a coordinated design studio over two consecutive years, engaging 115 students in a design-research environment. Following the proposed design workflow, students were able to pursue articulation, interpretation, and refinement of architectural design intentions through strategically curated inputs, along with rationalization and post-processing, while engaging in AI-driven and daylight-simulation feedback loops.
GEN-Daylight sustained conceptual coherence while leveraging the generative affordances of AI, positioning the designer not as a passive user of algorithmic output but as an active choreographer within a hybrid design ecology. By cultivating a productive tension between speculative imagination and analytical rigor, the framework advances a situated design intelligence, one that is creatively agile and environmentally attuned.

Toward Human-Building Collaboration: Leveraging AI to Design Interactive, Adaptive, and Resilient Environments

Building automation systems have increasingly garnered attention for their ability to enhance building performance (Domingues et al., 2016), promising to create interactive ambient information environments. These systems rely on sensors and IoT technologies to create dynamic feedback loops between building systems, occupants, and their surrounding environments, built and natural. This paper demonstrates how architects, as designers of occupant interactions with intelligent building systems, can integrate Artificial Intelligence (AI) into their design workflows to enrich the interactivity of built environments. To ground this exploration, lighting design is employed as a case study to reveal broader implications for architectural practice. Contemporary lighting systems have evolved into context-aware entities, reacting to occupant motion and environmental changes (Helvar, 2021). Additionally, they are able to acknowledge user preferences to personalize lighting configurations (Debiasi, 2020). Prior research has examined the development of interactive (Offermans et al., 2013), adaptive (Viani et al., 2017), and self-optimizing (Sun et al., 2020) lighting systems, alongside efforts to spectrally tune lighting through numerical optimization techniques (Aldrich et al., 2010) and establish design considerations for interactive lighting (Van De Werff et al., 2019). Despite these advancements, holistic methodologies for designing intelligent lighting systems and comprehensive guidelines for integrating AI system components and advanced types of interactive behaviors remain scarce. This paper presents a computational framework that leverages AI and machine learning to systematically address the design and evaluation of digitally programmable light-sculpting systems. Interactive systems are evaluated in Unity, the game engine, where we have developed a virtual interactive lighting environment modeled after a real space on our university campus. With the proposed human-building interaction design framework we demonstrate three types of user-system interaction: instructional, automated, and collaborative, with collaboration emerging as a novel interaction paradigm largely enabled and amplified by the advent of AI. In this paper we show how this framework allows designers to, first, determine the desired level of interaction based on the available hardware; second, use the framework components to determine and design interaction behaviors between occupants and the lighting system; and third, evaluate, prior to physical implementation, the dynamics of interaction that result from different design choices. Each framework implementation shows how each level of interaction may be attained by progressively synthesizing what the system is able to know with how it is able to behave and how it is able to process data. The affordances and limitations of each type of interaction are primarily evaluated through physically based real-time lighting simulations conducted in Unity. This digital twin environment allows for virtual simulations of lighting configurations while generating knowledge that can either be applied back into the physical space or be used to make system predictions. On select occasions, lighting configurations and the rendering accuracy of Unity are also validated through time-invariant simulations conducted periodically in Radiance.
Overall, this framework lays the groundwork for future advancements in the design of ambient, intelligence-infused systems for human-machine collaboration within the built environment.
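
As an illustration of how the framework's three interaction types differ computationally, the following minimal Python sketch contrasts them as decision rules in a single feedback-loop tick. The function names, blending weights, and sensor values are hypothetical stand-ins; the authors' actual framework is implemented and evaluated in Unity.

```python
# Minimal sketch only: the three interaction types as decision rules in one
# feedback-loop tick. Names, weights, and sensor values are hypothetical.

def instructional(user_command, current_level):
    # Occupant dictates the lighting state directly; the system only obeys.
    return user_command if user_command is not None else current_level

def automated(sensor_lux, target_lux, current_level):
    # System self-adjusts toward a target illuminance using sensor feedback.
    error = target_lux - sensor_lux
    return max(0.0, min(1.0, current_level + 0.1 * error / max(target_lux, 1.0)))

def collaborative(user_command, predicted_preference, sensor_lux, target_lux, current_level):
    # System negotiates among an explicit request, a learned preference model,
    # and sensed conditions: the AI-enabled paradigm the paper highlights.
    proposal = automated(sensor_lux, target_lux, current_level)
    if user_command is not None:
        proposal = 0.5 * proposal + 0.5 * user_command
    return 0.7 * proposal + 0.3 * predicted_preference

# One tick of the loop; dimming levels are normalized to the range 0..1.
level = collaborative(user_command=0.8, predicted_preference=0.6,
                      sensor_lux=220.0, target_lux=300.0, current_level=0.5)
print(f"next dimming level: {level:.2f}")
```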

Session: New Foundations

Saturday, September 27, 2025

Wonder, Resistance, and the In-Between

What if architecture education became a space not just for learning new tools, but for questioning the tools we’re told we need? This paper offers a framework for using artificial intelligence (AI) in architecture as a layered, ethical, and imaginative practice grounded in three years of teaching, research, and direct conversation with students and colleagues about what it means to design with machines in a world already shaped by them. The framework emerges from a series of studios where students used generative AI tools (e.g., Midjourney, DALL·E, and Runway ML) not only to ideate, but to wrestle with profound concepts like the sacred, the profane, and the role of symbolism in spatial storytelling. These studios encouraged students to sketch with prompts, reflect on their own values, and debate questions of authorship, cultural bias, and spiritual meaning. AI here was a provocation, a collaborator, and at times, a frustrating mirror. These classroom experiences led to broader institutional conversations, including a series of faculty workshops on AI and design, culminating in a student–faculty town hall. There, the tension became palpable: some students, especially those concerned with environmental ethics, called for a total refusal of AI in architectural education. Others saw it as a vital skill for future practice, essential for job readiness and creative experimentation. Educators, too, were divided. What do we do when our roles as mentors, critics, and stewards of the discipline feel pulled in opposing directions? This paper doesn’t offer neat answers. Instead, it proposes a pedagogical approach, now being developed in a new course, Architectural Imagination and AI, that invites students to stay in that space of uncertainty a little longer. Inspired by media theory (Latour’s iconoclash), phenomenology (Norberg-Schulz, Pallasmaa), and student-driven inquiry, the course positions AI as both a technical medium and a cultural system. Architecture is reframed as a form of counter-media, a way to imagine, communicate, and resist dominant narratives about the future. Rather than isolating AI as a topic, we ought to embed it within design processes, critical reflection, world-building, and interdisciplinary collaboration. The course draws on a partnership with media studies and journalism faculty exploring parallel questions of misinformation, bias, and authorship in their own fields. Students will experiment with generative tools, but also with frameworks for thinking. How do we define creativity when machines produce endless options? What voices are amplified or erased in our prompts? What are the ecological and epistemological costs of the tools we use, and who gets to decide what’s worth the tradeoff? By grounding these questions in hands-on experience, student feedback, and cross-departmental collaboration, this work contributes to ongoing debates around how we teach about AI in architecture. It advocates for an approach that is technically informed, ethically aware, and intellectually open: a pedagogy of curiosity and responsibility. The question becomes: how do we work with AI in architecture without losing ourselves or our students?

AI as a Third Party: Introducing Language, Authorship, and Code in Foundation Design Education

In early architecture education, students often struggle to express ideas clearly. Years of standardized K–12 learning emphasize surface-level responses over critical inquiry, leaving many students unprepared to formulate and communicate conceptual thinking. A significant hurdle experienced by students is the difficulty of using language to translate ideas into diagrams. This challenge becomes an opportunity when paired with generative AI, which can act as an impartial third party, requiring clarity, precision, and iteration in order to produce a desired result. Rather than treating AI as a shortcut, we framed it as a tool to help students think, process, and revise their ideas critically. In Fall 2024, a foundation studio introduced ChatGPT into a series of design exercises grounded in basic design principles. Students began by identifying a conceptual prompt through the pairing of a design element and a design principle, for example, line and symmetry simplified into the single word “balance.” These basic concepts became the foundation for a process of translation. Students wrote step-by-step instructions for the reproduction of their chosen concept. Those instructions were then handed off to a classmate, who used ChatGPT to generate Python code for Rhino 8[i]. This began a cycle of testing, troubleshooting, and iteration[ii], moving back and forth between the AI and Rhino, between code and diagram[iii]. This distance, created through the layering of authorship and digital translation, allowed iteration to grow spontaneously. Students who had never written code before learned to analyze outputs, revise logic, and reflect on the gaps between intention and result. The diagrams they produced served as the first generation of a series of 2D digital explorations. These were then used as the conceptual basis for their final semester projects: the fabrication of an architectural object and the design of a sacred landscape. Weekly pin-ups and peer critiques helped ground these outcomes in conversation, feedback, and revision. What emerged from this process was a new way of introducing both computation and authorship in early design education. By handing off authorship to others, both human and machine, students learned what parts of their process needed to stay constant and what could evolve. More importantly, they began to understand that architectural design is never a solitary act. It is iterative, distributed, and deeply social. Generative tools, when introduced critically and creatively, can expand a student’s capacity to imagine, collaborate, and think beyond the limitations of their previous experience. The goal of this work was not to turn students into coders, but to offer a transferable framework for early design education, one that uses AI not to provide answers, but to ask better questions.
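
For readers unfamiliar with the workflow, the following is a minimal sketch of the kind of script ChatGPT might return from a student's written instructions for the concept "balance" (line paired with symmetry). It targets Rhino 8's Python editor via rhinoscriptsyntax; the counts and spacing are hypothetical choices, not the students' actual code.

```python
# Minimal sketch of a ChatGPT-style response to instructions for "balance":
# a bilaterally symmetric field of vertical lines. Runs in Rhino 8's Python
# editor; the parameter values are hypothetical.
import rhinoscriptsyntax as rs

count = 7       # number of mirrored line pairs
spacing = 2.0   # horizontal spacing between lines
height = 10.0   # length of each vertical line

for i in range(1, count + 1):
    x = i * spacing
    # Mirror each vertical line across the y-axis for bilateral symmetry.
    rs.AddLine((x, 0, 0), (x, height, 0))
    rs.AddLine((-x, 0, 0), (-x, height, 0))

# A central axis line anchors the symmetric composition.
rs.AddLine((0, 0, 0), (0, height, 0))
```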

Shape Computation: A Foundation for AI-Embedded Architectural Education

Given the shifting technological landscape in the architecture, engineering, and construction industry, how should architectural education evolve? What skill sets are needed to navigate the uncertainties and demystify the promises of artificial intelligence (AI) in future design practices in the field? This paper argues for the value of formal systems in architectural education at the undergraduate level to prime students for developing design thinking and computational thinking in parallel. More precisely, the argument contends that the shape grammar formalism and its definition of design as a visual calculation provide a powerful and intuitive introduction to computational design to facilitate adaptation toward evolving practices in architecture. The pedagogical research here outlines a three-semester sequence that elaborates this potential by foregrounding three key elements of shape computation: the rule, the schema, and the design machine. The rule is the basis for an introductory core elective, where the formal analysis and synthesis of precedents establishes rigorous and playful mechanisms for encoding design logic and meaning with geometry. This is followed by a second-year design studio where the schema structures design inquiry into the language of a building typology based on imitation and improvisation. Lastly, the design machine motivates a third-year design studio, where students develop and refine a precise rule-based logic for remixing design languages in an urban context. This curriculum has been implemented over the past two years as a prerequisite for a fourth-year creative AI studio within a Bachelor of Architecture program. The curriculum is described as ‘AI-embedded’ because all students participate in this coordinated sequence, which is aimed at developing speculative pedagogies to imagine future practices at the intersection of technology, environment, and community. In all three courses of the shape computation sequence, shape grammars are presented as a formalism that sharpens criticality and creativity systematically to prepare students for increased agency and accountability with more advanced computational approaches in design that span from environmental and structural simulation to machine learning. An essential strength of shape computation is the ability to bridge analysis and synthesis in design as well as analog and digital media through visual algorithms specified geometrically. From this perspective, algorithmic thinking and generative design need not require digital computing but can be explored through exercises that draw and model computations step-by-step. By repeating these processes over multiple semesters, students develop skills of interpretation, pattern recognition, formal specification, logical argumentation, and iterative evaluation that prepare them for designing with and adapting to emerging workflows with increased understanding. The paper is structured in three parts. The first part introduces shape computation and its perceptual and procedural role in design pedagogy. The second part provides an overview of the three-course sequence to explain how the rule, the schema, and the design machine are characterized. Finally, the third part assesses the curriculum to date and concludes with a discussion on shape computation as a framework for teaching architectural design iteratively through a humanist approach that includes historical examples and logical reasoning, similarly to how we teach machines.
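
To make the idea of design as visual calculation concrete for readers outside the shape grammar literature, the sketch below reduces rule-based derivation to symbolic rewriting in Python. True shape grammars compute on geometry under subshape matching, so this parallel-rewriting toy (closer to an L-system) is only an analogue of the rule, schema, and design machine sequence described above.

```python
# A deliberately simplified, symbolic analogue of rule-based derivation.
# Real shape grammars operate on shapes, not strings; this toy only shows
# how a small rule set, applied repeatedly, generates a design language.

RULES = {
    "A": "AB",  # rule 1: a primary bay gains an adjoining bay
    "B": "A",   # rule 2: an adjoining bay matures into a primary bay
}

def derive(axiom, steps):
    """Apply the rule set in parallel for a fixed number of steps."""
    history = [axiom]
    current = axiom
    for _ in range(steps):
        current = "".join(RULES.get(symbol, symbol) for symbol in current)
        history.append(current)
    return history

for stage in derive("A", 5):
    print(stage)  # A, AB, ABA, ABAAB, ... a growing derivation
```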

From Prompt to Pavilion: AI Tools for Enhancing Design-Build Workflows in Early Design Education

This project proposal centers on the development and implementation of a summer design-build course at [university name redacted] for high school students interested in architecture and design. Over the past several years, the course has explored a variety of themes, including “optics and perception” (e.g., trompe-l'œil and anamorphic projection, color theory and pointillism, parallax and moiré patterns), “participatory publics” (e.g., modularity and reconfigurability, interactivity and agency, urban activation and placemaking), and “material playgrounds” (e.g., speculative fabrication and rapid prototyping, material research and testing, exquisite-corpse assemblies and constructive eclecticism). This July, [course name redacted] will focus on “generative, computational, and co-creative processes.” Specifically, generative AI will be introduced in the early stages of design. Natural language processing tools like ChatGPT will support students in processing and synthesizing data gathered from a community client during on-site workshops. For conceptual development, AI rendering tools such as mnml.ai will help students, most of whom have limited exposure to design representation, quickly visualize the possibilities embedded within their hand sketches and study models. During design development, faculty will guide student teams in the use of evolutionary computation tools, such as Galapagos and/or Wallacei for Grasshopper, to optimize design decisions based on project-specific parameters, including budgetary efficiency, material dimensions, overall scale, target occupancy, and lighting or shading requirements. At each stage of the process, these tools are intended to deepen students’ understanding of design logic while also accelerating the timeline needed to reach a collective decision on the final build. In particular, tools like Galapagos’ evolutionary solvers can help reveal systems-based dimensions of design development processes that are often difficult for students, whether high school or college-age, to conceptualize or actively engage with. By the conclusion of the program, the students of [course name redacted] will have collaborated with community members, experimented with emerging technologies, and employed traditional tools of construction and craft to produce a pavilion for installation in [city name redacted]. Ultimately, the resulting work will be far more than the timely delivery of a thoughtful amenity to the [city name redacted] community. This architectural artifact will serve as a physicalization of the hybrid processes explored by the students: an analogue, appropriately scaled for a four-week high school summer course, for the future of our profession.
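
As a hedged illustration of the evolutionary search that solvers such as Galapagos and Wallacei perform inside Grasshopper, the standalone Python sketch below evolves a single hypothetical design parameter against a made-up fitness function; the real tools operate on Grasshopper parameter sliders and the project-specific objectives named above.

```python
# Standalone sketch of evolutionary optimization in the spirit of Galapagos
# or Wallacei. The single "canopy depth" parameter and the fitness trade-off
# are made-up stand-ins for real project criteria.
import random

def fitness(depth):
    # Hypothetical trade-off: deeper canopies shade better but cost more.
    shading_benefit = min(depth, 3.0)
    cost_penalty = 0.4 * depth
    return shading_benefit - cost_penalty

population = [random.uniform(0.5, 6.0) for _ in range(20)]

for generation in range(30):
    # Keep the fitter half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    offspring = [max(0.1, p + random.gauss(0.0, 0.3)) for p in survivors]
    population = survivors + offspring

print(f"best canopy depth: {max(population, key=fitness):.2f}")
```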

Warp-Speed Studios: How AI Integration Influences Design-Feedback Speed in Accredited Architecture Programs

Generative artificial intelligence is making a significant impact across education. Studies in more general domains have offered insightful analyses[1], and excellent qualitative research has been conducted on architectural pedagogy specifically[2]. However, systematic data on the impact of generative AI tools such as text-to-image diffusion models, conversational code assistants, and performance-prediction plug-ins remain scarce. This study addresses that gap by surveying NAAB-accredited design studios across the United States during the 2024–25 academic year to assess how AI integration is reshaping architectural pedagogy, particularly the “design-studio” model, through metrics on iteration tempo (AI-generated iterations per unit time), feedback depth (the proportion of those iterations receiving sufficient feedback), and students’ sense of authorship of AI-assisted work.

A preliminary twenty-item survey has been shared with one design studio from an NAAB-accredited architecture program. Many of the students in this studio had little prior exposure to AI, and the professor made a deliberate push to integrate AI into the pedagogy. Early survey responses reported that most students produced more designs than in previous studios and felt iteration intervals became “significantly shorter.” Feedback quality, however, was mixed: half of the students felt that critique depth decreased when iteration speed increased, and only one respondent received extensive comments on every version. Two-thirds agreed that AI “significantly shaped” their design direction, yet several found it difficult to track the evolution of that direction. The results also show that most, if not all, students were considering leveraging AI for design aid in the future. Although these pilot results offer insight into the early integration of AI into architectural pedagogy and are indicative rather than conclusive, they point to a production–critique imbalance that demands broader investigation. Future work will distribute the survey to a wider range of NAAB-accredited design studios, sampling both AI-integrated and non-AI-integrated studios (targeting 60 students and 12 instructors). Survey responses will be analyzed in relation to course materials, such as syllabi and rubrics, to identify parameters that affect the speed-feedback loop. Following the survey, one-on-one interviews will be conducted with select students to further investigate the nuances of their experiences. (All in-person interactions will receive Institutional Review Board approval before full rollout.) We hope this research will shed light on the following: (a) a multi-school dataset on how AI integration impacts overall design output; (b) how AI integration facilitates or challenges conventional architectural pedagogy, especially the delicate production-critique balance of the design studio, with results plotted on a tempo-critique trade-off curve to capture where the shift begins; and (c) strategies for balancing rising iteration speeds with meaningful feedback, ensuring that studio practice evolves in step with AI-driven shifts in architectural pedagogy.
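
For concreteness, the sketch below shows one plausible way the study's two headline metrics, iteration tempo and feedback depth, could be computed from survey rows; the field names and sample records are hypothetical placeholders, not the pilot data.

```python
# Hypothetical field names and records illustrating the two headline metrics;
# these are not the study's data.

responses = [
    # iterations produced, studio weeks, iterations receiving substantive critique
    {"iterations": 42, "weeks": 8, "critiqued": 12},
    {"iterations": 18, "weeks": 8, "critiqued": 10},
    {"iterations": 30, "weeks": 8, "critiqued": 9},
]

for r in responses:
    tempo = r["iterations"] / r["weeks"]      # iteration tempo: iterations per week
    depth = r["critiqued"] / r["iterations"]  # feedback depth: share critiqued
    print(f"tempo={tempo:.1f}/wk  feedback depth={depth:.0%}")

# Plotting tempo against depth across many studios would populate the
# tempo-critique trade-off curve the study proposes.
```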

Session: System Workflows

Saturday, September 27, 2025

Evaluating LLM-based Retrieval-Augmented Generation for Enhanced Knowledge Management in the AEC Domain

Jun Wang, University of Pennsylvania

Tian Ouyang, University of Pennsylvania

Yushan Li, University of Pennsylvania

Xiye Mou, University of Pennsylvania

The Architecture, Engineering, and Construction (AEC) industry, despite generating vast amounts of data, remains hampered by fragmented and underutilized information, resulting in significant inefficiencies and financial losses. This paper introduces a proof-of-concept multimodal retrieval-augmented generation (RAG) platform designed to address knowledge fragmentation in the AEC sector. Compared to traditional knowledge management systems, which typically archive data in isolated repositories with limited reuse and poor integration of visual information, this platform uniquely integrates dual-indexing to simultaneously manage textual and visual information sources such as model data, drawings, and photographs. Despite advances like dynamic knowledge maps and knowledge graphs, current knowledge management systems in AEC still struggle to integrate tacit and cross-file insights, due to their reliance on explicit artifacts and the high effort required for semantic structuring. To address this gap, we propose multimodal RAG to integrate both text- and image-based information. The system ingests and preprocesses publicly available AEC documents, translating textual and visual content into separate databases using text-to-vector embedding models. Queries submitted in natural language trigger function-calling processes to retrieve relevant information and generate structured, visually grounded responses with clear source citations. The platform leverages state-of-the-art commercial vision-enabled large language models (SOTA LLMs) for image and data extraction, image captioning, semantic embedding, and autonomous function selection, ensuring highly relevant and contextually appropriate outputs. Evaluation involved two complementary methods: a human judgment survey among 25 industry professionals and a rigorous system consistency assessment. Survey participants, averaging six years of professional experience, judged the platform's outputs across diverse queries, resulting in a favorable average rating of 79%. Technical evaluations demonstrated robust internal consistency, with function selection and retrieval patterns exhibiting high determinism (>99%) and minimal variability in generated responses. Isolated instances of lower performance could be traced to specific gaps in the database, suggesting clear pathways for immediate improvement. The findings affirm that even a modest dataset of 60 AEC documents can reliably support the retrieval and generation of useful design knowledge. Scaling the dataset with additional resources such as PDFs, BIM snapshots, and richly tagged imagery promises further enhancement of the system's reliability and applicability. Ultimately, the platform represents a significant advancement in AEC knowledge management, offering professionals an efficient, consistent, and intuitive tool to leverage fragmented institutional knowledge, thereby reducing rework, enhancing collaboration, and improving overall productivity. This abstract was developed with assistance from AI tools and reflects research conducted using an interactive Python notebook environment, leveraging state-of-the-art AI model APIs and supporting Python libraries.
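
A minimal Python sketch of the dual-index retrieval idea follows, assuming a generic embedding function and hypothetical document and caption entries; the platform's actual implementation additionally performs image captioning, function calling, and cited answer generation with commercial LLM APIs.

```python
# Sketch of dual-index retrieval. The bag-of-letters embed() is a toy stand-in
# for a commercial embedding API; the indexed documents and image captions are
# hypothetical examples.
import numpy as np

def embed(text):
    # Toy embedding: letter frequencies. Replace with a real embedding model.
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Separate indexes for textual chunks and for captions of visual assets.
text_index = {s: embed(s) for s in [
    "spec section 09: interior finishes schedule",
    "RFI 112: curtain wall anchor clarification",
]}
image_index = {s: embed(s) for s in [
    "photo caption: curtain wall mockup, anchor detail",
]}

def retrieve(query, k=2):
    # Query both indexes, then merge by similarity so textual and visual
    # evidence compete on equal footing in the generated answer.
    q = embed(query)
    merged = {**text_index, **image_index}
    return sorted(((cosine(q, v), src) for src, v in merged.items()), reverse=True)[:k]

for score, source in retrieve("curtain wall anchor detail"):
    print(f"{score:.2f}  {source}")
```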

The Architect as Orchestrator: A Co-Creative AI Workflow for Conceptual Design through Multi-Prompt Systems

Jiong Wu, Syracuse University

Ruaa Alzahrani, Syracuse University

Meng Hsieh, Syracuse University

This research investigates how architects can orchestrate multiple artificial intelligence systems in early-stage design, reframing AI not as a tool of automation but as a medium of co-creation. Beginning with an evaluation of over 27 AI engines relevant to architectural design workflows, including language models, image generators, and 3D synthesis tools, we identify and test the most effective pairings for conceptual development. From this landscape analysis, two highly complementary systems, ChatGPT and Meshy AI, were selected for prototyping a speculative desert villa project, offering a case study of a co-creative AI workflow. The study proposes a Progressive Prompt Matrix, a prompt-driven design workflow composed of five iterative phases: narrative generation, form articulation, program refinement, interior strategy, and presentation scripting. This matrix serves as a scaffold for a dialogue between human designer and machine, where each phase builds on prior outputs to shape a responsive and layered design iteration process. ChatGPT supports spatial reasoning by generating architectural narratives, site logics, and zoning strategies. Meshy AI, capable of transforming text and 2D sketches into spatialized 3D outputs, serves as a schematic modeling partner. Findings reveal that Meshy AI produces the most coherent results when given both textual and visual prompts, suggesting its strength lies in early formal ideation rather than detailed representation. ChatGPT, meanwhile, excels at synthesizing atmospheric and functional narratives, supporting design logic and critique. Their integration demonstrates how strategic prompt layering, combining metaphor, performance logic, and site-specific cues, can elevate the spatial and conceptual clarity of outputs. Design performance was evaluated using five criteria: prompt responsiveness, design clarity, creativity, iteration value, and human-AI cooperation. Results highlight the importance of human authorship in crafting meaningful prompts and curating AI outputs. Rather than automating design thinking, this approach amplifies imagination, supports iterative depth, and accelerates schematic exploration. By offering a reproducible co-creative workflow and a prompt typology grounded in architectural priorities, this research contributes to emerging design methodologies that engage AI not merely as a representational assistant, but as a thinking partner. The architect is positioned as a director and translator, moving fluidly between tools, shaping prompts, and conducting a distributed design process across multiple AI agents. This orchestration model invites new forms of authorship, pedagogy, and practice in the age of intelligent systems.
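
The Progressive Prompt Matrix can be read as a simple prompt chain. The sketch below illustrates that structure in Python; call_llm is a mocked stand-in for a chat-completion API call, the phase wording is illustrative, and in the actual workflow the form-related phases also feed Meshy AI.

```python
# The five-phase matrix read as a prompt chain. call_llm() is a mocked
# stand-in for a chat-completion API; phase wording is illustrative.

PHASES = [
    "Narrative generation: write a design narrative for a desert villa on {site}.",
    "Form articulation: propose a massing strategy consistent with: {prior}",
    "Program refinement: allocate program areas within: {prior}",
    "Interior strategy: describe interior atmospheres for: {prior}",
    "Presentation scripting: outline a review presentation of: {prior}",
]

def call_llm(prompt):
    # Mocked response; replace with a real chat-completion API call.
    return f"<model output for: {prompt[:48]}...>"

def run_matrix(site):
    outputs, prior = [], site
    for template in PHASES:
        # Each phase builds on the previous output, keeping the chain layered
        # and responsive rather than producing five isolated answers.
        prior = call_llm(template.format(site=site, prior=prior))
        outputs.append(prior)
    return outputs

for step in run_matrix("a rocky desert escarpment"):
    print(step)
```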

Embodied Creativity: Assessing Artificial Intelligence, Human, and Human-AI Collaboration in Virtual Reality

Artificial Intelligence (AI) and Virtual Reality (VR) are transforming architecture and interior design via research, education, and practice. AI’s large language models (LLMs) enable human-machine collaboration in spatial planning1, while VR’s “embodiment” and “ego-centric perspective” enhance spatial comprehension in isometric drawings2. However, AI is also scrutinized for encroaching on creative expertise3 and displacing design roles4, domains long considered uniquely human. This study empirically addressed these concerns through embodied creativity assessment in VR, a validated method for evaluating design creativity across diverse levels of rater expertise5. In identical virtual waiting rooms, 246 crowd-sourced raters (MTurk, Prolific) separately judged three design entries for the 2024 Robert Bruce Thompson Lighting Design Competition: one generated by AI tools (Midjourney and Kaedim AI), one by a human designer, and one from a human–AI collaboration led by a design student. Using the Creative Product Semantic Scale (CPSS)6, raters judged each design entry on three criteria, novelty (originality and surprise), resolution (logic and utility), and style (craftsmanship and elegance), totaling 55 items. They also completed 14 items measuring sense of presence7. The study used Kruskal-Wallis tests with Bonferroni-adjusted Mann-Whitney U follow-ups to test differences among conditions (AI, Human, and Human-AI collaboration), nonparametric methods chosen because the data violated assumptions of normality and homogeneity. AI scored significantly higher in style than the human designer in the MTurk sample (p = .02, r = .39, medium effect size). Conversely, Prolific participants rated the human design significantly higher in novelty than the AI entry (p = .01, r = .43, medium effect size). Across both samples, the human–AI collaborative design consistently scored lowest, especially in novelty. In MTurk ratings, the human designer significantly outperformed the collaborative entry in novelty (p < .01, r = .51, large effect size); the same trend was confirmed in Prolific data (p < .01, r = .44, medium effect size). No significant differences were found for resolution. Regarding sense of presence, MTurk participants rated the VR environment as moderately effective (means ~4.2 on a 7-point scale), especially in immersion. Prolific participants rated presence and realism notably lower (means ~3.5), though immersion remained relatively strong. In the MTurk sample, the correlation between sense of presence and creativity ratings was strongest for the AI condition (r = 0.52, moderate), weaker for the Human condition (r = 0.36, weak-to-moderate), and weakest for the Collaboration condition (r = 0.10, very weak). In the Prolific sample, presence-creativity correlations were moderate for both AI (r = 0.41) and Human (r = 0.48) conditions, and weak for Collaboration (r = 0.19). This pattern suggests that participants who felt more present in VR were more likely to rate designs as creative. This study advances ACSA’s conversation on AI and immersive methods by showing that VR serves as both an effective assessment tool and a mediator of creative judgment. While AI can match human designers in style, it still falls short in novelty, and human–AI collaboration does not guarantee better outcomes. These findings highlight the need for balanced integration, where AI’s strengths in style complement human originality in the design process.
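
The reported analysis pipeline maps directly onto SciPy. The sketch below runs a Kruskal-Wallis test followed by Bonferroni-adjusted Mann-Whitney U comparisons on made-up ratings; the study's actual data are CPSS scores from 246 raters.

```python
# The reported tests mapped onto SciPy with made-up 7-point ratings; these
# are not the study's data.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

ratings = {  # hypothetical novelty scores per condition
    "AI":            [4, 5, 3, 4, 5, 4, 3, 5],
    "Human":         [6, 5, 6, 7, 5, 6, 6, 5],
    "Collaboration": [3, 4, 2, 3, 4, 3, 2, 3],
}

h, p = kruskal(*ratings.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

if p < 0.05:
    pairs = list(combinations(ratings, 2))
    for a, b in pairs:
        u, p_pair = mannwhitneyu(ratings[a], ratings[b])
        # Bonferroni adjustment: multiply each pairwise p by the test count.
        print(f"{a} vs {b}: U={u:.1f}, adjusted p={min(1.0, p_pair * len(pairs)):.4f}")
```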

Recl[ai]ming Design: Automatic Design and the Design Process

The digitalization of architectural design has rapidly changed the discipline over the last fifty years. From physical drafting to the current standard of Building Information Modeling, every past technological advance automated parts of the standard process before it. However, artificial intelligence and generative design are different. They not only automate architectural processes; they automate architectural thought, arguably the most human aspect of architecture. The automation of architectural processes raises questions for the future role of the designer. AI-backed applications are rapidly developing novel solutions for all phases of architectural preparation, design, and realization. This shift presents both impressive opportunities and equally impressive challenges for the architectural discipline. The paradigm of Automatic Design describes artificial intelligence systems that are used as decision-making agents in the architectural design process [1]. Automation and Artificial Intelligence are direct marks of the fourth industrial revolution, emerging through technological advancements in the industrial and corporate worlds. This is partly characterized by innovations in AI technology supported by the Internet of Things and Services. Situated within this paradigm are emerging Cyberfordist modes of production, seen in machine-machine integration and the automation of manual and intellectual labor [2]. Automatic Design pushes the contemporary understanding of architectural automation, in which practice is hyper-automated through generative design practices [3]. This research explores the changing role of design in an increasingly automated architectural industry. The ever-growing Darwinian boom of commercially available, easy-to-use AI technologies presents a unique challenge for the discipline as a whole. An analysis of AI-backed applications reveals what is currently possible, and how these emerging possibilities disrupt contemporary architectural design practices. This analysis, accompanied by a literature review of current architectural AI practices, informs an architectural AI function model. An understanding of the possibilities of AI automation must be accompanied by an understanding of the risks it poses. The model served as a baseline for comparative analysis between AI functionalities and the applicable AI risks stated in the ‘Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile’ published by the National Institute of Standards and Technology in 2024 [4]. Additionally, these functions were employed to study the intersection between Automatic Design and the design process, with a specific focus on design as an inherently subjective activity requiring personal interpretation and value judgments [5], contradicting current algorithmic understanding. Architecture and design are prescriptive fields, concerned with what could and should be rather than what is [5]. Designers need to ask themselves what the future should look like, and what they can do to make that happen. Practicing an approach of informed contestability [6] toward this prevalent automating paradigm is becoming increasingly necessary. Although generative design offers a compelling, fast, and novel alternative to traditional or digitized design techniques, it prompts adaptation to an exponential shift toward the value-engineering of design work. What role will architects and designers play in an increasingly automated workforce?

Session: Curricular Machinations

Saturday, September 27, 2025

AI Hybrids and Mutants: Eight Years of Experimenting with AI in Studio

Over the past eight years, my research has focused on the integration of artificial intelligence technologies, particularly style transfer and machine vision, into the creative processes of architectural design. This work forms a core part of my pedagogy and research. At each stage, I have sought to test and expand the creative agency of architectural students through a speculative yet pragmatic engagement with AI, treating it not as a tool of automation but as a collaborator in design. My investigations use AI platforms to generate formal hybrids that synthesize diverse aesthetic languages, allowing students to explore what I term “augmented imagination.” Rather than merely visualizing a precedent or reinterpreting a known style, students are able to produce entirely new architectural expressions by combining features across images, typologies, or representational conventions. This process engages computer vision not simply to classify or recognize, but to hallucinate, transform, and recompose, enabling a dialogue between human intuition and synthetic perception. The pedagogical aim has been twofold: first, to familiarize students with the conceptual and technical dimensions of emerging AI technologies; and second, to reposition design thinking in a media-rich environment where authorship is increasingly distributed across human and nonhuman agents. In studio, these methods have led to productive disruptions in conventional workflows, often generating unexpected and speculative results that challenge students to articulate critical frameworks for judgment, coherence, and intent in the face of proliferative output. Across multiple institutions and cohorts, I have developed a consistent yet evolving methodology that blends theoretical reflection, technical instruction, and design experimentation. This includes curated datasets, iterative training exercises, and a library of student-generated hybrid forms that serve both as teaching tools and as provocations for architectural discourse. I will present comparative examples of student work from four different institutions to highlight how institutional context, curricular structure, and access to computation have shaped both the process and the outcomes. In an era when generative AI is reshaping the boundaries of authorship, creativity, and pedagogy, architectural education must offer more than technical training; it must cultivate critical literacy around the politics, aesthetics, and ethics of machine intelligence. My talk will reflect on how style transfer and similar tools can be harnessed to extend, not replace, architectural imagination, and how these experiments might inform a broader discussion about AI’s role in the discipline. Through this lens, I propose that design studios can become sites of technological critique, playful exploration, and speculative world-making, preparing the next generation of architects to work with and against the logics of artificial intelligence.

From Generation to Judgment: Balancing Automation and Human Judgment in Architectural Design Education

The rapid advancement of artificial intelligence (AI) and script-based computational tools is reshaping architectural design and pedagogy. This paper presents a case study examining the integration of these tools in an undergraduate architecture studio, with a focus on their benefits, limitations, and implications for qualitative decision-making in design education. A key benefit of AI and scripting tools lies in their capacity to generate diverse design alternatives at unprecedented speed. Generative and algorithmic processes enable rapid iteration, allowing students to explore a wide range of spatial, material, and structural configurations. AI can analyze large datasets and produce design options far faster than traditional methods, expanding the scope of exploration and potentially uncovering novel solutions (Akhtar and Ramkumar 2024). However, the use of these tools also presents notable challenges. Chief among them is the "black box" nature of many AI systems, which limits transparency and reduces opportunities for students to engage critically with the underlying logic of design generation (Castelvecchi 2016). Moreover, the qualitative dimensions of design, such as atmosphere, aesthetics, and human experience, are often difficult to encode in computational terms (Zhang et al. 2023). These limitations raise questions about how effectively AI tools can support the kind of nuanced, value-laden judgments central to architectural practice. Pedagogically, a gap exists between the availability of these tools and the readiness of studio curricula to incorporate them meaningfully. As Jin et al. (2024) observe, students often lack the technological fluency required for effective integration. Faculty hesitations, particularly concerns about creativity being overshadowed by automation, further underscore the need for curricular reform that supports computational literacy and critical engagement (Chen et al. 2020). This study analyzes three undergraduate studio projects where AI and scripting tools were actively employed, offering grounded insight into how students negotiate emerging digital workflows. Through these cases, the research explores three key questions: (1) How should AI and computational tools be positioned within the design curriculum? (2) What cognitive benefits or limitations do these tools present in terms of creativity and critical thinking? (3) What skills are necessary for students to engage meaningfully with emerging technologies? Findings suggest a shifting role for the human designer: as AI takes on repetitive and computationally intensive tasks, students may increasingly focus on conceptual framing, evaluation, and the integration of human-centered values through AI literacy (Long and Magerko 2020). While some studies suggest that AI can enhance creativity through diverse stimuli (Grassini and Koivisto 2024; Chandrasekera et al. 2024), others emphasize its limitations in replicating human imagination and intuition (Marrone et al. 2024). Ultimately, the study calls for a pedagogical approach that balances technical proficiency with critical understanding, equipping future designers not only to use AI tools effectively but also to engage critically with their implications, limitations, and potential as partners in design thinking.

Reflective Practice Rewired: AI-Driven Ideation in Studio Contexts

This research investigates the integration of generative artificial intelligence (GenAI) into architectural creativity through an exploratory case study that reengages two design exercises originally completed in an advanced studio. GenAI tools, powered by multimodal large language models, have begun reshaping architectural design and education by enabling the generation of images, animations, and 3D forms from textual and visual prompts. Since their release, a growing body of pedagogical research has examined their use across diverse contexts, including culinary-inspired design (Koh 2023), sensory-rich installations (Turchi et al. 2023), and multicultural community spaces (Dortheimer and Schubert 2023). Scholars have also applied GenAI to speculative design (Jacobus and Kelly 2023) and architectural competitions (Guida 2023). Extending this emerging scholarship, the study applies GenAI within a studio context shaped by critical regionalist pedagogy, emphasizing the interplay of local context, materiality, and formal abstraction rather than tailoring the process to GenAI tools.

Following the studio’s completion, two sequential design exercises, Unit Prototype and Resolving the Elevation, part of a cultural arts residency project in the American South, were revisited using GenAI tools. Design iterations were developed through a hybrid approach that combined hand sketching in Krita with visual programming in ComfyUI, incorporating custom SDXL, Flux, and ControlNet models. In the first exercise, spatial and formal concepts for a Unit Prototype were generated using sketches from Krita and custom workflows in ComfyUI (Fig. 1). In Resolving the Elevation, these methods were extended to include image segmentation and material studies, further refining the design. Final outputs were composed in Photoshop, while Rhino and Grasshopper supported 3D modeling to address the limitations of image-to-3D translation (Fig. 2). In both exercises, GenAI is used as a medium for iterative, intuition-led design shaped by visual feedback, drawing on the situational and cyclic nature of architectural ideation (Schön 1984). This process is captured in a diagram that traces the evolution of decisions and design representations across both exercises, serving as a framework for the study’s inquiries and discussions. The case study suggests that GenAI can significantly expand a designer’s visual and conceptual range, particularly during early-stage ideation, but it also introduces challenges such as cognitive overload from rapid image generation and the ongoing need for curation. Additionally, difficulties in controlling scale, tectonic clarity, and material expression were observed (Fig. 3), as also noted by Turchi et al. (2023) and Dortheimer and Schubert (2023). Open-source tools like ComfyUI and Krita helped mitigate these issues through their customizability: ComfyUI supported parameter-driven workflows and custom model integration to refine outputs and reduce visual noise, while Krita enabled sketches and image edits to clarify design intent (Fig. 4). This flexibility, however, required substantial technical fluency and time investment, though overall the hybrid GenAI method proved significantly faster than the conventional design process. Ultimately, this study positions GenAI as a support for reflective design dialogue and ideation, emphasizing the importance of open, community-driven platforms that enable designers to engage hybrid workflows, manage data, and explore ideas, capacities crucial for shaping future methodologies grounded in architectural expertise.
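
One way ComfyUI's customizability supports the parameter-driven workflows described above is its scriptable HTTP interface. The sketch below assumes a locally running ComfyUI server on its default port and a workflow exported in ComfyUI's API JSON format; the file name, node id, and swept field are hypothetical and depend entirely on the exported graph.

```python
# Sketch of driving a ComfyUI workflow from Python, assuming a local ComfyUI
# server on its default port (8188) and a graph exported in API JSON format.
# The file name, node id "3", and the "denoise" field are hypothetical.
import json
import urllib.request

with open("unit_prototype_workflow.json") as f:
    workflow = json.load(f)

# Sweep one parameter (e.g., denoise strength) to produce a controlled series
# of iterations from the same Krita sketch instead of one-off manual runs.
for denoise in (0.35, 0.5, 0.65):
    workflow["3"]["inputs"]["denoise"] = denoise
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(denoise, resp.read().decode("utf-8"))
```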

Getting Comfortable: AI, Reflexive Praxis, and Future Climates

In Fall 2023, Shoulder to Shoulder, a comprehensive graduate architecture studio, attempted to address two current realities: the need to imagine new forms of comfort, both thermal and social, within a shifting landscape of resource scarcity and environmental unpredictability, and the rapidly evolving role of AI in image generation. The studio placed these two pressures at the center of architectural inquiry, asking students to operate simultaneously within speculative and technical modes as they wrestled with both the smooth promises of AI-generated images and the performative complexities of the expanded architectural wall section. Central to the studio’s pedagogy was the premise that AI tools such as Midjourney could act as a prompt rather than a product, provoking rather than replacing architectural thought. The result was a workflow grounded not in seamless automation but in friction, reflection, and formal invention. The studio dropped students in medias res, midway into a scenario of environmental and social instability, tasking them with designing thresholds between radically different adjacent conditions. To emphasize and explore these conditions, the first assignment asked students to describe a situation of adjacent spaces with extremely different environments (thermal and social) whereby the pressure must be on the “architecture” (surfaces, forms, and systems that create environmental mediating boundaries) to house, control, and negotiate the adjacent conditions. Students began with AI-generated images as pre-design tools, producing visual exaggerations of these speculations. Within days, the students moved from cautious novelty to fluent exploration, learning to manipulate biases, interpret glitches, and develop architectural workarounds in Midjourney. These early images served not as representations but as provocations: environmental atmospheres rendered with aesthetic immediacy but devoid of tectonic logic. Subsequent assignments reversed conventional workflows. Students worked “cart-before-horse,” using architectural tools (plan, section, and wall section) to reverse-engineer the spatial logics (or illogics) apparent in the AI images. This inversion invited nuanced conversations about construction, layering, and climate-specific assemblies. Wall sections at 1/2" scale and section models speculated on thermal, social, and spatial mediation. Across iterations, annotation played a critical role. As students tracked their own evolving ideas, they developed a self-reflexive design process that was rooted in architectural expertise but open to other forms of knowledge. The studio culminated in a final round of AI image generation, executed after weeks of iterative spatial development. Students prompted Midjourney using their own, now discipline-specific, language, describing precise technical relationships and formal dispositions. What emerged were not improbable visualizations but architectural arguments carrying the accumulated weight of design decisions. In this context, AI served less as a design surrogate than as a pedagogical provocateur, encouraging the development of a critical, recursive praxis and resisting both techno-utopian and Luddite postures. Rather than pushing students toward mastery of external systems, the studio emphasized the importance of knowing what one knows and what lies just beyond.
This approach suggests a framework for navigating emerging technologies not through wholesale adoption or rejection, but through calibrated engagement: one that repositions the architect as both synthesizer and skeptic in an evolving ecology of practice.

From Studio to Solver: A Comparative Study of Human and Generative AI Design

This case study examines the comparative value systems embedded in human-centered architectural pedagogy and generative AI design workflows by analyzing two distinct approaches to housing design on a constrained urban site. For over eight years, a third-year undergraduate housing studio has utilized a 96-by-64-foot lot as a pedagogical framework to explore typological logic, spatial customization, and aggregation at a neighborhood scale. The studio sequence engages students who are introduced to housing design for the first time in iterative design processes: beginning with fixed 16'-wide rowhouse modules, followed by user-specific adaptations and a variable-width rowhouse module, and culminating in a demographic analysis toward the development of a multifamily housing complex. To interrogate how AI-driven platforms might approach the same challenge, this case introduces TestFit.io’s Site Solver, a generative tool built to produce rapid, code-compliant massing and unit layouts. According to CEO Clifton Harness, TestFit “solves site plans in seconds,”[1] promising automation of tasks traditionally undertaken through a slower, interpretive design process. By inputting comparable programmatic parameters into TestFit and generating parallel schemes for the same lot, this study creates a side-by-side comparison between AI-generated output and student-authored designs. The comparison centers on five metrics to assess design efficiency and qualitative impact. The resulting diagrams and spatial data reveal striking differences in both product and process. TestFit excels in layout speed and zoning logic but struggles with typological nuance, communal space activation, and responsive design for diverse user profiles. Conversely, student proposals often sacrifice efficiency in favor of layered circulation strategies, flexible unit programming, and public realm richness. This case study also frames the comparison through the lens of authorship and design agency. It draws on Stan Allen’s assertion that “scripting is design,”[2] and tests the extent to which AI scripting represents not just automation, but a viable form of authorship. The findings suggest that while generative tools can simulate design processes, they encode a narrow range of spatial assumptions that require critical interrogation, particularly in pedagogical contexts where meaning, not just form and production, is a crucial measure of success. Ultimately, this study weighs the pros and cons of each process and proposes a hybrid model for design education, one in which generative AI tools like TestFit serve not as replacements for human designers but as provocative, comparative instruments within the design process. When engaged critically, generative platforms can surface constraints, accelerate iterations, and enhance decision-making. Simultaneously, studio culture can offer AI systems more complex narratives, culturally situated priorities, and richer spatial problem definitions. The aim is not to prove which system designs “better,” but to illuminate how each sees, prioritizes, and builds its models, and to determine how educators might bridge the two in a new pedagogy of design intelligence.

Session: Cognitive Systems & Speculative Narratives

Saturday, September 27, 2025

The Hallucinating Alien: Deanthropomorphizing AI for a Post-Artificial Future

The paper uses the filmic narrative of Annihilation (2018) to challenge the persistent dismissal of AI’s creative capabilities. Such skepticism often hinges on the problematic assumption that human traits, such as inspiration, intuition, consciousness, spontaneity, will, or emotion, are prerequisites for creativity. This line of argumentation relies on an anthropomorphic approach that is unable to imagine or accept non-human modes of perception, cognition, and agency. Contrary to prevailing critiques that dismiss AI’s outputs as derivative imitations or algorithmic collages of human data, this study explores how AI’s so-called alien qualities destabilize traditional notions of creative agency and human exceptionalism. Drawing on interdisciplinary frameworks, ranging from Lacanian psychoanalysis to posthumanist theory, this paper proposes the concept of deanthropomorphizing AI as a critical approach that resists projecting human expectations onto machine cognition. The paper first suggests adopting an apophatic approach, through which AI is defined not by projecting human qualities but via negation, emphasizing what lies beyond human understanding. It then advocates for "alien phenomenology" as a method of accepting the non-human cognitive capabilities of AI. Accepting AI as an "alien intelligence" shifts the role of the artist to one of critique and curation, emphasizing the artistic talent in recognizing unexplored alien forms of beauty. A central argument advanced here is that AI should not be understood as a singular, monolithic intelligence but rather as a heterogeneous multiplicity, an assemblage of tools, architectures, and logics that operate outside human modes of understanding. This distinction becomes especially urgent as AI systems increasingly generate works that not only rival but often exceed human output in perceived creativity, originality, and aesthetic value. The paper considers this condition through the concept of “gray authorship,” a term that denotes the entangled co-productive space between human and machine, a space where agency is distributed rather than owned. Using Alex Garland’s 2018 film Annihilation as an allegorical anchor, the paper analyzes the figure of the Shimmer, an alien force that transforms, mutates, and mirrors life without intent or memory. This metaphor is employed to reframe AI not as a tool that reflects human will, but as an uncanny and disinterested authorial presence. Like the Shimmer, AI exhibits a mode of creation that is non-continuous, unanchored to selfhood, and indifferent to the afterlife of its creations. This structural amnesia and lack of intentionality invite a reconsideration of theological and metaphysical questions long associated with creation and authorship, most provocatively whether intelligence must care in order to create meaningfully. The paper further engages with contemporary voices in architecture and design theory, including David Chalmers, Neil Leach, and Shannon Vallor, to contextualize a “post-artificial” understanding of AI. The post-artificial era describes a future condition in which the distinction between artificial and natural intelligence becomes obsolete, a marker of our passage beyond the current shimmer of astonishment, denial, or fear.

Constructing Urban Narratives through Generative AI and Game Engines

Lee Su Huang, Lawrence Technological University

Masataka Yoshikawa, Lawrence Technological University

Sara Codarin, Lawrence Technological University

Introduction: This course combines several generative AI workflows to develop urban narratives through speculative urban scenarios, facade studies, and reimagined streetscapes in contemporary urban contexts. As an interdisciplinary examination of the intersection between AI and architectural and urban design, there is a specific focus on the city of Detroit as a case study. Through the utilization of photorealistic urban models, students explore the confluence of 2D and 3D generative AI tools, digital scanning methodologies, photogrammetry, and immersive real-time rendering techniques in redefining the spatial design process.

Objectives: Students cultivate a design approach that integrates digital fluency with critical urban storytelling by engaging hands-on with AI-driven workflows, including digital scanning, 2D-to-3D model generation, and real-time visualization. Tools such as Polycam, Midjourney, ComfyUI, Hunyuan 3D-2.0, Rhino, and Unreal Engine support this exploration. Students are challenged to position their work within Detroit's cultural and historical contexts, reflecting on how AI-generated content can convey themes of memory, equity, and transformation. Project proposals aim to recontextualize the urban environment as a dynamic, mutable entity ripe for cultural articulation and speculative transformation.

Methodology: The course unfolds in three assignments, each building on the last. In Phase 1, students begin by scanning objects or landmarks of their interest with Polycam. These assets are inserted into a model of Detroit, which students modify with cross-scalar transformations and juxtapositions to redefine the relationship between urban spaces and objects. In Phase 2, students shift from scanning real-world objects to generating fictional ones. Using Midjourney, they create synthetic images based on prompts and visual inputs, exploring urban scenarios, facades, and artifacts. Components of these visuals are translated into 3D geometry via ComfyUI's Hunyuan 3D-2.0 workflow and refined in Rhino. Multiple GenAI workflows coalesce into feedback loops that inform each other; 3D models are hybridized and exported as images for further modification. This phase encourages experimentation with spatial conditions and challenges students to reimagine urban elements across all scales, such as facades, urban furniture, installations, and infrastructure. Phase 3 synthesizes these efforts into an urban scenario developed as an immersive, interactive medium. Unreal Engine is used to integrate the assets generated in previous phases into a single, seamless urban environment. This is used to create cinematic or interactive visualizations of a reimagined downtown Detroit, one that is more inclusive, pedestrian-oriented, sustainable, and culturally expressive. These final outputs bring together narrative, technology, and speculative design in a cohesive urban vision accessible to many.

Results and Discussion: Students produce a collective body of work that reinterprets the city's identity, including 3D scans, AI-generated images and 3D models, creative overlays, conceptual urban interventions, and immersive environments. This seeks to reposition AI as more than a generative instrument; it becomes a collaborator in speculative urban design. The work produced suggests that AI-enabled storytelling has a meaningful role to play in shaping more inclusive and imaginative urban futures.
The projects reflect each student's unique interpretation of Detroit's identity, revealing how digital tools can be used to tell powerful spatial stories.

This presentation was removed.

Training Architectural Intelligence: Towards an Agent-Based Model of Discursive Pedagogy

This paper outlines a pedagogical approach that treats generative image tools not as novelty renderers or optimization engines, but as discursive agents: tools capable of provoking, disturbing, and extending architectural thinking. Within this framework, the designer constructs a discursive environment by critically sorting the language that shapes the generative process, and then evaluating the resulting outputs to identify a structured set of consequential design trajectories. What emerges is a new model of design workflow grounded in rapid iteration, careful interpretation, and critical reflection, one that trains architectural intelligence through language, image, and the feedback loop between them. Two experimental series, Dazzle House Field and Leisure Tectonic, anchor this proposal. In the first, a rendered isometric of a speculative housing prototype serves as a generative seed. Through iterative prompting, hundreds of variations emerge, exploring typological conditions such as overlapping thresholds, redundant porches, and nested circulation. The designer is tasked not with selecting a final form, but with interpreting the field of results: reading across difference, evaluating spatial logic, and identifying latent patterns. The exercise becomes a method of close reading, where judgment is developed through comparative visual analysis. Leisure Tectonic extends this inquiry into material and environmental fiction. Prompts that describe embedded stone forms within coastal landscapes produce a series of images that oscillate between the geological and the artificial. These outputs question conventional assumptions of tectonic legibility and site specificity. In this context, language not only initiates the design response but also conditions its atmosphere, texture, and spatial logic. The tool acts less as a generative engine and more as an interlocutor, reflecting and refracting the designer’s intent through a series of unexpected visual consequences. Across both projects, the act of designing becomes an interpretive exchange: prompt, output, evaluation, revision. This cyclical model recalls Yanni Loukissas’s notion that simulation enables pluralistic invention rather than deterministic results, and Keller Easterling’s argument that architecture functions as an information system shaped by patterns of interaction. The generative tool becomes a sounding board, not for automation, but for agentic refinement. This paper positions these practices not as prescriptive curricula, but as speculative scaffolds: frameworks for teaching students to navigate an increasingly nonhierarchical design environment. Within this model, students must learn to sort noise from signal, to transform ambiguous visual data into structured architectural inquiry, and to toggle between representational registers, from realism to abstraction, from atmosphere to system. In doing so, they practice the core act of design not as resolution, but as choice: grounded in language, guided by judgment, and sustained through iteration. Note: Images supplied are constructed with AI-generated components.

The Architecture of Inference: Cognitive Systems and the Posthuman Creative Act

Architectural discourse has embraced artificial intelligence with a speed that far exceeds its conceptual maturity, often celebrating the visual excesses of generative systems while suspending critical engagement with what creativity entails. References to “machine imagination” [1] or “creative AI” [2] recur in exhibitions, pedagogical experiments [3], and speculative projects [4], frequently without acknowledging that creativity itself remains incompletely defined in cognitive science [5]. This paper responds by proposing a theory-grounded framework that does not mythologize machine output but interrogates the epistemic limits of both neuroscience and computation in modeling creativity. Neuroscience has not resolved what creativity is: there is no singular location, trait, or essence that accounts for it. What it does offer, through the work of researchers such as Anna Abraham [6], is a model of the conditions under which creative cognition emerges: the interaction of the Default Mode Network, the Executive Control Network, and the Salience Network, which together enable ideation, constraint evaluation, contextual switching, and novelty filtering. These processes do not constitute creativity per se, but describe the dynamic coordination of cognitive systems that support it. AI models such as generative adversarial networks, diffusion architectures, and large-scale transformers exhibit formal analogs of these functions, though without consciousness, intention, or self-directed evaluation. The architectural conversation around AI, by contrast, has largely failed to distinguish between stochastic variation and creative cognition [7]. Generativity is often equated with creativity, novelty with meaning, output with intelligence. The result is a superficial discourse that prizes visual surprise while bypassing the systemic, recursive, and evaluative dimensions that define creative work in scientific terms. This paper critiques that tendency and introduces an epistemological model that treats creativity not as an aesthetic event but as a cognitive ecology: distributed across human and artificial agents, structured by feedback, shaped by constraints, and embedded in systems of interpretation. Agency, in this view, is not assigned to a subject, human or machinic, but is understood as emerging through recursive operations and situated inference. Authorship is displaced from the individual and repositioned as a condition of participation within an extended system of design logic, training data, material affordance, and conceptual framing [8]. This model does not reproduce the language of technological awe, nor does it nostalgically return to the figure of the creative genius. It offers, instead, a critical architecture of cognition. This paper contributes to a developing tradition in architectural theory that brings posthuman epistemology, neurocognitive modeling, and critical AI studies into conversation. It rejects the esoteric, the naive, and the spectacular in favor of a rigorous, grounded framework for understanding what intelligence might mean in the age of artificial systems. This submission presents recently completed theoretical research that has not been published or presented beyond internal academic settings.

Session: Environmental Intensities

Saturday, September 27, 2025

Comparative Evaluation of GPT-4 and Gemini for Sustainability-Oriented Design Critique in Conceptual Architecture

Md Shariful Alam, Mithun

Chi Aoyama, Mithun

Sean Baxter, Mithun

Kerry Garikes & Koushik Srinath, Mithun

As AI rapidly permeates architectural practice, Large Language Models (LLMs) have demonstrated their potential not only as general knowledge tools but also as possible collaborators in design workflows. However, a significant research gap persists: while LLMs like OpenAI's GPT-4 and Google's Gemini are now widely accessible, there is limited empirical research comparing their ability to serve as design advisors, especially in analyzing early-stage architectural data, which is inherently visual, geometric, and contextual. This study addresses this gap by developing and testing a technical framework that enables LLMs to function as design consultants during the conceptual phase of architectural design. We integrate Autodesk Forma, a platform that captures 3D massing, spatial data, and proposal variants at the earliest design stages, with cloud-based sustainability analytics hosted on AWS, where Python-based functions calculate key performance metrics. These metrics include energy demand approximations, climate sensitivity, and site-specific passive design potentials, all without requiring full simulation pipelines. The framework extracts data via Forma's API endpoints and transforms it into structured, text-based representations understandable by LLMs. A critical innovation is the preservation of semantic hierarchy from the 3D design environment (e.g., identifying walls, windows, and roof planes), as LLMs are not yet capable of parsing raw geometry natively. The processed input is routed to both GPT-4 and Gemini via separate endpoints, and responses are embedded directly within the design interface using Streamlit-based UI elements integrated into Forma. A detailed case study featuring a student housing project exemplifies this framework. A group of twenty professional designers collaborated on this sample project, discussing concepts openly and contributing equally to diverse design proposals. Designers anonymously reviewed sustainability-focused critiques generated by the LLMs, with responses blinded to eliminate bias. Feedback from GPT-4 and Gemini was anonymized before distribution, so designers were unaware of the source LLM for each response. After evaluating these anonymized responses, participants independently completed questionnaires measuring interpretability, relevance, clarity, and practicality, again unaware of the LLM identities. Findings indicate that while both LLMs provide contextual feedback, GPT-4 tends to deliver more analytical responses with stronger alignment to performance metrics, whereas Gemini excels in interpretive summaries and provides broader design suggestions. However, both models struggle with precision when architectural semantics are not explicitly structured, a limitation noted in prior research on LLM spatial reasoning. This aligns with broader findings in the literature that highlight the challenges LLMs face when navigating non-textual domains. This study contributes a reproducible method for embedding AI-driven feedback loops into early design practice. Rather than treating LLMs as monolithic oracles, it argues for targeted, comparative testing across tools, a methodologically underdeveloped area in AI architecture research. The work also underscores that, with structured data inputs and embedded sustainability analytics, LLMs can meaningfully assist with early-stage decisions, helping designers reflect, iterate, and learn, all without leaving their primary design environment.
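As a concrete sketch of two steps this kind of framework depends on, serializing semantic hierarchy into LLM-readable text and blinding the critiques before designer review, the following minimal Python example uses hypothetical field names, an invented massing extract, and a placeholder model call; it is not the authors' implementation and not the actual Forma API schema.

```python
import json
import random

# Hypothetical massing extract; field names are illustrative, not the Forma schema.
massing = {
    "proposal": "student_housing_v3",
    "climate_zone": "4C (Marine)",
    "elements": [
        {"type": "wall", "orientation": "south", "area_m2": 420, "wwr": 0.35},
        {"type": "wall", "orientation": "north", "area_m2": 410, "wwr": 0.20},
        {"type": "roof", "area_m2": 980, "albedo": 0.6},
    ],
    "metrics": {"energy_use_intensity_kwh_m2": 92, "daylight_autonomy": 0.48},
}

def to_structured_text(data: dict) -> str:
    """Flatten the model into text that preserves semantic hierarchy
    (walls vs. roofs), since LLMs cannot parse raw geometry natively."""
    lines = [f"Proposal: {data['proposal']} | Climate: {data['climate_zone']}"]
    for el in data["elements"]:
        attrs = ", ".join(f"{k}={v}" for k, v in el.items() if k != "type")
        lines.append(f"- {el['type'].upper()}: {attrs}")
    lines.append("Computed metrics: " + json.dumps(data["metrics"]))
    return "\n".join(lines)

def ask_model(name: str, prompt: str) -> str:
    """Placeholder for the two cloud endpoints; a real implementation
    would POST `prompt` to each vendor's API."""
    return f"[critique from {name}]"

prompt = ("You are a sustainability design critic. Review this early-stage "
          "massing and suggest passive design improvements:\n"
          + to_structured_text(massing))

# Blind the critiques so questionnaire ratings cannot be biased by
# knowledge of the source model; the key stays with the study admins.
responses = [(name, ask_model(name, prompt)) for name in ("gpt-4", "gemini")]
random.shuffle(responses)
blinded = {f"Critique {chr(65 + i)}": text for i, (_, text) in enumerate(responses)}
key = {f"Critique {chr(65 + i)}": name for i, (name, _) in enumerate(responses)}
print(json.dumps(blinded, indent=2))
```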

Integrating AI and VR in Interdisciplinary Pedagogy for Sustainable Ecotourism Infrastructure

The growing demand for sustainable and interdisciplinary approaches in the built environment, particularly within ecotourism infrastructure, calls for innovative educational strategies. This study introduces a novel pedagogical workflow that leverages artificial intelligence (AI) and virtual reality (VR) technologies to explore and develop sustainable business models tailored to ecotourism. The approach was implemented in an academic setting, where students engaged in a problem-based learning framework to address real-world challenges in coastal tourism infrastructure. The workflow incorporated a suite of digital tools to support each phase of the design and planning process. Geographic Information Systems (GIS) were used to analyze site-specific data and determine optimal locations. Systems thinking was facilitated through the use of Loopy, enabling students to visualize and understand complex interdependencies within ecological and economic systems. Autodesk Forma, an AI-powered platform with generative design capabilities, was employed to generate and evaluate multiple design options. For immersive visualization and architectural modeling, Autodesk Revit and Twinmotion were utilized to create VR experiences that allowed for interactive exploration of proposed solutions. Additionally, a structured business plan template guided students in articulating viable and sustainable business strategies. The educational methodology emphasized interactivity and experiential learning, encouraging students to collaboratively develop proposals that addressed environmental, social, and economic dimensions of sustainability. Through a case study approach, student teams focused on revitalizing infrastructure and tourism activities in coastal national parks in the United States. Their proposals highlighted unique selling points such as the integration of renewable energy systems, the use of sustainable construction materials, restoration of marine ecosystems, development of educational and research facilities, and the incorporation of local food systems. Preliminary results suggest that this interdisciplinary and technology-enhanced workflow not only fostered innovative thinking but also yielded practical insights into the potential economic and environmental impacts of sustainable ecotourism development. Course evaluations indicated that students found the experience highly valuable, citing the integration of AI and VR tools as instrumental in enhancing their understanding of sustainable design principles and business planning. This study demonstrates the potential of combining emerging technologies with interdisciplinary pedagogy to equip future professionals with the skills and mindset needed to address complex sustainability challenges in the built environment.

Circular Intelligence: AI-Augmented Evaluation of Biobased Materials for Sustainable Construction

The environmental impact of construction materials is a critical concern in advancing sustainable design. Embodied carbon, construction waste, and material obsolescence are increasingly recognized as systemic issues that demand early-stage interventions. Recent research has demonstrated that biobased materials such as straw, hempcrete, and mycelium composites can offer significant advantages in terms of carbon sequestration, thermal performance, and end-of-life recovery. However, despite their environmental potential, these materials are often underutilized due to fragmented lifecycle data, lack of familiarity among practitioners, and challenges in comparing them against conventional assemblies during design development. This paper presents a novel computational framework that integrates artificial intelligence (AI) techniques with lifecycle assessment (LCA) data to support circular material decision-making in architectural design. The framework embeds environmental product declarations (EPDs), embodied carbon metrics, and disassembly potential into an AI-augmented generative design process. It employs unsupervised clustering and multi-objective optimization algorithms to evaluate hundreds of material assembly permutations, highlighting trade-offs between embodied carbon, thermal performance, cost, and recyclability. The paper presents two comparative case studies to demonstrate the tool's functionality and insights. The first evaluates straw-insulated timber walls against gypsum board with mineral wool in temperate and arid climates. The second compares hempcrete and hybrid wood-concrete roof systems in mixed-humid zones. These examples illustrate how biobased alternatives, when selected through the proposed AI workflow, can yield embodied carbon reductions of 60–80% relative to standard assemblies, while enhancing passive thermal regulation and circularity indicators. The results further show that hybrid and layered strategies, often overlooked in early design, can be surfaced through AI-generated Pareto front visualizations, offering viable paths to integrate low-impact materials within design constraints. A critical aspect of this work is its emphasis on "human-in-the-loop" interaction. Rather than fully automating material selection, the tool presents data-driven suggestions while allowing designers to prioritize based on cultural relevance, local supply chains, aesthetics, or construction logistics. The interface promotes transparency in material sourcing and lifecycle assumptions, addressing key critiques of automation in sustainability assessment. The broader contribution of this research is the operationalization of circular design values through digital tools that are accessible to both students and practitioners. By aligning the framework with open standards in material documentation and digital building modeling, it anticipates future integration with material passports, circular procurement databases, and regulatory compliance systems. This paper argues for a shift in how environmental data is embedded into design decision-making: not as a post-rationalization of form, but as a generative and participatory element of early design ideation. This research expands the growing discourse on AI in ecological design by making circularity computationally actionable.
It advances methodological tools for sustainable architecture that are simultaneously rigorous, adaptable, and context-sensitive, and contributes to the movement toward regenerative design intelligence in architectural practice.
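To illustrate how a multi-objective evaluation can surface non-dominated assemblies of the kind the paper describes, the following Python sketch computes a simple Pareto front over a handful of invented placeholder assemblies; the names and values are illustrative assumptions, not the paper's EPD data, and the tool's clustering step is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Assembly:
    """One wall/roof permutation. Values are illustrative placeholders."""
    name: str
    embodied_carbon: float   # kgCO2e/m2 (lower is better)
    u_value: float           # W/m2K (lower is better)
    cost: float              # $/m2 (lower is better)

candidates = [
    Assembly("straw-insulated timber", 18.0, 0.16, 210.0),
    Assembly("gypsum + mineral wool", 72.0, 0.22, 160.0),
    Assembly("hempcrete hybrid", 25.0, 0.19, 240.0),
    Assembly("wood-concrete hybrid", 55.0, 0.18, 190.0),
]

def dominates(a: Assembly, b: Assembly) -> bool:
    """True if `a` is at least as good as `b` on every objective and
    strictly better on at least one (all objectives minimized here)."""
    fa = (a.embodied_carbon, a.u_value, a.cost)
    fb = (b.embodied_carbon, b.u_value, b.cost)
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

# Keep only assemblies that no other candidate dominates: the Pareto front.
pareto_front = [a for a in candidates
                if not any(dominates(b, a) for b in candidates if b is not a)]

for a in pareto_front:
    print(f"{a.name}: {a.embodied_carbon} kgCO2e/m2, U={a.u_value}, ${a.cost}/m2")
```

With these invented numbers, the hybrid wood-concrete option lands on the front alongside the straw and gypsum assemblies, mirroring the paper's observation that hybrid strategies can be surfaced by Pareto analysis even when no single metric favors them.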

This Presentation was removed

Session: Locating Biases

Saturday, September 27, 2025

“Unboxed City: critical explorations of [ai] and cities”

Alberto Meouchi, Massachusetts Institute of Technology

Sarah Williams, Massachusetts Institute of Technology

Rohit Sanatani, Massachusetts Institute of Technology

“Unboxed City: critical explorations of [ai] and cities” studies the embedded biases and functionalities of out-of-the-box Gen AI within the context of the built environment and design. Since the release of openly available AI systems in November 2022, numerous critiques and discussions have emerged around their use. These models, capable of performing tasks once thought exclusive to humans, have been deployed in numerous fields, including design and urbanism. The project asks: In what ways are power, bias, and omission embedded in the out-of-the-box generative AI models used in urbanism and design? And how can urbanism and design practices engage critically with black-box AI systems to promote transparency, inclusion, and equity? In an effort to critique and unpack these systems, we developed a year-long research project that culminated in an experiential installation: Unboxed City. The research sought to reveal how these systems work to design audiences while critiquing their current potential use in design, architecture, and urbanism. By framing the current deployment of AI systems within the broader historical literature around technological innovations in urban planning, this article reveals similarities between past urban technology forms and AI's development. The project also critiques the presence of Generative AI in the public imagination, where it is commonly referred to as “intelligent” or “creative,” terms often used uncritically. As these models and systems quickly integrate into our society, industries, and public spaces, it is crucial to unpack how they shape the narratives of our cities and built environment. "Unboxed City" positions the technology in the urban realm and its current use, framing it within a historical context of technological experimentation. Building a parallel between historical visual tools and contemporary algorithms, the project highlights how each technological deployment has been used to understand or make sense of cities and, at the same time, to assert control over cities and their inhabitants. In multiple ways, the Camera Obscura was a similar type of tool, one that sought to capture the world inside a box to understand and comprehend it from a distance. “Unboxed City” reveals to its audience how generative AI algorithms work by immersing them within one. The installation is a large black box referencing the notion of black-box algorithms, a concept first introduced in cybernetics to describe systems whose internal workings cannot be understood. A totem at the center of the box holds four large screens playing multi-directional videos illustrating the process of neural networks collecting, encoding, and decoding data, thereby transforming it into artificial intelligence. “Unboxing” is a popular term for the act of opening a new device or tool; we are now opening the AI, and we must understand what is inside this box to understand its limitations. Furthermore, the project invites the public to question not only AI outputs, but also what these systems omit. By critically engaging with the aesthetics of the “black box,” this research bridges the gap between Generative AI training and the representation of lived reality in the urban context.

Masked Depth: AI-Induced Ambiguities in Spatial Perception

Smartphone cameras are no longer passive recorders of reality. Each image captured embeds algorithmically generated spatial metadata within its digital file, invisible to the threshold of human perception. In this context, where artificial perception is increasingly obscured from the end user, this research reveals and critiques how AI-generated depth maps in smartphone photography introduce perceptual distortions and ambiguities in architectural representation. The project extracts and visualizes the machine-learning-derived depth maps and segmentation layers embedded in everyday smartphone images. By transforming these latent depth and object-segmentation cues into physical and visual artifacts, including 3D-printed spatial reliefs, stereoscopic images, and immersive augmented reality environments, the project makes algorithmic perception tangible. These artifacts expose a stark divergence between human spatial experience and algorithmic vision. The AI-generated depth interpretations often diverge from real spatial cues, revealing how machines “see” architectural space in ways that are alien to human intuition. Crucially, the project exposes the perceptual and material distortions introduced by machine-mediated vision. AI interpretations often misread depth relationships, bringing distant objects unnaturally forward or flattening protrusions into backdrops; boundaries between materials blur where segmentation falters, and continuous surfaces become disjointed or fragmented. In effect, the camera's computational gaze generates spaces that may be coherent to an algorithm but that are riddled with anomalies to a human observer. For instance, in the depth data, the edge of a dark table may bleed into a dark wall behind it, erasing a clear material boundary, and a floor plane might appear to bend or step where it should be level. Reflective or transparent elements in a scene (e.g., glass, mirrors, or water) and uniform, textureless surfaces are especially morphed, illustrating how the technology's blind spots produce novel spatial ambiguities. The resulting artifacts recall the ambiguities and transparency discussed by Rowe and Slutzky; here, however, ambiguity is not a deliberate design intent but a repercussion of computational perception. These distortions are not mere technical flaws; they reveal the distinct perceptual character of the algorithmic observer. The project ultimately argues for the architectural relevance of materializing machine vision, and it expands our repertoire of representation and critique, thus enabling a dialogue between human and machine ways of seeing space.
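As an illustration of how a latent depth map can be materialized as a physical relief, the following Python sketch converts a grayscale depth image into a simple OBJ height-field mesh suitable for 3D printing; the filename, downsampling factor, and height scale are hypothetical, and extraction of the depth map from the photo's metadata is assumed to have happened upstream with a separate tool.

```python
import numpy as np
from PIL import Image

# Assumes the depth map was already extracted from the smartphone photo's
# metadata; here it is simply read as an 8-bit grayscale image.
depth = np.asarray(Image.open("depth_map.png").convert("L"), dtype=float) / 255.0
depth = depth[::8, ::8]  # downsample so the mesh stays small
h, w = depth.shape

with open("relief.obj", "w") as f:
    # One vertex per depth sample; z is scaled depth, so segmentation
    # errors (e.g., a table edge bleeding into a wall) become visible steps.
    for y in range(h):
        for x in range(w):
            f.write(f"v {x} {y} {depth[y, x] * 20.0}\n")
    # Two triangles per grid cell; OBJ vertex indices are 1-based.
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x + 1
            f.write(f"f {i} {i + 1} {i + w}\n")
            f.write(f"f {i + 1} {i + w + 1} {i + w}\n")
```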

Brutalist Dreams: An Augmented AI Tool for Custom Architectural Design Processes in Research, Teaching, & Practice

Recent advancements in Artificial Intelligence (AI), particularly Generative AI (GAI), have rapidly transformed workflows across disciplines, including architecture. (i) While architectural research, pedagogy, and practice benefit from powerful AI tools deployed across various tasks, these technologies often prioritize speed and output fidelity over designers' agency and control, especially in prompt-driven applications. Additionally, AI is frequently applied in scattered tasks rather than consolidated in comprehensive workflows. In architecture, earlier GAI applications, such as Generative Adversarial Networks (GANs), required intensive data collection and curation, enabling architects to exercise authorship and agency through dataset customization. (ii) However, the rise of Diffusion Models (DMs) and Large Language Models (LLMs) has shifted focus toward prompt engineering, reducing labor but limiting agency, control, custom data augmentation, and the ability to tailor results to specific design questions, historical references, or theoretical intentions. (iii, iv) This research proposes a custom AI design tool that reintroduces agency through fine-tuning and augmentation of LLMs with custom architectural data. This method contributes to AI in architectural research, education, and practice by combining the benefits of both generations of DL algorithms, harnessing fidelity, speed, and customized control while preserving multi-modal formats of input data and results. The model is trained on a curated corpus of architectural theory (historical and contemporary), precedent images, writings about AI in architectural discourse, empirical data, and building codes. It is further augmented with typology-specific datasets, material systems, and construction methods, such as mass timber and Cross-Laminated Timber (CLT), as well as accessibility standards (ADA), enabling a versatile yet context-aware design assistant. This tool aims to bridge multiple individual AI-aided design tasks into a comprehensive workflow that can be continuously fine-tuned and shared across multiple collaborators and steps. The tool serves as an artificial collaborator, capable of providing feedback and suggestions across multiple design phases, ranging from site analysis and programming to conceptual development, spatial articulation, visualization, and theoretical analysis. It bridges LLMs with industry-leading DMs and connects to contemporary 3D modeling platforms, including BIM, to generate coherent outputs across text, images, plans, elevations, and 3D data. In a series of 12 case studies, the method was deployed and evaluated in an educational setting with the aim of industry collaboration. The studies are rooted in Brutalist architecture, illustrating the method's potential by leveraging Brutalism's theoretical legacy and its formal, prefabrication, and part-to-whole logic. The tool translates these lessons into contemporary architectural proposals, engaging emerging materialities such as CLT and exploring design within circular economy frameworks, as well as contemporary aesthetics and construction methods through the lens of AI. The results demonstrate how the method yields a custom AI tool for comprehensive architectural workflows that can be applied to multiple styles, typologies, materialities, and formal agendas, augmented with specific expert focuses. Ultimately, the research offers a customizable, multimodal AI workflow throughout a design process, cross-pollinating contemporary AI systems.
It supports the synthesis of diverse architectural inputs and outputs, enabling AI-human collaboration that is speculative, informed, and grounded in disciplinary expertise across research, teaching, and practice.
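To suggest how a curated corpus might augment a design prompt, the following Python sketch performs simple TF-IDF retrieval over a stand-in corpus before composing the prompt; this is an assumed, simplified stand-in for the paper's fine-tuning and augmentation pipeline, and the corpus snippets, query, and function names are all hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus snippets; the actual tool draws on curated architectural
# theory, precedents, building codes, and material-system data.
corpus = [
    "Brutalist part-to-whole logic favors repeated prefabricated modules.",
    "Cross-Laminated Timber panels are built up from layered, cross-wise plies.",
    "ADA requires an accessible route with adequate clear width throughout.",
    "Beton brut expresses formwork texture as the primary surface finish.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus chunks against the design question by TF-IDF cosine
    similarity and return the top-k for prompt augmentation."""
    vec = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    top = sims.argsort()[::-1][:k]
    return [corpus[i] for i in top]

question = "Propose a CLT reinterpretation of a Brutalist housing block."
context = "\n".join(retrieve(question))
prompt = (f"Context from the curated architectural corpus:\n{context}\n\n"
          f"Design task: {question}")
print(prompt)  # this augmented prompt would then be sent to the fine-tuned LLM
```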

The Misrecognition Index: Predictive Design and Vernacular Invisibility on the Southern Coast

Along the disappearing edges of the coastal United States South, Black domestic architectures have endured through climate instability, infrastructural and systemic neglect, and the relentless cycling between hypervisibility and abandonment. These homes, rebuilt, patched, and rearranged through salvage, memory, and hope, are living systems by which these spatial conditions endure. These spatial practices, often considered informal or “lo-fi,” have adapted across generations to meet the precarity of flood zones, disappearing land, and the absence of governmental support. But while these architectures persist, the predictive models that now shape design education and practice have inherited the discipline's long-standing disinterest in them. This paper investigates how dominant generative AI systems, specifically MidJourney and DALL·E, respond when prompted to visualize the spatial logic of domestic life in Black coastal communities. It focuses on southern Louisiana and adjacent geographies, where climate-responsive but non-standard forms of Black homebuilding persist. When asked to render these spaces, the models often substitute exaggerated rural vernaculars or generic domestic typologies untethered from cultural context. These are not traditional AI hallucinations, which are factually incorrect imaginings. Rather, they are acts of misrecognition: outputs shaped by the limits of the models' training data, which reflect architecture's overarching rigidity. These projections are charged and help to expose the disciplinary logics that have long privileged permanence, straightforward authorship, and visual legibility over adaptive, vernacular, and historically misclassified forms of design. To analyze these failures, this paper introduces an index that evaluates AI outputs across three dimensions: Substitution (what formal languages the system defaults to in place of vernacular references); Distortion (how spatial cues are exaggerated, misrepresented, or sanitized); and Omission (what consistently fails to appear). The generated images are then counter-referenced with archival photographs, oral histories, environmental data, and on-the-ground documentation of post-storm rebuilding practices. The paper draws theoretical support from Julia Watson's Lo-TEK: Design by Radical Indigenism, which critically reframes “low fidelity” systems as high-complexity, ecologically attuned design. Like Watson's case studies of ancestral infrastructures, the spatial knowledge embedded in Black Southern homebuilding reveals a form of intelligence unrecognized by data-driven design tools and undervalued by architecture itself. This paper is not merely about representation. It is a structural critique of how architectural tools misread the very spaces already surviving our most urgent crises. As climate change accelerates, AI systems are being tasked with imagining resilient futures. By studying what AI fabricates when it cannot recognize, this paper develops a new framework for understanding spatial intelligence: one that begins with humanity and with what we have created that the dataset has yet to acknowledge.
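One way the index's three dimensions could be encoded for batch comparison across models is sketched below in Python; the rating scale, identifiers, and aggregation are hypothetical illustrations of the data structure, not the paper's qualitative method.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MisrecognitionScore:
    """Hypothetical encoding of the index: each generated image is rated
    0-1 on the three failure dimensions named in the paper."""
    image_id: str
    substitution: float  # defaulting to foreign formal languages
    distortion: float    # exaggerated, misrepresented, or sanitized cues
    omission: float      # features that consistently fail to appear

def index(scores: list[MisrecognitionScore]) -> dict:
    """Aggregate per-dimension means across a batch of outputs so that
    model-level patterns of misrecognition become comparable."""
    return {
        "substitution": mean(s.substitution for s in scores),
        "distortion": mean(s.distortion for s in scores),
        "omission": mean(s.omission for s in scores),
    }

batch = [
    MisrecognitionScore("midjourney_001", 0.8, 0.6, 0.7),
    MisrecognitionScore("dalle_001", 0.5, 0.7, 0.9),
]
print(index(batch))
```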

This Presentation was removed

Session: Poster Exhibition

Saturday, September 27, 2025

Twisted Arch: Towards AI-Assisted Analytics for Ultra-Lightweight, Carbon Reducing Concrete

Christopher Romano, Randy Fernando & Michael Hoover, University at Buffalo, SUNY

From Asimov to AI: Five Student Laws of Technology

Edward Orlowski, Lawrence Technological University

Playing ISO

Kelly Rice, Mickhyle Dangalan & Alvaro Sanche, Florida Atlantic University

Rethinking Precedent in Design: AI as a Pedagogical Tool

Isabel Potworowski & Henry Levesque, University of Cincinnati

M. Alan Frost, Judson University

Drawing From Violette Le Duc: Embodied Metaphors of Scarred Subjectivity and AI’s Algorithmic Intuition

Edgardo Perez-Maldonado, Judson University

Ecoblox: AI + Wave Testing for Modular Tiles

Sara Pezeshk, Florida International University

An Exploration of AI-Assisted Critiques in Art and Design Courses

Hira Roberts & William Price, Prairie View A&M University

Automating MEP Design: A Two-Stage Agent-Based AI Approach Using BIM Data

Ioannis Kopsacheilis, National Technical University of Athens

Pattern-Form Finding with Artificial Intelligence and Parametric Design Tools

Darion Washington, Kean University

From Studio to Solver: Visualizing Design Outcomes from Human and Generative AI Housing Approaches

Howard Mack, Morgan State University

Quantum Poems: Superimposition, Installation and Adaptation Inside Revolutionary Drawings

Michael Erik Gamble, Georgia Institute of Technology
