Building Information Modelling (BIM) technology for Architecture, Engineering and Construction
editorial
MANAGING EDITOR
GREG CORKE greg@x3dmedia.com
CONSULTING EDITOR
MARTYN DAY martyn@x3dmedia.com
CONSULTING EDITOR
STEPHEN HOLMES stephen@x3dmedia.com
advertising
GROUP MEDIA DIRECTOR
TONY BAKSH tony@x3dmedia.com
ADVERTISING MANAGER
STEVE KING steve@x3dmedia.com
U.S. SALES & MARKETING DIRECTOR
DENISE GREAVES denise@x3dmedia.com
subscriptions
MANAGER
ALAN CLEVELAND alan@x3dmedia.com
accounts
CHARLOTTE TAIBI charlotte@x3dmedia.com
FINANCIAL CONTROLLER
SAMANTHA TODESCATO-RUTLAND sam@chalfen.com
AEC Magazine is available FREE to qualifying individuals. To ensure you receive your regular copy please register online at www.aecmag.com
about
AEC Magazine is published bi-monthly by X3DMedia Ltd 19 Leyden Street London, E1 7LE UK
T. +44 (0)20 3355 7310
F. +44 (0)20 3355 7319 © 2025 X3DMedia Ltd
Cityweft brings spatial context into CAD/BIM, Dassault re-packages Catia for AEC teams, Maxon targets architects with real-time viz, plus lots more
New technologies are emerging to transform 2D drawings into models, as well as generating drawings automatically from 3D model data
We highlight the key themes - from AI and BIM 2.0 to pioneering tech perspectives from the biggest AEC firms
How is AI shaping architectural design? Keir Regan-Alexander explores the opportunities and tensions between creativity and computation
register.aecmag.com
AI is streamlining viz workflows, pushing realism to new heights, and unlocking new creative possibilities
Powerful design tools from automotive and aerospace are coming to AEC, with competitive, sector-specific pricing
Foster + Partners, renowned for its in-house software development team, is giving away one of its custom tools for free — an environmental analysis plug-in
Snaptrude, the original BIM 2.0 player, is now adding AI discipline-centric knowledge to each phase
Despite AEC’s digital potential, start-ups still struggle. Tal Friedman outlines a strategy to break through
HP ZBook Ultra G1a / Ryzen AI
It’s not often a mobile workstation comes along that truly rewrites the rulebook, but this 14-incher does just that
Cityweft, a new web platform aimed at architects, urban designers, property developers and planners, launched this month.
The Cityweft platform enables users to create ‘high-quality’ customisable 3D models of cities and sites that can be used for early-stage design in CAD and BIM authoring tools such as SketchUp, Revit, Archicad and Rhino.
“Context modelling should be fast, accurate, and built for the way architects and AEC professionals actually work,” said Cityweft CEO Alexander Groth.
Cityweft transforms disparate real-world datasets – such as 3D buildings, terrain, and infrastructure data – into ‘unified’ CAD-editable 3D geometry. Each mesh layer is generated separately for ‘easy customisation’.
Through the web platform, users can find city models and data from around the world (e.g. OpenStreetMap, Google Open Buildings, Microsoft ML Buildings and Esri Community Buildings), preview and customise directly on the platform, and then export to Rhino, SketchUp, GLTF, OBJ, STL and IFC, or connect via API.
According to the company, in contrast to other solutions that use simple extrusion geometry, Cityweft’s advanced geometry processing and proprietary algorithms produce complex and accurate geometry including close to 20 roof types. Higher quality building models not only aid design but can help deliver more reliable sunlight analysis in tools like Autodesk Forma.
Cityweft will be presenting at NXT BLD in London on 11 June 2025.
■ www.cityweft.com
Dassault Systèmes (DS) has launched a new Catia software bundle tailored specifically for the AEC sector, offering price reductions to encourage wider adoption among AEC firms dealing with complex geometry and digital fabrication workflows. Prices start at $4,500 per user, per year.
The AEC Special Offer packages Catia’s powerful modelling and design tools with additional applications for scripting and 2D drafting. The bundle includes the full desktop version of Catia, with cloud-based data management via the 3DExperience platform. It supports simultaneous collaboration, advanced geometry, BIM capabilities, and scripting via the Visual Script Designer.
Also included is DraftSight Premium, the DWG-based 2D drafting tool, which connects to the 3DExperience cloud and supports IFC and Revit model imports for 2D documentation.
DS will be at NXT BLD and NXT DEV on 11-12 June. See page 34 for more info.
■ www.3ds.com/store/catia-for-aec
Autodesk is in the process of updating the Revit graphics engine, with the launch of Accelerated Graphics, a tech preview for Revit 2026, which comes with limited features. According to the company, with Accelerated Graphics users will experience a significant navigation performance improvement in 3D and 2D views. To make this possible Autodesk has implemented new techniques that use the GPU as efficiently as possible. According to the developers, users do not need an expensive graphics card for the Accelerated Graphics Tech Preview to work well, but having one with more memory would allow them to work more efficiently with larger models.
■ www.autodesk.com
Symetri has launched Naviate Nebula, a cloud-to-cloud solution for transferring files from multiple projects in BIM 360 and Autodesk Construction Cloud (ACC) to other BIM 360 or ACC projects. Folders and files are transferred with version history and attributes. Naviate Nebula aims to address three key cloud-to-cloud data transfer issues: regional transfer of data between local hubs in the US, EU and Australia; automating time-consuming and manual processes by setting up automated workflows and schedules; and limiting the risk of losing data or working with the wrong files, thanks to continuous backups of ongoing project collaborations.
■ www.naviate.com
Maxon is aiming to put its rendering tech into the hands of more architects and designers with a new ‘real-time’ visualisation plug-in for Vectorworks — with support for other CAD and BIM tools to follow.
“We’re giving architects a powerful yet intuitive tool that elevates their work visually, without adding complexity, so their designs can be expressed as intended with clarity and emotion,” said David McGavran, CEO of Maxon.
Users of the new rendering solution will get access to ‘intuitive controls’ seamlessly integrated within the Vectorworks BIM environment, along with smart asset libraries.
They will be able to move from real-time previews to stunning final renders with ‘seamless’ export to Cinema 4D and Redshift. According to Maxon, this will allow them to create more advanced architectural visualisations, including procedural animations and simulations.
“Our new partnership with Maxon addresses a significant challenge faced by AEC professionals: the need for a real-time rendering solution that seamlessly integrates with Vectorworks and evolves alongside it,” said Vectorworks senior director of rendering, Dave Donley.
■ www.maxon.net
Intel has unveiled the Arc Pro B50 (16 GB) and Arc Pro B60 (24 GB), two new professional desktop GPUs built on its Xe2 ‘Battlemage’ architecture, featuring Intel Xe Matrix Extensions (XMX) AI cores and ray tracing units.
Compared to the previous-generation PCIe Gen 4 ‘Alchemist’-based Arc Pro A50 (6 GB) and A60 (12 GB), the new PCIe Gen 5 GPUs offer a big performance uplift and an increase in memory. On paper, this makes the Arc Pro B50 and B60 much better equipped to handle more demanding workflows, including larger viz scenes in applications like D5 Render and Twinmotion, and AI tools such as Stable Diffusion.
The Intel Arc Pro B50 is a low-profile, dual-slot graphics card with a total board power of 70W, which makes it compatible with small form factor (SFF) and micro workstations. Priced at $299 MSRP, the Arc Pro B50 competes directly with the Nvidia RTX A1000 (8 GB) — but, as Intel highlights, it offers double the memory.
The Arc Pro B60 is a full-sized board with 120W – 200W of total board power. Learn more at aecmag.com
■ www.intel.com/arcpro
Procore is beefing up the BIM capabilities of its cloud-based construction management platform, with the acquisitions of Novorender, a viz platform for viewing huge BIM models, and FlyPaper, makers of Sherlock, a Navisworks plugin for BIM coordination.
Novorender is billed as one of the world’s fastest 3D model viewers. The streaming platform is designed to federate terabytes of BIM/GIS data in the browser, with a view to making 3D construction data available to everyone in the field.
FlyPaper is a long-time Procore partner and its technology is already used by Procore customers. Once FlyPaper is integrated, Procore customers will have access to automated 3D design coordination, clash detection, and collaboration to help improve predictability, reduce rework costs, and bolster construction site safety.
■ www.procore.com
Allplan 2025-1 includes several structural design focused workflow improvements designed to enhance integration between the BIM authoring software and Bimplus, the cloud-based collaboration platform.
The Frilo BIM-Connector is now fully integrated into Bimplus, so engineers can access 3D model data directly from Allplan. Also, with Structural Analysis Format (SAF) integration, Allplan now offers direct imports into Bimplus.
■ www.allplan.com
Gauzilla Pro is a new gaussian splatting web editor that allows for real-time and photorealistic 3D/4D rendering and editing of scenes reconstructed from phone and drone videos. The software can be used to track construction progress by creating a ‘4D time lapse’.
Founder Yoshiharu Sato will be speaking at NXT BLD / DEV on 11-12 June
■ www.gauzilla.xyz
The Nemetschek Group has announced a partnership with Google Cloud which will allow Nemetschek to use Google Cloud’s advanced AI and cloud technologies at a group level. One of Nemetschek’s BIM authoring brands, Graphisoft, already uses Google Cloud for its collaboration platform BIMcloud
■ www.nemetschek.com
Bentley Systems has introduced a new asset analytics capability that takes imagery insights from Google Maps to ‘rapidly detect and analyse’ roadway conditions. The new capability is born from its Blyncsy product offering which applies AI to crowdsourced imagery for automated highway asset detection and inspection
■ www.bentley.com
Ametek is to acquire all outstanding shares of reality modelling company Faro Technologies common stock. The transaction values Faro at approximately $920 million. The transaction is expected to be completed in the second half of 2025 ■ www.faro.com
Woolpert, a global specialist in geospatial services, has acquired Bluesky International, an aerial survey firm that largely serves the UK market, specialising in aerial imaging, LiDAR, 3D modelling, vegetation, and renewables mapping
■ www.bluesky-world.com
A new AR solution that combines Pix4D’s 3D mapping technology with underground utility mapping software ProStar PointMan helps construction and utility professionals visualise and manage buried infrastructure
■ www.pix4d.com ■ www.pointman.com
Following its beta launch last year, Chaos has released Chaos Envision, a real-time storytelling tool that helps architects and designers turn 3D models into immersive, cinematic presentations.
Chaos Envision can bring multiple 3D components into its collaborative environment. It accepts content from any application that hosts Enscape or V-Ray and can import common industry formats, so users ‘don’t have to worry about scene prep or data loss’. All lighting, materials and assets from their original CAD or Enscape design will carry over.
Users can add entourage with Chaos Cosmos, and through a direct integration with the Chaos Anima engine, drag-and-drop ‘hyper-realistic 4D people and crowds’ with AI-enhanced behaviour into scenes, and then direct their movement by assigning them a path. Paths can also be applied to other objects and cameras for more cinematic looks. Chaos Envision also supports variation-based animation to help designers depict sun studies, construction phasing or to cycle through design options.
Meanwhile, Chaos has also announced a range of ‘affordable’ industry-specific suites for architectural design, architectural visualisation, and media & entertainment.
■ www.chaos.com
Twinmotion 2025.1.1, the latest release of the real-time rendering software from Epic Games, supports Nvidia DLSS 4, a suite of neural rendering technologies that uses AI to boost 3D performance.
Epic Games shows that when DLSS 4 is enabled in Twinmotion it can render almost four times as many frames per second (FPS) as when DLSS is set to off. DLSS 4 uses a technology called Multi Frame Generation, an evolution of Single Frame Generation, which was introduced in DLSS 3.
Single Frame Generation uses the AI Tensor cores on Nvidia GPUs to interpolate one synthetic frame between every two traditionally rendered frames, improving performance by reducing the number of frames that need to be rendered by the GPU.
Multi Frame Generation extends this approach by using AI to generate up to three frames between each pair of rendered frames, further increasing frame rates. The technology is only available on Nvidia’s new Blackwell-based RTX GPUs.
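The arithmetic behind these figures is simple to sketch. The following illustration (my own back-of-envelope model, not Nvidia’s implementation) shows how the display frame rate scales with the number of AI-generated frames inserted per rendered frame:

```python
def output_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Approximate display frame rate when frame generation inserts
    `generated_per_rendered` synthetic frames per traditionally
    rendered frame."""
    return rendered_fps * (1 + generated_per_rendered)

# Single Frame Generation (DLSS 3): one synthetic frame per rendered frame
print(output_fps(30.0, 1))  # 60.0

# Multi Frame Generation (DLSS 4): up to three synthetic frames,
# roughly matching the "almost four times" uplift Epic Games reports
print(output_fps(30.0, 3))  # 120.0
```

In practice the gain is somewhat lower than the theoretical multiplier, since frame generation itself consumes GPU time.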
■ www.twinmotion.com
Lenovo has updated its AMD-based mobile workstation line-up with the launch of the ThinkPad P14s Gen 6 AMD and ThinkPad P16s Gen 4 AMD. Both laptops feature the ‘Strix Point’ AMD Ryzen AI Pro 300 Series processor with integrated AMD Radeon 890M graphics and a 50 TOPS NPU for AI workloads.
Starting at 1.39kg and 16.13mm thick, the Lenovo ThinkPad P14s Gen 6 AMD is the thinnest and lightest mobile workstation in Lenovo’s portfolio. It comes with a choice of 14-inch displays up to the 500 nit 2.8K OLED (2,880 x 1,800). Meanwhile, the Lenovo ThinkPad P16s Gen 4 AMD starts at 1.71kg and comes with a choice of 16-inch displays up to the 400 nit WQUXGA OLED (3,840 x 2,400).
Both laptops are built for CAD and BIM workloads and offer up to the AMD Ryzen AI 9 HX PRO 370 processor with 12 cores, 24 threads and a Max Boost Clock of up to 5.1 GHz. There’s support for up to 96 GB of DDR5 (5600MT/s) memory which can be dynamically allocated between the CPU and GPU.
■ www.lenovo.com
Sensat, a visualisation platform designed to bring real-world context to infrastructure projects, has formed a partnership with Transcend, a generative-design engine for water, wastewater, and power facilities.
The two companies have developed an optimised workflow designed to automate early-stage engineering in Transcend, then stream ‘highly detailed’ conceptual BIM models into Sensat for contextual review, with any feedback then pushed back into Transcend. Everything is synchronised through Autodesk Construction Cloud.
Severn Trent Water, one of the UK’s largest water and wastewater service providers, has piloted the workflow on its Westwood Brook treatment-plant project.
■ www.sensat.com ■ www.transcendinfra.com
OpenSpace, a specialist in 360° reality capture and AI-powered analytics for the construction sector, has announced OpenSpace Air as part of its core subscription.
OpenSpace Air enables construction teams to consolidate all reality data – from drones, 360° cameras, mobile phones, and laser scanners – into a ‘comprehensive visual record’ accessible from a single platform.
■ www.openspace.ai/products/air
Topcon has announced the CR-H1, a handheld reality capture solution that uses PIX4Dcatch, a software tool which runs on iPhones with integrated LiDAR scanners, and uses photogrammetry to create full-colour 3D point clouds.
The iPhone connects to Topcon’s HiPer CR receiver, enabling the application to collect georeferenced images. The receiver and iPhone are both mounted on a specialised handle designed and manufactured by Topcon, so users can capture point clouds without a tripod.
The solution is designed for use across multiple disciplines, including utilities and subsurface mapping, construction verification and earthworks, civil engineering and site verification.
■ www.topconpositioning.com
Faro Blink is a new 3D reality capture solution designed to combine visualisation with automated workflows via the Faro Sphere XG Digital Reality Platform.
“Blink is a ground-breaking innovation designed to break down the barriers to 3D data, facilitating better insights from job sites through straightforward and user-friendly workflows,” said Faro President and CEO Peter J. Lau. “By automating complex tasks and prioritising simplicity, we’ve developed a cost-effective solution that enables anyone, regardless of their expertise, to achieve professional-quality data insights.”
■ www.faro.com
In this magazine 2D often gets overlooked, despite it being the primary dimension for AEC deliverables. But, as Martyn Day reports, new technologies are emerging to transform 2D drawings into models, as well as to generate drawings automatically from 3D model data
Widespread adoption of 3D technology and the growth of BIM expertise have transformed AEC – but the most important output for any firm is still documentation — more specifically, the production of 2D drawings.
Before the arrival of BIM, CAD represented a way to accelerate workflows, a speedy alternative to manual drafting. It supported quick drawing, fast editing and some automation. Early dedicated AEC applications (such as the UK-developed AutoCAD AEC) provided still more acceleration, since AutoCAD came with a raft of blocked symbols, layering conventions, hatching and linetype styles. Now, we thought, we’re cooking on gas!
Then along came BIM, insisting we all model buildings in 3D and, as a by-product of this process, derive 2D line takeoffs from the geometry in order to fast-track the production of drawings. If we changed the design (in other words, the 3D model), the drawing would automatically update. BIM front-loaded the design system with more work and more decisions to be made, but at the same time, it helped improve understanding of a design, as well as offering renderings and analysis capabilities. It also fostered an explosion in the output of drawings.
Drawings are not going away – but there is certainly a movement focused on the mass-automation of their creation. Some believe they might be eliminated altogether, with the model becoming the deliverable instead. At AEC Magazine, we don’t think that will ever happen, given that in more advanced sectors such as automotive and aerospace, drawings are still produced, even in situations where fabrication is highly automated around models.
That said, a great deal of effort is dedicated to the automation of drawings, with companies including SWAPP, Graebert, Evolvelab (now part of Chaos) all developing automated BIM to 2D drawing tools.
Both Graebert and Evolvelab will be at NXT DEV in London on 12 June and
Graebert’s auto drawing capabilities will be demonstrated by BIM 2.0 start-up Qonic, which has integrated them into its own product – but more on that later.
But let’s not get ahead of ourselves. While it’s true that most new builds over the last four decades and more have likely been created through 2D CAD or BIM, what about all the buildings that were built before the emergence of the large digital design footprint?
These buildings are documented mostly on paper or, where archives have been digitised, as raster-scanned images. With refurbishment becoming an increasingly attractive option for building owners faced with sustainability regulations, wouldn’t it be a good idea to be able to automate BIM from 2D?
Long-held drawing standards and defined 2D symbolic representation give intelligent systems a language that can help 2D drawings be translated into a 3D model. Dimensions and markings can also assist in driving accuracy, in situations where perhaps the drawings are inaccurate, stretched or left askew following scanning.
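As a simple illustration of how an annotated dimension can drive accuracy (my own sketch, not any vendor’s method), a single dimension whose endpoints have been located in the scanned image yields a scale factor that can correct other measurements taken from a stretched or skewed scan:

```python
def scale_factor(annotated_mm: float, measured_px: float) -> float:
    """Millimetres of real-world length per image pixel, derived from
    one dimension annotation located in the scanned drawing."""
    return annotated_mm / measured_px

# A wall annotated as 4,500 mm spans 382 px in a distorted scan
mm_per_px = scale_factor(4500, 382)

# Any other feature measured in pixels can now be sized in millimetres
wall_px = 850
print(round(wall_px * mm_per_px))  # 10013
```

Using several annotations in different directions would also expose non-uniform stretching, which a single factor cannot correct.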
Several software firms are working on building a bridge between dumb 2D drawings and intelligent 3D models. This means old drawings might be rapidly repurposed as 3D models for digital twins, layouts for facilities management, or as a baseline for refurbishment.
In the last edition we looked at how new start-up Higharc has developed an AI algorithm that can take hand-drawn sketches and convert them into fully detailed BIM (www.tinyurl.com/AEC-higharc).
In the case of Higharc, the AI recognises external and internal walls, doors and windows, and their dimensions, and can identify anchored floors in its American timber frame housing expert system.
This capability was created not to show off how well AI could perform this task, but because Higharc had a real problem it needed to solve. The type of companies that Higharc typically sells to are house building firms with limited in-house digital skills.
Higharc’s powerful cloud-based 3D design system models timber frame housing at fabrication-level detail and connects to ERP systems to generate costings, based on the design the customer has chosen.
Considering its impressive capabilities, Higharc is easy to learn, even where employees aren’t that familiar with CAD or digital design tools. The system’s drawing AI enables experienced builders more comfortable with hand-sketching layouts to get involved in the digital design process. It takes dumb lines and substitutes them with intelligent walls and building components that are construction-accurate and that link to the system’s ERP. Aimed at a pretty specific target customer group, Higharc may not be a 2D-to-3D solution for the masses, but it’s an interesting representation of what is enabled when AI is applied to 2D sketches.
If you search, you will find several applications that support the conversion of 2D to 3D for BIM. There’s WiseBIM, Plans2BIM, and usBIM.planAI by ACCA.
In fact, all three systems are related, since they are all powered by the 2D-to-3D technology built by Paris-based software developer WiseBIM. This uses AI to convert 2D images to Revit and IFC objects.
In addition, a new Sweden-based startup called BIMify also uses AI and supports Revit and IFC workflows. Although BIMify claims Archicad support, this is via IFC for now, while its Revit support is much more deeply integrated with the format.
For this article we spoke with both WiseBIM and BIMify to discover what’s possible today. We also got a sneak peek of BIM software startup Qonic’s forthcoming auto drawing capability.
Conclusion (see page 16)
Last summer, the AEC Magazine web server almost reached meltdown when we ran a story on Paris-based AEC software company WiseBIM. News of the developer’s in-Revit 2D-to-3D application attracted tens of thousands of views.
In fact, WiseBIM has been leading the charge in 2D-to-3D conversion for some time, having begun its journey around 2017. Before releasing a version that works within Revit, WiseBIM promoted itself under various names — including Plans2BIM — and built its online presence across multiple platforms. Its tools were also licensed by Italian BIM software developer ACCA.
The primary use case for WiseBIM is projects that involve existing buildings, for renovation, facility management, maintenance and digital twin initiatives.
The company’s core technology relies on AI algorithms that work on pixel data from 2D plans, such as PDFs, PNGs, JPEGs, DWGs and DXFs. The software automatically identifies building elements from raster or vector input. The process involves importing the plan, setting the scale, running the AI detection for specific elements (walls, openings, slabs, roofs, columns, beams, furniture, text and so on), and then allowing users to review, edit and correct the generated model.
WiseBIM’s origins can be traced back to a patented thesis at a French research centre, initially developed for delivering thermal simulation. The company has a team of nine, who are predominantly technical.
While initial versions attempted ‘all at once’ detection, the current approach allows for more piecemeal detection of elements (walls, then openings, and so on). This has been a deliberate choice by the developers, as Tristan Garcia, co-founder and CEO of WiseBIM, explains. “It’s actually better this way. AI does make mistakes, but now users can correct these in the workflow.”
There are two flavours of the application: one that works inside Revit, and one that operates standalone as a web service (Plans2BIM). Obviously, the Revit solution is all about delivering RVT components, while the web version aims to convert 2D to standard IFC components for generic reuse. The online variant allows for the creation and assembly of multiple floors into a single building model.
In Revit, users can specify Revit families for detected elements to ensure consistency in the generated BIM model. As Garcia explained, “In our latest version, you can specify what family you are looking for, which means, because it’s using pixel-level identification, that if in your plan, the internal walls are ten centimetres thick, you can say, ‘Okay, every wall that you identify that is between nine centimetres and 11 centimetres is a ten-centimetre wall.’ This helps the AI deliver homogeneous output, and model those walls in the same family.”
WiseBIM supports multiple export formats, including IFC, DXF, and CSV/XLSX (quantity take-offs). A specific JSON format is also available for development purposes. Users can add properties (for example, materials) to building elements, such as indicating that a wall is made of concrete, and other properties, such as thermal coefficients, can also be added.
Their advice to improve the chances of a successful output is to focus on the quality of the 2D plan. This significantly impacts the accuracy of the AI detection. A minimum resolution of 100 pixels per metre is recommended. Removing noise and cropping irrelevant parts of the drawing (like legends) can also improve results. If you leave in the title block, lines around it may be identified as walls. So there is some preparation required to clean up drawings.
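The 100 pixels-per-metre guideline is easy to check before uploading. A quick back-of-envelope calculation (my own arithmetic, assuming the plan was scanned from paper at a known drawing scale):

```python
def pixels_per_metre(scan_dpi: float, plan_scale: float) -> float:
    """Real-world pixel density of a scanned drawing.
    `plan_scale` is the scale denominator (100 for a 1:100 plan)."""
    px_per_paper_metre = scan_dpi / 0.0254  # 1 inch = 0.0254 m
    return px_per_paper_metre / plan_scale

# A 1:100 floor plan scanned at 300 dpi clears the bar (about 118 px/m)
print(pixels_per_metre(300, 100) >= 100)  # True

# The same plan scanned at 200 dpi falls short (about 79 px/m)
print(pixels_per_metre(200, 100) >= 100)  # False
```

The same arithmetic shows why small-scale drawings (say 1:200) need proportionally higher scan resolutions to remain usable.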
In the translation process, certain information, such as wall height or sill height, is often not explicitly present in 2D plans. This will need to be manually set by the user during configuration.
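The family mapping Garcia describes, where detected thicknesses within a tolerance are snapped to one nominal wall family, could be sketched like this (an illustration only, not WiseBIM’s code; the family names are made up):

```python
def map_to_family(thickness_cm, families, tolerance_cm=1.0):
    """Snap a detected wall thickness to the nearest nominal family
    if it falls within the tolerance; otherwise return None so the
    wall can be flagged for manual review."""
    nominal = min(families, key=lambda t: abs(t - thickness_cm))
    if abs(nominal - thickness_cm) <= tolerance_cm:
        return families[nominal]
    return None

# Nominal thickness (cm) -> hypothetical Revit family name
families = {10.0: "Internal Partition 100mm", 20.0: "Blockwork 200mm"}

print(map_to_family(9.3, families))   # Internal Partition 100mm
print(map_to_family(10.8, families))  # Internal Partition 100mm
print(map_to_family(14.0, families))  # None
```

Binning like this is what keeps the output homogeneous: near-identical detections all land on the same family rather than producing dozens of slightly different wall types.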
Currently focused on architectural elements, the company plans to tackle the more complex task of converting structural and MEP drawings next year.
Plans2BIM is priced at €49 per month or €299 per year. The Revit add-on is $29 per month or $249 per year.
Huge interest in this technology points to significant demand for automating the conversion of existing building data into BIM models –particularly for renovation, facility management and digital twin initiatives. WiseBIM and Plans2BIM offer a compelling AI-powered solution for converting 2D plans to BIM.
While the AI’s accuracy sometimes requires user intervention, the iterative editing process and customisable features still deliver a time-saving solution over the manual alternative. There could be some rework of the standalone user interface, but it seems to work well in Revit as an add-in.
I first heard the term ‘BIMify’ used in relation to a feature in BricsCAD that converted dumb 3D geometry into intelligent BIM components. Now, we have a new service called BIMify, from a totally unrelated company, which is applying AI to convert dumb 2D drawings and point clouds into BIM models, mainly Revit.
Based in Sweden, BIMify is the brainchild of Aleksandar Balicevac and the result of five years of research and experimentation into reliable conversion technologies. As mentioned, BIMify is a service, rather than a software that you buy. It’s built on Balicevac’s AI code and a number of key Autodesk Forge (APS) components. While the BIMify website claims support for Archicad, this is currently via IFC, while Revit is native RVT.
BIMify takes a ‘factorylike’ approach, using machine learning and AI to batch process files, so that it’s possible to ‘feed in’ individual floors of a building and get a fully assembled Revit model out the other end. It’s even possible to have this model use your own family of parts. That means you can go from six dumb floor plan drawings to a fully editable RVT model in minutes.
As to the accuracy of the AI, Balicevac seems supremely confident of his system. As it’s working, the AI is checked against building rules, leading to what he calls ‘engineering intelligence’ – knowing standard wall thicknesses, for example, or the minimum areas for spaces like bathrooms, which improves the confidence level of the AI. It can recognise interior and exterior walls, doors, windows, slabs and so on.
The process begins with a user providing input data for their building, specifying the type of model needed on the platform. This includes defining the purpose-based specification (for example, for space management, reconstruction, or detailed design) and the desired level of detail (LoD).
Users describe their building (type, gross area, number of levels, level heights) and assign the input 2D files to the correct levels. They can also select the desired output format. Critically, users can specify their own Revit template and families to be used in the generated model. The platform provides an instant price quote based on the building size and specification.
Automated generation uses the company’s deep learning and AI to read the input files. Machines go through the drawing or scan to identify objects like walls, doors, windows, curtain walls, slabs, plumbing fixtures, columns, and rooms. The AI aims to determine object types and their locations (and is being developed to add increased granularity, such as single/double/triple windows).
Engineering intelligence is applied, with building rules used to validate and refine the AI output, as previously described.
Balicevac claims 100% accuracy here, which, if true, would certainly distinguish this tool from many other AI tools. He says that BIMify “avoids heuristic methods that rely on identifying spaces first, instead processing the full drawing (or parts) to identify objects directly.”
Some elements cannot currently be modelled automatically. These include stairs, railings, roofs, vertical openings and custom specification details, and require manual work by BIMify’s in-house team.
The model undergoes a standardised, semi-automated quality assurance step. This ensures the delivered model is complete and meets the specified quality standards. The manual completion and QA process also provide direct feedback to the development team to improve the automation algorithms.
Balicevac also tells us that the company has clients that are moving from other BIM systems to Revit and are using their system to convert old projects, as IFC does not give them editable Revit geometry. This means taking 2D out to get 3D native RVT files, which is an extremely interesting workflow.
BIMify supports various input formats – DWG, DXF, point cloud, PDF, image – and can output in RVT or IFC. While focusing on architecture currently, the company plans to expand into other disciplines and its team is developing features for model maintenance and seamless integration with other industry systems.
Balicevac says he is primarily focusing on Europe to start, where he estimates there are over 20-25 billion square metres of buildings, of which some 95% were built pre-BIM and will need digitising sooner or later.
We find that most AI developers are very protective of their ‘secret sauce’. They like to keep quiet about how they achieve what they do, and BIMify is more tight-lipped than most.
However, the company is clearly using Autodesk APS components to build these native Revit models with real-world families, such as the Forge viewer, data exchange and, I’m guessing, Revit.io (a headless Revit in the cloud). This is probably reassuring to Revit users, but may make it hard for BIMify to tackle native Archicad, as there is no equivalent to Revit.io.
BIMify makes big claims about output accuracy. Talking with Balicevac, you get the sense that he really knows what he’s doing with AI and ML. He’s put code in place to keep the AI in check with real-world engineering constraints. Autodesk’s approach with Forge is certainly advantageous here, but additional formats will be harder to achieve. Either way, BIMify is certainly an interesting firm to follow.
In 2024, Ghent-based developer Qonic launched its BIM 2.0 platform. On the face of it, the first iteration is a cloud-based common data environment (CDE), capable of handling massive BIM models that far surpass Revit’s loading capability. It offers a really simple interface for filtering and interrogating BIM data, with intuitive sectioning and a frame rate that approaches computer-game level. Underneath, there’s a solid modelling engine that supports highly accurate editing of geometry and is aimed at the junction where architectural BIM meets construction BIM. In short, this was the starting point of what looks set to be a rapid and exciting adventure in software development.
Qonic is beefing up its platform. For the purpose of this article, we are going to focus on just one of the new introductions - and that’s the long-promised Autodrawings function.
When the technology was shown to me, I got the message that while the AEC industry generally recognises the necessity of drawings, executives at Qonic feel that their importance may dwindle in the future as model-based workflows become more prevalent, not to mention more accessible to a broader audience.
While I tend to agree with this idea in theory, I know from experience that it’s not always the case in practice. Our sister publication, DEVELOP3D, has plenty of readers that use Catia, Solidworks, Siemens NX and Inventor, for example, and even when they manufacture parts directly from CAD models, the production of drawings is still mandated for them. But that’s a discussion for another time.
With this goal in mind, Qonic has licensed Graebert’s Kudo DWG technology, which now comes with some autodrawing capability. It’s taking this base layer and building a powerful integration so that Qonic can ingest huge, multi-disciplinary models and quickly output 2D general assembly drawings.
In Qonic, the drawing generation process utilises the rich data and structure within the Qonic 3D model – derived from IFC or enriched from RVT – to produce intelligent and well-annotated 2D outputs. Graebert’s server-side automation and browser-based viewing of drawings, using its Kudo technology, is key to this new, combined cloud functionality.
The automated generation works a bit like Hypermodels in Bentley iModel, with drawings generated automatically from defined section planes and templates, with the potential for scheduling this process (for example, it might take the form of nightly updates). The output is fully vectorised, allowing for export to formats including DWG and PDF. Drawings can be viewed directly within the Qonic environment, with basic annotation supported, such as adding dimensions and moving tags. A mechanism to flag outdated annotations following model updates is planned for the future.
Qonic’s drawing generation is primarily a one-directional output. Changes to dimensions, tags or other annotations must be made in the 3D model, which then triggers an update to the drawing. Qonic has no intention to create a full 2D editor that directly modifies the 3D model.
Many firms struggle to get BIM models exactly how they want them, and resort to ‘fixing it in the drawing’. Here, Qonic’s performance enables users to fix inaccuracies in the model itself, with automation tools for propagating changes, rather than relying on manual fixes in 2D drawings. Initially, the focus is on producing general arrangement (GA) drawings, with the goal of extending the capability to more detailed construction and design delivery drawings in the future. It’s also worth pointing out that many firms have their own internal visual styles for general arrangement drawings. Qonic will enable configurations to cater to most firms’ visual tastes for wall styles, openings and other content.
The drawing functionality will likely form part of the paid subscription options and not appear in the free version of Qonic. Pricing is not expected to be based on tokenisation or usage, although these pricing structures are used by some competitors.
Drawing generation capabilities are now in active development. The company already has working prototypes and is hoping to release the software later this year. The Qonic team will appear on the main stage at NXT BLD (London, 11 June) demonstrating this new functionality and will also be offering attendees a chilled-out exhibition space in which to relax and talk to the team.
Conclusion
While many people are still waiting for new 3D BIM tools, it’s clear that development work is increasing our options when it comes to what capabilities we can buy and how we will work in future. Drawing capabilities are a big part of that picture.
The companies developing 2D-to-3D capabilities are heavily focused on using machine learning and AI to perform this task, together with rules and configurations. It’s our understanding that Graebert’s technology is predominantly linear processing without any AI, but in the future, we fully expect to see some machine learning used in the production of autodrawings, as they scale from general arrangement (GA) to Level 400. The key takeaway here is that dumb 2D drawings won’t stay dumb for long. If a drawing is an accurate plan, then there are simple ways to digitise that information and convert it to 3D. As for autodrawings, it’s clear that this June at NXT BLD, there will be real productivity benefits to witness first-hand.
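The core idea behind generating drawings from defined section planes can be sketched simply. This is not Qonic’s or Graebert’s implementation — real engines cut arbitrary b-rep solids and then vectorise, tag and dimension the result — but a minimal illustration, assuming walls modelled as axis-aligned boxes and a horizontal cut plane:

```python
# Illustrative sketch (not Qonic's actual method): producing a 2D plan
# outline by cutting axis-aligned wall solids with a horizontal section plane.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned wall solid, from (min_x, min_y, min_z) to (max_x, max_y, max_z)."""
    min_pt: tuple
    max_pt: tuple

def section_at(boxes, z):
    """Return the 2D rectangles where the plane z = const cuts each solid."""
    rects = []
    for b in boxes:
        if b.min_pt[2] <= z <= b.max_pt[2]:
            # The cut through an axis-aligned box is its plan footprint
            rects.append(((b.min_pt[0], b.min_pt[1]), (b.max_pt[0], b.max_pt[1])))
    return rects

# Two walls, 3 m high: one along X, one along Y
walls = [Box((0, 0, 0), (5, 0.2, 3)), Box((0, 0, 0), (0.2, 4, 3))]
plan = section_at(walls, z=1.2)  # a typical cut height for a GA plan
```

Everything downstream of this cut — line weights, hatching, tags, dimensions and templated sheet layouts — is where the real engineering effort in autodrawing tools lies.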
Architects and designers have long embraced technology to work smarter, faster, and more creatively. Since its introduction to the industry, AI has emerged as a powerful force, reshaping how teams optimise processes, brainstorm ideas, and deliver results. This article explores AI’s impact on architecture, how it transforms processes, and the new workflows it has created.
AI’s impact on architecture
AI’s significance in architecture is thanks to its ability to process large amounts of data, automate time-consuming tasks, help with quick design exploration, and optimise image quality. It’s useful across various stages of the architectural design process, from early-stage planning and conceptualisation to construction.
According to a survey conducted by Architizer and Chaos on the state of architectural visualisation in 2025, excitement around AI experimentation is up by 20% compared to 2024. Larger firms are more enthusiastic about it; however, smaller firms and freelancers are also finding their own creative approaches.
The same survey reveals that people’s use of AI differs depending on the kind of firm they work for. For instance, large and small firms are using AI to help produce traditional, still renderings quickly and more economically, whereas medium-sized firms are using AI to experiment and discover new creative processes.
The benefits of implementing AI
Increased efficiency: One major way AI is transforming architectural workflows is by accelerating time-consuming tasks. AI tools can handle these processes with impressive efficiency, from generating preliminary floor plans to evaluating structural feasibility and refining spatial organisation.
Enhanced creativity: Generative AI technologies push the boundaries of traditional architectural design. The mix of human creativity and machine intelligence unlocks the potential for new ideas and aesthetics that are unique and functional. It also introduces architects and designers to creative solutions they may not have considered independently. With a single prompt, you can quickly explore different forms, layouts, and styles.
Visualisation quality: AI tools can sharpen and upscale images, often without sacrificing performance. From reducing noise in renders to simulating atmospheric lighting, you can have realistic and presentation-ready images in no time. This level of quality can help communicate technical insight with emotional clarity, ensuring clients and stakeholders understand the vision you are trying to convey.
Better client communication: AI-powered tools can facilitate transparent and effective communication between you and your clients by offering more immersive, comprehensive ways to present ideas. They can also interpret vague client feedback and translate it into actionable design suggestions, helping align expectations early in the process and reducing costly revisions later on.
Chaos Veras is an AI-powered visualisation app for leading modelling tools like Revit and SketchUp. It uses your 3D model geometry as a substrate for creativity and inspiration, making it ideal for ideation and pre-design.
Ideation and pre-design are phases where fast decision-making is crucial, and design quality cannot be sacrificed. Veras helps minimise the gap during these stages by allowing you to generate fast AI-rendered concepts that you can share with colleagues and clients. This makes early design feedback quicker and easier.
The Chaos AI Enhancer allows you to elevate the quality of your visualisations within Chaos Enscape, an industry-leading real-time rendering and VR plug-in. It leverages AI to improve people and vegetation assets, enhancing the realism of your visualisations.
Realistic assets bring a project to life, and the Chaos AI Enhancer allows you to export better-looking assets straight from Enscape. As it’s a feature and not a third-party solution, using it means you don’t have to leave your design application, giving you a streamlined visualisation workflow.
New ways to design and visualise
Enscape and Veras are often used together for an enhanced architectural visualisation workflow. This combination gives you a more robust workflow and eliminates the need for multiple disconnected tools.
Users benefit from faster iteration cycles. The integrated workflow lets you explore a number of different design ideas without locking anything into your BIM/CAD model too early. It also offers a richer canvas for enhancement, which allows you to produce more realistic and immersive output.
Hanns-Jochen Weyland of Störmer Murphy and Partners, an award-winning architectural practice based in Hamburg, Germany, shares, “Over a year ago, we began exploring AI tools to speed up our workflows and were excited to discover Veras, a solution specifically designed for AEC that seamlessly integrates with host platforms. Veras is now our go-to for initial ideation before transitioning to renderings in Enscape. This powerful combination accelerates concept development and ensures reliable outcomes.”
Supercharge your designs with Chaos
AI is shaping new ways to work within architecture. By streamlining the design process, it contributes to more polished, functional, and cost-effective solutions, allowing architects and designers to redirect their energy toward designing and solving complex challenges.
Chaos is a global leader in design and visualisation technology, providing world-class visualisation solutions that help you share ideas, optimise workflows, and create immersive experiences.
Visit www.Chaos.com to learn more
With NXT BLD and NXT DEV fast approaching, we put the spotlight on the key themes of our two-day event—from AI and BIM 2.0 to pioneering technology perspectives from some of the biggest firms in AEC
A core element of NXT BLD is the insight shared by leading AEC firms and heads of design technology, who present the real-world workflows and innovations shaping their firms. This year’s line-up includes industry heavyweights from Heatherwick Studio, Perkins&Will, Foster + Partners, HOK, Bouygues, and Buro Happold — with more high-profile names to be announced very soon.
At NXT DEV, many of these same design technology leaders return to help steer the conversation—offering guidance, feedback, and insight to ensure that the next generation of tools directly address the challenges and needs faced by today’s AEC professionals.
Artificial Intelligence (AI) and machine learning are now impossible to ignore. The pace of advancement is exponential—what seemed like a distant possibility just a year ago has already been surpassed.
Soon, every firm will have access to AI tools as capable as the world’s best software developers, profoundly impacting how software is created and how AEC firms interact with their data.
This month’s cover feature explores the automation of 2D—from converting 2D drawings to 3D models to generating automated documentation directly from BIM (see page 12). At NXT BLD and NXT DEV, you’ll see live demonstrations of new tools from EvolveLAB (now part of Chaos), Qonic, and Graebert, showcasing both current capabilities and upcoming features. As these technologies mature, they promise significant productivity gains by reducing the time spent on manual drawing production. They also open the door to more efficient workflows, with fewer high-cost software licences required for documentation tasks.
At NXT BLD, expect thought-provoking presentations from leading voices in this space, including Martha Tsigkari, Head of Applied R+D at Foster + Partners; Keir Regan-Alexander of Arka.Works (see page 26); and Sean Young of Nvidia. You’ll also get a first look at cutting-edge AI-powered tools shaping the future of design and AEC collaboration, including Finch3D, Consigli and Tektome.
Meanwhile, NXT DEV will take the conversation even further, hosting a deep-dive panel discussion with AI experts to explore the real-world implications of these technologies across the AEC landscape. Nvidia will also explore how the AEC industry is rapidly shifting from AI curiosity to AI action.
Investing in software and infrastructure has always been key to driving business value—but with the rise of AI, data lakes, open systems, and productivity tools like autonomous drawings, the return on investment is becoming even more tangible. NXT BLD and NXT DEV offer a unique opportunity to gain insight into these developments, connect with innovators, and see real-world solutions that can directly impact your bottom line.
One standout session will focus on how to negotiate effectively with software vendors and resellers, offering a behind-the-scenes look at how pricing models, metrics, and deal structures actually work.
AEC Magazine has long championed the evolution of next-generation BIM tools—alternatives to the traditional, file-based desktop modelling systems that have dominated the industry for decades. This year marks a milestone: for the first time, many of the innovators we’ve followed from early development now have real products in the market.
At NXT BLD, the main stage will host a back-to-back showcase of these groundbreaking platforms: Arcol, Qonic, Motif, Snaptrude, and Hypar — all demonstrating new capabilities and sharing their visions for the future of BIM.
Building a full-featured competitor to something as mature as Revit, now 25 years old, is no small feat. These tools are evolving in phases—some still focused on early-stage or conceptual modelling—but the goal is clear: to deliver comprehensive BIM 2.0 platforms, designed for cloud-native workflows, real-time collaboration, and entirely new ways of working.
Alongside a global lineup of startups, there’ll be presentations from major AEC software developers including Autodesk, Bentley Systems, and Graphisoft. Each will showcase new technologies and never-before-seen applications. The energy and innovation driven by emerging startups is clearly influencing the broader industry, inspiring established vendors to accelerate their own development and push the boundaries of what’s possible in AEC.
Capturing buildings and terrain once meant hiring a surveyor with a total station—but the landscape has changed dramatically. Technologies like LiDAR and photogrammetry paved the way, and now cutting-edge methods such as NeRFs and Gaussian Splatting are pushing accuracy and realism even further. Add to that the rapid evolution of mobile phones, SLAM scanners, drones, and advanced reality capture software, and it’s clear we’re entering a new era of digital site capture.
NXT BLD will feature speakers from across the industry, while at NXT DEV, don’t miss a dedicated panel session hosted by Robert Klaschka, where the future of reality capture will take centre stage.
Geospatial & city context
GIS and BIM have long been on a path toward convergence—and that future is finally taking shape. Thanks to new tools, growing industry collaboration, and opensource initiatives, integrating GIS data into everyday workflows is becoming faster, easier, and more accessible than ever. Advances in game engine technology, hardware, and graphics now make it possible to explore highly detailed, cityscale models in real time, while AI is unlocking powerful new ways to analyse environmental conditions.
At NXT BLD, a special group session hosted by Esri will spotlight the latest innovations in this space, along with exciting new startups helping to shape the future of geoBIM.
Open source & data lakes
A recurring theme at NXT BLD has been the push for industry-owned open data lakes. AEC firms increasingly want full ownership of their project data—without being locked into paying software vendors to access and manage it. This issue has become even more urgent with the rise of AI, as firms recognise the immense value their data holds for software companies.
This year at NXT BLD / DEV, expect major announcements on new developments in this space—advancing the vision of open, accessible, and AI-ready project data.
Digital transformation in AEC goes far beyond modelling and documentation. The real opportunity lies in connecting design systems with modern, digitised manufacturing—enabling seamless machine-to-machine communication. Achieving this demands a fundamental rethink of current processes. Increasingly, firms are turning to modular construction and kit-of-parts strategies to drive efficiency and scalability in the next generation of buildings and infrastructure.
Don’t miss an exclusive NXT DEV panel discussion exploring the current state of this rapidly evolving space and what it means for the future of design and delivery.
AEC workflows are evolving at pace, making high-performance workstations more important than ever. We’ll explore how firms can best prepare for the rising demands of AI, visualisation, reality modelling, and other compute-intensive workflows.
A thought-provoking panel of experts at NXT DEV will tackle a variety of topics including AI and sustainability. With AI still in its early stages of adoption, firms must consider how to scale their compute infrastructure — balancing desktop, mobile, datacentre, and cloud. At the same time, rising performance demands bring increased energy consumption. How can firms unlock the full potential of AI while keeping sustainability in focus?
Meanwhile, at NXT BLD, Lenovo’s Mike Leach will offer insight in his presentation, ‘Navigating AI in AECO: balancing visionary potential with real-world practice’, while delegates can get hands-on with Lenovo’s latest Intel and Nvidia-powered workstations built for tomorrow’s AEC challenges.
The exhibition brings together the very latest technologies for AEC — think browser-based BIM, advanced AI, spatial data integration, visualisation and nextgen collaborative platforms.
You’ll also find brand new workstation hardware from Lenovo, Nvidia, and Intel, built to power the most demanding AEC workflows, including AI.
It’s a rich mix of rising stars and industry veterans — each helping to redefine what’s possible in AEC, and driving our industry into exciting new digital territory.
The BIM 2.0 startups will be out in force and for the first time ever you’ll be able to meet Snaptrude, Motif, Qonic and Arcol under one roof —offering hands-on demos of their commercial platforms and fresh approaches to digital design and collaboration.
Innovation from established players will be equally strong. Expect major updates from Autodesk, Graphisoft, and Esri, as well as a brand-new suite of AEC-focused software bundles from Dassault Systèmes, the developer of Catia (see page 34).
And that’s just the beginning. Check out the full exhibitor line up below.
NXT BLD
■ www.nxtbld.com/sponsors-2025
NXT DEV
■ www.nxtdev.build/sponsors-2025
As AI tools rapidly evolve, how are they shaping the culture of architectural design? Keir Regan-Alexander, director of Arka.Works, explores the opportunities and tensions at the intersection of creativity and computation — challenging architects to rethink what it means to truly design in the age of AI
An awful lot has been happening recently in the AI image space, and I’ve written and rewritten this article about three times to try and account for everything. Every time I think it’s done, there seems to be another release that moves the needle. That’s why this article is in two parts; first I want to look at recent changes from Gemini and GPT-4o and then, in the July / August edition, take a deeper dive into Midjourney V7 and give a sense of how architects are using these models.
I’ll start by describing all the developments and conclude by speculating on what I think it means for the culture of design.
Right off the bat, let’s look at exactly what we’re talking about here. In figure 1 you’ll see a conceptual image for a modern kitchen, all in black. This was created with a text prompt in Midjourney. After that, I put the image into Gemini 2.0 (inside Google AI Studio) and asked it:
“Without changing the time of day or aspect ratio, with elegant lighting design, subtly turn the lights (to a low level) on in this image - the pendant lights and strip lights over the counter.”
Why is this extraordinary?
Well, there is no 3D model, for a start. But look closer at the light sources and shadows. The model knew exactly where to place the lights. It knows the difference between a pendant light and a strip light and how each diffuses light. It knows where to cast the multi-directional shadows, and that the material textures of each surface would have diffuse, reflective or caustic illumination qualities.
Here’s another one (see figure 2). This time I’m using GPT-4o in Image Mode.
“Create an image of an architectural sample board based on the building facade design in this image”
Why is this one extraordinary?
Again, there is no 3D model and, with only a couple of minor exceptions, the architectural language of specific ornamentation, materials, colours and proportion has all been very well understood. The image is also (in my opinion) very charming. During the early stages of design projects, I have always enjoyed looking at the local “Architectural Taxonomy” of buildings in context, and this is a great way of representing it. If someone in my team had made these images in practice, I would have been delighted and happy for them to be included in my presentations and reports without further amendment.
A radical redistribution of skills
There is a lot of hype around AI, which can be tiresome, and I always want to be relatively sober in my outlook and to avoid hyperbole.
You will probably have seen your social media feeds fill with depictions of influencers as superhero toys in plastic wrappers, or maybe you’ve observed a sudden improvement in someone’s graphic design skills and surprisingly judicious use of fonts and infographics … that’s all GPT-4o Image Mode at work.
So, despite the frenzy of noise, the surges of insensitivity towards creatives and the abundance of Studio Ghibli IP infringement surrounding this release - in case it needs saying just one more time - in the most conservative of terms, this is indeed a big deal.
The first time you get a response from these new models that far exceeds your expectations, it will shock you and you will be filled with a genuine sense of wonder. I imagine the reaction feels similar to that of the first people to see a photograph in the early 19th century - it must have seemed genuinely miraculous and inexplicable. You feel the awe and wonder, then you walk away and you start to think about what it means for creators, for design methods … for your craft … and you get a sinking feeling in your stomach. For a couple of weeks after trying these new models for the first time, I had a lingering feeling of sadness with a bit of fear mixed in.
I think this feeling was my brain finally registering the hammer dropping on a long-held hunch: that we are in an entirely new industry whether we like it or not, and even if we wanted to return to the world of creative work before AI, it is impossible. Yes, we can opt to continue to do things however we choose, but this new method now exists in the world and it can’t be put back in the box.
Again, we may not like to think about this from the perspective of having spent years honing our craft, yet the new reality is right in front of us and it’s not going anywhere. These new capabilities from image models can only lead to a permanent change in the working relationship between the commissioning client and the creative designer, because the means of graphical and image production have been completely reconfigured. In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee.
I’ll return to this internal conflict again in my conclusion. If we set aside the emotional reaction for a moment, the early testing I’ve been doing in applying these models to architectural tasks suggests that, in both cases, the latest OpenAI and Google releases could prove to be “epoch-defining” moments for architects and for all kinds of creatives who work in the image and video domains.
This is because the method of production and the user experience are so profoundly simple and easy compared to existing practices that the barrier to image production in many, many realms has now come right down.
‘‘ These techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do ’’
What has not become distributed (yet) is wise judgement, deep experience in delivery, good taste, entirely new aesthetic ideas, emotional human insight, vivid communication and political diplomacy; all attributes that come with being a true expert and practitioner in any creative and professional realm.
These are qualities that for now remain inalienable, and they should give a hint at where we have to focus our energies in order to ensure we can continue to deliver our highest value for our patrons, whomever they may be. For better or worse, soon they will have the option to try and do things without us.
For a while, attempting to produce or edit images within chat apps produced only sub-standard results. The likes of Dall-E, which could be accessed only within otherwise text-based applications, had really fallen behind and were producing ‘instantly AI-identifiable’ images that felt generic and cheesy. Anything that is so obviously AI-created (and low quality) means that we instantly attribute a low value to it.
As a result, I was seeing designers flock instead to more sophisticated options like Midjourney v6.1 and Stable Diffusion SDXL or Flux, where we can be very particular about the level of control and styling and where the results are often either indistinguishable from reality or indistinguishable from human creations. In the last couple of months that dynamic has been turned upside down; people can now achieve excellent imagery and edits directly with the chat-based apps again.
The methods that have come before, such as MJ, SD and Flux, are still remarkable and highly applicable to practice, but they all require a fair amount of technical nous to get consistent and repeatable results. I have found through my advisory work with practices that having a technical solution isn’t what matters most; it’s having it packaged up and made enjoyable enough to use that it’s able to change rigid habits.
A lesser tool with a great UX will beat a more sophisticated tool with a bad UX every time.
These more specialised AI image methods aren’t going away, and they still represent the most ‘configurable’ option, but text-based image editing is a format that anyone with a keyboard can do, and it is absurdly simple to perform.
More often than not, I’m finding the results are excellent and suitable for immediate use in project settings. If we take this idea further, we should also assume that our clients will soon be putting our images into these models themselves and asking for their ideas to be expressed on top…
We might soon hear our clients saying; “Try this with another storey”, “Try this but in a more traditional style”, “Try this but with rainscreen fibre cement cladding”, “Try this but with a cafe on the ground floor and move the entrance to the right”, “Try this but move the windows and make that one smaller”...
You get the picture.
Again, whether we like this idea or not (and I know architects will shudder even thinking of this), when our clients receive the results back from the model, they are likely to be similarly impressed with themselves, and this can only lead to a change in briefing methods and working dynamics on projects.
To give a sense of what I mean exactly, in figure 3 over the page I’ve included an example of a new process we’re starting to see emerge whereby a 2D plan can be roughly translated into a 3D image using 4o in Image Mode. This process is definitely not easy to get right consistently (the model often makes errors) and also involves several prompting steps and a fair amount of nuance in technique. So far, I have also needed to follow up with manual edits.
Despite those caveats, we can assume that in the coming months the models will solve these friction points too. I saw this idea first validated by Amir Hossein Noori (co-founder of the AI Hub) and while I’ve managed to roughly reproduce his process, he gets full credit for working it out and explaining the steps to me - suffice to say it’s not as simple as it first appears!
Conclusion: the big leveller
1. Client briefing will change
My first key conclusion from the last month is that these techniques are so accessible in nature that we should expect to see our clients briefing us with ever-more visual material. We therefore need to not be afraid or shocked when they do.
I don’t expect this shift to happen overnight, and I also don’t think all clients will necessarily want to work in this way, but over time it’s reasonable to expect this to become much more prevalent and this would be particularly the case for clients who are already inclined to make sweeping aesthetic changes when briefing on projects.
Takeaway: As clients decide they can exercise greater design control through image editing, we need to be clearer than ever on how our specialisms are differentiated and to be able to better explain how our value proposition sets us apart. We should be asking: what are the really hard and domain-specific niches that we can lean into?
2. Complex techniques will be accessible to all
Next, we need to reconsider technical hurdles as a 'defensive moat' for our work. The most noticeable trend of the last couple of years is that the things that appear profoundly complicated at first often go on to become much simpler to execute later. As an example, a few months ago we had to use ComfyUI (a complex node-based interface for using Stable Diffusion) for 're-lighting' imagery. This method remains optimal for control, but now, in many situations, we could just make a text request and let the model work out how to solve it directly. Let's extrapolate that trend and assume, as a generalisation, that the harder things we do will gradually become easier for others to replicate.
Muscle memory is also a real thing in the workplace; it's often so much easier to revert to the way we've done things in the past. People will say: "Sure, it might be better or faster with AI, but it also might not - so I'll just stick with my current method". This is exactly the challenge that I see everywhere, and the people who make progress are the ones who insist on proactively adapting their methods and systems.
The major challenge I observe for organisations through my advisory work is that behavioural adjustments to working methods when you're under stress or a deadline are the real bottleneck. The idea here is that while a 'technical solution' may exist, change will only occur when people are willing to do something in a new way. I do a lot of work now on "applied AI implementation" and engagement across practice types and scales. I see again and again that there are pockets of technical innovation and skill among certain team members, but that these are not being translated into actual changes in the way people work across the broader organisation. This is partly about access to suitable training, but also about a lack of awareness that improving working methods is much more about behavioural incentives than 'technical solutions'.
"In a radical act of forced redistribution, access to sophisticated skill sets is now being packaged up by the AI companies for anyone who pays the licence fee"
There is an abundance of new groundbreaking technology now available to practices, maybe even too much - we could be busy for a decade with the inventions of the last couple of years alone. But in the next period, the real difference maker will not be technical, it will be behavioural. How willing are you to adapt the way you’re working and try new things? How curious is your team? Are they being given permission to experiment? This could prove a liability for larger practices and make smaller, more nimble practices more competitive.
Takeaway: Behavioural change is the biggest hurdle. As the technical skills needed for the 'means of creative production' become more accessible to all, the challenge for practices in the coming years may not be about technical solutions so much as their willingness and ability to adjust behaviour and culture. The teams who succeed won't be those with the most technically accomplished solutions; more likely, they will be those who achieve the most widespread and practical adaptations of their working systems.
3. Shifting culture of creativity
I've seen a whole spectrum of reactions to Google and OpenAI's latest releases, and I think it's likely that these new techniques are causing many designers a huge amount of stress as they consider the likely impacts on their work. I have felt the same apprehension many times too. I know that a number of 'crisis meetings' have taken place in creative agencies, for example, and it is hard for me to see these model releases as anything other than a direct threat to at least a portion of their scope of creative work.
This is happening to all industries, not least computer science - after all, LLMs can write exceptional code too. From my perspective, it's certainly coming for architecture as well, and if we are to maintain the architect's central role in design and placemaking, we need to shift our thinking and current approach, or our moat will gradually be eroded too.
The relentless progression of AI technology cares little about our personal career goals and business plans, and when we consider the sense of inevitability of it all, I'm left with a strong feeling that the best strategy is actually to run towards the opportunities that change brings, even if that means feeling uncomfortable at first.
Among the many posts I've seen celebrating recent developments from thought leaders and influencers seeking attention and engagement, I can see a cynical thread emerging ... of (mostly) tech and sales people patting themselves on the back for having "solved art".
The posts I really can't stand are the cavalier ones that seem to rejoice at the idea of not needing creative work anymore, salivating at the budget savings they will make ... they seem to think you can just order "creative output" off a menu, and that these new image models are a cure for some long-held frustration towards creative people.
Takeaway: The model "output" is indeed extraordinarily accomplished and quickly produced, but creative work is not something that is "solvable"; it either moves you or it doesn't, and design is similar - we try to explain objectively what great design quality is, but it's hard. Certainly it fits the brief - yes - but the intangible and emotional reasons are more powerful and harder to explain. We know it when we see it.
We mistake the mastery of a particular technique for creativity and originality, but the thing about art is that it comes from humans who've experienced the world, felt the emotional impulse to share an authentic insight, and cared enough to express themselves in various mediums. Creativity means making something that didn't exist before.
While AIs can exhibit synthetic versions of our feelings, for now they represent an abstracted shadow of humanness - a useful imitation, for sure, and I see widespread applications in practice, but in the creative realm I think it's unlikely to nourish us in the long term. The next wave of models may begin to 'break rules' and explore entirely new problem spaces, and when they do, I will have to reconsider this perspective.
That essential impulse, the genesis, the inalienably human insight and direction, is still, for me, everything. As we see AI creep into more and more creative realms (like architecture), we need to be much more strategic about how we value the specifically human parts, and for me that means ceasing to sell our time and instead learning to sell our value.
In the next edition I will be looking in depth at Midjourney and how it's being used in practice, and specifically at the latest release (V7) in more detail. Until then, thanks for reading.
Tudor Vasiliu, founder of architectural visualisation studio Panoptikon, explores the role of AI in arch viz, streamlining workflows, pushing realism to new heights, and unlocking new creative possibilities without compromising artistic integrity
AI is transforming industries across the globe, and architectural visualisation (let’s call it
‘Arch Viz’) is no exception. Today, generative AI tools play an increasingly important role in an arch viz workflow, empowering creativity and efficiency while maintaining the precision and quality expected in high-end visuals.
In this piece I will share my experience and best practices for how AI is actively shaping arch viz by enhancing workflow efficiency, empowering creativity, and setting new industry standards.
Streamlining workflows with AI
AI, we dare say, has proven not to be a bubble or a passing trend, but a genuine driver of productivity and a booster of creativity. Our team at Panoptikon, like others in the industry, leverages generative AI tools to the full to streamline processes and deliver higher-quality results.
Tools like Stable Diffusion, Midjourney and Krea.ai transform initial design ideas or sketches into refined visual concepts. Platforms like Runway, Sora, Kling, Hailuo or Luma can do the same for video. With these platforms, designers can enter descriptive prompts or reference images, generating early-stage images or
videos that help define a project’s look and feel without lengthy production times.
This capability is especially valuable for client pitches and brainstorming sessions, where generating multiple iterations is critical. Animating a still image is possible with the tools above just by entering a descriptive prompt, or by manipulating the camera in Runway.ml.
Sometimes, clients find themselves under pressure due to tight deadlines or external factors, while studios may also be fully booked or working within constrained timelines. To address these challenges, AI offers a solution for generating quick concept images and mood boards, which can speed up the initial stages of the visualisation process.
In these situations, AI tools provide a valuable shortcut by creating reference images that capture the mood, style, and thematic direction for the project. These AI-generated visuals serve as preliminary guides for client discussions, establishing a strong visual foundation without requiring extensive manual design work upfront.
Although these initial images aren’t typically production-ready, they enable both the client and visualisation team to align quickly on the project’s direction.
Once the visual direction is confirmed,
the team shifts to standard production techniques to create the final, high-resolution images that accurately showcase the full range of technical specifications that define the design. While AI expedites the initial phase, the final output meets the high-quality standards expected for client presentations.
For projects that require multiple lighting or seasonal scenarios, Stable Diffusion, LookX or Project Dream allow arch viz artists to produce adaptable visuals by quickly applying lighting changes (morning, afternoon, evening) or weather effects (sunny, cloudy, rainy).
Additionally, AI’s ability to simulate seasonal shifts allows us to show a park, for example, lush and green in summer, warmtoned in autumn, and snow-covered in winter. These adjustments make client presentations more immersive and relatable.
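For studios that script this kind of batch variation, the lighting and seasonal permutations are easy to generate programmatically before they are sent to an image model. A minimal sketch (the base prompt and scenario lists here are illustrative only, not taken from any particular tool):

```python
from itertools import product

def build_variant_prompts(base_prompt, lightings, weathers):
    """Expand one base scene description into every lighting/weather
    combination, ready to submit to an image-generation model."""
    return [
        f"{base_prompt}, {light} light, {weather} weather"
        for light, weather in product(lightings, weathers)
    ]

prompts = build_variant_prompts(
    "aerial view of a public park with mature trees",
    lightings=["morning", "afternoon", "evening"],
    weathers=["sunny", "cloudy", "rainy"],
)
print(len(prompts))  # 3 lightings x 3 weathers = 9 variants
```

Each resulting string can then be fed to whichever generator the studio uses, keeping every variant consistent with the same base description.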
Adding realism through texture and detail
AI tools can also enhance the realism of 3D renders. By specifying material qualities through prompts or reference images in Stable Diffusion, Magnific and Krea, artists can quickly improve materials like wood, concrete and stone, as well as greenery and people.
The tools add nuanced details like weathering to any surface or generate intricate enhancements that may be challenging to achieve through traditional rendering alone. The visuals become more engaging and give clients a richer sense of the project’s authenticity and realistic quality.
This step may not replace traditional rendering or post-production but serves as a complementary process to the overall aesthetic, bringing the image closer to the level of photorealism clients expect.
Bridging efficiency and artistic quality
While AI provides speed and efficiency, human expertise remains essential for technical precision. AI handles repetitive tasks, but designers need to review and refine each output so that the visuals meet the exact technical specifications set out in each project's design brief.
Challenges and considerations
It is essential to approach the use of AI with awareness of its limitations and ethical considerations.
Maintaining quality and consistency: AI-generated images sometimes contain
inconsistencies or unrealistic elements, especially in complex scenes. These outputs require human refinement to align with the project’s vision so that the result is accurate and credible.
Ethical concerns around originality: There’s an ongoing debate about originality in AI-generated designs, as many AI outputs are based on training data from existing works. We prioritise using AI as a support tool rather than a substitute for human creativity, as integrity is among our core values.
Future outlook: innovation with a human touch
Looking toward and past 2025, AI's role in arch viz is likely to expand further – supporting, rather than replacing, human creativity. AI will increasingly handle technical hurdles, allowing designers to focus on higher-level creative tasks.
AI advancements in real-time rendering are another hot topic, expected to enable more immersive, interactive tours, while predictive AI models may suggest design elements based on client preferences and environmental data, helping studios
anticipate client needs. AI’s role in arch viz goes beyond productivity gains. It’s a catalyst for expanding creative possibilities, enabling responsive design, and enhancing client experiences. With careful integration and human oversight, AI empowers arch viz studios – us included – to push the boundaries of what’s possible while, at the same time, preserving the artistry and precision that define highquality visualisation work.
Tudor Vasiliu is an architect turned architectural visualiser and the founder of Panoptikon, an award-winning high-end architectural visualisation studio serving clients globally. With over 18 years of experience, Tudor and his team help the world’s top architects, designers, and property developers realise their vision through high-quality 3D renders, films, animations, and virtual experiences. Tudor has been honoured with the CGarchitect 3D Awards 2019 – Best Architectural Image, and has led industry panels and speaking engagements at industry events internationally including the D2 Vienna Conference, State of Art Academy Days, Venice, Italy and Inbetweenness, Aveiro, Portugal – among others.
In recognition of the fact that AEC customers require different pricing structures from those in the automotive and aerospace sectors, Dassault Systèmes (DS) recently introduced a Catia for AEC bundle deal.
Catia sits at the Ferrari end of the manufacturing CAD spectrum. It’s typically used in demanding engineering design projects in the automotive, aerospace and maritime sectors. Here, the CAD package built and sold by Dassault Systèmes demonstrates its strength in handling complex geometries and huge assemblies, with advanced simulation and analysis thrown into the mix.
All this means that applying Catia to AEC projects might initially seem like overkill. However, as BIM has evolved, AEC users are increasingly creating large, complex datasets that they hope to carry into digital fabrication. In addition, DS has championed the digital twin concept from its earliest days.
The upshot is that this is not so much a case of Catia moving into AEC, but AEC customers increasingly needing the kinds of capabilities that Catia provides.
To date, DS has notched up numerous
successful wins in AEC: with construction companies such as Bouygues and advanced architecture practices including ZHA and SHoP, for example, as well as façade design firm Felix and a number of global modular construction firms. The software has also been used on a number of significant Chinese infrastructure projects.
This year, DS has been ramping up its efforts to position Catia as a competitive AEC design platform option. In particular, it has focused on addressing the platform price issue, by bundling up several apps and capabilities to encourage team adoption in the sector. The core Catia for AEC bundle comes with capabilities for advanced modelling, BIM design, generative design/scripting and 2D drawings.
So what's included in this bundle? At its heart is the full, unadulterated Catia experience. While the company offers a version that runs in the cloud (namely, 3DExperience Platform), there is also a desktop flavour that DS executives refer to as the product's 'rich client'. The Catia for AEC bundle is based on this desktop variant for performance reasons, although all data is stored in the cloud and is therefore accessible via a web interface. The platform supports simultaneous work on the same dataset and provides access to collaborative web applications that use the same live data.
The role of the user is defined as the building design engineer, who needs access to parametric modelling, automation and advanced surface modelling (Class A, subdivision and parametric), as well as live rendering with materials libraries for buildings, plus a set of building-specific tools.
These building-specific applications cover concept structure design; detailed structures for steel and concrete connections; BIM for elements like walls and doors; geolocation capabilities; terrain and mesh modelling; and cut-and-fill operations.
It's important to note that this building design engineer role does not, however, include mechanical, electrical, or plumbing (MEP) functionalities.
In the AEC industry, even the best and the brightest designers and engineers are working with tools that are incomplete and inefficient. That's the argument put forward by Jonathan Asher, Catia global sales director, when he recently met with AEC Magazine to discuss the launch of Catia for AEC.
These professionals, he continued, are tasked with forcing point solutions to work together, leading to a big waste of productivity. Designers struggle to patch holes and spend time searching for workarounds that will compensate for insufficient modelling capabilities. BIM tools, he added, "were created to deliver scaled drawing sets, not detailed 3D models for fabrication."
Another key problem, according to Asher, is that with today's "midrange desktop modellers" like Revit, users soon hit a "scalability wall", because file size is a major issue in BIM that frequently impacts collaboration and performance.
Asher contrasted these failings with what's on offer from Catia, which he claimed scales with the size and detail of a project, and is database-based (rather than file-based) on the 3DExperience Platform, which benefits collaboration and performance.
DS has been eyeing the AEC market since 2012, so it has taken the firm quite a while to pull its software together to compete in the market.
The fact that the digitisation of the AEC industry is now focusing on the manufacturing of buildings opens up a convergence opportunity for DS, as Catia is already relevant there.
When asked who the target user of the Catia for AEC bundle might be, Asher replied: "It's someone who is kind of stuck between Rhino, Grasshopper and Revit. This 'super user' is working on multiple projects, at different phases, and is stretched between multiple applications, trying to connect disparate tools."
He argued that the bundle is highly price-competitive for users already using both Autodesk's AEC Collection and Rhino, especially when factoring in the costs of Autodesk's cloud collaboration and potential clash detection/viewing tools that Catia also provides.
Asher recognised that there are still significant barriers to adoption of Catia in AEC. These include the investment users have already made in existing tools like Rhino, where people have built their careers around using Rhino, teaching Rhino and developing a Revit ecosystem. He also implicitly acknowledged the learning curve associated with a CAD system such as Catia.
Collaboration is a key element, provided by the underlying 3DExperience Platform. Its cloud capabilities facilitate simultaneous work on the same dataset. Models are partitioned into individual components or parts, and users can 'reserve' or 'lock' parts that they are working on, in order to prevent others from overriding their geometry.
When a user saves their changes to the database, others working within the same assembly can synchronise their sessions to see the updates, although the software doesn’t support real-time visualisation of movement.
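The reserve/lock behaviour described above is, in essence, a pessimistic-locking scheme over a shared part database. Catia's actual implementation is proprietary; this toy sketch (all names hypothetical) simply illustrates the semantics of reserving a part, saving changes, and synchronising to pick up others' updates:

```python
class PartRegistry:
    """Toy model of reserve/lock semantics on a shared assembly."""
    def __init__(self, part_ids):
        self.owner = {pid: None for pid in part_ids}   # who holds each part
        self.version = {pid: 0 for pid in part_ids}    # bumped on each save

    def reserve(self, part_id, user):
        """Take an exclusive write reservation; fails if held by someone else."""
        if self.owner[part_id] not in (None, user):
            return False
        self.owner[part_id] = user
        return True

    def save(self, part_id, user):
        """Commit changes to the database and release the reservation."""
        assert self.owner[part_id] == user, "must hold the reservation"
        self.version[part_id] += 1
        self.owner[part_id] = None

    def synchronise(self, known_versions):
        """Return the parts whose saved version is newer than the caller's."""
        return [pid for pid, v in self.version.items()
                if v > known_versions.get(pid, 0)]

reg = PartRegistry(["wall_01", "door_02"])
assert reg.reserve("wall_01", "alice")
assert not reg.reserve("wall_01", "bob")   # locked by alice
reg.save("wall_01", "alice")
print(reg.synchronise({"wall_01": 0, "door_02": 0}))  # ['wall_01']
```

After alice saves, bob's session can synchronise to see which parts changed, mirroring the workflow described above (without the real-time visualisation Catia does not attempt).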
The Visual Script Designer, meanwhile, is a powerful Catia-specific generative design tool with a visual scripting-style interface. This enables designers to craft scripts through building blocks, so that they can generate 3D geometries autonomously. DS executives claim the Visual Script Designer is easy to use, even by those without programming skills.
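The building-block idea is the same one familiar from Grasshopper: small nodes wired into a graph, with geometry falling out of the evaluation. A generic stand-in (not DS's actual scripting engine) shows how a few composable "nodes" can generate a parametric profile:

```python
def range_node(count, step):
    """Source node: a row of parameter values."""
    return [i * step for i in range(count)]

def move_node(points, dx):
    """Transform node: translate each x-coordinate."""
    return [x + dx for x in points]

def loft_heights_node(xs, height_fn):
    """Geometry node: pair each position with a computed height."""
    return [(x, height_fn(x)) for x in xs]

# Wire the nodes together, just as connectors do on the canvas:
xs = move_node(range_node(count=5, step=2.0), dx=1.0)
profile = loft_heights_node(xs, height_fn=lambda x: 10.0 - 0.5 * x)
print(profile[0])  # (1.0, 9.5)
```

Changing one upstream parameter (the step, the offset, the height function) regenerates the whole result, which is the core appeal of visual scripting for non-programmers.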
For drafting, there’s DraftSight Premium, a standalone, full version of DS’s 2D DWG-based drafting application. While this is a desktop application, it is also connected to the 3DExperience Platform, so that users can save DWG and DXF data to the cloud.
It also includes a BIM module that enables the import of IFC and Revit models in order to produce 2D drawings. The workflow between Catia and DraftSight currently involves exporting IFC data from Catia, saving it to the cloud, and then importing it into DraftSight. Over time, we expect this integration to improve and expand to include important productivity boosts, such as auto drawing capabilities.
While Catia includes several drafting applications, DraftSight is noted as a better 2D drawing environment, particularly for architectural and structural drawings, than other offerings that are better suited to mechanical CAD and/or civil drawings.
The Catia for AEC Special Offer is available in several different configurations. They include:
Single user: $6,500 per year or $1,950 per quarter for an individual subscription.
Three user: $16,800 per year or $5,000 per quarter. This bundle equates to a price of $5,600 per user, per year or $1,680 per user, per quarter. Buying the 3-user bundle saves 14% off the single user price.
Six user: $27,000 per year or $8,100 per quarter. This bundle offers the lowest per-user price, as it works out to $4,500 per user, per year, or $1,350 per user, per quarter, a saving of just over 30% off the single-user price.
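The per-user figures follow directly from the bundle prices; a quick re-derivation of the quoted numbers:

```python
single = 6500                       # $/user/year, single-user subscription
bundles = {"three user": (16800, 3), "six user": (27000, 6)}

for name, (price, seats) in bundles.items():
    per_user = price / seats
    saving = 1 - per_user / single
    print(f"{name}: ${per_user:,.0f}/user/year, {saving:.1%} off single-user")
# three user: $5,600/user/year, 13.8% off single-user
# six user: $4,500/user/year, 30.8% off single-user
```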
At present, Catia is still not widely used in the AEC space – so there remains an issue with attracting and training architects to use the software. On YouTube, an eleven-episode video series walks them through the Catia building design fundamentals (www.youtube.com/@CATIAAEC), and an associated wiki provides datasets and documentation.
There are also plans to revamp the wiki for easier navigation and to add a new ‘Getting Started’ section. Assistance is also available from a public Buildings and Infrastructure community on 3DSwym, a collaborative communication environment where questions can be posted and answered.
There are also online tutorials available for DS's Visual Scripting tool that amply demonstrate its powerful capabilities (www.tinyurl.com/AEC-catia). It looks to work in a very similar way to Grasshopper or Rhino, with graph nodes and connectors, and the CGM kernel and scripting make for a very powerful geometric combination.
The Catia for AEC bundle is offered at an aggressively low price, in DS terms. The cost would be low for Catia alone, but here, you’re also getting the Scripting and DraftSight modules.
In terms of function, it may not compare to Autodesk's AEC Collection, which is stuffed with products. However, only four of them - AutoCAD, Revit, Navisworks and 3ds Max - are regularly used. For most users, the DS bundle will cover most requirements.
While I don’t expect many firms in the AEC sector to abandon their current BIM tech stacks wholesale and adopt one based on Catia instead, I can certainly see the Catia bundle being added to the mix to tackle specific complex projects and hard-to-solve geometric problems. It offers the kinds of capabilities that many expert modellers lust after, as well as helping firms to speak the same language as their business partners in fabrication.
The inclusion of DraftSight is significant. We already know that this tool will soon be getting auto drawing capabilities.
In short, this is a bundle of tools that will only get better over time.
Catia is a platform. Its core modeller is based on DS's own CGM (Catia Geometric Modeller) kernel for solids, surfaces and meshes. On top of that modeller, there are thousands of plugins that enable the platform to offer a wide range of capabilities.
Typically, these are purchased as industry design bundles, including Design and Styling, Digital Prototyping, and Sustainability. They are also based on roles. These bundles include selections of tools, from carbon fibre design to advanced generative design - in short, whatever capabilities make sense for those roles.
It's easy to switch between these specific applications within the main Catia environment using its 'compass' interface, which provides new user interfaces, commands and functionality.
There are several different licensing options, including quarterly and yearly subscriptions, as well as perpetual licences that include an annual maintenance fee.
A significant capability of Catia is its ability to handle very large models - even complete virtual twins with every nut and bolt featured, such as those created in the automotive and aerospace industries.
Catia manages this through its assembly logic. When assembling many components, it primarily loads only the visual representation of the data, not the full authoring data. The detailed internal data and parametrics are only loaded into memory when a specific part is activated.
This enables users to model 'in context', while only loading the data they need, with other parts displayed as visualisation geometry. The first time a model is opened locally, it takes time to grab the visualisation data, but subsequent openings are faster, as the software performs a difference check against the cloud data.
From the perspective of prospective AEC customers, DS needs to concentrate on delivering more training tools for the bundle. It should look to integrate it with some of the tools more commonly used in AEC, as well as point clouds, Rhino, CDEs, alternative renderers and perhaps some of the newer, open source offerings.
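The assembly-loading behaviour described above, where only lightweight visualisation data is held until a part is activated, is a classic lazy-loading pattern. A generic sketch of the idea (hypothetical names, not DS's API):

```python
class LazyPart:
    """A part that always carries its lightweight visual mesh, but only
    loads heavy authoring data (parametrics, history) when activated."""
    def __init__(self, name, load_authoring):
        self.name = name
        self.visual_mesh = f"<display mesh for {name}>"  # cheap, always loaded
        self._load_authoring = load_authoring            # deferred loader
        self._authoring = None

    def activate(self):
        """Pull full authoring data into memory on first activation only."""
        if self._authoring is None:
            self._authoring = self._load_authoring(self.name)
        return self._authoring

loads = []
def fetch_from_database(name):
    loads.append(name)                    # track expensive fetches
    return {"parametrics": f"{name} feature tree"}

assembly = [LazyPart(n, fetch_from_database) for n in ("bolt", "bracket")]
# Browsing the assembly touches only visual meshes; nothing fetched yet:
assert loads == []
assembly[0].activate()                    # edit 'in context' on one part
assembly[0].activate()                    # cached; no second fetch
print(loads)  # ['bolt']
```

Only the activated part triggers a database fetch, which is why a virtual twin with millions of components can remain navigable.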
And there's certainly a desire among them to put Catia to work on AEC projects. For example, in the early 2000s, Hugh Whitehead of the Specialist Modelling Group at Foster + Partners told us that Catia would be his tool of choice (at that time, Foster + Partners was using Bentley Systems MicroStation, powered by Parasolid).
Whitehead described a project that featured a very complex guttering design. As it was being modelled, this design repeatedly crashed the firm’s tools. Eventually, Whitehead ended up in Italy, watching the fabricator successfully model this ‘impossible’ part in Catia, albeit after several attempts, and resolve the geometry issue.
Said Whitehead: “Other modellers just crash out when they fail. You have no idea why the problem happened. Catia may fail, but it won’t crash – and it will even tell you why it could not produce the geometry asked for.”
For many in AEC, that could be an appealing prospect.
■ www.3ds.com/store/catia-for-aec
Foster + Partners is renowned in the industry for employing a large team of software developers who work alongside its design team members. The result is a range of in-house tools used to create the company’s expressive architecture, one of which has recently been made freely available to benefit other designers
Signature architects sit at the cutting edge of design technology and typically use a vast array of digital tools and services to define their buildings. When they can't find a commercially-available software solution to meet their needs, they turn to their own teams of in-house developers to create custom code instead. The code built by these brightest minds might be intended for a specific project, or applied time and time again on different work. Either way, in-house development teams incorporate a wealth of knowledge and expertise, and the applications they build are a core component of a firm's intellectual property.
In the commercial world, when developers at an independent software vendor create an application, they not only have to write the code, but must also ensure that the interface is easy to use, predictable to navigate, and that its features and functions are well documented.
Typically, tools developed in-house
don’t get that kind of love. In many cases, they would not meet the quality standards set for commercially available software.
But at Foster + Partners, it seems that the Covid pandemic provided an opportunity to apply some polish to regularly used in-house programmes. At the Shape to Fabrication 2024 conference, the firm’s Applied R&D (ARD) team demonstrated some of these, including Cyclops, an environmental analysis plug-in for Grasshopper; Hydra, an optimisation tool; and Hermes, which supports data exchange for CAD and BIM.
This demonstration gave attendees a new insight into the depth and breadth of in-house software development at Foster + Partners and the extent to which these tools are productised. In other words, these bespoke applications looked like 'proper', commercial-quality software.
In April 2025, Foster + Partners did the seemingly unthinkable, making one of these tools — Cyclops — freely available.
What lies behind this unexpected move, it seems, is not just the firm’s deep-seated commitment to sustainability, but also its belief that making the tool more widely available could help the AEC industry achieve improved environmental performance for buildings.
Cyclops supports real-time environmental analysis during the conceptual design phase, with the aim of speeding up decision-making and freeing up time for critical thinking. Architects can use it to quickly understand how changes to their design will impact a building’s environmental performance. This builds their intuition for future projects and makes performance-driven design an integral part of the early design process, as well as supporting more frequent design iteration than traditional workflows.
Foster + Partners has been developing Cyclops for fifteen years now, refining it, boosting it with GPU power and actively using it on projects. It supports several essential environmental analyses for buildings and masterplans, including calculating metrics relating to radiation, daylight, shading mask, sky component, sunlight obstruction and sunlight hours.
The Cyclops website, meanwhile, includes informative guides to how each of these analysis capabilities work, providing essential reading for those looking to get the most out of each tool (https://docs.cyclops.fosterandpartners.com/Analyse). Since Cyclops works inside Rhino, designers don't need to swap between applications. It addresses the challenge of time-consuming analytical cycles and interoperability issues. Its primary function is to accelerate ray tracing-based simulations by leveraging the power of GPU
computing, specifically using Nvidia GPUs. This allows Cyclops to provide real-time environmental analysis, on a city-scale, faster than traditional tools.
According to Foster + Partners' own benchmarking studies, when compared to other, non-accelerated analytical methods, Cyclops can be up to 10,000 times faster at analysing sunlight hours and up to 800 times faster at analysing radiation. Apparently, the use of GPUs means Cyclops can deal with some 10 billion rays per second. It will run on a decent workstation containing an Nvidia GPU and, for optimum performance, the significantly faster RTX cards are recommended.
GPUs are essential because the speeds achieved are derived directly from the massive parallelisation which they deliver, as compared to standard CPU-based processing.
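To make that workload concrete: a sunlight-hours study boils down to huge numbers of occlusion rays, one per sample point per sun position, which is exactly the kind of embarrassingly parallel job GPUs excel at. A deliberately tiny CPU sketch of the idea (toy solar model and a single obstruction; illustrative numbers only, not Cyclops's algorithm):

```python
import math

def sun_altitude(hour):
    """Crude solar altitude model (degrees): sunrise 6h, solar noon 12h,
    sunset 18h, peaking at 60 degrees. Purely illustrative."""
    if not 6 <= hour <= 18:
        return -1.0
    return 60.0 * math.sin(math.pi * (hour - 6) / 12)

def sunlit(hour, obstruction_height, obstruction_distance):
    """Cast one ray: the point is lit if the sun sits above the angle
    subtended by the obstruction."""
    horizon = math.degrees(math.atan2(obstruction_height, obstruction_distance))
    return sun_altitude(hour) > horizon

# 20 m building, 15 m away: blocks the sun whenever altitude < ~53.1 degrees
hours = [h for h in range(24) if sunlit(h, 20.0, 15.0)]
print(len(hours))  # 3 hourly samples (11h-13h) are sunlit
```

A real city-scale study repeats this test for millions of façade points against detailed geometry and thousands of sun positions, which is where the claimed billions of rays per second come from.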
As previously stated, Cyclops is free for anyone to use, in educational, personal and even commercial projects. However, according to the simple terms and conditions (https://cyclops.fosterandpartners.com/Terms), users are not permitted to sublicence Cyclops or to resell it for profit.
AEC Magazine spoke with Martha Tsigkari, a senior partner and head of ARD at Foster + Partners, to hear more about the history behind the software. It began back in 2007, when the in-house ARD group started investigating ray tracing for view analysis. They quickly realised that ray tracing could be applied to other environmental conditions. Back then, Foster + Partners was a MicroStation customer and had hardware-accelerated analysis tools running inside that environment.
Making Cyclops more widely available took considerable effort, according to Tsigkari. The application needed intensive quality checking and documenting. The legal issues around freely giving away IP needed to be resolved. But the motivation to gift Cyclops to the industry came right from the top, from Sir Norman Foster and the board of directors.
Tsigkari believes Cyclops will be a game-changer for a lot of firms that don’t have access to real-time tools. While there are commercial applications on the market, the sheer speed of Cyclops and its integration with Rhino/Grasshopper is the reason Foster + Partners relies on this tool. The ability to check a design against sustainability goals, and to do so as you are creating it, without breaking the flow, is where Cyclops wins big, she says.
This is great news for Rhino-based firms in the AEC industry. With this announcement, Foster + Partners has stepped up, making it clear that wider environmental goals are more important to the firm than being the sole beneficiary of an in-house analysis tool, despite years of investment in it.
There are many architectural firms developing in-house tools. These may not be as professionally finished as Cyclops, but it would be interesting if this philanthropic act by Foster + Partners acts as a catalyst to more tool-sharing within the industry.
After all, plenty of firms are creating sustainability tools and building optimisations that could benefit everyone, even their direct competitors. If there were a free library of effective, commonly-used tools, the in-house software developers employed by AEC firms might be freed up to develop tools in more niche areas.
And while it might be easy for those at smaller firms to fume with jealousy over the software development resources open to their counterparts at much larger practices, we are now in a world where programming is becoming easier for everyone, thanks to the huge strides that AI makes each and every month. By the end of 2025, AI will probably be the best coder on the planet – and open to anyone to use, for a small subscription. All you’ll need is a clear definition of input and output requirements. Custom tools and custom code will become the norm, but you won’t need to be a programmer to build them (although programming skills certainly won’t hurt).
The real value you bring will be your industry knowledge. The challenge will be to codify it.
■ https://cyclops.fosterandpartners.com
At NXT BLD and NXT DEV in London on 11 and 12 June, Martha Tsigkari of Foster + Partners will present on the topic of AI and its impact on the industry.
Today’s design leaders are embracing a new foundation: geographic context. By weaving location intelligence, real-world imagery, and spatial analytics into their workflows, architects and urban designers can create spaces that are not only beautiful, but functional, sustainable, and resilient.
Whether you’re reimagining a neighborhood or designing a landmark, geospatial thinking ensures your work responds to the environment, connects communities, and stands the test of time.
Learn how to leverage GIS for more informed planning and design. Download your copy of GIS for Architecture, Planning, and Urban Design.
Download the ebook now at go.esri.com/aec-mag
Of all the BIM 2.0 players currently vying for customer attention, Snaptrude is the original and probably the most advanced. Until recently, the company took a fairly traditional approach to mapping out its features and functions, albeit in the cloud – but a rethink on segmentation and capabilities is now leading to a more modular approach
BIM is dominated by monoliths. These are single programmes that aim to support the full workflow from concept to documentation and everything in between. If its intention was to compete head-on with these monoliths, albeit with a new, cloud-based offering, a start-up would probably be inclined to build out a similarly monolithic, do-it-all system.
However, it’s starting to look very much like this is not the way that BIM 2.0 will play out.
What seems more likely is a host of companies offering ‘point’ solutions, all based in the cloud and all boasting API integrations and open formats that allow them to ‘speak’ to each other. That will make it possible for customers to choose key design tools from different vendors, in order to assemble their own ‘best of breed’ portfolio of tools. At the same time, the way that design and fabrication is changing will also be helped along by more choice, with vendors offering more focused, industry-specific tools.
All this spells big changes for BIM. By their very definition, BIM tools are pretty
general, while still being specific to the AEC industry. In a world of focused point solutions, we will still need broader platforms in order to design, document and collaborate — but the landscape undoubtedly looks set for upheaval.
AEC Magazine spoke with Snaptrude CEO Altaf Ganihar about the company’s evolving product strategy, and in particular, its recent move towards a compartmentalised platform with AI integration.
Snaptrude is positioning itself as a viable alternative to existing BIM tools, offering a more flexible and automated approach, particularly in the early design stages, while also anticipating a broader shift towards data lakes and AI-driven processes.
As Ganihar explained: “Snaptrude’s new product release breaks down the entire design process scientifically into four modes that can be operated in isolation or connectedly.”
First, he explained, there’s Program, which is used like an Excel spreadsheet to input requirements, along with RFPs, room data sheets, and so on. It focuses on structured data.
Second, there’s Design, which is an open canvas for conceptual massing, form finding, analysis (in areas such as solar) and other design explorations. From there, this data can be converted into a traditional, detailed BIM model in order to add elements such as walls, doors and windows.
Third and fourth are Presentation and Documentation, for creating sheets, presentations and renders.
According to Ganihar, “the significant advantage is that all these modes are connected, allowing data and views to be dragged and dropped between them.”
So while the underlying Snaptrude technology hasn’t changed, its workflow and packaging are now based on tasks, or
on the particular phase of a design. That means that users can dip in and out, using the tool for specific tasks without having to commit to the whole feature set — and perhaps continuing to use some of their existing tools.
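The modular structure described above, separate modes operating on one shared project model, can be sketched roughly as follows. This is an illustrative Python outline, not Snaptrude’s actual API; every class and field name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectModel:
    """Single shared data store that every mode reads and writes."""
    rooms: dict = field(default_factory=dict)    # Program: structured requirements
    massing: list = field(default_factory=list)  # Design: conceptual geometry
    sheets: list = field(default_factory=list)   # Documentation output

class ProgramMode:
    """Spreadsheet-like entry of room data sheets (structured data)."""
    def __init__(self, model):
        self.model = model
    def add_room(self, name, area_m2):
        self.model.rooms[name] = {"area_m2": area_m2}

class DesignMode:
    """Turns programme requirements into rough massing blocks."""
    def __init__(self, model):
        self.model = model
    def generate_massing(self, floor_height=3.0):
        for name, data in self.model.rooms.items():
            self.model.massing.append(
                {"room": name, "footprint_m2": data["area_m2"],
                 "height_m": floor_height})

class DocumentationMode:
    """Pulls data from the other modes onto sheets."""
    def __init__(self, model):
        self.model = model
    def make_schedule(self):
        sheet = [(m["room"], m["footprint_m2"]) for m in self.model.massing]
        self.model.sheets.append(sheet)
        return sheet

# The modes can be used in isolation or chained, because they share one model
model = ProjectModel()
ProgramMode(model).add_room("Lobby", 120)
DesignMode(model).generate_massing()
print(DocumentationMode(model).make_schedule())  # → [('Lobby', 120)]
```

The design point is that no mode owns the data: a user can work only in the spreadsheet-like Program mode, or only in Design, and anything they enter remains available to the other modes.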
Snaptrude is currently focused on making early-stage design profitable using its tools and AI. It is seeing this approach resonate at midsize to large firms that currently burn significant billable hours in the concept stages.
Adoption among such customers often begins by deploying the software on a single project to prove its value, typically in terms of helping to reduce early-stage design spend. If successful, then confidence grows, leading to a gradual replacement of incumbent technologies,
often starting with tools like SketchUp, then eventually Revit.
Architects, particularly when age and experience are taken into account, can be resistant to change. However, early-stage design in Revit has never really been ideal. Ganihar says that, for this particular stage in the design process, customers are more open to shifting to easier-to-use tools.
When speaking with architects who have looked at BIM 2.0 software, we often hear from them that what they see today is simply not comparable to mature desktop BIM tools. That’s entirely understandable. Customers certainly struggle to place their bets on a new, unknown system simply on the promise of future capabilities.
But Ganihar insists that incumbents such as Autodesk should be scared. Focus is now shifting from applications creating information to the quality of that information and the ease of extracting it, using AI. On top of that, he argues, AI can now easily perform tasks historically reliant on programmers or on specific software tools, simply by querying a model in natural language. These changes, he says, are not just incremental improvements. They represent a fundamental shift in how AEC professionals will interact with software and data, moving towards more automated, AI-driven and data-centric workflows.
On the subject of AI
As AI agents start to appear, the very nature of the industry and what software does will change, Ganihar predicts.
“AI will only work on structured data. Snaptrude’s Program mode is built like a data lake for structured, granular data, which is something other companies like Autodesk are also trying to figure out. Structured data is seen as a door to endless possibilities,” he says.
“At the moment, AI is being used as a co-pilot to generate or refine programs with structured data, but this will expand. We are prototyping AI automated adjacency creation, blocking and stacking based on physics models, inspired by AlphaFold. Users can also delegate tasks to AI, much like Gmail’s autocomplete, and refine the AI’s suggestions iteratively. Also, AI can automate mundane tasks such as generating render options on a pinboard, creating presentation slides and documentation.”
Snaptrude is heavily investing in AI, building a machine learning team to accelerate AI development, and is exploring how to expose APIs in the programme environment, so that external AI agents can call and return structured data. An early scripting environment is being built where users can ask AI to write scripts for jobs such as reviewing duct lengths or designing a facade based on sunlight hours.
“AI will do the mundane tasks faster than before, but won’t do the creative tasks, suggesting it will remain an advanced ‘answering machine’ for the next five to ten years,” he continues.
“However, the quality of questions asked by humans will change. Way off in the future, AI has the potential to automate most stages of design, predicting fully designed buildings, based on their internal development. AI will perform functions like real-time clash checking and building code checks. They are evaluating how much of each design stage AI can handle, with the answer increasingly being: ‘Affirmative’.”
All this will, of course, have a big impact on the business models of software firms. Microsoft’s CEO has already announced that, with AI, SaaS is dead. That’s an idea with which some firms in our industry have yet to come to terms.
As Ganihar explained: “Business models will change from traditional seat-based pricing to outcome-based pricing, where companies pay for the results they get from the software. This could make software more expensive per outcome, but reduce the number of seats needed, as firms might not need as many people doing manual tasks. There may well be an argument at some point for making core software use free and only charging for AI use.”
“Things have become very interesting in the last six months,” is Ganihar’s dry understatement – and we tend to agree!
It is perhaps indicative of the fierce competition that’s developing between BIM 2.0 developers and the huge impact that AI is likely to have that Snaptrude has chosen to rationalise its development work.
Snaptrude CEO Altaf Ganihar is especially proud of a spreadsheet-based conceptual design tool known as Program mode or Tables. This integrates an Excel-like interface directly with the Snaptrude platform and is specifically designed for early-stage concept design. It functions as a data lake for structured information such as requirements, RFPs, and room data sheets. This system is live and connected bidirectionally to the 3D design environment, allowing data to be seamlessly linked to massing and form finding.
I can see some capabilities that answer Motif’s initial offering, in concept design and presentation mode. And there is also the realisation that it’s much easier to adopt another design tool by focusing on winning one specific design phase, rather than trying to win everything. If anything, the modes approach that Snaptrude has adopted lends more clarity to its capabilities in each phase.
Snaptrude has the biggest development team in BIM 2.0 Land and it is maintaining its velocity, now stretching to add discipline-centric AI knowledge to each phase.
Altaf Ganihar will be at NXT BLD and NXT DEV, taking part in the BIM 2.0 sessions. The company can also be found in the exhibition space.
■ www.snaptrude.com
This integration enables architects to create, edit and iterate the program smoothly alongside the design. An AI co-pilot is available to generate or refine programme data. By providing a single, connected environment for structured data and design, it helps reduce early-stage design spend and makes concept design more profitable.
Why are there no unicorns in construction?
AEC start-ups continue to hit the concrete ceiling, despite the vast potential for digital disruption in the sector. In this article, Tal Friedman proposes a strategy for entrepreneurial success that he believes could help break the pattern and see the rise of the first contech mega companies
The construction industry is one of the largest sectors on the planet. Yet even with an annual value estimated at $14 trillion, it has yet to experience digital disruption.
The potential is huge, and billions of dollars have been invested in construction and property technology (contech and proptech) start-ups over the past five years, with the mission of sparking a revolution in how we create the built environment.
Yet a field perceived as brimming with potential is still to produce the desired fruit. In this article, we will delve into the key challenges facing AEC start-ups and how these might be overcome.
In recent years, I have witnessed firsthand the growth of the construction tech industry, through my own involvement in running a start-up, taking on advisory roles for government organisations and corporates, and working in the construction material commerce space. That’s given me an in-depth view of the huge potential here, and how far we are from filling the current gap. I present my views with hope that it can help AEC entrepreneurs and investors find some common ground on which to build.
Why the delay?
So what makes construction so tricky and so seemingly immune to digital disruption as compared to other sectors? As I see it, construction tech start-ups face a number of key challenges, including:
Dinosaurs versus unicorns: Construction is a game that comes with a very heavy ticket price to play. Not only does it require a strong financial backbone and stamina, it also takes a lot of experience – one that cannot be bought, even with lavish funding. As opposed to the IT sector, where new
start-ups can spring up to become market-leading mega companies with valuations measured in billions of dollars in just a few years, the building sector is dominated by veterans. Even in the construction software space, the market is led by well-established players offering tech stacks that are often 20 to 30 years old. This would seem like a natural breeding ground for a new age to emerge, where the Canva, Figma or Wiz of construction might flourish, but it is yet to happen. Let’s begin the analysis by seeing how the money is spread.
Construction start-ups are from Venus, venture capitalists (VCs) are from Mars: Traditionally, most VCs come from a software-first mindset, in which success is defined in terms of recurring revenue, user acquisition, and scalable SaaS models. They focus on large user numbers, lean, quick growth and owning categories. In contrast, construction start-ups are rooted in a world of physical infrastructure, where value is created through complex, capital-heavy projects with long lead times, tight regulations, and high execution risk. Even if they are software-oriented, the mindset is completely different, as we will explore later in this article. VCs expect exponential growth curves and monthly churn reports, while construction founders grapple with bidding cycles, procurement hurdles and delivery timelines measured in years, not sprints. This disconnect can make it difficult for construction innovators to translate their impact into metrics that VCs can understand. And that looks set to remain the case, at least until a new generation of investors emerges who are prepared to go deep into the screw level. Some construction tech oriented VCs have emerged in recent years, such as Foundamental, Brick and Mortar,
Building Ventures, Blackhorn Ventures, Zacua and Kompas. Some leading construction companies also make corporate venture capital (or CVC) available to promising start-ups. These include Cemex, Saint-Gobain and Vinci. However, we still need the ‘heavy lifters’ of the VC world to help scale the field.
Construction tech versus tech for construction: One of the crucial misconceptions of this market lies in the failure to distinguish between these two verticals. The term ‘construction tech’ refers to innovations in how we build things, focusing on construction processes, materials and methodologies. The term ‘tech for construction’, by contrast, is used to refer to pure technological applications, such as management software, Internet of Things (IoT) technologies, AR/VR and others. In other words, these are technologies that can be applied to construction, but are not specific to the field. But VCs tend to invest in the second category, the pure tech models, seeking the next Uber, Figma, Airbnb, Facebook or Tesla of construction. Their investments aim for disruptive, high-growth technology, but are often disconnected from the ground – or in our case, the building site. Fields like manufacturing, materials, and on-site work have always been less lucrative to tech investors, but this is where the greatest potential for sector-wise change lies.
Warning: hurdles ahead
So what are the biggest hurdles faced by construction tech start-ups? In my experience, the most prevalent barriers include:
1. The ‘butterfly effect’, where risk is greater than gain: In construction, time is money and projects are inherently
risky. Each project is unique, involving numerous variables and stakeholders. For a start-up, developing a new solution might be straightforward and worth the risk of trying, but that’s only because they are not paying the price of their mistakes. A minor deviation from the plan can lead to significant delays and cost overruns in completely unexpected areas. The potential risks associated with untested technology are often too great for construction companies to bear. Therefore, start-ups must provide watertight solutions, with a clearly defined scope of work, that mitigate the risk.
2. Recognition that a one-trick pony is not a unicorn: Despite being a $14 trillion industry, the construction sector is a relatively small market for point solutions. These are specific, narrow applications designed to address particular problems. The number of practicing AEC professionals is estimated at between 3 million and 4 million, which translates to a limited customer base for specialised software solutions. It’s important to note that the AEC software market is valued at around $6 billion, controlled by centralised players. This discrepancy highlights the limited scope and scalability of many construction tech solutions. While a point solution might solve a specific problem efficiently, its applicability is often too narrow to attract widespread adoption to build a unicorn.
3. A monopolised software ecosystem: Due to the heavy nature of the industry, software start-ups face a difficult situation in which they can never replace the existing platforms, but only integrate. Players such as Autodesk, Nemetschek and Bentley Systems are so heavily rooted in the industry that start-ups must adapt their solution to mature infrastructure and formats that are sometimes not at the forefront. Closed formats create a very muddy puddle in which to play, while IFC on its own does not provide a sufficient common format. This rough landing makes it difficult for new entrants to board. Start-ups must fight an uphill battle before even entering the ring. Creating non-intrusive onboarding that doesn’t require switching tools or a long learning curve can drastically improve the chances of success.
4. The difference between a service and SaaS: The days of software training and specialists are over. In the instant AI age, expecting users to retrain with complex new tools as in the past is unrealistic. With tech solutions popping up like mushrooms after the rain, users lack the skills to go deep, and this often creates problems for anyone making a serious construction application. As a result, construction tech companies frequently need to provide comprehensive services alongside their product, to tailor-fit it and follow through. This service-intensive model can be very heavy and costly, not just for the customer, but also for the start-up. Given the ‘plug-and-play’ expectations of the market, start-ups can easily find themselves in a swamp of unprofitable projects that far exceed their initial offering and remain unscalable. It is key to build products that are self-maintained and exercised by the client.
5. Makers developing for makers: A significant challenge in achieving product/market fit arises when creators design solutions to solve their own problems, rather than addressing broader industry needs. This insular approach often results in products that resonate strongly with a niche audience, but fail to gain traction in the wider market. For instance, an engineer might develop a tool to streamline a specific aspect of their workflow, without considering how many professionals face the same issue, or whether the solution can be scaled. It may well be that 90% of their colleagues will find it useful, yet there are not enough of them to build a large business, or they can’t afford to pay the required prices. This type of company may be quick to build traction, but that can be very misleading.
6. Lengthy time-to-market: Construction projects often span two to five years. This extended timeline is fundamentally at odds with the rapid growth expectations applied to start-ups and their investors. Start-ups typically need to demonstrate exponential growth and achieve scalability quickly. However, the slow pace of the construction industry means that it can take years for a start-up to complete a real proof of concept (POC), regardless of how much funding it has raised. If a project spans three years, and a new tech is onboarded for trial, it could take a few years before the technology becomes a watertight solution, and only then can it start to scale. This protracted timeline hinders the ability to achieve the rapid growth necessary to reach unicorn status, deterring investors who seek faster returns.
Turning disadvantage to advantage
Having described the hurdles, let’s remember that along with barriers comes potential. The biggest advantage of construction is scale and stability. If it works, it’s huge! Yes, it takes a bit more time, but that time is won back as a competition barrier.
The market has proven that the building sector is one of the safest and most reliable markets for those with a competitive advantage. Start-ups offering true innovation can win big and become unicorns if they have the patience and foresight to understand their path. Seeking quick returns and hyper growth from the get-go is likely to lead to disappointment.
For me, the AEC unicorn formula is:
Onboarding ease * users * profit = Potential
This simple formula provides a quick analysis tool to predict the likely success of start-ups: a grading system based on three crucial metrics, combined to give a single measure of potential.
The Unicorn formula for AEC
Onboarding (grade factor 1-10): ease of adoption, required support, associated risks
Potential users: the total number of users or projects; take 10% of that
Profitability: estimated profit per user/project
Ease of onboarding: This is perhaps the most crucial determinant of a go/no-go situation. As previously mentioned, there is no time in today’s world for deep training and long integration times. Smooth onboarding and usage is the key to winning customers. This is calculated as a factor between 1 and 10, where 10 indicates the optimum ease of use.
Users: These are users/projects that show potential to become customers. The goal is to identify a realistic number of customers who are willing to pay for the tech product or service and assume 10% of that number will be won over.
Profit: This is a term that isn’t sufficiently talked about in a start-up world where everyone is chasing scale, but construction is a cut-throat business and if your product is not competitive or is just a ‘nice-to-have’ rather than a ‘must-have’, it will not survive. Profit margins should be watertight and based on competitive advantage.
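Expressed in code, the formula is a one-liner. A minimal Python sketch, with purely illustrative numbers:

```python
def unicorn_potential(onboarding_ease, potential_users, profit_per_user):
    """Tal Friedman's AEC unicorn formula:
    onboarding ease (graded 1-10) * realistic users * profit per user.

    Only 10% of the identified user base is assumed winnable.
    """
    if not 1 <= onboarding_ease <= 10:
        raise ValueError("onboarding ease is graded from 1 to 10")
    winnable_users = potential_users * 0.10
    return onboarding_ease * winnable_users * profit_per_user

# e.g. a plug-and-play tool (ease 8), 50,000 target users, $400 profit each
print(unicorn_potential(8, 50_000, 400))  # → 16000000.0
```

The point of the exercise is comparative, not predictive: two candidate products scored on the same three metrics can be ranked against each other.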
As we step into the age of AI and automation, it is predicted that construction will transform itself, and with it the nature of its tools and the daily lives of its workers. This is an exciting time in history and an opportunity to get on board. AI is not only being used in the tools that we make – it’s also being used to make them. That’s lowering barriers to entry like never before. For this reason, we can expect to see a big increase in digital solutions from developers of all sizes, opening doors that for years were thought of as closed.
I hope this article helps you position yourself in this brave new world, either as a user, an entrepreneur or an investor looking to take part in the digital transformation of construction. I am certain that if all six points are addressed, success is guaranteed. Best of luck to us all!
Tal Friedman will be presenting at NXT BLD on 11 June - ‘From AI visions to fabricated realities’
New opportunities open for boutique architecture firms such as Scott Shields Architects as they embrace new technology that boosts their early-stage workflows and business. Read about how the team implemented Autodesk Forma to maximise resources, reduce rework and win bigger projects
For Scott Shields Architects’ (SSA) Toronto-based team of 15, flexibility and efficiency are key to delivering successful projects–from commercial and residential to master planning–and main criteria when trying new tools. “Efficiency is not necessarily just about time. If you can be more efficient, it gives you more time and opportunities to focus on other aspects of a project,” explains Andrew Shields, managing principal at Scott Shields Architects. “We look for tools that will help us overcome the challenge of time pressures and deliver better outcomes for our clients.”
The firm is often commissioned to do feasibility studies and schematic designs of mixed-use, high-density housing projects. To enable efficient workflows and effective decision-making, the team integrated Autodesk Forma, a cloud-based tool for planning and design. This strategic addition connects Forma–available from their AEC Collection subscriptions–with Revit and Autodesk Construction Cloud. Shields: “Linking software in a continuous workflow from
concept to construction allows us to seamlessly move through the project without wasting time redrawing at each stage.” It’s also about having the right information at the right time to make the best design decisions. “With Forma, we get an exponentially higher output because we can analyse so much more, and I know it’s going to give our clients better outcomes in the same amount of time,” he continues.
Forma’s quick site set up gets the team testing in no time
Having accurate site information helps the team hit the ground running. “With so many developments in the city, you need to monitor what’s happening and how regulations apply. So, having access to context models instantaneously is very valuable. To get that base set up within Forma in a couple of clicks is fantastic–it saves a huge amount of time and hassle, which allows us to start testing concepts straightaway.”
Iterating multiple ideas in parallel helps them quickly rule out the ‘what ifs’. Shields: “We can test concepts and if they don’t work, we just move onto the next one. Previously we might not spend much time exploring an idea because we don’t know if it will work–so it’s extremely creative.”
Forma’s easy-to-use analyses come into play for designing comfortable urban environments, often governed by regulations. Shields: “Considering overshadowing and ensuring good daylight access is crucial for the approvals process. Being able to visualise this from day one in Forma avoids problems down the line.” Everyone in the team can quickly access this data, regardless of their technical knowledge. “Previously we had to wait weeks for reports and then redo the design once we got the information. Now we can do those checks almost instantly ourselves and make changes immediately,” he says.
Data gives the team confidence they’re on the right track. “Having analysed multiple options with data to back them up, we can justify our design rationale more convincingly,” Shields explains. “We can show options with the ideal unit mix, least overshadowing or best wind conditions. So, we feel comfortable that we can move the project forward and the thorough testing allows clients to make decisions faster.”
After the schematic phase, the Forma model is sent to Revit to start the BIM process. Shields: “As the Revit model already has the base parameters such as the context, massing and levels, we can start the main modelling straightaway without time-consuming, manual processes. There was downtime previously when different software was not connected and we had to remodel everything–so by avoiding that rework using the Forma and Revit connection, this helps us to save 1-2 days on average.”
Despite SSA’s compact team, project scale is no obstacle. “Having Forma allowed us to find more opportunities because we’re confident we can handle them,” Shields says. “Master plans of two million square feet are projects we may have shied away from before because of size, complexity and resource restrictions whereas having this software certainly allows us to compete for them now.”
Innovation is rooted in SSA’s culture. Shields: “We’re always looking for new possibilities whether that’s in software or materials. We keep up to date with the latest technologies because at the end of the day, it allows us to present better projects to our clients.”
Want to learn more? Join our “What is Forma” webinar: autodesk.com/forma-webinar
It’s not often a mobile workstation comes along that truly rewrites the rulebook, but the HP ZBook Ultra G1a does just that, with an integrated graphics processor that has fast access to more memory than even the most powerful discrete laptop GPU, writes Greg Corke
Intel has dominated the mobile workstation sector for decades. Its processors have powered everything from ultra-compact 14-inch models to high-performance 17-inch behemoths. Today, Intel Core chips are almost always paired with discrete Nvidia laptop GPUs. AMD hardly gets a look in.
But with the new 14-inch HP ZBook Ultra G1a, and its powerful AMD Ryzen AI Max Pro ‘Strix Halo’ processor, that balance could be starting to shift.
The AMD Ryzen AI Max Pro comes with an integrated RDNA 3.5-based AMD Radeon GPU with a professional graphics driver. But unlike integrated GPUs of the past, this one flexes some serious muscle. Together with 16 high-performance ‘Zen 5’ CPU cores, this means the HP ZBook Ultra G1a can deliver unprecedented levels of performance in a highly portable 14-inch form factor.
But there’s more. Unlike a discrete GPU with a fixed pool of memory, the integrated AMD Radeon GPU can tap directly into up to 96 GB of high-speed system memory – far exceeding what any other laptop GPU provides. As long as the laptop is properly configured you shouldn’t have to worry about datasets exceeding the GPU memory limit, which can happen with discrete laptop GPUs with 8 GB or even 16 GB of memory. When that limit is breached, workflows can grind to a halt – or worse, the software can crash.
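The arithmetic behind that headline figure is simple. The sketch below is our own illustration (not an HP or AMD tool), using the 75% RAM cap quoted later in this review:

```python
# Rough illustration: how much memory the integrated GPU can address on a
# unified-memory system, and whether a given dataset fits. The 75% cap is
# the platform figure quoted in the review; treat it as indicative.

def unified_gpu_memory_gb(system_ram_gb: float, cap: float = 0.75) -> float:
    """Maximum system RAM the integrated Radeon GPU can be allocated."""
    return system_ram_gb * cap

def dataset_fits(dataset_gb: float, gpu_memory_gb: float) -> bool:
    """Does a scene fit without spilling into slower borrowed memory?"""
    return dataset_gb <= gpu_memory_gb

print(unified_gpu_memory_gb(128))  # 96.0 -> the 96 GB maximum on a 128 GB machine
print(dataset_fits(35.1, 8))       # False: an 8 GB discrete GPU falls well short
print(dataset_fits(35.1, 96))      # True: fits comfortably in unified memory
```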
The 14-inch mobile powerhouse
‘14-inch’ and ‘powerhouse’ are two words not often used together, but the HP ZBook Ultra G1a delivers the kind of performance you’d usually expect from a much larger 15-inch or 17-inch laptop. For architects, engineers and product designers, it’s the first 14-inch mobile workstation that can truly be used for GPU-accelerated 3D visualisation. Most other 14-inch models are largely limited to 2D / 3D CAD and BIM workflows.
The AMD Radeon 8060S GPU in the HP ZBook Ultra G1a – integrated into the top-tier AMD Ryzen AI Max+ Pro 395 processor – is significantly more powerful than those typically used in other 14-inch mobile workstations. Most 14-inch laptops are limited to Nvidia RTX 500 Ada (4 GB), Nvidia RTX 1000 Ada (6 GB), or Nvidia RTX 2000 Ada (8 GB) GPUs. These entry-level GPUs are not only less powerful, on paper, but have access to far less memory – limitations that can impact demanding workflows (more on this later).
The HP ZBook Ultra G1a also marks a massive leap forward from HP’s previous AMD-based mobile workstation, the HP ZBook Firefly G11A (read our review: www.tinyurl.com/AEC-Firefly), which comes with 2024’s AMD Ryzen Pro 8000 Series processor and integrated AMD Radeon 780M GPU. In most GPU-intensive tasks, the HP ZBook Ultra G1a is around two to three times faster.
The AMD Ryzen AI Max Pro has twice the number of cores of the older AMD Ryzen Pro 8000 Series processor, which comes with eight ‘Zen 4’ cores. This gives it a massive uplift in highly multi-threaded workflows. The AMD Ryzen AI Max Pro platform also supports much faster memory – 8,000 MT/s LPDDR5X compared to DDR5-5600 on the AMD Ryzen Pro 8000 Series – so memory-intensive workflows like simulation and reality modelling should get an additional boost.
While all eyes are on the GPU, the HP ZBook Ultra G1a also comes with a very powerful ‘Zen 5’ CPU with 16 cores and 32 threads. In multi-threaded workflows like rendering and simulation, this puts it head and shoulders above most 14-inch mobile workstations, which tend to come with Intel processors such as the Intel Core Ultra 9 185H, with far fewer high-performance cores. It’s even on par with more powerful laptop processors like the Intel Core i9 13950HX that are only available in 15-inch and 17-inch models. However, in single-threaded or lightly threaded tasks such as CAD and BIM, Intel still appears to have the lead.
Like most new laptop chips, the AMD Ryzen AI Max Pro also comes with a Neural Processing Unit (NPU), capable of dishing out 50 TOPS of AI performance, meeting Microsoft’s requirements for a Copilot+ PC.
While 50 TOPS NPUs are not uncommon, it’s the amount of memory that the NPU and GPU can address that makes the AMD Ryzen AI Max Pro particularly interesting for AI. In theory, having access to large amounts of memory should allow the processor to handle large AI workloads, such as multi-billion parameter large language models (LLMs), which would not fit into the fixed memory of a discrete GPU.
On a more practical level for architects and designers, the chip’s ability to handle large amounts of memory could offer an interesting proposition for text-to-image generators like Stable Diffusion, which are increasingly used for ideation in early-stage design. It should be able to output higher resolution images without the massive slow-down that typically happens when GPU memory becomes full (read our GPUs for Stable Diffusion article: www.tinyurl.com/AEC-GPU-SD).
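To put rough numbers on the LLM claim, the memory needed just to hold a model’s weights scales with parameter count and precision. This is our own ballpark arithmetic, not a vendor figure:

```python
# Ballpark estimate of LLM weight memory (weights only - the KV cache and
# activations add more on top). Illustrative arithmetic, not vendor data.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    # params_billions * 1e9 params * (bits/8) bytes, expressed in GB
    return params_billions * bits_per_param / 8

for params, bits in [(7, 16), (70, 16), (70, 4)]:
    gb = weight_memory_gb(params, bits)
    verdict = "fits" if gb <= 96 else "exceeds"
    print(f"{params}B model @ {bits}-bit: ~{gb:.0f} GB ({verdict} 96 GB)")
```

A 70-billion-parameter model at 16-bit precision would overwhelm any discrete laptop GPU, but a 4-bit quantised version sits comfortably inside a 96 GB unified pool.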
A massive pool of memory
Discrete GPUs, such as Nvidia RTX, have a fixed amount of on-board memory. In a 14-inch mobile workstation this is 4 GB, 6 GB or 8 GB. In contrast, the AMD Radeon GPU, built into the AMD Ryzen AI Max Pro processor, has direct and fast access to a massive, unified pool of system memory. It can use up to 75% of the system’s total RAM, allowing for up to 96 GB of GPU memory when the HP ZBook Ultra G1a is configured with its maximum 128 GB.
This means the mobile workstation can handle certain workloads that simply aren’t possible with other laptop GPUs.
When a discrete GPU runs out of memory it must ‘borrow’ some from system memory, but as data must be transferred over the PCIe bus, this is highly inefficient. Performance can drop dramatically depending on how much memory the GPU needs to borrow. Render times increase and frame rates can fall from double digits to low single digits – making it nearly impossible to navigate models or scenes. In some cases, the software may crash entirely.
The HP ZBook Ultra G1a allows users to control how much memory is allocated to the GPU. In the BIOS, simply choose a profile – from 512 MB, 4 GB, 8 GB, all the way up to 96 GB (should you have 128 GB of RAM to play with). Of course, the larger the profile, the more it eats into your system memory, so it’s important to strike a balance.
The amazing thing about AMD’s technology is that should the GPU run out of its ringfenced memory, it can seamlessly borrow more from system memory, if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains very fast. Even with the smallest 512 MB profile, when borrowing 10 GB, we found 3D performance only dropped by a few frames per second, maintaining that all-important smooth experience within the viewport.
This means that if system memory is in short supply, opting for a smaller GPU memory profile can offer more flexibility by freeing up RAM for other tasks.
As a laptop, the HP ZBook Ultra G1a is a very impressive piece of kit – solid, exceptionally well-built, lightweight, and slim. Weighing just 1.50 kg and measuring 18.5 mm in profile, it’s the thinnest ZBook ever. Considering the sheer processing power packed inside, that’s nothing short of remarkable.
The slender chassis is made possible by the power-efficient AMD processor, which is rated at 50W. Under heavy loads it draws 70W at peak, regardless of whether you’re hammering CPU or GPU. For extreme multi-tasking, power is shared between both processors, resulting in lower clock speeds – GPU clock speeds fell between 2,600 MHz and 2,850 MHz.
The HP ZBook Ultra G1a comes with a 140W power supply, chunkier than most other 14-inch mobile workstations. But it’s still USB-C, so you get flexibility.
For cooling there’s an improved HP Vaporforce thermal system.
Running off battery, the system’s power consumption drops to 40W, leading to reduced all-core CPU frequencies of around 2.5 GHz and GPU frequencies of 1,700 MHz. At these levels, fan noise was virtually silent. The HP XL-Long Life 4-cell, 74.5 Wh polymer battery delivered 95 minutes of runtime under full GPU load in Twinmotion, and 125 minutes in the SolidWorks SPECapc benchmark, which combines demanding graphics with intensive single- and multi-threaded CPU tasks. For typical day-to-day use – where modelling happens in short bursts and many editing tasks are relatively lightweight – you can expect significantly longer battery life. The good news is that the laptop charges quickly, reaching 50% in just 29 minutes and 80% in 55 minutes.
There are two display options: a standard FHD (1,920 × 1,080) panel, and one of the standout features of our test machine – a stunning 2,880 × 1,800 OLED touchscreen with a 120Hz refresh rate, 400 nits of brightness, and 100% DCI-P3 colour gamut. There’s only a single NVMe TLC SSD (512 GB – 4 TB), which is hardly surprising given the size of the machine.
The HP ZBook Ultra G1a is well equipped with ports – two USB-C (40 Gbps) Thunderbolt 4 with DisplayPort 2.1, one USB-C (10 Gbps) with DisplayPort 2.1 – all supporting power delivery – and one USB Type-A (10 Gbps) with charging support. While there’s no RJ-45 Ethernet port, wired connectivity is still possible via a USB adapter, and you can also get fast data transfer from the built-in Wi-Fi 7.
In use, the keyboard feels solid and responsive, while the large trackpad is smooth and natural. That said, serious 3D modelling is always best handled with an external mouse.
Finally, for Teams and Zoom calls, there’s an impressive 5 MP IR camera with Poly Camera Pro software. Advanced features like AutoFrame, Spotlight, Background Blur, and virtual backgrounds are powered by the 50 TOPS NPU, optimising power efficiency and helping maximise battery life. The camera shutter’s black-and-white stripes are a nice touch, making it immediately obvious when it’s closed.
The elephant in the room is the price. If you’re used to budget-friendly 14-inch mobile workstations — or assumed an integrated GPU would save you money — this might come as a shock. Our fully loaded review unit — featuring an AMD Ryzen AI Max+ Pro 395, 128 GB of RAM, 2 TB SSD, and a 14” 2.8K touch display — comes in at a hefty $4,049.
There are savings to be had, though. Dropping to a 1 TB SSD knocks off a significant $490 (bringing it down to $3,559), and opting for 64 GB of RAM instead of 128 GB saves another $460 (down to $3,099).
It certainly pays to shop around — prices vary wildly. We’ve even seen our exact review configuration listed on hp. com for a jaw-dropping $9,060.
We put the HP ZBook Ultra G1a to work in a variety of real-world CAD, visualisation, simulation and reality modelling applications. Our test machine was fully loaded with the top-end AMD Ryzen AI Max+ Pro 395 processor and 128 GB of system memory, of which 64 GB was allocated to the AMD Radeon 8060S GPU. All testing was done at the OLED display’s maximum 2,880 x 1,800 resolution.
We compared the laptop to the previous generation HP ZBook Firefly G11A, as well as a range of desktop workstation CPUs and desktop workstation GPUs with 8 GB of memory. While we didn’t have access to recent Intel-based mobile workstations for testing, the comparisons still offer valuable context – particularly in terms of GPU memory limitations.
In 3D CAD software Solidworks 2025 the HP ZBook Ultra G1a easily handled everything we threw at it. While the previous generation HP ZBook Firefly G11A with AMD Ryzen Pro 8000 Series processor stuttered with very large models, the HP ZBook Ultra G1a took everything in its stride. It even made light work of the colossal Maunakea Spectroscopic Explorer telescope assembly with 8,149 parts and 58.9 million triangles, delivering silky smooth model navigation. As Solidworks is largely single-threaded or lightly threaded, there wasn’t a huge difference between the ZBook Ultra G1a and ZBook Firefly G11A in terms of computational performance.
Real-time GPU rendering / visualisation is where you start to see the true benefits of the HP ZBook Ultra G1a. In workloads that require a lot of GPU memory, having access to a large pool gives it a significant performance edge over systems with memory-constrained discrete GPUs.
In many viz tools, memory usage rises significantly with the output resolution of final renders. In Solidworks Visualize, for example, a small suspension model uses 1.2 GB to load but then a considerable 7.4 GB to render at 4K and 12.8 GB to render at 8K. The much larger snow bike model with 32 million polygons uses 12.4 GB to render at 4K and 19.1 GB to render at 8K.
Of course, if you load the equivalent CAD model in Solidworks at the same time, as most designers would, that adds an additional 5.0 GB to the overall GPU memory footprint. In multi-application workflows like these, when there’s not enough GPU memory to go around, each app must release memory before the other app can claim what it needs. This transition can cause a stutter as memory is freed and allocated. And if you’re trying to render in the background while continuing to model in Solidworks, insufficient memory will likely result in much longer render times.
We found Twinmotion by Epic Games demands even more from GPU memory. The Snowdon Tower scene, for example, which is fairly typical of a mainstream arch viz dataset, uses 8.1 GB of GPU memory simply to load, 9.8 GB to render at FHD resolution, 14.2 GB to render at 4K and a whopping 35.1 GB to render at 8K!
Of course, with 64 GB of GPU memory to work with, the HP ZBook Ultra G1a handled the task with ease. But what happens when a discrete GPU runs out of memory? To render six 4K images, the 8 GB desktop GPUs were around 6 GB short, resulting in significantly longer render times.
The HP ZBook Ultra G1a completed the job in just 183 seconds, while the AMD Radeon Pro W7500 (8 GB) took 659 seconds, the Radeon Pro W7600 (8 GB) 688 seconds, and the Nvidia RTX A1000 (8 GB) 799 seconds.
The important thing to note here is that while these desktop GPUs may only appear slightly less powerful on paper, they shouldn’t perform dramatically slower. However, once GPU memory is maxed out and the discrete GPU starts swapping to system RAM, performance drops sharply and render times increase significantly.
We saw even more dramatic behaviour in Lumion 2024, where our test scene uses 12 GB of GPU memory when rendering at 4K. With both 8 GB Radeon Pro desktop GPUs, the software simply couldn’t cope with running out of memory and subsequently crashed.
Of course, for lighter workloads, which require less than 8 GB, render times are much more comparable. In the D5 Render benchmark, for example, which uses well under 8 GB of GPU memory, the HP ZBook Ultra G1a completed the task in 266 seconds. By comparison, the AMD Radeon Pro W7500 (8 GB) took 386 seconds, the Radeon Pro W7600 (8 GB) 261 seconds, and the Nvidia RTX A1000 (8 GB) 294 seconds.
With 64 GB of GPU memory at its disposal, the HP ZBook Ultra G1a never approached its limit – even with our most demanding models. However, as datasets grow larger, although it can continue rendering, navigating and working within the 3D environment can become increasingly challenging.
In D5 Render, for example, the HP ZBook Ultra G1a took 795 seconds to render a colossal lakeside model with 2,177,739,558 faces at 4K resolution, using 21 GB of GPU memory. However, viewport performance dropped to just 4.3 frames per second, making it difficult to navigate the scene – a serious productivity killer. For very large models like this, a more powerful GPU such as the Nvidia RTX 5000 Ada (16 GB) would probably be a better choice and you’d also be able to take advantage of Nvidia DLSS to boost real-time performance using AI. However, that would require stepping up to a 16-inch or 18-inch mobile workstation.
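Putting the headline numbers side by side makes the memory cliff stark. The sketch below simply recomputes the ratios from the render times quoted above:

```python
# Render times in seconds, reproduced from the review's benchmark figures.
# Twinmotion (six 4K images) exceeds 8 GB of GPU memory; D5 Render fits.
twinmotion = {"ZBook Ultra G1a (64 GB)": 183, "Radeon Pro W7500 (8 GB)": 659,
              "Radeon Pro W7600 (8 GB)": 688, "RTX A1000 (8 GB)": 799}
d5 = {"ZBook Ultra G1a (64 GB)": 266, "Radeon Pro W7500 (8 GB)": 386,
      "Radeon Pro W7600 (8 GB)": 261, "RTX A1000 (8 GB)": 294}

for name, times in [("Twinmotion (memory-bound)", twinmotion),
                    ("D5 Render (fits in 8 GB)", d5)]:
    base = times["ZBook Ultra G1a (64 GB)"]
    for gpu, t in times.items():
        print(f"{name}: {gpu} took {t / base:.2f}x the ZBook's time")
```

When the dataset fits in 8 GB, the desktop cards are within roughly 1.5x of the ZBook (one is even slightly faster); when it doesn’t, they fall 3.6x to 4.4x behind.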
1 Rendering this Twinmotion Snowdon Tower scene requires 14.2 GB of GPU memory at 4K resolution and a whopping 35.1 GB at 8K resolution
2 Running out of GPU memory can have serious consequences. In Lumion, our 12 GB test scene renders without issue on the HP ZBook Ultra G1a, thanks to the AMD Radeon 8060S having access to 64 GB of shared memory. However, on a desktop workstation with an 8 GB AMD Radeon Pro GPU, the same scene causes the software to crash
3 GPU memory usage is not limited to a single application. Here Solidworks CAD uses 5.0 GB while Solidworks Visualize uses 12.4 GB to render at 4K
It’s also important to note that AMD’s RDNA 3.5-based GPUs generally trail Nvidia when it comes to hardware ray tracing performance. For example, in Lumion 2024, the HP ZBook Ultra G1a is only 1.44 times slower than a desktop Nvidia RTX 2000 Ada GPU in raster rendering, but that gap widens to 2.61 times slower during ray-trace rendering. More critically, enabling path tracing in Twinmotion on the HP ZBook Ultra G1a caused the software to crash.
Over the past decade, AMD GPUs have been incompatible with certain professional 3D software tools. While AMD GPUs work directly with open standards like DirectX 12 – the graphics API that forms the backbone of Unreal Engine, Lumion, Twinmotion, Revit, D5 Render and others – any software developed with Nvidia CUDA has not been able to run on AMD Radeon (Pro) GPUs.
The good news is this is starting to change. As Independent Software Vendors (ISVs) cotton on to the huge potential of AMD’s new integrated graphics technology, there appears to be real appetite to support it. AMD has collaborated with several ISVs to port Nvidia CUDA code over to AMD’s HIP framework. For visualisation, software includes Luxion KeyShot, Maxon Cinema 4D and Redshift, and Rhino with Cycles. For simulation, there’s Altair Inspire and Ansys Discovery.
Of course, there’s still a lot of work to do. Some AEC-focused ISVs depend on Nvidia GPUs to accelerate specific features. In reality modelling software Leica Cyclone 3DR, for example, AI classification is built around Nvidia CUDA (read our workstations for reality modelling article: www.tinyurl.com/WS-reality).
One of the most interesting developments is from Luxion, which currently supports AMD GPUs in a beta version of KeyShot 2025.2.
4 With access to 64 GB of GPU memory, the HP ZBook Ultra G1a can render scenes that would typically need to be handled by a CPU renderer. In KeyShot 2025.2 beta we managed to GPU render this colossal multi-room supermarket model from Kesseböhmer Ladenbau that features 447 million triangles and uses up 18.1 GB of GPU memory simply on loading
Luxion was a long-time advocate of CPU rendering, but in 2020 finally embraced the GPU, adding render support for Nvidia RTX through OptiX. Today, many of its customers use GPUs to benefit from faster results. However, CPU rendering remains essential for more complex projects. When a discrete GPU runs out of memory and must swap out to system RAM, rendering performance can slow significantly and warning messages appear. If memory demands then increase further, the system can eventually crash.
With the HP ZBook Ultra G1a, KeyShot customers have real choice – render on CPU or GPU, it doesn’t matter, as both have access to the same memory.
We tested the hardware in KeyShot 2025.2 beta, using an absolutely colossal multi-room supermarket model courtesy of Kesseböhmer Ladenbau that uses 18.1 GB of GPU memory simply to load. The scene features 447 million triangles (nearly 900 times more than KeyShot’s sample headphone scene) and includes 2,228 physical lights and 237,382 parts with incredible detail. There are chiller cabinets, cereal boxes, and 3D fruit and vegetables!
In fairness, this model is probably too big to use on a laptop like this, day in, day out. While FHD renders took under 10 minutes, we found it very hard to move around the scene.
But the key point here is that it would previously have been unthinkable for a scene this complex to be GPU-rendered on a 14-inch laptop.
Don’t forget the CPU
It’s easy to forget that the HP ZBook Ultra G1a also has an incredibly powerful CPU. In our CPU rendering benchmarks, its scores in V-Ray 6.0 (33,285), Cinebench 2024 (1,596), KeyShot 11.3 (4.15) and Corona Render 10 (9,923,886) are almost exactly twice that of the HP ZBook Firefly G11A – although this is hardly surprising given it has twice the number of cores.
What might surprise you is that compared to the desktop AMD Ryzen 9 9950X processor, which boasts the same number of ‘Zen 5’ cores but draws 230 watts vs 70 watts at peak, there’s not that big a difference.
The desktop AMD Ryzen 9 9950X is only 45% faster in V-Ray, 39% faster in Cinebench, 44% faster in KeyShot and 50% faster in Corona Render 10.
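Given the 70W vs 230W power budgets, those deltas are striking. Plugging the quoted percentages into the laptop’s scores gives the implied desktop figures – our own arithmetic for illustration, not separately benchmarked results:

```python
# Back-calculating implied desktop Ryzen 9 9950X scores from the laptop's
# benchmark scores and the review's quoted "X% faster" figures.
# Illustrative arithmetic only - not measured desktop results.
laptop_scores = {"V-Ray 6.0": 33285, "Cinebench 2024": 1596,
                 "KeyShot 11.3": 4.15, "Corona Render 10": 9923886}
pct_faster = {"V-Ray 6.0": 0.45, "Cinebench 2024": 0.39,
              "KeyShot 11.3": 0.44, "Corona Render 10": 0.50}

for bench, score in laptop_scores.items():
    implied = score * (1 + pct_faster[bench])
    print(f"{bench}: laptop {score:,} -> implied desktop ~{implied:,.2f}")
```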
The competition
Mobile workstation manufacturers— including Dell, Lenovo, and HP—are currently in a transitional phase. New models featuring Intel and Nvidia processors have been partially announced, and by July we can expect a full rollout of workstation-class laptops powered by Intel Core Ultra (Series 2) CPUs and Nvidia Blackwell laptop GPUs.
It’s important to note that Nvidia is increasing memory across its professional GPU lineup. However, this upgrade is modest on the models most likely to appear in 14-inch laptops. The RTX Pro 500 Blackwell (6 GB), RTX Pro 1000 Blackwell (8 GB), and RTX Pro 2000 Blackwell (8 GB) all remain at 8 GB or below.
In contrast, the higher-end models – RTX Pro 3000 Blackwell (12 GB), RTX Pro 4000 Blackwell (16 GB), and RTX Pro 5000 Blackwell (24 GB) – offer significantly more memory than their Ada Generation predecessors. This capacity lift could make them much better equipped to handle more demanding datasets. And let’s not forget that the RTX Pro 4000 Blackwell and RTX Pro 5000 Blackwell will likely offer significantly more performance than the integrated AMD Radeon 8060S GPU in the HP ZBook Ultra G1a.
In the HP ZBook Ultra G1a, it feels like we’re witnessing a genuine shift in the landscape for 14-inch mobile workstations. This isn’t just about cramming more power into a device that slips easily into a backpack – it’s about redefining what’s possible with integrated GPUs. And, with one eye on the future, perhaps even workstation GPUs in general.
HP is currently the only major workstation OEM to take on the processor, but Dell and Lenovo will certainly be paying close attention.
Naturally, there’s a practical limit to how large a model the laptop can handle. An architect or designer might tolerate longer render times with a GPU that’s not in the high-end league. But if that trade-off means only being able to navigate a model at a few frames per second, it can quickly lead to frustration. In some cases, full interactivity from a more powerful GPU will outweigh the benefits of ultimate portability. It’s all about finding the right balance. In saying that, not all pro applications need a fully interactive 3D viewport, and the HP ZBook Ultra G1a could be a good fit for GPU-accelerated simulation and reality modelling – not forgetting CPU-accelerated workflows too.
Of course, there are still some software compatibility hurdles to overcome, particularly in CUDA-only tools. But from the software developers we have spoken with, there appears to be genuine excitement about the new technology. For the first time in years, it feels like the playing field is beginning to level. To sustain this momentum, AMD will need to continue to invest in software development.
Perhaps the most exciting thing is where this technology could be heading — both in the short term and further down the line.
HP has already announced the HP Z2 Mini G1a, a micro desktop workstation powered by the same AMD Ryzen AI Max Pro processor. But with a bigger 300W power supply, it’s likely to deliver significantly better performance.
Could we also see AMD Ryzen AI Max Pro make its way into larger mobile workstations, which typically come with 230W power supplies? It certainly seems possible.
Looking further ahead, we’re eager to see what the rumoured ‘Zen 6’-based ‘Medusa Halo’ processor – expected to launch next year – might bring to the table. According to “Moore’s Law is Dead”, a respected tech-focused YouTube channel, it could offer a 30% to 50% GPU performance boost over the current AMD Ryzen AI Max Pro chip.
If these predictions hold true, the AMD chip could deliver a substantial edge when handling larger models, potentially redefining the capabilities of both mobile and desktop workstations.