AEC March / April 25



Motif

Former Autodesk colleagues take aim at Revit with $46m funded BIM 2.0 startup

editorial

MANAGING EDITOR

GREG CORKE greg@x3dmedia.com

CONSULTING EDITOR

MARTYN DAY martyn@x3dmedia.com

CONSULTING EDITOR

STEPHEN HOLMES stephen@x3dmedia.com

advertising

GROUP MEDIA DIRECTOR

TONY BAKSH tony@x3dmedia.com

ADVERTISING MANAGER

STEVE KING steve@x3dmedia.com

U.S. SALES & MARKETING DIRECTOR

DENISE GREAVES denise@x3dmedia.com

subscriptions

MANAGER

ALAN CLEVELAND alan@x3dmedia.com

accounts

CHARLOTTE TAIBI charlotte@x3dmedia.com

FINANCIAL CONTROLLER

SAMANTHA TODESCATO-RUTLAND sam@chalfen.com

AEC Magazine is available FREE to qualifying individuals. To ensure you receive your regular copy please register online at www.aecmag.com

about

AEC Magazine is published bi-monthly by X3DMedia Ltd, 19 Leyden Street, London, E1 7LE

UK

T. +44 (0)20 3355 7310

F. +44 (0)20 3355 7319

© 2025 X3DMedia Ltd

All rights reserved. Reproduction in whole or part without prior permission from the publisher is prohibited. All trademarks acknowledged. Opinions expressed in articles are those of the author and not of X3DMedia. X3DMedia cannot accept responsibility for errors in articles or advertisements within the magazine.

Industry news 6

Motif comes out of stealth, Snaptrude gets Excel-like interface, Lumion launches ‘design companion’ viz tool, Archicad goes solo, plus lots more

Workstation news 12

Nvidia launches Blackwell workstation GPUs, Dell rolls out Intel-based Pro Max PCs, and HP unveils new Z workstations

AI in AEC news 16

Chaos bags AI software firm EvolveLab, BIM assistant for Revit launches, and Architechtures adds context

Information Integrity 17

As AI reshapes how we engage with information, Emma Hooper explores how to harness the power of LLMs without losing sight of critical thinking

AI-driven design

Studio Tim Fu 18

The London practice is reimagining architectural workflows, blending human creativity with machine intelligence

Motif to take on Revit 22

We get the lowdown on the new $46 million funded BIM 2.0 startup led by former Autodesk co-CEO, Amar Hanspal

Future BIM voices 28

At NXT BLD and NXT DEV four leading BIM 2.0 startups present their commercial tools, alongside a wealth of innovations

Rebuilding BIM 30


In the last edition, we asked five established AEC software developers to share their observations and projections for BIM 2.0. Now it’s the turn of the startups

Higharc AI: 3D BIM model from 2D sketch 36

A cloud-based design solution for US timber frame housing presents impressive new AI capabilities

Reimagining civil infrastructure design 48

Civil engineering software startup Infraspace is transforming early-stage design using generative design and AI

AI agents for civils 52

How LLMs can help engineers work more efficiently, while still respecting professional responsibilities

Reviving Brutalism 56

How 3D visualisation can help change the conversation around Brutalism

Polycam for AEC 58

Blending iPhone LIDAR with photogrammetry, this reality capture startup is now targeting the AEC sector

Autodesk Tandem 60

Autodesk’s cloud-based digital twin platform is evolving at an impressive pace. We take a closer look at what’s new

On the subject of digital twins 64

We spoke with the developer of Twinview to hear the latest on digital twins

Nvidia RTX A1000 66

This entry-level workstation GPU is a notable upgrade from the Nvidia T1000, but could the slightly pricier Nvidia RTX 2000 Ada Generation be the better option?

Motif launches cloud-based AEC collaboration platform

Motif, the AEC software startup that came out of stealth in February, has launched its cloud-based platform offering ‘seamless real-time collaboration’ for architecture and engineering teams.

The single platform, which aims to unify 2D and 3D workflows, is designed to eliminate the fragmentation that often plagues design review processes.

Motif offers a ‘unified collaboration space’ which allows architects, engineers, and project stakeholders to work together in an ‘infinite canvas workspace’ that integrates 2D drawings, 3D models, images, sketches and specifications.

With direct connections to Revit and Rhino it supports live model streaming and integration, allowing real-time updates without manual re-uploads.

2D / 3D sketching and markups allow teams to capture feedback directly on models and drawings with real-time commenting and sketching tools.

Comments made in Motif appear in Revit and vice versa.

Smart presentation tools use frames and views to create focused, interactive design presentations without having to export static slides, while maintaining connections to evolving designs.

Motif is not stopping with collaboration. The company, which is headed up by ex-Autodesk co-CEO, Amar Hanspal, has plans to build a next-generation BIM authoring tool. Learn more in our cover story on page 20.

■ www.motif.io

D5 Render gets real-time path tracing

D5 Render 2.10, the latest release of the AEC-focused real-time rendering software, introduces several new features including real-time path tracing, AI-driven post-processing, a city generator, and night sky simulation.

The new real-time path tracing system delivers global illumination (GI) with ‘superior efficiency’, allowing for ‘cinematic-quality’ rendering in real time. According to the company, ‘instant lighting results’ reduce trial and error while minimising the need for extensive post-processing.

The real-time path tracing system enhances visual fidelity with physically accurate reflections, soft shadows, and indirect lighting with customisable GI precision, reflection depth, samples per pixel (SPP), and noise reduction. An accumulate mode progressively refines render output.

D5 Render 2.10 also expands its AI functionality with a new tool designed to simplify post-processing, minimising the need for third-party editing software.

■ www.d5render.com

Cintoo offers BIM and Twin editions

Cintoo is adding new portfolio options to its Cintoo platform – the BIM and Twin Editions.

The Cintoo platform, which is focused on reality data, allows users to upload and stream huge 3D data files from any desktop or laptop via a web browser. Users can compare reality data to their BIM and CAD models or scans to scans for project collaboration and optimisation.

The new BIM Edition is designed for AEC workflows and includes features such as progress monitoring and issue tracking, to support analysis and help eliminate risk.

■ www.cintoo.com

Tekla gets AI-enhanced

The 2025 release of structural BIM software Tekla Structures features new AI-enabled tools designed to enhance productivity and accelerate the creation of fabrication drawings.

The new AI Cloud Fabrication Drawing service, introduced as a preview in Tekla Structures 2025, uses AI to create fabrication drawings, reducing the need for manual adjustments.

Meanwhile, an AI-powered Trimble Assistant for Tekla is available, both within the product as an extension and as a web app in Tekla User Assistance, a centralised product support system for customers.

The assistant is designed to provide accurate and concise answers to users’ questions on Tekla products — Tekla Structures, Tekla Structural Designer, Tekla Tedds or Tekla PowerFab — by using the Tekla User Assistance knowledge base.

■ www.tekla.com

Snaptrude builds Excel-like interface into BIM software

Snaptrude has built an Excel-like interface directly into its BIM authoring software, to make architectural programming simpler and allow architects to quickly generate data-backed design concepts with views, renders, and drawings.

With the new ‘Program’ mode every row, formula, and update is synced live with the 3D model, and vice versa. According to Snaptrude, this means architects don’t need to juggle separate spreadsheets, ensuring real-time accuracy and eliminating the need for manual cross-checking. Users can define custom formulas and rules to fit their specific building program needs.

‘Program’ works alongside ‘Tables’, which is billed as a new home for all kinds of structured information inside Snaptrude. Tables includes an AI wizard, so users can ‘quickly generate’ or refine their program with an AI co-pilot.
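The live sync Snaptrude describes — spreadsheet rows, formulas, and the 3D model updating each other — can be pictured with a toy sketch. Everything below is hypothetical: the class, field names, and variance formula are illustrative only, not Snaptrude’s actual data model.

```python
# Toy sketch of a two-way "program table <-> model" sync, loosely inspired
# by the behaviour described above. All names here are hypothetical.

class Space:
    """One row of the program table, backed by one model element."""
    def __init__(self, name, target_area):
        self.name = name
        self.target_area = target_area   # spreadsheet side
        self.modeled_area = target_area  # 3D model side

    def resize_in_model(self, new_area):
        # Editing the 3D model updates the backing row...
        self.modeled_area = new_area

    @property
    def variance(self):
        # ...so a formula column like this always reflects the live model.
        return self.modeled_area - self.target_area

program = [Space("Lobby", 120.0), Space("Office", 450.0)]
program[1].resize_in_model(475.0)  # a change made in the 3D view
print([(s.name, s.variance) for s in program])
# → [('Lobby', 0.0), ('Office', 25.0)]
```

The point of the sketch is simply that when the formula reads live model state, there is no second spreadsheet to reconcile — which is the cross-checking step Snaptrude says Program mode eliminates.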

“Over the last 18 months, we’ve started spending a lot of time with mid to large sized architectural firms across the US and globally as well. And one thing which we constantly kept hearing is Excel is everywhere,” said Altaf Ganihar, CEO, Snaptrude. “From programming to construction, everybody knows how to use it, it’s very easy to use, and everybody relies on it. So instead of fighting it, we said, let’s just embrace it; we built an Excel-like interface directly into Snaptrude.”

■ www.snaptrude.com

SketchUp gets viz and interop boost

Trimble SketchUp 2025 features better interoperability with Revit and IFC, and new visualisation capabilities, including photorealistic materials and environment lighting options.

To improve interoperability the 3D modelling software now includes more predictable IFC roundtrips, greater control over which Revit elements and 3D views are imported, and improved support for photorealistic materials when exporting USD and glTF file formats.

The new visualisation features, according to Trimble, enable designers to apply photorealistic materials, turn on environment lighting and see how they interact in real time without hitting a ‘render’ button or waiting to see changes.

For enhanced environments, 360-degree HDRI or EXR image files now act as a light source, reflecting off photoreal materials. Meanwhile, dynamic materials are said to more accurately convey texture and represent how real-world materials absorb and reflect light, with a view to producing richer, more realistic visuals within SketchUp.

Finally, the introduction of ambient occlusion adds visual emphasis to corners and edges, adding perceived depth and realism with or without having materials applied.

■ www.sketchup.com

Topcon and Pix4D form partnership

Topcon Positioning Systems and Pix4D have announced a strategic agreement that combines their expertise in geopositioning and photogrammetry solutions.

The collaboration includes Topcon becoming an authorised distributor of Pix4D’s photogrammetry software and will ‘streamline procurement’ and enhance technical support.

“The agreement on close collaboration with Topcon marks an important milestone in Pix4D’s growth strategy,” said Andrey Kleymenov, CEO at Pix4D. “A combination of precision positioning technology from Topcon and advanced photogrammetry and GeoFusion algorithms from Pix4D creates a powerful set of solutions for professionals in the utilities, infrastructure, and horizontal construction markets globally.”

■ www.topconpositioning.com

■ www.pix4d.com

Newforma links cloud to on-premise

Newforma, a specialist in project information management software for the AEC industry, has launched Newforma Konekt File Server Connector.

The new technology is designed to act as a golden thread in AEC information management, ‘seamlessly linking’ on-premise and cloud-based systems into one unified, live view of project data.

According to Newforma, the key difference of the software lies in its ability to integrate with Autodesk Construction Cloud, SharePoint, file servers, and more.

■ www.newforma.com

ROUND UP

Carbon assessment

Preoptima has launched PACER (Planning Application Carbon Evaluation and Reduction), a platform built to support the UK’s local planning authorities (LPAs) in reviewing whole life carbon assessments (WLCAs) and enforcing carbon policy

■ www.preoptima.com

Corrosion mapping

TRACE-SI has introduced InspecTerra’s iCAMM to the UK market. The system helps detect corrosion, stress distribution, structural degradation and defects in ferromagnetic materials like steel reinforcement, beams and piles

■ www.trace-si.com

Structural rebrand

Following on from its acquisition of Strucsoft Solutions in 2021, Graitec has rebranded its flagship MWF (Metal Wood Framer) product line to Strucsoft. The software is designed to automate the design and manufacturing of framing components directly within Revit ■ www.strucsoftsolutions.com

Bluebeam link

Vectorworks has partnered with fellow Nemetschek Group brand Bluebeam to develop an integration for real time collaboration. The new Vectorworks Bluebeam Connection supports efficient tracking of PDF markups for revisions, RFIs, punch lists, and other submittals without leaving the Vectorworks interface ■ www.vectorworks.net

Inclusive design

Torus 2025, the latest version of Transoft’s roundabout design and analysis software, includes new features aimed at creating roundabout designs that are safer and more inclusive of all road users, including cyclists and pedestrians ■ www.transoftsolutions.com

Bentley development

Software development consultancy, AMC Bridge, has become a member of the Bentley Developer Network. The company’s recent R&D experiments include enhancing MicroStation capabilities in handling point cloud data and establishing a bi-directional link between MicroStation and Cesium ■ www.amcbridge.com

Twinmotion 2025.1 adds realism to exterior scenes

Epic Games has unveiled Twinmotion 2025.1, the latest release of its AEC-focused visualisation software.

New features include volumetric clouds; enhanced real-time rendering of orthographic views; and ‘configurations’ that enable users to build interactive 3D presentations.

To enhance the realism of exterior scenes, Twinmotion now has the option to use ‘true volumetric clouds’. Users can author the appearance of clouds by adjusting their altitude, coverage, and distribution, and by fine-tuning their density, colour, puffiness, and more.

Volumetric clouds can be affected by wind and will cast shadows. The software includes several presets so users can choose different formations as starting points for their own creations.

Adding more control to exterior scenes, users can now adjust the clarity and colour of the dynamic sky via new settings for turbidity and atmosphere density. In addition, the colour or temperature of the sun (or the directional light in the case of HDRI skies) can also be set, as well as colour, height and density of fog.

To showcase different variations of a project to clients or stakeholders, the new ‘Configurations’ feature allows users to instantly switch between the variations in Fullscreen mode or when viewing images, panoramas, videos, or sequences in local presentations.

There are also several camera animation enhancements, a measure tool, and automatic level of detail (LoD) to help maintain real-time performance when working with complex meshes.

■ www.twinmotion.com

Graphisoft has launched Archicad Studio, a new subscription plan tailored to solo practitioners working independently or with local teams.

Archicad Studio includes Archicad on macOS and Windows, local teamwork with BIMcloud Basic, Graphisoft AI Visualizer, the BIMx mobile app for iOS and Android, and BIMx Pro features.

It also incorporates Archicad extensions, like Python API, PARAM-O, Maxon Redshift, Library Part Maker, and additional Surface Catalog, plus training, support, and services.

“With AEC technology evolving at such a rapid pace, we want solo practitioners to have access to cutting-edge BIM software innovations as soon as they hit the market,” said Gábor Kovács-Palkó, senior director of product portfolio strategy at Graphisoft.

“Archicad Studio achieves exactly that — affordable access to Archicad’s powerful BIM workflow at a competitive price point scaled to the solo practitioner’s needs.”

Archicad Studio costs £159 per month.

■ www.graphisoft.com

Lumion View ‘design companion’ viz tool debuts

Lumion has unveiled Lumion View, a new visualisation plugin that allows architects to visualise their projects in a path-traced real-time viewport without having to leave their CAD/BIM environment.

The software is billed as an early-stage design companion, purpose-built for design exploration by delivering live rendered feedback to design choices. Any geometry or material changes that are made in CAD/BIM are automatically reflected in the Lumion View window. Features include conceptual render styles (clay, wood, Styrofoam, glossy), sun studies, material adjustments and 4K renders. VR walkthroughs and a Mac version are on the roadmap.

Lumion View is currently available in Early Access for SketchUp, but there are plans to expand to Revit later this year, followed by Archicad, Rhino, and others.

Pricing has not yet been announced, but the company has said that Lumion View will be ‘very affordable’ and will also run on ‘much lower grade hardware’ than other viz tools.

■ www.lumion.com

Topcon and Faro laser scanning partners

Topcon and Faro have announced a strategic agreement to ‘develop and distribute innovative solutions’ in the laser scanning market.

The companies expect the agreement to expand access to digital reality solutions and result in complementary product developments, such as the seamless integration of Topcon and Sokkia solutions with Faro’s solutions. The companies will also harness their collective expertise in laser scanning.

■ www.topconpositioning.com ■ www.faro.com

Allplan expands offsite capability

AEC software specialist Allplan, part of the Nemetschek Group, has acquired Manufacton, the US developer of an offsite construction platform that provides real-time visibility to offsite production and optimises prefabrication processes through AI and data-driven decision-making.

According to Allplan, the acquisition will enable it to capitalise on the potential growth in the modular construction and Design for Manufacture and Assembly (DfMA) sectors, strengthen its position in the US market, and provide Manufacton with a platform to expand its presence in Europe and Asia Pacific.

■ www.allplan.com ■ www.manufacton.com

Solibri adds formulas for take offs

Users of Solibri Office can now use formulas to make it easier to extract, manipulate, and classify data directly within the AEC-focused checking and collaboration software.

With the new feature, columns in ITO (Information TakeOff) tables and Classifications are now labelled with letters (A, B, C, etc.) to make formula writing more intuitive. A new column type, “Formula”, allows for dynamic calculations within ITO tables.
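The lettered-column idea can be illustrated with a minimal sketch. The data, formula syntax, and evaluation approach below are assumptions for illustration only; Solibri’s actual implementation is not shown here.

```python
# Minimal sketch of lettered take-off columns plus a computed "Formula"
# column, in the spirit of the feature described above. Data and formula
# syntax are made up for illustration.

rows = [
    # A: object count, B: unit volume (m3)
    {"A": 4, "B": 2.5},
    {"A": 10, "B": 0.8},
]

def eval_formula(expr, row):
    # Evaluate a formula such as "A * B" against one row's lettered columns.
    # eval() with an emptied builtins namespace keeps the toy example short;
    # a production tool would use a proper expression parser.
    return eval(expr, {"__builtins__": {}}, row)

for row in rows:
    row["C"] = eval_formula("A * B", row)  # C becomes a computed column

print([row["C"] for row in rows])  # → [10.0, 8.0]
```

Labelling columns with letters makes the reference in the formula unambiguous, which is presumably why Solibri adopted the spreadsheet convention.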

Solibri Office also now includes a new and improved measurement tool to make it easier to measure objects, and enhanced integration with Autodesk Construction Cloud (ACC) and BIM 360, offering direct access to ACC/BIM 360 issues within Solibri.

■ www.solibri.com

Vectorisation to launch for HP Build

In May 2025, HP plans to officially launch an AI vectorisation feature for its HP Build Workspace collaboration platform, first announced in September 2024.

The software uses AI to convert raster images into CAD-editable documents and can detect lines, polylines, arcs, and text. Once text has been extracted and indexed, users can search on that data.

The conversion service comes with a simple editor, which allows users to change lines that were incorrectly converted from dashed into solid, connect lines that should have been snapped together, as well as clean, remove or add elements.

■ www.build.hp.com

Workstation

Nvidia RTX Pro Blackwell workstation GPUs launch to take performance crown

Nvidia has launched the Nvidia RTX Pro Blackwell generation of professional workstation GPUs for desktops and laptops, with significantly improved AI and ray tracing capabilities.

Of the new desktop GPUs, the flagship Nvidia RTX Pro 6000 Blackwell Workstation Edition features a whopping 96 GB of GDDR7 memory, double that of the previous Nvidia RTX 6000 Ada Generation. This opens up the Nvidia RTX Pro family to even more demanding workflows in AI, simulation and visualisation.

Nvidia is billing the new dual slot board as the most powerful desktop GPU ever created. On paper, it outpaces the 32 GB consumer-focused Nvidia GeForce RTX 5090, which launched earlier this year. With a slightly beefier chip, the RTX Pro offers better single-precision performance and is also faster in AI and ray tracing workloads.

This marks a change in strategy for Nvidia, as the company’s top-end workstation GPUs usually run slower than their consumer GeForce equivalents. One of the reasons for this is that workstation cards usually draw less power. But this is not the case for the Nvidia RTX Pro 6000 Blackwell Workstation Edition, which goes up to 600W, a massive step up from the 300W Nvidia RTX 6000 Ada Generation GPU and slightly more than the 575W Nvidia GeForce RTX 5090.

This increased power draw will likely have an impact on how the new chip is deployed by the workstation OEMs. While some high-end desktops can physically house up to three or four dual slot GPUs, we don’t expect many will be able to handle the thermal demands of multiple Nvidia RTX Pro 6000 Blackwell Workstation Edition GPUs, not least because of the axial fan design which makes it hard to cool multiple GPUs when placed close together.

This is probably why Nvidia has also launched the Nvidia RTX Pro 6000 Blackwell Max-Q Workstation Edition. It offers similar specs, but in a more familiar 300W package, translating to around 12% less performance across the board – CUDA, AI and RT.

Other new workstation additions include the Nvidia RTX Pro 5000 Blackwell (48 GB) (300W), RTX Pro 4500 Blackwell (32 GB) (200W), and RTX Pro 4000 Blackwell (24 GB) (140W), each with slightly more memory than their Ada Generation predecessors. All new Blackwell RTX Pro boards feature 4 x DisplayPort 2.1 connectors.

The new GPUs also support DLSS 4, the latest release of Nvidia’s real-time neural rendering technology, where 15 out of every 16 pixels can be generated by AI, which is much faster than rendering pixels in the traditional way. According to Nvidia, in arch viz software D5 Render, enabling DLSS 4 can lead to a fourfold increase in frame rates, leading to much smoother navigation of complex scenes.

For mobile workstations, Nvidia has launched a much broader range of laptop GPUs. This includes the Nvidia RTX Pro 5000 Blackwell (24 GB), Pro 4000 Blackwell (16 GB), Pro 3000 Blackwell (12 GB), Pro 2000 Blackwell (8 GB), Pro 1000 Blackwell (8 GB) and Pro 500 Blackwell (6 GB). The RTX Pro 5000 Blackwell stands out because it has 50% more memory than its predecessor, the Nvidia RTX 5000 Ada Generation, which should make a big difference in some workflows.

The new laptop chips will be found in mobile workstations, such as the HP ZBook Fury G1i, available in both a 16-inch and an all-new 18-inch form factor.

Nvidia has also launched the Nvidia RTX Pro 6000 Blackwell Server Edition, a successor to the Nvidia L40 data centre GPU, which along with the new ‘Pro’ branding now makes it much easier to understand Nvidia’s entire pro GPU lineup.

The data centre GPU can be combined with Nvidia vGPU software to power AI workloads across virtualised environments and deliver ‘high-performance virtual workstation instances’ to remote users.

■ www.nvidia.com

Why GPU architectures matter

Nvidia’s current range of desktop workstation GPUs now spans three generations of GPU architecture: Ampere, Ada, and Blackwell. The ‘Ampere’ lineup includes the RTX A500 and A1000, ‘Ada’ covers the RTX 2000 and RTX 4000 SFF Ada, while ‘Blackwell’ covers the RTX 4000, 4500, 5000, and 6000 Blackwell. So why does this matter?

Higher model numbers are faster, but the generation of the GPU becomes particularly important when it comes to AI. Newer architectures, like Ada and particularly Blackwell, offer significantly improved AI processing capabilities. This translates to faster results in AI tools such as Stable Diffusion, and support for newer versions of DLSS, which can dramatically boost real-time performance in compatible applications.
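As a rough sanity check on the DLSS 4 figures quoted above, an Amdahl-style back-of-envelope estimate shows how rendering only 1 in 16 pixels natively can plausibly translate to a roughly fourfold frame-rate gain once fixed per-frame costs are accounted for. All cost ratios below are illustrative assumptions, not measured values.

```python
# Back-of-envelope estimate of frame-generation speedup. The numbers are
# made up to show the shape of the arithmetic, not measured from any GPU.

def est_speedup(native_fraction, gen_cost_ratio, fixed_overhead):
    """Amdahl-style estimate of overall frame-rate gain.

    native_fraction : share of pixels rendered the traditional way (e.g. 1/16)
    gen_cost_ratio  : cost of AI-generating a pixel relative to rendering it
    fixed_overhead  : share of the original frame time unaffected by pixel
                      count (geometry processing, scene logic, etc.)
    """
    shading = 1.0 - fixed_overhead  # portion of frame time spent on pixels
    new_shading = shading * (native_fraction
                             + (1 - native_fraction) * gen_cost_ratio)
    return 1.0 / (fixed_overhead + new_shading)

# 1 in 16 pixels rendered natively, AI pixels assumed ~5% of native cost,
# 15% of frame time fixed: roughly a 4x gain, in line with the D5 figure.
print(round(est_speedup(1 / 16, 0.05, 0.15), 1))  # → 4.1
```

The estimate also shows why the gain is not a flat 16×: the AI-generated pixels are cheap but not free, and the fixed per-frame work caps the achievable speedup however few pixels are rendered natively.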

Dell rolls out Intel-based Dell Pro Max PCs

Dell has revealed more details about its workstation-class Dell Pro Max PC lineup, following a major rebrand earlier this year that marked the end of its long-standing Precision workstation brand.

For Dell Pro Max laptops (in other words, mobile workstations) there are three tiers – Premium, Plus and Base.

The Premium tier is said to balance performance and style in a ‘sleek, lightweight design’, and comes in two sizes – 14-inch (1.61kg) and 16-inch (2.25kg). There’s a choice of 45W ‘Arrow Lake’ Intel Core Ultra processors, Nvidia RTX Pro Blackwell GPUs, and up to 64 GB of LPDDR5x 7,467MT/s memory. Other features include a haptic touchpad, a zero-lattice keyboard, and an 8-megapixel IR camera.

The 14-inch and 16-inch Premium models have slightly different graphics and display options. The Dell Pro Max 14 Premium goes up to an Nvidia RTX Pro 2000 Blackwell (8 GB) GPU, which should hit the sweet spot for CAD, while its top-end display is a QHD+ (3,200 × 1,800) Tandem OLED with touch, low blue light, and VESA HDR TrueBlack 500 support.

The Dell Pro Max 16 Premium offers more powerful GPUs, up to the Nvidia RTX Pro 3500 Blackwell (12 GB), capable of entry-level viz, and a Tandem OLED 120Hz display with 100% DCI-P3 colour accuracy, touch support, and VESA HDR TrueBlack 1000. The laptop also offers up to 8TB of dual storage (RAID 0 or 1 capable).

The Dell Pro Max Plus tier, which is said to offer ‘massive scalability’ for desktop-like performance on the go, is available in 16-inch (2.25kg) and 18-inch (3.13kg) form factors. Both laptops offer more powerful processors – up to 55W ‘Arrow Lake’ Intel Core Ultra, and Nvidia RTX Pro 5000 Blackwell (24GB) for graphics, plus significantly more memory – up to 256 GB.

To keep the devices running cool and quiet there’s a new patented thermal design. And for single-cable docking and charging, there’s a 165W / 280W USB Type-C adapter with Extended Power Range (EPR) support.

The base tier, simply referred to as Dell Pro Max, comes in a portable, lightweight design, designed for entry-level design applications and AI inferencing. The 14-inch model is limited to ‘Arrow Lake’ Intel Core Ultra 7 processors and Nvidia RTX Pro 500 Blackwell graphics but is said to be up to 36% more powerful than its predecessor, the Dell Precision 3490. The 16-inch model offers the beefier Intel Core Ultra 9 and Nvidia RTX Pro 2000 Blackwell graphics and is said to be 33% faster than the Precision 3591.

Expect to see Dell Pro Max laptops with AMD Ryzen processors in July 2025.

Meanwhile, the first wave of Dell Pro Max desktop PCs are classified as ‘Base’ models and are built around ‘Arrow Lake’ Intel Core processors. They come in Tower, Slim and Micro form factors and offer a wide range of Nvidia RTX GPUs, including Ada Generation (now) and Blackwell Generation (July 2025).

For CPUs, the Dell Pro Max Tower and Dell Pro Max Slim come with a choice of 125W ‘Arrow Lake’ Intel Core processors, up to the Intel Core Ultra 9 285K (24 cores). Dell claims the Tower T2 is the world’s fastest tower for single-threaded application performance, made possible by Dell’s ‘unlimited turbo duration technology’, which is said to ensure top-tier performance in prolonged intensive tasks.

Meanwhile, the Dell Pro Max Micro is limited to 65W processors, up to the Intel Core Ultra 5 235 vPro, which means fewer cores and lower single core frequencies. However, these can run up to 85W thanks to a new thermal solution.

Graphics is a big differentiator between the form factors. The ‘Micro’ and ‘Slim’ are limited to the Nvidia RTX 4000 SFF Ada (20 GB), whereas in July 2025, the Dell Pro Max Tower T2 will go all the way up to the Nvidia RTX Pro 6000 Blackwell Workstation Edition (600W).

Expect to see Dell Pro Max desktops with AMD Threadripper processor options in July.

Finally, Dell has also launched a pair of Dell Pro Max AI developer PCs, powered by the Nvidia Grace Blackwell architecture and a pre-configured Nvidia AI software stack. The Dell Pro Max with GB10 is powered by the GB10 Grace Blackwell Superchip and comes with 128 GB of unified memory, while the Dell Pro Max with GB300 features the more powerful GB300 Grace Blackwell Ultra Desktop Superchip and comes with 784 GB of unified memory.

■ www.dell.com

Nvidia RTX Pro Blackwell GPUs at heart of new Z by HP workstations

To coincide with the launch of the new Nvidia RTX Pro Blackwell GPUs (see page 12), HP has introduced two new Z by HP workstations: the HP Z2 Tower G1i desktop and the HP ZBook Fury G1i mobile workstation. Both models are powered by Intel processors (signified by the ‘i’ suffix) and support a variety of Nvidia GPUs.

The HP Z2 Tower G1i is billed as the world’s most powerful entry workstation, likely because it can accommodate Nvidia’s new flagship 600W Nvidia RTX Pro 6000 Blackwell Workstation Edition GPU with 96 GB of memory.

Other specs include a 24-core Intel Core Ultra 9 processor, including K-Series models that support 250W sustained TDP, up to 256 GB of DDR5 5600 MT/s memory, and up to 36TB of total storage (12 TB with 3 x NVMe and 24 TB with 2 x HDD).

The HP Z2 Tower G1i features a redesigned chassis with ‘phase change cooling’ and ‘lattice thermal venting’. With an angular 4U form factor, the workstation is also designed for rack environments and can be fitted with an HP Remote System Controller for remote workstation fleet management.

HP is also rolling out three more Intel-based desktop workstations – the HP Z1 Tower G1i, HP Z2 Mini G1i, and HP Z2 SFF G1i. With an Intel Core Ultra 9 CPU and up to Nvidia RTX 4000 SFF Ada Generation GPU, the HP Z2 Mini G1i is an Intel-based alternative to the HP Z2 Mini G1a, which launched earlier this year sporting a powerful AMD Ryzen processor with integrated graphics.

Boxx workstations get Blackwell upgrade

Boxx is rolling out the new Nvidia RTX Pro 6000 Blackwell Workstation Edition and Nvidia RTX Pro 6000 Blackwell Max-Q Workstation Edition GPUs inside multiple workstation form factors. This includes Boxx Apexx desktop workstations, Boxx Flexx high density data centre workstations, and Boxx Raxx rackmount workstations.

Boxx Apexx desktop workstations are available as a small tower or mid-tower, and offer a choice of liquid cooled multicore Intel Core Ultra, AMD Ryzen 9000, AMD Ryzen Threadripper, AMD Ryzen Threadripper Pro, and Intel Xeon W processors.

Boxx Flexx is capable of simultaneously supporting up to ten high density data centre workstations in a standard five rack unit enclosure.

Meanwhile, Boxx Raxx are purpose built rackmount workstations available in 1U or 3U rackmounts.

■ www.boxx.com

On the mobile workstation front, the HP ZBook Fury G1i, available in both a 16-inch and a brand new 18-inch form factor, is said to offer desktop-class performance in a laptop. It boasts up to an Nvidia RTX Pro 5000 Blackwell laptop GPU, up to an ‘Arrow Lake’ Intel Core Ultra 9 285HX CPU, up to 192 GB of DDR5-5600 memory, and up to 16 TB of NVMe storage.

The HP ZBook Fury G1i 18 is billed as the world’s most powerful 18-inch mobile workstation and includes an ‘industry first’ three-fan design.

HP is also in the process of streamlining its HP ZBook product range, dropping the ‘Firefly’ and ‘Power’ brands in favour of ‘Fury’ and ‘Ultra’. HP is also introducing a numbering system that signifies increasing device features and overall performance. HP says the numbers 8 and 10 (represented by “X”) will show this progression.

Also coming soon are the HP ZBook 8 G1a (14-inch) with ‘Next Gen AMD Processors, HP ZBook G1i (14-inch and 16-inch) with up to Intel Core Ultra 9 CPU and up to Nvidia RTX 500 Ada Laptop GPU, and HP ZBook X G1i (16inch) with up to Intel Core Ultra 9 CPU and up to Nvidia RTX Pro 2000 Blackwell Laptop GPU. All of these new laptops look best suited to CAD and BIMcentric workflows and come with up to 64 GB DDR5-5600 memory.

■ www.hp.com/z

Samsung Pro SSD

Samsung has launched the 9100 Pro Series, a ‘lightning-fast’ family of PCIe 5.0 NVMe SSDs available in 1TB, 2TB, 4TB or 8TB capacities.

According to the company, the drive can achieve sequential read/write speeds of up to 14,800/13,400 MB/s – moving data twice as fast as the previous generation – and random read/write speeds of up to 2,200K/2,600K IOPS. Prices start at $199.

■ www.samsung.com

AI NEWS BRIEFS

Augmenta.ai

Augmenta, an ‘autonomous design platform for the built environment’, has secured an additional $10 million in funding, which will help it expand the capabilities of its electrical system design (ESD) agent, and accelerate the development of its mechanical and plumbing agents

■ www.augmenta.ai

Arch-e assistant

Arch-e from Twinmaster is a new multi-agent copilot designed specifically for the AEC sector. Users interact intuitively with Arch-e which can generate simulations, analytics, and 3D design options in ‘seconds or minutes’

■ www.thetwinmaster.com

Design in context

Architechtures, a ‘generative AI-powered building design platform’, now includes OpenStreetMap (OSM) integration. This new capability streamlines the placement of buildings within real-world sites by automatically adjusting their positioning to match existing topography

■ www.architechtures.com

Small packages

Zenerate 3.0, the latest release of the AI-powered feasibility design software for architects, developers and contractors, features an enhanced AI engine that can generate optimised building layouts for small properties – irregular lots under 3,500 m2

■ www.zenerate.ai

Report automation

Kirkor Architects is using the Cogram AI platform for site reporting and to automate minutes for virtual and on-site meetings. Cogram’s AI integrates with Outlook to automatically organise emails, while the job site software combines voice, images, and geolocation data to draft site or inspection reports automatically

■ www.cogram.com

AI CONTENT HUB

For the latest news, features, interviews and opinions relating to Artificial Intelligence (AI) in AEC, check out our new AI hub

■ www.aecmag.com/AI

Chaos acquires AI software firm EvolveLab

Chaos, a specialist in arch viz software, has acquired EvolveLab, a developer of AI tools for streamlining visualisation, generative design, documentation and interoperability for AEC professionals.

According to Chaos, the acquisition will reinforce its design-to-visualisation workflows, while expanding to include tools for BIM automation, AI-driven ideation and computational design.

Founded in 2015, EvolveLab was the first firm to integrate generative AI technology into architectural modelling software, demonstrating the potential of mixing imaginative prompts with 3D geometry. Through its flagship software Veras – which AEC Magazine reviewed back in 2023 – EvolveLab connected this capability to BIM tools like SketchUp, Revit, Vectorworks, and others.

EvolveLab and Chaos tools can be used together to accelerate both design and reviews. In the schematic design phase, ideas can be rapidly generated in Veras before committing the design to BIM, where Enscape’s real-time visualisation capabilities push the project further.

Chaos and the EvolveLab teams are exploring ways to integrate their products and accelerate their AI roadmaps. EvolveLab products will remain available to customers. These include Glyph, for automating and standardising documentation tasks; Morphis, for generating designs in real time; and Helix, for interoperability between BIM tools.

■ www.evolvelab.io ■ www.chaos.com

AI BIM assistant for Revit launches

Pele AI is a new AI BIM Assistant for Revit designed to simplify, automate and streamline manual tasks such as tagging elements, generating views, organising sheets or graphically modifying elements.

The Revit add-in understands plain language instructions, bypassing the need for technical scripts or complex syntax. Example prompts include: “Create a building with 200m2 area as a rectangle, six storeys high”; “open all the floorplans with a scale of 1:20”; “select any floors thinner than 400mm”; “make dimensions for the rooms in this plan view”; and “highlight clashes between ducts and beams in this 3D view”.

Pele is charged per prompt (e.g. 500 prompts for $40) and a free trial gives users 20 prompts.

■ www.pele-assistant.online

AI: Information Integrity

As AI reshapes how we engage with information, Emma Hooper, head of information management strategy at RLB Digital, explores how we can refine large language models to improve accuracy, reduce bias, and uphold data integrity — without losing the essential human skill of critical thinking.

In a world where AI is becoming an increasingly integral part of our everyday lives, the potential benefits are immense. However, as someone with a background in technology — having spent my career producing, managing or thinking about information — I continue to contemplate how AI will alter our relationship with information and how the integrity and quality of data will be managed.

Understanding LLMs

AI is a broad field focused on simulating human intelligence, enabling machines to learn from examples and apply this learning to new situations. As we delve deeper into its sub-types, we become more detached from the inner workings of these models, and the statistical patterns they use become increasingly complex. This is particularly relevant with large language models (LLMs), which generate new content based on training data and user instructions (prompts).

A large language model (LLM) uses a transformer model, a specific type of neural network. These models learn patterns and connections between words and phrases, so the more examples they are fed, the more accurate they become. Consequently, they require vast amounts of data and significant computational power, which puts considerable pressure on the environment. These models power tools such as ChatGPT, Gemini, and Claude.

The case of DeepSeek-R1

DeepSeek-R1, which has recently been in the news, demonstrates how constraints can drive innovation through good old-fashioned problem-solving. This open-source LLM uses rule-based reinforcement learning, making it cheaper and less compute-intensive to train than more established models. However, as an LLM, it still faces limitations in output quality.

However, when it comes to accuracy, LLMs are statistical models that operate based on probabilities. Therefore, their responses are limited to what they’ve been trained on. They perform well when operating within their dataset, but if there are gaps, or they go out of scope, inaccuracies or hallucinations can occur.

Inaccurate information is problematic when reliability is crucial, but trust in quality isn’t the only issue. General LLMs are trained on internet content, but much domain-specific knowledge isn’t captured online or is behind downloads/paywalls, so we’re missing out on a significant chunk of knowledge.

Training LLMs: the built environment

Training LLMs is resource-intensive and requires vast amounts of data. However, data sharing in the built environment is limited, and ownership is often debated. This raises several questions in my mind: Where does the training data come from? Do trainers have permission to use it? How can organisations ensure their models’ outputs are interoperable? Are SMEs disadvantaged due to limited data access? How can we reduce bias from proprietary terminology and data structures? Will the vast variation hinder the ability to spot correct patterns?

With my information manager hat on: without proper application and understanding, it’s not just rubbish in, rubbish out – it’s rubbish out on a huge scale, all artificial, and it completely overwhelms us.

How do we improve the use of LLMs?

There are techniques such as Retrieval-Augmented Generation (RAG), which use vector databases to retrieve relevant information from a specific knowledge base. This information is used within the LLM prompt to provide outputs that are much more relevant and up to date. Having more control over the knowledge base ensures the sources are known and reliable.
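As a rough illustration of the retrieval step described above – a toy sketch only, with a made-up three-document knowledge base and simple word-count vectors standing in for a real embedding model and vector database – the most similar passages are found by cosine similarity and prepended to the LLM prompt:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A small, curated knowledge base - the point of RAG is that
# the sources are known and trusted, so provenance is clear
knowledge_base = [
    "BS EN ISO 19650 defines information management using BIM",
    "Fire door inspections must be recorded every six months",
    "COBie structures asset data for handover to operators",
]

def retrieve(query: str, k: int = 1) -> list:
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved passages are injected into the prompt, so the LLM
    # answers from supplied sources rather than its training data alone
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Which standard covers information management with BIM?"))
```

A production system would swap the word-count vectors for learned embeddings and the Python list for a vector database, but the retrieve-then-prompt shape is the same.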

This leads to an improvement, but the machine still doesn’t fully understand what it’s being asked. By introducing more context and meaning, we might achieve better outputs. This is where returning to information science and using knowledge graphs can help.

A knowledge graph is a collection of interlinked descriptions of things or concepts. It uses a graph-structured data model within a database to create connections – a web of facts. These graphs link many ideas into a cohesive whole, allowing computers to understand real-world relationships much more quickly. They are underpinned by ontologies, which provide a domain-focused framework to give formal meaning. This meaning, or semantics, is key. The ontology organises information by defining relationships and concepts to help with reasoning and inference.
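At its simplest, the web of facts is a set of subject–predicate–object triples. The sketch below (illustrative only, with invented building entities) shows how explicit relationships let a program traverse the graph and infer, via the ontology’s ‘is a’ links, facts that were never stated directly:

```python
# Each fact is a (subject, predicate, object) triple - the
# graph-structured data model behind knowledge graphs
triples = [
    ("Door-07", "is_a", "FireDoor"),
    ("FireDoor", "is_a", "Door"),
    ("Door-07", "located_in", "Level-02"),
    ("Level-02", "part_of", "Building-A"),
]

def objects(subject: str, predicate: str) -> list:
    # All objects linked from a subject by a given predicate
    return [o for s, p, o in triples if s == subject and p == predicate]

def types_of(entity: str) -> set:
    # Simple inference: follow 'is_a' links transitively, so we can
    # reason that Door-07 is not just a FireDoor but also a Door,
    # even though that fact is never stated as a triple
    found, frontier = set(), [entity]
    while frontier:
        node = frontier.pop()
        for parent in objects(node, "is_a"):
            if parent not in found:
                found.add(parent)
                frontier.append(parent)
    return found

print(types_of("Door-07"))  # Door-07 is both a FireDoor and a Door
```

Real knowledge graphs use standards such as RDF and richer ontology languages, but the principle is the same: defined relationships make reasoning and inference mechanical.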

Knowledge graphs enhance the RAG process by providing structured information with defined relationships, creating more context-enriched prompts. Organisations across various industries are exploring how to integrate knowledge graphs into their enterprise data strategies – so much so that they have even made it onto the Gartner Hype Cycle, on the ‘slope of enlightenment’.

The need for critical thinking

From an industry perspective, semantics is not just where the magic lies for AI; it is also crucial for sorting out the information chaos in the industry. The tools discussed can improve LLMs, but the results still depend on a backbone of good information management. This includes having strategies in place to ensure information meets the needs of its original purpose and implementing strong assurance processes to provide governance.

Therefore, before we move too far ahead, I believe it’s crucial for the industry to return to the theory and roots of information science. By understanding this, we can lay strong foundations that all stakeholders can work from, providing a common starting point and a sound base to meet AI halfway and derive the most value from it.

Above all, it’s important not to lose sight of the fact that this begins and ends with people – and one of the greatest things we can ever do is to think critically and keep questioning!

Studio Tim Fu: AI-driven design

The pioneering London practice is reimagining architectural workflows through AI, blending human creativity with machine intelligence to accelerate and elevate design, writes Greg Corke

It’s rare to see an architectural practice align itself so openly with a specific technology. But Studio Tim Fu is breaking that mould. Built from the ground up as an AI-first practice, the London-based studio is unapologetically committed to exploring how generative AI can reshape architecture—from the earliest concepts to fully constructable designs.

“We want to explore in depth how we can use the technology of generative AI, of neural networks, deep learning, and large language models as well, in an effort to facilitate an accelerated way of designing and building, but also thinking,” explains founder Tim Fu.

Studio Tim Fu’s current methodology uses AI early in the design process to boost creativity, accelerate visualisation, and improve client communication — all while maintaining technical feasibility.

The technological journey began during Fu’s time at Zaha Hadid Architects, where he explored the potential of computational design to rationalise complex geometries. “We were thinking about the complexity of design and how we can bring that to fruition through computational processes and technologies,” he recalls.

This early exploration laid the groundwork for the Studio’s current AI-driven approach, which involves a sophisticated iterative process that blends human intention with machine learning capabilities. Initial AI-generated concepts are refined through human guidance, then reinterpreted by diffusion AI technology. This creates a dynamic feedback loop for rapid conceptualisation, where hundreds of design expressions can be explored in a single day.

Fu’s technical approach employs a complex system of AI tools, from common text-to-image generators such as Midjourney, Dall-E and Stable Diffusion to custom-trained models. Using these tools at the start of a project presents a ‘gradient of possibilities’, says Fu, both using AI’s creative agency and incorporating human intentions. The team uses text prompts to spark fresh ideas, producing ‘mood boards’ of synthetic visuals, as well as hand sketches to guide the AI.

“We use a mesh of back and forth with different design tools,” he explains. Ideas are generated and refined before they are translated into 3D geometry using modelling tools like Rhino.

“Once we figure out the architectural design and planning that solves real-life situations and constraints and context, we bring those back into the AI visualising models, to visualise and continue to iterate over our existing 3D models,” he says. This enables the design team to see, for example, different possible expressions of window details and geometries. It’s a continuous loop—a creative dialogue between human intention and machine imagination.

Fu believes the results speak for themselves: in just one week, his team can deliver high-quality, client-ready concepts that far exceed what’s possible using conventional methods within the same time frame.

This level of efficiency brings new economic opportunities. Studio Tim Fu can charge clients less than traditional architects while boosting its earnings, all within conventional pricing structures. “We can lower the price because we can, and we can up the value, so it’s a win for the client and it’s good for us,” he says.

Lake Bled Estate masterplan in Slovenia. Credit: Studio Tim Fu

AI meets heritage

The Studio’s work on the Lake Bled Estate masterplan in Slovenia, its first fully AI-driven architectural project, serves as a landmark demonstration of these technical capabilities.

Spanning an expansive 22,000 square metre site, the project comprises six ultra-luxury villas set alongside the historic Vila Epos, a protected cultural monument of the highest national significance.

To produce a design that respects its historical context while creating an elevated luxury space, Studio Tim Fu synthesises heritage data with AI.

The Studio captured the local architectural vernacular by analysing material characteristics and extracting geometric parameters to comply with strict heritage regulations, including roof layout, height, and slope.

“This is the first time we are showing AI in its most contextually reflective way,” says Fu, “Something that is contrary to all the AI experiments that have come out since the dawn of diffusion AI processes.

“We want to showcase that this whole diffusion process can be completely controlled under our belt and be used for specifically addressing those issues [of respecting historical context].”

Delivering the details

Studio Tim Fu currently applies AI primarily at the concept-to-detail design stage. However, Fu believes we’re at a pivotal moment where AI is poised to take on more technical aspects of architectural design—particularly in areas like BIM modelling and dataset management.

“Because these are technical requirements, technical needs, and technical goals, it’s something that can be quantified,” he explains. “If it’s maximising certain functionality, while minimising the use of material and budget, these are numerical data that can be optimised. We’re just beginning that process of developing artificial general intelligence.”

But where does this leave humans? While Fu acknowledges that we must humbly recognise our limitations, he believes that human specialists—architects, designers, and fabricators—will remain essential, each working with AI within their own domain. At the same time, he sees enormous potential for AI to unify these fields.

“What AI can do is bring all of the human processes into a cohesive, streamlined decision-making, design-to-production process, because that’s what AI is good at. It’s good at cohering large data sets, and it’s good at addressing macro-scale and micro-scale values at the same time.”

■ www.timfu.com

How AI is Transforming AEC Design: A Conversation with Stefan Kaufmann, ALLPLAN

Artificial intelligence (AI) has made waves across nearly every industry, and the AEC sector is no exception. As AI tools become more powerful, accessible, and reliable, they’re reshaping how professionals plan, design, and build. To explore this transformation, Stefan Kaufmann, Product Manager for BIM Strategy & New Technologies at ALLPLAN, speaks about AI’s role in AEC, its practical applications, and what the future may hold.

How is AI currently being used in the AEC industry, and what are some of the most impactful applications – both today and on the horizon?

Stefan Kaufmann: AI in AEC has evolved along two main tracks: broad, general-purpose AI like chatbots and image generators, and very focused tools that solve specific tasks – like classifying point clouds or monitoring the energy consumption of buildings. It’s been most effective where access to broad knowledge is needed or repetitive, low-value tasks can be replaced, freeing up professionals for more strategic, value-adding work.

On a practical level, AI is currently being used in a wide field of applications: text translations, research on building technology and requirements, and project document management. There are also “Any2BIM services” which convert data from drawings or point clouds into BIM models and are already reducing the manual burden on design teams in the project preparation phase.

Generative pretrained AIs today extract structured information from unstructured sources like PDF plans and use this to create knowledge graphs and link project data intelligently. We are continuously assessing multi-modal AI models to assist our customers in many areas of complex construction projects – processing huge sets of 2D information can really be optimised using ground-breaking technology such as this.

Most of the problems AEC specialists are addressing are still too complex to be resolved in one step with AI. Therefore, the currently evolving reasoning models are a real game changer for us. Reasoning models analyse complex problems deeply, break them down into structured, manageable components, and provide multi-step solutions. They can be used to work with huge data sets from multiple sources, like BIM files and spreadsheets, to deliver advanced analysis, real-time decision-making, and optimised workflows.

What about generative AI – how is it helping in the early stages of design?

Generative AI is proving useful for sparking early design ideas. Tools like NEMETSCHEK’s AI Visualizer create images that let architects explore architectural styles and materials in seconds. The latest image generation models allow more precise restyling – for example, adjusting elements like façade materials interactively or adding people to an image.

We’re also seeing early tools that convert 2D inputs into 3D models. IFC prompting, for example, can create structured BIM data from a simple description – but still with limited architectural quality. AI can also assist with room book creation and understanding norms and standards, helping shape design quality at an earlier stage.

How is AI influencing parametric and computational design?

The future lies in the convergence of statistical methods like machine learning and logical methods like parametric modelling. There are two promising directions: first, generating agents from the parametric models that are adapted by the AI to the context in the building model; second, enabling AI to generate new parametric models from legacy or non-parametric CAD models. This hybrid approach opens exciting opportunities. However, I don’t believe we’ll see a general-purpose AI capable of handling the full design and engineering process autonomously, delivering design at a human quality level, within the next three years.

How is AI being used in conjunction with BIM?

AI is increasingly becoming a core part of BIM workflows. Conversational tools allow users to query BIM databases directly – for example, asking a model to extract quantities or locate specific elements. AI also helps map internal standards to project-specific requirements, a process that’s currently manual and error-prone. Model enrichment is another growing area, along with efforts to automate drawing generation, which paradoxically remains one of the most time-consuming tasks in BIM projects today.

AI is often discussed in the context of sustainability. How can it support greener construction practices?

Delivering sustainable design is rather more time-consuming than complex. AI can streamline many of these tasks – like classifying building elements into material systems or proposing suitable material and construction solutions. At the Georg Nemetschek Institute, we’re also researching how AI can support the AEC industry to become a circular economy. AI’s ability to process sensor data, for example, may help assess the structural health of concrete elements without destructive testing – which is especially relevant for monitoring corrosion in steel reinforcement.

What ethical and regulatory concerns should the industry be aware of?

Data privacy and intellectual property are key concerns when it comes to AI. Regulations – particularly in Europe – help ensure that customer data is handled responsibly. At ALLPLAN, for example, we’ve committed to not using customer data to train AI models, and we protect it under strict European law when using our AI services. Trust and transparency are essential. As AI becomes more embedded in daily workflows, clear ethical standards will be critical for building long-term confidence in these technologies.

What challenges do AEC firms face when trying to implement AI?

Prioritisation is a major challenge – what seems innovative today could be replaced by an integrated tool from a hyperscaler tomorrow. That’s why it’s crucial to focus on domain-specific solutions. The other key barrier is access to clean, structured data. Without it, even the best AI tools won’t deliver meaningful results. Data quality is everything.

What advice would you give to AEC firms just starting to explore AI?

The most valuable step any AEC firm can take right now is to organise and consolidate its data. AI tools are evolving rapidly, but their effectiveness depends entirely on the quality and structure of the data they’re applied to. Make sure your project information is accessible, consistent, and under your control. With this foundation in place, you’ll be ready to take advantage of the tools and innovations that continue to emerge.

1 The AI Visualizer in ALLPLAN 2025 generates realistic images in seconds, allowing architects to experiment with styles, materials, and interior elements effortlessly. Copyright: ALLPLAN

2 The AI Visualizer enhances creativity from the very first sketch—helping architects explore different architectural styles early in the design process and visualize furniture and materials in later stages. Copyright: ALLPLAN

Finally, what excites you most about AI in the AEC industry?

We are currently experiencing the evolution of the most significant and far-reaching technological development in human history in fast motion. Every week there are breakthroughs in AI that would have seemed impossible just a few months ago. Problems that have persisted for decades are now being solved with AI. The real opportunity lies in making AI truly usable – transforming how we design and build, not on PowerPoint slides, but in CAX practice. The future of AEC is being shaped today.

■ www.allplan.com

Motif to take on Revit

The race to challenge Autodesk Revit with next-generation BIM tools has intensified with the launch of Motif, a startup that emerged from stealth at the beginning of the year. Motif joins other startups, including Arcol, Qonic, and Snaptrude, which are already on steady development paths to tackle collaborative BIM. However, like any newcomer competing with a well-established incumbent, it will take years to achieve full feature parity. This is even the case for Autodesk’s next-generation cloud-based AEC technology, Forma.

What all these new tools can do quickly, is bring new ideas and capabilities into existing Revit (RVT) AEC workflows. This year, we’re beginning to see this happening across the developer community, a topic that will be discussed in great detail at our NXT BLD and NXT DEV conferences on 11 and 12 June 2025 at the Queen Elizabeth II Centre in London.

Though a late entrant to the market, Motif stands out. It’s led by Amar Hanspal and Brian Mathews, two former Autodesk executives who played pivotal roles in shaping Autodesk’s product development portfolio.

Hanspal was Autodesk CPO and, for a while, joint CEO. Mathews was Autodesk VP of platform engineering / Autodesk Labs and led the industry’s charge into adopting reality capture. They know where the bodies are buried, have decades of experience in software ideation and running large teams, and have immediate global networks with leading design IT directors. Their proven track record also makes it easier for them to raise capital and be taken as serious contenders from the get-go.

Motif aims to provide holistic solutions to the fractured AEC industry. Led by former Autodesk co-CEO Amar Hanspal and backed by a whopping $46m in funding, the startup stands out in a crowded field. Martyn Day explores its potential impact

The low-down

In late January, the company had its official launch alongside key VC investors. Motif secured $46 million in seed and Series A funding. The Series A round was led by CapitalG, Alphabet’s independent growth fund, while the seed round was led by Redpoint Ventures. Pre-seed venture firm Baukunst also participated in both rounds. This makes Motif the second largest funded start-up in the ‘BIM’ space – the biggest being HighArc, a cloud-based expert system for US homebuilders, at $80 million.

Motif has been in stealth for almost two years, operating under the name AmBr (we are guessing for Amar and Brian). Major global architecture firms have been involved in shaping the development of the software, even before any code was written, all under strict NDAs (non-disclosure agreements).

The firms working with Hanspal’s team deliver the most geometrically complex and large projects. The core idea is that by tackling the needs of signature architectural practices, the software should deliver more than enough capability for those who focus on more traditional, low-risk designs. There is considerable appetite to replace the existing industry-standard software tools. This hunger has been expressed in multiple ‘Open Letters to Autodesk’, based on a wish for more capable BIM tools – a zeitgeist which Motif is looking to harness, as BIM eventually becomes a replacement market.

The challenge

Motif’s mission is to modernise the AEC software industry, which it sees as being dominated by ‘outdated 20th-century technology’. Motif aims to create a next-generation platform for building design, integrating 3D, cloud, and machine learning technologies. Challenges such as climate resilience, rapid urbanisation modelling, and working with globally distributed teams will be addressed, and the company’s solutions will integrate smart building technology.

Motif will fuse 3D, cloud, and AI with support for open data standards within a real-time collaborative platform, featuring deep automation. The unified database will be granular, enabling sharing at the element level. This, in many ways, follows the developments of other BIM start-ups such as Snaptrude and Arcol, which pitch themselves as the ‘Figma’ for BIM. In fact, Hanspal was an early investor in Arcol, alongside Procore’s Tooey Courtemanche.

In late March, Motif released a V1 product (see box out on page 24), but in its current guise it’s far from being the Revit challenger that it will eventually grow to be. To get an idea of how the software might evolve, we sat down with Hanspal to discuss the company, the technology and what the BIM industry needs.

A quantum of history

Before we dive into the interview, let’s have a quick look at how we got here. At Autodesk University 2016, while serving as Autodesk’s joint CEO, Hanspal introduced his bold vision for the future of BIM. Called Project Quantum, the aim was to create a new platform that would move BIM workflows to the cloud, providing a common data environment (CDE) for collaborative working.

Hanspal aimed to address problems which were endemic in the industry, arising from the federated nature of AEC processes and how software, up to that point, doubled down on this problem by storing data in unconnected silos.

Instead of focusing on rewriting or regenerating Revit as a desktop application, the vision was to create a cloud-based environment to enable different professionals to work on the same project data, but with different views and tools, all connected through the Quantum platform.

Quantum would feature connecting workspaces, breaking down the monolithic structure of typical AEC solutions. This would allow data and logic to be accessible anywhere on the network and available on demand, in the appropriate application for a given task. These workspaces were to be based on professional definitions, providing architects, structural engineers, MEP professionals, fabricators, and contractors with access to the specific tools they need.

Hanspal recognised that interoperability was a big problem, and any new solution needed to facilitate interoperability between different software systems, acting as a broker, moving data between different data silos. One of the key aspects of Quantum was that the data would be granular, so instead of sharing entire models, Quantum could transport just the components required. This would mean users receive only the information pertinent to their task, without the “noise” of unnecessary data.

Hanspal left Autodesk in 2017. Meanwhile, the concept of Quantum lived on and development teams continued exploratory work under Jim Awe, Autodesk’s chief software architect. Months turned into years and by 2019, Project Quantum had been rebranded Project Plasma, as the underlying technology was seen as a much broader company-wide effort to build a cloud-based, data-centric approach to design data. Ultimately, Autodesk acquired Spacemaker in 2020 and assigned its team to develop the technology into Autodesk Forma, which launched in 2023—more than six years after Hanspal first introduced the Quantum concept. However, Forma is still at the conceptual stage, with Revit continuing to be the desktop BIM workflow, with all its underlying issues.

In many respects, Hanspal predicted the future of next generation BIM in his 2016 Autodesk University address. Up until that point, Autodesk had wrestled for years with cloud-based design tools, its first test being the Mechanical CAD (MCAD) software Autodesk Fusion, which was demoed in 2009 and shipped in 2013. Cloud-based design applications were a tad ahead of the web standards and infrastructure that have since helped products like Figma make an impact.

In conversation

On leaving Autodesk in 2017, after his 15+ year stint, Hanspal thought long and hard about what to do next. In various conversations over the years, he admitted that the most obvious software demand was for a new modern-coded BIM tool, as he had proposed in some detail with Quantum. However, Hanspal was mindful that it might be seen as sour grapes. Plus, developing a true Revit competitor came with a steep price tag—he estimated it would take over $200 million. Instead, Hanspal opted to start Bright Machines, a company which

During Covid, AEC Magazine was talking with some very early start-ups, and pretty much all had been in contact with Hanspal for advice and/or stewardship.

Martyn Day: Unlike Revit, you don’t have a single-platform approach. Why’s that?

Amar Hanspal: In contrast to the monolithic approach of applications like Revit, we aim to target specific issues and workflows. There will be common elements. With the cloud, you build a common back end, but the idea is that you solve specific problems along the way. You only need one user management system, one payment system, collaboration etc. There are some technology layers that are common. But the idea is about solving end-user problems like design review, modelling, editing, QA, QC.

This isn’t a secret! I talked about this in the Quantum thing seven years ago! I always say ideas are not unique. Execution is. When it comes down to it, can anybody else do this? Of course they can. Will they do this? Of course not!

Martyn Day: Data storage and flow is a core differentiator for BIM 2.0. Will your system use granular data, and how will you bypass the limitations of browser-based applications? You talk about ‘open’, which is very in vogue. Does that mean that your core database is Industry Foundation Classes (IFC), or is there a proprietary database?

Amar Hanspal: There are three things we have to figure out. One is how to run in a browser, where you have limited memory, so you can’t just send everything. You’ve got to get really clever about how to figure out what [data] people receive – and there’s all sorts of modern ways of doing that.


Second is you have to be open from the get-go. However we store the data, anybody should be able to access it, from day one.

Eight months later, the Autodesk board elected fellow joint CEO Andrew Anagnost as Autodesk CEO, and Hanspal

delivers scalable automation through robot modules, with control software that uses computer vision and machine learning to manufacture small goods, such as electronics.

After almost four years at Bright Machines, in 2021, Hanspal exited and returned to the AEC problem, which, in the meantime, had not made any progress.

And then the third thing is, you can’t assume that you have all the data, so you have to be able to link to other sources and integrate where it makes sense. If it’s a Revit object, you should be able to handle it but if it’s not, you should be able to link to it.

You have to do some things for performance – it’s not proprietary, but you’re always doing something to speed up your user experience. The one path is, here’s your client, then you have to get data fast to them, and you have to do that in a very clever way, all while you’re

encrypting and decrypting it. That’s just for user experience and performance. But from a customer perspective, anytime you want to interrogate the data and request all the objects in the database, there is a very standard web API that you can use, and it’s always available.

Of course we’ll support IFC, just like we support RVT and all these formats. But that’s not our core data format. Our core data format is a lot looser, because we realised that in this industry it’s not just geometric objects you’re dealing with – you must deal with materials and all sorts of data types. In some ways, you must try and make it more like the internet. Brian [Mathews] would explain that the internet is this kind of weirdly structured yet linked data, all at the same time. And I think that’s what we are figuring out how to do well.

Martyn Day: We have seen all sorts of applications now being developed for the web. Some are thick clients with a 20 GB download – basically a desktop application running in a web browser, utilising all the local compute, with the data on the cloud. Some are completely on the cloud with little resource requirement on the

local machine. Autodesk did a lot of experimentation to try and work out the best balance. What are you doing?

Amar Hanspal: It’s a bit of a moving edge right now. I would say that you want to begin with first principles. You want to get the client as thin as possible, so that, if you can, you avoid the big download at all costs. That can be through trickery; it’s also where WebGPU and all these new things that are showing up are helping. You can use browsers for more and more [things] every day, and that will help deliver applications. But I do think that there are situations in which the browser is going to get overwhelmed, in which case you’re going to require people to add something. Like, when the objects get really large and very graphical, sometimes you can deliver a better user experience if you give somebody a thicker client. I think that’s some way off for us to deal with, but our first principle is to leverage the browser as much as possible and not require users to download something to use our application. I think it may become, ‘you hit this wall for this particular capability’, then you’ll need to add something local.

Delving deeper into Motif V1

With its stated aim of developing a next generation BIM tool to rival Revit, Motif’s initial offering was bound to be a small subset of what will be the finished product. In AEC Magazine, we have explained this many times before, but it’s worth saying again – the development of a Revit competitor is a marathon, and all the firms that are out of stealth and involved in this endeavour (Qonic, Snaptrude, Arcol and Motif) will be offering products with limited capabilities before we get to detailed authoring of models.

Motif V1 is a cloud-based tool which aims to address a range of pain points in architecture, engineering and construction workflows, particularly in the design presentation and review phases. From what we have seen of this initial offering, it’s clear that Motif has identified several features which you would typically find across a number of established applications – Miro, Revizto, Bluebeam, Speckle, Omniverse and many CDEs (Common Data Environments). This means that there’s no obvious single application that Motif really replaces, as it has a broad remit. Talking to CEO Amar Hanspal, the closest application – and the one Motif sees as a natural replacement target – is Miro (miro.com), which became popular during Covid for collaborative working. As Motif is browser-based, it works on desktop, laptop or tablet.

Ideation assembly

The initial focus of the release is to enhance design review workflows by offering a more connected and 3D-enabled alternative to Miro. Users can collate 2D drawings, PDFs, SVGs and 3D models from a variety of different sources, to bring them into the Motif space for the creation of presentations, markup and collaboration.

The primary sweet spot is collating project images and drawings into concept presentations, using an ‘infinite canvas’ which can be shared with team members or clients in real time. Models can be imported from multiple sources and views snapshotted; drawings can be added from Revit, along with material swatches for mood boards, images of analysis results – pretty much anything. These can be arranged collaboratively and simultaneously by multiple users, and the software neatly assists with automatic grid layout. There’s also the ability to add comments for team members to see and react to.
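For the technically minded, the ‘auto assistance’ for grid layout can be pictured with a simple sketch. This is purely illustrative – Motif’s actual layout algorithm is not public – but a near-square grid placement for collated canvas items might look something like this in Python:

```python
import math

# Illustrative guess at an 'auto grid' assist for arranging collated
# items (images, drawings, snapshots) on an infinite canvas.
# Not Motif's actual algorithm - names and defaults are assumptions.

def auto_grid(n_items, cell_w=320, cell_h=240, gutter=16):
    """Return (x, y) canvas positions for n_items in a near-square grid."""
    cols = max(1, math.ceil(math.sqrt(n_items)))
    positions = []
    for i in range(n_items):
        row, col = divmod(i, cols)
        positions.append((col * (cell_w + gutter), row * (cell_h + gutter)))
    return positions

auto_grid(5)   # five items arranged over three columns and two rows
```

A real implementation would also account for mixed item sizes, snapping and user overrides, but the essence of such an assist is computing tidy positions so users don’t have to.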

Motif recognises that a data-centric approach is essential in next generation tools. With this aim in mind, Motif borrows some ideas from Speckle (www.speckle.systems), offering plugins for a variety of commonly-used design tools, such as Rhino and Revit. These plugins offer granular,

Martyn Day: You have folks that have worked on Revit in your team. Will this help your RVT ability from the get-go?

Amar Hanspal: We’ve not reverse engineered the file format, but, you know, we do know how this works. We’re staying good citizens and will play nice. We’re not doing any hacks, we’re going to integrate very cleanly with whatever – Revit, Rhino, other things that people use – in a very clean way. We’re doing it in an intelligent way, to understand how these things are constructed.

Martyn Day: The big issue is that Revit is designed predominantly to model in order to produce drawings. Many firms are fed up with documentation, and with modelling to produce low level-of-detail output. Are you looking to go beyond the BIM 1.0 paradigm?

Amar Hanspal: Yes, fabrication is very critical for modular construction. Fabrication is really one of the things that you have to ‘rethink’ in some way. It’s probably the most obvious other thing that you have to do. I also think that there are other experiences coming out, not that

bi-directional links to the cloud-based, collaborative Motif environment. One of the special capabilities is the live broadcasting of objects from Revit as they are placed, with Motif displaying the streamed model.

It’s possible to run Revit side by side with Motif, with Motif automatically synchronising views. As geometry is added to Revit it appears almost instantly in the Motif view. This is food for thought, as it makes live Revit design information available to collaborative teams. While this is Speckle-like there’s no need to set up a server or have high technical knowledge.

Motif facilitates granular sharing of information through “frames,” allowing users to select and share specific subsets of data with different stakeholders. The software translates data from native object models (e.g. Revit) into a ‘neutral internal object model’ (mesh and properties) which allows it to connect with different systems.
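To illustrate the idea (the names below are hypothetical, not Motif’s actual object model), a ‘neutral internal object model’ of mesh plus properties, and a ‘frame’ that shares only a subset of objects, might be sketched like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch - names are illustrative, not Motif's actual API.
# A native object (e.g. from Revit) is reduced to a neutral form:
# mesh geometry plus a flat property bag any system can consume.

@dataclass
class NeutralObject:
    object_id: str
    source: str                 # e.g. "revit", "rhino"
    mesh_vertices: list         # flattened triangle mesh
    properties: dict = field(default_factory=dict)

@dataclass
class Frame:
    """A shareable subset of the project - only the objects a given
    stakeholder needs, without the 'noise' of the full model."""
    name: str
    object_ids: set

def share_frame(frame, all_objects):
    # Return only the neutral objects referenced by the frame.
    return [all_objects[oid] for oid in frame.object_ids if oid in all_objects]

# Example: a structural engineer receives just the column, not the walls.
objects = {
    "w1": NeutralObject("w1", "revit", [], {"category": "Wall"}),
    "c1": NeutralObject("c1", "revit", [], {"category": "Column"}),
}
frame = Frame("structure-review", {"c1"})
subset = share_frame(frame, objects)   # one object: the column
```

The point of the sketch is the separation of concerns: translation into a neutral form happens once at the plugin boundary, while frames decide who sees which slice of the data.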

Buildings can be manipulated in 3D and there’s smart work plane generation. This might not be super useful right now, but we can imagine how it will play out once the BIM modelling tools get added in. For now, images can be applied to surfaces and freehand 3D markup and surface-based detection give the software an uncanny intuition for selecting surface planes and geometry when the mouse is near.

It’s possible to make markups to these ingested objects in Motif, and somewhat amazingly these comments can also be seen back in the Revit session. For now, though, there’s no clash detection or model entity editing available in Motif - its initial use is design review. Motif stores all the history at an object level, allowing users to go back in time to previous states of a project and see who changed what.
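As a hedged illustration of object-level history – Motif’s internal design has not been published – a minimal version log that supports ‘going back in time’ and seeing who changed what could look like this:

```python
# Illustrative sketch only, not Motif's internals: every change to an
# object is appended with a timestamp and author, so a project can be
# reconstructed as it stood at any earlier moment, and each change is
# attributable to a user.

class ObjectHistory:
    def __init__(self):
        self._log = {}   # object_id -> list of (timestamp, author, state)

    def record(self, object_id, timestamp, author, state):
        self._log.setdefault(object_id, []).append((timestamp, author, state))

    def state_at(self, object_id, timestamp):
        """Latest recorded state at or before `timestamp`, else None."""
        result = None
        for ts, _author, state in self._log.get(object_id, []):
            if ts <= timestamp:
                result = state
        return result

    def who_changed(self, object_id):
        return [author for _ts, author, _state in self._log.get(object_id, [])]

h = ObjectHistory()
h.record("door-42", 1, "alice", {"width": 900})
h.record("door-42", 5, "bob", {"width": 1000})
h.state_at("door-42", 3)   # the pre-edit state, as recorded by alice
```

An append-only log like this is what makes both time travel and per-object attribution cheap, at the cost of storage that grows with every edit.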

The product’s interface is wonderfully uncomplicated with only nine tools. The display feels very architectural, presenting ‘model in white’ with some grey shadowing.

The data model

The underlying data model is important. Motif uses a ‘linked information model’ based on the idea that in AEC all data is distributed data. Instead of trying to centralise all the project information in a single system, which is what Autodesk Docs / Autodesk Construction Cloud (ACC) does, Motif aims to link data where it resides and assumes that no single system will have all the necessary information for a building. So instead of ingesting and holding all the data to be one version of the truth, somewhat trapping users in a file format, or cloud system, Motif will pull in data for display and reference reasons. In the future we guess it will be mixed

we are an AR/VR play, but you’re creating other sorts of experiences, and deliverables that people want. We need to think through that more expansively.

Martyn Day: Are you using a solid modelling engine underneath, like Qonic?

Amar Hanspal: Yes, there is an answer to that, but what we’re coming out with first won’t need all that complexity. But yeah, of course, we will do all that stuff over time. There is a mixture of tech that we can use – off the shelf – like licensing one, or using something that is relatively open source.

Martyn Day: For most firms who have entered this space, taking on Revit is the software equivalent of scaling the North face of the Eiger – 20 years of development, multidiscipline, broadly adopted. All of the new tools initially look like SketchUp, as there’s so much to develop. Some have focused on one area, like conceptual, others have opted to develop all over the place to have broad, but shallow functionality. Are you coming to market focussing on a sweet spot?

Amar Hanspal: One of the things we learned from speaking to customers is

that [in] this whole concept modelling / Skema / TestFit world there are so many things that developers are doing. We’re going after a different problem set. In some ways, the first application that we have launched feels much more like a companion, collaboration product. I don’t want to take anything out to market that feels half-finished. The lesson we’ve learned from everything is that even to do the MVP (Minimum Viable Product) in modelling, we will be just one of sixteen things that people are using. I think, you know, I’d much rather go up to the North face and scale it.

Martyn Day: Many of the original letter writers were signature architects, complaining that they couldn’t model the geometry in Revit so used Rhino / Grasshopper then dropped the geometry into Revit. So, are you talking to the most demanding users, the hardest to please?

Amar Hanspal: I 100% agree with you. I think someone has to go up the North face of the Eiger. That’s my thing, it’s the hardest thing to do. It’s why we need this special team. It’s why we need this big capital. That’s why Brian and I decided to

do it. I was thinking, who else is going to do it? Autodesk isn’t doing it! This Forma stuff isn’t really leading to the reinvention of Revit.

All these small developers that are showing up, are going to the East face. I give them credit. I’m not dissing them, but if they’re not going to scale the North face… I’m like, OK, this is hard, but we have got to go up the North face of the Eiger, and that’s what we’re going to do.

Martyn Day: From talking with other developers, it looks like it will take five years to be feature comparable. The problem is products come to the market before they are fleshed out; they get evaluated and dismissed because they look like SketchUp, not a Revit replacement, and it’s hard to get the market’s attention again after that.

Amar Hanspal: Yeah, I think it’s five years. And that’s why, deliberately, the first product that’s come out is the editor. It’s a little bit Revizto-like, because I think that’s what gives us time to go do the big thing. If you’re gonna come for the King, you better not miss. We’ve got to get to that threshold where

with its own design information.

Motif is intended to be ‘pretty open’ according to the team, with plans to expose the API and SDK to allow users and developers access to extract and add their own data and object types.
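As a rough sketch of the ‘linked information model’ idea – linking to data where it resides rather than ingesting it – consider the following (all names are illustrative assumptions, not Motif’s SDK):

```python
# Hedged sketch of a 'linked information model' (illustrative only):
# instead of ingesting every file into one central database, the model
# stores lightweight links to where data already lives, and resolves
# them on demand for display and reference.

class LinkedModel:
    def __init__(self):
        self.links = {}   # element_id -> (system, uri)

    def link(self, element_id, system, uri):
        # Record where the authoritative data resides - e.g. a Revit
        # model on a server - rather than copying it in.
        self.links[element_id] = (system, uri)

    def resolve(self, element_id, fetchers):
        """Pull data for display/reference via a per-system fetcher."""
        system, uri = self.links[element_id]
        return fetchers[system](uri)

model = LinkedModel()
model.link("beam-7", "revit", "https://example.com/models/a.rvt#beam-7")

# A fetcher would normally call the source system's API; stubbed here.
fetchers = {"revit": lambda uri: {"uri": uri, "kind": "beam"}}
data = model.resolve("beam-7", fetchers)
```

The design choice this sketch highlights is that each source system remains the owner of its data; the aggregator only holds references and a translation layer, much like hyperlinks on the web.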

At the moment the teams are developing plugins to connect Motif with various commonly-used BIM and CAD applications, including Grasshopper, Dynamo, SketchUp and AutoCAD, in addition to Rhino and Revit which are already supported.

Business model

At the early stage of most startups, having a sales force and actively selling an early version of an application is usually a low priority. Instead, many startups just seek early adopters for trial and feedback. Motif, while in development for almost two years, already has a small sales team and is actively selling the software for $25 per user per month. Hanspal says this is to ensure good discipline in software development, to provide scalability, performance, and responsiveness to customer feedback. The initial adoption is expected to come from companies looking to replace parts of their Miro workflow.

Conclusion

Motif fully intends to take on Autodesk Revit in the long term. CEO

Hanspal realises this is a multi-year marathon, so while the team develops a modelling capability, it is utilising elements of its current technology to provide collaborative cloud-based solutions for a variety of pain points which they have identified as being under-serviced.

For now, the company aims to develop a cloud-based 3D interface for project information, which will not necessarily replace existing BIM or drawing systems but will act as an aggregator and collaboration platform for those using a wide array of commonly used authoring tools. The software comes to market with an interesting array of capabilities which may seem basic but provide some insight into what’s coming next – the bi-directional streaming between authoring tool and Motif, the deep understanding of Revit data, models and drawings, Revit synchronisation, connectivity to Rhino and smart interaction with model data all impress. There may be some frustration with obvious capabilities that are currently omitted, such as simple clash detection between imported model geometry, but we are sure this is coming as development progresses. What Motif does, it does well. It’s hard to pigeonhole the functionality delivered when compared to any other specific genre of application currently

on the market. Many will find it’s well worth having for the creative storyboarding alone, others may find collaborative design review the key capability. Those that can’t afford Omniverse might love the ability to have an application that can display all the coordinated geometry from multiple applications in the cloud for project teams to see and understand. It’s important to remember that this is a work in progress and as the software develops its capabilities, it will expand into modelling and creating drawings. Its tight integration with Revit will be useful and reassuring to those who want to mix and match BIM applications as the industry

inevitably transitions to BIM 2.0. Meanwhile, the Motif team continues to grow, adding in serious industry firepower. After hiring Jens Majdal Kaarsholm, the former director of design technology at BIG last year, the company has added Greg Demchak, who formerly ran the Digital Innovation Lab at Bentley Systems, as well as Tatjana Dzambazova formerly of IDEO. Demchak was an early recruit at Revit before Autodesk acquired it and Dzambazova was a long time Autodesk executive, deeply involved in strategy and development of AEC, reality capture and AI. It seems the old gang is getting back together.

somebody looks at it and goes, ‘It doesn’t do 100% but it does 50% or 60%’ or I can do these projects on it and that’s where we are – it’s why we’re working [with] these big guys to keep us honest. When they tell us they can really use this, then we open it up to everybody else. Up until then, we’ll do this other thing that is not a concept modeller but will feel useful.

Martyn Day: Talking about the first product, what was the rationale behind bringing out this subset of features? They seem quite disparate.

Amar Hanspal: What we are trying to do, over multiple years, is build out a system that you would call BIM, to provide everything you need to describe a building and create all the necessary documents. There are four key elements, plus one: modelling, documentation, data and collaboration – and then the plus one is scripting.

The data part is all about how it’s managed, stored, linked, represented and displayed for a customer, which is the user interaction model, around all of this. Scripting is just automation across all of these four things. And we have always thought about BIM that way.

We know people will react to the initial product because they see the user interface and think we are doing markup and sketching. But behind the scenes, these are just the two things that got ‘productised’ first, data handling and collaboration, while we build towards the other capabilities.

Martyn Day: You have been talking with leading AEC firms for two years. How will you go from this initial functionality to full BIM?

Amar Hanspal: We can’t wait ten years, like Onshape [cloud-based MCAD software] or [Autodesk] Fusion did, to get all the capabilities in there. So, what’s the sequencing of this? From sitting down and talking to customers about the design review process they were implementing, the product we ran across the most was Miro. For design review, many are using a Miro board. They would express frustration that it was just a painful, static, flat process. That’s where our ‘light bulbs’ went off. Miro is just collages and a bunch of information. Even when we become a full BIM editor, we’re still going to have to coexist with Tekla, Rhino, some MEP application. We actually have to get good at being part of this ecosystem and not demanding to be the source of truth for everything.

It gets us to the goal that we’re looking for, and we’re solving a user problem. So that’s how we came up with what we were going to do first: a Miro workflow mirror. Some companies are even doing design review using Adobe InDesign.


Martyn Day: How many people are in the team now?

Amar Hanspal: I think we’re getting close to 40. It’s mainly all product people. We are a distributed company, based around Boston, New York or the Bay Area – that’s our core.

We’re constructing the team with three basic capabilities. There’s classic geometry folks – and these are the usual suspects. The place where we have newer talent is on the cloud side, both in trying to do 3D on the browser front end, and then on the back-end side, when we’re talking about the data structures. None of those people come from CAD companies, none of them; they come from the likes of Twitter, Uber or robotics companies – different universes to traditional CAD.

The third skill set that we’re developing is machine learning. Again, none of those guys are coming from Cloud or 3D companies. These are research-focused, coming from first principles, that kind of focus.


Over time, we can become more capable of replacing some of the things that Bluebeam and Revizto do.

Martyn Day: By trying to rethink BIM while being heavily influenced by what came before, like Revit, is there a danger of being constrained by past concepts? Someone described Revit to me as 70s thinking in 80s programming. Obviously, computer science, processors and the cloud have all moved on since. The same goes for business models. I recently heard the CEO of Microsoft say SaaS was dead!

Our philosophy around data is, no matter how we store it, fundamentally no system is going to have all of the data necessary for a building. So instead of trying to centralise the information, like ACC does – and while you will always have some data in your system – I think the model we’re trying to bring to bear is a ‘linked information model’: the idea that you’re watching us bring with the plugins and the round-tripping of the comments. We’re going to assume that data is going to stay where it is and, like the internet, we have to figure out a linking model, a sharing model, to bring it together.

You can look at the app as it currently stands, which features a couple of core concepts that we’re trying to bring to market – the distributed data idea, and then the user model on top of it, enabling sharing.

Martyn Day: Many start-ups put off developing sales to get early adoption. But with your initial release you have started selling the product. Why are you taking this approach?

Amar Hanspal: It’s good discipline. It’s like eating your vegetables. When you ask people for money, you have to prove value. It’s good discipline for us to deliver something that’s useful to customers, and to see them actually go through the process of making the decision to spend money on it, because they see how much it’s going to help or save them.

Fundamentally, we want to make sure that we’re professional people developing software in a professional way. It forces us to be good about handling things like scalability and performance.

Amar Hanspal: We know we’re living in a post subscription world. Post ‘named user’ world is the way I would describe it. The problem with subscription right now, is that it’s all named user, you’ve got to be onboard, and then this token model at Autodesk is if you use the product for 30 seconds, then you get charged for the whole day.

It’s still very tied to, sort of, a human being in a chair. That’s what makes the change. Now, what does that end up looking like? You know, of the prevalent models, there are three that are getting a lot of interest. One is the OpenAI ChatGPT model: you get a subscription, you get a bunch of tokens; you exceed them, you get more.

The other one, which I don’t think works in AEC, is outcome-based pricing, which works for call centres. You close a call, you create seven bucks for the software. I don’t see that happening. What’s

the equivalent in AEC? Produce a drawing, seven bucks? What is the equivalent of that? That just seems wrong. I think we’re going to end up in this somewhat hybrid tokenised / ChatGPT-style model, but you know we have to figure that out. We have to account for people’s ability to flex up and down. They have work that comes in and out. Yeah, that’s the weakness of the subscription business model – customers are just stuck.

Martyn Day: Why didn’t Autodesk redevelop Revit in 2010 to 2015?

Amar Hanspal: What I remember of those days – it’s been a while – is that there was a lot of focus on just trying to finish off Revit Structure and MEP. I think that was the one Revit idea, and then came suites and subscriptions. There was so much focus on business models back then. But you’re right. Looking back, that was the time we should have redone Revit. I started to do it with Quantum, but I didn’t last long enough to be able to do it!

Conclusion

One could argue that the decision by Autodesk not to rewrite Revit and to minimise its development was a great move, profit-wise. For the last eight years, Revit sales haven’t slowed down and copies are still flying off the shelves. Revit is a mature product with millions of trained users, and RVT is the lingua franca of the AEC world, as defined in many contracts. This supports the argument that software is sticky, and with that sticky grip there’s plenty of time for Autodesk to flesh out and build its Forma cloud strategy.

Autodesk has taken an active interest in the startups that have appeared, even letting Snaptrude exhibit at Autodesk University, while it assesses the threat and considers investing in or buying useful teams and tech. If there is one thing Autodesk has, it’s deep pockets, and throughout its history it has bought each subsequent replacement BIM technology – from Architectural Desktop (ADT) to Revit. Forma would have been the first in-house development, although I guess that’s partially come out of the Spacemaker acquisition.

But this isn’t the whole story. With Revit, it’s not just that the software is old, or the files are big, or that the Autodesk team has given up on delivering major new productivity benefits. From talking with firms there’s an almost allergic reaction to the business model, coupled with

the threat of compliance audits, added to the perceived lack of product development. In 35+ years of doing this, it’s still odd seeing Autodesk customers inviting in BIM start-ups to try and help the competitive products become match-fit in order to provide real productivity benefits – and this has been happening for two years.

With Hanspal now officially throwing his hat into the ring, it feels like something has changed, without anything changing. The BIM 2.0 movement now has more gravitas, adding momentum to the idea that cloud-based collaborative workflows are now inevitable. This is not to take anything away from Arcol, Snaptrude, Hypar and Qonic which are possibly years ahead of Motif, having already delivered products to market, with much more to come.

From our conversation with Hanspal, we have an indication of what Motif will be developing without any real understanding of how it will evolve. We know it has substantial backing from major VCs and this all adds to the general assessment that Revit and BIM is ripe for the taking.

At this moment in the AEC space, trying to do a full-frontal assault of the Revit installed base is like climbing the North Face of the Eiger – you better take a mighty big run up and have plenty of reserves. And, for a long time, it’s going to look like you are going nowhere. Here, Motif is playing its cards close to its chest, unlike the other start-ups, which have been sharing in open development from very early on, dropping new capabilities weekly. While it is easy to assess the velocity with which Snaptrude, Arcol and Qonic deliver, I think it’s going to be hard to measure Motif’s modeller technology until it’s considerably further along in development. It’s a different approach. That doesn’t mean it’s wrong and

with regular workshops and collaboration with the signature architects, there should be some comfort for investors that progress is being made. But, as Hanspal explained, it’s going to be a slow drip of capability.

While Autodesk may have been inquisitive about the new BIM start-ups, I suspect the ex-Autodesk talent in Motif, carrying out a similar Quantum plan, would be seen as a competitor that might do some damage if given space, time and resources. Motif is certainly well funded but with a US-based dev team, it will have a high cash burn rate.

By the same measurement, Snaptrude is way ahead, with a larger, purely Indian development team, substantially lower costs and a lower capital burn rate. Arcol has backing from Tooey Courtemanche (aka Mr. Procore), and Qonic is doing fast things with big datasets that just look like magic, while having been totally self-funded. BIM 2.0 already has quality and depth. The challenge is to offer enough benefit, at the right price, to make customers want to switch – and to work out what the minimum viable product for that looks like.

It’s only April, and we already know that this will be the year that BIM 2.0 gets real. All the key players and interested parties will be at our NXT BLD and NXT DEV conferences in London on 11-12 June 2025 – that’s Arcol, Autodesk, Bentley Systems, Dassault Systèmes, Graphisoft, Motif, Snaptrude, Qonic and others. As these products are being developed, we need as many AEC firms onboard as possible to help guide their direction. We need to ensure the next generation of tools is what is needed, not what software programmers think we need, or limited to concepts which constrained workflows in the past. Welcome, Motif, to the melee for the hearts and minds of next generation users!

■ www.motif.io

NXT BLD / NXT DEV

Future BIM voices at NXT

NXT BLD and NXT DEV offer a unique opportunity to witness the evolution

of BIM 2.0 firsthand. This year, four leading startups will present their commercial products, alongside a wealth of additional innovations

For almost twenty years the AEC software world was centred around Autodesk Revit and its definition of BIM and the workflows built around it. The concept was to ideate, model detailed designs and create all the necessary drawings in one monolithic platform.

But software typically has a lifespan, after which it needs to be rewritten or rearchitected (for OS changes, new hardware, and to clean up years of bloat). Following open letters from customers concerned about the lack of Revit development (www.tinyurl.com/aec-letters), Autodesk explained that it was not going to rewrite Revit for the desktop, but would instead develop a next-generation AEC design environment on the cloud, branded Forma (N.B. Carl Christensen, the Autodesk VP in charge of delivering Forma, will be presenting at NXT BLD on June 11).

This gap between Revit and what will come next has presented an opportunity for new software developers to rethink BIM and its underlying technologies, and to bring AEC design software into the 21st century. Investors have become equally excited, and NXT BLD and NXT DEV will provide a unique forum for multiple startups—Snaptrude, Motif, Qonic and Arcol—to present new commercial BIM 2.0 products, with more firms in stealth, probably in the audience!

While the velocity of the startups is impressive, we need to temper expectations by pointing out that competing against established desktop BIM applications, which are 20+ years old, will take years (and millions of dollars). Over that time, expect to see these tools become increasingly feature-comparable.

While BIM 2.0 shifts the focus away from producing drawings, there’s no escaping their continued importance to the AEC industry. That’s why there’s also a big focus on autodrawings, as this AI-powered technology promises to massively reduce the time spent on mundane, boring work. Autodrawings could also mean fewer licences of BIM software are required. Both Snaptrude and Qonic have developments here. However, it’s quite possible that autodrawings and AI will become cloud services that don’t need to live inside an all-encompassing BIM platform.

Arcol

Based in New York, Arcol is headed up by Irishman Paul O’Carroll, who brings a games development background to BIM and 3D. One of the earliest to profile its approach as ‘Figma for BIM’, the company has attracted investors including the chief executives of both Procore and Figma.

Arcol has focussed heavily on concept design for its initial offering, enabling live in-context modelling with building metrics, data extraction and collaboration built in. The software supports complex geometry, an easy-to-learn UI, board creation for presentations (which can be shared by simply sending a link), and live plans and sheets. It integrates with Revit, SketchUp and Excel. Reports are highly visual and Arcol sees the tool as a replacement for PDF as well. The solution is aimed at architects, developers, general contractors and owners. Arcol will be officially shipping by the time of NXT BLD. ■ www.arcol.io

At NXT BLD / DEV you can meet and engage with all these firms, plus many more individuals innovating in the AEC space, such as Antonio González Viegas of That Open Company and Dalai Felinto of Blender, who are bringing the benefits of impressive open source tools to our industry. We hope that you will join us.

■ www.nxtbld.com ■ www.nxtdev.build

Motif

Motif is our cover story this month (see page 22). The company is headed up by Amar Hanspal, former joint CEO of Autodesk, who has assembled the old gang to finish off a task he started in 2016: the rewriting of Revit as a cloud application.

Motif is also pitched as Figma for BIM and is backed by Alphabet (Google) with a sizeable war chest. In stealth for the last two years, the company has been working with signature architects to learn what a BIM 2.0 application should be able to do, the idea being that by catering to the most demanding customers, the software should benefit everyone.

The company has just launched its first version but recognises the journey will take many years. The feature set of version 1 lends itself to design review and client presentations, taking aim at Miro, but with some Speckle- and Omniverse-like capabilities. ■ www.motif.io

Qonic

The origins of Ghent-based Qonic go back to TriForma, a BIM system which co-founder Erik de Keyser created and licensed to Bentley Systems. De Keyser then founded Bricsys and created BricsCAD, a DWG-based CAD and formative BIM tool; the company was later sold to Hexagon.

Many of the Bricsys team then started up Qonic, a cloud-based BIM 2.0 competitor which initially (and uniquely) focuses on the model and data interface between architecture and construction.

Antonio González Viegas, CEO of That Open Company, creator of free and open technology that helps AECO firms and practitioners build their own software, will be speaking at NXT DEV again this year

Qonic can load huge Revit models and lets users fly through them with butter-smooth refresh rates on the desktop or mobile. The program also has a powerful solid modelling core for geometry edits, as well as supporting IFC component labelling. The initial release makes it exceptionally easy to see, manipulate and filter BIM data – like a CDE on steroids. The team is working on architectural tools, smart drawings and a range of features to expand capabilities.

■ www.qonic.com

11/12 June 2025 The Queen Elizabeth II Centre // London, UK www.nxtbld.com / www.nxtdev.build

Snaptrude has the accolade of being the first BIM 2.0 startup that AEC Magazine discovered. CEO Altaf Ganihar was first to demonstrate cloud-based collaborative working on Revit models and has gone on to raise $21m in VC funding.

The New York-based company seeks to be a one-stop shop for conceptual design, detailed design and drawing production, while linking to all the common tools – Revit, SketchUp, AutoCAD, Rhino, as well as Nemetschek’s Archicad. Snaptrude currently offers the widest range of BIM 2.0 features, from concept to AI renderings and drawings, and looks likely to be the first to reach feature parity with Revit for architecture, with plans to also support MEP and structural. With the biggest development team in the BIM 2.0 space, the company is moving at pace to deliver on its aims. It is soon to announce a range of major new features.

■ www.snaptrude.com

Rebuilding BIM [the startups]

In the last edition, we asked five established AEC software developers to share their observations and projections for BIM 2.0. Now it’s the turn of the startups

Qonic: Erik de Keyser, co-founder

Breaking the compromise in digital project delivery

For years, the AECO industry has been caught between big ambitions and everyday challenges. While many envision a future of smooth, high-performing digital project delivery, most professionals are stuck with tools that fall short — locked into compromises that waste time, money, and creativity.

Qonic wasn’t intended to be a better version of what came before. We started by asking: what if we could remove the compromises entirely?

The industry’s impasse

In recent years, the AECO industry has clearly voiced what’s holding back the future of digital project delivery. Open letters and software specs have highlighted the challenges and hopes for change. However, new tools merely put a fresh coat of paint on old problems, making minor improvements without addressing core inefficiencies.

Qonic was founded on the belief that patching old systems isn’t enough. The AECO industry needs a complete reset, a chance to rethink what’s possible and achieve new levels of efficiency.

Every day, designers, engineers, and constructors lose valuable time waiting for data, fighting interoperability issues, and over-investing in hardware that underperforms.

Teams fragment their information across multiple files and tools, sacrificing collaboration and clarity in the process.

At Qonic, we asked a simple question: what if we could start over? What if we could remove the bottlenecks of outdated tools and workflows, giving professionals back control?

It’s not about adapting to legacy constraints, it’s about eliminating them altogether.

An evolution, not a revolution

Qonic started from a blank sheet, totally free from any legacy constraints. However, change is hard, and the industry has been through this before. The shift from hand-drawn lines to object-based modelling took years. Qonic understands that real change has to be as smooth as it is significant.

That’s why Qonic doesn’t just offer a clean slate, it offers a clear path forward. Its foundation is built on solid modelling and data handling that surpasses existing solutions while remaining accessible to existing workflows. Legacy models can be upgraded instantly and large 3D models that once took minutes to open now stream seamlessly.

The core principles of Qonic

We are guided by three core principles that we believe will shape a better way forward:

• 3D solid modelling without limits

• Open and flexible interoperability

• Cloud-first collaboration

3D solid modelling without limits

One area ripe for change is 3D modelling. While many tools focus on early-stage conceptual design, Qonic delivers end-to-end capability: robust, high-performance solid modelling that works for the entire lifecycle of a project.

Traditionally, professionals have had to choose between flexibility and precision: free-form modelling tools for creative design or object-based modelling tools built for intelligent, data-rich models. At Qonic, we believe it is possible to combine the best of both worlds. Qonic is developing a unified modelling environment where designers can nimbly navigate between the two approaches:

Direct modelling: Users can push and pull geometry freely, ensuring full accuracy and flexibility, rather than being forced to adapt to the limitations of the software. Complex geometry? No problem. NURBS and solids are at the core, ensuring precision from concept through construction.

Intelligent modelling tools: With so-called automated modelling ‘procedures’, you can streamline the development of real-life building systems. Combined with manufacturer details stored in structured libraries as components, Qonic enables high-detail 3D modelling with advanced automation.

Open and flexible interoperability

Qonic speaks the language of today’s most widely used design tools:

• Native import of Rhino and SketchUp models.

• Seamlessly handling Revit import and export.

• For structural, MEP, and other disciplines, full IFC support.

But integration is just the start. Qonic transforms traditional file-based workflows into a database-driven approach. Instead of monolithic files, each BIM model becomes a dynamic collection of assemblies, subassemblies, and individual parts, where geometry and data are intrinsically linked and easily accessible.
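To make the contrast with monolithic files concrete, a database-driven model along the lines described above might be organised something like this. This is purely an illustrative sketch: the class names and fields are our own invention for the purpose of the example, not Qonic’s actual schema.

```python
# Illustrative sketch of a database-style BIM hierarchy, where geometry
# and data live on the same addressable objects rather than inside one
# monolithic file. All names here are hypothetical, not Qonic's schema.
from dataclasses import dataclass, field

@dataclass
class Part:
    ifc_class: str                        # e.g. "IfcWallStandardCase"
    geometry_ref: str                     # reference to geometry, not embedded
    properties: dict = field(default_factory=dict)

@dataclass
class Assembly:
    name: str
    parts: list = field(default_factory=list)
    sub_assemblies: list = field(default_factory=list)

    def iter_parts(self):
        """Yield every part in this assembly and its sub-assemblies."""
        for part in self.parts:
            yield part
        for sub in self.sub_assemblies:
            yield from sub.iter_parts()

# A client can query one sub-tree without loading the whole model:
facade = Assembly("Facade", parts=[
    Part("IfcWallStandardCase", "geom/wall-001", {"fire_rating": "EI60"}),
])
building = Assembly("Building", sub_assemblies=[facade])
print(sum(1 for _ in building.iter_parts()))  # prints 1
```

The point of the sketch is that each part is individually addressable, so an API can serve one facade, one wall, or one property set on demand, rather than forcing a client to open the whole model.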

For Qonic, openness also means the possibility to distribute this geometry and data using application programming interfaces (APIs), freeing it from monolithic BIM silos.

Taking a platform approach, Qonic empowers users to build custom workflows tailored to their specific needs, while maintaining full control over their data.

Collaboration without constraints

Qonic’s cloud-first platform was built to simplify how teams work together, no matter their location or role:

Effortless collaboration: Projects scale smoothly without friction, and unlimited team members can securely access the data they need.

Granular permissions: Fine-grained access controls ensure the right people have the right access at the right time, maintaining both security and control while enabling seamless collaboration.

Integrated conflict resolution: Built-in conflict resolution ensures that design and execution remain in perfect sync. Complete project histories and versioning allow teams to trace their steps and move forward with confidence.

Streamlined coordination: With built-in workflows for clash detection and issue management, Qonic enables teams to discover, discuss, and resolve coordination issues effortlessly, without disrupting workflows.

A new standard, not just another tool

Qonic isn’t aiming to be “Revit 2.0” or the next evolution of yesterday’s ideas. It’s about helping the industry rethink how digital project delivery can work when freed from unnecessary compromises. It represents a fundamental shift in digital project delivery, one that gives designers, engineers, and constructors the tools they’ve always needed. By combining solid modelling, open interoperability and cloud collaboration, Qonic aims to make it easier for architects and contractors to model designs and deliver project outcomes (reports, drawings, etc.) with unparalleled accuracy, flexibility, detail, and intelligence.

And this is just the beginning. The structured, data-rich models created in Qonic provide a solid foundation for AI and machine learning. As models accumulate detailed geometry and information, machine learning algorithms can analyse patterns, optimise workflows, and automate repetitive tasks. Today, we are already shipping a neural network capable of recognising 3D geometry and adding missing classification information.

Our future roadmap

Qonic isn’t just solving today’s challenges — it’s building a foundation for the future of digital project delivery. Qonic is an agile, forward-thinking team, drawn from industry and with prior AEC tool development experience. We’re completely self-funded, built by industry experts, and free from investor or shareholder pressure to release an incomplete product.

With rapid development cycles and a commitment to redefining AECO software, Qonic continues to push boundaries, from automated drawing generation to a model quality hub. For those ready to leave compromise behind, Qonic isn’t just a platform. It’s an invitation to rethink what’s possible in digital project delivery.

Motif: Amar Hanspal, CEO

BIM 2.0: why it’s time to reinvent the tools that power the built world

For more than two decades, Building Information Modelling (BIM) has promised to revolutionise how we design, construct, and operate buildings. At its core, BIM integrates geometry, data, and documentation into a single, intelligent model, envisioned as a digital twin of the built environment. The goal was clear: streamline collaboration, enhance coordination, and unlock data-driven decision-making across the lifecycle of every building.

But ask today’s architects, engineers, or contractors, and many will say: BIM hasn’t evolved much beyond its early promise.

Despite billions invested and years of adoption, the tools that power BIM remain anchored to outdated paradigms. Built for a PC- and LAN-centric world, most BIM workflows are still siloed, static, and sluggish. Collaboration is clunky. Interoperability is limited. And critical design decisions are often made using software that looks and feels like it hasn’t changed since the ‘90s.

Put simply: we’re trying to design 21st-century buildings with 20th-century tools.

It’s time for a reboot. Welcome to BIM 2.0—a new vision powered by platforms that are open, intelligent, and built for how teams actually work today.

From static to dynamic

BIM 1.0 delivered a meaningful leap: it united 3D geometry with metadata and documentation. But it was built on a foundation designed for an earlier era. Files had to be saved, exported, and shared manually. Collaboration was mostly asynchronous. Real-time feedback loops were rare. And too often, BIM software became glorified drafting tools—used more for generating drawing sets than driving design.

BIM 2.0 changes the equation. Built on modern, cloud-native infrastructure, platforms like Motif are replacing static file-based workflows with dynamic, distributed data models. Updates flow across applications and stakeholders in real time. Comments and markups stay connected to the source model. Simulations run in the background and return insights — eliminating the lags that kill iteration.

This shift isn’t just about speed. It’s about unlocking a smarter, more adaptive design process—where models don’t just represent decisions but help shape them.

From fragmented to collaborative

Ironically, one of the biggest failures of BIM 1.0 is how much it fractured collaboration. Teams cobbled together PDFs, whiteboarding tools, issue trackers, and disjointed 3D viewers. Design decisions were made in one app, recorded in another, and implemented in yet another—often without full context or continuity.

BIM 2.0 flips that script. Collaboration isn’t an add-on—it’s the starting point.

Motif centers its platform around a shared, infinite canvas where teams can sketch, annotate, present, and iterate together—on top of live models. Think Miro meets Revit, but with the intelligence of a connected design system underneath.

This unified workspace does more than streamline communication—it expands access. Clients, consultants, and extended stakeholders can participate from anywhere, with no downloads, steep learning curves, or risk of version drift. Everyone sees the same model, the same notes, the same design logic—in real time.

From manual to machine-learned

BIM 1.0 automated drafting. BIM 2.0 will automate design intelligence.

As machine learning enters the design stack, we’re moving beyond repetitive documentation toward systems that learn, suggest, and adapt. Imagine tools that:

• Propose design alternatives based on performance goals

• Validate compliance automatically

• Fill in documentation as you go

• Optimise layouts based on usage patterns, daylighting, or energy metrics

These aren’t sci-fi dreams—they’re becoming reality. Motif and its peers are laying the groundwork for systems where designers focus on high-level intent, and intelligent assistants handle the details. It’s not about replacing creativity—it’s about elevating it.

From rigid to open

The future of BIM can’t be built on closed formats and walled gardens. For too long, legacy vendors have controlled data flows and forced teams into rigid ecosystems. Interoperability has suffered. Innovation has stalled. And designers have paid the price in the form of rework, exports, and brittle integrations.

BIM 2.0 is rooted in openness. Motif is built on modern, API-first architecture that integrates with the tools firms already use—Revit, Rhino, AutoCAD, SketchUp, and beyond. Live links replace file exchanges. Data stays fluid, accessible, and usable across systems.

This openness is not just technical—it’s philosophical. It’s about giving teams choice, flexibility, and the freedom to build the workflows that work best for them.

Built for the next generation of designers

Ultimately, BIM 2.0 reflects a generational shift. Today’s architects and engineers expect tools that are collaborative, fast, and intuitive. They grew up using mobile apps, real-time multiplayer games, and intelligent productivity software. They don’t want to wait 30 minutes to open a model or export a PDF just to share feedback.

Motif embraces this shift. Its UI is clean, its workflows feel natural, and its early features—from 3D sketching to live-linked presentations—are designed for how designers actually think and work.

And this is just the beginning.

Led by veterans from Autodesk, Revit, Twitter, Vimeo and Onshape, the Motif team is building for the long term.

Just as AWS began with a single service and evolved into a foundational platform, Motif is starting with collaboration—and setting the stage for a full, intelligent BIM ecosystem.

Future releases will expand into predictive modelling, AI-assisted documentation, intelligent agents, and beyond—bringing us closer to the original promise of BIM.

The stakes are high

The built environment is responsible for nearly 40% of global energy use and a third of greenhouse gas emissions. If we want to design buildings that are more sustainable, resilient, and human-centered, we need tools that can meet the moment.

BIM 2.0 isn’t just a technical upgrade—it’s a creative, cultural, and ethical imperative. By building open, intelligent, and collaborative platforms, we can empower the next generation of designers to build a world that works better—for people, for the planet, and for generations to come.

The tools are coming. The future is open. Let’s build it, together.


Arcol: Paul O’Carroll, CEO

Beyond Buzzwords: the real future of BIM

When thinking about the future of BIM, it’s easy to fall back on familiar buzzwords—AI, automation, cloud computing, data-driven insights. Don’t get me wrong; these aren’t just trendy phrases. They represent genuine opportunities to radically transform our industry. But honestly, they aren’t the starting point for me.

When I think about what BIM should become, I focus on one essential thing: collaboration.

Rethinking collaboration from first principles

AEC might be the ultimate team sport. Every great project—from the homes we cherish to iconic global landmarks—isn’t the work of one person or even one discipline. It’s the outcome of many diverse, talented individuals working together, pooling expertise, and solving complex problems collectively. Ironically, despite our industry’s naturally collaborative nature, the tools we’ve been forced to rely on have historically pushed us into isolation.

Our current BIM tools are file-based, desktop-bound, and essentially built for single-user experiences. Even when they offer “collaboration,” it’s often superficial and awkward. True collaboration isn’t something you simply bolt onto a tool; it needs to be integral to every feature and interaction. Building genuinely collaborative design tools demands a fundamental rethinking, where real-time collaboration influences every decision, workflow, and user experience from the ground up. It also requires an extremely special technical team; we’re fortunate to be joined by lots of ex-Figma engineers (Figma is the company that pioneered real-time collaboration in a design tool).

There’s an essential distinction here: we must strive for collaboration rather than mere coordination. Coordination is reactive, aligning disparate efforts after they’ve been done. True collaboration, however, is proactive, continuous, and interactive, shaping the project together from the earliest stages through completion.

Unlocking the power of AI and automation

Once we have genuinely collaborative tools, we can fully leverage transformative technologies like AI and automation. Artificial intelligence holds immense promise—automating tedious tasks, optimising design decisions, and fuelling unprecedented creativity. Yet its true potential can only be realised when embedded within workflows designed for deep, continuous collaboration. AI agentic workflows— intelligent systems autonomously handling tasks and streamlining processes—clearly represent the future of automation in our industry. Imagine AI agents independently managing routine tasks, predicting project bottlenecks, or even proposing innovative design solutions. But before we can effectively collaborate with these intelligent agents, we must first establish a foundation of seamless human collaboration.

Automation similarly offers enormous potential, especially in an industry burdened by repetitive tasks. Integrating automation into genuinely collaborative tools liberates designers, engineers, and construction professionals from mundane work, allowing them to focus on innovation and creativity. It’s not just about efficiency; it’s about re-humanising the daily workflow.

‘‘ Building genuinely collaborative design tools demands a fundamental rethinking, where real-time collaboration influences every decision, workflow, and user experience from the ground up ’’

Data-driven decisions through collaborative BIM

Data remains central to modern BIM, but without collaboration, it quickly becomes overwhelming and disconnected. Collaborative BIM tools democratise data, making it accessible and actionable for everyone involved.

Real-time, shared insights enable smarter decision-making, reduced risk, and improved sustainability outcomes. Seamless data integration across design, construction, and manufacturing fundamentally changes the game, cutting waste and unlocking innovative possibilities we’ve yet to fully explore.

Arcol incorporates zoning, program, and costing data that is tightly synced to your model, and we’re constantly working to improve how all project stakeholders can interact with and access this data.

Arcol’s vision for a collaborative future

At Arcol, we’re actively creating this future. Our browser-first BIM solution represents a fundamental rethink of collaboration in design, removing the limitations of desktop-bound, file-centric software. By enabling real-time teamwork and seamless iteration, Arcol isn’t just improving workflows—it’s redefining what’s possible.

Ultimately, our vision for the future of BIM is profoundly human-centric. Technology is powerful, but it’s only meaningful when it amplifies human creativity, innovation, and collaboration. At Arcol, our mission is clear and ambitious: empower people, foster real innovation, and facilitate effortless collaboration across the entire project lifecycle.

The future of BIM isn’t defined merely by exciting technologies—it’s about connecting talented people, cultivating groundbreaking ideas, and building a better, more sustainable world together.

Snaptrude: Altaf Ganihar, founder and CEO

Rebuilding BIM: Beyond Legacy Thinking

Beyond the Familiar Crisis: Understanding the Root Cause

We’ve all seen the reports. We know the AEC industry struggles with productivity. We’ve read the McKinsey statistics and heard about budget overruns countless times. Yet despite this awareness, the same problems persist. Why? I believe we’re focusing on symptoms rather than the disease. The real issue isn’t just poor collaboration; it’s a broken information chain throughout the building lifecycle.

Think about it: decisions made during early design affect a building for decades. These choices impact construction costs, energy use, maintenance expenses, and occupant health. Yet our current systems break this chain at every link, especially in the critical early stages where decisions have the most impact.

The three-dimensional disconnect

The AEC industry suffers from what I call a “three-dimensional disconnect” that prevents smart decision-making:

1) Disconnected tools: Our industry uses specialised software that’s great at specific tasks but terrible at talking to each other. Existing tools force design teams to wait weeks to bring designs up to basic specifications before gathering feedback. Even tools from the same company often can’t share information without someone manually converting the data.

2) Fragmented data: Critical building information is scattered across dozens of systems in incompatible formats. Requirements, floor plans, 3D models, renders, and presentations exist as separate files in separate programs. This makes it nearly impossible to see how a change in one area affects everything else.

3) Isolated people: Most importantly, our technology keeps experts apart when they need to work together. For an industry that thrives on collaboration, with multiple stakeholders working across geographies, there’s no reliable platform for real-time collaboration. Architects, engineers, clients, and contractors use different systems that reinforce silos instead of breaking them down.

In an industry where the impact lasts for years, this lack of actionable feedback and limited collaboration causes decisions to get severely delayed, resulting in massive cost implications for retrospective corrections.

The cloud-native foundation

Seven years ago, we began with a dream of a connected ecosystem for the AEC industry, which evolved into our mission to connect people, data, and tools. We recognised that this would only be possible by building natively in the cloud.

This cloud-native foundation enables several critical capabilities:

• Universal representation of data across the lifecycle, enabling:

• Atomic changes to designs with real-time feedback on cost, climate, and carbon impact

• Seamless interoperability with legacy tools to enable easy transition from old workflows to new

• A powerful geometry kernel optimised for web-based editing

• Practical automations that enhance rather than complicate the design process

Today, I’m encouraged to see the entire industry, including incumbents and startups alike, converging on this worldview. The debate is no longer about whether we need a connected ecosystem, but how quickly we can create one and what specific approaches will work best.

Reimagining the early design process

We recognised that the most critical phase in a building’s lifecycle is the early design stage, from RFP analysis to schematic design. This is where the most impactful decisions are made, yet it’s also where our disconnected tools create the greatest financial strain on firms.

According to a recent AIA report, 15% of the work that architecture firms do is not compensated, a staggering amount of lost revenue. Much of this unpaid work occurs during early design phases, which have become increasingly unprofitable. While incumbents are rushing to rebuild legacy systems on the cloud, this approach misses the fundamental opportunity. The industry doesn’t just need faster horses, it needs automobiles.

Cloud foundations are like inventing the steam engine; they enable entirely new possibilities rather than merely improving existing ones. What the industry desperately needs is a reimagined approach to early design that connects the disjointed steps from requirements to presentation, a unified workflow that connects people, their tools, and the data they need for the most decisive phase of building design. By focusing on where decisions have the greatest impact, the early design process, we’re addressing the industry’s most pressing challenges head-on.

The AI inflection point for AEC

The emergence of advanced AI capabilities represents both an opportunity and a challenge for our industry. There’s a saying circulating that “people who use AI will replace people who don’t.” While oversimplified, it captures an important truth: AI will fundamentally reshape AEC workflows, but only in connected ecosystems with structured data that can leverage its full potential.

While cloud foundations are like the steam engine, AI is like electricity, transforming not just how we power our work, but fundamentally changing what’s possible in every aspect of the design and construction process. We as an industry are uniquely placed to see both transitions happen simultaneously, unlike other industries where they were clearly sequential.

We’re witnessing large firms investing heavily in proprietary AI systems built on their internal data. While commendable, this approach is ultimately unsustainable. Building reliable, maintainable AI systems requires expertise that, while possible for AEC firms to develop, distracts from their core competency of designing better buildings.

At Snaptrude, we’ve invested heavily in AI for practical, urgent industry problems. For example, our agent that helps firms create program requirements by analysing RFPs gives teams a solid foundation to begin their design process. Similarly, our investment in AI-powered rendering tools achieves state-of-the-art performance in adhering to geometry and design intent.

Real transformation requires new foundations

The future of BIM isn’t about incremental improvements to existing tools but a fundamental reimagining of how we design, build, and operate the built environment. By creating a platform that connects stakeholders, streamlines workflows, and harnesses the power of data and AI, we can address the industry’s most pressing challenges.

At Snaptrude, we’re proud to lead this transformation, particularly in the critical early stages of the design process. Our platform is already delivering measurable value to firms today, with customers attributing project profitability directly to our deployment. But we’re just getting started.

The AEC industry has the potential to be more efficient, more sustainable, and more creative than ever before. By rebuilding our tools with collaboration, data, and openness at their core, we can create a built environment that truly serves the needs of both today and tomorrow.

No more broken chains. No more silos. No more disconnects.

Just better buildings created through truly informed decisions at every stage of the process, starting with the most critical early design phase. I’m glad to see that we as an industry are moving towards this future – and it may not be that far away.

Higharc AI 3D BIM model from 2D sketch

In the emerging world of BIM 2.0, there will be new generic BIM tools as well as expert systems dedicated to certain building types. Higharc is a cloud-based design solution for US timber frame housing. The company recently demonstrated impressive new AI capabilities.

While AI is in a full hype cycle and not a day passes without some grandiose AI claim, some press releases still raise wizened eyebrows at AEC Magazine HQ.

North Carolina-based start-up, Higharc, has demonstrated a new AI capability which can automatically convert 2D hand sketches to 3D BIM models within its dedicated housing design system. This type of capability is something that several generic BIM developers are currently exploring in R&D.

Higharc AI, currently in beta, uses visual intelligence to auto-detect room boundaries and wall types by analysing architectural features sketched in plan. In a matter of minutes, the software then creates a correlated model comprising all the essential 3D elements that were identified in the drawing – doors, windows, and fixtures.

Everything is fully integrated with Higharc’s existing auto-drafting, estimating, and sales tools, so that construction documents, take-offs, and marketing collateral can be automatically generated once the design work is complete.

In one of the demonstrations we have seen, a 2D sketch of a second floor is imported and analysed, and the software then automatically generates all the sketched rooms and doors, with interior and exterior walls and windows. The AI-generated layout even causes the roof design to adapt accordingly. Higharc AI is now available via a beta program to select customers.

Marc Minor, CEO and co-founder of Higharc, explains the driving force behind Higharc AI. “Every year, designers across the US waste weeks or months in decades-old CAD software just to get to a usable 3D model for a home,” he says.

“Higharc AI changes that. For the first time, generative AI has been successfully applied to BIM, eliminating the gap between hand sketches and web-based 3D models. We’re working to radically accelerate the home design process so that better homes can be built more affordably.”

AI demo

In the short video provided by Higharc, we can see a hand-drawn sketch imported into the Autolayout tool. The sketch is a plan view of a second floor, with bedrooms, bathrooms and stairs, with walls, doors and windows indicated. There are some rough area dimensions and handwritten notes denoting room allocation type. The image is then analysed. The result is an opaque overlay, with each room (space) tagged appropriately, and a confirmation of how many rooms it found. There are settings for rectangle tolerance and minimum room areas. The next phase is to generate the rooms from this space plan.

We then switch to Higharc’s real-time rendered modelling and drawing environment, where each room is inserted on the second floor of an existing single-floor residential BIM model. Walls, windows, doors and stairs are added, and materials are applied, all while referencing an image of the sketch. The result is an accurate BIM model, combining traditional modelling with AI sketch-to-BIM generation.

What is Higharc?

Founded in 2018, Higharc develops a tailored cloud-based BIM platform, specifically designed to automate and integrate workflows for the US housing market, streamlining the whole process of designing, selling, and constructing new homes.

Higharc is a service sold to home builders that provides a tailored solution integrating 3D parametric modelling, the auto-creation of drawings, 3D visualisations, material quantities and cost estimates, related construction documents and planning permit applications. AEC Magazine looked at the development back in 2022 (www.tinyurl.com/higharc-aec).

The company’s founders, some of whom were ex-Autodesk employees, recognised that there needed to be new cloud-based BIM tools and felt the US housing market offered a greenfield opportunity, as most of the developers and construction firms in this space had completely avoided the BIM revolution and were still tied to CAD and 2D processes. With this new concept, Higharc offered construction firms easy-to-learn design tools, which even prospective house buyers could use to design their dream homes. As the Higharc software models every plank and timber frame, accurate quantities can be connected to ERP systems for immediate and detailed pricing for every modification to the design.

The company claims its technology enhances efficiency, accelerating a builder’s time to market by two to three times and reducing the timeline for designing and launching new plots by 75% (approximately 90 days). Higharc also claims that plan designs and updates are carried out 100 times faster than with traditional 2D CAD software.

To date, Higharc has raised $80 million and has attracted significant investment and support from firms such as Home Depot Ventures, Standard Investments, and former Autodesk CEO Carl Bass. The company has managed to gain traction in the US market and is being used to build over 40,000 homes annually, representing $19 billion in new home sales volume.

While Higharc’s first go-to-market was established house building firms, the company has used the money raised to expand its reach to address those who want to design and build their own homes. The investment by Home Depot would also indicate that the system will integrate with the popular building merchant, so self-builders can get access to more generic material supply information. The company also plans to extend the building types it can design, eventually adding retail and office to its residential origins.

In conversation

In previous conversations with Higharc, it became apparent that the company had become successful – almost too successful – as onboarding new clients to the system was a bottleneck. Obviously, every house builder has different styles and capabilities which have to be captured and encoded in Higharc, but there was also the issue of digital skill sets. Typically, firms opting to use Higharc were not traditional BIM firms – they were housebuilders, more likely to use AutoCAD or a hand-drawn sketch than to have much understanding of BIM or

‘‘ The only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up ’’

modelling concepts. It turns out that the AI sketch tool originated out of a need to include the non-digital, but highly experienced, house building workforce.

After the launch, AEC Magazine caught up with co-founder Michael Bergin and company CEO Marc Minor to dig a little deeper into the origin of the AI tool and how it’s being used. We discovered that this is probably the most useful AI introduction in any BIM solution we have seen to date, as it actually solves a real-world problem – not just a nice-to-have piece of demoware.

Marc Minor: The sketch we used to illustrate at launch is a real one, from one of our customers. We have a client, a very large builder in Texas, who builds 4,000 houses per year just in Texas. They have a team of 45 or so designers and drafters, and they have a process that’s very traditional. They start on drawing boards, just sketching. They spend three months or so in conceptual design and eventually they’ll pass on their sketches to another guy who works on the computer, where he models in SketchUp, so they can do virtual prototype walk-throughs to really understand the building, the design choices, and then make changes to it. The challenge here is that it takes a long time to go back and forth. We showed them this new AI sketch-to-model work we were doing, and they gave us one of their sketches for one of their homes that they’re working on. The results blew their mind. They said for them ‘this is huge’. They told us they can cut weeks or months from their conceptual stage and probably bring in more folks at the prototype walk-through stage. It’s a whole new way of interacting with design.

What makes this so special, and is the only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up. Because it’s data first, it means that we can not only generate a whole lot of synthetic data for training rapidly, but we really have a great target for a system like this - taking a sketch and trying to create something meaningful out of the sketch. It’s essentially trying to transform the sketch into our data model. And when you do that, you get all the other features and benefits of the Higharc system right on top of it.

Martyn Day: As the software processes the file, it seems to go through several stages. Is the first form finding?

Technology

Marc Minor: It’s not just form finding, actually, it’s mapping the rooms to particular data types. And those types carry with them all kinds of rules and settings.

Michael Bergin: At the conceptual / sketch design phase these are approximate dimensions. Once you’ve converted the rooms into Higharc, the model is extremely flexible. You can stretch all the rooms, you can scale them, and everything will replace itself and update automatically. We also have a grid resolution setting, so the sketch could even be a bubble diagram, or very rough lines, and you just set the grid resolution to be quite high, and you can still get a model out of that.

Higharc contains procedural logic as to how windows are placed, how the foundation is placed, and the relationships between the rooms. So the interaction you see as the AI processes the sketch and makes the model – placing the windows, doors and the spaces between the rooms – is all coming from rules that relate to the specifications for our builder.

Martyn Day: If doors collide, or designs do not comply with local codes, do you get alerted if you transgress some kind of design rule?

Michael Bergin: We have about 1,000 settings in Higharc that relate to the building and are there to adjust for and align with code compliance issues. When you get into automated rule checking – evaluating and digesting code rules and then applying them to the model – we have produced some exciting results in more of a research phase in that direction. There are certainly lots of opportunities to express design logic and design rules, and we’ll continue to develop in that direction.

Marc Minor: One of the ways we use this is when we go to a home builder we want as a customer. In advance of having a sales chat, we’ll actually go to their website and screenshot one of their floor plans. We’ll pull it into the AI tool and set it up as the house. We want to help folks understand that it’s not as painful and as hard as you might think. The whole BIM revolution happened in commercial; that’s kind of what’s happening in home building now. But 90% or more of all home builders use AutoCAD. We rarely come across Revit.

Martyn Day: I can see how you can bring non-digital housebuilders into the model creation side of things, where before everything would be handled by the computer expert. With this AI tool, does that mean suddenly everyone can contribute to the Higharc model?

Michael Bergin: Yes! That’s extremely important to us. Bringing more of the business into the realm of the design is really the core of our business. How do we bring the purchasing and estimating user into the process of design? How do we bring the operations user, who’s scheduling all of the work to be done on the home, into the design? Because ultimately, they all have feedback. The sales people have feedback. The field team have feedback, but they’re all blocked out. They are always working through an intermediary, perhaps through an email to a CAD operator, and it goes into a backlog. Cutting that distance between all the stakeholders in the design process and the artefact of the design has driven a lot of our development.

It’s exciting to see them engaging in the process, to see new opportunities opening up for them, which I think is broadly a great positive aspect of what’s happening with the AI revolution.

Martyn Day: You have focused on converting raster images, which is hard, as opposed to vector. But could you work with vector drawings?

Michael Bergin: While it would have been easier to use a vector representation to do the same AI conversion work, the reason we focused on raster is that vector would have been quite limiting. It would have blocked us out from using conceptual representations. If our customers are using a digital tool at all, they are building sketches in something like Figjam (www.figma.com/figjam). In this early conceptual design stage, we have not seen Rayon (www.rayon.design) or really any of the new class of tools that the market is opening up for. Our market of US home builders tends to do things the way they have for some decades, and it works well for them, and we are fortunate that they have determined that Higharc is the right tool for their business.

Making it possible for the business process to change has required us to develop a lot of capabilities, like integrating with the purchasing and estimation suite, integrating with the sales team, integrating with ERPs – really mirroring their business. Otherwise, I don’t think we would have an excellent case for adoption of new tools in this industry.

■ www.higharc.com

REALITY MODELLING SPECIAL REPORT

Reality modelling is unlocking new efficiencies in AEC projects, providing highly detailed 3D models across the entire lifecycle. To keep projects on track, rapid data processing is critical, so selecting the right workstation hardware is more important than ever

PHYSICAL MEETS DIGITAL

Reality modelling is reshaping the AEC sector, providing precise 3D models that bridge the physical and digital realms, improving planning, design, construction, and maintenance of projects

Reality modelling is transforming architecture, engineering and construction (AEC), delivering highly accurate, data-rich 3D models from planning and design all the way through to maintenance and operations. It’s being adopted at an unprecedented rate and is set to become a cornerstone of modern construction, bridging the physical and digital worlds like never before.

The applications are incredibly diverse. Reality modelling can provide critical context for new projects, where accurate 3D site data allows designers to integrate new buildings or infrastructure into existing environments more effectively. It also serves as a foundation for retrofit projects, enabling precise planning for renovation and refurbishment.

Tracking construction progress has become more efficient as frequent reality capture helps teams compare current conditions against project schedules.

Construction verification is enhanced by comparing ‘as-built’ conditions with ‘as-designed’ BIM models, allowing discrepancies to be identified early, reducing costly errors. Additionally, reality modelling supports ongoing analysis of settlement, erosion, crack propagation, and other structural or environmental changes.

The technologies used to capture on-site reality are more advanced and accessible than ever before. Tripod-mounted terrestrial laser scanners, which capture precise 3D point clouds of buildings and infrastructure, are now augmented with photogrammetry and handheld SLAM scanners. These versatile scanners can also be mounted on backpacks, drones, or autonomous robots.

Multirotor drones are most common due to their stability and manoeuvrability, even in tight spaces, making them ideal for photogrammetry and close-range LiDAR scanning of buildings and infrastructure. Fixed-wing drones are better suited to large-scale mapping, such as surveying vast construction sites or road networks. Meanwhile, autonomous quadruped robots equipped with scanning technology can capture data on large or complex sites, even outside of standard working hours.

On a single project, reality capture can generate terabytes of raw data. One of the biggest challenges is processing this into usable data, such as registered point clouds, 3D meshes, or even BIM models. All of these processes are computationally intensive, and the speed at which data is processed is critical to construction efficiency and decision-making.

Fast processing ensures that stakeholders have up-to-date information, minimising delays and reducing costly re-work. In fast-paced construction environments, delays in converting reality capture data into usable models can lead to outdated site conditions, misaligned planning, and inefficiencies in project execution.

Rapid processing allows for real-time monitoring of construction progress, enabling teams to detect and address deviations from design specifications before they escalate into major issues. It also enhances safety by identifying hazards or structural concerns as they arise, ensuring prompt intervention.

VISUALISATION - BRINGING REALITY MODELS TO LIFE

Reality modelling produces massive datasets, but raw data alone can be challenging to interpret. This is where visualisation plays a crucial role, transforming complex models into interactive real-time environments that enhance decision-making.

Tools like Twinmotion and Unreal Engine from Epic Games bring reality models to life with high-quality, real-time rendering. This greatly improves communication and makes it easier for stakeholders, clients and project teams to understand site conditions, identify issues early and make informed decisions faster.

Bringing reality models into a real-time viz environment also allows for seamless integration with design data from BIM authoring tools such as Revit, placing proposed buildings in their real-world context, even across cities.

Image alignment in RealityCapture from Epic Games
Visualisation of London’s skyline using Cesium’s 3D geospatial technology. Image courtesy of Bentley Systems

REALITY MODEL BOTTLENECKS

The speed at which reality capture data is processed is critical to project efficiency. We highlight seven key photogrammetry, point cloud and mesh workflows where faster processing can make a huge difference

FILE CONVERSION

Laser scan data often needs to be converted to different formats for interoperability, compliance, visualisation, or efficient storage through compression. The process is computationally intensive but usually only runs on a single CPU core within a workstation, making processor frequency a priority.

IMAGE ENHANCEMENT

In photogrammetry, captured images typically must be enhanced before they can be used to generate a point cloud or mesh. This often involves techniques such as tone stretching and denoising. To improve efficiency, these can be automated and batch-processed using a workstation’s CPU or (sometimes) GPU.
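To make the idea concrete, here is a minimal percentile-based tone stretch in Python with NumPy. It is a simplified sketch of one common enhancement technique, not the algorithm any particular photogrammetry package uses; real pipelines batch thousands of frames and add denoising and colour balancing:

```python
import numpy as np

def tone_stretch(img, low_pct=2.0, high_pct=98.0):
    """Stretch tones so the given percentiles map to the full 0-255 range.

    Hypothetical, simplified pre-photogrammetry enhancement: values below
    the low percentile clip to black, above the high percentile to white.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# A flat, low-contrast frame (values 100-140) regains full dynamic range
flat = np.linspace(100, 140, 256).reshape(16, 16).astype(np.uint8)
out = tone_stretch(flat)
print(out.min(), out.max())  # 0 255
```

Because each frame is independent, a batch of images parallelises trivially across CPU cores, which is why this stage benefits from high core counts.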

POINT CLOUD REGISTRATION

Point cloud registration is a key workflow for laser scanning and involves aligning and merging multiple point clouds into a single, unified coordinate system. The process is typically accelerated by a multi-core workstation CPU, though certain computational phases can be offloaded to the GPU.
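The maths at the heart of the alignment step can be sketched in a few lines. This Python/NumPy toy assumes point correspondences are already known and recovers the rigid transform with the classic SVD-based (Kabsch) solution; real registration engines also search for correspondences, reject outliers and spread the work across many cores:

```python
import numpy as np

def register_rigid(source, target):
    """Recover the rotation R and translation t mapping source onto target.

    Toy Kabsch/Procrustes solve over known correspondences - the core
    maths inside ICP-style point cloud registration.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Recover a known 30-degree rotation and offset from a synthetic "scan"
rng = np.random.default_rng(0)
scan_a = rng.random((100, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 0.5])
scan_b = scan_a @ R_true.T + t_true
R, t = register_rigid(scan_a, scan_b)
print(np.allclose(scan_a @ R.T + t, scan_b))  # True
```

In practice this solve is cheap; it is the nearest-neighbour correspondence search over millions of points, repeated on every iteration, that makes registration so compute-hungry.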

MESH CREATION

Compared to point clouds, mesh models are easier to interpret, making them ideal for visualisation. They also have smaller file sizes, which improves sharing and navigation. However, converting a point cloud into a mesh is computationally intensive. The process can be accelerated by multi-core CPUs and, in some cases, GPUs.

ANALYSIS

Various analyses can be performed on reality modelling data, such as comparing point clouds to monitor construction progress or structural changes over time, or aligning a point cloud with a BIM model for construction verification. These processes are typically handled by the CPU and are largely multi-threaded.
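A toy version of the point-cloud comparison idea, in Python/NumPy, computes each captured point’s distance to its nearest design point and flags deviations beyond a tolerance. It is a brute-force illustration only; production tools use spatial indices (k-d trees, octrees) and spread the work across CPU threads:

```python
import numpy as np

def cloud_to_cloud_distance(as_built, as_designed):
    """Distance from each captured point to its nearest design point.

    Brute-force sketch of deviation analysis; fine for a few thousand
    points, but real datasets need spatial indexing.
    """
    diff = as_built[:, None, :] - as_designed[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# Flag as-built points more than 5 cm from the design (units in metres)
design = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
scan = np.array([[0.01, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.0, 0.04]])
d = cloud_to_cloud_distance(scan, design)
print(d > 0.05)  # [False  True False]
```

Because each point is tested independently, the workload splits cleanly across threads, which is why this stage scales well on multi-core CPUs.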

ALIGNMENT

This is one of the most important steps in photogrammetry, where the software detects and matches key features across overlapping images, estimates camera positions and orientations, then creates a sparse point cloud. The process is computationally intensive and is often accelerated by multi-core CPUs.

AI CLASSIFICATION

Auto-classification uses AI algorithms to intelligently categorise point clouds. The machine learning models are trained on large datasets of terrestrial scans to identify various elements such as floors, doors, ducts, pipes, beams, roads, and curbs. The process typically requires a GPU, but also heavily relies on the CPU.

Visualisation isn’t limited to 2D displays — it can also be experienced in VR, allowing teams to immerse themselves in virtual spaces for planning, construction, and design review. However, bringing reality models into real-time viz tools comes with challenges. Maintaining high accuracy while ensuring smooth real-time performance is critical, especially for VR, where high frame rates are essential. Large, high-resolution meshes — often containing billions of polygons — can be difficult to render efficiently in real time, even with the most powerful workstation GPUs.

To optimise these datasets, models must be structured effectively. One approach is to simplify geometry, split models into multiple parts, and assign Levels of Detail (LoD), but this process can be time-consuming, restricting when visualisation can be effectively used throughout a project.

Another powerful option in Unreal Engine is Nanite, a virtualised mesh technology that dynamically streams and processes only the visible details, eliminating the need for manual LoD adjustments. However, converting a standard mesh into a Nanite mesh is computationally intensive. To streamline the scan-to-visualisation pipeline, a high-performance workstation with a powerful CPU is essential.

For large-scale environments, Cesium 3D Tiles provide an alternative solution, enabling massive 3D datasets to be streamed from a local resource or the cloud and rendered dynamically. By using optimisation techniques, large reality models can be efficiently visualised in real time on powerful workstations, enabling smooth navigation and high-fidelity rendering — all without sacrificing performance.

Construction verification in Verity
Point cloud registration in Leica Cyclone Register 360

CAPTURING THE ROCK

For Pete Kelsey of VCTO Labs, a project to capture Alcatraz in its entirety represented a once-in-a-lifetime opportunity. With a tight three-week schedule, processing data on-site was crucial — made possible only by the Lenovo ThinkStation P8 workstation and its 96-core AMD Ryzen™ Threadripper™ PRO processor.

There must have been some points during the long December nights spent on D Block at Alcatraz Island when Pete Kelsey questioned the series of decisions that had led him to occupy Cell 31.

But it’s safe to assume that what kept him going was not just his infectious enthusiasm for reality capture technologies and projects, but also the once-in-a-lifetime opportunity to capture Alcatraz in its entirety and see the data put to work on preserving this historic landmark.

Kelsey has loved the National Parks of the United States since childhood and has worked pro bono on a number of important reality capture projects for the US National Park Service (NPS), including capturing the USS Arizona Monument at Pearl Harbor in Hawaii, the fossil wall at Dinosaur National Monument in Utah, and the USS Constitution in Boston.

Erosion has taken its toll on the Island and continues to do so. While it’s a natural geological process, it has caused the Island’s cliffs to recede and walkways to collapse into the San Francisco Bay. “This baseline survey could help NPS to triage repairs and maintenance work – basically, deciding what to fix first,” says Kelsey. “And I knew from the start that it would necessitate scanning the whole island, right down to the waterline, preferably at low tide.”

A PUNISHING SCHEDULE

At the same time, he and his team were facing a number of constraints regarding the project. Above all, they had only been granted a three-week window of time to perform the entire capture of Alcatraz. In any case, it was nine months before he could start the work with the necessary permits from the NPS in hand, because seabirds had to fledge and leave their nests before drones could be operated in the area.

A regular tide of visitors arriving and departing from the island was also a complication, even in December. It raised issues when it came to both flying drones and scanning interiors, and Kelsey was also mindful of the need to respect tourists’ vacation time and not disrupt it.

Weather, too, was potentially a problem. Strong winds, choppy waters and dense fogs are year-round occurrences on the San Francisco Bay.

“Alcatraz was a good fit for me,” he says. “It’s a great story, everybody’s heard of it, and at 22.5 acres, the size is no issue for reality capture.” So when he found out through conversations with NPS employees that they needed a way to start measuring the effects of sea-level rise on Alcatraz Island, he jumped at the chance to contribute. Reality capture, he argued, would create a baseline survey against which future studies of the Island could be compared, in order to quantify those effects over time.

This survey needed to be exhaustive in its level of detail. As Kelsey points out, once people know that the data from a reality capture project exists and is publicly owned, they tend to have a lot of questions to ask it. And those questions might lead to all sorts of valuable insights, depending on the inquirer. Geologists might wish to simulate the impact of a major earthquake on the Island. Marine biologists might want to analyse the nesting

habits of seabirds there. “Once that data’s captured and available, there’s no putting the genie back in the bottle,” Kelsey jokes.

Alcatraz closed as a prison in 1963, having become too expensive to maintain. It opened to the public in 1973 and is now a major tourist site that attracts some 1.2 million visitors per year.

All this meant it was a priority for the project team to make the most of whatever access time was available to them. Kelsey quickly realised that meant a core team of around eight people – including himself – would have to stay on the island, rather than rely on the boats that ferry employees and visitors across the Bay to the Island between early morning and 4.30pm. “The NPS told me, ‘Sure, you can stay overnight if you want to – but you’ll all have to sleep in cells.’”

Undeterred, Kelsey and his team pushed on with the project, using a vast array of reality-capture technologies. “There’s no one piece of gear that could do everything we needed it to, to achieve the kind of comprehensive capture we were seeking. It doesn’t exist,” he says. “So that’s why I had to create a sort of Frankenstein-esque collection of skills, people and technologies to get the work done in the three weeks available to us.”

The first step was to set the survey control network – the markers or targets acting as fixed reference points on the island for sensors on drones. Kelsey had already decided on three types of drone-based capture technologies – LiDAR, photogrammetry and multi-spectral imagery – to achieve a complete view of the Island and building exteriors. With those plans in place, the two days set aside for actual flying could not have gone better, he says. “It was perfect, perfect weather – a miracle, really. And the work completed on those two days ticked off the biggest strategic aim for the project, in terms of capturing data to address sea-level concerns.”

With the interiors, Kelsey settled on SLAM LiDAR technology. “The level-of-detail requirement meant terrestrial laser scanning wasn’t necessary – and we’d probably still be there if we’d done it that way,” he jokes.

“We were SLAM scanning from Day One, using handheld scanners you walk around holding. But they could also be hung from drones, which we did using an Elios 3 drone from Flyability, contained in a protective cage so it can bump into ceilings and walls without crashing. We mounted them to backpacks. We even used Spot, the mobile dog robot from Boston

● 1 Pete Kelsey used the Lenovo ThinkStation P8 workstation with 96-cores for extreme multi-tasking

● 2 Spot, the mobile dog robot from Boston Dynamics, took the Emesent Hovermap LiDAR scanner into areas of the buildings that were off limits

● 3 RealityCapture from Epic Games was used to process the bulk of the data

● 4 Detailed LiDAR scan of the Alcatraz Island site, captured by drone

● 5 The AMD powered Lenovo ThinkPad P16s laptop was used for drone flight planning using Sitescan for ArcGIS

Dynamics, to take scanners into areas of the buildings that were totally off-limits, due to asbestos and lead contamination.”

The team Kelsey used included representatives and equipment from a huge range of vendors, including Emesent, Riegl, Freefly Systems, Flyability and Boston Dynamics – all contributing their time and kit to the project on a pro bono basis.

POST-PROCESSING CHALLENGES

All post-processing was conducted on-site on the Island, where connectivity is a real challenge and the Wi-Fi network operates “at dial-up speeds”, according to Kelsey. With terabytes of data to crunch through, this meant the cloud was simply not an option.

Team members would perform a scan, come to the cellhouse office and dump gigabytes of data for post-processing, there and then. “Before we left this remote location, we needed to know that we’d scanned everything we needed to, that the data captured was usable, and that the post-processing had been successful,” he says.

Data was pumped into RealityCapture from Epic Games. “I knew I wanted to use this, because it’s one of the only products I know about that can integrate both LiDAR and photogrammetric data in a single model. With that combination – the geometry of LiDAR and the photorealism of photogrammetry – the resulting models are really second to none,” says Kelsey.

Help with this hefty compute burden soon arrived in the form of the Lenovo ThinkStation P8, a powerful 96-core AMD Ryzen™ Threadripper™ PRO workstation with 256 GB of memory. The machine shouldered the burden immediately and, according to Kelsey, it literally saved the project.

“I will never forget that day in the office at Alcatraz, with RealityCapture crunching away on our capture data, probably the photogrammetry,” Kelsey recalls. “I brought up the Task Manager to see 96 blue squares, all running at 100% at 4.7 GHz. I don’t mind admitting that I squealed with excitement.”

Soon, he was confidently running simultaneous workloads; for example, with SLAM scanning data captured using Emesent’s Hovermap devices and Aura software: “We found we could get 12 to 15 sessions of Aura running on this monster of a workstation at the same time. This was a huge benefit to our schedule.”

In fact, that workstation was a big factor in the project’s successful, on-schedule finish. The result is a unique, 3TB dataset offering multiple layers of detail, thanks to the diverse tools used. It is capable of supporting all manner of applications, from efficient park management to historical research. And it doesn’t stop there. The reality mesh and point cloud datasets are currently being combined in Unreal Engine to develop a hyper-realistic, immersive real-time experience.

But the thing that Kelsey is perhaps most looking forward to is sharing his passion for Alcatraz Island with the rest of the world. “We now have the data for the NPS to create a virtual Alcatraz that anyone can visit, regardless of where they live. I can’t say when or if that will happen, but the potential for outreach and education in the wider world is inspiring.”

STORIES AND SECRETS

At HOK’s Centre Block project in Ottawa, the race is on to capture 25,000 historical assets and the building in which they reside, with stakeholders able to explore the massive reality model via an Unreal Engine hub.

In the multi-year project to preserve, restore and modernise one of Canada’s most iconic buildings, HOK director of design technology Mark Cichy has learnt to expect the unexpected.

After all, Centre Block is not just the main building of the Canadian parliamentary complex on Parliament Hill in Ottawa, Canada. It’s also a physical manifestation of the nation’s democratic history, with many stories – and secrets – to tell. Every day brings new surprises, says Cichy: antique mouldings get uncovered, hazardous materials are found, animal remains get dug up.

“But the important thing is that there should be no surprises for the client,” he says. From the start, the project has involved painstaking work to capture not only the building and its spaces, but also more than 25,000 movable and fixed heritage assets found inside, ranging from radiators to works of art.

“On this project, there is significant investment in recording past states,” Cichy explains. “So, a key client requirement is that we compile a complete record of the building over time, capturing each and every space before anyone moves in to dismantle it, and then while it is being dismantled, and then while rooms and assets are being remediated and reinstalled.”

This work calls for a wide range of reality capture technologies. LiDAR drones are used for capturing views of the site and of Centre Block’s exterior. Inside the building, tripod-mounted laser scanners are used, supplemented by SLAM devices in tight spaces or areas that are otherwise difficult to access.

Reality capture data, in the form of both photogrammetry meshes and point clouds, are combined with architectural, structural, and MEP design content from Revit and Rhino.

Due to Centre Block security restrictions, wireless networks and personal devices are not permitted within the building itself. Reality capture data must therefore leave the building on security-checked capture devices and be physically transported to the project team’s integrated project office, a short distance from the main site, where the data is processed. AI plays a big part in streamlining data processing, identifying hazards, and automating documentation.

Data is then pumped into a massive hub powered by Unreal Engine, which is capable of ingesting many terabytes of data in real time. For interactive visualisation, Nanite technology dynamically renders only the visible details, eliminating the need for manual Level of Detail (LoD) adjustments. Everything is processed on the hub, with pixels streamed to any device with an internet browser.

The result is a dynamic resource that enables multiple project stakeholders to concurrently explore highly detailed 3D models at various stages, assess the impact of design choices in real time, take VR walkthroughs and interactively review and provide feedback on the project’s progress.

The computational demands are considerable. To test and scale the Unreal Engine build, HOK is currently using a multi-GPU Lenovo ThinkStation workstation as a “sandbox” environment.

It’s an impressive achievement, both in terms of technology deployment and historic preservation. And Cichy sees the Unreal Engine-based hub continuing to play a valuable long-term role in how the Canadian government uses Centre Block, long after the project is completed, in areas such as facility management, building operations, office planning and staff distribution.

Aerial view of Canadian parliamentary complex

CRUSHING THE COMPUTATION

Reality modelling is one of the most computationally demanding tasks in AEC. With the right workstation hardware, teams can save hours, if not days, of processing time, accelerating project timelines. We break down the key components and highlight what to consider when specifying a workstation.

GPU (GRAPHICS PROCESSING UNIT)

In reality modelling, the Graphics Processing Unit (GPU) serves two key roles: 3D model visualisation and data processing.

Visualisation can take place within dedicated reality modelling applications or in real-time game engine environments like Unreal Engine, where the GPU faces greater demands due to the emphasis on realism and interactivity, especially in VR.

For data processing tasks such as mesh reconstruction or AI classification, more powerful GPUs can speed up workflows. However, because these processes share some of the workload with the CPU, the performance gains from a high-end GPU are often less significant compared to a fully GPU-driven task like ray-traced rendering. As a result, entry-level workstation GPUs can often provide better value for money than pricier high-end models.

For reality modelling there is currently little reason to invest in more than one GPU unless, perhaps, the workstation is virtualised to serve multiple users.

STORAGE (SSD)

Fast NVMe Solid State Drives (SSDs) are crucial for handling massive reality modelling datasets, often reaching terabytes in size. Processing this data can also generate huge temporary files, making high-endurance SSDs even more essential.

To optimise performance, separate SSDs are often recommended — one for reading, one for writing, and sometimes a third for the operating system and applications. Advanced tower workstations, such as the Lenovo ThinkStation P8, support multiple SSDs directly on the motherboard and also offer front-accessible SSDs and PCIe add-in boards that can host multiple drives.

For additional flexibility, multiple SSDs can be configured in RAID arrays. RAID 0 (striping) enhances performance, while RAID 1 (mirroring) provides redundancy, safeguarding data in case of a drive failure. Given that it can take days to process the most complex reality modelling datasets, this serves as a shrewd insurance policy. Meanwhile, traditional Hard Disk Drives (HDDs) remain the most cost-effective option per gigabyte, making them ideal for archiving.

PROCESSOR (CPU)

Many reality modelling workflows are multi-threaded, and so benefit from multiple CPU cores. However, more cores can mean diminishing returns, especially since certain stages in some workflows rely on just one core. Other tasks are entirely single-threaded, making it essential to strike the right balance between core count and clock speed (GHz). CPUs with large caches — high-speed on-chip memory for frequently accessed data — can further enhance performance. Additionally, AMD Simultaneous Multithreading (SMT), which enables a single core to run multiple threads, can impact processing time; disabling it can sometimes lead to faster execution.

AMD Ryzen Threadripper PRO processors offer a powerful combination of high core counts, fast clock speeds, and large caches. With models ranging from 12 to 96 cores, they cater to a variety of workloads. When selecting a processor, multi-tasking should also be considered, as processing data in parallel from multiple sources in multiple applications can provide significant productivity gains.
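Those diminishing returns follow directly from the serial fraction of a workflow, a relationship known as Amdahl’s law. A quick back-of-the-envelope sketch (the 90% parallel figure below is an illustrative assumption, not a measured value for any particular application):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when only part of a workload scales across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 90% of a reconstruction pipeline is multi-threaded (illustrative only)
for n in (12, 32, 96):
    print(f"{n} cores: {amdahl_speedup(0.9, n):.1f}x")  # roughly 5.7x, 7.8x, 9.1x
```

Even with 96 cores, a 10% serial stage caps the overall speedup below 10x, which is why clock speed and cache size still matter alongside core count.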

MEMORY (RAM)

Workstation memory can have a big impact on performance in reality modelling software. When handling large datasets, or running multiple operations in parallel, running out of memory can significantly slow performance, forcing the workstation to temporarily page data to the SSD to complete calculations. Memory performance is equally important, especially when multi-tasking. It is determined by memory bandwidth, which depends on both the number of supported memory channels and the frequency of the memory modules. AMD Ryzen™ Threadripper™ PRO processors offer a strong balance of both, supporting 8-channel memory with speeds of 4,800 MT/s. To fully utilise this bandwidth, the number of memory modules should match the number of available channels.
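The bandwidth arithmetic is simple: channels multiplied by transfer rate multiplied by bus width. A sketch, assuming the standard 64-bit DDR5 channel width and treating the quoted 4,800 figure as a transfer rate in MT/s:

```python
def peak_bandwidth_gbs(channels: int, transfer_rate_mts: float,
                       bus_width_bits: int = 64) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return channels * transfer_rate_mts * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(8, 4800))  # 8-channel Threadripper PRO: 307.2 GB/s
print(peak_bandwidth_gbs(2, 4800))  # typical 2-channel desktop: 76.8 GB/s
```

The four-fold gap between a 2-channel desktop and an 8-channel workstation is exactly what shows up when several memory-hungry jobs run at once.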

Additionally, ECC memory, standard on AMD Ryzen Threadripper PRO, helps prevent crashes — critical for lengthy calculations where losing hours, or even days, of newly processed data is not an option.

REALITY TECH: WORKSTATIONS

Lenovo workstations powered by AMD processors, including the AMD Ryzen™ Threadripper™ PRO with up to 96 high-performance cores, make light work of the most demanding reality modelling workflows

Reality modelling presents some of the most demanding workflows in AEC. And as reality capture hardware continues to improve in resolution, datasets grow larger, and capture frequency increases, these demands continue to intensify.

Processing LiDAR or photogrammetry data can take hours, even days. On a busy construction site, every second counts, making fast, accurate data delivery essential for decision-making, minimising delays, ensuring quality control, and reducing costs.

Powered by the AMD Ryzen™ Threadripper™ PRO processor, the Lenovo ThinkStation P8 is engineered to handle the most demanding reality modelling workloads. It delivers the speed and efficiency needed to keep projects on track, whether in the office or on-site.

• Flexible and expandable highend desktop or rack-mounted workstation for multi-application, multi-threaded reality modelling workflows with huge datasets

• Can be optimised for CPU or GPU accelerated workflows, including high-end visualisation

Ryzen™ Threadripper PRO 7000 WX Series processor with 12, 16, 24, 32, 64 or 96 cores

What sets the ThinkStation P8 apart is its versatility — offering a wide range of AMD Ryzen Threadripper PRO processors that simplify fleet management for IT admins. Choose from 12, 16, 24, or 32 cores for mainstream workflows, or scale up to 64 or 96 cores for the most demanding workloads and extreme multi-tasking.

With up to 1TB of DDR5 memory, the P8 seamlessly handles massive datasets, while up to 8 M.2 SSDs provide extensive storage options, allowing users to prioritise both performance and redundancy.

With up to 10GbE Ethernet as standard and optional 25GbE, it also enables super-fast transfer of colossal reality modelling datasets across the network.

Reality modelling doesn’t have to be confined to the office. The P8 can be rack-mounted for remote access, delivering huge computational power from any location.

• Value-driven flexible and expandable high-end desktop workstation for multi-application, multi-threaded reality modelling workflows with huge datasets

• Can be optimised for CPU or GPU accelerated workflows, including high-end visualisation

Ryzen™ Threadripper™ PRO 5000 or 3000 WX Series processor with 12, 16, 24, 32, or 64 cores

For those who need powerful computational capabilities on the go, Lenovo ThinkPad mobile workstations are an excellent choice. The ThinkPad P14s and ThinkPad P16s strike the perfect balance between performance and portability. While they may not match the scalability of the ThinkStation P8, they still pack a serious punch with AMD Ryzen 7 PRO 7040U Series processors offering up to 8 cores and 64GB of RAM.

Lenovo workstations are more than just high-performance hardware — they are built to withstand the rigours of professional use. With legendary build quality, superior thermal management, and impressive reliability, these systems are designed to run cool, quiet, and consistently, even in the most demanding environments.

• Compact, lightweight, sturdy, and energy efficient 14-inch (2.8K) mobile workstation for entrylevel reality modelling workflows, especially on construction sites

• Emphasis on CPU computational workflows, with GPU used for viewing reality models

Ryzen™ 5 / 7 PRO 7040U Series processor with 6 or 8 cores

• Compact, lightweight, sturdy, and energy efficient 16-inch (4K) mobile workstation for entry-level reality modelling workflows, especially on construction sites

• Emphasis on CPU computational workflows, with GPU used for viewing reality models

Software

Infraspace: reimagining civil infrastructure design

Greg Corke caught up with Andreas Bjune Kjølseth, CEO of Infraspace, to explore how the civil engineering software startup is looking to transform early-stage design using generative design and AI

In the world of infrastructure design, traditional processes have long been plagued by inefficiencies and fragmentation. That’s the view of engineer turned software developer Andreas Bjune Kjølseth, CEO of Norwegian startup Infraspace. “Going from an idea to actually having a decision basis can be a quite tedious process,” he explains.

Four years ago, Kjølseth left his career in civil engineering consulting and founded Infraspace to develop a brand-new generative design tool for civil infrastructure alignments – road, rail or power networks. In his years as an engineer and BIM manager, Kjølseth was left frustrated by the limitations of traditional processes. Civil engineers commonly must navigate multiple software tools, he explains – sketching in one platform, generating 3D models in another, using GIS for analysis of land take and environmental impact, and then manually assembling, comparing and presenting alternatives.

Infraspace aims to unify this fragmented workflow within a single, cloud-based platform. The software is primarily designed to tackle the initial phases of linear civil infrastructure projects, using an outcome-based approach, as Kjølseth explains. “Users can define where they want the generative AI engine to explore alternatives and define the outcomes, such as, ‘I want options with the least possible construction costs, shortest travel time or length, and the least land take in certain areas.’ Then the algorithm will quickly explore opportunities to make better solutions.”

The Infraspace cloud platform generates thousands of alternatives within minutes, enabling engineers to explore options they might not have considered manually.

Design options are displayed via an intuitive web-based interface, featuring a 3D model alongside an analytics dashboard with key performance indicators (KPIs) such as cost, route length, land take, and cut-and-fill volumes.

The system can also be used to assess the environmental impact of proposed designs, including carbon footprint, viewshed, noise, and which buildings or areas might be affected.

Based on this information, engineers can quickly compare and evaluate multiple design alternatives, then use the software to refine designs further. And because the software is cloud-based, it is easier for multiple stakeholders to understand the consequences quicker, explains Kjølseth.

“The typical project manager often has limited access to advanced CAD, BIM or analysis software. With Infraspace they can quickly log into their projects in their browser and see the 3D models together with the analytics instantly,” he says. “It’s also possible to invite external stakeholders into the project to explore a selected number of alternatives.”

‘Users can define where they want the generative AI engine to explore alternatives and define the outcomes, such as, “I want options with the least possible construction costs, shortest travel time or length, and the least land take in certain areas”’ – Andreas Bjune Kjølseth, CEO, Infraspace

Project seeds

To start a project, users can pull in data from various sources, such as Mapbox or Google, or upload custom digital terrain models, bedrock surface models, or GIS data.

The design can then be kickstarted in several ways. An engineer could simply define the start and end point of an alignment, then let the software work out the best alternatives based on set goals. Alternatively, an engineer can define geometric constraints, such as sketching a corridor or marking environmentally protected areas as off-limits.
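Infraspace’s engine is proprietary, but the outcome-based idea, scoring each candidate alignment against weighted KPIs and keeping the best, can be sketched generically. Everything below (the KPI names, weights and candidate figures) is hypothetical illustration, not Infraspace’s API or data:

```python
# Hypothetical candidate alignments with their key performance indicators
candidates = [
    {"name": "A", "cost_meur": 48, "length_km": 12.1, "land_take_ha": 9.0},
    {"name": "B", "cost_meur": 52, "length_km": 11.4, "land_take_ha": 6.5},
    {"name": "C", "cost_meur": 45, "length_km": 13.0, "land_take_ha": 11.2},
]

# Engineer-defined priorities; lower is better for every KPI here
weights = {"cost_meur": 1.0, "length_km": 2.0, "land_take_ha": 0.5}

def score(option: dict) -> float:
    """Weighted sum of KPIs; the lowest score wins."""
    return sum(weights[k] * option[k] for k in weights)

best = min(candidates, key=score)
print(best["name"], round(score(best), 1))  # C 76.6
```

In practice the generative engine both produces the candidates and ranks them, but the ranking step is no more mysterious than this: make the engineer’s priorities explicit, then let the machine search.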

The system is not limited to blank slate designs. It can also import alignments from traditional infrastructure design tools like AutoCAD Civil 3D and use them as a basis for optimisation. As Kjølseth explains, some engineers are even just using the platform for its analytical capabilities, to get fast feedback on traditionally crafted designs. The software offers import / export for a range of formats including LandXML, IFC, OBJ, BCF, glTF, DXF and others.

1 The generative AI engine can deliver thousands of design options in minutes

2 Infraspace can be used on a variety of civil infrastructure alignment projects – road, rail or power networks

3 Design options are presented as a 3D model alongside a KPI analytics dashboard

4 Infraspace can quickly assess the potential environmental impact of proposed designs

Adaptability across geographies

Infraspace is not hard coded for specific national design standards, but as Kjølseth explains, the platform captures the fundamental mechanisms of infrastructure design. It allows engineers to define geometric constraints, set curve radii, specify vertical alignment parameters, and adapt to different project types including roads, railways, and power transmission lines. It can handle projects with varying levels of design freedom, from short access roads to expansive highway corridors.

Designed by engineers, for engineers

For civil engineers seeking to streamline their design process, reduce environmental impact, and explore more design options, faster, Infraspace offers an interesting alternative to traditional fragmented workflows.

Most importantly, with a team combining civil engineering expertise and software development skills, it’s clear the company understands the nuances of infrastructure design.

While Infraspace is currently focused on early-stage design and optimisation, its ambitions extend beyond. “We will continue to add more features as we go,” says Kjølseth. “I see that generative design as a concept and the platform we have, can definitely be applied to many use cases — during the latter stages of a project, and to even more complex problems.”

■ www.infraspace.tech

Driving Collaboration

Every Step of the Way

From more efficient feedback processes to better accuracy, Bluebeam is helping structural and civil engineers work faster –and smarter

Pinnacle Consulting Engineers is a UK-based construction consultancy that has been providing global services in structural and civil engineering for more than 25 years. Working in sectors including infrastructure, logistics, mission-critical systems, commercial and retail, they do everything from feasibility studies to detailed design.

But with teams across multiple locations, something was holding them back: their software. Engineers didn’t have much power to manage markups and update drawings. Productivity and communication needed to improve. And when the COVID-19 pandemic hit, it wasn’t possible anymore to print everything out and do things by hand.

Pinnacle found Bluebeam in 2020 and they haven’t looked back since. Their engineers now work faster and provide quicker responses. Iterations and rework were reduced by 50%, allowing teams to provide clients with an even better customer experience. Pinnacle became more efficient, more nimble, and more competitive simply by looking at their software stack and key processes.

Customisation is key

With more functionality and ways to enhance productivity, teams at Pinnacle found that Bluebeam makes it much easier to manage projects – especially when communicating ideas and design changes.

“Bluebeam is designed for the construction and engineering industry, which means there are lots of features already set up that fit what you need, including a range of customisation options,” says senior civil engineer Adam Prais.

“For example, we have a lot of specific line types and presets in our tool chest. These match how we draw things in CAD. It speeds up communication, as we don’t need to explain a curb type or a drainage solution, and everything is coded and consistent. It means people can read suggestions and amendments instantly because it reflects the way we already work.”

Flexible, agile working

Along with bringing teams closer together, Bluebeam helps Pinnacle work more interactively with clients, according to principal structural engineer Christos Angelidakis.

“It’s a very fast way of working. You can show a drawing to a client and work on it together in a meeting, tweaking layers and marking it up,” he said. “Every design change can be highlighted clearly, and it means that we don’t have to go to CAD every time, which takes longer.”

“Clients really like this,” he continued. “They can work closely with us, see their feedback being added in real time and get a drawing much faster. It’s useful during the project when something needs to change on site, too. We can amend the drawing and get it back to the site team quickly, so there are no delays. This then gets passed to our BIM team to update the model or original design drawings.”

Angelidakis adds that working in this way makes the decision-making process much smoother. “We only update the master drawings once things are agreed,” he said. “Essentially, it means that we are working with the drawing in Bluebeam before working in the master files. This can save two or three days in the process, and the client is happy because they don’t have to wait as long to make their decisions.”

Workflows are smoother, too. “We don’t have to do detailed briefs for our CAD designers, and we’re not going back and forth with the client and team members all the time, as everything we need is on the plan,” says Angelidakis.

Sketching tools add speed and accuracy to projects

The sketching and measurement functions within Bluebeam also improve efficiency.

“At concept stage, we do the initial drawing work [in Bluebeam] to make changes much faster than we would be able to by hand or using CAD. The idea is to give the team and client more certainty before time and effort is spent on the detailed design,” says Prais.

“One of the most useful tools is layering. You can add drawings on top of each other and dim one of the layers so that you can easily track changes or test different options between drawings. The markup tools are so clear that we can do work like this in about a third of the time, and there are fewer mistakes or requests for clarification during projects. Add in the ability to import and transfer details between drawings, and it’s easy to keep everyone on track with plans.”

Working in a paperless environment

Going back to why Pinnacle initially picked Bluebeam, Prais described the ease of working remotely with the tool: “It’s helped us to work more effectively, without needing to print out drawings and creating hand markups. It means that we’re not dependent on an office infrastructure, using less paper and not needing access to things like a printer.”

A competitive edge

Since using Bluebeam, Pinnacle has increased its competitiveness, smoothing communication processes and making projects more efficient.

“The benefits to both internal and external project communication are invaluable. Bluebeam helps us bring different disciplines together to discuss projects and make edits and decisions in a collaborative way,” says Angelidakis. “It helps us to meet deadlines more easily as we save days of work on our projects, because we can get information out to the team faster – keeping projects on track and managing approvals.”

“We don’t need to go back and forth with clients, as we can work together in the same tool and make it very clear on what decisions need to be made,” says Angelidakis. “Ultimately, Bluebeam gives us confidence in our projects – we have all the information we need at the tips of our fingers, whether we’re on site or in the office.”

AI agents for civil engineers

Anande Bergman explores how AI agents can be used to create powerful solutions to help engineers work more efficiently but still respect their professional responsibilities

As a structural engineer, I’ve watched with excitement how AI is transforming various industries. But I’ve also noticed our field’s hesitation to adopt these technologies — and for good reason. We deal with safety-critical systems where reliability is a requirement.

In this article, I’ll show you how we can harness AI’s capabilities while maintaining the reliability we need as engineers. I’ll demonstrate this with an AI agent I created that can interpret truss drawings and run FEM analysis (code repository included), and I’ll give you resources to create your own agents.

The possibilities here have me truly excited about our profession’s future! I’ve been in this field for years, and I haven’t been this excited about a technology’s potential to transform how we work since I first discovered parametric modelling.

What makes AI agents different?

Unlike traditional automation that follows fixed rules, AI agents can understand natural language, adapt to different situations, and even solve problems creatively. Think of them as smart assistants that can understand what you want and get it done.

For example, while a traditional Python script needs exact coordinates, boundary conditions, and forces to analyse a truss, an AI agent can look at a hand-drawn sketch or AutoCAD drawing and figure out the structure’s geometry by itself (fig 1). It can even request any missing information needed for the analysis.

This flexibility is powerful, but it also introduces unpredictability — something we engineers typically try to avoid.

The rise of specialised AI agents

It’s 2025, and you’ve probably heard of ChatGPT, Claude, Llama, and other powerful Large Language Models (LLMs) that can do amazing things, like being incredibly useful coding assistants. However, running these large models in production is expensive, and their general-purpose nature sometimes makes them underperform in specific tasks. This is where specialised agents come in. Instead of using one large model for everything, we can create smaller, fast, focused agents for specific tasks — like analysing drawings or checking building codes. These specialised agents are:

• More cost-effective to run

• Better at specific tasks

• Easier to validate

Agents are becoming the next big thing. As Microsoft CEO Satya Nadella points out, “We’re entering an agent era where business logic will increasingly be handled by specialised AI agents that can work across multiple systems and data sources”.

For engineering firms, this means we can create agents that understand our specific workflows and seamlessly integrate with our existing tools and databases.

The engineering challenge

Here’s our core challenge: while AI offers amazing flexibility, engineering demands absolute reliability. When you’re designing a bridge or a building, you need to be certain about your calculations. You can’t tell your client “the AI was 90% sure this would work.”

On the other hand, creating a rule-based engineering automation tool that can handle all kinds of inputs and edge cases while maintaining 100% reliability is a significant challenge. But there’s a solution.

Bridging the gap: reliable AI agents

We can combine the best of both worlds by creating a system with three key components (fig 2):

1) AI agents handle the flexible parts: understanding requests, interpreting drawings, and searching for data.

2) Validated engineering tools perform the critical calculations.

3) Human in the loop: You, the engineer, maintain control — verifying data, checking results, and approving modifications.
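In code, that separation of concerns might look like the minimal sketch below. The function names and stub bodies are mine, for illustration only: the agent proposes, a validated routine computes, and the engineer approves.

```python
def agent_interpret(drawing) -> dict:
    """Flexible part: an LLM turns a drawing into structured model data.
    Stubbed here; in practice this step calls the LLM."""
    return {"nodes": [(0, 0), (4, 0), (2, 3)],
            "elements": [(0, 1), (1, 2), (2, 0)]}

def validated_axial_stress(force_n: float, area_mm2: float) -> float:
    """Reliable part: a deterministic, tested engineering calculation (MPa)."""
    return force_n / area_mm2

def engineer_approves(model: dict) -> bool:
    """Human in the loop: the engineer verifies before results are used.
    Stubbed as always-true; in a real tool this is an explicit review step."""
    return True

model = agent_interpret("truss_sketch.png")
if engineer_approves(model):
    print(validated_axial_stress(180_000, 1_030))  # about 174.8 MPa
```

The point is where the boundary sits: the LLM never performs the stress calculation itself, it only assembles inputs for code we already trust.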

Let me demonstrate this approach with a practical example I built: a truss analysis agent.

Engineering agent to analyse truss structures

Just as an example, I created a simple agent that calculates truss structures using the LLM Claude Sonnet. You give it an image of the truss, it extracts all the data it needs, runs the analysis, and gives you the results.

You can also ask the agent for any kind of information, like material and section properties, or to modify the truss geometry, loads, forces, etc. You can even give it some more challenging problems, like “Find the smallest IPE profile so the stresses are under 200 MPa”, and it does!
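That kind of request is tractable because, once the validated calculation exists, the search itself is simple. A hedged sketch of what “find the smallest IPE profile so the stresses are under 200 MPa” boils down to for a single axially loaded member (the cross-sectional areas are standard IPE values; the load is illustrative, and a real check would also cover bending and buckling):

```python
# Cross-sectional areas of standard IPE profiles, in mm2
IPE_AREAS = {"IPE 80": 764, "IPE 100": 1030, "IPE 120": 1320,
             "IPE 140": 1640, "IPE 160": 2010}

def smallest_ipe(axial_force_n: float, stress_limit_mpa: float) -> str:
    """Return the lightest IPE profile keeping axial stress under the limit."""
    for name, area in sorted(IPE_AREAS.items(), key=lambda kv: kv[1]):
        if axial_force_n / area <= stress_limit_mpa:
            return name
    raise ValueError("No profile in the table satisfies the limit")

print(smallest_ipe(180_000, 200))  # IPE 100: 180 kN / 1030 mm2 is about 175 MPa
```

The agent delegates exactly this kind of loop to validated code; the LLM’s job is only to understand the request and decide that this is the loop to run.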

The first time I saw this working I couldn’t help but feel that childlike excitement engineers get when something cool actually works. Here is where you start seeing the power of AI agents in action.

It is capable of interpreting different types of drawings and creating a model, which saves a lot of time in comparison with the typical Python script where you would need to enter all the node coordinates by hand, define the elements and their properties, loads, etc.

Additionally, it solves problems using information I did not define in the code, like the section properties of IPE profiles, the material properties of steel, or the process for choosing the smallest beam to fulfil the stress requirement. It does everything by itself. N.B. You can find the source code of this agent in the resources section at the end.

In the video in fig 3, you can see the app I made using VIKTOR.AI.

How does it work: an overview

Now let’s look behind the screen to understand how our AI agent works, so you can make one yourself.

In the image below (fig 4) you can see that in the centre you have the main AI agent, the brains of the operation. This is the agent that chats with the user and accepts text and images as input.

Additionally, it has a set of tools at its disposal, including another AI Agent, which it uses when it believes they are needed to complete the job:

• Analyse Image: AI Agent specialised in interpreting images of truss structures and returning the data needed to build the FEM model.

• Plot Truss: A simple Python function to display the truss structures.

• FEM Analysis: Validated FEM analysis script programmed in Python.

The Main agent

The Main agent is powered by Claude 3.7 Sonnet, the latest LLM provided by Anthropic at the time of writing. Basically, you are using the same model you chat with when using Claude in the browser, but you access it from your code via their API, give the model clear guidelines on how to behave, and provide it with a set of tools it can use to solve problems.

You can also use other models like ChatGPT, Llama 3.x, and more, as long as they support tool calling natively (using functions). Otherwise, it gets complicated to use your validated engineering scripts.

For example, here’s how we get an answer from Claude using Python (fig 5).

Let’s break down these key components:

• SYSTEM MESSAGE: This is a text that defines the agent’s role, behaviour guidelines, boundaries, etc.

• TOOLS_DESCRIPTION: Description of what tools the agent can use, their input and output.

• messages: This is the complete conversation, including all previous user and assistant (Claude) messages, so Claude knows the context of the conversation.
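Fig 5 shows the actual call; the sketch below only assembles those three pieces into a request payload, with placeholder system text and a single placeholder tool of my own naming. In live code the payload is passed to `client.messages.create(...)` from Anthropic’s Python SDK (the model name string is an assumption; check Anthropic’s documentation for current names):

```python
SYSTEM_MESSAGE = "You are a truss analysis assistant. Use the provided tools."
TOOLS_DESCRIPTION = [{
    "name": "fem_analysis",  # hypothetical tool for illustration
    "description": "Run a validated FEM analysis on a truss model",
    "input_schema": {"type": "object",
                     "properties": {"model": {"type": "object"}}},
}]

# The complete conversation so far, so the model has context
messages = [{"role": "user",
             "content": "Analyse the truss in the attached image."}]

request = {
    "model": "claude-3-7-sonnet-latest",  # assumption: verify against docs
    "max_tokens": 2048,
    "system": SYSTEM_MESSAGE,
    "tools": TOOLS_DESCRIPTION,
    "messages": messages,
}
# Live code: response = anthropic.Anthropic().messages.create(**request)
print(sorted(request))
```

Keeping the payload construction separate from the network call also makes it easy to unit-test the agent’s configuration without spending API credits.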

Tools use

One of the most powerful features of Claude and other modern LLMs is their ability to use tools autonomously. When the agent needs to solve a problem, it can decide which tools to use and when to use them. All it needs is a description of the available tools, like in fig 6.

The agent can’t directly access your computer or tools — it can only request to use them. You need a small intermediary function that listens to these requests, runs the appropriate tool, and sends the results back. So don’t worry, Claude won’t take over your laptop... yet ;-)
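That intermediary is essentially a dispatch table: look up the requested tool, run it, and send the result back. A minimal sketch, with stub tools standing in for this article’s real plotting and FEM functions, and the request structure simplified from Anthropic’s actual tool_use / tool_result message format:

```python
def plot_truss(model: dict) -> str:
    # Stand-in for the real plotting function
    return f"plotted {len(model['nodes'])} nodes"

def fem_analysis(model: dict) -> dict:
    # Stand-in for the validated FEM script; result value is illustrative
    return {"max_stress_mpa": 174.8}

TOOLS = {"plot_truss": plot_truss, "fem_analysis": fem_analysis}

def handle_tool_request(name: str, arguments: dict):
    """Run the tool the agent asked for; the result is fed back to the LLM."""
    if name not in TOOLS:
        # The agent sees the error message and can correct itself
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name](**arguments)

# Simulated request, as if extracted from a 'tool_use' content block
result = handle_tool_request("fem_analysis",
                             {"model": {"nodes": [(0, 0), (4, 0)]}})
print(result)
```

Because only the functions registered in the dispatch table can ever run, the agent’s reach into your system stays exactly as wide as you make it.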

The Analyse image agent

Here’s a fun fact: the agent that analyses truss images is actually another instance of Claude! So yes, we have Claude talking to Claude (shhh.... don’t tell him). I did this to show how agents can work together, and honestly, it was the simplest way to get the job done.

This second agent uses Claude’s ability to understand both images and text (www.tinyurl.com/claude-vision). I give it an image and ask it to return the truss data in a specific JSON format that we can use for FEM analysis. See fig 7 for the prompt I use.
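The exact prompt is in fig 7, but the mechanics look roughly like this: the image is base64-encoded into a vision message, and the JSON is pulled out of the reply. The message layout follows Anthropic's documented vision format; the helper names and the tiny JSON payload are placeholders, not the article's actual schema:

```python
import base64
import json
import re

def build_vision_message(image_bytes: bytes, prompt: str) -> dict:
    """A user message combining a PNG image and a text instruction."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": prompt},
        ],
    }

def extract_truss_json(reply_text: str) -> dict:
    """Pull the first JSON object out of the model's free-text reply."""
    match = re.search(r"\{.*\}", reply_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON found in reply")
    return json.loads(match.group(0))
```

The extraction step matters because models often wrap the JSON in explanatory prose, e.g. `extract_truss_json('Here it is: {"nodes": [[0, 0], [4, 0]]}')`.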

I’m actually quite impressed by how well Claude can interpret truss drawings right out of the box. For complex trusses, though, it sometimes gets confused, as you can see in the test cases later.

This is where a specialised agent, trained specifically for analysing truss images, would make a difference. You could create this using machine learning or by fine-tuning an LLM. Fine-tuning means giving the model additional training on your specific type of data, making it better at that task (though potentially worse at others).

Test case: book example

The first test case is an image from a book (fig 8). What’s interesting is that the measurements and forces are given with symbols, and the values are then provided below. You can also see the x and y axes with arrows and numbers, which could be distracting.

The agent did a very good job. Dimensions, forces, boundary conditions, and section properties are correct. The only issue is that element 8 is pointing in the wrong direction, which is something I asked the agent to correct, and it did.

Test case: AutoCAD drawing

This technical drawing has many more elements than the first case (fig 9). You can also see many numerical annotations, which could be distracting.

Again, the agent did a great job. Dimensions and forces are perfect. Notice how the agent understands that, for example, the force 60k is 60,000 N. The only error I could spot is that, while the supports are placed at the correct location, two of them should be roller supports instead of fixed, but given how small the symbols are, this is very impressive. Note that the agent gets a low-resolution (1,600 x 400 pixel) PNG image, not a real CAD file.

Test case: transmission tower

This is definitely the most challenging of the three trusses, and all the data is given as text. It also requires the agent to do a lot of maths. For example, the forces are at an angle, so it needs to calculate the x and y components of each force. It also needs to calculate the x and y positions of nodes by adding different measurements, like this: x = a + a + b + a + a.
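To make that arithmetic concrete, here is the kind of calculation the agent has to perform internally, using made-up values for the panel dimensions a and b and for the force angle (the actual tower's dimensions are in the image, not reproduced here):

```python
import math

# Hypothetical panel dimensions (metres).
a, b = 2.0, 3.0

# Node position found by chaining segment widths across the tower face.
x_node = a + a + b + a + a  # 2 + 2 + 3 + 2 + 2 = 11.0 m

# An inclined force must be split into x and y components before FEM input.
F = 60_000.0          # 60 kN, hypothetical magnitude
theta_deg = 30.0      # angle from horizontal, hypothetical
Fx = F * math.cos(math.radians(theta_deg))
Fy = F * math.sin(math.radians(theta_deg))
```

Trivial for an engineer, but each of these steps is a chance for an LLM to slip, which is why the article's results degrade on this test case.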

As you can see in fig 10, this was a bit too much of a challenge for our improvised truss vision agent, and for more serious jobs, we need specialist agents. Now, in defence of the agent, the image size was quite small (700 x 600 pixels), so maybe with larger images and better prompts, it would do a better job.

An open-source agent for you

I’ve created a simplified version of this agent that demonstrates the core concepts we’ve discussed. This implementation focuses on the essential components:

• A basic terminal interface for interaction

• Core functionality for truss analysis

• Integration with the image analysis and FEM tools

The code is intentionally kept minimal to make it easier to understand and experiment with. You can find it in this GitHub repository (www.tinyurl.com/CivilAI-1). This simplified version is particularly useful for:

• Understanding how AI agents can integrate with engineering tools

• Learning how to structure agent-based systems

• Experimenting with different approaches to truss analysis

While it doesn’t include all the features of the full implementation, it provides a solid foundation for learning and extending the concept. You can use it as a starting point to build your own specialised engineering agents. See video below (fig 11).

Conclusions

After building and testing this truss analysis agent, here are my key takeaways:

1) AI agents are game changers for engineering workflows

• They can handle ambiguous inputs like hand-drawn sketches

• They adapt to different ways of describing problems

• They can combine information from multiple sources to solve complex tasks

2) Reliability comes from smart architecture

• Let AI handle the flexible, creative parts

• Use validated engineering tools for critical calculations

• Keep engineers in control of key decisions

3) The future is specialised

• Instead of one large AI trying to do everything

• Create focused agents for specific engineering tasks

• Connect them into powerful workflows

4) Getting started is easier than you think

• Modern LLMs provide a great foundation

• Tools and APIs are readily available

• Start small and iterate

Remember: AI agents aren’t meant to replace engineering judgment — they’re tools to help us work more efficiently while maintaining the reliability our profession demands. By combining AI’s flexibility with validated engineering tools and human oversight, we can create powerful solutions that respect our professional responsibilities.

I hope you’ll join me in exploring what’s possible!

Resources

Simple Truss Analysis agent project repository (GitHub): www.tinyurl.com/CivilAI-1

Claude: Learn prompting, tool calling and multimodality (free course) www.tinyurl.com/CivilAI-2

Llama 3.2: Learn prompting, tool calling and multimodality (free course) www.tinyurl.com/CivilAI-3

Claude documentation on tool use www.tinyurl.com/CivilAI-4

Claude documentation on vision www.tinyurl.com/CivilAI-5

About the author

Anande Bergman is a product strategist and startup founder who has contributed to multiple successful tech ventures, including a globally scaled engineering automation platform.

With a background in aerospace engineering and a passion for innovation, he specialises in developing software and hardware products and bringing them to market.

Drawing on his experience in both structural engineering and technology, he writes about how emerging technologies can enhance professional practices while maintaining industry standards of reliability.

Reviving Brutalism: preserving the legacy of concrete giants

Roderick Bates of Chaos highlights how 3D visualisation can help change the conversation around Brutalism—offering practical pathways for adaptive reuse and public engagement

Brutalism, one of the most polarising architectural styles, with its bold concrete forms and oversized design, returned to the spotlight with The Brutalist - a newly released film exploring the intertwined fate of a Brutalist architect and his buildings.

The film’s revival of interest in Brutalism highlights the circular nature of unique architectural trends. While some admire Brutalism for its raw, imposing honesty, others see it as an eyesore that clashes with the modern architectural landscape - an ongoing debate since Brutalism first emerged.

Given ever-changing architectural trends, we should not be so hasty to demolish these buildings based on contemporary aesthetic judgement, as they may come back into favour in a decade. Cultural moments, like this film, can shift public perception with the representation of the artform, allowing the public to once again understand and see the beauty in Brutalism - before it’s lost to the wrecking ball.

Why preservation over demolition?

Cultural legacy and historical impact: Preserving buildings with a rich history brings incredible cultural value, reflecting the culture and lived experiences at their time of construction. Brutalism emerged in the UK during the 1950s as part of post-war reconstruction. With Britain left in ruins and limited funds for rebuilding, architecture prioritised functionality and cost-effectiveness, shaping the stark aesthetic of the movement.

Inexpensive modular elements, concrete and reinforced steel were used for institutional and residential buildings that needed to be rebuilt quickly to return the UK to a liveable state. As historical symbols of the country’s resilience in a post-war era, these buildings should not be so readily dismissed over debates on aesthetics as they are cultural icons embodying Britain’s resilience and commitment to social progress, accessibility and equality. Rather than simply demolishing these physical manifestations of the British spirit, efforts should be directed toward preservation through thoughtful repurposing, ensuring their integration within the modern architectural landscape.

Environmental impact: The preservation and adaptive reuse of Brutalist buildings, however, presents considerable challenges. In many instances, building codes and regulations inhibit retrofitting efforts to such a degree that demolition is the only solution. Where renovation is possible, listed Brutalist structures pose a distinct set of challenges, with the buildings presenting a level of energy performance well below modern energy efficiency standards and the required modifications to make them both efficient and usable running afoul of conservation guidelines. Looking beyond challenging operational efficiency, the preservation of Brutalist buildings does have a compelling carbon argument. The calcination of limestone to produce the cement in concrete is a massive source of carbon emissions, which is why architects and designers often prefer more environmentally friendly materials. Brutalist buildings, due to their impressive mass and extensive use of concrete, are vastly carbon-intensive. However, since the carbon has already been emitted during construction in the 1950s, preserving these buildings rather than demolishing them prevents additional emissions from new construction.

Preserving Brutalist buildings conserves resources by extending the lifespan of structures where the bulk of carbon emissions have already occurred. This makes adaptive reuse not only the right choice historically and culturally, but also the more environmentally responsible option.

Contemporary meets traditional

Contemporary architects are already leading the repurposing charge by reimagining Brutalist principles and blending them with modern, sustainable materials while retaining core stylistic elements. Raw concrete used in existing Brutalist structures is being combined with materials like wood and glass to soften its boldness, creating a more artistic interplay of textures and materials. This softening of Brutalism’s rough edges has enabled it to integrate more seamlessly into the surrounding landscapes.

Moreover, unlike many other historical buildings, Brutalist structures are highly adaptable for modern use. Their mass and robust design not only provide acoustic isolation, a desirable trait in the context of residential reuse, but also make slab penetrations for the running of pipes, ducts and other systems through walls and floors much easier. When repurposing contemporary buildings with a lightweight structure, every penetration must be carefully considered, which fortunately isn’t the case with the overbuilt Brutalist structures.

Changing perceptions

Repurposing any building isn’t cheap, and before investing in an adaptive reuse project, it’s essential that the public, including the potential future residents, understand both the vision for the final result and the motive behind repurposing over demolition. Otherwise, in 10 years, we could find ourselves facing the same debate over aesthetics and potential demolition.

3D visualisation technology enables designers to produce accurate digital representations of existing structures, while incorporating proposed design modifications, new features, and materials, creating an accurate reflection of what the project will actually look like, once completed. This greatly facilitates the presentation of the design to the public, allowing for stakeholder feedback to be gathered and integrated early in the process, avoiding costly delays and, even more importantly, potential commercial failure.

Secondly, to authentically experience the raw scale of a Brutalist building and the resulting emotional impact of Brutalism, one must visit the building in person, though this is not always possible. Interactive renders offer a solution, allowing both designers and the public to virtually experience being towered over by the building’s mass. On an entirely different scale, the intricate patterns of board-formed concrete are a subtle yet significant feature of Brutalist buildings that can only be appreciated either through direct experience or with high quality renders that capture the dynamic nuances of lighting and materials, accurately conveying the beauty and emotion of Brutalism to stakeholders.

The visual impression of Brutalist buildings is incredibly strong. This is key to their appeal, but it can be difficult to visualise the buildings taking on a new life, much less as a welcoming apartment building or office. A high-quality visualisation can allow people to see a new reality, allowing them to experience, virtually, the beauty and emotion of Brutalism, hopefully shifting public perception in the process.

The future of Brutalism

The future of Brutalist buildings is unclear, but it is evident that demolition, without considering alternatives, would be a waste. A waste of resources, of cultural history, and of beautiful buildings that contribute an emotional element to the urban landscape they inhabit. Reimagining and embracing Brutalism is not only about preserving the past but also about recognising its relevance in the present and the cultural values these structures embody. In our current culture where architecture strives for sustainable design solutions, we must look at what we already have and repurpose it to meet modern needs, establishing an important thread tying the old and the new.

The distaste for Brutalism shows the beauty of these designs was never clearly communicated. By making these repurposed designs accessible through emotive, immersive visualisations, the door to public appreciation is opened - before large budgets are spent on redevelopment. At Chaos we strive to democratise the design process, making it accessible to all stakeholders by simplifying complex styles and revealing their inner, timeless beauty.

About the author

Roderick Bates is head of corporate development at Chaos, a specialist in design and visualisation technology.

■ www.chaos.com

Polycam for AEC

Reality capture devices are usually either high-cost laser scanners or affordable photogrammetry via drones or phones. Polycam, blending iPhone LiDAR with photogrammetry, is now aiming at the professional AEC market. Martyn Day reports

Precise reality capture has come a long way. We are in the process of moving from rare and expensive to cheap and ubiquitous. Laser scanning manufacturers are currently holding their price points and margins, but technology and mobility are closing in from the consumer end of the market. Matterport recently launched a low-cost laser scanner combined with photogrammetry, and Polycam, a developer of smartphone-based reality capture software for consumers, is looking to sell up to the professional market.

Polycam can be used to quickly document existing conditions (as-builts), measure spaces, and generate floor plans. The latest release looks to dig deeper into AEC workflows. The app is available for iOS and Android and makes use of the iPhone’s built-in LiDAR and cameras to capture interiors, as well as exteriors when using footage from a drone. The software also supports Gaussian Splats to achieve high-resolution 3D capture. While the product has proved incredibly popular, the firm is looking to move into new areas of AEC, such as interior design, structural, construction inspection and facilities management.

The company

Polycam was founded four years ago by Chris Hinrich and Elliot Spellman. Their initial aim was to build software that could deliver the power of 3D capture to users of smartphones.

Before Polycam, the pair worked at a company which was developing a ‘3D Instagram’ that processed uploaded images on a server for photogrammetry.

This was a bottleneck. The pair left the company and set up Polycam. The big innovation was the fact that you could process the 3D creation fast, on device.

With over half the Fortune 500 companies actively using Polycam and well over 100,000 paying users, the firm was able to raise over $22 million in investment in 2024, based on revenues of $6.5 million in 2023. One of the core areas that showed regular growth was its AEC user base. The latest release focuses on providing tools for this growing base of AEC customers.

New features

Polycam supports Apple’s AR toolkit, allowing for easier and more accurate model creation by recognising walls, doors, and windows. I have used Polycam on my iPhone and compared it to a Leica Disto and have found the accuracy to be within a few millimetres when scanning a room. This makes it suitable for schematic designs and perhaps material ordering (though precise cuts might still require manual measurements). The platform supports multifloor scanning, to build a model very similar to that of Matterport.

The AI automatically derives room classifications by detecting objects like beds (for bedrooms) and appliances (for kitchens).

The new Scene Editor allows multiple scans to be combined, including both interior captures and drone footage, into a single, unified 3D scene. This provides a holistic view of a property or project site, enabling users to navigate and analyse the entire space. Using layers, it’s possible to filter scenes and control the visibility of different parts of a capture.

The platform also has new collaboration and sync tools that allow users to add comments and start threaded conversations within a scanned space, facilitating review processes for architects and other stakeholders. The cross-platform nature of Polycam ensures that teams can access and share this data across various remote devices.

‘‘ Complex geometry can fool the application. I found that accurately capturing ceilings with multiple levels and stairs, resulted in gaps in the models ’’

While an automated scan-to-BIM workflow is seen as the aim, Polycam offers a service where users can order professional-grade 3D files that are then converted into CAD (AutoCAD) and BIM (Revit) files – but with a human-in-the-loop, through a collaboration with Transform Engine. This provides a higher quality and more detailed BIM output than automatic processing currently offers. AutoCAD layouts start at $95 and Revit models at $200. Furthermore, Polycam has plans to add IFC (Industry Foundation Classes) file export, which will make it easier for users to create their own models.

That said, Polycam does instantly generate customisable 2D floor plans from its scans. These floor plans can be tweaked within the app for business and enterprise tiers, allowing for adjustments to wall thickness, colours, and labels.

There’s a new AI Property Report, which automatically generates PDFs and includes the floor plan along with information such as the number of bedrooms and bathrooms, floor area, total wall area, and a room-by-room breakdown with measurements. This could be used for insurance or costing and ordering materials.

3D Generator

The latest version offers a quick way of making 3D components for a library from real-world objects like a chair, starting from an image or from a prompt describing the details of the object you would like to create. This isn’t just the geometry, but the materials used too. These 3D objects can be placed in the real-world scans, enabling users to visualise and design spaces with custom virtual objects.

Limitations

Because everything is on device and there is no option for cloud or server-based processing, there comes a natural limit. On-device memory is also a constraint. Polycam recommends a horizontal size limit of around 279 sq metres for a single scan, to ensure a decent result. Beyond this, the app might require compromises to process quickly without running out of memory. While the new scene editor addresses combining multiple scans, individual scans still have practical size limits.

Complex geometry can fool the application. I found that accurately capturing ceilings with multiple levels and stairs, resulted in gaps in the models. While the technology has improved, complex or non-planar geometry in older buildings might still present some challenges.

While Polycam is accurate enough for schematic designs and potentially ordering bulk materials (the company claims within 2% compared to expensive LiDAR scanners), it might not be sufficient for tasks requiring very high precision, such as cutting kitchen cabinets, which may still necessitate manual measurements. Also, the AR Toolkit object recognition behind the spatial reports is not totally foolproof and may require users to manually override classifications if they are incorrect.

Polycam seems to have aimed its approach mainly at construction, and at the American market. While this is predominantly 2D, the BIM side of the product still has a lot to deliver in connecting on-device data to BIM software. Scan-to-BIM still requires the cost and eye of a human to properly check the conversion. This has to be compared to having a professional survey and the legal indemnity that it provides. Would I use Polycam on a house? Hell yes! Would I use it on a major airport refurbishment? Only as a quick rough.

Conclusion

Polycam is certainly on the right path with its development focus on instant 2D floor plan generation and measurements, as well as building 3D models for AEC users. AR Toolkit’s intelligence always seems like magic when scanning a room. However, the software and service do have limitations, with obvious omissions and the need for closer integration with AEC workflows. Surely, we can’t be too far away from not requiring a human in the loop to create reliable results from scan to BIM?

Size matters. While the possibility of real-time streaming of large-scale scans is a compelling idea for future development, the current focus of Polycam appears to be on enhancing on-device processing and providing relevance to the AEC industry. The planned addition of features like IFC export and improved BIM workflows indicates a clear direction towards serving the professional needs of architects, engineers, and construction professionals.

Despite these limitations, the monthly cost is $17 per user (Pro) and $34 per user (Business level). At those prices, it’s an application that many in the industry might well use regularly when onsite, versus the alternatives. It’s like having a budget Matterport scanner in your pocket.

The ongoing development and the specific features being introduced demonstrate a clear trajectory towards making Polycam a better fit for AEC professionals, especially surveyors and architects, particularly for initial site assessment, as-built documentation, schematic design, and collaboration.

■ https://poly.cam

Autodesk Tandem in 2025

Autodesk Tandem, the cloud-based digital twin platform, is evolving at an impressive pace.

Unusually, much of its development is happening out in the open, with regular monthly or quarterly feature preview updates and open Q&A sessions.

Martyn Day takes a closer look at what’s new

Project Tandem, as it used to be known, was initiated in February 2020, previewed at Autodesk University 2020, and released for public beta in 2021. Four years on, there are still significant layers of technology being added to the product, now focussing on higher levels of functionality beyond dashboards and connecting to IoT sensors, adding systems knowledge, support for timeline events and upgrades to fundamentals such as visualisation quality. Tandem development seems to have followed a unique path, maintaining its incubator-like status, with Autodesk placing a significant bet on the future size of an embryonic market.

For those following the development of Tandem, one thing comes across crystal clear: creating a digital twin of even a single building — model generation, tagging and sorting assets, assigning subsystems, connecting to IoT, and building dashboards — is a huge task that requires ongoing maintenance of that data. It’s not really ‘just an output of BIM’, which many might feel is a natural follow-on. It has the capability to go way beyond the scope of what is normally called Facilities Management (FM), which has mainly been carried out with 2D drawings.

The quantitative benefit of building a digital twin requires dedication, investment and an adoption of twins as a core business strategy. For large facilities, like airports, universities, hospitals - anything with significant operating expenses - this should be a ‘no brainer’, but as with any investment the owner/operator has to pay upfront to build the twin, to realise the benefits in the long tail, measured in years and decades. This, to me, makes the digital twins market not a volume product play.

Tandem evolution

My first observation is that the visual quality of Tandem has really gone up a notch, or three. Tandem is partially developed using Autodesk’s Forge components (now called Autodesk Platform Services). The model viewer front end came from the Forge viewer, which to be honest was blocky and a bit crappy-looking, in a 1990s computer graphics kind of way. The updated display brings up the rendering quality and everything looks sharper. The models look great and the colour feedback when displaying in-model data is fantastic. It’s amazing that this makes such a difference, but it brings the graphics into the 21st century. Tandem looks good.

Speed is also improved. As Tandem is database centric, not file based, it enables dynamic loading of geometry and data, leading to fast performance even with complex models. It also facilitates the ability to retain all historical data and easily integrate new data sources as the product grows. This is the way all design-related software will run. Tandem benefits from being conceived in this modern cloud era.

As Tandem has added more layers of functionality, the interface tool palettes have grown. The interface is still being refined, and Autodesk is now adopting the approach of offering different UIs to cater to different user personas, such as operators who might be more familiar with 2D floor plans than 3D.

Other features that have been added include the ability to use labels or floor plans to isolate them in the display, auto views to simplify navigation, asset property cards (which can appear in view, as opposed to bringing up the large property panel) and thresholds, which can be set to fire off alerts when unexpected behaviour is identified. Users can now create groups of assets and allocate them to concepts such as ‘by room’. Spaces can now also be drawn directly in Tandem.

‘‘ Tandem is a cloud-based conduit, pooling information from multiple sources which is then refined by each user to give them insight into layers of spatial and telemetric data ’’

That said, development of Tandem has moved beyond simply collecting, filtering, tagging and visualising data to providing actionable insights and recommendations. From talking with Bob Bray, vice president and general manager of Autodesk Tandem, and Tim Kelly, head of Tandem product strategy, the next big step for Tandem is to analyse the rich data collected to identify issues and suggest optimisations. These proactive insights would include potential cost savings and carbon footprint reduction through intelligent HVAC management based on actual occupancy data.

Systems tracing

Having dumb geometry in dumb spaces was pretty much the full extent of traditional CAFM. Digital twins can and should be way smarter. The systems tracing capability in Tandem simplifies the understanding of all the complex building systems and their spatial relationships, aiding operations, maintenance, and troubleshooting. By clicking on building system elements, you can see the connections between different elements within a building’s systems, see how networks of branches and zones relate to the physical spaces they serve, and identify where critical components are located within the space. This means if something goes wrong, should that be discovered via IoT or reported by an occupant, systems tracing allows the issue to be pinpointed down to a specific level and room. Users can select a component like an air supply and then trace its connection down through subsystems to the spaces it serves. Building in this connection between components to make a ‘system’ used to be a pretty manual process. Now, Tandem can automatically map the relationships between spaces and systems and use them for analysis to identify the root cause of problems.

Timelines

Data is valuable and BMS (Building Management Systems) and IoT sensors generate the building equivalent of an ‘ECG’ every couple of seconds. The historical, as well as the live data is incredibly valuable. Timelines in Tandem display this historic sensor data in a visual context. Kelly demonstrated an animated heatmap overlaid on the building model showing how temperature values fluctuate across a facility. It’s now possible to navigate back and forth through a defined period, either stepping through specific points or via animation, seeing changes to assets and spaces.

While the current implementation focuses on visualising historic data, Kelly mentioned the future possibility of the timeline being used to load or hide geometry based on changes over time, reflecting renovations or other physical alterations to the building.

Bray added that Tandem never deletes anything, implying that the historical data required for the timeline functionality is automatically retained within the system. This allows users to access and analyse past performance and conditions within the building at any point in the future, should that become a need.

Asset monitoring

Asset monitoring dashboards in Tandem are designed to provide users with a centralised view for monitoring the performance and status of their key assets. This feature, which is now in beta, aims to help operators identify issues and prioritise their actions. The dashboards will be customisable, and users can create dashboards to monitor the specific assets they care about. This allows for a tailored overview of the most critical equipment and systems within their facility.

The dashboards will likely allow users to establish KPIs and tolerance thresholds

for their assets. By setting these parameters, the system can accurately measure asset performance and identify when an asset is operating outside of expected or acceptable ranges with visual feedback of assets out of optimal performance.

Assets that are consistently operating out of tolerance or experiencing recurring issues can be grouped to aid focus, e.g. by level, room or manufacturer. With this in mind, Tandem also has a ‘trend analysis’ capability, allowing users to identify potential future problems based on current performance patterns. The goal of these asset monitoring dashboards is to help drive preventative maintenance and planning for equipment replacement.

Tandem Connect

Digital Twin creation and connectivity to live information means there is a big integration story to tell and it’s different on nearly every implementation. Tandem is a cloud-based conduit, pooling information from multiple sources which is then refined by each user to give them insight into layers of spatial and telemetric data. To do that, Autodesk needed to have integration tools to tap into, or export out to, the established systems, should that be CAFM, IoT, BMS, BIM, CAD, databases etc.

Tandem Connect is designed to simplify that process and comes with pre-packed integration solutions for a broad range of commonly used BMS, IoT and asset management tools. This is not to be confused with other developments such as Tandem APIs or SDKs.

The application was acquired and so has a different style of UI to other Autodesk products. Using a graphical front end, integrations can be initially plug and play, such as connecting to Microsoft Azure, through a graph interface. The core idea behind this is to ‘democratise the development of visual twins’ and not require a software engineer to get involved. However, more esoteric connections may require some element of coding. Bray admitted there was significant ‘opportunity for consultancy’ that arises from the whole connectivity piece of the pie and that a few large system integrators were already talking with Autodesk about that opportunity.

Bray explained that Tandem Connect enables not only data inflow and outflow but also ‘workflow automation and data manipulation’. He gave an example where HVAC settings could be read into Tandem Connect, and a comfort index could be written, which was demonstrated at Autodesk University 2024.
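That read-compute-write pattern can be illustrated with a simple sketch. Thom’s discomfort index is one well-known formula combining temperature and humidity; whether the AU 2024 demo used this exact index is not stated, and the room data below is invented:

```python
def discomfort_index(temp_c: float, rel_humidity: float) -> float:
    """Thom's discomfort index: a single comfort figure from temperature
    and relative humidity (below ~21 is comfortable, above ~27 most
    people feel discomfort)."""
    return temp_c - 0.55 * (1 - 0.01 * rel_humidity) * (temp_c - 14.5)

# Simulated HVAC readings per room; in a real pipeline these would be
# pulled from the BMS via an integration, not hard-coded
rooms = {
    "Room 101": {"temp_c": 24.0, "rh": 60.0},
    "Room 102": {"temp_c": 21.0, "rh": 45.0},
}

# Compute an index per room, ready to be written back to the twin
comfort = {room: round(discomfort_index(r["temp_c"], r["rh"]), 1)
           for room, r in rooms.items()}
```

The point of the demo was less the formula than the plumbing: readings flow in, a derived value is computed, and the result is written back as a new data layer on the twin.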

Product roadmap

Autodesk keeps a product roadmap (www.intandem.autodesk.com/our-product-roadmap/) which, helped by the regular video updates, has been a pretty accurate indicator of the direction of travel.

Two of the more interesting capabilities in development are portfolio optimisation and the development of more SDK options, plus the possibility of future integration of applications. Portfolio optimisation will allow users to view data of multiple facilities in one central location and should provide analytics to predict future events with suggested actions for streamlining operations.

Beyond the current REST API, Autodesk is developing a full JavaScript Tandem SDK to build custom applications that leverage Tandem’s logic and visual interactivity. In the long term, Autodesk says it may enable extensions for developers to include functionality within the Tandem application itself.

Conclusion

Tandem development continues relentlessly, and the capabilities now being added are starting to fall into the high-value category. Refinements to data creation and filtering are ongoing, but once the data is in, tagged and intelligently organised into systems, the focus shifts to deep integration, granular alerts for out-of-nominal operation, and historical analysis of systems, spaces and rooms, all with easy visual feedback and the potential for yet more data analysis and intelligence. Bray uses a digital twin maturity model to outline the key stages of development needed to realise the full potential of digital twin technology. It starts with building a Descriptive Twin (as-built replica), then an Informative Twin (granular operational data), then a Predictive Twin (enabling predictive analytics), a Comprehensive Twin (what-if simulation) and finally an Autonomous Twin (self-tuning facilities).

At the moment, Tandem is crossing from Informative to Predictive, but the stated intent for higher-level functionality is there. However, the warning stands: your digital twin is only ever as good as the quality of the data you put in.

Some of the early users of Tandem are now being highlighted by the company. In a recent webinar, Brendan Dillon, director of digital facilities & infrastructure at Denver International Airport, gave a deep dive into how the airport integrated Maximo with Tandem to monitor facility operations (www.tinyurl.com/Tandem-denver).

Tandem is an Autodesk outlier. It’s not a volume product and it’s not something that Autodesk’s channel can easily sell. It’s an investment in product development that is quite unusual at the company. It doesn’t necessarily map to the way Autodesk currently operates as, from my perspective, it’s really a consultancy sale to a relatively small number of asset owners – unlike Bentley Systems, whose digital twin offerings often operate at national scale across sectors like road and rail. The good news is that Autodesk has a lot of customers, and potential Tandem customers will be self-selecting, knowing they need to implement a digital twin strategy and probably having a good understanding of how arduous that journey may be. The Tandem team is trying to make it as easy as possible, and clearly developing the product out in the open brings a level of interaction with customers that is to be commended.

Meanwhile, with its acquisition of niche products like Innovyze for hydraulic modelling, there are indications that Autodesk is looking to cater for more involved engagements with big facility owners, and I see Tandem falling into that category at the moment, while the broader twins market has still yet to be clearly defined.

■ www.intandem.autodesk.com

Regarding digital twins

AEC Magazine caught up with Rob Charlton, CEO of Newcastle’s Space Group, to talk about digital twin adoption and advances. Twinview, created by the company’s BIM Technologies spin-off, is one of the most mature solutions on the market today and now has global customers

It’s tough being one of the first to enter a market but for Space, one of the country’s most BIM-centric architectural practices, it was a case of needs must. In 2016, its BIM consultancy spin-off, BIM Technologies, identified a need for its clients to be able to access their model data without the need for expensive software or hardware. Development started and this eventually became Twinview, launched in 2019.

Space Group is a practising architecture firm, a BIM software developer, a services supplier, and a creator and distributor of a BIM components / objects library.

So, not only does it develop BIM software, it also uses that software in its own practice, as well as selling its solutions and services to other firms.

Selling twins

In previous conversations with CEO Rob Charlton on the market’s appetite for digital twins, he has been frank about the difficulty of getting buy-in from fellow architects, developers and even owner-operators. The customers who got into twins early were firms that owned portfolios of buildings which were sold as eco-grade investments.

Charlton acknowledges that he always expected it to be a long-term endeavour: “We started this development knowing it was a five-year-plus journey to any level of maturity or even awareness.” He draws a parallel to the adoption of BIM, recalling that even though Space bought its first licence of Revit around 2001, it didn’t gain significant traction until around 2011, and even then, this was largely due to UK BIM mandates.

The early digital twin market development was a ‘slow burn’. Charlton contrasts BIM Technologies’ patient, self-funded approach with companies that seek large VC funding, arguing that “the market will move at the level it’s ready for”.

The good news, he explains, is that over the last year – and particularly in the last six months – there has been an increase in awareness of the value of digital twins.

This awareness is seen in the fact that clients are now putting out Requests for Proposals (RFPs) for digital twin solutions. For Charlton, this is a fundamental difference compared to the past, where they would have to approach firms to explain the benefits of digital twins. Now, the clients themselves have made the decision that they want a digital twin and are seeking proposals from providers.

Priorities and needs

There’s a lot of talk about digital twins but very little talk about the actual benefits of investing in building them. Charlton explains that twin clients are increasingly interested in reducing carbon in buildings, whether embodied or operational, as well as in compliance and safety. “It’s an area that Space is particularly passionate about but there is an inconsistency in how embodied carbon reviews and measurements are conducted,” he says.

Customer access to operational data is also important, explains Charlton, “Clients want to gain insights into how their buildings are actually performing in real time.”

He also notes that integration with facilities management is equally important, to streamline maintenance, manage issues, and improve overall building operations.

Clients value the ability to have “access to their information in one place,” adds Charlton. And here, the cloud is the perfect way to deliver a unified platform that consolidates the models and documents related to building assets.

Twinview clients are especially interested in owning their own data. Charlton gives the example of a New Zealand archive project, explaining that the client was particularly interested in using Twinview to maintain independence from subcontractors and external service providers, which might come and go over the project lifetime.

Back in the UK, Twinview is being used in conjunction with ‘desk sensors’ on an NHS project to optimise space and potentially avoid unnecessary capital expenditure. Charlton explains that the client was finding the digital twin useful for “analysis on how the space is used” because they were seeking to validate or challenge space needs assessments by consultants.

Increasingly, contractual obligations include performance data. For one of Space’s school clients, the DFA Woodman Academy, there’s a contractual obligation to provide energy performance data at one month, three months and 12 months. Digital twin technology facilitated the compliance goal within the performance-based contract. The IoT sensors also identified high levels of CO2 in the classrooms, prompting an investigation into the cause.

Twinview goes beyond the traditional digital twin model for operations and has been used to connect residents to live building information. On a residential project, tenants access Twinview data on their mobile phones to see energy levels in the buildings, temperatures and CO2, all through their own app.

Artificial Intelligence

Everyone is talking about AI, and Twinview now features a ChatGPT-like front end. This enables plain-language search within the digital twin, both at an asset level and with regard to performance data. Charlton explains that while the AI in Twinview has a ‘ChatGPT-like interface’, it is not ChatGPT itself, although it does connect to it; Twinview developed its own system, possibly due to the commercial costs of using ChatGPT for continuous queries. The AI in Twinview accesses all building information, including the model, operational data, and tickets, which are stored in a single bucket on AWS.
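The general pattern behind such a front end – retrieve the relevant records from the single data store, then hand them to a language model as context – can be sketched as follows. Twinview’s actual implementation is not public; the records, the naive keyword retrieval and the prompt format here are illustrative only:

```python
# Mock building records of the kind a twin might keep in one store:
# model metadata, telemetry summaries and maintenance tickets
records = [
    {"type": "asset", "text": "AHU-01 serves floors 1-3; last serviced 2024-11-02"},
    {"type": "telemetry", "text": "Meeting room CO2 peaked at 1450 ppm on Tuesday"},
    {"type": "ticket", "text": "Ticket #88: lift 2 out of service, parts on order"},
]

def retrieve(question: str, docs: list, k: int = 2) -> list:
    """Rank records by naive keyword overlap with the question.
    A production system would use embeddings, not word overlap."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: -len(q_words & set(d["text"].lower().split())))
    return [d["text"] for d in scored[:k]]

def build_prompt(question: str) -> str:
    # Assemble retrieved records into context for a language model call
    context = "\n".join(retrieve(question, records))
    return f"Answer using only this building data:\n{context}\n\nQ: {question}"

prompt = build_prompt("What were the CO2 levels this week?")
```

The final step – sending `prompt` to a hosted or in-house model – is where the per-query cost Charlton alludes to arises, which is one reason a vendor might build its own system rather than lean entirely on ChatGPT.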

Looking to the future, Charlton mentions that the next stage of AI development for Twinview will be focused on prediction and learning. This includes the ability to generate reports automatically (e.g. weekly averages of CO2 levels), predict future energy usage, and suggest ways to improve building performance. A key differentiator for Twinview’s AI in the future will be its capacity to understand correlations between disparate datasets that are often siloed, such as occupancy data, fire analysis, and energy consumption. By applying GPT-like technology over this connected data, the aim is to uncover new insights and solutions.

Development Journey

From a slow-burn start, and despite being a relatively small UK business competing with big software firms with deep pockets, Twinview has already won international clients, Charlton told us, and is currently shortlisted for other significant international projects, including one on the west coast of America, against international competition.

■ www.twinview.com

Workstations

PNY Nvidia RTX A1000

Nvidia’s entry-level workstation GPU is a notable upgrade from the Nvidia T1000, but could the slightly pricier Nvidia RTX 2000 Ada Generation be the better option to future-proof your investment? writes Greg Corke

Nvidia RTX redefined professional graphics when it debuted in 2018. With dedicated RT cores, it cemented the role of the GPU as a ray tracing powerhouse, while the Tensor cores made AI an integral part of many visualisation workflows.

Nvidia RTX workstation GPUs were initially limited to Nvidia’s mid-range 4000 series and high-end 5000 and 6000 series. Over time, however, the technology trickled down, reaching a significant milestone last year with the debut of the first entry-level Nvidia RTX cards—the RTX A400 (4GB) and RTX A1000 (8GB).

The RTX A1000, the focus of this review, features 8 GB of GDDR6 memory and four Mini DisplayPort outputs. Unlike most of Nvidia’s current pro graphics lineup, which are either based on the Ada Lovelace architecture or the brand new Blackwell architecture (see page 12), the RTX A1000 uses the older Ampere architecture, introduced in 2020. That means its RT (ray tracing) and Tensor (AI) cores are either one or two generations behind.

Like its predecessor, the Turing-based Nvidia T1000, the Nvidia RTX A1000 is a low-profile card, making it compatible with compact workstations like the HP Z2 Mini G9 and Dell Precision 3280 CFF. However, with an optional ATX bracket, it can also be installed in standard desktop systems. With a peak power consumption of 50W, it draws all its power directly from the motherboard’s PCIe slot.

The CAD workhorse

As expected for a GPU in this class, the RTX A1000 handles most CAD and BIM workflows with ease. In Solidworks 2024, it delivered a perfectly smooth viewport at 4K resolution when navigating a large 2,300-part, 49-million-triangle snow bike assembly—even with RealView enabled for realistic lighting and materials. Eighteen second-gen RT cores also provide a level of future-proofing for CAD. We anticipate ray tracing techniques will become integrated with traditional rasterisation to create more realistic CAD viewports. The idea is that users will be able to switch to a ‘ray traced’ mode just as they do now with shaded, shaded with edges, and realistic view modes.

Entry-level viz

Visualisation demands significantly more GPU power than CAD. Here, the RTX A1000 offers an entry point, and noticeably faster ray tracing than its predecessor, the Nvidia T1000, which relied solely on general-purpose CUDA cores.

For example, in Twinmotion 2024, we could navigate the Snowden sample project smoothly at FHD resolution. However, the RTX A1000 falls significantly behind when compared to the Nvidia RTX 2000 Ada Generation (16 GB), which delivers far better performance for just £157 more (£509 vs. £352). Rendering six standard 4K images took nearly four times as long, while five 4K path-traced images took twice as long. Similar slowdowns were observed in Lumion, D5 Render, and V-Ray.

This performance gap isn’t just due to the RTX 2000 Ada’s more powerful processor; memory also plays a crucial role. With only 8 GB of VRAM, the RTX A1000 struggles with larger visualisation models. When a scene exceeds 8 GB, the card must borrow from system memory via the PCIe bus, significantly reducing performance. In some cases, such as with our 12 GB Enscape dataset, this limitation even caused the software to crash.
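A back-of-the-envelope estimate shows why geometry alone eats into VRAM. The byte sizes and vertex-sharing ratio below are assumptions for illustration; textures, framebuffers and ray tracing acceleration structures add considerably more on top, which is how a scene like our 12 GB Enscape dataset overwhelms an 8 GB card:

```python
def mesh_vram_mb(triangles: int, bytes_per_vertex: int = 32,
                 verts_per_tri: float = 0.6) -> float:
    """Rough VRAM estimate for indexed geometry: an index buffer
    (three 4-byte indices per triangle) plus a vertex buffer
    (position, normal, UV at ~32 bytes each, assuming shared vertices)."""
    index_bytes = triangles * 3 * 4
    vertex_bytes = triangles * verts_per_tri * bytes_per_vertex
    return (index_bytes + vertex_bytes) / 1e6

# Geometry alone for the review's 49-million-triangle assembly
geometry_mb = mesh_vram_mb(49_000_000)  # roughly 1.5 GB under these assumptions
```

Once the total working set exceeds the card’s physical 8 GB, every overflow access crosses the PCIe bus at a fraction of GDDR6 bandwidth, which is why performance falls off a cliff rather than degrading gently.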

We explore this in more detail in our “Workstations for arch viz” article [www.tinyurl.com/ws-viz]

AI-enabled workflows

With 72 third-gen Tensor cores, the RTX A1000 brings AI capabilities to the entry-level segment. In visualisation workflows, this could be beneficial in three key areas: Nvidia DLSS for improving 3D performance, AI-powered denoising for reducing noise in low-pass renders, and AI image generators like Stable Diffusion.

We did not test DLSS directly, but we expect the benefits with this card may be limited due to relatively low Tensor performance and the fact that the Ampere architecture only supports older versions of DLSS (2.0 and below).

In Stable Diffusion 1.5, the RTX A1000 took roughly twice as long to generate images as the RTX 2000 Ada. However, in the more demanding Stable Diffusion XL, performance plummeted, as 8 GB of memory is insufficient to run the software effectively. Nevertheless, the card remains significantly faster than the Nvidia T1000, which lacks Tensor cores altogether. While we did not test the T1000 directly, Nvidia claims the RTX A1000 is up to three times faster.

Beyond visualisation, the RTX A1000 supports a range of AI workflows, including inferencing for large language models (LLMs) and AI assistants—some of which are not particularly demanding computationally.

For example, reality modelling software like Leica Cyclone 3DR uses machine learning to ‘intelligently classify’ point cloud data. It requires an RTX GPU, but in our tests the RTX A1000 was not that much slower than the RTX 4500 Ada Generation, trailing the high-end GPU by just 29% [see page 34 of our Winter 2025 workstation special report - www.tinyurl.com/AEC-WSR-2025].

The verdict

The Nvidia RTX A1000 marks a significant leap forward for Nvidia’s entry-level workstation GPUs. With dedicated RT cores and Tensor cores, it enables ray tracing and AI workflows that simply weren’t viable on the Nvidia T1000.

However, potential buyers must consider whether the RTX A1000 provides enough value compared to the RTX 2000 Ada Generation. For only £157 more, the RTX 2000 Ada delivers significantly better RT and Tensor performance and, crucially, twice the memory, which can be a limiting factor in some workflows.

With software evolving so quickly, especially in the area of AI, spending a bit more now could be the smarter way to future-proof your workstation.

■ www.nvidia.com ■ www.pny.com
