

CDE Forging New Frontiers
Issue 04 | Jan 2025

Dear Reader,
Welcome to the first CDE Research Newsletter for 2025!
We are excited to kick off our first issue with a topic that will be a cornerstone of innovation for the NUS CDE community: Robotics!
The field of robotics is vast, covering highly diverse domains that span artificial intelligence, the built environment, mechanical engineering, biomedical engineering, electrical and computer engineering, design, and beyond. In terms of applications, everything from healthcare and drones to construction and disaster relief is poised for major advances thanks to innovations in robotics.
Importantly, the continued implementation of new robotics technologies into everyday operations across industries will also have far-reaching impact on public policy and the social sciences.
Bridging ideation with real-world impact is a hallmark of NUS CDE, and we’re excited to feature our colleagues’ amazing work in robotics as we kick off another exciting year from our community. Enjoy!
All the best,
Dean Ho
Editor-in-Chief
Teaching soft robots self-awareness

Soft robots with human-like perception can anticipate sensory inputs, detect contact and adapt dynamically, paving the way for applications in autonomous exploration and precision-driven medical procedures.
Did you know that you actually have a “sixth sense”? Called proprioception, it helps your body make sense of where it is in space. It’s what allows gymnasts to orient themselves mid-somersault or basketball players to dribble while running without glancing at the ball. But it’s also what lets you touch your nose with your eyes closed, sip coffee without looking at your
mug, or feel the difference between hard cement and soft grass, even while wearing shoes.
Conferring this sort of sensory awareness upon soft robots is a key focus of Professor Cecilia Laschi and her team at the Department of Mechanical Engineering, College of Design and Engineering (CDE), National University of Singapore. Drawing inspiration from the human perception system, they developed the “expected perception” framework, which enables robots to anticipate sensory inputs, detect external forces and adapt dynamically — all without relying on cameras or external vision systems.
Professor Cecilia Laschi and her team developed a framework that enables robots to anticipate sensory inputs, detect external forces and adapt to environments dynamically.

Applied to a flexible soft robot equipped with liquid-metal sensors, the expected perception system enables precise detection of external strain and deformation. Being aware of its own shape, the robot can interpret its surroundings, distinguish between self-induced motion and external contact, and even determine the direction and magnitude of forces. This gives robots better perception abilities — a boon for operation in dynamic environments.
The team’s findings were published in Nature Communications on 18 November 2024.
Giving robots a sensory upgrade
Proprioception in humans relies on sensors located in the muscles, tendons and joints, which work in tandem with other senses to provide constant feedback about body position and movement. Think about splaying out your fingers — you know it’s happening without even looking. These sensors also let us gauge the weight of objects we’re interacting with, or pick up on subtle changes in our surroundings — like a shift from smooth tile to uneven gravel underfoot.
“Soft robots, too, need proprioception,” says Prof Laschi, who is also the new Director of the Advanced Robotics Centre at CDE. For instance, a robotic gripper designed to handle groceries needs to feel what it is touching and sense the positions of its fingers. Without this feedback, soft robots struggle to perform tasks that require adaptability and precision. “For a soft robot, however, it is difficult to distinguish between proprioception and exteroception. Its strain sensors respond in the same way whether the robot deforms because of its own movement or because of an external contact.”
To address this, Prof Laschi’s team devised a new system — an “expected perception” — to let soft robots better perceive what they’re interacting with. At its core, the loop mimics the way human brains predict sensory input and combine it with sensory feedback. Embedded into a flexible robot capable of bending in all directions, the system allows the robot to calculate its predicted position based on movement commands and compare it with its real-time position, measured using liquid-metal-based sensors in its body. Any discrepancies between the two positions signal external contact. The robot then quickly detects and responds to such forces. This mirrors how humans adjust to external stimuli, such as catching a falling object or regaining balance.
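To make the loop concrete, here is a minimal sketch of expected perception in code — not the team's implementation, but an illustration of the principle: a forward model predicts the proprioceptive reading implied by a motor command, and a mismatch with the measured reading is attributed to external contact. The linear model, sensor dimensions and threshold below are all illustrative assumptions.

```python
import numpy as np

def forward_model(command):
    """Predict the strain-sensor reading the robot expects from its own
    motion (illustrative linear map; the real model is learned/calibrated)."""
    A = np.array([[0.8, 0.1],
                  [0.1, 0.9]])  # hypothetical command-to-strain matrix
    return A @ command

def detect_contact(command, measured_strain, threshold=0.05):
    """Flag external contact when measurement and expectation disagree.

    The residual's direction also hints at where the force came from."""
    expected = forward_model(command)
    residual = measured_strain - expected
    return np.linalg.norm(residual) > threshold, residual

# Self-motion only (noise-level residual) vs. an external push
cmd = np.array([0.5, 0.2])
quiet = forward_model(cmd) + np.random.normal(0.0, 0.005, 2)
pushed = forward_model(cmd) + np.array([0.12, -0.08])
print(detect_contact(cmd, quiet))   # (False, ...) -- within expectation
print(detect_contact(cmd, pushed))  # (True, ...)  -- discrepancy = contact
```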
The researchers tested the system in two scenarios: navigating a maze and learning from human interaction. In the maze experiment, the robot moved through pathways autonomously, using touch to detect walls and adjust its movement. With no cameras or external tracking systems, it relied entirely on its proprioceptive abilities to find its way out. In the second scenario, a human operator guided the robot through a simulated massage or medical procedure on a manikin. The robot learned the operator’s movements and forces, then replicated them with high accuracy.
“It could detect external contact within 0.4 seconds and distinguish its source with remarkable precision,” adds Prof Laschi. “The robot also identified the direction of applied forces with an error margin below 10 degrees, even in dynamic environments.”
Sensory breakthrough for better human-robot interaction
There are possibilities aplenty for soft robots with heightened senses. “They could be used as highly responsive arms for an octopus-inspired robot, deployed for autonomous underwater exploration, environmental monitoring and other operations,” says Prof Laschi.
Assistive soft robots could also offer a better human-robot interaction experience, using physical contact to deliver assistance to senior citizens.
Soft robots could also assist surgeons in minimally invasive operations, exerting just the right amount of force while manoeuvring through delicate tissues. What’s more, their capacity to learn from human guidance and repeat actions makes them well-suited for rehabilitative and therapeutic tasks.
“Robotics is inherently a cross-disciplinary field — bringing together expertise from various domains, from engineering and biology, to neuroscience, material science and AI, to realise its practical applications with real-world impact, in industry, exploration and monitoring, medicine and healthcare, and many more,” adds Prof Laschi.
Looking ahead, the team aims to develop the idea of brain-inspired predictions further, using machine learning to build the kind of internal models that brains construct from experience and use for prediction. With the expected perception framework, the researchers plan to build robots that assist the elderly and their caregivers, as well as workers in the most physically demanding tasks.
Biomedical Engineering
A robust robotic support system

Driven by a differential series elastic actuator, a novel back-support exoskeleton eases strain on users during lifting tasks without adding muscle activation in the back or legs during walking.
If you’ve ever experienced a lower back injury, you know how it can turn everyday tasks into trials. Simple chores become strenuous; workout routines fall by the wayside; even lifting a bag of groceries can feel like a risk. And all too often, a dreaded relapse can undo all the hard-won gains of recovery.

For industries heavy on physical labour, back injuries carry a hefty price tag — costing more than US$200 billion annually in lost productivity, medical expenses and absenteeism. Against the backdrop of an ageing population, coupled with declining fertility rates, protecting those who handle heavy loads — from warehouse workers to construction crews to hospital orderlies — has come to the fore.
Associate Professor Yu Haoyong wants to give lower-back protection a shot in the arm. At the Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, he has led a team to develop an exoskeleton designed to reduce strain on the lower back during heavy lifting. The suit offers a new line of defence against chronic injuries in physically demanding workplaces.
The team’s work was published in IEEE Transactions on Robotics on 9 November 2023.
Taking the strain out of heavy lifting
Back-support exoskeletons (BSEs), worn directly by the user, help reduce lower back strain by providing supportive torque at the hip joints, connecting the trunk to the thighs. Conventional passive BSEs rely on elastic materials, such as springs or carbon fibre, to provide support. But this design lacks adaptability: because the support strength of the components is fixed, passive BSEs are less effective for tasks requiring varied levels of exertion.
Active BSEs, driven by motors and sensors, were developed to bridge this gap. But this configuration, too, has its limitations. Active BSEs generally require two motors, one for each hip joint, to deliver adjustable torque as needed — but the added weight and complexity can lead to discomfort and balancing issues for the user.
“We thought that using only one motor could offer a better solution — but three requirements had to be met,” says Assoc Prof Yu. “First, the device should allow for natural differences in hip angles during movement. Second, precise force control is essential to deliver the right level of support to the user. Third, the exoskeleton should automatically recognise when a user is lifting versus walking, so that it doesn’t make natural movement awkward.”
Together with his team, Associate Professor Yu Haoyong developed a back-support exoskeleton designed to ease strain on users during lifting tasks.
With these goals in mind, Assoc Prof Yu’s team set out to design a single-motor BSE with a differential series elastic actuator (D-SEA) as its centrepiece. “This actuator allows the exoskeleton to provide balanced torque to both hips, even when hip movements differ — such as when walking with one leg in front of the other,” explains Assoc Prof Yu. Using a custom-built cable-roller system, the D-SEA can transfer force smoothly to each hip joint, while a high-precision feedback controller keeps support consistent and responsive to the user’s movements.
“All this allows users to walk as if the suit isn’t there at all. The D-SEA enables ‘backdrivability’, meaning that when users aren’t lifting, they can move naturally without resistance. Secondly, an intelligent assistive technology automatically shifts support modes based on movement, delivering force when lifting and yielding during walking. This means users can carry loads to their destination without obstruction — a key feature in workplaces that require frequent cargo handling,” adds Assoc Prof Yu.
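The control logic can be pictured with a toy sketch like the one below — an illustration of the lift/walk mode-switching idea rather than the team's published controller. The thresholds, gain and the simple common-mode/differential split of hip angles are assumptions made for clarity.

```python
def assist_torque(trunk_angle, left_hip, right_hip,
                  lift_threshold=0.6, gait_threshold=0.3, k_assist=25.0):
    """Toy version of the lift/walk mode logic (angles in radians).

    Lifting: trunk flexed, hips flexing together -> share extension torque
    across both hips, as the differential does. Walking: hips move out of
    phase -> command zero torque so the backdrivable actuator adds no
    resistance. Thresholds and gain are illustrative, not published values.
    """
    common = 0.5 * (left_hip + right_hip)      # in-phase hip flexion (lifting)
    differential = abs(left_hip - right_hip)   # out-of-phase component (gait)
    if trunk_angle < lift_threshold or differential > gait_threshold:
        return 0.0, 0.0                        # transparent, backdrivable mode
    tau = k_assist * common                    # support scales with flexion depth
    return tau, tau                            # split evenly between the two hips

print(assist_torque(0.9, 0.80, 0.75))   # lifting -> (~19.4, ~19.4) N*m
print(assist_torque(0.2, 0.50, -0.30))  # walking -> (0.0, 0.0)
```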
At just 5.3 kilograms, the team’s design is the lightest active BSE available, prioritising portability while ensuring balanced, comfortable support during strenuous tasks. In experiments, it reduced back muscle activation by up to 40% during lifting, without adding strain to the back or legs during walking — a prevalent drawback with many other active BSEs.
Backing workers, boosting productivity
The team’s exoskeleton suits made waves behind the scenes of the 2024 National Day Parade (NDP). Worn by Singapore Armed Forces personnel working on the packing and shipping of over 300,000 NDP packs, the suits provided support as they lifted boxes ranging from 10 to 45 kilograms. One soldier noted that with the suit, he felt as though he was carrying only half the box’s weight.




Feedback from the deployment further underscored the exoskeleton’s potential. According to a survey, 80% of participants felt their performance improved while using the device, 70% expressed confidence operating it and 90% found it useful enough to use again in the future.
“The suit could lower injury risks while boosting both productivity and wellbeing in the workforce,” says Assoc Prof Yu. “The idea is to protect workers from when they are young so that they can maintain quality performance for much longer.”
The prototype suits have also been trialled by, among others, baggage handlers at Singapore Changi Airport and workers at a chemical plant. Data collected from these trials, including the NDP deployment, will be an important step in optimising the suit’s design and advancing its commercialisation. In addition, the researchers are exploring how regular use of the exoskeleton might impact the long-term incidence of back-related musculoskeletal disorders.
With this exoskeleton, heavy lifting might just get a little lighter on the back.
Members of the Biorobotics Lab at the Department of Biomedical Engineering with some of the prototype exoskeleton units. (Photo credit: NUS Biorobotics Lab)
The exoskeleton suit is worn like a harness, supporting the user’s movements and reducing the strain on the back and joints. (Photo credit: NUS Biorobotics Lab)
A newer model of the exoskeleton suit developed in 2024. (Photo credit: NUS Biorobotics Lab)
Biomedical Engineering
Soft robots take on hard tasks

Soft, robust miniature robots powered by fluid kinetic energy can traverse tricky terrains at impressive speeds, offering a valuable new tool for search and rescue operations.
Tiny robots can be just as useful as their larger cousins. Imagine small-scale robots, equipped for precision tasks, navigating confined spaces within the human body to inspect and treat tissues, or zooming through challenging terrain for reconnaissance in search and rescue operations.
At the Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Associate Professor Raye Yeow and his team are making huge leaps in the world of miniature robots. Their latest creation is a completely soft, amphibious crawling robot powered by electrohydraulic kinetic energy. No larger than the palm of a hand and weighing as little as a few paperclips, this robot can move in multiple directions — forward, backward and even turn — thanks to its flexible design.
From inspecting underwater objects to performing delicate tasks in hard-to-reach spaces, such as narrow crevices, the team’s adaptable, durable robot thrives in scenarios where precision and versatility are paramount.
The team’s findings were published in Advanced Science on 1 February 2024.

Associate Professor Raye Yeow pioneered a soft, miniature robot powered by electrohydraulic kinetic energy, capable of traversing tricky terrains at impressive speeds.
Great things come in small packets
Traditional rigid robots excel in many tasks — manufacturing cars, assembling circuit boards, packaging electronics — but they are not naturally geared to be flexible enough to navigate through confined, unstructured environments. Soft robots, with their squishier, flexible bodies, are far better suited for such situations.
Just as our bodies rely on muscles and joints, all robots rely on components called actuators to generate movement. Unlike rigid robots, which move in fixed ways determined by their joints, soft robots can bend, stretch and expand in multiple ways depending on the materials used.
“One major challenge in designing soft robots is controlling how they stretch and deform, which governs how they move about,” says Assoc Prof Yeow. “Traditional soft actuators often face limitations such as slower speeds or difficulty in scaling down for practical applications.”
To tackle these limitations, Assoc Prof Yeow’s team designed a soft miniature robot that harnesses electrohydraulic fluid kinetic energy. Underpinning the movement of the robot is an electrohydraulic actuator, composed of two flexible electrodes and a membrane filled with dielectric fluid. When a high voltage is applied, the electrodes compress the fluid within the membrane, transforming electrical energy into mechanical motion that drives the robot’s movements.
This fluid-driven mechanism creates a continuous crawling motion. By controlling how the fluid flows inside the membrane, the robot can move in multiple directions — not just forward and backward, but also turn. This is achieved by deploying four electrodes on the robot that, when activated, direct the fluid to specific areas of the robot’s body, enabling it to pivot on command in up to eight directions.
“This allows the robot to manoeuvre accurately — even through gaps as narrow as one centimetre,” adds Assoc Prof Yeow. “Such capability could, for example, enable the robot to inspect confined spaces or ferry small payloads in challenging environments.”
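As a rough illustration of how four electrode channels could be combined to steer in eight directions, consider the sketch below. The activation patterns, the `apply_voltage` driver stub and the voltage levels are hypothetical assumptions, not the robot's actual drive electronics or gait timing.

```python
# Illustrative mapping from a commanded heading to four electrode channels
# (front, back, left, right). Activating a channel shunts dielectric fluid
# toward that region of the body; combinations give the eight discrete
# headings described in the paper.
PATTERNS = {
    "forward":        (1, 0, 0, 0),
    "backward":       (0, 1, 0, 0),
    "left":           (0, 0, 1, 0),
    "right":          (0, 0, 0, 1),
    "forward-left":   (1, 0, 1, 0),
    "forward-right":  (1, 0, 0, 1),
    "backward-left":  (0, 1, 1, 0),
    "backward-right": (0, 1, 0, 1),
}

def apply_voltage(front, back, left, right, kv=6.0):
    """Stub for a high-voltage driver; prints instead of switching HV rails."""
    print(f"HV out: F={front*kv} B={back*kv} L={left*kv} R={right*kv} (kV)")

def crawl(direction, cycles=3):
    """Pulse the pattern on and off to produce a crawling gait."""
    f, b, l, r = PATTERNS[direction]
    for _ in range(cycles):
        apply_voltage(f, b, l, r)   # squeeze fluid toward the target region
        apply_voltage(0, 0, 0, 0)   # relax between pulses

crawl("forward-left", cycles=1)
```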
Speed, robustness and adaptability are also defining features of the robot. Despite its small size, it can crawl up to 16 millimetres per second — a rate nearly 290 times faster than other soft miniature crawling robots powered by hydraulically amplified self-healing electrostatic actuators. Its fully waterproof body enables it to operate on both land and underwater, while its durability allows it to recover quickly from severe compression, such as being stepped on.
Small robot, big dreams
The versatile capabilities of the soft robot make it well-suited for a range of specialised tasks. Its current applications could include exploration in narrow gaps, underwater inspection and small-scale reconnaissance, making it a valuable tool in areas such as disaster management, underwater exploration and industrial maintenance. With improvements, such as better water-to-land transitions and reduced power requirements, its utility could expand further to areas like medical robotics and environmental monitoring.
With that in mind, the team plans to improve how the robot transitions between land and water. For instance, it could be given soft paddles to help it swim better. In addition, the researchers are also exploring alternative dielectric materials to reduce the robot’s power requirements. A fully untethered version is also in the works, which would provide greater autonomy for real-world applications.
Lean, mean flying machines

A fast-adaptive estimator for robust flight control draws from the best of both deep-learning techniques and conventional control algorithms to improve drone performance.
Quadcopters, or drones, with their four whirring rotors, have recently become a mainstay technology among militaries, avid hobbyists and first responders alike. Whether delivering crucial medical supplies to the critically injured or aiding in search and rescue missions in terrain too hazardous for human involvement, advanced, nimble drones have transformed high-stakes operations once limited by battery life and payload constraints.
But as billions of dollars are funnelled into building the next generation of fully autonomous systems, some concerns are still up in the air: Are today’s drones reliable enough for flight over densely populated neighbourhoods? Can their onboard intelligence adapt to the rugged, erratic conditions of the real world?
At the Department of Electrical and Computer Engineering, College of Design and Engineering, National University of Singapore, Assistant Professor Zhao Lin is tackling these questions by developing AI-powered drones. His team has created a fast-adaptive estimator, called a neural moving horizon estimator (NeuroMHE), that leverages the best of deep-learning techniques and conventional control algorithms to enhance disturbance estimation and response. This could mean more robust, reliable flight-control systems for high-performance drones — even in the face of unexpected and severe disruptions.

By leveraging deep-learning techniques alongside conventional control algorithms, a fast-adaptive estimator developed by Assistant Professor Zhao Lin’s team makes drones more robust and reliable.
The findings were published in IEEE Transactions on Robotics on 8 November 2023.
Fine-tuning the flight game
Today’s drones are far from perfect. Sudden gusts of wind, abrupt shifts in aerodynamics during tight manoeuvres, or sudden changes in payload weight can throw flight stability out of kilter. The consequences can range from minor deviations and delays to severe, costly crashes, especially in high-stakes situations where even a momentary lapse could be catastrophic.
Most current flight-control systems use estimators to manage disturbances, but these come with limitations. They often rely either on highly specialised, manually tuned parameters that need careful adjustment for each specific environment, or on extensive datasets from which neural networks ‘learn’ disturbances along the way. Both constrain the potential of drones: tuning parameters manually is time-consuming and requires extensive domain knowledge, while data-intensive neural networks need vast amounts of ground-truth data — actual recorded disturbances — for training, which are difficult to obtain. Either approach can achieve good accuracy in the specifically tuned flight environment or against the trained type of disturbance, but neither can autonomously adapt to drastically new or unforeseen scenarios.
NeuroMHE combines the best of both worlds — it inherits the generalisability of neural networks and the robustness of the model-based moving horizon estimator (MHE). “At its core, NeuroMHE leverages a neural network to fine-tune parameters automatically, eliminating the need for extensive manual adjustment,” says Asst Prof Zhao. “At the same time, NeuroMHE provides a physical rule-based reliability guarantee, which optimises the drone’s performance based on its physical properties in real time as the drone zips through its environment.”
This unique fusion enables NeuroMHE to anticipate and counteract a wide range of severe disturbances on the fly — giving drones a safer, smoother and more responsive edge. Unlike previous methods, the team’s system does not require ground-truth disturbance data for training. “Instead, it learns from tracking errors in the drone’s trajectory — deviations from its intended path — making it more efficient and versatile,” adds Asst Prof Zhao. “The system’s adaptive weighting, a mechanism that adjusts its responsiveness based on current flight conditions, is guided by the neural network, allowing NeuroMHE to tweak its sensitivity to disturbances in real time.”
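A stripped-down sketch of the core idea follows: a small network maps recent flight states to the weighting terms of a moving-horizon least-squares problem, so the estimator's sensitivity adapts with conditions. This scalar, closed-form toy stands in for the full nonlinear program NeuroMHE solves; the network, states and numbers are invented for illustration.

```python
import numpy as np

def weight_network(recent_states, theta):
    """Stand-in for NeuroMHE's neural network: maps recent flight states
    to positive per-step weights over the horizon (softplus keeps them > 0).
    In the real system, the parameters are trained by differentiating
    through the estimator against trajectory tracking error -- no
    ground-truth disturbance data needed."""
    return np.log1p(np.exp(recent_states @ theta))

def mhe_estimate(measured, predicted, weights):
    """Scalar toy MHE: the disturbance d minimising
    sum_t w_t * (y_t - yhat_t - d)^2 over the horizon has a closed form.
    (The real NeuroMHE solves a full nonlinear program.)"""
    residuals = measured - predicted
    return np.sum(weights * residuals) / np.sum(weights)

rng = np.random.default_rng(0)
states = rng.normal(size=(5, 3))   # e.g. velocity, tilt, thrust per step
theta = rng.normal(size=3)         # "learned" network parameters
w = weight_network(states, theta)

y = np.array([1.00, 1.10, 0.90, 3.00, 1.05])  # measured accel, one gust outlier
yhat = np.ones(5)                              # model-predicted accel
print(mhe_estimate(y, yhat, w))                # adaptive disturbance estimate
```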
To validate NeuroMHE, the team put it through tests in high-speed flight simulations and real-world environments. They challenged the system with a range of flight scenarios, from routine paths to extreme manoeuvres, using both synthetic disturbances and actual environmental data. “Our system outperformed one of the best estimators today, NeuroBEM, reducing force estimation errors by up to 76.7% using a fraction of the computational load. In addition, NeuroMHE’s compact neural network requires only 7.7% of the parameters used in NeuroBEM, making it a lightweight yet powerful solution for robust flight control,” adds Asst Prof Zhao.
Taking flight
NeuroMHE’s ability to adapt to unexpected disturbances gives it potential for applications where reliability is paramount — from disaster response to autonomous delivery in dense urban areas, and even precision agriculture, where labour shortages are increasingly gripping the sector. “Its adaptable framework can also be generalised to complement other robotic systems that require robust control in dynamic environments, such as underwater or space exploration vehicles,” says Asst Prof Zhao.
To make drone systems even more versatile, the team’s future developments will focus on refining training algorithms for faster, more efficient learning, and integrating NeuroMHE with controllers to create a unified system that optimises both estimation and control simultaneously. A better path for autonomous flight is already in sight.
Helping robots find their way around the construction maze

By improving how unmanned ground vehicles see and navigate complex, cluttered construction sites, automation is set to transform tasks like site mapping and monitoring.
Have you ever wondered how the spaces around you — apartments, offices, shopping complexes — are built?
From site inspections and planning to logistics and assembly, construction is a highly coordinated process involving a network of interconnected suppliers
and workflows. Human hands and eyes have traditionally guided the process every step of the way, ensuring that every i is dotted and every t is crossed.
But today, the construction sector faces a fresh challenge: labour shortages, exacerbated by the Covid-19 pandemic. Even as construction demand rises, almost a third of employers are struggling to hire skilled staff. Much of this gap is due to an ageing workforce.
Automation is emerging as a solution to fill this gap, with robots stepping in to support various tasks alongside human workers, including building mapping and inspection. Dr Justin Yeoh from the Department of Civil and Environmental Engineering, College of Design and Engineering, National University of Singapore, offers two solutions that could transform the deployment of unmanned ground vehicles (UGVs) for such roles.
In a study, published in the Journal of Computing in Civil Engineering on 9 July 2024, Dr Yeoh and his team introduced a method that uses video footage from earlier construction stages to map out navigation routes. This allows UGVs to manoeuvre through busy construction sites unimpeded by scattered materials and equipment.
In another study, published on 16 April 2024 in the same journal, the researchers tackle a different challenge: ensuring that UGVs align what they ‘see’ in real time with digital building plans, or Building Information Modelling (BIM). This alignment enables UGVs to accurately compare on-site conditions with digital blueprints — enhancing reliability in inspections.
Cutting through the noise
The only constant at construction sites is change. These environments are in constant flux — workers moving materials and equipment, debris falling, layouts shifting throughout the day. It’s challenging enough for humans — imagine the complexity for robots.
“UGVs are great for tasks such as monitoring site progress, but construction sites are tough environments for them to navigate. Maps or models created at the start often become outdated as the site progresses, which risks inaccurate UGV inspections,” explains Dr Yeoh.
Working with his team, Dr Justin Yeoh improved the ability of unmanned ground vehicles to perceive and navigate complex, cluttered construction sites.
“There’s also the challenge of aligning the UGV’s view with digital plans, or BIM, in real time. It’s like moving through a maze that’s always changing, where the map and the actual layout don’t always match. Without precise alignment, these robots risk overlooking important site details or misinterpreting data,” he adds.
Dr Yeoh’s team developed two complementary measures tailored to the demands of indoor construction sites. The first approach involves a premapping method that uses video footage taken earlier in the construction process to create an accurate, updated map. Construction sites often have temporary objects — tools, scattered materials or debris — that appear during recording but may not be present when the UGV is deployed. The team’s method analyses this footage to pinpoint and remove such transient objects.
“This essentially ‘cleans’ the map, weeding out clutter and irrelevant data to give UGVs a clear navigation route,” adds Dr Yeoh. “By creating the map offsite, we also minimise disruption to ongoing construction, which can help foster greater acceptance of UGVs on site.”
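One simple way to realise this kind of map cleaning — offered here as an illustrative stand-in for the team's method, not a reproduction of it — is a persistence filter: keep only the points observed in most video frames, so briefly seen tools, debris and workers drop out. The voxel size and persistence threshold below are arbitrary choices.

```python
import numpy as np
from collections import Counter

def clean_map(frames, voxel=0.2, min_persistence=0.8):
    """Persistence filter: keep only voxels seen in most frames.

    `frames` is a list of (N, 3) point arrays from the construction video.
    Transient objects appear briefly, so their voxels fall below the
    persistence threshold and are dropped."""
    counts = Counter()
    for pts in frames:
        counts.update({tuple(v) for v in np.floor(pts / voxel).astype(int)})
    keep = [v for v, c in counts.items() if c / len(frames) >= min_persistence]
    return (np.array(keep) + 0.5) * voxel  # voxel centres of the cleaned map

# A wall visible in all 10 frames; a pallet present in only 2 of them
wall = np.array([[x * 0.2, 0.0, 1.0] for x in range(10)])
pallet = np.array([[1.0, 2.0, 0.3]])
frames = [wall] * 8 + [np.vstack([wall, pallet])] * 2
print(len(clean_map(frames)))  # 10 wall voxels survive; the pallet is gone
```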
While premapping optimises navigation, the second approach addresses an equally critical need: ensuring that UGVs can precisely match their on-site views with digital models, such as BIM. Misalignment could cause UGVs to miss important details — like a wall that is slightly out of place, or an electrical outlet installed erroneously. To prevent such oversights, Dr Yeoh’s team designed a sequential rectification process that continuously compares the UGV’s view with BIM data, prompting small adjustments to correct positioning and angle. In featureless areas, like long and uniform hallways, geometric markers help fine-tune the robot’s orientation.
“With accurate real-time alignment, UGVs enable construction managers to perform quality checks without relying on manual inspections,” adds Dr Yeoh.
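The repeated view-to-BIM correction can be pictured as an iterative nearest-point alignment, sketched below in a translation-only 2-D form. This is a generic ICP-style loop for illustration, not the paper's exact algorithm; the published method also corrects orientation and leans on geometric markers in featureless corridors.

```python
import numpy as np

def rectify_pose(scan_pts, bim_pts, iters=10):
    """Translation-only 2-D ICP sketch: match each scan point to its
    nearest BIM point, shift the pose estimate by the mean residual,
    and repeat until the views line up."""
    offset = np.zeros(2)
    for _ in range(iters):
        shifted = scan_pts + offset
        d = np.linalg.norm(shifted[:, None, :] - bim_pts[None, :, :], axis=2)
        nearest = bim_pts[d.argmin(axis=1)]         # nearest-neighbour matches
        offset += (nearest - shifted).mean(axis=0)  # corrective step
    return offset  # estimated drift between the UGV's view and the BIM

bim_wall = np.array([[x, 0.0] for x in np.linspace(0.0, 5.0, 200)])
scan = bim_wall + np.array([0.3, -0.2])  # the robot's drifted observation
print(rectify_pose(scan, bim_wall))      # ~[-0.3, 0.2]: correction to apply
```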
Paving a way forward for construction site robots
Improving UGV navigation and functionality on construction sites is an important first step towards achieving a fully autonomous robotic workforce — one that is not just efficient and accurate but also capable of making more effective decisions. “The biggest benefit would be enhanced safety in site operations, particularly for hazardous tasks,” adds Dr Yeoh. “These advancements will also improve the reliability and consistency of how robots perform their tasks. With this, project owners may then see significant returns on investment, encouraging broader adoption and acceptance over time.”
The research team has already identified areas for further improvement, such as making the alignment method more efficient in heavily cluttered spaces. They also plan to integrate the approach into a real-time simultaneous localisation and mapping framework, which would enable UGVs to continuously adjust their position and maintain alignment with BIM models as they move through the site.
As robots find their bearings, smart navigation may soon be the new foundation for progress at construction sites.
Automating 3D scanning of built environments

By integrating building information with indoor spatial data, robots can navigate and map complex indoor spaces more efficiently, advancing how industries capture digital representations of the built environment.
Imagine a sleek robot navigating narrow corridors and cluttered spaces, automatically mapping architectural and structural details precisely as it moves — driven by intelligent planning algorithms instead of human intervention.

This is the vision Assistant Professor Vincent Gan and his team are working to realise at the Department of the Built Environment, College of Design and Engineering, National University of Singapore.
Scanning and 3D reconstruction in the built environment present unique challenges, including irregular building layouts, weakly textured elements like white walls, dynamic obstacles such as people and furniture, and diverse material properties like concrete and glass. These complexities not only present significant opportunities for technological innovation but also demand robustness in algorithms and adaptability in hardware. Key questions emerge: How can scanning paths be optimised to achieve both efficient and comprehensive coverage? And how can building information be integrated to enhance robots’ understanding of built spaces, enabling smarter navigation and operations?
By integrating Building Information Modelling (BIM), a commonly used digital model that represents a building’s physical and functional characteristics, with spatial data from IndoorGML, a standard for indoor spatial information, the team has developed a new approach that allows quadruped robots to optimise their routes and scanning positions upfront. This reduces the need for manual oversight and enhances scan data quality — a boon for industries that rely on accurate 3D insights, especially in GPS-limited spaces like indoor environments.
The team’s findings were published in Automation in Construction on 11 July 2024.
Reimagining 3D reconstruction
Aligned with Singapore’s Smart Nation initiative, the concept of digital cities is gaining prominence. There is growing momentum to harness automated, intelligent technologies to capture and reconstruct 3D digital representations of the built environment, integrating this data into virtual platforms. Such platforms can facilitate information management, condition monitoring, assessment and maintenance of built facilities. For example, 3D scanning enables surveyors to conduct precise as-built surveys, allowing early identification of errors during the design-to-construction phase to mitigate costly rework. This technology is equally valuable for legacy buildings and infrastructure in urban areas, where detailed scans play a critical role in documentation and preservation efforts, ensuring the longevity and integrity of these assets.
Assistant Professor Vincent Gan integrated building information with indoor spatial data to help robots navigate and map indoor spaces more efficiently.
Traditionally, 3D scanning has been labour-intensive, relying on stationary or mobile laser scanning tools that require manual setup and positioning. This becomes especially cumbersome in cluttered environments without GPS access. While robotics offers a better way, automated navigation and scanning in such spaces remain challenging.
Many robotics-enabled methods struggle to capture the nuances of indoor spaces — whether tight passages, machinery or furniture — often leading to incomplete scans that require extensive postprocessing.
“We wanted to develop a new approach that enables robots to navigate and scan automatically, ensuring high-quality scan data with enhanced coverage, without human intervention,” says Asst Prof Gan.
To achieve this, the researchers integrated BIM with IndoorGML. BIM provides detailed geometric information, but it lacks the spatial connectivity data needed for robotic navigation. “Enriching BIM with IndoorGML enables the creation of a dynamic indoor navigation model that integrates building geometry with spatial topological data. This equips robots with the ability to interpret multi-scale spatial networks,” explains Asst Prof Gan.
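Conceptually, the enriched model behaves like a graph: IndoorGML cell spaces (rooms, corridors, stairwells) become nodes, door connections become edges, and BIM geometry supplies the edge costs, over which a standard planner can run. The tiny example below is invented for illustration and is not drawn from the paper.

```python
import heapq

# Toy navigation model: nodes are IndoorGML cell spaces, edges are door or
# opening connections, weights are path lengths from BIM geometry.
GRAPH = {
    "lobby":      {"corridor_1": 8.0},
    "corridor_1": {"lobby": 8.0, "room_101": 3.0, "stair_A": 12.0},
    "room_101":   {"corridor_1": 3.0},
    "stair_A":    {"corridor_1": 12.0},
}

def shortest_path(start, goal):
    """Dijkstra over the cell-space graph."""
    queue, visited = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in GRAPH[node].items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_path("lobby", "room_101"))
# (11.0, ['lobby', 'corridor_1', 'room_101'])
```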
The team also paired the navigation model with an algorithm that guides the robot to optimal scanning positions along its path. The algorithm selects key scanning positions and traversal sequences that reduce the number of scans needed while maximising coverage. Additionally, a sensor perception model addresses potential occlusions in scanning, ensuring high-quality data acquisition.
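Choosing scan positions that jointly cover a space is a set-cover-style problem, and a common baseline — sketched here, without claiming it is the paper's exact algorithm — is a greedy strategy: repeatedly pick the candidate position that sees the most still-uncovered surface. The candidate positions and visibility sets below are invented.

```python
def plan_scans(candidates, visible, target):
    """Greedy set-cover sketch for scan planning.

    `visible[c]` is the set of surface patches a scan at c would capture,
    per some sensor/occlusion model. Repeatedly pick the candidate adding
    the most uncovered surface until the target set is covered."""
    covered, plan = set(), []
    while covered != target:
        best = max(candidates, key=lambda c: len(visible[c] - covered))
        gain = visible[best] - covered
        if not gain:        # remaining patches occluded from every candidate
            break
        plan.append(best)
        covered |= gain
    return plan, covered

visible = {
    "P1": {"wall_N", "wall_E", "col_1"},
    "P2": {"wall_S", "col_1"},
    "P3": {"wall_W", "wall_S"},
}
target = {"wall_N", "wall_E", "wall_S", "wall_W", "col_1"}
plan, covered = plan_scans(list(visible), visible, target)
print(plan)  # ['P1', 'P3'] -- full coverage with two scans instead of three
```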
The researchers then put their system to the test, deploying a quadruped robot equipped with a 3D LiDAR sensor at an NUS building. The robotic scan demonstrated accuracy comparable to conventional terrestrial laser scanning while enhancing scanning efficiency and coverage. By facilitating the reality capture of built facilities,
this approach reduces the time and costs associated with traditional manual data acquisition and processing, transforming workflows in the digitalisation of the built environment.
Advancing robot navigation with BIM
Automating 3D scanning brings multifaceted benefits across different industry applications. Faster, more comprehensive scanning helps robots operate in unstructured, dynamic environments such as construction sites.
“Real-world environments could be complex — think moving furniture, workers, fluctuating light conditions or doors opening and closing. These variables present new challenges for robots in the built environment.”
“Real-world environments could be complex — think moving furniture, workers, fluctuating light conditions or doors opening and closing. These variables present new challenges for robots in the built environment,” says Asst Prof Gan.
Moving forward, the team plans to leverage BIM-derived ‘as-planned’ information to rapidly generate virtual point clouds that accurately represent the geometry of built environments. These virtual point clouds can then be used to train AI models to classify point clouds with semantic labels, producing 3D semantic maps of the surrounding structure. This capability enhances a robot’s understanding of its environment and supports the development of semantic-aware algorithms to improve situational awareness and navigation in complex spaces. Another recent study by the team, published in Automation in Construction, looks into this challenge.
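Generating such labelled training data can be as simple as sampling points from each BIM element's geometry and tagging them with the element's class. The sketch below uses axis-aligned boxes as a stand-in for real element meshes; the elements, sizes and labels are hypothetical.

```python
import numpy as np

def sample_element(bbox_min, bbox_max, label, n=500, seed=0):
    """Sample labelled points from a BIM element, here approximated by its
    axis-aligned bounding box (real pipelines sample the element's mesh).
    Geometry and class labels come straight from the as-planned model."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(bbox_min, bbox_max, size=(n, 3))
    return pts, np.full(n, label)

# Hypothetical as-planned elements: (bbox_min, bbox_max, semantic class)
elements = [
    ((0.0, 0.0, 0.0), (10.0, 0.2, 3.0), "wall"),
    ((0.0, 0.0, 0.0), (10.0, 8.0, 0.1), "floor"),
    ((4.0, 3.0, 0.0), ( 4.4, 3.4, 3.0), "column"),
]

clouds, labels = zip(*(sample_element(lo, hi, lab) for lo, hi, lab in elements))
X, y = np.vstack(clouds), np.concatenate(labels)
print(X.shape, y.shape)  # (1500, 3) (1500,) -> training pairs for a segmenter
```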
“We will work closely with experts in mechanical engineering, electronics, and computer science to advance the hardware and software aspects to facilitate the adoption of robotics and automation in the built environment,” shares Asst Prof Gan.
Teaching robodogs new tricks

Modelled on the neural control systems of animals, a new layered control framework enables legged robots to navigate complex terrains with greater agility and precision.
Eyes on the finish line. The whistle blows. A trained canine whizzes past obstacles, swerves around bright orange cones, leaps through metre-high hoops and glides effortlessly under low bars as the spellbound audience oohs and aahs in unison.

Agility championships are a showcase of how nimble and acrobatic dogs can be. Assistant Professor Guillaume Sartoretti is training agile dogs of his own. But not the furry kind; rather, those that take a sleek, metallic form.
At the Department of Mechanical Engineering, College of Design and Engineering, National University of Singapore, Asst Prof Sartoretti and his team, in collaboration with researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, have designed a robotic control framework that emulates the central nervous system’s layered approach to movement control. This equips quadruped robots with the agility and adaptability needed to grapple with complex terrains. The team’s work also deepens our understanding of the biological principles that drive animal movement.
The team’s findings were published in the preprint repository arXiv on 27 April 2024, and later presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, one of the top conferences in the field, in Abu Dhabi in October 2024.
Learning from nature
Traversing unpredictable, obstacle-laden terrain with ease is second nature to many vertebrates — thanks to the wonders of the central nervous system (CNS). For locomotion, the core of this system lies in central pattern generators (CPGs) — networks of neurons in the spinal cord that generate rhythmic muscle movements even without direct orders from the brain. These provide the basis for essential actions such as walking, running and climbing.
To handle and adapt to complex environments — such as obstacles, changing surface textures, or sudden shifts in direction — animals depend on descending modulation: signals from the brain that tweak the CPG-generated rhythm based on sensory feedback. Visual cues, balance and touch provide immediate data, enabling animals to make split-second adjustments to their movements. This multi-layered, feedback-driven system is what makes animals so adept at moving across varied terrain, from rocky paths to steep inclines.
Assistant Professor Guillaume Sartoretti led a team to design a robotic control framework that mimics the central nervous system’s approach to movement control.
Robodogs, however, have yet to match this natural versatility. Traditional frameworks that control their movement are single-layered: they perform well in simpler environments, but stumble when presented with stairs or uneven ground. Newer models, while slightly better at rhythmic movement, lack the finesse required for rapid, real-time adjustments on rough terrain.
To bridge this gap, Asst Prof Sartoretti’s team looked to nature for inspiration. Their hierarchical control system emulates the CNS of vertebrates using two distinct neural networks to handle movement.
“The first network is like the spinal-cord CPGs in animals, where it produces basic rhythmic gaits that enable the robot to walk and maintain balance on flat terrain,” says Asst Prof Sartoretti. “Meanwhile, the second, or descending modulation network, takes in sensory data from the environment, using inertial and joint sensors, as well as a visual sensor, to adjust the robot’s movement as needed — much like how animals use their senses to fine-tune their actions.”
Together, these layers enable the robotic system to react dynamically. Simulations have demonstrated the robot’s ability to navigate environments with stairs, high obstacles and wide gaps, maintaining coordinated movement and avoiding missteps.
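The two-layer idea can be caricatured with a single Hopf oscillator as the "spinal" rhythm generator and a scalar input standing in for descending modulation, as below. This is a textbook CPG model shown for illustration, not the team's trained networks; the gains and the obstacle trigger are invented.

```python
import numpy as np

def cpg_step(state, dt=0.01, mu=1.0, omega=2 * np.pi, modulation=0.0):
    """One step of a Hopf oscillator driving a single leg.

    The oscillator settles onto a stable limit cycle (the basic gait
    rhythm); `modulation` stands in for the descending network's output,
    perturbing the cycle when, say, the visual sensor reports an obstacle."""
    x, y = state
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x + modulation
    return np.array([x + dt * dx, y + dt * dy])

state = np.array([1.0, 0.0])
for t in range(200):                    # 2 s of walking at dt = 0.01
    obstacle_ahead = 100 <= t < 130     # hypothetical visual trigger
    state = cpg_step(state, modulation=5.0 if obstacle_ahead else 0.0)
print(state)  # rhythm persists; the modulated interval reshaped the cycle
```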
Training ground
The team’s work also uncovers fresh insights into the biological principles governing animal locomotion. “With the support of the EPFL team, we’ve further underlined how the CNS’s layered control — balancing rhythmic movement with adaptive feedback — provides stability and agility in animals,” adds Asst Prof Sartoretti. “For instance, robustness to sensorimotor delays — kinks that make current robotic systems stumble in unpredictable environments — has been widely documented in animals, but this is the first time it has been studied and replicated in robodogs.”
This means robodogs could be trained to do much more, particularly in challenging, high-stakes environments. For instance, disaster relief often relies on rescue dogs to explore collapsed buildings and rubble in search of survivors. Autonomous robots could scale up these efforts, critical in the first hours after a disaster, while reducing risks to human and canine rescuers. More generally, legged robots could play a role in last-mile delivery, scaling stairs to deliver packages to doorsteps, or serving as home assistants to help older or less-able individuals with daily tasks and chores.
The team’s next steps involve refining how the two control layers interact, allowing for real-time adjustments in rhythm, reflexes and even gait transitions, such as switching to running or carefully stepping over obstacles — a feat yet to be achieved in robotics. They are currently wrapping up tests of their bio-inspired framework on actual robodogs to gauge its reliability and adaptability in real-world settings.
Turns out, you can teach an old dog new tricks — especially if it’s made of metal.
Robot safety a top priority

Designing safe, reliable and human-centric robots is a growing priority as they become an integral part of daily life.
Barrier guards, safety fences and motion-sensing alarms used to separate man from machine. But with advances in technology, robots have moved out from factory floors, and into human-centric spaces, from homes to hospitals to hotels.
Collaborative robots, purpose-built to work alongside humans, are increasingly being integrated into sectors grappling with a labour crunch. Singing robot waitstaff deliver bowls of ramen, alleviating the shortage of workers in the food industry; humanoid porters ferry toiletries to hotel rooms, enhancing efficiency and reducing labour costs without compromising the quality of guests’ experience.
As our interactions with robots become more frequent and personal, the quality of these interactions has become a priority. At the Department of Electrical and Computer Engineering, College of Design and Engineering, National University of Singapore, Assistant Professor Fan Shi’s Human-Centred Robotics Lab is driven by one goal: making robot operation as safe as possible in human-centric environments.
Establishing transparency

“I envision a society where everyone can live with joy and dignity,” says Asst Prof Shi. “As populations age, we face a growing challenge: there aren’t enough young people to support the daily needs of an increasingly elderly population.”
To address this, Asst Prof Shi believes robots designed to coexist harmoniously with humans are essential. This vision inspired the name of his research group: the Human-Centred Robotics Lab. There, his team develops robots that not only learn and perform tasks but also operate safely and reliably in human environments. “Safety and human-centric design lie at the core of our work because robots should enhance lives, not complicate them,” adds Asst Prof Shi.
One of the biggest challenges today is the opaque nature of many state-of-the-art methods, largely due to advancements in deep learning. The advent of foundation models has further complicated this issue, as their complex task dimensions make identifying failure scenarios even more challenging. One of the team’s projects, Diversifying identified failure scenarios to fully address the weak spots of AI-based robotic systems, tackles this challenge, focusing on creating methodologies that lead to more transparent systems and establishing proper benchmarks to evaluate these AI-based robotic systems.
Assistant Professor Fan Shi’s Human-Centred Robotics Lab makes robot operation as safe as possible in human-centric environments.
To do so, Asst Prof Shi’s team is improving the sample efficiency of these AI methods to adapt them to a variety of robotic platforms, all while prioritising safety. These methods have wide-ranging applications, from robotic arms to quadrupedal robots and drones, as these robots are increasingly integrated into our daily environments. Ultimately, he aims to develop a comprehensive approach that serves as a critical benchmark for ensuring the safety and reliability of robots before they are deployed, allowing them to better assist people in real-world settings.
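One generic recipe for this kind of failure-scenario mining — offered as an illustrative sketch, not the lab's actual methodology — is randomised search in simulation with a diversity filter: sample scenario parameters, keep those that push a safety metric past its limit, and retain only failures sufficiently different from ones already found. The toy simulator and all thresholds below are invented.

```python
import numpy as np

def find_diverse_failures(simulate, n_samples=2000, limit=1.0,
                          min_separation=0.5, seed=0):
    """Randomised failure search with a diversity filter.

    `simulate` maps a scenario parameter vector to a scalar safety metric
    (e.g. peak contact force). Keeping only mutually distant failures means
    the identified weak spots span distinct behaviours, not one cluster."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_samples):
        params = rng.uniform(-1.0, 1.0, size=3)  # e.g. payload, push, slip
        if simulate(params) <= limit:
            continue                              # scenario is safe
        if all(np.linalg.norm(params - k) >= min_separation for k in kept):
            kept.append(params)                   # a new, distinct failure mode
    return kept

toy_sim = lambda p: abs(p[1]) + max(0.0, p[2])  # fails: hard push + slippery floor
print(len(find_diverse_failures(toy_sim)))      # a handful of distinct failures
```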
Safety for physical AI

In another project, Systematically evaluating the robotic solution in human-centred environments, Asst Prof Shi’s team aims to make robot operation safer. “When it comes to evaluating the safety of robots in human-centred environments, the first challenge we face is gathering the data needed,” he adds. “To systematically assess safety, we need a wide range of realistic scenarios. However, real-world testing is both dangerous and time-consuming. That’s why we’re focusing on developing more advanced generative methods and high-fidelity simulations to validate these systems safely and efficiently.”


Currently, his team is working on tackling the problem at its core — by refining the physics models and simulators that underpin these evaluations — as building a robust foundation will ultimately raise the upper limits of what robots can achieve. This will have a significant impact on a wide range of practical applications, from locomotion and drone flight to manipulation tasks — each of which plays a critical role in human-centred settings.
State-of-the-art neural-network-controlled quadrupedal robots in Asst Prof Shi’s lab are undergoing rigorous testing to ensure the safety and reliability of advanced robotic systems.
Robotic arms being tested for their safety in human-centred environments.
To validate state-of-the-art methods and benchmark their performance, Asst Prof Shi’s team uses various robotic platforms as testbeds, such as human-like manipulation arms. “Such designs are highly likely to be used in environments where they will coexist with people, and one of our main focuses is on identifying and mitigating potential safety risks, which is an essential aspect of deploying these robots alongside humans,” he adds.
In addition, the team is exploring the risks associated with AI-based robotic tasks across multiple levels, from foundation models to low-level control systems — work with broad applicability, spanning various robotic platforms and tasks. “To achieve this, we’re collaborating with domain experts from around the world, combining robotics, simulation and AI expertise to tackle these challenges,” says Asst Prof Shi. “There’s a lot of exciting research in the pipeline, and we look forward to sharing new results later this year.”
CDE brings together world-class researchers across the entire robotics pipeline — from innovative robot hardware design and control theory to artificial intelligence for robotics. “My research complements these efforts by focusing on robust benchmarking and performance evaluation, especially in real-world, human-centred environments,” adds Asst Prof Shi. “The supportive and collaborative environment at CDE has enabled me to build strong partnerships with colleagues and advance impactful robotics research. I look forward to continuing these efforts and contributing to innovation in the field.”