Heights - 2025



Nadir to Oblique

Lifting the UAV game; solutions in aerial construction; path planning algorithms for beyond visual line of sight; micro-drones moving into commercial operations.

Quantum Sensing

Profound change could be ahead for airborne reality capture and mapping, in the realm of quantum physics.

Ready, Set, Modernize

ASPRS is preparing the geospatial industry for the modernized National Spatial Reference System.

A Green Light for Shoreline Mapping

A hybrid system for capturing land and water promises dramatically improved efficiency over legacy aerial approaches.

Under Pressure

Can traditional aerial photogrammetry survive in the era of uncrewed aviation?

Publisher Shawn Dewees shawn.dewees@xyht.com

Editor-in-Chief Jeff Thoreson jeff.thoreson@xyht.com

Director of Sales and Business Development Chuck Boteler chuck.boteler@xyht.com

Creative Director Ian Sager ian.sager@xyht.com

Accounting and Classifieds Angie Duman angie.duman@xyht.com

Circulation subscriptions@xyht.com Phone: 1 301-682-6101

Editor, Nadir to Oblique Jeff Salmon jeff.salmon@xyht.com

Contributing Writers Qassim Abdullah

Jenna Borberg

Marc Delgado

Linda Foster

Juan Plaza

Stephen White

Christopher Parrish

Gavin Schrock

Partners and Affiliates

Nadir to Oblique

FRESH SPECS THAT ARE LIFTING THE UAV GAME

THE LATEST UAV TECHNOLOGIES RELEASED IN THE FIRST QUARTER of this year seem to be bucking the trend of whitewashed main frames and disappearing control buttons commonly seen in the last five years. Instead, the current crop of drones is gaining more advanced robotic payloads, as well as adopting unconventional shapes that allow for better functionality.

Plus, there’s a big jump in chip performance thanks to more advanced artificial intelligence technology. Here are the latest UAV specs that bring in the bells and whistles.

SHAPE SHIFTING BODY

The most common UAV main frame, or chassis, takes on a spider shape, forming the X-like configuration that holds the motors, rotors, and payloads. Because this design blueprint is so common, most drones look somewhat alike.

Two UAV companies are betting on a more rounded approach to building the drone chassis, a breath of fresh air in drone design. A winner of this year's CES Innovation Awards is the Hagamosphere, a spherical drone from DIC Corporation, a Japanese company.

With its eight propellers built inside a cubic frame, the Hagamosphere is essentially an omnidirectional multicopter, which allows it to move horizontally and vertically without tilting while airborne. But because it is also housed in a geometrically shaped sphere guard, it can move by rolling on the ground, allowing the drone to work in environments where flying could be restricted because rotor blades can hit hanging cables or pipes.

The verti-Pit mini by WEFLO has a built-in AI capability.

But if blades are the problem, then why not just eliminate them from the drone's main frame design? Well, that's what the designers at Airus and Hanseo University of Korea did when they created the Bladeless Drone, which relies on rotorless propulsion technology.

The drone's round shape, which harks back to the flying saucer era in movies, pushes air out from its six wind pits, providing lift while reducing noise by about 40 percent, perfect for hovering over cities. The Bladeless Drone, which also won this year's CES Innovation Awards, can carry up to 10 kg of cargo, a bit lighter than bigger UAVs but just the right payload capacity to deliver foodstuffs and medicines in tight urban spaces, while also compact enough to be used in building asset inspection.

ROBOTIC PAYLOADS

Fitting robotic arms to UAVs is definitely more useful than simply equipping them with cameras and other imaging payloads. Drones outfitted with robotic arms, or automatic manipulators, allow UAV operators to perform many useful tasks beyond typical aerial photography or delivery.

MobiRobo, from the Japanese company Kailas Robotics, is an ultralight and dexterous robotic arm that can be mounted as a payload on UAVs. It can be used to carry and move objects from one place to another: dropping lifesaving equipment during emergencies, recovering dangerous materials at work sites, or removing debris from solar panels during inspections. Stability and accuracy of the robotic arm while the drone is in motion are ensured by MobiRobo's patented stabilization technology.

Another robotic arm payload to look out for is Pliabot from Hong Kong-based Wisson. Its lightweight Orion Pliabot Aerial Manipulator can also be attached to UAVs to facilitate gripping, placing, and retrieving samples while the drone is airborne.

AI INTEGRATION

UAVs are also joining the current wave of gadgets that leverage artificial intelligence, from embedded high-performance AI microchips to AI-based computer vision.

Regulus, a pioneering on-device AI chip, promises to transform UAVs into smart systems without the extremely power-hungry demands typical of AI applications. Consuming just under three watts of power, the Korean-designed Regulus chip is ultra-compact (17 x 17 mm) and has been optimized for use in drones, which are usually space-constrained due to their body size and limited battery life.

The chip can support robotic payloads and AI camera applications because of its ability to process ultra-high-definition images and video with minimal latency and maximum accuracy. It also supports major machine learning frameworks, including TensorFlow, PyTorch, and ONNX, for creating custom AI models specific to UAVs, such as image analysis and segmentation during reconnaissance surveys.

One drone that already has built-in AI capability is the verti-Pit mini by WEFLO, another Korean company that has been creating top-of-the-line UAVs for the industrial sector. Fitted with payload sensors and an AI-based diagnostic system, the verti-Pit drone can conduct a self-diagnosis in 10 seconds, doing away with human contact and providing faster analysis before takeoff.

When verti-Pit drones fly into the air, they are ready to work: the AI-powered smart landing pad automatically inspects the drone's propulsion system before launch and after landing. The diagnostics are backed by advanced AI algorithms to make sure every drone is safe to fly and ready to conduct its air operations.
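The ONNX support mentioned earlier is the interoperability piece: models are typically trained in a framework like TensorFlow or PyTorch and then exported to ONNX for deployment on embedded hardware. A minimal sketch of such an export, assuming PyTorch; the tiny network and file name are hypothetical stand-ins, not the Regulus toolchain:

import torch
import torch.nn as nn

# Hypothetical stand-in for a UAV image-segmentation network; actual
# models targeted at on-device AI chips are not public.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1),   # one-channel mask logits
).eval()

dummy = torch.randn(1, 3, 256, 256)   # one RGB frame at an assumed input size
torch.onnx.export(
    model, dummy, "uav_segmenter.onnx",
    input_names=["image"], output_names=["mask"],
)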

An innovative spherical drone from DIC Corporation.
An ultralight and dexterous robotic arm from Kailas Robotics can be mounted as a payload on UAVs.

Nadir to Oblique

AERIAL CONSTRUCTION

Monitoring Solutions that Require No Pilot

There are two solutions that require, in one case, no drone, and in another, a drone that requires no FAA Part 107-certified operator.

The first is a crane-mounted reality capture solution that delivers daily as-built data to track, verify, and document construction progress, and it is ideal for crowded city construction environments where the use of a UAV would be impractical. Applications include site progress monitoring, daily as-built verification, operational planning, quality control, and jobsite documentation.

A CraneCamera, as the name suggests, is mounted on a construction crane, providing an "eye-in-the-sky" capability. A modular system, the unit is weatherproof and designed to resist crane vibrations so that crane movements do not degrade the imagery. Equipped with RTK GPS and IMU sensors, the system can be set up to collect site images at set intervals. For sites where UAVs are not an option, CraneCamera looks like a great way to leverage geospatial data.

The second monitoring system uses an active tethered drone that requires no Part 107-licensed pilot. The Sigma is a two-part system. The first is a hexacopter UAV equipped with a radiometric thermal camera, a wide-angle color camera, and a zoom color camera mounted on a three-axis gimbal. The Sigma's second component is its container, which is available as a transport case or a rooftop box configuration for vehicle mounting.

Besides eliminating the need for an FAA-certified pilot, and therefore the expense and time associated with that training, Sigma affords several other features. The UAV is powered via a tether, making extremely long-duration flights routine; operations lasting days, weeks, and more are common. Additionally, unlike free-flying UAVs, Sigma's active tethering system stabilizes the UAV and allows it to operate in rain, snow, and windy conditions where conventional UAVs are grounded.

On the con side, from a construction site monitoring viewpoint, Sigma's functions are limited to on-demand image capture. This is consistent with the unit's primary application: providing public safety teams (generally fire, rescue, and law enforcement) with "mission-critical situational awareness from elevated perspectives." The system is, at present, not geared to the geospatial data collection market.

Yet one of its listed use cases is "infrastructure security," including site inspection and monitoring applications. It's not too much of a stretch to imagine a survey vehicle equipped with a Sigma rooftop box visiting a construction site, driving the site, sending the tethered drone up to capture imagery at designated control points, and then processing the data to create geospatial deliverables. No Part 107 required. Mind you, they are not offering this (yet), but it seems like a distinct possibility down the road as the system evolves.

—Jeff Salmon, Jeff.Salmon@xyht.com

Nadir to Oblique

NOT JUST WINGING IT

Path planning algorithms will bolster beyond visual line of sight UAV flights

WHILE CANADA AND EUROPE ARE ALREADY RELAXING THEIR RULES on flying UAVs outside the visual range of pilots, BVLOS (beyond visual line of sight) operations in the U.S. may still take some time to win government approval due to airspace safety and security considerations. This is, of course, another classic case of technological innovation out of step with bureaucratic rulemaking, as more advanced algorithms for autonomous UAV navigation have become more widely adopted and easier to deploy.

Take for example path planning, or the algorithms that permit UAVs to avoid obstacles and reach their target efficiently. Appropriate path planning is crucial so that UAVs operating under BVLOS can complete their flights safely while accommodating real time air traffic constraints.

Path planning makes this possible by finding the optimal path between the origin and destination points, while at the same time taking into consideration travel distance and time, as well as battery power. Crunching all this information can now be easily accomplished by the new generation of path planning algorithms.

In an in-depth analysis published in the Journal of Physics last year, Elena Politi and her colleagues from Harokopio University in Athens, Greece, identified three important factors that are shaping the development of safe and reliable BVLOS operations: path planning algorithms; sensors for environmental detection and perception (think cameras, lidar, and radar); and ADS-B (Automatic Dependent Surveillance-Broadcast), the surveillance technology through which aircraft determine their position via satellite navigation and broadcast it in real time.

Path planning is so important to the UAV industry that many algorithms have already been developed throughout the years and drone users can choose algorithms to employ depending on the complexity of their BVLOS operations.

In general, path planning algorithms can be pigeonholed as either global (users provide the path to be followed by the drone), local (real-time path planning using sensors), or a hybrid of both approaches. They have interesting names, such as the most commonly used A*, as well as Ant Colony, Dijkstra's, Neural Network, Particle Swarm, and Genetic algorithms. What unites all of them is their computational efficacy.

In a recent review of path-planning algorithms by researchers from Queensland University of Technology in Australia, global and local path-planning algorithms have been shown to exhibit various levels of efficacy. For example, algorithms such as A*, Genetic, and Particle Swarm are more useful in "static and regulated situations," meaning environments where the obstacles are known beforehand, such as cities and other built-up areas.
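To make the mechanics concrete, here is a minimal sketch of the kind of global, grid-based search A* performs. The occupancy grid, uniform step costs, and Manhattan heuristic are illustrative choices, not tied to any particular flight-planning product:

import heapq

def a_star(grid, start, goal):
    """Minimal A* over a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]    # entries: (f = g + h, g, cell, parent)
    came_from, best_g = {}, {start: 0}
    while open_set:
        f, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                  # already expanded via a cheaper route
            continue
        came_from[cell] = parent
        if cell == goal:                       # walk parents back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1                     # uniform step cost
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr, cell))
    return None                                # goal unreachable

# A wall of known obstacles, the "static and regulated" case noted above
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall via column 3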

Conversely, local and hybrid path planning algorithms that are based on neural network or reinforcement learning have greater flying adaptability because they allow drones to make decisions in real time, although they require a lot of computing power.

Commercial UAV data platforms like DroneDeploy and Pix4D are already offering these path planning options for their users. Yet according to an article in the journal Nature, available path planning features are often limited to only a few preset options. Drones, including those that operate under BVLOS, must therefore capture their targets multiple times in order to perform their tasks effectively. This, according to the authors of the article, is a time-consuming process which can delay BVLOS projects.

Another setback to operating UAVs beyond visual line of sight, at least in the U.S., is acquiring a BVLOS waiver from the Federal Aviation Administration (FAA). The FAA oversees all drone operations in the country under its Part 107 rules, and procuring a Part 107 BVLOS waiver can be a complicated and arduous process. Since the rule was implemented in 2016, only a handful of companies have been granted waivers, mostly big drone delivery businesses such as Zipline and UPS Flight Forward.

So how about private drone operators who want to fly their machines for surveying and asset inspections? They could always apply for the BVLOS waiver themselves, or they can use the services of companies like American Robotics and DroneDeploy, which have been granted BVLOS waivers by the FAA at both local and national levels.

These companies use advanced path planning algorithms that allow BVLOS-operated drones to access larger areas, reduce costs and security risks to operators, as well as expand to other tasks such as site inspections, mapping, and even sampling.

For now, all this talk of the benefits of BVLOS is limited to those who can acquire the FAA waivers. While path planning algorithms are already proving their worth in normal UAV operations, drone operators who want to conduct BVLOS flights will just have to be patient until the U.S. government releases a definitive set of BVLOS rules.

In the meantime, other countries are on the path to launching uncrewed BVLOS drone operations. The UK, for example, will permit BVLOS flights by 2027, creating new jobs and bolstering the nation's coffers by up to £45 billion by 2030, according to estimates by PwC, a global consulting firm. In the high-stakes drone business, anything that promotes flight efficiency, safety, and cost savings is worth waiting for.

Nadir to Oblique

MICRO-DRONES

MOVING INTO COMMERCIAL OPERATIONS

MICRO-DRONES, THOSE THAT WEIGH LESS THAN 250 grams, are popular with recreational UAV flyers for several reasons. They are relatively inexpensive, with models starting at under $500. Additionally, the 249-gram-and-under category requires no FAA registration, nor does it fall under the remote ID (RID) regulation.

Now, with the aid of sophisticated flight planning and automated control software, micro-drones are tackling commercial operations such as construction monitoring.

One such example is Magil Construction, a general contractor with projects across Canada. Using Dronelink's flight planning and mission control app, the company developed an internal drone program that flies weekly progress monitoring missions on active sites to improve project management processes and quality control. Captured data is processed in Pix4D into high-resolution maps and 3D models, which are shared online among all team members.

Magil Construction's experience has shown that automating micro-drones allows the program to expand easily while offering fewer regulatory hurdles, lower costs, and safer operations on active sites. Personnel on site simply execute pre-planned missions on a weekly basis.

One of the challenges of construction monitoring is the fact that these sites are generally occupied by construction personnel during daylight hours and thus pose a safety concern for drone overflights, what the FAA calls "operations over people" (OPP). This is another advantage of using a micro-drone for this task. In the U.S., the FAA classifies micro-drones as Category 1, which means they can fly OPP missions without needing additional documentation from the manufacturer.

For lower cost, easier deployment, safer overflights, and reduced regulation, micro-drones could be a cost-effective solution for a variety of construction monitoring applications.

—Jeff Salmon, Jeff.Salmon@xyht.com

An Integrated Drone-to-CAD Solution

CARLSON SOFTWARE provides the land development industry with innovative software and hardware solutions built to work for the clients that depend on them every day. As a one-source solution, we provide CAD design software, field data collection, GNSS, UAV, and laser measurement products for the surveying, civil engineering, GIS, and construction industries.

Our wide product range for drone-to-CAD workflows includes:

• Carlson PhotoCapture for photogrammetry, UAV mapping, and LiDAR-photogrammetry integration

• Carlson Point Cloud for powerful point cloud manipulation, combination, and feature extraction

• UAVs with photogrammetry, LiDAR, and bathymetric payloads

• Carlson Precision 3D for engineering design in 3D

• Field solutions including SurvPC data collection software, data collectors, GNSS receivers, robotic total stations, and laser scanners

Carlson Software is proud to provide our customers with free technical support both online and on the phone. The day-to-day users of our solutions constantly provide invaluable feedback from throughout the land development industry, which we use to ensure that we are continually developing innovative, helpful, customer-driven features and solutions.

Exyn Technologies Develops Advanced LiDAR

EXYN TECHNOLOGIES DEVELOPS ADVANCED LIDAR-BASED MAPPING and surveying solutions that combine industry-leading autonomy with fully customizable modularity to tackle all your mapping needs in challenging operational environments. Our flagship product, Nexys, is an advanced autonomous mapping and navigation system designed to capture highly accurate, colorized 3D point clouds in dynamic, dangerous, or otherwise inaccessible environments. Its applications span multiple industries, including the construction, mining, aerospace, civil engineering (AEC), and geospatial sectors, offering an unparalleled combination of precision, efficiency, and safety for capturing critical datasets.

The cutting-edge technology at the heart of Nexys leverages a robust Simultaneous Localization And Mapping (SLAM) algorithm to autonomously navigate and precisely map data-rich areas of interest without the need for a prior map or a pilot in the loop. Real-time data collection enables swift decision making in the field, while a robust post-processing pipeline enables operators to quickly create feature-rich 3D models for stakeholder evaluation and input.

Its modular design allows for multiple deployment options to meet specific project needs and seamless integration with existing robotic systems. The payload can be quickly and easily switched between a variety of configurations, including handheld, boom-, vehicle-, robot-, or drone-mounted, giving you the flexibility and efficiency to use one device in any mapping environment.

For companies striving to stay ahead in their industry, Nexys not only presents a solution to traditional surveying challenges but also equips them with a modular solution to meet any future data capture needs while keeping survey teams safely out of harm’s way.

Exyn Technologies’ commitment to enhancing autonomous mapping not only sets a new standard for BVLOS data capture, but also promises significant advancements in how spatial data is captured and utilized in the future.

Contact Information

For questions about our modular autonomous drones, visit our website at www.exyn.com or email us directly at hello@exyntechnologies.com.

QUANTUM SENSING

Profound change could be ahead for airborne reality capture and mapping, in the realm of quantum physics.

The term “quantum leap” is often used loosely to describe excitement over new technologies. This time, though, the “quantum” part is literal—specifically, “quantum sensing.”

In this case, it means leveraging quantum physics and quantum mechanics, particle states and their properties, to improve foundational geodesy, sensors, systems, and methods.

R&D, and in some cases productization, is underway for quantum state-based sensors. Such developments could bring benefits for the following:

• Magnetometry for mapping, positioning, and navigation

• Accelerometers and gyros for navigation, monitoring, and solution stabilization

• Full spectrum antennas for secure communications and multi-band radar


• Radar, lidar, and imaging

• Computing for “big data” processing, classification, and analysis

Aerial mapping has already benefited from a great example of particle physics: Single Photon Lidar. SPL can be advantageous in many situations and disadvantageous in others. While SPL can work at a single-particle level, not all implementations interrogate quantum states, at least not in the manner of the cold matter and Rydberg atom techniques we're about to examine.

QUANTUM VS. CLASSICAL PHYSICS

In the early 20th century, quantum physics underwent a pivotal period of discovery and understanding. Great minds theorized about a fuzzy universe where certain elements of classical physics might not apply; "uncertainty," for example, became one of the keys to a new frontier in scientific understanding. The rejection of many foundational elements of classical physics in the new quantum realm was controversial; Einstein himself insisted that much of classical physics did apply, while others vociferously disagreed (and were later proven to be on the right track).

Quantum computing, while garnering a lot of airplay in the general media, is not to be confused with quantum sensing. The former leverages certain elements of quantum physics but does not play a direct role in how quantum sensors work. Instead, quantum sensors employ two key concepts that could make many sensor types dramatically more sensitive and capable: cold matter and excited atoms. Plus, there is great promise for multi-sensor stacks.


THE COLD AND THE EXCITED

Cold is cool, literally. The mechanism is laser cooling: lasers slow the motion of atoms, making matter dramatically colder, down into ranges where temperatures are measured in fractions of a kelvin.

Atoms are more like erratic blobs of unstable sub-atomic particles that can have wave-like behaviors, unlike the tidy diagrams of orbiting particles in school textbooks. However, this chaos, wave behaviors, and varied states (normal and induced) provide boundless opportunities for quantum interferometry and inertial applications.

QUANTUM LIDAR AND IMAGING

Beyond SPL, there is R&D underway for approaches that interrogate the state of photons, including, for instance, spin behaviors and two-photon interference techniques.

The other key concept is Rydberg atoms. When an atom is excited by a laser, an electron can enter a state where its orbit is expanded and isolated from the others nearer the core. These isolated particles become quite sensitive, and their states are observed to enhance applications like magnetometry, radar, lidar, and imaging.

Quantum physics concepts such as "superposition," "entanglement," "squeezing," "cold matter," and even "teleportation" (no, it is not related to futuristic dreams of beam transporters) would take volumes to cover; read an expanded version via the link at the end.

A challenge facing cold matter techniques is miniaturization; vacuum pumps and large shielded chambers are needed. Rydberg Atom approaches are much less expensive and easier to miniaturize, and we will likely see sensors leveraging these sooner for geomatics and aerial mapping.

Exciting work is being done with entangled particles. Imagine sensors that are not analyzing received photons, but instead, detecting the state of distant entangled photons. There have already been demonstrations of, for instance, a lidar system that can detect high detail, even in turbid water.

QUANTUM MAGNETOMETRY AND NAVIGATION

Magnetic surveys would benefit from enhanced sensors and would also be key to creating more detailed magnetic maps that would be needed for proposed magnetometry-based navigation. The concept is to have very detailed mag maps and readings from super-sensitive magnetometers to determine coarse position and heading.

While this idea is often floated as an alternative to GNSS-based navigation, in reality it would be very imprecise. Sure, it could yield a "ship scale" position but never anything close to ground control point positions. Where it would augment GNSS is as a "canary in a coal mine," detecting large changes that might be due to spoofing (rare outside of conflict zones; see gpsjam.org).
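A toy sketch of the map-matching idea: slide a short measured anomaly profile along a stored magnetic map and keep the offset that fits best. The arrays and the sum-of-squared-differences test are purely illustrative; a real system would fuse such coarse fixes with inertial data.

def best_offset(map_profile, measured):
    """Return the index along the map where the measured profile fits
    best (smallest sum of squared differences): a coarse position fix."""
    n, m = len(map_profile), len(measured)
    errors = [
        sum((map_profile[i + j] - measured[j]) ** 2 for j in range(m))
        for i in range(n - m + 1)
    ]
    return min(range(len(errors)), key=errors.__getitem__)

# Hypothetical along-track anomaly map (nT) and a noisy onboard reading
track_map = [0, 2, 5, 9, 14, 11, 7, 4, 2, 1]
reading = [13.8, 11.2, 6.9]
print(best_offset(track_map, reading))  # -> 4: the reading best matches near index 4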

QUANTUM INERTIAL NAVIGATION

Some fascinating devices have been developed and deployed on ships and submarines, and tested on trains. Because these are cold matter systems, size and weight are an issue, for now. Again, these might serve to detect compromised GNSS positions.

Experimental quantum lidar (right), capturing high detail, even in turbid underwater conditions. Source: Heriot-Watt University

QUANTUM GRAVIMETERS

Gravity is an acceleration, and there are already quantum gravimeters that use interferometric techniques. "The advantage of using a quantum gravimeter is that the atoms act as an internal calibration reference for everything: the time between light pulses, the frequency of the laser light, even the spacing of our 'optical ruler' in the matter-wave interferometer," said Dr. Brynle Barrett, associate professor, Quantum Sensing Ultracold Matter Lab at the University of New Brunswick. "What's more," explained Barrett, "is that you can stack two or more to determine a gradient."

There are already commercial models deployed on the ground, on the water, and in the air (despite being bulky and heavy). They have also been used for underground feature detection, such as tunnels. NASA JPL, for instance, is also researching quantum gravity sensing as a way to navigate in space.

One approach to quantum magnetometry for navigation is to use enhanced quantum sensors to determine position relative to Earth magnetism models. Precision could meet many coarse navigation needs, such as for ships and aircraft, but could not replace GNSS for precise positioning. Source: NOAA
Dr. Joseph Cotter, senior research fellow, department of physics, Imperial College London, explaining the components of one of their development quantum navigation systems. Credit: Gavin Schrock

QUANTUM RADAR

Leveraging the sensitivity afforded through Rydberg Atom techniques, the U.S. Army recently announced a quantum antenna capable of detecting the RF spectrum from 0 to 100 GHz. The potential of similar antenna technologies for multi-band and more compact radars is already being explored.

“One of the big challenges in radars today is that they are not very tuneable systems, and they need to be big because of the antennas,” said Dr. Darmindra Arumugam, program manager at Jet Propulsion Laboratory, Caltech. “And you’d need different antennas for different bands. A multi-sensor package might benefit from having different radars in different bands.” Consider, for instance, foliage penetrating radar (FOPEN or FOLPEN), airborne multi-band InSAR, etc.

QUANTUM COMPUTING

“In the quantum world, when we have two qubits, together they will be in all four states 0, 1, 2, 3 simultaneously with varying probabilities,” said Venkateswaran Kasirajan, author of Fundamentals of Quantum Computing: Theory and Practice. “With this exponential capability, complex problems can be represented easily with fewer numbers of qubits.” This is in contrast to classical bits, which hold only a single 0 or 1. For example (as stated by the team at Azure): “It would take a classical computer millions of years to find the prime factors of a 2,048-bit number. Qubits could perform the calculation in just minutes.”

Dr. Darmindra Arumugam, group supervisor, senior research technologist, and program manager at Jet Propulsion Laboratory, Caltech, said, “The key for radar is to tune to the state that it’s in. If it was more sensitive, it could reach all of those bands. So, atom-based techniques like Rydberg radar are focused on tuning. You tune the atom differently, and it’s now sensitive enough for this-or-that band; you can make very sensitive detectors that cover MHz up to THz. It’s just mind-blowing because it changes the game on how radars are done today. As a result, there’s a lot of activity on this topic and I’m leading a large team developing these techniques.” Credit: Gavin Schrock

While large and extremely costly at this time, quantum computers may, in a not-too-distant future, augment cloud services used for reality capture and airborne mapping.
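To unpack Kasirajan's point in notation (ours, not the book's): an n-qubit register is a superposition over all 2^n basis states,

\[
|\psi\rangle = \sum_{k=0}^{2^n - 1} c_k\,|k\rangle, \qquad \sum_{k=0}^{2^n - 1} |c_k|^2 = 1,
\]

so two qubits (n = 2) carry amplitudes for the four states 0 through 3 at once, while two classical bits hold exactly one of those values. It is this exponentially large state space, not raw clock speed, that underlies claims like the factoring example above.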

The science for the potential applications we touched on is proven, and key technologies have been tested. Now comes productization, where timelines can be difficult to predict, but we are already seeing early implementations. 

Gavin Schrock is a professional land surveyor who writes on surveying, mapping, GIS, data management, reality capture, satellite navigation, and emerging technologies. Read the full article “Quantum Surveying” as published by GoGeomatics.ca: bit.ly/4idDvmY

An experimental quantum antenna, using Rydberg atom techniques, that can detect the RF spectrum from 0 to 100 GHz. Such broad-spectrum antennas could enable compact multi-band radars. Source: U.S. Army press release. Credit: U.S. Army
The marine chronometer revolutionized navigation in the early 18th century. On display at the Science Museum in Kensington, London. The museum is only a short walk from the Centre for Cold Matter at Imperial College, where quantum navigation is being researched. Credit: Gavin Schrock

Vexcel Imaging is a global leader in aerial imaging, renowned for its industry-leading UltraCam aerial cameras and hybrid camera-LiDAR systems.

THE BROAD ULTRACAM LINEUP OFFERS OPTIMIZED CAMERAS for every application, delivering exceptional data quality at unmatched flying efficiency.

The aerial camera portfolio offers a diverse range of imaging capabilities, including photogrammetric nadir (UltraCam Eagle & UltraCam Merlin), photogrammetric oblique (UltraCam Osprey), and wide-area mapping (UltraCam Condor). The hybrid mapping systems (UltraCam Dragon) integrate LiDAR with nadir and oblique imagery and provide seamless solutions for complex mapping projects. Outputs include photogrammetric imagery, photogrammetric point clouds, LiDAR point clouds, and other derived products.

Covering all aspects of airborne photogrammetry, the portfolio is complemented by the UltraMap processing suite. The fully integrated software enhances project workflows with advanced automation, efficient data interaction, and intuitive tools. It enables the creation of photogrammetric outputs such as DSMs, DTMs, orthoimages, and 3D data of the highest standard.

This end-to-end technology is the basis for the Vexcel Data Program (VDP), the world's largest aerial imagery and geospatial data library, providing organizations with location-based insight and intelligence. Industry-leading UltraCam sensors provide up-to-date, high-resolution vertical and oblique imagery, along with other digital representations of the world and precision geometry enabling AI and machine learning. VDP allows businesses and organizations to make better strategic decisions through intelligent imagery that uncovers crucial location insights.

ASPRS is preparing the geospatial industry for the modernized National Spatial Reference System

The National Geodetic Survey (NGS) is modernizing the National Spatial Reference System (NSRS) in the United States. The modernization involves significant updates to the official reference frames and vertical datum used across the country, affecting the entire geospatial industry. The ASPRS NSRS Modernization Working Group prepared this article to help ready the geospatial industry for the upcoming changes.

The geospatial industry is on the brink of a major advance that will affect all facets of our work. For the first time in over four decades, the official reference frames and geopotential (vertical) datum of the U.S., including territories, are scheduled to be updated.

The primary reasons for the updates include the non-geocentricity of the current North American Datum of 1983 (NAD 83) frames, bias and tilt of the North American Vertical Datum of 1988 (NAVD 88), multiple vertical datums, sea level change, the dynamic movements of geodetic control marks, and vast improvements in survey technologies and accuracies since the 1980s. As large volumes of existing maps and geospatial data are referenced to NAD 83 and NAVD 88, these updates are a significant undertaking with broad-reaching implications.

The agency leading these updates is the National Geodetic Survey (NGS), a program office within the National Oceanic and Atmospheric Administration (NOAA) National Ocean Service (NOS). NGS is mandated to define, maintain, and provide access to the National Spatial Reference System (NSRS), the official system that defines latitude, longitude, gravity, scale, orientation, and height throughout the nation. Most geospatial professionals understand the impending changes. They are highlighted in Table 1, and you can view them online in the ASPRS PE&RS publication, November 2024 edition.

This article looks at the benefits of NSRS modernization for the geospatial industry, including those working in photogrammetry, lidar, sonar, remote sensing, mobile mapping, surveying, and GIS, among others. It presents recommendations for geospatial firms in preparing for NSRS modernization. These recommendations are separated into those for geospatial service providers, software manufacturers, and the entire industry.

The article concludes with a look ahead at the anticipated NSRS modernization schedule and opportunities for getting involved in ongoing efforts to assist with the integration of the modernized NSRS into geospatial infrastructure and workflows.

BENEFITS OF A MODERNIZED NSRS FOR THE GEOSPATIAL INDUSTRY

The improved accuracies and data interoperability that will be enabled through NSRS modernization will provide tremendous benefits across all segments of the geospatial landscape. The modernized NSRS will better support data sustainability, meaning that geospatial data will remain useful over longer time periods and across multiple applications.

Just a few examples of specific applications that stand to benefit tremendously from the Modernized NSRS include:

• Floodplain modeling

• Coastal storm inundation modeling

• Improved hydrodynamic modeling (e.g., in support of salmon migration protection on the Columbia River)

• Precision navigation (including autonomous vehicles)

• Marine navigation safety, including computation of real-time under-keel clearance

• Infrastructure positioning and monitoring

• Transportation and engineering projects construction and maintenance

Also of importance, NGS is building in mechanisms to support time-dependent coordinates through the use of reference epoch coordinates (RECs), which will be computed by NGS every five or 10 years, and survey epoch coordinates (SECs), which will provide the position at the time of survey.
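The practical effect of epoch-tagged coordinates can be sketched with a constant-velocity model. The numbers and the simple linear motion below are illustrative only; the official computations will come from NGS tools.

def propagate(coord_m, velocity_m_per_yr, from_epoch, to_epoch):
    """Shift one coordinate component by velocity times elapsed years."""
    return coord_m + velocity_m_per_yr * (to_epoch - from_epoch)

# A mark moving 12 mm/yr in the -x direction, carried from epoch 2020.0 to 2025.5
x_2020 = 1_234_567.890                                      # meters, hypothetical
print(f"{propagate(x_2020, -0.012, 2020.0, 2025.5):.3f}")   # 1234567.824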

PREPARING FOR NSRS MODERNIZATION IN THE GEOSPATIAL INDUSTRY

To prepare to take full advantage of the benefits enabled by NSRS modernization, it is imperative that geospatial service firms and software providers take certain steps now. The following are ASPRS Working Group recommendations for geospatial firms, separated into those that apply mainly to geospatial service providers, those that apply mainly to geospatial software manufacturers, and those that apply to the entire profession.

A critical aspect of these recommendations is ensuring forward and backward compatibility of coordinates.

WORKING GROUP RECOMMENDATIONS FOR GEOSPATIAL SERVICE PROVIDERS

Geospatial Service Providers, including those who collect, process, and provide aerial and satellite imagery, lidar, sonar, hyperspectral imagery, and other forms of geospatial data, are advised to take the following steps:

• Ensure that all metadata for all archived data (not just final deliverables) is complete and correct, paying particular attention to reference frames, coordinate epochs, units (if feet, be sure to document whether international feet or U.S. survey feet; see the sketch after this list), geoid models applied (e.g., GEOID12b, GEOID18), and acquisition dates and times.

• For all control points and checkpoints, archive the survey report and store the observation data files (for example, RINEX raw observation files, processed GNSS vector solutions, or total station observation files), so that they can be reprocessed later relative to the modernized NSRS. To the extent possible, store data using the NGS standard file formats. Reprocessing or readjusting the raw data or processed observations (such as GNSS vectors) are the most accurate forms of relating legacy data to the new datums. Users can also transform data, but the transformed coordinates will not be as accurate as if the raw data are reprocessed in the new datums.

• For all data deliverables (and possibly important intermediate products), store versions with geodetic coordinates (latitudes, longitudes, and ellipsoid heights) relative to the current NSRS (e.g., NAD 83(2011) epoch 2010.00), even if the project deliverables call for, say, SPCS 83 northings, eastings, and NAVD 88 heights.

Figure 1. Simplified difference in origins of NAD 83 and NATRF2022 (adapted from NGS).
Figure 2. Estimated horizontal shift from NAD 83 (2011) epoch 2010.0 to NATRF2022 epoch 2020.0. Credit: NGS.

• Document the full project workflows with particular attention to any coordinate transformations or conversions.

• Work with software manufacturers for all steps in your end-to-end project workflow to ensure they are aware of and preparing for NSRS modernization.

• Assess and document the uncertainty of spatial coordinates in all geospatial data products. This will enable additional uncertainties associated with transformations to be accounted for and used in assessing whether transformed products still meet requirements.
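On the units point in the metadata bullet above: the two foot definitions differ by only about two parts per million, yet that is enough to matter at state plane coordinate magnitudes. A minimal illustration with a hypothetical northing:

US_SURVEY_FT_PER_M = 3937 / 1200      # 1 U.S. survey foot = 1200/3937 m (exact)
INTL_FT_PER_M = 1 / 0.3048            # 1 international foot = 0.3048 m (exact)

northing_m = 600_000.0                # hypothetical state plane northing
diff_ft = northing_m * (US_SURVEY_FT_PER_M - INTL_FT_PER_M)
print(f"unit-label mismatch: {diff_ft:.2f} ft")   # about -3.94 ft

Labeling a survey-foot coordinate as international feet (or vice versa) silently shifts it by roughly four feet in this example, which is exactly why the metadata recommendation insists on documenting the unit.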

WORKING GROUP RECOMMENDATIONS FOR SOFTWARE MANUFACTURERS

Geospatial software manufacturers are advised to take the steps listed below. As a note on terminology, many of these recommendations refer to handling of what are widely (if somewhat loosely) referred to as “Coordinate Reference Systems” or “CRSs” in geospatial software. Ideally, a CRS provides a complete definition of the reference frame (e.g., NAD 83, ITRF2020, or, in the future, NATRF2022), the realization (e.g., 2011), and epoch (date for which coordinates are valid), and, if applicable, the map projection system (e.g., Universal Transverse Mercator (UTM) or SPCS 83), zone, units (e.g., international feet, or meters), and vertical datum. (Unfortunately, current methods of storing CRS do not allow specifying the epoch, except in the remarks, but this is anticipated to be addressed in future standards revisions.)

• If your software uses European Petroleum Survey Group (EPSG) codes or International Organization for Standardization (ISO) Geodetic Registry (ISOGR) to define CRSs internally and/or in exported data products, ensure that the EPSG codes or ISOGR entries for new terrestrial reference frames and NAPGD2022 and SPCS2022 are supported. (Side note: the intent is for EPSG to be replaced with ISOGR, although the timeline is yet to be determined.)

• Ensure SPCS2022 coordinates can be computed in units of both meters and international feet (1 international foot = 0.3048 meter, exactly)

• Ensure coordinate conversions and transformations (if provided in your software) are consistent with those of NGS

• Ensure proper and consistent use of geoid models. Importantly, any geoid model is designed for and valid for only a specific reference frame (and often also a specific realization of the frame) and region. For example, in the current NSRS, NGS’s GEOID18 is designed only for coordinates in the North American Datum of 1983 (2011) epoch 2010.00 and will convert ellipsoid heights to orthometric heights in the following datums: NAVD 88 (in the conterminous U.S. only, not Alaska), the Puerto Rico Vertical Datum of 2002 (PRVD02), or the Virgin Islands Vertical Datum of 2009 (VIVD09). Applying GEOID18 geoid heights to WGS84 ellipsoid heights is invalid and does not provide heights in any recognized system.

Similarly, applying Earth Gravitational Model 2008 (EGM08) geoid heights to NAD 83(2011) ellipsoid heights is invalid and does not provide heights in any recognized system. Other examples of geoid models designed for specific reference frames include GEOID09, associated with NAD 83 (NSRS 2007), and GEOID99 or GEOID96 with NAD 83 (HARN). When a geoid model is used to compute heights relative to a particular datum, it is important to document the specific geoid model (e.g., GEOID12b, GEOID18, etc.). In some software and metadata, the geoid model is included in parentheses after the datum, such as NAVD 88 (GEOID18). (The underlying relation is sketched after this list.)

Figure 3. Estimated ellipsoid height shift from NAD 83 (2011) epoch 2010.0 to NATRF2022 epoch 2020.0. Credit: NGS.
Figure 4. Estimated orthometric height shift from NAVD 88 (epoch undefined) to NAPGD2022 epoch 2020.0. Credit: NGS.

• Provide uncertainties in output geospatial data products, accounting for uncertainties associated with coordinate transformations. Note that NGS is planning to provide uncertainties for transformations between current and modernized reference frames conducted using NGS's software utilities. This is already done in the existing NGS Coordinate Conversion and Transformation (NCAT) software for transformations between all frames and datums, and that will continue in the modernized NSRS.
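The relation behind the geoid-model guidance above is worth stating explicitly. A geoid model supplies the geoid height (undulation) N that links an ellipsoid height h to an orthometric height H, with all three quantities tied to the same reference frame:

\[
H \approx h - N
\]

Because N is defined against a specific ellipsoid and realization, mixing frames, such as applying GEOID18 undulations to WGS84 ellipsoid heights, breaks the relation, which is exactly the invalid combination warned against above.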

WORKING GROUP RECOMMENDATIONS FOR THE ENTIRE GEOSPATIAL INDUSTRY

A recommendation on terminology is to avoid using the term “height above mean sea level” or “MSL height” when referring to NAPGD2022 orthometric heights. The correct term for height above the geoid, measured along a plumbline, is “orthometric height.” To explain, local mean sea level (MSL) is a tidal datum that varies along the coast, not only in response to changes in geopotential, but also to currents, local hydrodynamics and other variables.

For example, if one were to set a series of benchmarks along the coast, each adjacent to a tide gauge and each set at MSL = 0.000 m, differential levels run between these marks would show them to be at different NAVD 88 (or, in the future, different NAPGD2022) orthometric heights. Future versions of NOAA’s vertical datum transformation tool, VDatum, will enable transformation between NAPGD2022 and tidal datums, such as MSL, mean lower low water (MLLW), and mean high water (MHW).

A final, and most important, recommendation for everyone in the geospatial industry is to take advantage of NSRS modernization educational materials and opportunities. NGS and university partners have developed training modules, workshops, and short courses related to coordinate transformations, geoid models, map projections and distortion (including overviews of SPCS2022), and geodesy. A list (although not intended to be comprehensive) of recommended training modules and continuing education resources can be found on geodesy.noaa.gov.

Christopher Parrish, Oregon State University; Qassim Abdullah, Woolpert, Inc.; Linda Foster, ESRI and NSPS; Stephen White, NOAA National Geodetic Survey; Jenna Borberg, Oregon State University.

Figure 5. Preliminary SPCS2022 design: number of zone layers per state. Credit: NGS.
Figure 6. Preliminary SPCS2022 design: number of zones per state. Credit: NGS.

Trust Your Position with Trimble’s Applanix Solutions

TRIMBLE APPLANIX, A DIVISION OF TRIMBLE INC., is a leader in advanced positioning and orientation solutions, serving diverse industries such as surveying, mapping, and navigation. Since its inception in 1991, Applanix has been at the forefront of geospatial technology, offering products that integrate high-precision GNSS (Global Navigation Satellite System) and inertial technologies to deliver accurate and reliable data.

Among its notable offerings, Applanix provides the APX-RTX, a cutting-edge solution that combines GNSS and inertial technology for direct georeferencing in UAV applications. This system is designed to enhance the efficiency and accuracy of aerial UAV mapping and remote sensing, eliminating the need for ground control points and reducing operational costs.

Additionally, the PX-1 RTX is another innovative product from Applanix, tailored for drone delivery applications. It ensures precise navigation and data collection, supporting UAV navigation with high accuracy and reliability.

Applanix also offers POSPac software, a powerful post-processing tool that enhances the accuracy of GNSS and inertial data collected by its systems, as well as the POS AV for manned aerial applications, the POS MV for marine surveys, and the POS LV family of products for use on land vehicles.

Trimble Applanix’s commitment to innovation and quality has established it as a trusted partner in the geospatial industry. Applanix empowers users to achieve their objectives with confidence, accuracy, and efficiency across airborne, land, and marine environments.

A GREEN LIGHT FOR SHORELINE MAPPING

A hybrid system for capturing land and water promises dramatically improved efficiency over legacy aerial approaches.

Capturing the four distinct elements of shorelines and coastlines often entails using two, three, or more separate systems. Zones of offshore, nearshore, the shoreline, and uplands, captured with separate systems, are stitched together—not always seamlessly. The zones represent a single ecosystem, and the latest approach captures it with one system.

The concept is not entirely new; however, advances in sensor technologies have made the execution much more practical, with combined outputs rivaling the precision and accuracy of individual sensors. The key to this is advances in green laser technologies.

While hydrography and bathymetry professionals are well aware of these advances, the goal of this article is to help you explain these new possibilities to others in the decision chain of your enterprise or organization.

Example output from the hybrid airborne sensor featuring red and green lasers. Credit: Leica Geosystems

Not long after the first practical laser was introduced in 1960, systems employing red lasers were tested not only for terrestrial mapping but also for bathymetry. While there were limited successes, red lasers (near-infrared wavelength of 1064 nm) were not as well suited to bathymetry as green lasers (which operate at 532 nm).

The first green lidar for bathymetry was tested from the late 1960s to the mid-1970s. However, the first green laser diode (in the sense of today's technology) was not developed until 2009 (Sumitomo, Osram, and Nichia). Utilization of these types of green lasers for bathymetry would soon follow.

There are a few commercially available large-format green laser bathymetry sensors, typically deployed for wide-area mapping applications by manned aircraft. We're also beginning to see many small-format systems for drone deployment, with significantly lower collection efficiency, well suited for smaller projects. To capture an entire shoreline ecosystem, these might need to be deployed in huge numbers, as separate captures in phases, together with other sensors and conveyances. A fixed-wing platform with a high-efficiency system is more cost-efficient.

Green laser sensors for bathymetry often employ a circular or elliptical sweep pattern, a method proven to deliver consistent results and maximize efficiency.

“Our sensors use a circular or elliptical sweep pattern,” said Andy Waddington, vice president of Bathymetric Services at Hexagon's Geosystems division. “You get a front and a back return as the mirror rotates. However, they don't have to be circular or elliptical. One type of sensor uses a sweep-type approach. However, we found in our work that the circular or elliptical pattern produces the best efficiency for a green pulse laser. One advantage of such patterns is that they are less sensitive to breaking waves, as the forward and backward scan passes at different times and the waves move between the two captures.”

LEGACY APPROACHES

Offshore bathymetry is often acoustic (single-beam, split-beam, multi-beam, side-scan), mounted on a ship or boat. Dedicated green-laser systems may be deployed on aircraft as well. In the shallows, small, unmanned surface vessels are fitted with acoustic sensors. Again, green laser systems, drone or aircraft carried, can also be used. For the shore and uplands, red laser systems on drones, aircraft, or terrestrial systems might also be used.

It is not uncommon for surveying and mapping firms to have a boat, a USV, a drone, and airborne and/or terrestrial scanners. The firm might perform all tasks or contract out the airborne component. Then each mapping component output needs to be stitched together. Positioning approaches, such as ground control points (GCP), often include post-processed kinematic (PPK) using on-board GNSS and IMU data. The capabilities of each system could vary, bringing uncertainty and inconsistencies. A single positioning stack, on a single platform, plus red and green lasers—and imaging—could all but eliminate inconsistencies inherent in many merged datasets.

To derive depth, the water surface must also be captured. One approach uses multiple returns, like those that capture canopy layers as well as the ground in terrestrial applications.

“A red laser, which is what most people associate with lidar, reflects off the water surface, allowing it to very accurately measure the water surface location and apply corrections for the difference between the speed of light in the air and the speed of light in the water,” said Waddington. “A green laser reflects on the water surface as well, but the reflection is blended with backscatter reflections from the water volume below, which gives a higher uncertainty compared to the near-infrared wavelength. If there's something in the water column, light reflects off of that and it may not go down to the seabed. Full-waveform algorithms are used in post-processing to extract multiple points from the water surface, objects in the water column, and the seabed for each laser pulse.”
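A minimal sketch of the propagation correction Waddington describes, for a nadir-pointed pulse. The refractive index and timing values are illustrative, and real processing also corrects the refracted beam geometry.

C_VACUUM = 299_792_458.0     # m/s, speed of light in vacuum
N_WATER = 1.34               # approximate refractive index of seawater at 532 nm

def depth_from_delay(two_way_delay_s):
    """Depth below the surface from the extra two-way travel time of the
    seabed return relative to the water-surface return (nadir incidence)."""
    v_water = C_VACUUM / N_WATER     # light travels roughly 25 percent slower in water
    return v_water * two_way_delay_s / 2.0

# A seabed echo arriving 180 ns after the surface echo implies:
print(f"{depth_from_delay(180e-9):.1f} m")   # about 20.1 m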

HOW DEEP

Mention bathymetric lidar to anyone in geomatics or mapping and the first topic is "How deep can it go?" You will often see the capability of a particular sensor expressed as a factor of the Secchi depth (how deep a small disk can be seen from the surface). You might see 1.5x, 2x, or 3x the Secchi depth in the specs for smaller green laser systems. But Secchi disk measurements are very subjective, there are different standards of Secchi disks, and there are many more factors to consider.

“Although the physics is pretty well established for how well light will penetrate different types of water, it depends primarily on the system itself. And the actual physics terminology that we use is the Kd value (Diffuse Attenuation Coefficient),” said Waddington. “The Kd value is calculated via an equation which, more or less, equates to the Secchi depth, which is what you can see in particular conditions. But also, what’s going on in the water column is really important, and that’s where the turbidity observation comes in.”
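As a rough illustration of how those spec-sheet ratings combine with a clarity reading: the approximation of Kd as 1.7 divided by the Secchi depth is a classic empirical rule of thumb, not a vendor formula, and the numbers below are made up.

def max_mappable_depth(secchi_m, sensor_multiplier):
    """Approximate lidar extinction depth from a spec quoted as a
    multiple of the Secchi depth."""
    return sensor_multiplier * secchi_m

def kd_from_secchi(secchi_m):
    """Diffuse attenuation coefficient via the classic rule of thumb."""
    return 1.7 / secchi_m

secchi = 8.0                                   # meters; a fairly clear coastal reading
print(max_mappable_depth(secchi, 3.0))         # 24.0 m for a "3x Secchi" sensor
print(round(kd_from_secchi(secchi), 3))        # Kd of about 0.213 per meter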

If you've got a lot of suspended sediment or microscopic life in the water, then the green energy is either absorbed by some of that sediment or reflected by it. The green energy is sent off in different directions so that the expected return pulse never actually makes it back to the aircraft.

“If the water’s quite turbid and has a low Secchi, you won’t be able to see through it. We can associate with that as human beings. But equally, the laser light, as it goes in, can get absorbed or scattered.”

The reflectivity of the object you’re trying to detect is another key factor.

“If you're trying to detect the seabed, and it's nice white sand under the water, there's a very good chance that you'll detect it deeper,” said Waddington. “And less for, say, black rock, because it tends to absorb the green light more than the white sand reflects it. We try to use a standard measure. So, when we say, for example, it's 3.5x Secchi depth, that's assuming that the seabed is about a 15 percent reflectance value.”

The power of the laser and flight height are also factors.

“Increasing the power and energy of the laser pulse tends to result in a wider beam,” said Waddington. “If you want to get very deep, you may only get one ping from the laser because you’re putting so much energy into it. But the height you’re flying is really key. Obviously the shorter the distance, the less water there is, or the less air if you’re flying at low altitude. Then the more of the pulse actually gets reflected from the seabed. It’s the strength of that reflected pulse that’s critical. You can put as much power as you like into the transmit, but if you aren’t able to pick up the sensitivity of the reflected pulse, it doesn’t really matter how much energy you put into it.”

It would be an oversimplification to say that flying lower is better, as some of today’s green laser systems can deliver quality at depth, even at altitudes that are common for red laser systems.

“Our latest generation of sensors, including the Leica CoastalMapper, are addressing this altitude issue,” said Waddington. “We’ve developed a new sensor specifically for this type of application, a new green laser that means you can fly higher and still get good returns from the seabed. Higher than previously possible with older generations of sensors.”

Green lasers on drones can be attractive for small area missions, and where lower flights could suffice. There are issues of endurance and payload limitations for drones as it stands at the moment. And while they might become better for these sorts of applications, there are trade-offs in endurance, and how much energy a sensor might draw. Aircraft do not have such limitations, nor the weather constraints for drone operations. Coastal and shoreline environments tend to be windy, another handicap.

Next, consider that hybrid sensors, with red and green lasers plus imaging, might be practical only for some very large drones. Fixed-wing aircraft can target wide-area survey work much more efficiently than patched-together data from discrete sensor approaches.

ONE ECOSYSTEM

“I'm a hydrographer by background and my original work is all based around acoustics,” said Waddington. “Particularly multi-beam, but also synthetic aperture sonar, and side-scan. If you look at the workflows for each of those things individually, they are pretty well established, but it is quite challenging to bring that data together into a genuine data set that covers the whole ecosystem. From my perspective, and I've been talking about this for a while: the coastal environment is a single ecosystem. We look across and analyze the whole swath of a coastal or shoreline area. And then we start to get new insights into how important one bit of that environment is to another, which is why I refer to it as an ecosystem—and it should be captured as such.”

Andy Waddington, vice president of bathymetric services at Hexagon Geosystems.
The CoastalMapper (left) on a gyro-stabilised aerial mapping mount. Credit: Leica Geosystems

twins) or other broad areas, combining a high-definition aerial camera and lidar in one system. The CoastalMapper now provides this type of functionality for coastal and shoreline areas.

green lasers, we do find that our green laser can be good at this as well: the top of the canopy, the middle of the vegetation, and the return from the water surface. Though of course, the red laser is often the go-to for this. In this case, the red laser is our Hyperion laser, which is the same one we use in our highend terrain mapper sensors.”

THE HYBRID TREND

Stand-alone sensors are becoming less common in geomatics, surveying, reality capture, and mapping. In surveying, for example, there are now scanning and imaging total stations, and newer GNSS rovers sometimes feature cameras and even small scanners.

To this end, Leica Geosystems has developed a series of systems pairing green and red lasers with cameras for efficient, wide-area capture.

“We are now exclusively down a hybrid route. Our current generation of the green laser is the HawkEye and the Chiroptera sensors, which are aimed at what hydrographers will refer to as shallow water but in the lidar world, we refer to as deep water,” said Waddington. “Our Chiroptera sensor is primarily a green laser, but has a red laser, and a camera in it as well that targets the back of the beach. It also targets inland and around the 20- to 30-meter depth contour, conditions permitting. And, if you want to go a bit deeper, we have the HawkEye module, which is a bolt-on to the Chiroptera. This can get you into the 25-to-45-meter range, though in clear water conditions, we’ve been down more than 50 meters with both the Chiroptera and HawkEye.”

The recently announced Leica CoastalMapper was particularly intriguing: a single-sensor combo to capture a whole environment. This is much the same approach as CityMapper does for mapping cities (e.g., for creating and updating 3D digital twins) or other broad areas, combining a high-definition aerial camera and lidar in one system. The CoastalMapper now provides this type of functionality for coastal and shoreline areas.

“We’ve brought in a high-end red laser topographic sensor and a new green laser designed so that we do not need separate sensors for the shallow and the not-so-shallow,” said Waddington. “And it is one unit, as opposed to a bolt-on. There is an improvement in efficiency because we’ve developed this new bathymetry laser, and we’ve worked on the receiver sensitivity. You can fly much higher as well. In our current generation of sensors, you would normally operate at around about 500 meters, whereas with the new sensor, we’re reckoning to operate between 600 and 900 meters.”
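The efficiency gain from the higher operating altitude is easy to estimate. A minimal sketch, assuming a fixed scanner field of view and a constant survey ground speed (both hypothetical placeholder values, not CoastalMapper specifications):

import math

# Area coverage rate vs. flying height at a fixed scanner field of view.
# FOV and ground speed below are assumed values for illustration.
FOV_DEG = 40.0     # assumed full field of view of the scanner
SPEED_MS = 65.0    # assumed survey ground speed (m/s)

def coverage_km2_per_hour(altitude_m):
    """Swath width grows linearly with altitude at a fixed FOV; coverage
    rate is swath width times ground speed."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(FOV_DEG / 2.0))
    return swath_m * SPEED_MS * 3600.0 / 1.0e6   # m^2/s -> km^2/h

for alt_m in (500, 600, 900):
    print(f"{alt_m} m AGL -> about {coverage_km2_per_hour(alt_m):.0f} km^2/h")

Under these assumptions, moving from 500 to 900 meters widens the swath, and therefore the coverage rate, by a factor of 1.8, before even counting the savings from fewer flight lines and turns.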

“The CoastalMapper is going to be the first system that uses the new Leica MFC 250 imaging sensor,” added Waddington. “It is a 250-megapixel, multiple format camera using the latest technology and we’ve got some fantastic images from tests.”

What about the vegetation and canopy performance of such a hybrid system?

“Both red and green lasers can have good penetration performance,” said Waddington. “You can set the parameters for multiple returns from different parts of the laser waveform yourself. Unlike a lot of green lasers, we do find that our green laser can be good at this as well: the top of the canopy, the middle of the vegetation, and the return from the water surface. Though of course, the red laser is often the go-to for this. In this case, the red laser is our Hyperion laser, which is the same one we use in our high-end terrain mapper sensors.”
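The “multiple returns from different parts of the laser waveform” Waddington describes can be pictured with a toy full-waveform example. The Python sketch below synthesizes a return with three pulses (canopy top, mid-vegetation, water surface) and extracts them with a simple peak detector; the thresholds stand in for the user-set parameters he mentions, and none of this reflects Leica’s actual processing.

import numpy as np
from scipy.signal import find_peaks

# Synthetic full-waveform lidar return sampled at 1 ns. Purely
# illustrative: three Gaussian pulses model canopy top, mid-vegetation,
# and the water surface.
C = 3.0e8                       # speed of light (m/s)
DT = 1.0e-9                     # sampling interval (1 ns)
t_ns = np.arange(400)           # 400-sample receive window, in ns

def pulse(center_ns, amplitude, width_ns=3.0):
    return amplitude * np.exp(-0.5 * ((t_ns - center_ns) / width_ns) ** 2)

rng = np.random.default_rng(0)
waveform = pulse(80, 1.0) + pulse(110, 0.4) + pulse(160, 0.8)
waveform += rng.normal(0.0, 0.02, t_ns.size)    # receiver noise

# User-tunable detection parameters: minimum amplitude and minimum
# separation between returns.
peaks, _ = find_peaks(waveform, height=0.1, distance=10)
for p in peaks:
    one_way_m = C * (p * DT) / 2.0   # round-trip time to one-way range
    print(f"return at {p} ns -> range within window = {one_way_m:.1f} m")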

Mobile mapping systems, backpack systems, drone payloads: hybrid sensors are the new normal. Miniaturization of certain types of sensors has made it practical to combine them on one platform, though this is not yet the case for every application.

Shoreline and coastal mapping present challenges for combining sensors in low-cost operations. Large-format hybrid systems like the CoastalMapper may be priced out of reach for many small firms, but the same is true of terrestrial mobile mapping systems. The efficiencies may be such that firms no longer try to do every element themselves; contracting and partnering with aerial mapping firms, or buying data from service bureaus, may prove the most practical option. Whichever approach is chosen, hybrid topobathy systems are a welcome development.

Gavin Schrock is a professional land surveyor who writes on surveying, mapping, GIS, data management, reality capture, satellite navigation, and emerging technologies.

Legacy shoreline/coastal mapping was often undertaken with different sensors on different platforms: for example, sonar on boats for offshore, small-format sonar on USVs for inshore, and aircraft, drone, or terrestrial scanners for onshore, with each dataset then stitched together. Now, the entire ecosystem can be captured with one airborne hybrid system that combines different types of sensors, such as green lasers and imagery. Credit: Gavin Schrock

Can traditional aerial photogrammetry survive in the era of uncrewed aviation?

UNDER PRESSURE

On December 11, 2024, an aircraft registered N818BR made an unscheduled forced landing on a busy highway near Victoria, Texas. The pilot survived with minor injuries; three cars were hit, and their occupants suffered minor injuries, too. The aircraft was destroyed. At first, it sounded like just another case of a pilot making the difficult decision to land on a paved road after an in-flight emergency. But as details emerged and the National Transportation Safety Board (NTSB) published its preliminary report, we learned that it was a photogrammetry aircraft owned and operated by Marc Inc., a Bolton, Mississippi-based contract flight services company that, according to its website, is “North America’s largest provider of specialized contract aircraft and flight crews for airborne GIS, survey and surveillance projects.”

The initial cause of the accident, according to NTSB interviews with the pilot, was fuel exhaustion. The Piper Navajo was supposed to carry 236 gallons of fuel, but apparently the pilot failed to do a comprehensive fuel-level check as part of the mandatory pre-flight inspection.

The remains of the crashed Piper Navajo, which broke in half upon an emergency landing on a highway.

The mission flown was a basic photogrammetric flight with long lines (about 60 miles each) oriented northwest to southeast, a few miles southeast of San Antonio, Texas. According to the authorities and public flight data, the flight took more than five hours, right at the endurance limit of the Piper Navajo. The mission started at 9:52 a.m. Central Time and ended at 2:57 p.m., exactly five hours and five minutes later.

On January 21, 2025, the NTSB published its final accident report, which states the cause of the accident: “The pilot’s inadequate preflight planning and preflight inspection, which resulted in a total loss of engine power due to fuel exhaustion.”

So, what happened? The NTSB report states that, “The local aerial survey flight was flown at 16,500 feet mean sea level (msl) and lasted about [five] hours.” This suggests it was a perfect day, and the pilot, the sole occupant of the aircraft, decided to stay aloft as long as he could and cover as much of the survey area as possible.

For those of us who fly photogrammetry missions, especially in complex aircraft (multi-engine, retractable landing gear), the pressure to take advantage of clear, cloudless days is always there. The temptation to fly just one more line, one that has eluded us for weeks and perhaps months, is omnipresent.

It is worth comparing today’s market conditions with those of 10 or 20 years ago, when the marketplace for these types of missions was different and the competition did not include small, uncrewed aircraft.

Traditionally, a photogrammetric flight involved a pilot, a copilot, a navigator, and a camera operator. In countries where governments were concerned about rogue aircraft taking vertical photographs of sensitive installations, such as military bases and government sites, these flights also carried a military supervisor on board who made sure no pictures were taken of such facilities.

The last mission of N818BR was conducted with just one person onboard performing the tasks of piloting, navigating, and operating the camera. How is this possible? Well, the simple answer is automation. Let us analyze the three aspects of every photogrammetry flight to understand the reality of today.

Piloting: In the days before the normalization of the Global Positioning System (GPS), photogrammetry flights were conducted using pilotage (navigating by reference to visible landmarks) and dead reckoning (calculating position based on time, airspeed, and direction). These aeronautical tasks are complicated enough on traditional missions from Point A to Point B, but on photogrammetry flights, navigating while aviating gets particularly difficult. That is why two pilots were used: one to keep the plane level and at a constant speed during the actual photographing of lines, the other to keep the aircraft clear of obstacles and other traffic in accordance with Visual Flight Rules (VFR).

The takeoff point, grid flown, and crash point of a photogrammetry mission in Texas.

Camera Operation: In the old days, when cameras were analog, it was impossible for a pilot to change the photographic roll during flight, so the presence of an experienced camera operator was imperative. On top of this, crabbing the camera to adjust for lateral wind had to be done manually, forcing the crew to loosen a bolt, turn the camera in the right direction in coordination with the navigator, and tighten it again. It was tedious work, but given the noise and the length of the flight, it was also intensive, exciting, and definitely a team effort. As soon as the airplane aligned with each new line, the camera operator, in coordination with the navigator, determined the crab and adjusted the camera. Each line is different, even by minute margins.
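For the arithmetic behind that manual adjustment: the crab angle follows from the crosswind component and the aircraft’s true airspeed. A minimal sketch with hypothetical speeds:

import math

# Crab (drift) angle the camera had to be rotated through, from true
# airspeed and wind. The speeds below are hypothetical examples.
def crab_angle_deg(tas_kt, wind_kt, wind_angle_deg):
    """Wind-correction angle; wind_angle_deg is the wind direction
    relative to the flight line (90 = direct crosswind)."""
    crosswind_kt = wind_kt * math.sin(math.radians(wind_angle_deg))
    return math.degrees(math.asin(crosswind_kt / tas_kt))

# A 150-knot survey speed with a 20-knot direct crosswind:
print(f"crab = {crab_angle_deg(150, 20, 90):.1f} degrees")   # about 7.7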

Navigation: The navigation for aerial photogrammetry is radically different from the traditional navigation used to get to the site. Pilot and copilot used traditional VFR methodology to arrive at the mission site, then delegated navigation to the photogrammetry navigator, who normally sat in the back of the cabin close to the camera operator. Some aircraft were equipped with a vertical periscope that allowed the navigator to see the ground directly through a lens.

In all cases, the photogrammetry navigator had a paper map on which the lines were drawn in thick strokes, preferably red, to indicate the position and length of each one and the separation necessary to comply with lateral overlap. The navigator identified features on the terrain and instructed the pilot to turn left or right and then to align with the next target line. By looking through the window or using the periscope, the navigator gave precise instructions to the pilot along each line to make sure the camera was taking photographs over the correct alignment.

Now that we understand the complexity of the analog years, let us jump a few decades ahead to December 2024, when a single person can do all of that using automation.

First, aircraft today are heavily automated and use GPS and autopilots to perform most tasks except takeoff and landing. The absolute need for a copilot has therefore evaporated; some companies use a second pilot as a safety measure, but it is no longer a must-have.

Second, cameras are now digital and automated. They can be operated from a central console in the cockpit by flipping a few switches and monitoring a few indicators on a screen, so the need for a dedicated camera operator has also vanished.

Third, photogrammetry navigation today is also managed by the aircraft’s GPS. Sometimes, to add precision, a second, geodetic-grade GNSS antenna is added to the aircraft, bringing positioning from a quarter of a mile down to a few yards. A screen with dedicated software displays the pre-programmed mission lines; all the pilot needs to do is align the aircraft, straight and level, with the entrance to each line, and the software and the autopilot do the rest.
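The line-keeping such software performs boils down to measuring the aircraft’s deviation from the programmed line. A minimal sketch of that cross-track calculation, in a flat local coordinate frame with hypothetical coordinates (an illustration of the idea, not any vendor’s flight-management code):

import math

# Cross-track error: signed perpendicular distance from the aircraft's
# GPS fix to the programmed flight line. Local easting/northing in
# meters; a flat-earth approximation good enough over one flight line.
def cross_track_error_m(line_start, line_end, position):
    """Positive result means the aircraft is right of track."""
    (x1, y1), (x2, y2), (px, py) = line_start, line_end, position
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    # 2D cross product of the track vector and the vector to the aircraft
    return ((px - x1) * dy - (py - y1) * dx) / length

# Aircraft about 14 m left of a northeast-bearing line:
print(f"{cross_track_error_m((0, 0), (1000, 1000), (490, 510)):.1f} m")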

In short, automation has reduced the crew of four to a crew of one. Unfortunately, an unintended consequence of this drastic 75 percent personnel reduction is that there are no dissenting voices during a flight. On clear, cloudless, perfect days, the temptation to fly one more line when the fuel begins to look low is strong. This is when a copilot, a navigator, or a camera operator might offer some advice and make the pilot think twice before committing to a course of action that could end in a forced landing on a highway.

Perhaps this was the case with N818BR on December 11, 2024, or perhaps it was something else. But every indication from the accident and the subsequent FAA and NTSB investigation points to a flight that was stretched to the limits of the aircraft. Simply deciding to return to the airport after the previous-to-last line would have given the pilot plenty of time and fuel to land, refuel, and return to the site.

The “why” this happened is a bit more complicated. Aerial photogrammetry companies today are under tremendous pressure from cheaper alternatives, such as drones and smaller aircraft, to reduce costs in order to remain competitive. But placing the public at risk by landing on a busy highway is too high a price. Perhaps insurance companies could demand a second pilot on board, leveling the playing field for all companies.

One thing is clear: when the decision to fly that final, fateful line was made, there were no dissenting voices, and that set in motion a series of events culminating in the destruction of the aircraft, possibly the camera, and injuries to the pilot and people on the ground.

This accident should give the industry pause. ■

Juan B. Plaza is CEO of Plaza Aerospace, a drone and general aviation consulting firm specializing in modern uses for manned and unmanned aviation in mapping, lidar, and precision GNSS.
