REPLACING HUMAN DIVERS WITH FREE-SWIMMING ROBOTS

In the simultaneous localization and mapping (SLAM) process, the hull inspection robot collects imagery with underwater camera and sonar equipment, supplemented by a periscope camera looking above water, to build a map of the vessel’s surface in real time.

The potential uses of this technology did not escape notice in the commercial maritime sector. For two years, PeRL worked with ABS on a pilot project to apply the technology to underwater inspection in lieu of dry docking (UWILD).

“In the ABS project, we were able to build dense, photo-realistic models of the hull that let the surveyor zoom in and get a close-up view of the hull surface,” Dr. Eustice recalls. “We would have liked to further explore this by adding ultrasonic equipment, or another technology, that would allow the robot to characterize hull plate thicknesses. We didn’t do so then, primarily because the vehicle we used was borrowed from the US Government and we didn’t have authority to modify it; but it is certainly promising to explore in the future.”

Self-correction is an important element of autonomy, particularly for a robot generating massive amounts of map, measurement and positioning data. To keep itself correct, the hull inspection robot uses what Dr. Eustice characterizes as “a lot of probabilistic math” and a verification process that involves matching camera and sonar imagery to the vessel map and accounting for such sources of error as its own motion.

“Each measurement has some error or uncertainty associated with it, so the way this problem is formulated is such that we very rigorously track all the different sources of error associated with these different measurements,” Eustice explains. “By rigorously accounting for that error in a probabilistic way, we are able to look at all the various sources of measurements used and develop a best estimate of the reconstructed geometry of the environment, and of where the robot has been.”

With such a map in hand, the surveyor can be certain that every inch of the hull has been seen.

The self-driving car uses quite a similar matching, verification and correction process, but its position is referenced to a detailed map containing road geometry and area imagery. Today’s autonomous land robots can position themselves with an accuracy far greater than is possible with the publicly available Global Positioning System (GPS) data used by conventional automobiles.
8 | SURVEYOR | 2018 VOLUME 1
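At its simplest, the “probabilistic math” Dr. Eustice describes comes down to weighting each measurement by its uncertainty before combining it with the robot’s current estimate. The following sketch is purely illustrative — it is not PeRL’s implementation — and shows inverse-variance fusion, the core idea behind the Kalman-style updates used in SLAM systems; the sonar and camera numbers are invented for the example.

```python
# Illustrative sketch of probabilistic measurement fusion
# (inverse-variance weighting). Not actual PeRL/HAUV code.

def fuse(est, est_var, meas, meas_var):
    """Combine a prior estimate with a new measurement.

    Each value carries a variance expressing its uncertainty;
    the fused result trusts each source in proportion to its
    reliability, and the combined uncertainty always shrinks.
    """
    k = est_var / (est_var + meas_var)   # gain: how much to trust the new measurement
    fused = est + k * (meas - est)       # corrected estimate
    fused_var = (1 - k) * est_var        # reduced uncertainty
    return fused, fused_var

# Hypothetical example: a camera-based estimate puts a hull feature
# at 2.4 m (variance 0.04); a noisier sonar return says 2.0 m
# (variance 0.16). The fused estimate leans toward the camera.
pos, var = fuse(est=2.4, est_var=0.04, meas=2.0, meas_var=0.16)
# pos is about 2.32 m, with variance reduced to about 0.032
```

Tracking variances alongside values in this way is what lets the robot “rigorously account” for error from every sensor and from its own motion.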
“The commercial GPS that cars use today is able to tell you what road you’re on, but not what lane you’re in; the error can be on the order of several meters,” Dr. Eustice says. “One of the bedrocks of the technology that goes into self-driving cars today is to use extremely detailed maps of road environments; matching what the vehicle’s cameras and lasers see with the maps, the robots are able to know their position to centimeter-level accuracy at any given time. The GPS gives the car a very coarse estimate of its location relative to the earth and an initial guess as to where it is in that map, but once it matches its camera and laser data with the map, it no longer needs GPS to know where it is,” he explains.
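The map-matching step Dr. Eustice describes — refining a coarse GPS fix by aligning sensor data with a detailed prior map — can be sketched in miniature. The example below is a deliberate simplification with invented landmark data: a real self-driving stack matches dense lidar and camera data against the map using iterative optimization, not a toy grid search over a few landmark ranges.

```python
import math

# Toy map-matching localizer: refine a coarse GPS guess by finding
# the position whose predicted ranges to known map landmarks best
# match the observed ranges. Hypothetical data, illustrative only.

LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known map positions (m)

def residual(pos, observed_ranges):
    """Sum of squared differences between predicted and observed ranges."""
    err = 0.0
    for (lx, ly), r in zip(LANDMARKS, observed_ranges):
        predicted = math.hypot(pos[0] - lx, pos[1] - ly)
        err += (predicted - r) ** 2
    return err

def localize(gps_guess, observed_ranges, span=3.0, step=0.05):
    """Grid-search around the coarse GPS fix for the best-matching pose."""
    gx, gy = gps_guess
    best, best_err = gps_guess, float("inf")
    n = int(span / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            cand = (gx + i * step, gy + j * step)
            e = residual(cand, observed_ranges)
            if e < best_err:
                best, best_err = cand, e
    return best

# The vehicle is actually at (4.0, 3.0); GPS is metres off, but the
# range observations pin the position down to centimetre level.
true = (4.0, 3.0)
ranges = [math.hypot(true[0] - lx, true[1] - ly) for lx, ly in LANDMARKS]
fix = localize(gps_guess=(6.0, 1.5), observed_ranges=ranges)
```

As in the quote, the GPS guess only seeds the search; once the observations are matched to the map, the refined fix no longer depends on GPS accuracy.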
ROBOT THINKING, MACHINE LEARNING
Today’s autonomous vehicles can be thought of as having completed the first half of a long journey – they know where they are, they can see their environments, and they can move about freely and do their jobs within those spaces. They are now entering the second half, moving up a steep incline towards true integration into society. This integration requires giving them a kind of reasoning capability to deal with external moving objects, and make rational predictions about the movement of