
FAR OUT

MIT RESEARCHERS HELP DRIVERLESS CARS, ROBOTS SPOT OBJECTS AMID CLUTTER

By Brian Sprowl

Researchers at MIT say they have developed a technique that allows robots to quickly identify objects hidden in a three-dimensional cloud of data.

According to the researchers, sensors that collect a visual scene and translate it into a matrix of dots help robots “see” their environment. The researchers note, though, that conventional techniques for picking objects out of such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.

With the new technique, it takes a robot just seconds from receiving the visual data to accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots. The technique could improve a variety of applications in which machine perception must be both fast and accurate, the researchers say, including driverless cars and robotic assistants in the factory and the home.

“The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there’s no way you could do that,” says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems (LIDS). “But our algorithm is able to see the object through all this clutter. So, we’re getting to a level of superhuman performance in localizing objects.”
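For readers picturing the data: a point cloud is just a list of 3-D coordinates, and “finding the bunny” means locating a known template shape among thousands of unrelated dots. The short Python sketch below is purely illustrative (synthetic data; the shapes, sizes, and names are assumptions, not the MIT team’s code or dataset). It builds such a scene by hiding a rotated, translated template inside random clutter.

    import numpy as np

    rng = np.random.default_rng(0)

    # A "template" object: a small stand-in for the bunny model,
    # here 200 points sampled on the surface of a unit sphere.
    template = rng.normal(size=(200, 3))
    template /= np.linalg.norm(template, axis=1, keepdims=True)

    # Hide the object in a cluttered scene: rotate and translate it,
    # then bury it among thousands of random clutter points.
    angle = np.pi / 4
    R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([5.0, -2.0, 1.0])

    hidden_object = template @ R.T + t
    clutter = rng.uniform(-10, 10, size=(5000, 3))
    scene = np.vstack([hidden_object, clutter])  # the robot's "point cloud"
    rng.shuffle(scene)                           # point order carries no information

    print(scene.shape)  # (5200, 3): thousands of dots, object buried inside

The robot never receives labels telling it which of the 5,200 dots belong to the object; that is exactly the search problem the MIT technique addresses.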


Finding an object in a point cloud can be tricky for vision systems, but MIT researchers have developed a technique that allows robots to find hidden objects quickly. Image: MIT

Currently, robots attempt to identify objects in a point cloud by comparing a template object, i.e., a 3-D dot representation of an object such as a rabbit, with a point cloud representation of the real world that may contain that object.

The template includes collections of dots, also known as features, that indicate characteristic curvatures or angles of the object, such as the bunny’s ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match them to the template’s features, and finally rotate and align the matched features with the template to determine whether the point cloud contains the object in question.

The point cloud data streaming into a robot’s sensor contains errors, though: some dots land in the wrong position or are incorrectly spaced, which can badly confuse the process of feature extraction and matching. As a result, robots can make many wrong associations, or “outliers,” as researchers call them, between the point clouds, which ultimately leads to objects being misidentified or missed entirely.

According to Carlone, state-of-the-art algorithms can sort the bad associations from the good once features have been matched, but this can take an “exponential” amount of time. While accurate, these techniques
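To make the rotate-and-align step concrete: once features have been matched, the best-fit rotation and translation can be computed in closed form. The sketch below uses the standard Kabsch (least-squares) alignment, not the MIT team’s algorithm, and the 40 percent outlier rate and all variable names are illustrative assumptions. It shows why the wrong associations described above are so damaging: a plain least-squares fit trusts every match equally.

    import numpy as np

    def kabsch(src, dst):
        # Closed-form least-squares rotation R and translation t such that
        # dst[i] ~= R @ src[i] + t, assuming the correspondences are correct.
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
        R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    rng = np.random.default_rng(1)
    template = rng.normal(size=(100, 3))         # template features

    Q, _ = np.linalg.qr(rng.normal(size=(3, 3))) # random orthogonal matrix
    R_true = Q if np.linalg.det(Q) > 0 else -Q   # force det +1: a proper rotation
    t_true = np.array([2.0, -1.0, 0.5])
    scene = template @ R_true.T + t_true         # scene features, perfectly matched

    # With all-correct correspondences, the fit is essentially exact.
    R_est, _ = kabsch(template, scene)
    print("rotation error, clean matches:", np.linalg.norm(R_est - R_true))

    # Corrupt 40 of the 100 matches: the same least-squares fit goes
    # badly wrong, because it trusts every correspondence equally.
    bad = rng.choice(100, size=40, replace=False)
    scene_bad = scene.copy()
    scene_bad[bad] = rng.uniform(-10.0, 10.0, size=(40, 3))
    R_out, _ = kabsch(template, scene_bad)
    print("rotation error, 40% outliers:", np.linalg.norm(R_out - R_true))

Classical pipelines bolt outlier rejection onto an estimator like this; the speed-versus-accuracy tension the article describes comes from how expensive it is to decide reliably which matches to throw away.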
