City drive in an autonomous test vehicle. Safety driver has hands off the wheel but needs to keep their eyes on the road.
Taxi – take me home!

AI behind the wheel for city driving: Wojciech Derendarz of the UP-Drive project is working out how autonomous vehicles can develop improved perception, relying on combined technologies to process urban environments, so self-driving cars can take a step closer to earning their driving licence.

Wojciech Derendarz, Project Leader at Research & Development of Volkswagen Group, is a veteran of the autonomous car sector who has been sitting ‘hands off’ behind the wheel of self-driving prototypes for 13 years. His latest goal is to make vehicle AI better understand challenging urban environments. It sounds hard, and it is. To accomplish this feat, a car needs to adapt, comprehend accurately and react instantaneously in complex, changing environments.

The original aim of UP-Drive was to develop technology for self-driving cars able to locate a parking space and self-park after the driver has been ‘dropped off’. As the goal was to offer that technology anywhere in a city – not just on special premises like parking garages at airports – the project scope quickly evolved to tackling the myriad challenges of city driving, with parking being the simplest piece. With 70% of the global population predicted to be living in urban and suburban areas by 2050, reliable, completely autonomous cars would have a significant impact and represent a huge benchmark in self-driving technology.
“It is important to realise there are two different approaches out there,” explained Derendarz. “Most car manufacturers take the evolutionary approach. They currently offer privately owned cars with driver assistance systems, meaning that the driver still needs to monitor the system at all times – and they gather data and optimise performance over time. Once the systems become good enough, they hope to enable the higher level of autonomy that will finally allow the driver to take their eyes off the road.

“We decided to shift the UP-Drive project when we realised you need similar technology for a private on-street valet parking function as you need for a robo-taxi. It seems realistic that the systems we are developing will be seen first in robo-taxis and robo-shuttles, backed by companies willing to invest heavily in innovation. There is a stronger commercial case, as in that business model the vehicles earn around the clock. A private car is parked most of the time, and the cost of the technology may initially be prohibitive for car owners.”
So how would such a system work? A system capable of the higher levels of autonomy would need strong localisation and mapping capabilities, would need to master complete round-view perception of the vehicle’s environment, and would need to form a detailed understanding of the complex scenes it encounters as well as predict what other traffic participants are up to. UP-Drive combined several sensing technologies – cameras, lidar and radar – as the foundation for all of those tasks.
Know where you are

“The first step is knowing where you are and where you want to go. You need an idea of where you are in the world, which for the AI means relying on a map much like the one in a navigation application on your smartphone or in your car – only far more detailed and precise.” It is useful, and currently seems necessary, for autonomous cars to use a map to provide information about the environment beyond what can be perceived with sensors. “We have the information in maps, but you also need to know precisely where you are in
In an urban environment an autonomous car needs to be able to detect and track plenty of objects: pedestrians, cyclists, cars and other obstacles. 3D geometric data is the foundation for that.
The full picture emerges only after the perceived information has been combined with information stored in the digital maps.
Perceiving reality in complex scenes

Although crucial, mapping and localisation are not the biggest challenges in self-driving vehicles; perception is much harder. “We have different sensing devices we can use for this task,” explains Derendarz. “We are moving toward more human-like perception, and this poses challenges. We humans rely very strongly on contextual information. We classify the scene and everything we see in it.” People use top-down perception to ‘fill in’ gaps with their imagination. For example, we know that if we see a chair leg obscured by a
table that there is a whole chair there, but a computer may perceive the piece of chair leg in isolation. More relevantly, if there is a row of parked cars, we can judge the gaps between them without seeing the gaps, so we know they are not one long object. This level of understanding can pose a challenge for artificial perception. “As a driver you will never have a situation where you see a car somewhere, the next second think that the car has disappeared, and the second after that it is there again. It’s no good if your self-driving car is stopping every five minutes or, worse, hitting things. Typically, we can position the things that we do not see directly – we connect the dots in our minds. We create a very strong context-based understanding of what we see. It looks very different to what a machine sees.

“Machines try to get the geometric information first. For example, we take a laser scanner and capture a 3D point cloud, and out of that point cloud we look for objects. We look for road surfaces and placed objects, and everything else
Test vehicles used in the project are based on the fully electric VW e-Golf and have been equipped with many additional sensors – some of them on the roof of the car.
Automated Urban Parking and Driving
The UP-Drive project focus is on advancing key technologies for autonomous driving:
• Robust, general 360° object detection and tracking employing low-level spatiotemporal association, tracking and fusion mechanisms.
• Accurate metric localization and distributed, geometrically consistent mapping in large-scale, semi-structured areas.
• Scene understanding, starting from detection of semantic features and classification of objects, towards behavior analysis and intent prediction.
• Behavior and motion planning for complex environments.
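The first of these objectives mentions low-level spatiotemporal association for tracking. As a rough illustration only – not the project's actual code, and with made-up coordinates and threshold – a greedy nearest-neighbour association step between existing tracks and new detections might look like this:

```python
import math

def associate(tracks, detections, max_dist=2.0):
    """Greedily match each track to its closest unclaimed detection.

    tracks: {track_id: (x, y)}, detections: [(x, y), ...].
    Returns {track_id: detection_index} for matches within max_dist.
    """
    matches, used = {}, set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            d = math.hypot(dx - tx, dy - ty)
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Two tracks, two detections: each track claims the nearer detection.
m = associate({1: (0.0, 0.0), 2: (5.0, 0.0)}, [(0.3, 0.1), (5.2, -0.2)])
```

A real tracker would also fuse measurements across sensors and time, handle unmatched detections as new tracks, and use a motion model rather than raw positions.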
But 3D points and boxes alone are not enough: an autonomous car also needs to understand what it is seeing, just as we humans do.
the map. This is what we call the localisation task. An autonomous car needs to localise itself with great accuracy – to around 10 cm in position and 0.1 degrees in angle – to be able to utilise all the information in the map and to navigate through a narrow stretch of road. We are approaching this level of accuracy.” Of all the key technologies, localisation is the closest to being ready for autonomy.
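The 10 cm and 0.1-degree figures quoted above translate into a simple acceptance check on a pose estimate. A minimal sketch, with made-up poses and a standard heading-wrap, just to make the tolerances concrete:

```python
import math

POS_TOL = 0.10               # ~10 cm positional tolerance from the article
ANG_TOL = math.radians(0.1)  # ~0.1 degree angular tolerance

def within_localisation_tolerance(estimate, ground_truth):
    """Check a pose (x, y, heading) against the quoted accuracy targets.

    Positions are in metres, headings in radians; poses are illustrative.
    """
    dx = estimate[0] - ground_truth[0]
    dy = estimate[1] - ground_truth[1]
    # Wrap the heading difference into (-pi, pi] before comparing.
    dth = (estimate[2] - ground_truth[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy) <= POS_TOL and abs(dth) <= ANG_TOL

ok = within_localisation_tolerance((12.04, 5.02, 0.001), (12.0, 5.0, 0.0))
```

In practice the estimate would come from matching sensor data against the map, and ground truth is only available on instrumented test routes.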
is what we can collide with. We try to separate those things from one another. In human contextual understanding we have no problem telling, when cars are close to one another, where one car ends and the next one starts – we see the wheels and the windows and the different colours of the cars, so it’s easy. Doing the same task in a point cloud is much more difficult, especially when you are trying to describe it in a rule-based algorithm that takes the gaps in the data or the measurement noise into account. It’s difficult to find rules that do this task perfectly.”

The project tackles the challenge by splicing the data from different sensing devices together into a holistic view rich with correlated points of information. This creates a more contextual, ‘fluid’ overview. “In UP-Drive we have taken an approach that builds on top of this geometrical way of looking at objects. We still use geometry strongly, but introduce this more contextual understanding of the scene to stabilise the perception. We do it by putting the camera images and point clouds into one representation. We project the points from the laser scanners onto a virtual image plane, so that we can see where those points would fall in the image being captured by the camera. Then we do correlations – a one-to-one correlation with the laser, camera and radar combined.

“In the city you have a huge number of objects, and you cannot combine that much data in traditional ways, as it gets too cluttered. That’s why we decided to fuse the information at a much lower level. This is one of the more fundamental ideas we have proposed and explored in UP-Drive. It has led to very good results, but still, we have a long way to go to do what a human can do.”
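The projection step Derendarz describes – placing laser-scanner points onto a virtual image plane – follows the standard pinhole camera model. A minimal sketch with hypothetical calibration values (the real system's calibration and sensor layout are not given in the article):

```python
import numpy as np

# Hypothetical calibration: camera intrinsics K and lidar-to-camera
# extrinsics (rotation R, translation t). Values are illustrative only.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # rotation: lidar frame -> camera frame
t = np.array([0.0, -0.2, 0.1])   # translation in metres

def project_lidar_to_image(points_lidar, K, R, t, width=1280, height=720):
    """Project 3D lidar points onto the camera image plane.

    Returns pixel coordinates and a mask of points that land inside the
    image and in front of the camera (only those can be correlated with
    camera pixels).
    """
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1        # discard points behind the camera
    uvw = pts_cam @ K.T                   # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide -> pixels
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
               (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv, in_front & in_image

# A point 10 m straight ahead of the camera lands at the image centre.
uv, mask = project_lidar_to_image(np.array([[0.0, 0.2, 9.9]]), K, R, t)
```

Once each lidar point has a pixel coordinate, it can be tagged with the camera's appearance and semantic information – the one-to-one correlation the quote refers to.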
Are we nearly there yet?

The nuances of driving are becoming clearer for researchers trying to establish better AI in this sector. Even with perfect perception, many challenges remain: the car needs not only to see what is happening right now, but also to predict the behaviour of other traffic participants for the next couple of seconds. In UP-Drive this can be done successfully if you assume typical behaviour. Predicting atypical behaviour, however – like a pedestrian stepping onto the road to cross it even though they have no right of way – is something researchers are only starting to explore.

Another aspect is that humans take more calculated risks than computers would in typical driving scenarios. When you overtake a car on a busy motorway with traffic ahead and behind, braking space is often compromised for driving efficiency. So what is acceptable risk, when many people will expect machines not to take any risks, yet still to drive as effectively as a human? This brings us to the two final challenges of autonomy: prediction and trajectory planning. This is where the car’s ‘decision making’ takes place.

In the UP-Drive project the self-driving technology was initially trained and tested on the grounds of a factory facility, large and diverse enough in scale and complexity to simulate a small city. The job was to interpret how the car viewed and interacted with the environment. “It’s always an adventure to see how the car solves different situations. There are many situations that don’t seem challenging at all, but once you see them through the eyes of the car, it is sometimes unsure which solution to choose. You take a different perspective on everyday situations in traffic. And there are situations that might be difficult for us as humans, but the car masters them without any problems. The car can navigate complex crossroads and change lanes with ease, for example. With the mapping information, choosing which lane to take for the fewest lane changes was a simple optimisation problem. In contrast, in the factory facility there are very slow, narrow vehicles about half the width of a car, with trailers behind them. For a driver it would be simple to overtake slowly, but the autonomous car seemed challenged by its conflicting policies of reaching its destination fast and keeping enough safety distance from other traffic participants. With slight variations in lane width it changes its mind and hesitates, so there are these challenges to overcome.”

For smooth and safe driving you need a sharp, reactive system in the car, so that if something goes wrong, or the initial predictions turn out to be wrong, it can react fast when new information arrives. “We need to build the technology to a level where we have complete trust in it ourselves, where we can put our children in the car and trust it will take them safely to where they need to go. Then we need to prove to users that this is reliable and stable in every situation. We’re working on it, and there is still a little way to go.”
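The "typical behaviour" prediction mentioned above can be illustrated with the simplest possible motion model: extrapolating each tracked object at constant velocity over a horizon of a couple of seconds. This is an illustrative sketch, not the project's actual predictor, which would be far richer:

```python
def predict_positions(x, y, vx, vy, horizon=2.0, dt=0.5):
    """Extrapolate a tracked object's (x, y) position over a short horizon.

    Constant-velocity assumption: adequate for "typical" behaviour, useless
    for the atypical cases (a pedestrian suddenly stepping into the road).
    """
    steps = int(horizon / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

# A cyclist at the origin moving 4 m/s along x: predicted path over 2 s.
path = predict_positions(0.0, 0.0, 4.0, 0.0)
```

The planner would then check these predicted positions against the car's own candidate trajectories and replan as soon as new sensor data contradicts the prediction.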
Funded under Research and Innovation Action. Programme H2020-EU.2.1.1. - Information and Communication Technologies (ICT)
• Volkswagen AG, Germany • Swiss Federal Institute of Technology in Zurich (ETH Zürich), Switzerland • IBM Research GmbH, Switzerland • Technical University of Cluj-Napoca, Romania • Czech Technical University in Prague, Czech Republic
Wojciech Derendarz
Innovation ADAS, Autonomous Driving Department
Volkswagen car.SW Org, Wolfsburg
T: +49 5361 915662
E: email@example.com
W: https://up-drive.eu

Wojciech Derendarz
Wojciech Derendarz started his career in 2007 at the Technical University of Braunschweig, Germany, where he worked on “Caroline”, the robotised car that participated in the DARPA Urban Challenge finals. In 2008 he moved to Volkswagen Group, where he continued his research on self-driving cars. In 2011 he took on the role of project leader and has since been responsible for a number of research projects in the area of autonomous driving.