Urban Scene Segmentation for Autonomous Vehicles using Multi-Domain Adaptation


1 Introduction

Background An autonomous vehicle is essentially a supercomputer running on the road: its perception system must identify objects in the camera image, such as the road, other vehicles, bicycles, pedestrians, and traffic signs. Real-world datasets such as Cityscapes include coarsely segmented images of limited label quality, whereas the synthetic GTAV dataset provides finely segmented render elements. Using API calls, these render elements can be extracted at any location in a game such as GTA.

In this project, we apply semantic segmentation to urban scenes for autonomous vehicles using deep learning. The labels are therefore classes that interact with vehicles, such as people, roads, trees, and buildings. Semantic segmentation projects commonly rely on deep networks such as U-Net, the DeepLab series, SegNet, and FCN. We adopt a multi-target knowledge transfer approach, a multi-target adversarial framework for domain adaptation in semantic segmentation. In practice, the perception system of an autonomous vehicle in an urban scene is put to the test in varied scenarios, including different cities, weather, and lighting conditions. To deal with multiple test distributions, we train one model per target domain and adaptively activate the appropriate one at test time. Our source dataset is GTA5, and the targets are the Mapillary Vistas and Cityscapes datasets. The accuracy metric we use is Intersection over Union (IoU).
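To make the evaluation metric concrete, the sketch below computes per-class IoU from two integer label maps. This is a minimal NumPy illustration of the standard IoU definition (intersection divided by union per class), not the project's actual evaluation code; the function name and toy inputs are illustrative.

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Return the IoU for each class given two integer label maps.

    pred, target: arrays of the same shape containing class indices.
    A class absent from both maps yields NaN (undefined union).
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

# Toy 2x2 example with two classes:
# class 0 -> intersection 1, union 2 -> IoU 0.5
# class 1 -> intersection 2, union 3 -> IoU ~0.667
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(per_class_iou(pred, target, 2))
```

The mean of the per-class values (mIoU) is the figure usually reported for segmentation benchmarks such as Cityscapes.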


