
UAV NAVIGATION SYSTEM BASED ON AERIAL IMAGE MATCHING IN CASE OF GPS FAILURE
Akhil A M, Arun A K, Aravind Ghosh N A, Ajith George S
Aeronautical Engineering, Jawaharlal College of Engineering and Technology, Lakkidi, Ottappalam

ABSTRACT: The aim of this paper is to explore the possibility of using aerial images and geo-referenced satellite images for the navigation of a fully autonomous Unmanned Aerial Vehicle (UAV) in case of GPS failure. This vision-based navigation system combines an inertial sensor, a visual odometer and an image registration module that matches the UAV's on-board video against geo-referenced aerial images, and it was developed from real flight-test data. The system differs from present GPS-integrated inertial systems, which are vulnerable to drift in case of GPS failure. The experimental results show that it is possible to recover accurate position information using aerial image matching alone. Furthermore, this paper demonstrates the possibility of autonomous and safe UAV flight in urban environments without using GPS.

INTRODUCTION: The present state of UAV systems still does not guarantee acceptable safety for operating such a system in populated places. The primary goal of this paper is to propose an integrated UAV platform for fully autonomous flights in urban environments. GPS integrity is the main problem to be solved before introducing UAVs into civilian airspace. Most present UAV navigation systems rely on GPS and an Inertial Navigation System (INS). There are instances when the GPS signal is unavailable or corrupted due to multipath reflections by obstacles. In these situations, the state (position, velocity and attitude) of the UAV is estimated using the INS alone. Keeping in mind that the INS is prone to drift, the present approach becomes unreliable for flights in urban regions. In addition, the vulnerability of GPS to jamming makes the problem even worse. For this reason, a navigation system for an autonomous UAV must be able to cope with both short- and long-term GPS outages. Great effort has been put into the development of a vision-based autonomous UAV navigation system, and several researchers have experimented on the commercial Yamaha RMAX helicopter using a navigation architecture which can cope with GPS outages.

In this system, GPS is replaced by a visual odometer and an algorithm which registers the on-board video to geo-referenced satellite or aerial images. Such aerial images are taken from a real manned flight and made available on the UAV beforehand. The growing availability of high-resolution satellite images suggests that such systems may spread rapidly in the near future. The sensor suite is composed of an INS (three gyros and three accelerometers), a monocular video camera and a barometric pressure sensor. The information from these sensors is fused using a Kalman filter to obtain the full UAV state. Two image-processing techniques, feature tracking and image registration, are used to update the navigation filter while GPS is unavailable. The KLT feature tracker tracks corner features across subsequent frames of the on-board video, while the odometer function uses the KLT results to calculate the distance travelled by the UAV. In the absence of GPS, this odometer is affected by drift. To compensate for this drift error, a geo-referenced image registration module is used.

On correct image registration, an absolute, drift-free position of the UAV, similar to the one provided by GPS, is obtained. If the on-board image is registered to an incorrect location, large position errors will be introduced, so the fundamental problem to be solved is the detection of correct and incorrect registrations. The success rate of the navigation system also depends on the terrain the UAV is flying over. Terrain with robust features such as road intersections is easier to match, while unstructured terrain such as rural areas is difficult to match. The positions calculated by the visual odometer and by image registration are fused together, and the resulting position is used to update the Kalman filter, which in turn estimates the current UAV state. The architecture proposed in the figure was tested on real flight data and on-board video. During this flight, inertial data, barometric altitude and on-board video were acquired. This non-GPS navigation approach followed the same path as a GPS-equipped UAV without accumulating drift errors.

When a valid image registration is obtained, it typically produces a jump with respect to the position calculated from the odometer. The larger the time elapsed between two valid registrations, the larger this position discontinuity. For this reason, the registration update is introduced only gradually over time into the Kalman filter and is treated as a correction added to the odometer solution.
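The gradual introduction of the registration correction can be sketched as follows. This is an illustrative fragment, not the authors' implementation; the blending gain `alpha` is an assumed tuning parameter:

```python
def blend_registration(odometer_pos, registration_pos, alpha=0.2):
    """Spread a registration correction over several filter updates.

    Instead of jumping to the registered position at once, only the
    fraction `alpha` of the residual is applied per cycle, so the
    Kalman filter sees a sequence of small corrections rather than
    one large position discontinuity.
    """
    return odometer_pos + alpha * (registration_pos - odometer_pos)

# Example: the odometer says x = 100 m, registration says x = 110 m.
pos = 100.0
for _ in range(10):            # ten filter cycles
    pos = blend_registration(pos, 110.0)
# pos approaches 110 m smoothly instead of jumping there in one step
```

The exponential-style blend is one simple way to realise "gradually over time"; the real system applies the correction through the Kalman filter itself.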



The vision-aided fusion architecture tested in this work is composed of a traditional Kalman filter which fuses the INS sensors (three accelerometers and three gyros) with the vision system.
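A toy one-dimensional version of this fusion can be sketched as follows. The noise values `q` and `r` are assumed for illustration, and the real filter is of course multi-dimensional:

```python
def predict(x, v, P, accel, dt, q=0.5):
    """Predict step: integrate the INS acceleration; grow uncertainty."""
    x = x + v * dt + 0.5 * accel * dt ** 2
    v = v + accel * dt
    P = P + q                      # process noise inflates the variance
    return x, v, P

def update(x, P, z, r=4.0):
    """Update step: correct INS drift with a vision position fix z."""
    K = P / (P + r)                # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# One cycle: coast on the INS for 0.25 s, then apply a vision fix.
x, v, P = predict(0.0, 10.0, 1.0, accel=0.0, dt=0.25)
x, P = update(x, P, z=2.0)
```

The predict step mirrors the INS mechanization (time integration), while the update step mirrors the vision measurement correcting the accumulated error.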

The visual odometer developed in this work is based on the KLT feature tracker. The KLT algorithm tracks point features across two subsequent frames. It first selects a number of features in an image according to certain goodness criteria, then tries to re-associate the same features in the next frame. This association gives very good results when the feature displacement is not too large; the higher the frame rate, the smaller the displacement between frames and the more successful the association process. The readings obtained from the odometer also depend on the camera orientation relative to the UAV body. Here the camera is fixed looking perpendicularly downward, so the feature depth equals the altitude of the UAV. This depth is obtained from a barometric pressure sensor, but this works only when the ground is flat; an on-board radar or laser altimeter would provide a more accurate altitude.
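Under this flat-ground, downward-looking assumption, the conversion from pixel displacement to ground distance follows the pinhole model. A minimal sketch (the focal length in pixels is an assumed calibration value, and the median step is one simple way to suppress bad tracks):

```python
import numpy as np

def odometer_displacement(shifts_px, altitude_m, focal_px=800.0):
    """Convert KLT pixel shifts between two frames into metres.

    shifts_px : (N, 2) per-feature (dx, dy) pixel displacements.
    With the camera pointing straight down, feature depth equals the
    altitude, so ground shift = pixel shift * altitude / focal length.
    The median over features rejects a few badly tracked points.
    """
    shift = np.median(np.asarray(shifts_px, dtype=float), axis=0)
    return shift * altitude_m / focal_px

# Example: most features moved ~8 px; one track (40 px) is an outlier.
d = odometer_displacement([[8.0, 0.0], [8.2, 0.1], [7.9, -0.1], [40.0, 3.0]],
                          altitude_m=100.0)
```

Summing such per-frame displacements over time gives the odometer's position estimate, which is exactly what drifts in the absence of an absolute fix.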

An INS mechanization function performs the time integration of the inertial measurements, while the Kalman filter function estimates the INS errors. The errors estimated by the Kalman filter are then used to correct the INS solution. As mentioned earlier, the vision system combines two techniques to calculate the position of the UAV: visual odometry and image registration. The odometer can provide a position update at a rate of 4 Hz, while a position update from the image registration algorithm occurs only when a reliable match is found. When a reliable position update from the image registration module is not available, the output from the visual odometer is taken directly to update the filter. When a reliable image registration is obtained, it usually produces a position jump compared to the odometer solution.
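The measurement selection just described can be sketched as follows (function and variable names are illustrative, not taken from the original system):

```python
def select_position_update(odometer_pos, registration_pos, registration_ok):
    """Choose the position measurement fed to the navigation filter.

    The odometer always produces a position at its 4 Hz rate; the
    registration result is preferred only when the matcher reports a
    reliable fix, since it alone is free of accumulated drift.
    """
    if registration_ok and registration_pos is not None:
        return registration_pos, "registration"
    return odometer_pos, "odometer"
```

In the real filter the chosen measurement would also carry its own uncertainty, but the selection logic is this simple in outline.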

Experimental results show that the visual odometer alone, combined with the INS, gives drift-free velocity and attitude estimation, which means the UAV can still be controlled once GPS is lost. Image registration combined with this technique yields a more accurate navigation system.

IMAGE REGISTRATION: The image registration technique is based on edge matching. A Sobel edge detector is applied both to the geo-referenced image and to the image taken from the on-board video camera. Edges are used because they are quite robust to changes in environmental illumination: since the geo-referenced image and the video camera image are taken at different times, illumination conditions will differ, so choosing features which are robust to illumination changes is necessary in a visual imagery system. The image registration is more reliable at higher altitudes, because the higher the UAV flies, the more environment structure can be captured. Another challenge lies in the fact that the environment changes over time: small details change quite fast, whereas large structures tend to be more static. Flying at higher altitude therefore also makes the registration more robust to small dynamic changes in the environment. The image registration process is represented in the block diagram:

the on-board color image is converted to grey scale and a median filter is applied. This filter removes small details which are visible from the on-board camera but not in the reference image; the median filter is well suited for removing such details while keeping the edges sharp. After filtering, the Sobel edge detector is applied. The image is then scaled and aligned to the reference image, so that the resolution of the on-board image matches the resolution of the reference image. The reference image is first converted to grey scale and the Sobel edge detector is applied; the resulting edge image is kept in memory and used during visual navigation. After both images have been processed as explained above, a matching algorithm tries to find the position in the cropped reference image which gives the best match with the video camera image. The position that yields the greatest number of overlapping pixels between the edges of the two images is taken as the matching result. This matching criterion gives a reasonable success rate.
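The Sobel-edge and overlap-counting steps can be sketched in a minimal form. The code below illustrates the technique rather than reproducing the original implementation; image sizes and the edge threshold are assumed:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from the Sobel gradient magnitude (grey-scale)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy) > thresh

def best_match(ref_edges, cam_edges):
    """Slide the camera edge map over the reference edge map; return the
    offset with the greatest number of overlapping edge pixels."""
    H, W = ref_edges.shape
    h, w = cam_edges.shape
    best_score, best_off = -1, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = int(np.sum(ref_edges[r:r + h, c:c + w] & cam_edges))
            if score > best_score:
                best_score, best_off = score, (r, c)
    return best_off, best_score

# Example: a reference image with one bright vertical stripe (a road,
# say); the camera image is a crop of it, so matching recovers the
# crop's column offset.
ref = np.zeros((12, 12))
ref[:, 6] = 10.0
offset, score = best_match(sobel_edges(ref), sobel_edges(ref[2:8, 3:11]))
```

A production system would use a faster correlation (e.g. FFT-based) rather than this exhaustive search, but the matching criterion, maximising overlapping edge pixels, is the same.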

Once a match is obtained, the on-board image can be geo-referenced and the absolute position of the UAV can be calculated. The most difficult part is deciding whether to accept the position as a good match. It must be determined whether the match is an outlier, to be rejected, or a valid result that can be used to update the filter. Outlier detection is not as easy as it sounds, since there are areas where outliers are predominant compared to good matches. One idea would be to segment the reference image and assign different matching-probability values to different areas, applying prior knowledge in the process. For example, it is known that image registration in urban areas is more reliable than in rural areas, and that road intersections result in more stable matching than road segments. In this way, a different degree of uncertainty can be assigned to a match based on the location where it occurred, and that uncertainty can then be used to update the navigation filter. However, this method would require a huge amount of off-line image processing to be applied to the reference image before it could be used, which would be impractical for large images.

The outlier detection method applied here does not require any image preprocessing. It is based on the observation that in areas where the matching is unreliable, the matched position is very noisy, while in areas where the matching is reliable, the position noise decreases. The rejection criterion applied here is based on analysing the difference between the position predicted by the filter and the position given by the matching algorithm.
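This rejection criterion amounts to a distance gate between the predicted and matched positions; a minimal sketch, with an assumed threshold value:

```python
def is_outlier(match_pos, predicted_pos, threshold_m=15.0):
    """Reject a registration result whose distance from the filter's
    predicted position exceeds a threshold (the 15 m value here is
    illustrative, not the paper's tuning)."""
    dx = match_pos[0] - predicted_pos[0]
    dy = match_pos[1] - predicted_pos[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold_m
```

A match close to the filter's prediction is accepted; one far from it is treated as noise from an unreliable area and discarded.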

RELATED CONCEPTS: Non-GPS navigation of UAVs is an area in which a lot of research is going on. One technique which could be applied to this problem is Simultaneous Localization And Mapping (SLAM). SLAM localizes a robot in the environment while mapping it at the same time; a prior image of the environment is not required. Although SLAM techniques have worked tremendously well in indoor missions, their capability in large outdoor environments remains a challenge. The main advantage of SLAM is that it does not require any prior maps of the environment. However, the SLAM approach makes most sense when robots have to close loops, that is, to come back to previously visited landmarks to decrease position uncertainty. This could be a potential limiting factor for UAV applications. If a UAV has lost its GPS signal, probably the best navigation strategy is the one which minimizes the risk of crashing in populated areas. Flying back home over previously visited sites is not always the safest strategy, while flying different routes might be preferable. For this reason, navigation based on image matching has great potential for this application and gives more flexibility in choosing emergency flight routes in case of GPS failure. There also exist other kinds of terrain navigation methods which are based not on aerial images but on terrain elevation maps. Here the flight altitude relative to the ground is required, measured using a radar altimeter. This type of localization system is implemented on some military jet fighters. But for UAV helicopters, which fly short distances at very low speed, the altitude variation is often too small to allow ground-profile matching.

CONCLUSION: The aerial-image-matching navigation architecture described in this paper has the potential to provide a drift-free UAV navigation solution in the absence of GPS. Experimental results prove that a UAV can fly without the aid of GPS at all. The monocular camera nowadays used in vision-aided navigation systems has a field of view of about 45 degrees, which captures only a small amount of environment structure; using a wide-angle, high-resolution camera would improve image registration robustness and cover more environment structure. Capturing only the static, permanent structures on the ground while neglecting moving objects makes the images easier to match and aids accurate tracking. The barometric altimeter, which relies on a flat-world assumption, should be replaced by a radar altimeter, which can provide a direct ground-altitude measurement. The possibility of using satellite images from Google Earth is something to look forward to in UAV navigation, as it is free and contains an enormous amount of information; in the near future it is not unlikely that Google Earth images could be used to navigate UAVs. The main problem still to be solved in a vision-aided navigation system is that when there is no update from the registration process, the position uncertainty grows; a fixed-size uncertainty window should be implemented to address this.
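One reading of that fixed-size uncertainty window can be sketched as follows; the drift rate, update interval and cap are all assumed values, not from the paper:

```python
def grow_uncertainty(sigma_m, drift_rate=0.5, dt=0.25, cap_m=50.0):
    """Grow the 1-sigma position uncertainty between registration
    updates, clamped to a fixed-size window so the image-matching
    search area stays bounded (all values illustrative)."""
    return min(sigma_m + drift_rate * dt, cap_m)

# Without registration updates the uncertainty grows, then saturates:
sigma = 1.0
for _ in range(1000):
    sigma = grow_uncertainty(sigma)
```

Capping the uncertainty keeps the registration search region, and hence the matching cost, bounded during long registration outages.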
