The main contribution of this thesis is a sensor fusion approach to scene environment mapping as part of a Sensor Data Fusion (SDF) architecture. Under this architecture, one of the requirements to take into account is the choice of the internal representation. This internal representation must be common to all sensors, so that the fusion process for different sensors is feasible under it. The occupancy grid and Dempster-Shafer grid representations have been chosen to map the sonar and vision readings respectively.
This approach combines sonar array readings with stereo vision readings. A probabilistic sensor model has been used to represent the behavior of the sonar beam. This model is divided into two sub-models, which represent the probabilities of free and occupied space within the beam respectively.
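The free/occupied split of the beam model can be sketched for a single cell along the beam axis; this is a minimal illustration, and the Gaussian peak width, prior, and range limits are assumed values, not parameters from the thesis.

```python
import math

def sonar_cell_probability(cell_range, measured_range, sigma=0.05, max_range=5.0):
    """Illustrative probabilistic sonar beam model (assumed parameters).

    Cells well before the measured range fall under the free-space model
    (low occupancy probability); cells near the measured range fall under
    the occupied-space model, here a Gaussian peak centred on the echo;
    cells behind the echo carry no information (probability 0.5).
    """
    if cell_range > measured_range + 3 * sigma or measured_range >= max_range:
        return 0.5  # behind the echo, or out-of-range reading: unknown
    # occupied-space model: Gaussian peak at the measured range
    p_occ = 0.5 + 0.45 * math.exp(-((cell_range - measured_range) ** 2)
                                  / (2 * sigma ** 2))
    # free-space model: cells clearly in front of the echo are likely empty
    if cell_range < measured_range - 3 * sigma:
        p_occ = 0.1
    return p_occ
```

In a full implementation this one-dimensional profile would be spread over the sonar's opening angle before being written into the grid.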
The Scale Invariant Feature Transform (SIFT) is a method for extracting distinctive invariant features from digital images. The features are invariant to scale and rotation. They also provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. Furthermore, the features are distinctive, i.e. they can be matched with high probability against other features in a large database with many images. This particular property makes the SIFT method suitable for robot navigation, localization, and mapping. Once the descriptors are found in each image, e.g. the left and right images, a matching algorithm is applied to both images. Stereo triangulation is implemented in order to obtain the depth from the pair of cameras to the features.
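For a rectified stereo pair, the triangulation step reduces to the standard depth-from-disparity relation Z = f·B/d. The sketch below assumes rectified images and pixel-aligned epipolar lines; the focal length and baseline values in the usage note are illustrative only.

```python
def triangulate_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a matched feature from a rectified stereo pair.

    Z = f * B / d, where the disparity d is the horizontal offset of the
    matched SIFT feature between the left and right images.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("a matched feature must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, with an assumed focal length of 700 px and a 0.12 m baseline, a 10 px disparity places the feature 8.4 m from the cameras.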
Due to quantization and calibration errors, a certain degree of uncertainty must be expected in the triangulation. Geometrically, these uncertainties translate into ellipsoidal regions.
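One reason the uncertainty regions elongate along the viewing ray is that, to first order, depth error grows quadratically with depth: ΔZ ≈ (Z²/(f·B))·Δd. A minimal sketch, assuming a fixed sub-pixel disparity matching error (the half-pixel default is an assumption, not a thesis value):

```python
def depth_uncertainty(depth_m, focal_px, baseline_m, disparity_error_px=0.5):
    """First-order propagation of a disparity error into depth error.

    dZ ~= (Z**2 / (f * B)) * dd: distant features have much larger depth
    uncertainty than near ones, stretching the error region into an
    ellipsoid along the viewing ray.
    """
    return (depth_m ** 2 / (focal_px * baseline_m)) * disparity_error_px
```

This quadratic growth is why the grid update must account for feature depth rather than assuming a constant range error.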
The Bayesian estimation and Dempster-Shafer approaches are applied to update and integrate the uncertainty grids built from the sonar array and the SIFT descriptors. These two techniques are compared with each other.
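The two update rules can be contrasted on a single grid cell. The sketch below assumes binary hypotheses {occupied, free} for the Bayesian case, and mass functions over {occ, free, unknown} (with "unknown" standing for the full frame of discernment) for the Dempster-Shafer case; the numbers in the test are illustrative.

```python
def bayes_update(prior_occ, p_occ_given_z):
    """Recursive Bayesian update of a cell's occupancy probability,
    assuming conditionally independent sensor readings."""
    num = p_occ_given_z * prior_occ
    den = num + (1 - p_occ_given_z) * (1 - prior_occ)
    return num / den

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over
    {occ, free, unknown}; conflicting mass (occ vs free) is removed
    and the rest renormalized."""
    conflict = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    k = 1 - conflict
    return {
        "occ": (m1["occ"] * m2["occ"] + m1["occ"] * m2["unknown"]
                + m1["unknown"] * m2["occ"]) / k,
        "free": (m1["free"] * m2["free"] + m1["free"] * m2["unknown"]
                 + m1["unknown"] * m2["free"]) / k,
        "unknown": m1["unknown"] * m2["unknown"] / k,
    }
```

A practical difference visible even in this sketch: the Dempster-Shafer cell retains an explicit "unknown" mass, whereas the Bayesian cell folds all ignorance into the 0.5 prior.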
The fusion of the data from the sensors is performed by the following methods: a) fusion of the two sensor readings in a single representation, where both sensors share the same grid; b) fusion of two independent internal representations, one per sensor reading, into a single one; c) fusion of two grids with respect to sensor accuracy, where the accuracy of one sensor (in this case the camera) and the inaccuracy of the other (in this case the sonar) are taken into account to carry out the fusion process.
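Method (c) can be sketched as an accuracy-weighted combination of two per-cell occupancy grids; the 0.7 camera weight below is an illustrative assumption, not a value taken from the thesis.

```python
def fuse_grids_weighted(p_camera, p_sonar, w_camera=0.7):
    """Method (c) sketch: fuse two occupancy grids cell by cell,
    weighting the more accurate sensor (here the camera) more heavily
    than the less accurate one (here the sonar)."""
    if len(p_camera) != len(p_sonar):
        raise ValueError("grids must have the same size")
    return [w_camera * c + (1 - w_camera) * s
            for c, s in zip(p_camera, p_sonar)]
```

Methods (a) and (b) differ only in when the combination happens: (a) writes both sensor models into one shared grid as readings arrive, while (b) maintains one grid per sensor and merges them afterwards.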
The experiment was done on the robot Pioneer 3AT from ActiveMedia Robotics, as can be seen in Fig. 1. The robot is equipped with an ultrasonic ring of 16 sonars, a stereo camera pair, and a laser rangefinder. The laser rangefinder was used as ground truth to evaluate the incoming data from the sonar and the stereo pair of cameras respectively.
Another aim of this thesis is to show that it is feasible to perform path planning based on the potential fields derived from maps that have been generated using fused range readings from the sonar and vision system.
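Path planning over the fused map can be sketched with the classic attractive/repulsive potential field, where occupied cells act as repulsive sources and the goal attracts the robot. The gains and influence radius below are illustrative assumptions.

```python
import math

def potential_gradient_step(pos, goal, obstacles, step=0.1,
                            k_att=1.0, k_rep=0.5, d0=1.0):
    """One gradient-descent step on a potential field (assumed gains).

    U_att = 0.5 * k_att * |pos - goal|^2
    U_rep = 0.5 * k_rep * (1/d - 1/d0)^2  for obstacles within radius d0.
    The robot moves against the gradient: toward the goal, away from
    nearby obstacles.
    """
    gx = k_att * (pos[0] - goal[0])
    gy = k_att * (pos[1] - goal[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            coef = k_rep * (1 / d - 1 / d0) / (d ** 3)
            gx -= coef * dx
            gy -= coef * dy
    return (pos[0] - step * gx, pos[1] - step * gy)
```

In practice the obstacle list would be the set of grid cells whose fused occupancy exceeds a threshold; local minima of the field are a known limitation of this planner.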
The current research has focused on fusing sonar readings with features extracted from stereo vision using the SIFT algorithm, in order to reduce the angular uncertainty and the specular reflections affecting range readings. This fusion mainly reduces uncertainty about free space, which is hard to detect using sonar measurements alone. The approach is therefore suitable mainly for low-cost robot solutions in which webcams are used for SIFT feature detection and significantly improve the sonar measurements. The fusion of all sensors improves the navigation of an autonomous robot in home/office-like environments.
Effective start/end date: 01/01/2004 – 31/12/2007

    Research areas

  • Sensor fusion, mobile robots, stereo vision, sonar, Bayesian theory, Dempster-Shafer evidential theory, occupancy grids, Dempster-Shafer grids, SIFT, potential fields
ID: 214550914