Session B2: Autonomous Navigation

A New Dimension of Mapping and Sensing: Image-aided Driverless Vehicle Navigation
Charles K. Toth, Dorota Grejner-Brzezinska, Zoltan Koppanyi, The Ohio State University
Location: Grand Ballroom F

As vehicle technology moves toward higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, as accurate maps have great potential to improve safety. In particular, high-definition 3D maps, including road topography and infrastructure as well as city models along transportation corridors, represent the necessary support for driverless vehicles. Road surface information, such as pavement markings, is essential for navigation because it provides an excellent localization reference, for example, for keeping vehicles in their lanes. Unfortunately, these markings can be easily obscured in rain and snow, and in those adverse scenarios other objects with better visibility should be used to position the vehicle. Traffic signs, traffic lights, and buildings along the roads provide a fairly reliable source of landmarks and can be accessed from transportation databases. In a Smart City environment, integrated data exchange can provide the link between the geospatial/GIS database and the vehicles (V2I/V2X). Access to this database, which includes high-definition 3D maps and the corresponding metadata, is essential for autonomous vehicles, as it enables the sensor systems to accurately relate the vehicle’s location/trajectory to the surrounding environment in any situation.
The quality of the 3D data, measured in accuracy and currency, must clearly be far superior to conventional map data traditionally provided by federal and local governments, and also significantly richer in information than the typical 2D map data used in contemporary car navigation systems. Many private and government initiatives, including crowdsourcing efforts, are focused on acquiring this type of data. The questions are: (1) what geospatial data accuracy is needed in the map database to effectively support driverless vehicle navigation, and (2) what sensor performance is required to benefit from the availability of these high-resolution, accurate databases?
In this paper, the accuracy of bag-of-words (visual words) model-based localization from various sensor streams is investigated for positioning driverless cars. The proposed algorithm works as follows. First, a vehicle equipped with high-, medium-, and low-resolution cameras acquires data in a typical environment, in our case the OSU campus, where GPS/GNSS data are available along with other navigation sensor streams. This forms the training data. For each image, feature descriptors, such as SIFT and SURF, are extracted and organized in a database, called the codebook, together with their positions and orientations. Then, during the test sessions, we assume that no GPS/GNSS data are available and the navigation solution is derived mainly from the images: the image features are extracted and matched to the database, and thus the position and orientation can be obtained. The paper presents the accuracy assessment of this method, including a comparison of the navigation solutions derived from images of various resolutions with respect to the map database resolution and accuracy. These results provide essential information for optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies.
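
To make the described pipeline concrete, the following is a minimal sketch of bag-of-visual-words place recognition in Python with OpenCV. It is an illustration under stated assumptions, not the authors' implementation: the vocabulary size, image paths, and (x, y, heading) pose format are hypothetical, and SIFT stands in for the full set of descriptors studied in the paper.

```python
# Minimal bag-of-visual-words (BoVW) localization sketch.
# Assumes opencv-python >= 4.4 (SIFT included) and NumPy.
import cv2
import numpy as np

VOCAB_SIZE = 256  # assumed codebook size; tuned per dataset in practice

sift = cv2.SIFT_create()

def descriptors(image_path):
    """Extract SIFT descriptors from one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def build_codebook(train_paths):
    """Cluster all training descriptors into a visual vocabulary (codebook)."""
    all_desc = np.vstack([descriptors(p) for p in train_paths]).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, _, centers = cv2.kmeans(all_desc, VOCAB_SIZE, None, criteria, 5,
                               cv2.KMEANS_PP_CENTERS)
    return centers

def bow_histogram(desc, codebook):
    """Quantize descriptors against the codebook into an L2-normalized histogram."""
    dists = np.linalg.norm(desc[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)          # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)

def localize(query_path, codebook, db_hists, db_poses):
    """Return the pose of the best-matching training image for a query image."""
    q = bow_histogram(descriptors(query_path), codebook)
    scores = db_hists @ q                 # cosine similarity of normalized histograms
    return db_poses[int(scores.argmax())]

# Hypothetical usage: train_paths are geotagged training images, and poses
# holds the (x, y, heading) recorded by GPS/GNSS for each training image.
# codebook = build_codebook(train_paths)
# db_hists = np.stack([bow_histogram(descriptors(p), codebook) for p in train_paths])
# pose_estimate = localize("query.jpg", codebook, db_hists, np.array(poses))
```

In this simplified form, the query inherits the pose of the single best-matching database image; a full system would refine that estimate, for example from the matched features' stored positions and orientations, as the abstract describes.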


