Abstract: | The detection and tracking of objects around an autonomous vehicle is essential for it to operate safely. This paper presents an algorithm to detect, classify, and track objects. Each object is classified as moving or stationary as well as by type (e.g., vehicle, pedestrian, or other). The proposed approach uses the state-of-the-art deep-learning network YOLO (You Only Look Once) [1], combined with data from a laser scanner, to detect and classify objects and to estimate their positions around the car. The Oriented FAST and Rotated BRIEF (ORB) [2] feature descriptor is used to match the same object from one image frame to the next. This information is fused with measurements from a coupled GPS/INS using an Extended Kalman Filter. The resulting solution aids in the localization of the car itself and of the objects within its environment so that it can safely navigate the roads autonomously. The algorithm was developed and tested using the dataset collected by the Oxford Robotcar [3], which is equipped with cameras, LiDAR, GPS, and INS and collected data while traversing a route through the crowded urban environment of central Oxford. |
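As an illustration of the frame-to-frame association step mentioned in the abstract, the sketch below matches ORB descriptors between two consecutive camera frames using OpenCV. It is a minimal sketch under assumed inputs, not code from the paper: the image file names, feature count, and number of matches printed are placeholders.

```python
import cv2

# Load two consecutive grayscale camera frames (file names are placeholders).
frame_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors in each frame.
orb = cv2.ORB_create(nfeatures=500)
kp_prev, des_prev = orb.detectAndCompute(frame_prev, None)
kp_curr, des_curr = orb.detectAndCompute(frame_curr, None)

# Match descriptors with a brute-force Hamming matcher; crossCheck keeps
# only mutually best matches, which suppresses many outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

# The strongest matches associate keypoints on an object in the previous
# frame with the same object in the current frame.
for m in matches[:20]:
    print(kp_prev[m.queryIdx].pt, "->", kp_curr[m.trainIdx].pt)
```

In the paper's pipeline, matches like these would be restricted to the bounding boxes produced by YOLO so that each detected object is tracked across frames before fusion with the GPS/INS solution.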
Published in: | Proceedings of the 2019 International Technical Meeting of The Institute of Navigation, January 28 - 31, 2019, Hyatt Regency Reston, Reston, Virginia |
Pages: | 870 - 883 |
Cite this article: | Aryal, Milan, Baine, Nicholas, "Detection, Classification, and Tracking of Objects for Autonomous Vehicles," Proceedings of the 2019 International Technical Meeting of The Institute of Navigation, Reston, Virginia, January 2019, pp. 870-883. https://doi.org/10.33012/2019.16731 |