
Session D4: Ground Vehicle Navigation

A Combined Approach to Single-Camera-Based Lane Detection in Driverless Navigation
Xin Zhang, Xingqun Zhan, School of Aeronautics and Astronautics, Shanghai Jiao Tong University, China
Location: Spyglass

The rapid evolution of driverless cars is boosting demand for car electronics, and the automotive electronics market is expected to double its current size by 2025. This article presents a case study that tests the feasibility of developing lane detection algorithms using real data on a general-purpose computing platform, with an additional effort to implement and test the algorithms in a simulated environment using Commercial-Off-The-Shelf (COTS) components and OpenCL.
The paper is structured as follows. The background of the project is introduced first, and previous work on lane detection for driverless cars is summarized. The second part details the procedure for computing the position of the lane lines: (i) computing the camera calibration matrix and distortion coefficients, (ii) applying distortion correction to raw images, (iii) using combined color transforms and gradient computation to obtain a thresholded binary image, (iv) applying a perspective transform to rectify the binary image to a birds-eye view, (v) detecting lane pixels and fitting a curve to find the lane boundary, (vi) determining lane curvature and vehicle position with respect to the center of the detected lane, (vii) warping the detected lane boundaries back onto the original image, and (viii) outputting a visual display of the lane boundaries together with numerical estimates of lane curvature and vehicle position. Illustrative sketches of the main steps follow this summary.
Among these steps, steps (iii) and (v) receive the most attention, since they play the major part in ensuring the accuracy of the algorithm. In step (iii), different color spaces (RGB, HSV, HLS, YCbCr, and grayscale) are examined. The objective is to find an effective combination of channels from different color spaces, with corresponding thresholds, that detects the pixels indicating a segment of a lane line. This can be tricky because of many disturbing effects: shadows of trees, preceding cars, and even guardrails can alter the apparent color of the lane lines, and some highway sections may show split colors within the same lane due to new pavement. All of these factors affect the effectiveness of the algorithm. In step (v), a second-order polynomial is fit to the detected pixels that are assumed to form a lane line (either left or right). Collecting these pixels can present a problem because the preceding steps cannot guarantee that every "good" pixel is detected (i.e., there will be noise). Sliding windows, smoothing, and a Look-Ahead Filter (LAF) are examined.
The third part reports the processed results at each of the steps in section two. The implementation is tested on three video streams from different datasets with different road settings, and we demonstrate that the algorithm provides more than sufficient accuracy for lane detection. The fourth part presents an OpenCL and COTS framework for implementing and testing the algorithms in a simulated environment. The last section presents planned future efforts and concludes the paper. The datasets and source code are available on GitHub.
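
As a minimal sketch of steps (i) and (ii), the following Python/OpenCV snippet computes the camera calibration matrix and distortion coefficients from chessboard images and applies the correction to a raw frame. The use of OpenCV, the 9x6 pattern size, and the calibration image path are assumptions for illustration, not details taken from the paper.

    # Steps (i)-(ii): camera calibration and distortion correction (sketch).
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners per chessboard row/column (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    for fname in glob.glob("calibration/*.jpg"):  # hypothetical path
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern, None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Camera matrix and distortion coefficients from the detected corners
    _, mtx, dist, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)

    def undistort(img):
        """Apply distortion correction to a raw frame (step ii)."""
        return cv2.undistort(img, mtx, dist, None, mtx)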
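
Step (iii) combines color transforms with gradient computation. One common realization, shown below as a sketch rather than the authors' tuned pipeline, thresholds the HLS S-channel together with a Sobel gradient in the x direction; the specific channel choice and threshold values are assumptions.

    # Step (iii): combined color/gradient thresholded binary image (sketch).
    import cv2
    import numpy as np

    def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
        """OR-combine an HLS S-channel threshold with a Sobel-x gradient
        threshold. Threshold values are illustrative, not tuned."""
        hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
        s = hls[:, :, 2]

        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)        # x-gradient
        abs_sx = np.absolute(sobelx)
        scaled = np.uint8(255 * abs_sx / np.max(abs_sx))  # normalize to 0-255

        binary = np.zeros_like(s)
        binary[((s >= s_thresh[0]) & (s <= s_thresh[1])) |
               ((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1]))] = 1
        return binary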
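
Step (iv) rectifies the binary image to a birds-eye view with a perspective transform. In the sketch below, the source and destination quadrilaterals are hypothetical values that would need tuning for the actual camera mount and image size.

    # Step (iv): perspective transform to a birds-eye view (sketch).
    import cv2
    import numpy as np

    # Hypothetical quadrilaterals mapping the road trapezoid to a rectangle
    src = np.float32([[580, 460], [700, 460], [1090, 720], [200, 720]])
    dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)  # used to warp back in step (vii)

    def birds_eye(binary):
        """Rectify the thresholded binary image to a birds-eye view."""
        h, w = binary.shape[:2]
        return cv2.warpPerspective(binary, M, (w, h), flags=cv2.INTER_LINEAR)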
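
Step (v) locates lane pixels and fits a second-order polynomial x = Ay^2 + By + C to each line. The sliding-window search sketched below is one standard way to collect those pixels despite noise; the window count, margin, and re-centering threshold are illustrative.

    # Step (v): sliding-window pixel search and polynomial fit (sketch).
    import numpy as np

    def fit_lanes(binary_warped, nwindows=9, margin=100, minpix=50):
        """Find left/right lane pixels with sliding windows and fit a
        second-order polynomial x = A*y^2 + B*y + C to each."""
        # Histogram of the lower half locates the lane-line base positions
        hist = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
        mid = hist.shape[0] // 2
        leftx = np.argmax(hist[:mid])
        rightx = np.argmax(hist[mid:]) + mid

        nonzeroy, nonzerox = binary_warped.nonzero()
        window_h = binary_warped.shape[0] // nwindows
        left_inds, right_inds = [], []

        for w in range(nwindows):
            y_lo = binary_warped.shape[0] - (w + 1) * window_h
            y_hi = binary_warped.shape[0] - w * window_h
            for x_cur, inds in ((leftx, left_inds), (rightx, right_inds)):
                good = ((nonzeroy >= y_lo) & (nonzeroy < y_hi) &
                        (nonzerox >= x_cur - margin) &
                        (nonzerox < x_cur + margin)).nonzero()[0]
                inds.append(good)
            # Re-center each window on the mean x of the pixels it captured
            if len(left_inds[-1]) > minpix:
                leftx = int(nonzerox[left_inds[-1]].mean())
            if len(right_inds[-1]) > minpix:
                rightx = int(nonzerox[right_inds[-1]].mean())

        left_inds = np.concatenate(left_inds)
        right_inds = np.concatenate(right_inds)
        left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
        right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
        return left_fit, right_fit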
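
One way to read the Look-Ahead Filter idea is to restrict the pixel search in the next frame to a band around the previous frame's fit, falling back to the full sliding-window search when too few pixels survive. The sketch below implements that interpretation; the paper's exact LAF and smoothing details may differ.

    # Look-Ahead Filter interpretation: search around the previous fit (sketch).
    import numpy as np

    def search_around_fit(binary_warped, prev_fit, margin=100):
        """Keep only pixels within +/- margin of the previous frame's
        polynomial; return a new fit, or None to trigger a fallback."""
        nonzeroy, nonzerox = binary_warped.nonzero()
        x_prev = np.polyval(prev_fit, nonzeroy)
        keep = np.abs(nonzerox - x_prev) < margin
        if keep.sum() < 500:  # too few pixels: fall back (threshold assumed)
            return None
        return np.polyfit(nonzeroy[keep], nonzerox[keep], 2)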
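
Step (vi) evaluates the lane curvature and the vehicle's offset from lane center. For a fit x = A*y^2 + B*y + C, the radius of curvature at y is R = (1 + (2Ay + B)^2)^(3/2) / |2A|. The pixel-to-meter scales in the sketch below are common assumptions for a 1280x720 warped highway image, not values from the paper.

    # Step (vi): lane curvature and vehicle offset from lane center (sketch).
    import numpy as np

    YM_PER_PIX = 30 / 720   # meters per pixel, y direction (assumed)
    XM_PER_PIX = 3.7 / 700  # meters per pixel, x direction (assumed)

    def curvature_and_offset(left_fit, right_fit, img_w=1280, img_h=720):
        """Radius of curvature (meters) for each line at the image bottom,
        plus vehicle offset assuming a camera centered on the vehicle."""
        y = img_h * YM_PER_PIX  # evaluate at the bottom of the image, in meters
        radii = []
        for fit in (left_fit, right_fit):
            # Rescale pixel-space coefficients to meters
            A = fit[0] * XM_PER_PIX / (YM_PER_PIX ** 2)
            B = fit[1] * XM_PER_PIX / YM_PER_PIX
            radii.append((1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A))

        # Offset: lane center vs. image center at the image bottom
        left_x = np.polyval(left_fit, img_h)
        right_x = np.polyval(right_fit, img_h)
        offset = (img_w / 2 - (left_x + right_x) / 2) * XM_PER_PIX
        return radii[0], radii[1], offset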
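
The abstract does not specify the fourth part's OpenCL/COTS framework. Purely as an illustration of offloading one pipeline stage to an OpenCL device, here is a minimal PyOpenCL sketch that runs the binary threshold as a kernel; the kernel, buffer handling, and data layout are all assumptions, not the authors' framework.

    # Illustrative OpenCL offload of the threshold stage (sketch, via PyOpenCL).
    import numpy as np
    import pyopencl as cl

    KERNEL = """
    __kernel void threshold(__global const uchar *src, __global uchar *dst,
                            const uchar lo, const uchar hi) {
        int i = get_global_id(0);
        dst[i] = (src[i] >= lo && src[i] <= hi) ? 1 : 0;
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL).build()

    def threshold_cl(channel, lo=170, hi=255):
        """Threshold a single uint8 channel on the OpenCL device."""
        flat = np.ascontiguousarray(channel, dtype=np.uint8).ravel()
        out = np.empty_like(flat)
        mf = cl.mem_flags
        src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=flat)
        dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)
        prog.threshold(queue, (flat.size,), None, src_buf, dst_buf,
                       np.uint8(lo), np.uint8(hi))
        cl.enqueue_copy(queue, out, dst_buf)
        return out.reshape(channel.shape)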
The reported work is expected to benefit the community in three ways:
(1) to show that a hardware-in-the-loop simulator can be useful in driverless car development when the research setup does not allow a full hardware evaluation.
(2) to show the effectiveness of using only a front-facing camera in driverless car perception.
(3) to present sets of parameters for the aforementioned procedures that yield reliable detection performance in some common yet comprehensive settings.


