
Session A5: Sensor-Fusion for GNSS-Challenged Navigation

Increasing Positioning Accuracy in Urban Environments Using Radar-Based Point Clouds
Zheng Yu Lang, Royal Military College of Canada; Emma Dawson, Queen's University; Paulo Ricardo Marques de Araujo, Queen's University; Aboelmagd Noureldin, Royal Military College of Canada
Location: Beacon A

With all newly manufactured vehicles having some level of autonomy, obtaining an accurate positioning solution is crucial for motion planning and operational safety. While using the Global Navigation Satellite System (GNSS) for positioning and navigation is standard practice in most environments, its accuracy suffers in dense urban and indoor environments. In these circumstances, relying entirely on the vehicle’s Inertial Measurement Unit (IMU) is challenging due to the accumulation of drift errors over time. To bridge the gap, exteroceptive sensors such as cameras, Lidars, and radars can be integrated with IMUs to correct their inherent errors and provide a more stable and accurate positioning solution. Automotive radars operating around 77 GHz are unaffected by environmental effects such as rain or fog and function independently of illumination. Radars also provide two unique measurements: the Radar Cross Section (RCS) and the Doppler velocity of detected objects. However, in urban environments, radar-generated point clouds often suffer from ghost detections and cross-talk. For positioning solutions involving map registration, static vehicles parked along the streets can sometimes resemble building edges, increasing the likelihood of registration errors. To improve map registration accuracy in urban settings, it is crucial to eliminate ghost detections, noise, and static vehicles from the radar data. This refinement can significantly improve the reliability of map registration-based positioning solutions. This paper introduces a velocity and geometric filter to address the detection of dynamic objects and noise observed by the radar, as well as a classifier to identify the remaining static vehicles.
By filtering out dynamic points, ghost detections, and static vehicles, this method has the potential to enhance the accuracy of map registration-based positioning algorithms in GNSS-denied and urban environments, thus achieving high-precision positioning at the decimeter level suitable for Level 3+ autonomy.
1. Introduction
Low-cost sensors are increasingly being utilized to enhance positioning accuracy while maintaining cost-effective solutions for end users, which is essential for newly manufactured vehicles. In GNSS-challenged environments, such as urban areas or during active GNSS jamming, where positioning accuracy is crucial, reliance on GNSS alone is insufficient due to signal degradation and multipath errors. To bridge this gap when GNSS signals are degraded, data from the ego vehicle’s onboard motion sensors can be fused with exteroceptive sensors such as radar, Lidar, and cameras. While automotive radars are robust to weather and lighting conditions, their point clouds are sparse and susceptible to noise.
Dawson et al. (Dawson, 2022) demonstrated that radar point clouds aggregated by fusing automotive radar with an Inertial Measurement Unit (IMU) can be registered to available maps of indoor parking garages, achieving positioning accuracy better than 50 cm 95% of the time. This high level of accuracy is attributed to the controlled indoor environment, where fewer dynamic objects are present. In outdoor environments, additional dynamic elements would increase the complexity of radar detections, making accurate, high-precision positioning more challenging.
Kellner et al. (Kellner, 2013) proposed a method of using the measured Doppler velocities of detected objects to obtain the instantaneous ego-motion of the vehicle. Their algorithm effectively determines the vehicle motion without the need for pre-processing, such as clustering or data storage, and it can label targets as stationary or non-stationary. Building on and modifying their approach, with the vehicle’s odometer speed as a reference, the algorithm can be tuned to distinguish dynamic from static objects without additional computational complexity.
Zhou et al. (Kamijo, 2020) developed a classification method using a Support Vector Machine (SVM) to differentiate between humans and vehicles in millimeter-wave radar point cloud data. Their approach uses 11 different features to describe object characteristics and achieves high classification accuracy. However, their dataset was collected in a highly controlled environment, which limits the model’s use in real-world scenarios. The work presented in this paper, in contrast, serves as a proof of concept that an SVM can effectively classify different object types within automotive radar point cloud data in dynamic, uncontrolled environments.
2. Problem Statement
In urban environments, radar-generated point clouds contain noise, dynamic objects, and non-landmark static points, such as parked vehicles. These extraneous points can lead to inaccuracies in map-matching algorithms, which rely on static landmarks for positioning. Removing dynamic objects like vehicles and pedestrians and filtering out non-landmark static objects such as parked cars can enhance the performance of map-matching algorithms. By eliminating such points, we can increase the reliability and precision of radar-based positioning systems in GNSS-denied environments. This would result in more robust and accurate positioning, ultimately contributing to safer and more reliable autonomous vehicle navigation.
3. Research Objective
The aim of this work is to develop an algorithm to identify and remove dynamic and extraneous radar detections from point clouds in urban environments to improve the accuracy of vehicle ego-motion estimation using pre-existing map-matching techniques. Only the static features corresponding to features in the reference map are useful for map registration, as they provide consistent landmarks for accurate positioning. To achieve this, the following are the specific objectives of this work:
(a) Design and realize a velocity filter to distinguish between static and dynamic objects detected by the radars.
(b) Implement a geometric filter for the removal of ghost objects and noise.
(c) Devise an SVM classifier to classify detections as static environments, vehicles, or others.
4. Methodology
Using a combination of classical filters and machine learning classifiers, we aim to remove dynamic objects, ghost detections, cross-talk noise between similar radars, and static parked vehicles.
4.1. Velocity Filter
The velocity filter removes dynamic objects from the radar point cloud by estimating each detection’s over-the-ground velocity from its measured Doppler velocity, azimuth angle, and the radar sensor’s mounting angle, expressed in the vehicle’s frame. Comparing this estimate against the vehicle’s forward speed determines whether the object is static or dynamic: detections whose velocity exceeds the threshold are classified as dynamic and filtered out, reducing the overall number of points. Removing dynamic points lowers the overall noise of the detected point cloud, leaving static points that aid the map-matching algorithm.
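The check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the flat mounting-angle handling, and the 0.5 m/s threshold are all assumptions for the example.

```python
import numpy as np

def velocity_filter(doppler, azimuth, ego_speed, mount_angle, threshold=0.5):
    """Label radar detections as static or dynamic (illustrative sketch).

    For a stationary target, the radial (Doppler) velocity seen by a radar on
    a forward-moving vehicle is approximately -v_ego * cos(azimuth + mount_angle).
    Detections whose measured Doppler deviates from this expectation by more
    than `threshold` (m/s) are treated as dynamic.
    """
    expected = -ego_speed * np.cos(azimuth + mount_angle)
    residual = np.abs(doppler - expected)
    return residual > threshold  # True -> dynamic, filter out

# Example: three detections at azimuths 0, 45, 90 deg, ego speed 10 m/s.
az = np.deg2rad([0.0, 45.0, 90.0])
dop = np.array([-10.0, -7.07, 3.0])  # third point has its own motion
dynamic = velocity_filter(dop, az, ego_speed=10.0, mount_angle=0.0)
```

The first two detections match the Doppler expected for static scatterers and are kept; the third deviates by 3 m/s and is flagged as dynamic.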
4.2. Geometric Filter
Geometric filtering is achieved using the DBSCAN algorithm (Raj, 2022) to eliminate noise and ghost detections from radar point clouds. DBSCAN clusters points within a defined radius and requires a minimum number of points to form a cluster. We realized the algorithm so that any radar points failing these criteria are considered noise. This process effectively removes spurious radar detections, ensuring that only meaningful, dense clusters are retained for the map registration process, leading to a more robust and accurate positioning.
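A minimal sketch of this filter using scikit-learn's DBSCAN is shown below; the `eps`, `min_samples`, and `min_cluster_size` values are placeholders, not the tuned parameters used in the paper (Section 5 mentions a 30-point minimum cluster size, used here as the default).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def geometric_filter(points_xy, eps=1.0, min_samples=5, min_cluster_size=30):
    """Remove noise, ghost detections, and small clusters (illustrative sketch)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    keep = np.zeros(len(points_xy), dtype=bool)
    for lbl in set(labels):
        if lbl == -1:
            continue  # DBSCAN labels noise points as -1
        idx = labels == lbl
        if idx.sum() >= min_cluster_size:  # drop clusters below the size floor
            keep[idx] = True
    return points_xy[keep], labels

# Example: one dense 40-point cluster plus two isolated ghost detections.
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.2, size=(40, 2))
ghosts = np.array([[50.0, 50.0], [-40.0, 30.0]])
kept, labels = geometric_filter(np.vstack([cluster, ghosts]))
```

Only the dense cluster survives; the two isolated points are discarded as noise.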
4.3. Support Vector Machine (SVM) Classifier
An SVM classifier is trained on radar point cloud features, including range, azimuth, Doppler velocity, and radar cross-section (RCS), to classify objects into predefined categories: vehicles, static objects, and others. The "others" category covers objects that do not fit the two primary classes, such as pedestrians, cyclists, road signs, barriers, or other miscellaneous objects that may be present in urban settings but are not critical for vehicle positioning or map-matching algorithms. The SVM classifier uses parameters such as the regularization factor, which controls the trade-off between misclassification and model complexity, and kernel functions (e.g., linear, radial basis function (RBF)) to handle non-linear relationships in the data. Another important parameter is the kernel coefficient, which determines the sensitivity of the decision boundary, influencing how closely the model fits the training data. By fine-tuning these parameters, the SVM is optimized to differentiate between moving vehicles and static objects, improving the radar point cloud quality for map-matching algorithms. This approach ensures that only relevant, static objects are used for positioning.
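The training setup can be sketched with scikit-learn as follows. The feature layout, toy data, and the particular `C` and `gamma` values are assumptions for illustration; the paper's tuned hyperparameters are not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed feature layout per detection: [range, azimuth, Doppler, RCS].
# C (regularization) trades off misclassification against model complexity;
# gamma (RBF kernel coefficient) sets how tightly the boundary fits the data.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma=0.1))

# Toy, well-separated training data: class 0 = vehicle, 1 = static, 2 = other.
rng = np.random.default_rng(1)
y = np.repeat(np.arange(3), 100)
X = rng.normal(size=(300, 4)) + y[:, None] * 3.0
clf.fit(X, y)
```

Scaling the features first matters because range (tens of meters) and Doppler (a few m/s) live on very different scales, which would otherwise distort the RBF distance computation.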
To evaluate the SVM model, we observe the following:
(a) Precision: The ratio of true positive predictions to all positive predictions. It indicates how many of the predicted positives are correct.
(b) Recall: The ratio of true positives to all actual positives, showing how well the model identifies positive instances.
(c) F1-Score: The harmonic mean of Precision and Recall.
(d) Support: Number of instances of each class in the dataset.
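The four quantities above can be computed per class directly from the label pairs; this helper is an illustrative implementation, equivalent to what a library classification report produces.

```python
def prf1(y_true, y_pred, cls):
    """Per-class precision, recall, F1 (harmonic mean), and support."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    support = sum(1 for t in y_true if t == cls)
    return precision, recall, f1, support

# Example: class 0 has 2 true instances, 1 correctly predicted.
p, r, f1, s = prf1([0, 0, 1, 1, 1], [0, 1, 1, 1, 0], cls=0)
```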
5. Results
The velocity filter was applied to every detected point in each radar scan, with each point classified as static or dynamic based on the predicted label. For ground truth, the static classification was determined using the RadarScenes dataset’s original label 11, which exclusively represents static environments such as landmarks. However, this ground truth does not account for other non-moving objects like parked vehicles, which are not included in label 11; as a result, any object not labelled 11 was considered dynamic. While not ideal, this testing setup leads to the observed classification errors of 31% for misclassified static objects and 23% for misclassified dynamic objects. Despite the limitations of the ground truth, the results validate the performance of the velocity filter, demonstrating its potential for distinguishing between static and dynamic objects in radar point clouds.
Through visual inspection, the geometric filter was able to cluster groups of vehicles and static environments. By setting a minimum cluster size of 30 points or more, noise and ghost detections, along with small clusters of pedestrians, can be filtered out.
The SVM classifier was trained using the RadarScenes dataset, with sequences 1 through 126 and 146 through 153 serving as the training data. As seen in Table 1, the overall accuracy for all three classes was 80%. A crucial step in the training process was to balance the data points for each class: vehicle, static environment, and other objects. This was essential because SVM models struggle with unbalanced datasets (Wang, 2021), which bias the model toward the larger class. In this case, class 1 was ten times larger than class 0, and both were much larger than class 2. To balance the dataset, the total number of detections in class 0 was used to randomly undersample class 1, ensuring equal representation between the two classes. Class 2, being much smaller, was left unchanged.
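The balancing step can be sketched as a random undersampling of the majority class; the function name and interface are illustrative assumptions.

```python
import numpy as np

def balance_by_undersampling(X, y, majority, minority, rng=None):
    """Randomly undersample `majority` down to the size of `minority` (sketch).

    All samples from classes other than `majority` are kept unchanged.
    """
    rng = rng or np.random.default_rng(0)
    maj_idx = np.flatnonzero(y == majority)
    min_idx = np.flatnonzero(y == minority)
    keep_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)
    keep = np.concatenate([keep_maj, np.flatnonzero(y != majority)])
    return X[keep], y[keep]

# Example mirroring the paper's imbalance: class 1 ten times class 0.
y = np.array([1] * 100 + [0] * 10 + [2] * 5)
X = np.arange(len(y), dtype=float)[:, None]
Xb, yb = balance_by_undersampling(X, y, majority=1, minority=0)
```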
Class Precision Recall F1-Score Support
0 0.85 0.76 0.80 515953
1 0.77 0.83 0.80 515230
2 0.78 0.81 0.80 410595
Table 1: Classification Report: Training Dataset
Validation was conducted using sequences 141 to 145 of the RadarScenes dataset, chosen for their urban-environment trajectories; Table 2 shows the results of this testing phase. Class 1 is the static environment, while class 0 corresponds to vehicles. The F1-scores for classes 0 and 1 were 0.70 and 0.71, respectively, indicating that the SVM performs well. The testing was completed with balanced data between the two major classes so that the SVM model’s performance could be accurately gauged: maintaining a balanced test set allows the model’s true capability to distinguish vehicles from static environments in complex urban settings to be evaluated. If the raw, unbalanced dataset were used, the overlap in radar cross-section values between the two classes would lead to over-classification of vehicles and a bias toward static detections.
Class Precision Recall F1-Score Support
0 0.71 0.68 0.70 810337
1 0.73 0.69 0.71 810337
2 0.28 0.40 0.33 175896
Table 2: Classification Report: Model Testing
6. Discussion
Two main errors occur when using an SVM to classify raw radar point clouds: over-classification of vehicle points and a bias toward static points. The overlap in radar cross-section between the vehicle and static environment classes is a significant cause of both. The over-classification of vehicles, even when points represent static environments, results from training on a balanced dataset: when the raw, imbalanced radar point cloud is used, the model assumes the same distribution as the balanced training data, leading to misclassification. Similarly, clusters of vehicles are often misclassified due to the overwhelming presence of static points, further reinforcing the model’s bias toward static environments.
Training with an unbalanced dataset causes the model to become biased toward the majority class, leading to the misclassification of minority-class objects. To prevent this bias, the number of vehicle and static environment points must be balanced during training, which helps create a more robust classification model. Additionally, tuning parameters such as the regularization factor and the kernel function can improve performance; adjusting these parameters reduces the model’s sensitivity to overlapping features, enhancing its accuracy in complex urban environments.
Instead of relying on the SVM to classify all points within the trajectory, a secondary geometric filter can be used to identify larger clusters, as these clusters contribute the most errors in map registration. By reducing the number of points passed to the SVM, the classifier becomes more effective at differentiating between vehicles and static environments. A threshold can then be set to decide whether a cluster is classified as a vehicle or a static object; if a cluster is classified as a vehicle, it can be removed from the point cloud, reducing noise and improving the accuracy of map registration.
This method optimizes the SVM’s performance by focusing on the most relevant points and reducing computational complexity while ensuring that large, problematic clusters are addressed effectively.
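The cluster-level decision described above can be sketched as a majority vote over per-point SVM predictions; the function, the 0.5 vote threshold, and the class encoding (0 = vehicle) are assumptions for illustration.

```python
import numpy as np

def classify_clusters(labels, point_preds, vehicle_class=0, vote_threshold=0.5):
    """Flag whole clusters as vehicles by majority vote (illustrative sketch).

    `labels` are DBSCAN cluster ids (-1 = noise); `point_preds` are per-point
    SVM predictions. A cluster is removed if the fraction of its points
    predicted as `vehicle_class` exceeds `vote_threshold`.
    Returns a boolean mask of points to drop from the point cloud.
    """
    remove = np.zeros(len(labels), dtype=bool)
    for lbl in set(labels):
        if lbl == -1:
            continue  # leave noise points to the geometric filter
        idx = labels == lbl
        if np.mean(point_preds[idx] == vehicle_class) > vote_threshold:
            remove[idx] = True
    return remove

# Example: cluster 0 is mostly vehicle points, cluster 1 is all static.
labels = np.array([0, 0, 0, 1, 1, -1])
preds = np.array([0, 0, 1, 1, 1, 0])
remove = classify_clusters(labels, preds)
```

Voting at the cluster level tolerates a minority of misclassified points inside a cluster, which is exactly the failure mode noted above for per-point classification.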
7. Conclusion
Using radar-based point clouds, we introduced a method to improve positioning accuracy in urban environments. We filter out dynamic objects, ghost detections, and static vehicles by applying a velocity filter, a geometric filter, and an SVM classifier. Our results demonstrate that removing irrelevant points enhances map-matching algorithms’ performance. The proposed approach optimizes radar data processing, making it more effective in urban environments. Future work will explore integrating positional updates with onboard motion sensors to improve accuracy and robustness in GNSS-denied environments.
Bibliography
Dawson, E. (2022). Integrated Remote Sensing and Map Registration System for High-precision Positioning in Covered Parking Garages. ION International Technical Meeting (ITM). Long Beach, CA.
Kamijo, Z. Z. (2020). Point Cloud Features-Based Kernel SVM for Human-Vehicle Classification in Millimeter Wave Radar. IEEE Access, 8, 26012-26021.
Kellner, D., et al. (2013). Instantaneous Ego-Motion Estimation Using Doppler Radar. 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013).
Raj, S., et al. (2022). Optimized DBSCAN with Improved Static Clutter Removal for High Resolution Automotive Radars. 19th European Radar Conference (EuRAD), pp. 1-4.
Wang, L., et al. (2021). Review of Classification Methods on Unbalanced Data Sets. IEEE Access, 64606-64628.


