LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection
IEEE International Conference on Robotics and Automation (ICRA), 2024

  • 1University of Macau
  • 2RAL, Baidu Research
  • 3University of California, Irvine

Abstract

Over the past few years, there has been remarkable progress in research on 3D point clouds, and their use in autonomous driving scenarios has become widespread. However, deep learning methods rely heavily on annotated data and often face domain generalization issues. Unlike 2D images, whose domain is usually characterized by texture information, the features derived from a 3D point cloud are affected by the distribution of the points. The lack of a 3D domain adaptation benchmark has led to the common practice of training a model on one benchmark (e.g., Waymo) and then assessing it on another dataset (e.g., KITTI). This setting conflates two distinct domain gaps, scenarios and sensors, making it difficult to analyze and evaluate a method accurately. To tackle this problem, this paper presents the LiDAR Dataset with Cross-Sensors (LiDAR-CS Dataset), which contains large-scale annotated LiDAR point clouds captured by a hybrid realistic LiDAR simulator under six groups of different sensors, all with the same corresponding scenarios. To our knowledge, the LiDAR-CS Dataset is the first dataset that addresses sensor-related domain gaps for 3D object detection in real traffic. Furthermore, we evaluate and analyze the performance of various baseline detectors on it and demonstrate its potential applications.

LiDAR-CS Dataset

Pattern-aware LiDAR Simulation

First, the real LiDAR points are normalized onto a spherical surface. Because individual scans contain missing points, statistics gathered from multiple scans are required to build the LiDAR Ray Pattern. The Ray Pattern vectors are then projected onto the spherical depth map to query depth values, from which the simulated point cloud is generated.
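To make the projection-and-query idea concrete, below is a minimal NumPy sketch. It is not the paper's implementation: the function names, bin counts, and elevation ranges are illustrative assumptions, and the multi-scan statistics step that fills missing points is omitted here for brevity.

    import numpy as np

    def points_to_spherical(points):
        """Convert Cartesian points (N, 3) to (azimuth, elevation, range)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)
        azimuth = np.arctan2(y, x)                      # [-pi, pi]
        elevation = np.arcsin(z / np.maximum(r, 1e-6))
        return azimuth, elevation, r

    def build_depth_map(points, az_bins=1800, el_bins=64,
                        el_range=(-np.radians(25), np.radians(3))):
        """Rasterize one real scan into an azimuth x elevation depth map,
        keeping the nearest return per cell; empty cells stay at 0."""
        az, el, r = points_to_spherical(points)
        ai = ((az + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
        ei = ((el - el_range[0]) / (el_range[1] - el_range[0]) * el_bins).astype(int)
        valid = (ei >= 0) & (ei < el_bins)
        depth = np.zeros((el_bins, az_bins))
        for a, e, d in zip(ai[valid], ei[valid], r[valid]):
            if depth[e, a] == 0 or d < depth[e, a]:
                depth[e, a] = d
        return depth, el_range

    def simulate_sensor(depth, el_range, beam_elevations, az_steps):
        """Resample the depth map along a target sensor's ray pattern
        (its beam elevations and horizontal resolution) to synthesize
        the point cloud that sensor would have captured."""
        el_bins, az_bins = depth.shape
        azimuths = np.linspace(-np.pi, np.pi, az_steps, endpoint=False)
        points = []
        for el in beam_elevations:
            ei = int((el - el_range[0]) / (el_range[1] - el_range[0]) * el_bins)
            if not (0 <= ei < el_bins):
                continue
            for az in azimuths:
                ai = int((az + np.pi) / (2 * np.pi) * az_bins) % az_bins
                r = depth[ei, ai]
                if r == 0:      # no return recorded in the source scan here
                    continue
                points.append([r * np.cos(el) * np.cos(az),
                               r * np.cos(el) * np.sin(az),
                               r * np.sin(el)])
        return np.asarray(points)

    # Example: re-render a dense scan as if captured by a 16-beam sensor.
    scan = np.random.rand(100000, 3) * 50 - 25          # stand-in for a real scan
    depth, el_range = build_depth_map(scan)
    beams16 = np.linspace(np.radians(-24), np.radians(2), 16)
    cloud16 = simulate_sensor(depth, el_range, beams16, az_steps=900)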


Samples

An example from the LiDAR-CS dataset. All point clouds are generated from the same scenario under different sensor patterns. The points inside the circle are zoomed in and shown in the white boxes for a better view. The point clouds are colorized by point height.


Experiment Results

(a) and (b) are LiDAR point cloud examples collected from 64-beam and 16-beam LiDAR sensors, respectively. The vehicle has been cropped and zoomed in for detailed visualization. Sub-figure (c) shows a cross-sensor evaluation in which four baseline detectors are trained on VLD-64 LiDAR data and evaluated on five different sensors in the same scenarios. The results show that the domain gaps across different sensors are significant.


Cross evaluation on the LiDAR-CS benchmark under five different LiDAR sensor patterns with five baseline detectors. "Ped." is short for "Pedestrian".
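The cross-evaluation protocol behind this table is simple: train each detector on one sensor pattern, then evaluate the frozen model on every pattern of the same scenarios. A minimal sketch follows; the detector and sensor names are illustrative, and train_detector / evaluate_map are hypothetical stand-ins for whatever training and evaluation pipeline is used.

    def train_detector(name, sensor):
        """Placeholder: train detector `name` on data from `sensor`."""
        return (name, sensor)

    def evaluate_map(model, sensor):
        """Placeholder: return mAP of `model` on `sensor`'s validation split."""
        return 0.0

    def cross_evaluate(detectors, sensors):
        """Build the detector x source-sensor x target-sensor result matrix."""
        results = {}
        for det in detectors:
            for src in sensors:
                model = train_detector(det, src)
                for tgt in sensors:
                    results[(det, src, tgt)] = evaluate_map(model, tgt)
        return results

    # Illustrative names only; substitute the benchmark's actual patterns.
    table = cross_evaluate(["PointPillars", "SECOND", "PointRCNN", "CenterPoint"],
                           ["16-beam", "32-beam", "64-beam", "128-beam", "40-beam"])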


Citation

Acknowledgement

The website template was borrowed from Michaël Gharbi and Ben Mildenhall.