LidarDM: Generative LiDAR Simulation in a Generated World

1University of Illinois Urbana-Champaign, 2NVIDIA
* Equal Contribution
Accepted to ICRA 2025

LidarDM can generate LiDAR videos that are realistic, layout-conditioned, physically plausible, diverse, and temporally coherent, as illustrated by two diverse videos generated from a single map condition.

Abstract

We present LidarDM, a novel LiDAR generative model capable of producing realistic, layout-aware, physically plausible, and temporally coherent LiDAR videos. LidarDM stands out with two unprecedented capabilities in LiDAR generative modeling: (i) LiDAR generation guided by driving scenarios, offering significant potential for autonomous driving simulations, and (ii) 4D LiDAR point cloud generation, enabling the creation of realistic and temporally coherent sequences. At the heart of our model is a novel integrated 4D world generation framework. Specifically, we employ latent diffusion models to generate the 3D scene, combine it with dynamic actors to form the underlying 4D world, and subsequently produce realistic sensory observations within this virtual environment. Our experiments indicate that our approach outperforms competing algorithms in realism, temporal coherency, and layout consistency. We additionally show that LidarDM can be used as a generative world model simulator for training and testing perception models.
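At a high level, the pipeline in the abstract can be summarized in the Python sketch below. This is a minimal illustration only: generate_static_scene, place_actors, and raycast_lidar are hypothetical stand-ins for LidarDM's learned components, not the released API, and the geometry they produce here is dummy data.

import numpy as np

def generate_static_scene(layout: np.ndarray) -> np.ndarray:
    """Stand-in for the latent diffusion model: map a BEV layout
    to static scene geometry (here: a dummy point set)."""
    rng = np.random.default_rng(0)
    return rng.uniform(-50, 50, size=(10_000, 3))

def place_actors(num_actors: int, num_frames: int) -> np.ndarray:
    """Stand-in for actor placement: per-frame actor positions
    moving along straight trajectories."""
    starts = np.random.default_rng(1).uniform(-20, 20, size=(num_actors, 3))
    vel = np.array([1.0, 0.0, 0.0])        # 1 m per frame along +x
    t = np.arange(num_frames)[:, None, None]
    return starts[None] + t * vel           # (frames, actors, 3)

def raycast_lidar(scene: np.ndarray, actors: np.ndarray,
                  sensor_pose: np.ndarray) -> np.ndarray:
    """Stand-in for sensor simulation: keep world points in range."""
    world = np.concatenate([scene, actors], axis=0)
    dist = np.linalg.norm(world - sensor_pose, axis=1)
    return world[dist < 40.0]               # points within a 40 m range

# Compose the 4D world once, then render one LiDAR frame per timestep.
layout = np.zeros((256, 256))               # coarse BEV map condition
scene = generate_static_scene(layout)       # one diffusion sample = static world
actor_traj = place_actors(num_actors=5, num_frames=10)

frames = []
for t in range(actor_traj.shape[0]):
    ego = np.array([0.5 * t, 0.0, 0.0])     # ego-vehicle moves forward
    frames.append(raycast_lidar(scene, actor_traj[t], ego))

Because the static scene is sampled once and only the actors and ego pose evolve between frames, consecutive LiDAR frames stay temporally coherent, which is the property the abstract emphasizes.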

Method Video

Play with sound.

Competitive Single-Frame LiDAR Generation

LidarDM can be run unconditionally to generate single-frame LiDAR readings (KITTI-360 samples shown below).
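For intuition, unconditional generation amounts to running a standard reverse-diffusion (DDPM-style) sampling loop in latent space. The sketch below shows the generic ancestral sampling update; the tiny eps_theta function and the latent shape are made-up placeholders for LidarDM's trained denoising network, not the actual model.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_theta(x_t: np.ndarray, t: int) -> np.ndarray:
    """Dummy noise predictor; in LidarDM this is the trained network."""
    return 0.1 * x_t                         # placeholder

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 32, 32))      # made-up latent shape

# Ancestral DDPM update:
# x_{t-1} = (x_t - (1-a_t)/sqrt(1-abar_t) * eps) / sqrt(a_t) + sqrt(b_t) * z
for t in reversed(range(T)):
    z = rng.standard_normal(x.shape) if t > 0 else 0.0
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps_theta(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z

# x is now a "clean" latent; a decoder would map it to scene geometry,
# and raycasting that geometry yields the single LiDAR frame.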


Consistent Multi-Frame LiDAR Generation

Short Sequence Generation

LidarDM generates temporally consistent sequences of LiDAR readings.

Long Sequence Generation

LidarDM generates simulated LiDAR sensor readings for long traffic scenarios.

(Bonus) Visualizations of Agent Motions

LidarDM animates pedestrians with Mixamo motion clips to produce realistic human walking motions (see the raycasting sketch after the captions below).

Aligned map and LiDAR

Visualized agent meshes

Corresponding point cloud
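Turning posed agent meshes like those above into a point cloud is, at its core, a first-hit raycast. Below is a minimal sketch using trimesh (a real library whose ray API is used as shown); the box "world" and the sensor pattern are placeholders for LidarDM's generated scene and its actual sensor model.

import numpy as np
import trimesh

# Placeholder world: a ground plane plus one "agent" box.
ground = trimesh.creation.box(extents=(100, 100, 0.1))
agent = trimesh.creation.box(extents=(1, 1, 2))
agent.apply_translation([5.0, 0.0, 1.0])
world = trimesh.util.concatenate([ground, agent])

# Spinning-LiDAR ray pattern: an azimuth sweep times a few elevation rings.
az = np.radians(np.arange(0, 360, 2.0))
el = np.radians(np.linspace(-15, 5, 16))
az_g, el_g = np.meshgrid(az, el)
dirs = np.stack([np.cos(el_g) * np.cos(az_g),
                 np.cos(el_g) * np.sin(az_g),
                 np.sin(el_g)], axis=-1).reshape(-1, 3)
origins = np.tile([[0.0, 0.0, 1.8]], (len(dirs), 1))  # sensor 1.8 m up

# First-hit raycast: each returned location is one simulated LiDAR point.
locations, index_ray, _ = world.ray.intersects_location(
    origins, dirs, multiple_hits=False)
print(locations.shape)  # (num_returns, 3) point cloud

Re-running the cast as the agent mesh moves frame to frame gives the animated point-cloud sequences shown above.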

Applications

End-to-end Traffic Simulation

LidarDM can extend traffic simulators to provide a platform for end-to-end autonomous driving evaluation and training.
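A closed-loop evaluation built this way could look like the sketch below. All three classes (TrafficSim, LidarDMRenderer, Autonomy) are hypothetical stand-ins used to show the loop structure, not a real interface.

import numpy as np

class TrafficSim:
    """Stand-in traffic simulator: maintains ego and agent states."""
    def __init__(self):
        self.agents = np.random.default_rng(0).uniform(-30, 30, (8, 3))
        self.ego = np.zeros(3)
    def step(self, ego_action: np.ndarray):
        self.ego = self.ego + ego_action     # apply the autonomy's command
        self.agents[:, 0] += 0.5             # agents drift forward

class LidarDMRenderer:
    """Stand-in for LidarDM: layout + agent poses -> LiDAR frame."""
    def render(self, ego: np.ndarray, agents: np.ndarray) -> np.ndarray:
        rel = agents - ego
        return rel[np.linalg.norm(rel, axis=1) < 50.0]  # fake "returns"

class Autonomy:
    """Stand-in perception + planning stack under test."""
    def act(self, point_cloud: np.ndarray) -> np.ndarray:
        return np.array([0.5, 0.0, 0.0])     # drive straight, 0.5 m per step

sim, lidar, stack = TrafficSim(), LidarDMRenderer(), Autonomy()
for _ in range(100):
    cloud = lidar.render(sim.ego, sim.agents)  # simulated sensor reading
    action = stack.act(cloud)                  # autonomy consumes the LiDAR
    sim.step(action)                           # the action closes the loop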

Ego-vehicle Trajectory Manipulation

Agent Trajectory Manipulation

Out-of-Distribution Scene Generation

LidarDM is capable of generating scenarios from just a coarse layout, such as a hand-drawn map of the Champs-Élysées, which does not appear in the training set.


Out-of-Distribution Scenario Composition

LidarDM provides a flexible composition pipeline that allows self-driving autonomy evaluation in dangerous scenarios, such as animals escaping a zoo.


Inserted animal mesh


Corresponding LiDAR data

BibTeX

@misc{lidardm,
  title={LidarDM: Generative LiDAR Simulation in a Generated World},
  author={Vlas Zyrianov and Henry Che and Zhijian Liu and Shenlong Wang},
  year={2024},
  eprint={2404.02903},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}