Object3DD Challenge 2025

Important Dates

Challenge Overview

Welcome to the 2025 Object3DD Challenge, a premier international competition focused on advancing traffic scene understanding through 3D object detection. This challenge is part of the 13th International Conference on Mobile Mapping Technology (MMT 2025) and aims to bring together researchers and practitioners to tackle fundamental challenges in object perception in autonomous driving scenarios.

The significance of this research domain is multifaceted:

  1. Advancing autonomous vehicle perception systems through robust 3D object perception.
  2. Enabling knowledge transfer across different LiDAR sensing mechanisms to enhance the generalizability of autonomous driving models.
  3. Developing multimodal collaborative perception systems to improve the safety of autonomous driving.

With the rapid evolution of 3D sensors and the widespread adoption of intelligent vehicles, there is an urgent need for frameworks for cross-sensor LiDAR perception and multi-agent collaborative object perception. In this challenge, we welcome submissions of efficient domain adaptation methods and multimodal collaborative perception approaches. The workshop provides a structured platform for disseminating algorithmic innovations, methodological advances, and empirical findings in this rapidly evolving field. The challenge is open to students, teachers, and researchers in relevant fields.

Challenge Tracks

The Object3DD Challenge 2025 comprises two tracks:

Track 1: Cross-Mechanism Domain Adaptation for 3D Object Detection (CMD)

Dataset: CMD: A Cross Mechanism Domain Adaptation Dataset for 3D Object Detection

  • Introduction: https://github.com/im-djh/CMD/blob/master/docs/competitiontrack.md
  • Challenge: https://www.codabench.org/competitions/7749/
  • Data Description: The CMD dataset comprises three well-synchronized and precisely calibrated LiDAR mechanisms (128-beam mechanical, 32-beam mechanical, and solid/semi-solid-state), each capturing 10,000 frames (50 sequences of 20 s each at 10 Hz). The data span a rich variety of environments, including urban, suburban, rural, highway, bridge, tunnel, and campus settings, under five illumination conditions ranging from bright daylight to dusk.
  • Tasks: Participants must train detectors on point clouds from the 128-beam or 32-beam mechanical LiDAR and, without any target-domain labels, generalize them to a hidden solid-state LiDAR test set for cross-mechanism domain adaptation 3D object detection.
  • Evaluation Metrics: Mean Average Precision (mAP) over four classes (Car, Truck, Pedestrian, Cyclist) is the ranking metric; per-class APs are reported as supplementary scores (IoU = 0.5 for Car/Truck, 0.25 for Pedestrian/Cyclist). A minimal evaluation sketch follows below.
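
For concreteness, the sketch below shows one plausible way the class-specific IoU thresholds could enter the final mAP score. It is not the official evaluation code: it assumes axis-aligned bird's-eye-view (BEV) IoU rather than rotated 3D IoU, greedy score-ordered matching, and all-point interpolated AP. The official server evaluates rotated 3D boxes, so treat this purely as an illustration of the scoring structure.

```python
# Unofficial sketch of the Track 1 ranking metric. Assumptions (not from the
# organizers): axis-aligned BEV IoU instead of rotated 3D IoU, greedy
# score-ordered matching, and all-point interpolated AP.
import numpy as np

IOU_THRESHOLDS = {"Car": 0.5, "Truck": 0.5, "Pedestrian": 0.25, "Cyclist": 0.25}

def bev_iou(a, b):
    """IoU of two axis-aligned BEV boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / max(union, 1e-9)

def average_precision(dets, gts, iou_thr):
    """dets: list of (score, box); gts: list of boxes; one class."""
    if not dets or not gts:
        return 0.0
    dets = sorted(dets, key=lambda d: -d[0])    # highest score first
    matched = [False] * len(gts)
    tp = np.zeros(len(dets))
    for i, (_, box) in enumerate(dets):
        ious = [bev_iou(box, g) for g in gts]
        j = int(np.argmax(ious))                # best-overlapping ground truth
        if ious[j] >= iou_thr and not matched[j]:
            tp[i], matched[j] = 1.0, True       # true positive
    cum_tp = np.cumsum(tp)
    recall = cum_tp / len(gts)
    precision = cum_tp / np.arange(1, len(dets) + 1)
    envelope = np.maximum.accumulate(precision[::-1])[::-1]  # monotone envelope
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - prev_recall) * envelope))  # all-point AP

def challenge_map(dets_by_class, gts_by_class):
    """Mean AP over the four classes, each at its class-specific IoU threshold."""
    aps = {c: average_precision(dets_by_class.get(c, []),
                                gts_by_class.get(c, []),
                                IOU_THRESHOLDS[c]) for c in IOU_THRESHOLDS}
    return aps, float(np.mean(list(aps.values())))
```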

Track 2: LiDAR-4D Radar Fusion for Cooperative 3D Object Detection

Dataset: V2X-R

  • Introduction: https://github.com/ylwhxht/V2X-R/tree/Challenge2025
  • Challenge: https://www.codabench.org/competitions/7754/
  • Data Description: The dataset covers a large number of simulated urban roads and contains 12,079 V2X (Vehicle-to-Everything) scenarios, divided into 8,084 training, 829 validation, and 3,166 testing scenarios. In total, the scenarios include 37,727 frames of LiDAR and 4D millimeter-wave radar point clouds, as well as 170,859 annotated 3D vehicle bounding boxes. In each V2X scenario, the number of interconnected agents (connected vehicles and infrastructure) ranges from a minimum of 2 to a maximum of 5.
  • Tasks: Participants are required to train a 3D object detector using cooperative perception data from multiple modalities, namely LiDAR point clouds and 4D radar point clouds. The detector should identify objects of interest in a cooperative perception scenario and output their 8-dimensional attributes: length, width, height, 3D coordinates, orientation angle, and category (see the sketch after this list).
  • Evaluation Metrics: The overall Average Precision (AP) at an Intersection over Union (IoU) threshold of 0.7 is the main ranking metric; AP at different distances (0-30 m, 30-50 m, 50 m-Inf) will also be evaluated. The evaluated class is 'vehicle'. Results are evaluated within the field of view (FOV) of the ego vehicle's camera and within the range x ∈ [0, 140] m, y ∈ [-40, 40] m. The broadcast range of connected agents is 70 meters.
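
As a concrete illustration of the required output, here is a minimal, unofficial sketch of the 8-dimensional detection attributes and the evaluation-range filter described above. The `Detection` container, field names, and units are illustrative assumptions; the camera-FOV test and the exact submission format are defined by the organizers' toolkit.

```python
# Hedged sketch, not the official toolkit: a plausible container for the
# 8-dimensional detection attributes listed above, plus the evaluation-range
# filter (x in [0, 140] m, y in [-40, 40] m in the ego frame).
from dataclasses import dataclass

@dataclass
class Detection:
    x: float       # 3D center, meters, ego-vehicle frame
    y: float
    z: float
    length: float  # box extent along heading, meters
    width: float
    height: float
    yaw: float     # orientation angle, radians
    category: str  # 'vehicle' is the only evaluated class

def in_eval_range(det: Detection) -> bool:
    """Keep only boxes inside the evaluated region of the ego frame."""
    return 0.0 <= det.x <= 140.0 and -40.0 <= det.y <= 40.0

dets = [Detection(12.3, -4.1, -0.9, 4.5, 1.9, 1.6, 0.05, "vehicle"),
        Detection(150.2, 3.0, -1.0, 4.2, 1.8, 1.5, 1.60, "vehicle")]
kept = [d for d in dets if in_eval_range(d) and d.category == "vehicle"]
print(len(kept))  # 1: the second box lies beyond x = 140 m
```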

Prizes

We thank our sponsor for providing the prizes. Prizes will be awarded to the top three individuals or teams on each track's leaderboard who provide valid submissions.

Workshop Format

The workshop will be hybrid, accommodating both in-person attendance and virtual engagement to maximize accessibility and international participation. The program is structured as follows.

Schedule

Time          Event
14:00-14:05   Welcome Introduction
14:05-14:35   Invited Talk (Talk 1)
14:35-15:05   Invited Talk (Talk 2)
15:05-15:35   Coffee Break
15:35-15:50   Awarding Ceremony
15:50-16:10   Winner Talk (Track 1) + Q&A
16:10-16:40   Winner Talk (Track 2) + Q&A
16:40-17:20   Panel Discussion
17:20-17:30   Closing Remarks

Organizing Committee

Primary Organizer

Chenglu Wen: Professor in the Department of Artificial Intelligence at Xiamen University. Her research focuses on 3D vision, intelligent point cloud processing, and multimodal fusion perception.

Co-Organizers

Qiming Xia: Ph.D. Student, ASC, Xiamen University. His research interests lie in the field of point cloud processing and intelligent transportation systems.

Xun Huang: Ph.D. Student, ASC, Xiamen University and Beijing Zhongguancun Institute. His research interests lie in the field of point cloud processing and intelligent transportation systems.

Wei Ye: M.S. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.

Huanjia Zhang: M.S. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.

Shijia Zhao: Ph.D. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.

Confirmed Speakers

Hai Wu: Assistant Researcher, Pengcheng Lab. Topic: Research on High-Precision 3D Object Detection Algorithms.

Broader Impact

The intellectual contributions and methodological frameworks developed through this challenge have the potential to catalyze significant technological and societal advances across multiple domains.

Ethical Considerations

The datasets utilized in this challenge have been collected and annotated in strict accordance with applicable privacy legislation and regulatory frameworks. All personally identifiable information has been methodically anonymized to ensure the protection of individual privacy rights and community interests. The organizing committee will implement rigorous protocols to ensure that the dataset utilization remains exclusively within the intended research domain of point cloud-based traffic scene understanding.

For more information, please contact the organizing committee at xiaqiming@stu.xmu.edu.cn.