Important Dates
- Submission Deadline: May 30, 2025, 24:00 (UTC+8)
- Results Notification: June 5, 2025
- Workshop Date: June 20, 2025
Challenge Overview
Welcome to the 2025 Object3DD Challenge, a premier international competition focused on advancing traffic scene understanding through 3D object detection. This challenge is part of the 13th International Conference on Mobile Mapping Technology (MMT 2025) and aims to bring together researchers and practitioners to tackle fundamental challenges in object perception for autonomous driving scenarios.
The significance of this research domain is multifaceted:
- Advancing autonomous vehicle perception systems through robust 3D object perception.
- Enabling knowledge transfer across different LiDAR sensing mechanisms to enhance the generalizability of autonomous driving models.
- Developing multimodal collaborative perception systems to improve the safety of autonomous driving.
With the rapid evolution of 3D sensors and the widespread adoption of intelligent vehicles, there is an urgent need to develop frameworks for cross-sensor LiDAR perception and multi-agent collaborative object perception. In this challenge, we welcome submissions of efficient domain adaptation methods and multimodal collaborative perception approaches. This workshop provides a structured platform for the dissemination of algorithmic innovations, methodological advances, and empirical findings in this rapidly evolving field. The challenge is open to students, teachers, and researchers in relevant fields.
Challenge Tracks
The Object3DD Challenge 2025 comprises two tracks:
Track 1: CMD Cross-Mechanism Domain Adaptation for 3D Object Detection
Dataset: CMD: A Cross Mechanism Domain Adaptation Dataset for 3D Object Detection
- Introduction: https://github.com/im-djh/CMD/blob/master/docs/competitiontrack.md
- Challenge: https://www.codabench.org/competitions/7749/
- Data Description: The CMD dataset comprises three well-synchronized and precisely calibrated LiDAR mechanisms (128-beam mechanical, 32-beam mechanical, and solid/semi-solid-state), each capturing 10,000 frames (50 sequences, 20 s each at 10 Hz). The data span a rich variety of environments, including urban, suburban, rural, highway, bridge, tunnel, and campus settings, under five illumination conditions ranging from bright daylight to dusk.
- Tasks: Participants must train detectors on point clouds from 128-beam or 32-beam mechanical LiDARs and, without any target-domain labels, generalize them to a hidden solid-state LiDAR test set for cross-mechanism domain adaptation in 3D object detection.
- Evaluation Metrics: Mean Average Precision (mAP) over four classes (Car, Truck, Pedestrian, Cyclist) is the main ranking metric, with per-class APs reported as supplementary scores (IoU = 0.5 for Car/Truck, 0.25 for Pedestrian/Cyclist); the aggregation is sketched below.
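To make the ranking metric concrete, here is a minimal sketch of the aggregation, assuming the per-class APs have already been produced at the stated IoU thresholds; all names below are illustrative and are not part of the official evaluation toolkit.

```python
# Minimal sketch of the Track 1 ranking metric: the main score is the mean of
# the four per-class APs, each computed at its class-specific IoU threshold.
# All names here are illustrative, not part of the official evaluation toolkit.

CLASS_IOU_THRESHOLDS = {
    "Car": 0.5,
    "Truck": 0.5,
    "Pedestrian": 0.25,
    "Cyclist": 0.25,
}

def mean_average_precision(per_class_ap):
    """Average the supplementary per-class APs into the main mAP score."""
    return sum(per_class_ap[c] for c in CLASS_IOU_THRESHOLDS) / len(CLASS_IOU_THRESHOLDS)

# Hypothetical per-class scores -> main ranking score.
example_ap = {"Car": 0.72, "Truck": 0.56, "Pedestrian": 0.41, "Cyclist": 0.47}
print(f"mAP = {mean_average_precision(example_ap):.3f}")  # mAP = 0.540
```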
Track 2: LiDAR-4D Radar Fusion for Cooperative 3D Object Detection
Dataset: V2X-R
- Introduction: https://github.com/ylwhxht/V2X-R/tree/Challenge2025
- Challenge: https://www.codabench.org/competitions/7754/
- Data Description: The dataset covers a large number of simulated urban roads and contains 12,079 V2X (Vehicle-to-Everything) scenarios, divided into 8,084 training, 829 validation, and 3,166 testing scenarios. In total, it includes 37,727 frames of LiDAR and 4D millimeter-wave radar point clouds, as well as 170,859 annotated 3D vehicle bounding boxes. In each V2X scenario, the number of interconnected agents (connected vehicles and infrastructure) ranges from 2 to 5.
- Tasks: Participants are required to train a 3D object detector using cooperative perception data from multimodal sources, namely LiDAR point clouds and 4D radar point clouds. The detector should identify objects of interest in a cooperative perception scenario and output their 8-dimensional attributes: length, width, height, 3D coordinates, orientation angle, and category (a minimal sketch of this representation follows this list).
- Evaluation Metrics: The overall Average Precision (AP) at an Intersection-over-Union (IoU) threshold of 0.7 is the main ranking metric, and AP at different distances (0-30 m, 30-50 m, 50 m-Inf) will also be evaluated (see the range sketch below). The evaluation class is 'vehicle', and results are evaluated within the field of view (FOV) of the ego vehicle's camera and within x ∈ [0, 140] m and y ∈ [-40, 40] m. The broadcast range of connected agents is 70 meters.
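As a reading aid for the 8-dimensional output described under Tasks, the following is a minimal sketch of one plausible representation; field names and ordering are assumptions, and the official submission format in the V2X-R repository is authoritative.

```python
# Illustrative sketch of the 8-dimensional box attributes listed under Tasks.
# Field names and ordering are assumptions; consult the official V2X-R
# submission format for the exact specification.
from dataclasses import dataclass

@dataclass
class Detection3D:
    x: float       # 3D center coordinates (m), e.g. in the ego frame
    y: float
    z: float
    length: float  # box extent along heading (m)
    width: float   # box extent across heading (m)
    height: float  # vertical box extent (m)
    yaw: float     # orientation angle about the vertical axis (rad)
    category: str  # object class, here always "vehicle"

det = Detection3D(x=12.3, y=-4.1, z=-0.9, length=4.5, width=1.9,
                  height=1.6, yaw=0.12, category="vehicle")
```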
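The evaluation-range constraints above can likewise be sketched. This is a sketch under stated assumptions, not the official evaluator: the camera-FOV check depends on the dataset's camera model and is omitted, and the helper names are hypothetical.

```python
# Sketch of the evaluation-range constraints: predictions count only if their
# centers fall inside x ∈ [0, 140] m and y ∈ [-40, 40] m, and AP is also
# reported per distance bucket. The camera-FOV check is dataset-specific and
# omitted here; these helper names are hypothetical.
import math

def in_eval_range(x, y):
    return 0.0 <= x <= 140.0 and -40.0 <= y <= 40.0

def distance_bucket(x, y):
    d = math.hypot(x, y)  # planar distance from the ego vehicle
    if d < 30.0:
        return "0-30m"
    if d < 50.0:
        return "30-50m"
    return "50m-Inf"
```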
Prizes
We are grateful to our sponsor for providing the prizes, which will be awarded to the top three individuals or teams on each track's leaderboard with valid submissions.
Workshop Format
The workshop will adopt a hybrid participation model, accommodating both in-person attendance and virtual engagement to maximize accessibility and international participation. The program will comprise:
- Invited keynote presentations from recognized domain experts
- Formal recognition ceremonies for competition winners
- Technical presentations from winning teams detailing their methodological approaches
- A structured panel discussion addressing emerging research directions
Schedule
| Time | Event |
|---|---|
| 14:00-14:05 | Welcome Introduction |
| 14:05-14:35 | Invited Talk 1 |
| 14:35-15:05 | Invited Talk 2 |
| 15:05-15:35 | Coffee Break |
| 15:35-15:50 | Award Ceremony |
| 15:50-16:10 | Winner Talk (Track 1) + Q&A |
| 16:10-16:40 | Winner Talk (Track 2) + Q&A |
| 16:40-17:20 | Panel Discussion |
| 17:20-17:30 | Closing Remarks |
Organizing Committee
Primary Organizer
Chenglu Wen: Professor in the Department of Artificial Intelligence at Xiamen University. Her main research focuses on 3D vision, intelligent point cloud processing, and multimodal fusion perception.
Co-Organizers
Qiming Xia: Ph.D. Student, ASC, Xiamen University. His research interests lie in the field of point cloud processing and intelligent transportation systems.
Xun Huang: Ph.D. Student, ASC, Xiamen University and Beijing Zhongguancun Institute. His research interests lie in the field of point cloud processing and intelligent transportation systems.
Wei Ye: M.S. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.
Huanjia Zhang: M.S. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.
Shijia Zhao: Ph.D. Student, ASC, Xiamen University. His research interests include 3D computer vision and its applications in intelligent transportation systems.
Confirmed Speakers
Hai Wu: Assistant Researcher, Pengcheng Lab. Topic: Research on High-Precision 3D Object Detection Algorithms.
Broader Impact
The intellectual contributions and methodological frameworks developed through this challenge have the potential to catalyze significant technological and societal advancements across multiple domains:
- Enhanced Autonomous Driving Safety and Efficiency. Improving the accuracy and robustness of 3D object detection in autonomous vehicles enhances safety by enabling better perception in diverse environments. These technologies also optimize traffic flow management through more precise traffic information, reducing congestion and improving overall transportation efficiency.
- Improved Infrastructure Management and Environmental Sustainability. Accurate detection of infrastructure damage facilitates more efficient maintenance scheduling, reducing the risk of accidents caused by deteriorating infrastructure. Additionally, optimized traffic flow management lowers energy consumption and carbon emissions, supporting environmental sustainability.
- Advancements in Multimodal Data Fusion and System Integration. New frameworks for aligning and integrating data from multiple sensors enhance the capabilities of intelligent transportation systems. These advancements support the development of more comprehensive and reliable traffic monitoring and management solutions.
Ethical Considerations
The datasets utilized in this challenge have been collected and annotated in strict accordance with applicable privacy legislation and regulatory frameworks. All personally identifiable information has been methodically anonymized to ensure the protection of individual privacy rights and community interests. The organizing committee will implement rigorous protocols to ensure that the dataset utilization remains exclusively within the intended research domain of point cloud-based traffic scene understanding.
For more information, please contact the organizing committee at xiaqiming@stu.xmu.edu.cn.