CrossRay3D: Geometry and Distribution Guidance for Efficient Multimodal 3D Detection

H. X. Yang et al.

IEEE Transactions on Intelligent Transportation Systems · 2026 · article · https://doi.org/10.1109/tits.2026.3651273

Abstract

The sparse cross-modality detector offers more advantages than its Bird’s-Eye-View (BEV) counterpart, particularly in adaptability to downstream tasks and in computational cost. However, existing sparse detectors overlook the quality of token representation, leaving foreground quality sub-optimal and performance limited. In this paper, we identify preserved geometric structure and class distribution as the keys to improving sparse-detector performance, and propose a Sparse Selector (SS). The core modules of SS are Ray-Aware Supervision (RAS), which preserves rich geometric information during the training stage, and Class-Balanced Supervision, which adaptively reweights the salience of class semantics so that tokens associated with small objects are retained during token sampling. SS thereby outperforms other sparse multi-modal detectors in token representation. Additionally, we design Ray Positional Encoding (Ray PE) to address the distribution differences between the LiDAR and image modalities. Finally, we integrate these modules into an end-to-end sparse multi-modality detector, dubbed CrossRay3D. Experiments show that, on the challenging nuScenes benchmark, CrossRay3D achieves state-of-the-art performance with 72.4% mAP and 74.7% NDS, while running $1.84\times$ faster than other leading methods. Moreover, CrossRay3D demonstrates strong robustness even when LiDAR or camera data are partially or entirely missing. The code is available at https://github.com/xuehaipiaoxiang/CrossRay3D
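The abstract's Class-Balanced Supervision reweights token salience so that rare, small-object classes survive token sampling. The paper does not give the exact formula here, so the following is only a minimal sketch under an assumed inverse-class-frequency weighting; the function name, signature, and weighting scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def class_balanced_token_sampling(salience, class_ids, keep_ratio=0.5):
    """Hypothetical sketch of class-balanced token sampling.

    salience : (N,) base foreground score per token
    class_ids: (N,) integer class index predicted for each token
    Returns the sorted indices of the kept tokens.
    """
    n = salience.shape[0]
    # Assumed reweighting: boost tokens of infrequent (e.g. small-object)
    # classes by the inverse of their class frequency in this sample.
    counts = np.bincount(class_ids, minlength=class_ids.max() + 1)
    weights = counts.sum() / (counts[class_ids] * len(np.unique(class_ids)))
    reweighted = salience * weights
    # Keep the top-k tokens under the reweighted salience.
    k = max(1, int(n * keep_ratio))
    keep_idx = np.argsort(-reweighted)[:k]
    return np.sort(keep_idx)
```

With this weighting, a lone token from a rare class can outrank many slightly higher-scoring tokens of a dominant class, which is the retention behavior the abstract attributes to Class-Balanced Supervision.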



Cite this paper

https://doi.org/10.1109/tits.2026.3651273

Or copy a formatted citation

@article{yang2026crossray3d,
  title        = {{CrossRay3D: Geometry and Distribution Guidance for Efficient Multimodal 3D Detection}},
  author       = {Yang, H. X. and others},
  journal      = {IEEE Transactions on Intelligent Transportation Systems},
  year         = {2026},
  doi          = {10.1109/tits.2026.3651273},
}

Paste directly into BibTeX, Zotero, or your reference manager.


