Paper Title
milliEgo: Single-chip mmWave Radar Aided Egomotion Estimation via Deep Sensor Fusion
Paper Authors
Paper Abstract
Robust and accurate trajectory estimation of mobile agents such as people and robots is a key requirement for providing spatial awareness for emerging capabilities such as augmented reality or autonomous interaction. Although currently dominated by optical techniques, e.g., visual-inertial odometry, these suffer from challenges with scene illumination or featureless surfaces. As an alternative, we propose milliEgo, a novel deep-learning approach to robust egomotion estimation that exploits the capabilities of low-cost mmWave radar. Although mmWave radar has a fundamental advantage over monocular cameras in being metric, i.e., providing absolute scale or depth, current single-chip solutions have limited and sparse imaging resolution, making existing point-cloud registration techniques brittle. We propose a new architecture that is optimized for solving this challenging pose transformation problem. Second, to robustly fuse mmWave pose estimates with additional sensors, e.g., inertial or visual sensors, we introduce a mixed-attention approach to deep fusion. Through extensive experiments, we demonstrate that our proposed system achieves 1.3% 3D error drift and generalizes well to unseen environments. We also show that the neural architecture can be made highly efficient and suitable for real-time embedded applications.
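To make the mixed-attention fusion idea concrete, below is a minimal PyTorch sketch, not the paper's exact architecture: the module name `MixedAttentionFusion`, the feature dimensions, and the specific gating scheme are illustrative assumptions. The sketch gates each modality's feature vector with masks conditioned both on itself (self attention) and on the joint cross-modal features (cross attention), then regresses a 6-DoF pose from the fused representation.

```python
# Hypothetical sketch of mixed-attention deep fusion for two sensor streams
# (e.g., mmWave radar and inertial features). Module name, dimensions, and
# gating scheme are illustrative assumptions, not the authors' exact design.
import torch
import torch.nn as nn

class MixedAttentionFusion(nn.Module):
    def __init__(self, radar_dim: int = 256, imu_dim: int = 256, out_dim: int = 6):
        super().__init__()
        joint_dim = radar_dim + imu_dim
        # Self-attention masks: each modality gates its own features.
        self.self_radar = nn.Sequential(nn.Linear(radar_dim, radar_dim), nn.Sigmoid())
        self.self_imu = nn.Sequential(nn.Linear(imu_dim, imu_dim), nn.Sigmoid())
        # Cross-attention masks: each modality is gated by the joint features.
        self.cross_radar = nn.Sequential(nn.Linear(joint_dim, radar_dim), nn.Sigmoid())
        self.cross_imu = nn.Sequential(nn.Linear(joint_dim, imu_dim), nn.Sigmoid())
        # Regress a 6-DoF pose update (3 translation + 3 rotation parameters).
        self.pose_head = nn.Sequential(
            nn.Linear(joint_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, f_radar: torch.Tensor, f_imu: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([f_radar, f_imu], dim=-1)
        # Mix self- and cross-conditioned masks before gating each modality.
        r = f_radar * self.self_radar(f_radar) * self.cross_radar(joint)
        i = f_imu * self.self_imu(f_imu) * self.cross_imu(joint)
        return self.pose_head(torch.cat([r, i], dim=-1))

# Usage: fuse one batch of per-frame feature vectors into pose estimates.
pose = MixedAttentionFusion()(torch.randn(4, 256), torch.randn(4, 256))
print(pose.shape)  # torch.Size([4, 6])
```

The design choice sketched here is that attention acts as soft, learned reweighting of each modality's channels, so an unreliable stream (e.g., sparse radar returns in a given frame) can be downweighted before fusion rather than after.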