Title
Learning from the Best: Contrastive Representations Learning Across Sensor Locations for Wearable Activity Recognition
Authors
Abstract
We address the well-known wearable activity recognition problem of having to work with sensors that are non-optimal in terms of the information they provide but must be used due to wearability/usability concerns (e.g., the need to work with wrist-worn IMUs because they are embedded in most smartwatches). To mitigate this problem we propose a method that facilitates the use of information from sensors that are only present during the training process and are unavailable during the later use of the system. The method transfers information from the source sensors to the latent representation of the target sensor data through a contrastive loss that is combined with the classification loss during joint training. We evaluate the method on the well-known PAMAP2 and Opportunity benchmarks for different combinations of source and target sensors, showing average (over all activities) F1 score improvements of between 5% and 13%, with the improvement on individual activities that are particularly well suited to benefit from the additional information reaching 20% to 40%.
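The core idea of the abstract — pulling the target sensor's latent representation toward the source sensor's representation with a contrastive term while jointly training the classifier — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the InfoNCE-style contrastive formulation, the `temperature` parameter, and the weighting factor `lam` are assumptions; the paper only states that a contrastive loss is combined with the classification loss.

```python
import numpy as np

def info_nce(z_target, z_source, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: pull each target embedding toward
    the source embedding of the same time window, push it away from others.
    (Assumed formulation; the paper does not specify the exact contrastive loss.)"""
    z_t = z_target / np.linalg.norm(z_target, axis=1, keepdims=True)
    z_s = z_source / np.linalg.norm(z_source, axis=1, keepdims=True)
    logits = z_t @ z_s.T / temperature          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: (target_i, source_i) are matched windows
    return -np.mean(np.diag(log_prob))

def cross_entropy(class_logits, labels):
    """Standard classification loss on the activity labels."""
    logits = class_logits - class_logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(labels)), labels])

def joint_loss(class_logits, labels, z_target, z_source, lam=1.0):
    """Joint training objective: classification loss on the target branch plus
    a weighted contrastive alignment term (lam is a hypothetical weight)."""
    return cross_entropy(class_logits, labels) + lam * info_nce(z_target, z_source)
```

At deployment time only the target-sensor encoder and classifier are used; the source-sensor branch (and hence `z_source`) exists only during training, which is what allows the extra sensor to be dropped afterwards.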