Paper Title


HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration

Paper Authors

William Andrew Simon, Una Pale, Tomas Teijeiro, David Atienza

Abstract


HyperDimensional Computing (HDC) as a machine learning paradigm is highly interesting for applications involving continuous, semi-supervised learning for long-term monitoring. However, its accuracy is not yet on par with other Machine Learning (ML) approaches. Frameworks enabling fast design space exploration to find practical algorithms are necessary to make HD computing competitive with other ML techniques. To this end, we introduce HDTorch, an open-source, PyTorch-based HDC library with CUDA extensions for hypervector operations. We demonstrate HDTorch's utility by analyzing four HDC benchmark datasets in terms of accuracy, runtime, and memory consumption, utilizing both classical and online HD training methodologies. We demonstrate average (training)/inference speedups of (111x/68x)/87x for classical/online HD, respectively. Moreover, we analyze the effects of varying hyperparameters on runtime and accuracy. Finally, we demonstrate how HDTorch enables exploration of HDC strategies applied to large, real-world datasets. We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database. Results show that the typical approach of training on a subset of the data does not necessarily generalize to the entire dataset, an important factor when developing future HD models for medical wearable devices.
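As a rough illustration of the hypervector operations a library like HDTorch accelerates, below is a minimal, dependency-free Python sketch of the core HDC primitives: binding, bundling, and similarity over bipolar (+1/-1) hypervectors. The function names and the bipolar encoding are illustrative assumptions for this sketch, not HDTorch's actual API; HDTorch implements such operations as batched PyTorch/CUDA kernels.

```python
import random

def random_hv(d, seed=None):
    """Generate a random bipolar (+1/-1) hypervector of dimension d."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(d)]

def bind(a, b):
    """Bind two hypervectors (elementwise multiply).

    The result is nearly orthogonal to both inputs, which is what
    makes binding useful for associating, e.g., a feature with a value.
    """
    return [x * y for x, y in zip(a, b)]

def bundle(*hvs):
    """Bundle hypervectors by coordinate-wise majority vote.

    The result stays similar to each input, so bundling the encoded
    samples of one class yields that class's prototype vector.
    """
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    """Normalized dot product in [-1, 1]; near 0 for unrelated vectors."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

# High dimensionality is what makes random hypervectors quasi-orthogonal.
d = 10000
a, b, c = random_hv(d, 0), random_hv(d, 1), random_hv(d, 2)
proto = bundle(a, b, c)  # a toy "class prototype" from three samples

assert similarity(proto, a) > 0.3          # prototype stays close to its inputs
assert abs(similarity(a, b)) < 0.1         # random hypervectors are near-orthogonal
assert abs(similarity(bind(a, b), a)) < 0.1  # binding decorrelates from its inputs
```

In classical HD training, inference is then just a nearest-prototype lookup: encode a query sample and pick the class whose prototype has the highest similarity. The speedups reported in the abstract come from running these elementwise and reduction operations as batched GPU kernels rather than one hypervector at a time.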
