Paper Title
Deep Multi-Scale Representation Learning with Attention for Automatic Modulation Classification
Paper Authors
Abstract
Currently, deep learning methods that stack small convolutional filters are widely used for automatic modulation classification (AMC). In this report, we find several empirical improvements by using larger kernel sizes in convolutional neural network (CNN) based AMC, which are more effective at extracting multi-scale features from raw I/Q sequence data. In addition, the Squeeze-and-Excitation (SE) mechanism significantly helps AMC networks focus on the more important features of the signal. As a result, we propose a multi-scale feature network with large kernel sizes and an SE mechanism (SE-MSFN) in this paper. SE-MSFN achieves state-of-the-art classification performance on the well-known public RADIOML 2018.01A dataset, with an average classification accuracy of 64.50% (surpassing CLDNN by 1.42%), a maximum classification accuracy of 98.5%, and an average classification accuracy of 85.53% in the lower SNR range of 0 dB to 10 dB (surpassing CLDNN by 2.85%). We also verified that ensemble learning can further improve classification performance. We hope this report can serve as a reference for developers and researchers in practical applications.
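To make the two ingredients named in the abstract concrete, below is a minimal sketch of a multi-scale 1-D convolution block followed by a Squeeze-and-Excitation block. It is written in PyTorch purely for illustration; the framework, the kernel sizes (3, 7, 15), the reduction ratio, and the class names `SEBlock` and `MultiScaleBlock` are assumptions, not the paper's actual SE-MSFN implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weights channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumed hyperparameter
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, length)
        w = x.mean(dim=-1)                   # squeeze: global average pooling over time
        w = self.fc(w).unsqueeze(-1)         # excitation: per-channel weights in (0, 1)
        return x * w                         # re-scale feature maps channel-wise

class MultiScaleBlock(nn.Module):
    """Parallel 1-D convolutions with several (including large) kernel sizes, then SE."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 7, 15)):  # kernel sizes are illustrative
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.se = SEBlock(out_ch * len(kernel_sizes))

    def forward(self, x):                    # x: (batch, 2, length) raw I/Q sequence
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.se(y)

# Example: a RADIOML 2018.01A frame has 1024 complex samples -> a 2 x 1024 I/Q tensor.
block = MultiScaleBlock(in_ch=2, out_ch=32)
features = block(torch.randn(8, 2, 1024))    # -> shape (8, 96, 1024)
```

The intent of the sketch is only to show how parallel branches with different kernel sizes capture features at multiple temporal scales, and how the SE block then lets the network emphasize the more informative channels, as described in the abstract.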