Paper Title
Multilingual Jointly Trained Acoustic and Written Word Embeddings
Paper Authors
Paper Abstract
Acoustic word embeddings (AWEs) are vector representations of spoken word segments. AWEs can be learned jointly with embeddings of character sequences, to generate phonetically meaningful embeddings of written words, or acoustically grounded word embeddings (AGWEs). Such embeddings have been used to improve speech retrieval, recognition, and spoken term discovery. In this work, we extend this idea to multiple low-resource languages. We jointly train an AWE model and an AGWE model, using phonetically transcribed data from multiple languages. The pre-trained models can then be used for unseen zero-resource languages, or fine-tuned on data from low-resource languages. We also investigate distinctive features, as an alternative to phone labels, to better share cross-lingual information. We test our models on word discrimination tasks for twelve languages. When trained on eleven languages and tested on the remaining unseen language, our model outperforms traditional unsupervised approaches like dynamic time warping. After fine-tuning the pre-trained models on one hour or even ten minutes of data from a new language, performance is typically much better than training on only the target-language data. We also find that phonetic supervision improves performance over character sequences, and that distinctive feature supervision is helpful in handling unseen phones in the target language.
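The abstract does not spell out the joint training objective, but a common way to tie an acoustic view and a written view of the same word together is a cross-view contrastive (triplet) loss that pulls matched acoustic/written embedding pairs together and pushes mismatched pairs apart. The following NumPy sketch illustrates that idea only; the function names, the margin value, and the hardest-negative mining strategy are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def l2norm(x):
    # Normalize each embedding to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cross_view_triplet_loss(acoustic, written, margin=0.4):
    """Illustrative cross-view triplet loss (hypothetical, not the paper's exact objective).

    acoustic[i] and written[i] are embeddings of the same word (a positive pair);
    all other rows in the batch act as negatives.
    """
    a, w = l2norm(acoustic), l2norm(written)
    sim = a @ w.T                      # pairwise cosine similarities across views
    pos = np.diag(sim)                 # similarity of each matched pair
    # Mask the diagonal, then take the hardest (most similar) negative per anchor.
    neg = np.max(sim - np.eye(len(sim)) * 1e9, axis=1)
    # Hinge: penalize when a negative comes within `margin` of the positive.
    return float(np.mean(np.maximum(0.0, margin + neg - pos)))
```

With orthonormal, perfectly matched embeddings the loss is zero, since every positive similarity is 1 and every negative is 0, leaving the hinge inactive.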