Paper title
Melody transcription via generative pre-training
Paper authors
Paper abstract
Despite the central role that melody plays in music perception, it remains an open challenge in music information retrieval to reliably detect the notes of the melody present in an arbitrary music recording. A key challenge in melody transcription is building methods which can handle broad audio containing any number of instrument ensembles and musical styles: existing strategies work well for some melody instruments or styles but not all. To confront this challenge, we leverage representations from Jukebox (Dhariwal et al., 2020), a generative model of broad music audio, thereby improving performance on melody transcription by 20% relative to conventional spectrogram features. Another obstacle in melody transcription is a lack of training data: we derive a new dataset containing 50 hours of melody transcriptions from crowdsourced annotations of broad music. The combination of generative pre-training and a new dataset for this task results in 77% stronger performance on melody transcription relative to the strongest available baseline. By pairing our new melody transcription approach with solutions for beat detection, key estimation, and chord recognition, we build Sheet Sage, a system capable of transcribing human-readable lead sheets directly from music audio. Audio examples can be found at https://chrisdonahue.com/sheetsage and code at https://github.com/chrisdonahue/sheetsage.