INRIA - Institut National de Recherche en Informatique et en Automatique
Preprint / working paper, 2019

Demucs: Deep Extractor for Music Sources with extra unlabeled data remixed

Abstract

We study the problem of source separation for music using deep learning with four known sources: drums, bass, vocals and other accompaniments. State-of-the-art approaches predict soft masks over mixture spectrograms, while methods working directly on the waveform lag behind, as measured on the standard MusDB benchmark. Our contribution is twofold. (i) We introduce a simple convolutional and recurrent model that outperforms the state-of-the-art waveform model, Wave-U-Net, by 1.6 points of SDR (signal-to-distortion ratio). (ii) We propose a new scheme to leverage unlabeled music. We train a first model to extract parts of unlabeled tracks in which at least one source is silent, for instance parts without bass. We remix such an extract with a bass line taken from the supervised dataset to form a new weakly supervised training example. Combining our architecture and scheme, we show that waveform methods can play in the same ballpark as spectrogram ones.
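The remixing scheme in (ii) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `silent_segments` stands in for the first model that detects where a source is silent (the activity curve, threshold, and minimum length are assumed for the example), and `remix_example` forms the weakly supervised pair by adding a supervised bass stem to a bass-free extract, so the added stem is the ground-truth bass of the new mixture.

```python
import numpy as np

def silent_segments(activity, frame_rate, min_len_s=5.0, threshold=0.01):
    """Return (start, end) frame indices of stretches where a source's
    estimated activity stays below `threshold` for at least `min_len_s`.
    `activity` is a per-frame energy/presence estimate (illustrative)."""
    quiet = activity < threshold
    segments, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i                       # a quiet stretch begins
        elif not q and start is not None:
            if (i - start) / frame_rate >= min_len_s:
                segments.append((start, i)) # long enough: keep it
            start = None
    if start is not None and (len(quiet) - start) / frame_rate >= min_len_s:
        segments.append((start, len(quiet)))
    return segments

def remix_example(unlabeled_extract, bass_stem):
    """Form a weakly supervised example: the unlabeled extract is assumed
    bass-free, so `bass_stem` is the exact bass target of the new mix."""
    n = min(len(unlabeled_extract), len(bass_stem))
    new_mix = unlabeled_extract[:n] + bass_stem[:n]
    return new_mix, bass_stem[:n]           # (network input, bass target)
```

The key point is that no full set of stems is needed for the unlabeled track: knowing that one source is absent is enough to create a training pair for that source.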
Main file: demucs_preprint.pdf (518.05 KB)
Origin: files produced by the author(s)

Dates and versions

hal-02277338, version 1 (03-09-2019)

Identifiers

Cite

Alexandre Défossez, Nicolas Usunier, Léon Bottou, Francis Bach. Demucs: Deep Extractor for Music Sources with extra unlabeled data remixed. 2019. ⟨hal-02277338⟩