A Comparative Re-Assessment of Feature Extractors for Deep Speaker Embeddings

Conference paper, 2020

Abstract

Modern automatic speaker verification relies largely on deep neural networks (DNNs) trained on mel-frequency cepstral coefficient (MFCC) features. While there are alternative feature extraction methods based on phase, prosody, and long-term temporal operations, they have not been extensively studied with DNN-based methods. We aim to fill this gap by providing an extensive re-assessment of 14 feature extractors on the VoxCeleb and SITW datasets. Our findings reveal that features equipped with techniques such as spectral centroids, the group delay function, and integrated noise suppression provide promising alternatives to MFCCs for deep speaker embedding extraction. Experimental results demonstrate up to 16.3% (VoxCeleb) and 25.1% (SITW) relative reduction in equal error rate (EER) over the baseline.
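For context, MFCCs are the de facto front-end for deep speaker embedding extractors. The sketch below shows one common way to compute utterance-level MFCC features with librosa; the window length, frame shift, and number of coefficients are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal MFCC front-end sketch (assumed parameters, not the paper's exact setup).
import librosa
import numpy as np

def extract_mfcc(wav_path, sr=16000, n_mfcc=30):
    """Return an (n_frames, n_mfcc) matrix of MFCC features."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(
        y=y,
        sr=sr,
        n_mfcc=n_mfcc,
        n_fft=int(0.025 * sr),       # 25 ms analysis window (assumption)
        hop_length=int(0.010 * sr),  # 10 ms frame shift (assumption)
    )
    # Cepstral mean normalization over the utterance, a common ASV preprocessing step.
    mfcc = mfcc - mfcc.mean(axis=1, keepdims=True)
    return mfcc.T.astype(np.float32)
```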
Main file: xuechen_interspeech2020.pdf (344.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02909105, version 1 (29-07-2020)

Identifiers

  • HAL Id: hal-02909105, version 1

Cite

Xuechen Liu, Md Sahidullah, Tomi Kinnunen. A Comparative Re-Assessment of Feature Extractors for Deep Speaker Embeddings. INTERSPEECH 2020, Oct 2020, Shanghai, China. ⟨hal-02909105⟩
