Predicting Tongue Positions from Acoustics and Facial Features - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference paper, Year: 2011

Predicting Tongue Positions from Acoustics and Facial Features

Abstract

We test the hypothesis that adding information regarding the positions of electromagnetic articulograph (EMA) sensors on the lips and jaw can improve the results of a typical acoustic-to-EMA mapping system, based on support vector regression, that targets the tongue sensors. Our initial motivation is to use such a system in the context of adding a tongue animation to a talking head built on the basis of concatenating bimodal acoustic-visual units. For completeness, we also train a system that maps only jaw and lip information to tongue information.
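The paper's abstract describes an acoustic-to-EMA mapping system based on support vector regression, optionally augmented with lip and jaw sensor positions as extra input features. The implementation details are in the paper itself; as a rough illustration of the general setup only, the sketch below trains an SVR-based multi-output regressor on synthetic stand-in data (random arrays in place of real MFCC frames and EMA coordinates; all array shapes and feature counts are assumptions, not taken from the paper).

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_frames = 500

# Synthetic stand-ins (shapes are illustrative assumptions):
X_acoustic = rng.normal(size=(n_frames, 13))  # e.g. 13 cepstral coefficients per frame
X_facial = rng.normal(size=(n_frames, 6))     # lip/jaw EMA sensor coordinates
Y_tongue = rng.normal(size=(n_frames, 6))     # target tongue EMA sensor coordinates

# Concatenate acoustic and facial features, as in the augmented condition
X = np.hstack([X_acoustic, X_facial])

# One RBF-kernel SVR per output dimension, with feature standardization
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf")))
model.fit(X, Y_tongue)

pred = model.predict(X[:10])
print(pred.shape)  # (10, 6)
```

The facial-only condition mentioned in the abstract would correspond to fitting the same model on `X_facial` alone instead of the concatenated `X`.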
Main file: paper.pdf (321.14 Ko). Origin: files produced by the author(s).

Dates and versions

inria-00602412 , version 1 (13-05-2016)

License

Attribution

Identifiers

  • HAL Id : inria-00602412 , version 1

Cite

Asterios Toutios, Slim Ouni. Predicting Tongue Positions from Acoustics and Facial Features. 12th Annual Conference of the International Speech Communication Association - Interspeech 2011, Aug 2011, Florence, Italy. ⟨inria-00602412⟩
376 views
130 downloads
