Unsupervised Any-to-Many Audiovisual Synthesis via Exemplar Autoencoders


Publications: arXiv

Abstract:

We present an unsupervised approach that converts the speech of any individual into the voice of an arbitrarily large set of target speakers: one can stand in front of a microphone and make their favorite celebrity say the same words. Our approach builds on simple autoencoders, which project out-of-sample data onto the distribution of the training set (a property motivated by PCA and linear autoencoders). We use an exemplar autoencoder to learn the voice and specific style (emotion and ambiance) of a target speaker. In contrast to existing methods, our approach scales to an arbitrarily large number of speakers with little additional training time, using only two to three minutes of audio per speaker. We also demonstrate the usefulness of our approach for generating video from audio signals and vice versa. Synthesized examples are available on our project webpage: https://dunbar12138.github.io/projectpage/Audiovisual/
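The PCA motivation in the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' code): a linear autoencoder is equivalent to PCA, and its encode-decode pass projects any input, including out-of-sample data, onto the subspace spanned by the training set. This is the projection property that exemplar autoencoders build on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data confined to a 2-D subspace of R^5, standing in for
# "samples from the target speaker's distribution".
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(200, 2)) @ basis

# "Train" the linear autoencoder in closed form via SVD (i.e. PCA):
# the top-2 principal components serve as tied encoder/decoder weights.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]

def autoencode(x):
    """Encode then decode: project x onto the training subspace."""
    code = (x - mean) @ components.T   # encoder
    return code @ components + mean    # decoder

# An out-of-sample input (think: a new speaker's utterance) is pulled
# onto the training distribution's subspace by the autoencoder.
x = rng.normal(size=5)
recon = autoencode(x)

# The reconstruction is a fixed point of the autoencoder, while the
# original out-of-sample point is genuinely moved.
assert np.allclose(autoencode(recon), recon)
assert np.linalg.norm(recon - x) > 1e-6
```

The nonlinear exemplar autoencoders in the paper trade this closed-form projection for a learned one, but the intuition is the same: decoding can only produce outputs near the (single-speaker) training distribution.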

Code Links

Languages: Python
