Paper: SP-P2.12
Session: Speaker Adaptation
Time: Tuesday, May 18, 13:00 - 15:00
Presentation: Poster
Topic: Speech Processing: Adaptation/Normalization
Title: EIGENSPACE-BASED MLLR WITH SPEAKER ADAPTIVE TRAINING IN LARGE VOCABULARY CONVERSATIONAL SPEECH RECOGNITION
Authors: Vlasios Doumpiotis, Johns Hopkins University; Yonggang Deng, Johns Hopkins University
Abstract: In this paper, Speaker Adaptive Training (SAT), which reduces inter-speaker variability, and Eigenspace-based Maximum Likelihood Linear Regression (EigenMLLR) adaptation, which exploits prior knowledge of the test speaker's linear transforms, are combined and developed. During training, SAT generates a set of speaker-independent (SI) Gaussian parameters, along with matched speaker-dependent transforms for all speakers in the training set. A set of regression-class-dependent eigen-transforms is then derived by Singular Value Decomposition (SVD). Whereas the test speaker's linear transforms are normally obtained with MLLR during recognition, in this work they are assumed to be a linear combination of the decomposed eigen-transforms. Experimental results on large vocabulary conversational speech recognition (LVCSR) material from the Switchboard Corpus show that this strategy outperforms ML-SAT and significantly reduces the number of parameters needed (an 87% reduction is achieved), while still effectively capturing the essential variation between speakers.
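The core mechanism the abstract describes, deriving eigen-transforms from the SAT speakers' MLLR transforms via SVD, then representing a test speaker as a low-dimensional weight vector over that basis, can be sketched in NumPy. This is a minimal illustration with made-up dimensions and random stand-in transforms, not the paper's implementation; the per-regression-class handling and the maximum-likelihood estimation of the combination weights from test data are omitted, and the projection step below is only a stand-in for that ML weight estimation.

```python
import numpy as np

# Hypothetical setup: S training speakers, each with a d x (d+1) MLLR
# transform (rotation plus bias column) produced by SAT. Random matrices
# stand in for the actual SAT-trained speaker-dependent transforms.
rng = np.random.default_rng(0)
S, d = 20, 4
speaker_transforms = rng.standard_normal((S, d, d + 1))

# Vectorize each speaker's transform into one row and decompose with SVD.
X = speaker_transforms.reshape(S, -1)            # S x (d*(d+1))
U, sing_vals, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the top-k eigen-transforms (k << S); this is where the large
# parameter reduction comes from: k weights replace a full d x (d+1) matrix.
k = 3
eigen_transforms = Vt[:k]                        # k orthonormal basis vectors

# At test time only k combination weights are estimated. Here we simply
# project a speaker's transform onto the basis as a stand-in for ML
# estimation of the weights from adaptation data.
test_transform = speaker_transforms[0].reshape(-1)
weights = eigen_transforms @ test_transform      # k weights
approx = (weights @ eigen_transforms).reshape(d, d + 1)
```

With these toy numbers, the test speaker is described by k = 3 weights rather than the 20 values of a full 4 x 5 transform, mirroring the paper's reported parameter reduction in spirit.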