Access Restriction Open
Author Gunawardana, Asela ♦ Byrne, William
Source CiteSeerX
Content type Text
File Format PDF
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword ISCA Archive ♦ Convergence ♦ DLLR Rapid Speaker Adaptation Algorithm ♦ Additional Hidden Data ♦ Likelihood Linear Regression ♦ Observed Adaptation Data ♦ Alternative Derivation ♦ Censored EM Formulation ♦ MLLR Adaptation ♦ Speaker Adaptation Technique ♦ Insufficient Data ♦ Small Amount ♦ MLLR-type Adaptation Transformation ♦ MAP Estimation ♦ Maximum Likelihood Solution ♦ Additional Adaptation Data
Abstract Discounted Likelihood Linear Regression (DLLR) is a speaker adaptation technique for cases where there is insufficient data for MLLR adaptation. Here, we provide an alternative derivation of DLLR using a censored EM formulation which postulates additional adaptation data that is hidden. This derivation shows that DLLR, if allowed to converge, provides maximum likelihood solutions. Thus the robustness of DLLR to small amounts of data is obtained by slowing down the convergence of the algorithm and by allowing termination of the algorithm before overtraining occurs. We then show that discounting the observed adaptation data by postulating additional hidden data can also be extended to MAP estimation of MLLR-type adaptation transformations.
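The core idea in the abstract — discounting sparse adaptation data by postulating hidden observations generated at the unadapted model — can be illustrated with a deliberately simplified sketch. The code below is my own 1-D construction, not the paper's algorithm: it fits an MLLR-style affine mean transform mu' = a*mu + b by least squares (assuming a known alignment of each observation to its Gaussian), and adds `tau` pseudo-observations at each original mean, which biases the estimate toward the identity transform (a, b) = (1, 0) when real data is scarce.

```python
# Illustrative 1-D sketch (assumed setup, not the paper's exact DLLR update):
# affine mean adaptation mu' = a*mu + b, regularized toward the identity
# transform by tau hidden pseudo-observations o = mu per Gaussian mean.

def adapt_means(obs, tau=0.0):
    """obs: list of (o, mu) pairs, each an observation and the mean of the
    Gaussian it is aligned to (alignment assumed known for simplicity).
    Returns (a, b) minimizing sum (o - a*mu - b)^2 over real and hidden data."""
    # Normal-equation accumulators for the 2x2 least-squares system.
    Saa = Sab = Sbb = Sao = Sbo = 0.0
    for o, mu in obs:
        Saa += mu * mu; Sab += mu; Sbb += 1.0
        Sao += mu * o;  Sbo += o
    # Hidden data: tau pseudo-observations o = mu for each distinct mean.
    # These are perfectly explained by a = 1, b = 0, so large tau pulls
    # the solution toward leaving the model unadapted.
    for mu in {m for _, m in obs}:
        Saa += tau * mu * mu; Sab += tau * mu; Sbb += tau
        Sao += tau * mu * mu; Sbo += tau * mu
    det = Saa * Sbb - Sab * Sab
    a = (Sao * Sbb - Sbo * Sab) / det
    b = (Saa * Sbo - Sab * Sao) / det
    return a, b


# With tau = 0 the two points below are fit exactly (a = 1, b = 0.5);
# increasing tau shrinks the shift b toward 0, i.e. toward no adaptation.
a0, b0 = adapt_means([(1.5, 1.0), (2.5, 2.0)], tau=0.0)
a1, b1 = adapt_means([(1.5, 1.0), (2.5, 2.0)], tau=10.0)
```

Terminating the fit early, or keeping `tau` large, plays the same robustness role the abstract attributes to slowed convergence: with little observed data, the postulated hidden data dominates and the transform stays close to the identity.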
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study