N-best based supervised and unsupervised adaptation for native and non-native speakers in cars

P. Nguyen*, Ph. Gelin, J. C. Junqua, Jen-Tzung Chien

*Corresponding author for this work

    Research output: Contribution to journal › Conference article › peer-review



    In this paper, a new set of techniques exploiting N-best hypotheses in supervised and unsupervised adaptation is presented. These techniques combine statistics extracted from the N-best hypotheses with a weight derived from a likelihood ratio confidence measure. In the case of supervised adaptation, knowledge of the correct string is used to perform N-best based corrective adaptation. Experiments on continuous letter recognition recorded in a car environment show that weighting N-best sequences by a likelihood ratio confidence measure provides only marginal improvement over 1-best unsupervised adaptation and over N-best unsupervised adaptation with equal weighting. However, an N-best based supervised corrective adaptation method, weighting correct letters positively and incorrect letters negatively, reduced the error rate by 13% compared with supervised adaptation. The largest improvement was obtained for non-native speakers.
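The two weighting ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the likelihood-ratio normalization against the 1-best hypothesis, and the positive/negative weight values are all illustrative assumptions:

```python
import math

def nbest_confidence_weights(log_likelihoods):
    """Likelihood-ratio style confidence weights (assumed form):
    each N-best hypothesis is weighted by its likelihood relative
    to the 1-best hypothesis, so the best hypothesis gets weight 1."""
    best = max(log_likelihoods)
    return [math.exp(ll - best) for ll in log_likelihoods]

def corrective_letter_weights(hypotheses, reference, pos=1.0, neg=-0.5):
    """Supervised corrective weighting (assumed values): letters that
    match the known reference string contribute positively to the
    adaptation statistics, recognition errors contribute negatively."""
    return [[pos if h == r else neg for h, r in zip(hyp, reference)]
            for hyp in hypotheses]
```

In an adaptation loop, these weights would scale each hypothesis's sufficient statistics before updating the acoustic models, rather than giving every N-best entry equal influence.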

    Original language: English
    Pages (from-to): 173-176
    Number of pages: 4
    Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
    State: Published - 1 Jan 1999
    Event: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-99) - Phoenix, AZ, USA
    Duration: 15 Mar 1999 - 19 Mar 1999
