Access Restriction Open

Author Litman, Diane ♦ Forbes-Riley, Kate
Source CiteSeerX
Content type Text
File Format PDF
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Speech Recognition ♦ Intelligent Tutoring ♦ Student Learning ♦ Automatic Speech Recognition Performance ♦ Dialogue System ♦ Task-oriented Spoken Dialogue System ♦ Numerous Quantitative Measure ♦ Rejection Versus Misrecognition Error ♦ User Satisfaction ♦ Sentence-level Error ♦ Speech Recognition Error ♦ Primary Evaluation ♦ Little Investigation ♦ Speech Recognition Performance ♦ Spoken Dialogue Tutoring ♦ Transcription Versus Semantic Error ♦ Tutorial Dialogue System
Description Speech recognition errors have been shown to negatively correlate with user satisfaction in evaluations of task-oriented spoken dialogue systems. In the domain of tutorial dialogue systems, however, where the primary evaluation metric is student learning, there has been little investigation of whether speech recognition errors also negatively correlate with learning. In this paper we examine correlations between student learning and automatic speech recognition performance, in a corpus of dialogues collected with an intelligent tutoring spoken dialogue system. We examine numerous quantitative measures of speech recognition error, including rejection versus misrecognition errors, word versus sentence-level errors, and transcription versus semantic errors. Our results show that although many of our students experience problems with speech recognition, none of our measures negatively correlates with student learning.
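The abstract refers to word-level error measures and their correlation with learning. As a rough illustration only, the sketch below computes word error rate (WER) and a Pearson correlation against per-student learning gains; the variable names and data are hypothetical and do not reproduce the paper's corpus or its exact measures.

```python
# Illustrative sketch: word error rate (WER) and Pearson correlation
# against learning gains. Data and names are hypothetical, not the paper's.
import math

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student values: mean WER over a student's turns
# versus that student's learning gain (posttest minus pretest).
mean_wer = [0.12, 0.30, 0.25, 0.08, 0.40]
learning_gain = [0.35, 0.28, 0.31, 0.40, 0.30]
print(wer("the force is constant", "the force was constant"))
print(pearson(mean_wer, learning_gain))
```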
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study
Learning Resource Type Article
Publisher Date 2005-01-01
Publisher Institution In Proceedings of the 9th European Conference on Speech Communication and Technology (Interspeech/Eurospeech)