Access Restriction Open

Author Engwall, Olov
Source CiteSeerX
Content type Text
File Format PDF
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Tongue Reading ♦ McGurk Effect ♦ Tongue Movement ♦ Articulatory Knowledge ♦ Visual Modality ♦ Speech Perception ♦ Noise Level ♦ Consonant Identification Test ♦ Degraded Audio ♦ Previous Study ♦ Intersubject Difference ♦ Audiovisual Integration Effect ♦ Visual Information ♦ Different Level ♦ Auditory Speech Signal ♦ Matched Audiovisual Condition ♦ Normal Face View ♦ Audiovisual Speech Perception ♦ Average Recognition Rate
Abstract Previous studies on tongue reading, i.e., speech perception of degraded audio supported by animations of tongue movements, have indicated that the support is initially weak and that subjects need training to learn to interpret the movements. This paper investigates whether subjects learn the animation templates as such or instead learn to retrieve articulatory knowledge that they already have. Matching and conflicting animations of tongue movements were presented in random order together with the auditory speech signal at three different noise levels in a consonant identification test. The average recognition rate over the three noise levels was significantly higher for the matched audiovisual condition than for the conflicting and auditory-only conditions. Audiovisual integration effects were also found for conflicting stimuli. However, the visual modality is given much less weight in perception than for a normal face view, and intersubject differences in the use of visual information are large. Index Terms: McGurk effect, audiovisual speech perception, augmented reality
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level Undergraduate and Postgraduate ♦ Career/Technical Study
Learning Resource Type Article