Date of Award


Document Type


Degree Name

Doctor of Philosophy (PhD)


Department

Communication Sciences and Disorders

First Advisor

M. Jane Collins


Abstract

It has been documented that phonemic featural information is differentially distributed across time in the speech waveform. It is also known that listeners with sensorineural hearing impairment often make errors on phoneme identification tasks. However, little documentation is available that describes how the hearing-impaired listener uses the various sources of phonemic information distributed across the speech waveform. In this investigation, a group of normal-hearing listeners and a group of sensorineural hearing-impaired listeners (with and without the benefit of amplification) identified consonant and vowel productions that had been systematically varied in duration. The consonants (presented in a /haCa/ environment) and the vowels (presented in a /bVd/ environment) were truncated in steps so that additional sequential segments of the original waveform could be presented. The results indicated that normal-hearing listeners could extract more phonemic information, especially consonantal place information, from the earlier-occurring portions of the stimulus waveforms than could the hearing-impaired listeners. For the hearing-impaired listeners in the unaided condition, percent-correct identification of the consonant stimuli was lower than that of the normal-hearing subjects, even for the full-duration stimuli, although the gap between impaired-unaided and normal performance decreased as truncation times increased. For the vowel stimuli, impaired-unaided performance approached that of the normal-hearing subjects at the full stimulus duration, although significant performance gaps were apparent at shorter stimulus durations. The use of amplification reduced the performance differences observed between the normal-hearing listeners and the hearing-impaired listeners in the unaided condition. Yet in many cases, even while using amplification, the hearing-impaired listeners could not make full use of the early-occurring feature information.
The results are relevant to current models of normal speech perception that emphasize the listener's need to make phonemic identifications as quickly as possible.