The Speech Auditory Brainstem Response as an Objective Outcome Measure

UoM administered thesis: PhD

Abstract

Current standard clinical audiological outcome measures of speech perception rely on behavioural responses from the participant (e.g. repeating words or sentences). These behavioural measures cannot be applied in all clinical populations (e.g. infants, or children and adults with intellectual disabilities). To date, there are no standard clinical audiological objective (non-behavioural) outcome measures of speech perception. The literature suggests that auditory brainstem responses (ABRs) to short consonant-vowel (CV) stimuli (i.e. speech-ABRs) could potentially be applied as an objective outcome measure of speech detection, discrimination of CVs that differ in their vowel's second formant (F2) frequencies, speech recognition in noise, and self-reported speech understanding. ABRs have advantages over other auditory evoked responses (such as auditory late/cortical responses) in that they can be recorded while the participant is asleep and/or sedated, and would therefore be applicable in clinical populations that cannot remain still and alert during testing, and in those that cannot provide behavioural responses.

The aim of this thesis was to assess the clinical applicability of speech-ABRs as an objective outcome measure of speech detection, speech discrimination, speech recognition in noise, and self-reported speech understanding. To investigate this, it was important first to establish which stimuli would be most appropriate for clinical use, and to evaluate speech-ABRs in populations that are able to provide behavioural responses. This was addressed through three novel studies. First, I investigated the differences in speech-ABRs evoked by shorter (40 ms and 50 ms) and longer (170 ms) duration CVs in 12 adults with normal hearing. The aim was to evaluate whether shorter CVs, which would be more clinically applicable, were appropriate stimuli for the application of speech-ABRs as a clinical objective outcome measure, specifically as an outcome measure of speech detection in quiet versus in background noise, and of discrimination of CVs that differ in their vowel's F2 frequencies. I found that background noise had a similar effect on speech-ABRs in response to all CV durations (i.e. shorter CVs are sufficient to assess the effects of background noise on speech-ABRs), and that there were no differences between speech-ABRs evoked by CVs that differ in their vowel's F2 frequencies.

Second, I investigated the applicability of speech-ABRs as a measure of unaided versus aided speech detection in quiet and in background noise, aided consonant and sentence recognition in noise, and aided self-reported speech understanding in 98 adults with sensorineural hearing loss (SNHL). I found that aiding had a significant effect on speech-ABRs, but background noise did not. Moreover, aided speech-ABRs did not predict aided consonant or sentence recognition in noise, or aided self-reported speech understanding.

Finally, I investigated the feasibility of recording speech-ABRs in 12 adult cochlear implant (CI) recipients and of removing the artefact generated by the CI using a clinically applicable single-channel approach. Additionally, three CI artefacts were recorded using an artificial head. I found that artefact removal was successful for the artificial-head recordings, and was potentially successful for recordings from two of the CI participants; however, verification that artefacts did not modulate detected responses was not possible.

In summary, with more work, speech-ABRs may be clinically applicable as an objective measure of speech detection in quiet versus in background noise in individuals with normal hearing using short-duration CV stimuli, and of aided versus unaided speech detection in quiet in individuals with SNHL. Speech-ABRs may potentially be recordable in CI recipients with the development of an adaptive and robust approach to artefact removal that incorporates verification that detected responses were not modulated by artefacts. However, speech-ABRs are not an appropriate outcome measure of discrimination of CVs that differ in their vowel's F2 frequencies, of speech detection in quiet versus in background noise in individuals with SNHL, or of aided speech recognition in noise and self-reported speech understanding in individuals with SNHL.

Details

Original language: English
Awarding Institution
Supervisors/Advisors
Award date: 31 Dec 2019