Mar 30 2017
Prof Dr Dorothea Kolossa and Mahdie Karbasi from the Cognitive Signal Processing research group at Ruhr-Universität Bochum (RUB) have developed a method for predicting speech intelligibility in noisy surroundings. In their experiments, it predicted intelligibility more precisely than the standard methods used to date, and it might thus speed up the development of hearing aids. The research was carried out as part of the EU-funded project "Improved Communication through Applied Hearing Research", or "I can hear" for short.
Specific algorithms in hearing aids filter out background noise to ensure that wearers can understand speech in every situation - regardless of whether they are in a packed restaurant or near a busy road. The challenge for the researchers is to maintain high speech transmission quality while filtering out the background noise. Before an optimised hearing aid model is released to the market, new algorithms undergo time-consuming tests.
Researchers and industrial developers run hearing tests with human participants to analyse how well each new algorithm preserves speech intelligibility. If they could assess speech intelligibility reliably in an automated process, they could cut down on these time-consuming test procedures.
New algorithm developed
To date, the standard approaches for predicting speech intelligibility have included the so-called STOI method (short-time objective intelligibility measure) and other reference-based methods. These methods require a clean original signal, i.e. an audio track recorded without any background noise. Speech intelligibility is then estimated from the differences between the original and the filtered sound. Kolossa and Karbasi have found a way to predict intelligibility without a clean reference signal that is nevertheless more precise than the STOI method. Their findings might therefore help shorten the test phase in the product development of hearing aids.
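To make the distinction concrete, the following is a minimal, hypothetical sketch of the reference-based (intrusive) idea: a clean recording is compared frame by frame with the processed, noisy version. It is not the actual STOI algorithm and not the RUB researchers' reference-free method; the envelope-correlation score and all signals here are invented purely for illustration.

```python
# Toy illustration of a reference-based (intrusive) comparison, assuming we have
# both a clean recording and its noisy/filtered counterpart. This is NOT STOI
# and NOT the RUB method - just a sketch of why a clean reference is needed.
import numpy as np

def short_time_envelopes(signal, frame_len=256, hop=128):
    """Root-mean-square energy per frame as a crude temporal envelope."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def toy_intrusive_score(clean, processed):
    """Correlate the two envelopes; higher means the processed signal tracks the clean one."""
    env_clean = short_time_envelopes(clean)
    env_proc = short_time_envelopes(processed)
    n = min(len(env_clean), len(env_proc))
    return float(np.corrcoef(env_clean[:n], env_proc[:n])[0, 1])

# Synthetic example: a "clean" tone versus a noisy copy of it.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440 * t) * np.hanning(t.size)
noisy = clean + 0.5 * rng.standard_normal(t.size)

print(f"toy intrusive score: {toy_intrusive_score(clean, noisy):.3f}")
```

A reference-free predictor, by contrast, would have to produce an intelligibility estimate from the noisy signal alone, which is what makes the RUB approach attractive for automated testing.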
The RUB researchers have tested their method with 849 individuals with normal hearing. To this end, the participants were asked to assess audio files via an online platform. With the aid of their algorithm, Kolossa and Karbasi estimated what percentage of a sentence from the respective file the participants would understand. They then compared these predicted values with the actual test results.
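A comparison of this kind typically boils down to measuring how closely the predicted percentages track the listeners' scores. The sketch below shows one plausible way to do that with correlation and root-mean-square error; the numbers are invented for illustration and are not data from the RUB study.

```python
# Hypothetical comparison of predicted versus observed intelligibility per audio
# file. The values are made up for illustration; they are not the study's data.
import numpy as np

predicted = np.array([82.0, 45.0, 67.0, 91.0, 30.0])   # model output, % of words understood
observed  = np.array([78.0, 50.0, 70.0, 88.0, 35.0])   # listener scores, % of words understood

rmse = float(np.sqrt(np.mean((predicted - observed) ** 2)))
corr = float(np.corrcoef(predicted, observed)[0, 1])

print(f"RMSE: {rmse:.1f} percentage points, Pearson r: {corr:.3f}")
```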
Research outlook
In the next step, Kolossa and Karbasi intend to run the same tests with hearing-impaired participants. They are working on algorithms that can assess and optimise speech intelligibility in accordance with the individual hearing threshold or type of hearing impairment. In the best-case scenario, the study will thus provide methods for engineering an intelligent hearing aid. Such a hearing aid could automatically recognise the wearer's current surroundings and situation. If he or she steps from a quiet street into a restaurant, the hearing aid would register an increase in background noise. Accordingly, it would filter out the ambient noise - if possible without impairing the quality of the speech signal.