A brief summary of the work done under each of our research themes can be found below.
Research themes
- Auditory neuroscience
- Synthetic hearing
- Disorders and prosthetics
- Sound production
- Sound perception
- Acoustics
- Speech
Auditory neuroscience
The sense of hearing is the culmination of a series of complex processes, beginning with the transformation of sound into electrical signals in the ear. These signals are processed by a network of billions of neurons in the brain, ultimately resulting in the perception of sound.
The inner ear contains a small, snail-shaped organ called the cochlea, which transforms sound waves into electrical signals that the brain can work with. It does this through specialised sensory receptor cells known as hair cells. These cells are essentially a gateway to the outside auditory world, but exactly how they work is not well understood. The biophysics of hair-cell mechanotransduction is one focus of our work.
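To make this concrete, the sketch below implements the classic two-state Boltzmann model often used to describe mechanotransduction: the probability that a hair cell's transduction channels are open rises sigmoidally with hair-bundle displacement. All parameter values here are illustrative placeholders, not measurements from our lab.

    import numpy as np

    def channel_open_probability(x, x0=20e-9, z=0.5e-12, kT=4.1e-21):
        """Two-state Boltzmann model of mechanotransduction-channel gating.
        x  : hair-bundle displacement (m)
        x0 : displacement at which half of the channels are open (m)
        z  : single-channel gating force (N)
        kT : thermal energy at room temperature (J)
        All parameter values are illustrative placeholders.
        """
        return 1.0 / (1.0 + np.exp(-z * (x - x0) / kT))

    # Sweep displacements over roughly the bundle's operating range.
    for x in np.linspace(-100e-9, 100e-9, 9):
        p = channel_open_probability(x)
        print(f"x = {x * 1e9:+6.1f} nm -> open probability = {p:.3f}")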
Neurons in the brain transfer sensory information through short electrical impulses known as spikes. Research in this theme is concerned with how these spikes facilitate computations that underlie our ability to perform sophisticated auditory tasks, like separating out one sound from many others or understanding speech in noisy environments.
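As a minimal illustration of this spiking representation, the sketch below simulates a leaky integrate-and-fire neuron, a standard textbook model in which a continuous input is converted into a train of discrete spikes. The parameters are arbitrary illustrative values, not a fit to any auditory neuron.

    # Minimal leaky integrate-and-fire neuron: the membrane potential
    # integrates its input while leaking back towards rest; a spike is
    # emitted whenever it crosses threshold, after which it resets.
    dt, tau = 1e-4, 10e-3                        # time step (s), membrane time constant (s)
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0    # rest, threshold, reset (arbitrary units)
    drive = 1.2                                  # constant suprathreshold input

    v, spikes = v_rest, []
    for step in range(int(0.1 / dt)):            # simulate 100 ms
        v += dt / tau * (v_rest - v + drive)     # leaky integration (Euler step)
        if v >= v_thresh:                        # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
    print(f"{len(spikes)} spikes, first at {spikes[0] * 1000:.1f} ms")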
Understanding the auditory brain is key to developing future approaches to tackle the growing problem of hearing loss, and could influence the development of new, biologically inspired audio technology. This research theme brings together expertise in engineering and the physical, biological and computing sciences to increase our understanding of how hearing works through the development of theoretical frameworks and state-of-the-art experimental approaches.
People:
- Dan Goodman (computational modelling of the auditory system)
- Andrei Kozlov (Auditory Neuroscience & Biophysics)
- Tobias Reichenbach (Biophysics of hearing & neural information processing)
Synthetic hearing
Can devices achieve hearing acuity as good as that of humans, or even super-human performance?
The starting point in the processing chain for synthetic hearing is the microphone, and we study microphone arrays comprising sets of such sensors. We develop array signal processing algorithms that separate one or more desired signals from interference by spatial filtering in linear, planar and spherical geometries; we apply this as a complementary approach to spectral- and modulation-domain filtering. Further along the processing chain, additional awareness of the sound scene is obtained by extracting spatial information from multichannel signals, which may enable desired sound sources to be selected over interfering ones. Enhancement of audio signals is particularly important in the noisy and reverberant acoustic conditions typically encountered in real-world situations, so we also research noise reduction and dereverberation algorithms, often supporting binaural and multichannel audio.
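For a flavour of the simplest spatial filter, the sketch below implements a frequency-domain delay-and-sum beamformer for a uniform linear array. The geometry, sample rate and steering angle are illustrative assumptions and the function name is our own; practical systems add calibration, adaptive weights and robustness constraints on top of this idea.

    import numpy as np

    def delay_and_sum(signals, fs, mic_positions, angle_deg, c=343.0):
        """Steer a uniform linear array towards angle_deg by aligning
        per-microphone delays (applied as phase shifts in the frequency
        domain) and averaging. Illustrative sketch only.

        signals       : (num_mics, num_samples) array of mic recordings
        fs            : sample rate (Hz)
        mic_positions : (num_mics,) mic coordinates along the array axis (m)
        angle_deg     : look direction, measured from broadside (degrees)
        """
        num_mics, n = signals.shape
        delays = mic_positions * np.sin(np.deg2rad(angle_deg)) / c  # seconds
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        out = np.zeros(n // 2 + 1, dtype=complex)
        for m in range(num_mics):
            # exp(+j 2 pi f tau) advances each channel so that a wavefront
            # from the look direction adds coherently across microphones.
            out += np.fft.rfft(signals[m]) * np.exp(2j * np.pi * freqs * delays[m])
        return np.fft.irfft(out / num_mics, n)

    # Example: 4-mic array with 5 cm spacing, steered 30 degrees off broadside.
    fs, n = 16000, 1024
    mics = np.arange(4) * 0.05
    recordings = np.random.randn(4, n)           # stand-in for real mic signals
    enhanced = delay_and_sum(recordings, fs, mics, angle_deg=30.0)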
People:
- Mike Brookes (speech processing and computer vision)
- Christine Evers (acoustic signal processing and scene mapping for socially assistive robots)
- Patrick Naylor (speech and audio signal processing)
- Björn Schuller (machine learning and audiovisual signal processing)
Disorders and prosthetics
One in six people suffers from hearing impairment. Affected individuals have major problems understanding speech, in particular in noisy environments such as a loud restaurant or bar, and therefore encounter significant obstacles in their professional and social lives.
The most common form of hearing impairment is sensorineural hearing loss: damage to or loss of the hair cells in the inner ear that are responsible for converting sound vibration into electrical signals in the attached auditory-nerve fibers. The neural machinery that processes complex sounds is vulnerable as well. Auditory-nerve fibers can detach from hair cells, leading to so-called 'hidden' hearing loss that cannot be detected by standard pure-tone audiometry. Central auditory processing disorder affects the cortical circuits that process sound, and affected individuals struggle to hear in real-world environments.
We are working towards a better understanding of the causes and forms of the different hearing impairments, more specific and objective diagnostic methods that can discriminate between them, and novel treatments through auditory prosthetics and other hearing devices. Regarding the latter, we investigate next-generation hearing-aid technology using state-of-the-art signal processing, as well as brain-computer interfaces for mind-controlled hearing aids. We also explore the development and evaluation of novel hearing applications and devices, focussing specifically on 3D audio and virtual reality.
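One core building block of hearing-aid signal processing is dynamic range compression, which maps the wide level range of everyday sound into a listener's reduced dynamic range. The static input-output rule below is a minimal single-band sketch; the threshold and compression ratio are illustrative values, not a fitting prescription for any hearing loss.

    import numpy as np

    def wdrc_gain_db(level_db, threshold_db=45.0, ratio=3.0):
        """Wide dynamic range compression, static rule: below the threshold
        the signal passes at unity gain; above it, every `ratio` dB of input
        growth yields only 1 dB of output growth. Illustrative values only.
        """
        excess = np.maximum(level_db - threshold_db, 0.0)
        return -excess * (1.0 - 1.0 / ratio)     # negative gain = attenuation

    for level in (30, 45, 60, 75, 90):
        print(f"input {level} dB SPL -> gain {wdrc_gain_db(level):+5.1f} dB")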
People:
- Mike Brookes (Communications and Signal Processing)
- Andrei Kozlov (Auditory Neuroscience & Biophysics)
- Patrick Naylor (Speech and Audio Signal Processing)
- Lorenzo Picinali (Audio Experience Design)
- Tobias Reichenbach (Biophysics of hearing & neural information processing)
Sound production
Research within this theme focuses on various aspects of the production and reproduction of acoustic signals, from the design and evaluation of auditory displays and sonification techniques and metaphors, to the creation of acoustic virtual and augmented reality applications and devices, including research on 3D audio synthesis and reproduction over both headphones and loudspeakers.
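As a taste of 3D audio synthesis, the sketch below renders a mono source for headphones using only the two simplest binaural cues: an interaural time difference (via the Woodworth spherical-head approximation) and a crude level difference. Genuine binaural rendering uses measured head-related transfer functions (HRTFs); everything here, from the head radius to the level rule, is an illustrative stand-in.

    import numpy as np

    def simple_binaural(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
        """Very crude binaural renderer: applies an interaural time
        difference (Woodworth spherical-head approximation) and a level
        difference. Illustrative stand-in for HRTF-based rendering.
        """
        theta = np.deg2rad(azimuth_deg)
        itd = head_radius / c * (theta + np.sin(theta))   # seconds
        lag = int(round(abs(itd) * fs))                   # whole-sample delay
        ild = 10 ** (-abs(azimuth_deg) / 90 * 6 / 20)     # up to ~6 dB attenuation
        near = mono
        far = np.concatenate([np.zeros(lag), mono[:len(mono) - lag]]) * ild
        left, right = (near, far) if azimuth_deg < 0 else (far, near)
        return np.stack([left, right])

    fs = 44100
    tone = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)   # 1 s, 500 Hz
    stereo = simple_binaural(tone, fs, azimuth_deg=45.0)  # source 45 degrees right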
People:
- Lorenzo Picinali (Audio Experience Design)
- Rebecca Stewart
Sound perception
Research in this theme is concerned with how humans perceive and make sense of sound. Example questions include: how can we tell where a sound is coming from? Why does sound delivered through headphones seem to come from inside our head rather than from the world? Why is listening to one voice among many so much more difficult than listening to a single voice on its own (the "cocktail party" problem)?
If we understood how humans solve these problems, perhaps we could design better artificial systems (see the Synthetic hearing theme) or help people with hearing impairments (see the Disorders and prosthetics theme).
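Take the first question as an example: one of the main cues for locating a sound is the interaural time difference (ITD), the sub-millisecond delay between the two ears. The sketch below estimates it by searching for the cross-correlation peak over physiologically plausible lags, the computation at the heart of classic Jeffress-style localisation models; the signals and numbers are synthetic illustrations.

    import numpy as np

    def estimate_itd(left, right, fs, max_itd=1e-3):
        """Estimate the interaural time difference as the lag that
        maximises the cross-correlation of the two ear signals, searched
        only over physically plausible lags (|ITD| below about 1 ms).
        """
        max_lag = int(max_itd * fs)
        lags = np.arange(-max_lag, max_lag + 1)
        xcorr = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                        right[max(0, l):len(right) - max(0, -l)])
                 for l in lags]
        return lags[int(np.argmax(xcorr))] / fs

    # Demo: the same noise burst reaches one ear 10 samples later.
    fs = 44100
    noise = np.random.randn(fs // 10)
    left, right = noise[10:], noise[:-10]
    print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e6:.0f} microseconds")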
People:
- Dan Goodman (computational modelling of the auditory system)
- Lorenzo Picinali (Audio Experience Design)
- Tobias Reichenbach (Biophysics of hearing & neural information processing)
Acoustics
People:
- Mike Brookes (speech processing and computer vision)
- Christine Evers (acoustic signal processing and scene mapping for socially assistive robots)
- Patrick Naylor (speech and audio signal processing)
Speech
Speech is the most natural form of communication for human-machine interaction and, accordingly, research into capturing, analysing, processing and transmitting speech has been of paramount importance over recent years. The SAP team at Imperial College has led the way in several such technological areas, through an extensive programme of scientific publications, strong industry-linked research initiatives and public engagement. Research topics addressed include speech dereverberation, noise reduction, speech analysis and speech modelling. This work is currently being applied in binaural hearing-aid signal processing and in signal and information processing for robot audition.
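To illustrate the noise-reduction end of this work, the sketch below implements textbook spectral subtraction: the noise magnitude spectrum is estimated from an assumed noise-only lead-in, subtracted frame by frame, and the result is resynthesised with the noisy phase. This is a classic baseline shown for exposition, not the team's own algorithm, and the frame sizes, floor and demo signal are illustrative.

    import numpy as np

    def spectral_subtraction(noisy, fs, noise_seconds=0.25,
                             frame=512, hop=256, floor=0.05):
        """Classic spectral subtraction: estimate the noise magnitude
        spectrum from an initial noise-only stretch, subtract it from each
        frame's magnitude and resynthesise with the noisy phase.
        A textbook baseline, not the group's own algorithm.
        """
        window = np.hanning(frame)
        # Noise magnitude estimate from the assumed noise-only lead-in.
        lead = noisy[:int(noise_seconds * fs)]
        frames = [lead[i:i + frame] * window
                  for i in range(0, len(lead) - frame, hop)]
        noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

        out = np.zeros(len(noisy))
        for i in range(0, len(noisy) - frame, hop):       # 50% overlap-add
            spec = np.fft.rfft(noisy[i:i + frame] * window)
            mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
            out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
        return out

    # Demo: noise only for the first 0.25 s, then a tone appears in the noise.
    fs = 16000
    t = np.arange(fs) / fs
    noisy = 0.3 * np.random.randn(fs) + (t > 0.25) * np.sin(2 * np.pi * 440 * t)
    cleaned = spectral_subtraction(noisy, fs)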
People:
- Mike Brookes (speech processing and computer vision)
- Patrick Naylor (speech and audio signal processing)
- Tobias Reichenbach (Biophysics of hearing & neural information processing)
- Björn Schuller (machine learning and audiovisual signal processing)
Contact
For more information about the group, please contact:
Dr Dan Goodman
+44 (0)20 7594 6264
d.goodman@imperial.ac.uk