TY - UNPB
T1 - Listening to the room
T2 - disrupting activity of dorsolateral prefrontal cortex impairs learning of room acoustics in human listeners
AU - Hernandez-Perez, Heivet
AU - Monaghan, Jessica
AU - Mikiel-Hunter, Jason
AU - Traer, James
AU - Sowman, Paul
AU - McAlpine, David
PY - 2023/7/3
Y1 - 2023/7/3
AB - Navigating complex sensory environments is critical to survival, and brain mechanisms have evolved to cope with the wide range of surroundings we encounter each day. In noisy spaces, for example, listeners place more emphasis on early-arriving sound energy to determine the location of a sound source, suppressing potentially spurious localisation cues conveyed in later-arriving sound energy reflected from walls and other hard surfaces. Nevertheless, reverberant sound energy is highly informative about those spaces per se, including their dimensions, construction, and the number of potential sources, and human listeners show improved speech understanding when re-encountering known, compared to new, reverberant environments. To determine how listeners learn acoustic spaces, we assessed their ability to perceive speech in a range of noisy and reverberant rooms. We mimicked the acoustic characteristics of real rooms using an array of loudspeakers positioned within an anechoic chamber and assessed listeners’ performance in a speech-in-noise task using sentences from the Coordinate Response Measure (CRM) corpus: “Ready ‘call sign’ go to |Color| |Number| now.” Listeners were also exposed to repetitive transcranial magnetic stimulation to disrupt activity of the dorsolateral prefrontal cortex, a region believed to play a role in statistical learning. Our data suggest listeners rapidly adapt to the statistical characteristics of an acoustic environment to improve speech understanding. This ability is impaired when repetitive transcranial magnetic stimulation is applied bilaterally to the dorsolateral prefrontal cortex. The data demonstrate that speech understanding in noise is best in rooms with reverberant characteristics common to human-built environments, with performance declining for higher and lower reverberation times, including fully anechoic (non-reverberant) environments. Our findings provide compelling evidence for a reverberation “sweet spot” and for brain mechanisms that might have evolved to cope with the acoustic characteristics of listening environments encountered every day.
KW - implicit learning
KW - room acoustics
KW - reverberation
KW - reverberant environments
KW - dorsolateral prefrontal cortex
KW - statistical learning
KW - adaptation
KW - meta-adaptation
KW - speech in noise
KW - transcranial magnetic stimulation
KW - listening loops
U2 - 10.2139/ssrn.4495659
DO - 10.2139/ssrn.4495659
M3 - Preprint
T3 - Current biology : CB
BT - Listening to the room
ER -