Conversational distance adaptation in noise and its effect on signal-to-noise ratio in realistic listening environments

Adam Weisser*, Kelly Miles, Michael J. Richardson, Jörg M. Buchholz

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Everyday environments impose acoustical conditions on speech communication that require interlocutors to adapt their behavior in order to hear and to be heard. Past research has focused mainly on the adaptation of speech level, while few studies have investigated how interlocutors adapt their conversational distance as a function of noise level. Similarly, no study has tested the interaction between distance and speech-level adaptation in noise. In the present study, participant pairs held natural conversations while binaurally listening to identical noise recordings of different realistic environments (range of 53-92 dB sound pressure level), using acoustically transparent headphones. Conversations were held in standing and sitting (at a table) conditions. Interlocutor distances were tracked using wireless motion-capture equipment, which allowed subjects to move closer to or farther from each other. The results show that talkers adapt their voices mainly according to the noise conditions and much less according to distance. Distance adaptation was highest in the standing condition. Consequently, mainly in the loudest environments, listeners in the standing condition were able to improve the signal-to-noise ratio (SNR) at the receiver location relative to the sitting condition, making it less negative. Analytical approximations are provided for the conversational distance as well as the receiver-related speech level and SNR.
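The SNR benefit of reducing conversational distance can be illustrated with a minimal sketch, assuming idealized free-field spherical spreading (-6 dB per doubling of distance) and illustrative level values; the function names and numbers below are hypothetical and do not reproduce the paper's fitted analytical approximations.

```python
import math

def received_speech_level(level_at_1m_db, distance_m):
    """Speech level at the receiver, assuming free-field spherical
    spreading from a point source: -6 dB per doubling of distance."""
    return level_at_1m_db - 20.0 * math.log10(distance_m)

def snr_at_receiver(level_at_1m_db, distance_m, noise_db):
    """SNR at the receiver: received speech level minus noise level.
    Assumes the noise level is the same at both interlocutor positions."""
    return received_speech_level(level_at_1m_db, distance_m) - noise_db

# Illustrative values (not from the study): speech at 65 dB SPL
# referenced to 1 m, in an 80 dB SPL noise environment.
snr_far = snr_at_receiver(65.0, 1.5, 80.0)    # seated-like distance
snr_near = snr_at_receiver(65.0, 0.5, 80.0)   # closer standing distance
print(f"SNR at 1.5 m: {snr_far:.1f} dB, at 0.5 m: {snr_near:.1f} dB")
```

Under this idealized law, closing from 1.5 m to 0.5 m gains about 9.5 dB of SNR without any change in vocal effort, which is why distance adaptation in the standing condition can make the receiver SNR less negative in loud environments.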

Original language: English
Pages (from-to): 2896-2907
Number of pages: 12
Journal: Journal of the Acoustical Society of America
Volume: 149
Issue number: 4
DOIs
Publication status: Published - 29 Apr 2021

