A binaural model is presented that predicts the effect of audibility on the intelligibility of speech in the presence of speech-shaped-noise and vocoded-speech maskers. It takes as inputs the calibrated target and masker signals (provided independently) and the listener’s pure-tone audiogram at each ear. Model predictions are compared to speech reception thresholds (SRTs) measured for normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of two uncorrelated speech-spectrum noises or two vocoded-speech maskers, which were either (artificially) spatially separated from, or co-located with, the frontal speech target. The artificial spatial separation was realized over headphones by presenting each masker to a different ear, while the target was presented diotically, as if coming from the front. Audibility was varied by testing four sensation levels for the combined maskers. The model provides a good prediction of the decrease in SRT, and the increase in spatial release from masking (governed primarily by better-ear glimpsing in this paradigm), with increasing audibility. For both listener groups, the mean absolute prediction error across conditions was between 0.6 and 1.7 dB.