The Effect of Timbre and Loudness on Melody Segregation
Jeremy Marozeau, Hamish Innes-Brown and Peter J. Blamey
Music Perception: An Interdisciplinary Journal
Vol. 30, No. 3 (Feb. 1, 2013), pp. 259-274
Published by: University of California Press
Stable URL: http://www.jstor.org/stable/10.1525/mp.2012.30.3.259
Page Count: 16
Topics: Melody, Loudness, Musicians, Time perception, Streaming, Musical perception, Impulsiveness, Musical timbre, Timbre, Audio frequencies
The aim of this study was to examine the effects of three acoustic parameters on the difficulty of segregating a simple 4-note melody from a background of interleaved distractor notes. Melody segregation difficulty ratings were recorded while three acoustic parameters of the distractor notes were varied separately: intensity, temporal envelope, and spectral envelope. Statistical analyses revealed a significant effect of music training on difficulty rating judgments. For participants with music training, loudness was the most efficient perceptual cue, and no difference was found between the dimensions of timbre influenced by temporal and spectral envelope. For the group of listeners with less music training, both loudness and spectral envelope were the most efficient cues. We speculate that the difference between musicians and nonmusicians may be due to differences in processing the stimuli: musicians may process harmonic sound sequences using brain networks specialized for music, whereas nonmusicians may use speech networks.
© 2013 by The Regents of the University of California
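The abstract describes an interleaved-melody paradigm in which distractor notes alternate with target-melody notes while their intensity, temporal envelope, and spectral envelope are varied. The following Python sketch is only an illustration of that general idea, not the authors' stimuli: the sample rate, note frequencies, harmonic roll-off, and parameter names (level, attack, brightness) are all assumptions introduced here for clarity.

```python
import numpy as np

FS = 44100  # sample rate in Hz (an assumption, not taken from the paper)

def note(freq, dur=0.3, level=1.0, attack=0.01, brightness=1.0):
    """Synthesize one harmonic tone.

    level      -- linear amplitude scale (stand-in for the intensity cue)
    attack     -- attack time in seconds (crude temporal-envelope cue)
    brightness -- flattens the harmonic roll-off (crude spectral-envelope cue)
    """
    t = np.arange(int(FS * dur)) / FS
    # Sum a few harmonics; larger 'brightness' gives a shallower roll-off.
    sig = sum(h ** -(2.0 - brightness) * np.sin(2 * np.pi * h * freq * t)
              for h in range(1, 6))
    sig = sig / np.max(np.abs(sig))                # normalize to unit peak
    env = np.minimum(t / max(attack, 1e-4), 1.0)   # linear attack ramp
    return level * env * sig

def interleave(melody_freqs, distractor_freqs, **distractor_params):
    """Alternate target-melody notes with distractor notes."""
    out = []
    for m, d in zip(melody_freqs, distractor_freqs):
        out.append(note(m))                        # target note: fixed parameters
        out.append(note(d, **distractor_params))   # distractor: varied parameters
    return np.concatenate(out)

# Example: distractors louder and brighter than the 4-note target melody.
target = [392, 440, 494, 523]     # hypothetical G4-A4-B4-C5 melody
distract = [415, 466, 415, 466]   # hypothetical distractor pitches
stimulus = interleave(target, distract, level=1.5, brightness=1.5, attack=0.001)
```

In such a sketch, increasing the distractors' level, sharpening their attack, or brightening their spectrum makes them perceptually more distinct from the target notes, which is the kind of cue manipulation whose effect on segregation difficulty the study rated.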