Normalization of lexical tones and nonlinguistic pitch contours: Implications for speech-specific processing mechanism

Kaile Zhang, Xiao Wang, Gang Peng

Research output: Journal article publication › Journal article › Academic research › peer-review

9 Citations (Scopus)


Context is indispensable for accurate tone perception, especially when the target tone system is as complex as that of Cantonese. However, not all contexts are equally beneficial. Speech contexts are usually more effective than nonspeech contexts matched in pitch information at improving lexical tone identification. Several factors that may contribute to these unequal effects have been proposed but, thus far, their plausibility remains unclear. To shed light on this issue, the present study compares the perception of lexical tones and their nonlinguistic counterparts under specific contextual (speech, nonspeech) and attentional (with/without focal attention) conditions. The results reveal a prominent congruency effect: target sounds tend to be identified more accurately when embedded in contexts of the same nature (speech/nonspeech). This finding suggests that speech and nonspeech sounds are partly processed by domain-specific mechanisms and that information from the same domain can be integrated more effectively than information from different domains. Therefore, domain-specific processing of speech could be the most likely cause of the unequal context effect. Moreover, focal attention is not a prerequisite for extracting contextual cues from either speech or nonspeech during perceptual normalization, which implies that context encoding is highly automatic for native listeners.
Original language: English
Pages (from-to): 38-49
Number of pages: 12
Journal: Journal of the Acoustical Society of America
Issue number: 1
Publication status: Published - 1 Jan 2017

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics
