心理学报 (Acta Psychologica Sinica) ›› 2023, Vol. 55 ›› Issue (2): 177-191. doi: 10.3724/SP.J.1041.2023.00177
ZHENG Xi¹, ZHANG Tingting¹, LI Liang², FAN Ning¹, YANG Zhigang¹
Received:
2022-02-09
Online:
2022-11-10
Published:
2023-02-25
Contact:
YANG Zhigang
E-mail:yangzg.psy@gmail.com
Abstract:
The emotional information in speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the specific unmasking mechanisms remain unclear. Using the perceived spatial separation paradigm and manipulating the type of masker, the present study conducted two experiments to examine how emotional prosody and emotional semantics release target speech from informational masking. The results showed that emotional prosody had an unmasking effect both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics had no unmasking effect under perceptual informational masking, but did have one under combined perceptual and cognitive informational masking. These findings indicate that emotional prosody and emotional semantics release speech from masking through different mechanisms. Emotional prosody preferentially captures more of the listener's attention and can overcome the perceptual interference caused by masking sounds, but contributes little against interference from the maskers' content. Emotional semantics preferentially recruits more of the listener's cognitive processing resources, releasing speech from cognitive informational masking but not from perceptual informational masking.
ZHENG Xi, ZHANG Tingting, LI Liang, FAN Ning, YANG Zhigang. (2023). Unmasking effects of speech emotional prosody and semantics on auditory informational masking. Acta Psychologica Sinica, 55(2), 177-191.
Table 1. Accuracy of identifying the target sentences (M ± SD)

SNR (dB) | Neutral prosody, no perceived spatial separation | Happy prosody, no perceived spatial separation | Neutral prosody, perceived spatial separation | Happy prosody, perceived spatial separation
---|---|---|---|---
-8 | 0.29 ± 0.16 | 0.39 ± 0.14 | 0.46 ± 0.19 | 0.50 ± 0.16
-4 | 0.55 ± 0.18 | 0.66 ± 0.15 | 0.71 ± 0.16 | 0.71 ± 0.19
0 | 0.76 ± 0.16 | 0.80 ± 0.10 | 0.87 ± 0.08 | 0.86 ± 0.09
4 | 0.93 ± 0.04 | 0.90 ± 0.07 | 0.94 ± 0.04 | 0.94 ± 0.04
Table 2. Accuracy of identifying the target sentences (M ± SD)

SNR (dB) | Neutral prosody, no perceived spatial separation | Happy prosody, no perceived spatial separation | Neutral prosody, perceived spatial separation | Happy prosody, perceived spatial separation
---|---|---|---|---
-8 | 0.07 ± 0.07 | 0.18 ± 0.11 | 0.28 ± 0.18 | 0.37 ± 0.14
-4 | 0.22 ± 0.14 | 0.43 ± 0.15 | 0.58 ± 0.19 | 0.63 ± 0.12
0 | 0.55 ± 0.19 | 0.67 ± 0.09 | 0.79 ± 0.13 | 0.81 ± 0.09
4 | 0.86 ± 0.12 | 0.85 ± 0.07 | 0.90 ± 0.07 | 0.91 ± 0.06
Table 3. Valence and arousal ratings of the verbs and nouns (M ± SD)

Emotional semantics | Verb valence | Verb arousal | Noun valence | Noun arousal
---|---|---|---|---
Neutral | 5.07 ± 0.19 | 3.26 ± 0.62 | 5.03 ± 0.14 | 2.62 ± 0.33
Positive | 6.79 ± 0.40 | 5.99 ± 0.52 | 6.46 ± 0.32 | 5.06 ± 0.53
Table 4. Accuracy of identifying the target sentences (M ± SD)

SNR (dB) | Neutral semantics, no perceived spatial separation | Positive semantics, no perceived spatial separation | Neutral semantics, perceived spatial separation | Positive semantics, perceived spatial separation
---|---|---|---|---
-8 | 0.37 ± 0.12 | 0.35 ± 0.11 | 0.51 ± 0.16 | 0.52 ± 0.17
-4 | 0.58 ± 0.15 | 0.63 ± 0.12 | 0.73 ± 0.10 | 0.74 ± 0.11
0 | 0.83 ± 0.06 | 0.85 ± 0.09 | 0.87 ± 0.07 | 0.90 ± 0.06
4 | 0.93 ± 0.05 | 0.93 ± 0.05 | 0.94 ± 0.04 | 0.94 ± 0.05
Table 5. Accuracy of identifying the target sentences (M ± SD)

SNR (dB) | Neutral semantics, no perceived spatial separation | Positive semantics, no perceived spatial separation | Neutral semantics, perceived spatial separation | Positive semantics, perceived spatial separation
---|---|---|---|---
-8 | 0.06 ± 0.06 | 0.09 ± 0.06 | 0.27 ± 0.18 | 0.33 ± 0.18
-4 | 0.27 ± 0.11 | 0.32 ± 0.13 | 0.58 ± 0.15 | 0.64 ± 0.16
0 | 0.65 ± 0.10 | 0.74 ± 0.11 | 0.84 ± 0.08 | 0.88 ± 0.06
4 | 0.90 ± 0.05 | 0.93 ± 0.05 | 0.95 ± 0.04 | 0.95 ± 0.03