Acta Psychologica Sinica, 2019, 51(4): 462-470 doi: 10.3724/SP.J.1041.2019.00462

Research Report


Early preference for positive over negative prosody in neonates: Evidence based on event-related potentials

ZHANG Dandan1,2, CHEN Yu1, AO Xiang1, SUN Guoyu3, LIU Lili3, HOU Xinlin3, CHEN Yuming1

1 College of Psychology and Sociology, Shenzhen University, Shenzhen 518060, China

2 Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen 518060, China

3 Department of Pediatrics, Peking University First Hospital, Beijing 100034, China

Corresponding author: CHEN Yuming, E-mail: cympsy@szu.edu.cn


* Supported by the National Natural Science Foundation of China (31571120), the Shenzhen Fundamental Research Project (JCYJ20170302143246158), and the Beijing Municipal Science and Technology Commission (Z161100002616011).

Received: 2018-07-31   Online: 2019-04-25


Abstract

Our ability to process emotional prosody, that is, the emotional tone of a speaker, is fundamental to human communication and adaptive behavior. Very early in development, vocal emotional cues are more critical than facial expressions in guiding infants' behavior. However, the processing of emotional prosody in the very first days of life is still far from being clearly understood. It is unclear whether the ability to discriminate between prosodies of different emotional categories is present at birth, and whether neonates show a preferential orientation toward one valence (negativity bias versus positivity preference).

Here, we used event-related potentials (ERPs) to examine the ability of neonates (1 to 6 days old) to discriminate different categories of emotion conveyed by speech prosody. The experiment was conducted in the neonatal ward of Peking University First Hospital, Beijing, China, and the electroencephalogram was recorded while the infants were in a state of active sleep. Using an oddball paradigm, the current study investigated the neural correlates underlying automatic processing of emotional voices of happiness, fear, and anger in 18 (Experiment 1) and 29 (Experiment 2) sleeping neonates. In Experiment 1, each category of emotional prosody (20%) was separately mixed into emotionally neutral prosody (80%), forming three blocks with different emotions. In Experiment 2, we not only repeated the procedure of Experiment 1 but also reversed the standard and deviant stimuli in the oddball task, to verify that the effects observed in Experiment 1 did not stem from differences in the physical properties of the three emotional voices.

The ERP data showed that the frontal scalp distribution (F3 and F4) of the neonatal brain could discriminate happy voices from both angry and fearful voices: the mismatch response (MMR) was larger in response to the deviant stimuli of happiness than to the deviant stimuli of anger and fear, whereas the latter two, i.e., angry and fearful voices, could not be differentiated. MMR amplitudes at the other four electrodes (C3, C4, P3, and P4) did not differ significantly across emotional conditions. Note that the MMR is an immature counterpart of the mismatch negativity (MMN), a preattentive component of the auditory ERP that appears as a positive (MMR) or negative (MMN) displacement in response to deviant sounds relative to standard sounds in the oddball paradigm.

The neural responses recorded here indicate a very early preference for positive over negative stimuli, contrary to the 'negativity bias' established in the affective prosody literature on adults and older infants. We suggest that the range-frequency hypothesis can help explain the transformation from a 'positivity preference' during the first half year of life to a 'negativity bias' later in development. The present finding provides the first neuroelectrophysiological evidence for the positivity-preference hypothesis in neonates. In addition, this early discrimination between positive and negative prosody may provide a foundation for later emotional and social-cognitive development.

Keywords: neonate; positivity preference; happy prosody; fearful prosody; angry prosody



1 Introduction

The human voice is the type of sound we encounter most frequently in daily life; it conveys not only semantic information but also the emotional state of the speaker (Belin, Fecteau, & Bédard, 2004). Accurately decoding the emotional information carried by the voice allows an individual to adapt better to the social environment (Decety & Howard, 2013; Frühholz & Grandjean, 2013; Hawk, Van Kleef, Fischer, & Van der Schalk, 2009). Early in development, the auditory system of neonates (0-28 days old) and infants (1-12 months old) is more mature than the visual system, so emotion conveyed by the voice matters more for infants' survival and development than emotion conveyed by visual carriers such as faces (Grossmann, 2010; Vaish, Grossmann, & Woodward, 2008; Vaish & Striano, 2004).

Emotion in speech can be conveyed by semantic content, or expressed through the organic combination of acoustic features such as frequency, loudness, and rhythm (Brück, Kreifelts, & Wildgruber, 2011). Given that young infants, especially neonates, cannot yet understand semantics, this paper addresses only the latter, i.e., the processing of emotional prosody. The neonatal auditory system is already fully capable of processing pitch (Háden et al., 2009); the right (compared with the left) superior temporal sulcus and middle temporal gyrus are significantly activated by varying prosody in speech (Arimitsu et al., 2011; Telkemeyer et al., 2009), and the frontal lobe responds specifically to speech with varying pitch (compared with monotonous speech), suggesting that the brain at this developmental stage can already distinguish different prosodic patterns in speech (Saito et al., 2007). Regarding the processing of emotional speech (or melody), studies have shown that 5-month-old infants can distinguish sad from happy melodies when listening to music (Flom & Pick, 2012), that 5- to 7-month-old infants can discriminate voices of different emotional categories (Flom & Bahrick, 2007), and that 7-month-old infants can detect the congruence of emotional information between faces and voices (Grossmann, Striano, & Friederici, 2005). However, research on the very beginning of human life, i.e., the neonatal stage, remains scarce.

Are humans born with the ability to discriminate emotional prosody of different categories? If so, is neonatal emotion processing biased toward positive or negative information? The second question arises from the following facts. Children, adolescents, and adults are known to show a negativity bias in emotion processing, devoting more attentional, evaluative, and memory resources to negative information (Ito, Larsen, Smith, & Cacioppo, 1998), but this bias does not appear to be innate. Summarizing studies based on faces and voices, Vaish et al. (2008) concluded that infants show a clear processing bias toward negative emotion only after 6-7 months of age. For example, in the visual modality, the brains of 6-month-old infants show a larger central event-related negative component to objects gazed at by fearful (versus neutral) faces (Hoehl & Striano, 2010), and 7-month-old infants look longer at fearful (versus happy) faces and show a larger central negative component (Peltola, Leppänen, Mäki, & Hietanen, 2009). In the auditory modality, the brains of 7-month-old infants show significant right temporal activation to angry (versus happy and neutral) voices (Grossmann, Oberecker, Koch, & Friederici, 2010), along with a larger event-related negative component over frontal and central regions (Grossmann et al., 2005). In contrast, studies supporting an early positivity preference have shown that 5-month-old infants smile more when listening to approving (versus prohibiting) speech (Fernald, 1993), and that 4-month-old infants look significantly longer at happy faces than at angry or neutral faces (LaBarbera, Izard, Vietze, & Parisi, 1976) and show a larger frontal and central negative component to objects gazed at by happy (versus fearful) faces (Rigato, Farroni, & Johnson, 2010). However, these studies of positive versus negative processing biases examined only infants older than 3 months; relevant reports on younger infants, especially neonates, remain very rare.

To our knowledge, only three studies have directly compared neonates' processing of positive and negative emotional materials. An early behavioral study found that happy speech elicited longer eye-opening in neonates than angry, sad, and neutral speech (Mastropieri & Turkewitz, 1999). A more recent behavioral study examined neonates' looking times for happy and fearful faces and found that they looked longer at happy faces (Farroni, Menon, Rigato, & Johnson, 2007). Subsequently, Cheng and colleagues (Cheng, Lee, Chen, Wang, & Decety, 2012) used an oddball paradigm to record event-related potentials (ERPs) while neonates processed emotional prosody, providing the first neural (electrophysiological) evidence that neonates discriminate vocal emotions; they found that fearful voices elicited a larger mismatch potential than happy voices over the fronto-central region. Because this ERP component has positive amplitude in neonates (Dehaene-Lambertz, 2000; Friederici, Friedrich, & Weber, 2002; Leppänen et al., 2004; Ruusuvirta, Huotilainen, Fellman, & Näätänen, 2009; Winkler et al., 2003), opposite in polarity to the mismatch negativity typically found in adults, we refer to it as the "mismatch response" (MMR; Cheng et al., 2012; Zhang et al., 2014). Evidently, the only three neonatal studies give conflicting answers regarding the positivity/negativity bias of emotion processing: the two behavioral experiments support a positivity preference, whereas the experiment of Cheng et al. (2012) supports a negativity bias.

In summary, research on emotional prosody processing in neonates is still scarce, and seemingly contradictory conclusions have emerged on the question of positivity versus negativity bias. Neonates are a special population: they cannot stay calm and attend to the task as the experimenter intends, and movement artifacts heavily contaminate both behavioral and neural measures, so research on neonates usually requires accumulating substantial converging evidence before relatively reliable conclusions can be drawn. With this aim, the present study used ERPs in two experiments to examine neonatal brain responses to happy, angry, and fearful prosody. Following Cheng et al. (2012), we presented the speech materials in an oddball paradigm, because this paradigm is more sensitive than other passive paradigms (e.g., presenting two stimulus types with equal probability) in detecting discrimination between stimuli (Ferrari, Bradley, Codispoti, & Lang, 2010). Experiment 1 used the classic oddball paradigm, eliciting ERPs to the three emotional voices in three separate blocks, and directly compared MMR amplitudes across the three conditions to assess neonatal sensitivity to emotional valence. Experiment 2 used an oddball paradigm in which the deviant and standard stimuli were reversed, both to replicate the results of Experiment 1 and to rule out the possibility that the MMR differences between emotions in Experiment 1 arose from differences in the physical properties of the emotional voices. We used the same stimulus materials as Cheng et al. (2012). Based on their results, we hypothesized that humans are born with the ability to discriminate positive from negative emotional prosody, possibly with a processing bias toward negative emotion, i.e., that angry and/or fearful voices would elicit a larger MMR than happy voices.

2 Method

2.1 Participants

Experiments 1 and 2 recruited 25 and 35 healthy full-term newborns (within one week of birth), respectively. Seven and six participants, respectively, failed to complete data collection because of crying (excessive EEG artifacts). The final samples therefore comprised 18 neonates (9 male, 9 female; gestational age 38.9 ± 0.9 weeks; age 3.2 ± 1.3 days) in Experiment 1 and 29 neonates (15 male, 14 female; gestational age 38.7 ± 1.0 weeks; age 2.8 ± 1.2 days) in Experiment 2. Inclusion criteria were: (1) birth weight appropriate for gestational age; (2) no abnormal clinical manifestations before or during the experiment; (3) no sedatives administered within at least 48 hours before the experiment; (4) no hearing impairment according to otoacoustic emission screening (OAE, ILO88 Dpi, Otodynamics Ltd, Hatfield, UK); (5) Apgar scores of at least 9 at 1 and 5 min after birth; (6) no abnormality at the neurological follow-up at 6 months of age. Exclusion criteria were: (1) hypoxic-ischemic encephalopathy; (2) intraventricular hemorrhage or white matter damage (ultrasound examination); (3) severe congenital malformation; (4) central nervous system infection; (5) metabolic disease; (6) convulsions or epilepsy (clinical manifestations).

The neonates' families were informed of the purpose and content of the study and signed informed consent before the experiment. The protocol was approved by the Medical Ethics Committee of Peking University.

2.2 Materials

We used the emotional speech materials of Cheng et al. (2012), whose validity has been confirmed in several studies (e.g., Fan, Hsu, & Cheng, 2013; Hung, Ahveninen, & Cheng, 2013; Zhang et al., 2014). Four disyllabic "dada" utterances were used, expressing anger, fear, happiness, and neutrality, respectively (Figure 1). In brief, the materials were produced as follows: a young adult female repeated "dada" 15 times in each of the four emotional conditions; these 60 recordings were rated by 120 adults for emotional category and intensity on 5-point scales, and the recording rated highest in each category was selected as the experimental material; audio-editing software (Adobe Audition, Adobe Systems Inc., San Jose, USA) was then used to equate the materials in duration and loudness.
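For illustration, the duration and loudness equalization could be sketched in Python as follows. This is a minimal sketch only: the authors used Adobe Audition, and the file names, target duration, and RMS target below are assumptions.

```python
# Hypothetical re-implementation of the stimulus equalization step
# (the original work used Adobe Audition; file names are assumed).
import numpy as np
import librosa
import soundfile as sf

FILES = ["happy.wav", "angry.wav", "fearful.wav", "neutral.wav"]  # assumed names
TARGET_LEN_S = 0.35   # each stimulus lasts 350 ms (see Procedure)
TARGET_RMS = 0.1      # arbitrary common loudness target

for path in FILES:
    y, sr = librosa.load(path, sr=None)           # keep native sampling rate
    n = int(TARGET_LEN_S * sr)
    y = np.pad(y[:n], (0, max(0, n - len(y))))    # cut or zero-pad to 350 ms
    y *= TARGET_RMS / (np.sqrt(np.mean(y ** 2)) + 1e-12)  # match RMS loudness
    sf.write("eq_" + path, y, sr)
```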

Figure 1. Oscillograms and spectrograms of the four emotional speech stimuli.


2.3 Procedure

The experiment was conducted in the pediatric ward of Peking University First Hospital, where the background noise was approximately 30 dB SPL (Smart Sensor AS804 sound level meter, Dongguan Wanchuang Electronic Products Co., Ltd., Dongguan, China). The speech stimuli were delivered through in-ear active noise-canceling earphones (IER-NW500N, Sony Corp., Tokyo, Japan) at an average intensity of 50 dB SPL.

Experimental preparation (attaching the EEG electrodes, etc.) began 10 min after feeding and took less than 10 min. The room was then kept quiet and the neonate was allowed to fall asleep naturally. During the experiment, amplitude-integrated EEG (Olympic CFM 6000, Natus, Seattle, USA) was used to monitor the sleep-wake state in real time (Figure 2A), with electrodes placed at CP3 and CP4; this sleep monitor was independent of the EEG system under study. Stimulus presentation began once the neonate had entered active sleep (the neonatal equivalent of adult rapid-eye-movement sleep) and had remained in it stably for 3-5 min. For details of the amplitude-integrated EEG technique and neonatal sleep staging, see our previous publications (Zhang et al., 2011; 2014).

Figure 2. Experimental setup and data acquisition. A, a neonate during the experiment (the display screen is used for real-time monitoring of the sleep-wake state); B, locations of the six EEG electrodes analyzed in Experiment 1.


An oddball paradigm was used (Cheng et al., 2012; Zhang et al., 2014), in which the neonates passively listened to the emotional speech stimuli while asleep. The passive listening task comprised three blocks, one for each emotion (happy, angry, fearful), separated by 10-s gaps; block order was counterbalanced across participants. Each block contained 500 trials: 400 standard and 100 deviant. At least two standard stimuli intervened between any two deviant stimuli. Each utterance lasted 350 ms, with an inter-stimulus interval of 450-850 ms (Hirasawa, Kurihara, & Konishi, 2002; Zhang et al., 2014), so the 500 trials of each block took about 500 s (8.3 min) in total.
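As an illustration of this block structure, a minimal Python sketch that generates one 500-trial sequence satisfying the 80/20 ratio and the two-standards-between-deviants constraint might look like this (the label strings and the constructive gap-filling scheme are our own, not the authors' presentation code):

```python
import random

def make_block(n_dev=100, n_std=400, min_gap=2):
    """Build one oddball block: 100 deviants among 400 standards,
    with at least `min_gap` standards between consecutive deviants."""
    # One gap of standards before each deviant plus one after the last;
    # only the gaps *between* deviants must hold >= min_gap standards.
    gaps = [0] + [min_gap] * (n_dev - 1) + [0]
    for _ in range(n_std - sum(gaps)):           # spread the remaining standards
        gaps[random.randrange(len(gaps))] += 1
    seq = []
    for g in gaps[:-1]:
        seq += ["standard"] * g + ["deviant"]
    seq += ["standard"] * gaps[-1]
    return seq

block = make_block()                                 # 500 trial labels
isis = [random.uniform(0.45, 0.85) for _ in block]   # jittered offset-to-onset ISIs (s)
assert len(block) == 500 and block.count("deviant") == 100
```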

Experiment 1 comprised a single session in which each neonate heard one block each of happy, angry, and fearful prosody, with the emotional voices as deviant stimuli and the neutral voice as the standard. Experiment 2 comprised two sessions, each including one block per emotion. In one session, the emotional voices served as deviants and the neutral voice as the standard; in the other session the assignment was reversed, i.e., the emotional voices served as standards and the neutral voice as the deviant. Each neonate completed both sessions, with session order counterbalanced across participants.

2.4 Data acquisition and analysis

EEG data were recorded with a HANDYEEG system (Micromed, Treviso, Italy) at a sampling rate of 256 Hz, with electrode-scalp impedance kept below 5 kΩ and the left mastoid as the online reference. For consistency with previous studies (Cheng et al., 2012; Zhang et al., 2014), Experiment 1 analyzed signals from six electrodes: F3, F4, C3, C4, P3, and P4 (Figure 2B). Based on Cheng et al. (2012) and the results of Experiment 1, Experiment 2 simplified data acquisition and recorded only from F3 and F4.

Offline, the EEG was re-referenced to the average of the bilateral mastoids, band-pass filtered (0.01-30 Hz), segmented (-200 to 1000 ms), baseline-corrected (-200 to 0 ms), and cleared of trials with amplitudes exceeding ±150 μV. The MMR was measured as the mean amplitude within 300-500 ms after voice onset (Cheng et al., 2012; Zhang et al., 2014).
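A minimal MNE-Python sketch of this offline pipeline is given below; the raw file name, trigger extraction, event codes, and mastoid channel labels are assumptions, since the original analysis was not performed in Python.

```python
import mne

raw = mne.io.read_raw_edf("neonate01.edf", preload=True)   # hypothetical file
raw.set_eeg_reference(["M1", "M2"])                        # average of bilateral mastoids
raw.filter(l_freq=0.01, h_freq=30.0)                       # band-pass 0.01-30 Hz

events = mne.find_events(raw)                              # assumes a stim channel
event_id = {"standard": 1, "deviant": 2}                   # hypothetical codes
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.2, tmax=1.0,                   # -200..1000 ms segments
                    baseline=(-0.2, 0.0),                  # baseline correction
                    reject=dict(eeg=150e-6),               # MNE rejects on peak-to-peak,
                    preload=True)                          # approximating the ±150 µV rule

# MMR: mean amplitude 300-500 ms after voice onset at F3/F4 (deviant trials)
evoked = epochs["deviant"].average().pick(["F3", "F4"])
mmr_uv = evoked.copy().crop(tmin=0.3, tmax=0.5).data.mean(axis=1) * 1e6
```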

Statistical analyses were performed in SPSS Statistics 20.0 (IBM Corp., Somers, USA). Descriptive statistics are reported as mean ± standard deviation. The significance level was p < 0.05, with Bonferroni correction for multiple comparisons and Greenhouse-Geisser correction of degrees of freedom. Mean MMR amplitudes were submitted to a two-way repeated-measures ANOVA with the within-subject factors emotion (angry, fearful, happy) and electrode (Experiment 1: F3, F4, C3, C4, P3, P4; Experiment 2: F3, F4).
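For readers without SPSS, the same two-way repeated-measures ANOVA can be sketched in Python with statsmodels; the long-format input file is hypothetical, and note that AnovaRM does not apply the Greenhouse-Geisser correction used here.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one MMR mean amplitude per subject x emotion x electrode
df = pd.read_csv("mmr_exp2_long.csv")   # assumed columns: subject, emotion, electrode, mmr
res = AnovaRM(df, depvar="mmr", subject="subject",
              within=["emotion", "electrode"]).fit()
print(res)   # F and p for the two main effects and their interaction
```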

In Experiment 1, statistics were computed directly on the raw waveforms of the three emotional conditions rather than on conventional "difference waves" (as was also the case in Cheng et al. (2012)). This is mainly because the signal-to-noise ratio of neonatal data is much lower than that of healthy adults' ERP data (chiefly due to movement artifacts), so using difference waves would introduce appreciable noise (the subtraction would propagate the noise of the standard-stimulus condition into every emotional condition).

Experiment 2 used an oddball paradigm with reversed deviant and standard stimuli, which requires computing, for each emotion, the difference wave between the deviant and standard presentations of the same utterance (e.g., the happy difference wave equals the ERP to happy voices as deviants minus the ERP to happy voices as standards). Given the low signal-to-noise ratio of difference waves, Experiment 2 recruited more participants than Experiment 1 (29 vs. 18) to increase statistical power.
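A small helper corresponding to this computation, assuming epochs objects like those in the preprocessing sketch above, might look like:

```python
import mne

def difference_wave(epochs_dev: mne.Epochs, epochs_std: mne.Epochs) -> mne.Evoked:
    """Deviant-minus-standard difference wave for one emotion
    (e.g., happy-as-deviant ERP minus happy-as-standard ERP)."""
    return mne.combine_evoked(
        [epochs_dev.average(), epochs_std.average()], weights=[1, -1])
```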

3 Results

3.1 Experiment 1

The main effect of emotion was significant, F(2, 34) = 5.27, p = 0.012, ηp² = 0.235. The MMR elicited by happy voices (absolute amplitudes here and throughout; 3.49 ± 1.23 μV) was significantly larger than that elicited by angry voices (2.90 ± 1.25 μV), p = 0.010; the happy and fearful (3.12 ± 1.18 μV) conditions did not differ significantly, p = 0.138, nor did the fearful and angry conditions, p = 0.893. The main effect of electrode was not significant, F(5, 85) < 1. The emotion × electrode interaction was significant, F(10, 170) = 2.41, p = 0.025, ηp² = 0.125 (Figure 3). Simple-effects analyses showed a significant emotion effect at F3, F(2, 34) = 13.94, p < 0.001: the MMR to happy voices (3.92 ± 1.17 μV) was significantly larger than that to angry voices (2.40 ± 1.33 μV), p < 0.001; the MMR to happy voices was larger than that to fearful voices (3.15 ± 1.02 μV) only at a marginal level, p = 0.059; the fearful and angry conditions did not differ, p = 0.077. The emotion effect was also significant at F4, F(2, 34) = 21.12, p < 0.001: the MMR to happy voices (3.88 ± 1.07 μV) was significantly larger than that to angry (2.77 ± 0.90 μV, p < 0.001) and fearful (2.93 ± 1.07 μV, p = 0.001) voices; the fearful and angry conditions did not differ, p = 1.000. At the other four electrodes, the emotion effect was not significant, F(2, 34) < 1.

3.2 Experiment 2

This experiment yielded waveforms for the three emotions as deviant stimuli (Figure 4A) and as standard stimuli (Figure 4B), as well as the difference waves computed as deviant minus standard for the same emotional utterance (Figure 4C). When the emotional voices served as deviants, the main effect of emotion was significant, as in Experiment 1, F(2, 56) = 6.94, p = 0.002, ηp² = 0.197 (Figure 4A). The MMR to happy voices (absolute amplitude; 3.38 ± 1.14 μV) was significantly larger than that to angry (2.23 ± 1.56 μV, p = 0.009) and fearful voices (2.37 ± 1.37 μV, p = 0.008); the fearful and angry conditions did not differ, p = 1.000. The main effect of electrode was not significant, F(1, 28) < 1. When the emotional voices served as standards, the emotion effect was not significant (F(2, 56) < 1; happy = 1.45 ± 1.06 μV, angry = 1.43 ± 1.19 μV, fearful = 1.54 ± 1.15 μV; Figure 4B), nor was the electrode effect, F(1, 28) < 1. For the difference waves, the main effect of emotion was significant, F(2, 56) = 4.14, p = 0.021, ηp² = 0.129 (Figure 4C). The difference wave for happy voices (1.97 ± 1.64 μV) was larger than that for angry (0.75 ± 1.72 μV, p = 0.058; marginal) and fearful voices (0.88 ± 1.81 μV, p = 0.048); the fearful and angry conditions did not differ, p = 1.000. The main effect of electrode was not significant, F(1, 28) < 1.

Figure 3. Results of Experiment 1: MMR waveforms for the three emotional conditions and the neutral condition (electrodes F3 and F4). Note: the neutral waveform was averaged over standard-stimulus trials; because it comprises 12 times as many trials as each emotional condition, the waveform is smoother (neutral amplitudes were not entered into the statistical analyses).


Figure 4. Results of Experiment 2: raw and difference waveforms for the three emotional conditions (data averaged over electrodes F3 and F4). A, emotional voices as deviants, neutral voice as standard (replicating Experiment 1); B, emotional voices as standards, neutral voice as deviant; C, difference waves for each emotion (deviant minus standard).


4 Discussion

Using ERPs in two experiments, this study examined the neural responses of neonates within the first week of life (mean age 3 days) while they passively listened to prosody of different emotional categories (happy, angry, fearful). The frontal region of the neonatal brain (electrodes F3 and F4) discriminated the valence of emotional prosody, with positive (happy) voices eliciting a markedly larger MMR than negative (angry and fearful) voices. This result provides the first neural (electrophysiological) evidence for a positivity preference in neonatal processing of emotional prosody.

The MMR examined here is a positive EEG component elicited in the neonatal brain by novel (relative to standard) stimuli, corresponding to the mismatch negativity (MMN) generated over frontal (or fronto-central) regions in adults. The auditory MMN typically peaks 150-250 ms after stimulus onset and is obtained by subtracting the waveform to the standard stimuli from that to the novel stimuli (Näätänen, Paavilainen, Rinne, & Alho, 2007). The MMN reflects the brain's automatic detection of differences between stimuli; because its generation does not require attention, it is considered one of the ERP components best suited to studying brain function in infants. The MMR observed here and in other neonatal auditory studies (Cheng et al., 2012; Dehaene-Lambertz, 2000; Friederici et al., 2002; Leppänen et al., 2004; Ruusuvirta et al., 2009; Winkler et al., 2003) can be regarded as the developmental precursor of the MMN; because the neonatal brain is still highly immature, the MMR at this stage appears as a positive component with a delayed latency. Source analyses have localized the MMN/MMR to the superior temporal sulcus (STS), and this region (particularly the right STS) is precisely where adults process emotional prosody (Belin, Zatorre, Lafaille, Ahad, & Pike, 2000; Ethofer et al., 2012). Although the low spatial resolution of ERPs prevents us from asserting that the core brain regions for emotional prosody processing are already functionally differentiated at the neonatal stage, our results at least indicate that humans can automatically discriminate the valence of emotional prosody at birth and are more sensitive to positive emotional information.

Our conclusion of a neonatal positivity preference does not match our pre-experimental hypothesis, i.e., it is opposite to the finding of Cheng et al. (2012). We see three possible reasons. First, Cheng et al. (2012) used a variant of the oddball paradigm (a randomized design) in which two deviant stimuli (happy and fearful voices) were each mixed at 10% probability into the standard stimuli (neutral voices), and found a larger MMR to fearful than to happy voices. The present study used the classic oddball paradigm (a block design), mixing happy, angry, and fearful voices at 20% probability into three separate blocks, and found a larger MMR to happy than to angry and fearful voices. In the randomized design of Cheng et al. (2012), happy and fearful voices occurred within the same period, and the positive and negative emotional effects they elicited may have partly canceled each other out, reducing the validity of the results. Of course, which oddball scheme is better suited to this question awaits further discussion; we offer this only as a possible source of the discrepancy. Second, Cheng et al. (2012) ran their experiment while neonates were either awake or asleep, whereas the present study strictly controlled the participants' state, collecting ERP data only during active sleep (analogous to adult REM sleep). Given that the MMN/MMR is modulated by the sleep-wake state (Hirasawa et al., 2002; Zhang et al., 2014), this may also explain the inconsistency. Third, because of eye movements, body movements, and other artifacts, the signal-to-noise ratio of neonatal ERP data is far lower than that of adult data, reducing the reliability of any single experiment. We therefore believe that only a continued series of experiments can give an accurate answer to the question of neonatal emotion bias.

Following the behavioral experiments of Mastropieri and Turkewitz (1999) and Farroni et al. (2007), the present study provides the first neuroelectrophysiological evidence for a positivity preference in neonatal emotion processing. Besides studies directly examining neonates' processing of positive versus negative emotion, other findings indirectly support a neonatal preference for positive emotion. For example, many studies have found that neonates prefer "infant-directed speech" (a speaking style with raised pitch, exaggerated stress, and a more cheerful intonation) to "adult-directed speech" (Cooper & Aslin, 1990; Singh, Morgan, & Best, 2002), and infant-directed but not adult-directed speech significantly activates the neonatal frontal region (Saito et al., 2007). Neonates also prefer their mother's voice to a stranger's (DeCasper & Fifer, 1980). One possible reason for these findings is that infant-directed speech and the mother's voice typically carry more positive emotion (Saito, Fukuhara, Aoyama, & Toshima, 2009; Singh et al., 2002), so they can serve as indirect evidence for a positivity preference. As noted above, although the negativity bias is a general phenomenon in emotion processing, it is observed reliably only after infants reach 6-7 months of age; before that, infants seem more interested in positive emotional information (Vaish et al., 2008). The shift of emotion-processing preference from positive to negative can be explained by the range-frequency hypothesis (Parducci, 1995). In brief, neonates and young infants frequently receive positive emotional signals from caregivers in daily life; learning the association between caregivers' positive emotional cues (e.g., happy voices or expressions) and good care (hugging, caressing, feeding) helps them obtain more physical nourishment and psychological comfort from caregivers. Conversely, at this stage they are rarely exposed to negative emotional environments, and because their motor abilities are insufficient to actively escape danger, heightened attention to negative cues would confer no obvious survival advantage. Only after 6-7 months, when motor abilities develop rapidly and infants begin actively exploring the world around them, do negative signals from caregivers (e.g., a parent's frightened expression or scolding) gradually increase, and infants then need to process such negative information faster and more accurately (e.g., Grossmann et al., 2005; 2010; Hoehl & Striano, 2010; Peltola et al., 2009) so as to avoid or reduce harm. Thus, infants first show a positivity preference in emotion processing after birth and only later develop a negativity bias, a sequence that plays an important role in early human development. Understanding this developmental pattern can help us design more scientifically grounded child-rearing practices and detect infants with emotional and cognitive developmental disorders (e.g., autism) earlier.

In summary, this study set out to answer two questions: can neonates discriminate different categories of emotion, and is their processing of emotional information biased toward positive or negative information? Using the classic oddball paradigm and an oddball paradigm with reversed deviant and standard stimuli, we measured the MMR elicited by happy, angry, and fearful voices in separate blocks. The results of the two experiments consistently show that the neonatal brain can automatically discriminate positive from negative emotional prosody, but cannot yet distinguish the two negative emotions, anger and fear. More importantly, happy voices elicited a larger MMR than both kinds of negative voices, confirming at the neuroelectrophysiological level a positivity preference in neonatal processing of emotional prosody. We suggest that this postnatal positivity preference is a cognitive pattern consistent with evolutionary principles, helping humans obtain more food and caregiver affection during the earliest period of extrauterine development.

References

Arimitsu T., Uchida-Ota M., Yagihashi T., Kojima S., Watanabe S., Hokuto I., … Minagawa-Kawai Y . ( 2011).

Functional hemispheric specialization in processing phonemic and prosodic auditory changes in neonates

Frontiers in Psychology, 2, 202.

URL     PMID:3173826      [本文引用: 1]

This study focuses on the early cerebral base of speech perception by examining functional lateralization in neonates for processing segmental and suprasegmental features of speech. For this purpose, auditory evoked responses of full-term neonates to phonemic and prosodic contrasts were measured in their temporal area and part of the frontal and parietal areas using near-infrared spectroscopy (NIRS). Stimuli used here were phonemic contrast /itta/ and /itte/ and prosodic contrast of declarative and interrogative forms /itta/ and /itta?/. The results showed clear hemodynamic responses to both phonemic and prosodic changes in the temporal areas and part of the parietal and frontal regions. In particular, significantly higher hemoglobin (Hb) changes were observed for the prosodic change in the right temporal area than for that in the left one, whereas Hb responses to the vowel change were similarly elicited in bilateral temporal areas. However, Hb responses to the vowel contrast were asymmetrical in the parietal area (around supra marginal gyrus), with stronger activation in the left. These results suggest a specialized function of the right hemisphere in prosody processing, which is already present in neonates. The parietal activities during phonemic processing were discussed in relation to verbal-auditory short-term memory. On the basis of this study and previous studies on older infants, the developmental process of functional lateralization from birth to 2 years of age for vowel and prosody was summarized.

Belin P., Fecteau S., & Bédard C . ( 2004).

Thinking the voice: Neural correlates of voice perception

Trends in Cognitive Sciences, 8( 3), 129-135.

URL     PMID:15301753      [本文引用: 1]

The human voice is the carrier of speech, but also an ’auditory face’ that conveys important affective and identity information. Little is known about the neural bases of our abilities to perceive such paralinguistic information in voice. Results from recent neuroimaging studies suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and support a neurocognitive model of voice perception largely similar to that proposed for face perception.

Belin P., Zatorre R.J., Lafaille P., Ahad P., & Pike B . ( 2000).

Voice-selective areas in human auditory cortex

Nature, 403(6767), 309-312.

URL     PMID:10659849      [本文引用: 1]

Abstract The human voice contains in its acoustic structure a wealth of information on the speaker's identity and emotional state which we perceive with remarkable ease and accuracy. Although the perception of speaker-related features of voice plays a major role in human communication, little is known about its neural basis. Here we show, using functional magnetic resonance imaging in human volunteers, that voice-selective regions can be found bilaterally along the upper bank of the superior temporal sulcus (STS). These regions showed greater neuronal activity when subjects listened passively to vocal sounds, whether speech or non-speech, than to non-vocal environmental sounds. Central STS regions also displayed a high degree of selectivity by responding significantly more to vocal sounds than to matched control stimuli, including scrambled voices and amplitude-modulated noise. Moreover, their response to stimuli degraded by frequency filtering paralleled the subjects' behavioural performance in voice-perception tasks that used these stimuli. The voice-selective areas in the STS may represent the counterpart of the face-selective areas in human visual cortex; their existence sheds new light on the functional architecture of the human auditory cortex.

Brük C., Kreifelts B., & Wildgruber D . ( 2011).

Emotional voices in context: A neurobiologicalmodel of multimodal affective information processing

Physics of Life Reviews, 8( 4), 383-403.

URL     PMID:22035772      [本文引用: 1]

78 Emotional voice perception relies on specific temporal and frontal brain regions. 78 Moreover, a contribution of limbic structures has been observed. 78 Each structure can be tied to distinct subprocesses mediating vocal affect decoding. 78 Subprocesses range from basic stages of acoustic analysis to evaluation of meaning.

Cheng Y.W., Lee S. Y., Chen H. Y., Wang P. Y., & Decety J . ( 2012).

Voice and emotion processing in the human neonatal brain

Journal of Cognitive Neuroscience, 24( 6), 1411-1419.

URL     PMID:22360593      [本文引用: 17]

Although the voice-sensitive neural system emerges very early in development, it has yet to be demonstrated whether the neonatal brain is sensitive to voice perception. We measured the EEG mismatch response (MMR) elicited by emotionally spoken syllables "dada" along with correspondingly synthesized nonvocal sounds, whose fundamental frequency contours were matched, in 98 full-term newborns aged 1-5 days. In Experiment 1, happy syllables relative to nonvocal sounds elicited an MMR lateralized to the right hemisphere. In Experiment 2, fearful syllables elicited stronger amplitudes than happy or neutral syllables, and this response had no sex differences. In Experiment 3, angry versus happy syllables elicited an MMR, although their corresponding nonvocal sounds did not. Here, we show that affective discrimination is selectively driven by voice processing per se rather than low-level acoustical features and that the cerebral specialization for human voice and emotion processing emerges over the right hemisphere during the first days of life.

Cooper R. P., & Aslin R. N, . ( 1990).

Preference for infant-directed speech in the first month after birth

Child Development, 61( 5), 1584-1595.

URL     PMID:2245748      [本文引用: 1]

http://www.jstor.org/stable/1130766

DeCasper A. J., & Fifer W. P, . ( 1980).

Of human bonding: Newborns prefer their mothers' voices

Science, 208( 4448), 1174-1176.

URL     PMID:7375928      [本文引用: 1]

By sucking on a nonnutritive nipple in different ways, a newborn human could produce either its mother's voice or the voice of another female. Infants learned how to produce the mother's voice and produced it more often than the other voice. The neonate's preference for the maternal voice suggests that the period shortly after birth may be important for initiating infant bonding to the mother.

Decety, J., & Howard L. H, . ( 2013).

The role of affect in the neurodevelopment of morality

Child Development Perspectives, 7( 1), 49-54.

URL     [本文引用: 1]

Human social existence is characterized by an intuitive sense of fairness, concern for others, and the observance of cultural norms. This prosocial sensitivity is the foundation for adult morality, emanating from the sophisticated integration of emotional, motivational, and cognitive mechanisms across development. In this article, we discuss how an integrated neurodevelopmental approach helps us understand moral judgment and behavior. We examine data emphasizing the importance of affect in moral development and we suggest that moral cognition is underpinned by specific, although not unique, neural networks. The regions recruited in moral cognition underlie specific states of emotion, along with cognitive and motivational processes, which emerge and interconnect over the course of development to produce adaptive social behavior.

Dehaene-Lambertz,G. ( 2000).

Cerebral specialization for speech and non-speech stimuli in infants

Journal of Cognitive Neuroscience, 12( 3), 449-460.

URL     [本文引用: 2]

Early cerebral specialization and lateralization for auditory processing in 4-month-old infants was studied by recording high-density evoked potentials to acoustical and phonetic changes in a series of repeated stimuli (either tones or syllables). Mismatch responses to these stimuli exhibit a distinct topography suggesting that different neural networks within the temporal lobe are involved in the perception and representation of the different features of an auditory stimulus. These data confirm that specialized modules are present within the auditory cortex very early in . However, both for syllables and continuous tones, higher voltages were recorded over the left hemisphere than over the right with no significant interaction of hemisphere by type of stimuli. This suggests that there is no greater left hemisphere involvement in phonetic processing than in acoustic processing during the first months of life.

Ethofer T., Bretscher J., Gschwind M., Kreifelts B., Wildgruber D., & Vuilleumier P . ( 2012).

Emotional voice areas: Anatomic location, functional properties, and structural connections revealed by combined fMRI/DTI

Cerebral Cortex, 22( 1), 191-200

URL     PMID:21625012      [本文引用: 1]

We determined the location, functional response profile, and structural fiber connections of auditory areas with voice- and emotion-sensitive activity using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging. Bilateral regions responding to emotional voices were consistently found in the superior temporal gyrus, posterolateral to the primary auditory cortex. Event-related fMRI showed stronger responses in these areas to voices-expressing anger, sadness, joy, and relief, relative to voices with neutral prosody. Their neural responses were primarily driven by prosodic arousal, irrespective of valence. Probabilistic fiber tracking revealed direct structural connections of these "emotional voice areas" (EVA) with ipsilateral medial geniculate body, which is the major input source of early auditory cortex, as well as with the ipsilateral inferior frontal gyrus (IFG) and inferior parietal lobe (IPL). In addition, vocal emotions (compared with neutral prosody) increased the functional coupling of EVA with the ipsilateral IFG but not IPL. These results provide new insights into the neural architecture of the human voice processing system and support a crucial involvement of IFG in the recognition of vocal emotions, whereas IPL may subserve distinct auditory spatial functions, consistent with distinct anatomical substrates for the processing of "how" and "where" information within the auditory pathways.

Fan Y. T., Hsu Y. Y., & Cheng Y. W . ( 2013).

Sex matters: n- back modulates emotional mismatch negativity

NeuroReport, 24( 9), 457-463.

URL     [本文引用: 1]

Farroni T., Menon E., Rigato S., & Johnson M. H . ( 2007).

The perception of facial expressions in newborns

European Journal of Developmental Psychology, 4( 1), 2-13.

URL     PMID:2836746      [本文引用: 2]

The ability of newborns to discriminate and respond to different emotional facial expressions remains controversial. We conducted three experiments in which we tested newborns' preferences, and their ability to discriminate between neutral, fearful, and happy facial expressions, using visual preference and habituation procedures. In the first two experiments, no evidence was found that newborns discriminate, or show a preference between, a fearful and a neutral face. In the third experiment, newborns looked significantly longer at a happy facial expression than a fearful one. We raise the possibility that this preference reflects experience acquired over the first few days of life. These results show that at least some expressions are discriminated and preferred in newborns only a few days old.

Fernald,A. ( 1993).

Approval and disapproval: Infant responsiveness to vocal affect in familiar and unfamiliar languages

Child Development, 64( 3), 657-674.

URL     PMID:8339687      [本文引用: 1]

In a series of 5 auditory preference experiments, 120 5-month-old infants were presented with Approval and Prohibition vocalizations in infant-directed (ID) and adult-directed (AD) English, and in ID speech in nonsense English and 3 unfamiliar languages, German, Italian, and Japanese. Dependent measures were looking-time to the side of stimulus presentation, and positive and negative facial affect. No consistent differences in looking-time were found. However, infants showed small but significant differences in facial affect in response to ID vocalizations in every language except Japanese. Infants smiled more to Approvals, and when they showed negative affect, it was more likely to occur in response to Prohibitions. Infants did not show differential affect in response to Approvals and Prohibitions in AD speech. The results indicate that young infants can discriminate affective vocal expressions in ID speech in several languages and that ID speech is more effective than AD speech in eliciting infant affect.

Ferrari V., Bradley M. M., Codispoti M., & Lang P. J . ( 2010).

Detecting novelty and significance

Journal of Cognitive Neuroscience, 22( 2), 404-411.

URL     [本文引用: 1]

Flom, R., & Bahrick L. E, . ( 2007).

The development of infant discrimination of affect in multimodal and unimodal stimulation: The role of intersensory redundancy

Developmental Psychology, 43( 1), 238-252.

URL     PMID:17201522      [本文引用: 1]

Abstract This research examined the developmental course of infants' ability to perceive affect in bimodal (audiovisual) and unimodal (auditory and visual) displays of a woman speaking. According to the intersensory redundancy hypothesis (L. E. Bahrick, R. Lickliter, & R. Flom, 2004), detection of amodal properties is facilitated in multimodal stimulation and attenuated in unimodal stimulation. Later in development, however, attention becomes more flexible, and amodal properties can be perceived in both multimodal and unimodal stimulation. The authors tested these predictions by assessing 3-, 4-, 5-, and 7-month-olds' discrimination of affect. Results demonstrated that in bimodal stimulation, discrimination of affect emerged by 4 months and remained stable across age. However, in unimodal stimulation, detection of affect emerged gradually, with sensitivity to auditory stimulation emerging at 5 months and visual stimulation at 7 months. Further temporal synchrony between faces and voices was necessary for younger infants' discrimination of affect. Across development, infants first perceive affect in multimodal stimulation through detecting amodal properties, and later their perception of affect is extended to unimodal auditory and visual stimulation. Implications for social development, including joint attention and social referencing, are considered. Copyright 2006 APA, all rights reserved.

Flom, R., & Pick A. D, . ( 2012).

Dynamics of infant habituation: Infants’ discrimination of musical excerpts

Infant Behavior and Development, 35( 4), 697-704.

URL     PMID:22982268      [本文引用: 1]

Sch02ner and Thelen (2006) summarized the results of many habituation studies as a set of generalizations about the emergence of novelty preferences in infancy. One is that novelty preferences emerge after fewer trials for older than for younger infants. Yet in habituation studies using an infant-controlled procedure, the standard criterion of habituation is a 50% decrement in looking regardless of he ages of the participants. If younger infants require more looking to habituate than do older infants, it might follow that novelty preferences will emerge for younger infants when a more stringent criterion is imposed, e.g., a 70% decrement in looking. Our earlier investigation of infants’ discrimination of musical excerpts provides a basis and an opportunity for assessing this idea. Flom et al. (2008) found that 9-month-olds, but not younger infants, unambiguously discriminate “happy” and “sad” musical excerpts. The purpose of the current study was to examine younger infants’ discrimination of happy and sad musical excerpts using a more stringent, 70% habituation criterion. In Experiment 1, 5- and 7-month olds were habituated to three musical excerpts rated as happy or sad. Following habituation infants were presented with two musical excerpts from the other affect group. Infants at both ages showed significant discrimination. In Experiment 2, 5- and 7-month-olds were presented with two new excerpts from the same affective group as the habituation excerpts. The infants did not discriminate these novel, yet affectively similar excerpts. In Experiment 3, 5- and 7-month-olds discriminated individual happy and sad excerpts. These results replicate those for the older, 9-month-olds in the previous investigation. The results are important as they demonstrate that whether infants show discrimination using an infant-controlled procedure is affected by the researchers’ chosen criterion of habituation.

Friederici A. D., Friedrich M., & Weber C . ( 2002).

Neural manifestation of cognitive and precognitive mismatch detection in early infancy

NeuroReport, 13( 10), 1251-1254.

URL     PMID:12151780      [本文引用: 2]

We recorded event-related potentials (ERPs) in 2-month-old infants in two different states of alertness: awake and asleep. Syllables varying in vowel duration (long vs short) were presented in an oddball paradigm, known to elicit a mismatch brain response. ERPs of both groups showed a mismatch response reflected in a positivity followed by a frontal negativity. While the positivity was present as a function of the stimulus type (present for long deviants only), the negativity varied as a function of the state of alertness (present for awake infants only). These data indicate a functional separation between precognitive and cognitive aspects of duration mismatch essential for the distinction between long and short vowels during early infancy.

Frühholz S., & Grandjean D . ( 2013).

Processing of emotional vocalizations in bilateral inferior frontal cortex

Neuroscience and Biobehavioral Reviews, 37( 10), 2847-2855.

URL     PMID:24161466      [本文引用: 1]

A current view proposes that the right inferior frontal cortex (IFC) is particularly responsible for attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Although some studies seem to support this view, an exhaustive review of all recent imaging studies points to an important functional role of both the right and the left IFC in processing vocal emotions. Second, besides a supposed predominant role of the IFC for an attentive processing and evaluation of emotional voices in IFC, these recent studies also point to a possible role of the IFC in preattentive and implicit processing of vocal emotions. The studies specifically provide evidence that both the right and the left IFC show a similar anterior-to-posterior gradient of functional activity in response to emotional vocalizations. This bilateral IFC gradient depends both on the nature or medium of emotional vocalizations (emotional prosody versus nonverbal expressions) and on the level of attentive processing (explicit versus implicit processing), closely resembling the distribution of terminal regions of distinct auditory pathways, which provide either global or dynamic acoustic information. Here we suggest a functional distribution in which several IFC subregions process different acoustic information conveyed by emotional vocalizations. Although the rostro-ventral IFC might categorize emotional vocalizations, the caudo-dorsal IFC might be specifically sensitive to their temporal features. (C) 2013 Elsevier Ltd. All rights reserved.

Grossmann,T. ( 2010).

The development of emotion perception in face and voice during infancy

Restorative Neurology and Neuroscience, 28( 2), 219-236.

URL     PMID:20404410      [本文引用: 1]

Purpose: Interacting with others by reading their emotional expressions is an essential social skill in humans. How this ability develops during infancy and what brain processes underpin infants' perception of emotion in different modalities are the questions dealt with in this paper.Methods: Literature review.Results: The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that provide insights into the brain processes underlying infants' developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for our understanding of developmental models of emotion processing are discussed.Conclusions: The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.

Grossmann T., Striano T., & Friederici A. D . ( 2005).

Infants' electric brain responses to emotional prosody

NeuroReport, 16( 16), 1825-1828.

URL     [本文引用: 3]

Grossmann T., Oberecker R., Koch S. P., Friederici A. D . ( 2010).

The developmental origins of voice processing in the human brain

Neuron, 65( 6), 852-858.

URL     PMID:2852650      [本文引用: 1]

In human adults, voices are processed in specialized brain regions in superior temporal cortices. We examined the development of this cortical organization during infancy by using near-infrared spectroscopy. In experiment 1, 7-month-olds but not 4-month-olds showed increased responses in left and right superior temporal cortex to the human voice when compared to nonvocal sounds, suggesting that voice-sensitive brain systems emerge between 4 and 7 months of age. In experiment 2, 7-month-old infants listened to words spoken with neutral, happy, or angry prosody. Hearing emotional prosody resulted in increased responses in a voice-sensitive region in the right hemisphere. Moreover, a region in right inferior frontal cortex taken to serve evaluative functions in the adult brain showed particular sensitivity to happy prosody. The pattern of findings suggests that temporal regions specialize in processing voices very early in development and that, already in infancy, emotions differentially modulate voice processing in the right hemisphere. 78 Temporal cortex specializes in processing human voices during infancy 78 Emotion specifically enhances voice processing in the right hemisphere in infants 78 Deeper evaluation of happy speech in infants' right inferior frontal cortex

Háden G. P., Stefanics G., Vestergaard M. D., Denham S. L., Sziller I., & Winkler I . ( 2009).

Timbre-independent extraction of pitch in newborn infants

Psychophysiology, 46( 1), 69-74.

URL     PMID:19055501      [本文引用: 1]

The ability to separate pitch from other spectral sound features, such as timbre, is an important prerequisite of veridical auditory perception underlying speech acquisition and music cognition. The current study investigated whether or not newborn infants generalize pitch across different timbres. Perceived resonator size is an aspect of timbre that informs the listener about the size of the sound source, a cue that may be important already at birth. Therefore, detection of infrequent pitch changes was tested by recording event-related brain potentials in healthy newborn infants to frequent standard and infrequent pitch-deviant sounds while the perceived resonator size of all sounds was randomly varied. The elicitation of an early negative and a later positive discriminative response by deviant sounds demonstrated that the neonate auditory system represents pitch separately from timbre, thus showing advanced pitch processing capabilities.

Hawk S. T., Van Kleef G. A., Fischer A. H.,& Van Der Schalk, J. , .( 2009).

"Worth a thousand words": Absolute and relative decoding of nonlinguistic affect vocalizations

Emotion, 9( 3), 293-305.

URL     PMID:19485607      [本文引用: 1]

Abstract The authors compared the accuracy of emotion decoding for nonlinguistic affect vocalizations, speech-embedded vocal prosody, and facial cues representing 9 different emotions. Participants (N = 121) decoded 80 stimuli from 1 of the 3 channels. Accuracy scores for nonlinguistic affect vocalizations and facial expressions were generally equivalent, and both were higher than scores for speech-embedded prosody. In particular, affect vocalizations showed superior decoding over the speech stimuli for anger, contempt, disgust, fear, joy, and sadness. Further, specific emotions that were decoded relatively poorly through speech-embedded prosody were more accurately identified through affect vocalizations, suggesting that emotions that are difficult to communicate in running speech can still be expressed vocally through other means. Affect vocalizations also showed superior decoding over faces for anger, contempt, disgust, fear, sadness, and surprise. Facial expressions showed superior decoding scores over both types of vocal stimuli for joy, pride, embarrassment, and "neutral" portrayals. Results are discussed in terms of the social functions served by various forms of nonverbal emotion cues and the communicative advantages of expressing emotions through particular channels.

Hirasawa K., Kurihara M., & Konishi Y . ( 2002).

The relationship between mismatch negativity and arousal level. Can mismatch negativity be an index for evaluating the arousal level in infants?

Sleep Medicine, 3( S2), S45-S48.

URL     PMID:14592379      [本文引用: 2]

Background: Electrophysiological and behavioral studies have shown that stimulus relevance contributes to auditory processing in sleep and auditory stimuli changes the sleep stages. So we observed changes in auditory processing due to sleep stages by recording infant mismatch negativity (MMN) during different states and investigated the arousal mechanisms. Methods: Auditory event-related potentials (ERPs) of 26 neonates were recorded using high-density EGI EEG system. Stimuli consisted of 1000 Hz tones with 90% probability as standard and 1200 Hz with 10% probability as deviant. Study 1 was designed for the confirmation of the recording of MMN from neonates and Study 2 for investigating whether an appropriate stimulus onset asynchrony (SOA) of the stimulus would induce a clear difference in the latency or amplitude. Results: (Study 1) MMN were obtained from all subjects. No differences of the latencies, amplitudes and distribution due to arousal or sleep stage were observed. After the MMN response occurred, a prominent negativity like Nc was seen in response to deviant stimuli in active sleep and waking state. (Study 2) No distinct differences between the difference states were seen in any SOA. Conclusions: Only MMN did not characterize the arousal or sleep stage. But the modality of the auditory evoked potentials (AEPs) may differ according to the state, so further detailed investigation could enable the detection of the infants' state using the AEP.

Hoehl, S., &Striano, T. ( 2010).

The development of emotional face and eye gaze processing

Developmental Science, 13( 6), 813-825.

URL     PMID:20977553      [本文引用: 2]

Recent research has demonstrated that infants090005 attention towards novel objects is affected by an adult090005s emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away from objects. Furthermore, we examined how the processing of novel objects is affected by gaze direction and emotional expression. We hypothesized that an adult090005s fearful expression should be particularly salient when it is directed toward a referent in the environment. Furthermore, responses to objects should be increased if the face previously expressed fear toward the object. Three-month-olds did not show differential neural responses to fearful vs. neutral faces regardless of gaze direction. Six-month-olds showed an enhanced negative central (Nc) component for fearful relative to neutral faces looking toward objects, but not when eye gaze was averted away from the objects. Furthermore, 6-month-olds showed an enhanced Nc for objects that had been gaze-cued by a fearful compared to a neutral face. Nine-month-olds showed an enhanced Nc for fearful relative to neutral faces in both eye gaze conditions and showed an enhanced Nc for objects that had been gaze-cued by a neutral face. The findings are discussed in the context of social cognitive and brain development.

Hung A. Y., Ahveninen J., & Cheng Y . ( 2013).

Atypical mismatch negativity to distressful voices associated with conduct disorder symptoms

Journal of Child Psychology and Psychiatry, 54( 9), 1016-1027.

URL     PMID:23701279      [本文引用: 1]

Although a general consensus holds that emotional reactivity in youth with conduct disorder (CD) symptoms arises as one of the main causes of successive aggression, it remains to be determined whether automatic emotional processing is altered in this population.MethodsWe measured auditory event-related potentials (ERP) in 20 young offenders and 20 controls, screened for DSM-IV criteria of CD and evaluated using the youth version of Hare Psychopathy Checklist (PCL:YV), State-Trait Anxiety Inventory (STAI) and Barrett Impulsiveness Scale (BIS-11). In an oddball design, sadly or fearfully spoken ‘deviant’ syllables were randomly presented within a train of emotionally neutral ‘standard’ syllables.ResultsIn young offenders meeting with CD criteria, the ERP component mismatch negativity (MMN), presumed to reflect preattentive auditory change detection, was significantly stronger for fearful than sad syllables. No MMN differences for fearful versus sad syllables were observed in controls. Analyses of nonvocal deviants, matched spectrally with the fearful and sad sounds, supported our interpretation that the MMN abnormalities in juvenile offenders were related to the emotional content of sounds, instead of purely acoustic factors. Further, in the young offenders with CD symptoms, strong MMN amplitudes to fearful syllables were associated with high impulsive tendencies (PCL:YV, Factor 2). Higher trait and state anxiety, assessed by STAI, were positively correlated with P3a amplitudes to fearful and sad syllables, respectively. The differences in group-interaction MMN/P3a patterns to emotional syllables and nonvocal sounds could be speculated to suggest that there is a distinct processing route for preattentive processing of species-specific emotional information in human auditory cortices.ConclusionsOur results suggest that youths with CD symptoms may process distressful voices in an atypical fashion already at the preattentive level. This auditory processing abnormality correlated with increased impulsivity and anxiety. Our results may help to shed light on the neural mechanisms of aggression.

Ito T. A., Larsen J. T., Smith, N. K. &, Cacioppo, J. T. ( 1998).

Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations

Journal of Personality and Social Psychology, 75( 4), 887-900.

URL     PMID:9825526      [本文引用: 1]

Negative information tends to influence evaluations more strongly than comparably extreme positive information. To test whether this negativity bias operates at the evaluative categorization stage, the authors recorded event-related brain potentials (ERPs), which are more sensitive to the evaluative categorization than the response output stage, as participants viewed positive, negative, and neutral pictures. Results revealed larger amplitude late positive brain potentials during the evaluative categorization of (a) positive and negative stimuli as compared with neutral stimuli and (b) negative as compared with positive stimuli, even though both were equally probable, evaluatively extreme, and arousing. These results provide support for the hypothesis that the negativity bias in affective processing occurs as early as the initial categorization into valence classes.

LaBarbera J. D., Izard C. E., Vietze P., & Parisi S. A . ( 1976).

Four- and six-month-old infants’ visual response to joy, anger and neutral expressions

Child Development, 47( 2), 535-538.

URL     PMID:1269322      [本文引用: 1]

24 infants, 12 4-month-olds and 12 6-month-olds, were repeatedly shown slides of 3 facial expressions. The expressions were previously judged by observers to be indicators of joy, anger, and no emotion, respectively. The duration of the first visual fixation to each presentation of the slides was monitored for each subject. The data indicated that the infants looked at the joy expression significantly more than at either the anger or neutral expressions. The results suggest that infants are capable of discriminating emotion expressions earlier in their development than previous studies have implied.

Leppänen P. H. T., Guttorm T. K., Pihko E., Takkinen S., Eklund K. M., & Lyytinen H . ( 2004).

Maturational effects on newborn ERPs measured in the mismatch negativity paradigm

Experimental Neurology, 190( S1), S91-S101.

URL     PMID:15498547      [本文引用: 3]

The mismatch negativity (MMN) component of event-related potentials (ERPs), a measure of passive change detection, is suggested to develop early in comparison to other ERP components, and an MMN-like response has been measured even from preterm infants. The MMN response in adults is negative in polarity at about 150 200 ms. However, the response measured in a typical MMN paradigm can also be markedly different in newborns, even opposite in polarity. This has been suggested to be related to maturational factors. To verify that suggestion, we measured ERPs of 21 newborns during quiet sleep to rarely occurring deviant tones of 1100 Hz (probability 12%) embedded among repeated standard tones of 1000 Hz in an oddball sequence. Gestational age (GA) and two cardiac measures, vagal tone (V) and heart period (HP), were used as measures of maturation. GA and HP explained between 36% and 42% of the total variance of the individual ERP peak amplitude (the largest deflection of the difference wave at a time window of 150 375 ms) at different scalp locations. In the discriminant function analyses, GA and HP as classifying variables differentiated infants in whom the peak of the difference wave had positive polarity from those with a negative polarity at an accuracy level ranging from 72% to 91%. These results demonstrate that during quiet sleep, maturational factors explain a significant portion of the ERP difference wave amplitude in terms of its polarity, indicating that the more mature the ERPs are, the more positive the amplitude. The present study suggests that maturational effects should be taken into account in ERP measurements using MMN paradigms with young infants.

Mastropieri, D., &Turkewitz, G. ( 1999).

Prenatal experience and neonatal responsiveness to vocal expressions of emotion

Developmental Psychobiology, 35( 3), 204-214.

URL     PMID:10531533      [本文引用: 2]

Abstract Newborn differentiation of emotion and the relevance of prenatal experience in influencing responsiveness to emotion was tested by examining newborn responses to the presentation of a range of vocal expressions. Differential responding was observed, as indicated by an increase in eye opening behavior in response to the presentation of happy speech patterns. More importantly, differential responding was observed only when the infants listened to emotional speech as spoken by speakers of their maternal language. No evidence of discrimination was found in the groups of infants listening to the same vocal expressions in a novel language. The results suggest that as a consequence of prenatal exposure to the distinctive prosodic maternal speech patterns that specify different emotions and to the temporally related stimuli created by distinctive maternal physiological concomitants of emotion, the fetus learns to differentiate those emotional speech patterns typical of the infant's maternal language. 1999 John Wiley & Sons, Inc. Dev Psychobiol 35: 204 214, 1999

Näätänen R., Paavilainen P., Rinne T., & Alho K . ( 2007).

The mismatch negativity (MMN) in basic research of central auditory processing: A review

Clinical Neurophysiology, 118( 12), 2544-2590.

URL     PMID:17931964      [本文引用: 1]

In the present article, the basic research using the mismatch negativity (MMN) and analogous results obtained by using the magnetoencephalography (MEG) and other brain-imaging technologies is reviewed. This response is elicited by any discriminable change in auditory stimulation but recent studies extended the notion of the MMN even to higher-order cognitive processes such as those involving grammar and semantic meaning. Moreover, MMN data also show the presence of automatic intelligent processes such as stimulus anticipation at the level of auditory cortex. In addition, the MMN enables one to establish the brain processes underlying the initiation of attention switch to, conscious perception of, sound change in an unattended stimulus stream.

Parducci,G. ( 1995).

Happiness, pleasure, and judgment: The contextual theory and its applications. Hillsdale, NJ, US: Lawrence Erlbaum Associates

Inc.

[本文引用: 1]

Peltola M. J., Leppänen J. M., Mäki S., & Hietanen J. K . ( 2009).

Emergence of enhanced attention to fearful faces between 5 and 7 months of age

Social Cognitive and Affective Neuroscience, 4( 2), 134-142.

URL     PMID:2686224      [本文引用: 2]

The adult brain is endowed with mechanisms subserving enhanced processing of salient emotional and social cues. Stimuli associated with threat represent one such class of cues. Previous research suggests that preferential allocation of attention to social signals of threat (i.e. a preference for fearful over happy facial expressions) emerges during the second half of the first year. The present study was designed to determine the age of onset for infants' attentional bias for fearful faces. Allocation of attention was studied by measuring event-related potentials (ERPs) and looking times (in a visual paired comparison task) to fearful and happy faces in 5- and 7-month-old infants. In 7-month-olds, the preferential allocation of attention to fearful faces was evident in both ERPs and looking times, i.e. the negative central mid-latency ERP amplitudes were more negative, and the looking times were longer for fearful than happy faces. No such differences were observed in the 5-month-olds. It is suggested that an enhanced sensitivity to facial signals of threat emerges between 5 and 7 months of age, and it may reflect functional development of the neural mechanisms involved in processing of emotionally significant stimuli.

Rigato, S., Farroni, T. & Johnson M. H, . ( 2010).

The shared signal hypothesis and neural responses to expressions and gaze in infants and adults

Social Cognitive and Affective Neuroscience, 5( 1), 88-97.

URL     PMID:19858107      [本文引用: 1]

Event-related potentials were recorded from adults and 4-month-old infants while they watched pictures of faces that varied in emotional expression (happy and fearful) and in gaze direction (direct or averted). Results indicate that emotional expression is temporally independent of gaze direction processing at early stages of processing, and only become integrated at later latencies. Facial expressions affected the face-sensitive ERP components in both adults (N170) and infants (N290 and P400), while gaze direction and the interaction between facial expression and gaze affected the posterior channels in adults and the frontocentral channels in infants. Specifically, in adults, this interaction reflected a greater responsiveness to fearful expressions with averted gaze (avoidance-oriented emotion), and to happy faces with direct gaze (approach-oriented emotions). In infants, a larger activation to a happy expression at the frontocentral negative component (Nc) was found, and planned comparisons showed that it was due to the direct gaze condition. Taken together, these results support the shared signal hypothesis in adults, but only to a lesser extent in infants, suggesting that experience could play an important role.

Ruusuvirta T., Huotilainen M., Fellman V., & Näätänen R . ( 2009).

Numerical discrimination in newborn infants as revealed by event-related potentials to tone sequences

European Journal of Neuroscience, 30( 8), 1620-1624.

URL     PMID:19811535      [本文引用: 2]

Humans are able to attentively discriminate number from 6 months of age. However, the age of the emergence of this ability at the pre-attentive stage of processing remains unclear. Event-related potentials (ERPs) were recorded in newborn human infants aged from 3 to 5 days. At 500-ms intervals, the infants were passively exposed to 200-ms sequences of four tones. Each tone could be either 1000 or 1500 Hz in frequency. In most sequences (standards), the ratio of the tones of one frequency to those of the other frequency in a sequence was 2 : 2. In the remaining sequences (deviants, P = 0.1), this ratio was either 3 : 1 or 4 : 0. The mismatch response of ERPs could not be found for 3 : 1 deviants, but it was a robust finding for 4 : 0 deviants, showing the neurophysiological ability of the infants to register the larger deviant tandard difference. The findings suggest very early sensitivity to auditory numerical information in infancy.

Singh, L., Morgan, J. L., & Best, C. T. (2002).

Infants' listening preferences: Baby talk or happy talk?

Infancy, 3(3), 365-394.

[Cited in text: 2]

Saito, Y., Aoyama, S., Kondo, T., Fukumoto, R., Konishi, N., Nakamura, K., … Toshima, T. (2007).

Frontal cerebral blood flow change associated with infant-directed speech

Archives of Disease in Childhood. Fetal and Neonatal Edition, 92(2), F113-F116.

PMID: 16905571. [Cited in text: 2]

Objective: To examine the auditory perception of maternal utterances by neonates using near-infrared spectroscopy (NIRS). Methods: Twenty full-term, healthy neonates were included in this study. The neonates were tested in their cribs while they slept in a silent room. First, two probe holders were placed on the left and right sides of the forehead over the eyebrows using double-sided adhesive tape. The neonates were then exposed to auditory stimuli in the form of infant-directed speech (IDS) or adult-directed speech (ADS), sampled from each of the mothers, through an external auditory speaker. Results: A 2 (stimulus: IDS and ADS) × 2 (recording site: channel 1 (right side) and channel 2 (left side)) analysis of variance for these relative oxygenated haemoglobin values showed that IDS (mean = 0.25) increased brain function significantly (F = 3.51) compared with ADS (mean = −0.26). Conclusions: IDS significantly increased brain function compared with ADS. These results suggest that the emotional tone of maternal utterances could have a role in activating the brains of neonates to attend to the utterances, even while sleeping.

Saito, Y., Fukuhara, R., Aoyama, S., & Toshima, T. (2009).

Frontal brain activation in premature infants' response to auditory stimuli in neonatal intensive care unit

Early Human Development, 85(7), 471-474.

PMID: 19411147. [Cited in text: 1]

Focusing on the very few contacts with the mother's voice that NICU infants have in the womb as well as after birth, the present study examined whether they can discriminate between their mothers' utterances and those of female nurses in terms of the emotional bonding that is facilitated by prosodic utterances. Twenty-six premature infants were included in this study, and their cerebral blood flows were measured by near-infrared spectroscopy. They were exposed to auditory stimuli in the form of utterances made by their mothers and female nurses. A two (stimulus: mother and nurse) × two (recording site: right frontal area and left frontal area) analysis of variance (ANOVA) for these relative oxy-Hb values was conducted. The ANOVA showed a significant interaction between stimulus and recording site. The mother's and the nurse's voices activated the left frontal area in the same way, but elicited different reactions in the right frontal area. We presume that the nurse's voice might become associated with pain and stress for premature infants. Our results showed that the premature infants reacted differently to the different voice stimuli. Therefore, we presume that both mothers' and nurses' voices represent positive stimuli for premature infants because both activate the frontal brain. Accordingly, we cannot explain our results only in terms of a state-dependent marker of infantile individual differences, but must also address the stressful trigger of nurses' voices for NICU infants.

Telkemeyer, S., Rossi, S., Koch, S. P., Nierhaus, T., Steinbrink, J., Poeppel, D., … Wartenburger, I. (2009).

Sensitivity of newborn auditory cortex to the temporal structure of sounds

Journal of Neuroscience, 29(47), 14726-14733.

PMID: 19940167. [Cited in text: 1]

Understanding the rapidly developing building blocks of speech perception in infancy requires a close look at the auditory prerequisites for speech sound processing. Pioneering studies have demonstrated that hemispheric specializations for language processing are already present in early infancy. However, whether these computational asymmetries can be considered a function of linguistic attributes or a consequence of basic temporal signal properties is under debate. Several studies in adults link hemispheric specialization for certain aspects of speech perception to an asymmetry in cortical tuning and reveal that the auditory cortices are differentially sensitive to spectrotemporal features of speech. Applying concurrent electrophysiological (EEG) and hemodynamic (near-infrared spectroscopy) recording to newborn infants listening to temporally structured nonspeech signals, we provide evidence that newborns process nonlinguistic acoustic stimuli that share critical temporal features with language in a differential manner. The newborn brain preferentially processes temporal modulations especially relevant for phoneme perception. In line with multi-time-resolution conceptions, modulations on the time scale of phonemes elicit strong bilateral cortical responses. Our data furthermore suggest that responses to slow acoustic modulations are lateralized to the right hemisphere. That is, the newborn auditory cortex is sensitive to the temporal structure of the auditory input and shows an emerging tendency for functional asymmetry. Hence, our findings support the hypothesis that development of speech perception is linked to basic capacities in auditory processing. From birth, the brain is tuned to critical temporal properties of linguistic signals to facilitate one of the major needs of humans: to communicate.

Vaish, A., Grossmann, T., & Woodward, A. (2008).

Not all emotions are created equal: The negativity bias in social-emotional development

Psychological Bulletin, 134(3), 383-403.

PMID: 18444702. [Cited in text: 3]

There is ample empirical evidence for an asymmetry in the way that adults use positive versus negative information to make sense of their world; specifically, across an array of psychological situations and tasks, adults display a negativity bias, or the propensity to attend to, learn from, and use negative information far more than positive information. This bias is argued to serve critical evolutionarily adaptive functions, but its developmental presence and ontogenetic emergence have never been seriously considered. The authors argue for the existence of the negativity bias in early development and that it is evident especially in research on infant social referencing but also in other developmental domains. They discuss ontogenetic mechanisms underlying the emergence of this bias and explore not only its evolutionary but also its developmental functions and consequences. Throughout, the authors suggest ways to further examine the negativity bias in infants and older children, and they make testable predictions that would help clarify the nature of the negativity bias during early development.

Vaish, A., & Striano, T. (2004).

Is visual reference necessary? Contributions of facial versus vocal cues in 12-month-olds’ social referencing behavior

Developmental Science, 7(3), 261-269.

PMID: 15595366. [Cited in text: 1]

To examine the influences of facial versus vocal cues on infants' behavior in a potentially threatening situation, 12-month-olds on a visual cliff received positive facial-only, vocal-only, or both facial and vocal cues from mothers. Infants' crossing times and looks to mother were assessed. Infants crossed the cliff faster with multimodal and vocal than with facial cues, and looked more to mother in the Face Plus Voice compared to the Voice Only condition. The findings suggest that vocal cues, even without a visual reference, are more potent than facial cues in guiding infants' behavior. The discussion focuses on the meaning of infants' looks and the role of the voice in the regulation of social behavior.

Winkler, I., Kushnerenko, E., Horváth, J., Čeponienė, R., Fellman, V., Huotilainen, M., … Sussman, E. (2003).

Newborn infants can organize the auditory world

Proceedings of the National Academy of Sciences of the United States of America, 100(20), 11812-11815.

[Cited in text: 2]

Zhang, D. D., Liu, Y. Z., Hou, X. L., Sun, G. Y., Cheng, Y. W., & Luo, Y. J. (2014).

Discrimination of fearful and angry emotional voices in sleeping human neonates: A study of the mismatch brain responses

Frontiers in Behavioral Neuroscience, 8, 422.

PMID: 25538587. [Cited in text: 8]

Appropriate processing of human voices with different threat-related emotions is of evolutionarily adaptive value for the survival of individuals. Nevertheless, it is still not clear whether the sensitivity to threat-related information is present at birth. Using an odd-ball paradigm, the current study investigated the neural correlates underlying automatic processing of emotional voices of fear and anger in sleeping neonates. Event-related potential data showed that the fronto-central scalp distribution of the neonatal brain could discriminate fearful voices from angry voices; the mismatch response (MMR) was larger in response to the deviant stimuli of anger, compared with the standard stimuli of fear. Furthermore, this fear-anger MMR discrimination was observed only when neonates were in active sleep state. Although the neonates' sensitivity to threat-related voices is not likely associated with a conceptual understanding of fearful and angry emotions, this special discrimination in early life may provide a foundation for later emotion and social cognition development.

Zhang, D. D., Liu, Y. F., Hou, X. L., Zhou, C. L., Luo, Y. J., Ye, D. T., & Ding, H. Y. (2011).

Reference values for amplitude-integrated EEGs in infants from preterm to 3.5 months of age

Pediatrics, 127(5), e1280-e1287.

PMID: 21482614. [Cited in text: 1]

Amplitude-integrated electroencephalogram (aEEG) is a valuable tool for the continuous evaluation of functional brain maturation in infants. The amplitudes of the upper and lower margins of aEEGs are postulated to change with maturation and correlate with postmenstrual age (PMA). In this study we aimed to establish reliable reference values of aEEG amplitudes, which provide quantitative guidelines for assessing brain maturation as indicated by aEEG results in neonates and young infants. aEEGs from healthy infants (n = 274) with PMAs that ranged from 30 to 55 weeks were divided into 10 groups according to their PMAs. Two 5-minute segments were selected from each aEEG and were used to automatically calculate the upper and lower margins and bandwidths of the aEEG tracings. Interobserver agreement was achieved with an overall correlation of 0.99. The upper and lower margins of the aEEGs in both active and quiet sleep clearly rose in infants after the neonatal period. The bandwidth, defined as the graphic distance between the upper and lower margins, decreased almost monotonically throughout the PMA range from 30 to 55 weeks. The lower margin of the aEEG was positively correlated with PMA, with a larger rank correlation coefficient during quiet sleep (r = 0.89) than during active sleep (r = 0.49). Reference values of aEEG amplitudes were obtained for infants with a wide range of PMAs and constituted the basis for the quantitative assessment of aEEG changes with maturation in neonates and young infants. The normative amplitudes of aEEG margins, especially of the lower margin in quiet sleep, are recommended as a source of reference data for the identification of potentially abnormal aEEG results.
