心理科学进展, 2019, 27(3): 499-507 doi: 10.3724/SP.J.1042.2019.00499

研究前沿

交流手势认知理论

张恒超,

天津商业大学法学院心理学系, 天津 300134

Communicative gesture cognition theory

ZHANG Hengchao,

Department of Psychology, School of Law, Tianjin University of Commerce, Tianjin 300134, China

通讯作者: 张恒超, E-mail: zhhengch@126.com

收稿日期: 2017-12-1   网络出版日期: 2019-03-15

基金资助: 教育部人文社会科学研究青年基金项目(16YJC190029)

Received: 2017-12-1   Online: 2019-03-15

摘要

手势是交流互动中一种重要的非语言媒介, 手势不仅可以辅助语言交流而且具有独立的交流性; 作为和语言共同发生的非语言媒介, 手势交流有助于降低交流认知负荷。文章重点归纳和述评了基于手势和语言表达关系的交流手势理论、交流手势激活理论、交流手势的认知节省理论。未来研究需要进一步考虑交流手势实验研究情境自然性和控制严格性间的平衡, 交流手势和其他非语言因素间的关系, 交流手势认知研究的现实意义。

关键词: 交流 ; 手势 ; 认知

Abstract

Gesture is an important nonverbal medium in communicative interaction. Gesture not only assists verbal communication but also serves as an independent communicative channel. As a nonverbal medium co-occurring with language, gesture helps to reduce the cognitive load of communication. This paper summarizes and reviews theories of communicative gesture based on the expressive relationship between gesture and language, theories of communicative gesture activation, and the cognitive-saving theory of communicative gesture. Future research should further explore the balance between the naturalness and experimental rigor of research situations, the relationship between gesture and other nonverbal factors, and the practical significance of research on communicative gesture cognition.

Keywords: communication ; gesture ; cognition


本文引用格式

张恒超. (2019). 交流手势认知理论. 心理科学进展, 27(3), 499-507

ZHANG Hengchao. (2019). Communicative gesture cognition theory. Advances in Psychological Science, 27(3), 499-507

1 前言

交流是以口头语言为典型媒介, 辅以多种非语言媒介和线索(如:手势、面部、注视、对象可视性等)的人际互动方式, 以共同目的性、合作性、集体奖赏和个体责任等为主要特征; 交流互动回合中, 交流者彼此轮流担任说者和听者的角色, 随着交流进程的不断发展, 实现认知和行为的“冲突-协调”转换过程, 进而实现共同的交流目的(张恒超, 2013, 2017, 2018; Berezan, Yoo, & Christodoulidou, 2016; Brentari & Goldin-Meadow, 2017; Drijvers & Özyürek, 2017; Duran & Dale, 2014; Edelman, 2017; Krauss & Weinheimer, 1964; Matovic, Koch, & Forgas, 2014; Sacchi, Riva, & Aceto, 2016)。

在交流情境的众多非语言因素中, 手势的交流性尤其受到研究者们的关注。在各文化背景的语言交流中手势均会频繁出现, 甚至从未见过任何手势表达的盲人在交流中也使用手势, 手势成为语言交流不可分割的一部分(Iverson & Goldin-Meadow, 1998; Kang, Tversky, & Black, 2015; Novack & Goldin-Meadow, 2016; Weinberg, Fukawa-Connelly, & Wiesner, 2015)。手势可以为口语交流增加想象空间, 因为手势不像口头语言那样以语法规则为基础线性发生; 尤其在语言难以传达交流信息时, 手势将表现出表达交流意图的潜力。不论手势表达的是交流的表面义还是隐含义, 手势均具有交流参照性(例如, 当表达心或爱时, 双手合并形成心形); 手势对交流者来说具有许多现实交流功能, 包括重复强调、明示语言含义等, 甚至预先表达交流计划。概言之, 手势是交流参与者实现共享性交流认知的重要手段, 尤其是代表性、典型文化性的手势, 以及便于直观表达的模仿性手势。尽管手势和语言表达形式不同, 但是彼此在时间上和语义上互相伴随, 相辅相成地表达关联性的交流信息; 手势传达的意义是全方位的, 依靠视觉和模仿性想象, 而语言传达的意思依靠词汇和语法规则。

以往研究对于交流手势认知特征做了一定的分析探讨, 并提出了不同的理论解释, 典型的如:增长点理论(The Growth Point Theory)、信息封装假说(The Information Packaging Hypothesis); 词汇性手势生成模型(The Lexical Gesture Process Model)或词汇检索假说(The Lexical Retrieval Hypothesis)、模拟行为的手势框架理论(The Gesture-as-Simulated-Action Framework)、图像激活假说(The Image Activation Hypothesis); 共同范围模型(The Interface Model)、认知负荷降低假说(The Cognitive Load Reduction Hypothesis)。归纳而言, “增长点理论和信息封装假说”立足于交流过程中手势表达和语言表达间的关系, 即两个理论分别关注了相对于语言表达, 手势表达认知过程的共同性和互补性; “词汇性手势生成模型或词汇检索假说, 以及图像激活假说和模拟行为的手势框架理论”立足于手势的认知激活过程, 具体而言, 前者强调手势对语言认知加工的激活作用, 后者强调手势对具体化感知表征的激活和模拟并促进语言认知过程; “共同范围模型和认知负荷降低假说”立足于交流手势的认知加工机制和无意识性。三者是交流手势认知同一过程的三个方面。本文拟对交流手势认知理论分别做出归纳解释和评述。

2 基于手势和语言表达关系的交流手势理论

交流中手势和语言总是相伴发生, 在共同的交流目的下, 彼此间相互作用和沟通, 形成了统一的信息交换系统。交流手势的增长点理论和信息封装假说, 分别从交流互动中手势和语言信息沟通的关系上, 解释了手势的交流认知特征。

2.1 交流手势的“增长点理论”

McNeill和Duncan (2000)提出了交流手势的“增长点理论”, 认为手势和语言构成了交流中的集成系统, 手势的“视-空”表现形式和语音、语义、语法的线性规则表达形式, 在交流认知的不同层面上组建交流信息增长点或整体合成结构, 促进交流信息内容的不断发展和丰富。所谓的增长点是对交流认知中多线索合成信息不断递增发展的形象解释, 该理论重视手势和语言信息的合成性、共同性, 表达方式和过程的分层性、联合性。

Graziano和Gullberg (2013)研究中探讨了交流手势和语言互动的关联性, 实验材料为卡通图片故事, 研究中共招募了三种类型的被试:三个年龄组的儿童被试(4~5、6~7、8~10岁), 本国语成年被试, 第二语言成年被试。儿童组条件下, 被试先听故事录音, 之后向成年人复述(成年人是其亲人或老师), 儿童讲述期间, 成人持有对应的卡通图片故事材料, 不打断儿童的讲述, 但可以提供反馈。本国语成年被试条件下, 使用相同的材料, 随机两两配对, 其中一人先看卡片故事, 再向另一人讲述。第二语言成年被试(学习第二语言法语4年)条件下, 要求其向法语母语听者讲述故事。结果显示, 不同被试条件下, 尽管语言表达能力不同, 但是一致的特点是:语言表述流畅时, 手势相应更多, 流畅和不流畅表述条件下手势数量差异显著。而从流畅和不流畅语言表述间的对比看, 流畅语言表述时的手势主要是完整性手势, 而不流畅语言表述伴生的手势主要是不完整性手势。证实了:语言流畅交流中手势也连贯完整, 不流畅语言交流中, 手势表现出断续不连贯特点; 不同被试条件下的不流畅语言交流过程中, 手势几乎都没有完整性的表达; 当语言交流停止时, 手势也相应停止。因此手势和语言是一个集成交流系统, 作为两种不同的交流媒介, 彼此关联性地表达共同的交流内容。

从交流互动性特征来看, 增长点理论所强调的手势和语言间的表达关联性和共同性, 可以从手势对于语言理解促进性方面的实验结果中得到支持, 即手势和语言的共同发生特征有助于促进听者对于语言信息的理解(Koppensteiner, Stephan, & Jäschke, 2016; Post, van Gog, Paas, & Zwaan, 2013), Hostetter (2011)进一步指出, 手势对于交流语言理解的影响效果, 受交流对象和任务特点、手势和语言间的重叠特征等方面的综合性影响。但是, 有一点是明确的, 手势对于语言交流的促进性影响, 最终决定于交流互动过程中交流参与者彼此间共享多种线索, 比单一交流媒介(语言)所提供信息更为丰富, 传达的交流意图、期望更为确凿, 有助于交流互动效率效果的提高。

可见, 交流手势的增长点理论强调交流手势和语言间的共同表达性, 显然, 多媒介共同交流有助于交流共享性的建立, 并且该理论有助于说明多种线索对于交流信息的增量解释。但是, 增长点理论的不足之处在于只着眼于手势和语言媒介间的相似性、关联性, 而无法解释手势和语言媒介间的区别性, 换言之, 手势不应仅仅是语言的重复表达, 或者说仅仅从属于语言沟通的需要性, 不论是表达的方式、机制还是意识性等, 手势和语言认知加工过程存在显著不同的特点。所以增长点理论对于交流手势认知的解释有其合理性和便利性, 但是存在将手势交流简单归为重复性语言交流的倾向性。

2.2 交流手势的“信息封装假说”

与交流手势的增长点理论不同, Kita和Özyürek (2003)提出了交流手势的信息封装假说, 认为交流过程中手势和语言间的关系是互补的, 手势主要组织和封装了交流中的空间视觉信息, 以适应交流语言编码过程, 其对信息的展现与线性发生的语言模式相匹配。该理论强调两点:一是手势表达的信息是空间视觉表征(比如穿衣戴帽方式、桌椅摆放位置等的直观描摹), 交流语言在描述这些复杂空间信息方面相对不足(Hostetter, 2014); 二是当语言交流复杂信息时, 手势有助于将信息条理分解成适合语言发生的小段信息, 比如手势帮助将小段信息组合进入语言从句的编码过程中(Bock & Cutting, 1992)。手势本身是个人的空间行为, 在选择和组织视空信息单元方面具有更大的表达便利性, 例如, 当语言描述房间布局时, 说者可能使用两只手分别代表沙发和椅子, 直观描摹它们在房间摆放的确切位置和相对方向关系, 语言相应指示“沙发和椅子这样面对面摆放”, 手势信息和语言信息互补性“封装”并综合表达。

交流过程中手势易化语言表达并丰富补充语言信息的观点, 在以往研究中得到证实。和交流手势的增长点理论不同, 交流手势信息封装假说认为交流手势不是语言交流的附属。Kelly和Church (1998)与Ping和Goldin-Meadow (2008)的两项研究中要求儿童被试观看儿童讲解皮亚杰守恒任务的视频片段, 之后被试对语言信息做出判断, 比如一个视频中, 解说语言只表达了容器的高度, 同时手势描摹展现了容器的宽度, 被试一致性认为讲解者表达的是容器的高度和宽度, 结果表明:讲解者通过手势将语言信息一起“封装”编码, 听者也是将手势和语言信息合并解码。Hostetter, Alibali和Kita (2007)研究中使用了点子图作为实验材料, 任务为点子图描述任务, 实验中说者向听者描述的条件分两种:一种条件下, 只呈现点子, 由说者对其描述(困难描述条件, 说者自己决定点子构成什么形状); 一种条件下, 呈现的点子被线连接成几何图形(容易描述条件)。结果发现, 当点子已经明显连接为几何形状时, 语言描述更容易, 说者手势显著更少; 反之, 当点子图无任何提示时, 语言描述困难, 说者手势数量显著增加, 以辅助语言交流过程。

Trofatter, Kontra, Beilock和Goldin-Meadow (2015)指出手势和语言交流的互补性有助于表达某些言外之意, 实现听者对交流信息的完整理解。Koppensteiner, Stephan和Jäschke (2016)研究中使用政客政见演讲视频作为实验材料, 实验控制视频人物的交流手势, 从听者理解性的角度也证实, 手势交流影响交流语言中抽象观点的理解性和支持性。Cook, Duffy和Fenn (2013)在学习情境下也发现, 知识的语言讲解中有无手势指导显著影响学习成绩。手势交流对于语言的互补性, 源于语言认知过程同时具有抽象性和具体性表征特征, 交流语言认知加工过程不仅涉及到抽象语言规则的解码过程, 即语言加工中的抽象符号性, 还涉及到感知经验的重现(Arbib, Gasser, & Barrès, 2014; Glenberg & Gallese, 2012)。交流语言认知过程的混合性是手势和语言交流互补性的前提。另外, 语言交流的社会性决定了语言认知过程的意识性、策略性特征, 即语言认知加工是通过深思熟虑过程实现的; 与此对应, 语言交流中手势的发生发展体现为无意识性过程, 因此, 手势的互补性不仅表现在交流意图的沟通, 还表现在降低语言认知加工的认知负荷(Novack, Goldin-Meadow, & Woodward, 2015; Ping, Goldin-Meadow, & Beilock, 2014)。Alibali, Kita和Young (2000)还指出手势的认知影响性不仅表现在和语言的共同发生过程中, 还有助于促进交流语言信息的储存和提取。

归纳而言, 交流认知是一个综合性、复合性和动态发展性认知过程, 因此, 交流的共同目的性、人际互动性决定了手势和语言共同发生过程的关联性和共同性; 同时手势和语言认知表征特征的差异性决定了两者在信息沟通中的互补性。可以说, 在特定交流过程中, 增长点理论所关注的手势和语言间的共同性、联合性和集成性, 以及信息封装假说所强调的手势和语言的互补性, 客观上都是存在的。进一步还应该注意到, 手势和语言在交流中的作用关系不是孤立存在的, 一方面手势不仅可以表意, 现实情境下还可以表达态度、情感等多种信息; 另一方面现实交流中还存在大量的情境因素, 比如交流者间的物理距离, 交流对象的共同可视性, 交流者的现实社会身份, 交流文化习惯等等。未来研究应进一步拓展研究思路, 基于交流认知的社会性和现实多样性, 在实验控制严格性和自然性间做到适当的平衡, 在相对更为宽松和自然的实验情境下探查交流手势的认知特征, 这有助于克服和协调不同理论观点间的隔阂和分歧。

3 交流手势激活理论

手势和语言是人类交流在视觉形态和听觉形态上的两种表现形式, 两者表达形式尽管不同, 但是在共同的交流背景和目的下, 手势和语言间互相伴随发生并随着交流时间进程而彼此映射, 有研究者关注了这一映射过程中手势的激活作用, 词汇性手势生成模型或词汇检索假说侧重强调手势对语言抽象性表征的激活, 表现为促进语言的发生过程; 图像激活假说和模拟行为的手势框架理论侧重强调手势对于具体化感知心理表征的激活和描摹(de Marco, de Stefani, & Gentilucci, 2015; Graziano & Gullberg, 2013; Hadar & Butterworth, 1997; Hostetter & Alibali, 2010; Krauss, Chen, & Gottesman, 2000)。

3.1 词汇性手势生成模型或词汇检索假说

Krauss等(2000)的词汇性手势生成模型或词汇检索假说, 认为手势影响交流语言词汇的选择和表达的难易, 交流手势对于语言认知加工起到激活作用, 同时表现在对语言发生过程和理解过程的易化作用; 即手势有助于促进交流者内心词汇的激活, 尤其是空间语义表征的激活, 从而方便词汇的提取和理解。例如, 当说者准备表达“球滚下山”时, 会做出一个圆形的手势, 并交替旋转双手, 这有助于促进“滚”这一表征的激活, 说者更容易表达, 听者也更容易理解。以往研究发现, 交流过程中如果禁止被试使用手势, 说者语言显著不流畅, 断续更多(Rauscher, Krauss, & Chen, 1996); 当交流者语言表达困难时, 手势表达的频次相应增多(Morsella & Krauss, 2004)。

手势广泛出现于各种时空条件下的交流过程中, 如上所述, 其可以关联性表达(澄清界定、强调、直观展现等)语言信息, 还可以互补性表达语言未尽之意; 然而, 手势互动的根本目的仍然是增进共同理解性和默契性, 以及促进交流的高效性, 表现之一就是手势有时先于语言, 尤其在模糊语言交流情形下, 手势起到激发语言发生和理解的作用(Nicoladis, Pika, Yin, & Marentette, 2007; Pine, Bird, & Kirk, 2007)。

Beaudoin-Ryan和Goldin-Meadow (2014)实验中对比设立了允许手势组和禁止手势组, 实验任务要求儿童被试解释抽象的道德推理问题, 结果发现, 在语言解释过程中允许使用手势的儿童表达了更多的复合性观点, 而禁止使用手势的儿童观点单一和片面; 任务完成后, 研究者再次向被试问及类似的道德推理问题, 发现允许手势组儿童语言中的观点数量显著更多、更综合化。该实验的一个不同之处在于没有采用易于手势表达的空间表征问题(如家具摆放、容器形状等), 而是采用了抽象问题交流情境, 相对更好地区分和观察了简单的手势描摹和对语言发生过程的激活。Broaders和Goldin-Meadow (2010)的研究则通过控制听者的手势特征, 观察听者手势对于说者语言的激活作用, 实验情境中被试模拟担任目击证人, 结果也发现, 当被试被问及现场的细节问题时, 如果讯问者使用了相应的手势, 将有助于激活促进被试的语言报告过程, 比如, 当问到“他穿戴了什么?”时, 讯问者自然做出“戴帽子”手势, 被试显著倾向于使用“帽子”回答问题。但是, 研究结果中出现一个令人感兴趣的例外:讯问过程中, 目击证人被试语言交流中自发产生了大量的手势动作, 但是讯问者却忽视了这些手势交流的信息。这也为未来研究带来新的启示:某些相对敏感和特殊的实验情境是否会影响到手势和语言交流间的关系特征。Arnold, Kahn和Pancani (2012)的实验采用了参照性交流范式, 创设了物品交流选择和匹配任务, 交流对中听者为研究者同谋, 实验中说者被试负责语言指导听者, 在一个标有6个颜色点的木板上选择并放置物体, 交流者彼此对面站立, 听者身后设有一个电脑屏幕, 依次呈现任务中需要选择和摆放的靶对象, 双方中间桌子上放置了操作区。实验中要求被试只能以语言进行指导不能使用手直接指示或接触物品。实验条件区分了“期望条件”和“等待条件”, 前者交流中在被试语言指导之前, 实验者同谋听者就预先用手选择出了靶对象并做好摆放的准备, 后者是当被试语言指导结束后, 同谋听者才按照指令选择出靶对象。结果发现:同谋手势影响被试说者交流语言的发生, 具体表现为:语速更快, 词汇发音更短, 语音弱化且变化性小, 从不同的角度证实听者手势激活说者的语言生成过程。

3.2 图像激活假说和模拟行为的手势框架理论

图像激活假说认为, 手势表达和描摹出与交流有关的视觉空间图像; 同时, 手势图像性表征的保持有助于语言发生过程更好地传达信息(Hadar & Butterworth, 1997; Perniss, Özyürek, & Morgan, 2015)。与此对应, 模拟行为的手势框架理论认为, 当说者产生语言时, 自然激活与之相应的感知状态和行为的心理表征与模拟, 手势随之发生(Hostetter & Alibali, 2010; Sassenberg & van Der Meer, 2010)。综合而言, 两个理论均强调手势对于具体化感知表征的激活和模拟, 该过程伴随着语言交流自然发生, 并对语言认知过程产生促进作用。

手势对交流认知的促进过程很大程度上源于手势表达的动作化、直观性和具体化特征, 有助于引导交流者集中注意于交流问题表征中的感知操作信息, 相对于语言的抽象性符号表达过程, 有助于易化交流情境的理解和推理过程。Alibali, Spencer, Knox和Kita (2011)的实验任务是预测齿轮组中某个特定齿轮的运动方式, 比如, “如果齿轮组中的第一个齿轮转动到一个特定方向, 那么某一特定齿轮随之将会怎样运动?”, 一组被试语言解释时允许使用手势, 另一组不允许, 结果发现, 手势组被试通过使用手势动作模拟齿轮的运动规律, 显著使用感知操作策略推理齿轮的运动方式, 语言中几乎没有出现抽象策略的描述(通过计算齿轮组中齿轮的奇偶特征来推算特定齿轮的运动方式); 禁止手势组则反之。Goldin-Meadow (2015)使用数学等式问题情境, 要求儿童在解决等式过程中将思维过程以语言外显表达, 譬如, “我想让左边等于右边” (等价策略), 被试分两组, 一组只能使用语言, 另一组使用语言的同时, 鼓励手势模拟, 譬如, “在等式的左边挥动手, 然后在等式的右边也做挥手动作”, 学习任务结束后, 将两组中掌握等式问题的被试选择出来, 再完成迁移任务, 但迁移任务中不要求使用手势表达解题过程。fMRI的数据结果显示, 手势组儿童迁移任务中感觉运动区域显著激活, 表明学习任务中手势表达促进感觉运动表征的激活, 尽管迁移任务中不再使用手势, 但是感觉运动表征仍然被重复激活。Grenoble, Martinović和Baglini (2014)认为, 尽管语言交流过程中, 交流者会自觉产生手势动作, 但是实际上人们并未对语言表征和手势表征有意识做出明确区分, 交流中对于两种表征的时刻监控也不符合认知节省性原则, 从这一意义出发, 手势所激活的图像性表征和语言的抽象性表征间的配合特征一定程度上影响了交流的效率效果, 两种不同交流表征形式的并置, 使得手势成为一种有力的交流工具, 不仅可以将语言信息具体化、形象化, 而且可能对交流语言认知过程起到某种索引作用。

对照以上两种理论, 不论是手势对语言抽象性表征的激活和促进, 还是手势对于感知运动心理表征的激活和描摹, 均反映了交流手势认知互动过程的多样性、复杂性, 这两个过程是相辅相成不可分割的, 即手势对感知运动表征的激活是进一步激活和易化语言表达的前提。交流互动过程中, 语言不单纯是一组抽象的符号和规则, 这些符号仍然需要与感知动作、情感等相联系, 这是手势直观性、具体化表征激活、促进语言抽象性加工, 以及和语言表达相辅相成过程的重要基础, 手势激活过程导致交流认知从抽象性表征向具体化表征转变。换言之, 交流手势的激活作用为交流认知中多种心理表征的联合提供了一种机会, 比如, 当语言表达“把牛肉反复油煎后, 夹在面包中间”, 语言概念简单的抽象联合是“牛肉+油+煎锅+面包”, 但是伴随性的手势动作, 如“两手反复翻转+合上夹住”, 同时激活了人们感知运动表征和相关经验, 也激活和易化了语言讲解的流畅性、便利性和理解性。诚然, 手势交流激活作用的关键在于充分调动了视觉、动觉等综合性表征, 甚至可能包括特殊情绪情感的同时表达, 有助于指向和联系交流者记忆中的相关经历和经验。正如Louwerse (2011)所指出的, 手势对交流认知的影响源于对客观世界中交流对象多种模式的共现。

4 交流手势的认知节省理论

交流手势的认知负荷降低假说认为手势降低了语言认知加工对认知资源的需求(Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001); Kita和Özyürek (2003)在此基础上提出了“共同范围模型”, 认为手势是由行动发生器计划组织的, 口头语言是通过信息发生器计划组织的, 由于手势和语言分属于不同的交流认知系统, 因此交流过程中两者信息彼此间互动沟通, 既提高了交流效率又降低了认知负荷。

归纳而言, 交流手势降低交流认知负荷源于两个方面:一是手势是身体动作, 手势交流根植于感知运动系统的认知资源, 手势参与交流使交流者有更为丰富和充足的资源来管理交流认知努力, 即降低交流认知负荷, 节省的认知努力可以投入到交流的其余方面, 对于交流认知的深入发展是有益的(Hu, Ginns, & Bobis, 2015; Novack et al., 2015; Ping et al., 2014)。Goldin-Meadow, Nusbaum, Kelly和Wagner (2001)研究中要求被试解释自己对一系列数学问题的解题思路, 并要求同时记住一些无关联的刺激项目, 实验条件划分出使用手势组和不使用手势组, 结果显示, 手势组被试回忆无关项目的数量显著更多。Novack, Congdon, Hemani-Lopez和Goldin-Meadow (2014)采用数学等式问题作为实验材料, 通过手势组和无手势组学习比较, 进一步发现, 手势组被试对数学等式问题的概括化水平显著更高, 迁移性解决问题的水平显著更高。

总之, 手势交流降低认知负荷或表现出的交流认知节省性, 根本上源于手势动作本身和语言交流的显著不同, 手势是以身体动作表达抽象思维过程。尽管交流系统作为一个整体, 手势和语言交流方式彼此关联, 但是手势是不同于语言的视觉交流系统, 是一种直观表征性的动作行为。现实交流情境是复杂的, 交流认知中包含了大量不同层次和特征的心理表征, 共同组成特定的交流认知环境, 因此, 交流过程中信息的传递不能一味简单地借助于语言词汇和语法水平的相对抽象性表达, 手势交流的参与有助于调节交流认知环境中抽象表征和具体表征的相对灵活表达。虽然迄今为止, 研究者们尚没有就手势交流的认知机制形成一个一致而系统的认识和解释, 但是可以明确一点, 作为非语言交流媒介的一种重要方式, 手势交流集中于表现对象的一些具体化表征的细节, 而这些信息通常并不总是便于以语言规则进行条理性组织和表达。简言之, 正是因为手势认知资源的相对独立性, 表征的具体直观性, 以及手势表达的便利性, 使得交流手势的发生降低了交流认知负荷程度。

二是手势交流具有无意识性或内隐性特征。Novack和Goldin-Meadow (2015)研究中指出, 交流者的手势传达的思维过程信息不是语言信息的重复表达, 其超出了交流语言的信息范畴, 手势信息往往表现为隐含性、不断变化性和压缩式的, 但是现实交流中交流双方通常可以即时捕捉到并进行明确性解码; 交流者的手势时常透露出某些语言之外的新信息, 这些新信息尚处于语言表达的“呼之欲出”的状态, 接下来可能会明确进入语言交流的信息中。不同思路和目的的研究从不同的角度均发现, 当人们出声思维或解释思维过程时, 手势会无意识表达语言抽象规则之外的视觉空间策略或时空运动表征(Alibali et al., 2011); 手势还可以潜在地启示出语言表达之外的新策略(Brooks & Goldin-Meadow, 2015; Goldin-Meadow, Cook, & Mitchell, 2009); 或不自觉启发了解决问题的隐性知识(Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Sharma & Droch, 2015)。因此, 手势交流的内隐性特征是交流认知负荷降低和交流认知资源节省的重要影响因素。

可见, 手势是以直观的方式在特定交流情境下凸显某些动作元素, 如前所述, 不论是在空间问题表征, 抑或抽象道德问题表征方面, 手势在激活内隐性观点上都表现出独特的优势, 这可以说明手势动作在交流互动中不仅仅是对交流者注意的简单指引, 手势动作表征性特征和日常一般性操作对象的动作行为不同, 其目的不是处置和改变对象, 而是直观描摹和展现认知过程。从这一意义出发, 手势一方面发生机制不同于语言发生机制, 另一方面特定交流中当交流者同时使用手势和语言, 表达交流情境中的隐性知识和显性知识时, 隐性知识很可能也很容易拓展语言认知的意识性范畴, 即隐性知识的激活可能打破原有的明确的知识状态, 手势对于多元化信息的促进和发展, 不仅降低了交流认知负荷, 也促进交流认知整体上向更深入的层次发展。概言之, 交流手势降低交流认知负荷的效应源于多种不同类型的表征形式的并列使用。

5 启示

人们在说话时自然而然地产生手势, 有时甚至在静静思考时也会不自觉地使用手势。虽然交流手势在互动中扮演着重要的角色, 但手势不是单纯的动作输出, 它以身体动作方式展现了已存在的某种交流心理表征。从以上述评可见, 不论是手势和语言的关系, 手势对于语言表征和图像表征的激活, 以及手势使用对于交流认知负荷的降低, 都表明交流手势不是一种简单的挥手动作, 其具有重要的认知导向功能。尽管各交流手势认知理论阐述的着眼点不同, 但各理论对于手势认知的解释都是围绕手势的交流性特征展开的。未来研究应进一步思考以下几个方面的问题。

第一, 交流手势认知特征的实验研究, 需要考虑其他非语言交流线索的影响性。交流认知实验研究相对于个体认知研究而言更为复杂, 以往研究在以手势为研究变量的同时, 并没有对交流情境中的其他因素做出严格的控制或排除, 比如以往研究在创设“手势组和非手势组”时, 并没有排除“肢体表情、面部表情”等因素的存在和影响性。

第二, 交流手势认知的以往探讨, 典型立足于对语言交流的辅助性和独立交流性两个方面, 实际上现实交流情境下存在更多的交流背景因素, 比如以往研究重点关注了手势和语言间的关系, 那么手势和面部表情间是否存在相互作用关系?远程交流和面对面交流情境下的手势认知特征和功能是否是一样的呢?

第三, 交流认知的典型特点是互动性, 如上所述, 以往研究倾向于在对交流双方做出相对严格实验控制的条件下, 探讨手势认知特征, 比如, 控制说者手势分析听者反应或说者的认知变化性; 采用研究者同谋等。对于交流互动性的实验控制一定程度上排除了手势交流的复杂性和灵活性。未来研究应进一步尝试交流实验范式的创新和探索。

第四, 交流手势实验研究的现实意义明显, 未来研究应进一步尝试探查和解释手势交流的现实特征和功能, 比如教育教学情境下教师手势、学生手势的特征和认知心理意义; 手势交流对社会人际互动的促进作用; 手势交流所表达的情感、情绪特征; 交流者个性特征与手势交流特点间的关系, 等。

参考文献

张恒超 . ( 2013).

参照性交流中的“听者设计”

心理发展与教育, 29( 5), 552-560.

URL     [本文引用: 1]

“听者设计”一直是参照性交流研究领域中的热点.参照性交流过程 中交流者通常会根据对交流同伴共享信息的评估来调整自己的行为,但是这些调整什么时候以及怎样发生的机制问题仍然存在争论.重点评述了“听者设计”的已有 研究角度和研究进展,并归纳总结了参照惯例视角、记忆和注意视角、交流情境视角的研究观点.未来研究应扩展已有研究设计,以深入探查“听者设计”的形成、 获得、发展变化过程,以及其与参照性交流其他限制因素间的相互作用;需要结合行为证据和眼动、脑成像证据等以帮助揭示“听者设计”过程的行为特点与认知机 制.

张恒超 . ( 2017).

共享因素对参照性交流双方学习的影响

心理学报, 49( 2), 197-205.

URL     [本文引用: 1]

采用参照性交流学习范式,探查共享因素对双方学习的影响。结果显示:从学习阶段6开始“共享语言+对象+表情”方式的成绩显著高于“共享语言+对象”方式,低分组条件下该方式成绩显著最高,且该方式高、低分组间无显著差异;“共享语言+对象”方式下揭开的维度数量显著最少。表明:“共享语言+对象+表情”方式下学习效率最高,集中表现于低分组学习效率更高且双方协调水平最高;“共享语言+对象”方式的选择性注意水平最低。

张恒超 . ( 2018).

交流语言认知特征

心理科学进展, 26( 2), 270-282.

[本文引用: 1]

Alibali M. W., Kita S., & Young A. J . ( 2000).

Gesture and the process of speech production: We think, therefore, we gesture

Language and Cognitive Processes, 15( 6), 593-613.

URL     [本文引用: 1]

At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to ''package'' spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.

Alibali M. W., Spencer R. C., Knox L., & Kita S . ( 2011).

Spontaneous gestures influence strategy choices in problem solving

Psychological Science, 22( 9), 1138-1144.

URL     PMID:21813800      [本文引用: 2]

Abstract Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

Arbib M. A., Gasser B., & Barrès V . ( 2014).

Language is handy but is it embodied?

Neuropsychologia, 55, 57-70.

URL     PMID:24252354      [本文引用: 1]

61Arbib reflects on insights of Marc Jeannerod into action-oriented cognition.61Summarizes Mirror System Hypothesis for evolution of the language-ready brain.61Embodiment provides the evolutionary and developmental core of language.61But abstraction, generalization, and metaphor can go beyond embodiment.61Computational models address both primate neurobiology and construction grammar.

Arnold J. E., Kahn J. M., & Pancani G. C . ( 2012).

Audience design affects acoustic reduction via production facilitation

Psychonomic Bulletin & Review, 19( 3), 505-512.

URL     PMID:22419403      [本文引用: 1]

In this article, we examine the hypothesis that acoustic variation (e.g., reduced vs. prominent forms) results from audience design. Bard et al. (Journal of Memory and Language 42:1–22, 2000 ) have argued that acoustic prominence is unaffected by the speaker’s estimate of addressee knowledge, using paradigms that contrast speaker and addressee knowledge. This question was tested in a novel paradigm, focusing on the effects of addressees’ feedback about their understanding of the speaker’s intended message. Speakers gave instructions to addressees about where to place objects (e.g., the teapot goes on red ). The addressee either anticipated the object, by picking it up before the instruction, or waited for the instruction. For anticipating addressees, speakers began speaking more quickly and pronounced the word the with shorter duration, demonstrating effects of audience design. However, no effects appeared on the head noun (e.g., teapot ), as measured by duration, amplitude, and perceived intelligibility. These results are consistent with a mechanism in which evidence about addressee understanding facilitates production processes, as opposed to triggering particular acoustic forms.

Beaudoin-Ryan L. & Goldin-Meadow S., ( 2014).

Teaching moral reasoning through gesture

Developmental Science, 17( 6), 984-990.

URL     PMID:24754707      [本文引用: 1]

Stem-cell research. Euthanasia. Personhood. Marriage equality. School shootings. Gun control. Death penalty. Ethical dilemmas regularly spark fierce debate about the underlying moral fabric of societies. How do we prepare today's children to be fully informed and thoughtful citizens, capable of moral and ethical decisions? Current approaches to moral education are controversial, requiring adults to serve as either direct (‘top-down’) or indirect (‘bottom-up’) conduits of information about morality. A common thread weaving throughout these two educational initiatives is the ability to take multiple perspectives – increases in perspective taking ability have been found to precede advances in moral reasoning. We propose gesture as a behavior uniquely situated to augment perspective taking ability. Requiring gesture during spatial tasks has been shown to catalyze the production of more sophisticated problem-solving strategies, allowing children to profit from instruction. Our data demonstrate that requiring gesture during moral reasoning tasks has similar effects, resulting in increased perspective taking ability subsequent to instruction.

Berezan O., Yoo M., & Christodoulidou N . ( 2016).

The impact of communication channels on communication style and information quality for hotel loyalty programs

Journal of Hospitality and Tourism Technology, 7( 1), 100-116.

URL     [本文引用: 1]

Purpose – The purpose of this study is to evaluate the impact of communication channels on communication style and information quality as perceived by loyalty program members. Design/methodology/approach – An online survey was utilized to collect data, and multivariate analysis of variance was used to test the study hypothesis. Findings – Study results indicated that the choice of a communication channel has a significant impact on the perceived communication style and information quality. Research limitations/implications – The use of an online survey restricted the ability to generalize findings beyond those that use the internet. Replicating this study in other areas where customers seek information outside of loyalty programs would provide valuable insight into the impact of communication channels on communication style and perceived quality of communication. Practical implications – Communication style and information quality have been shown to impact customer loyalty. The results of this study indicate that the type of communication channel used impacts style and information quality, and thereby loyalty. Social implications – Executives should use these research findings as a guide to how they should structure and maintain relationships with their loyalty members. Originality/value – This manuscript provides executives with a taxonomy of the tools and channels available for communicating information to loyalty program members.

Bock K., & Cutting J.C . ( 1992).

Regulating mental energy: Performance units in language production

Journal of Memory and Language, 31( 1), 99-127.

URL     [本文引用: 1]

One of the classic puzzles of language is posed by the phenomenon of discontinous dependency, in which the form of an element at one point in an utterance depends on the form of a noncontiguous controlling element. How do speakers use the information carried by the controller to implement the correct form of the dependent element? We contrasted two accounts of this process that differ in their assumptions about the organization of language formulation. The serial account, patterned after an augmented-transition-network model of the parsing of discontinuous dependencies, suggests that the controller is held in working memory until the point in the string at which the dependent appears. A second hypothesis, derived from a hierarchical model of language production, predicts that controllers and dependents within the same clause are specified concurrently, even when they are eventually separated in the utterance. Using a procedure to elicit verb-agreement errors in speech, we found that agreement errors were more frequent after phrases than after clauses that separated the verb from its head noun, reversing the direction of a related effect in language comprehension. When length varied, longer phrases led to more errors; longer clauses did not. These results support the hierarchical hypothesis.

Brentari D. & Goldin-Meadow S., ( 2017).

Language Emergence

Annual Review of Linguistics, 3( 1), 363-388.

URL     [本文引用: 1]

Broaders S.C., & Goldin-Meadow S. ,( 2010).

Truth is at hand: How gesture adds information during investigative interviews

Psychological Science, 21( 5), 623-628.

URL     [本文引用: 1]

Broaders S. C., Cook S. W., Mitchell Z., & Goldin-Meadow S . ( 2007).

Making children gesture brings out implicit knowledge and leads to learning

Journal of Experimental Psychology: General, 136( 4), 539-550

URL     PMID:17999569      [本文引用: 1]

Abstract Speakers routinely gesture with their hands when they talk, and those gestures often convey information not found anywhere in their speech. This information is typically not consciously accessible, yet it provides an early sign that the speaker is ready to learn a particular task (S. Goldin-Meadow, 2003). In this sense, the unwitting gestures that speakers produce reveal their implicit knowledge. But what if a learner was forced to gesture? Would those elicited gestures also reveal implicit knowledge and, in so doing, enhance learning? To address these questions, the authors told children to gesture while explaining their solutions to novel math problems and examined the effect of this manipulation on the expression of implicit knowledge in gesture and on learning. The authors found that, when told to gesture, children who were unable to solve the math problems often added new and correct problem-solving strategies, expressed only in gesture, to their repertoires. The authors also found that when these children were given instruction on the math problems later, they were more likely to succeed on the problems than children told not to gesture. Telling children to gesture thus encourages them to convey previously unexpressed, implicit ideas, which, in turn, makes them receptive to instruction that leads to learning. 2007 APA

Brooks N. & Goldin-Meadow S., ( 2015).

Moving to learn: How guiding the hands can set the stage for learning

Cognitive Science A Multidisciplinary Journal, 40( 7), 1831-1849.

URL     [本文引用: 2]

Abstract Previous work has found that guiding problem-solvers' movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner's movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, despite the fact that the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper-and-pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a leeper effect of gesture on learning.

Cook S. W., Duffy R. G., & Fenn K. M . ( 2013).

Consolidation and transfer of learning after observing hand gesture

Child development, 84( 6), 1863-1871.

URL     PMID:23551027      [本文引用: 1]

Children who observe gesture while learning mathematics perform better than children who do not, when tested immediately after training. How does observing gesture influence learning over time? Children (n = 184, ages = 7 10) were instructed with a videotaped lesson on mathematical equivalence and tested immediately after training and 24 hr later. The lesson either included speech and gesture or only speech. Children who saw gesture performed better overall and performance improved after 24 hr. Children who only heard speech did not improve after the delay. The gesture group also showed stronger transfer to different problem types. These findings suggest that gesture enhances learning of abstract concepts and affects how learning is consolidated over time.

de Marco D., de Stefani E., & Gentilucci M . ( 2015).

Gesture and word analysis: The same or different processes?

NeuroImage, 117, 375-385.

URL     PMID:26044859      [本文引用: 1]

61Are emblem and corresponding word understood by using motor simulation?61Are communication signals integrated independently of their sensorial modality?61In a lexical task a prime (either emblem or word) preceded a target word.61Emblem but not corresponding word was comprehended using motor simulation.61The same mechanism integrated emblem and word with word.

Drijvers L. & Özyürek A., ( 2017).

Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension

Journal of Speech, Language, and Hearing Research, 60( 1), 212-222.

URL     PMID:27960196      [本文引用: 1]

Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2- band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures

Duran N.D., & Dale R. , ( 2014).

Perspective-taking in dialogue as self-organization under social constraints

New Ideas in Psychology, 32, 131-146.

URL     [本文引用: 1]

61We model social perspective-taking as a low-dimensional dynamic process.61The model captures three timescales: choice, response time and dynamics.61High-level cognition may obey similar dynamics as lower-level motor coordination.

Edelman S. ( 2017).

Language and other complex behaviors: Unifying characteristics, computational models, neural mechanisms

Language Sciences, 62, 91-123.

URL     [本文引用: 1]

Similar to other complex behaviors, language is dynamic, social, multimodal, patterned, and purposive, its purpose being to promote desirable actions or thoughts in others and self (Edelman, 2017b). An analysis of the functional characteristics shared by complex sequential behaviors suggests that they all present a common overarching computational problem: dynamically controlled constrained navigation in concrete or abstract situation spaces. With this conceptual framework in mind, I compare and contrast computational models of language and evaluate their potential for explaining linguistic behavior and for elucidating the brain mechanisms that support it.

Glenberg A.M., & Gallese V. , ( 2012).

Action-based language: A theory of language acquisition, comprehension, and production

Cortex, 48( 7), 905-922.

URL     PMID:21601842      [本文引用: 1]

Evolution and the brain have done a marvelous job solving many tricky problems in action control, including problems of learning, hierarchical control over serial behavior, continuous recalibration, and fluency in the face of slow feedback. Given that evolution tends to be conservative, it should not be surprising that these solutions are exploited to solve other tricky problems, such as the design of a communication system. We propose that a mechanism of motor control, paired controller/predictor models, has been exploited for language learning, comprehension, and production. Our account addresses the development of grammatical regularities and perspective, as well as how linguistic symbols become meaningful through grounding in perception, action, and emotional systems.

Goldin-Meadow S. ( 2015).

From action to abstraction: Gesture as a mechanism of change

Developmental Review, 38, 167-184.

URL     PMID:4672635     

Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked – the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas – it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ – gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

Goldin-Meadow, S., Cook, S. W., & Mitchell, Z. A. (2009). Gesturing gives children new ideas about math. Psychological Science, 20(3), 267-272.

How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands.

Goldin-Meadow, S., Nusbaum, H., Kelly, S. D., & Wagner, S. M. (2001). Explaining math: Gesturing lightens the load. Psychological Science, 12(6), 516-522.

Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.

Graziano, M., & Gullberg, M. (2013). Gesture production and speech fluency in competent speakers and language learners. In Tilburg Gesture Research Meeting (TiGeR) 2013. Tilburg University.

Grenoble, L. A., Martinović, M., & Baglini, R. (2014). Verbal gestures in Wolof. In R. Kramer, L. Zsiga, & O. Boyer (Eds.), Selected proceedings of the 44th annual conference on African linguistics. Somerville, MA: Cascadilla Press.

Hadar, U., & Butterworth, B. (1997). Iconic gestures, imagery, and word retrieval in speech. Semiotica, 115(1-2), 147-172.

Hostetter, A. B. (2011). When do gestures communicate? A meta-analysis. Psychological Bulletin, 137(2), 297-315.

Do the gestures that speakers produce while talking significantly benefit listeners' comprehension of the message? This question has been the topic of many research studies over the previous 35 years, and there has been little consensus. The present meta-analysis examined the effect sizes from 63 samples in which listeners' understanding of a message was compared when speech was presented alone with when speech was presented with gestures. It was found that across samples, gestures do provide a significant, moderate benefit to communication. Furthermore, the magnitude of this effect is moderated by 3 factors. First, effects of gesture differ as a function of gesture topic, such that gestures that depict motor actions are more communicative than those that depict abstract topics. Second, effects of gesture on communication are larger when the gestures are not completely redundant with the accompanying speech; effects are smaller when there is more overlap between the information conveyed in the 2 modalities. Third, the size of the effect of gesture is dependent on the age of the listeners, such that children benefit more from gestures than do adults. Remaining questions for future research are highlighted.

Hostetter, A. B. (2014). Action attenuates the effect of visibility on gesture rates. Cognitive Science, 38(7), 1468-1481.

Much evidence suggests that semantic characteristics of a message (e.g., the extent to which the message evokes thoughts of spatial or motor properties) and social characteristics of a speaking situation (e.g., whether there is a listener who can see the speaker) both influence how much speakers gesture. However, the Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, ) predicts that these effects should not be independent but should interact such that the effect of visibility is lessened when a message evokes strong thoughts of action. This study tested this claim by comparing the gesture rates produced by speakers as they described 24 nouns that vary in how strongly they evoke thoughts of action. Further, half of the words were described with visibility between speaker and listener blocked. The results demonstrated a significant interaction as predicted by the GSA framework.

Hostetter, A. B., & Alibali, M. W. (2010). Language, gesture, action! A test of the Gesture as Simulated Action framework. Journal of Memory and Language, 63(2), 245-257.

The Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) holds that representational gestures are produced when actions are simulated as part of thinking and speaking. Accordingly, speakers should gesture more when describing images with which they have specific physical experience than when describing images that are less closely tied to action. Experiment 1 supported this hypothesis by showing that speakers produced more representational gestures when describing patterns they had physically made than when describing patterns they had only viewed. Experiment 2 replicated this finding and ruled out the possibility that the effect is due to decreased opportunity for verbal rehearsal when speakers physically made the patterns. Experiment 3 ruled out the possibility that the effect in Experiments 1 and 2 was due to motor priming from making the patterns. Taken together, these experiments support the central claim of the GSA framework by suggesting that speakers gesture when they express thoughts that involve simulations of actions.

Hostetter, A. B., Alibali, M. W., & Kita, S. (2007). I see it in my hands' eye: Representational gestures reflect conceptual demands. Language and Cognitive Processes, 22(3), 313-336.

The Information Packaging Hypothesis (Kita, 2000) holds that gestures play a role in conceptualising information for speaking. According to this view, speakers will gesture more when describing difficult-to-conceptualise information than when describing easy-to-conceptualise information. In the present study, 24 participants described ambiguous dot patterns under two conditions. In the dots-plus-shapes condition, geometric shapes connected the dots, and participants described the patterns in terms of those shapes. In the dots-only condition, no shapes were present, and participants generated their own geometric conceptualisations and described the patterns. Participants gestured at a higher rate in the dots-only condition than in the dots-plus-shapes condition. The results support the Information Packaging Hypothesis and suggest that gestures occur when information is difficult to conceptualise.

Hu, F. T., Ginns, P., & Bobis, J. (2015). Getting the point: Tracing worked examples enhances learning. Learning and Instruction, 35, 85-93.

- Pointing and tracing with an index finger may enhance communication and learning.
- Two experiments tested tracing effects on learning from worked examples.
- Tracing enhanced test scores and cognitive load during test phase.
- Tracing reduced errors and solution times across test questions.

Iverson, J. M., & Goldin-Meadow, S. (1998). Why people gesture when they speak. Nature, 396(6708), 228.

Focuses on factors that explain the use of gesture in communication. Comparison of survey results between blind and nonblind speakers on how they use gesture; Role of gesture in conveying useful information to the listener.

Kang, S., Tversky, B., & Black, J. B. (2015). Coordinating gesture, word, and diagram: Explanations for experts and novices. Spatial Cognition & Computation, 15(1), 1-26.

Successful explanations are a symphony of gesture, language, and props. Here, we show how they are orchestrated in an experiment in which students explained complex systems to imagined novices and experts. Visual-spatial communication (diagram and gesture) was key; it represents thought more directly than language. The real or virtual diagrams created from gestures served as the stage for explanations, enriched by language and enlivened by deictic gestures to convey structure and iconic gestures to enact the behavior and functionality of the systems. Explanations to novices packed in more information than explanations to experts, emphasizing the information about action that is difficult for novices, and expressing information in multiple ways, using both virtual models created by gestures and visible ones.

Kelly, S. D., & Church, R. B. (1998). A comparison between children's and adults' ability to detect conceptual information conveyed through representational gestures. Child Development, 69(1), 85-93.

The present study compares children's and adults' ability to detect information that is conveyed through representational hand gestures. Eighteen children ( M = 10 years, 1 month) and 18 college undergraduates watched videotaped stimuli of children verbally and gesturally explaining their reasoning in a problem-solving situation. A recall procedure was used to assess whether children and adults could detect information conveyed in the stimulus children's gesture and speech. Results showed that children and adults recalled information that was conveyed through representational gestures. In addition, "mismatching" gesture negatively affected the precision of speech recall for adults. However, this negative effect on speech recall was absent for children.

Kita, S., & Özyürek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32.

Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.

Koppensteiner, M., Stephan, P., & Jäschke, J. P. M. (2016). Moving speeches: Dominance, trustworthiness and competence in body motion. Personality and Individual Differences, 94, 101-106.

- Body movements of politicians giving speeches were turned into stick-figure videos.
- Stimuli were rated on dominance, trustworthiness and competence.
- Simple nonverbal cues were linked to perceptions of dominance and trustworthiness.
- Male speakers from opposition parties received highest ratings on dominance.
- Body motion has ecological validity and is a nonverbal cue of social relevance.

Krauss, R. M., Chen, Y. S., & Gottesman, R. (2000). Lexical gestures and lexical access: A process model. In D. McNeill (Ed.), Language and gesture (pp. 261-283). Cambridge, UK: Cambridge University Press.

Krauss, R. M., & Weinheimer, S. (1964). Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1, 113-114.

Pairs of subjects interacted in a problem-solving task which required them to communicate about ambiguous figures. The length of the reference phrase for each figure was calculated. A negative relationship was found between the frequency with which a figure was referred to and the mean length of its reference phrase.

Louwerse, M. M. (2011). Symbol interdependency in symbolic and embodied cognition. Topics in Cognitive Science, 3(2), 273-302.

Whether computational algorithms such as latent semantic analysis (LSA) can both extract meaning from language and advance theories of human cognition has become a topic of debate in cognitive science, whereby accounts of symbolic cognition and embodied cognition are often contrasted. Albeit for different reasons, in both accounts the importance of statistical regularities in linguistic surface structure tends to be underestimated. The current article gives an overview of the symbolic and embodied cognition accounts and shows how meaning induction attributed to a specific statistical process or to activation of embodied representations should be attributed to language itself. Specifically, the performance of LSA can be attributed to the linguistic surface structure, more than special characteristics of the algorithm, and embodiment findings attributed to perceptual simulations can be explained by distributional linguistic information.

Matovic, D., Koch, A. S., & Forgas, J. P. (2014). Can negative mood improve language understanding? Affective influences on the ability to detect ambiguous communication. Journal of Experimental Social Psychology, 52, 44-49.

- Two experiments found that mild negative mood improved communication and language understanding.
- An analysis of reaction times and recall memory confirmed that negative mood produced more careful and attentive processing.
- A mediational analysis found that it was more attentive processing that mediated mood effects on language understanding.
- The findings confirm that negative affect has adaptive benefits and can improve cognitive and communicative performance.
- The results highlight the important role of moods in fine-tuning communication and social behavior in everyday situations.

McNeill, D., & Duncan, S. (2000). Growth points in thinking-for-speaking. In D. McNeill (Ed.), Language and gesture (pp. 141-161). Cambridge, UK: Cambridge University Press.

Morsella, E., & Krauss, R. M. (2004). The role of gestures in spatial working memory and speech. The American Journal of Psychology, 117(3), 411-424.

Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

Nicoladis, E., Pika, S., Yin, H., & Marentette, P. (2007). Gesture use in story recall by Chinese-English bilinguals. Applied Psycholinguistics, 28(4), 721-735.

Previous studies have shown inconsistent results concerning bilinguals' use of gestures to compensate for reduced proficiency in their second language (L2). These results could be because of differing task demands. In this study, we asked 16 intermediate English L2 speakers (whose first language [L1] was Chinese) to watch a story and tell it back in both languages. We attempted to link gesture use to proficiency while accounting for task complexity as measured by scenes recalled. The results showed that these L2 speakers told longer stories in their L1 and used more iconic gestures in their L2. There were also trends for the women to tell longer stories and use more gestures in their L2 compared to the men. These results are consistent with the idea that the relationship between gesture use and proficiency is mediated by task complexity. The trends for gender differences, however, point to the possibility that gesture use is also related to expressivity.

Novack, M. A., Congdon, E. L., Hemani-Lopez, N., & Goldin-Meadow, S. (2014). From action to abstraction: Using the hands to learn math. Psychological Science, 25(4), 903-910.

Novack, M. A., & Goldin-Meadow, S. (2016). Gesture as representational action: A paper about function. Psychonomic Bulletin & Review, 24(3), 652-665.

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal, that gesture arises from simulated action (Hostetter & Alibali Psychonomic Bulletin & Review, 15, 495-514, 2008), has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause: the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.

Novack, M., & Goldin-Meadow, S. (2015). Learning from gesture: How our hands change our minds. Educational Psychology Review, 27(3), 405-412.

When people talk, they gesture, and those gestures often reveal information that cannot be found in speech. Learners are no exception. A learner's gestures can index moments of conceptual instability, and teachers can make use of those gestures to gain access into a student's thinking. Learners can also discover novel ideas from the gestures they produce during a lesson, or from the gestures they see their teachers produce. Gesture thus has the power not only to reflect a learner's understanding of a problem, but also to change that understanding. This review explores how gesture supports learning across development, and ends by offering suggestions for ways in which gesture can be recruited in educational settings.

Novack, M. A., Goldin-Meadow, S., & Woodward, A. L. (2015). Learning from gesture: How early does it happen? Cognition, 142, 138-147.

- 2- and 3-year-olds learn novel actions from viewing iconic gesture demonstrations.
- For 2-year-olds, iconic gestures are harder to interpret than incomplete-actions.
- Children do not view gestures as meaningless movement.
- For novice learners, imitating gesture may promote learning.

Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form—a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter's gesture as it was performed). Study 2 compared 2-year-olds' performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner's attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation.

Perniss, P., Özyürek, A., & Morgan, G. (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11.

For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.

Pine, K. J., Bird, H., & Kirk, E. (2007). The effects of prohibiting gestures on children's lexical retrieval ability. Developmental Science, 10(6), 747-754.

Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996) sees gestures as functioning more at the level of speech production in helping the speaker to find the right words. The latter hypothesis has not been fully explored with children. In this study children were given a naming task under conditions that allowed and restricted gestures. Children named more words correctly and resolved more 'tip-of-the-tongue' states when allowed to gesture than when not, suggesting that gestures facilitate access to the lexicon in children and are important for speech production as well as conceptualization.

Ping, R. M., & Goldin-Meadow, S. (2008). Hands in the air: Using ungrounded iconic gestures to teach children conservation of quantity. Developmental Psychology, 44(5), 1277-1287.

Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.

Ping, R. M., Goldin-Meadow, S., & Beilock, S. L. (2014). Understanding gesture: Is the listener's motor system involved? Journal of Experimental Psychology: General, 143(1), 195-204.

Listeners are able to glean information from the gestures that speakers produce, seemingly without conscious awareness. However, little is known about the mechanisms that underlie this process. Research on human action understanding shows that perceiving another's actions results in automatic activation of the motor system in the observer, which then affects the observer's understanding of the actor's goals. We ask here whether perceiving another's gesture can similarly result in automatic activation of the motor system in the observer. In Experiment 1, we first established a new procedure in which listener response times are used to study how gesture impacts sentence comprehension. In Experiment 2, we used this procedure, in conjunction with a secondary motor task, to investigate whether the listener's motor system is involved in this process. We showed that moving arms and hands (but not legs and feet) interferes with the listener's ability to use information conveyed in a speaker's hand gestures. Our data thus suggest that understanding gesture relies, at least in part, on the listener's own motor system.

Post, L. S., van Gog, T., Paas, F., & Zwaan, R. A. (2013). Effects of simultaneously observing and making gestures while studying grammar animations on cognitive load and learning. Computers in Human Behavior, 29(4), 1450-1455.

This study examined whether simultaneously observing and making gestures while studying animations would lighten cognitive load and facilitate the acquisition of grammatical rules. In contrast to our hypothesis, results showed that children in the gesturing condition performed worse on the posttest than children in the non-gesturing, control condition. A more detailed analysis of the data revealed an expertise reversal effect, indicating that this negative effect on posttest performance materialized for children with lower levels of general language skills, but not for children with higher levels of general language skills. The finding that for children with lower language ability, cognitive load did not decrease as they saw more animations provided additional support for this expertise reversal effect. These findings suggest that the combination of observing and making gestures may have imposed extraneous cognitive load on the lower ability children, which they could not accommodate together with the relatively high intrinsic load imposed by the learning task.

Rauscher, F. H., Krauss, R. M., & Chen, Y. S. (1996). Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science, 7(4), 226-231.

Sacchi, S., Riva, P., & Aceto, A. (2016). Myopic about climate change: Cognitive style, psychological distance, and environmentalism. Journal of Experimental Social Psychology, 65, 68-73.

- Psychological distance of climate change and pro-environmental endeavors are negatively related.
- Psychological distance is more related to pro-environmental attitudes when individuals adopt an analytic cognitive style.
- When individuals are in a holistic mindset, ecological intentions are less affected by psychological distance.
- Individual mindset may be affected by the Navon Task manipulation.

Sassenberg, U., & van der Meer, E. (2010). Do we really gesture more when it is more difficult? Cognitive Science, 34(4), 643-664.

Sharma, V., & Droch, B. (2015). Gesture-controlled user interfaces. Journal of Information Sciences and Computing Technologies, 2(1), 133-135.

Trofatter, C., Kontra, C., Beilock, S., & Goldin-Meadow, S. (2015). Gesturing has a larger impact on problem-solving than action, even when action is accompanied by words. Language, Cognition and Neuroscience, 30(3), 251-260.

Weinberg, A., Fukawa-Conolly, T., & Wiesner, E. (2015). Characterizing instructor gestures in a lecture in a proof-based mathematics class. Educational Studies in Mathematics, 90(3), 233-258.
