The cognitive mechanism of haptic recognition of two-dimensional images
Received: 2018-05-04 Online: 2019-04-15
Two-dimensional tactile images are the main means of translating visual information into haptic information, and they play an important role in helping visually impaired people perceive the external world. Haptic recognition of two-dimensional images is thought to rely on a "visual translation" process in which haptic input is converted into a visual image. This process is influenced by the geometric features of the figure, viewpoint and perspective, visual experience, visual imagery ability, the tactile exploration procedure, training, and age. Investigating the cognitive and neural mechanisms of haptic recognition of two-dimensional images is of great significance for improving the design and usability of two-dimensional tactile images.
YU Wenyuan, LIU Ye, FU Xiaolan, GONG Jiangtao, XU Yingqing. (2019).
Touch is one of the important sensory channels; to some extent it can substitute for vision in perceiving the spatial features of objects (e.g., shape) and their texture features (e.g., roughness) (Stilla & Sathian, 2008). Two-dimensional tactile images are the main means of converting visual information into haptic information, helping visually impaired people perceive and recognize spatial locations and graphic images through touch (焦阳, 龚江涛, 史元春, 徐迎庆, 2016). About 253 million people worldwide are visually impaired (Barros, Maciel-Junior, Fernandes, Bezerra, & Fernandes, 2017) and must rely on other channels, such as hearing and touch, to obtain information about the outside world. However, current two-dimensional tactile images are derived from two-dimensional visual images; the information in them that can be effectively recognized by touch is very limited, and they are not fully suited to haptic recognition (Heller, McCarthy, & Clark, 2005; 龚江涛 等, 2018). Therefore, to improve the efficiency and user experience of two-dimensional tactile images for visually impaired people, we need to study the cognitive mechanisms underlying their processing and, based on the factors that influence their recognition, improve how such images are presented, so that visually impaired people can obtain information through them more effectively. This article summarizes the characteristics of touch and the generation and application of two-dimensional tactile images, reviews the cognitive mechanisms, influencing factors, and neural bases of their recognition, and proposes directions for further research as well as suggestions for improving tactile image design.
Touch is one of the important channels through which humans obtain information about the external world (周丽丽 等, 2017). Mechanoreceptors distributed across the skin transmit external information to the central nervous system via various somatosensory afferent fibers, giving rise to the sense of touch (Saal & Bensmaia, 2014; Sathian, 2016). Touch can represent the material properties of objects, such as roughness, deformability, slipperiness, viscosity, density, and weight (Baumgartner, Wiebel, & Gegenfurtner, 2015), as well as their spatial properties, such as orientation, curvature, length, shape, size, and volume (Kappers & Tiest, 2013). Touch can also represent the spatial positions of oneself and of external objects through egocentric and allocentric reference frames (Hatwell, Streri, & Gentaz, 2003). In addition, touch, like vision, follows the Gestalt grouping principles (Gallace & Spence, 2011), for example tending to perceive multiple objects that are proximate, similar, or continuous as a whole (van Aarsen & Overvliet, 2016; Chang, Nesbitt, & Wilkins, 2007; Overvliet, Krampe, & Wagemans, 2012). Therefore, when vision is impaired, touch can to some extent substitute for it in perceiving space, location, and graphic images.
Although touch can serve as a substitute for vision, and many touch-based sensory-substitution devices have been put into use (Segond, Weiss, Kawalec, & Sampaio, 2013), large differences remain between touch and vision. First, compared with vision, the perceptual field of touch is more limited, being restricted to the area of skin in contact with the external stimulus (Longo & Golubova, 2017; Loomis, Klatzky, & Lederman, 1991; Yoshida, Yamaguchi, Tsutsui, & Wake, 2015). Second, this limited perceptual field prevents touch from processing stimuli holistically, as vision does; it can only process them serially (Loomis et al., 1991; Picard & Monnier, 2009). Third, during serial haptic processing, perceived information must be temporarily stored in working memory and integrated with subsequently perceived information before a representation of the whole can be formed (Yoshida et al., 2015). Consequently, haptic perception consumes more working-memory resources than visual perception (Lacey & Sathian, 2014), and this is especially pronounced when representing the spatial properties of objects, such as shape, size, and orientation (Picard & Monnier, 2009). Studies have found that when the visual perceptual field is restricted to match that of touch, vision too can only process stimuli serially, and both object-recognition performance and visual working-memory capacity decline significantly (Loomis et al., 1991; Picard & Monnier, 2009). This indicates that the differences between touch and vision in processing mode and working-memory capacity are mainly due to the difference in the extent of their perceptual fields.
Nevertheless, despite the differences between the two modalities, touch and vision overlap considerably both in the kinds of information they represent and in the neural substrates they rely on (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Sathian, 2016). When representing the spatial features of objects, both engage the postcentral sulcus, the intraparietal sulcus, and the lateral occipital complex; when representing texture features, both engage the medial occipital cortex (Snow, Goodale, & Culham, 2015; Stilla & Sathian, 2008). For visually impaired people, touch therefore remains an important sensory channel for obtaining spatial and graphic information. Two-dimensional tactile images are the main means of converting visual information into haptic information: following the lines or contours of a visual image, raised lines are formed on the surface of a touchable material, turning a two-dimensional visual image into a touchable tactile image. Understanding the cognitive mechanisms, influencing factors, and neural bases of recognizing such images is therefore of great significance for promoting the conversion of visual information into haptic information, improving tactile image design, and increasing recognition efficiency.
To explain how people recognize two-dimensional images by touch, researchers have proposed the "image-mediation model". According to this theory, haptic recognition of two-dimensional images is accomplished mainly through a process of "visual translation": information about lines, junctions, and so on acquired by tactile receptors is translated and reassembled in the mind into a visual image, which is then compared with stored knowledge to complete object recognition (Klatzky & Lederman, 1988). The image-mediation model is supported by several lines of evidence. First, the imageability of a two-dimensional tactile image correlates positively with blindfolded sighted people's recognition performance: the more vivid and imageable the object, the easier it is to recognize. Second, sighted people with high imagery ability recognize tactile images more accurately when blindfolded. Third, congenitally blind people, who lack visual experience, recognize tactile two-dimensional images less well than sighted people and early-blind people who have visual experience (Lederman, Klatzky, Chataway, & Summers, 1990). Fourth, the less visual experience a late-blind person has, the less inclined he or she is to use a visual-imagery strategy to memorize tactile images (Lebaz, Picard, & Jouffrais, 2010). These results all indicate that visual imagery plays an important role in haptic recognition of two-dimensional images, supporting the existence of a "visual translation" process. In addition, other research has found that visual-haptic cross-modal transfer is asymmetric: participants who learned two-dimensional images by touch and were then tested visually performed worse than those who learned visually and were tested haptically, suggesting that visual imagery aids haptic recognition whereas haptic imagery does not aid visual recognition (Behrmann & Ewell, 2003), further supporting the "visual translation" mechanism and the image-mediation model.
However, other studies have shown that congenitally blind people and blindfolded sighted people do not differ in recognizing two-dimensional tactile images (Heller et al., 2006; Heller et al., 2005; Heller et al., 2009; Picard, Lebaz, Jouffrais, & Monnier, 2010), which calls into question the role of visual experience and the "visual translation" mechanism of the image-mediation model. One reason may be that recognition performance is affected by many factors, and the measures and criteria used to assess it differ across experiments, so results are not fully consistent. More importantly, the image-mediation model may be only one account of the cognitive mechanism of haptic image recognition; more than one mechanism may be at work, and for blind people with little or no visual experience, the brain may accomplish haptic recognition of two-dimensional images through other cognitive mechanisms. A recent study found that the image features that affect sighted people's haptic recognition performance differ significantly from those that affect blind people's (龚江涛 等, 2018), providing evidence that distinct cognitive mechanisms for haptic image recognition exist.
As noted above, because most two-dimensional tactile images are converted directly from existing visual images, they may have features ill-suited to haptic recognition. Summarizing the factors that influence haptic recognition of two-dimensional images is therefore important for understanding its cognitive mechanisms, improving how tactile images are rendered, and raising recognition performance.
Lines are the basic elements of two-dimensional tactile images, and angles are among the most basic figures that lines can form. Lines and angles therefore attract more attention and processing during haptic exploration and are an important step in recognizing tactile images (Grunwald et al., 2014). Curves and the endpoints of lines elicit more tactile exploration (Grunwald et al., 2014). Moreover, touch is quite sensitive to angle size, with a difference threshold of roughly 4 to 7 degrees, and this sensitivity is unaffected by the angle's orientation (Toderita, Bourgeon, Voisin, & Chapman, 2014). The key to recognizing an angle is whether its two sides intersect: angles with a clear vertex are more readily perceived as angles by touch (Wijntjes & Kappers, 2007). This indicates that the vertex, rather than the orientation, plays the critical role in angle recognition.
Image size also affects haptic recognition of two-dimensional images. Compared with small images, large images offer higher resolution and can show more detail, and are therefore easier to recognize (Wijntjes, van Lienen, Verstijnen, & Kappers, 2008). However, that study did not further examine whether the congruence between an image's size and the real size of the object it depicts affects recognition; when the two do not match, recognition performance may not improve even though a larger image provides more detail.
When a three-dimensional object is depicted on a two-dimensional plane, the orientation and length of the constituent lines change to some degree according to the principle of perspective, providing depth cues: parts of the object closer to the observer occupy a larger area on the plane (Heller et al., 2002). Perspective also varies with viewpoint. For example, the two-dimensional image of a cube viewed from directly above is a square with no depth cues, whereas viewed from 45 degrees above the front, its image includes a square front face and a trapezoidal top face, the top face's shape having changed according to visual perspective. Vision nonetheless easily perceives the object as a cube and its top face as a square. Existing two-dimensional visual images depict three-dimensional objects from familiar viewpoints, usually the three-dimensional view (3-D view), i.e., 45 degrees above and to the side. However, line drawings based on vision's familiar viewpoints and on perspective are not necessarily well suited to haptic recognition (Hatwell et al., 2003); previous findings fall mainly into the following two areas.
First, when blindfolded sighted people recognize two-dimensional tactile images, object images drawn from a three-dimensional viewpoint are recognized less accurately, and more slowly, than images without depth cues (Lederman et al., 1990). Thus, although images from a three-dimensional viewpoint suit visual recognition, they increase the difficulty of haptic recognition.
Second, viewpoint preferences emerge when recognizing two-dimensional images of three-dimensional objects. In Heller's series of studies, blindfolded sighted participants first touched a real, axially symmetric simple solid (e.g., a triangular prism or a cube) and then chose, from four two-dimensional images of three-dimensional objects, the one matching that solid. Three findings emerged. First, images drawn from the top view were recognized most accurately, and this held even when participants were not allowed to touch the top surface of the real object (Heller et al., 2002; Heller et al., 2006), indicating a top-view preference in haptic recognition of these objects' images, possibly because the top view conveyed the most information about the solids used in those experiments. This top-view preference was unrelated to body posture during touching: when the line drawing stood vertically, so that participants had to touch a drawing on a vertical surface directly in front of them, recognition still showed the top-view preference, indicating that when exploring a horizontally placed tactile image, participants had no tendency to perceive it as the object's top surface merely because the hand was touching a horizontal plane (Heller et al., 2006). The top-view advantage is, however, modulated by task difficulty and task type: when the simple solids were replaced with complex ones, or when the target real object was itself replaced with a two-dimensional image, the advantage disappeared (Heller et al., 2006; Heller et al., 2009). Second, the three-dimensional viewpoint can facilitate haptic image recognition under certain conditions. When participants had to choose, among four two-dimensional alternatives, the one matching a target two-dimensional image, performance was higher when the target was drawn from a three-dimensional viewpoint; no such advantage appeared when the alternatives were drawn from that viewpoint. This suggests that a target drawn from the three-dimensional viewpoint provides more information about the object and thereby improves recognition. Third, under the frontal view, images constructed according to point perspective were recognized significantly more accurately than images constructed according to planar perspective (Heller et al., 2002), showing that perspective also affects haptic recognition of two-dimensional images.
Together, these findings show that the viewpoint and perspective of a two-dimensional tactile image affect its recognition, and that this influence varies with task type and with the complexity of the three-dimensional object.
The role of visual experience in haptic recognition of two-dimensional images has long been debated. Current tactile images are based on visual images, with which sighted people have very rich experience, whereas blind people, especially the congenitally blind, have little or no visual experience of two-dimensional images; this might prevent blind people from recognizing these tactile images effectively, a view supported by some evidence (Lederman et al., 1990). Other studies, however, have found no difference between blind people and highly visually experienced sighted people in tactile image recognition (Heller et al., 2006; Heller et al., 2005; Heller et al., 2009; Picard et al., 2010). The role of visual experience therefore needs to be examined further by comparing haptic image recognition in blind and sighted people.
Many studies have found that blind people, despite lacking visual experience, do not recognize two-dimensional tactile images worse than sighted people. In a series of experiments by Heller and colleagues, participants had to choose by touch, from four two-dimensional images of three-dimensional objects, the one corresponding to a target solid. Whether the target was a simple or a complex solid, congenitally blind and blindfolded sighted participants did not differ significantly, and with simple solids both groups showed the top-view recognition advantage (Heller et al., 2002; Heller et al., 2006; Heller et al., 2009). In a tactile image memory task requiring participants to memorize simple route maps composed of lines and dots, blindfolded sighted and congenitally blind participants performed equally well, albeit with different memory strategies (Picard et al., 2010). Furthermore, even though early-blind people lack visual experience, when touching and then redrawing two-dimensional images they follow the same "centripetal execution principle" as sighted people, first establishing the outline of the figure and then rendering its interior (Bouaziz, Russier, & Magnan, 2005). This shows that, without relying on visual experience or visual imagery, early-blind people can represent two-dimensional images in a way similar to sighted people (Lacey & Lawson, 2013). All of these results suggest that visual experience may not aid haptic recognition of two-dimensional images.
Moreover, for image conventions specific to the visual modality, such as perspective and viewpoint, congenitally blind people have no direct experience but can come to understand these visuospatial conventions through learning and training. In one task, participants touched two boards joined at an angle and then chose the matching image from several two-dimensional alternatives; congenitally blind, late-blind, and sighted participants did not differ in performance. More strikingly, when asked to draw the boards as they would appear from a given viewpoint, congenitally blind participants' drawings also reflected perspective rules to some extent: the rectangular boards were not drawn as rectangles but were altered according to viewpoint and perspective (Heller et al., 2002). Other research has found that blind people can also understand and apply perspective rules in three-dimensional space (Wnuczko & Kennedy, 2014). Thus, even with little or no visual experience, blind people can understand viewpoint and perspective. Moreover, a case study found that, with instruction and training, a congenitally blind person could draw two-dimensional images of three-dimensional objects from different viewpoints in accordance with perspective rules (Kennedy & Juricevic, 2006).
The influence of visual experience on haptic image recognition may, however, show up not in recognition performance but in the imagery strategies used during recognition. Visual experience is key to forming visual imagery; lacking it, blind people rely mainly on touch and hearing, rather than vision, for imagery processing (Cattaneo et al., 2008; Cattaneo, Vecchi, Monegato, Pece, & Cornoldi, 2007). Blind people therefore have poorer visual imagery and consequently do not use visual-imagery strategies when recognizing tactile images. In a tactile image memory task, blindfolded sighted and blind participants memorized simple route maps by touch, completed a recognition test, and then reported the memorization methods they had used, from which the experimenter classified their imagery strategies. Although the two groups did not differ in memory performance, sighted participants tended to use a visual-imagery strategy, memorizing by visualizing the image, whereas congenitally blind participants used only non-visual strategies: encoding the image as semantic information (e.g., "Z-shaped") and as the relative positions of its parts (e.g., "there is a dot to the right of the line"), or memorizing the trajectory of their finger movements during touching (Cornoldi, Tinti, Mammarella, Re, & Varotto, 2009; Picard et al., 2010). Late-blind participants used both kinds of strategy, but their choice of non-visual strategies correlated significantly with the age at which they lost their sight and the proportion of their life spent blind (Lebaz et al., 2010), indicating that the less visual experience late-blind people have, the more they favor non-visual imagery strategies. These results show that visual experience does affect haptic recognition of two-dimensional images, possibly by changing the strategies used during recognition.
The findings above indicate that visual experience is not necessary for haptic recognition of two-dimensional images (Heller et al., 2005) and may not improve recognition performance, but it can influence haptic recognition in other ways.
According to the image-mediation model and some of the studies discussed above, visual imagery is, for sighted people, a key stage in haptic recognition of two-dimensional images, so visual imagery ability is one factor influencing such recognition. This view has empirical support. On the one hand, correlational analyses show that sighted people's performance in recognizing tactile line drawings correlates positively with their visual imagery ability (Lederman et al., 1990; Picard et al., 2010); on the other hand, when participants are divided by imagery test scores into high and low visual imagery groups, those with high imagery ability recognize tactile two-dimensional images better (Lebaz, Jouffrais, & Picard, 2012). These results indicate that visual imagery ability plays an important role in sighted people's haptic recognition of two-dimensional images.
Haptic recognition is accomplished through hand movements over a touchable surface; these purposeful hand movements are called exploratory procedures (EPs) (Klatzky & Lederman, 1988). During haptic exploration there are also pauses analogous to fixations in eye movements, called exploratory stops (ESs); their occurrence can reveal attentional preferences during touching and predict the duration of exploration (Grunwald et al., 2014). Common EPs include lateral motion, pressure, static contact, contour following, and enclosure, and each is suited to exploring particular haptic features, for example lateral motion for texture and contour following for contours and shape (Kalia et al., 2014). In addition, when confronted with an unfamiliar two-dimensional tactile image, people generally first use the palm to establish the image's position and extent, then use the fingers to explore its details (Symmons & Richardson, 2000).
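The pairing of exploratory procedures with the object properties each one best reveals (Klatzky & Lederman, 1988; Kalia et al., 2014) can be written down as a simple lookup table, for instance when deciding which procedure to prompt a user with. The property labels and the fallback choice are illustrative assumptions, not part of the cited taxonomy's terminology:

```python
# Exploratory procedures (EPs) and the haptic property each is best
# suited to extract, after Klatzky & Lederman's EP taxonomy.
EP_FOR_PROPERTY = {
    "texture": "lateral motion",
    "hardness": "pressure",
    "temperature": "static contact",
    "exact shape": "contour following",
    "global shape": "enclosure",
}

def suggest_ep(prop):
    """Return the EP suited to a property; default to contour
    following, the EP suited to the lines of a raised-line drawing
    (fallback choice is a hypothetical design decision)."""
    return EP_FOR_PROPERTY.get(prop, "contour following")
```

A prompting interface could call `suggest_ep("texture")` before a texture-discrimination step, for example.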
Choosing suitable exploratory procedures therefore facilitates haptic recognition. Blind children who have learned correct exploratory procedures show significantly better recognition of two-dimensional tactile images (Vinter, Fernandes, Orlandi, & Morgan, 2012); for example, exploring with multiple fingers improves recognition in both blindfolded sighted and blind people (Morash, Pensky, Tseng, & Miele, 2014). Improving recognition by teaching correct procedures presupposes, however, validated procedures suited to a given class of haptic features, and no specific procedures tailored to the various features of two-dimensional tactile images have yet been established. Using computer technology, some researchers have grouped object-image parts belonging to the same conceptual level, or parts that are more strongly semantically related, into the same section, and presented the sections step by step from whole to detail according to conceptual level and semantic relatedness, yielding an exploratory procedure for human-computer interaction settings: participants first touch the image as a whole, then its local details, with parts of the same category or stronger semantic relatedness touched together (Rastogi, Pawluk, & Ketchum, 2013). This accords with human conceptual representation, compensates for the serial nature of haptic processing, and further aids recognition of two-dimensional tactile images.
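The coarse-to-fine, semantically grouped presentation order just described can be sketched in a few lines of Python. The encoding of parts as (name, concept level, category) tuples and the example part names are hypothetical illustrations, not the scheme actually used by Rastogi et al. (2013):

```python
from itertools import groupby

def presentation_stages(parts):
    """Order image parts from whole to detail, grouping parts of the
    same semantic category so they can be rendered together.
    `parts` is a list of (name, level, category) tuples, where a lower
    `level` means a coarser concept (hypothetical encoding)."""
    key = lambda p: (p[1], p[2])          # (concept level, category)
    ordered = sorted(parts, key=key)      # coarse levels first
    return [[name for name, _, _ in grp]
            for _, grp in groupby(ordered, key=key)]

# A hypothetical scene: the overall outline first, then the parts of
# a house and of a tree, each category presented as one stage.
stages = presentation_stages([
    ("outline", 0, "whole"),
    ("trunk", 1, "tree"), ("crown", 1, "tree"),
    ("door", 1, "house"), ("roof", 1, "house"),
])
# -> [['outline'], ['door', 'roof'], ['trunk', 'crown']]
```

Sorting before `groupby` is required because `groupby` only merges adjacent items; the sort also guarantees the whole-image stage precedes all detail stages.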
The ability to recognize two-dimensional tactile images can also be strengthened by training. On the one hand, experience with tactile images improves recognition. A comparison of blind and sighted children found that blind children, who had more experience using two-dimensional tactile images, recognized them better than sighted children (Picard, Albaret, & Mazella, 2014). Moreover, among early-blind children, those with more experience of tactile images recognized them better (Theurel, Witt, Claudet, Hatwell, & Gentaz, 2013). On the other hand, through instruction and training blind people can understand the vision-specific rules of perspective and draw three-dimensional objects on a two-dimensional plane in accordance with them (Heller et al., 2005; Kennedy & Juricevic, 2006). These results show that targeted training can promote recognition of two-dimensional tactile images.
Many studies have found that the ability to recognize two-dimensional tactile images changes with age. On the one hand, it increases with age: children and adolescents recognize such images less well than adults (Mazella, Albaret, & Picard, 2018; Overvliet & Krampe, 2018), and among adolescents recognition ability correlates positively with age (Picard, Albaret, & Mazella, 2013). This relationship may reflect the continued maturation with age of tactile shape discrimination (Mazella et al., 2018) and of working-memory capacity and spatial reference frames (Overvliet & Krampe, 2018). In addition, the plasticity of this ability declines with age: training in exploratory procedures improves blind children's recognition of tactile images but does not help blind adolescents or adults (Vinter et al., 2012). On the other hand, comparisons of older and younger adults show that older adults recognize two-dimensional tactile images less well, a difference that is more pronounced for complex images; when the object's category is provided before recognition, however, the difference between older and younger adults shrinks (Picard et al., 2013). This suggests that what declines in older adults is the ability to retrieve stored information after "visual translation", not the "visual translation" ability itself: when the category is provided, the load of retrieving information from long-term memory is reduced, narrowing the performance gap with younger adults (Overvliet, Wagemans, & Krampe, 2013).
The influencing factors summarized above can each affect recognition of two-dimensional tactile images, but they may also interact and jointly shape it. For example, an image's geometric features, viewpoint, and perspective rules affect its recognition, but visual experience and targeted recognition training may promote recognition by fostering understanding of those features and rules, by improving visual imagery ability, or by guiding the choice of suitable exploratory procedures; and visual experience, visual imagery ability, the acquisition of exploratory procedures, and the effects of training may all change with age and differ across age groups. Research on the factors influencing haptic recognition of two-dimensional images should therefore examine not only which factors matter but also how they interact, so as to build a cognitive model of tactile image recognition with greater ecological validity and practical relevance.
At present, research on the neural basis of recognizing two-dimensional tactile images is very limited; existing studies mostly concern the haptic processing of a single image property, and their stimuli are simple patterns rather than two-dimensional images of concrete objects. The neural basis of tactile image recognition therefore requires further investigation. Nevertheless, although no direct evidence yet pinpoints this neural basis, existing findings allow us to infer which brain regions may be involved.
On the one hand, because two-dimensional tactile images are derived from visual images, and the neural bases of haptic and visual object recognition partly overlap (Amedi et al., 2002; Yau, Kim, Thakur, & Bensmaia, 2016), the overlapping regions may participate in tactile image recognition. In a study of tactile working memory, participants successively touched three angles formed by raised-dot lines and then judged whether a test angle had appeared before; the task activated the inferior frontal gyrus (IFG), posterior parietal cortex (PPC), and middle frontal gyri (mFG) (Yang et al., 2014), regions that also participate in visual working memory (Kaas, van Mier, Visser, & Goebel, 2013) and so may serve both visual and tactile working memory (Yang et al., 2014). In addition, the lateral occipital area (LO) is activated when judging the shape of three-dimensional objects by touch, indicating that it serves shape recognition in both vision and touch (Bauer et al., 2015; Lacey & Sathian, 2014; Lacey, Stilla, Sreenivasan, Deshpande, & Sathian, 2014). Such regions, engaged in both haptic and visual object representation, may form the neural basis of tactile image recognition. On the other hand, there are also regions specific to haptic representation of object properties, which may likewise be key to tactile image recognition. Discriminating the periodicity of raised-dot patterns by touch activates the postcentral gyrus and the superior parietal lobule (SPL), implicating these regions in haptic perception of periodicity (Yang et al., 2017). When discriminating the symmetry of raised-dot patterns by touch, peri-calcarine cortex is activated relative to visual symmetry discrimination, so this region may be critical for haptic recognition of symmetry (Bauer et al., 2015). Furthermore, although the neural bases of tactile and visual working memory overlap, the right posterior parietal cortex (right PPC) participates only in tactile working memory and may be a region specific to it (Ku, Zhao, Bodner, & Zhou, 2015; Yang et al., 2014). Research on the neural basis of haptically recognizing object features remains limited, however, and studies of two-dimensional tactile images, especially ecologically valid ones, are scarcer still, so the cognitive and neural mechanisms of tactile image recognition remain unclear.
Two-dimensional tactile images are one of the main means of converting visual information into haptic information. Following the lines or contours of a visual image, part of a material (paper or plastic) is reshaped by embossing, thermoforming, or braille dot printing to form raised lines on its surface, turning the visual image into a touchable one (Kalia et al., 2014). Tactile graphics produced this way are, however, time-consuming to make and hard to store and transport, and each sheet can present only one fixed image and cannot be reused (Vidal-Verdu & Hafez, 2007). To improve usability, researchers have developed electronic tactile displays that convert each pixel of a visual image into a corresponding tactile dot: pins at positions corresponding to the image's lines are raised, and the raised pins form raised lines, converting the visual image into a tactile one (Bellik & Clavel, 2017; Vidal-Verdu & Hafez, 2007; 焦阳, 龚江涛, 徐迎庆, 2016). Building on such static displays, researchers have also built dynamic tactile displays: instead of presenting the whole line drawing at once, a dynamic display renders, under the user's finger, the raised dots for the part of the image currently being touched, so only a local region needs to be displayed (Rastogi et al., 2013). Such displays are cheap and compact, but the user can only touch local parts of the image with the fingers and cannot take in the whole image with the palm (Vidal-Verdu & Hafez, 2007).
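The pixel-to-pin conversion such displays perform can be illustrated with a minimal sketch, assuming a grayscale image stored as a list of rows with values 0-255 (0 = dark line, 255 = white background); the function name, pin-matrix resolution, and threshold are illustrative, not any particular device's API:

```python
def to_taxels(image, rows, cols, threshold=128):
    """Map a grayscale image (list of lists, 0-255) onto a rows x cols
    pin matrix: a pin is raised (1) if any pixel in the block it covers
    is darker than `threshold`, i.e. belongs to a line.
    A minimal sketch; real displays add calibration and refresh logic."""
    h, w = len(image), len(image[0])
    taxels = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # pixel block covered by this pin (at least one pixel each)
            y0, y1 = r * h // rows, max((r + 1) * h // rows, r * h // rows + 1)
            x0, x1 = c * w // cols, max((c + 1) * w // cols, c * w // cols + 1)
            if any(image[y][x] < threshold
                   for y in range(y0, y1) for x in range(x0, x1)):
                taxels[r][c] = 1
    return taxels

# A 4x4 image with one dark horizontal line, rendered on a 2x2 matrix:
pins = to_taxels([[255] * 4, [0] * 4, [255] * 4, [255] * 4], rows=2, cols=2)
# -> [[1, 1], [0, 0]]: the top row of pins is raised along the line
```

A dynamic display would apply the same mapping only to the image region under the finger and refresh the pins as the finger moves.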
As generation technology has advanced, two-dimensional tactile images have found fairly wide application in assistive design for blind people (Pawluk, Adams, & Kitada, 2015; Trief, Cascella, & Bruce, 2013). The tactile map is one typical application: building on a visual map, it represents routes, landmarks, and other elements with raised lines, symbols, or textures, helping blind people acquire spatial information (谌小猛, 李闻戈, 2016; Hatwell et al., 2003). Traditional tactile maps that present tactile graphics alone convey relatively limited information and require considerable learning to use proficiently, so researchers have recently improved them, for example by replacing flat landmarks with three-dimensional markers (Gual, Puyuelo, & Lloveras, 2015) or by making maps interactive with speech and vibrotactile feedback (Brock, Truillet, Oriola, Picard, & Jouffrais, 2015; Memeo, Campus, & Brayda, 2014), which improves their effectiveness. The other main application is raised-line drawings of concrete objects, commonly used as teaching aids to help blind people acquire knowledge (杨光, 钟经华, 董晶, 2017; 张蕾, 刘建英, 2014); such drawings can also be used to build psychometric instruments for blind people (Mazella, Albaret, & Picard, 2014; Mazella, Albaret, & Picard, 2016).
In existing tools for generating and applying two-dimensional tactile images, images of concrete objects are produced by directly converting the lines of a visual image into touchable raised lines, so the images retain features suited to visual recognition, such as line changes produced by occlusion, visual perspective, and viewpoint. Under haptic recognition, these features affect object learning and recognition and can even hinder it (龚江涛 等, 2018). Improving the generation and application of tactile images therefore requires attending to the differences between the haptic and visual channels, investigating how two-dimensional image information from haptic input is converted into knowledge represented in three-dimensional space, and, on that basis, understanding the cognitive mechanisms and influencing factors of tactile image recognition and improving how two-dimensional images are rendered as tactile line drawings, so as to raise the efficiency of their use.
The research reviewed above summarizes the influencing factors, cognitive mechanisms, and neural bases of haptic recognition of two-dimensional images. How the visual features of two-dimensional images should be rendered in the haptic modality still needs further study, however, and the cognitive and neural mechanisms of tactile image recognition remain unclear. Haptic cognition of two-dimensional images is still in its infancy, and many scientific and practical questions remain to be solved.
First, the cognitive and neural mechanisms of recognizing two-dimensional tactile images are not yet clear. Although the image-mediation model explains the cognitive mechanism to some extent, it cannot fully account for the recognition processes of both sighted and blind people. Current findings show that the image features sighted and blind people rely on when recognizing tactile images differ markedly (龚江涛 等, 2018), suggesting that the two groups' cognitive mechanisms may differ and that a more complete theory is needed. Research on the neural basis is also scarce: existing work focuses mainly on the similarities and differences between the neural bases of visual and haptic object recognition (Lacey & Sathian, 2014), and more on the visuo-haptic neural bases of recognizing three-dimensional objects (Snow et al., 2015) or simple image features such as symmetry (Bauer et al., 2015), leaving unexamined the neural mechanisms of haptically recognizing two-dimensional object images, especially images of concrete objects, and how they differ from those of visual recognition.
Second, it remains unclear whether two-dimensional image information from haptic input can be converted into knowledge represented in three-dimensional space and, if so, by what cognitive and neural mechanisms. Sighted people have extensive visual experience of three-dimensional objects and two-dimensional images; when recognizing tactile images, they can convert haptic input into visual imagery and then into knowledge of three-dimensional objects (Lebaz et al., 2010; Lederman et al., 1990). Whether blind people, with little or no such visual experience, can still derive three-dimensional object knowledge from haptically input two-dimensional image information and thereby recognize tactile images is unknown; if they can, the underlying cognitive mechanism also needs further study.
Third, how to design two-dimensional tactile image aids better suited to visually impaired users requires further research. Because of modality differences, tactile images converted directly from visual images are not necessarily suited to haptic recognition, so tactile image design needs to be improved in light of the characteristics of touch and the factors influencing recognition. Existing work has presented tactile images step by step in accordance with the serial nature of haptic processing (Rastogi et al., 2013) and has added haptic guidance and spoken explanations to lead blind users through the image in a set order (焦阳 等, 2016; Brock et al., 2015; Memeo et al., 2014), improving recognition efficiency. Many features ill-suited to haptic recognition still need to be addressed, however, such as the lines commonly used in visual images to depict surface highlights, lines omitted because of overlapping objects, and the drawing convention of conveying depth by making near things large and far things small; all of these may be unsuited to haptic recognition of two-dimensional images. Building on an understanding of the cognitive mechanisms involved, tactile images should be improved by choosing more appropriate ways of reducing three-dimensional information to two dimensions for presentation on a plane, thereby making them more recognizable by touch.
Finally, future research should explore whether training can improve visually impaired people's haptic recognition of two-dimensional images. As noted above, training can improve haptic recognition performance (Picard et al., 2014; Theurel et al., 2013) and help blind people understand perspective (Kennedy & Juricevic, 2006). Thus, although it is currently difficult to design two-dimensional images ideally suited to haptic recognition, blind people may, through learning and training, come to understand the design conventions of tactile images converted from visual ones (e.g., conveying distance and depth by relative size, or surface highlights by lines) and to raise two-dimensional planar information into three-dimensional representations, thereby improving, from an applied standpoint, visually impaired people's recognition of two-dimensional tactile images.
Convergence of visual and tactile shape processing in the human lateral occipital complex,
We have recently demonstrated using fMRI that a region within the lateral occipital complex (LOC) is activated by objects when either seen or touched. We term this cortical region LOtv for the lateral occipital tactile-visual region. We report here that LOtv voxels tend to be located in sub-regions of LOC that show preference for graspable visual objects over faces or houses. We further examine the nature of object representation in LOtv by studying its response to stimuli in three modalities: auditory, somatosensory and visual. If objects activate LOtv, irrespective of the modality used, the activation is likely to reflect a highly abstract representation. In contrast, activation specific to and touch may reflect common and exclusive attributes shared by these senses. We show here that while object activation is robust in both the visual and the somatosensory modalities, auditory signals do not evoke substantial responses in this region. The lack of auditory activation in LOtv cannot be explained by differences in task performance or by an ineffective auditory stimulation. Unlike and touch, auditory information contributes little to the recovery of the precise shape of objects. We therefore suggest that LOtv is involved in recovering the geometrical shape of objects.
A dynamic gesture recognition and prediction system using the convexity approach,
Several researchers around the world have studied gesture recognition, but most of the recent techniques fall in the curse of dimensionality and are not useful in real time environment. This study proposes a system for dynamic gesture recognition and prediction using an innovative feature extraction technique, called the Convexity Approach. The proposed method generates a smaller feature vector to describe the hand shape with a minimal amount of data. For dynamic gesture recognition and prediction, the system implements two independent modules based on Hidden Markov Models and Dynamic Time Warping. Two experiments, one for gesture recognition and another for prediction, are executed in two different datasets, the RPPDI Dynamic Gestures Dataset and the Cambridge Hand Data, and the results are showed and discussed.
Neural correlates associated with superior tactile symmetry perception in the early blind,
Symmetry is an organizational principle that is ubiquitous throughout the visual world. However, this property can also be detected through non-visual modalities such as touch. The role of prior visual experience on detecting tactile patterns containing symmetry remains unclear. We compared the behavioral performance of early blind and sighted (blindfolded) controls on a tactile symmetry detection task. The tactile patterns used were similar in design and complexity as in previous visual perceptual studies. The neural correlates associated with this behavioral task were identified with functional magnetic resonance imaging (fMRI). In line with growing evidence demonstrating enhanced tactile processing abilities in the blind, we found that early blind individuals showed significantly superior performance in detecting tactile symmetric patterns compared to sighted controls. Furthermore, comparing patterns of activation between these two groups identified common areas of activation (e.g. superior parietal cortex) but key differences also emerged. In particular, tactile symmetry detection in the early blind was also associated with activation that included peri-calcarine cortex, lateral occipital (LO), and middle temporal (MT) cortex, as well as inferior temporal and fusiform cortex. These results contribute to the growing evidence supporting superior behavioral abilities in the blind, and the neural correlates associated with crossmodal neuroplasticity following visual deprivation.
A comparison of haptic material perception in blind and sighted individuals,
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization.
Expertise in tactile pattern recognition,
This article explores expertise in tactile object recognition. In one study, participants were trained to differing degrees of accuracy on tactile identification of two-dimensional patterns. Recognition of these patterns, of inverted versions of these patterns, and of subparts of these patterns was then tested. The inversion effect (better recognition of upright than inverted patterns) and the part-whole effect (better recognition of the whole than a part pattern), traditionally considered signatures of visual expertise, were observed for tactile experts but not for novices. In a second study, participants were trained as visual or tactile experts and then tested in the trained and nontrained modalities. Whereas expertise effects were observed in the modality of training, cross-modal transfer was asymmetric; visual experts showed generalization to haptic recognition, but tactile experts did not show generalization to visual recognition. Tactile expertise is not obviously attributable to visual mediation and emerges from domain-general principles that operate independently of modality.
Geometrical shapes rendering on a dot-matrix display
Using a dot-matrix display, it is possible to present geometricalshapes with different rendering methods: solid shapes, empty shapes, vibratingshapes, etc. An open question is then: which rendering method allows the fastestand most reliable recognition performances using touch? This paper presentsresults of a user study that we have conducted to address this question.Using a 60*60 dot-matrix display, we asked 40 participants to recognize 6 differentgeometrical shapes (square, circle, simple triangle, right triangle, diamondand cross) within the shortest possible time. Six different methods to renderthe shapes were tested depending on the rendering of shape's outline and inside:static outline combined with static or vibrant or empty inside, and vibratingoutline combined with static or vibrant or empty inside. The results showthat squares, right triangles, and crosses are more quickly recognized than circles,diamonds, and simple triangles. Furthermore, the best rendering method isthe one that combines static outline with empty inside.
The copying of complex geometric drawings by sighted and visually impaired children,
This study examined the role of visual imagery in the centripetal execution principle (CEP), a graphic rule that is related to the drawing of complex figures that are composed of embedded geometric shapes. Sighted blindfolded children and children with early-onset low vision and early-onset blindness copied raised-line drawings (using only the haptic modality). The results revealed the dominance of the CEP in the sighted and blind groups, but not in the group with low vision. They suggest that the CEP is not determined by visual imagery, but by a more general mechanism that is based on children's perceptual experience.
Interactivity improves usability of geographic maps for visually impaired people,
Tactile relief maps are used by visually impaired people to acquire mental representation of space, but they retain important limitations (limited amount of information, braille text, etc.). Interactive maps may overcome these limitations. However, usability of these two types of maps has never been compared. It is then unknown whether interactive maps are equivalent or even better solutions than traditional raised-line maps. This study presents a comparison of usability of a classical raised-line map versus an interactive map composed of a multitouch screen, a raised-line overlay, and audio output. Both maps were tested by 24 blind participants. We measured usability as efficiency, effectiveness, and satisfaction. Our results show that replacing braille with simple audio-tactile interaction significantly improved efficiency and user satisfaction. Effectiveness was not related to the map type but depended on users characteristics as well as the category of assessed spatial knowledge. Long-term evaluation of acquired spatial information revealed that maps, whether interactive or not, are useful to build robust survey-type mental representations in blind users. Altogether, these results are encouraging as they show that interactive maps are a good solution for improving map exploration and cognitive mapping in visually impaired people.
Imagery and spatial processes in blindness and visual impairment,
The objective of this review is to examine and evaluate recent findings on cognitive functioning (in particular imagery processes) in individuals with congenital visual impairments, including total blindness, low-vision and monocular vision. As one might expect, the performance of blind individuals in many behaviours and tasks requiring imagery can be inferior to that of sighted subjects; however, surprisingly often this is not the case. Interestingly, there is evidence that the blind often employ different cognitive mechanisms than sighted subjects, suggesting that compensatory mechanisms can overcome the limitations of sight loss. Taken together, these studies suggest that the nature of perceptual input on which we commonly rely strongly affects the organization of our mental processes. We also review recent neuroimaging studies on the neural correlates of sensory perception and mental imagery in visually impaired individuals that have cast light on the plastic functional reorganization mechanisms associated with visual deprivation.
Effects of late visual impairment on mental representations activated by visual and tactile stimuli,
Although imagery is traditionally thought to be inherently linked to visual perception, growing evidence shows that mental images can arise also from nonvisual modalities. Paradigmatic in this respect is the case of individuals born blind or that became blind soon after birth. In this chapter, we will review evidence pertaining to different aspects of cognition showing that blind individuals... [Show full abstract]
Regularity detection by haptics and vision,
For vision, mirror-reflectional symmetry is usually easier to detect when it occurs within 1 object than when it occurs across 2 objects. The opposite pattern has been found for a different regularity, repetition. We investigated whether these results generalize to our sense of active touch (haptics). This was done to examine whether the interaction observed in vision results from intrinsic properties of the environment, or whether it is a consequence of how that environment is perceived and explored. In 4 regularity detection experiments, we haptically presented novel, planar shapes and then visually presented images of the same shapes. In addition to modality (haptics, vision), we varied regularity-type (symmetry, repetition), objectness (1, 2) and alignment of the axis of regularity with respect to the body midline (aligned, across). For both modalities, performance was better overall for symmetry than repetition. For vision, we replicated the previously reported regularity-type by objectness interaction for both stereoscopic and pictorial presentation, and for slanted and frontoparallel views. In contrast, for haptics, there was a 1-object advantage for repetition, as well as for symmetry when stimuli were explored with 1 hand, and no effect of objectness was found for 2-handed exploration. These results suggest that regularity is perceived differently in vision and in haptics, such that regularity detection does not just reflect modality-invariant, physical properties of our environment. (PsycINFO Database Record
The Gestalt principles of similarity and proximity apply to both the haptic and visual grouping of elements
Memory for an imagined pathway and strategy effects in sighted and in totally congenitally blind individuals,
The literature reports mixed results on the imagery abilities of the blind, at times showing a difference between sighted and blind individuals and at other times similarities. However, the possibility that the results are due to different strategies spontaneously used in performing the imagery tasks has never been systematically studied. A large group of 30 totally congenitally blind (TCB) individuals and a group of 30 sighted individuals matched for gender age and schooling were presented with a mental pathway task on a complex two-dimensional (5 5) matrix. After administering the task, participants were interviewed in order to establish the strategy they used. Results showed that both sighted and TCB may use a spatial mental imagery, a verbal or a mixed strategy in carrying out the task. Differences between the groups emerged only when last location and then entire pathway had to be remembered rather than just the last position, and were clearly affected by the type of strategy. Specifically, TCB performed more poorly than the sighted individuals when they used a spatial mental imagery strategy, whereas the two groups had a similar performance with a verbal strategy.
To what extent do gestalt grouping principles influence tactile perception?,
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happen to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.
Human haptic perception is interrupted by explorative stops of milliseconds,
INTRODUCTION: The explorative scanning movements of the hands have been compared to those of the eyes. The visual process is known to be composed of alternating phases of saccadic eye movements and fixation pauses. Descriptive results suggest that during the haptic exploration of objects short movement pauses occur as well. The goal of the present study was to detect these "explorative stops" (ES) during one-handed and two-handed haptic explorations of various objects and patterns, and to measure their duration. Additionally, the associations between the following variables were analyzed: (a) between mean exploration time and duration of ES, (b) between certain stimulus features and ES frequency, and (c) the duration of ES during the course of exploration. METHODS: Five different Experiments were used. The first two Experiments were classical recognition tasks of unknown haptic stimuli (A) and of common objects (B). In Experiment C space-position information of angle legs had to be perceived and reproduced. For Experiments D and E the PHANToM haptic device was used for the exploration of virtual (D) and real (E) sunken reliefs. RESULTS: In each Experiment we observed explorative stops of different average durations. For Experiment A: 329.50 ms, Experiment B: 67.47 ms, Experiment C: 189.92 ms, Experiment D: 186.17 ms and Experiment E: 140.02 ms. Significant correlations were observed between exploration time and the duration of the ES. Also, ES occurred more frequently, but not exclusively, at defined stimulus features like corners, curves and the endpoints of lines. However, explorative stops do not occur every time a stimulus feature is explored. CONCLUSIONS: We assume that ES are a general aspect of human haptic exploration processes. We have tried to interpret the occurrence and duration of ES with respect to the Hypotheses-Rebuild-Model and the Limited Capacity Control System theory.
The effect of volumetric (3D) tactile symbols within inclusive tactile maps,
Highlights: We compare two tactile maps, one of which includes volumetric (3D) tactile symbols. Using 3D symbols together with 2D ones improves the interaction between users and tactile maps. 3D symbols can be located in less time and generally cause fewer errors than flat relief (2D) symbols. 3D printing opens new horizons for the design and production of tactile maps for blind users.
Touching for knowing: Cognitive psychology of haptic manual perception, John Benjamins Pub
Tangible pictures: Viewpoint effects and linear perspective in visually impaired people,
Perception of raised-line pictures in blindfolded-sighted, congenitally blind, late-blind, and low-vision subjects was studied in a series of experiments. The major aim of the study was to examine the value of perspective drawings for haptic pictures and visually impaired individuals. In experiment 1, subjects felt two wooden boards joined at 45 degrees, 90 degrees, or 135 degrees, and were instructed to pick the correct perspective drawing from among four choices. The first experiment on perspective found a significant effect of visual status, with much higher performance by the low-vision subjects. Mean performance for the congenitally blind subjects was not significantly different from that of the late-blind and blindfolded-sighted subjects. In a further experiment, blindfolded subjects drew tangible pictures of three-dimensional (3-D) geometric solids, and then engaged in a matching task. Counter to expectations, performance was not impaired for the 3-D drawings as compared with the frontal viewpoints. Subjects were also especially fast and more accurate when matching top views. Experiment 5 showed that top views were easiest for all of the visually impaired subjects, including those who were congenitally blind. Experiment 5 yielded higher performance for 3-D than frontal viewpoints. The results of all of the experiments were consistent with the idea that visual experience is not necessary for understanding perspective drawings of geometrical objects.
Viewpoint and orientation influence picture recognition in the blind,
In the first three experiments, subjects felt solid geometrical forms and matched raised-line pictures to the objects. Performance was best in experiment 1 for top views, with shorter response latencies than for side views, front views, or 3-D views with foreshortening. In a second experiment with blind participants, matching accuracy was not significantly affected by prior visual experience, but speed advantages were found for top views, with 3-D views also yielding better matching accuracy than side views. There were no performance advantages for pictures of objects with a constant cross section in the vertical axis. The early-blind participants had lower performance for side and frontal views. The objects were rotated to oblique orientations in experiment 3. Early-blind subjects performed worse than the other subjects given object rotation. Visual experience with pictures of objects at many angles could facilitate identification at oblique orientations. In experiment 5 with blindfolded sighted subjects, tangible pictures were used as targets and as choices. The results yielded superior overall performance for 3-D views (mean, M = 74% correct) and much lower matching accuracy for top views as targets (M = 58% correct). Performance was highest when the target and matching viewpoint were identical, but 3-D views (M = 96% correct) were still far better than top views. The accuracy advantage of the top views also disappeared when more complex objects were tested in experiment 6. Alternative theoretical implications of the results are discussed.
Pattern perception and pictures for the blind,
The influence of viewpoint and object detail in blind people when matching pictures to complex objects,
We examined haptic viewpoint effects in blindfolded-sighted (BS) and visually impaired subjects: early blind (EB), late blind (LB), and very low vision (VLV). Participants felt complex objects and matched tangible pictures to them. In experiment 1, the EB and BS subjects had similar overall performance. Experiment 2 showed that the presence of a detail on the target object lowered performance in the BS subjects, and that matching accuracy was lower overall for top views for the blind subjects. In experiments 3-5, EB, LB, VLV, and BS subjects made judgments about perspective pictures of a model house with more salient object details. In experiment 3, performance was higher for side views than for corner views. Elevated side views were identified more readily than elevated corner views in experiment 4. Performance for top views was higher than for elevated side views in experiment 5, given the relative simplicity of the top-view depictions and salient details. The EB and BS participants had somewhat lower matching accuracy scores than the other groups. We suggest that visual experience is helpful, but not essential for picture perception. Viewpoint effects may vary with experience and object complexity, but the relevant experience need not be specifically visual in nature.
The neural substrate for working memory of tactile surface texture,
Fine surface texture is best discriminated by touch, in contrast to macro geometric features like shape. We used functional magnetic resonance imaging and a delayed match-to-sample task to investigate the neural substrate for working memory of tactile surface texture. Blindfolded right-handed males encoded the texture or location of up to four sandpaper stimuli using the dominant or non-dominant hand. They maintained the information for 10–12 s and then answered whether a probe stimulus matched the memory array. Analyses of variance with the factors Hand, Task, and Load were performed on the estimated percent signal change for the encoding and delay phase. During encoding, contralateral effects of Hand were found in sensorimotor regions, whereas Load effects were observed in bilateral postcentral sulcus (BA2), secondary somatosensory cortex (S2), pre-SMA, dorsolateral prefrontal cortex (dlPFC), and superior parietal lobule (SPL). During encoding and delay, Task effects (texture > location) were found in central sulcus, S2, pre-SMA, dlPFC, and SPL. The Task and Load effects found in hand- and modality-specific regions BA2 and S2 indicate involvement of these regions in the tactile encoding and maintenance of fine surface textures. Similar effects in hand- and modality-unspecific areas dlPFC, pre-SMA and SPL suggest that these regions contribute to the cognitive monitoring required to encode and maintain multiple items. Our findings stress both the particular importance of S2 for the encoding and maintenance of tactile surface texture, as well as the supramodal nature of parieto-frontal networks involved in cognitive control. Hum Brain Mapp, 2013.
Perception of tactile graphics: Embossings versus cutouts,
Graphical information, such as illustrations, graphs, and diagrams, is an essential complement to text for conveying knowledge about the world. Although graphics can be communicated well via the visual modality, conveying this information via touch has proven to be challenging. The lack of easily comprehensible tactile graphics poses a problem for the blind. In this paper, we advance a hypothesis for the limited effectiveness of tactile graphics. The hypothesis contends that conventional graphics that rely upon embossings on two-dimensional surfaces do not allow the deployment of tactile exploratory procedures that are crucial for assessing global shape. Besides potentially accounting for some of the shortcomings of current approaches, this hypothesis also serves a prescriptive purpose by suggesting a different strategy for conveying graphical information via touch, one based on cutouts. We describe experiments demonstrating the greater effectiveness of this approach for conveying shape and identity information. These results hold the potential for creating more comprehensible tactile drawings for the visually impaired while also providing insights into shape estimation processes in the tactile modality.
Tactile Picture Recognition: Errors are in shape acquisition or object matching?,
Numerous studies have demonstrated that sighted and blind individuals find it difficult to recognize tactile pictures of common objects. However, it is still not clear what makes recognition of tactile pictures so difficult. One possibility is that observers have difficulty acquiring the global shape of the image when feeling it. Alternatively, observers may have an accurate understanding of the shape but are unable to link it to a particular object representation. We, therefore, conducted two experiments to determine where tactile picture recognition goes awry. In Experiment 1, we found that recognition of tactile pictures by blindfolded sighted observers correlated with image characteristics that affect shape acquisition (symmetry and complexity). In Experiment 2, we asked drawing experts to draw what they perceived after feeling the images. We found that the experts produced three types of drawings when they could not recognize the tactile pictures: (1) drawings that did not look like objects (incoherent), (2) drawings that looked like incorrect objects (coherent but inaccurate) and (3) drawings that looked like the correct objects (coherent and accurate). The majority of errors seemed to result from inaccurate perception of the global shape of the image (error types 1 and 2). Our results suggest that recognition of simplistic tactile pictures of objects is largely inhibited by low-level tactile shape processing rather than high-level object recognition mechanisms.
Foreshortening, convergence and drawings from a blind adult,
Esref is a congenitally totally blind man, practiced in drawing. He was asked to draw solid and wire cubes situated in several places around his vantage point. He used foreshortening of receding sides and convergence of obliques, in approximate one-point perspective. We note that haptics provides information about the direction of objects--the basis of perspective.
The intelligent hand,
Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory,
In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities.
Visuo-haptic multisensory object recognition, categorization, and representation,
Spatial imagery in haptic shape perception,
We have proposed that haptic activation of the shape-selective lateral occipital complex (LOC) reflects a model of multisensory object representation in which the role of visual imagery is modulated by object familiarity. Supporting this, a previous functional magnetic resonance imaging (fMRI) study from our laboratory used inter-task correlations of blood oxygenation level-dependent (BOLD) signal magnitude and effective connectivity (EC) patterns based on the BOLD signals to show that the neural processes underlying visual object imagery (objIMG) are more similar to those mediating haptic perception of familiar (fHS) than unfamiliar (uHS) shapes. Here we employed fMRI to test a further hypothesis derived from our model, that spatial imagery (spIMG) would evoke activation and effective connectivity patterns more related to uHS than fHS. We found that few of the regions conjointly activated by spIMG and either fHS or uHS showed inter-task correlations of BOLD signal magnitudes, with parietal foci featuring in both sets of correlations. This may indicate some involvement of spIMG in HS regardless of object familiarity, contrary to our hypothesis, although we cannot rule out alternative explanations for the commonalities between the networks, such as generic imagery or spatial processes. EC analyses, based on inferred neuronal time series obtained by deconvolution of the hemodynamic response function from the measured BOLD time series, showed that spIMG shared more common paths with uHS than fHS. Re-analysis of our previous data, using the same EC methods as those used here, showed that, by contrast, objIMG shared more common paths with fHS than uHS. Thus, although our model requires some refinement, its basic architecture is supported: a stronger relationship between spIMG and uHS compared to fHS, and a stronger relationship between objIMG and fHS compared to uHS.
Haptic identification of raised-line drawings: High visuospatial imagers outperform low visuospatial imagers,
It has been suggested (Lederman, Klatzky, Chataway, & Summers, 1990, Perception & Psychophysics) that a visual imagery process is involved in the haptic identification of raised-line drawings of common objects. The finding of significant correlations between visual imagery ability and performance on picture-naming tasks was taken as experimental evidence in support of this assumption. However, visual imagery measures came from self-report procedures, which can be unreliable. The present study therefore used an objective measure of visuospatial imagery abilities in sighted participants and compared three groups of high, medium and low visuospatial imagers on their accuracy and response times in identifying raised-line drawings by touch. Results revealed between-group differences on accuracy, with high visuospatial imagers outperforming low visuospatial imagers, but not on response times. These findings lend support to the view that visuospatial imagery plays a role in the identification of raised-line drawings by sighted adults.
Haptic recognition of non-figurative tactile pictures in the blind: Does life-time proportion without visual experience matter?,
The present study tests whether age at onset of total blindness and the proportion of life-time without visual experience affect the haptic processing and recognition of tactile pictures in a sample o
Visual mediation and the haptic recognition of 2-dimensional pictures of common objects,
A set of three experiments was performed to investigate the role of visual imaging in the haptic recognition of raised-line depictions of common objects. Blindfolded, sighted (Experiment 1) observers performed the task very poorly, while several findings converged to indicate that a visual translation process was adopted. These included: (1) strong correlations between imageability ratings (obtained in Experiment 1 and, independently, in Experiment 2) and both recognition speed and accuracy, (2) superior performance with, and greater ease of imaging, two-dimensional as opposed to three-dimensional depictions, despite equivalence in rated line complexity, and (3) a significant correlation between the general ability of the observer to image and obtained imageability ratings of the stimulus depictions. That congenitally blind observers performed the same task even more poorly, while their performance did not differ for two- versus three-dimensional depictions (Experiment 3), provides further evidence that visual translation was used by the sighted. Such limited performance is contrasted with the considerable skill with which real common objects are processed and recognized haptically. The reasons for the general difference in the haptic performance of two- versus three-dimensional tasks are considered. Implications for the presentation of spatial information in the form of tangible graphics displays for the blind are also discussed.
Mapping the internal geometry of tactile space,
A large body of research has shown spatial distortions in the perception of tactile distances on the skin. For example, perceived tactile distance is increased on sensitive compared to less sensitive skin regions, and larger for stimuli oriented along the medio-lateral axis than the proximo-distal axis of the limbs. In this study we aimed to investigate the spatial coherence of these distortions by reconstructing the internal geometry of tactile space using multidimensional scaling (MDS). Participants made verbal estimates of the perceived distance between 2 touches applied sequentially to locations on their left hand. In Experiment 1 we constructed perceptual maps of the dorsum of the left hand, which showed a good fit to the actual configuration of stimulus locations. Critically, these maps also showed clear evidence of spatial distortion, being stretched along the medio-lateral hand axis. Experiment 2 replicated this result and showed that no such distortion is apparent on the palmar surface of the hand. These results show that distortions in perceived tactile distance can be characterized by geometrically simple and coherent deformations of tactile space. We suggest that the internal geometry of tactile space is shaped by the geometry of receptive fields in somatosensory cortex.
Similarity of tactual and visual picture recognition with limited field of view,
Subjects attempted to recognize simple line drawings of common objects using either touch or vision. In the touch condition, subjects explored raised line drawings using the distal pad of the index finger or the distal pads both of the index and of the middle fingers. In the visual condition, a computer-driven display was used to simulate tactual exploration. By moving an electronic pen over a digitizing tablet, the subject could explore a line drawing stored in memory; on the display screen a portion of the drawing appeared to move behind a stationary aperture, in concert with the movement of the pen. This aperture was varied in width, thus simulating the use of one or two fingers. In terms of average recognition accuracy and average response latency, recognition performance was virtually the same in the one-finger touch condition and the simulated one-finger vision condition. Visual recognition performance improved considerably when the visual field size was doubled (simulating two fingers), but tactual performance showed little improvement, suggesting that the effective tactual field of view for this task is approximately equal to one finger pad. This latter result agrees with other reports in the literature indicating that integration of two-dimensional pattern information extending over multiple fingers on the same hand is quite poor. The near equivalence of tactual picture perception and narrow-field vision suggests that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view.
Haptic tests for use with children and adults with visual impairments: A literature review,
Haptic-2D: A new haptic test battery assessing the tactual abilities of sighted and visually impaired children and adolescents with two-dimensional raised materials,
To fill an important gap in the psychometric assessment of children and adolescents with impaired vision, we designed a new battery of haptic tests, called Haptic-2D, for visually impaired and sighted individuals aged five to 18 years. Unlike existing batteries, ours uses only two-dimensional raised materials that participants explore using active touch. It is composed of 11 haptic tests, measuring scanning skills, tactile discrimination skills, spatial comprehension skills, short-term tactile memory, and comprehension of tactile pictures. We administered this battery to 138 participants, half of whom were sighted (n = 69), and half visually impaired (blind, n = 16; low vision, n = 53). Results indicated a significant main effect of age on haptic scores, but no main effect of vision or Age × Vision interaction effect. Reliability of test items was satisfactory (Cronbach's alpha, α = 0.51–0.84). Convergent validity was good, as shown by a significant correlation (age partialled out) between total haptic scores and scores on the B101 test (rp = 0.51, n = 47). Discriminant validity was also satisfactory, as attested by a lower but still significant partial correlation between total haptic scores and the raw score on the verbal WISC (rp = 0.43, n = 62). Finally, test–retest reliability was good (rs = 0.93, n = 12; interval of one to two months). This new psychometric tool should prove useful to practitioners working with young people with impaired vision.
The development of haptic processing skills from childhood to adulthood by means of two-dimensional materials,
Research into haptic perception has mostly focused on 3-dimensional objects, and more needs to be known about the processing of 2-dimensional materials (e.g., raised dots and lines and raised-line shapes, patterns and pictures). This study examines the age-related changes in various skills related to the haptic exploration of 2-dimensional raised-line and dot materials and how these skills are related to haptic picture perception. Ninety-one participants, aged 4 years to adult, were asked to perform a series of haptic tasks that entailed (a) finding dots and following lines; (b) matching elements based on texture, shape, and size; (c) matching elements based on spatial location and orientation; (d) memorising sequences of dots and shapes; and (e) identifying complete and incomplete raised-line pictures. On all the tests, the results showed that scores improved with age. Shape discrimination scores accounted for variability in comprehension scores for outline pictures. We suggested that identifying tactile pictures by touch improved with age and mainly depended on the improvement of shape discrimination skills.
Do blind subjects differ from sighted subjects when exploring virtual tactile maps?,
Effects of using multiple hands and fingers on haptic performance in individuals who are blind,
In a previous paper we documented that sighted participants complete haptic tasks faster with two hands and multiple fingers, but that these benefits are task specific. The present study investigates whether these effects are the same for participants who are blind. We compared the performance of fourteen blind participants on seven tactile-map tasks using seven finger conditions. As with sighted participants, blind participants performed all tasks faster with multiple fingers. Line-tracing tasks were faster with fingers added to an already in-use hand, and sometimes when added to the second hand. Local and global search tasks were faster with multiple fingers and two hands. Distance comparison tasks were performed faster with multiple fingers, but not two hands. Lastly, moving in a straight line was faster with multiple fingers. These results reinforce our previous finding that the haptic system performs best when it can exploit the independence of multiple fingers. Furthermore, in every instance that an effect was different between sighted and blind participants, the blind participants benefitted more from two hands or multiple fingers than the sighted participants. This indicates that the blind participants have learned, through experience or training, how to best take advantage of multiple fingers during haptic tasks.
Haptic two-dimensional shape identification in children, adolescents, and young adults,
Perceptual grouping in haptic search: The influence of proximity, similarity, and good continuation,
We conducted a haptic search experiment to investigate the influence of the Gestalt principles of proximity, similarity, and good continuation. We expected faster search when the distractors could be grouped. We chose edges at different orientations as stimuli because they are processed similarly in the haptic and visual modality. We therefore expected the principles of similarity and good continuation to be operational in haptics as they are in vision. In contrast, because of differences in spatial processing between vision and haptics, we expected differences for the principle of proximity. In haptics, the Gestalt principle of proximity could operate at two distinct levels-somatotopic proximity or spatial proximity-and we assessed both possibilities in our experiments. The results show that the principles of similarity and good continuation indeed operate in this haptic search task. Neither of our proximity manipulations yielded effects, which may suggest that grouping by proximity must take place before an invariant representation of the object has formed.
The effects of aging on haptic 2D shape recognition,
We use the image-mediation model (Klatzky & Lederman, 1987) as a framework to investigate potential sources of adult age differences in the haptic recognition of two-dimensional (2D) shapes. This model states that the low-resolution, temporally sequential, haptic input is translated into a visual image, which is then reperceived through the visual processors, before it is matched against a long-term memory representation and named. In three experiments we tested three groups of 12 older adults (mean age 73.11) and three groups of 12 young adults (mean age 22.80). In Experiment 1 we confirm age-related differences in haptic 2D shape recognition, and we show the typical age × complexity interaction. In Experiment 2 we show that if we facilitate the visual translation process, age differences become smaller, but only with simple shapes and not with the more complex everyday objects. In Experiment 3 we target the last step in the model (matching and naming) for complex stimuli. We found that age differences in exploration time were considerably reduced when this component process was facilitated by providing a category name. We conclude that the image-mediation model can explain adult-age differences in haptic recognition, particularly if the role of working memory in forming the transient visual image is considered. Our findings suggest that sensorimotor skills thought to rely on peripheral processes for the most part are critically constrained by age-related changes in central processing capacity in later adulthood.
Designing haptic assistive technology for individuals who are blind or visually impaired,
This paper considers issues relevant for the design and use of haptic technology for assistive devices for individuals who are blind or visually impaired in some of the major areas of importance: Braille reading, tactile graphics, orientation and mobility. We show that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been directly applied to design with positive results. Differences in brain organization and performance capabilities between individuals who are "early blind" and "late blind" from using the same tactile/haptic accommodations, such as the use of Braille, suggest the importance of training and assessing these groups individually. Practical restrictions on device design, such as performance limitations of the technology and cost, raise questions as to which aspects of these restrictions are truly important to overcome to achieve high performance. In general, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence.
Haptic identification of raised-line drawings by children, adolescents and young adults,
Haptic identification of raised-line drawings when categorical information is given: A comparison between visually impaired and sighted children,
Research into haptic picture perception has mostly concerned adult participants, and little is known about haptic picture perception in visually impaired and sighted children. In the present study, we compared 13 visually impaired children (early blind and low vision) aged 9-10 years and 13 age-matched blindfolded sighted children on their ability to identify raised-line pictures of common objects when information about object category was provided prior to picture presentation (semantic cueing). The visually impaired children had moderate practice with tactile pictures, whereas the sighted controls had no prior practice with tactile pictures. We sought to determine whether the benefits of semantic cueing would add to those of practice, resulting in higher performance in the visually impaired children compared to the sighted controls (hypothesis 1), or whether semantic cueing would compensate for the lack of practice with tactile pictures in the sighted children, leading to a possible disappearance of the advantage of the visually impaired children over the sighted controls (hypothesis 2). In line with hypothesis 1, the results showed that the visually impaired children outperformed the sighted controls on both identification accuracy and response time to correct naming. We concluded that the visually impaired children outperformed the sighted controls because they benefited from both semantic cueing and superior exploration skills. By contrast, in the sighted children, semantic cueing was not sufficient to compensate for their encoding difficulties.
Haptic recognition of two-dimensional raised-line patterns by early-blind, late-blind, and blindfolded sighted adults,
We investigated the role of visual experience and visual imagery in the processing of two-dimensional (2-D) tactile patterns. The performance of early-blind (EB), late-blind (LB), and blindfolded sighted (S) adults in the recognition of 2-D raised-line patterns was compared. We also examined whether recognition of 2-D tactile patterns depends on the type of memory strategy (e.g. spatial, visuo-spatial, verbal, and kinesthetic) used by EB, LB, and S participants to perform the task. No significant between-group differences in recognition performance were found, despite significant between-group differences in self-reported memory strategies. Recognition performance did not vary significantly with strategy, but correlated positively with visuo-spatial imagery abilities in the S participants. These findings may be taken to suggest that the difficulties some blind people experience with tactile pictures are not due to difficulties in processing 2-D tactile patterns.
Short-term memory for spatial configurations in the tactile modality: A comparison with vision,
This study investigates the role of acquisition constraints on the short-term retention of spatial configurations in the tactile modality in comparison with vision. It tests whether the sequential processing of information inherent to the tactile modality could account for limitation in short-term memory span for tactual-spatial information. In addition, this study investigates developmental aspects of short-term memory for tactual- and visual-spatial configurations. A total of 144 child and adult participants were assessed for their memory span in three different conditions: tactual, visual, and visual with a limited field of view. The results showed lower tactual-spatial memory span than visual-spatial, regardless of age. However, differences in memory span observed between the tactile and visual modalities vanished when the visual processing of information occurred within a limited field. These results provide evidence for an impact of acquisition constraints on the retention of spatial information in the tactile modality in both childhood and adulthood.
Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired,
One possibility of providing access to visual graphics for those who are visually impaired is to present them tactually: unfortunately, details easily available to vision need to be magnified to be accessible through touch. For this, we propose an "intuitive" zooming algorithm to solve potential problems with directly applying visual zooming techniques to haptic displays that sense the current location of a user on a virtual diagram with a position sensor and, then, provide the appropriate local information either through force or tactile feedback. Our technique works by determining and then traversing the levels of an object tree hierarchy of a diagram. In this manner, the zoom steps adjust to the content to be viewed, avoid clipping and do not zoom when no object is present. The algorithm was tested using a small, "mouse-like" display with tactile feedback on pictures representing houses in a community and boats on a lake. We asked the users to answer questions related to details in the pictures. Comparing our technique to linear and logarithmic step zooming, we found a significant increase in the correctness of the responses (odds ratios of 2.64:1 and 2.31:1, respectively) and usability (differences of 36% and 19%, respectively) using our "intuitive" zooming technique.
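The abstract above describes the zooming step only in outline: zoom levels follow an object-tree hierarchy of the diagram rather than fixed linear or logarithmic factors, so each step adapts to the content under the finger, avoids clipping, and skips zooming where no object is present. Below is a minimal sketch of that idea, not the authors' implementation; the names (`Node`, `node_at`, `zoom_in`) and the bounding-box representation are assumptions for illustration.

```python
# Sketch (assumed, not the published algorithm) of content-adaptive
# "intuitive" zooming over an object-tree hierarchy of a diagram.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    bbox: tuple            # (x0, y0, x1, y1) in diagram coordinates
    children: list = field(default_factory=list)

def node_at(node, x, y):
    """Return the root-to-leaf path of nodes whose boxes contain (x, y)."""
    x0, y0, x1, y1 = node.bbox
    if not (x0 <= x <= x1 and y0 <= y <= y1):
        return []
    for child in node.children:
        path = node_at(child, x, y)
        if path:
            return [node] + path
    return [node]

def zoom_in(root, x, y, level):
    """One intuitive zoom step: descend one level of the hierarchy at (x, y).

    Returns the bounding box to display next, or None when no object lies
    under the cursor (no zoom happens, avoiding 'empty' zoom steps). Fitting
    the view to a whole object's box is what avoids clipping it."""
    path = node_at(root, x, y)
    if not path:
        return None
    target = path[min(level + 1, len(path) - 1)]
    return target.bbox
```

For example, with a lake scene containing a boat that contains a sail, a zoom step at a point on the sail first fits the view to the boat, and a second step fits it to the sail; a step over empty water returns `None` and leaves the view unchanged.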
Touch is a team effort: Interplay of submodalities in cutaneous sensibility,
Traditionally, different classes of cutaneous mechanoreceptive afferents are ascribed different and largely non-overlapping functional roles (for example texture or motion) stemming from their different response properties. This functional segregation is thought to be reflected in cortex, where each neuron receives input from a single submodality. We summarize work that challenges this notion. First, while it is possible to design artificial stimuli that preferentially excite a single afferent class, most natural stimuli excite all afferents and most tactile percepts are shaped by multiple submodalities. Second, closer inspection of cortical responses reveals that most neurons receive convergent input from multiple afferent classes. We argue that cortical neurons should be grouped based on their function rather than on their submodality composition.
Analysis of haptic information in the cerebral cortex,
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.
Perceiving space and optical cues via a visuo-tactile sensory substitution system: A methodological approach for training of blind subjects for navigation,
A methodological approach to perceptual learning was used to allow both early blind subjects (experimental group) and blindfolded sighted subjects (control group) to experience optical information and spatial phenomena, on the basis of visuo-tactile information transmitted by a 64-taxel pneumatic sensory substitution device. The learning process allowed the subjects to develop abilities in spatial localisation, shape recognition (with generalisation to different points of view), and monocular depth cue interpretation. During the training phase, early blind people initially experienced more difficulties than blindfolded sighted subjects (who had previous perceptual experience of perspective) with interpreting and using monocular depth cues. The improvement in performance of all blind subjects during training sessions, and the quite similar level of performance reached by the two groups in the final navigation tasks, suggested that early blind people were able to develop and apply a cognitive understanding of depth cues. Both groups showed generalisation of the learning from the initial phases to cue identification in the maze, and subjectively experienced shapes facing them. Subjects' performance depended not only on their perceptual experience but also on their previous spatial competencies.
Preserved haptic shape processing after bilateral LOC lesions,
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded to be critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects, in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.
Selective visuo-haptic processing of shape and texture,
Raised line drawings are spontaneously explored with a single finger,
In this study we examine the strategies used by blindfolded subjects asked to freely explore raised line drawings and identify what is depicted in them. We were particularly interested in how often a single finger is spontaneously used because in several studies subjects are forced to use only one fingertip and the extent to which this restriction may depress haptic perception is unclear. The results suggest that despite a variety of strategies, people 'naturally' use single fingertips sufficiently often to allow confidence in conclusions that are based on studies imposing this restriction.
Tactile picture recognition by early blind children: The effect of illustration technique,
This study investigated factors that influenced haptic recognition of tactile pictures by early blind children. Such research is motivated by the difficulty of identifying tactile pictures, that is, two-dimensional representations of objects, even though they are the most common way to depict the surrounding world to blind people. Thus, it is of great interest to better understand whether an appropriate representative technique can make objects' identification more effective and to what extent a technique is uniformly suitable for all blind individuals. Our objective was to examine the effects of three techniques used to illustrate pictures (raised lines, thermoforming, and textures), and to find out if their effect depended on participants' level of use of tactile pictures. Twenty-three early blind children (half with a regular or moderate level of use of tactile pictures, and half with either no use or infrequent use) were asked to identify 24 pictures of eight objects designed like the pictures currently used in tactile books and illustrated using these three techniques. Results showed better recognition of textured pictures than of thermoformed and raised line pictures. Participants with regular or moderate use performed better than participants with no or infrequent use. Finally, the effect of illustration technique on picture recognition did not depend on prior use of tactile pictures. To conclude, early and frequent use of tactile material develops haptic proficiency, and textures have a facilitating effect on picture recognition whatever the user level. Practical implications for the design of tactile pictures are discussed in the conclusion.
Haptic two-dimensional angle categorization and discrimination,
This study examined the extent to which haptic perception of two-dimensional (2-D) shape is modified by the design of the perceptual task (single-interval categorization vs. two-interval discrimination), the orientation of the angles in space (oblique vs. horizontal), and the exploration strategy (one or two passes over the angle). Subjects (n = 12) explored 2-D angles using the index finger of the outstretched arm. In the categorization task, subjects scanned individual angles, categorizing each as "large" or "small" (2 angles presented in each block of trials; range 80° vs. 100° to 89° vs. 91°; implicit standard 90°). In the discrimination task, a pair of angles was scanned (standard 90°; comparison 91–103°) and subjects identified the larger angle. The threshold for 2-D angle categorization was significantly lower than for 2-D angle discrimination, 4° versus 7.2°. Performance in the categorization task did not vary with either the orientation of the angles (horizontal vs. oblique, 3.9° vs. 4°) or the number of passes over the angle (1 vs. 2 passes, 3.9° vs. 4°). We suggest that the lower threshold with angle categorization likely reflects the reduced cognitive demands of this task. We found no evidence for a haptic oblique effect (higher threshold with oblique angles), likely reflecting the presence of an explicit external frame of reference formed by the intersection of the two bars forming the 2-D angles. Although one-interval haptic categorization is a more sensitive method for assessing 2-D haptic angle perception, perceptual invariances for exploratory strategy and angle orientation were, nevertheless, task-independent.
A field study of a standardized tangible symbol system for learners who are visually impaired and have multiple disabilities,
Communication is integrally tied to quality of life. It allows us to share our ideas and...
Perceptual grouping by similarity of surface roughness in haptics: The influence of task difficulty,
Abstract We investigated grouping by similarity of surface roughness in the context of task difficulty. We hypothesized that grouping yields a larger benefit at higher levels of task complexity, because efficient processing is more helpful when more cognitive resources are needed to execute a task. Participants searched for a patch of a different roughness as compared to the distractors in two strips of similar or dissimilar roughness values. We reasoned that if the distractors could be grouped based on similar roughness values, exploration time would be shorter and fewer errors would occur. To manipulate task complexity, we varied task difficulty (high target saliency equalling low task difficulty), and we varied the fingers used to explore the display (two fingers of one hand being more cognitively demanding than two fingers of opposite hands). We found much better performance in the easy condition as compared to the difficult condition (in both error rates and mean search slopes). Moreover, we found a larger effect for the similarity manipulation in the difficult condition as compared to the easy condition. Within the difficult condition, we found a larger effect for the one-hand condition as compared to the two-hand condition. These results show that haptic search is accelerated by the use of grouping by similarity of surface roughness, especially when the task is relatively complex. We conclude that the effect of perceptual grouping is more prominent when more cognitive resources are needed to perform a task.
Graphical tactile displays for visually-impaired people,
This paper presents an up-to-date survey of graphical tactile displays. These devices provide information through the sense of touch. At best, they should display both text and graphics (text may be considered a type of graphic). Graphs made with shapeable sheets result in bulky items awkward to store and transport; their production is expensive and time-consuming and they deteriorate quickly. Research is ongoing for a refreshable tactile display that acts as an output device for a computer or other information source and can present the information in text and graphics. The work in this field has branched into diverse areas, from physiological studies to technological aspects and challenges. Moreover, interest in these devices is now being shown by other fields such as virtual reality, minimally invasive surgery and teleoperation. It is attracting more and more people, research and money. Many proposals have been put forward, several of them succeeding in the task of presenting tactile information. However, most are research prototypes and very expensive to produce commercially. Thus the goal of an efficient low-cost tactile display for visually-impaired people has not yet been reached.
Exploratory procedures of tactile images in visually impaired and blindfolded sighted children: How they relate to their consequent performance in drawing,
The aim of the present study was to compare the types of exploratory procedures employed by children when exploring bidimensional tactile patterns and correlate the use of these procedures with the children's shape drawing performance. 18 early blind children, 20 children with low vision and 24 age-matched blindfolded sighted children aged approximately 7 or 11 years were included in the study. The children with a visual handicap outperformed the sighted children in terms of haptic exploration and did not produce less recognizable drawings than their sighted counterparts. Close relationships were identified between the types of exploratory procedures employed by the children and their subsequent drawing performance, regardless of visual status. This close link between action and perception in the haptic modality indicates the importance of training blind children in exploratory procedures at an early age.
Angle discrimination in raised-line drawings,
Abstract We investigated the angular resolution subserving the haptic perception of raised-line drawings by measuring how accurately observers could discriminate between two angle sizes under various conditions. We found that, for acute angles, discrimination performance is highly dependent on exploration strategy: mean thresholds of 2.9 degrees and 6.0 degrees were found for two different exploration strategies. For one of the strategies we found that discriminability is not dependent on the bisector orientation of the angle. Furthermore, we found that thresholds almost double when the angular extent is increased from 20 degrees to 135 degrees. We also found that local apex information has a significant influence on discrimination for acute as well as obtuse angles. In the last experiment we investigated the influence of depiction mode but did not find any effect. Overall, the results tell us that the acuity with which angles in raised-line drawings are perceived is determined by the exploration strategy, local apex information, and global angular extent.
The influence of picture size on recognition and exploratory behaviour in raised-line drawings,
We demonstrate the influence of picture size on haptic recognition and exploratory behaviour. The stimuli were raised-line drawings of everyday objects. Participants were instructed to think aloud during haptic exploration of the pictures. We measured the delay between the initial correct speculation and the final correct response. The results indicate that picture size influences accuracy but not response latency: large drawings are recognised more often but not faster. By analysing video recordings of the experiment we found that two-handed exploration increases when picture size increases and that, on average, 83% of the exploration time involves the use of two hands. The thinking-aloud data showed that the average time difference between the initial correct speculation and the final correct response amounted to 23% of the total reaction time. We discuss our results with respect to the design of tactile pictures and the ecological validity of single-finger exploration.
Pointing to azimuths and elevations of targets: Blind and blindfolded-sighted,
Three groups of observers pointed to target circles in a path on the ground, in two parallel rows. Participants in one group viewed the circles and then pointed blindfolded. Those in a second group were blindfolded and then touched the circles with a stick while walking past them. Volunteers in the third group were blind adults, a diverse group, who also used a stick to detect the circles. For all three groups, as distance to the circles increased, pointing azimuths shrank and elevations increased. We suggest that directions to targets on major environmental surfaces may be appreciated similarly by the blind and sighted. We challenge the assumption that the principle of convergence to the horizon, available through vision because of the way in which visual angle decreases on the retina, is not available through touch.
Brain networks involved in tactile speed classification of moving dot patterns: The effects of speed and dot periodicity,
Abstract Humans are able to judge the speed of an object's motion by touch. Research has suggested that tactile judgment of speed is influenced by physical properties of the moving object, though the neural mechanisms underlying this process remain poorly understood. In the present study, functional magnetic resonance imaging was used to investigate brain networks that may be involved in tactile speed classification and how such networks may be affected by an object's texture. Participants were asked to classify the speed of 2-D raised dot patterns passing under their right middle finger. Activity in the parietal operculum, insula, and inferior and superior frontal gyri was positively related to the motion speed of dot patterns. Activity in the postcentral gyrus and superior parietal lobule was sensitive to dot periodicity. Psycho-physiological interaction (PPI) analysis revealed that dot periodicity modulated functional connectivity between the parietal operculum (related to speed) and postcentral gyrus (related to dot periodicity). These results suggest that texture-sensitive activity in the primary somatosensory cortex and superior parietal lobule influences brain networks associated with tactually-extracted motion speed. Such effects may be related to the influence of surface texture on tactile speed judgment.
Tactile priming modulates the activation of the fronto-parietal circuit during tactile angle match and non-match processing: An fMRI study,
Repetition of a stimulus task reduces the neural activity within certain cortical regions responsible for working memory (WM) processing. Although previous evidence showed that the repetition of vibrotactile stimuli reduced the activation in the ventrolateral prefrontal cortex, whether the repeated tactile spatial stimuli triggered the priming effect correlated with the same cortical region remains unclear. Therefore, we used event-related functional magnetic resonance imaging and a delayed match-to-sample task to investigate the contributions of the priming effect to tactile spatial WM processing. Fourteen healthy volunteers were asked to encode three tactile angle stimuli during the encoding phase and one tactile angle stimulus during the recognition phase, and then, they answered whether the last angle stimulus was presented during the encoding phase. As expected, both the Match and Non-Match tasks activated a similar cerebral network. The critical new finding was decreased brain activity in the left inferior frontal gyrus (IFG), right posterior parietal cortex (PPC) and bilateral medial frontal gyrus (mFG) for the match task compared to the Non-Match task. Therefore, we suggest that the tactile priming engaged repetition suppression mechanisms during tactile angle matching, and this process decreased the activation of the fronto-parietal circuit, including IFG, mFG and PPC.
Feeling form: the neural basis of haptic shape perception,
The tactile perception of the shape of objects critically guides our ability to interact with them. In this review, we describe how shape information is processed as it ascends the somatosensory neuraxis of primates. At the somatosensory periphery, spatial form is represented in the spatial patterns of activation evoked across populations of mechanoreceptive afferents. In the cerebral cortex, neurons respond selectively to particular spatial features, like orientation and curvature. While feature selectivity of neurons in the earlier processing stages can be understood in terms of linear receptive field models, higher order somatosensory neurons exhibit nonlinear response properties that result in tuning for more complex geometrical features. In fact, tactile shape processing bears remarkable analogies to its visual counterpart and the two may rely on shared neural circuitry. Furthermore, one of the unique aspects of primate somatosensation is that it contains a deformable sensory sheet. Because the relative positions of cutaneous mechanoreceptors depend on the conformation of the hand, the haptic perception of three-dimensional objects requires the integration of cutaneous and proprioceptive signals, an integration that is observed throughout somatosensory cortex.
Tactile search for change has less memory than visual search for change,
Haptic perception of a 2D image is thought to make heavy demands on working memory. During active exploration, humans need to store the latest local sensory information and integrate it with kinesthetic information from hand and finger locations in order to generate a coherent perception. This tactile integration has not been studied as extensively as visual shape integration. In the current study, we compared working-memory capacity for tactile exploration to that of visual exploration as measured in change-detection tasks. We found smaller memory capacity during tactile exploration (approximately 1 item) compared with visual exploration (2–10 items). These differences generalized to position memory and could not be attributed to insufficient stimulus-exposure durations, acuity differences between modalities, or uncertainty over the position of items. This low capacity for tactile memory suggests that the haptic system is almost amnesic when outside the fingertips and that there is little or no cross-position integration.
A study of shape discrimination for tactile guide maps,
In a haptic shape task, the human ability to discriminate objects on the basis of their shape, as defined by active exploratory movements of the hand, depends on receptors located in the skin and deep structures. The shape of a 2-D object is a function of its geometric properties, including the pattern of the surfaces that form the object, their density, their size, and their spatial features. It is now widely known that haptic shape tasks share neural activity with visual cortical areas, but we have little information about the factors underlying haptic shape perception. We investigated which factors of a pattern play an important role in a haptic shape discrimination task. Haptic shape can be conveyed through a small braille device. The experiment included 10 baseline patterns, and six variant patterns were derived from each baseline. Healthy right-handed subjects performed a delayed match-to-sample task discriminating between pairs of two-dimensional patterns. From the differing levels of difficulty across pattern types, we concluded that the different densities and sizes of haptic shape patterns underlie the differences in difficulty in the discrimination task.
Copyright © Editorial Office of Advances in Psychological Science (《心理科学进展》)