ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Archive

2023, Volume 31, Issue suppl.

    Deciphering Human Decision Rules in Motion Discrimination
    Jinfeng Huang, Alexander Yu, Yifeng Zhou, Zili Liu
    2023, 31 (suppl.):  1-1. 
    PURPOSE: We investigated the eight decision rules for a same-different task, as summarized in Petrov (Psychonomic Bulletin & Review, 16(6), 1011-1025, 2009).
    METHODS: These rules, including the differencing (DF) rule and the optimal independence rule, are all based on the standard model in signal detection theory. Each rule receives two stimulus values as inputs and uses one or two decision criteria.
RESULTS: We proved that the false alarm rate p(F) ≤ 1/2 for four of the rules. We also conducted a same-different rating experiment on motion discrimination (n = 54), with a 4° or 8° directional difference. We found that the human receiver operating characteristic (ROC) spanned its full range [0, 1] in p(F), thus rejecting these four rules. The slope of the human Z-ROC was also < 1, further confirming that the independence rule was not used. We subsequently fitted the human data, in the four-dimensional (pAA, pAB, pBA, pBB) space, to the remaining four rules (the DF and likelihood ratio rules, each with one or two criteria), where pXY = p(responding “different” given stimulus sequence XY).
CONCLUSIONS: Using residual distribution analysis, we found that only the two-criterion DF rule (DF2) could account for the human data.
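To make the decision-rule framework concrete, here is a minimal simulation sketch (not the authors' fitting code; the d' value and criteria are arbitrary assumptions) showing how a two-criterion differencing rule generates a same-different ROC whose false-alarm rate is not capped at 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
d_prime = 1.5        # assumed separation of the two stimulus values on the decision axis
n_trials = 100_000

def df2(c_low, c_high):
    """Two-criterion differencing (DF2) rule for one block of same-different trials.

    Each trial provides two noisy observations (x1, x2); the model responds
    'different' whenever the signed difference x1 - x2 falls outside
    [c_low, c_high]. Returns the hit rate on 'different' (AB) trials and the
    false-alarm rate on 'same' (AA) trials.
    """
    same = rng.normal([0.0, 0.0], 1.0, (n_trials, 2))
    diff = rng.normal([0.0, d_prime], 1.0, (n_trials, 2))

    def respond_diff(obs):
        delta = obs[:, 0] - obs[:, 1]
        return np.mean((delta < c_low) | (delta > c_high))

    return respond_diff(diff), respond_diff(same)

# Sweeping asymmetric criteria traces an ROC whose p(F) spans nearly [0, 1],
# unlike the four rules for which p(F) <= 1/2 was proved above.
for w in (0.05, 0.5, 1.0, 2.0, 4.0):
    p_hit, p_fa = df2(c_low=-w, c_high=2 * w)
    print(f"criteria (-{w:.2f}, {2*w:.2f}):  p(H) = {p_hit:.3f}   p(F) = {p_fa:.3f}")
```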
    Examination of Interocular Delay in Anisomyopia
    Mengting Chen, Nan Jiang, Jiawei Zhou, Seung Hyun Min
    2023, 31 (suppl.):  3-3. 
PURPOSE: To assess whether there is an interocular delay in anisomyopia and myopia, and whether optical correction can reduce the delay.
METHODS: 15 emmetropes, 18 anisomyopes, and 17 myopes participated in the study. Subjects viewed a psychophysical stimulus depicting a rotating cylinder and were asked to report their perception of rotation; the interocular delay could be measured from this ambiguous percept. The stimuli were shown at three spatial frequencies (0.5, 1 and 2 c/deg). To investigate whether optical correction could reduce the delay in observers with myopia or anisomyopia, we also measured their delay using the same visual task after they had been optically corrected with contact lenses. Axial length, an anatomical marker of myopia, was also measured in the two patient groups to determine whether there was a relationship between interocular delay and axial length.
RESULTS: The absolute interocular delay was larger in the myopic and anisomyopic observers than in the emmetropic controls at 1 and 2 c/deg when there was no optical correction. We did not find a significant correlation between the ratio of absolute interocular delay with and without correction and ocular features (e.g., spherical equivalent difference, axial length difference and corneal curvature difference). However, we found that optical correction relieved the delay in the two patient groups.
    CONCLUSIONS: We show that there is an interocular delay in anisomyopia and myopia in the range of low spatial frequencies. Our data do not support the premise that interocular differences in refractive error or other clinical characteristics induce the Pulfrich phenomenon. Moreover, when these differences are resolved, interocular delay is still present. This indicates that the absence or presence of naturally-occurring delay in individuals with normal or abnormal vision is not related to the anatomical differences of the two eyes; instead, we speculate that the cause might originate from the visual cortex.
    Changing Gears to See Fast and Slow: Hierarchical Computation of Velocity Across V1, MT, and MST in Non-human Primates
Ke-Yan He, Ye Wang, Jun-Xiang Luo, Xiao-Hong Li, Lixuan Liu, Yiliang Lu, Ian Max Andolina, Niall McLoughlin, Stewart Shipp, Lothar Spillmann, Wei Wang
    2023, 31 (suppl.):  5-5. 
PURPOSE: Distinguishing velocity is critical for animals' survival and for human life, and the dorsal visual pathway is involved in the perception of velocity. Although various studies on velocity have been reported, we know only a rough preferred speed range for each area. How each brain area processes velocities from low to high speed is still unclear; specifically, it is unknown how higher-level areas process velocity once lower-level areas have lost direction selectivity at high speeds. Our main hypothesis was that the lower-level areas could provide sequential retinotopic activations to the higher-level areas to support velocity processing at high speed.
    METHODS: To get a whole picture of velocity processing for visual brain areas across speeds, we recorded single units in V1, MT, and MST in alert macaques for dots, gratings, and plaids. Then we built a receptive field model based on retinotopic information processing to simulate neuronal responses.
RESULTS: 1) The optimal speed and the cutoff speed of direction selectivity increased hierarchically along V1, MT, and MST, regardless of cell subtype (component, pattern, or unclassified), across increasing speeds. 2) The differences in response latencies among V1, MT, and MST suggested hierarchical bottom-up processing of velocity. 3) Model results were consistent with the physiological observations.
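As a purely illustrative sketch of how an optimal speed and a high-speed cutoff can be read off a tuning curve (a generic log-Gaussian descriptive fit on hypothetical data, not the authors' retinotopic receptive-field model; the half-height cutoff definition is also an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(speed, r_max, s_opt, sigma):
    # Descriptive speed-tuning function: response peaks at s_opt on a log axis.
    return r_max * np.exp(-0.5 * ((np.log2(speed) - np.log2(s_opt)) / sigma) ** 2)

# Hypothetical tuning data (deg/s vs. spikes/s) for one unit
speeds = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
rates = np.array([5, 12, 30, 48, 52, 35, 15, 6], dtype=float)

(r_max, s_opt, sigma), _ = curve_fit(log_gaussian, speeds, rates, p0=[50, 16, 1.5])
# One possible cutoff definition: the speed at which the fitted response
# falls to half of its peak on the high-speed side.
cutoff = s_opt * 2 ** (sigma * np.sqrt(2 * np.log(2)))
print(f"optimal speed = {s_opt:.1f} deg/s, high-speed cutoff = {cutoff:.1f} deg/s")
```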
CONCLUSIONS: These results supported our hypothesis that low-level areas, after losing direction selectivity, provide retinotopic signals that assist high-level areas in detecting motion direction. This study may advance the understanding of velocity processing in the brain and the development of computer vision in artificial intelligence (AI).
    Effects of Monocular Flicker on Binocular Imbalance in Amblyopic Patients
    Yiqiu Lu, Liying Zou, Wenjing Wang, Ruyin Chen, Jia Qu, Jiawei Zhou
    2023, 31 (suppl.):  7-7. 
    PURPOSE: This study aims to evaluate the effects of monocular flicker stimulation on binocular imbalance in patients with amblyopia.
METHODS: Seven amblyopic patients (28.3 ± 3.3 years, four females) and seven normally sighted participants (27.3 ± 4.1 years, five females) were included. The balance point (BP), at which the two eyes contribute equally, was measured using a binocular orientation combination task under baseline conditions and with monocular flicker at five temporal frequencies (TFs: 4, 7, 10, 15, and 20 Hz). |logBP| values were calculated and normalized to the baseline condition to analyze the influence of monocular flicker on binocular imbalance. Negative normalized |logBP| values indicate improved balance.
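A minimal sketch of the balance metric (the exact normalization is assumed here to be subtraction of the baseline value in log units; the BP values below are hypothetical):

```python
import numpy as np

def normalized_abs_log_bp(bp_flicker, bp_baseline):
    """Change in |log BP| relative to baseline.

    BP = 1 means the two eyes contribute equally to the binocular percept,
    so |log BP| = 0 is perfect balance. Negative values of the normalized
    measure indicate that flicker moved the observer closer to balance.
    """
    return np.abs(np.log10(bp_flicker)) - np.abs(np.log10(bp_baseline))

# Hypothetical example: an imbalanced baseline BP of 0.4 shifts to 0.7 when
# flicker is added to the fellow eye -> negative value, i.e. improved balance.
print(normalized_abs_log_bp(0.7, 0.4))   # ≈ -0.24
```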
RESULTS: Adding monocular flicker to the amblyopes' fellow eye (FE) resulted in normalized |logBP| values below zero, indicating improved balance. Conversely, when flicker was added to the amblyopic eye (AE) or to the non-dominant eye (nonDE) or dominant eye (DE) of normal participants, normalized |logBP| values were above zero, indicating disrupted balance. A mixed repeated-measures ANOVA revealed no significant difference in normalized |logBP| values between flicker applied to the AE in amblyopes and to the nonDE in normal participants (P = 0.481). However, when flicker was applied to the FE in amblyopes, normalized |logBP| values were significantly lower than when it was applied to the DE in normal participants (P < 0.001).
    CONCLUSIONS: While flickering the amblyopic eye disrupted balance, flickering the fellow eye improved binocular imbalance in amblyopes. These findings have implications for therapeutic interventions targeting binocular balance in amblyopia.
    Abnormal Lateral Motion after Visual Acuity Recovery in Anisometropic Amblyopia
    Yao Chen, Shiqi Zhou, Yiya Chen, Hao Chen, Robert F. Hess, Jiawei Zhou
    2023, 31 (suppl.):  9-9. 
    PURPOSE: A psychophysical paradigm involving parallax and lateral motion processing found that human motion perception is tuned by lateral motion. However, it is unclear whether deficits in dynamic stereo vision exist in amblyopes whose monocular vision has been fully normalized. Here we use a similar paradigm to investigate the dynamic stereoscopic vision in unilateral anisometropic amblyopia when normal visual acuity is restored.
    METHODS: Twelve clinically treated subjects with anisometropic amblyopia (mean age 23.61 ± 3.00 years) with best-corrected visual acuity ≤ 0.1 logMAR, 8 non-amblyopic anisometropes (mean age 22.22 ± 2.57 years), and 12 age-matched emmetropes (mean age 23.38 ± 2.81 years) with normal vision participated in this experiment. We presented 50 moving Gabor elements with a spatial frequency of 3 c/d as visual stimuli to measure the stereoscopic performance at six motion speeds (0.17°/s, 0.33°/s, 0.67°/s, 1.33°/s, 2.67°/s, and 5.33°/s). During the experiment, the six motion speeds were presented in a random order and the presentation time of the stimuli was set to 1000 ms. The stimuli were presented at two depth planes (25 Gabor elements were presented at the fixation plane and the other 25 Gabor elements were presented at an uncrossed disparity relative to the fixation plane). Observers were asked to report whether the Gabor elements in the front fixation plane were moving in the right or left direction without a time limit. We applied the staircase method to quantify the dynamic (lateral motion) stereo performance in the experiment.
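The abstract specifies only that "the staircase method" was used; the following sketch shows one common adaptive rule (a 3-down-1-up staircase with multiplicative steps, an assumption rather than the authors' exact procedure) for estimating a disparity threshold:

```python
import numpy as np

def three_down_one_up(respond, start_level, factor=1.5, n_reversals=10):
    """Generic 3-down-1-up staircase.

    `respond(level)` should return True for a correct response at a given
    disparity level. The level is divided by `factor` after three consecutive
    correct responses and multiplied by `factor` after each error; the
    threshold estimate is the geometric mean of the reversal levels.
    """
    level, run, last_dir, reversals = start_level, 0, 0, []
    for _ in range(1000):                      # hard cap on trials
        if len(reversals) >= n_reversals:
            break
        if respond(level):
            run += 1
            if run == 3:
                run = 0
                if last_dir == +1:
                    reversals.append(level)
                last_dir = -1
                level /= factor
        else:
            run = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level *= factor
    return float(np.exp(np.mean(np.log(reversals))))
```

In practice, `respond` would run one trial of the lateral-motion stereo task at the given disparity and return whether the observer's direction report was correct.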
RESULTS: There were significant differences in the stereo thresholds of emmetropes at different speeds (P = 0.001) and in the stereoacuity of anisometropes at different speeds (P = 0.003), whereas the differences in stereo thresholds of treated amblyopes at different speeds were not significant (P = 0.054). There was a significant difference in stereo thresholds between the treated amblyopes and the emmetropes and anisometropes (P < 0.001). Two-tailed Spearman correlation analysis showed that in anisometropes the correlations between dynamic stereo thresholds and the degree of anisometropia were not significant at five of the six speeds (P > 0.05), with the exception of 2.67°/s (P = 0.013); in the treated amblyopes (i.e., the amblyopic group with normal vision recovery), these correlations were not significant at any of the six speeds (P > 0.05). The correlations between dynamic stereo thresholds and RDS thresholds at the six speeds were not significant in the emmetropes, the anisometropes, or the treated amblyopes (all P > 0.05).
CONCLUSIONS: There is a deficit in dynamic (lateral motion) stereo vision in anisometropic amblyopes with normal visual acuity after patching treatment.
    Mesoscale Functional Organization and Connectivity of Color, Disparity, and Naturalistic Texture in Human Second Visual Area
    Hailin Ai, Weiru Lin, Nihong Chen, Peng Zhang
    2023, 31 (suppl.):  10-10. 
    PURPOSE: The functional role and information processing at the intermediate level of the primate visual system remain elusive.
    METHODS: Using 7T BOLD fMRI at 1-mm isotropic resolution, we investigated laminar and columnar organizations for color, disparity and naturalistic texture in human second visual area (V2), along with its informational connectivity to lower and higher order visual cortices.
RESULTS: Although color-selective thin and disparity-selective thick stripe-columns could be clearly identified in area V2, BOLD activity to naturalistic textures exhibited neither columnar organization nor a response preference among thin, thick and pale stripes. Cortical depth-dependent analyses revealed the strongest color selectivity in the superficial layers of V2, along with its feedforward and feedback connectivity with V1 and hV4. Disparity selectivity was similar across cortical depths in V2, demonstrating significant feedforward and feedback connectivity with V1 and V3ab. Interestingly, the selectivity for naturalistic texture was strongest in the deep layers of V2, with significant feedback connectivity from hV4.
    CONCLUSIONS: Therefore, compared to color and disparity, the selectivity to naturalistic textures in area V2 is more related to feedback processing and may develop after the critical period for forming cortical columns.
    Language Decoding for Visual Perception Based on Transformer
    Wei Huang, Hengjiang Li, Diwei Wu, Huafu Chen, Hongmei Yan
    2023, 31 (suppl.):  11-11. 
PURPOSE: When we view a scene, the visual cortex extracts and processes the visual information in the scene through various kinds of neural activity. Previous studies have decoded this neural activity into single or multiple semantic category tags that can caption the scene to some extent. However, these tags are isolated words with no grammatical structure and insufficiently convey what the scene contains. It is well known that textual language (sentences/phrases) is superior to single words in disclosing the meaning of images and in reflecting people's real understanding of them. Here, based on artificial intelligence technologies, we attempted to build a language decoding model to decode the neural activities evoked by images into language (phrases or short sentences).
METHODS: We propose a Dual-Channel Language Decoding Model (DC-LDM), which contains five modules: “Image-Extractor”, “Image-Encoder”, “Nerve-Extractor”, “Nerve-Encoder” and “Language-Decoder”. The first channel (image channel), comprising the “Image-Extractor” and “Image-Encoder”, extracts the semantic features of natural images ($I \in \mathbb{R}^{L \times W \times C}$), where L, W and C denote the length, width, and number of channels of the image, respectively. The second channel (nerve channel), comprising the “Nerve-Extractor” and “Nerve-Encoder”, extracts the semantic features of visual activities ($X = [x_1, \ldots, x_T]^{\mathsf{T}} \in \mathbb{R}^{T \times M}$), where T and M denote the time length and the number of voxels of the visual activities, respectively. In the training phase, the outputs of the two channels are weighted by a transfer factor (α) and fed to the “Language-Decoder”. In addition, we employed a progressive transfer strategy to train the DC-LDM and improve language decoding performance.
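A minimal architectural sketch of the dual-channel idea is given below (PyTorch; the layer sizes, the concatenation of channel memories, and the fixed α are illustrative assumptions, not the authors' implementation of the five named modules and progressive transfer):

```python
import torch
import torch.nn as nn

class DCLDMSketch(nn.Module):
    """Toy dual-channel language decoder: an image channel and a nerve (fMRI)
    channel are each encoded, weighted by the transfer factor alpha, and used
    as memory by a Transformer decoder that predicts text tokens."""

    def __init__(self, img_dim, voxel_dim, vocab_size, d_model=256, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.img_proj = nn.Linear(img_dim, d_model)      # stands in for "Image-Extractor"
        self.nerve_proj = nn.Linear(voxel_dim, d_model)  # stands in for "Nerve-Extractor"
        self.img_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.nerve_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, img_feats, voxel_series, tokens):
        # img_feats: (B, P, img_dim) patch features of the image I;
        # voxel_series: (B, T, voxel_dim) voxel time series X;
        # tokens: (B, S) previously generated word indices.
        img_mem = self.img_encoder(self.img_proj(img_feats))
        nerve_mem = self.nerve_encoder(self.nerve_proj(voxel_series))
        # The transfer factor alpha weights the two channels; progressive
        # transfer would gradually move weight from the image channel to the
        # nerve channel during training (a causal mask is omitted in this sketch).
        memory = torch.cat([self.alpha * img_mem, (1 - self.alpha) * nerve_mem], dim=1)
        return self.lm_head(self.decoder(self.token_emb(tokens), memory))

model = DCLDMSketch(img_dim=512, voxel_dim=4500, vocab_size=8000)
logits = model(torch.randn(2, 49, 512), torch.randn(2, 5, 4500),
               torch.randint(0, 8000, (2, 12)))
print(logits.shape)   # torch.Size([2, 12, 8000])
```

In this toy forward pass, `logits` has shape (batch, sequence, vocabulary), from which the next word of the decoded sentence can be predicted.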
RESULTS: We examined the decoded texts for different images in the test set, using VC fMRI activity from a sample subject. The texts decoded by our model describe the natural images reasonably well, although they are not completely consistent with the annotators' texts. The results show that our proposed model can capture semantic information from visual activity and express it in textual language. We adopted six indexes to quantitatively evaluate the difference between the decoded texts and the annotated texts of the corresponding visual images, and found that Word2vec-Cosine Similarity (WCS) was the best indicator of the similarity between the decoded and annotated texts. In addition, among the different visual cortices, the text decoded from the higher visual cortex was more consistent with the description of the natural image than that from the lower visual cortex.
CONCLUSIONS: Comparing different visual areas, we found that the decoding performance of the high-level visual cortex (HVC and VC) is significantly higher than that of the low-level visual areas (V1, V2, V3, and LVC). This confirms once again that the high-level visual cortex contains more semantic information than the low-level visual cortex, even when this semantic information is decoded into text. Our decoding model may inform language-based brain-computer interface research.
    fMRI Study of Implicit Emotion Processing and Regulation Under High Working Memory Load Situations
    Gantian Huang, Longqian Liu, Ping Jiang
    2023, 31 (suppl.):  12-12. 
PURPOSE: To evaluate the characteristics of emotion-related brain networks and the differences between the no-memory-load and high-memory-load conditions using fMRI.
METHODS: 26 participants underwent comprehensive psychological assessments, including the Emotion Regulation Questionnaire-10 (ERQ; based on the regulation strategy measured, it was divided into two parts: Reappraisal and Suppression), the Toronto Alexithymia Scale (TAS-12, including the difficulty-identifying-feelings items and the difficulty-describing-feelings items), and the General Health Questionnaire-12 (GHQ). Participants completed an emotional Face N-Back (EFNBACK) task, in which they were asked to respond to prescribed letters while watching a pseudo-random letter sequence. The task includes a 0-back no-memory-load condition (EF-0-back) and a 2-back high-memory-load condition (EF-2-back). Faces with fearful, happy or neutral expressions, or no faces (no-face), appear randomly on both sides of the letters as distractors. There are eight stimulus block types: two memory-load conditions (0-back and 2-back), each combined with one of the four facial interference states. The task includes three runs of 7 minutes and 4 seconds, with a total of 24 blocks presented in pseudorandom order. Each block contains 12 trials of 500 ms, in which the two sides of a letter show either the same picture of an actor's facial expression or no picture.
RESULTS: Across all participants, the TAS-12 score correlated positively with EF-2-back neutral face versus no face rAMY-rOFA connectivity (r = 0.612, P = 0.001) and EF-0-back fearful face versus no face lAMY-rOFA connectivity (r = 0.510, P = 0.009), and correlated negatively with EF-0-back happy face versus neutral face rAMY-rOFA connectivity (r = -0.537, P = 0.006). Significant positive correlations were found between the Reappraisal score and EF-2-back versus EF-0-back fearful face rAMY-lFFA connectivity (r = 0.580, P = 0.002) and EF-2-back versus EF-0-back happy face rAMY-rOFA connectivity (r = 0.512, P = 0.009). The Suppression score correlated negatively with EF-0-back happy face versus no face left amygdala-AC connectivity (r = -0.501, P = 0.011) and EF-0-back fearful face versus neutral face rAMY-lpSTS connectivity (r = -0.511, P = 0.009).
    CONCLUSIONS: Our study found that even under high-demand working memory task conditions, implicit emotion processing can still be activated, and subsequent research is needed to determine whether implicit emotion regulation exists.
    The Effect of Pre-saccadic Attention on Contrast Appearance
    Tianyu Zhang, Yongchun Cai
    2023, 31 (suppl.):  15-15. 
    PURPOSE: Previous studies have found that covert exogenous attention enhances the contrast appearance of low-contrast stimuli but attenuates the contrast appearance of high-contrast stimuli, whereas covert endogenous attention uniformly enhances the contrast appearance regardless of stimuli contrast. Here, we investigated how presaccadic attention, a kind of overt attention, alters the contrast appearance of low- and high-contrast stimuli.
METHODS: We used a central cue to direct presaccadic attention: subjects were required to saccade to the target location (saccade condition) or to maintain fixation at the center (neutral condition). Two gratings (the test and the standard) were randomly located on the left and right sides of fixation. Eye positions were monitored online and analyzed offline to extract valid trials. Subjects performed an equality judgment task, in which they reported whether the two gratings were of the same or different contrast. In the low-contrast condition, the contrast of the standard stimulus was 22.4% and the contrast of the test stimuli varied systematically in 11 log increments from 8% to 63%. In the high-contrast condition, the contrast of the standard stimulus was 60% and the contrast of the test stimuli varied in 11 log increments from 37% to 97%.
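In an equality judgment task the proportion of "same" responses forms a bell-shaped curve over test contrast, and the PSE is taken at its peak; a minimal fitting sketch on hypothetical data follows (the authors' exact fitting procedure is not specified in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical proportion-"same" data for 11 log-spaced test contrasts
# (standard contrast 22.4% in the low-contrast condition)
test_contrast = np.geomspace(0.08, 0.63, 11)
p_same = np.array([0.05, 0.12, 0.30, 0.55, 0.80, 0.92, 0.85, 0.60, 0.35, 0.15, 0.06])

def bell(x, pse, sigma, amp):
    # Gaussian in log contrast; its center is the point of subjective equality.
    return amp * np.exp(-0.5 * ((np.log(x) - np.log(pse)) / sigma) ** 2)

(pse, sigma, amp), _ = curve_fit(bell, test_contrast, p_same, p0=[0.224, 0.3, 0.9])
print(f"PSE = {pse:.3f}")   # shifts in PSE across conditions index changes in apparent contrast
```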
RESULTS: For the low-contrast gratings, presaccadic attention enhanced contrast appearance: the average Point of Subjective Equality (PSE) in the test-cued condition was lower, and the average PSE in the standard-cued condition higher, than in the neutral condition. In contrast, presaccadic attention attenuated the contrast appearance of high-contrast gratings, with the average PSE in the test-cued condition higher, and the average PSE in the standard-cued condition lower, than in the neutral condition.
CONCLUSIONS: Our results suggest that presaccadic attention changes subjective contrast perception by strengthening weak stimuli but weakening strong stimuli. This effect is similar to that of exogenous attention but different from that of endogenous attention.
    Perception of Causality Induces Pupil Dilation
    Yiwen Yu, Miao Zhong, XiangYong Yuan, Yi Jiang
    2023, 31 (suppl.):  16-16. 
    PURPOSE: Causality is fundamental to understanding our physical environment. Previous research typically measured causal perception by various behavioral indicators, but little is known about whether causal perception can be implicitly detected by physiological responses. This study aimed to investigate the impact of causal perception on pupil responses under explicit or ambiguous conditions.
    METHODS: Participants viewed animations featuring two discs (A and B) moving sequentially and judged whether the event represented a launch (A caused B to move) or a pass (A passed the stationary B). Experiment 1 manipulated the overlap between A and B across nine conditions (0% to 100%, in steps of 12.5%). Experiment 2 included three overlap conditions (0%, 100%, and ambiguous), with the ambiguous condition determined by the point of subjective equality (PSE) obtained in Experiment 1. Pupil size was recorded during Experiment 2.
    RESULTS: Experiment 1 revealed participants tended to perceive the animations as launching, as evidenced by significantly larger PSEs than 50%. In Experiment 2, when the overlap led to explicit causal perception, participants’ pupils dilated after the second disc completely overlapped the first disc and the first disc appeared to pass through it. Under the ambiguous condition, participants’ pupils dilated after the animations ended and prior to launching reports. Notably, these pupil effects were not driven by the baseline pupil size, suggesting they were induced by cognitive processing rather than general arousal.
    CONCLUSIONS: The current study revealed distinct pupil responses during causal perception based on unambiguous versus ambiguous sensory evidence, implying the involvement of different cognitive processes in these two situations.
    Adaptation of the Perception of Animacy from Biological Motion
    Mei Huang, Yi Jiang, Ying Wang
    2023, 31 (suppl.):  17-17. 
    PURPOSE: Humans can readily perceive animacy from the unique movement patterns of living creatures, known as biological motion (BM). Researchers have proposed that there is an evolutionarily ancient mechanism in the visual system tuned to the local motion of terrestrial vertebrates to serve as a ‘life detector’. Nevertheless, evidence for this hypothesis mainly came from the direction discrimination task that did not directly assess the perception of ‘life’, and how the brain encodes perceived animacy from BM cues remains unclear.
METHODS: Here, we investigated these issues using an animacy rating task combined with the visual adaptation paradigm. Repeated adaptation to a stimulus with a given feature, e.g., being more or less animate, biases the perception of subsequent stimuli in the opposite direction, e.g., toward being less or more animate. This adaptation aftereffect is considered to result from weakened neuronal responses specific to the tested property of the adaptor, thus providing a non-invasive way to probe the activity of neuronal populations through behavioral performance. Observers rated perceived animacy for a series of morphed motion stimuli spanning a continuum between natural human walking and non-BM, after adapting to different BM cues (Exp. 1: intact human motion; Exp. 2: feet motion; Exp. 3: static human form; Exp. 4: pigeon motion) and to the non-BM controls. The rating scores were fit with a psychometric function, and the adaptation aftereffect was assessed using the shift in the point of subjective equality (PSE) between the adapting conditions in each experiment.
    RESULTS: We found that preexposure to intact human BM and non-BM stimuli induced significant adaptation aftereffects on animacy perception. This effect persisted after adaptation to feet movements carrying diagnostic local kinematic cues but not after viewing the static form of BM, indicating there are neuronal populations dedicated to animacy perception from BM based on motion signals. Moreover, adapting to the movement of pigeons could bias animacy perception for human motions, revealing that the neural representation of animacy from BM stimuli can transfer across species.
    CONCLUSIONS: These results suggest that perceiving animacy from BM involves a neural mechanism driven by local foot motion signals and responsive to cross-species kinematic cues, supporting the existence of a ‘life detector’ tuned to animate motion in the human brain.
    First Impressions of Body Shapes in Chinese Individuals
    Ying Hu, Xiaolan Fu, Alice O’Toole
    2023, 31 (suppl.):  19-19. 
    PURPOSE: People spontaneously infer personality traits (e.g., lazy, extraverted) from body shapes when encountering strangers. In previous research, computer-generated bodies were derived from laser scanning of real humans to visualize and quantify body trait impressions. Research with American individuals indicated a multivariate trait space with dimensions relating to valence (good vs. bad) and agency (active vs. passive). The trait impressions were further predicted by body shape parameters. Do body-trait inferences vary by culture or are they universal? Our study examined whether the body-trait inference applied to Chinese participants.
    METHODS: A total of 140 computer-generated bodies (70 females, 70 males) were rated by Chinese participants based on 30 personality traits. We visualized the structure of the body-trait space using correspondence analysis and predicted trait ratings based on body shape parameters using multiple linear regression.
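A minimal sketch of the trait-prediction step (synthetic placeholder data; the real shape parameters and ratings come from the 140 rated bodies, and the authors' exact modelling pipeline may differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_bodies, n_shape_params = 140, 10                   # 140 bodies; assumed 10 shape parameters
shape = rng.normal(size=(n_bodies, n_shape_params))  # placeholder body shape coefficients
lazy = shape @ rng.normal(size=n_shape_params) + rng.normal(size=n_bodies)  # placeholder ratings

# Multiple linear regression predicting one trait ("lazy") from body shape;
# a cross-validated R^2 above zero corresponds to above-chance prediction.
r2 = cross_val_score(LinearRegression(), shape, lazy, cv=5, scoring="r2")
print(f"mean cross-validated R^2 = {r2.mean():.2f}")
```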
    RESULTS: Body-trait space showed that the first dimension was valence (Conscientiousness/Openness/Neuroticism), which was driven by body weight. The second dimension was Extraversion/Agreeableness, which was driven by whether a body appeared strong or typical. Additionally, body shapes predicted trait ratings with above-chance accuracy at both the trait profile and individual trait levels.
    CONCLUSIONS: The study is the first systematic examination of body trait impressions in Chinese individuals. It highlights that forming body impressions is a universal phenomenon, but the structure of trait spaces, determinant body features, and prediction accuracy can vary by culture.
    Integration of Object Motion and Position is Constrained by Sensory Uncertainty
    Ke Yin, Jiyan Zou, Ce Mo
    2023, 31 (suppl.):  22-22. 
PURPOSE: The perceived position of a visual object can be shifted by the presence of motion signals, a phenomenon known as the illusory motion-induced position shift (MIPS). The prominent object-tracking model casts the illusory MIPS as the result of optimal integration of positional and motion information, weighted by the visual system according to their relative uncertainty; however, it remains unknown how sensory uncertainty affects such integration. Here, we investigated this issue by examining how the magnitude of the illusory MIPS varies as a function of positional and motion uncertainty.
METHODS: To enable effective and independent manipulation of the positional and motion uncertainty of the visual object, we modified the classic double-drift illusion, which has been shown to give rise to a compelling illusory MIPS. The visual stimulus was a random dot kinematogram (RDK) presented within a circular aperture that moved on a tilted path (45° or 135°). The directions of the individual moving dots conformed to a von Mises distribution whose mean (the internal motion) was set orthogonal to the external aperture path. To measure the magnitude of MIPS, a psychophysical staircase procedure was employed to find the point of subjective verticality of the external aperture path. Critically, the uncertainty of the RDK motion was manipulated by changing the variance of the distribution, while the positional uncertainty was manipulated by changing the dot density of the aperture. According to the object-tracking model, we predicted stronger illusory MIPS for lower internal motion uncertainty (smaller variance of the von Mises distribution) and higher external positional uncertainty (lower dot density).
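A small sketch of the internal-motion manipulation (numpy; the dot counts and kappa values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def dot_directions(n_dots, path_angle_deg, kappa):
    """Per-dot motion directions for the RDK: samples from a von Mises
    distribution whose mean is orthogonal to the external aperture path.
    Larger kappa (smaller circular variance) = lower internal-motion
    uncertainty; fewer dots = higher positional uncertainty."""
    mean_dir = np.deg2rad(path_angle_deg + 90.0)   # internal motion orthogonal to the path
    return rng.vonmises(mean_dir, kappa, size=n_dots)

# Low vs. high internal-motion uncertainty for a 45-degree aperture path
print(np.rad2deg(dot_directions(5, 45, kappa=20.0)))   # tightly clustered near 135 deg
print(np.rad2deg(dot_directions(5, 45, kappa=1.0)))    # broadly scattered directions
```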
    RESULTS: Consistent with our predictions, the perceived tilt of the external aperture path deviated towards the internal RDK motion, as was previously reported in the classic double-drift illusion. More importantly, we found that the magnitude of MIPS was constrained by the positional and motion uncertainty. The illusory effect was significantly weakened by the increased variance of the von Mises distribution, yet was strengthened as the dot density increased.
    CONCLUSIONS: Our findings showed that information with higher certainty plays a dominant role in the integration process, which generates coherent perceptual estimates that enable the tracking of visual objects.
    Effect of Circadian Rhythm on Visual Functions
    Lei Jiang, Wei Mao, Dang Ding, Xianyuan Yang, Fangfang Yan, Chang-Bing Huang
    2023, 31 (suppl.):  25-25. 
    PURPOSE: Although a number of studies have demonstrated that visual function is regulated by the circadian rhythm, results varied significantly across studies. The current study examined the effect of circadian rhythm on a variety of visual functions in a group of subjects with consistent chronotype, aiming to tackle the discrepancy in the literature.
METHODS: Visual contrast detection, Vernier offset discrimination, and color perception (measured with the Munsell 100-Hue Test) were adopted. To match the experimental setups of previous studies, the visual detection and discrimination tasks were tested at both central and peripheral retinal locations. Thirty-four participants with an intermediate chronotype were allocated into two groups and completed two test sessions in different temporal orders: morning to evening, and evening to morning.
RESULTS: Indexed by the difference in threshold change between the two test sessions, we observed a significant decrease in contrast detection threshold in both central and peripheral visual fields when individuals took the tests in the morning-to-evening order and, in contrast, a reverse effect (i.e., a threshold increase) in the evening-to-morning order. For the Vernier offset discrimination task in the peripheral visual field, the average threshold change exhibited the opposite pattern, with individuals who took the evening-to-morning order showing a significant reduction. For color perception, there was no significant difference in threshold changes between the two testing orders.
    CONCLUSIONS: Visual detection ability was found to be better during nighttime, while visual discrimination ability was superior in the morning. No diurnal variation was detected in color perception within the scope of this study. Our results indicated that the regulatory influence of circadian rhythm on visual perception is function dependent.
Identifying Critical Kinematic Features of Animate Motion and Their Contribution to Animacy Perception
    Yifei Han, Wenhao Han, Liang Li, Tao Zhang, Yizheng Wang
    2023, 31 (suppl.):  26-26. 
    PURPOSE: Over the course of evolution, animals have developed particular motion patterns, including relative motion (i.e., biological motion) and common motion (i.e., animate motion). Meanwhile, humans and non-human animals have developed the ability to identify these patterns, called animacy perception. However, since previous studies mainly used synthetic paths, it remains unclear what the nature of animate motion and animacy perception is and the relationship between them.
    METHODS: We proposed a new method to obtain actual animate motion by extracting the motion of the objects’ centroid (i.e., the center of gravity) using visual tracking algorithms, and we built a dataset containing both actual animate (birds’) and inanimate (drones’) motion. Then, we systematically compared the difference between animate and inanimate motion in both motion time-domain and frequency-domain features. Moreover, we evaluated how these motion features induced the animacy perception of human observers through psychophysical experiments. In each experiment, observers were asked to determine the animacy of a moving dot that moved along the centroid's trajectory in each trial.
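The time-domain features compared here can be computed directly from a tracked centroid trajectory; a minimal sketch is shown below (the smoothing-based fluctuation measure and window size are one plausible choice, not necessarily the authors' exact definitions):

```python
import numpy as np

def kinematic_features(xy, dt):
    """Mean speed, acceleration magnitude, angular speed, and trajectory
    fluctuation for a centroid trajectory xy of shape (T, 2) sampled every
    dt seconds."""
    v = np.gradient(xy, dt, axis=0)                       # velocity, shape (T, 2)
    speed = np.linalg.norm(v, axis=1)
    accel = np.linalg.norm(np.gradient(v, dt, axis=0), axis=1)
    heading = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))
    angular_speed = np.abs(np.gradient(heading, dt))
    # Fluctuation: deviation of the raw path from a moving-average smoothed path
    kernel = np.ones(9) / 9.0
    smooth = np.column_stack([np.convolve(xy[:, i], kernel, mode="same") for i in range(2)])
    fluctuation = np.linalg.norm(xy - smooth, axis=1)
    return speed.mean(), accel.mean(), angular_speed.mean(), fluctuation.mean()
```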
RESULTS: We found that, compared to drones, birds generally fly faster, and their centroid motion changes more dramatically in both speed magnitude (acceleration) and moving direction (angular speed). More interestingly, we found that the fluctuations in birds' trajectories reflected their periodic wing flapping. In the human behavior experiments, we found that animacy perception correlated positively with four motion features: speed, acceleration, angular speed, and trajectory fluctuations. Using stepwise regression analysis, we found that fluctuations and acceleration played a more important role than speed, while the effect of angular speed was not significant.
CONCLUSIONS: The defining features of animate motion are acceleration and angular speed, since the flexibility of living objects is higher than that of non-living objects; for flying animals, trajectory fluctuations might also be a defining feature. The positive relationship between animacy perception and key features of animate motion indicates that the human brain is sensitive to the critical features of animate motion. We speculate that the brain might have specialized neurons and neural circuits to process them.
    Depth Perception Arising from Monocular Texture Cue in Amblyopia
    Junli Yuan, Yijin Han, Jun Wang, Fang Hou
    2023, 31 (suppl.):  27-27. 
PURPOSE: Amblyopia is a developmental visual disorder that results from abnormal visual experience during critical periods of development. It is believed that abnormal visual experience disrupts the interocular connections in the visual cortex, resulting in impaired stereopsis in amblyopia. However, the human visual system can extract depth information from various monocular cues, including perspective, shadows, blur, and texture. Whether depth perception from monocular cues is normal in amblyopia is not clear. Moreover, the quality-of-life assessment of amblyopic patients is heavily based on their stereopsis, while ignoring their ability to process other, monocular depth cues; this might lead to an underestimate of their quality of life. Thus, it is necessary to investigate the ability to extract depth information from monocular cues in amblyopia. Given that texture is a representative monocular depth cue that generates significant depth perception, we investigated whether depth perception from a monocular texture cue is normal in amblyopia.
METHODS: Twenty-three amblyopic patients (24.5 ± 4.0 years, 13 males) and eighteen age-matched participants with normal vision and normal stereoscopic function (24.4 ± 1.3 years, 8 males) were recruited. The amblyopic eye (AE) and fellow eye (FE) of the amblyopic patients, and the non-dominant eye (NDE) of the normal participants, were tested. In Exp 1, slant stimuli textured with a Voronoi pattern were used, and the participants performed a discrimination task and a matching task. In the slant discrimination task, a 45° reference plane and a test plane were presented side by side on the display, and participants indicated which of the two planes was rotated further away; the discrimination threshold was measured using a 3-up-1-down staircase procedure. In the slant matching task, participants adjusted the angle of a straight line to match the angle of the slant they had just seen. Five slant angles (0°, 15°, 30°, 45°, and 60°) were used, and trials with different angles were randomly intermixed. In Exp 2, the same participants performed the discrimination and matching tasks with dihedral stimuli, and the discrimination threshold and perceived angle for the dihedral stimuli were recorded.
RESULTS: In Exp 1, there were significant differences in the slant discrimination thresholds between the AE and NDE (t(39) = 2.161, P = 0.037, Cohen's d = 0.699) and between the AE and FE (t(18) = 2.987, P = 0.008, Cohen's d = 0.68). However, there was no significant difference in the perceived slant angle among the AE, NDE and FE (F(2,57) = 2.134, P = 0.128). No significant correlation was found between the discrimination threshold and visual acuity. In Exp 2, the discrimination threshold of the AE group was higher than that of the NDE (t(39) = 2.092, P = 0.043, Cohen's d = 0.674) and that of the FE (t(18) = 2.642, P = 0.017, Cohen's d = 0.424), and the threshold correlated significantly with visual acuity (r = 0.48, P = 0.019). However, there were no significant differences in the perceived dihedral angle among the AE, NDE and FE (F(2,57) = 0.06, P = 0.942).
CONCLUSIONS: The amblyopic patients showed a normal ability to extract depth information from the texture cue. This work provides a basis for future studies on monocular depth perception in amblyopia and suggests that quality-of-life assessment should include monocular depth perception.
    Visual Beauty and Pleasure Experience are not the Same
    Wu Yi-Fan, Dang Ding, Jinmei Xiao, Qingshang Ma, Yan Fang-Fang, Huang Chang-Bing
    2023, 31 (suppl.):  28-28. 
PURPOSE: The relationship between beauty and pleasure is under debate. In the current study, we aimed to investigate whether there is a dissociation between the beauty and pleasure experience by measuring break time in a continuous flash suppression paradigm.
METHODS: Fifty-eight healthy subjects without long training experience in visual art participated. Two Mondrian masks, one above and one below the center of the screen, were presented to the dominant eye and refreshed every 100 ms; the target image was presented randomly above or below the center of the screen to the non-dominant eye. The target images covered four categories, i.e., beautiful-pleasant (B-P), beautiful-unpleasant (B-UnP), unbeautiful-pleasant (UnB-P), and unbeautiful-unpleasant (UnB-UnP), and were selected from a pilot study. There were 15 images in each category, for a total of 60 target images. The break time was recorded as the time at which the subject reported where the target image appeared after consciously seeing it.
RESULTS: The results revealed that 1) the break time for B-UnP images was significantly longer than that for B-P images, i.e., for beautiful images, unpleasant content increased the break time relative to pleasant content; and 2) the break time for B-UnP images was significantly longer than that for UnB-UnP images, i.e., for unpleasant images, unbeautiful content decreased the break time relative to beautiful content.
CONCLUSIONS: Our results indicate that the beauty and the pleasantness of an image affect break time in inconsistent ways, signifying a potential separation of beauty and pleasure perception.
    Music-induced Negative Emotion Shapes Human Visual Size Perception
    Bochun Yang, Lihong Chen
    2023, 31 (suppl.):  30-30. 
    PURPOSE: In our everyday life, we are constantly exposed to stimuli that elicit brief emotional reactions, such as sad news, threatening images, or melancholic music. Previous studies have found that changes in affective state produced by negative visual images can affect visual size perception. Here we investigated whether such emotion-cognition interaction was still observed when using negative music, and the causal role of dorsolateral prefrontal cortex (DLPFC) in this process.
METHODS: During each trial, participants first listened to 30 s of negative or neutral music and then performed an Ebbinghaus illusion task, in which a target circle surrounded by four large or small context circles and a comparison circle were presented simultaneously; participants adjusted the size of the comparison circle to match that of the target circle without time limit. To stimulate the bilateral DLPFC, the anodal and cathodal electrodes were positioned at F3 and F4 (i.e., left anode/right cathode), or vice versa (i.e., left cathode/right anode). A constant current of 1 mA lasting 15 minutes was initiated 5 minutes before the measurement of the illusion effect. The setup of the sham stimulation was identical to that of the real stimulation, except that the stimulator was turned on for only 60 seconds.
RESULTS: In comparison with neutral music, negative music significantly reduced the Ebbinghaus illusion effect. A similar pattern of results was observed for the sham stimulation. However, for the left anode/right cathode stimulation, the difference in illusion effect between the negative and neutral music conditions disappeared; for the left cathode/right anode stimulation, the opposite pattern was observed, i.e., negative music significantly increased the illusion effect relative to neutral music.
CONCLUSIONS: The results show that prior exposure to negative music can affect visual size perception, and that left anode/right cathode and left cathode/right anode stimulation of the prefrontal cortex can eliminate and reverse this emotional effect of music, respectively. The findings suggest that visual perception can be shaped by negative emotion transmitted from the auditory modality, supporting the causal role and hemispheric asymmetry of the prefrontal cortex in emotion-cognition interaction.
Rhythmic TMS over Human Right Parietal Cortex Strengthens Visual Size Illusions
    Xue Han, Lihong Chen
    2023, 31 (suppl.):  31-31. 
    PURPOSE: Rhythmic brain activity has been proposed to structure neural information processing, with rhythms of different frequencies playing distinct roles. Here we investigated whether short rhythmic bursts of left or right parietal transcranial magnetic stimulation (TMS) at beta frequency (20 Hz) can affect visual size perception, which is indexed by the magnitude of two classic visual size illusions (i.e., the Ebbinghaus and the Ponzo illusions).
METHODS: On each trial, rhythmic TMS was applied over the left or right superior parietal lobule (SPL) in a train of five pulses at beta frequency. Immediately after the last pulse of the stimulation train, a size illusion configuration was presented at the screen center. Participants were required to adjust the size of a comparison stimulus (i.e., a circle for the Ebbinghaus illusion and a bar for the Ponzo illusion) to match that of the target stimulus without time limit. The vertex was selected as a control site to account for disruption of behavioral performance due to non-specific TMS effects.
    RESULTS: Short rhythmic bursts of right-parietal TMS at beta frequency causally strengthened the magnitudes of both the Ebbinghaus and the Ponzo illusions relative to control stimulation, whereas left-parietal stimulation had a negligible effect on the illusion effects.
    CONCLUSIONS: These findings provide clear evidence that parietal beta rhythm is actively involved in shaping visual size perception, supporting the causal contribution of parietal cortex to the processing of visual size illusions, in a hemisphere-asymmetric manner.
Spatial and Temporal Contextual Modulation on Perceived Walking Direction
    Chang Chen, W. Paul Boyce, Colin J. Palmer, Colin W.G. Clifford
    2023, 31 (suppl.):  35-35. 
    PURPOSE: Contextual modulation is well described for many aspects of high-level vision (e.g., facial attractiveness) but is relatively unexplored for the perception of walking direction. In a recent study, we observed an effect of the temporal context on perceived walking direction - namely, a repulsive perceptual aftereffect following exposure to biological patterns of motion. Here, we aim to examine the spatial contextual modulation of walking direction by measuring the perceived direction of a target walker in the presence of two flanker walkers, one on each side.
    METHODS: Experiment 1 followed a within-subjects design. Participants (N = 30) completed a spatial context task by judging the walking direction of the target walker in thirteen different conditions: a walker alone in the centre; a walker with two flanking walkers either intact or scrambled at a flanker deviation of ±15°, ±30°, or ±45°. To compare spatial and temporal contextual effects within subjects, participants also completed an adaptation task in which they were asked to report whether the walking direction of a target point-light walker was to their left or right after adaptation to one of two walking directions of ±30°. In Experiment 2 (N= 40), we measured the tuning of spatial contextual modulation across a wide range of flanker deviation magnitudes ranging from 15° to 165° in 15° intervals.
RESULTS: We found the expected repulsive effects in the adaptation task but attractive effects in the spatial context task in Experiment 1. Results in Experiment 2 showed significant attractive effects across a wide range of flanker walking directions, with the peak effect at around 30°.
    CONCLUSIONS: This study extends our understanding of how spatial and temporal contextual modulation operate in high-level visual processing, and how this differs from contextual modulation in low-level vision.
    Color Saturation Drives Oscillatory Responses in V4
    Hetian Cao, Ye Liu, Zheyuan Chen, Yingfan Liu, Xiaotao Wang, Xiaohong Li, Yiliang Lu, Ian Andolina, Niall McLoughlin, Stewart Shipp, Wei Wang
    2023, 31 (suppl.):  38-38. 
PURPOSE: Our visual world is rich in color information. The perception of color is commonly described with reference to three dimensions: hue, saturation (or chroma), and value (or lightness), with hue referring to the peak of the color spectrum, saturation referring to the spread of the color spectrum, and value or lightness referring to its overall intensity. Previous psychophysical studies in humans have suggested that red (hue) stimuli and those with high saturation have a strong effect on arousal. Electrophysiological studies conducted in both human and non-human primate visual cortices have demonstrated a red dominance in neural responses compared to other colors, but without considering saturation levels. A more recent study in primate V1 found a significant effect of the background on responses. Since the gamma (γ) band is often associated with cognitive functions such as attention and stimulus awareness in V4, we hypothesized that it might be the saturation of a colored stimulus, rather than its hue, that drives the power of gamma-band activity in color perception.
METHODS: We used linear depth probes to record local field potentials (LFP) and multi-unit activity (MUA) in V4 while presenting isoluminant color stimuli both within receptive fields (RF) and as full-field stimuli. We calculated the power spectral density (PSD) of the LFP signals for six isoluminant hues (red, orange, green, cyan, blue, purple) at varying saturation levels (the color distance along the saturation axis of each hue was 0.02 ΔE in CIE Lu'v' color space, with three to nine levels depending on the gamut).
    RESULTS: We found that color patches of all hues induced significantly higher power in both the γ-low (30-50Hz) and γ-high (50-100Hz) bands at higher saturation levels. Furthermore, when comparing different chromatic stimuli at the same saturation level, we observed no specific hue that exhibited a greater γ-band power response.
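For reference, the band-power comparison can be sketched as follows (Welch PSD with generic settings on synthetic data; not necessarily the authors' exact spectral pipeline):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def gamma_band_power(lfp, fs):
    """Power in the gamma-low (30-50 Hz) and gamma-high (50-100 Hz) bands of a
    single LFP trace, estimated from the Welch power spectral density."""
    f, psd = welch(lfp, fs=fs, nperseg=int(fs))            # 1-s windows
    low = (f >= 30) & (f < 50)
    high = (f >= 50) & (f <= 100)
    return trapezoid(psd[low], f[low]), trapezoid(psd[high], f[high])

# Hypothetical example: 2 s of noise sampled at 1 kHz
rng = np.random.default_rng(0)
print(gamma_band_power(rng.normal(size=2000), fs=1000))
```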
CONCLUSIONS: High-saturation color patches of all hues, representing greater chromaticity, drive a significantly enhanced γ-band response. This increase in γ-band response does not depend on the hue of the colored stimulus but rather on its saturation.
    Separation of Beauty and Pleasure Experience: A Study of Affective Priming Effect
    Wu Yi-Fan, Zhenyu Zhang, Jinmei Xiao, Qingshang Ma, Yan Fang-Fang, Huang Chang-Bing
    2023, 31 (suppl.):  42-42. 
PURPOSE: The relationship between beauty and pleasure is a significant aspect of the aesthetic mechanism, yet whether beauty and pleasure are separable in visual aesthetic experience is still unclear. In the current study, we attempted to investigate whether there is a dissociation between the beauty and pleasure experience using an affective priming paradigm.
METHODS: Sixty healthy subjects without long training in visual art participated. The stimuli consisted of priming images and target images. Priming images were selected from the International Affective Picture System (IAPS, Lang et al., 2001). Target images of four categories, i.e., beautiful-pleasant (B-P), beautiful-unpleasant (B-UnP), unbeautiful-pleasant (UnB-P), and unbeautiful-unpleasant (UnB-UnP), were selected from a pilot study in which 73 subjects rated images along the dimensions of beauty and pleasure; there were 60 target images in total, with 15 images per category. In a typical trial, after presentation of the priming image for 200 ms, subjects rated the target image on the dimensions of beauty and pleasure by moving and clicking the mouse.
RESULTS: We found that when the target images were of the pleasant type, there was a significant difference in the pleasure ratings of the target stimuli between the positive and negative affective priming conditions, but no significant difference in the beauty ratings. When the target images were of the unpleasant type, there was no significant difference between the two priming conditions.
CONCLUSIONS: The affective priming manipulation influenced only the evaluation of pleasure, not the evaluation of beauty, which may reflect a potential separation of the beauty and pleasure experience.
    Cognitive Function in Children with Strabismus
    Yan Yang, Dingping Yang, Xinping Yu
    2023, 31 (suppl.):  44-44. 
    PURPOSE: To evaluate the cognitive function of children with strabismus and investigate the influence of main clinical indexes of strabismus on cognitive function in children.
    METHODS: Prospective cross-sectional study.
    RESULTS: A total of 149 participants aged 4 to 10 years were enrolled, including 83 patients with exotropia (exotropia group), 30 patients with esotropia (esotropia group), and 36 normal individuals (normal group). Cognitive function was assessed using the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) and the Wechsler Preschool and Primary Scale of Intelligence-Fourth Edition (WPPSI-IV), including the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), Processing Speed Index (PSI), and Full-Scale Intelligence Quotient (FSIQ). The FSIQ, VCI, PRI, and PSI scores of both exotropia and esotropia groups were significantly lower than those of the normal group (P < 0.001, P < 0.001, P < 0.001, P = 0.001), while the WMI score did not differ significantly between the two groups (P = 0.144). However, there was no significant difference between the exotropia and esotropia groups. Spearman correlation analysis showed that the FSIQ, VCI, and PRI scores were negatively correlated with the duration of symptoms (r = -0.233, P = 0.01; r = -0.241, P = 0.01; r = -0.219, P = 0.02), and the FSIQ, VCI, PRI, and PSI scores were negatively correlated with near stereoacuity (r = -0.280, P < 0.001; r = -0.226, P = 0.01; r = -0.317, P < 0.001; r = -0.195, P = 0.01; r = -0.195, P = 0.02), far stereoacuity, and fusion for both near and far distances. The WMI score was only negatively correlated with fusion for near distances (r = -0.182, P = 0.03). However, there was no significant correlation between the type and deviation of strabismus and cognitive function in strabismus children.
    CONCLUSIONS: There are differences in cognitive function between children with strabismus and normal children. The cognitive functions of verbal comprehension, perceptual reasoning, and processing speed are impaired in children with strabismus, while working memory is not affected. The cognitive impairment in children with strabismus may be due to abnormal binocular visual function.
    Task Sets, but not Conflict Types, Determine the Domain-generality of Conflict Adaptation Effects
    Qian Qian, Jiawen Pan, Miao Song
    2023, 31 (suppl.):  45-45. 
    PURPOSE: Conflict adaptation effects refer to the reduction of interference effects in conflict tasks after a previous incongruent trial, compared with a previous congruent trial. The effects have been considered to reflect the adaptive cognitive control mechanisms. However, whether the generalization of the effects between different conditions depends on the similarity of task sets or conflict types is still under debate.
METHODS: Two very different tasks were tested and alternated between consecutive trials: a flanker task, which induces congruency sequence effects (CSE), and a spatial cueing task, which induces validity sequence effects (VSE). The two tasks thus involve very distinct conflict types; if the similarity of conflict types is the decisive factor, there should be no sequence effects between the two tasks. The task sets were also manipulated: the target stimuli were similar for the two tasks in Experiment 1 but not in Experiment 2. If the similarity of task sets is the decisive factor, there should be sequence effects between the two tasks only when the target stimuli are similar.
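For clarity, the sequence effects themselves can be computed from trial-level reaction times as sketched below (a generic definition of the congruency/validity sequence effect; the function and variable names are illustrative):

```python
import numpy as np

def sequence_effect(rt, prev_congruent, curr_congruent):
    """Congruency (or validity) sequence effect: the interference effect
    (incongruent minus congruent RT) after congruent trials minus the
    interference effect after incongruent trials. A positive value indicates
    conflict adaptation."""
    rt = np.asarray(rt, dtype=float)
    prev_c = np.asarray(prev_congruent, dtype=bool)
    curr_c = np.asarray(curr_congruent, dtype=bool)

    def interference(after_congruent):
        sel = prev_c == after_congruent
        return rt[sel & ~curr_c].mean() - rt[sel & curr_c].mean()

    return interference(True) - interference(False)
```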
RESULTS: Domain-general cognitive control was found only from previous flanker trials to current cueing trials, when the task sets of the two tasks were very similar (Experiment 1). In addition, the VSE between trial n-2 and trial n was eliminated by an intervening flanker trial, whereas the CSE between trial n-2 and trial n remained significant even with an intervening cueing trial.
CONCLUSIONS: Conflict adaptation effects can occur between very different tasks, ruling out conflict type as the decisive factor for the generalization of the effects. By comparison, the similarity of task sets decides whether the effects generalize between different tasks, indicating its significant influence on the generalization of conflict adaptation effects.
    The Cheerleader Effect in Multiple Social Groups
    Ruoying Zheng, Guomei Zhou
    2023, 31 (suppl.):  47-47. 
    Abstract ( 83 )  
    PURPOSE: In addition to facial physical properties, the social context in which a face is presented also influences its attractiveness. When a target face is presented in a group, its attractiveness may be higher (cheerleader effect) or lower (reverse cheerleader effect) than when it is presented alone. The visual system processes the group attractiveness, and the perceived attractiveness of the target face then shifts toward (assimilation effect) or away from (contrast effect) the group attractiveness, resulting in different degrees of the cheerleader effect. However, social groups in the real world are diverse, and a whole group can be split into different subgroups. Different social groups may carry different weights in group attractiveness, thus moderating the cheerleader effect. For example, observers' own-race faces capture attention faster than other-race faces and may therefore carry greater weight in the cheerleader effect. The current study aimed to explore how social context affects the attractiveness of the target face when multiple social groups are present.
    METHODS: We conducted three experiments with Chinese university students to explore this question. The whole face group was split into the ingroup and the outgroup of the target face. We manipulated the social context into four conditions: high-attractive ingroup and high-attractive outgroup (HIHO), high-attractive ingroup and low-attractive outgroup (HILO), low-attractive ingroup and high-attractive outgroup (LIHO), and low-attractive ingroup and low-attractive outgroup (LILO). The cheerleader effect was measured as the attractiveness increment of the target face in each context relative to when it was presented alone. Experiment 1 used a psychophysical method to estimate the point of subjective equality of attractiveness judgments for the target faces in each context, with race as the social group category; we used Black faces and White faces in Experiment 1A, and Asian faces and White faces in Experiment 1B. Experiments 2 and 3 asked participants to rate the facial attractiveness of the target face. Experiment 2 used race as the social group category, with Black faces and White faces in Experiment 2A, Asian faces and White faces in Experiment 2B, and Asian faces and Black faces in Experiment 2C. Experiment 3 attached nationality labels (China, Japan, Singapore) to Asian faces to create Chinese, Japanese, and Singaporean faces, and manipulated threat priming (Japan: threat; Singapore: non-threat). We used Japanese and Singaporean faces in Experiment 3A, Chinese and Singaporean faces in Experiment 3B, and Chinese and Japanese faces in Experiment 3C.
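    As a rough illustration of how a point of subjective equality (PSE) can be estimated from such attractiveness judgments, the following Python sketch fits a cumulative Gaussian to the proportion of "comparison more attractive" responses; the cumulative-Gaussian form, data values, and variable names are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, pse, sigma):
    # Probability of judging the comparison face as more attractive
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical data: comparison attractiveness levels (arbitrary units) and
# the proportion of "comparison more attractive" responses at each level.
levels = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
p_comparison = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])

(pse, sigma), _ = curve_fit(cumulative_gaussian, levels, p_comparison, p0=[0.0, 1.0])
print(f"PSE = {pse:.2f}, slope parameter sigma = {sigma:.2f}")
```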
    RESULTS: In Experiment 1, we found the cheerleader effect in LILO, showing a tendency toward the contrast effect. In Experiments 2 and 3, we constructed linear mixed models to examine the cheerleader effect in each context. In Experiment 2, the cheerleader effect in LILO was consistently larger than in HIHO. When Black faces were presented with Asian/White faces, the cheerleader effect in LIHO was larger than in HILO for Black target faces, and no such difference was observed for Asian or White target faces. When Asian and White faces were presented simultaneously, the cheerleader effect in LIHO was larger than in HILO for Asian target faces, with no such difference for White target faces. These results indicated a contrast effect with greater weight given to the ingroup of the target faces and the ingroup of the participants (i.e., Chinese faces). Experiment 3 replicated the finding of Experiment 2 that the cheerleader effect in LILO was consistently larger than in HIHO. Furthermore, when Japanese and Singaporean faces were presented simultaneously, there was no difference between HILO and LIHO; however, when Japanese/Singaporean faces were presented with Chinese faces, there was a tendency toward a larger cheerleader effect in HILO than in LIHO for Japanese and Singaporean target faces. These results implied a contrast effect with greater weight given to the participants' ingroup (i.e., Chinese faces). In addition, the attractiveness of target faces negatively predicted the cheerleader effect in both Experiments 2 and 3.
    CONCLUSIONS: The current study explored the cheerleader effect in multiple social groups and found that the attractiveness of target faces contrasted with that of the whole group. There was an ingroup bias of both the target face and the observers when social groups were categorized by attributes of the face itself (e.g., race), whereas only an ingroup bias of the observers emerged when social groups were manipulated by labels. We propose an observer-target-context model and suggest that the triadic relationship among the observer, the target face, and the context faces affects selective attention, social inference mechanisms, and related processes, thereby moderating the cheerleader effect.
    Attention Reorientation in 3D Space: Depth-based Statistical Learning Modulates Attention Capture
    Binglong Li, Jiehui Qian
    2023, 31 (suppl.):  48-48. 
    Abstract ( 70 )  
    PURPOSE: The ability to ignore distracting information is crucial for improving visual search. Recent studies have shown that selection history can bias attention and 2D locations that may contain salient distractors can be suppressed through statistical learning. However, depth information has been shown to function differently from 2D spatial information in various visual tasks. This study aimed to investigate whether statistical regularities based on depth information could influence attention capture in 3D settings.
    METHODS: We manipulated the probability with which the salient color distractor appeared at a given depth location (i.e., distance from the viewer) and tested two search modes. Experiment 1 employed the additional-singleton paradigm, in which participants searched for a shape singleton and ignored the salient distractor (singleton-detection mode). In Experiment 2, participants instead searched for a specific shape among four different shapes (feature-search mode). Experiment 3 used ordinal depth information (i.e., relative depth order) and replicated Experiment 2 to generalize our findings.
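    A minimal Python sketch of how such a probabilistic depth manipulation could be implemented when generating a trial sequence is shown below; the depth-plane labels, the 65% high-probability value, and the trial count are illustrative assumptions, not the parameters reported in the abstract.

```python
import random

def sample_distractor_depths(n_trials, depth_planes=('near', 'middle', 'far'),
                             high_prob_plane='near', high_prob=0.65, seed=1):
    """Assign a depth plane to the salient distractor on each trial, with one
    plane over-represented (all values here are illustrative)."""
    rng = random.Random(seed)
    others = [d for d in depth_planes if d != high_prob_plane]
    weights = [high_prob if d == high_prob_plane else (1 - high_prob) / len(others)
               for d in depth_planes]
    return rng.choices(depth_planes, weights=weights, k=n_trials)

depth_sequence = sample_distractor_depths(480)
```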
    RESULTS: For the depth location (or depth order) at which the salient distractor appeared with higher probability, we found significantly slower responses for target selection, indicating learned attentional suppression of that depth. This finding was replicated in the depth-order display. However, a larger suppression effect under the feature-search mode suggests that depth is processed differently from 2D location.
    CONCLUSIONS: Statistical learning shaped attentional selection away from the high-probability depth location or depth order of the distractor, suggesting that depth information can be learned implicitly. Nevertheless, depth location may be processed as a 'feature,' resulting in a larger suppression effect under the feature-search mode.
    Serial Repulsion of Biological Motion Emotion Perception
    Haoyuan Tan, Qianyu Zhang, Yijie Kuai
    2023, 31 (suppl.):  52-52. 
    Abstract ( 101 )  
    PURPOSE: Perceiving biological motion is a special kind of visual cognitive ability, and people can evaluate the emotions of other individuals from their body movements. However, movements rarely occur in isolation; the visual system must continuously process the biological motion information that enters the visual field sequentially, and the evaluation of action emotions may be influenced by past action information. The present study aimed to clarify (1) how past experience alters current biological motion emotion perception, and (2) what factors influence this alteration.
    METHODS: Forty participants continuously rated the emotion of point-light walker animation sequences. The rating error and the difference from past ratings were calculated for each evaluation, and a derivative-of-Gaussian function was fitted to obtain the amplitude of the effect of past evaluations on the current one. In addition, the autism spectrum quotient, empathy quotient, and vividness of movement imagery were measured for all participants using self-report questionnaires, and their correlations with the amplitude of the serial effect were examined.
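    For illustration, a minimal Python sketch of the derivative-of-Gaussian fit mentioned above is given below; the data are simulated and the parameterization follows a common convention in serial-dependence studies, so it should be read as an assumed form rather than the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def dog(delta, amplitude, width):
    """Derivative-of-Gaussian serial-bias curve: delta is the previous-minus-current
    stimulus difference, and the output is the predicted bias of the current rating.
    The constant scales the curve so that its peak equals 'amplitude'."""
    c = np.sqrt(2.0) / np.exp(-0.5)
    return amplitude * c * width * delta * np.exp(-(width * delta) ** 2)

# Simulated trials: a repulsive bias corresponds to a negative amplitude.
rng = np.random.default_rng(0)
delta_prev = rng.uniform(-2, 2, size=300)
rating_error = dog(delta_prev, -0.4, 1.2) + rng.normal(0, 0.3, size=300)

(amplitude, width), _ = curve_fit(dog, delta_prev, rating_error, p0=[-0.3, 1.0])
print(f"fitted amplitude = {amplitude:.2f} (negative values indicate repulsion)")
```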
    RESULTS: (1) The curve-fitting results indicate that current biological motion emotion evaluations systematically deviate from past evaluations, exhibiting a repulsive serial bias, and (2) the p-values and Bayes factors of the correlation analysis together indicate that the amplitude of the bias was significantly related to third-person movement imagery vividness, but not to autism spectrum quotient or empathy quotient.
    CONCLUSIONS: This study shows that there is a systematic serial repulsion effect for biological motion emotion perception, but that it belongs to general visual or movement perception processes, rather than social cognitive processes. This result enriches the theory of biological motion perception and provides a possible explanation for real-life action emotion recognition errors.
    Neuropsychological Evidence for Action-based Effects on Visual Size Perception
    Jian Xu, Lihong Chen
    2023, 31 (suppl.):  56-56. 
    Abstract ( 69 )  
    PURPOSE: Action and perception interact reciprocally in our daily life. Previous studies have found that action properties of a single object can affect visual perceptual processing. Here we investigated the modulation of action relations between two objects on visual size perception and its underlying neural mechanisms.
    METHODS: We investigated this action-based effect by varying the action relations between the surrounding inducers and the central target of the Ebbinghaus illusion. Specifically, the central target was always a ping-pong ball, and the surrounding inducers were four ping-pong bats or electric fans with their handles oriented left or right. On each trial, participants adjusted the size of a comparison circle to match the target circle without time limit. Immediately afterwards, they judged whether the surrounding inducers were ping-pong bats or electric fans.
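    As an illustration of how the size-illusion effect in such an adjustment task is commonly quantified, the following Python sketch expresses the adjusted comparison size as a percentage deviation from the physical target size; the convention, values, and variable names are assumptions, since the abstract does not state the exact index used.

```python
import numpy as np

def illusion_magnitude(adjusted_sizes, target_size):
    """Mean percentage deviation of the adjusted comparison size from the
    physical target size (an illustrative convention)."""
    adjusted_sizes = np.asarray(adjusted_sizes, dtype=float)
    return 100.0 * (adjusted_sizes.mean() - target_size) / target_size

# Hypothetical adjustments (in degrees of visual angle) for one condition:
print(illusion_magnitude([0.92, 0.95, 0.90, 0.93], target_size=1.0))  # about -7.5%
```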
    RESULTS: The behavioral results showed that when the surrounding inducers were larger than the central target, the size illusion effect was significantly stronger in the congruent condition (i.e., ping-pong bats and ping-pong ball) than in the incongruent condition (i.e., electric fans and ping-pong ball), irrespective of handle orientation. However, when the surrounding inducers were smaller than the central target, no significant modulation of action relations on the size illusion effect was observed. ROI analysis revealed that when the handles were oriented right, the congruent condition elicited greater activation in the left supramarginal gyrus (SMG) and insula than the incongruent condition, and the right-oriented ping-pong-bat condition induced larger activation in the left insula and fusiform gyrus than the left-oriented condition. Further DCM analysis showed that with handles oriented right, the congruent condition significantly strengthened the connection from SMG to insula, weakened the connection from SMG to fusiform gyrus, and decreased the inhibitory self-connection of the insula relative to the incongruent condition. The present study thus provides clear evidence that action relations between objects can affect visual size perception, relying on both the forward connection from SMG to insula and the feedback connection from SMG to fusiform gyrus.
    CONCLUSIONS: The findings confirm that action properties of objects shape visual perception, and suggest that the fronto-parietal network and its connections to occipital regions play a pivotal role in this process.
    Effects of Optical Input and Dynamical Constraints on Gait Parameters in Natural Walking
    Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan
    2023, 31 (suppl.):  66-66. 
    Abstract ( 111 )  
    PURPOSE: Walking individuals experience pressure and friction from the ground. These forces are determined by the physical properties of the supporting surfaces and modulate postural control, balance, and gait. Thus, walkers exhibit different kinematic patterns when walking on surfaces with different physical properties. In addition, optical input, which provides essential information about the structures and changes in the environment, also affects walking: a blindfolded walker typically shows increased postural sway, difficulties in step initiation and execution, and impaired balance control compared with normal-viewing counterparts. Perception researchers have studied whether these kinematic differences can be picked up by observers to differentiate, recognize, or identify walkers or walking events, with mixed results (e.g., Beardsworth & Buckner, 1981; Jokisch, Daum, & Troje, 2006; Troje & Westhoff, 2006). However, the magnitude of these kinematic differences is usually assumed rather than empirically measured before their perception is tested. We consider it important and necessary to measure the kinematic differences in action performance under various optical and dynamical conditions, which validates the perception studies. Using natural walking as an example, the current work recorded and compared gait and limb swing with or without optical input and on compliant or non-compliant surfaces to demonstrate the optical and dynamical effects on movement production.
    METHODS: Twenty participants (aged 19 to 26, 10 female) walked naturally while being recorded with a motion capture system (Nokov, Mars 2H) under two optical conditions, with or without vision (by blindfolding the walker), and two physical conditions, walking on a foam-padded surface with high compliance or on a concrete surface with low compliance. In each condition, every participant walked 8 times, each time for 3 cycles (a cycle was defined as the phase between two successive heel strikes of the right leg). From the right-leg kinematics, gait parameters (stride duration, stride length, walking speed, vertical oscillation, path straightness) were calculated.
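    The following Python sketch illustrates one way such gait parameters could be derived from a right-heel marker trajectory; the marker layout, sampling rate, and the simple heel-strike detection rule are assumptions for illustration, not the processing pipeline used in the study.

```python
import numpy as np

def gait_parameters(heel_xyz, fs=100.0):
    """Rough gait parameters from a right-heel marker trajectory
    (n_samples x 3 array in metres; fs = sampling rate in Hz).
    Heel strikes are approximated as low local minima of vertical position."""
    z = heel_xyz[:, 2]
    threshold = np.percentile(z, 10)
    strikes = np.array([i for i in range(1, len(z) - 1)
                        if z[i] < z[i - 1] and z[i] < z[i + 1] and z[i] < threshold])
    stride_durations = np.diff(strikes) / fs
    stride_lengths = np.linalg.norm(np.diff(heel_xyz[strikes, :2], axis=0), axis=1)
    path = heel_xyz[:, :2]
    travelled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return {
        'stride_duration_s': stride_durations.mean(),
        'stride_length_m': stride_lengths.mean(),
        'walking_speed_m_s': stride_lengths.mean() / stride_durations.mean(),
        'vertical_oscillation_m': z.max() - z.min(),
        'path_straightness': np.linalg.norm(path[-1] - path[0]) / travelled,
    }
```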
    RESULTS: First, the effect of optical input: when blindfolded, participants exhibited shorter stride length, longer stride duration, slower walking speed, and a less straight walking path. They also showed reduced vertical bouncing and a smaller range of motion of the head and joints, including the elbows, knees, and shoulders. Second, the effect of dynamical constraints: when walking on the foam-padded surface, compared with the concrete floor, walkers showed reduced vertical bouncing, lower walking speed, and a smaller range of motion of the head and joints, including the elbows and shoulders. There was an interaction between optical input and dynamical constraints: the effects of optical input on stride duration and path straightness were observed only in the high-compliance foam-padded condition, not in the concrete-floor condition.
    CONCLUSIONS: The kinematics of walking revealed the dynamical interaction between the walker and the environment and reflected the availability of optical input. These findings contribute to our understanding of the complex interplay between kinematics, optics, and dynamics during natural walking. They also justify scientific inquiries into whether observers can visually detect such differences to perceive the walker, the walking event, and the walking environment.
    Perceiving the Availability of Visual Feedback During Action Performance from PLD Videos
    Feifei Jiang, Yijing Mao, Xian Yang, Huiyuan Zhang, Jing Samantha Pan
    2023, 31 (suppl.):  67-67. 
    Abstract ( 150 )  
    PURPOSE: According to the Kinematic Specification of Dynamics (KSD) hypothesis, dynamics causes movements; movements exhibit kinematics in the world, which project into the optics to form the optic flow that underlies visual perception. In addition to dynamics, optical input during movement performance also affects kinematics. For example, when we walk with a heavy backpack, we may look different than when walking without one; when we walk with our eyes closed, we may look different than when our eyes are open. With this addition, we propose a revised KS(D+O) hypothesis, which emphasizes the influences of both the dynamical and optical input during action production on observable movement patterns, and hence on the optical information and visual event perception. To test this new hypothesis, we designed a visual search task in which observers looked at point-light displays of actors who performed actions with their eyes open or closed, and reported the presence or absence of an open-eyed or closed-eyed actor.
    METHODS: A total of 768 point-light display (PLD) videos of 4 actions (walking, running, jumping, and an adapted Y-balance) were recorded from the frontal and side views. While performing these actions, the actors' eyes were either open or closed. We used these PLDs in a visual search task with set sizes of 2, 3, 4, or 5. Twenty participants watched the stimuli and reported whether an open-eyed PLD actor was present or absent among closed-eyed PLD actors, or vice versa. Response accuracy and response times were recorded.
    RESULTS: Participants were able to differentiate whether optical feedback was available during action performance (mean correct = 54.8%, SD = 0.025, chance level = 50%). First, views of presentation (frontal vs. side) did not affect search performance (P < 0.001). Moreover, a repeated-measures ANOVA showed that set size, action type, and target presence/absence all affected accuracy: accuracy decreased as set size increased and was higher on target-present trials. The ANOVA also revealed several significant interactions, including target present/absent × set size, action type × set size, and target present/absent × search for eyes-open/closed target. Finally, a repeated-measures ANOVA showed that response time was affected by set size, action type, and target presence/absence; longer RTs were associated with larger set sizes and target-absent trials. The significant interaction on RT was target present/absent × set size.
    CONCLUSIONS: From the movement alone, observers were able to find one (among many) actor with their eyes open or closed. This supported our hypothesis that optical input during action performance affected kinematics and the distinction (with or without optical input) could be picked up by observers. Thus, the KSD hypothesis can be extended to include the optical component and, in general, what affects action is perceptible through the kinematics.
    Simple Action Alters Context-Dependent Visual Size Perception
    Haoyang Yu, Lihong Chen
    2023, 31 (suppl.):  71-71. 
    Abstract ( 52 )  
    PURPOSE: Action and perception interact reciprocally to shape human behavior. Recent studies have revealed an action effect, in which a simple action (i.e., key-press) towards a prime stimulus biases attention in a subsequent visual search in favor of objects that match the prime. Here we investigated whether the action effect was a generalized phenomenon that could affect context-dependent visual size perception and whether it took place at the early or late stage of visual processing.
    METHODS: Participants either made a key-press response to, or passively viewed, a circle prime whose color, size, and location matched the central circle of the subsequently presented Ebbinghaus configuration. Immediately afterwards, they performed a size-matching task in which a target circle surrounded by four large or small context circles and a comparison circle were presented simultaneously, and they adjusted the size of the comparison circle to match that of the target circle without time limit.
    RESULTS: The results showed that a prior key-press response to the circle prime significantly reduced the Ebbinghaus illusion effect compared to the passive viewing condition. Notably, the action effect persisted even when the central target and surrounding inducers of the Ebbinghaus configuration were presented to different eyes.
    CONCLUSIONS: These findings provide clear evidence that a prior action can exert a strong influence on context-dependent visual size perception, and it mainly affects the late visual processing stage.
    Separate Stores of Absolute and Relative Depth in VWM
    Kaiyue Wang, Jiehui Qian
    2023, 31 (suppl.):  73-73. 
    Abstract ( 70 )  
    PURPOSE: Depth is an essential visuospatial cue for normal everyday functioning. Nevertheless, most research on visual working memory (VWM) has been carried out in the absence of depth. Recently, several studies on VWM for depth have begun to reveal differences between absolute depth (metric distance) and relative depth (ordinal relations among depth planes). Building on this, the current study systematically investigated whether absolute depth and relative depth have the same capacity and whether they share the same characteristics during maintenance (i.e., the consolidation process and the engagement of attention).
    METHODS: In Experiment 1, we measured and compared the VWM capacity for absolute depth and for relative depth using a change detection task. Participants in the absolute-depth group detected whether the probe and the memory item were placed in the same depth plane, whereas participants in the relative-depth group detected whether a numeral probe correctly indicated the depth order. In Experiment 2, we examined the consolidation process by adding masks at 50, 200, 500, or 900 ms after the offset of the memory items. In Experiment 3, we tested whether attention is required for the maintenance of the two kinds of depth by inserting a distraction task into the memory task.
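    For reference, capacity in single-probe change detection tasks of this kind is often summarized with Cowan's K; the sketch below shows that convention with made-up numbers, since the abstract does not specify which performance index was used.

```python
def cowans_k(hit_rate: float, false_alarm_rate: float, set_size: int) -> float:
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: 80% hits and 20% false alarms at set size 4 -> K = 2.4
print(cowans_k(hit_rate=0.80, false_alarm_rate=0.20, set_size=4))
```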
    RESULTS: 1) Overall, memory performance for relative depth was better than that for absolute depth. 2) Neither fine nor coarse absolute depth survived perceptual interference from masks, whereas relative depth was consolidated well. 3) Attentional distraction affected the maintenance of coarse absolute depth but not that of relative depth.
    CONCLUSIONS: The results indicate that only relative depth forms a robust internal representation, suggesting that there are two distinct stores for depth. This implies that absolute depth and relative depth might be processed separately through the dorsal and ventral pathways, respectively, serving different goals (e.g., behavioral action vs. visual memory).
    Probing Spatiotemporal Neural Dynamics of Working Memory Reactivation
    Jiaqi Li, Ling Liu, Huan Luo
    2023, 31 (suppl.):  74-74. 
    Abstract ( 69 )  
    PURPOSE: Working memory (WM) relies on short-term neural plasticity (STP) and neural reactivations. Our previous work developed a bottom-up, behavioral "dynamic perturbation" approach to manipulate the recency effect in sequence WM. However, two questions remain unanswered. First, direct neural evidence for the dynamic perturbation approach is lacking. Second, the brain regions involved in WM reactivation during maintenance are also unknown.
    METHODS: We employed an impulse-response approach combined with magnetoencephalography (MEG) recordings to address these questions. Participants retained a sequence of two gratings in WM and later recalled their orientations. During retention, we presented flickering probes to apply the "dynamic perturbation" approach and a neutral impulse (PING stimulus) to examine neural reactivation profiles. We performed source localization analysis to identify the brain regions involved in WM reactivation.
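    One way such reactivation profiles are commonly quantified is by time-resolved decoding of the memorized items from the post-PING sensor data; the Python sketch below shows a generic cross-validated decoder of this kind, treating the memorized orientations as discrete classes for simplicity. The classifier, cross-validation scheme, and data layout are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(X, y, cv=5):
    """Time-resolved decoding accuracy.
    X: array of shape (trials, sensors, time points); y: label per trial.
    Returns mean cross-validated accuracy at each time point."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return np.array([cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
                     for t in range(X.shape[2])])
```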
    RESULTS: First, "dynamic perturbation" modified the multi-item neural reactivation profiles following the PING stimulus, offering direct neural evidence of memory manipulation. The neutral PING triggered a backward memory reactivation profile in the baseline condition, whereas the synchronous luminance condition showed a disrupted reactivation profile. Furthermore, source localization analysis demonstrated dissociated brain regions for the WM encoding and reactivation stages, with the frontoparietal region supporting encoding and the medial temporal lobe (MTL) supporting memory reactivation during retention.
    CONCLUSIONS: Our findings constitute novel neural evidence for the effectiveness of STP-based “dynamic perturbation” in manipulating WM. Importantly, WM encoding and reactivation engage different neural networks, i.e., WM information is retained in parietal and frontal regions and tends to be reactivated through the engagement of the hippocampus-related medial temporal cortex, implying an intertwined link between WM and episodic memory.
    Biological Motion Cues Modulate Visual Working Memory
    Suqi Huang, Yiping Ge, Li Wang, Yi Jiang
    2023, 31 (suppl.):  85-85. 
    Abstract ( 68 )  
    PURPOSE: Previous research has demonstrated that biological motion (BM) cues can induce a reflexive attentional orienting effect, a phenomenon referred to as social attention. However, it remains unknown whether BM cues can further affect higher-order cognitive processes, such as visual working memory (WM).
    METHODS: The present study aimed to probe this issue by adopting a modified central pre-cueing paradigm combined with a traditional WM change detection task. Specifically, the point-light BM stimuli were adopted as a non-predictive central cue. Participants were required to perform a change detection task immediately after viewing the central cue. We also adopted feet motion sequences as central cues to further investigate whether WM performance would be affected by local BM cues without global configuration.
    RESULTS: Results revealed a significant improvement in WM performance for the items appearing at the location cued by the walking direction of BM. The observed effect disappeared when the BM cues were shown inverted, or when the critical biological characteristics of the cues were removed. Crucially, this effect could be extended to upright feet motion cues without global configuration, reflecting the key role of local BM signals in modulating WM. More importantly, such BM-induced modulation effect was not observed with inanimate motion cues, although these cues can also elicit attentional effects.
    CONCLUSIONS: The current study suggests that the attentional effect induced by life motion signals can penetrate higher-order cognitive processes, and provides compelling evidence for the existence of a "life motion detector" in the human brain from the perspective of high-level cognitive function.
    ‘Pop-out’ of Fearful Face in Invisible Crowds: Nonconscious Attentional Capture Guides Gaze Behavior
    Yujie Chen, Ying Wang, Yi Jiang
    2023, 31 (suppl.):  88-88. 
    Abstract ( 85 )  
    PURPOSE: Automatic detection of potential threats in crowds is essential for survival. Our prior research has revealed that fearful faces "pop out" of crowds and capture attention even without awareness, and that such effects are pronounced in individuals with high trait anxiety. The current study further examines whether nonconscious attentional capture by fearful faces can modulate gaze behavior, and assesses the association of this effect with trait anxiety across individuals.
    METHODS: A fearful face within a crowd of neutral or fearful faces formed either a pop-out or a non-pop-out display. In the initial nonconscious phase, all faces were fully suppressed from visual awareness using the sandwich-masking technique. In the subsequent phase, the target face was unmasked while the crowds remained invisible. Participants' free gaze movements across the two phases were recorded and analyzed. The Spielberger Trait Anxiety Inventory was administered to divide participants into high- and low-trait-anxiety groups.
    RESULTS: Among individuals with high trait anxiety, a fully suppressed fearful face presented within an invisible pop-out display attracted the orienting of gaze even from the nonconscious phase, yielding a higher proportion of fixation on the fearful target area in the pop-out than in the non-pop-out display. Conversely, individuals with low trait anxiety showed gaze avoidance of the target, with less fixation on the pop-out target area compared with the non-pop-out condition.
    CONCLUSIONS: These findings provide compelling evidence that nonconscious fearful faces within invisible crowds can automatically guide attention and direct gaze orienting, and underscore the significant role of trait anxiety in shaping distinct gaze behavior.
    The Influence of Dynamic Attention in Working Memory on Feature Binding
    Yongyue Wang, Zhe Qu
    2023, 31 (suppl.):  91-91. 
    Abstract ( 72 )  
    PURPOSE: The world around us contains multiple objects, and each object consists of multiple visual features. Successful feature binding integrates discrete features into a holistic object representation, which means the establishment of visual object integrity. Attention has an impact on feature binding and object integrity, at both perception level and working memory level. At the memory level, most previous studies have used a dual-task paradigm, manipulating attentional resources rather than directly manipulating visual attentional conditions to observe the maintenance and extraction of binding features in working memory. Here, we attempt to explore the maintenance of feature binding and object integrity by manipulating dynamic attention in working memory directly.
    METHODS: A joint continuous reporting paradigm was adopted. A four-item display was briefly presented, and after a delay spatial cues appeared to manipulate dynamic attention in working memory. We designed three attentional conditions (hold, shift, and split) across two experiments (16 participants in Experiment 1 and 18 in Experiment 2). In the hold and shift attention blocks, participants attended to the location indicated by the first cue; if a second spatial cue appeared, they shifted their attention to the newly cued location. Hold and shift trials were intermixed within a block to ensure that participants had to attend to the first cue and could not simply wait for the second cue. In split attention blocks, two cues appeared simultaneously so that participants attended to both cued locations, and after a delay a third cue appeared at one of the two cued locations. In all conditions, participants made a joint continuous report of the color and orientation of the target (T) indicated by the last cue. Of the three nontarget items, the critical nontarget (N1) appeared adjacent to the target and was either initially cued in shift trials or simultaneously cued in split trials; N2 appeared at the other adjacent location, and N3 appeared at the location diagonal to the target. The stimuli and procedure in Experiment 2 were the same as in Experiment 1, except that the interval between the two spatial cues in shift trials was reduced from 1000 ms to 300 ms. Participants' reports of the color and orientation of the target were collected on each trial to examine whether errors were correlated or independent when recalling multiple features of the same object under the different spatial attention conditions. Error types included (a) correlated swaps (e.g., reporting both N1's color and N1's orientation, N1c-N1o) and (b) independent target errors in which only one target feature was reported, including unbound guesses (e.g., Tc-Uo) and illusory conjunctions (e.g., Tc-N1o).
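    The sketch below illustrates, in Python, one way a joint report could be classified into the response types listed above; the circular feature space, angular tolerance, and nearest-item rule are illustrative assumptions rather than the analysis actually used.

```python
import numpy as np

def circ_dist(a, b):
    """Smallest circular distance in degrees (features assumed to lie on a 0-360 circle)."""
    return np.abs((a - b + 180) % 360 - 180)

def classify_response(rep_color, rep_orient, items, tol=30.0):
    """items: dict mapping 'T', 'N1', 'N2', 'N3' to (color, orientation).
    Returns a coarse response label."""
    def match(value, feature_idx):
        best = min(items, key=lambda k: circ_dist(value, items[k][feature_idx]))
        return best if circ_dist(value, items[best][feature_idx]) < tol else None

    c, o = match(rep_color, 0), match(rep_orient, 1)
    if c == 'T' and o == 'T':
        return 'correlated target response'
    if c == o and c is not None:
        return 'correlated swap (e.g., N1c-N1o)'
    if 'T' in (c, o) and None in (c, o):
        return 'unbound guess (e.g., Tc-Uo)'
    if 'T' in (c, o):
        return 'illusory conjunction (e.g., Tc-N1o)'
    return 'other error'
```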
    RESULTS: The majority of responses reported both the color and the orientation of the target object (correlated target responses), and the error patterns varied with attention condition. In Experiment 1, compared with the hold condition, attention shifting resulted in both more feature errors (i.e., a lower probability of target responses) and lower precision of the color feature. Splitting attention also decreased the precision of the color feature but did not affect the probability of target responses. Analysis of error types showed that, compared with holding attention, shifts of attention were more likely to preserve object integrity or feature bindings (more correlated N1c-N1o swaps and fewer independent target errors). In all attention conditions, unbound guesses occurred significantly less often than illusory conjunctions, suggesting that participants were more likely to report a feature of another object than to guess a feature. In the shift trials of Experiment 1, the long processing of N1 (1000 ms) could cause loss of memory for the features of the real target; to avoid this, in Experiment 2 we reduced the SOA between the two spatial cues to 300 ms in shift trials. The shorter SOA did increase the accuracy of target reports in the shifting condition. Compared with the hold condition, shifting and splitting attention reduced the probability of target responses and the accuracy of both features in the joint-feature report. Analysis of error types showed that the errors made in the different dynamic attention conditions were similar, and object integrity was maintained to a similar extent. In all attention conditions, unbound guesses occurred significantly less often than illusory conjunctions.
    CONCLUSIONS: Dynamic attention in working memory affects feature binding and the maintenance of object integrity. Splitting attention degrades the integrity of objects maintained in working memory. Compared with attention splitting, attention shifting maintains the integrity of the object in working memory to a higher degree, especially when the shift occurs relatively late.
    Simultaneous or Switching? Electrophysiological Measures of the Mechanism During Multiple Object Searching in Real-world Scenes
    Mengxuan Sun, Qi Zhang
    2023, 31 (suppl.):  92-92. 
    Abstract ( 64 )  
    PURPOSE: Rapidly locating and identifying targets in complex environments is a fundamental ability for human survival. Attention templates are stored in visual working memory (VWM) to help guide attention in search tasks. In daily life, individuals often need to search for multiple targets simultaneously. Classic laboratory tasks have asked observers to search for a simple visual target (such as a colored letter or shape) in an array of distractors scattered randomly on a display to explore the mechanism of templates. This study used real-world scene images as stimuli to approximate real-world searches. The current research addresses two questions: first, whether two templates can be maintained simultaneously in working memory when two objects are searched for at the same time, and second, whether dual-target templates guide search in a simultaneous or a switching manner.
    METHODS: In Experiment 1, we used a dot-probe paradigm to investigate whether both target templates were maintained in VWM simultaneously during dual-target search. We measured the consistency effect (i.e., faster responses to a dot presented on the side of the cued object compared with the non-cued object) under single- and dual-target conditions separately, indicating attentional capture by template-matching stimuli. In Experiment 2, we used the ERP technique to further explore the underlying neural mechanism of how representations are maintained in working memory. We aimed to assess whether the load-dependent component of working memory storage, as measured by the contralateral delay activity (CDA), is influenced by the number of activated target-template representations during the search preparation phase of the visual search.
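    For illustration, the Python sketch below shows how a CDA amplitude is commonly computed as the contralateral-minus-ipsilateral posterior ERP difference in a delay-period window; the electrode pairs and time window here are generic assumptions, not the parameters of Experiment 2.

```python
import numpy as np

def cda_amplitude(erp, times, ch_names, cue_side,
                  pairs=(('PO7', 'PO8'), ('P7', 'P8'), ('O1', 'O2')),
                  window=(0.4, 1.0)):
    """CDA = mean (contralateral - ipsilateral) amplitude over posterior
    electrode pairs within a delay-period window.
    erp: channels x time array (volts); times: time vector in seconds;
    cue_side: 'left' or 'right'."""
    mask = (times >= window[0]) & (times <= window[1])
    diffs = []
    for left_ch, right_ch in pairs:
        left = erp[ch_names.index(left_ch), mask].mean()
        right = erp[ch_names.index(right_ch), mask].mean()
        contra, ipsi = (right, left) if cue_side == 'left' else (left, right)
        diffs.append(contra - ipsi)
    return float(np.mean(diffs))
```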
    RESULTS: Experiment 1 showed that both the single-target and dual-target template conditions exhibited a consistency effect, and the consistency effect for the dual-target template was not significantly smaller than that for the single-target template. This indicates that both attention templates were maintained in VWM to guide attention and search performance during dual-target search in real-world scenes. Experiment 2 showed no significant difference in the CDA component between the single- and dual-target template conditions, consistent with the idea that dual templates guide search in a switching manner, with only one representation activated for each search.
    CONCLUSIONS: In dual-target search tasks in real-world scenes, both attention templates can be maintained in VWM. Still, only one template is activated at a time, and the two templates alternate to guide attention in a switching pattern.
    Preferential Attentional Orienting to Animals Links with Autistic Traits
    Yang Geqing, Jiang Yi, Wang Ying
    2023, 31 (suppl.):  97-97. 
    Abstract ( 89 )  
    PURPOSE: Animacy, an attribute that distinguishes animals from non-living things, carries survival-relevant information and heralds social interaction. However, whether animal cues are prioritized in human attentional processes, and whether the attentional preference for animacy is associated with observers' autistic traits (a primary indicator of social abilities in the general population), remain controversial.
    METHODS: This study explored these issues using a classic cueing paradigm, in which the observers’ allocation of attention to pairs of animal and inanimate picture cues was measured based on their reaction times to the probes following these cues. Faster probe detection responses following the animal cues would indicate an attentional bias towards animacy. In addition, we varied the cue duration from 100 ms to 1000 ms to examine the time course of this attention effect. Participants’ autistic traits were measured using the Autism-Spectrum Quotient (AQ) questionnaire.
    RESULTS: We found a significant and enduring attentional bias for animals over inanimate cues in observers with relatively low, but not high, autistic traits. Moreover, there was a negative correlation between individual AQ scores and the attentional bias towards animals, especially at the early orienting stage. These results could not be simply explained by low-level visual differences between animal and inanimate pictures, as inverted or phase-scrambled picture cues did not yield a similar pattern.
    CONCLUSIONS: These findings highlight the significant status of animacy in selective attention and reveal the link between impaired orienting to animals and high autistic traits, further suggesting a broader scope of social attentional deficits in autistic disorders.
    Unconscious, but not Conscious, Gaze-triggered Social Attention Reflects the Autistic Traits in Adults and Children
    Fang Yang, Jinyu Tian, Peijun Yuan, Chunyan Liu, Xinyuan Zhang, Li Yang, Yi Jiang
    2023, 31 (suppl.):  98-98. 
    Abstract ( 73 )  
    PURPOSE: Typically developing (TD) individuals can readily orient attention towards others' gaze direction, a phenomenon known as social attention or joint attention. Here we examined whether this ability can operate without awareness of the gaze cues and how it is associated with autistic traits and autism spectrum disorder (ASD).
    METHODS: Using a combination of the gaze-cueing paradigm and the chromatic flicker fusion (CFF) technique, the present study compared the unconscious and the conscious gaze-triggered attentional effects in groups of adults and children with different autistic traits or ASD.
    RESULTS: The unconscious gaze cues in the context of an upright schematic face could trigger significant social attention in both TD adults and children. This effect was pronounced after 600 ms of cue presentation and vanished when the face was shown inverted. Notably, the ability to involuntarily respond to unconscious gaze cues was negatively correlated with autistic traits of TD individuals and substantially impaired in adults diagnosed with ASD and children with high autistic traits. More intriguingly, this pattern of association was not observed with the attentional effects induced by the conscious gaze cues.
    CONCLUSIONS: These findings suggest that the unconscious gaze-triggered attentional orienting, presumably reflecting the intrinsic social attention ability, is closely linked to individual autistic traits and even ASD. Moreover, they also highlight the functional distinction between consciousness and attention from the perspective of social cognition.
    The Occurrence of Attentional White Bear Is Not Influenced by the Probe Task
    Shirong Wu, Zhe Qu
    2023, 31 (suppl.):  99-99. 
    Abstract ( 82 )  
    PURPOSE: How do people allocate attention to upcoming distractors? Some studies have shown that individuals can proactively suppress the spatial locations or features of distractors (e.g., Gaspelin et al., 2015). On the contrary, other research has indicated that participants allocate more attention to expected distractors under certain conditions, even when they have prior knowledge of what the distractors look like (Tsal & Makovski, 2006; Moher & Egeth, 2012). This is known as the attentional white bear (AWB) phenomenon. In recent years, several studies have found the AWB effect even when distractors and targets were presented at different times (Chen et al., 2023; Makovski, 2019). However, because the AWB effect in these studies was usually examined with probe tasks that occurred with a certain probability (e.g., 25%), the observed AWB effect may have resulted from enhanced attention to the possible dot probes rather than to the distractors. To address this issue, this study investigated whether attention is enhanced prior to the presentation of irrelevant distractors when they are temporally separated from the target, especially when participants do not expect the probes at all.
    METHODS: We conducted three experiments. In Experiment 1, each participant completed two blocks of memory trials (using a change detection task). During the memory retention interval, an irrelevant distractor array was presented 800 ms after memory array offset in the distractor-present block, but not in the distractor-absent block. Additionally, in a small proportion (10%) of trials, participants responded to a probe dot displayed at the expected time of distractor appearance. By comparing participants' reaction times to the probe dots between the two blocks, the allocation of attention to the distractors could be inferred. To completely eliminate confounds caused by the presence of the dot probe, in Experiment 2 we replaced the dot-probe task with a surprise task. The experiment consisted of 48 trials, the first 43 of which were memory trials identical to those of Experiment 1. Participants were divided into two groups, one performing the memory task with distractors and the other without distractors. In the last five trials (surprise trials), an unexpected visual search task was presented at the time when distractors were supposed to appear, requiring participants to look for a stimulus that differed in size from the others. The impact of distractor expectation on attentional allocation could be examined by comparing the performance of the two groups in the surprise trials. Experiment 3 employed a similar surprise-task design to validate the results of Experiment 2, but replaced the surprise visual search with a memory task in which participants were asked to recall the location of a color that had been presented in the memory array on that trial.
    RESULTS: Experiment 1 revealed that participants responded significantly faster on the probe detection task in the distractor-present block (785 ms) than in the distractor-absent block (868 ms), t(28) = 3.222, P = 0.003, indicating stronger allocation of attention before the presentation of expected distractors. This replicated the AWB effect under a low probability of probe presence. In Experiment 2, participants in the distractor-present group had higher accuracy in the surprise trials (65.0%) than those in the distractor-absent group (30.0%), χ2(1) = 4.912, P = 0.027, suggesting that participants in the distractor-present group enhanced their attention prior to the distractors, leading to better performance in the surprise task. This confirms the AWB effect even when the probes are entirely irrelevant to the task. Experiment 3 showed the opposite pattern to Experiment 2, with lower accuracy in the distractor-present group (5.88%) than in the distractor-absent group (41.18%), χ2(1) = 5.885, P = 0.039, demonstrating that the distractor-present group allocated enhanced attention before the distractor display, which interfered with retrieving the memory items. Experiment 3 therefore further supports the stable existence of the AWB effect.
    CONCLUSIONS: By employing multiple experimental paradigms and conditions, this study demonstrated the stable existence of the AWB effect when irrelevant distractors were temporally separated from the target. When participants anticipate an upcoming distractor, they will allocate more attention to it, resulting in altered task performance at that moment even if they do not expect the task. These findings reveal how people prepare for irrelevant distractors, and provide new evidence for the AWB effect. The study also contributes to a deeper understanding of the underlying mechanisms governing attentional control over interfering information.
    Exploring the Effect of Averted Gaze Faces and Face-like Objects on Attentional Shifts in Adolescents with Autism-Like Traits
    Ziwei Chen, Mengxin Wen, Di Fu, Xun Liu
    2023, 31 (suppl.):  101-101. 
    Abstract ( 65 )  
    PURPOSE: Gaze is a representative and salient social cue. Individuals can perceive social attributes from averted gaze, generating attentional shifts. Individuals with Autism Spectrum Disorder (ASD) pay less attention to averted gaze and generate less attentional shifts. Like individuals with ASD, individuals with Autism-Like Traits (ALT) also show deficits in attentional shifts and cognitive flexibility. However, there is still a lack of empirical research to explore the differences in attentional shifts between typically developing individuals and individuals with ALT. In addition, by comparing the attentional shifts of two groups based on averted gaze faces and face-like objects, we can explore whether the difference in attentional shifts is due to the abnormal processing of faces.
    METHODS: Using the Autism Spectrum Quotient (AQ), we divided participants into high and low AQ groups. We used averted-gaze faces and face-like objects as the cueing stimuli in a gaze-cueing task. The cueing stimuli were presented upright and inverted in Experiments 1 and 2.
    RESULTS: The results showed that different from the low AQ group, the high AQ group generated strong cueing effects from the upright averted gaze faces and face-like objects. At the same time, the inverted cueing stimuli did affect the subsequent cueing effects of the individual. On the one hand, disrupting the global configuration in the inverted averted gaze faces would weaken the cueing effect generated by individuals with low AQ. On the other hand, disrupting the global configuration in the inverted face-like objects would weaken the cueing effect generated by individuals with high AQ.
    CONCLUSIONS: The present study shows the differences in attentional shifts induced by averted gaze faces and face-like objects between high and low AQ groups. Compared with the low AQ group, the high AQ group perceived the social attributes of the face-like objects through local processing enhancement, resulting in stronger attentional shifts. This result further indicates that individuals can generate attentional shifts by processing local features of face-like objects, similar to averted gaze. At the same time, the global configuration of face-like objects plays a facilitating role. This study provides a reference for the attentional shifts based on social attributes in individuals with ALT and provides new insights into the processing mechanism of face-like objects.
    The Neural Basis of Visual Working Memory of Real-World Object
    Wanru Li, Jia Yang, Pinglei Bao
    2023, 31 (suppl.):  102-102. 
    Abstract ( 104 )  
    PURPOSE: Whether visual working memory (VWM) is retained in the same brain areas involved in perception processing remains a subject of debate. However, studies examining this question have primarily concentrated on the representation of fundamental visual features, often neglecting the hierarchical and topological organization of the visual system. To address this gap in knowledge, we employed real-world objects as stimuli to systematically explore the representation of information in perception and working memory within the ventral visual hierarchy and association areas.
    METHODS & RESULTS: We assessed the representation of 20 objects across three VWM and four perception tasks using fMRI. In Experiment 1, twelve participants performed a retro-cue sequential VWM task in which they viewed two objects and were required to recall the cued object after a 10-second delay. We found that both the identity and the categorical information of the cued objects could be decoded from the lateral occipital complex (LOC) and intraparietal sulcus (IPS), but not from early visual areas. Moreover, representational similarity analysis (RSA) revealed a common representational pattern between the VWM and perceptual tasks only in the LOC. These results suggest that task-relevant high-level sensory areas are specifically recruited for VWM maintenance. Experiment 2 further examined the extent to which representational properties were shared between VWM and perception. Six participants took part in a retro-cue spatial VWM task in which they memorized a cued object from two objects presented separately in the left and right visual fields. This experiment reinforced the findings of Experiment 1 regarding the roles of the LOC and IPS. Interestingly, whereas a strong contralateral bias was confirmed in the perception task, this bias was significantly reduced in the VWM task, as evidenced by enhanced ipsilateral representation of the cued object. Furthermore, we conducted an ablation analysis excluding regions that exhibited similar response profiles for the two visual fields in a control perception task. The results revealed that more ipsilateral voxels were involved in representing the memory item than in the perception task, refuting the hypothesis that the ipsilateral representation in VWM was due to the large receptive fields of the LOC. In Experiment 3, to force participants to combine the identity of the memorized item with its corresponding location, we delayed the retro-cue to the end of the retention period. Persistent but significantly reduced involvement of the ipsilateral LOC was observed during the delay, suggesting that the representation of visual working memory can be flexibly modulated by task configuration.
    CONCLUSIONS: Our study underscores the essential role of both the LOC and IPS in maintaining real-world object representations in VWM, while the sensory-based object representations in high-level visual areas may go beyond the feedforward visual information flow during VWM. The enhanced ipsilateral representation of objects during VWM expands the activation of memory-specific content and could aid in the stable retention of memory content.
    Cortical-layer Interplay Affects Working Memory-Perception Interaction: Evidence from Working Memory Load Impairing Visual Detection
    Yuanxiu Zhao, Yang Guo, Wenmin Li, Yuxuan Luo, Qikai Zhang, Mowei Shen
    2023, 31 (suppl.):  103-103. 
    Abstract ( 97 )  
    PURPOSE: The interaction between working memory (WM) and perception has been explored extensively in cognitive psychology and neuroscience. However, the mechanisms of WM-perception interactions are not clear. We argue that the recent finding of laminar-specific circuitry in the visual cortex of the human brain, enabling efficient cortical-layer interplay during information processing, casts unique light on the exploration of the WM-perception interaction and inspires novel predictions. This study examined the cortical-layer interplay account of the WM-perception interaction by taking the phenomenon that visual WM load impairs visual detection (hereafter WM-iD) as an example.
    METHODS: Experiment 1 took advantage of the rational resource allocation of WM, testing the influence of the set size of the memory array on WM-iD. We required participants to memorize one, four, or eight colors. The cortical-layer interplay account predicts that visual detection performance stops decreasing once the allocated WM resource reaches a plateau, and that no detection drop occurs if the task involves only feedforward signals. That is, visual detection performance is impaired in the 4-color condition compared with the 1-color condition but remains stable between the 4-color and 8-color conditions. In contrast, the sensory load account predicts that visual detection performance gradually decreases as memory load increases. Experiment 2 tested the influence of the feedback signal of WM storage on WM-iD. We used the same parameters as in Experiment 1, but required participants to judge how many distinct colors appeared, such that only the feedforward signal was involved. Experiment 3 examined a novel prediction of the cortical-layer interplay account: visual WM load and visually presented verbal WM load affect visual detection similarly, contrary to the prediction of load theory. We required participants to memorize the colors (visual memory) or the letters (verbal memory) of one, six, or eight colored letters.
    RESULTS: In line with the cortical-layer interplay account, visual detection performance stopped decreasing when the allocated WM resource reached a plateau (Experiment 1), and no detection drop occurred when the task involved only feedforward signals (Experiment 2). In addition, visual WM load and visually presented verbal WM load affected visual detection similarly (Experiment 3).
    CONCLUSIONS: These findings support the cortical-layer interplay account of the mechanism underlying visual WM load impairing visual detection. Laminar-specific circuitry should therefore be considered a factor in future explorations of the WM-perception interaction.
    Hippocampal Deterioration and Frontal Compensation of Amnestic Mild Cognitive Impairment in Visual Short-term Memory
    Ye Xie, Tinghao Zhao, Wei Zhang, Yunxia Li, Yixuan Ku
    2023, 31 (suppl.):  105-105. 
    Abstract ( 73 )  
    PURPOSE: Visual short-term memory (VSTM) has been suggested to be a cognitive marker for early detection of Alzheimer's Disease (AD). However, the neural mechanism by which AD pathology affects VSTM remains underexplored. Filling this gap, the current study focused on the hippocampus and prefrontal cortex (PFC), two brain areas that play critical roles in VSTM and are vulnerable to AD pathology, and examined how functional deterioration of the hippocampus and the reorganization of its external neural connections relate to the VSTM deficit in individuals with amnestic mild cognitive impairment (aMCI).
    METHODS: VSTM was assessed with an adapted change detection task. In Experiment 1, structural atrophy in the medial temporal lobe (MTL) and PFC was identified, and the association between structural deficits and VSTM performance was tested by partial correlation analysis. Moreover, participants were divided into subgroups with different severities of hippocampal atrophy to test whether the reorganization of the VSTM-related neural mechanism was influenced by hippocampal deterioration. Furthermore, using these targeted areas as regions of interest (ROIs), the functional connectivity (FC) profile for each ROI was computed and compared between groups, and the FC-VSTM association was also tested by partial correlation analysis. Experiment 2 tested the association between structural alterations of hippocampal subfields and VSTM performance.
    RESULTS: Both experiments showed worse VSTM performance in aMCI subjects than in the normal control (NC) group. Experiment 1 revealed structural atrophy in the left medial temporal lobe (MTL) and the right frontal pole (FP) in the aMCI group. Whereas gray matter volume (GMV) of both areas was significantly positively correlated with VSTM performance in the NC group, VSTM performance in the aMCI group was significantly correlated with the GMV of the right FP but not with the GMV of the left MTL. Moreover, using the atrophic left MTL as a seed, its functional connectivity to a right FP area, which overlapped with the structurally atrophic frontal region, was significantly higher in aMCI than in NC. Furthermore, in aMCI patients with smaller left MTL volumes, the compensatory involvement of the right FP in performing VSTM tasks, as assessed by the brain-behavior correlation, was more prominent. Experiment 2 showed that the left granule cell molecular layer of the dentate gyrus (GC-ML-DG), molecular layer, subiculum, and hippocampal-amygdaloid transition region (HATA), as well as the bilateral presubiculum and fimbria, had significantly smaller GMV in the aMCI group than in the NC group. GMV of the left molecular layer, left subiculum, and left GC-ML-DG was significantly associated with the accuracy and capacity of the VSTM task in the NC group but not in the aMCI group. GMV of the right fimbria was significantly correlated with the RT of the VSTM task in the NC group but not in the aMCI group.
    CONCLUSIONS: The current study revealed an MTL-dysfunction and prefrontal-compensation mechanism for VSTM processing in aMCI. Hippocampal subfields related to information input and output may contribute to VSTM impairment by disrupting hippocampal-cortical communication, and the hyperconnectivity between MTL and PFC might reflect external compensation that maintains VSTM processing in aMCI. Such PFC compensation may vary along the progression toward AD. These findings also point to VSTM impairment as a potential neuropsychological indicator for early detection of preclinical AD.
    Related Articles | Metrics
    Direct Suppression and Thought Substitution Engage Dissociated Oscillatory Neural Mechanisms to Achieve Active Forgetting
    Suya Chen, Yanhong Wu, Jian Li, Huan Luo
    2023, 31 (suppl.):  108-108. 
    Abstract ( 69 )  
    PURPOSE: Active forgetting is crucial for emotion regulation and psychological well-being. It can be achieved via two strategies, direct suppression (DS) and thought substitution (TS), yet their underlying neural mechanisms remain elusive.
    METHODS: Here we recorded electroencephalography (EEG) from 49 human subjects while they were instructed to use the DS or TS strategy in different trials to inhibit previously memorized word associations.
    RESULTS: Behavioral results show that both DS and TS strategies efficiently disrupt memories, displaying gradual reductions in intrusive memories and decreased recall performance for both the trained and independent probe word cues. Most importantly, we demonstrate dissociated oscillatory neural mechanisms for the DS and TS strategies. First, DS elicits stronger sustained alpha-band (8-11 Hz) activity in the parietal region, whereas TS shows stronger theta-band (3-6 Hz) activity in the frontal region, indicating their respective inhibitory and excitatory characteristics. Second, the decrease in alpha-band power across blocks is accompanied by a similar decline in intrusive memories in the DS condition, suggesting that alpha-band inhibition may be required to facilitate forgetting. Third, theta-band power in the TS condition correlates with individual executive control function measured in an independent ANT task and also predicts subsequent forgetting.
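    A minimal sketch of the band-power readout behind the alpha- (8-11 Hz) and theta-band (3-6 Hz) contrasts above is given below. The sampling rate, epoch, and channel handling are illustrative assumptions; a real analysis would average over trials, channels, and subjects.
```python
# Minimal sketch of band-power estimation (assumed sampling rate and synthetic epoch).
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Mean power spectral density of a 1-D epoch within the [lo, hi] Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 500                                     # assumed EEG sampling rate (Hz)
epoch = np.random.randn(3 * fs)              # stand-in for one 3-s suppression-trial epoch

alpha_power = band_power(epoch, fs, 8, 11)   # parietal alpha, emphasized under DS
theta_power = band_power(epoch, fs, 3, 6)    # frontal theta, emphasized under TS
print(alpha_power, theta_power)
```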
    CONCLUSIONS: Taken together, we present new evidence for dissociable neural mechanisms underlying different active forgetting strategies. Whereas the DS strategy employs alpha-band inhibitory modulation to suppress intrusive memories, the TS strategy relies on theta-band frontal executive control activity to enable the formation of new memories and the replacement of old ones.
    Related Articles | Metrics
    Dissociable Effects of Transcranial Random Noise Stimulation (tRNS) on Early and Later Stages of Visual Motion Perceptual Learning
    Di Wu, Peng Huang, Pan Zhang
    2023, 31 (suppl.):  109-109. 
    Abstract ( 73 )  
    PURPOSE: The effect of transcranial random noise stimulation (tRNS) on visual perceptual learning (VPL) has only been investigated during early training sessions, and the influence of tRNS on later performance is unclear.
    METHODS: We engaged participants first in 8 days of training to reach a plateau (stage 1) and then in continued training for 3 days (stage 2). In the first group, tRNS was applied to visual areas of the brain while participants were trained on a coherent motion direction identification task over a period of 11 days (stage 1 + stage 2). In the second group, participants completed an 8-day training period without any stimulation to reach a plateau (stage 1); after that, they continued training for 3 days, during which tRNS was administered (stage 2). In the third group, participants completed the same training as the second group, but during stage 2, tRNS was replaced by sham stimulation. Coherence thresholds were measured three times: before training, after stage 1, and after stage 2.
    RESULTS: Compared with sham stimulation, tRNS did not improve coherence thresholds during the plateau period. The comparison of learning curves between the first and third groups showed that tRNS decreased thresholds in the early training stage but failed to improve plateau thresholds. For the second and third groups, tRNS did not further enhance plateau thresholds after the continued 3-day training period.
    CONCLUSIONS: In conclusion, tRNS facilitated VPL in the early stage, but its effect disappeared as training continued. This study contributes to a deeper understanding of the dissociable tRNS effects at distinct temporal stages, which may be due to the dynamic change in the brain regions involved over the time course of VPL.
    Related Articles | Metrics
    Dynamic Changes of V1 Plasticity after Associative Learning
    Yueguang Si, Wenxin Su, Zeyu Li, Biao Yan, Jiayi Zhang
    2023, 31 (suppl.):  110-110. 
    Abstract ( 66 )  
    PURPOSE: The primary visual cortex (V1) undergoes plastic changes following associative learning, resulting in specific responses to visual stimuli. Classical fear conditioning is a model of associative learning, and color vision is a crucial component of visual perception. However, it remains unclear how color-related fear learning causes plastic changes in mouse V1.
    METHODS: First, we established a fear conditioning model using blue light as the visual cue. We used two-photon calcium imaging to record light-evoked responses of the same neurons in layer II/III of V1 in anesthetized mice before and after learning. Using correlation-based graph theory analysis, we examined ensembles induced by blue light (cue) or green light (non-cue) before and after training. We employed the in vivo whole-cell patch clamp technique to investigate the synaptic mechanisms of visual information processing in mouse V1 neurons before and after learning. We used bulk RNA-seq to identify differentially expressed genes. Finally, we used patch-seq to analyze the relationship between gene expression and electrophysiology.
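    As a rough illustration of the correlation-based graph analysis mentioned above, the sketch below builds a graph from pairwise response correlations and reads out connection strength and clustering coefficients. The calcium traces and the edge threshold are invented assumptions, not the authors' parameters.
```python
# Minimal sketch of a correlation-based ensemble graph (synthetic traces, assumed threshold).
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
traces = rng.standard_normal((50, 1000))        # 50 neurons x 1000 frames (stand-in dF/F)

corr = np.corrcoef(traces)                      # pairwise correlations between neurons
np.fill_diagonal(corr, 0)
adjacency = np.where(corr > 0.2, corr, 0)       # keep only suprathreshold edges (assumed cutoff)

G = nx.from_numpy_array(adjacency)
strength = dict(G.degree(weight="weight"))      # summed edge weight per neuron
clustering = nx.clustering(G, weight="weight")  # weighted clustering coefficient per neuron
print(np.mean(list(strength.values())), np.mean(list(clustering.values())))
```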
    RESULTS: Our findings showed that fear conditioning resulted in a notable increase in connection strength solely within the blue-light ensembles. Within subgroups of these ensembles, newly recruited neurons exhibited a higher clustering coefficient and comparable relative degree and connection strength compared with stable neurons in the same ensemble. Through patch-clamp experiments, we observed that fear conditioning elevated the input resistance of V1 neurons and enhanced the postsynaptic membrane potential response to blue-light stimulation. During light-off periods, we detected a larger difference between the responses to blue and green light. Furthermore, we observed a decrease in the power of low-frequency signals following fear conditioning. We found differentially expressed genes before and after learning, such as Fkbp5 and Kcnj3. Notably, we observed a significant correlation between the expression of certain genes and neuronal function.
    CONCLUSIONS: Our findings indicate that fear learning induced plasticity in mice, specifically in response to blue-light stimulation, affecting V1 neurons at both the population and single-neuron levels as well as at the molecular level. Furthermore, we observed a high correlation between the electrophysiological features of V1 neurons after learning and their transcriptome expression.
    Related Articles | Metrics
    Metaplasticity in Short-Term Monocular Deprivation
    Yang Yang, Zhengbo Chen, Yongchun Cai
    2023, 31 (suppl.):  111-111. 
    Abstract ( 65 )  
    PURPOSE: In adults, depriving one eye for a short period of time increases its predominance in subsequent binocular vision, reflecting a form of homeostatic plasticity. Physiological and behavioral evidence suggests that this form of monocular deprivation does not alter monocular excitability, but rather alters binocular interactions by modulating inhibition between the eyes. Here, we investigated whether the plasticity in monocular deprivation could be modulated by changes in interocular suppression.
    METHODS: Before monocular deprivation, interocular suppression was manipulated by a modulational binocular rivalry procedure containing three contrast-change conditions (ramping up, ramping down, and a constant control) in the to-be-deprived eye, while the contrast in the other eye was held constant. Eye dominance change was measured by binocular rivalry at the beginning and the end of the whole experiment.
    RESULTS: Changes in monocular contrast in the to-be-deprived eye modulated the strength of interocular suppression, and preceding changes of monocular contrast attenuated the dominance increase of the deprived eye in monocular deprivation. Furthermore, the degree of interocular suppression change within modulational rivalry was positively correlated with the degree of subsequent plasticity attenuation in monocular deprivation.
    CONCLUSIONS: The plasticity induced by monocular deprivation is itself plastic and can be modulated through changes in interocular suppression. Our study further supports the role of interocular suppression in visual plasticity and provides a novel way of modulating plasticity.
    Related Articles | Metrics
    The Impact of Training on the Inner-outer Asymmetry in Crowding
    Yan-Ru Chen, Yu-Wei Zhang, Jun-Yun Zhang
    2023, 31 (suppl.):  112-112. 
    Abstract ( 62 )  
    PURPOSE: The inner-outer asymmetry, in which the outer flanker induces stronger crowding than the inner one, is a hallmark of visual crowding. The contribution of this asymmetry to the pattern of crowding errors (biased predominantly toward the flanker identities) and the role of perceptual learning in these errors remain unclear.
    METHODS: In a typical radial crowding display, twenty observers were asked to report the orientation of a target Gabor (7.5° eccentricity) flanked by an inner or outer Gabor along the horizontal meridian. Nine observers continued to train under the outer-flanker condition for four sessions.
    RESULTS: The outer-flanker condition induced stronger crowding, accompanied by assimilative errors toward the outer flanker for similar target/flanker elements, whereas the inner-flanker condition exhibited weaker crowding with no significant pattern of crowding errors. A population coding model showed that the flanker weights in the outer-flanker condition were significantly higher than those in the inner-flanker condition. Training significantly reduced the inner-outer crowding asymmetry and the flanker weights for the outer flanker. Learning effects were retained over 4-6 months. Individual variability in the appearance of crowding errors, the strength of the inner-outer asymmetry, and the training effects was evident.
    CONCLUSIONS: Different crowding mechanisms may be responsible for the asymmetric crowding effects induced by inner and outer flankers, with the outer flankers dominating the target's appearance more than the inner ones. Training reduces the inner-outer crowding asymmetry by reducing target-flanker confusion, and learning persists over months, suggesting that perceptual learning has the potential to improve visual performance by promoting neural plasticity.
    Related Articles | Metrics
    Feature Variability Determines Specificity and Transfer in Multi-orientation Feature Detection Learning
    Jun-Ping Zhu, Jun-Yun Zhang
    2023, 31 (suppl.):  113-113. 
    Abstract ( 75 )  
    PURPOSE: In typical visual perceptual learning (VPL), only a specific stimulus is practiced and learning is often specific to the trained feature. We previously demonstrated that multi-stimulus learning (e.g., the TPE procedure) has the potential to achieve generalization. However, it remains unclear whether feature variability plays a role in the generalization of multi-stimulus learning.
    METHODS: We adopted a feature detection task in which an oddly oriented target bar differed by 16° from the background bars. The stimulus onset asynchrony (SOA) threshold between the target and the mask was measured with a staircase. Observers were trained with four orientation search stimuli with either a 5° deviation (30°-35°-40°-45°) or a 45° deviation (30°-75°-120°-165°). The transfer of learning to the swapped target-background orientations was evaluated after five sessions of training.
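    The sketch below illustrates one common way an SOA threshold can be tracked with a staircase (a 3-down/1-up rule). The step size, starting SOA, stopping rule, and simulated observer are assumptions for illustration, not the authors' exact procedure.
```python
# Minimal sketch of a 3-down/1-up SOA staircase with a simulated observer (assumed parameters).
import numpy as np

rng = np.random.default_rng(2)

def observer_correct(soa_ms, threshold=80.0, slope=0.05):
    """Simulated observer: detection probability rises with SOA (logistic function)."""
    p = 1.0 / (1.0 + np.exp(-slope * (soa_ms - threshold)))
    return rng.random() < p

soa, step = 150.0, 10.0
correct_run, reversals, last_direction = 0, [], None
while len(reversals) < 8:
    if observer_correct(soa):
        correct_run += 1
        if correct_run == 3:                      # three correct in a row: make it harder
            if last_direction == +1:
                reversals.append(soa)
            soa, correct_run, last_direction = soa - step, 0, -1
    else:                                         # one error: make it easier
        if last_direction == -1:
            reversals.append(soa)
        soa, correct_run, last_direction = soa + step, 0, +1

print("estimated SOA threshold (ms):", np.mean(reversals[-6:]))
```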
    RESULTS: (1) Multi-stimulus training with a 5° deviation resulted in significant learning improvement, but learning failed to transfer to the swapped target-background orientations. (2) In contrast, training with a 45° deviation resulted in a significant learning transfer to swapped orientations. (3) A modified TPE procedure, in which observers were trained with four orientation search stimuli with a 5° deviation and simultaneously passively exposed to orientations with larger orientation variability (45° deviation), resulted in significant orientation learning generalization.
    CONCLUSIONS: Training with large feature variability enables generalization to untrained features by eliminating the “overfitting” of specific stimuli in typical VPL paradigms and endowing the visual system with more robustness toward changes in the stimulus material. Our results could motivate the development of efficient training paradigms in clinical settings.
    Related Articles | Metrics
    Serial Dependence in the Ensemble Perception of Facial Attractiveness and Facial Expression
    Da Wang, Zhihao Yang, Gaoxing Mei
    2023, 31 (suppl.):  114-114. 
    Abstract ( 74 )  
    PURPOSE: The visual system forms stable visual representations from noisy surroundings by virtue of serial dependence, a phenomenon in which current perception is biased toward recent visual experience. For example, a face is perceived as more attractive when a highly attractive face was previously presented. Although this phenomenon has been widely observed in visual perception, previous studies focused on single objects, and few have investigated serial dependence in ensemble perception. Ensemble coding reflects the visual system's ability to extract an average feature from a set of stimuli. Here we examined whether serial dependence occurs in the ensemble perception of facial attractiveness and facial expression.
    METHODS: Experiment 1 (N = 27) investigated whether serial dependence emerges in the ensemble perception of facial attractiveness. In each trial, a set of six face images was presented for 1000 ms, and participants rated the average facial attractiveness of the set on a visual analogue scale, where 0 denoted “extremely unattractive” and 10 “extremely attractive”. Three types of face sets (attractive, unattractive, and mixed) were included, each with 30 sets (90 trials in total). The attractive and unattractive faces were determined from pilot ratings of 116 female face images selected from the Oslo Faces database, collected from an additional 27 participants. Experiment 2 (N = 12) aimed to rule out the possibility that the serial dependence found in Experiment 1 originated from single faces within the set of six; the procedure was the same as in Experiment 1 except that face positions in two consecutive trials did not overlap. Experiment 3 (N = 12) examined whether serial dependence occurs between a single face and a set of faces in the ensemble perception of facial attractiveness; the procedure was the same as in Experiment 1 except that single faces and sets of six faces were presented in alternation. Experiment 4 (N = 12) further examined whether serial dependence occurs in the ensemble perception of an unstable facial attribute (i.e., facial emotional expression); the procedure was similar to Experiment 1 except that facial emotional expression rather than facial attractiveness was manipulated.
    RESULTS: Experiment 1 showed that serial dependence emerged in the ensemble perception of facial attractiveness. Experiment 2 demonstrated that serial dependence persisted even when face positions did not overlap across consecutive trials. Experiment 3 further showed that serial dependence in the ensemble perception of facial attractiveness also occurred when sets of faces and single faces were alternately presented. Experiment 4 extended these results from a stable facial feature (facial attractiveness) to an unstable one (facial expression), suggesting that serial dependence in ensemble perception also occurs for facial expression.
    CONCLUSIONS: Our results suggest that serial dependence occurs in the ensemble perception of facial attractiveness and facial expression. We conclude that the visual system may employ a serial dependence mechanism to integrate the perception of ensembles of faces, which can improve visual stability in complex environments.
    Related Articles | Metrics
    On Circadian Rhythm and Visual Perceptual Learning
    Lei Jiang, Dang Ding, Wei Mao, Xianyuan Yang, Fangfang Yan, Chang-Bing Huang
    2023, 31 (suppl.):  115-115. 
    Abstract ( 76 )  
    PURPOSE: The circadian rhythm regulates visual perception in both humans and animals. Although some studies have trained subjects at roughly the same time of day in their experimental protocols, likely reflecting researchers' concern about potential interference from the circadian rhythm, a systematic investigation of its impact on perceptual learning remains lacking.
    METHODS: In the current study, we adopted two classical visual tasks, contrast detection and Vernier offset discrimination, to investigate whether the circadian rhythm affects visual perceptual learning. Forty participants, all of intermediate chronotype, were divided into two groups and underwent six training sessions evenly allocated in two different temporal orders: three morning sessions followed by three evening sessions, or three evening sessions followed by three morning sessions.
    RESULTS: A multi-component perceptual learning model analysis (Yang et al., 2020, 2022) revealed that, for the contrast detection task, individuals in the evening-morning group exhibited lower initial thresholds than those in the morning-evening group, whereas for the Vernier offset discrimination task, individuals in the morning-evening group had significantly lower initial thresholds than those in the evening-morning group. Nevertheless, no significant between-group differences in learning speed or in other components of the accumulation of learning effects (e.g., overnight gain and relearning) were detected in either task.
    CONCLUSIONS: Our findings suggest that the circadian rhythm only influences subjects' initial performance at the beginning of perceptual learning, whereas the subsequent learning process is likely immune to it, signifying a dissociation between circadian rhythm and visual perceptual learning. Moreover, the influence of the circadian rhythm on initial performance is probably task-specific.
    Related Articles | Metrics
    Ocular Dominance Plasticity does not Exhibit Perceptual Deterioration
    Liying Zou, Chenyan Zhou, Jiawei Zhou, Seung Hyun Min
    2023, 31 (suppl.):  116-116. 
    Abstract ( 60 )  
    PURPOSE: We investigated whether short-term ocular dominance plasticity induced by monocular deprivation follows a similar dynamic to that in visual adaptation, specifically perceptual deterioration.
    METHODS: We patched the non-dominant eye of fifteen adults with normal or corrected-to-normal vision for two hours over seven consecutive days. A baseline measurement of the balance point (BP) before monocular deprivation and post-deprivation measurements of the BP were performed on the first, third, fifth, and seventh days. During the baseline session, each subject's BP was tested twice. After the deprivation, the BP was tested six times, at 0, 3, 6, 12, 24, and 48 minutes, to track the shift in eye dominance. We measured the shift in eye dominance using a binocular orientation combination task, fitted a logistic psychometric function to the data, and estimated the BP from the fit.
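    A minimal sketch of estimating the balance point by fitting a logistic psychometric function follows; the interocular contrast ratios and response proportions below are invented for illustration only.
```python
# Minimal sketch of a logistic psychometric fit and balance-point estimate (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of trials judged toward one eye as a function of contrast ratio (dB)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

ratio_db = np.array([-9, -6, -3, 0, 3, 6, 9], dtype=float)      # 20*log10(contrast ratio)
p_resp = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97])   # illustrative proportions

(bp, slope), _ = curve_fit(logistic, ratio_db, p_resp, p0=[0.0, 1.0])
print(f"balance point = {bp:.2f} dB (ratio at which the two eyes contribute equally)")
```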
    RESULTS: To quantify the patching effect over time, we used the area under the curve (AUC). A one-sample t-test showed that the AUC differed significantly from baseline on all test days, and a one-way repeated-measures ANOVA showed no significant difference in AUC across test days.
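    The sketch below shows one way the AUC summary and one-sample t-test could be computed; the BP-shift values and time points are invented, and a real analysis would include one AUC per subject and per test day.
```python
# Minimal sketch of an AUC-of-BP-shift summary and one-sample t-test (invented values).
import numpy as np
from scipy import stats

t_min = np.array([0, 3, 6, 12, 24, 48])                 # minutes after patch removal
bp_shift = np.array([[1.8, 1.5, 1.2, 0.9, 0.5, 0.2],    # rows: subjects (illustrative)
                     [2.1, 1.7, 1.3, 1.0, 0.6, 0.1],
                     [1.5, 1.2, 1.0, 0.7, 0.4, 0.2]])

auc = np.trapz(bp_shift, x=t_min, axis=1)               # area under the BP-shift curve per subject
t_stat, p_val = stats.ttest_1samp(auc, popmean=0)       # does the patching effect differ from zero?
print(auc, t_stat, p_val)
```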
    CONCLUSIONS: Our results showed no perceptual deterioration phenomenon after repeated periods of short-term monocular deprivation, suggesting that plasticity induction after the deprivation does not share a common mechanism with contrast adaptation. In other words, this study shows that monocular deprivation is a promising protocol for treating visual disorders such as amblyopia because its beneficial effect on vision would not deteriorate after repeated induction.
    Related Articles | Metrics
    Attention Modulates Plasticity in Short-Term Monocular Deprivation
    Zhengbo Chen, Yongchun Cai
    2023, 31 (suppl.):  119-119. 
    Abstract ( 66 )  
    PURPOSE: In adults, depriving one eye for a short period of time increases its predominance in subsequent binocular vision, reflecting a form of homeostatic plasticity in binocular vision. It remains an open question whether attention modulates binocular visual plasticity occurring at such low levels of the perceptual system. To address this issue, we modulated the intensity of attention during monocular deprivation and examined how the plasticity induced by monocular deprivation is influenced by attention.
    METHODS: We employed a novel and potent way of monocular deprivation, monocular flash deprivation, during which a series of scenes were rapidly presented to the peripheral visual field of the non-deprived eye, while no stimuli were presented to the deprived eye. A dual-task paradigm was presented in the central visual field. Observers were instructed to attend to peripheral monocular stimuli or perform the central dual-task paradigm under different conditions, to modulate the intensity of attention in monocular deprivation. Eye dominance change was measured before and immediately after monocular deprivation by binocular rivalry.
    RESULTS: When observers attended to the peripheral monocular stimuli during monocular deprivation, the predominance of the deprived eye was greatly boosted in subsequent binocular rivalry. However, when attention was withdrawn from the monocular stimuli by the central dual-task paradigm, the deprivation effect was significantly reduced. Furthermore, fitting a non-linear regression model to the transient deprivation effects showed a marked reduction in the initial shift in predominance.
    CONCLUSIONS: Attention modulated the magnitude of the monocular deprivation effect, suggesting a modulatory role of attention in binocular visual plasticity even at such low levels of the perceptual system. Our study provides a novel perspective on the relation between attention and neural plasticity in the adult visual cortex.
    Related Articles | Metrics
    Visual Statistical Learning of Naturalistic Textures
    Siyuan Cheng, Hailin Ai, Yiran Ge, Yuanyi Luo, Nihong Chen
    2023, 31 (suppl.):  120-120. 
    Abstract ( 84 )  
    PURPOSE: The visual system continuously adapts to the statistical properties of the environment. Nonetheless, the mechanisms underlying the learning of naturalistic images, which contain the richest statistical dependencies, remain unclear. Here we utilized a computer vision approach to parameterize the co-occurrence statistics in naturalistic textures, and investigated the behavioral characteristics of learning these statistics.
    METHODS: We utilized a computational model (Portilla & Simoncelli, 2000) to capture the statistics embedded in naturalistic textures and to synthesize textures that are perceptually indistinguishable from the original ones. Subjects underwent training over weeks to discriminate a naturalistic texture from its spectrally matched noise, which differed only in higher-order statistics. Experiment 1 evaluated the contributions of different types of statistics to learning, including linear statistics that reflect periodicity and global structure, energy statistics that capture structures such as edges and corners, and phase statistics that represent luminance gradients from shading. Experiment 2 tested the statistical specificity and location specificity of learning, which is crucial for determining the neural locus of perceptual learning. Experiment 3 further explored the location specificity by comparing the full learning courses of transfer and original training.
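    As a rough illustration of the spectrally matched noise used as a comparison stimulus, the sketch below randomizes the Fourier phase of an image while preserving its amplitude spectrum, so the pair differs only in higher-order statistics. The input array is a stand-in for a grayscale texture, not the authors' stimuli, and the full Portilla-Simoncelli synthesis is not shown.
```python
# Minimal sketch of phase-scrambled, spectrally matched noise (placeholder texture).
import numpy as np

rng = np.random.default_rng(3)
texture = rng.random((256, 256))                       # placeholder grayscale texture

amplitude = np.abs(np.fft.fft2(texture))               # keep the original amplitude spectrum
scramble_phase = np.angle(np.fft.fft2(rng.standard_normal(texture.shape)))
noise = np.real(np.fft.ifft2(amplitude * np.exp(1j * scramble_phase)))

# the two images share an amplitude spectrum and differ only in higher-order statistics
max_diff = np.max(np.abs(np.abs(np.fft.fft2(noise)) - amplitude))
print("max amplitude-spectrum difference:", max_diff)
```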
    RESULTS: 1) Higher-order statistics played a critical role in naturalistic texture learning. 2) The learning effect was specific to the higher-order statistics and the retinal location. 3) Accelerated learning was found at an untrained location.
    CONCLUSIONS: By manipulating co-occurrence statistics in naturalistic textures, the present study built a link between perception and statistical learning. Our findings indicate a multi-stage statistical learning process that bridges the gap in learning-induced plasticity between the early and mid-level visual system.
    Related Articles | Metrics
    Ocular Dominance Plasticity in Second-Order Binocular Combination
    Wenjing Wang, Liying Zou, Yang Zheng, Jiawei Zhou, Seung Hyun Min
    2023, 31 (suppl.):  121-121. 
    Abstract ( 62 )  
    PURPOSE: Short-term monocular deprivation appears to enhance the binocular contribution of the deprived eye in human adults (adult ocular dominance plasticity), and this plasticity does not depend on test spatial frequency. However, current research is limited to luminance-modulated (first-order) stimuli, and little is known about whether ocular dominance plasticity exists in human adults for stimuli defined by modulations of contrast (second-order stimuli). To address this issue, we investigated ocular dominance plasticity in second-order binocular combination and its relationship with spatial frequency.
    METHODS: Sixteen healthy young adults (mean age ± SD: 24.56 ± 1.03 years) with normal vision participated in the study. The experiment consisted of three consecutive stages: a measurement of the balance point (BP) before patching, a patching stage (the dominant eye, determined by the hole-in-the-card test, was patched for 2 hours with a translucent patch), and measurements of the BP after patching (six sessions at 0, 3, 6, 12, 24, and 48 minutes after removal of the patch). We used a binocular orientation combination task with first- and second-order stimuli at 0.5, 2, and 4 cycles/degree (c/d). Each measurement included seven contrast ratios. We fitted a logistic psychometric function and estimated the main outcome, the BP, defined as the contrast ratio in dB (i.e., 20 × log10 of the contrast ratio) at which the perceived orientation was 0°. Changes in BP were used to quantify the impact of patching on ocular dominance.
    RESULTS: Short-term monocular deprivation caused a shift in ocular dominance for second-order stimuli. The AUC (area under the curve) summarizes ΔBP from 0 to 48 minutes after monocular deprivation: the more negative the AUC, the stronger the patching effect for the patched eye. There was no significant difference in AUC among spatial frequencies of 0.5, 2, and 4 c/d for second-order stimuli.
    CONCLUSIONS: These results indicate that ocular dominance plasticity exists for second-order stimuli and is independent of spatial frequency.
    Related Articles | Metrics
    A Shared Mechanism Between Facial Overweight and Facial Emotional Expressions: Behavioral and Neural Evidence from Cross-adaptation Paradigm
    Xu Luo, Yi Gao, Gaoxing Mei
    2023, 31 (suppl.):  123-123. 
    Abstract ( 62 )  
    PURPOSE: Overweight has become very common worldwide, with implications not only for physical health but also for psychosocial well-being. Previous studies have revealed a close association between weight judgments and facial expression judgments. However, these studies, mainly based on explicit judgment tasks, have yielded inconsistent results regarding the effect of facial overweight on facial expression judgments. Here we combined a cross-adaptation paradigm with event-related potential (ERP) recordings to examine whether facial overweight exerts a positive or negative influence on facial expression judgments in an implicit manner, and whether the two share partly common neural substrates.
    METHODS: The method of constant stimuli and top-up adaptation were employed in a psychophysical test, and EEG signals were recorded during the test. The adapting stimuli included four types: overweight faces, normal-weight faces, and two phase-scrambled stimuli corresponding to the overweight and normal-weight faces. The test stimuli included five emotional levels morphed from 100% happy to 100% angry (i.e., 83%/17%, 67%/33%, 50%/50%, 33%/67%, and 17%/83% happy/angry). In each trial, an adaptor was presented at the center of the screen for 4500 msec, followed by a random interval of 400-600 msec. A test face was then presented for 200 msec. Participants (N = 23) were instructed to judge whether the test stimulus was perceived as happy or angry by pressing the left or right arrow key on the keyboard.
    RESULTS: The behavioral results showed that participants were less likely to perceive the ambiguous morphed test faces as angry after adapting to overweight faces compared to normal-weight faces, indicating cross-adaptation between facial overweight and emotional expressions. Furthermore, the ERP results revealed potential neural correlates of this behavioral perceptual bias. The amplitude of the N170 difference wave elicited by test faces following the overweight adaptor was significantly smaller (less negative) than that following the normal-weight adaptor, indicating that prolonged prior exposure to overweight faces modulated the N170 difference wave for subsequent emotional test faces.
    CONCLUSIONS: Our results provide direct evidence for cross-adaptation between facial overweight and emotional expressions and its neural correlates, demonstrating that facial overweight and facial emotional expressions share at least partially common neural substrates. We conclude that facial overweight implicitly exerts a negative influence on facial expression judgments.
    Related Articles | Metrics
    Effects of Internal and External Feedback on Visual Perceptual Learning
    Lei Jiang, Xianyuan Yang, Wei Mao, Jia Yang, Fangfang Yan, Chang-Bing Huang
    2023, 31 (suppl.):  124-124. 
    Abstract ( 56 )  
    PURPOSE: Both external feedback (e.g., auditory cues) and internal feedback (e.g., introduction of an easy task) have been demonstrated to effectively enhance perceptual learning. However, whether the effects of these two types of feedback differ remains to be explored.
    METHODS: Six groups of participants were trained on a four-alternative forced-choice (4-AFC) grating orientation discrimination task; the groups differed in the type of external feedback (none, simple, or complete) and the presence of internal feedback (without internal feedback, both staircases converged to 35% accuracy; with internal feedback, one staircase converged to 35% and the other to 70%). Simple feedback, delivered by an auditory cue, indicated the correctness of the response; complete feedback included an auditory cue indicating correctness as well as a repetition of the displayed grating. The high-accuracy (70%) staircase in the mixed-staircase condition provided internal feedback for the low-accuracy (35%) staircase.
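    One common way to make a staircase converge at an arbitrary accuracy level, such as the 35% and 70% targets above, is a weighted up-down rule (after Kaernbach, 1991); the sketch below is an illustration with assumed step sizes, starting level, and a simulated 4-AFC observer, not the authors' exact procedure.
```python
# Minimal sketch of a weighted up-down staircase converging at a chosen accuracy (assumed parameters).
import numpy as np

rng = np.random.default_rng(4)

def run_staircase(p_target, n_trials=300, start=20.0, step_down=0.5):
    """Track an orientation-difference level whose accuracy converges to p_target."""
    step_up = step_down * p_target / (1.0 - p_target)   # equilibrium: p*down == (1-p)*up
    level, history = start, []
    for _ in range(n_trials):
        p_correct = 0.25 + 0.75 / (1.0 + np.exp(-(level - 10.0) / 3.0))  # simulated 4-AFC observer
        if rng.random() < p_correct:
            level = max(level - step_down, 0.1)          # correct response: make it harder
        else:
            level += step_up                             # incorrect response: make it easier
        history.append(level)
    return np.mean(history[-100:])                       # rough threshold estimate

print("35%-accuracy staircase converges near:", run_staircase(0.35))
print("70%-accuracy staircase converges near:", run_staircase(0.70))
```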
    RESULTS: Training decreased orientation discrimination thresholds in all groups. In the low-accuracy training condition (35%/35%), subjects in the simple external feedback group demonstrated a faster learning rate, while subjects in the complete external feedback group exhibited both a faster learning rate and a lower initial threshold, compared with subjects in the no-feedback group. Introducing internal feedback (i.e., the high-accuracy staircase) lowered the initial threshold but left the learning rate unchanged. These observations cannot be explained by increased or decreased occurrences of grating presentation in the external and internal feedback conditions.
    CONCLUSIONS: We found that external feedback can speed perceptual learning (simple feedback) and improve initial performance (complete feedback), while internal feedback only facilitates initial performance, indicating potential differences in the regulatory mechanisms of internal and external feedback on perceptual learning.
    Related Articles | Metrics
    Cross-Category Serial Dependence in Social Attention
    Zhihao Yang, Gaoxing Mei
    2023, 31 (suppl.):  128-128. 
    Abstract ( 75 )  
    PURPOSE: Serial dependence refers to an attractive perceptual bias whereby the perception of current visual features is pulled toward recently exposed features. For example, a direct eye gaze is more likely to be perceived as leftward if a face with an extreme leftward gaze was recently presented. Serial dependence helps us obtain a stable perception in a noisy environment. Previous research has demonstrated this effect in various aspects of perception, such as orientation, emotion, and facial attractiveness. However, it remains largely unknown whether serial dependence can occur between stimuli from different categories. The current study investigated whether cross-category serial dependence exists in social attention by measuring the perception of eye gaze direction and the walking direction of biological motion (BM) stimuli.
    METHODS: A discrimination task of eye gaze direction and the “inducer” paradigm were used in all three experiments. Experiment 1 (N = 25) examined whether task-irrelevant inducer stimuli conveying social attention (i.e., BM stimuli walking leftward or rightward) could influence judgments of eye gaze direction, that is, whether cross-category serial dependence between judgments of BM direction and eye gaze direction emerges. In each trial, the inducer stimulus (a BM stimulus) was presented at the center of the screen for 250 msec, followed by a 300 msec mask. Next, a probe stimulus, randomly selected from one of seven levels ranging from 40% leftward to 40% rightward, was presented at the center of the screen for 200 msec, followed by a 300 msec mask. When a question mark appeared at the center of the screen, participants pressed the ‘up’ or ‘down’ key to indicate which of the reference stimulus and the probe stimulus was perceived as more leftward or rightward. Experiment 2 (N = 12) investigated whether the results of Experiment 1 could be replicated when faces with a leftward or rightward eye gaze, instead of BM stimuli, served as inducers. Experiment 3 (N = 10) investigated whether cross-category serial dependence emerges when non-social stimuli (arrows pointing leftward or rightward) served as inducers. The procedures of Experiments 2 and 3 were identical to that of Experiment 1 except for the inducer stimuli.
    RESULTS: Experiment 1 showed that after seeing a leftward/rightward-walking BM stimulus, participants were more inclined to perceive the gaze direction of the reference stimulus as more leftward/rightward, and vice versa. These results indicate that BM stimuli exert a cross-category attractive effect on eye gaze stimuli, pulling the perceived gaze direction of the reference stimuli toward the direction of the BM stimuli. Similarly, in Experiment 2, where the inducer stimuli were replaced with leftward and rightward eye gaze stimuli, we also observed an attractive effect of the inducer on the reference stimuli. However, in Experiment 3, where the inducers were non-social stimuli (arrows pointing leftward or rightward), no serial dependence effect was found.
    CONCLUSIONS: In sum, different categories of social attention stimuli, but not non-social attention stimuli, can generate attractive effects. We conclude that cross-category serial dependence arises when the preceding and current stimuli are both social attention stimuli, but not between non-social and social attention stimuli.
    Related Articles | Metrics
    Representation- and Task-based Plasticity Decides Perceptual Learning and its Specificity and Transfer: A Computational Model
    Xiao Liu, Muyang Lyu, Cong Yu, Si Wu
    2023, 31 (suppl.):  129-129. 
    Abstract ( 80 )  
    PURPOSE: Perceptual learning improves sensory discrimination with practice, reflecting plasticity in the brain. Specificity has been regarded as a hallmark of perceptual learning, in that learned improvement is not maintained when the task condition (e.g., location or orientation) changes; however, new training paradigms such as double training can render originally specific perceptual learning completely transferable. In this study we aimed to build a unified neural computational model to explain learning specificity and transfer, in order to better understand the neural mechanisms underlying perceptual learning.
    METHODS: We propose a new computational framework built on the following assumptions. First, there are two types of plasticity: task-based plasticity (general learning for decision making) and representation-based plasticity (specific learning for extracting features). Second, perceptual learning is by default transferable (task-based plasticity dominates), but conventional training procedures induce overlearning (feature-based plasticity dominates) that makes learning specific. Third, double training removes this constraint through additional exposure to new stimulus features under the transfer stimulus conditions, so that learning can transfer to these features.
    RESULTS: Our model successfully replicates several perceptual learning outcomes in a Vernier learning task. With a small number of stimulus repetitions, task-based plasticity dominates perceptual learning and learning shows transferability. As training progresses with more repetitive trials, feature-based plasticity gradually increases, focusing primarily on repeatedly occurring features and ignoring others, resulting in specificity (Jeter et al., 2009). Double training introduces new repeatedly presented features (e.g., location or orientation), activating feature-based plasticity under the transfer condition to achieve complete learning transfer to the new stimulus location or orientation (Xiao et al., 2008; Zhang et al., 2010). Analyzing the network's neuronal activity reveals that the task-plasticity module extracts stimulus-condition-invariant information, whereas the feature-plasticity module enhances feature processing by improving the signal-to-noise ratio. The balance between task-based and feature-based learning is crucial for successful learning and for its specificity and transfer.
    CONCLUSIONS: This model provides a new way of interpreting perceptual learning, especially double training and the resulting learning transfer. Further work is needed to explain learning transfer between physically distinct stimuli, which suggests that perceptual learning may also operate at a conceptual level.
    Related Articles | Metrics
    Visual Training Enhances Visual Cortex Plasticity to Restore Vision from Amblyopia in Adult Mice
    Yiru Huang, Zitian Liu, Zidong Chen, Yanyan Wu, Minbin Yu
    2023, 31 (suppl.):  130-130. 
    Abstract ( 56 )  
    PURPOSE: Long-term monocular visual deprivation (MD) during early life causes a visual acuity deficit (amblyopia) and a progressive loss of neuronal responsiveness and selectivity (e.g., ocular dominance and orientation selectivity) in the primary visual cortex (V1) for the deprived eye. Recent studies of experience-dependent plasticity in the visual cortex suggest that visual perceptual learning may be a therapeutic strategy for recovery in older children and adults with amblyopia, and neural circuit plasticity may be a potential explanation. To explore the neural mechanisms underlying visual training, we investigated the effects of binocular visual training on the visual responses of V1 neurons and on vision recovery from early-onset, long-term MD.
    METHODS: We used the optomotor response to measure the visual acuity threshold of an amblyopic mouse model to assess vision recovery from early-onset, long-term MD with or without a brief period of binocular visual training. We also employed two-photon calcium imaging and chemogenetic techniques to investigate the visual responses (i.e., ODI and OSI) of individual excitatory neurons and parvalbumin-positive (PV) interneurons in V1.
    RESULTS: We found that binocular visual training improved the visual acuity threshold and increased the responses of excitatory neurons while decreasing the responses of PV interneurons in V1 of adult amblyopic mice. Moreover, activation of PV interneurons offset the vision-promoting effects of visual training, and disinhibition of PV interneurons enhanced these effects.
    CONCLUSIONS: We found that binocular visual training decreased the responses of PV interneurons, resulting in an attenuation of inhibition onto excitatory neurons and sustained cortical disinhibition that enhanced plasticity in the adult visual cortex. Our results demonstrate a neural plasticity-based mechanism for visual stimulation-mediated functional recovery from adult amblyopia.
    Related Articles | Metrics
    Restoring Antagonistic Center-surround Receptive Field in Blind Mice by a Silicon Photodiode-based Visual Prosthesis
    Tianyun Zhang, Ruyi Yang, Peng Zhao, Lizhu Li, Chen Peng, Gengfeng Zheng, Aihua Chen, Xing Sheng, Biao Yan
    2023, 31 (suppl.):  132-132. 
    Abstract ( 69 )  
    PURPOSE: Retinal degenerative disease (RDD) had become the leading cause of irreversible blindness by 2020. Retinal prosthetics are among the most fruitful approaches to combating such diseases, but bottom-up inhibitory signals, which are central to the antagonistic center-surround receptive field and to fine visual features, have rarely been considered in the development of retinal prosthetics. This study aims to generate artificial antagonistic center-surround receptive fields in the blind retina by aligning inhibitory and excitatory silicon photodiodes into receptive field-like patterns, and to evaluate the electrophysiological and behavioral performance of this design.
    METHODS: Subcellular structural changes at the neuro-material interface were quantified by IHC staining and 3-D cell reconstruction. The capabilities of both excitatory (Si np+) and inhibitory (Si pn+) silicon materials were tested by ex vivo retinal patch clamp. Excitatory and inhibitory cellular responses to photoelectrical stimulation were also recorded by ex vivo retinal patch clamp, with inhibitory responses recorded under off-BC pre-excitation via KA administration.
    RESULTS: RGC and RBC survival was not affected by implantation duration. Strengthened dendritic trunk-like structures and increased numbers of dendritic tips and bifurcations were observed in RBCs at the neuro-material interface. Both types of Si materials showed light-intensity-dependent photovoltaic responses and could elicit excitatory or inhibitory responses in RGCs.
    CONCLUSIONS: Preliminary results suggest minimal biocompatibility issues and possible RBC neurite proliferation upon prosthesis integration. The feasibility of restoring center-surround antagonism with combined excitatory and inhibitory materials was validated.
    Related Articles | Metrics
    Effects of Altered-reality Training on Interocular Disinhibition in Amblyopia
    Xinxin Du, Lijuan Liu, Xue Dong, Min Bao
    2023, 31 (suppl.):  133-133. 
    Abstract ( 80 )  
    PURPOSE: Training in which an altered-reality environment is viewed dichoptically has been found to reactivate ocular dominance plasticity in human adults, allowing improvement of vision in amblyopia. One suspected mechanism for this training effect is ocular dominance rebalancing through interocular disinhibition. Here, we investigated whether the training modulates the neural responses reflecting interocular inhibition.
    METHODS: Thirteen patients with amblyopia and 11 healthy controls participated in this study. Before and after six daily altered-reality training sessions, participants watched flickering video stimuli while their steady-state visual evoked potential (SSVEP) signals were recorded. We assessed the amplitude of the SSVEP response at intermodulation frequencies, a potential neural indicator of interocular suppression.
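    A minimal sketch of reading out SSVEP amplitude at an intermodulation frequency follows; the two flicker frequencies, sampling rate, and synthetic signal are assumptions for illustration, and a real analysis would average across epochs, channels, and subjects.
```python
# Minimal sketch of an intermodulation-frequency amplitude readout (synthetic SSVEP signal).
import numpy as np

fs, dur = 500, 10.0                       # assumed sampling rate (Hz) and epoch length (s)
f1, f2 = 6.0, 7.5                         # assumed dichoptic flicker frequencies (Hz)
t = np.arange(0, dur, 1 / fs)

# synthetic EEG: responses at f1 and f2 plus a small intermodulation component at f1 + f2
eeg = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
       + 0.3 * np.sin(2 * np.pi * (f1 + f2) * t) + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

im_freq = f1 + f2                          # one intermodulation frequency of interest
im_amp = spectrum[np.argmin(np.abs(freqs - im_freq))]
print(f"amplitude at {im_freq} Hz: {im_amp:.3f}")
```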
    RESULTS: Training weakened the intermodulation response only in the amblyopic group, which was in agreement with the hypothesis that the training reduced interocular suppression specific to amblyopia. Moreover, even one month after the training ended, we could still observe this neural training effect.
    CONCLUSIONS: These findings provide preliminary neural evidence in support of the disinhibition account of treating amblyopia. The results can be explained by the ocular opponency model.
    Related Articles | Metrics
    Reduced Monocular Luminance Promotes Fusion but not Mixed Perception in Amblyopia
    Shiqi Zhou, Chenyan Zhou, Liuqing Weng, Jiawei Zhou, Seung Hyun Min
    2023, 31 (suppl.):  134-134. 
    Abstract ( 85 )  
    PURPOSE: There are three perceptual states of binocular vision: fusion, suppression and double vision. Fusion and suppression facilitate the two eyes to combine their visual input into a single percept. If they fail, double vision can occur. The purpose of this study is to investigate whether reducing luminance in front of the fellow eye can facilitate fusion and reduce double vision in amblyopic observers and whether the effect of luminance reduction is related to the state of interocular interaction.
    METHODS: Normal adults and amblyopic adults with best-corrected vision participated in this study. A novel 4-AFC binocular rivalry paradigm was used to test their visual function, with four possible responses: right-tilt, left-tilt, fusion, and mixed perception (i.e., double vision). A neutral density (ND) filter was placed in front of the fellow eye of amblyopic observers and the dominant eye of normal observers. Subjects reported their percepts via continuous keypress. The duration of the keypress for each response before and after applying the ND filter was then compared to determine how luminance reduction affected the four perceptual durations. In addition, we adjusted the level of interocular suppression in amblyopic observers by reducing the contrast of the stimuli shown to the fellow eye, and further measured the sensitivity of ocular dominance change to monocular luminance reduction when the visual input of the two eyes was intact or imbalanced.
    RESULTS: With reduced luminance in the fellow eye, the perceptual imbalance between the two eyes was relieved in the amblyopic group, and the duration of fusion, rather than of mixed perception, increased significantly relative to baseline. Moreover, patients with lower levels of interocular suppression showed greater ocular dominance changes in response to monocular luminance reduction.
    CONCLUSIONS: Our findings demonstrate that the reduction of monocular luminance can not only bring interocular balance but also promote fusion in amblyopia. Meanwhile, the effect of monocular luminance attenuation on ocular dominance is related to observers’ state of interocular interaction.
    Related Articles | Metrics
    Chromatic Pupillometry Isolation and Evaluation of Intrinsically Photosensitive Retinal Ganglion Cell-Driven Pupillary Light Response in Patients with Retinitis Pigmentosa
    He Zhao, Hao Wang, Minfang Zhang, Chuanhuang Weng, Yong Liu, Zheng Qin Yin
    2023, 31 (suppl.):  135-135. 
    Abstract ( 62 )  
    PURPOSE: The pupil light response (PLR) is driven by rods, cones, and intrinsically photosensitive retinal ganglion cells (ipRGCs). In this study, we aimed to isolate ipRGC-driven pupil responses using chromatic pupillometry in blind patients with retinitis pigmentosa (RP) whose photoreceptors are severely damaged, to characterize the kinetics of this pupil response, and to determine the state of ipRGC function at different RP severities.
    METHODS: A total of 100 eyes from 67 blind patients with RP were included. Best-corrected visual acuity was less than 0.05 or the visual field radius was less than 10 degrees, meeting the legal blindness standard. Patients were divided into groups according to the severity of visual impairment: no light perception (NLP, 9 eyes), light perception (LP, 19 eyes), faint form perception (FFP, 34 eyes), or form perception (FP, 38 eyes). Eighteen healthy volunteers (18 eyes) were recruited as the control group. Pupil responses to rod-weighted (487 nm, -1 log cd/m2, 1 s), cone-weighted (630 nm, 2 log cd/m2, 1 s), and ipRGC-weighted (487 nm, 2 log cd/m2, 1 s) stimuli were recorded. After normalizing the pupil measurements, the following indicators were calculated and analyzed: the post-illumination pupil response (PIPR), maximal contraction velocity (MCV), contraction duration, and maximal dilation velocity (MDV).
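    The sketch below illustrates how indicators of this kind (PIPR, MCV, contraction duration, MDV) can be derived from a normalized pupil-diameter trace; the synthetic trace, stimulus timing, and PIPR window are assumptions for illustration, not the authors' exact definitions.
```python
# Minimal sketch of pupillometric indicators from a synthetic normalized pupil trace.
import numpy as np

fs = 30.0                                          # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                       # 20-s recording; light on at t = 1 s, off at t = 2 s
pupil = 1.0 - 0.4 * np.exp(-((t - 4.0) ** 2) / 8.0) * (t > 1)   # synthetic normalized pupil size

velocity = np.gradient(pupil, 1 / fs)              # rate of change of pupil size
mcv = velocity.min()                               # maximal contraction velocity (most negative)
mdv = velocity[t > t[pupil.argmin()]].max()        # maximal dilation velocity after the trough

contraction_duration = t[pupil.argmin()] - 1.0     # time from light onset to maximal constriction

pipr_window = (t > 8) & (t < 10)                   # e.g., 6-8 s after light offset (assumed window)
pipr = 1.0 - pupil[pipr_window].mean()             # sustained post-illumination constriction
print(mcv, mdv, contraction_duration, pipr)
```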
    RESULTS: We found a slow, sustained PLR to the ipRGC-weighted stimulus in most patients with NLP (8/9), although these patients had no detectable rod- or cone-driven PLR. The PIPR amplitude of the NLP group (0.230 ± 0.128) was not significantly different from that of the control group (0.156 ± 0.084). The MCV of the ipRGC-driven PLR was 0.269 ± 0.150 /s, the contraction duration was 2.562 ± 0.902 s, and the rapid dilation phase was absent, which differed significantly from the rod- and cone-driven responses. Comparing the ipRGC-driven PLR across RP stages, the PIPR amplitude of the four RP groups was not lower than that of the control group, and the PIPR amplitude of the LP group was even significantly increased. As the disease progressed, the MCV and MDV gradually decreased, the contraction duration was prolonged, and the kinetic indices gradually approached those of the ipRGC-driven PLR.
    CONCLUSIONS: In this study, the ipRGC-driven pupil response was isolated for the first time in RP patients without light perception, and its kinetic characteristics of slow contraction, sustained contraction, and the absence of a rapid dilation phase were demonstrated. Using chromatic pupillometry, we showed that the ipRGC-driven pupil response gradually becomes the main component of the PLR in blind RP patients, providing a new technique for evaluating residual visual cell function in end-stage RP and a convenient, objective tool for studying visual reconstruction with emerging optogenetic therapies.
    Related Articles | Metrics
    Visual Task-related Functional and Structural Magnetic Resonance Imaging for the Objective Quantitation of Visual Function in Patients with Advanced Retinitis Pigmentosa
    Hao Wang, Wangbin Ouyang, Jian Wang, Zhengqin Yin
    2023, 31 (suppl.):  136-136. 
    Abstract ( 63 )  
    PURPOSE: The objective quantitation of visual function in patients with advanced retinitis pigmentosa (RP) presents a difficult challenge due to the weak visual function of these patients. This study utilized magnetic resonance imaging (MRI) to assess the function and structure of the visual cortex (VC) in patients with RP and quantitatively categorize them.
    METHODS: Twenty-three patients with RP and ten healthy controls (HCs) were enrolled for MRI examinations. The patients were divided into form perception (FP) and no form perception (NFP) groups. Participants underwent structural MRI scans, and two visual-task functional MRI scans were performed using a white flash stimulus and a black-and-white checkerboard pattern stimulus. Eight regions of interest (ROIs) were studied. For structural MRI, gray matter volume (GMV) was compared across the ROIs. For the two visual tasks, the response intensity and functional connectivity (FC) of the ROIs were compared separately. Correlation analysis was performed to explore the relationships between the structural and functional parameters.
    RESULTS: In the structural analysis, the GMV in Brodmann areas 17, 18, and 19 of the FP and NFP groups was significantly lower than that of HCs. Regarding the functional data, the response intensity in the VC of both the FP and NFP groups was significantly lower than that in HCs. The response in Brodmann areas 17, 18, and 19 obtained using the pattern stimulus was significantly lower in the NFP group than in the FP group. For the FC comparison, the FP and NFP groups exhibited significantly lower values in several pathways than the HCs, and FC in the ipsilateral V1-contralateral V1 pathway in the flash task was significantly lower in the NFP group than in the FP group. A positive correlation between response intensity and GMV was observed in Brodmann areas 17, 18, and 19 in both flash and pattern visual tasks.
    CONCLUSIONS: Magnetic resonance imaging was an effective tool for objectively and quantitatively evaluating the visual function of patients with advanced RP. Response intensity and FC were effective parameters for distinguishing FP from NFP patients. A positive correlation between response intensity and GMV was observed in the VC.
    Related Articles | Metrics
    The Suppressive Basis of Ocular Dominance Changes Induced by Short-term Monocular Deprivation in Normal and Amblyopic Adults
    Ling Gong, Alexandre Reynaud, Robert F. Hess, Jiawei Zhou
    2023, 31 (suppl.):  137-137. 
    Abstract ( 67 )  
    PURPOSE: We aimed to study the effect of short-term monocular deprivation on suppressive interocular interactions in normal and amblyopic observers using a dichoptic masking paradigm.
    METHODS: Nine adults with anisometropic or mixed amblyopia and 10 control adults participated in our study. Contrast sensitivity for discriminating a dichoptically masked target Gabor was measured before and after 2 hours of monocular deprivation. The mask consisted of bandpass-filtered noise. Both the target and the mask were horizontally oriented at a spatial frequency of 1.31 cpd. Deprivation was achieved with an opaque patch over the amblyopic eye of amblyopes or the dominant eye of controls.
RESULTS: Results were similar in both controls and amblyopes. After 2 hours of monocular deprivation, the previously patched eye showed a significant increase in contrast sensitivity under dichoptic masking, suggesting a reduced suppressive effect from the non-patched eye. Meanwhile, the contrast sensitivity of the non-patched eye remained almost unchanged under dichoptic masking.
    CONCLUSIONS: We demonstrate that the ocular dominance changes induced by short-term monocular deprivation, namely the strengthening of the deprived eye’s contribution, are associated with the unilateral and asymmetric changes in suppressive interaction. The suppression from the non-deprived eye is reduced after short-term monocular deprivation. This provides a better understanding of how inverse patching (patching of the amblyopic eye) could, by reducing the suppressive drive from the normally sighted (non-deprived) eye, form the basis of a new treatment for the binocular deficit in amblyopia.
    Related Articles | Metrics
    Anisomyopia Exhibits a Greater Binocular Imbalance as a Function of Spatial Frequency
    Nan Jiang, Yang Zheng, Mengting Chen, Jiawei Zhou, Seung Hyun Min
    2023, 31 (suppl.):  138-138. 
    Abstract ( 71 )  
    PURPOSE: To investigate whether adults with anisomyopia exhibit binocular imbalance across a wide range of spatial frequencies and examine whether this imbalance can be rectified with optical correction.
METHODS: 15 anisomyopes (24 ± 0.9 years), 15 isomyopes (23.6 ± 1.3 years) and 14 emmetropes (22.5 ± 2.9 years) participated in this study. A balance point (BP), the interocular contrast ratio at which both eyes contribute equally to binocular fusion, was measured using a binocular orientation combination task at 0.5, 1, 2 and 4 cycles per degree (c/d). Binocular balance was quantified as the absolute value of BP on a log scale (|BP|). Using these data, we fitted a slope by simple linear regression to capture the dependence of binocular balance on spatial frequency. Moreover, we computed the area under the curve (AUC) as an index of imbalance across spatial frequencies. All subjects with anisomyopia and isomyopia were tested twice, with and without optical correction.
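A minimal sketch of how |BP|, the regression slope, and the AUC described above could be computed, assuming hypothetical balance points at the four spatial frequencies (regressing against log spatial frequency is one reasonable choice; the abstract does not specify the exact parameterization):

```python
# Minimal sketch (hypothetical data): quantifying binocular imbalance from
# balance points (BP, interocular contrast ratios) at four spatial frequencies.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

sfs = np.array([0.5, 1.0, 2.0, 4.0])          # spatial frequency (c/d)
bp  = np.array([1.1, 1.4, 2.0, 3.2])          # example balance points (ratios)

abs_log_bp = np.abs(np.log10(bp))             # |BP| on a log scale

# Slope of |BP| against log2 spatial frequency (simple linear regression)
slope, intercept, r, p, se = stats.linregress(np.log2(sfs), abs_log_bp)

# Area under the |BP|-vs-spatial-frequency curve as an overall imbalance index
auc = trapezoid(abs_log_bp, np.log2(sfs))

print(f"slope = {slope:.3f}, AUC = {auc:.3f}")
```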
RESULTS: |BP| of all groups significantly increased as a function of spatial frequency (P < 0.05). For the uncorrected groups, the |BP| values, slopes and AUCs of anisomyopes were all significantly different from those of the two other groups (P < 0.05). The interaction between correction status and spatial frequency was also significant (F(3,42) = 8.48, P < 0.05). The |BP| values, slopes and AUCs of uncorrected anisomyopes were significantly larger than those in the corrected state (P < 0.05). There was no significant difference between isomyopes and emmetropes.
CONCLUSIONS: Anisomyopes exhibit a clear binocular imbalance as a function of spatial frequency. Optical correction improves the imbalance, but the degree of improvement depends on spatial frequency. Our results suggest that the binocular imbalance in anisomyopia arises at the ocular level rather than at later sites along the visual pathway.
    Related Articles | Metrics
    Artificial Photoreceptors Based on Au@TiO2-x Nanowire Arrays Restore Visual Function in Blind Mice and Monkeys
    Ruyi Yang, Peng Zhao, Liyang Wang, Chenli Feng, Chen Peng, Xingdong Chen, Gengfeng Zheng, Chunhui Jiang, Yuanzhi Yuan, Biao Yan, Jiayi Zhang
    2023, 31 (suppl.):  139-139. 
    Abstract ( 76 )  
    PURPOSE: Photoreceptor degeneration is the main cause of blindness in retinal degenerative diseases such as retinitis pigmentosa and age-related macular degeneration. The objective of this study is to assess the performance of Au@TiO2-x nanowire arrays as artificial photoreceptors to restore primary visual function in blind mice and non-human primates with photoreceptor degeneration.
METHODS: To evaluate the improvement in visual temporal and spatial resolution in blind mice, Au@TiO2-x nanowire arrays were assessed using in vitro patch-clamp recordings and a mouse behavioral paradigm based on a two-alternative forced-choice (2AFC) task. Long-term continuous monitoring of light-induced responses in visual cortex neurons of blind mice after implantation of the nanowire arrays was conducted using two-photon calcium imaging. The safety, stability, and light-responsive capabilities of monkeys implanted with nanowire arrays were examined through fundus imaging, OCT imaging, and visually guided saccade experiments, respectively.
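As an illustrative aside (made-up numbers; the abstract does not state how behavioral thresholds were derived), a light-intensity threshold could be estimated from 2AFC performance by fitting a Weibull psychometric function:

```python
# Illustrative sketch with fabricated example data, not the study's results:
# estimate a detection threshold from 2AFC accuracy via a Weibull fit.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(intensity, alpha, beta):
    """2AFC Weibull: 0.5 guessing floor rising toward 1 with stimulus intensity."""
    return 0.5 + 0.5 * (1 - np.exp(-(intensity / alpha) ** beta))

light = np.array([5, 10, 15, 20, 30, 40], float)        # light intensity (uW/mm^2)
p_correct = np.array([0.52, 0.60, 0.74, 0.88, 0.97, 0.99])

(alpha, beta), _ = curve_fit(weibull_2afc, light, p_correct,
                             p0=[15, 2], bounds=(0, np.inf))
print(f"threshold (alpha) ~ {alpha:.1f} uW/mm^2, slope (beta) ~ {beta:.1f}")
```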
RESULTS: Blind mice with subretinal nanowire-array implants were capable of detecting drifting gratings and flashing objects at low light-intensity thresholds (15.70 - 18.09 μW·mm-2), with spatial and temporal resolutions of 77.5 μm and 3.92 Hz, respectively. Additionally, long-term in vivo calcium imaging suggested plastic changes in visual cortical circuits after nanowire implantation. The nanowire arrays exhibited good biocompatibility and stability for 54 weeks after subretinal implantation in monkeys, and the implanted monkeys were capable of detecting light stimuli 0.5° in diameter at 10 μW·mm-2 in the visually guided saccade experiments.
    CONCLUSIONS: Our findings demonstrate the potential of Au@TiO2-x nanowire array as artificial photoreceptors to ameliorate visual deficits in patients with photoreceptor degeneration.
    Related Articles | Metrics
    Investigating Brain Structural Correlates of Ocular Tracking in Preadolescent Children and Young Adults
    Wenjun Huang, Bao Hong, Jiahe Wu, Jing Chen, Li Li
    2023, 31 (suppl.):  141-141. 
    Abstract ( 87 )  
PURPOSE: Although extensive research in neurophysiology, neuropsychology, and neuroanatomy suggests the involvement of widespread brain regions in ocular tracking, there is still a lack of anatomical evidence for the development of ocular tracking abilities. Our study aims to fill this gap by investigating the relationship between performance in ocular tracking of an unpredictable target and gray matter volume in preadolescent children and young adults.
    METHODS: We used an 8-minute ocular-tracking task in which participants tracked the step-ramp motion of a cartoon character (0.64°H × 0.64°V) with its speed (16°/s-24°/s) and direction (2°-358°) randomly varied from trial to trial. A total of 81 children aged 8-9 years (47 females and 34 males) and 77 adults aged 18-30 years (43 females and 34 males) completed the ocular-tracking task. Among them, 52 children (34 females and 18 males) and 72 adults (42 females and 30 males) had valid structural MRI data.
For the ocular-tracking task, we computed 12 oculometric measures to assess different aspects of ocular-tracking performance. We also combined the 12 oculometric measures to compute an ocular-tracking performance index that indicates overall tracking ability. For the structural MRI data, we first obtained cortical gray matter volume using the Desikan-Killiany atlas, with 34 cortical regions per hemisphere. We then transformed the regional cortical volumes into centile scores using the lifespan chart of the human brain derived from the largest MRI sample to date (Bethlehem et al., 2022). The centile score evaluates the extent to which an individual deviates from the normative distribution of reference samples of the same sex and similar age. We assessed the developmental state of the 34 brain regions in our child and adult cohorts based on the age at peak regional volume from the lifespan chart. This enabled us to roughly evaluate whether those regions exhibited volume enlargement or reduction in both the child and adult cohorts.
    For the data analysis, we first examined whether children and adults differed on the ocular tracking performance metrics (i.e., performance index and the 12 oculometric measures). We then performed Spearman’s rank correlation analysis between the centile scores of the 34 brain regions and the performance metrics that exhibited intergroup differences. Regarding the individual oculometric measures, we explored three key questions: (1) Which brain regions are specifically involved in adults during unpredictable ocular tracking? To address this, we examined whether any brain regions showed significant correlations between volume centile scores and two or more oculometric measures exclusively in adults, not in children. (2) Which brain regions are specifically involved in children? To this end, we explored whether any brain regions demonstrated significant correlations between volume centile scores and two or more oculometric measures exclusively in children, not in adults. (3) Are there any brain regions that play a significant role in both adults and children? To address this, we investigated whether the volume percentile scores of any brain regions exhibited a significant correlation with at least one identical oculometric measure in both adults and children.
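A schematic sketch of the region-wise Spearman analysis described above, using randomly generated placeholder data in place of the actual centile scores and oculometric measures:

```python
# Illustrative sketch (placeholder data): Spearman rank correlations between
# regional volume centile scores (34 Desikan-Killiany regions) and an
# oculometric measure, run within one age group.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_children, n_regions = 52, 34
centiles = rng.uniform(0, 1, size=(n_children, n_regions))   # volume centile scores
perf_index = rng.normal(0, 1, size=n_children)                # tracking performance index

for region in range(n_regions):
    rho, p = spearmanr(centiles[:, region], perf_index)
    if p < 0.05:   # report regions with an (uncorrected) significant correlation
        print(f"region {region}: rho = {rho:.2f}, p = {p:.3f}")
```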
RESULTS: Children demonstrated inferior performance in the ocular tracking task compared to adults, as indicated by both the performance index and the individual oculometric measures. Correlation analysis revealed distinct brain regions associated with the performance index in adults and children. In adults, the centile scores of the caudal middle frontal region (including the DLPFC) and the pars opercularis in the frontal cortex specifically exhibited correlations with the performance index. In contrast, in children, the superior parietal region (including IPS and V3a) in the parietal cortex specifically showed correlations with the performance index. According to the age at peak regional volume from the lifespan chart, the brain regions associated with the performance index in adults develop later than those in children. Nevertheless, the respective brain regions in adults and children had all reached a considerable level of development, i.e., they were in developmental stages characterized by cortical volume reduction. This aligns with our correlation results: for both children and adults, the smaller the volume centile score of the related brain region, the better the overall ocular tracking ability (i.e., the larger the performance index).
Correlation analysis on the individual oculometric measures revealed a similar pattern. Specifically, the brain regions associated with oculometric measures in adults develop later than those in children. In adults, these brain regions were primarily located in the frontal cortex, including the caudal middle frontal, medial orbitofrontal, and frontal pole regions. In children, these brain areas were located in the pericalcarine cortex of the occipital lobe and the isthmus cingulate. In both adults and children, a smaller volume centile score of the related brain regions was correlated with better ocular tracking performance, such as shorter pursuit latency and a larger proportion of smooth pursuit. These findings align with the fact that the respective brain regions in both adults and children are in stages of cortical volume reduction.
In addition to the distinct correlation patterns found in children and adults, the correlation analysis on oculometric measures also revealed certain brain regions that exhibited overlapping correlations in both age groups. These shared regions include the precentral (including FEF) and superior temporal areas. Interestingly, the precentral region was in different developmental stages in adults (i.e., volume reduction) and children (i.e., volume enlargement). Accordingly, in adults, a smaller volume centile score of the precentral region was associated with larger open-loop acceleration, whereas in children, a larger centile score was associated with larger open-loop acceleration. This developmental state in children contrasts with the findings mentioned above, in which the respective brain regions in both adults and children had entered the stage of cortical volume reduction.
CONCLUSIONS: This study represents the first investigation into the underlying neural basis of ocular tracking development. We found that the ocular-tracking brain network in adults includes regions that are absent in the network of children, and these regions generally develop later. Conversely, the ocular-tracking brain network in children contains regions that are absent in adults, and these regions usually develop earlier. This may be because certain brain regions in children have not yet reached a considerable level of development; consequently, the connections between these brain regions and ocular tracking performance observed in adults have not yet been established. Instead, children rely on brain regions that have already developed and are functional for them but are absent from the adult network. Additionally, a critical brain region, the FEF, which is responsible for generating oculomotor commands and has major outputs to subcortical regions for controlling ocular tracking behavior, exhibits a significant correlation with children's performance despite its incomplete development. In general, the relationship between brain structure and ocular-tracking performance can be described as follows: if the cortical volume of a brain region has not yet reached its peak, then the larger the volume centile score, the better the ocular-tracking performance, and vice versa.
    Related Articles | Metrics
    Identification of Depression in Old Age Based on Eye Movement Characteristics
    Genying Huang, Yafang Li
    2023, 31 (suppl.):  142-142. 
    Abstract ( 67 )  
PURPOSE: To collect eye-movement data from older adults with depression and healthy older adults during a fixation stability task, a saccade task, and a free-viewing task; to analyze the eye-movement characteristics of depressed and healthy older adults in these simple eye-movement tasks; and to provide an empirical basis for constructing eye-movement indicators that can assist in screening for late-life depression and support objective clinical diagnosis.
METHODS: Sixteen older adults with depression who met the ICD-10 (International Classification of Diseases, 10th revision) diagnostic criteria for a depressive episode, recruited from the Jiangxi Provincial Psychiatric Hospital, and 16 healthy community-dwelling older adults served as participants. Eye-movement data were collected with a Tobii Pro Spectrum eye tracker, and the eye-movement characteristics of the depressed and healthy older adults were compared in the fixation stability, saccade, and free-viewing tasks.
RESULTS: (1) In the fixation stability task, the depressed older adults made significantly more saccades than the healthy older group (t=2.158, P<0.05). (2) In the prosaccade and antisaccade tasks, the saccade direction correct rate of the depressed group was significantly lower than that of healthy controls (t=-2.38, P<0.05), and their antisaccade accuracy was also significantly lower than that of healthy controls (t=-2.32, P<0.05). (3) In the free-viewing task, the interaction between emotional face type and group was not significant (P>0.05), whereas the main effect of group was significant for the numbers of fixations and saccades and for mean pupil diameter (P<0.05): older patients with depression made fewer fixations on emotional faces than healthy controls (F=4.98, P<0.05), made more saccades (F=4.84, P<0.05), and had a larger mean pupil diameter (F=17.40, P<0.0001).
CONCLUSIONS: Older patients with depression show characteristic eye-movement patterns in the fixation stability, saccade, and free-viewing tasks. This indicates that eye-movement indices can help distinguish older adults with depression from healthy older adults and have application value for the clinical diagnosis of late-life depression.
    Related Articles | Metrics
    Controlling Eye Movements and REM Sleep by Distinct Cholinergic Neurons in Oculomotor Nucleus
    Chengyong Jiang, Xinrong Tan, Qingshuo Meng, Er Chen, Liyuan Cui, Yanyu Xiong, Zixuan Yan, Biao Yan, Jiayi Zhang
    2023, 31 (suppl.):  143-143. 
    Abstract ( 117 )  
PURPOSE: The oculomotor nucleus (OMN) in the midbrain is a well-known center for eye movements (EMs). As a paradigmatic behavior, EMs show distinct activation patterns during rapid eye movement (REM) and non-REM (NREM) sleep stages. Previous studies indicate that the OMN may be involved in the regulation of EMs during REM sleep. However, it is still unclear whether neurons in the OMN participate in regulating sleep states and how they modulate different sleep states and EMs separately.
METHODS: To clarify the structural and functional basis of OMN cholinergic (OMNChAT) neurons in regulating EMs during REM sleep, we used viral tracing, immunohistochemistry, in vivo and in vitro electrophysiology, deep-brain calcium imaging, and optogenetic stimulation during polygraphic recordings.
RESULTS: In this study, we found that a subset of cholinergic neurons within the OMN were active during sleep and exhibited increased activity before the end of REM sleep, especially during transitions from REM sleep to wakefulness. Through optogenetic manipulation and calcium imaging, we revealed functional heterogeneity among OMN neurons and identified a subset of cholinergic neurons that did not control EMs but could terminate REM sleep. We identified the nucleus papilio (NP) as a potential effector pathway through which optogenetic activation of the OMN initiates the termination of REM sleep.
CONCLUSIONS: Our results suggest that the OMN is a key node that possesses functionally segregated subpopulations capable of parallel, independent control of EMs and sleep.
    Related Articles | Metrics
    Ocular tracking abilities in preadolescent children
    Bao Hong, Wenjun Huang, E'jane Li, Jing Chen, Li Li
    2023, 31 (suppl.):  144-144. 
    Abstract ( 117 )  
    PURPOSE: Humans combine smooth pursuit and saccades in the ocular tracking of moving objects of interest. Although many studies have examined ocular tracking in children, these studies used predictive stimuli or stimuli of low uncertainty, thus smooth pursuit and saccades were assessed in conjunction with the predictive abilities. In addition, no study to date has examined the visual processing of target motion signals during ocular tracking in children. The current study aims to address these research gaps.
METHODS: We used an 8-minute ocular-tracking task based on the classic step-ramp paradigm modified to accommodate a full sampling of the polar angles. On each trial, participants tracked the step-ramp motion of a target (a cartoon character, 0.64°H × 0.64°V). Both the target speed and moving direction were randomly sampled from a range (speed range: 16-24°/s; direction range: 2-358° in 4° increments without replacement) to minimize expectation effects. A total of 78 children aged 8-9 years (female/male: 46/32) and 76 adults aged 18-30 years (female/male: 43/33) participated in this task. First, we computed 12 oculometric indices to measure different aspects of ocular-tracking performance and the dynamic visual processing of target motion. Second, previous studies have reported that open-loop pursuit relies on the reliability of visual processing of motion signals and can be affected by target motion signals in previous trials. In the current study, we thus examined the degree to which the open-loop pursuit response was affected by the target moving direction of the previous trials (i.e., the serial dependence effect). This helps reveal the robustness of visual processing of target motion signals in children.
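A hypothetical sketch of the serial-dependence analysis described above: the open-loop pursuit direction error on each trial is examined as a function of the wrapped difference between the previous and current target directions (the data and bin widths below are illustrative assumptions):

```python
# Toy serial-dependence analysis on simulated trials; an attraction toward the
# previous target direction shows up as mean errors with the same sign as the
# previous-minus-current direction difference.
import numpy as np

def wrap(deg):
    """Wrap angles to the range (-180, 180]."""
    return (np.asarray(deg) + 180.0) % 360.0 - 180.0

rng = np.random.default_rng(2)
target_dir = rng.uniform(0, 360, size=500)                # target direction per trial
pursuit_dir = target_dir + rng.normal(0, 10, size=500)    # measured open-loop pursuit direction

error = wrap(pursuit_dir - target_dir)                    # pursuit direction error
delta_prev = wrap(target_dir[:-1] - target_dir[1:])       # previous minus current target direction

bins = np.arange(-180, 181, 30)
idx = np.digitize(delta_prev, bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():
        print(f"{bins[b-1]:+4d} to {bins[b]:+4d} deg: "
              f"mean error = {error[1:][sel].mean():+.2f} deg")
```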
    RESULTS: All the 12 oculometric indices showed that the ocular tracking abilities in children were inferior to those in adults: (1) In the pursuit initiation stage when eye movements are primarily driven by input target motion signals (i.e., open-loop response), children had prolonged latency and slower eye acceleration; (2) in the steady-state tracking stage when eye movements are also driven by extra-retinal information (such as efference copy) about eye positions to correct tracking errors (i.e., closed-loop response), children’s pursuit velocity lagged more behind the target velocity, and the proportion of smooth pursuit was also lower; (3) for the saccadic eye movements, the frequency of saccades and the spatial distribution of saccade direction were similar in children and adults, whereas the amplitude of catch-up saccades was greater in children than in adults; (4) for tracking direction and speed, ocular tracking gains for open-loop pursuit direction and close-loop tracking speed were similar in children and adults but both were less precise in children than in adults. Of all the oculometric indices, the greatest difference between children and adults was in the latency of pursuit initiation (children’s mean at 0.3% of the adult population). In addition, for both children and adults, the pursuit direction in the open-loop response was pulled toward the target moving direction in the previous trial. Such serial dependence effect was stronger in children than in adults.
CONCLUSIONS: Our ocular-tracking task provides a wide range of largely independent oculometric indices that allow us to examine smooth pursuit and saccades and their coordination, as well as the visual processing of target motion signals, in preadolescent children aged 8-9 years. Both open-loop and closed-loop tracking responses in children are inferior to those in adults. For the first time, we found that the gains of open-loop pursuit direction and closed-loop tracking speed in children are comparable to those in adults, whereas the precision of both does not reach the adult level. This might be due to less reliable visual processing of target motion signals in children than in adults, which leads to the finding that the open-loop pursuit direction depends more on the recent history of target motion signals in children than in adults. We conclude that the development of different aspects of ocular tracking abilities follows different time courses, with the abilities related to open-loop pursuit maturing last. The findings of the current study provide insights into the maturation process of the cortical areas in charge of ocular tracking.
    Related Articles | Metrics
    From Simple to Complex: Virtual Reality Technology Reveals the Characteristics of Object-based Inhibition of Return in Three-dimensional Space
    Qinyue Qian, Aijun Wang, Ming Zhang
    2023, 31 (suppl.):  146-146. 
    Abstract ( 60 )  
PURPOSE: Object-based inhibition of return (IOR) acts as a foraging facilitator that inhibits re-attending to previously inspected locations. However, most previous studies were conducted in two-dimensional planes rather than three-dimensional space, so it is not clear how the characteristics of object-based IOR change when objects cross depths. This study aims to explore the effects of depth and object representation on object-based IOR in three-dimensional space, and to provide theoretical support for the traffic and design fields while extending the theories of attentional spreading and attentional prioritization.
    METHODS: Virtual reality technology was used to present a double-rectangle paradigm. Experiment 1 used simple drawings with parallax as double rectangles; Experiment 2 removed parallax to eliminate confounding variables; Experiments 3, 4, and 5 used real objects with parallax, with the difference that the objects in Experiment 4 were more salient, and the similarity between the two objects in Experiment 5 was lower.
    RESULTS: Experiment 1 only had object-based IOR in near space; Experiment 2 had object-based IOR in both upper and lower visual fields, and there was no significant difference; Experiments 3 and 4 only had object-based IOR in far space; Experiment 5 had object-based IOR in both far and near spaces.
    CONCLUSIONS: Object-based IOR of simple drawings only exists in near space, and object-based IOR of real objects initially only exists in far space, and also appears in near space as object similarity decreases. Attentional spreading affects object-based IOR of simple drawings, while attentional prioritization dominates object-based IOR of real objects.
    Related Articles | Metrics
    Integration and Suppression Interact in Binocular Vision
    Rong Jiang, Ming Meng
    2023, 31 (suppl.):  147-147. 
    Abstract ( 91 )  
PURPOSE: Contingent on stereo compatibility, two images presented dichoptically can lead either to binocular integration, thus generating stable stereopsis, or to interocular suppression, which induces binocular rivalry with bistable perception that alternates between the two images. Here, a series of psychophysical experiments were conducted to investigate the interactions between binocular rivalry and stereopsis.
    METHODS: In Experiment 1, observers performed a stereo detection task for stimuli presented in the vicinity of rivalrous vs. non-rivalrous stimuli. In Experiments 2a & 2b, observers were instructed to track binocular rivalry which occurred around stereo vs. non-stereo stimuli.
    RESULTS: In Experiment 1, we found that the presence of binocular rivalry inhibited stereopsis with greater inhibition resulting from higher rivalry contrast. In Experiments 2a & 2b, we found that existing stereopsis balanced the dynamics of peripheral binocular rivalry, rendering more equivalent eye dominance.
    CONCLUSIONS: Binocular integration and interocular suppression are interconnected and an overlapping mechanism related to binocular balance may underlie the connection between these two processes, as well as the formation of unified conscious visual representation from binocular inputs.
    Related Articles | Metrics
    Seeing in Crowds: Averaging first, then Signed-Max
    Xincheng Lu, Ruijie Jiang, Meng Song, Yiting Wu, Nihong Chen
    2023, 31 (suppl.):  149-149. 
    Abstract ( 91 )  
PURPOSE: As a fundamental bottleneck in object recognition, crowding is typically considered to reflect excessive integration of nearby items, resulting in impaired target identification. However, the integration strategy in visual crowding remains unclear. Here we proposed a new integration model of visual crowding and tested whether it accounts for the observations better than other models.
METHODS: We adopted a magnitude matching paradigm (Baldassi et al., 2006; Gheri & Baldassi, 2008) to probe the internal response to target orientation in clutter. Subjects were asked to report both the direction and the magnitude of the tilt of a target surrounded by flankers in a crowded display. We varied the number of signal items as well as their tilt angle. In addition, the signal items were placed either along the radial or the tangential axis with respect to the foveal fixation point. By constructing the response distributions, we tested which model best accounts for the observations across signal numbers, tilt angles, and radial-tangential layouts.
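To make the contrast between the candidate rules concrete, the following toy simulation (not the authors' fitted model) compares a pure averaging rule with one simple instantiation of an average-then-signed-max rule; the latter depletes reports near zero, producing the bimodal signature discussed below:

```python
# Toy simulation: reported tilt magnitude under pure averaging versus an
# "average within small pools, then signed-max across pools" rule.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_items = 10000, 4      # target plus flankers within the averaging zone
target_tilt, noise_sd = 10.0, 6.0 # deg; internal noise per item

tilts = np.zeros((n_trials, n_items))
tilts[:, 0] = target_tilt                                  # one tilted signal item
responses = tilts + rng.normal(0, noise_sd, size=tilts.shape)

# Rule 1: pure averaging over the zone -> unimodal distribution around 2.5 deg
avg_report = responses.mean(axis=1)

# Rule 2: local averaging into two pools, then keep the pool with the largest
# absolute tilt (with its sign) -> reports pushed away from zero (bimodal)
pools = responses.reshape(n_trials, 2, 2).mean(axis=2)
signed_max = pools[np.arange(n_trials), np.abs(pools).argmax(axis=1)]

def near_zero(r):
    return np.mean(np.abs(r) < 2.0)

print(f"averaging:  mean = {avg_report.mean():.2f}, "
      f"proportion near zero = {near_zero(avg_report):.2f}")
print(f"signed-max: mean = {signed_max.mean():.2f}, "
      f"proportion near zero = {near_zero(signed_max):.2f}")
```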
RESULTS: 1) In contrast to the prediction of a pure averaging operation, we observed a bimodal response distribution, supporting the involvement of a signed-max operation at a late stage. 2) Crowding could be modelled with a radially elongated averaging zone and was better accounted for when the signals were positioned in a congruent spatial layout. 3) An increase in signal number was associated with higher precision in orientation judgment, which can also be predicted by the mixed model.
    CONCLUSIONS: Our findings suggest a hybrid strategy in combining crowded signals: averaging in early visual processing stage followed by a signed-max operation in higher-level processing stage.
    Related Articles | Metrics
    Linking Transcriptomes with Morphological and Functional Phenotypes in Mammalian Retinal Ganglion Cells
    Qiang Xu, Wanjing Huang, Jing Su, Sheng Liu
    2023, 31 (suppl.):  150-150. 
    Abstract ( 56 )  
PURPOSE: Retinal ganglion cells (RGCs) are the brain's only gateway to the visual world. They can be classified into different types based on their electrophysiological, transcriptomic, or morphological characteristics. However, whether the transcriptomic types have corresponding phenotypes is still unknown.
METHODS: Here we characterized the transcriptomic, morphological, and functional features of 472 high-quality RGCs using Patch-seq, providing functional and morphological annotation for many transcriptomically defined cell types of a previously established RGC atlas.
    RESULTS: We showed a convergence of different modalities in defining the RGC identity, and revealed the degree of correspondence for well-characterized cell types across multimodal data. Moreover, we complemented newly discovered RGC types with detailed morphological and functional properties. We also identified differentially expressed genes among ON, OFF and ON-OFF RGCs such as Vat1l, Slitrk6 and Lmo7, providing candidate marker genes for functional studies.
    CONCLUSIONS: Our research suggests that the molecularly distinct clusters may also differ in their roles of encoding visual information.
    Related Articles | Metrics
    Population Receptive Field and Top-down Information Transmission Properties in Sub-bundles of the Human Optic Radiation
    Yanming Wang, Huan Wang, Benedictor Alexander Nguchu, Du Zhang, Xiaoxiao Wang, Bensheng Qiu
    2023, 31 (suppl.):  151-151. 
    Abstract ( 54 )  
    PURPOSE: To investigate the retinotopic properties and the top-down information transmission characteristics of the optic radiation (OR) sub-bundles.
METHODS: The 7T retinotopy dataset from the Human Connectome Project (HCP) was used to reconstruct the OR. Specifically, OR segmentation into sub-bundles was performed according to the retinotopic map of the primary visual cortex (V1). To suppress the influence of gray matter (GM) signals, white matter (WM) masks set to different degrees of restraint were used. The OR sub-bundles were confined within the spaces of these masks, which were ranked as level 1, level 2, and level 3 sub-bundles. The population receptive field (pRF) model was then applied to evaluate the retinotopic properties of these sub-bundles. Moreover, the consistency of the pRF properties of level 1 sub-bundles with those of V1 sub-fields at the endpoint of the sub-bundles was evaluated. Correlation analysis was performed to evaluate the relationships of the pRF parameters of level 2 and 3 sub-bundles with those of the level 1 sub-bundles. In addition, we applied the HCP working memory dataset to evaluate the activation of the foveal and peripheral OR sub-bundles and its correlation with those of the foveal and peripheral V1 and lateral geniculate nucleus (LGN) sub-fields.
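For illustration, a minimal grid-search version of the pRF idea applied to a single time series is sketched below; the stimulus apertures and data are simplified placeholders, and hemodynamic convolution is omitted, so this is not the HCP 7T retinotopy pipeline:

```python
# Minimal pRF grid fit: sweep an isotropic 2D Gaussian over candidate
# centers/sizes and keep the candidate whose predicted response best
# correlates with the measured signal.
import numpy as np

def gaussian_prf(xg, yg, x0, y0, sigma):
    return np.exp(-((xg - x0) ** 2 + (yg - y0) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(4)
n_time, n_pix = 120, 51
apertures = rng.integers(0, 2, size=(n_time, n_pix, n_pix)).astype(float)  # placeholder stimuli

xs = np.linspace(-8, 8, n_pix)                  # visual-field coordinates (deg)
xg, yg = np.meshgrid(xs, xs)

# Synthesize a "measured" signal from a known pRF plus noise
true_prf = gaussian_prf(xg, yg, 2.0, -1.0, 1.5)
signal = apertures.reshape(n_time, -1) @ true_prf.ravel()
signal += rng.normal(0, 0.1 * signal.std(), n_time)

best = (-np.inf, None)
for x0 in np.linspace(-6, 6, 13):
    for y0 in np.linspace(-6, 6, 13):
        for sigma in (0.5, 1.0, 1.5, 2.0, 3.0):
            pred = apertures.reshape(n_time, -1) @ gaussian_prf(xg, yg, x0, y0, sigma).ravel()
            r = np.corrcoef(pred, signal)[0, 1]
            if r > best[0]:
                best = (r, (x0, y0, sigma))

print("best fit (r, (x0, y0, sigma)):", best)
```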
RESULTS: The results showed that the pRF properties of the differentially constrained sub-bundles exhibited the standard retinotopic properties, and the pRF properties of the level 1 sub-bundles were in good agreement with those of the V1 sub-fields. Moreover, the pRF parameters of the level 2 and 3 sub-bundles were significantly correlated with the level-1 pRF parameters, indicating that the OR sub-bundle evaluation was GM-free. Notably, the activation of the OR sub-bundles under the 2bk task was significantly stronger than that under the 0bk task. The activation differences between 2bk and 0bk for the foveal and peripheral OR sub-bundles were significantly correlated with those of the corresponding V1 sub-fields, but not with those of the corresponding LGN sub-fields.
CONCLUSIONS: The findings demonstrated that the blood oxygen level-dependent (BOLD) signals of OR sub-bundles can encode high-fidelity visual information, indicating the feasibility of assessing WM functional activity at the tract sub-bundle level. Moreover, the OR not only transmits visual information bottom-up but also engages in top-down cognitive processes, such as visual working memory.
    Related Articles | Metrics
    Cortical Reorganization after Axial Alignment in Older Children and Adults with Strabismus
    Yiru Huang, Zitian Liu, Zidong Chen, Yanyan Wu, Daming Deng, Fang-Fang Yan, Chang-Bing Huang, Minbin Yu
    2023, 31 (suppl.):  152-152. 
    Abstract ( 78 )  
PURPOSE: To measure visual crowding, an essential bottleneck on object recognition and a reliable psychophysical index of cortical organization, in older children and adults with horizontal concomitant strabismus before and after strabismus surgery.
    METHODS: Using real-time eye tracking to ensure gaze-contingent display, we examined the peripheral visual crowding effects in older children and adults with horizontal concomitant strabismus but without amblyopia before and after strabismus surgery. Subjects were asked to discriminate the orientation of the central tumbling E target letter with flankers arranged along the radial or tangential axis in the nasal or temporal hemifield at different eccentricities (5° and 10°). The critical spacing value, which is the minimum space between the target and the flankers required for correct discrimination, was obtained for comparisons before and after strabismus surgery.
    RESULTS: Twelve individuals with exotropia (6 males, 21.75 ± 7.29 years, mean ± SD) and fifteen individuals with esotropia (6 males, 24.13 ± 5.96 years) participated in this study. We found that strabismic individuals showed significantly larger critical spacing with nasotemporal asymmetry along the radial axis that related to the strabismus pattern, with exotropes exhibiting stronger temporal field crowding and esotropes exhibiting stronger nasal field crowding before surgical alignment. After surgery, the critical spacing was reduced and rebalanced between the nasal and temporal hemifields. Furthermore, the postoperative recovery of stereopsis was associated with the extent of nasotemporal balance of critical spacing.
CONCLUSIONS: We found that ocular realignment (i.e., strabismus surgery) can normalize the enlarged visual crowding effects, a reliable psychophysical index of cortical organization, in the peripheral visual field of older children and adults with strabismus and rebalance the nasotemporal asymmetry of crowding, thereby promoting the recovery of postoperative stereopsis. Our results indicate a potential for experience-dependent cortical reorganization after axial alignment even in individuals who are beyond the critical period of visual development, illuminating the capacity and limitations of optics for sensory plasticity and emphasizing the importance of ocular correction in clinical practice.
    Related Articles | Metrics
    Object Space as the Foundation for Object Recognition in the Human Ventral Temporal Cortex
    Baoqi GONG, Wei JIN, Pinglei BAO
    2023, 31 (suppl.):  153-153. 
    Abstract ( 104 )  
PURPOSE: Object recognition, an essential cognitive function in the human visual system, depends on the ventral temporal cortex (VTC). However, the functional principles and neural mechanisms of the IT cortex remain largely unexplored. Earlier studies have proposed the use of the object space model to understand the functional organization of the IT cortex in macaques (Bao et al., 2020), but its relevance to humans is still unclear.
    METHODS: To address this question, we used fMRI to measure the response to a large number (n = 500) of static object stimuli from 5 subjects.
RESULTS: The object space was defined using principal component analysis of the VTC's responses. Our results showed that the functional organization of the VTC can be represented by a low-dimensional object space, with the first two principal components accounting for 92% of the consistent variance of the representation space. These two principal components can be broadly characterized as face versus spiky objects and animal versus stubby objects. Additionally, to examine the consistency of object spaces among different participants, we used hyperalignment methods (Haxby et al., 2020) to project responses of the VTC onto a common space and then back onto the cortex of one participant, creating a unified template. High consistency across subjects was found not only in known category-selective areas but also in other parts of the VTC, suggesting a common space represented across different subjects. To further investigate the similarities in object representation between humans and macaques, we compared the object space between the two species. Comparisons with electrophysiological data from neurons showed that the space constructed from human VTC responses closely resembles that represented in the IT cortex of macaques.
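A schematic sketch of the object-space construction step (PCA on a stimulus-by-voxel response matrix, keeping the first two components); the data are random placeholders rather than the measured VTC responses:

```python
# Schematic: derive a low-dimensional "object space" from response patterns
# by principal component analysis; each stimulus maps to a 2-D coordinate.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_stimuli, n_voxels = 500, 2000
responses = rng.normal(size=(n_stimuli, n_voxels))    # placeholder beta estimates

pca = PCA(n_components=2)
object_space = pca.fit_transform(responses)           # stimuli x 2 coordinates

print("explained variance ratio:", pca.explained_variance_ratio_)
print("object-space coordinates of first stimulus:", object_space[0])
```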
    CONCLUSIONS: This implies that the creation of object-specific space representations is a key aspect of object recognition and that the functional organization of the IT cortex is preserved across species.
    Related Articles | Metrics
    A Study on Information Encoding Strategies in V1 and V4 Cortex of Cat During Visual Contrast Detection
    Zheng Ye, Shunshun Chen, Hongyan Lu, Jian Ding, Qingyan Sun, Tianmiao Hua
    2023, 31 (suppl.):  154-154. 
    Abstract ( 78 )  
    PURPOSE: To explore information encoding strategies in the visual cortex of ascending hierarchy during visual contrast detection.
METHODS: We measured cats' behavioral contrast sensitivity versus spatial frequency (CSF) and contrast threshold versus external noise contrast (TvC) functions using a staircase method, and constructed neuronal CSF and TvC functions through ROC analysis of gamma power and theta-gamma phase-amplitude coupling (PAC) intensity based on local field potentials recorded in the cats' primary visual cortex (V1) and area 21a, a higher-order visual area homologous to primate V4.
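As a hedged illustration of one common PAC estimator, the mean vector length between theta phase and gamma amplitude can be computed as below on a synthetic signal; the abstract does not specify which PAC measure was used:

```python
# Mean-vector-length PAC on a synthetic LFP in which gamma bursts ride on
# theta peaks; a larger modulus indicates stronger theta-gamma coupling.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(6)
theta = np.sin(2 * np.pi * 6 * t)
lfp = theta + 0.3 * (1 + theta) * np.sin(2 * np.pi * 60 * t) + 0.2 * rng.normal(size=t.size)

theta_phase = np.angle(hilbert(bandpass(lfp, 4, 8, fs)))
gamma_amp = np.abs(hilbert(bandpass(lfp, 30, 90, fs)))

mvl = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
print(f"theta-gamma PAC (mean vector length) = {mvl:.3f}")
```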
    RESULTS: The neuronal CSFs and TvCs measured by either gamma power or theta-gamma PAC in the V1 and V4 cortex were highly correlated with their behavioral counterparts. However, the neuronal contrast sensitivity (CS) on CSFs and the contrast threshold (TC) on TvCs in V1 cortex were closer to behavioral counterparts than in V4 cortex if neuronal CSFs and TvCs were measured with gamma power, whereas the neuronal CS and TC in V4 were closer to behavioral counterparts than in V1 if neuronal CSFs and TvCs were assessed with theta-gamma PAC intensity.
CONCLUSIONS: V1 and V4 cortex may contribute to visual perception through different information-processing strategies. The lower-level V1 cortex may encode stimulus contrast depending primarily on the high signal-to-noise activity of gamma oscillations, whereas the higher-level V4 may encode stimulus contrast depending primarily on the modulation of gamma by the low-frequency theta oscillation.
    Related Articles | Metrics
    Context-dependent Attentional Spotlight in Pulvinar-V1 Interaction
    Nihong Chen, Hailin Ai, Xincheng Lu
    2023, 31 (suppl.):  160-160. 
    Abstract ( 62 )  
    PURPOSE: As the largest thalamic nucleus with extensive cortical connections, the pulvinar has long been recognized as a critical node in the attention network. However, the role of the pulvinar remains unclear in selective attention. The present study aimed to investigate the interaction between the pulvinar and V1 during selective attention and its contextual dependency.
METHODS: Using fMRI, we computed background connectivity between the pulvinar and V1 under focused versus diffuse attention allocation, in weak and strong visual crowding contexts. Furthermore, we used time-lagged connectivity and dynamic causal modelling (DCM) to examine how attention and context modulate the directional connectivity.
    RESULTS: Our findings revealed that focused attention led to enhanced correlations between the pulvinar and V1. Notably, this modulation was initiated by the pulvinar, and the strength of the modulation was dependent on the saliency of the target.
CONCLUSIONS: We suggest that the pulvinar initiates information reweighting to V1 according to attentional demands. These results provide valuable insights into the intricate interplay between attentional processes in subcortical structures like the pulvinar and cortical processing in the visual system.
    Related Articles | Metrics
    Learning Improves Peripheral Vision via Enhanced Cortico-cortical Communications
    Yuwei Cui, MiYoung Kwon, Nihong Chen
    2023, 31 (suppl.):  161-161. 
    Abstract ( 118 )  
PURPOSE: When one's central vision is deprived, a spared part of the peripheral retina acts as a pseudo-fovea for fixation, which is termed the preferred retinal locus (PRL). Previously, we demonstrated that oculomotor training with simulated central vision loss not only induced a PRL in normally sighted adults, but also reduced crowding at the PRL (Chen et al., 2019). Does this compensatory adjustment involve changes in information communication in the visual processing network? Here we addressed this question by performing functional connectivity analyses on the BOLD fMRI signals recorded before and after training.
    METHODS: During the scan, crowded letters were displayed at the PRL while subjects were engaged in a central fixation task. Background connectivity was computed based on residual timeseries after removing stimulus-evoked signals. Voxels in the extrastriate cortex and in the intraparietal sulcus (IPS) that showed a stronger response to stimuli were identified as seeds for computing correlation with vertexwise V1. We compared background connectivity around the region covering the crowded letters before and after training.
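A simplified sketch of the background-connectivity computation described above (regress task predictors out of each region's time series and correlate the residuals); the design matrix and signals are illustrative placeholders:

```python
# Background connectivity: remove stimulus-evoked variance from each region's
# time series, then correlate the residual fluctuations between regions.
import numpy as np

rng = np.random.default_rng(7)
n_trs = 300
design = rng.normal(size=(n_trs, 3))          # task regressors (assumed HRF-convolved)
shared = rng.normal(size=n_trs)               # background fluctuation shared by regions
v1 = design @ np.array([1.0, 0.5, 0.2]) + shared + rng.normal(size=n_trs)
ips = design @ np.array([0.8, 0.3, 0.1]) + shared + rng.normal(size=n_trs)

def residualize(y, X):
    """Remove the least-squares fit of X (plus an intercept) from y."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return y - Xc @ beta

bg_corr = np.corrcoef(residualize(v1, design), residualize(ips, design))[0, 1]
print(f"background connectivity (V1-IPS residual correlation) = {bg_corr:.2f}")
```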
    RESULTS: After training, the connectivity was enhanced between V1 and V2, and between V1 and IPS. This effect was observed at the retinotopic regions representing the crowded target.
    CONCLUSIONS: PRL training enhanced the background connectivity between V1 and V2, and between V1 and IPS. These results suggest that learning enhances peripheral vision by reweighting information transmission in the visual processing hierarchy in the human visual cortex.
    Related Articles | Metrics
    Population Coding for Figure-ground Texture Segregation in Macaque V1 and V4
    Xing-Nan Zhao, Xing-Si Dong, Si Wu, Shi-Ming Tang, Cong Yu
    2023, 31 (suppl.):  163-163. 
    Abstract ( 91 )  
    PURPOSE: Object recognition involves the brain segregating objects from their surroundings. Neurophysiological studies of figure-ground texture segregation have yielded inconsistent results, particularly regarding whether V1 neurons are capable of figure-ground segregation, or simply detect texture borders. To address the issue, here we employed two-photon imaging to study V1/V4 population coding for figure-ground segregation in awake, fixating macaques.
METHODS: We measured neuronal responses for texture segregation in three V1 fields of view (FOVs), one from each of three awake, fixating macaques, and in six V4 FOVs from another three awake, fixating macaques. The experimental texture stimuli were composed of oriented line segments. Two types of texture stimuli were mainly used: the uniform texture and the figure-ground texture. The uniform texture was a 32° × 32° patch composed of randomly positioned line segments with one of four orientations (0°, 45°, 90°, 135°). The figure-ground texture was composed of a 4° × 4° square figure texture superimposed on a 32° × 32° uniform ground with orthogonal orientations. During the recordings, the figure position varied relative to the pRF of the FOV, and V1 and V4 neuronal responses to the figure, figure-ground border, and ground during a passive-viewing task were recorded. To analyze the V1 and V4 population coding for texture segregation, we trained a three-stage linear support vector machine (SVM) to decode texture border and figure-ground information using PCA-transformed neuronal responses.
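A schematic of PCA-plus-linear-SVM population decoding of this kind is sketched below with random placeholder data; the simple hold-out split and plain LinearSVC stand in for, and do not reproduce, the authors' three-stage scheme:

```python
# Schematic population decoding: project trial-wise responses onto principal
# components and classify figure versus ground with a linear SVM, varying the
# number of components retained.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n_trials, n_neurons = 400, 800
X = rng.normal(size=(n_trials, n_neurons))        # trial x neuron response matrix
y = rng.integers(0, 2, size=n_trials)             # 0 = ground, 1 = figure
X[y == 1] += 0.1                                  # weak, distributed figure signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for n_pcs in (2, 10, 50):
    clf = make_pipeline(PCA(n_components=n_pcs), LinearSVC(max_iter=5000))
    clf.fit(X_tr, y_tr)
    print(f"{n_pcs:>3d} PCs: decoding accuracy = {clf.score(X_te, y_te):.2f}")
```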
RESULTS: When considering the average response changes, it appears that V1 neurons detect the figure-ground texture border rather than segregating the figure from the ground. Meanwhile, the role of V4 neurons in figure-ground segregation is uncertain due to their extremely small effect size. Our population coding results revealed that both V1 and V4 neurons can decode the texture border and segregate the figure from the ground given sufficient principal components (PCs). However, V1 neurons decoded figure-ground borders with considerably higher efficiency (requiring only a few principal components) than V4 neurons. In contrast, V4 neurons were considerably more efficient than V1 neurons at figure-ground segregation.
CONCLUSIONS: These results indicate that V1 neurons are mainly responsible for border detection and, in addition, provide rudimentary figure-ground information that is not well represented by the first, most informative PCs. However, V1 figure-ground information can be linearly read out and efficiently represented by downstream V4 neurons for segregation.
    Related Articles | Metrics
    The Neuronal Mechanism of Biological Motion Information Processing in the Fundus of Superior Temporal Area
    Tingting Feng, Wenhao Han, Tao Zhang
    2023, 31 (suppl.):  164-164. 
    Abstract ( 67 )  
PURPOSE: Detection and recognition of biological motion play an important role in the evolutionary survival of humans. Biological motion, a term coined by Johansson, is produced by a few point lights attached to the head and major joints of an actor. Even without contours linking the joints, observers can quickly identify the biological motion as actions. Despite extensive studies suggesting that the superior temporal sulcus (STS) is key to processing biological motion information, the neuronal mechanisms underlying the processing of biological motion features remain unknown. In the dorsal pathway of the visual system, several areas are specialized for the analysis of complex motion, including the middle temporal area (MT), the medial superior temporal area (MST), the fundus of the superior temporal area (FST), and the superior temporal polysensory area (STP) in the STS. Our lab's previous study revealed that MST can encode biological motion information in a dynamic manner. In this study, we aimed to explore the neuronal mechanism of biological motion information processing in FST, focusing in particular on form, walking direction, and body orientation. We then compared the encoding abilities of FST with those of MST to uncover potential functional distinctions between these two areas in processing biological motion.
METHODS: We examined the neuronal properties of FST by training macaque monkeys to perform passive viewing tasks. Once a cell was isolated, we used a receptive field (RF) test to determine the spatial location and size of the cell's RF. Then, different optic flow stimuli were presented in the RF to investigate motion direction tuning and optic flow selectivity, including four non-linear patterns (expansion, contraction, clockwise rotation, and counter-clockwise rotation) and eight linear stimuli (dots translating in 8 directions spaced 45° apart). Subsequently, we randomly presented eight point-light biological motion stimuli in the RF. These stimuli varied in form (intact vs. scrambled), walking direction (right vs. left), and body orientation (upright vs. inverted). Form-scrambled animations were created by randomizing the initial positions of all dots. Right-walking and inverted biological motion were obtained by horizontally and vertically mirroring the left-walking and upright point-light walkers, respectively.
RESULTS: The responses of FST neurons revealed that some cells preferred intact biological motion, while a minority exhibited a preference for scrambled animations. Additionally, some cells preferred right- or left-walking walkers, while others showed a preference for upright or inverted biological motion. To further understand how FST encodes biological motion features at the population level, we employed the modified F1/F0 technique, the same analysis approach used for MST in our previous study. The technique evaluates neuronal modulation based on the dynamic structure of spike trains over the time course of the biological motion. The results demonstrated that FST is capable of extracting form from biological motion and distinguishing vertical (body orientation) and horizontal (walking direction) spatial transformations, but it does not exhibit the preference for upright biological motion that was observed in MST.
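For reference, a standard F1/F0 computation from a peristimulus time histogram is sketched below on synthetic data; the authors' modified technique may differ in detail:

```python
# Standard F1/F0: amplitude of the response at the stimulus (gait-cycle)
# frequency divided by the mean firing rate, taken from the PSTH spectrum.
import numpy as np

bin_s = 0.01                                   # 10-ms PSTH bins
t = np.arange(0, 2.0, bin_s)                   # 2-s trial
gait_hz = 1.5                                  # assumed walking-cycle frequency
rng = np.random.default_rng(9)

# Synthetic PSTH: firing modulated at the gait frequency plus noise
psth = 20 + 8 * np.sin(2 * np.pi * gait_hz * t) + rng.normal(0, 2, t.size)

spectrum = np.fft.rfft(psth) / psth.size
freqs = np.fft.rfftfreq(psth.size, d=bin_s)

f0 = spectrum[0].real                                           # mean rate (DC)
f1 = 2 * np.abs(spectrum[np.argmin(np.abs(freqs - gait_hz))])   # amplitude at gait frequency

print(f"F0 = {f0:.1f} spikes/s, F1 = {f1:.1f} spikes/s, F1/F0 = {f1 / f0:.2f}")
```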
CONCLUSIONS: FST can encode the form, body orientation, and walking direction of biological motion. However, unlike MST, FST does not show a preference for upright walkers, indicating the absence of an inversion effect in FST. These findings imply that MST and FST may have distinct roles in processing biological motion information.
    Related Articles | Metrics
    The Developing Brain Maintains Dynamic Trade-off Patterns in Visual Networks
    Zhirui Yang, Jingwen Yang, Zelin Chen, Chufen Huang, Shuo Lu
    2023, 31 (suppl.):  168-168. 
    Abstract ( 53 )  
PURPOSE: From the beginning of life, visual perception provides an important means of accessing the environment and supplies stimuli that are vital for brain maturation. However, the developmental pattern of visual processing networks and their dynamic modularity at the whole-brain level during childhood remain unclear. This study aims to shed light on the detailed developmental trajectory of visual networks at different cognitive levels, i.e., object, human face, and word.
METHODS: We utilized millisecond-resolution high-density electroencephalography (HD-EEG) to visualize real-time dynamic functional networks of visual recognition. Three categories of visual stimuli were used: Chinese characters, human faces, and objects (photos of common things). In a large cohort of typically developing children (n=120, aged 3-13) and healthy adults (n=35), neural responses were surveyed through task-activated network analyses based on functional connectivity (FC). Multiple measures of functional network properties, including node hubness, centrality, and modularity, were examined.
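A simplified sketch of the kinds of network measures listed above, computed from a (randomly generated placeholder) functional-connectivity matrix with networkx:

```python
# Compute hubness (degree, betweenness) and modularity from a thresholded
# functional-connectivity matrix; the matrix here is a random placeholder.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(10)
n_nodes = 20
fc = np.abs(rng.normal(size=(n_nodes, n_nodes)))
fc = (fc + fc.T) / 2                       # symmetric FC matrix
np.fill_diagonal(fc, 0)

# Keep the strongest 20% of connections to form a sparse graph
threshold = np.quantile(fc[np.triu_indices(n_nodes, k=1)], 0.8)
adj = (fc >= threshold).astype(int)
G = nx.from_numpy_array(adj)

degree = dict(G.degree())                              # local hubness
betweenness = nx.betweenness_centrality(G)             # global hubness
communities = community.greedy_modularity_communities(G)
Q = community.modularity(G, communities)

print("highest-degree node:", max(degree, key=degree.get))
print("modularity Q = %.2f" % Q)
```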
RESULTS: The developmental trajectory of the visual networks does not upgrade monotonically toward the mature model but maintains trade-off patterns in FC, cortical hubness, and network modularity, mainly including: (1) Decreasing cortical activation is replaced by increasing whole-brain functional connectivity. (2) The allocation of cortical engagement shifts from the occipital visual areas to higher-level cortices such as the left dorsolateral prefrontal cortex (dlPFC) and the parahippocampus. (3) Visual word (Chinese character) network hubs show two different developmental trajectories, evolving either into strengthening hubs in the left dlPFC or into attenuated hubness with enhanced cortical engagement in the left parahippocampus. (4) Modularity measurements show a trade-off between global and local hubs, with global hubs modulated by long-distance connectivity to engage in high-level tasks, whereas local hubs, supported by neighboring connections, emerged earlier than global hubs and demonstrated plasticity to adapt to increasing processing complexity.
CONCLUSIONS: Beyond the maturation of specialized cortices and increasing expertise, our findings highlight the importance of whole-brain network development for high-level cognitive functions, which is characterized by dynamically balancing resources to accommodate the fast-changing brain structure and external environment. These trade-off patterns help the developing brain achieve a dynamic optimization of efficiency and energy saving, especially in high-level tasks such as word reading. This study advances understanding of typical as well as atypical brain development and calls for a rethinking of how cognitive networks develop during childhood.
    Related Articles | Metrics
    Neural Correlates of the Detection of Real Optic Flow in the Human Brain
    Xue-Chun Shen, Zhou-Kui-Dong Shan, Shu-Guang Kuai, Li Li
    2023, 31 (suppl.):  169-169. 
    Abstract ( 102 )  
PURPOSE: When an observer moves in the environment, objects in the world project onto the observer's retina and generate a dynamic motion pattern named optic flow. Optic flow patterns induced by forward/backward self-motion in a rigid scene contain 3D structure information as well as 2D features such as a radial velocity field. Previous studies often treat radial motion patterns that do not contain any 3D structure information about self-motion in a rigid scene as if they were optic flow. Thus, it remains unclear whether the cortical areas reported in these studies are specialized to process real optic flow or radial motion patterns in general. Here, we sought to address this question by finding the neural correlates of the detection of real optic flow using a new psychophysical method.
    METHODS: Two types of visual stimuli were tested: (1) The real 3D-cloud optic flow consisted of dots randomly distributed in a 3D space that moved either away or toward the observer resulting in contraction or expansion optic flow patterns, and (2) the fake optic flow that was generated by shuffling the image velocities of the dots in the 3D-cloud optic flow while keeping their initial positions and motion directions intact. Accordingly, the fake and the real optic flow stimuli were matched regarding not only the static 2D features (e.g., the radial pattern and the CoM) but also the dynamic motion signals (e.g., speed and acceleration of dot motion), except that the dot image motion in the fake optic flow was not consistent with self-motion in a rigid scene. We first conducted a psychophysical experiment in which we varied the motion coherence level (0-60%) of the stimuli by perturbing the image motion direction of a percentage of randomly selected signal dots, and participants were asked to indicate whether they perceived any coherent motion pattern. This experiment had 20 conditions: 2 types of stimuli (fake vs. real optic flow) × 2 motion directions (contraction vs. expansion) × 5 coherence levels (0%-60% with the step of 15%). We then conducted an fMRI experiment to find the cortical areas whose responses can be related to behavioral performance. This experiment adopted a block design and used the same stimuli as the psychophysical experiment. Participants were scanned for 4 sessions, with 8 runs in each session. Each session corresponded to one of the four experimental conditions: 2 types of stimuli (fake vs. real optic flow) × 2 motion directions (contraction vs. expansion). Each run had 15 stimulus blocks (5 motion coherence levels × 3 times) and 4 fixation blocks. Each stimulus block contained 16 trials of a 1-s motion stimulus at one motion coherence level and the fixation block also lasted 16 s. The testing order of stimulus was randomized in each run. Participants made a task-irrelevant judgment (color discrimination) during scanning. For each participant, we identified their visual ROIs and the ROIs previously reported to respond to radial motion stimuli (e.g., V1, V2, V3d, V3a, V3b/KO, MT, MST, V6, V7, VIP, CSv, and Pc) using standard localizers. We performed ROI-based multivoxel pattern analysis (MVPA) to examine the brain responses to contraction and expansion in fake versus real optic flow stimuli.
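A schematic construction of the two stimulus classes (not the authors' exact parameters) is sketched below: real optic flow from forward translation through a 3-D dot cloud under perspective projection, and a fake version in which image speeds are shuffled across dots while each dot keeps its initial position and radial direction:

```python
# Schematic real vs. fake optic flow: the fake version keeps each dot's
# position and motion direction but shuffles image speeds across dots,
# breaking consistency with self-motion through a rigid 3-D scene.
import numpy as np

rng = np.random.default_rng(11)
n_dots, f = 200, 1.0                              # dot count, focal length (arbitrary units)
X = rng.uniform(-5, 5, n_dots)                    # 3-D dot positions
Y = rng.uniform(-5, 5, n_dots)
Z = rng.uniform(2, 20, n_dots)
Tz = 1.0                                          # forward translation speed

# Perspective projection: x = f*X/Z, y = f*Y/Z; with dZ/dt = -Tz the image
# velocity is dx/dt = x*Tz/Z, dy/dt = y*Tz/Z (a radial expansion field)
x, y = f * X / Z, f * Y / Z
vx, vy = x * Tz / Z, y * Tz / Z

speed = np.hypot(vx, vy)
direction = np.arctan2(vy, vx)

fake_speed = rng.permutation(speed)               # shuffle speeds across dots
fake_vx, fake_vy = fake_speed * np.cos(direction), fake_speed * np.sin(direction)

print("real flow mean speed:", speed.mean())
print("fake flow mean speed:", fake_speed.mean())   # matched on average, not per dot
```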
    RESULTS: The psychophysical results showed that the detection threshold for contraction patterns was lower than that for expansion patterns for fake optic flow, but the opposite trend was found for real optic flow. The MVPA results showed that only dorsal area MST showed significantly higher decoding accuracy for contraction than expansion for fake optic flow, but this trend was reversed for real optic flow, mirroring the behavioral data.
CONCLUSIONS: The visual system does not treat non-rigid radial motion patterns (e.g., fake optic flow) the same as real optic flow. Although previous studies have reported that many cortical areas in the human brain respond to radial motion stimuli, only the dorsal area MST shows neural correlates of the detection of real optic flow.
    Related Articles | Metrics
    Eyes are the Windows of Lies
    Xunbing Shen, Xiaoqing Mei, Min Gao, Zhencai Chen, Yafang Li, Mingliang Gong
    2023, 31 (suppl.):  172-172. 
    Abstract ( 89 )  
PURPOSE: The eyes, as the windows to the soul, can reflect many internal mental activities. Can the eyes also be windows to lies? Research has found that the pupil can be used as a cue for deception detection. In addition to the pupil, there is another feature of the eyes that can leak inner mental information: the Eye Aspect Ratio (EAR), which is equal to 0 when the eyes are closed.
METHODS: This study presents a non-intrusive, video-based method that uses computer vision to measure eye features for identifying visible signs of deception. Using video footage of players lying and telling the truth in the game show Golden Balls, the computer-vision software OpenFace was used to analyze the players' eye features in both cases, and pupil size and the Eye Aspect Ratio were calculated. The obtained features were statistically analyzed and fed into the machine-learning software WEKA to distinguish lies from truth.
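For reference, the commonly used Eye Aspect Ratio formulation (often attributed to Soukupová and Čech, 2016) can be computed from six eye landmarks as follows; the landmark coordinates below are a made-up example rather than OpenFace output:

```python
# Eye Aspect Ratio from six landmarks p1..p6 ordered around the eye:
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it approaches 0 as the eye closes.
import numpy as np

def eye_aspect_ratio(pts):
    """pts: 6 x 2 array of eye landmarks ordered p1..p6."""
    p1, p2, p3, p4, p5, p6 = pts
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (2 * np.linalg.norm(p1 - p4))

# Example landmarks for an open eye (x, y pixel coordinates)
eye = np.array([[100, 50], [110, 44], [120, 44], [130, 50], [120, 56], [110, 56]], float)
print(f"EAR = {eye_aspect_ratio(eye):.2f}")   # ~0.4 when open; near 0 when closed
```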
    RESULTS: The findings showed that pupil size did not differ between the lying and truth-telling conditions, and the accuracy of machine learning classification using pupil size as a feature was below 60%. The Eye Aspect Ratio did differ between the two conditions (it was greater during lying than during truth-telling), and the classification accuracies using the Eye Aspect Ratio as a feature were all above 75%.
    CONCLUSIONS: Eyes can be the windows to the lies.
    Related Articles | Metrics
    MRGazerII: Camera-free Decoding of Eye Movements from Functional Magnetic Resonance Imaging
    Rongjie Hu, Jie Liang, Yiwen Ding, Shuang Jian, Xiuwen Wu, Yanming Wang, Zhen Liang, Bensheng Qiu, Xiaoxiao Wang
    2023, 31 (suppl.):  174-174. 
    Abstract ( 104 )  
    PURPOSE: A raw-fMRI-based, end-to-end deep learning model, MRGazerII, was proposed to recognize eye-movement states at temporal intervals of tens of milliseconds.
    METHODS: The movie-watching fMRI data were from the Human Connectome Project (HCP) 7T release, in which each subject watched four movies while their eye movements were recorded. A binary morphology method was used to segment the eye regions, and the intermediate six slices covering the eyeballs were extracted and fed into the deep neural network for eye-movement prediction. A ResNet-CBAM backbone, a Transformer encoder, and a fully connected layer were assembled to produce slice-level predictions. The dataset was split into training and validation groups at the cross-subject level.
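    A minimal sketch of a slice-level classifier in this spirit is shown below; a plain ResNet-18 backbone stands in for ResNet-CBAM, and the input size, layer counts, and pooling are illustrative assumptions, not the authors' exact architecture.

        # Minimal sketch of a slice-level eye-movement classifier: CNN backbone per slice,
        # Transformer encoder over the slice sequence, and a fully connected head for
        # three classes (fixation, blink, saccade). A plain ResNet-18 replaces ResNet-CBAM.
        import torch
        import torch.nn as nn
        from torchvision.models import resnet18

        class EyeMovementNet(nn.Module):
            def __init__(self, n_classes=3, d_model=512):
                super().__init__()
                backbone = resnet18(weights=None)
                backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
                backbone.fc = nn.Identity()            # keep 512-d features per slice
                self.backbone = backbone
                layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(d_model, n_classes)

            def forward(self, x):                      # x: (batch, n_slices, H, W)
                b, s, h, w = x.shape
                feats = self.backbone(x.reshape(b * s, 1, h, w)).reshape(b, s, -1)
                feats = self.encoder(feats)            # attend across the eye slices
                return self.head(feats.mean(dim=1))    # pooled slice features -> class logits

        logits = EyeMovementNet()(torch.randn(2, 6, 64, 64))
        print(logits.shape)                            # torch.Size([2, 3])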
    RESULTS: The proposed model achieved an accuracy of 0.48 in the classification of eye movements, with F1 scores of 0.56, 0.51, and 0.34 for fixations, blinks, and saccades, respectively. Correlation analyses showed that the predictions of fixations (average r = 0.28) and blinks (average r = 0.46) were highly correlated with the eye-tracker records.
    CONCLUSIONS: The proposed MRGazerII, a camera-free eye-tracking method, is able to report typical eye movements at temporal intervals of tens of milliseconds and could be helpful for future combined fMRI and eye-movement analyses.
    Related Articles | Metrics
    Fear Expression Outperforms Happiness as a Lie Detection Indicator
    Xin Zhou, Xunbing Shen, Yuxi Zhou, Zhenzhen Tao
    2023, 31 (suppl.):  177-177. 
    Abstract ( 91 )  
    PURPOSE: Facial expression, as a potential non-verbal cue, is of great significance for lie detection. This study aims to compare the performance of machine learning models using happy and fearful expressions as indicators for distinguishing lies from truth.
    METHODS: We designed a deception-game experiment and recorded videos of subjects lying and telling the truth. Using the facial analysis software OpenFace, we extracted two features of happy expressions (AU06 and AU12) and six features of fearful expressions (AU01, AU02, AU04, AU05, AU20, AU26). These features were fed into the machine learning software WEKA for classification.
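    The study used WEKA; the sketch below is a scikit-learn analogue of the same classifier families (Random Forest, k-nearest neighbours in place of WEKA's IBk, and Bagging), run on simulated AU features purely for illustration.

        # Sketch of the classification step with a scikit-learn analogue of the WEKA
        # pipeline: Random Forest, k-NN (WEKA's IBk), and Bagging on AU features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        # Placeholder fear-expression features: AU01, AU02, AU04, AU05, AU20, AU26 intensities.
        X = rng.random((400, 6))
        y = rng.integers(0, 2, size=400)      # 0 = truth, 1 = lie (simulated labels)

        for name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                          ("IBk (k-NN)", KNeighborsClassifier(n_neighbors=1)),
                          ("Bagging", BaggingClassifier(random_state=0))]:
            acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold CV, as WEKA does by default
            print(f"{name}: {acc:.3f}")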
    RESULTS: When happy-expression features were used for classification, the machine learning models distinguished lies from truth with accuracies of 77.80% (Random Forest), 77.99% (IBk), and 75.14% (Bagging). When fear-expression features were used, the accuracies increased substantially to 94.03% (Random Forest), 92.78% (IBk), and 89.36% (Bagging).
    CONCLUSIONS: Fearful expressions are more effective than happy expressions as indicators for distinguishing lies from truths.
    Related Articles | Metrics
    Emergence and Development of Word Recognition Abilities in the Object Space Model
    Jia Yang, Yipeng Li, Jingqiu Luo, Pinglei Bao
    2023, 31 (suppl.):  178-178. 
    Abstract ( 84 )  
    PURPOSE: Reading systematically engages the lateral occipitotemporal sulcus, at a site known as the visual word form area (VWFA). While the prevailing recycling hypothesis posits that the VWFA emerges through repurposing a pre-existing region for recognizing written words, the original function of this prototypic region and how experience shapes its representation remain largely unexplored. The object space model recently proposed that this area might initially have been responsible for representing word-related features in non-word objects that are proximate to words in object space, subsequently expanding its representational area through word training. In this study, we leveraged fMRI and deep neural networks to test this hypothesis, thereby shedding light on the origins and evolution of the VWFA.
    METHODS & RESULTS: We first tested the ability to discriminate between words in a widely used convolutional network, the pretrained AlexNet trained on the ImageNet dataset. Surprisingly, this network exhibited a significant ability to discriminate between words. Removing images containing words from the training dataset did not significantly affect word discrimination ability. Instead, deep-dream analyses showed that the pretrained network can effectively extract and utilize implicit, useful features from non-word images for word discrimination. Furthermore, by training different networks with different sets of images, we found that images closer to words in the object space contain more word-related features. Taking the study a step further, we used fMRI to measure the brain responses of seven subjects to 20 words and 80 non-word objects. The results revealed that objects close to words in the object space elicited stronger responses in the VWFA, bolstering the evidence favoring the object space model as an effective framework for explaining object representation in the human brain. Lastly, by systematically varying the correlation between word identity and task requirements, we found that task-irrelevant exposure hindered word representation and impeded word discrimination ability. Conversely, as the degree of association increased, both the representational area and word discrimination ability increased, suggesting that deep neural networks develop the object space in accordance with supervised rather than exposure-based learning rules.
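    A minimal sketch of the first step, probing word discrimination with a pretrained AlexNet, might look as follows; the simulated word images, the linear read-out, and the cross-validation settings are assumptions for illustration, not the authors' exact procedure.

        # Minimal sketch: probe whether features from a pretrained AlexNet support word
        # discrimination by training a linear read-out on its penultimate-layer activations.
        # Word images are placeholder tensors here; in practice they would be rendered words.
        import torch
        from torchvision.models import alexnet
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        model = alexnet(weights="IMAGENET1K_V1").eval()
        feature_extractor = torch.nn.Sequential(model.features, model.avgpool, torch.nn.Flatten(),
                                                *list(model.classifier.children())[:-1])

        n_images, n_words = 200, 20
        images = torch.rand(n_images, 3, 224, 224)            # placeholder word images
        labels = torch.arange(n_images) % n_words             # which of 20 words each image shows

        with torch.no_grad():
            feats = feature_extractor(images).numpy()         # 4096-d penultimate features

        readout = LogisticRegression(max_iter=1000)
        acc = cross_val_score(readout, feats, labels.numpy(), cv=5).mean()
        print(f"linear read-out accuracy: {acc:.3f} (chance = {1 / n_words:.3f})")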
    CONCLUSIONS: This study provides compelling evidence that the VWFA may initially have functioned to represent word-related features in non-word objects situated close to words within the object space. Reading training further refines this area's representations to fulfill real-life demands, in line with the principles of supervised learning. Collectively, our research underscores the object space model as a comprehensive and systematic framework for understanding the emergence and evolution of category-specific brain areas. This holds promising implications for future electrophysiological research, guiding exploration into the complex interplay between neural representations and learned categories.
    Related Articles | Metrics
    Emotion Elicitation Promotes the Disclosure of Facial Deception Cues
    Yuxi Zhou, Xunbing Shen
    2023, 31 (suppl.):  179-179. 
    Abstract ( 86 )  
    PURPOSE: It is difficult to simulate real deception in laboratory studies. Some researchers have therefore proposed simulating the emotional arousal of real situations by inducing high emotional arousal in participants before the experiment, so as to increase the ecological validity of laboratory results. Research has shown that cheating activates both happiness and fear. This study explores whether eliciting pleasurable emotion causes cheaters to expose more facial deception cues in a laboratory setting.
    METHODS: Using the Guilty Knowledge Test, facial expressions recorded from the emotion-induced group and the neutral-emotion group during negative responses to the detection stimuli (i.e., the deception responses) were used as the material. We obtained 47,097 frames of deception material in the emotion-induced group and 49,529 frames in the neutral-emotion group (frame rate 50 f/s). The computer-vision software OpenFace was used to analyze the frequency of the participants' facial AU activations, and the frame-by-frame intensities of the deception-related AUs (AU01, AU02, AU04, AU05, AU06, AU07, AU12, AU20, AU26) were fed into the machine learning software WEKA.
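    The group comparison of AU activation frequencies can be sketched as below; the activation threshold, group sizes, and simulated frame series are assumptions for illustration only.

        # Sketch of the group comparison: compute each participant's AU12 activation
        # frequency (proportion of frames with AU12 intensity above a threshold) and
        # compare the emotion-induced and neutral groups with an independent t test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        threshold = 1.0                                  # assumed AU-intensity activation threshold

        def activation_frequency(frame_intensities):
            return np.mean(frame_intensities > threshold)

        # Placeholder per-participant frame series (in practice, OpenFace AU12 intensity outputs).
        emotion_group = [activation_frequency(rng.gamma(2.0, 0.8, size=3000)) for _ in range(8)]
        neutral_group = [activation_frequency(rng.gamma(1.5, 0.8, size=3000)) for _ in range(8)]

        t, p = stats.ttest_ind(emotion_group, neutral_group)
        pooled_sd = np.sqrt((np.var(emotion_group, ddof=1) + np.var(neutral_group, ddof=1)) / 2)
        cohens_d = (np.mean(emotion_group) - np.mean(neutral_group)) / pooled_sd
        print(f"t = {t:.3f}, p = {p:.3f}, d = {cohens_d:.3f}")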
    RESULTS: The AU activation frequency during deception differed significantly between the emotion-induced group and the neutral-emotion group for AU12 (t = 2.470, P = 0.027, effect size = 0.638). Machine learning classification of whether a given sample came from the emotion-induced group, based on the deception-related AU intensities, achieved accuracies above 95% for all three classifiers.
    CONCLUSIONS: Inducing happiness in participants before the experiment leads them to reveal more facial deception cues when they cheat.
    Related Articles | Metrics
    Traffic Fixated Object Detection based on Driver’s Selective Attention Mechanism
    Yi Shi, Shixuan Zhao, Jiang Wu, Hongmei Yan
    2023, 31 (suppl.):  180-180. 
    Abstract ( 71 )  
    PURPOSE: Driving safety is paramount for assisted/autonomous driving. Drawing on drivers' perception of the traffic scene, we combine the driver's selective attention mechanism with computer vision to improve the detection of the vital fixated objects closely related to the driving task, offering potential application and reference value for intelligent driving safety.
    METHODS: Based on the fixations of more than 28 experienced drivers, we first built a new eye-tracking-based fixated object detection dataset (ETFOD). We then proposed a fixated object detection model based on a saliency prior, named FOD-Net, which consists of three parts: an object detection module (ODM), a salient region guided module (SRGM), and a saliency guidance strategy. The ODM is a strong baseline detector responsible for detecting traffic fixated objects at various scales. The SRGM predicts pixel-wise saliency maps in shallow layers, which capture the detailed salient regions attracting drivers' attention. Finally, through the saliency guidance strategy, the salient regions generated by the SRGM serve as saliency priors that guide the ODM to pay more attention to fixated objects within the salient regions, thus enhancing the ODM's feature representation for fixated objects rather than for objects outside the drivers' attention regions. The two tasks of fixated object detection and salient region prediction facilitate each other, improving fixated object detection accuracy.
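    The saliency guidance strategy could be implemented in several ways; one plausible sketch, shown below, re-weights the detector's feature maps with the SRGM's saliency prediction, with all shapes and the residual-style modulation being illustrative assumptions.

        # Sketch of one possible saliency guidance strategy: re-weight the detector's
        # feature map with the saliency map predicted by the SRGM, so that features
        # inside drivers' attention regions are enhanced before the detection heads run.
        import torch
        import torch.nn.functional as F

        def saliency_guided_features(features: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
            """features: (B, C, H, W) detector features; saliency: (B, 1, h, w) in [0, 1]."""
            saliency = F.interpolate(saliency, size=features.shape[-2:], mode="bilinear",
                                     align_corners=False)
            return features * (1.0 + saliency)     # residual-style modulation keeps original features

        feats = torch.randn(2, 256, 40, 40)        # assumed ODM feature map
        sal = torch.rand(2, 1, 160, 160)           # assumed SRGM saliency prediction
        print(saliency_guided_features(feats, sal).shape)   # torch.Size([2, 256, 40, 40])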
    RESULTS: Experimental results on the proposed dataset show that FOD-Net achieves an mAP of 78.4% with a small number of model parameters, outperforming other state-of-the-art models.
    CONCLUSIONS: Combining the driver's attention mechanism with object detection achieves more accurate detection of the fixated objects that pose direct threats to driving safety, showing potential application value for developing highly intelligent assisted/automatic driving systems.
    Related Articles | Metrics
    A Diffusion Model for the Congruency Sequence Effect
    Chunming Luo, Robert W. Proctor
    2023, 31 (suppl.):  181-181. 
    Abstract ( 132 )  
    PURPOSE: The Simon, flanker, or Stroop effect on the current trial is influenced by the preceding trial type, with a larger congruency effect following congruent trials than following incongruent trials, a phenomenon termed the congruency sequence effect (CSE; Gratton et al., 1992). The conflict adaptation account (Gratton et al., 1992) and the feature integration account (Hommel et al., 2004) have been used to explain the CSE. The diffusion model for conflict tasks (DMC; Ulrich et al., 2015) has provided a quantitative account of the mechanisms underlying decisions in conflict tasks, but it has not been applied to the CSE. The present study extends the analysis of reaction time (RT) distributions, reflected in delta plots, to the CSE, and then extends the DMC to simulate the results; we refer to this model as the CSE-DMC. The CSE-DMC retains the DMC assumption that the controlled and automatic processes accumulate differently and independently, and adds two further assumptions: (1) feature integration influences only the controlled processes; (2) following incongruent trials, the automatic processes are reduced, as more attention is paid to the task-relevant attribute (the target) and less to the task-irrelevant attribute (the distractor). These assumptions are inspired by the conflict adaptation and feature integration accounts and by previous findings.
    METHODS: Studies 1 to 3 analyzed and modeled data from a spatial Simon task, an arrow-based Simon task, and a flanker task, respectively. For each study, we used Vincentile analysis to characterize the RT distributions for the CSE. We then fit them with the CSE-DMC to examine whether it fit the data well and better than two variants, the FI-DMC and the CA-DMC, which each include only one of the aforementioned assumptions. We coded the trial sequences as: a congruent trial followed by another congruent trial (cC) or by an incongruent trial (cI), and an incongruent trial followed by a congruent trial (iC) or by an incongruent trial (iI). Conditional accuracy functions (CAFs) and conditional duration functions (CDFs) were then created. A repeated-measures analysis of variance (ANOVA) was performed on accuracy, with bin, preceding congruency, and current congruency as within-subject variables, and another on RT, with percentile, preceding congruency, and current congruency as within-subject variables. The CSE-DMC and the other models were fitted separately to the CAFs and CDFs for each task with four conditions (cC, cI, iC, iI), each with 5 CAF bins and 5 CDF quantiles. Predictions of each model were generated using Monte Carlo simulations with a step size of 1 ms and a constant diffusion coefficient for the superimposed process. The G2 statistic was used to fit each model to the data; 100,000 trials were simulated for each condition and minimization cycle, and the G2 criterion was minimized with the Nelder-Mead SIMPLEX method. Model selection for the CSE was made by computing a BIC statistic that penalizes models according to their number of free parameters.
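    For readers unfamiliar with the DMC family, the sketch below simulates a generic DMC-style condition (constant controlled drift superimposed on a pulse-like automatic activation plus diffusion noise); the parameter values are illustrative, and the CSE-specific modifications of the CSE-DMC would be layered on top of this core.

        # Minimal sketch of simulating one DMC-style condition: a controlled process with
        # constant drift is superimposed on an automatic process whose expected time course
        # is a rescaled gamma density (pulse-like), plus Gaussian diffusion noise.
        # Parameter values are illustrative, not the fitted CSE-DMC parameters.
        import numpy as np

        def simulate_dmc(n_trials=10000, congruent=True, A=20.0, tau=30.0, a=2.0,
                         mu_c=0.5, sigma=4.0, bound=75.0, t_max=1500, dt=1.0, ter=300.0):
            rng = np.random.default_rng(0)
            t = np.arange(dt, t_max + dt, dt)
            # Expected automatic activation E[X_a(t)] and its increments (drift contribution).
            xa = A * np.exp(-t / tau) * (t * np.e / ((a - 1) * tau)) ** (a - 1)
            xa = xa if congruent else -xa
            dxa = np.diff(np.concatenate(([0.0], xa)))
            rts, correct = [], []
            for _ in range(n_trials):
                noise = rng.normal(0.0, sigma * np.sqrt(dt), size=t.size)
                x = np.cumsum(mu_c * dt + dxa + noise)        # superimposed accumulator
                hit = np.flatnonzero(np.abs(x) >= bound)
                if hit.size:
                    rts.append(t[hit[0]] + ter)               # add non-decision time
                    correct.append(x[hit[0]] > 0)             # upper bound = correct response
            return np.array(rts), np.array(correct)

        rt_con, acc_con = simulate_dmc(congruent=True)
        rt_inc, acc_inc = simulate_dmc(congruent=False)
        print(f"congruent:   RT {rt_con.mean():.0f} ms, acc {acc_con.mean():.3f}")
        print(f"incongruent: RT {rt_inc.mean():.0f} ms, acc {acc_inc.mean():.3f}")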
    RESULTS: RT distributions: For accuracy, the Simon effects following congruent or incongruent trials became smaller across the RT distribution as RT increased, whereas the arrow-based Simon and flanker effects showed similar RT distributions across CSE conditions. For RT, with increasing RT, (1) the spatial Simon effect was almost unchanged following congruent trials but initially became smaller and finally reversed following incongruent trials; (2) the arrow-based Simon effects increased following both congruent and incongruent trials, more so for the former than the latter; (3) the flanker congruency effect varied quadratically following congruent trials but increased linearly following incongruent trials. Model fitting: The CSE-DMC fit the data well and provided a better fit than the other models, regardless of task type.
    CONCLUSIONS: The congruency effects following congruent and incongruent trials differ in mean RT and RT distributions, which could be a consequence of the automatic and controlled processes of conflict stimulus activation on the current trial being influenced differently by the prior trial. We provided evidence for this supposition by showing that the hypothesized computational mechanisms underlying these processes can be instantiated within the CSE-DMC.
    Related Articles | Metrics
    Audiovisual Illusion Training Improves Multisensory Temporal Integration
    Haocheng Zhu, Aijun Wang
    2023, 31 (suppl.):  182-182. 
    Abstract ( 101 )  
    PURPOSE: When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible toward unaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the temporal binding window (TBW) can be reduced by perceptual learning. However, to date, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects and have not reached a consensus on its relationship with the perception underlying audiovisual illusions.
    METHODS: The present study used the classic auditory-dominated sound-induced flash illusion (SiFI) paradigm with feedback training to investigate the effect of 5-day SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task.
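    As an illustration of how the temporal measures can be derived, the sketch below fits a cumulative Gaussian to simulated TOJ proportions to estimate the PSS and a 75%-threshold window width; the SOAs, response proportions, and fitting choices are assumptions, not the study's data or exact analysis.

        # Sketch of estimating PSS and temporal-window width from TOJ data: fit a
        # cumulative Gaussian to the proportion of "visual first" responses as a
        # function of audiovisual SOA. SOAs and response proportions are simulated.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def cum_gauss(soa, pss, sigma):
            return norm.cdf(soa, loc=pss, scale=sigma)

        soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)  # ms, audio-leading negative
        # Placeholder proportions of "visual first" responses at each SOA.
        p_visual_first = np.array([0.04, 0.10, 0.25, 0.38, 0.55, 0.70, 0.85, 0.95, 0.98])

        (pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 100.0])
        jnd = sigma * norm.ppf(0.75)          # 75% threshold, one common window estimate
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")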
    RESULTS: The results demonstrated that participants achieved more accurate multisensory integration after the 5-day multisensory illusion training, as shown by the point of subjective simultaneity (PSS) shifting toward 0 ms and a narrowed temporal binding window (TBW). Regarding the temporal perception paradigms, the TOJ task was more sensitive to changes in perceptual sensitivity than the SJ task.
    CONCLUSIONS: The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation. Our study sheds light on the influence of perceptual sensitivity on multisensory temporal perception.
    Related Articles | Metrics
    Effect of Inhibition of Return on Audiovisual Cross-modal Correspondence
    Zu Guangyao, Li Shuqi, Zhang Tianyang, Wang Aijun, Zhang Ming
    2023, 31 (suppl.):  183-183. 
    Abstract ( 104 )  
    PURPOSE: Different dimensions of visual and auditory stimuli can map onto each other and influence human behavioral responses, a phenomenon known as audiovisual cross-modal correspondence. A common audiovisual cross-modal correspondence is between auditory pitch and visual spatial location, with individuals tending to map high-pitched sounds to high spatial locations and low-pitched sounds to low spatial locations. When a visual stimulus is accompanied or preceded by a high-pitched sound, participants respond faster to visual stimuli presented at the high spatial location than at the low spatial location, and vice versa. Researchers hold different views on the level at which audiovisual cross-modal correspondence occurs. Some argue that it occurs at the perceptual level, increasing the perceptual saliency of the stimulus, while others argue that it occurs at a later semantic or decision level. As inhibition of return (IOR) in the attentional system can affect human perception, this study used a cue-target paradigm to explore the interaction between IOR and audiovisual cross-modal correspondence, in order to elucidate the processing level and mechanism of audiovisual cross-modal correspondence. Audiovisual cross-modal correspondence between auditory tones and visual spatial locations was expected to occur at the perceptual level and would therefore be subject to the IOR effect occurring at the same processing level.
    METHODS: The present study consisted of 3 experiments. Experiment 1 had a 2 × 2 within-subjects design; we manipulated the spatial cue validity (valid cue vs. invalid cue) and audiovisual cross-modal correspondence (congruent vs. incongruent). During the experiment, a fixation point was first presented in the middle of the screen for 750 ms. The box above or below the fixation point was then bolded for 50 ms, but this cue was not predictive of the spatial location of the target. After a time interval of 250 ms, a fixation point was presented in bold as a central cue. A central cue is commonly used in spatial IOR research, as it facilitates stable occurrence of IOR. The central cue was presented for 50 ms, and then the auditory stimulus (either high or low pitch) was presented for 50 ms. After a 200-ms interval, the visual target was presented for 100 ms in the box above or below the fixation point. The participants were instructed to perform a detection task for the presence of a visual target. The experimental design and procedure of Experiment 2 were identical to those of Experiment 1, except that the sound presented before the visual target was a single tone that was present or absent. Experiment 3 had a 2 × 2 × 2 within-subjects design. Experiment 3 added a factor to Experiment 1, namely, stimulus onset asynchrony (SOA) between the cue and the target (600 ms vs. 1300 ms).
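    A minimal sketch of the Experiment 1 RT summary is given below; the trial table, column names, and simulated RTs are assumptions illustrating how the cue validity × congruency cell means and the derived effects would be computed.

        # Sketch of the condition-wise RT summary for Experiment 1: compute mean RTs per
        # cue validity x audiovisual congruency cell, then the IOR and correspondence
        # effects. The trial table and its column names are assumed for illustration.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        n = 400
        trials = pd.DataFrame({
            "cue": rng.choice(["valid", "invalid"], size=n),
            "congruency": rng.choice(["congruent", "incongruent"], size=n),
            "rt": rng.normal(340, 40, size=n),          # placeholder detection RTs in ms
        })

        cell_means = trials.groupby(["cue", "congruency"])["rt"].mean().unstack()
        print(cell_means.round(1))

        ior_effect = trials[trials.cue == "valid"].rt.mean() - trials[trials.cue == "invalid"].rt.mean()
        correspondence = cell_means.loc["valid", "incongruent"] - cell_means.loc["valid", "congruent"]
        print(f"IOR effect (valid - invalid): {ior_effect:.1f} ms")
        print(f"correspondence effect at cued location: {correspondence:.1f} ms")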
    RESULTS: In all three experiments, the overall accuracy (ACC) was very high; thus, no further statistical analysis was conducted for the ACC. In terms of reaction time (RT), the results of Experiment 1 showed that both spatial IOR and audiovisual cross-modal correspondence occurred. Importantly, there was an interaction between spatial cue validity and audiovisual cross-modal correspondence. Specifically, when the cue was valid, audiovisual cross-modal correspondence occurred (322 ms vs. 327 ms); and when the cue was invalid, there was no audiovisual cross-modal correspondence. The results of Experiment 2 showed that the interaction between cue validity and sound presentation was not significant, and there was no evidence that IOR influenced the sound-induced facilitation effect. The results of Experiment 3 showed that the interaction among spatial cue validity, cross-modal correspondence congruency, and SOA was significant. Specifically, at an SOA of 600 ms, the interaction between spatial cue validity and cross-modal correspondence congruency was significant. When the cue was valid, audiovisual cross-modal correspondence occurred (350 ms vs. 361 ms); and when the cue was invalid, there was no audiovisual cross-modal correspondence. At an SOA of 1300 ms, the interaction between cue validity and cross-modal correspondence congruency was not significant, and cross-modal correspondence occurred in both valid-cue and invalid-cue conditions.
    CONCLUSIONS: The present results suggest that the IOR effect, occurring at the perceptual level, moderated audiovisual cross-modal correspondence. When the IOR effect occurred, audiovisual cross-modal correspondence occurred at the cued location but not at the non-cued location. The alerting effect induced by the sound did not interact with IOR. As the IOR effect weakened, the audiovisual cross-modal correspondence at the cued location decreased, and the moderating effect of IOR on audiovisual cross-modal correspondence weakened.
    Related Articles | Metrics