Norman, Price, and Duff (2006) found that scores on the openness/feeling scale of the NEO-PI-R could predict performance in deterministic implicit sequence learning. In another study, no such correlation was identified in complex probabilistic implicit sequence learning (Norman, Price, Duff, & Mentzoni, 2007). However, Kaufman et al. (2010) found that performance in probabilistic implicit sequence learning was significantly related to the openness scale (which includes the four facets of aesthetics, imagination, feeling, and plot) of the NEO-PI-R. The researchers concluded that implicit learning might be related to openness to experience rather than to openness/feeling. In the present study, which adopted the experimental design of Norman et al. (2007) but with fewer blocks (11) and more response–stimulus intervals (5 RSIs), it was found that scores on the openness/feeling scale could predict performance in probabilistic implicit sequence learning. Specifically, individuals with high versus low openness/feeling tended to show significant differences in the gradual process of implicit acquisition. In Experiment 1, a complex probabilistic implicit sequence learning procedure (Norman et al., 2007) was used. There were two sequences in the training stage: a probable sequence (SOC1 or SOC2) presented with a probability of 0.88, and an improbable sequence (SOC2 or SOC1) presented with a probability of 0.12. For the experimental group, a 2 (high vs. low openness/feeling group) × 2 (probable vs. improbable sequence) × 11 (block) mixed design was used. The procedure for the experimental group consisted of a training phase, a recognition phase, and a generation test phase (the generation test included both a contain task and a rotation task). Each openness/feeling group consisted of 38 subjects. For the control group, a 2 (high vs. low openness/feeling group) × 11 (block) mixed design was used. The control group studied a random sequence and received only the training phase.
In Experiment 2, a 2 (high vs. low openness/feeling group) × 4 (RSI: 0 ms, 250 ms, 750 ms, 1000 ms) × 2 (probable vs. improbable sequence) × 11 (block) mixed design was used. Each openness/feeling group comprised approximately 20 subjects, and the experimental procedure was the same as in Experiment 1. Because no significant reaction time decrement was found in the control group in Experiment 1, reaction time decrement was used as the indicator of implicit learning for the experimental group. It was found that, when the RSI was 500 ms, both the high and low openness/feeling groups could learn the probable sequence, but only the high openness/feeling group eventually acquired the improbable sequence. In Experiment 2, when the RSI was shorter than 500 ms, the high openness/feeling group failed to acquire either the probable or the improbable sequence before transfer block 9, but acquired both after transfer block 9. When the RSI was 500 ms or longer, the high openness/feeling group acquired the probable sequence before transfer block 9 and the improbable sequence after transfer block 9. In contrast, the low openness/feeling group could acquire the probable sequence before transfer block 9 at all RSIs, but failed to learn the improbable sequence regardless of the RSI setting. Unlike in previous research, the significant block reaction time differences (i.e., between transfer block 9 and block 8, and between transfer block 9 and block 10) used as the measure of implicit learning in earlier studies were not detected for either openness/feeling group at any RSI, except for the low openness/feeling group at RSI = 0 ms. Surprisingly, at all RSIs, participants' scores on the recognition, contain, and rotation tasks were at or below chance level. The low openness/feeling group performed significantly better than the high openness/feeling group on the recognition task when the RSI was 750 ms and on the rotation task when the RSI was 500 ms.
Results from the two experiments showed that scores on the openness/feeling scale of the NEO-PI-R can predict individual differences in probabilistic implicit sequence learning. Essential differences between the high and low openness/feeling groups exist in the implicit acquisition of the probable and improbable sequences as the RSI increases. The high openness/feeling group could learn both the probable and improbable sequences by using collateral elaboration, whereas the low openness/feeling group could learn only the probable sequence. For studying implicit learning and individual differences in probabilistic implicit sequence learning, improbable sequence learning can serve as a sensitive measure, whereas other measures such as the transfer block, recognition task, and generation task are limited in predictive power because of interference from improbable sequence learning.
The attentional set-shifting task (AST) is a newly developed rodent model that can be used to specifically evaluate cortically mediated cognitive flexibility. The AST has been increasingly used to investigate the neural basis underlying cognitive flexibility and related disorders. In the present study, we investigated the effects of strain and testing protocol on cognitive function by comparing performance during the different cognitive stages of the AST between Sprague-Dawley (SD) and Wistar rats, and between a seven-stage and a five-stage AST. Our data showed differences in cortically mediated cognitive function between SD and Wistar rats in both the seven-stage and the five-stage AST. In general, Wistar rats exhibited better performance than SD rats in each stage of the AST. Especially in the reversal learning (RL) stage, Wistar rats required fewer trials to reach criterion and showed lower error rates than SD rats, suggesting better cognitive flexibility in strategy shifting. In contrast, the reactive pattern across the different cognitive stages (simple discrimination, SD; compound discrimination, CD; intra-dimensional shifting, IDS; reversal learning, RL; extra-dimensional shifting, EDS) of the AST did not differ significantly by strain or testing protocol. Theoretically, there is a general response pattern across these cognitive stages: more trials to reach criterion and/or higher error rates are generally seen during the more complex learning stages (i.e., RL and EDS) than in the simpler learning stages (i.e., SD and CD), which is a prerequisite for interpreting performance in RL and EDS in terms of strategy and attentional set shifting.
Consistent with this, we found that both SD and Wistar rats required more trials to reach criterion and showed higher error rates during the RL and/or EDS stages than during the other stages in both the five-stage and seven-stage AST, demonstrating a stable reactive pattern of set establishment and set shifting in the AST. These results suggest that there are strain differences in cortically mediated cognitive flexibility in rats, and that the constructive relationship across the different cognitive components of the AST is stable across rat strain and testing protocol. These findings extend existing knowledge of the AST model and provide a behavioral basis for selecting experimental animals and testing protocols for the AST in future studies.
The songbird is an ideal animal model for the study of human language. There are functional similarities between the telencephalic nucleus HVC (high vocal center) in songbirds and Broca's area in the human brain. Song arises from the integration of activity in two neural pathways emanating from HVC, which plays an important role in song production and sensorimotor learning. Human brain areas known to be important for speech and language are usually much larger on the dominant side. Lesioning HVC is likely to uncover any functional asymmetries in the central song system. To determine the lateral asymmetry of HVC control over song production, electrolytic lesions of HVC and acoustic analysis techniques were used. In our experiments, all birds received a unilateral HVC lesion prior to a bilateral HVC lesion, and vocalizations were compared and analyzed before and after each lesion. Fifteen adult males (left lesion, n = 8; right lesion, n = 7) received lesions targeting HVC. In adult male zebra finches, syllables are characterized by fast frequency modulation, and song motifs consist of several sequentially arranged syllables. Songs and long calls were analyzed spectrographically using Sound Analysis Pro (SAP); we extracted acoustic parameters including duration, amplitude, fundamental frequency, mean frequency, peak frequency, frequency modulation, amplitude modulation, and similarity score. Nissl-counterstained sections of all brains were carefully examined to assess lesion damage. The results showed that lesion of the left HVC had no significant influence on the frequency and intensity features of song and long calls. Lesion of the right HVC resulted in significant decreases (p < 0.05) in amplitude, frequency modulation, and amplitude modulation of the long call, and significant reductions (p < 0.05) in amplitude, mean frequency, and peak frequency of song.
The change in temporal features after bilateral HVC lesions suggested that coding of temporal features requires integration of the song system across both hemispheres. HVC shows right-side dominance in the control of frequency and intensity features, whereas control of temporal features requires both sides of HVC.
Problems are everywhere in daily life. The ability to solve problems has helped human beings survive natural selection, characterizing us as an intellectual species. It is also closely related to various life outcomes. A number of cross-sectional studies have demonstrated that intellectually gifted children perform better than their average peers on problem-solving tasks. However, most of these studies focused primarily on the cognitive dimension of problem solving and do not suffice to draw a complete picture of human problem-solving ability. Moreover, few studies have explored the group differences from a developmental perspective, such as developmental patterns or critical developmental stages, which could contribute greatly to both theory and educational practice. The present study investigated the developmental differences in problem-solving ability between intellectually gifted and intellectually average children along cognitive, metacognitive, and efficiency dimensions. Both cross-sectional and longitudinal data were collected. The cross-sectional study included 131 intellectually gifted and 163 intellectually average children aged 11 to 14, and the longitudinal study included 32 intellectually gifted and 38 intellectually average children aged 11 to 13. A redesigned Sokoban game was used to measure the three dimensions of problem-solving ability simultaneously. The number of successful solutions was adopted as the indicator of cognitive ability, the ratio of planning time to total time as the indicator of metacognitive ability, and total moves as the indicator of cognitive efficiency. Results showed that the intellectually gifted were significantly superior to their intellectually average peers on all three dimensions. Moreover, both cross-sectional and longitudinal data showed an obvious developmental cascade across the three dimensions. However, the developmental patterns differed between the two groups.
In the intellectually gifted group, problem-solving ability at ages 13.73 and 12.46 was significantly higher than at age 11.12, but no significant difference was found between ages 13.73 and 12.46. In the intellectually average group, however, problem-solving ability at age 13.73 was significantly higher than at ages 11.12 and 12.46, but no significant difference was found between the latter two. Further, both cross-sectional and longitudinal data revealed remarkably higher scores for the intellectually gifted in earlier years but a smaller group difference at age 13.5. The major finding of the present study was that the problem-solving ability of intellectually gifted and intellectually average children followed different developmental patterns. The development of the intellectually gifted accelerated between ages 11 and 12.5 and slowed down between ages 12.5 and 14. In contrast, intellectually average children developed slowly between ages 11 and 12.5 and accelerated between ages 12.5 and 14. Group differences in problem-solving ability diminished gradually as the children grew older. The different developmental patterns may be attributed to the synaptic pruning and myelination of neurons. This finding has important implications for educational practice. To better cultivate intellectually gifted children, educational professionals should make full use of their advantages in earlier years and provide an enriched educational environment to develop their non-academic abilities, such as sociality and self-regulation skills.
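As a minimal sketch, the three indicators described above could be computed from game logs as follows; the `Trial` record layout is our own assumption for illustration, not the study's actual data format:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One Sokoban puzzle attempt (hypothetical log format)."""
    solved: bool
    planning_time: float  # seconds before the first move
    total_time: float     # seconds from puzzle onset to finish
    moves: int            # number of moves made

def problem_solving_indicators(trials):
    """Compute the three indicators used in the study:
    cognitive ability    = number of successful solutions
    metacognitive ability= mean ratio of planning time to total time
    cognitive efficiency = total moves (fewer moves = higher efficiency)
    """
    solutions = sum(t.solved for t in trials)
    planning_ratio = sum(t.planning_time / t.total_time for t in trials) / len(trials)
    total_moves = sum(t.moves for t in trials)
    return solutions, planning_ratio, total_moves
```

Such per-child scores would then feed the group-by-age comparisons reported above.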
As one of the leading research topics in contemporary cognitive science, strategy use depends on many factors, such as the situation, problem characteristics, and individual differences including math anxiety (Siegler, 2007; Imbo & Vandierendonck, 2007). Various theories of math anxiety have been proposed; however, the role of math anxiety in strategy utilization and its underlying mechanism are still far from clear (Wang & Liu, 2007), and the neural mechanism by which math anxiety affects arithmetic strategy utilization needs further exploration. Event-related potentials (ERPs) are commonly used to explore brain mechanisms in arithmetic performance, and the choice/no-choice method is the standard method for obtaining unbiased data about strategy utilization. In this study, we combined ERPs and the choice/no-choice method to investigate the influence of math anxiety on individual strategy utilization during arithmetic processing. The Revised Mathematics Anxiety Rating Scale (R-MARS) and the Trait Anxiety Inventory were administered to 154 students, from whom 34 participants were selected and divided into two groups (17 high math anxiety and 17 low math anxiety). Participants completed two-digit addition tasks (exact mental arithmetic and computational estimation) under the choice/no-choice method while ERPs were recorded to measure neurophysiological activity. Three subjects were excluded by artifact rejection due to severe contamination, leaving 15 high- and 16 low-math-anxiety participants.
The experimental design was as follows: 2 (task: computational estimation, mental arithmetic) × 2 (group: high math anxiety, low math anxiety) × 3 (condition: free choice; no-choice 1, the decomposition strategy for mental arithmetic or the rounding-up strategy for computational estimation; no-choice 2, the decomposition strategy for mental arithmetic or the rounding-down strategy for computational estimation). Math anxiety was a between-subjects variable; task type and strategy condition were within-subjects variables. Behavioral results showed that reaction times and accuracy did not differ significantly between the math anxiety groups in either strategy execution (no-choice conditions xsc2, xsc3, gsc2, gsc3) or strategy selection (free-choice conditions xsc1, gsc1). ERP results showed that: (1) the main effect of math anxiety was significant for the N100 in strategy selection and in computational estimation strategy execution, with larger N100 amplitudes for the high than for the low math anxiety group, and the main effect was also significant for N100 latency in mental arithmetic strategy selection; (2) the before/after-dimension effect was significant for P200 latency in low math anxiety individuals during mental arithmetic strategy execution; (3) the main effect of math anxiety was significant for the N400 at F3, F4, and PO3 in computational estimation strategy selection and at F3 in mental arithmetic strategy execution; specifically, N400 amplitudes were larger for high than for low math anxiety at F3, F4, and PO3, and N400 latencies were shorter for high than for low math anxiety in computational estimation strategy selection, while in strategy execution the N400 amplitude for high math anxiety was larger than for low math anxiety at F3. Our results show that math anxiety affects arithmetic strategy use, consistent with cognitive-resource accounts such as processing efficiency theory, attentional control theory, and inhibition theory.
When arithmetic cognitive activity is impaired by anxiety, highly anxious individuals increase their effort and recruit auxiliary processing resources (at the physiological level) to complete the task, compensating for the working memory resources occupied by anxiety that underlie impaired cognitive performance (at the behavioral level). Emotional factors therefore cannot be ignored.
Intergroup interaction is a primary type of social interaction and plays an important role in human social development. Previous behavioral research based on economic game tasks has demonstrated that the perception of a partner's group membership can modulate individuals' mental processes and behavioral decision making when participants play against an ingroup or outgroup member. However, it is still unclear how group membership influences the time course of a recipient's fairness considerations in an asset allocation task. To address this problem, we used the minimal group paradigm to manipulate the ingroup–outgroup distinction between subjects and their interactive partners, and combined the ultimatum game with event-related potentials (ERPs) to explore how group membership affects fairness processing and the time course of the evaluation of allocation proposals. Brain potentials were recorded while 15 healthy adult subjects participated as recipients in the ultimatum game with alleged members of an experimentally induced ingroup and outgroup; subjects received extremely unfair, moderately unfair, or fair offers from proposers. The behavioral data and the ERP amplitudes (AN1 and MFN) associated with the three offer types in both interaction contexts were analyzed. The behavioral data showed that participants accepted more offers from ingroup partners than from outgroup partners: the acceptance rates for extremely and moderately unfair offers were higher when interacting with an ingroup partner than with an outgroup partner, whereas acceptance rates for fair offers did not differ regardless of whether an ingroup or outgroup partner made the offer. The ERP results indicated that the AN1 and MFN were influenced not only by the fairness of the offers but also by group membership.
The AN1 was more negative for fair and moderately unfair offers than for extremely unfair offers when playing against an outgroup member, whereas it did not show differential responses to different offers from ingroup partners. The MFN and the MFN effect (dMFN) were more negative for extremely unfair offers than for fair offers in the ingroup interaction, whereas no differential responses to different offers appeared in the outgroup interaction. These results indicate that group membership influences the early stage of outcome evaluation in the asset allocation game. In intergroup interaction, both group membership and offer fairness influence early attention detection and resource allocation, which induced the larger AN1 for fair and moderately unfair offers from outgroup partners. Moreover, the perception of belonging to a social group increased fairness expectations toward ingroup partners and decreased them toward outgroup partners, which may induce the larger MFN difference for unfair offers. The present study is the first to demonstrate that group membership and offer fairness can modulate the processes of attention allocation and fairness consideration.
Thaler and Johnson (1990) proposed the "house money effect" (risk taking increases after a win) and the "break even effect" (risk taking increases after a loss when there is a chance to break even) to describe people's behavior in dynamic decision making. These phenomena clearly exceed the explanatory scope of prospect theory. However, in the study by Thaler and Johnson (1990), the decision frame was manipulated in a two-stage gamble-choice paradigm, so the effects of prior outcomes on risk preference in dynamic, repeated decision-making scenarios remain an open question. In the present study, based on the paradigm of Demaree (2012), we sought to reexamine the theoretical expectations of the "house money effect" and the "break even effect" and to analyze the decision pattern in a roulette-game scenario quantitatively. To create a dynamic, repeated decision scenario with good ecological validity, a simulated roulette game was developed. Participants chose between options that each stood for a range of numbers on the roulette wheel (e.g., from 18 to 36) and decided the amount of the wager to place. After the wheel was spun, a specific number was drawn. If the number fell within the range of the chosen option, the corresponding amount of "money" (the token in the game) was added to the participant's bankroll; otherwise, the wager was lost. As long as the bankroll had not reached zero, the participant was free to decide whether to continue the game or stop and cash out the bankroll. The bankroll, wager, and chosen option in every trial were logged for analyses of prior outcome and risk preference. The results showed that participants' risk preference increased with the absolute value of the preceding outcome, regardless of whether it was a gain or a loss.
Moreover, after a preceding win, the wager placed in the next trial was smaller than the profit of the previous trial, consistent with the "house money effect". After a preceding loss, the potential profit in the next trial was larger than the loss of the previous trial, consistent with the "break even effect". The present study suggests that when the decision task is repetitive and flexible, the decision maker's strategy is to avoid loss given the combination of the previous profit (or loss) and the potential loss (or profit). The decision process in the presence of a prior gain resembles a kind of "squandering", with "no loss" as the bottom line, whereas in the presence of a prior loss, the decision can be seen as a kind of "fighting" that also aims at "no loss". These results may help predict how people behave in risky decisions. The main task for subsequent studies would be to probe more ecologically valid and representative scenarios, such as decision making in the stock market.
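The trial mechanics of the simulated roulette game described above can be sketched as follows; the payout rule (fair odds for the covered range) and the 1–36 number range are our assumptions for illustration, since the abstract does not report them:

```python
import random

def play_trial(bankroll, option_low, option_high, wager, rng=random):
    """One spin of a simulated roulette trial (hypothetical reimplementation).

    The chosen option covers the numbers option_low..option_high out of 1..36.
    A win pays the wager times the assumed fair odds for that coverage; a loss
    forfeits the wager.  Bankroll, wager, and option would be logged per trial
    for the prior-outcome analyses described in the abstract.
    """
    covered = option_high - option_low + 1
    number = rng.randint(1, 36)          # the drawn number
    win = option_low <= number <= option_high
    if win:
        payout = wager * (36 - covered) / covered  # assumed fair-odds payout
        bankroll += payout
    else:
        bankroll -= wager
    return bankroll, win
```

A narrower option (fewer covered numbers) is riskier but pays more on a win, which is what lets risk preference be read off the chosen option and wager.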
The embodied theory of conceptual representation holds that concepts are grounded in sensorimotor neural systems and that conceptual processing involves simulating and reinstating bodily experience. Power, as an abstract concept, also has a physical basis, and metaphor serves as a bridge between the abstract concept of power and bodily perceptual experience. People use the spatial concepts "up" and "down" to describe power in daily life. Previous studies have verified the metaphorical link between vertical space and power mostly from the perspective of linguistics and large corpora; however, little empirical research has addressed it from the perspective of cognitive psychology. This study extended related domestic and foreign studies by showing that the perception of power words can affect the processing of subsequent spatial stimuli at multiple stages. Based on previous studies, we hypothesized that if the perception of power words automatically induces shifts of attention, early ERP components (P1 and/or N1) would be enhanced when the target letter was presented at a location congruent with that implied by the power word; conversely, we expected larger P3 amplitudes for incongruent than for congruent trials at later cognitive processing stages. The present study used event-related potentials (ERPs) to explore whether thinking about the abstract concept of power automatically activates the spatial up–down image schema (powerful–up; powerless–down) and consequently directs spatial attention to the schema-congruent location. Sixteen participants took part in the study. One participant, probably owing to a misunderstanding of the assigned keys, was excluded from analysis. In the experiment, each trial started with a fixation (a "+" sign) of 500 ms followed by a centrally presented word referring to a powerful or powerless person.
The word remained on the screen until the participant decided whether it referred to a powerful or powerless person. Participants were instructed to respond as quickly and accurately as possible by pressing one of two keys labeled "D" or "F" on the keyboard as soon as the word disappeared from the screen. After a delay of 450 ms, participants were asked to identify a target letter ("m" or "n") at the top or bottom of the screen by pressing one of two keys labeled "J" or "K". The letters "m" and "n" and the top vs. bottom locations were equally distributed across trials; thus, each word was followed once by a letter at the top position and once by a letter at the bottom position. The response mappings for the "D"/"F" and "J"/"K" keys were counterbalanced across participants. Electrophysiological and behavioral data were recorded. The reaction times and error scores for the target task were submitted to a 2 (power: powerful vs. powerless) × 2 (position: top vs. bottom) repeated-measures ANOVA. Reaction times showed no main effects of power or position and no interaction (all ps > 0.05); the analysis of error scores likewise revealed no main effects or interaction (all ps > 0.05). The ERP analysis showed larger N1 amplitudes for congruent trials and larger P3 amplitudes for incongruent trials. In conclusion, the present findings provide further electrophysiological evidence that the perception of power words can automatically activate the spatial representations they imply and orient spatial attention toward subsequent spatial stimuli.
Moreover, our results suggest that not only early sensory processes but also later cognitive processes are modulated by power words with spatial associations, as reflected in larger N1 amplitudes for validly cued targets and larger P3 amplitudes for invalidly cued targets that violate spatial expectancies. The spatial mapping of power evaluations is of great significance for research revealing the neural mechanisms of power consciousness and hierarchical thinking.
Owing to their distinctive foci on novelty and usefulness, radical creativity and incremental creativity may have different psychological antecedents. Drawing on cognitive evaluation theory and learned industriousness theory, we conducted an empirical study of the relationships between pay for performance (PFP) and employees' intrinsic motivation and radical creativity, as well as extrinsic motivation and incremental creativity. We also examined whether these relationships were moderated by transformational and transactional leadership. Data were collected from 364 dyads of employees and their immediate supervisors in 24 enterprises. The employee questionnaire covered PFP, intrinsic motivation, extrinsic motivation, transformational leadership, transactional leadership, willingness to take risks, organizational identification, and job complexity; employee creativity was rated by the immediate supervisors. The theoretical hypotheses were tested by hierarchical regression analysis. Results from the matched sample showed that the relationships between PFP and both intrinsic motivation and radical creativity were nonsignificant, while the relationships between PFP and both extrinsic motivation and incremental creativity were positive. Where transformational leadership was high, PFP was positively related to intrinsic motivation and radical creativity, whereas where transformational leadership was low, those relationships were negative. Transactional leadership augmented PFP's direct positive effect on extrinsic motivation and its indirect positive effect on incremental creativity. Extending previous studies, this research demonstrates that PFP has distinct influences on radical and incremental creativity, clarifying the relationship between extrinsic rewards and employee creativity in the workplace from a new perspective.
Second, by examining the mediating effects of intrinsic and extrinsic motivation, the results contribute to our understanding of the mechanisms through which PFP influences radical and incremental creativity. Finally, by investigating the moderating effects of transformational and transactional leadership, we confirmed that there are distinct boundary conditions for the effect of PFP on employees' radical and incremental creativity. These findings broaden our understanding of the processes by which, and the conditions under which, PFP may promote or inhibit employees' radical and incremental creativity. Furthermore, the results also revealed that cognitive evaluation theory is more suitable for explaining the relationship between PFP and radical creativity, whereas learned industriousness theory predicts the PFP–incremental creativity relationship more precisely.
Missing observations are common in operational performance assessments and in psychological surveys and experiments. Because these assessments are time-consuming to administer and score, examinees seldom respond to all test items, and raters seldom evaluate all examinee responses. As a result, a frequent problem for those applying generalizability theory to large-scale performance assessments is working with missing data: data from such examinations form a matrix with missing entries. Researchers usually focus on making good use of the observed data and often ignore the missing data. A common practice is to delete incomplete records or to impute the missing values; however, this may cause problems in the following respects. First, deleting or imputing missing data may undermine the validity of the statistical analysis. Second, it is difficult for researchers to choose an unbiased method among the diverse imputation rules. Missing data can thus cause a series of problems when estimating variance components of unbalanced data in generalizability theory, and a key issue is how to use the incomplete data to their maximum statistical capacity. This article presents four methods for estimating variance components from missing data in the unbalanced random p × i × r design of generalizability theory: the formulas method, restricted maximum likelihood (REML) estimation, the subdividing method, and the Markov chain Monte Carlo (MCMC) method. Building on the estimating formulas for the p × i design given by Brennan (2001), the formulas method derives variance-component estimation formulas for the p × i × r design with missing data. The aim of this article is to investigate which method estimates variance components from missing data most rapidly and effectively. MATLAB 7.0 was used to simulate data, and generalizability theory was used to estimate the variance components.
Three factors were varied: (1) person sample size: small (200 students), medium (1000 students), and large (5000 students); (2) number of items: 2, 4, and 6; (3) number of raters: 5, 10, and 20. The authors also developed programs for the MATLAB, WinBUGS, SAS, and urGENOVA software packages to estimate variance components of p × i × r missing data with the four methods. Criteria were established for comparing the four methods; for example, bias was the criterion for evaluating variance-component estimates, and the reliability of the results increased as the absolute bias decreased. Results indicate that: (1) the MCMC method has a strong advantage over the other three methods for estimating variance components of p × i × r missing data. It is superior to the formulas method because its variance-component estimates show smaller deviations; it is better than the REML method because the MCMC iterations converge while those of REML do not; and unlike the subdividing method, it does not require variance components to be combined to obtain accurate estimates. (2) Items and raters are two important factors influencing the estimation of variance components from missing data. If manpower and material resources are limited, priority should be given to increasing the number of items to improve estimation accuracy. If the number of items cannot be increased, the next-best option is to increase the number of raters, although the number of raters should be controlled cautiously.
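As an illustration of the underlying estimation problem (not the authors' programs), the sketch below simulates a complete, balanced p × i design and recovers the variance components from the ANOVA mean squares, following the logic of Brennan's (2001) estimating formulas; the p × i × r methods compared in the article extend this approach to a rater facet and missing entries:

```python
import numpy as np

rng = np.random.default_rng(42)
n_p, n_i = 2000, 20
var_p, var_i, var_res = 0.5, 0.2, 0.3   # true components (arbitrary values)

# Score = person effect + item effect + residual (grand mean set to 0)
X = (rng.normal(0, np.sqrt(var_p), (n_p, 1)) +
     rng.normal(0, np.sqrt(var_i), (1, n_i)) +
     rng.normal(0, np.sqrt(var_res), (n_p, n_i)))

grand = X.mean()
row = X.mean(axis=1, keepdims=True)   # person means
col = X.mean(axis=0, keepdims=True)   # item means

ms_p  = n_i * ((row - grand) ** 2).sum() / (n_p - 1)
ms_i  = n_p * ((col - grand) ** 2).sum() / (n_i - 1)
ms_pi = ((X - row - col + grand) ** 2).sum() / ((n_p - 1) * (n_i - 1))

# Solve the expected-mean-square equations:
est_res = ms_pi                  # E(MS_pi) = var_res
est_p   = (ms_p - ms_pi) / n_i   # E(MS_p)  = var_res + n_i * var_p
est_i   = (ms_i - ms_pi) / n_p   # E(MS_i)  = var_res + n_p * var_i
```

With missing cells the mean squares above are no longer available directly, which is exactly why the formulas, REML, subdividing, and MCMC methods are needed. Note also that `est_i` is much noisier than `est_p` here (only 20 items vs. 2000 persons), mirroring the article's finding that adding items improves estimation accuracy most.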
It is well known that items in a computerized adaptive testing (CAT) item bank are expected to be used equally. For one thing, the considerable manpower and financial resources spent on constructing the item bank are wasted if a large proportion of items are seldom, or even never, exposed. For another, the work of ensuring test security and maintaining the item bank becomes burdensome for test practitioners if item exposure is extremely skewed. In addition to controlling item exposure, the tests assembled for different examinees are usually required to satisfy many constraints, such as (a) a proper proportion of each content domain; (b) no “enemy items” appearing in the same test; and (c) an appropriate balance of item keys. If some constraints are violated, unexpected reactions may occur during the test and result in inaccurate trait estimates. Both item exposure control and content constraints are therefore important non-statistical constraints: they greatly influence test validity, measurement accuracy, and comparability among examinees, and they need to be incorporated into the design of item selection for CAT in practical settings. When cognitive diagnostic theory is used in CAT, examinees can receive more detailed diagnostic information about their mastery of each attribute. Cognitive diagnostic CAT (CD-CAT) is therefore a promising research area and has gained much attention because it integrates the cognitive diagnostic method with adaptive testing. The present study compared the performance of five item selection methods in CD-CAT under item exposure control and content constraints. 
The item selection methods applied are (a) the Monte Carlo approach incorporated into the item eligibility approach (MC-IE); (b) the maximum priority index method incorporated into the Monte Carlo approach (MC-MPI); (c) the restrictive threshold method incorporated into the Monte Carlo approach (MC-RT); (d) the restrictive progressive method incorporated into the Monte Carlo approach (MC-RPG); and (e) the maximum posterior probability of knowledge states method incorporated into the Monte Carlo approach (MC-PP). The reparameterized unified model was used in the simulation experiments to generate item responses for five item banks constructed according to linear, convergent, divergent, unstructured, and independent attribute structures, respectively. Results indicate that (a) the item-exposure distributions produced by the same item selection method in different item banks are similar; (b) the measurement precision of each item selection method decreases gradually across the linear, convergent, divergent, unstructured, and independent attribute structures; and (c) ranked by measurement accuracy in each test condition, the methods are MC-PP, MC-IE, MC-MPI, MC-RT, and MC-RPG, while their performance in item exposure control is ranked in the opposite order. Judged by the uniformity of item exposure together with test accuracy, the MC-RPG method yields the best balance between item exposure control and accuracy while satisfying the content constraints, followed by the MC-MPI method.
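The posterior-probability machinery that selection rules such as MC-PP build on can be sketched briefly. The following is a minimal, hypothetical illustration (not the paper's code) of a DINA-model response probability and the Bayes update of the posterior over knowledge states after one item; the slip/guess parameters and Q-matrix row are made-up values.

```python
# Illustrative sketch: DINA-model posterior update over knowledge states,
# the building block of item selection rules such as MC-PP ("maximum
# posterior probability of knowledge states"). All parameter values used
# here (slip s, guess g, Q-matrix row) are hypothetical.

def dina_p_correct(alpha, q_row, s, g):
    """P(correct) under DINA: (1-s) if all required attributes mastered, else g."""
    eta = all(a == 1 for a, q in zip(alpha, q_row) if q == 1)
    return (1 - s) if eta else g

def update_posterior(posterior, q_row, s, g, response):
    """Bayes update of the posterior over knowledge states after one item."""
    new = {}
    for alpha, prob in posterior.items():
        p = dina_p_correct(alpha, q_row, s, g)
        likelihood = p if response == 1 else 1 - p
        new[alpha] = prob * likelihood
    total = sum(new.values())
    return {a: v / total for a, v in new.items()}
```

For instance, with one attribute, a uniform prior over states (0,) and (1,), s = 0.1, g = 0.2, and a correct response to an item requiring the attribute, the posterior mass on the mastery state rises from 0.5 to about 0.82, and an MC-PP-style rule would favor items that sharpen this posterior further.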
DIF detection is an important issue when cognitive diagnostic tests are used in practice. The MH method, the SIBTEST method, and the Wald test have been introduced into DIF detection for cognitive diagnostic tests, but all of them have limitations. Because logistic regression (LR) is not based on any particular model, performs well in detecting DIF in IRT-based tests, and can distinguish uniform from non-uniform DIF, it can be expected to make up for some of the flaws of the methods currently used with cognitive diagnostic tests. The performance of LR was therefore compared with that of the MH method and the Wald test, with matching criterion, DIF type, DIF size, and sample size also taken into account. In this simulation study, data were generated using the HO-DINA model. When detecting DIF with the MH method and LR, three kinds of matching criteria were used: the sum score, computed by counting each examinee's correct answers; θ, estimated with the 2PL model; and the knowledge state (KS), estimated with three different cognitive diagnostic methods (the DINA model, the RSM method, and the AHM method). The Wald test can be applied directly to the DINA model. The four kinds of DIF were: s increases; g increases; s and g increase simultaneously; and s increases while g decreases. The two levels of DIF size were 0.05 and 0.1, and the two levels of sample size were 500 and 1000 examinees per group. The results were as follows: (1) LR performed very well in cognitive diagnostic test DIF detection, with high power and a low Type I error rate. (2) LR is not constrained by the cognitive diagnostic model, so it can use a KS estimated by any cognitive diagnostic method. (3) LR can distinguish uniform from non-uniform DIF, with fairly good power and Type I error rates. (4) Using the KS as the matching criterion provides ideal power and Type I error rates in cognitive diagnostic test DIF detection. (5) As DIF size and sample size increased, power grew significantly while the Type I error rate did not change. 
In sum, LR performs satisfactorily in cognitive diagnostic test DIF detection, with high power and a low, stable Type I error rate, and the KS appears to be the ideal matching criterion. In the long run, the unique characteristics of DIF in cognitive diagnostic tests should be explored, and targeted DIF detection methods should be developed.
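The LR DIF framework referred to above can be sketched in a few lines. Assuming the standard Swaminathan-and-Rogers-style model, the item response probability is logit P(X=1) = b0 + b1·match + b2·group + b3·match·group, where "match" is the matching criterion (sum score, θ, or KS) and "group" is the focal/reference indicator; a significant group effect (b2) indicates uniform DIF and a significant interaction (b3) indicates non-uniform DIF. The coefficient values and significance flags below are hypothetical placeholders, not fitted results.

```python
import math

# Illustrative sketch of the logistic-regression DIF model:
#   logit P(X=1) = b0 + b1*match + b2*group + b3*match*group
# Significant b2 -> uniform DIF; significant b3 -> non-uniform DIF.
# Coefficient values here are hypothetical, not estimates from the study.

def p_correct(match, group, b0, b1, b2, b3):
    """Model-implied probability of a correct response."""
    z = b0 + b1 * match + b2 * group + b3 * match * group
    return 1 / (1 + math.exp(-z))

def classify_dif(b2_significant, b3_significant):
    """Decision rule applied after significance testing of b2 and b3."""
    if b3_significant:
        return "non-uniform DIF"
    if b2_significant:
        return "uniform DIF"
    return "no DIF"
```

Because "match" is just a covariate in this model, swapping the sum score for θ or for a KS estimate requires no change to the procedure, which is the flexibility the abstract highlights in result (2).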
In general, social science data can be divided into attribute data and relational data. Traditional social research focuses on individual properties and, limited by lagging statistical methods, simplifies relational data into attribute data so that traditional statistical analysis can be applied. This approach is not desirable, because traditional statistical analysis requires independent cases, whereas relational data concern relationships between interdependent actors; relational data thus violate the independence assumption and are not amenable to traditional statistical analysis. With the development of statistical methods, a new approach, social network analysis (SNA), has been proposed for relational data. Social network analysis is a large and growing body of research on the measurement and analysis of relational structure; it mainly evaluates the relationships between actors and the contexts of the social actors. Network autocorrelation models are common in social network analysis, where they are used to study the relationship between network effects and individual behavior. To explore the difference between social network analysis and traditional statistical analysis, we compared the performance of the network effect model and the traditional linear model in handling relational data through simulation studies conducted in the R statistical programming environment. This article also presents an application of the network effect model in psychology: an empirical study investigating the impact of peer effects and learning motivation on adolescents' academic performance. The network effect model, a type of network autocorrelation model that fully accounts for the interdependencies among sample units, was applied to the data using the “sna” package in R. 
The simulation study suggests that the parameter estimates and model fit of the network effect model are significantly better than those of the traditional linear model when dealing with relational data, which is why the network effect model should be applied. The results of the empirical study reveal that peer effects have a significant impact on academic performance. Overall, the findings not only highlight that social network analysis should be used for relational data, but also indicate that peer effects are crucial to adolescents' academic performance.
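The network effect model discussed above has the reduced form y = ρWy + Xβ + ε, so the observed outcomes are y = (I − ρW)⁻¹(Xβ + ε): each actor's outcome mixes in the outcomes of network neighbors through W. The sketch below works this out for a hypothetical two-actor network in pure Python; the weight matrix, ρ, and the exogenous part are made-up numbers, not the empirical study's data (which was analyzed with the "sna" package in R).

```python
# Illustrative sketch of the network effect (network autocorrelation) model
#   y = rho * W y + X beta + e   =>   y = (I - rho*W)^{-1} (X beta + e),
# shown for a hypothetical two-actor network. W, rho, and the exogenous
# part are invented for illustration only.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ y = b by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

def network_effect_outcome(W, rho, xb_plus_e):
    """Reduced-form outcome y = (I - rho*W)^{-1} (X beta + e)."""
    a = [[1 - rho * W[0][0], -rho * W[0][1]],
         [-rho * W[1][0], 1 - rho * W[1][1]]]
    return solve_2x2(a, xb_plus_e)
```

With W tying the two actors together, ρ = 0.5, and an exogenous contribution of [3, 0], the second actor ends up with a positive outcome purely through the tie to the first actor; setting ρ = 0 recovers the traditional linear model, which is exactly the interdependence that ordinary regression ignores.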