Acta Psychologica Sinica    2013, Vol. 45 Issue (9) : 1039-1049     DOI: 10.3724/SP.J.1041.2013.01039
The Application of Many-Facet Rasch Model in Leaderless Group Discussion
YAO Ruosong;ZHAO Baonan;LIU Ze;MIAO Qunying
(1 Department of Education, Guangzhou University, Guangzhou 510006, China) (2 School of Foreign Studies, Guangzhou University, Guangzhou 510006, China)
Abstract  The Many-Facet Rasch Model (MFRM), an extension of Item Response Theory (IRT), is widely applied to performance assessment. Studies at home and abroad have applied MFRM in many fields, such as the analysis of examinations, medical diagnosis, and judgments of quality of life. In such assessments, ratings are influenced by a variety of factors, among which the judges play the most important part. This study examined subjects, judges, rating scales, and rating bias in the Leaderless Group Discussion (LGD) of a personnel assessment center, with the aim of improving the effectiveness and stability of the assessment. Using FACETS, a computer program implementing MFRM, we established three facets (subjects, judges, and rating dimensions) and analyzed subjects' abilities, rater severity, inter-rater reliability, dimension difficulty, and the functioning of the rating scale. The study also obtained bias analyses of judges by subjects, judges by dimensions, and the judge-by-subject-by-dimension interaction. The results showed significant differences in subjects' ability levels, rater severity, dimension difficulty, and use of the rating scale. Differences in rater severity generally did not affect subjects' test scores. Except for a few judges, the ratings showed good internal consistency. Dimension difficulty distinguished subjects' abilities well, but judges tended to concentrate their ratings in the middle categories of the scale. The bias analyses of judges by subjects and judges by dimensions showed that the untrained judges E and F produced more rating bias, so their scores need to be monitored and their training strengthened.
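The three-facet analysis described above rests on Linacre's many-facet rating scale formulation, which the abstract does not reproduce; in its standard form, the log-odds of subject n receiving category k rather than category k−1 from judge j on dimension i is

```latex
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where \(B_n\) is the ability of subject n, \(D_i\) the difficulty of dimension i, \(C_j\) the severity of judge j, and \(F_k\) the threshold of rating category k. FACETS estimates all four parameter sets jointly on a common logit scale, which is what allows subject ability, rater severity, and dimension difficulty to be compared directly.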
Applying MFRM, an extension of IRT, to assessment center evaluation enables evaluators to make employment decisions from subjects' estimated ability levels, design tests according to dimension difficulty, set standards for rater training and selection by examining rater severity and inter-rater reliability, and improve the assessment process on the basis of the various bias analyses, thereby promoting the scientific, standardized, and precise development of the assessment center evaluation system.
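As an illustration of the three-facet data layout (not the FACETS estimation itself, which fits logit measures jointly), centered marginal means of a subjects × judges × dimensions ratings array give a rough descriptive analogue of the facet estimates; the data, sizes, and variable names below are hypothetical:

```python
import numpy as np

# Hypothetical data: ratings from 4 judges on 3 dimensions for 6 subjects,
# on a 1-5 rating scale (all sizes and values invented for illustration).
rng = np.random.default_rng(seed=42)
ratings = rng.integers(1, 6, size=(6, 4, 3)).astype(float)

grand_mean = ratings.mean()

# Centered marginal means: a descriptive analogue of the three facet
# estimates (not the logit measures FACETS would report).
subject_ability = ratings.mean(axis=(1, 2)) - grand_mean      # higher = more able
judge_severity = grand_mean - ratings.mean(axis=(0, 2))       # higher = harsher
dimension_difficulty = grand_mean - ratings.mean(axis=(0, 1)) # higher = harder

# Each facet is centered, so its values sum to (numerically) zero,
# mirroring the identification constraints of the Rasch analysis.
print(subject_ability.round(2))
print(judge_severity.round(2))
print(dimension_difficulty.round(2))
```

A judge whose severity value stands well above the others, or whose residuals concentrate on particular subjects or dimensions, is the kind of rater the bias analyses in this study flag for monitoring and retraining.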
Keywords: leaderless group discussion; many-facet Rasch model; item response theory; personnel assessment
Corresponding Author: YAO Ruosong
Issue Date: 25 September 2013
Cite this article:   
YAO Ruosong; ZHAO Baonan; LIU Ze; MIAO Qunying. The Application of Many-Facet Rasch Model in Leaderless Group Discussion[J]. Acta Psychologica Sinica, 2013, 45(9): 1039-1049.



Copyright © Acta Psychologica Sinica
Supported by Beijing Magtech