ISSN 0439-755X
CN 11-1911/B

2006, Vol. 38, Issue (04): 614-625.


An IRT Analysis of Rater Bias in Structured Interview of National Civilian Candidates

Sun Xiaomin, Zhang Houcan

  1. School of Psychology, Beijing Normal University, Beijing 100875, China
  • Received: 2005-08-10 Published: 2006-07-30 Online: 2006-07-30
  • Contact: Sun Xiaomin

Abstract:
Introduction
The structured interview is one of the most important methods in personnel selection. The existence of rater bias, however, can threaten its reliability and validity to a great extent. Different test theories offer different solutions to this problem. The many-facet Rasch model (MFRM), an extension of the Rasch model, overcomes the shortcomings of Classical Test Theory (CTT). By parameterizing not only interviewee ability and item difficulty but also rater severity, MFRM offers an effective way to estimate rater bias and provides detailed information about each rater's bias behavior. This study divided rater bias into two kinds, intra-rater inconsistency and between-rater differences in stringency, and used MFRM to analyze both.
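The model just described can be written in its standard rating-scale form (the notation below follows common MFRM conventions and is not taken from the original paper):

```latex
% Many-facet Rasch model, rating-scale form:
% log-odds that interviewee n receives category k rather than k-1
% on dimension i from rater j.
\[
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]
% B_n : ability of interviewee n
% D_i : difficulty of rating dimension i
% C_j : severity of rater j
% F_k : threshold of score category k relative to k-1
```

The extra severity facet C_j is what distinguishes MFRM from the ordinary Rasch model and allows between-rater differences in stringency to be estimated directly.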
Method
The data come from a structured interview of national civilian candidates. There were 200 interviewees and 21 raters; the raters were randomized into three panels on each of two mornings. The scores from two of these panels were used in this study. The first 34 interviewees (numbers 1 through 34) were interviewed by raters A, B, C, D, E, F, and G on the first morning. Interviewees 35 to 66 were interviewed by raters A, E, H, I, J, K, and L on the second morning. Using a 10-point rating scale (1 to 10), each rater rated each interviewee independently on five dimensions.
Using FACETS 3.55.0, a computer program based on MFRM, we examined between-rater differences in stringency and intra-rater inconsistency. Rater bias across candidates, rating dimensions, gender, and time periods was further examined through the bias analysis provided by FACETS.
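The intra-rater consistency that FACETS reports is conventionally summarized by infit and outfit mean-square fit statistics computed from model residuals. A minimal sketch of that computation is below (the function name and example data are illustrative; this is not the FACETS implementation):

```python
import numpy as np

def fit_statistics(observed, expected, variance):
    """Rasch-style fit statistics for one rater's set of ratings.

    observed : ratings the rater actually gave
    expected : model-expected ratings under the MFRM
    variance : model variance of each rating
    Returns (infit, outfit) mean squares.
    """
    residual = observed - expected
    z2 = residual**2 / variance                    # squared standardized residuals
    outfit = z2.mean()                             # unweighted: sensitive to outliers
    infit = (residual**2).sum() / variance.sum()   # information-weighted
    return infit, outfit
```

Mean squares near 1 indicate ratings about as variable as the model expects; values well above 1 flag erratic (inconsistent) raters, and values well below 1 flag overly consistent raters, matching the two patterns reported in the Results below.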
Results
(1) In the structured interview, raters differed significantly from one another in severity;
(2) In the structured interview, raters demonstrated different levels of internal consistency. The specific behaviors of both the inconsistent raters and the overly consistent raters were identified.
(3) Raters also showed different patterns of bias across candidates, rating dimensions, gender, and time periods.
Conclusions
The results suggest the existence of two sources of rater bias. The application of MFRM to the analysis of structured interviews offers an effective way to select competent national civilian candidates fairly. It also provides valuable information for selecting qualified raters and for identifying each rater's strong points and shortcomings, which is useful for constructing a rater bank and for further training of less competent raters.

Key words: structured interview, rater bias, IRT

CLC Number: