

## Is common method variance a “deadly plague”? Unsolved contention, fresh insights, and practical recommendations

ZHU Haiteng 1,2, LI Chuanyun 1

1 College of Politics, National Defence University, PLA, Shanghai 200433, China

2 Department of Military and Political Basic Education, Army Academy of Artillery and Air Defense, PLA, Hefei 230031, China

Funding: * 2017 college-level research project of the College of Politics, National Defence University (17ZY03-12); 2016 second-batch independently approved project of the former Army Officer Academy (2016-02-ZZLX-40).

Received: 2018-06-04   Online: 2019-04-15

Abstract

Common method variance (CMV) is systematic variance attributable to similarities in the measurement method facets shared by constructs. It can distort observed correlations and thereby produce common method bias (CMB). Although it has been noted repeatedly in social science research for almost 60 years, its threat to research validity has not been universally acknowledged and remains under scrutiny. Extant empirical evidence demonstrates the ubiquity of CMV and identifies distinct factors that trigger CMB, including data source, time interval, and questionnaire design. As a result, cross-sectional self-report surveys have drawn particularly extensive criticism. Nonetheless, some researchers contend that measurement error and uncommon method variance can offset or alleviate the underlying detriment, so that the pervasive anxiety over CMV is exaggerated and unjustified. The measure-centric approach holds that CMV originates from the interplay between methods and constructs, and that a two-dimensional CMV risk evaluation should consider method and construct simultaneously. In our view, it is preferable to cultivate a balanced, impartial attitude toward CMV: accept its existence, discard the prejudice against self-reports, and, above all, take proactive countermeasures grounded in better research design.

Keywords: common method variance; common method bias; self-reporting; research design; validity

ZHU Haiteng, LI Chuanyun. (2019). Is common method variance a “deadly plague”? Unsolved contention, fresh insights, and practical recommendations. Advances in Psychological Science, 27(4), 587-599

## 2 Alike in appearance, different in substance: Detecting and differentiating common method variance and common method bias

(1) Data source. A construct may be measured through a single respondent or through multiple channels (e.g., multiple raters, objective records). One striking fact is that criticism of common method variance is overwhelmingly directed at the design researchers use most often: cross-sectional surveys relying on self-reports from a single source (Brannick, Chan, Conway, Lance, & Spector, 2010; Chang, van Witteloostuijn, & Eden, 2010; Lai, Li, & Leung, 2013; Spector & Brannick, 2010). Many scholars believe that self-report data carry substantial common source bias and therefore yield untrustworthy results; some reviewers even reject such manuscripts out of hand (Brannick et al., 2010; Spector, 2006). Existing evidence does show that correlations obtained from a single respondent tend to be inflated. The degree of common source bias moderates the correlations between constructs (Chen, Su, & Wang, 2016). Two meta-analyses by Podsakoff and colleagues found that, compared with using different raters, a single respondent inflated correlation coefficients by 59.5% to 304% (Podsakoff, Whiting, Welsh, & Mai, 2013; Podsakoff et al., 2012), and the relationships between individual or organizational performance and explanatory variables show a similar pattern (Andersen, Heinesen, & Pedersen, 2016; Meier & O'Toole, 2013; Su & Duan, 2015). Common source bias is more severe for highly subjective perceptual variables such as organizational commitment and job satisfaction (Favero & Bullock, 2015; Sharma, Yetton, & Crawford, 2009; Tehseen, Ramayah, & Sajilan, 2017).

(2) Measurement timing. Constructs measured at the same time carry systematic covariation, because information retained in short-term memory raises the probability of consistent responses and thereby inflates correlations (Podsakoff et al., 2003). Studies show that correlations between constructs measured at different time points (separated by one day to two months) are markedly smaller than those obtained when all measurements are completed in a single session (Barraclough, af Wåhlberg, Freeman, Davey, & Watson, 2014; Johnson, Rosen, & Djurdjevic, 2011; Wingate, Sng, & Loprinzi, 2018).

(3) Questionnaire design. This mainly concerns scale format (e.g., Likert versus semantic-differential scales), response anchors, and the semantic clarity of items. Measuring multiple constructs with Likert scales whose anchor content (e.g., agreement or frequency) and number of scale points (e.g., five) are identical yields inflated correlations (Podsakoff et al., 2013; Schwarz, Rizzuto, Carraher-Wolverton, Roldán, & Barrera-Barrera, 2017); abstract, unclear, or ambiguous items inflate indicator loadings, composite reliabilities, and path coefficients (Schwarz et al., 2017; Schwarz, Schwarz, & Rizzuto, 2008).
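The single-source inflation described in point (1) can be illustrated with a minimal simulation. This is a sketch under assumed values: the true trait correlation (.40), the shared method loading (0.5), and the error level are hypothetical choices for illustration, not parameters from the studies cited above.

```python
import random

# Minimal sketch of common-rater inflation (all parameter values are
# hypothetical): both measures load on one shared "method" factor M,
# mimicking a single respondent's response style.
random.seed(42)
N = 50_000
TRUE_R = 0.4   # true correlation between the trait factors
LAM_M = 0.5    # loading of each measure on the shared method factor

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = (sum((v - ma) ** 2 for v in a) / n) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / n) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n * sa * sb)

xs, ys = [], []
for _ in range(N):
    t1 = random.gauss(0, 1)
    t2 = TRUE_R * t1 + (1 - TRUE_R ** 2) ** 0.5 * random.gauss(0, 1)
    m = random.gauss(0, 1)                       # shared method factor
    xs.append(t1 + LAM_M * m + 0.3 * random.gauss(0, 1))
    ys.append(t2 + LAM_M * m + 0.3 * random.gauss(0, 1))

print(round(corr(xs, ys), 2))  # clearly above the true trait correlation of .40
```

With these values the expected observed correlation works out to (0.40 + 0.25) / 1.34 ≈ .49, so the shared rater component alone lifts the observed coefficient well above the true .40, in the direction the meta-analytic evidence above reports.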

### 3.1 Self-report methods are irreplaceable

Spector (2006), the main proponent of the "urban legend" view, further argued that if all self-reports carried common source bias, there should be a baseline level guaranteeing that every observed correlation reaches statistical significance; in reality, nonsignificant correlations remain common even in large-sample studies and between theoretically related constructs, which amply demonstrates that self-reporting is far from a guarantee of significant correlations. In short, just as one should not "throw the baby out with the bathwater," self-report research should not be rejected indiscriminately without verification, still less "demonized."

### 3.4 The offsetting effect of measurement error

$r_{XY}=\lambda_{T_X}\lambda_{T_Y}\rho_{T_X T_Y}+\lambda_{M_X}\lambda_{M_Y}$ (1)

Lance et al. (2010) provided corroborating evidence through a reanalysis of 18 MTMM matrices. The average observed correlation between two constructs measured with the same method was 0.340, while the correlation converted via Equation (1) was 0.332, remarkably close; after adding method factors, the average correlation between trait factors (an unbiased estimate of the true correlation $\rho_{T_X T_Y}$) was 0.371, not far from the first two values either. They concluded (p. 444) that the "urban legend" that common method effects inflate monomethod correlations has a kernel of truth, but the claim that observed correlations exceed their true values is a myth, because measurement error exerts an attenuating effect. In a separate simulation study, Fuller et al. (2016) manipulated parameters such as the proportion of common method variance, reliability, and the true correlation, and found that when reliability was slightly below conventional levels (0.77-0.80), common method variance deflated correlations; conversely, at extremely high reliability (0.97-0.99), it inflated them. This strongly supports Lance et al.'s view: common method variance does exist, but whether it produces appreciable common method bias depends in part on the attenuating effect of measurement error; in particular circumstances (when the inflationary effect of the method is exactly canceled by the attenuating effect of measurement error), a correlation obtained with a single method can accurately reflect the true relationship between constructs.
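The trade-off Fuller et al. describe can be checked numerically against the formula above. The sketch assumes, for simplicity, equal trait loadings and equal method loadings on both measures; the specific values are illustrative, not taken from their simulation.

```python
# Observed correlation per the decomposition above, assuming equal
# loadings on both measures: r_XY = lam_t^2 * rho + lam_m^2
def observed_r(lam_t, lam_m, rho_true):
    """Trait loading lam_t, shared method loading lam_m, true correlation rho_true."""
    return lam_t * lam_t * rho_true + lam_m * lam_m

RHO = 0.50    # hypothetical true trait correlation
LAM_M = 0.30  # hypothetical shared method loading

print(round(observed_r(0.80, LAM_M, RHO), 3))  # 0.41: attenuation wins, below the true 0.50
print(round(observed_r(0.95, LAM_M, RHO), 3))  # 0.541: method inflation wins, above 0.50
```

Whether the single-method correlation over- or understates the truth thus hinges on where the trait loading (i.e., reliability) sits relative to the method loading, which is exactly the pattern Fuller et al. report.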

### 3.5 The interplay between uncommon and common method variance

Conway and Lance (2010) listed "other-reports are superior to self-reports" as one of reviewers' three major misconceptions about common method bias, because ratings from different raters (and, by extension, other differing method facets) generate unshared method effects, or unshared irrelevant variance, which deflates construct correlations (Brannick et al., 2010). The notion of "unshared method effects" implies a shift in perspective: besides attending to the commonalities or similarities among measurement methods, one should also note their differences, which are a potential counterweight to common method variance. Inspired by this, Spector and colleagues (Spector, Rosen, Richardson, Williams, & Johnson, in press) offered a more comprehensive definition of method variance as exogenous, unintended systematic influences on the measured variables: the portion shared by multiple variables is common method variance, while the portions that affect individual variables separately, without overlap, constitute uncommon method variance. Common and uncommon method variance are complementary and together make up total method variance (see Figure 1 for the relations among variance components); they are intertwined yet mutually constraining, each waxing as the other wanes. Whatever the magnitude or significance of common method variance in a given study, some amount of uncommon method variance necessarily exists, because the methods used to measure the various constructs always differ to some degree. Viewed from another angle, the higher the correlation between methods, the larger the common method variance; the lower the correlation, the larger the uncommon method variance.

$V_O=V_C+\sum_i V_{M_i}+V_E$

$r_{XY}=\frac{Cov_{XY}}{\sqrt{Var_X Var_Y}}$

$Cov_{XY}=Cov_{X_C Y_C}+Cov_{X_C Y_M}+Cov_{X_M Y_C}+Cov_{X_M Y_M}$
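The covariance partition above follows from the bilinearity of covariance and can be verified on toy data; the component values below are arbitrary illustrations.

```python
# Verify Cov(X,Y) = Cov(Xc,Yc) + Cov(Xc,Ym) + Cov(Xm,Yc) + Cov(Xm,Ym)
# for X = Xc + Xm and Y = Yc + Ym (c = construct part, m = method part).
def cov(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

xc = [1.0, 2.0, 3.0, 4.0]; xm = [0.2, 0.1, 0.4, 0.3]  # toy components
yc = [2.0, 1.0, 4.0, 3.0]; ym = [0.3, 0.2, 0.1, 0.4]

x = [c + m for c, m in zip(xc, xm)]
y = [c + m for c, m in zip(yc, ym)]

lhs = cov(x, y)
rhs = cov(xc, yc) + cov(xc, ym) + cov(xm, yc) + cov(xm, ym)
print(abs(lhs - rhs) < 1e-12)  # True: the partition holds exactly
```

Because any method component that X and Y share contributes to the $Cov_{X_M Y_M}$ term, the observed correlation in the preceding formula is moved purely by how the method parts covary, which is the sense in which common and uncommon method variance trade off against each other.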

### Figure 1. Relations among the variance components

By proposing uncommon method variance as a counterpart to common method variance, Spector and colleagues struck a deliberately contrarian note. Although this account is still tentative and lacks empirical evidence, it advances our understanding of method variance, helps clarify the relationship between method variance and common method variance, and breaks with the simplistic view that equates the two. Uncommon method variance complements Lance et al.'s measurement-error offset account: both views explain well when common method bias manifests as inflation and when as deflation, and both challenge the habitual beliefs that "self-reports suffer severe common method bias" and that "multi-rater designs are free of common method bias." They are valuable and commendable explorations.

## 4 Method is not everything: A new measurement-centric perspective

The back-and-forth between the "critics" and the "defenders" makes it hard, for the moment, to reach a verdict on the threat posed by common method variance. Perhaps no such universal verdict exists; only a more fine-grained view of common method variance can point to the right countermeasures. Researchers commonly hold a myth: they reflexively tie common method variance to a particular measurement method, assuming that whenever several constructs are measured with that method, contamination is inevitable; in other words, that the sole trigger of common method variance is the method itself, independent of the constructs being measured. Driven by this belief, when facing a paper based entirely on self-reports, people tend to seize on the issue of common method variance while failing to scrutinize the characteristics of the variables and the appropriateness of self-reporting.

## References

Common method variance (CMV) refers to overlap in the variance of two variables that is due to the type of measurement instrument used rather than to a true relationship between the underlying constructs. Researchers should give CMV careful consideration, although it does not necessarily bias conclusions about the relationships between measures. A CMV effect is often created by using the same method, especially a survey, to measure each variable. Procedural design and statistical control solutions are available to minimize its likelihood in studies. A statistical control technique is a good solution if it can separate construct variance, method variance, and error; distinguish method bias at the item level from method bias at the construct level; and take account of Method × Trait interactions. Thus, method-factor approaches are better than partial-correlation approaches. Understanding the model underlying each method-factor approach is essential for selecting the correct statistical remedy for a given research setting. Estimating the effect of CMV within specific research domains, and its effect on empirical findings within a theoretical domain, should be addressed in further research.

The problem of common method biases has been receiving more and more attention in psychology, but there is little research on it in China, and the effects of common method bias are not well controlled. Generally, there are two ways of controlling common method biases: procedural remedies and statistical remedies. This paper reviews statistical remedies for common method biases, such as factor analysis, partial correlation, latent method factors, and structural equation modeling, and analyzes their respective advantages and disadvantages. Finally, suggestions on how to choose among these remedies are given.

Andersen, L. B., Heinesen, E., & Pedersen, L. H. (2016). Individual performance: From common source bias to institutionalized assessment. Journal of Public Administration Research and Theory, 26(1), 63-78.

Performance is perhaps the most central concept in public administration research, and this article discusses theoretically and investigates empirically how we can obtain more consistent performance measures. Theoretically, we combine existing arguments in public administration with institutional theory and the sociology of professions. Empirically, we ask whether different measures of individual performance produce different results. The investigated performance measures vary with regard to risk of common data source bias, standardization of assessment criteria, and external verification of the assessment. Our investigated explanatory variables are intrinsic motivation, public service motivation, and job satisfaction. Combining survey and administrative data for 747 lower secondary school teachers (teaching 5,679 students in 85 schools), we analyze 4 different measures of the same performance dimension for the same teachers: the teachers’ self-reported contributions to students’ academic skills, the students’ marks for the year’s work given by the teacher, marks in oral exams with one external examiner and the teacher, and marks in written exams with at least one external examiner. The associations are systematically stronger when the performance measure comes from the same data source as the explanatory variables, but when separate data sources are used and the measurement scale is institutionalized, the level of external verification does not matter much. Based on institutional theory and the sociology of professions, we develop a theoretical argument that can explain this.

Barraclough, P., af Wåhlberg, A., Freeman, J., Davey, J., & Watson, B. (2014). Real or imagined? A study exploring the existence of common method variance effects in road safety research. Paper presented at the 5th International Conference on Applied Human Factors and Ergonomics, Krakow, Poland.

Batista-Foguet, J. M., Revilla, M., Saris, W. E., Boyatzis, R., & Serlavós, R. (2014). Reassessing the effect of survey characteristics on common method bias in emotional and social intelligence competencies assessment. Structural Equation Modeling, 21(4), 596-607.

Since the idea of method variance was inspired by D. T. Campbell and Fiske in 1959, many papers have demonstrated an ongoing debate about both its nature and impact. Often, method variance entails an upward bias in correlations among observed variables—common method bias. This article reports a split-ballot multitrait–multimethod experimental design for estimating 2 opposite biases: the upward biasing method variance from the reaction to the length of the response scale and the position of the survey items in the questionnaire and the downward biasing effect of poor data quality. The data are derived from self-reported behavior related to emotional and social competencies. This article illustrates a methodology to estimate common method bias and its components: common method scale variance, common method occasion variance, and the attenuation effect due to measurement errors. The results show that common method variance has a much smaller impact than random and systematic measurement errors. The results also corroborate previous findings: the greater reliability of longer scales and the lower reliability of items placed toward the end of the survey.

Brannick, M. T., Chan, D., Conway, J. M., Lance, C. E., & Spector, P. E. (2010). What is method variance and how can we cope with it? A panel discussion. Organizational Research Methods, 13(3), 407-420.

A panel of experts describes the nature of, and remedies for, method variance. In an attempt to help the reader understand the nature of method variance, the authors describe their experiences with method variance both on the giving and the receiving ends of the editorial review process, as well as their interpretation of other reviewers' comments. They then describe methods of data analysis and research design, which have been used for detecting and eliminating the effects of method variance. Most methods have some utility, but none prevent the researcher from making faulty inferences. The authors conclude with suggestions for resolving disputes about method variance.

Carter, M. Z., Mossholder, K. W., Field, H. S., & Armenakis, A. A. (2014). Transformational leadership, interactional justice, and organizational citizenship behavior: The effects of racial and gender dissimilarity between supervisors and subordinates. Group & Organization Management, 39(6), 691-719.

Chang, S.-J., van Witteloostuijn, A., & Eden, L. (2010). From the editors: Common method variance in international business research. Journal of International Business Studies, 41(2), 178-184.

"JIBS" receives many manuscripts that report findings from analyzing survey data based on same-respondent replies. This can be problematic since same-respondent studies can suffer from common method variance (CMV). Currently, authors who submit manuscripts to "JIBS" that appear to suffer from CMV are asked to perform validity checks and resubmit their manuscripts. This letter from the Editors is designed to outline the current state of best practice for handling CMV in international business research.

Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business & Psychology, 25(3), 325-334.

Cortina, J. M., Aguinis, H., & DeShon, R. P. (2017). Twilight of dawn or of evening? A century of research methods in the Journal of Applied Psychology. Journal of Applied Psychology, 102(3), 274-290.

Craighead, C. W., Ketchen, D. J., Dunn, K. S., & Hult, G. T. M. (2011). Addressing common method variance: Guidelines for survey research on information technology, operations, and supply chain management. IEEE Transactions on Engineering Management, 58(3), 578-588.

Common method variance (CMV) is the amount of spurious correlation between variables that is created by using the same method-often a survey-to measure each variable. CMV may lead to erroneous conclusions about relationships between variables by inflating or deflating findings. We analyzed recent survey research in IEEE Transactions on Engineering Management, Journal of Operations Management, and Production and Operations Management to assess if and how scholars address CMV. We found that two-thirds of the relevant articles published between 2001 and 2009 did not formally address CMV, and many that did address CMV relied on relatively weak remedies. These findings have troubling implications for efforts to build knowledge within information technology, operations and supply chain management research. In an effort to strengthen future research designs, we provide recommendations to help scholars to better address CMV. Given the potentially severe effects of CMV, authors should apply the recommended CMV remedies within their survey-based studies, and reviewers should hold authors accountable when they fail to do so.

Doty, D. H., & Glick, W. H. (1998). Common methods bias: Does common methods variance really bias results? Organizational Research Methods, 1(4), 374-406.

Edwards, J. R. (2008). To prosper, organizational psychology should… overcome methodological barriers to progress. Journal of Organizational Behavior, 29(4), 469-491.

Progress in organizational psychology (OP) research depends on the rigor and quality of the methods we use. This paper identifies ten methodological barriers to progress and offers suggestions for overcoming the barriers, in part or whole. The barriers address how we derive hypotheses from theories, the nature and scope of the questions we pursue in our studies, the ways we address causality, the manner in which we draw samples and measure constructs, and how we conduct statistical tests and draw inferences from our research. The paper concludes with recommendations for integrating research methods into our ongoing development goals as scholars and framing methods as tools that help us achieve shared objectives in our field. Copyright 2008 John Wiley & Sons, Ltd.

Favero, N., & Bullock, J. B. (2015). How (not) to solve the problem: An evaluation of scholarly responses to common source bias. Journal of Public Administration Research and Theory, 25(1), 285-308.

Public administration scholars are beginning to pay more attention to the problem of common source bias, but little is known about the approaches that applied researchers are adopting as they attempt to confront the issue in their own research. In this essay, we consider the various responses taken by the authors of six articles in this journal. We draw attention to important nuances of the common measurement issue that have previously received little attention and run a set of empirical analyses in order to test the effectiveness of several proposed solutions to the common-source-bias problem. Our results indicate that none of the statistical remedies being used by public administration scholars appear to be reliable methods of countering the problem. Currently, it appears as though the only reliable solution is to find independent sources of data when perceptual survey measures are employed.

Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y., & Babin, B. J. (2016). Common methods variance detection in business research. Journal of Business Research, 69(8), 3192-3198.

The issue of common method variance (CMV) has become almost legendary among today's business researchers. In this manuscript, a literature review shows many business researchers take steps to assess potential problems with CMV, or common method bias (CMB), but almost no one reports problematic findings. One widely-criticized procedure assessing CMV levels involves a one-factor test that examines how much common variance might exist in a single dimension. This paper presents a data simulation demonstrating that a relatively high level of CMV must be present to bias true relationships among substantive variables at typically reported reliability levels. The simulation data overall suggests that at levels of CMV typical of multiple item measures with typical reliabilities reporting typical effect sizes, CMV does not represent a grave threat to the validity of research findings.

George, B., & Pandey, S. K. (2017). We know the Yin-but where is the Yang? Toward a balanced approach on common source bias in public administration scholarship. Review of Public Personnel Administration, 37(2), 245-270.

Surveys have long been a dominant instrument for data collection in public administration. However, it has become widely accepted in the last decade that the usage of a self-reported instrument to measure both the independent and dependent variables results in common source bias (CSB). In turn, CSB is argued to inflate correlations between variables, resulting in biased findings. Subsequently, a narrow blinkered approach on the usage of surveys as single data source has emerged. In this article, we argue that this approach has resulted in an unbalanced perspective on CSB. We argue that claims on CSB are exaggerated, draw upon selective evidence, and project what should be tentative inferences as certainty over large domains of inquiry. We also discuss the perceptual nature of some variables and measurement validity concerns in using archival data. In conclusion, we present a flowchart that public administration scholars can use to analyze CSB concerns.

Johnson, R. E., Rosen, C. C., & Djurdjevic, E. (2011). Assessing the impact of common method variance on higher order multidimensional constructs. Journal of Applied Psychology, 96(4), 744-761.

Researchers are often concerned with common method variance (CMV) in cases where it is believed to bias relationships of predictors with criteria. However, CMV may also bias relationships within sets of predictors; this is cause for concern, given the rising popularity of higher order multidimensional constructs. The authors examined the extent to which CMV inflates interrelationships among indicators of higher order constructs and the relationships of those constructs with criteria. To do so, they examined core self-evaluation, a higher order construct comprising self-esteem, generalized self-efficacy, emotional stability, and locus of control. Across 2 studies, the authors systematically applied statistical (Study 1) and procedural (Study 2) CMV remedies to core self-evaluation data collected from multiple samples. Results revealed that the nature of the higher order construct and its relationship with job satisfaction were altered when the CMV remedies were applied. Implications of these findings for higher order constructs are discussed.

Kammeyer-Mueller, J., Steel, P. D. G., & Rubenstein, A. (2010). The other side of method bias: The perils of distinct source research designs. Multivariate Behavioral Research, 45(2), 294-321.

Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source variance create serious problems. To demonstrate the problems of using what we term the “distinct sources” measurement design, we provide an integrative review of the literature regarding both contamination and deficiency of measures. Building on this theme, the article uses simulated data to demonstrate how using data from distinct observers or occasions of measurement can distort estimates of predictor importance at least as much as common source variance. Alternative multisource designs are advocated and examined for tractability by simulating various numbers of observations and sources in the research design.

Kline, T. J. B., Sulsky, L. M., & Rever-Moriyama, S. D. (2000). Common method variance and specification errors: A practical approach to detection. Journal of Psychology, 134(4), 401-421.

The purpose of this study was to demonstrate how examining the bivariate correlations between items in self-report measures can assist in differentiating between possible common method variance vs. model specification errors. Specifically, social desirability was viewed as either a possible source of common method variance or as a theoretically meaningful construct that should be included in the model of interest (i.e., a specification error). In the first instance, LISREL was used, and the level of correlation between measures of social desirability and measures of the five constructs of interest was manipulated. These results provided some insight as to when one needs to be concerned about the possible “common variance effects” on the structural model. In the second instance, the correlations between measures of social desirability and the measures of only two constructs of interest were again manipulated. These analyses illustrated the point at which the omission of social desirability as a theoretically relevant variable began to result in a poor fit of the structural model.

Lai, X., Li, F., & Leung, K. (2013). A Monte Carlo study of the effects of common method variance on significance testing and parameter bias in hierarchical linear modeling. Organizational Research Methods, 16(2), 243-269.

Despite that common method variance (CMV) is widely regarded as a serious threat to the validity of findings based on self-reports, there is insufficient research on its confounding influence. We extend Evans's (1985) pioneering work, and the more recent works by Ostroff, Kinicki, and Clark (2002) and Siemsen, Roth, and Oliveira (2010), to delineate the influence of CMV in a two-level hierarchical linear model based on self-report data. Our simulation results clearly show that in the absence of true effects, it is extremely unlikely for CMV to generate significant cross-level interactions. In fact, if a true cross-level interaction exists, CMV tends to lower the likelihood of its identification and erroneously underestimate the regression coefficient. Our simulation results also show that CMV may lead to a false significant cross-level main effect and overestimate the regression coefficient when no true effect exists. To reduce the probability of Type I errors, we show that raising the significance level to .01, the split sample strategy, and the addition of more CMV contaminated variables are effective in the vast majority of real-life situations and are more effective than increasing the number of groups or persons in each group. Both the split sample strategy and the addition of more CMV contaminated variables are also effective in reducing parameter bias when no true cross-level main effect exists. Trade-offs associated with different strategies are discussed.

Lance, C. E., Baranik, L. E., Lau, A. R., & Scharlau, E. A. (2009). If it ain't trait it must be method: (Mis)application of the multitrait-multimethod design in organizational research. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends (pp. 339-362). New York: Routledge.

Lance, C. E., Dawson, B., Birkelbach, D., & Hoffman, B. J. (2010). Method effects, measurement error, and substantive conclusions. Organizational Research Methods, 13(3), 435-455.

Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114-121.

MacKenzie, S. B., & Podsakoff, P. M. (2012). Common method bias in marketing: Causes, mechanisms, and procedural remedies. Journal of Retailing, 88(4), 542-555.

There is a great deal of evidence that method bias influences item validities, item reliabilities, and the covariation between latent constructs. In this paper, we identify a series of factors that may cause method bias by undermining the capabilities of the respondent, making the task of responding accurately more difficult, decreasing the motivation to respond accurately, and making it easier for respondents to satisfice. In addition, we discuss the psychological mechanisms through which these factors produce their biasing effects and propose several procedural remedies that counterbalance or offset each of these specific effects. We hope that this discussion will help researchers anticipate when method bias is likely to be a problem and provide ideas about how to avoid it through the careful design of a study.

Malhotra, N. K., Kim, S. S., & Patil, A. (2006). Common method variance in IS research: A comparison of alternative approaches and a reanalysis of past research. Management Science, 52(12), 1865-1883.

Despite recurring concerns about common method variance (CMV) in survey research, the information systems (IS) community remains largely uncertain of the extent of such potential biases. To address this uncertainty, this paper attempts to systematically examine the impact of CMV on the inferences drawn from survey research in the IS area. First, we describe the available approaches for assessing CMV and conduct an empirical study to compare them. From an actual survey involving 227 respondents, we find that although CMV is present in the research areas examined, such biases are not substantial. The results also suggest that few differences exist between the relatively new marker-variable technique and other well-established conventional tools in terms of their ability to detect CMV. Accordingly, the marker-variable technique was employed to infer the effect of CMV on correlations from previously published studies. Our findings, based on the reanalysis of 216 correlations, suggest that the inflated correlation caused by CMV may be expected to be on the order of 0.10 or less, and most of the originally significant correlations remain significant even after controlling for CMV. Finally, by extending the marker-variable technique, we examined the effect of CMV on structural relationships in past literature. Our reanalysis reveals that contrary to the concerns of some skeptics, CMV-adjusted structural relationships not only remain largely significant but also are not statistically differentiable from uncorrected estimates. In summary, this comprehensive and systematic analysis offers initial evidence that (1) the marker-variable technique can serve as a convenient, yet effective, tool for accounting for CMV, and (2) common method biases in the IS domain are not as serious as those found in other disciplines.

Malhotra, N. K., Schaller, T. K., & Patil, A. (2017). Common method variance in advertising research: When to be concerned and how to control for it. Journal of Advertising, 46(1), 193-212.

In this article we discuss and analyze the critical issues related to common method variance (CMV) that are particularly relevant to advertising research and recommend best practices for assessing the effects of CMV in this domain. Specifically, we cover the development of CMV as a domain-specific methodological concern and the underlying sources of CMV that are likely to operate in cross-sectional survey-based studies in the field of advertising. We discuss in detail the available procedural and statistical techniques that can be applied to control for and/or measure the effects of sources of CMV in a single study and across research domains. In addition, we provide a critical look at how these techniques have been employed in past research and make recommendations for future examinations of CMV in advertising research.

Meade, A. W., Watson, A. M., & Kroustalis, C. M. (2007). Assessing common methods bias in organizational research. Paper presented at the 22nd Annual Meeting of the Society for Industrial and Organizational Psychology, New York.

Meier, K. J., & O'Toole, L. J., Jr. (2013). Subjective organizational performance and measurement error: Common source bias and spurious relationships. Journal of Public Administration Research and Theory, 23(2), 429-456.

Min, H., Park, J., & Kim, H. J. (2016). Common method bias in hospitality research: A critical review of literature and an empirical study. International Journal of Hospitality Management, 56, 126-135.

Common method variance has received much attention in the behavioral sciences. Nonetheless, scant scholarly effort has been invested in handling common method variance in hospitality research. This study investigates the current status of controlling for common method variance in hospitality research and assists researchers in taking appropriate actions. Study 1 shows hospitality researchers' endeavors to control for common method bias through a critical review of literature published in four leading hospitality journals in the ten years from 2006 to 2015: International Journal of Hospitality Management, Journal of Hospitality & Tourism Research, Cornell Hospitality Quarterly, and International Journal of Contemporary Hospitality Management. In Study 2, empirical investigations examine the effectiveness of a procedural remedy (temporal separation) and a statistical control (an unmeasured method factor approach) with two independent samples. The results of Study 1 reveal that most survey-related publications in the four journals fail to address or acknowledge common method variance. Moreover, only a limited number of techniques are found to be used to control for method variance. The findings of Study 2 suggest that temporal separation with a time lag of one day provides only weak control for method variance; however, the use of an unmeasured method factor significantly helps control for method variance in the model.

Pace V. L. (2010).

Method variance from the perspectives of reviewers: Poorly understood problem or overemphasized complaint?

Organizational Research Methods, 13(3), 421-434.

Paiva-Salisbury M. L., Gill A. D., & Stickle T. R. (2016).

Isolating trait and method variance in the measurement of callous and unemotional traits

Assessment, 24(6), 763-771.

To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.

Podsakoff N. P., Whiting S. W., Welsh D. T., & Mai K. M. (2013).

Surveying for “artifacts”: The susceptibility of the OCB-performance evaluation relationship to common rater, item, and measurement context effects

Journal of Applied Psychology, 98(5), 863-874.

Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effect of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research.

Podsakoff P. M., MacKenzie S. B., Lee J.-Y., & Podsakoff N. P. (2003).

Common method biases in behavioral research: A critical review of the literature and recommended remedies

Journal of Applied Psychology, 88(5), 879-903.

Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

Podsakoff P. M., MacKenzie S. B., & Podsakoff N. P. (2012).

Sources of method bias in social science research and recommendations on how to control it

Annual Review of Psychology, 63, 539-569.

Reio T. G., Jr. (2010).

The threat of common method variance bias to theory building

Human Resource Development Review, 9(4), 405-411.

The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as one of the possible major threats to the validity of quantitative research findings upon which significant theory building relies. Common method variance has been shown to introduce systematic bias into a study by artificially inflating or deflating correlations, thereby threatening the validity of conclusions drawn about the links between constructs. Both procedural design and statistical control solutions are provided to minimize its likelihood in studies with monomethod designs. Finally, editors and reviewers are called upon to support knowledge-building about how best to handle common method variance bias in quantitative studies.

Richardson H. A., Simmering M. J., & Sturman M. C. (2009).

A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance

Organizational Research Methods, 12(4), 762-800.

Rindfleisch A., Malter A. J., Ganesan S., & Moorman C. (2008).

Cross-sectional versus longitudinal survey research: Concepts, findings, and guidelines

Journal of Marketing Research, 45(3), 261-279.

Schaller T. K., Patil A., & Malhotra N. K. (2015).

Alternative techniques for assessing common method variance: An analysis of the theory of planned behavior research

Organizational Research Methods, 18(2), 177-206.

Each research domain carries the burden of examining the effects of common method variance (CMV) on published research within the domain. To focus on this concern in the context of the theory of planned behavior (TPB), this research empirically compares several methods of detecting the presence of and estimating the level of CMV in the TPB domain. These methods include various implementations of the marker variable technique and versions of the multitrait-multimethod (MTMM) technique. The results show that the marker variable technique provides estimates of CMV and CMV-corrected correlations and paths that are consistent with those produced using the other methods. Next, one implementation of the marker variable technique method is implemented post hoc on a large data set of published TPB studies. This analysis provides strong confirmatory evidence that the effects of CMV do not alter the substantive inferences of study results in prior research. Overall, these findings support putting to rest concerns about the adverse influence of CMV in the TPB domain.

Schwarz A., Rizzuto T., Carraher-Wolverton C., Roldán J. L., & Barrera-Barrera R. (2017).

Examining the impact and detection of the “urban legend” of common method bias

Data Base for Advances in Information Systems, 48(1), 93-119.

Schwarz A., Schwarz C., & Rizzuto T. (2008).

Examining the “urban legend” of common method bias: Nine common errors and their impact

Paper presented at the 41st Hawaii International Conference on System Sciences, Waikoloa, USA.

Sharma R., Yetton P., & Crawford J. (2009).

Estimating the effect of common method variance: The method-method pair technique with an illustration from TAM research

MIS Quarterly, 33(3), 473-490.

This paper presents a meta-analysis-based technique to estimate the effect of common method variance on the validity of individual theories. The technique explains between-study variance in observed correlations as a function of the susceptibility to common method variance of the methods employed in individual studies. The technique extends to mono-method studies the concept of method variability underpinning the classic multitrait-multimethod technique. The application of the technique is demonstrated by analyzing the effect of common method variance on the observed correlations between perceived usefulness and usage in the technology acceptance model literature. Implications of the technique and the findings for future research are discussed.

Siemsen E., Roth A., & Oliveira P. (2010).

Common method bias in regression models with linear, quadratic, and interaction effects

Organizational Research Methods, 13(3), 456-476.

Spector P. E. (2006).

Method variance in organizational research: Truth or urban legend?

Organizational Research Methods, 9(2), 221-232.

Spector P. E., Bauer J. A., & Fox S. (2010).

Measurement artifacts in the assessment of counterproductive work behavior and organizational citizenship behavior: Do we know what we think we know?

Journal of Applied Psychology, 95(4), 781-790.

An experiment investigated whether measurement features affected observed relationships between counterproductive work behavior (CWB) and organizational citizenship behavior (OCB) and their relationships with other variables. As expected, correlations between CWB and OCB were significantly higher with ratings of agreement rather than frequency of behavior, when OCB scale content overlapped with CWB than when it did not, and with supervisor rather than self-ratings. Relationships with job satisfaction and job stressors were inconsistent across conditions. We concluded that CWB and OCB are likely unrelated and not necessarily oppositely related to other variables. Researchers should avoid overlapping content in CWB and OCB scales and should use frequency formats to assess how often individuals engage in each form of behavior.

Spector P. E., & Brannick M. T. (2009).

Common method variance or measurement bias? The problem and possible solutions

In D. Buchanan & A. Bryman (Eds.), The Sage handbook of organizational research methods (pp. 346-362). Thousand Oaks, CA: Sage Publications Ltd.

Spector P. E., & Brannick M. T. (2010).

Common method issues: An introduction to the feature topic in Organizational Research Methods.

Organizational Research Methods, 13(3), 403-406.

Spector P. E., Rosen C. C., Richardson H. A., Williams L. J., & Johnson R. E. (in press).

A new perspective on method variance: A measure-centric approach

Journal of Management. doi: 10.1177/0149206316687295

Tehseen S., Ramayah T., & Sajilan S. (2017).

Testing and controlling for common method variance: A review of available methods

Journal of Management Sciences, 4(2), 142-168.

Weijters B., Schillewaert N., & Geuens M. (2008).

Assessing response styles across modes of data collection

Journal of the Academy of Marketing Science, 36(3), 409-422.

Cross-mode surveys are on the rise. The current study compares levels of response styles across three modes of data collection: paper-and-pencil questionnaires, telephone interviews, and online questionnaires. The authors make the comparison in terms of acquiescence, disacquiescence, and extreme and midpoint response styles. To do this, they propose a new method, namely, the representative indicators response style means and covariance structure (RIRSMACS) method. This method contributes to the literature in important ways. First, it offers a simultaneous operationalization of multiple response styles. The model accounts for dependencies among response style indicators due to their reliance on common item sets. Second, it accounts for random error in the response style measures. As a consequence, random error in response style measures is not passed on to corrected measures. The method can detect and correct cross-mode response style differences in cases where measurement invariance testing and multitrait multimethod designs are inadequate. The authors demonstrate and discuss the practical and theoretical advantages of the RIRSMACS approach over traditional methods.

Williams L. J., & Brown B. K. (1994).

Method variance in organizational behavior and human resources research: Effects on correlations, path coefficients, and hypothesis testing

Organizational Behavior & Human Decision Processes, 57(2), 185-209.

Williams L. J., Hartman N., & Cavazotte F. (2010).

Method variance and marker variables: A review and comprehensive CFA marker technique

Organizational Research Methods, 13(3), 477-514.

Williams L. J., & McGonagle A. K. (2016).

Four research designs and a comprehensive analysis strategy for investigating common method variance with self-report measures using latent variables

Journal of Business & Psychology, 31(3), 339-359.

Common method variance (CMV) is an ongoing topic of debate and concern in the organizational literature. We present four latent variable confirmatory factor analysis model designs for assessing and controlling for CMV: those for unmeasured latent method constructs, Marker Variables, and Measured Cause Variables, as well as a new hybrid design wherein these three types of method latent variables are used concurrently. We then describe a comprehensive analysis strategy that can be used with these four designs and provide a demonstration using the new design, the Hybrid Method Variables Model. In our discussion, we comment on different issues related to implementing these designs and analyses, provide supporting practical guidance, and, finally, advocate for the use of the Hybrid Method Variables Model. Through these means, we hope to promote a more comprehensive and consistent approach to the assessment of CMV in the organizational literature and more extensive use of hybrid models that include multiple types of latent method variables to assess CMV.

Wingate S., Sng E., & Loprinzi P. D. (2018).

The influence of common method bias on the relationship of the socio-ecological model in predicting physical activity behavior

Health Promotion Perspectives, 8(1), 41-45.

Background: The purpose of this study was to evaluate the extent, if any, that the association between socio-ecological parameters and physical activity may be influenced by common method bias (CMB). Methods: This study took place between February and May of 2017 at a Southeastern University in the United States. A randomized controlled experiment was employed among 119 young adults. Participants were randomized into either group 1 (the group we attempted to minimize CMB) or group 2 (control group). In group 1, CMB was minimized via various procedural remedies, such as separating the measurement of predictor and criterion variables by introducing a time lag (temporal; 2 visits several days apart), creating a cover story (psychological), and approximating measures to have data collected in different media (computer-based vs. paper and pencil) and different locations to control method variance when collecting self-report measures from the same source. Socio-ecological parameters (self-efficacy; friend support; family support) and physical activity were self-reported. Results: Exercise self-efficacy was significantly associated with physical activity. This association (β = 0.74, 95% CI: 0.33-1.1; P = 0.001) was only observed in group 2 (control), but not in group 1 (experimental group) (β = 0.03; 95% CI: -0.57-0.63; P = 0.91). The difference in these coefficients (i.e., β = 0.74 vs. β = 0.03) was statistically significant (P = 0.04). Conclusion: Future research in this field, when feasible, may wish to consider employing procedural and statistical remedies to minimize CMB.

Copyright © Editorial Office of Advances in Psychological Science (《心理科学进展》).
