Top Five Articles from Organizational Research Methods

Summer is just around the corner, bringing with it longer days and warmer weather. To celebrate the season, we present a list of the most-read articles from Organizational Research Methods to add to your summer reading list.

“Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology” by Dennis A. Gioia, Kevin G. Corley, and Aimee Hamilton (January 2013)

For all its richness and potential for discovery, qualitative research has been critiqued as too often lacking in scholarly rigor. The authors summarize a systematic approach to new concept development and grounded theory articulation that is designed to bring “qualitative rigor” to the conduct and presentation of inductive research.


“Validation of a New General Self-Efficacy Scale” by Gilad Chen, Stanley M. Gully, and Dov Eden (January 2001)

Researchers have suggested that general self-efficacy (GSE) can substantially contribute to organizational theory, research, and practice. Unfortunately, the limited construct validity work conducted on commonly used GSE measures has highlighted such potential problems as low content validity and multidimensionality. The authors developed a new GSE (NGSE) scale and compared its psychometric properties and validity to those of the Sherer et al. General Self-Efficacy Scale (SGSE). Studies in two countries found that the NGSE scale has higher construct validity than the SGSE scale. Although shorter than the SGSE scale, the NGSE scale demonstrated high reliability, predicted specific self-efficacy (SSE) for a variety of tasks in various contexts, and moderated the influence of previous performance on subsequent SSE formation. Implications, limitations, and directions for future organizational research are discussed.
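To make the notion of scale reliability concrete for readers new to scale validation, here is a minimal sketch (not the authors' analysis) of Cronbach's alpha, the most common internal-consistency estimate; the ratings matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering a 3-item efficacy scale
ratings = np.array([[4, 5, 4],
                    [3, 3, 4],
                    [5, 5, 5],
                    [2, 3, 2],
                    [4, 4, 5]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```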

“Common Beliefs and Reality About PLS: Comments on Rönkkö and Evermann (2013)” by Jörg Henseler, Theo K. Dijkstra, Marko Sarstedt, Christian M. Ringle, Adamantios Diamantopoulos, Detmar W. Straub, David J. Ketchen Jr., Joseph F. Hair, G. Tomas M. Hult, and Roger J. Calantone (April 2014)

This article addresses Rönkkö and Evermann’s criticisms of the partial least squares (PLS) approach to structural equation modeling. We contend that the alleged shortcomings of PLS are not due to problems with the technique, but instead to three problems with Rönkkö and Evermann’s study: (a) the adherence to the common factor model, (b) a very limited simulation design, and (c) overstretched generalizations of their findings. Whereas Rönkkö and Evermann claim to be dispelling myths about PLS, they have in reality created new myths that we, in turn, debunk. By examining their claims, our article contributes to reestablishing a constructive discussion of the PLS method and its properties. We show that PLS does offer advantages for exploratory research and that it is a viable estimator for composite factor models, which can pose an interesting alternative when the common factor model does not hold. We therefore conclude that PLS should continue to be used as an important statistical tool for management and organizational research, as well as other social science disciplines.
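For readers curious about the mechanics behind the debate, the following is a minimal, illustrative sketch of the basic iterative PLS algorithm (Mode A outer estimation, centroid inner scheme) for a single two-construct path model. It is not the authors' implementation, the variable names and simulated data are invented, and production work should use a dedicated PLS package.

```python
import numpy as np

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

def pls_two_blocks(X1, X2, tol=1e-8, max_iter=300):
    """Minimal PLS path algorithm for a single path LV1 -> LV2
    with indicator blocks X1 and X2 (rows = cases)."""
    X1, X2 = standardize(X1), standardize(X2)
    w1, w2 = np.ones(X1.shape[1]), np.ones(X2.shape[1])
    for _ in range(max_iter):
        # Outer step: composite scores from current weights
        y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
        # Inner step (centroid scheme): each LV proxy is its neighbor's
        # score, signed by the correlation between the two scores
        s = np.sign(np.corrcoef(y1, y2)[0, 1])
        z1, z2 = s * y2, s * y1
        # Outer weight update (Mode A: indicator/inner-proxy covariances)
        w1_new, w2_new = X1.T @ z1 / len(X1), X2.T @ z2 / len(X2)
        converged = max(np.abs(w1_new - w1).max(),
                        np.abs(w2_new - w2).max()) < tol
        w1, w2 = w1_new, w2_new
        if converged:
            break
    y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
    path = np.corrcoef(y1, y2)[0, 1]   # path coefficient for LV1 -> LV2
    return w1, w2, path

# Simulated demo: two 3-indicator composites driven by one latent variable
rng = np.random.default_rng(0)
lv = rng.normal(size=200)
X1 = np.column_stack([lv + rng.normal(scale=0.5, size=200) for _ in range(3)])
X2 = np.column_stack([0.6 * lv + rng.normal(scale=0.5, size=200) for _ in range(3)])
w1, w2, beta = pls_two_blocks(X1, X2)
print(f"estimated path coefficient: {beta:.2f}")
```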

“Using Generalized Estimating Equations for Longitudinal Data Analysis” by Gary A. Ballinger (April 2004)

The generalized estimating equation (GEE) approach of Zeger and Liang facilitates analysis of data collected in longitudinal, nested, or repeated measures designs. GEEs use the generalized linear model to estimate more efficient and unbiased regression parameters relative to ordinary least squares regression, in part because they permit specification of a working correlation matrix that accounts for the form of within-subject correlation of responses on dependent variables of many different distributions, including normal, binomial, and Poisson. The author briefly explains the theory behind GEEs, their beneficial statistical properties and limitations, and compares GEEs to suboptimal approaches for analyzing longitudinal data through the use of two examples. The first demonstration applies GEEs to the analysis of data from a longitudinal lab study with a counted response variable; the second applies GEEs to the analysis of data with a normally distributed response variable from subjects nested within branch offices of an organization.
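As a rough illustration of the approach (not the article's own examples), here is how a Poisson GEE with an exchangeable working correlation structure might be fit in Python with statsmodels; the data frame, variable names, and simulated counts are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: 40 subjects, each observed at 4 time points
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(40), 4),
    "time": np.tile(np.arange(4), 40),
})
df["y"] = rng.poisson(lam=np.exp(0.2 + 0.3 * df["time"]))  # counted response

# Poisson GEE clustering repeated observations within subject;
# the exchangeable structure assumes a common within-subject correlation
model = smf.gee("y ~ time", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```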

“Answers to 20 Questions About Interrater Reliability and Interrater Agreement” by James M. LeBreton and Jenell L. Senter (October 2008)

The use of interrater reliability (IRR) and interrater agreement (IRA) indices has increased dramatically during the past 20 years. This popularity is due, at least in part, to the increased role of multilevel modeling techniques (e.g., hierarchical linear modeling and multilevel structural equation modeling) in organizational research. IRR and IRA indices are often used to justify aggregating lower-level data used in composition models. The purpose of the current article is to expose researchers to the various issues surrounding the use of IRR and IRA indices often used in conjunction with multilevel models. To achieve this goal, the authors adopt a question-and-answer format and provide a tutorial in the appendices illustrating how these indices may be computed using SPSS.
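The paper's appendices work through these indices in SPSS; purely as an illustrative sketch (not the authors' code), here is the single-item rwg agreement index of James, Demaree, and Wolf (1984) in Python, which compares the observed variance of judges' ratings to the variance expected under a uniform "no agreement" null.

```python
import numpy as np

def rwg(ratings: np.ndarray, n_options: int) -> float:
    """Single-item r_wg: 1 minus the ratio of observed rating variance
    to the variance of a discrete uniform null over n_options categories."""
    observed_var = ratings.var(ddof=1)
    null_var = (n_options ** 2 - 1) / 12.0   # variance of a discrete uniform
    return 1.0 - observed_var / null_var

# Hypothetical example: 6 judges rating one target on a 5-point scale
print(f"rwg = {rwg(np.array([4, 4, 5, 4, 3, 4]), n_options=5):.2f}")
```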

All of the above articles from Organizational Research Methods will be free to access for the next two weeks. Want to know all about the latest research from Organizational Research Methods? Click here to sign up for e-alerts!


Answers to 20 Questions About Interrater Reliability and Interrater Agreement

James M. LeBreton, Purdue University, and Jenell L. Senter, Wayne State University, published “Answers to 20 Questions About Interrater Reliability and Interrater Agreement” in the October 2008 issue of Organizational Research Methods (ORM). It was the most frequently read ORM article of July 2011, based on calculations from Highwire-hosted articles. Most-read rankings are recalculated at the beginning of each month and are based on full-text and PDF views. Professor LeBreton kindly provided the following responses to questions about the article.

Who is the target audience for this article?

The primary audience is graduate students, faculty, and practitioners working in organizational psychology, human resource management, and organizational behavior. That said, I have received a number of e-mails about the paper from colleagues scattered across a wide array of social sciences.

Were there findings that were surprising to you?

Our particular paper was not designed to test a priori hypotheses; rather, we sought to synthesize and integrate roughly 30 years of thinking on issues related to interrater reliability and interrater agreement. So we did not have “findings,” per se. Instead, we structured our paper as 20 key questions related to the use of interrater reliability and agreement statistics. We then did our best to answer these questions in a way that would provide others with a clear set of guidelines for using these statistics in their research and work.

How do you see this study influencing future practice?

We hope that our paper provides helpful guidelines for individuals using interrater reliability and agreement statistics in their practice. Practitioners often invoke these statistics when conducting organizational climate or culture studies, performance evaluation studies, or even when examining the quality of ratings obtained via a panel of interviewers. Our paper was written to provide important information to help practitioners and researchers (a) select the correct agreement or reliability statistic, (b) estimate the statistic correctly, (c) interpret the statistic correctly, and (d) understand how various features of their situation might influence estimates of agreement/reliability (e.g., missing data, number of items on a scale, number of raters/judges).

How does this study fit into your body of work/line of research?

I have been publishing articles that use or refine estimates of interrater agreement and reliability for roughly 10 years. This paper represents an opportunity to reflect on my thinking over these years and integrate it with the thinking of my co-author (Jenell Wittmer-Senter) to arrive at a product that we believe will be helpful to both researchers and practitioners.

How did your paper change during the review process?

The most substantive changes involved providing a more balanced treatment of the rwg coefficient. This is the coefficient that I use in my own work, and thus we were probably a bit too laudatory in our evaluation. The revised paper presents both the pros and cons of rwg. While I still think it is a great way to estimate agreement, it is not without its limitations; those are now addressed more explicitly in our final paper. We also expanded the set of agreement coefficients we discussed to include awg, AD, and SD.
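To give a flavor of one of those additional coefficients, here is a minimal sketch (not from the paper itself) of the average deviation (AD) index, i.e., the mean absolute deviation of judges' ratings around the item mean or median, printed alongside the SD of the ratings; the ratings are invented.

```python
import numpy as np

def ad_index(ratings: np.ndarray, center: str = "mean") -> float:
    """Average deviation (AD) agreement index: the mean absolute deviation
    of judges' ratings around the item mean (or median)."""
    c = np.median(ratings) if center == "median" else ratings.mean()
    return np.abs(ratings - c).mean()

ratings = np.array([4, 4, 5, 4, 3, 4])   # 6 judges, one target
print(f"AD = {ad_index(ratings):.2f}, SD = {ratings.std(ddof=1):.2f}")
```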

What, if anything, would you do differently if you could go back and do this study again?

As I noted above, this paper wasn’t structured as a traditional “research study,” so there are no particular design or analysis issues I would do differently. Overall, I am quite pleased with the paper. I believe it has the potential to serve as a helpful resource for individuals wanting to estimate interrater agreement and/or interrater reliability. It was structured as a Q & A paper, and we certainly didn’t address all possible questions related to agreement and reliability, but I hope we addressed some of the more pressing ones for individuals who are new to using these statistics.

To learn more about Organizational Research Methods, please click here.
