Top Five Articles from Organizational Research Methods

Summer is just around the corner, bringing with it longer days and warmer weather. To celebrate the season, we present a list of the most read articles from Organizational Research Methods to add to your summer reading list.

“Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology” by Dennis A. Gioia, Kevin G. Corley, and Aimee Hamilton (January 2013)

For all its richness and potential for discovery, qualitative research has been critiqued as too often lacking in scholarly rigor. The authors summarize a systematic approach to new concept development and grounded theory articulation that is designed to
bring “qualitative rigor” to the conduct and presentation of inductive research.


“Validation of a New General Self-Efficacy Scale” by Gilad Chen, Stanley M. Gully, and Dov Eden (January 2001)

Researchers have suggested that general self-efficacy (GSE) can substantially contribute to organizational theory, research, and practice. Unfortunately, the limited construct validity work conducted on commonly used GSE measures has highlighted such potential problems as low content validity and multidimensionality. The authors developed a new GSE (NGSE) scale and compared its psychometric properties and validity to those of the Sherer et al. General Self-Efficacy Scale (SGSE). Studies in two countries found that the NGSE scale has higher construct validity than the SGSE scale. Although shorter than the SGSE scale, the NGSE scale demonstrated high reliability, predicted specific self-efficacy (SSE) for a variety of tasks in various contexts, and moderated the influence of previous performance on subsequent SSE formation. Implications, limitations, and directions for future organizational research are discussed.

“Common Beliefs and Reality About PLS: Comments on Rönkkö and Evermann (2013)” by Jörg Henseler, Theo K. Dijkstra, Marko Sarstedt, Christian M. Ringle, Adamantios Diamantopoulos, Detmar W. Straub, David J. Ketchen Jr., Joseph F. Hair, G. Tomas M. Hult, and Roger J. Calantone (April 2014)

This article addresses Rönkkö and Evermann’s criticisms of the partial least squares (PLS) approach to structural equation modeling. We contend that the alleged shortcomings of PLS are not due to problems with the technique, but instead to three problems with Rönkkö and Evermann’s study: (a) the adherence to the common factor model, (b) a very limited simulation design, and (c) overstretched generalizations of their findings. Whereas Rönkkö and Evermann claim to be dispelling myths about PLS, they have in reality created new myths that we, in turn, debunk. By examining their claims, our article contributes to reestablishing a constructive discussion of the PLS method and its properties. We show that PLS does offer advantages for exploratory research and that it is a viable estimator for composite factor models, which can pose an interesting alternative when the common factor model does not hold. Therefore, we conclude that PLS should continue to be used as an important statistical tool for management and organizational research, as well as other social science disciplines.

“Using Generalized Estimating Equations for Longitudinal Data Analysis” by Gary A. Ballinger (April 2004)

The generalized estimating equation (GEE) approach of Zeger and Liang facilitates analysis of data collected in longitudinal, nested, or repeated measures designs. GEEs use the generalized linear model to estimate more efficient and unbiased regression parameters relative to ordinary least squares regression, in part because they permit specification of a working correlation matrix that accounts for the form of within-subject correlation of responses on dependent variables of many different distributions, including normal, binomial, and Poisson. The author briefly explains the theory behind GEEs, their beneficial statistical properties, and their limitations, and compares GEEs to suboptimal approaches for analyzing longitudinal data through two examples. The first demonstration applies GEEs to the analysis of data from a longitudinal lab study with a counted response variable; the second applies GEEs to data with a normally distributed response variable from subjects nested within branch offices of an organization.
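For readers who want a feel for the approach, here is a minimal sketch of fitting a GEE in Python with statsmodels, mirroring the first demonstration (a counted response measured repeatedly within subjects). This is not the software or data from the article; the file and column names are illustrative assumptions.

```python
# Hedged sketch: GEE with a Poisson response and an exchangeable working
# correlation, for long-format longitudinal data (one row per subject-period).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal_lab_study.csv")  # hypothetical file

model = smf.gee(
    "ideas_generated ~ condition + period",      # hypothetical variables
    groups="subject",                            # within-subject clustering
    data=df,
    family=sm.families.Poisson(),                # counted response variable
    cov_struct=sm.cov_struct.Exchangeable(),     # working correlation matrix
)
result = model.fit()
print(result.summary())  # reports robust (sandwich) standard errors
```

A useful property noted in this literature is that the working correlation structure (exchangeable here) can be swapped for independence or autoregressive forms, and GEE parameter estimates remain consistent even if that choice is imperfect.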

“Answers to 20 Questions About Interrater Reliability and Interrater Agreement” by James M. LeBreton and Jenell L. Senter (October 2008)

The use of interrater reliability (IRR) and interrater agreement (IRA) indices has increased dramatically during the past 20 years. This popularity is, at least in part, because of the increased role of multilevel modeling techniques (e.g., hierarchical linear modeling and multilevel structural equation modeling) in organizational research. IRR and IRA indices are often used to justify aggregating lower-level data used in composition models. The purpose of the current article is to expose researchers to the various issues surrounding the use of IRR and IRA indices often used in conjunction with multilevel models. To achieve this goal, the authors adopt a question-and-answer format and provide a tutorial in the appendices illustrating how these indices may be computed using the SPSS software.
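The article’s tutorial appendices use SPSS; as a taste of what such computations involve, here is an illustrative Python sketch (not the authors’ code) of two commonly reported indices, rwg for a single item and ICC(1):

```python
# Minimal numpy sketch of two aggregation indices often used to justify
# aggregating individual ratings to the group level.
import numpy as np

def rwg(x, n_options):
    """James et al.'s r_wg for one item: 1 - (observed within-group variance
    over the variance expected under a uniform "no agreement" null).
    x: one group's ratings; n_options: number of scale points (A)."""
    expected_var = (n_options**2 - 1) / 12.0
    return 1.0 - x.var(ddof=1) / expected_var

def icc1(groups):
    """ICC(1) from a balanced one-way random-effects ANOVA: the share of
    rating variance attributable to group membership.
    groups: 2-D array, one row per group."""
    k, n = groups.shape
    grand = groups.mean()
    msb = n * ((groups.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical example: 4 teams, 5 raters each, one 5-point climate item.
ratings = np.array([[4, 4, 5, 4, 4],
                    [2, 3, 2, 2, 3],
                    [5, 5, 4, 5, 5],
                    [3, 3, 3, 4, 3]])
print([round(rwg(team, 5), 2) for team in ratings])  # within-team agreement
print(round(icc1(ratings), 2))                       # variance due to teams
```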

All of the above articles from Organizational Research Methods will be free to access for the next two weeks. Want to know all about the latest research from Organizational Research Methods? Click here to sign up for e-alerts!

*Reading image attributed to Herry Lawford (CC)

Throwback Thursday: What is Organizational Performance?

[Happy #ThrowbackThursday! We’re excited to revisit one of our most read posts on the Organizational Research Methods article “Exploring the Dimensions of Organizational Performance: A Construct Validity Study.”]

kenteegardin (cc)

Editor’s note: We are pleased to welcome P. Maik Hamann, Frank Schiemann, Lucia Bellora, and Thomas W. Guenther, all of Technische Universität Dresden, whose paper “Exploring the Dimensions of Organizational Performance: A Construct Validity Study” was published in Volume 16, Number 1 (January 2013) of Organizational Research Methods.

The raison d’être of management research is to prove that management instruments and management methods, such as strategic planning, zero-based budgeting, or the balanced scorecard, are able to enhance organizational performance. In addition, major theories in management research, for instance all contingency theories, include organizational performance as an important dependent variable in their conceptual arguments. But what is organizational performance? How can it be defined and measured in a reliable and valid manner? The Organizational Research Methods article “Exploring the Dimensions of Organizational Performance: A Construct Validity Study” provides answers to these questions.

Every time we review the existing literature on the effect of management methods on organizational performance, we find it hard to compare results across studies. The contradictions between studies are mostly caused by differing concepts and measurement approaches for organizational performance. If, due to completely different concepts and measurement systems, we are not able to combine study results, how can we as researchers claim to contribute to management research with yet another study applying a new measurement approach to organizational performance? This question triggered our research team’s interest in measurement approaches, construct validation, and the conceptual nature of organizational performance. After reviewing the previous literature on this subject, we recognized that no construct validation study had yet jointly addressed the conceptual level of organizational performance and the construct validity of a comprehensive set of indicators at the operational level. This was the gap we wanted to close with our study.

Following Combs, Crook, and Shook (2005),1 we distinguish between operational and organizational performance. In this framework, operational performance combines all non-financial outcomes of organizations, whereas the conceptual domain of organizational performance is limited to economic outcomes. On this basis, we identify four organizational performance dimensions: profitability, liquidity, growth, and stock market performance. For each of these dimensions, we propose and test a set of construct-valid indicators on a large panel data set of 37,262 firm-years for 4,868 listed U.S. organizations.
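To make the four dimensions concrete, here is a sketch of textbook indicators one might compute from financial-statement data. These are hypothetical illustrations; the construct-valid indicator set we actually test is documented in the article.

```python
# Illustrative (hypothetical) indicator formulas for the four dimensions.
def profitability(net_income: float, total_assets: float) -> float:
    return net_income / total_assets                 # return on assets

def liquidity(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities      # current ratio

def growth(sales_now: float, sales_prior: float) -> float:
    return sales_now / sales_prior - 1.0             # sales growth rate

def stock_market_performance(price_now: float, price_prior: float,
                             dividends: float) -> float:
    # total shareholder return over the period
    return (price_now - price_prior + dividends) / price_prior
```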

Interestingly, the growth dimension is troublesome under conditions of high environmental instability (e.g., in 2002 after the dotcom bubble or at the beginning of the financial crisis in 2008). We perceive two possible explanations for this finding. First, growth is examined based on three aspects of size: sales, employees, and assets. These aspects differ in their reactivity to increasing environmental instability (e.g., although sales might decrease immediately, investments already under way will be finished, thus increasing an organization’s asset base). Second, Higgins (1977)2 introduced the concept of a sustainable growth rate that must be in alignment with overall organizational performance, financial policy, and the dividend payout ratio. If an organization grows at a rate above its sustainable growth rate, the other aspects (e.g., other dimensions of organizational performance) will eventually decrease. Fully developing these two arguments was beyond the scope of our article, but they pose interesting questions for future research on the growth dimension of organizational performance.
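For readers unfamiliar with Higgins’s concept, a common textbook simplification (an assumption here, not the full Higgins (1977) model, which also incorporates leverage and asset turnover) expresses the sustainable growth rate in terms of return on equity and the retention ratio:

```python
# Simplified sustainable growth rate: the sales growth a firm can fund
# without changing its financial policy or issuing new equity.
def sustainable_growth_rate(roe: float, payout_ratio: float) -> float:
    retention = 1.0 - payout_ratio          # share of earnings reinvested
    return roe * retention / (1.0 - roe * retention)

# A firm earning 15% on equity and paying out 40% of earnings:
print(f"{sustainable_growth_rate(0.15, 0.40):.1%}")  # roughly 9.9%
```

Growth persistently above this rate must eventually be paid for elsewhere, which is the tension with the other performance dimensions noted above.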

In summary, we propose a validated set of measurement indicators for the organizational performance construct for future management research. Furthermore, we highlight situations in which construct validity is hampered.

1 Combs, J. G., Crook, T. R., & Shook, C. L. (2005). The dimensionality of organizational performance and its implications for strategic management research. In D. J. Ketchen (Ed.), Research methodology in strategy and management (Vol. 2, pp. 259-286). Amsterdam: Elsevier.

2 Higgins, R. C. (1977). How Much Growth Can A Firm Afford? Financial Management, 6(3), 7-16.

Read the paper, “Exploring the Dimensions of Organizational Performance: A Construct Validity Study,” online in Organizational Research Methods.

Common Beliefs and Reality About PLS

[Editor’s Note: We’re pleased to welcome Dr. Jörg Henseler, who was the corresponding author on the article, “Common Beliefs and Reality About PLS: Comments on Rönkkö and Evermann (2013)” from Organizational Research Methods.]

The extent to which an issue is raised by successive generations of researchers and practitioners is a subtle indicator of its importance. The benefits and limitations of partial least squares path modeling (PLS) are such an issue, one that has been heatedly debated across a wide variety of disciplines. Tying in with this stream of research, Rönkkö and Evermann (2013), in their recent Organizational Research Methods article, sought to examine “statistical myths and urban legends surrounding the often-stated capabilities of the PLS method and its current use in management and organizational research.” Based on a series of arguments and simulation studies, Rönkkö and Evermann (2013) conclude that “PLS results can be used to validate a measurement model is a myth” (p. 438); “the PLS path estimates cannot be used in NHST [null hypothesis significance testing]” (p. 439); “the small-sample-size capabilities of PLS are a myth” (p. 442); “PLS does not have [the capability to] reveal patterns in the data” (p. 442); “PLS lacks diagnostic tools” (p. 442); “PLS cannot be used to test models” (p. 442); and “PLS is not an appropriate choice for early-stage theory development and testing” (p. 442). In light of these results, the authors conclude that the use of PLS is difficult to justify and that researchers should rather revert to regression with summed scales or factor scores.

Considering the increasing popularity of PLS in the strategic management (Hair et al. 2012a), marketing (Hair et al. 2012b), and management information systems disciplines (Ringle et al. 2012; Figure 1), these claims are certainly alarming. But how is it possible that Rönkkö and Evermann (2013) cannot find even a single positive attribute of PLS, a verdict that stands against the research of great minds such as the founder of PLS, Herman Wold, and key contributors such as Jan-Bernd Lohmöller and Theo Dijkstra? Does the criticism really deliver what Rönkkö and Evermann (2013) promise, or do these authors create myths by chasing myths?

[Figure 1: The growing use of PLS in strategic management, marketing, and management information systems research]

The Organizational Research Methods article “Common Beliefs and Reality about Partial Least Squares: Comments on Rönkkö & Evermann (2013),” authored by Jörg Henseler, Theo K. Dijkstra, Marko Sarstedt, Christian M. Ringle, Adamantios Diamantopoulos, Detmar W. Straub, David J. Ketchen Jr., Joseph F. Hair, G. Tomas M. Hult, and Roger J. Calantone, provides answers to these questions and shows that none of the alleged shortcomings of PLS stands up. More precisely, we show that Rönkkö and Evermann’s (2013) surprising findings are not inherent in the PLS method but are rather the result of several limitations of their study, limitations that severely restrict the validity of their findings.

The major shortcoming of Rönkkö and Evermann’s (2013) study is that they neglect the fact that PLS estimates a composite factor model, not a common factor model. Although the composite factor model is often a good approximation to the common factor model, there are important differences. Rönkkö and Evermann (2013) regard PLS simply as a suboptimal estimator of common factor models. But just as a hammer is a suboptimal tool for driving screws, PLS is a suboptimal tool for estimating common factor models. In contrast, PLS is a useful tool for estimating composite factor models.

Another fundamental limitation of Rönkkö and Evermann’s (2013) study relates to their simulation design. Research on PLS has generated a multitude of simulation studies that compare the technique’s performance with that of other approaches to structural equation modeling. These studies vary considerably in terms of their model set-ups. Despite the fact that most recent simulation studies use quite complex models with a multitude of constructs and path relationships, Rönkkö and Evermann (2013) chose a two-construct model with a single path as the basis for their simulation. This inevitably raises the question of whether such a model can be considered representative of published research from an applied standpoint. Bearing this in mind, we revisited review studies on the use of PLS in strategic management, marketing, and information systems research. Out of the 532 PLS models estimated in 306 journal articles, exactly one model (0.2 percent) had two constructs; the average number of constructs was 7.94 in marketing, 7.50 in strategic management, and 8.12 in information systems. Their simulation set-up is thus not remotely representative of research studies using PLS. Several other aspects of Rönkkö and Evermann’s (2013) simulation design cast further doubt on their findings. Further limitations relate to implicit assumptions in their interpretation of the PLS method, over-stretched generalization of their findings, misinterpretation of the literature, and reporting errors in their simulation results. By disclosing these shortcomings, our study re-establishes a constructive discussion of the PLS method and its properties.
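To make the debate less abstract, the following is a minimal numpy sketch of how PLS (mode A outer estimation, centroid inner scheme) behaves on exactly this two-construct, single-path design when the data come from a common factor population. The population values are assumptions for illustration; this is neither article’s simulation code. Note how the estimate is attenuated relative to the population path, which is precisely the composite-versus-common-factor mismatch at stake.

```python
# Minimal PLS path modeling sketch: two constructs, one path, three
# indicators each; assumed population values (beta = 0.5, loadings = 0.8).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # large n so the estimate sits near its population value

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Generate data from a common factor population.
eta1 = rng.normal(size=n)
eta2 = 0.5 * eta1 + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

def indicators(eta, loading=0.8, k=3):
    noise = rng.normal(size=(len(eta), k))
    return loading * eta[:, None] + np.sqrt(1 - loading**2) * noise

X1 = standardize(indicators(eta1))
X2 = standardize(indicators(eta2))

# PLS: alternate inner and outer estimation until the weights settle.
w1, w2 = np.ones(3), np.ones(3)
for _ in range(300):
    y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)  # outer proxies
    s = np.sign(np.corrcoef(y1, y2)[0, 1])               # centroid scheme
    z1, z2 = s * y2, s * y1                              # inner proxies
    w1_new, w2_new = X1.T @ z1 / n, X2.T @ z2 / n        # mode A weights
    if max(abs(w1_new - w1).max(), abs(w2_new - w2).max()) < 1e-10:
        w1, w2 = w1_new, w2_new
        break
    w1, w2 = w1_new, w2_new

y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
beta = y1 @ y2 / n  # standardized path coefficient
# Prints roughly 0.42, not 0.50: the composite proxies are fallible, so the
# path is attenuated when the population is a common factor model.
print(f"PLS path estimate: {beta:.3f}")
```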

On a more general level, our article should also be read as a reminder that there is no such thing as an estimation method that is best for every model, every distribution, every set of parameter values, and every sample size. For all methods, no matter how impressive their pedigree (maximum likelihood being no exception), one can find situations where they do not work as advertised. One can always construct a setup where a given method, any method, ‘fails’: a (very) small sample, parameter values close to critical boundaries, very skewed or thick-tailed distributions, or any combination thereof will do the trick. It is just a matter of perseverance to make any method appear universally ‘wrong.’

A constructive attitude, one that aims to ascertain when PLS works well and how it can be improved, would seem more conducive to improving the quality of research: “We believe that such debates are fruitful as long as they do not develop a ritualistic adherence to dogma and do not advocate one technique’s use as generally advantageous in all situations. Any extreme position that (often systematically) neglects the beneficial features of the other technique and may result in prejudiced boycott calls [citations removed], is not good research practice and does not help to truly advance our understanding of methods (or any other subject)” (Hair et al. 2012c, p. 313).

References
Hair, J. F., Sarstedt, M., Pieper, T. M., & Ringle, C. M. (2012a). Applications of partial least squares path modeling in management journals: a review of past practices and recommendations for future applications. Long Range Planning, 45(5-6), 320-340.
Hair, J. F., Sarstedt, M., Ringle, C. M., & Mena, J. A. (2012b). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3), 414-433.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2012c). Partial least squares: The better approach to structural equation modeling? Long Range Planning, 45(5-6), 312-319.
Henseler, J., Dijkstra, T. K., Sarstedt, M., Ringle, C. M., Diamantopoulos, A., Straub, D. W., Ketchen, D. J., Hair, J. F., Hult, G. T. M., & Calantone, R. J. (2014). Common beliefs and reality about partial least squares: Comments on Rönkkö & Evermann (2013). Organizational Research Methods, forthcoming.
Ringle, C. M., Sarstedt, M., & Straub, D. W. (2012). A critical look at the use of PLS-SEM in MIS Quarterly. MIS Quarterly, 36(1), iii-xiv.
Rönkkö, M., & Evermann, J. (2013). A critical examination of common beliefs about partial least squares path modeling. Organizational Research Methods, 16(3), 425-448.

Click here to read the paper “Common Beliefs and Reality About PLS: Comments on Rönkkö and Evermann (2013)” from Organizational Research Methods. Want to know about all the latest from Organizational Research Methods? Click here to sign up for e-alerts!

Jörg Henseler, Institute for Management Research, Radboud University Nijmegen, Nijmegen, the Netherlands and ISEGI, Universidade Nova de Lisboa, Lisbon, Portugal

Theo K. Dijkstra, Faculty of Economics and Business, University of Groningen, Groningen, the Netherlands

Marko Sarstedt, Otto-von-Guericke University Magdeburg, Magdeburg, Germany and University of Newcastle, Callaghan, Australia

Christian M. Ringle, University of Newcastle, Callaghan, Australia and Hamburg University of Technology, Hamburg, Germany

Adamantios Diamantopoulos, University of Vienna, Vienna, Austria

Detmar W. Straub, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA, USA

David J. Ketchen Jr., Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA

Joseph F. Hair, Coles College of Business, Kennesaw State University, Kennesaw, GA, USA

G. Tomas M. Hult, Broad College of Business, Michigan State University, East Lansing, MI, USA

Roger J. Calantone, Broad College of Business, Michigan State University, East Lansing, MI, USA

The Problem with Surveys in Research

[Editor’s Note: We are pleased to welcome Ben Hardy who collaborated with Lucy R. Ford on their article entitled “It’s Not Me, It’s You: Miscomprehension in Surveys,” available now in the OnlineFirst section of Organizational Research Methods.]

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’

There is a little of the Humpty Dumpty in all of us. When we communicate with others we tend to think about what we want to say – and choose the words to mean what we want them to mean – rather than thinking carefully about what it is that others will hear.

The same is true when we conduct survey research. We identify a construct, such as job satisfaction, carefully define it, and then produce a series of items which we believe will tap into the cognitive domain occupied by this construct. We then test these items to check that people understand them and use a variety of statistical techniques to produce a finished scale which, we believe, measures just what we choose it to mean – neither more nor less.

Alternatively we may bypass all of this and choose to use a published scale, assuming that all this hard work has been done for us.

Unfortunately, we do not tend to pay much attention to the actual words of the items. Sure, we check whether people understand them, but we seldom check whether they understand them in exactly the same way as we do. Instead, like Humpty Dumpty, we fall back on assuming that words mean what we choose them to mean – neither more nor less.

The average noun has 1.74 meanings and the average verb 2.11 (Fellbaum, 1990). This leaves a good deal of scope for words to mean very different things, whatever we, or Humpty Dumpty, might choose. Consider the item ‘How satisfied are you with the person who supervises you – your organizational superior?’ (Agho, Price, & Mueller, 1992). What does it mean to you? How satisfied are you with your boss? (49) How satisfied are you with your boss and the decisions they make? (25) Is your supervisor knowledgeable and competent? (6) Do you like your supervisor? (14) One of these probably accords with your interpretation, and the figures in brackets are the percentage of people selecting that particular option. You might be interested to know that quite a few people do not agree with you. You knew exactly what the item meant. And so did everyone else. The problem is that you did not agree.

“So what?” you might argue. If the stats work out, then is there a problem? Well yes, there is. Firstly, we are not measuring what we think we are measuring. Few of us would trust a doctor whose laboratory tests might or might not be measuring what they claim to measure – even if the number looked reassuringly within the normal range. So should we diagnose organizational pathologies on the basis of surveys which may or may not be measuring what they claim to measure – even if the number is reassuring? Simply because something performs well statistically doesn’t mean that it tells you anything useful. Secondly, we do not know what individuals would score if they were actually answering exactly the same question that the researcher intended. Thirdly, the different interpretations mean that there are different sub-groups within a population, and this may have knock-on effects when linked to other factors, such as intention to leave.

So what is to be done? There are a number of simple fixes. Probably the easiest is to actually go and talk to some of the people who are going to be surveyed and ask them what they think the items in the survey actually mean. This will give a good idea of whether your interpretation differs wildly from theirs, and in many cases you will find that it does.

This problem of other people’s interpretations differing from our own extends beyond survey research, of course. Indeed, there is a whole field of research, that of linguistic pragmatics, which seeks to understand why we interpret things the way that we do. At the heart of it all, however, is communication. And so the assumption that words mean what we choose them to mean – neither more nor less – is a fallacious one, at least as far as other people are concerned. We need to stop thinking about what we are saying and spend a little more time thinking about what others are hearing. Humpty Dumpty was wrong. It is not we who choose what words mean; it is the recipients of those words. And we ignore their views at our peril.

Agho, A. O., Price, J. L., & Mueller, C. W. 1992. Discriminant validity of measures of job satisfaction, positive affectivity and negative affectivity. Journal of Occupational and Organizational Psychology, 65(3): 185-196.

Fellbaum, C. 1990. English Verbs as a Semantic Net. International Journal of Lexicography, 3(4): 278-301.

Read “It’s Not Me, It’s You: Miscomprehension in Surveys,” from Organizational Research Methods for free by clicking here. Click here to sign up for e-alerts from Organizational Research Methods to get notified for all the latest articles like this.

Ben Hardy is a lecturer in management at the Open University Business School. His research examines the role of physiological processes in management and finance, morale in organizations, and linguistic factors in survey research. He obtained his PhD from the University of Cambridge in 2009. He also earned an MBA and MPhil from the same institution and a bachelor of veterinary medicine and surgery from the University of Edinburgh. He is a member of the Royal College of Veterinary Surgeons.

Lucy R. Ford is an assistant professor of management in the Haub School of Business at Saint Joseph’s University. Her research interests include leadership, teams, and linguistic issues in survey development. Dr. Ford has served on the executive committee of the Research Methods Division of the Academy of Management, and as the co-chair of the pre-doctoral consortium hosted by Southern Management Association. She has delivered numerous workshops on research methods and scale development at both regional and national conferences. Her work has been published in The Leadership Quarterly, Journal of Organizational Behavior, and Journal of Occupational and Organizational Psychology, among others. She received her BBA in human resources management from East Tennessee State University, and her PhD in organizational behavior from Virginia Commonwealth University.

Management INK in 2013: Revisiting Research on Organizational Performance

In the spirit of reflection on 2013, we are pleased to highlight one of the most read articles of the year, “Exploring the Dimensions of Organizational Performance: A Construct Validity Study,” published in Volume 16, Number 1 (January 2013) of Organizational Research Methods. The article’s findings reveal that the dimensions of organizational performance are not what researchers had previously proposed, and it offers new guidance for measuring organizational performance in future research.

This article is free to read for the next week! The paper by P. Maik Hamann, Frank Schiemann, Lucia Bellora, and Thomas W. Guenther can be read online in Organizational Research Methods.

What is Organizational Performance?

Editor’s note: We are pleased to welcome P. Maik Hamann, Frank Schiemann, Lucia Bellora, and Thomas W. Guenther, all of Technische Universität Dresden, whose paper “Exploring the Dimensions of Organizational Performance: A Construct Validity Study” was published in Volume 16, Number 1 (January 2013) of Organizational Research Methods.

The raison d’être of management research is to prove that management instruments and management methods, such as strategic planning, zero-based budgeting, or the balanced scorecard, are able to enhance organizational performance. In addition, major theories in management research, for instance all contingency theories, include organizational performance as an important dependent variable in their conceptual arguments. But what is organizational performance? How can it be defined and measured in a reliable and valid manner? The Organizational Research Methods article “Exploring the Dimensions of Organizational Performance: A Construct Validity Study” provides answers to these questions.

Every time we review the existing literature on the effect of management methods on organizational performance, we find it hard to compare results across studies. The contradictions between studies are mostly caused by differing concepts and measurement approaches for organizational performance. If, due to completely different concepts and measurement systems, we are not able to combine study results, how can we as researchers claim to contribute to management research with yet another study applying a new measurement approach to organizational performance? This question triggered our research team’s interest in measurement approaches, construct validation, and the conceptual nature of organizational performance. After reviewing the previous literature on this subject, we recognized that no construct validation study had yet jointly addressed the conceptual level of organizational performance and the construct validity of a comprehensive set of indicators at the operational level. This was the gap we wanted to close with our study.

Following Combs, Crook, and Shook (2005),1 we distinguish between operational and organizational performance. In this framework, operational performance combines all non-financial outcomes of organizations, whereas the conceptual domain of organizational performance is limited to economic outcomes. On this basis, we identify four organizational performance dimensions: profitability, liquidity, growth, and stock market performance. For each of these dimensions, we propose and test a set of construct-valid indicators on a large panel data set of 37,262 firm-years for 4,868 listed U.S. organizations.

Interestingly, the growth dimension is troublesome under conditions of high environmental instability (e.g., in 2002 after the dotcom bubble or at the beginning of the financial crisis in 2008). We perceive two possible explanations for this finding. First, growth is examined based on three aspects of size: sales, employees, and assets. These aspects differ in their reactivity to increasing environmental instability (e.g., although sales might decrease immediately, investments already under way will be finished, thus increasing an organization’s asset base). Second, Higgins (1977)2 introduced the concept of a sustainable growth rate that must be in alignment with overall organizational performance, financial policy, and the dividend payout ratio. If an organization grows at a rate above its sustainable growth rate, the other aspects (e.g., other dimensions of organizational performance) will eventually decrease. Fully developing these two arguments was beyond the scope of our article, but they pose interesting questions for future research on the growth dimension of organizational performance.

In summary, we propose a validated set of measurement indicators for the organizational performance construct for future management research. Furthermore, we highlight situations in which construct validity is hampered.

1 Combs, J. G., Crook, T. R., & Shook, C. L. (2005). The dimensionality of organizational performance and its implications for strategic management research. In D. J. Ketchen (Ed.), Research methodology in strategy and management (Vol. 2, pp. 259-286). Amsterdam: Elsevier.

2 Higgins, R. C. (1977). How Much Growth Can A Firm Afford? Financial Management, 6(3), 7-16.

Read the paper, “Exploring the Dimensions of Organizational Performance: A Construct Validity Study,” online in Organizational Research Methods.

Dealing With Outliers in Organizational Science Research

Editor’s note: We are pleased to welcome Herman Aguinis, Ryan K. Gottfredson, and Harry Joo, all of Indiana University, whose article “Best-practice Recommendations for Defining, Identifying, and Handling Outliers” is forthcoming in Organizational Research Methods and now available in the journal’s OnlineFirst section.

Our article was motivated by the need to address the following key questions faced by virtually every researcher conducting empirical work: Do I have outliers in my data? How do I know whether I do? Are they affecting my results? How do I deal with them? Malcolm Gladwell, in his bestselling book “Outliers,” went so far as to state that a greater understanding of outliers can help us “build a better world…that provides opportunities for all” (p. 268). However, our literature review based on 46 methodological sources and 232 organizational science journal articles addressing outliers revealed that researchers usually view outliers as “data problems” that must be “fixed.” Our review also uncovered inconsistencies in recommendations regarding outliers across various methodological sources, as well as the use of a variety of faulty practices by substantive researchers.

Our goal was to produce a manuscript that describes best-practice recommendations on how to define, identify, and handle outliers. Our article offers specific recommendations that researchers can follow in a sequential manner to deal with outliers. We believe that our guidelines will not only be helpful for researchers but will also serve as a useful tool for journal editors and reviewers in the evaluation of manuscripts. For example, much as editors and reviewers should demand that authors be clear and specific about a study’s limitations, we suggest that they should also request that authors include a few sentences in every empirically based manuscript describing how error, interesting, and influential outliers were defined, identified, and handled. Moreover, guidelines for publication such as those produced by the Academy of Management and the American Psychological Association should require authors to include a short section on “Outlier Detection and Management” within the results section, describing how each of the three types of outliers was addressed. Our decision-making charts can serve as a checklist in this regard. Overall, we hope that our guidelines will result in more consistent and transparent practices regarding the treatment of outliers in organizational and social science research.
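To illustrate one step of such a workflow (a sketch of common identification techniques, not the authors’ full decision-making charts), error-outlier candidates can be flagged with z-scores and influential cases with Cook’s distance:

```python
# Hedged sketch on simulated data: flag univariate outlier candidates and
# regression-influential cases. Cutoffs shown are common rules of thumb.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)
y[0] = 25.0                                  # plant one aberrant case

# Candidate error outliers: values implausibly far from the mean.
z = (y - y.mean()) / y.std(ddof=1)
error_candidates = np.flatnonzero(np.abs(z) > 3)

# Influential outliers: cases that materially change the fitted model.
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
cooks_d = fit.get_influence().cooks_distance[0]
influential = np.flatnonzero(cooks_d > 4 / len(y))

print("error candidates:", error_candidates)
print("influential cases:", influential)
```

Whether a flagged case is then corrected, retained, or studied in its own right is exactly the kind of judgment the article’s recommendations are meant to make explicit and reportable.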

Read “Best-practice Recommendations for Defining, Identifying, and Handling Outliers” in Organizational Research Methods.

Bios

Herman Aguinis is the Dean’s Research Professor, a professor of organizational behavior and human resources, and the founding director of the Institute for Global Organizational Effectiveness in the Kelley School of Business, Indiana University. His research interests span several human resource management, organizational behavior, and research methods and analysis topics. He has published five books and more than 100 articles in refereed journals. He is the recipient of the 2012 Academy of Management Research Methods Division Distinguished Career Award and a former editor-in-chief of Organizational Research Methods.

Ryan K. Gottfredson is a doctoral student in organizational behavior and human resource management in the Kelley School of Business, Indiana University. His research interests include performance management, research methods and analysis, and relationship perceptions in the workplace (e.g., trust, justice). His work has appeared in several refereed journals including Academy of Management Learning and Education, Journal of Organizational Behavior, and Business Horizons.

Harry Joo is a doctoral student in organizational behavior and human resource management in the Kelley School of Business, Indiana University. His research interests include performance management and research methods and analysis. His work has appeared in several refereed journals including Organizational Research Methods, Journal of Management, Academy of Management Perspectives, and Business Horizons.