ASQ Improving Evidence Presentation: Resources and Tools

[The following post is re-blogged from the ASQ Blog. Click here to view the original article.]

This is a post that contains resources and tools to improve evidence presentation, echoing one of ASQ’s most important initiatives. The ASQ blog will keep updating the post and we welcome crowdsourcing and sharing of ideas from management scholars all around the world. Get in touch with asqblog@gmail.com to share your thoughts!

What is the ASQ Improving Evidence Presentation initiative? (#ASQEvidencePresentation)

As you have seen in the “From the Editor” that Administrative Science Quarterly (ASQ) published in June 2017, the editors of ASQ strongly encourage authors to show the data in their manuscripts by using graphical approaches that convey the most important features of the data and their theoretical explanation before estimating models. Preferably this should be done as early as the introduction, in order to spur the reader’s interest and give an indication of why the paper is valuable. Such use of graphical methods is rare in organizational theory and in management research more generally, so we will gradually introduce methods of graphical analysis that researchers can use.

Graphical methods for showing the data are integrated into Stata, the software most commonly used by management researchers, and the Stata commands offer a good blend of simplicity and flexibility. Nevertheless, they require some training, especially because statistical training in many schools is model-focused and highly variable in how well graphical methods are taught. Newer analytical tools such as R and Python are also on the rise and increasingly used to visualize data, which requires even more training and knowledge sharing. Here we collaborate with the ASQ blog to host a resource center where tools and techniques to improve evidence presentation are crowdsourced and curated. The resource center will be updated regularly, and editors of ASQ will contribute examples with data and do-files to demonstrate evidence presentation. We hope that the resource center will help improve the way you present evidence in your research.

– Henrich Greve, Editor of Administrative Science Quarterly

ASQ paper development workshop materials

2017 ASQ paper development workshop, AOM in Atlanta

Materials on the importance of evidence presentation

An Economist’s Guide to Visualizing Data, written by Jonathan A. Schwabish (2014) in Journal of Economic Perspectives, 28(1): 209–234.

Editor’s examples

Why indie books sell well: an evidence presentation example based on Greve & Song (2017), “Amazon Warrior: How a Platform Can Restructure Industry Power and Ecology.” Data and do-file are available.

Tools using Stata

  1. Introduction to some important methods, including scatterplots, line plots, bar graphs, box plots, and kernel (full distribution) plots.
  2. Example of more advanced programming, which is needed because Stata does not (yet) have a simple way of showing a grouped bar graph with error bars, an important graph for taking a first look at group differences.
  3. Introduction to spmap, an add-on procedure for producing mapped data displays. Displaying statistics on a map can be very helpful for any kind of research involving spatial relations. Before this add-on, such mapping required switching to different software and exporting data, which is both time-consuming and a potential source of errors.
  4. Introduction to the coefplot function, a graphical display of coefficient magnitudes. This is a very informative way of giving a comparative view of a full regression model, or parts of it, in a compact graph. This routine has a very flexible set of plots, as displayed on this link.
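The grouped bar graph with error bars in item 2 displays only two quantities per group: the mean (bar height) and the standard error (error bar). The sketch below computes them in Python, since the post notes that R and Python users face the same presentation task; the group labels and values here are entirely hypothetical, a minimal illustration rather than any journal's actual data.

```python
import statistics

# Hypothetical observations for two groups; in practice these would be
# the outcome values for each group in your dataset.
data = {
    "Group A": [4.1, 3.8, 4.5, 4.0, 3.9],
    "Group B": [5.2, 5.6, 4.9, 5.4, 5.1],
}

# Per-group mean and standard error: the quantities a grouped bar
# graph with error bars actually shows.
summary = {}
for group, values in data.items():
    mean = statistics.fmean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    summary[group] = (mean, se)

for group, (mean, se) in summary.items():
    print(f"{group}: mean = {mean:.2f}, SE = {se:.2f}")
```

In matplotlib these values feed directly into `plt.bar(groups, means, yerr=ses)`; in Stata the usual workaround combines `twoway bar` with `rcap` ranges built from the same means and standard errors, which is the kind of extra programming item 2 refers to.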

Psychological Science Seeking Preregistered Replications

[The following post is re-blogged from Method Space. Click here to view the original article.]


Making a statement in the ongoing “replication” or “reproducibility” crisis, the journal Psychological Science will now accept a special class of research-based papers that report on attempts to re-create experiments that had influential findings and that were first published in Psychological Science.
This new category of “preregistered direct replications,” or PDRs, aims “to create conditions that experts agree test the same hypotheses in essentially the same way as the original study,” explained D. Stephen Lindsay, a cognitive psychologist at the University of Victoria and editor of Psychological Science.

That psychology in particular is in need of a replication tonic has been widely accepted. “To me,” Jim Grange, a senior lecturer in psychology at Keele University, wrote last year, “it is clear that there is a reproducibility crisis in psychological science, and across all sciences. Murmurings of low reproducibility began in 2011 – the ‘year of horrors’ for psychology – with a high profile fraud case. But since then, The Open Science Collaboration has published the findings of a large-scale effort to closely replicate 100 studies in psychology. Only 36 percent of them could be replicated.”

Under the PDR description, the ‘direct’ part of the replication rubric refers to sticking to the original methodology as closely as possible. “It is impossible to conduct a study twice in exactly the same way,” Lindsay said in a letter announcing the PDRs, “and doing so would not always be desirable (e.g., often it would be better to test a larger number of subjects than in the original study; if the original study included some problematic items or unclear instructions, it is probably best to correct those shortcomings; if the original and replication studies were conducted in different cultural contexts then it may be appropriate to change surface-level aspects of the procedure).”

It is expected, Lindsay said, that the author of the target piece will be invited to provide a review of the replication, along with at least two independent experts. And in keeping with the ‘preregistered’ aspect of the PDR, researchers are asked to submit proposals for review before data collection begins. “A virtue of pre-data-collection submissions is that decisions about scientific merit are independent of how the results come out,” he said. “Another advantage is that reviewers and the editor have an opportunity to improve a study before it is conducted. We may therefore at some point in the future require that PDRs be submitted as proposals prior to data collection. But for now we will also consider PDRs that have already been conducted if they were preregistered.” And in keeping with an earlier announcement also focused on transparency, Lindsay said that would-be authors are asked to provide reviewers with the data used in their published analyses.

Lindsay made clear that while the replication needed to leverage an article that originally appeared in Psychological Science, merely having appeared there was not sufficient for a PDR. “The primary criterion is general theoretical significance,” he emphasized.

Still, he said, the journal – which takes its lumps from some observers like Andrew Gelman – has a responsibility to not turn its back on what it’s published. “One of the motivations for adding PDRs is the belief that a journal is responsible for the works it publishes (as per Sanjay Srivastava’s ‘Pottery Barn rule’ blog post). That said,” he continued, “our goal is not to publish ‘gotchas!’ Rather, inviting PDRs is one of many steps taken to increase the extent to which this journal contributes to efforts to make psychology a more cumulative science.”

Regardless of whether a replication confirms or questions the original findings, Lindsay said, the outcomes of the PDRs will be “valuable and informative.” He cited a 2009 column by then Association for Psychological Science President Walter Mischel, which noted that “replications sometimes yield more nuanced results that spark new hypotheses and contribute to the elaboration of psychological theories.”

The PDRs will launch with a replication of an fMRI study Psychological Science published in 2008, Lindsay said. That original study, conducted by William A. Cunningham, Jay J. Van Bavel, and Ingrid R. Johnsen, detailed evidence that processing goals can modulate activity in the amygdala, a part of the brain that influences memory, emotions, and decision-making. The new article explains how the University of Denver’s Kateri McRae and the University of California, Los Angeles’ Daniel Lumian conducted a high-powered replication of that experiment – in consultation with Cunningham and Van Bavel – and in the process replicated the original’s central finding.

“This article,” said Lindsay, “is a fine model of a preregistered direct replication,” in part because of its greater statistical power and multiple converging analyses compared to the original.

Lindsay said PDRs are not the same as the registered replication reports that have appeared in the sister journal, Perspectives on Psychological Science, and which along with other multi-lab empirical papers will appear in the new journal, Advances in Methods and Practices in Psychological Science.

Benefits and Costs of Covert Research: An Analysis

[We’re pleased to welcome author Thomas Roulet of King’s College, UK.  Roulet recently published an article in Organizational Research Methods entitled Reconsidering the Value of Covert Research: The Role of Ambiguous Consent in Participant Observation, co-authored by Michael J. Gill, Sebastien Stenger, and David James Gill. From Roulet:]

What inspired you to be interested in this topic? We were inspired by recent ethnographic work relying heavily on covert observation – for example, Alice Goffman’s recent work on low-income urban areas, or Ethan Bernstein’s paper on the pitfalls of transparency in a Chinese factory. Alice Goffman’s work was attacked over the ethical challenges associated with the work of the ethnographer.

So we went back to the literature and looked at research in various fields that relied on covert observation – the observation of a field of enquiry by a researcher who does not reveal his or her true identity and motives. This methodological approach has progressively fallen into abeyance because of the ethical issues associated with it – in particular, the fact that covert observation implies not getting consent from the people observed by the researcher.

Were there findings that were surprising to you? Our review of covert research reveals that:
– all observation involves issues with consent, to different extents; it is rarely possible to obtain the full consent of all subjects. We put forward a two-dimensional framework to capture this ambiguity.
– covert research can be ethically justified when tackling taboo topics or trying to uncover misbehavior.
– there is a wide range of practices that can be used to minimize ethical concerns and limit harm to subjects.

How do you see this study influencing future research? We hope that the ethical guidelines of some associations can evolve to offer more room for covert or semi-covert research, and to acknowledge the difficulties of obtaining full consent. We also think that ethics boards in universities may be willing to take a more flexible perspective on covert research.

Finally our work is a call to researchers to consider covert observational approaches… with care!

Don’t forget to sign up for email alerts through the ORM homepage. 

How Surveys Provide Integrated Communication Skills

“Excuse me, can you spare a few minutes? We’re conducting a survey and would greatly appreciate your responses.” You’ve most likely heard these two sentences as you walk briskly down a crowded street. The Internet is also a crowded street full of news, but we hope you can spare a few minutes to read about the latest research from Business and Professional Communication Quarterly.

Author Anne Witte of EDHEC Business School, France, recently published a paper in BCQ entitled “Tackling the survey: A learning-by-induction design,” where she outlines the different learning outcomes that surveys afford. Below, Witte describes her inspiration for the study:

  • What inspired you to be interested in this topic?

Our world is filled with surveys, yet surveys are often a neglected area in business training, frequently taught as a kind of mechanical application task that has more to do with software than with thinking. As qualitative and quantitative data are the basis for business and organizations today, I wanted to train students more in the “art” rather than the “science” of the survey.

  • Were there findings that were surprising to you?

Students are often overconfident in their ability to do a survey task from A to Z.  When you challenge them with an interesting question to answer through a survey, they discover on their own how difficult it really is to obtain quality data that can be used to make decisions.

  • How do you see this study influencing future research and/or practice?

I love testing new teaching paradigms with advanced business students and especially using interdisciplinary thought experiments that oblige students to draw from previous knowledge and varied skills sets.

Don’t forget to sign up for email alerts on the BCQ homepage. 

Survey photo attributed to Plings (CC).

 

Communities as Nested Servicescapes

[We’re pleased to welcome Xiaojing Sheng from the University of Texas Rio Grande Valley. Sheng co-authored a recently published article in the Journal of Service Research entitled “Communities as Nested Servicescapes” with Penny Simpson and Judy Siguaw. From Sheng:]

  • What inspired you to be interested in this topic?

From groups of four to sixteen sipping margaritas in local restaurants to dancing at a beach or Mexican fiesta, retired winter migrants are a ubiquitous presence in the Rio Grande Valley of South Texas each winter. These migrating consumers repeatedly come to the area in large numbers each winter to enjoy the warm tropical weather, to participate in the many available activities, and to enjoy each other in their highly social living environment of mobile homes and recreational vehicle communities. These senior citizens also become an inseparable part of the region by routinely going to restaurants, events, shows and stores where they seem to exude a comradery and enjoyment of life not seen by typical residents of any community. For these migrants, winter life in the Valley seems to be a fun-filled, months-long vacation. Through casual observation of the lifestyle of these hundreds of thousands of active retirees, we were driven to understand their experiences as they become immersed in the broader servicescape of the Valley and in the nested servicescapes of their mobile home/recreational vehicle communities in which they reside for extended periods of time.

  • Were there findings that were surprising to you?

The finding that servicescape engagement weakened the positive effect of perceived servicescape satisfaction on loyalty intention is unexpected and surprising. This is probably because high levels of activity engagement become all-consuming, making perceived servicescape satisfaction itself less important in loyalty intention. For example, consumers may be willing to overlook a rundown beach villa if the beach activities are exceptional. On the other hand, lower levels of engagement strengthened the impact of perceived servicescape satisfaction on loyalty intentions, conceivably because consumer attention is less distracted by activity involvement, and therefore, more focused on servicescape factors.

  • How do you see this study influencing future research and/or practice?

An interesting finding from our study is that, when consumers interact with two servicescapes of which one is nested within another, their experiences are shaped by the effects of the individual servicescape, the compounding effects of both servicescapes, and by the transference effects between the two servicescapes. Consequently, marketers need to take a holistic approach to managing servicescapes at all levels to create an overall positive consumer experience. We hope that our research serves as a catalyst for future studies that examine effects of nested servicescapes. Moreover, we hope our work encourages other researchers to investigate less conventional servicescapes, such as regions, towns, and neighborhoods, because there is so much more to be learned about how the places in which we live, work, and play affect and transform our lives.

Don’t forget to sign up for email alerts through JSR’s homepage so you never miss a newly published article.

Wide Research, Narrow Effects: Why Interdisciplinary Research – and Innovation – is Hard

[This blog post was originally featured on Organizational Musings, written by Administrative Science Quarterly‘s Editor, Henrich R. Greve. Click here to view the original post.]

Interdisciplinary research is seen as very valuable for society and the economy. Some of that could be hype, but there are good examples of what it can do. You have probably noticed that oil is no longer 100 dollars per barrel, and the US is no longer a big importer. This is a result of fracking, which grew out of interdisciplinary research. And if you don’t like fracking, a good alternative is photovoltaic energy, which comes from the sun – and from interdisciplinary research.

So some interdisciplinary research has been good for society. Is it also good for the scientists who are supposed to do it? The answer to this question is very interesting, and is reported in an article in Administrative Science Quarterly by Erin Leahey, Christine Beckman, and Taryn Stanko. The start is easy to explain: interdisciplinary research is less productive, but it gets more attention. The answer got more complicated, and more interesting, when they started looking at why that happened.

The first step was to look at whether interdisciplinary research is more difficult to do, or whether it is harder for it to gain acceptance and get published. The answer is clear: it is not harder to gain acceptance, but it is harder to do, especially early on. The second step was to look at why this research got more attention. Here many factors played a role, but one stood out to me: what increases especially much is the variation in how much attention interdisciplinary research gets, and that helps explain the increased average. So interdisciplinary research is related to fracking in one more way – few reap the rewards from it.

This paper doesn’t really offer career advice for scientists, because everyone will be interested in different kinds of research and have different ideas about how much risk to take on. But it has important insights into how innovations are made. Building on closely related ideas is much easier, so no wonder much of what scientists – and companies – do is incremental. And this is true even though we often tell stories of the great successes of interdisciplinary research and integrative innovations, while forgetting all those who tried and didn’t succeed. Whether that means we cross-fertilize knowledge too little, too much, or just enough is hard to tell.

You can read the article, “Prominent but Less Productive: The Impact of Interdisciplinarity on Scientists’ Research,” from Administrative Science Quarterly free for the next two weeks by clicking here. Want to stay up to date on all of the latest research from Administrative Science Quarterly? Click here to sign up for e-alerts!

The Chrysalis Effect: Publication Bias in Management Research

How well do published management articles represent the broader body of management research? To say that questionable research practices affect only a few articles ignores a broader, systemic issue in management research. According to authors Ernest Hugh O’Boyle Jr., George Christopher Banks, and Erik Gonzalez-Mulé, the high pressure on academics to publish leads many to engage in questionable research practices, leaving the resulting published articles biased and unrepresentative. In their article, “The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles,” published in Journal of Management, O’Boyle, Banks, and Gonzalez-Mulé delve into the issue of questionable research practices. The abstract for the paper:

The issue of a published literature not representative of the population of research is most often discussed in terms of entire studies being suppressed. However, alternative sources of publication bias are questionable research practices (QRPs) that entail post hoc alterations of hypotheses to support data or post hoc alterations of data to support hypotheses. Using general strain theory as an explanatory framework, we outline the means, motives, and opportunities for researchers to better their chances of publication independent of rigor and relevance. We then assess the frequency of QRPs in management research by tracking differences between dissertations and their resulting journal publications. Our primary finding is that from dissertation to journal article, the ratio of supported to unsupported hypotheses more than doubled (0.82 to 1.00 versus 1.94 to 1.00). The rise in predictive accuracy resulted from the dropping of statistically nonsignificant hypotheses, the addition of statistically significant hypotheses, the reversing of predicted direction of hypotheses, and alterations to data. We conclude with recommendations to help mitigate the problem of an unrepresentative literature that we label the “Chrysalis Effect.”
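The headline numbers in the abstract are simple ratios of supported to unsupported hypotheses. The short Python sketch below illustrates how the two mechanisms the abstract names (dropping nonsignificant hypotheses, adding significant ones) move this ratio; the counts are purely illustrative, chosen only so the ratios reproduce the reported 0.82:1 and 1.94:1 figures, and are not the paper’s actual data.

```python
def support_ratio(supported: int, unsupported: int) -> float:
    """Ratio of supported to unsupported hypotheses, expressed as x : 1."""
    return supported / unsupported

# Illustrative counts only (not taken from the paper).
# Dissertation stage: 41 supported vs 50 unsupported hypotheses.
dissertation_ratio = support_ratio(41, 50)

# Journal stage: 15 unsupported hypotheses dropped and 27 supported
# ones added or reversed into support, shrinking the denominator and
# growing the numerator at the same time.
journal_ratio = support_ratio(41 + 27, 50 - 15)

print(f"dissertation:    {dissertation_ratio:.2f} : 1")
print(f"journal article: {journal_ratio:.2f} : 1")
```

Because dropping unsupported hypotheses shrinks the denominator while adding supported ones grows the numerator, even modest post hoc edits can more than double the ratio without any change in the underlying evidence.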

You can read “The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles” from Journal of Management free for the next two weeks by clicking here.

Want to stay current on all of the latest research published by Journal of Management? Click here to sign up for e-alerts! You can also follow the journal on Twitter – read through the latest tweets from Journal of Management by clicking here!

*Library image attributed to Apple Vershoor (CC)