Overcoming the Problem With Solving Business Problems

[We’re pleased to welcome authors Todd Bridgman of Victoria University of Wellington, Colm McLaughlin of University College Dublin, and Stephen Cummings of Victoria University of Wellington. They recently published an article in the Journal of Management Education entitled “Overcoming the Problem With Solving Business Problems: Using Theory Differently to Rejuvenate the Case Method for Turbulent Times,” which is currently free to read for a limited time. Below, Dr. Bridgman recounts the motivations and innovations of this research:]


What motivated you to pursue this research?

Our interest came from our experiences as case writers and teachers. Early cases we developed were well received, so we attended case writing and teaching workshops to further our skills. This led to two realizations. First, we came to see that analysis of the cases largely took place in a theoretical vacuum. This seemed limiting, because we had always found theory useful for seeing situations from multiple perspectives. Second, when theory was applied to cases, it was given only a narrow role: it was seen as useful only as a ‘tool’ applied to fix or solve real-life ‘business problems’, which were generally framed in immediate financial profit-and-loss terms. This struck us as too narrow. Wasn’t there more to studying management than solving business problems? And doesn’t theory have more useful purposes than being a profit-maximization tool? These experiences got us interested in delving deeper into the history of the case method and the role of theory in using cases in teaching.

Were there any specific external events—political, social, or economic—that influenced your decision to pursue this research?

We’ve watched the global political economy closely since the financial crisis hit a decade ago, and we see parallels with what happened in the United States following the financial crises of the 1920s and 1930s. Both periods of turbulence were followed by a deep questioning of the prevailing free-market capitalist model. In Brexit, American politics, and the rise of nationalism in Europe and elsewhere, we see a fundamental challenge to a 30-year consensus around neoliberalism. This has implications for management education, because business schools that have been strongly aligned with the neoliberal worldview now risk being seen as out of step with this new political landscape. We were interested in looking back to the 1920s and 1930s to see how business schools like Harvard responded to the crisis, to give us insight into how schools might respond today. In HBS’ past we found the seeds of a critical, reflexive management education, which encourages students to question dominant assumptions and ideologies. The aim of the paper is to think about how we could adapt the case method to incorporate this kind of approach.

In what ways is your research innovative, and how do you think it will impact the field?

It is widely accepted that we should learn from history, but what is less understood is how we are limited by the histories that we have. Our paper is innovative in exploring the case method’s forgotten past at HBS. In response to the crises of the 1920s and 1930s, HBS’ leaders understood the need for a business education that didn’t just blindly support capitalism but seriously questioned its development for the good of humanity. But these events have been largely airbrushed from the school’s history because they challenge the neoliberal worldview that the modern HBS wished to promote in the last half of the 20th century. HBS has a more diverse and interesting past that is conveniently forgotten by its supporters, and therefore unseen by its critics. Our paper will have impact if it stimulates new research on the case method and if it provides greater legitimacy for case writing and teaching that does more than train students to solve immediate ‘business problems’. We want to inspire a rejuvenated role for theory and a more reflective and thought-provoking case method that is a better fit for today’s challenging, multi-faceted times.

Stay up-to-date with the latest research from the Journal of Management Education and sign up for email alerts today through the homepage!




Is RateMyProfessors.com Unbiased?

[We’re pleased to welcome authors David Ackerman of California State University Northridge and Christina Chung of Ramapo College of New Jersey. They recently published an article in the Journal of Marketing Education entitled “Is RateMyProfessors.com Unbiased? A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes,” which is currently free to read for a limited time. Below, they reflect on the motivations for conducting this research:]

Our paper “Is RateMyProfessors.com Unbiased? A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes” was definitely motivated by personal experience. My colleagues and I noticed early on that there was a huge mismatch between the one or two student ratings per semester on online rating sites such as RMP and the 100 or more ratings from the student evaluation measures (SEMs) collected at our universities. Some professors seemed to hit it right: they had a great rating or two, and subsequent ratings were good. Others seemed to hit it wrong, with a really bad rating or two from a student unhappy with his or her grade, and then subsequent ratings were bad.

We compared SEMs and found that those who had both good and bad RMP ratings all had good SEMs. Those were my personal observations, though I know there has been some research suggesting that RMP can be similar to SEMs and some suggesting the opposite. I didn’t look into it at the time because I felt sites like RMP provide a place for students to vent their anger or express their happiness, kind of like a virtual public bathroom stall.

An external event that sparked this specific research paper was the rise of “social media mobs”: groups of anonymous raters would attack a rating site and leave a lot of negative ratings about a particular business, product, or service. Even though most of these raters were anonymous, their ratings depressed the ratings that came later. Before an attack, ratings might be moderate to positive, but afterward, primarily negative ratings would be posted.

So, my colleague and I set out to see if this pattern held in online teaching ratings, and it did. The results of this study suggest that several highly positive or negative ratings have an outsized influence on subsequent raters, who model the previous ratings, and that this can compromise the validity of the ratings. We are also looking into whether such salient positive or negative reviews influence people’s willingness to post an online rating at all when their views run contrary to the prevailing ones. These results suggest that rating sites should do all they can to remove unverified ratings, especially extremely negative or positive ones, to maintain the validity and integrity of their rating systems.

Stay up-to-date with the latest research from the Journal of Marketing Education and sign up for email alerts today through the homepage!

Classroom photo attributed to Wokandapix. (CC)

Systems Thinking and Population Ecology

[We’re pleased to welcome authors Karen Macmillan and Jennifer Komar of Wilfrid Laurier University. They recently published an article in the Journal of Management Education entitled “Population Ecology (Organizational Ecology): An Experiential Exercise Demonstrating How Organizations in an Industry Are Born, Change, and Die,” which is currently free to read for a limited time. Below, Macmillan speaks about population (organizational) ecology and its applications:]



Organizations are embedded within complex, interdependent networks.  Yet it can be challenging for business students to conceptualize how organizations interact with others on a broad scale. This type of systems thinking does not come naturally. Most individuals tend to have difficulty understanding non-linear, interdependent connections when the relationships are distant in time and space.

One line of management study that takes a systems view is population (or organizational) ecology.  Rather than observing how an individual company evolves over a brief period, population ecologists look at all of the organizations within an industry and examine how certain characteristics (e.g., size), the environment, and random chance affect organizational outcomes. Population ecologists identify how industries change over many years, often finding patterns across industries in how organizations are born, change, and die.  This approach differs from traditional management theory in two key ways.  First, all members of a targeted population are included in the analysis. The premise is that to focus only on the most successful organizations (e.g., the Fortune 500) leads to an understanding of only a small portion of the total range of organizations. It can be useful to examine not only the winners, but also the losers, and even the runners-up. Second, population ecologists examine how processes evolve over relatively long periods of time. This can lead to different insights than a cross-sectional approach.

In order to help students develop systems thinking through a consideration of population ecology, we have developed an in-class exercise that allows participants to see first-hand in one class how all of the organizations within an industry interact over a long period. Full details are included so instructors can easily integrate this activity into the classroom. This process makes the theory come alive by asking students to put themselves directly into the role of an organizational decision maker in an evolving industry.

The exercise dramatically highlights how an organization affects, and is affected by, its context, and will help to prepare students to operate effectively within a multi-faceted business environment. This activity could fit within discussions on organizational design, organizational structure, organizational change, or inter-organizational relationships, and it complements instruction on more micro organizational behavior topics, or more linear or analytical approaches to management.  It challenges the idea that management is solely about control, and helps students see that each internal decision influences how the organization fits within a broader system, and affects, ultimately, its ability to survive.


Stay up-to-date with the latest research from the Journal of Management Education and sign up for email alerts today through the homepage!

Thinking photo attributed to PublicDomainPictures. (CC)

Assessing Leader Development From a Historical Review of MBA Outcomes

[We’re pleased to welcome authors Angela M. Passarelli of the College of Charleston, Richard E. Boyatzis of Case Western Reserve University, and Hongguo Wei of the University of Central Oklahoma. They recently published an article in the Journal of Management Education entitled “Assessing Leader Development: Lessons From a Historical Review of MBA Outcomes,” which is currently free to read for a limited time. Below, Passarelli recounts the events that led to the research and the significance it has to the field:]

What motivated you to pursue this research?

We began collecting outcome data on our MBA students 30 years ago. We were trying to determine what they were learning that was crucial to their success as managers and leaders – namely, the competencies from performance-validated studies. This particular project was born when we hit a major milestone in the ongoing assessment program – 25 years of data collection. The 25-year mark prompted us to reflect on how the data were being used. Each year we examined the data to determine how students in our full-time MBA program developed emotional and social competencies over the course of their 2-year program. This information provided a basis for modifications to the curriculum. For example, a downward trend in teamwork competency development prompted a pedagogical innovation in which project teams remained the same across multiple courses and were given coaching not just on performance outcomes, but also on how they functioned as a group. While these year-to-year adjustments were helpful, we came to the realization that we were missing potentially important trends that would not be evident by looking at just one or two cohorts at a time. This realization became the motivation for examining trends in competency development from a bird’s-eye view – across the entire 25-year assessment effort, rather than in small pockets at a time.

What has been the most challenging aspect of conducting your research?

The most challenging aspect of conducting this research was contending with advances in instrumentation. We revise the tests psychometrically about every 7 years, which improves reliability, model fit, and validity but creates comparability challenges in longitudinal research. Although these changes improved our confidence in inferences made on an annual basis, they impeded our ability to analyze the data set in its entirety. To deal with this, we chose to focus on a period of time in which the survey instruments were most similar and conducted graphical trend analysis. This allowed us to see trends over time, such as the saw-tooth effect. It also helped us figure out what we should contemplate doing to minimize such threats to learning and positive impact.

Relatedly, collecting data of this nature and for this length of time is difficult. Our assessment program faced a variety of obstacles over its history. Personnel changes led to knowledge gaps whereby informed consent was not administered or data were not appropriately retained. Computer crashes resulted in data loss, and funding deficits threatened financial support for the effort. Having a faculty champion whose intellectual curiosity aligned with the assessment program was critical to overcoming these obstacles.

Were there any surprising findings?

The downturn in competency development during times of leadership upheaval was possibly the most striking trend we saw in the data. The idea that toxicity at the most senior levels of leadership was trickling down to the students had been proposed in earlier research. But this study offered confirmation by showing a rebound in competency development once leadership stability was restored. In the paper we postulate that students were affected by this leadership turbulence via declines in faculty climate and satisfaction. Research designed to directly test this interpretation is still needed. Without knowing the exact degree of negative effects, educators would be well advised to try to mitigate the deleterious effects of toxic leadership on student outcomes.

Stay up-to-date with the latest research from the Journal of Management Education and sign up for email alerts today through the homepage!




An Educator’s Perspective on Reflexive Pedagogy: Identity Undoing and Issues of Power

[We’re pleased to welcome author Dr. Marian Iszatt-White of the Lancaster University Management School. Dr. Iszatt-White recently published an article in Management Learning entitled “An educator’s perspective on reflexive pedagogy: identity undoing and issues of power,” which is currently free to read for a limited time. Below, Dr. Iszatt-White reveals the inspiration for conducting this research and the impact it has on the field:]

What motivated you to pursue this research?

All of the authors of this paper are teachers as well as researchers, and we spend much of our time working with ‘gnarly’ middle managers on executive education programmes and Executive MBAs. It was piloting an innovative leadership learning intervention (co-constructed coaching – the subject of an earlier paper in Management Learning by Steve and me) with this latter population that triggered the insights underpinning this paper. Specifically, we realised that adopting a reflexive pedagogy had implications for us as ‘teachers’ as well as for our students. This was not the direction we had intended the paper to go in, but it really hit us as something important and not well understood in the literature. The idea of ‘identity undoing’, which Brigid had already developed, seemed key to our own experiences and offered a valuable framework for processing and theorizing them.

What has been the most challenging aspect of conducting your research? Were there any surprising findings?

A significant challenge in conducting this research was the autoethnographic element – which was not part of the original design but still needed to be methodologically robust. Our original intention had been to validate the idea of co-constructed coaching as a leadership learning intervention, which we had previously proposed. An early draft of the paper, pursuing this intent, happened to mention our own experience of implementing this intervention, and our reviewers picked up on this as being interesting. This led Steve and me to home in on this previously marginal aspect of the project and to bring Brigid in as an ‘independent witness’ to our reflections on what it felt like to adopt a reflexive pedagogy. Brigid did a great job of ‘interrogating’ and then narrating key elements of this experience, which we were then able to theorize in relation to identity undoing and issues of power.

In what ways is your research innovative, and how do you think it will impact the field?

In undertaking this analysis, we problematize the pursuit of a reflexive pedagogical practice within executive and postgraduate education and offer a paradox: the desire to engage students in reflexive learning interventions – and in particular to disrupt the power asymmetries and hierarchical dependencies of more traditional educator-student relationships – can in practice have the effect of highlighting those very asymmetries and dependencies. Successful resolution of this paradox depends on the capacity of educators to undo their own reliance on, and even desire for, authority underpinned by a sense of theory-based expertise. We believe this insight – as well as the innovative use of autoethnographic methods to turn a critically reflexive lens upon academic teaching – will provide food for thought (and for further research) across a wide range of academic disciplines. With the introduction in the UK of the Teaching Excellence Framework, now seems a fitting time to review what it means to be an ‘expert’ teacher.

Stay up-to-date with the latest research from Management Learning and sign up for email alerts today through the homepage!

Getting to Know Your Students and an Educational Ethic of Care

[We’re pleased to welcome author Thomas F. Hawk of Frostburg State University. Hawk recently published an article in the Journal of Management Education entitled, “Getting to Know Your Students and an Educational Ethic of Care,” that is currently free to read for a limited time. Below, Hawk shares background and motivation for pursuing this research:]

A sabbatical in 1996 that focused on critical thinking led me to discover the Philosophy of Education Society and the idea of an ethic of care. The more I explored the ethic of care literature, the more it resonated with me and gave me a vocabulary and a philosophical frame for describing and discussing my fundamental processes of facilitating the deep learning of my students. That journey of exploration continues to the present even though I retired from the university in 2009.

In 2003, a student who appeared to be struggling in my MBA capstone strategy course sent me an email asking me not to “give up on her,” as she had some learning challenges that held her back from actively contributing to the case discussions. But she also complimented me on the caring and skillful ways in which I focused on my students’ learning development, provided extensive developmental feedback, and continually tried to get my students involved in the discussions. That email triggered a set of questions in my mind that led to the 2008 JME article, “Please Don’t Give Up on Me: When Faculty Fail to Care.” As I understand it, that was the first full-length article in JME to address an ethic of care.

As my journey into an ethic of care continued, I did research on the extent to which business ethics textbooks and journals addressed an ethic of care as an alternative to the traditional ethical frameworks of virtue, deontological, utilitarian, and justice ethics. That research revealed an almost total absence of any consideration of an ethic of care in business ethics textbooks and only a few articles on an ethic of care in the primary business ethics journals. I also became aware of the significant differences between the ontological/metaphysical assumptions made by the rationalistic, abstract, universalistic individualism of the traditional frameworks and those of an ethic of care, which is relational and concrete, treats each situation as unique, and focuses centrally on the well-being of the parties to the relationship and on the relationship itself.

Chory & Offstein’s 2017 JME article (41-1), “Your Professor Will Know You as a Person: Evaluating and Rethinking the Relational Boundaries between Faculty and Students,” prompted me to write, “Getting to Know Your Students and an Educational Ethic of Care.” That article reflects my current exploration of the congruence among an ethic of care, Alfred North Whitehead’s process philosophy and process ethics, and a process perspective on teaching and learning (see Whitehead, 1929, and Oliver & Gersham, 1989, cited in the article). I now see an ethic of care as a way of being in the world, not just as an alternative ethical framework. But in the educational domain, the most important scholarly work I have read over the last year is: Alhadeff-Jones, M. (2017). Time and the Rhythms of Emancipatory Education: Rethinking the Temporal Complexity of Self and Society. New York: Routledge.
Enjoy the reading.

Be sure to spare a minute to visit the journal homepage to sign up for email alerts!


Webinar Highlights: Presenting Data Effectively

[The following post is re-blogged from Social Science Space. Click here to view the original article.]

Crystal-clear graphs, slides, and reports are valuable – they save an audience’s mental energy, keep a reader engaged, and make you look smart. This webinar, held on June 6, 2017, covers the science behind presenting data effectively and leaves viewers with direct, pointed changes that can be applied immediately to significantly increase impact. Guest Stephanie Evergreen also addresses principles of data visualization, report, and slideshow design that support legibility and comprehension and make the information stick in the audience’s brains.

Evergreen’s presentation was followed by an audience question-and-answer session, which is included in the recording. Not all the questions were answered at the time, and Evergreen answers some additional session questions below.

Evergreen is an internationally recognized speaker, designer, and researcher best known for bringing a research-based approach to communicating better through more effective graphs, slides, and reports. She holds a PhD from Western Michigan University in interdisciplinary evaluation, which included a dissertation on the extent of graphic design use in written research reporting. Evergreen has trained researchers worldwide through keynote presentations and workshops, for clients including Time, Verizon, Head Start, American Institutes for Research, the Rockefeller Foundation, the Brookings Institution, and the United Nations. She is the 2015 recipient of the American Evaluation Association’s Guttentag award, given for notable accomplishments early in a career.

She is co-editor and co-author of two issues of New Directions for Evaluation on data visualization. She writes a popular blog on data presentation at StephanieEvergreen.com. Her SAGE Publishing books Presenting Data Effectively and Effective Data Visualization both reached No. 1 on Amazon bestseller lists. A second edition of Presenting Data Effectively was published in May.

  1. When is it best to place the data value (e.g., 20 percent) directly on a bar or lollipop vs. using a scale on the side or bottom of a chart?

If people will want to know the exact value, add the data label. If the overall pattern of the data and estimated values are sufficient, use a scale. But don’t use both – that’s redundant.
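
For readers who build their charts in code rather than in a spreadsheet, here is a minimal Python/matplotlib sketch of the “label or scale, not both” idea; the course names and percentages are invented purely for illustration.

```python
# A minimal sketch of "label or scale, not both": each bar carries its exact
# value as a direct label, and the now-redundant value axis is removed.
# Category names and percentages are invented for illustration.
import matplotlib.pyplot as plt

courses = ["Marketing", "Finance", "Strategy", "Ethics"]
satisfaction = [72, 58, 64, 81]  # hypothetical percentages

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(courses, satisfaction, color="#4c72b0")

# Put the exact value just past the end of each bar...
for bar, value in zip(bars, satisfaction):
    ax.text(value + 1, bar.get_y() + bar.get_height() / 2,
            f"{value}%", va="center")

# ...and drop the scale it replaces.
ax.xaxis.set_visible(False)
for spine in ("top", "right", "bottom"):
    ax.spines[spine].set_visible(False)

ax.set_xlim(0, 100)
ax.set_title("Course satisfaction (hypothetical data)")
plt.tight_layout()
plt.show()
```

Once each bar carries its own exact label, the value axis says nothing new, so removing it keeps the chart from repeating itself.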

  2. How do your clients and colleagues respond to the ‘flipped report,’ in which research findings and conclusions are presented before the discussion, literature, methodology, and background sections?

With a “duh,” as in “Why haven’t I thought of that before?” Generally, clients appreciate how a flipped report values their time. On occasion, you and I will find audiences who really bristle at the idea – usually people steeped in academic culture – so check first whether a flipped report structure would be okay.

  3. Any tips for the converted on changing an organizational culture that is resistant to data visualization? (“You need to use our template!”)

Culture change is slow, so the first tip is to be patient. After that, try remaking one of your own old (bad) slides or graphs to show what an overhaul would look like. See if you can get a friendly client or customer you know to give you feedback on it. Then report on the redesign and the feedback to others in your organization. Try getting someone from senior management on board. Leave a copy of my book in their mailbox or in the break room. And hang in there.

  4. How do we report small numbers? Without percentages?

I would report small numbers as raw numbers, not percentages. Try an icon array for a visual.
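
As a rough illustration of what an icon array can look like, here is a short, hand-rolled Python/matplotlib sketch; the counts (7 of 25 respondents) are hypothetical, and a dedicated waffle-chart tool or a template would work just as well.

```python
# A hand-rolled icon array (waffle chart) for reporting a small raw number,
# e.g. 7 of 25 respondents. Counts and colors are hypothetical.
import matplotlib.pyplot as plt

total, count = 25, 7   # hypothetical: 7 of 25 respondents said yes
cols = 5               # lay the icons out in a 5-wide grid

fig, ax = plt.subplots(figsize=(3, 3))
for i in range(total):
    row, col = divmod(i, cols)
    color = "#d62728" if i < count else "#d9d9d9"   # filled vs. background icon
    ax.scatter(col, -row, marker="s", s=500, color=color)

ax.set_title(f"{count} of {total} respondents (hypothetical)")
ax.set_aspect("equal")
ax.axis("off")
plt.tight_layout()
plt.show()
```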

  5. Where is the best place to get report templates?

In your imagination! Any report template is going to look like a report template, not like something that fits your own work. Look around for inspiration, for sure, like on my Pinterest boards, but create your own style that fits you and your work.

  6. What program do you use to create dashboards or infographics? We’ve used Piktochart… Are there others?

I work within the Microsoft Office suite. I make dashboards in Excel and infographics in PowerPoint. This way I have total control over the design and everyone on my team can make edits. A quick Google search of either dashboard or infographic programs will give you hundreds of choices you could work with. If you want something from that list, look for maximum flexibility, low learning curve, and reasonable expense.

  7. Each chart can have multiple findings; are we skewing the results when we highlight certain findings over others using color and data?

“Skewing” sounds like we are manipulating, but that’s not the case. Using color to highlight a certain part of the graph still leaves the rest of the graph completely intact and able to be seen. Adding color does, however, reflect an interpretation we have made of the data. But that isn’t “skewing” – it’s telling people our point and that’s why they are listening to us in the first place.
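
To make this concrete, here is a small Python/matplotlib sketch of highlighting with color while leaving the rest of the graph intact; the cohort names and scores are made up for the example.

```python
# Using color to point at one finding while keeping every other bar visible.
# Group names and scores are invented for the example.
import matplotlib.pyplot as plt

groups = ["Cohort A", "Cohort B", "Cohort C", "Cohort D"]
scores = [3.1, 3.4, 4.2, 3.0]
highlight = "Cohort C"  # the finding we want the reader to notice first

# Muted gray everywhere except the highlighted bar.
colors = ["#d62728" if g == highlight else "#c8c8c8" for g in groups]

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(groups, scores, color=colors)
ax.set_ylabel("Mean rating (1-5)")
ax.set_title("Cohort C rated highest (hypothetical data)")
plt.tight_layout()
plt.show()
```

The gray bars stay fully readable, so nothing is hidden – the accent color simply tells the reader where to look first.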

  8. Can you please explain the difference between your two books? Thanks!

Sure! Effective Data Visualization walks you through how to choose the right chart type and then how to make it in Excel. Presenting Data Effectively talks about formatting graphs well with consideration of text and color and broadens that discussion to address dashboards, slides, handouts, and reports.

  9. One challenge I face is presenting nuanced findings in an accessible way – for example, when there are limitations to the data, subgroups that need to be acknowledged, or findings that need to be interpreted with caution. As a researcher, it worries me that the client might put tentative findings “out there”, misrepresenting them (to a degree).

This makes your title and subtitle even more important. Be very clear in your wording that the findings are limited. You can also add things like confidence intervals to your graph if you are confident that the reader will know how to interpret them. If it is still going to be a concern, don’t make a graph of the data. People are drawn to graphs because we look at pictures, so don’t put the data in a picture if you are worried people won’t read the nuanced narrative.
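
For those who do decide to graph tentative findings, one option in the spirit of this answer is to put the uncertainty and the caveat directly on the chart. The Python/matplotlib sketch below uses error bars for 95% confidence intervals and carries the limitation in the title; all numbers are invented for illustration.

```python
# Putting the caveat on the chart itself: 95% confidence intervals as error
# bars, plus a title that states the limitation. All numbers are invented.
import matplotlib.pyplot as plt

labels = ["Full sample", "Small subgroup (n = 12)"]
means = [0.42, 0.55]
ci_halfwidth = [0.05, 0.21]   # the wide interval flags the tentative estimate

x = range(len(labels))
fig, ax = plt.subplots(figsize=(5, 3))
ax.errorbar(x, means, yerr=ci_halfwidth, fmt="o", capsize=6, color="#4c72b0")
ax.set_xticks(list(x))
ax.set_xticklabels(labels)
ax.set_xlim(-0.5, len(labels) - 0.5)
ax.set_ylabel("Estimated effect")
ax.set_title("Subgroup estimate is tentative: small n, wide 95% CI")
plt.tight_layout()
plt.show()
```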