[We’re pleased to welcome authors, Angela M. Passarelli of the College of Charleston, Richard E. Boyatzis of Case Western Reserve University, and Hongguo Wei of the University of Central Oklahoma. They recently published an article in the Journal of Management Education entitled “Assessing Leader Development: Lessons From a Historical Review of MBA Outcomes,” which is currently free to read for a limited time. Below, Passarelli recounts the events that led to the research and the significance it has to the field:]
We began collecting outcome data on our MBA students 30 years ago. We were trying to determine what they were learning that was crucial to their success as managers and leaders – namely, the competencies identified in performance-validated studies. This particular project was born when we hit a major milestone in the ongoing assessment program: 25 years of data collection. The 25-year mark prompted us to reflect on how the data were being used. Each year we examined the data to determine how students in our full-time MBA program developed emotional and social competencies over the course of their two-year program. This information provided a basis for modifications to the curriculum. For example, a downward trend in teamwork competency development prompted a pedagogical innovation in which project teams remained the same across multiple courses and were coached not just on performance outcomes, but also on how they functioned as a group. While these year-to-year adjustments were helpful, we came to realize that we were missing potentially important trends that would not be evident from looking at just one or two cohorts at a time. This realization became the motivation for examining trends in competency development from a bird's-eye view – across the entire 25-year assessment effort, rather than in small pockets at a time.
What has been the most challenging aspect of conducting your research?
The most challenging aspect of conducting this research was contending with advances in instrumentation. We improve the tests psychometrically about every 7 years, which helps reliability, model fit, and validity but creates comparability challenges in longitudinal research. Although these changes improved our confidence in inferences made on an annual basis, they impeded our ability to analyze the data set in its entirety. To deal with this, we chose to focus on a period of time in which the survey instruments were most similar and conducted graphical trend analysis. This allowed us to see trends over time, such as the sawtooth effect. It also helped us consider what we might do to minimize such threats to learning and positive impact.
Relatedly, collecting data of this nature and for this length of time is difficult. Our assessment program faced a variety of obstacles over its history. Personnel changes led to knowledge gaps whereby informed consent was not administered or data were not appropriately retained. Computer crashes resulted in data loss, and funding deficits threatened financial support for the effort. Having a faculty champion whose intellectual curiosity aligned with the assessment program was critical to overcoming these obstacles.
Were there any surprising findings?
The downturn in competency development during times of leadership upheaval was possibly the most striking trend we saw in the data. The idea that toxicity at the most senior levels of leadership was trickling down to the students had been proposed in earlier research, but this study offered confirmation by showing a rebound in competency development once leadership stability was restored. In the paper we postulate that students were affected by this leadership turbulence via declines in faculty climate and satisfaction. Research designed to directly test this interpretation is still needed. Even without knowing the exact magnitude of these effects, educators would be well advised to try to mitigate the deleterious impact of toxic leadership on student outcomes.