In my prior career as a human resource management practitioner, I worked in a mid-sized corporation where executives were credibly accused of sexual harassment, and it fell to me to address the complaints. I thought that, given the mutual respect I had with the men accused and our shared interest in protecting the organization from lawsuits, I could convince them to discontinue any offensive behavior. Much to my dismay, my efforts resulted in a deepening of biased attitudes and an apparent escalation of harassment that placed the business at increased risk, and ultimately had a negative impact on the careers of the targets and on my own career. I was floored. This experience left me to wonder, “What could I have said or done differently to produce a better result?”
Although this happened more than ten years ago, today the media offers countless examples of people being called out for their biases and their treatment of others. While such behavior may justly earn public condemnation, treating biased individuals this way can be divisive and can provoke defensiveness and shame. As this paper shows, that can increase resistance to change and lessen the chance of a positive outcome.
One possible solution might be taking a softer approach to dealing with biased individuals, one more attentive to the needs of those whose behavior we hope to change. This approach may be especially applicable when the biased individual is in a position of power. The findings were counterintuitive for me personally, and they have left me with many more questions that I will continue to investigate.
Our paper “Is RateMyProfessors.com Unbiased?: A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes” was definitely motivated by personal experience. My colleagues and I noticed early on a huge mismatch between the one or two student ratings per semester on online rating sites such as RateMyProfessors (RMP) and the 100 or more ratings from the student evaluation measures (SEMs) collected at our universities. Some instructors seemed to hit it right: they had a great rating or two, and subsequent ratings were good. Others seemed to hit it wrong, with a really bad rating or two from a student unhappy with his or her grade, after which subsequent ratings were bad.
When we compared SEMs, we found that instructors with good RMP ratings and those with bad RMP ratings all had good SEMs. Those were my personal observations, though I know some research suggests that RMP ratings can be similar to SEMs and some suggests the opposite. I didn’t look into it at the time because I felt sites like RMP simply provide a place for students to vent their anger or express their happiness, kind of like a virtual public bathroom stall.
An external event that sparked this specific research paper was the rise of “social media mobs”: groups of anonymous raters would swarm a rating site and leave many negative ratings about a particular business, product, or service. Though most of these raters were anonymous, their ratings depressed the ratings posted afterward. Before an attack, ratings might be moderate to positive, but afterward, primarily negative ratings would be posted.
So, my colleague and I set out to see whether this pattern held in online teaching ratings, and it did. The results of this study suggest that several highly positive or negative ratings have an outsized influence on subsequent ratings, whose authors model the earlier ones, which can compromise the validity of the ratings. We are also looking into whether such ratings influence people’s willingness to post an online rating at all when their views run contrary to the prevailing positive or negative salient reviews. These results suggest that rating sites should do all they can to remove unverified ratings, especially extremely negative or positive ones, to maintain the validity and integrity of their rating systems.
In today’s management world, a growing consensus holds that transparency is good for any organization. But a study in the Journal of Sports Economics (JSE) – noting that sports are a “useful setting in which to examine phenomena that are of broader significance” – offers contrasting findings.
Transparency is usually thought to reduce favoritism and corruption by enabling monitoring by outsiders, but there is concern that it can have the perverse effect of facilitating collusion among insiders. In response to vote-trading scandals at the 1998 and 2002 Olympics, the International Skating Union (ISU) introduced a number of changes to its judging system, including obscuring which judge issued which mark. The stated intent was to disrupt collusion by groups of judges, but this change also frustrates most attempts by outsiders to monitor judge behavior. The author finds that the “compatriot-judge effect,” which aggregates favoritism (nationalistic bias from own-country judges) and corruption (vote trading), actually increased slightly after the reforms.