[We’re pleased to welcome authors David Ackerman of California State University Northridge and Christina Chung of Ramapo College of New Jersey. They recently published an article in the Journal of Marketing Education entitled “Is RateMyProfessors.com Unbiased? A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes,” which is currently free to read for a limited time. Below, they reflect on the motivations for conducting this research:]
Our paper “Is RateMyProfessors.com Unbiased? A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes” was motivated by personal experience. Early on, my colleagues and I noticed a huge mismatch between the one or two student ratings per semester on online rating sites such as RateMyProfessors.com (RMP) and the 100 or more ratings from the student evaluation measures (SEMs) collected at our universities. Some instructors seemed to hit it right: they received a great rating or two, and subsequent ratings were good. Others seemed to hit it wrong, with a really bad rating or two from a student unhappy with a grade, after which subsequent ratings were bad.
When we compared SEMs, we found that instructors with both good and bad RMP ratings all had good SEMs. Those were my personal observations, though I know some research suggests that RMP ratings can be similar to SEMs and some suggests the opposite. I didn’t look into it at the time because I felt sites like RMP simply provide a place for students to vent their anger or express their happiness, kind of like a virtual public bathroom stall.
An external event that sparked this specific research paper was the rise of “social media mobs”: groups of anonymous raters would attack a rating site and leave many negative ratings about a particular business, product, or service. Even though these raters were anonymous, their ratings depressed the ratings posted afterward. Before an attack, ratings might be moderate to positive; afterward, primarily negative ratings would be posted.
So, my colleague and I set out to see whether this pattern held in online teaching ratings, and it did. The results of this study suggest that a few highly positive or negative ratings have an outsized influence on subsequent ratings, because later raters model the earlier ones, which can compromise the validity of the ratings. We are also looking into whether such ratings influence people’s willingness to post an online rating when their views run contrary to the prevailing positive or negative salient reviews. These results suggest that rating sites should do all they can to remove unverified ratings, especially extremely negative or positive ones, to maintain the validity and integrity of their rating systems.