What Effect Does Status Endowment Have on Customer Loyalty Programs?

[We’re pleased to welcome Lena Steinhoff of the University of Paderborn in Germany. Dr. Steinhoff recently collaborated with Andreas Eggert of the University of Paderborn and Ina Garnefeld of the University of Wuppertal on their paper from the Journal of Service Research entitled “Managing the Bright and Dark Sides of Status Endowment in Hierarchical Loyalty Programs.”]

When I opened the envelope and found a golden customer card issued by a hotel chain that I had hardly patronized, I was surprised and had some mixed feelings about it, which is how this research project began. Indeed, service companies purposefully offer elevated status to some customers who do not meet the required spending level, in an attempt to profit from the profound allure of status. This is what we call status endowment, which we define in our paper as awards of elevated status to customers who are not entitled to it.

Recently, several service firms have begun experimenting with status endowment, including Accor Hotels (A|Club), Hertz Car Rental (Hertz Gold Plus Rewards), and Hilton Hotels & Resorts (Hilton HHonors). An Internet search of company websites and customer forums reveals that among the top 100 North American loyalty programs, status endowment exists in more than 40% of those that rely on hierarchical programs. Yet, the emotional, attitudinal, and behavioral consequences of status endowment are not well understood, so scholarly research has a chance to provide marketing practitioners with a better understanding of this customer management instrument before it becomes a standard tool.

Employing three research formats (qualitative, experimental, survey) and covering various industries, we identify differential effects of status endowment that have three key implications for services management. First, there is a bright and a dark side of status endowment. Customer gratitude enhances loyalty, yet customer skepticism acts as an opposing force. Conventional wisdom assumes that people react positively to preferential treatment, but our research also demonstrates the unintended dark sides of relationship marketing investments on focal customers.

Second, the dark side of endowed elevated customer status is contingent on the design of the status endowment. Managers should carefully consider how to avoid fostering further skepticism. Status endowment should not be designed as a “pure” endowment but rather should augment customers’ perceptions of their own personal choice or achievement.

Third, the effectiveness of status endowment also depends on the characteristics of the loyalty program, including the perceived value of the preferential treatment. When elevated status comes with high-value benefits, customers’ attitudinal loyalty is higher than when the company pairs elevated status with only low-value benefits, an effect stemming from enhanced customer gratitude and reduced customer skepticism. While service companies such as airlines and hotels can easily offer high-value preferential treatment by exploiting their underutilized, perishable assets at low additional cost, firms that lack unused capacity face a more challenging position.

You can read “Managing the Bright and Dark Sides of Status Endowment in Hierarchical Loyalty Programs” from Journal of Service Research for free by clicking here. Want to know about all the latest research like this from Journal of Service Research? Click here to sign up for e-alerts!

Andreas Eggert is a chaired professor of marketing at the University of Paderborn, Germany. He is also a strategic research advisor at Newcastle University Business School, Newcastle upon Tyne, UK. His research interests focus on the profitable management of customer relationships in both business-to-consumer and business-to-business markets, and his work has appeared in Journal of Marketing, Journal of the Academy of Marketing Science, Journal of Service Research, Journal of Supply Chain Management, Journal of Business Research, European Journal of Marketing, Journal of Marketing Theory and Practice, Industrial Marketing Management, Journal of Business-to-Business Marketing, and Journal of Business and Industrial Marketing, among others.

Lena Steinhoff is an assistant professor of marketing at the University of Paderborn, Germany. Her research interest is relationship marketing, with a particular focus on managing customer relationships through loyalty programs and customer engagement initiatives. Current projects include examining the impact of customer engagement initiatives on existing customer relationships and the performance effects of relationship marketing investments over the relationship life cycle. Her work has appeared in Journal of the Academy of Marketing Science, Journal of Service Management, and the Marketing Science Institute (MSI) Working Paper Series.

Ina Garnefeld is a chaired professor of service management at the Schumpeter School of Business and Economics at the University of Wuppertal, Germany. Her research interests are services marketing and customer relationship management. Current projects include examining online and off-line word-of-mouth behavior and the use of incentives for managing customer communication behavior. Her work has appeared in Journal of Marketing, Journal of Service Research, International Journal of Electronic Commerce, and Journal of Service Management, among others.

Marcia Simmering on the Detection of Common Method Variance

[We’re pleased to welcome Marcia Simmering of Louisiana Tech University. Dr. Simmering recently published an article in Organizational Research Methods with Christie M. Fuller, Hettie A. Richardson, Yasemin Ocal, and Guclu M. Atinc entitled “Marker Variable Choice, Reporting, and Interpretation in the Detection of Common Method Variance: A Review and Demonstration.”]

  • What inspired you to be interested in this topic?

After the publication of my earlier piece on common method variance (Richardson, Simmering, & Sturman, 2009, in ORM), where we found that marker variables could be potentially useful in detecting method variance, I kept getting questions from other researchers about what marker variables they should use in their own studies. I didn’t always have an answer, because the appropriateness of a marker variable depends on the study variables. So, I worked with a team of co-authors from different business disciplines on the current paper to find good marker variables in a variety of studies. As we all read articles using marker variables, we found so much variation in how they were used, and we learned that many had not been chosen or implemented properly. So, my coauthors and I decided to give an overview of how these techniques have been used (and misused). We took it a step further and tried to find out what these marker variables are really measuring and whether they’re measuring something different from presumed causes of common method variance (CMV), like social desirability and affectivity.
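For readers unfamiliar with how a marker variable is put to work, the simplest version is the partial-correlation adjustment (Lindell & Whitney, 2001), which the marker literature discussed here builds on: the correlation between the marker and a substantive variable serves as a proxy for shared method variance and is partialed out of the observed correlation. A minimal sketch, with hypothetical correlation values (this is the generic technique, not the analysis from the paper):

```python
# Partial-correlation marker-variable adjustment (Lindell & Whitney, 2001).
# r_marker is typically the smallest correlation between the marker
# variable and any substantive variable, taken as a proxy for CMV.

def cmv_adjusted_r(r_xy: float, r_marker: float) -> float:
    """Remove the marker-implied method variance from an observed correlation."""
    return (r_xy - r_marker) / (1 - r_marker)

# Hypothetical observed predictor-outcome correlation and marker correlation:
r_xy, r_m = 0.45, 0.15
print(round(cmv_adjusted_r(r_xy, r_m), 3))  # 0.353
```

If the adjusted correlation remains significant, the substantive relationship is judged robust to this (simple) model of method variance; the paper's point is that how the marker itself is chosen determines whether this logic holds.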

  • Were there findings that were surprising to you?

Yes! I would say that most of what we found in both studies surprised us. In Study 1 (the review of marker variable use), I didn’t expect so many authors to choose marker variables that really couldn’t properly capture CMV. And, I was surprised at how little journal space was given to tests of CMV. In Study 2, we didn’t know what we would find about what marker variables might detect in comparison to presumed causes of CMV, but we were still surprised to find that one added measure (either marker or presumed cause) is likely not enough to reasonably detect CMV and that multiple marker and CMV-cause variables in one study give much more information.

  • How do you see this study influencing future research and/or practice?

We hope that other researchers can find this article helpful in choosing appropriate marker variables and analyzing them in a way that can reasonably detect CMV. This is easier said than done, because a good marker variable is often chosen before data collection, and perhaps this article can influence more authors to do that. But, we hope, too, that reviewers gain some knowledge about how these techniques can be used to detect CMV. And, our ultimate goal is that this work can get us a little bit closer to understanding the large, complex, and still ambiguous phenomenon of CMV in social science research.

You can read “Marker Variable Choice, Reporting, and Interpretation in the Detection of Common Method Variance: A Review and Demonstration” from Organizational Research Methods for free for the next two weeks by clicking here. Want to know about all the latest research like this from Organizational Research Methods? Click here to sign up for e-alerts!

Marcia J. Simmering is the Francis R. Mangham Endowed Professor of Management and assistant dean of Undergraduate Programs in the College of Business at Louisiana Tech University. Her current research focuses on the methods topics of common method variance and control variables. Additionally, she has published research on feedback, compensation, and training.

Christie M. Fuller is the Thomas O’Kelly-Mitchener Associate Professor of Computer Information Systems at Louisiana Tech University. Her research in deception and decision support systems has been published in Decision Support Systems, Expert Systems with Applications, and IEEE Transactions on Professional Communication, along with other journals and conference proceedings.

Hettie A. Richardson is an associate professor and chair of the Department of Management, Entrepreneurship, and Leadership in the Neeley School of Business at Texas Christian University. Her methodological research interests focus on common method variance and other measurement-related issues. She also studies employee involvement, empowerment, and strategic human resource management.

Yasemin Ocal is an assistant professor of marketing at Texas A&M University-Commerce. Her research focuses on response rates and response bias in marketing research and has appeared in journals such as Journal of Leadership and Organizational Studies and at numerous international conferences; she also organized a session on survey response rate issues at the World Marketing Congress of the Academy of Marketing Science.

Guclu M. Atinc is an assistant professor of management at Texas A&M University-Commerce. His current research addresses board composition, top management teams and ownership structures of young entrepreneurial firms, and research methods. Dr. Atinc’s research has appeared in journals such as Organizational Research Methods, Journal of Managerial Issues, and Journal of Leadership and Organizational Studies.

The Problem with Surveys in Research

[Editor’s Note: We are pleased to welcome Ben Hardy who collaborated with Lucy R. Ford on their article entitled “It’s Not Me, It’s You: Miscomprehension in Surveys,” available now in the OnlineFirst section of Organizational Research Methods.]

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’

There is a little of the Humpty Dumpty in all of us. When we communicate with others we tend to think about what we want to say – and choose the words to mean what we want them to mean – rather than thinking carefully about what it is that others will hear.

The same is true when we conduct survey research. We identify a construct, such as job satisfaction, carefully define it, and then produce a series of items which we believe will tap into the cognitive domain occupied by this construct. We then test these items to check that people understand them and use a variety of statistical techniques to produce a finished scale which, we believe, measures just what we choose it to mean – neither more nor less.

Alternatively we may bypass all of this and choose to use a published scale, assuming that all this hard work has been done for us.

Unfortunately, we do not tend to pay much attention to the actual words of the items. Sure, we check whether people understand them but we seldom check whether people understand them in exactly the same way as we do. Instead, like Humpty Dumpty, we fall back on assuming that words mean what we choose them to mean – neither more nor less.

The average noun has 1.74 meanings and the average verb 2.11 (Fellbaum, 1990). This leaves a good deal of scope for words to mean very different things, whatever we, or Humpty Dumpty, might choose. Consider the item ‘How satisfied are you with the person who supervises you – your organizational superior?’ (Agho, Price, & Mueller, 1992). What does it mean to you?

  • How satisfied are you with your boss? (49%)
  • How satisfied are you with your boss and the decisions they make? (25%)
  • Is your supervisor knowledgeable and competent? (6%)
  • Do you like your supervisor? (14%)

One of these probably accords with your interpretation; the figures in brackets are the percentage of people selecting that particular option, and quite a few of them do not agree with you. You knew exactly what the item meant. And so did everyone else. The problem is that you did not agree.
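Tallying the reported shares makes the disagreement concrete: even the modal reading captures less than half the sample (the four figures sum to 94%, so roughly 6% of respondents presumably chose other readings). A minimal sketch:

```python
# Reported shares (%) for each interpretation of the supervision item
# (Agho, Price, & Mueller, 1992), taken from the figures above.
interpretations = {
    "satisfied with boss": 49,
    "satisfied with boss and their decisions": 25,
    "supervisor knowledgeable and competent": 6,
    "like the supervisor": 14,
}
modal = max(interpretations, key=interpretations.get)
disagree = 100 - interpretations[modal]  # includes the ~6% choosing other readings
print(modal, interpretations[modal], disagree)  # satisfied with boss 49 51
```

Whichever reading you had in mind, a majority of respondents answered a subtly different question.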

“So what?” you might argue. If the stats work out, then is there a problem? Well yes, there is. Firstly, we are not measuring what we think we are measuring. Few of us would trust a doctor whose laboratory tests might or might not be measuring what they claim to measure – even if the number looked reassuringly within the normal range. So should we diagnose organizational pathologies on the basis of surveys which may or may not be measuring what they claim to measure – even if the number is reassuring? Just because something performs well statistically does not mean that it tells you anything useful. Secondly, we do not know what individuals would score if they were actually answering exactly the question that the researcher intended. Thirdly, the different interpretations mean that there are different sub-groups within a population, and this may have knock-on effects when linked to other factors, such as intention to leave.

So what is to be done? There are a number of simple fixes. Probably the easiest is to actually go and talk to some of the people who are going to be surveyed and ask them what they think the items in the survey actually mean. This will give a good idea of whether your interpretation differs wildly from theirs, and in many cases you will find that it does.

This problem of other people’s interpretations differing from our own extends beyond survey research, of course. Indeed, there is a whole field of research, that of linguistic pragmatics, which seeks to understand why we interpret things the way that we do. At the heart of it all, however, is communication. And so the assumption that words mean what we choose them to mean – neither more nor less – is a fallacious one, at least as far as other people are concerned. We need to stop thinking about what we are saying and spend a little more time thinking about what others are hearing. Humpty Dumpty was wrong. It is not us who chooses what words mean, it is the recipient of those words. And we ignore their views at our peril.

Agho, A. O., Price, J. L., & Mueller, C. W. 1992. Discriminant validity of measures of job satisfaction, positive affectivity and negative affectivity. Journal of Occupational and Organizational Psychology, 65(3): 185-196.

Fellbaum, C. 1990. English verbs as a semantic net. International Journal of Lexicography, 3(4): 278-301.

Read “It’s Not Me, It’s You: Miscomprehension in Surveys,” from Organizational Research Methods for free by clicking here. Click here to sign up for e-alerts from Organizational Research Methods to get notified for all the latest articles like this.

Ben Hardy is a lecturer in management at the Open University Business School. His research examines the role of physiological processes in management and finance, morale in organizations, and linguistic factors in survey research. He obtained his PhD from the University of Cambridge in 2009. He also earned an MBA and MPhil from the same institution and a bachelor of veterinary medicine and surgery from the University of Edinburgh. He is a member of the Royal College of Veterinary Surgeons.

Lucy R. Ford is an assistant professor of management in the Haub School of Business at Saint Joseph’s University. Her research interests include leadership, teams, and linguistic issues in survey development. Dr. Ford has served on the executive committee of the Research Methods Division of the Academy of Management, and as the co-chair of the pre-doctoral consortium hosted by Southern Management Association. She has delivered numerous workshops on research methods and scale development at both regional and national conferences. Her work has been published in The Leadership Quarterly, Journal of Organizational Behavior, and Journal of Occupational and Organizational Psychology, among others. She received her BBA in human resources management from East Tennessee State University, and her PhD in organizational behavior from Virginia Commonwealth University.

Huddle Up! Team Learning In the Workplace

Humans learn through social interaction. In the workplace, huddles – informal meetings of two or more people gathered to discuss work-related issues – play a critical role in the learning process that contributes to the success of the organization. Ryan W. Quinn of Brigham Young University and J. Stuart Bunderson of Washington University in St. Louis explored this concept in the context of newspaper newsrooms in “Could We Huddle on This Project? Participant Learning in Newsroom Conversations,” forthcoming in the Journal of Management and now available in the journal’s OnlineFirst section:

The theory and results of this study offer important advances to the study of learning in social interaction. If the organic capabilities of an organization are created in the informal interactions in which its members participate (Morand, 1995), then learning how to huddle in ways that generate quality learning can contribute to the effectiveness and the adaptiveness of the organization as a whole. As cited earlier, “[H]uman learning in the context of an organization is very much influenced by the organization, has consequences for the organization and produces phenomena at the organizational level” (Simon, 1991: 126). A better understanding of huddles gives us new ways to understand and improve this learning.

Click here to continue reading “Could We Huddle on This Project? Participant Learning in Newsroom Conversations,” forthcoming in the Journal of Management.

Control Variables in Management Research

Guclu Atinc and Marcia J. Simmering, both of Louisiana Tech University, and Mark J. Kroll of the University of Texas at Brownsville collaborated on “Control Variable Use and Reporting in Macro and Micro Management Research,” published online first in Organizational Research Methods.

Professor Atinc provided some background information on the article.

Who is the target audience for this article?

The target audience for this article is researchers in the organizational, behavioral, and social sciences. The article is written primarily for those conducting academic research. Although it compares the use of control variables in macro and micro management research, its lessons generalize to other fields.

What inspired you to be interested in this topic?

This paper started as a class paper for a doctoral seminar in research methods. We recognized that the use of control variables in management research is widespread, yet not enough work has been done to set standards for the practice. We aimed to fill this gap.
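The stakes of “proper use” can be illustrated with a quick simulation (a hedged sketch with made-up data, not the authors’ analysis): when a confounding variable drives both the predictor and the outcome, omitting it as a control badly biases the focal coefficient.

```python
# Omitted-variable bias: z (e.g., firm size) drives both x and y,
# so a regression of y on x alone attributes z's effect to x.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)            # confounder
x = 0.8 * z + rng.normal(size=n)  # predictor correlated with z
y = 2.0 * z + rng.normal(size=n)  # outcome driven by z, not by x

def ols(y, *cols):
    """Least-squares coefficients with an intercept in position 0."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_naive = ols(y, x)[1]     # x looks important (close to 1.0)
b_ctrl = ols(y, x, z)[1]   # controlling for z, x's effect is near 0
print(round(b_naive, 2), round(b_ctrl, 2))
```

The flip side, which the paper also addresses, is that piling in atheoretical controls carries its own costs, so inclusion should be justified, not reflexive.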

Were there findings that were surprising to you?

The findings were definitely surprising. We found that the majority of the studies did not make proper use of control variables.

How do you see this study influencing future research and/or practice?

We believe it is going to direct attention to the proper use of control variables in organizational research.

How does this study fit into your body of work/line of research?

One of our purposes as organizational researchers is to contribute to the way research is conducted in our field. We believe our study serves that purpose.

How did your paper change during the review process?

The three anonymous reviewers and the action editor of Organizational Research Methods provided us with very valuable comments and recommendations. In fact, based on the reviews we received, we extended our sample, strengthened our criteria for analysis, and revised our statistical methods. The quality of our paper increased dramatically thanks to the insightful comments we received during the review process.

What, if anything, would you do differently if you could go back and do this study again?

Most of what we should have done in the first place was identified by the reviewers during the review process. Thus, we believe we did what we were asked to do and met the high quality requirements of Organizational Research Methods. However, we could have used a larger sample and a broader set of criteria initially, so that the review process would have been easier for both parties (authors and reviewers).
