Is achieving true customer satisfaction getting easier or harder?

This is a bit of a simplistic question, in that there’s much more to the management of the customer experience these days than just customer satisfaction. But satisfaction is still the foundation of the customer experience. It’s the complexity and interconnectedness of the underlying components that have changed so rapidly in the past decade. In many ways, this makes it more difficult to have satisfied customers. Dealerships need to keep up with technology and processes that meet the changing needs and desires of customers. But selling and servicing cars is still very much a people business — it’s really the context that has changed.

We are experiencing fundamental changes in two broad areas that impact dealerships:

  • Changing markets and customers
  • Different models for selling (and servicing) vehicles

This is not revolutionary insight, but it sets the stage for a comparison of how we measure customer satisfaction now (which really hasn’t changed much), and how we should change our approach to be more relevant.

Flat-lining – a barrier to progress

One of the things that always struck me when measuring customer satisfaction over the years is how little things change. Sometimes they don’t change at all. It was always a source of frustration to me when talking to clients at dealerships or at the manufacturer. How do you act on data that barely moves and shows little differentiation between good and poor results? Particularly in a rapidly changing environment, this can be a real barrier to progress.

There are many reasons why satisfaction scores don't change, and why they often fail to show you where to focus your attention. Recently, I tuned in to a webinar hosted by consulting firm MaritzCX that addressed the question of how to operationalize customer experience in a business. Part of the talk focused on the challenges organisations face in monitoring customer satisfaction, and it specifically addressed the issue of flat-lining scores. MaritzCX identified four key reasons why scores often don't move at all. I've added two others (the last two) which I've seen time and time again.

Why do CS scores often remain static?

We’re measuring the wrong things
Most CS surveys are highly rational in nature. Customers react with their hearts and their heads — we probably miss 50 per cent of the story. New developments in immediate response and intelligent survey systems will be much more important in the future.

The quality of executive team engagement in the process
Lip service won’t work. The leadership team needs to get involved and pay attention to more than just the scores.

There is no voice of the employee in the design of the process
Employees are on the front line. Whatever their level, they see customer interactions first-hand and can offer valuable insights.

Have you changed?
Don’t keep doing the same things and expect different results.

CS programs have developed a momentum of their own
The industry is still struggling to truly get beyond chasing the score. Resistance to changing the game is strong, and CS incentive programs continue to reinforce this.

Score compression
Look at most CS programs and the scores cluster in the mid to high 90s. Is this meaningful (or even accurate)? How can you act on it?

Regarding score compression, it was clear to me a few years ago that we needed to change the way we measured and scored the customer experience to find greater differentiation between good and poor performers. Given the considerable legacy in most auto customer experience measurement programs and the potential disruption involved in making changes, we looked for a simple solution: focus more heavily on "moment of truth" metrics, namely the delivery in sales and FRFT (fixed right first time) in service. We used the existing binary (Yes/No) questions to capture customer feedback, such as "Were there any problems at delivery?" for sales and "Was your vehicle serviced right first time?" for service. We then developed a new overall index that allocated a much higher weighting to these questions. What we found was that the distribution of satisfaction scores across the dealer body varied significantly more than it did under the traditional index.
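As a rough sketch of how such a reweighted index might work, the snippet below blends the usual rating questions with the binary "moment of truth" answers, giving the latter the larger share of the weight. The question names, the 60 per cent weighting and the sample responses are all illustrative assumptions, not the actual program's definitions.

```python
# Illustrative only: question names, weights and sample data are assumptions.

def traditional_index(r):
    """Equal-weighted average of 1-10 rating questions, rescaled to 0-100."""
    ratings = [r["overall_sat"], r["staff_sat"], r["facility_sat"]]
    return sum(ratings) / len(ratings) * 10

def moment_of_truth_index(r, mot_weight=0.6):
    """Blend the rating average with the binary moment-of-truth outcomes,
    giving the binary questions the larger share of the weight."""
    rating_part = traditional_index(r)
    # 1 = Yes: "no problems at delivery" and "fixed right first time"
    mot_part = 100 * (r["no_delivery_problems"] + r["fixed_right_first_time"]) / 2
    return (1 - mot_weight) * rating_part + mot_weight * mot_part

# Two customers with near-identical ratings but different moment-of-truth outcomes
a = {"overall_sat": 9, "staff_sat": 9, "facility_sat": 10,
     "no_delivery_problems": 1, "fixed_right_first_time": 1}
b = {"overall_sat": 9, "staff_sat": 9, "facility_sat": 9,
     "no_delivery_problems": 0, "fixed_right_first_time": 1}

print(traditional_index(a), traditional_index(b))          # ~93.3 vs 90.0: compressed
print(moment_of_truth_index(a), moment_of_truth_index(b))  # ~97.3 vs 66.0: differentiated
```

The point of the sketch is simply that two customers who look almost identical on a rating-only index can be pulled well apart once the binary outcomes carry most of the weight, which is what produced the wider spread across the dealer body.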

This approach allowed us to differentiate much more clearly between top performing and lower performing dealerships. There were quite a few surprises when we compared dealers who performed well on the traditional index but poorly on the modified index. We saw some significant shifts — both upwards and downwards! Some dealerships that were apparently doing well on the “old” index did not do well when different measures were given more weight.

Do we need to move the goalposts?

Lack of differentiation between good and poor performance, static scores and the need to capture the total customer experience will all be bigger challenges in the new environment. Looking at the changes taking place, you could ask: Is the current satisfaction metric still the most relevant concept or goal for a business? Customer satisfaction will always be the foundation of a good relationship with your customers. But, given all the changes we see and our enhanced ability to capture customer feedback more quickly and effectively, shifting the goalposts will be essential.

One component that has always been part of good customer experience management is trust. Trust is central to all human relationships, and it is an outcome of many different elements of the customer experience. No matter what improvements you make in technology or business processes, without trust a long-term positive relationship with a customer is not possible.

How do you measure trust?

I would argue that, just as trust is an outcome of many aspects of the customer experience, customer retention and advocacy are the outcomes of trust. Unfortunately, both come with some caveats. I have commented before about "fake loyalty" (e.g., repeat business based entirely on price promotions or special offers), and advocacy cannot be measured with a single question (How likely are you to recommend ____?). Many organisations are building more accurate measures of advocacy by asking new customers directly what drove them to their business. Done properly, customer retention and advocacy should be the new targets.
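To make that concrete, here is a loose sketch of how retention and directly-attributed advocacy might be tracked from customer records. The field names, the exclusion of promotion-driven repeats and the sample data are assumptions for illustration, not a prescribed method.

```python
from collections import Counter

# Hypothetical customer records; field names and values are illustrative assumptions.
customers = [
    {"id": 1, "repeat_purchase": True,  "bought_on_promotion": True,  "heard_via": "friend_referral"},
    {"id": 2, "repeat_purchase": True,  "bought_on_promotion": False, "heard_via": "online_search"},
    {"id": 3, "repeat_purchase": False, "bought_on_promotion": False, "heard_via": "friend_referral"},
]

# Retention, excluding "fake loyalty" driven purely by price promotions or offers
retained = [c for c in customers if c["repeat_purchase"] and not c["bought_on_promotion"]]
retention_rate = len(retained) / len(customers)

# Advocacy measured from what new customers say actually brought them in,
# rather than inferred from a single likelihood-to-recommend question
sources = Counter(c["heard_via"] for c in customers)
advocacy_rate = sources["friend_referral"] / len(customers)

print(f"Retention (excluding promotion-only repeats): {retention_rate:.0%}")
print(f"Advocacy (came in via a referral): {advocacy_rate:.0%}")
```

However it is implemented, the key is that both numbers come from observed behaviour and direct attribution rather than from a single survey score.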

Five things to focus on in future

Measuring and operationalizing the customer experience will be more complex in the future, but not necessarily more difficult. We now have the technology to make complex tasks simpler and to achieve better results. If I were given a blank sheet and the opportunity to start from scratch, here are the five things I would focus on:

  • Make sure you can truly differentiate between good and poor (or mediocre) performance;
  • Timing (immediacy) of customer feedback is essential;
  • Make sure you can capture both rational (what you do) and emotional (how you’re doing it and how the customer feels) feedback from customers;
  • The medium is the message — that smartphone is the key and a guided conversation needs to replace the traditional survey;
  • The customer experience starts online and continues in the dealership. Make sure you understand both and can address the gaps between the two.
