When Validation Backfires

I just came across an interesting issue with validation in an online survey using a Van Westendorp pricing model. Van Westendorp is one of the common ways to test pricing by directly questioning prospective purchasers. This post isn’t about Van Westendorp, also known as the Price Sensitivity Meter (you can find plenty of references online, including a starting point on Wikipedia), but you need to know a little to understand the issue. Survey respondents are asked a series of questions about price perceptions, as follows:

  • At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would still consider buying it? (Expensive/High Side)
  • At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
  • At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
  • At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)

There is some debate about the order of questions, but in this example the questions were asked in the order shown. The actual wording was slightly different. Researchers are sometimes concerned about whether respondents understand the questions correctly, especially since the wording is so similar (the Expensive, Cheap, etc. designations are usually not included in the question as seen by a survey taker). One way to address this concern is to highlight the differences, or to point out that the questions are slightly different and encourage the respondent to read carefully.

The other approach is to apply validation that tests the numerical relationship. Correctly entered numbers should be Too Cheap < Good Value < Expensive < Too Expensive. (We usually ask these questions on separate pages so as to get answers that are as independent as possible, rather than letting respondents see the questions as a group and make their answers consistent or nicely spaced.)
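That ordering check is simple to express in code. The sketch below is a hypothetical illustration of such validation, keyed to named questions rather than page positions; the question keys and messages are my own, not the vendor's.

```python
def validate_pricing(answers):
    """Check that the four price answers are consistent.

    Expected ordering: too_cheap < good_value < expensive < too_expensive.
    Returns a list of error messages (empty if the answers pass).
    """
    order = ["too_cheap", "good_value", "expensive", "too_expensive"]
    errors = []
    # Compare each adjacent pair in the expected ordering.
    for lower, higher in zip(order, order[1:]):
        if answers[lower] >= answers[higher]:
            errors.append(f"'{higher}' should be greater than '{lower}'")
    return errors

# A consistent set of answers passes:
print(validate_pricing(
    {"too_cheap": 5, "good_value": 10, "expensive": 20, "too_expensive": 30}
))  # []

# Swapping Expensive and Too Expensive is flagged:
print(validate_pricing(
    {"too_cheap": 5, "good_value": 10, "expensive": 30, "too_expensive": 20}
))
```

Because the check refers to questions by name, it gives the same result no matter which page each question appears on.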

In this case, the research vendor chose to validate, but messed up big-time. When I entered a value for ‘Too Expensive’ that was higher than the value for ‘Expensive’, I was told to “make sure your answer is smaller or equal to the previous answer.” Yes, they forced me to provide an invalid response! I hope they caught the problem before the survey had gathered all the completes, but maybe they didn’t – given how fast online surveys often fill. They probably had to field the survey again, because the pricing questions were integral to the research objectives.

Why did this happen, and how can you prevent a similar problem in your surveys?

My guess is that the underlying cause was that debate about question order that I mentioned earlier.  The vendor probably had the questions switched when the validation was tested, and then changed the order before the survey was launched.
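A sketch of how that could happen: if the validation rule is keyed to page position (“this answer must be no larger than the previous one”) rather than to named questions, reordering the pages silently inverts the constraint. The function below is a hypothetical illustration of that failure mode, not the vendor's actual code.

```python
def position_based_check(previous_answer, current_answer):
    """Validation written against page position, not question names.

    Presumably authored when 'Too expensive' came before 'Expensive',
    so each later answer was expected to be smaller or equal.
    """
    return current_answer <= previous_answer

# After the reorder, 'Expensive' precedes 'Too expensive', so a *correct*
# respondent (too_expensive > expensive) now fails the check:
expensive, too_expensive = 20, 30
print(position_based_check(expensive, too_expensive))  # False -> rejected
```

Validating against named questions, as in the earlier ordering rule, would have survived the reorder; either way, a test pass after the order change would have caught it.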

But the real message is that proper testing could have identified the issue in time to correct a very expensive error. There is no excuse for what happened. This doesn’t even fall into the class of problems that need a pilot or soft launch to catch.

So, test, test, and test again. In particular, test using people who aren’t research professionals or experienced survey takers.

If you are creating your own surveys, don’t let this kind of problem stop you.  You can do just as good a job of testing as the big companies, and big companies aren’t immune.  This survey was delivered by one of the top 10 U.S. market research firms.  I won’t publish the company name here, but I’ll probably tell you if you catch me at one of my workshops (coming soon).

Idiosyncratically,

Mike Pritchard


Copyright © 1995 - 2017, 5 Circles Research