An Introduction to MaxDiff (and gamified MaxDiff)

  • June 15, 2016


What is MaxDiff, anyway?

As survey researchers, we’ve been using MaxDiff for many years as a better and more efficient way to “rank” long lists of items. Have a product or service with 47 features to prioritize? MaxDiff. Trying to identify relative brand strength in a crowded category? MaxDiff. Factors contributing to user satisfaction? MaxDiff. Picking the best ad copy? You get the idea. It sometimes goes by other names, such as “maximum difference scaling” or “best worst scaling.”

When you give a normal human a list of things to sort, the general rule of thumb is that they’ll be able to mentally organize a set of 6 or 7 unique items. When the set grows larger, however, our brains “chunk” them into groups and clusters, and we become less and less accurate at determining if item 37 is better or worse than item 38.

[Image: Traditional MaxDiff question example]

MaxDiff is a type of choice model – a statistical algorithm for predicting people’s decisions based on sets of known choices. For example, if you like apples more than bananas and bananas more than pears, our model tells us that you probably like apples more than pears. In the practice of survey research, MaxDiff presents the user with a series of smaller choice sets and then recombines that data into a model of the complete set. This process requires three pieces:

  1. An experimental design, which describes the subsets to be presented to the user. For example, if you have a list of 20 items to rank and want to present them to someone in groups of four, the design determines which items go into each group and how many groups will be displayed. MaxDiff designs need to be orthogonal (a fancy way of saying that the comparison groups should be well balanced and, as a whole, provide uniform coverage of the items and of the pairings among them).
  2. A choice interaction, which is the actual display of a group from the design. The traditional format for a MaxDiff survey question is shown in the snapshot above. Respondents repeat this interaction multiple times, selecting their most and least preferred items from a series of sets.
  3. A model, which translates the choice data into an analysis of the full set. Several techniques can be applied here, varying in complexity and in the skill required.

Most commercial survey platforms provide a question type for item #2 (the choice interaction). The task of building a design and modeling the resulting data is left as an exercise for the researcher.
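To make the three pieces concrete, here is a minimal Python sketch. The fruit items and numbers are made up for illustration, and the "model" shown is the crudest possible one (a best-minus-worst count; more rigorous options are discussed in the analysis section below):

```python
# Piece 1: an experimental design -- which items appear together in each set.
design = [
    ["apples", "bananas", "pears", "grapes"],    # set shown on page 1
    ["apples", "pears", "plums", "cherries"],    # set shown on page 2
    # ...more sets, chosen so every item gets balanced exposure
]

# Piece 2: the choice interaction -- what one respondent picked in each set.
responses = [
    {"best": "apples", "worst": "grapes"},       # picks for set 1
    {"best": "cherries", "worst": "pears"},      # picks for set 2
]

# Piece 3: the model -- here, a simple best-minus-worst count per item.
scores = {}
for subset, picks in zip(design, responses):
    for item in subset:
        scores.setdefault(item, 0)
    scores[picks["best"]] += 1
    scores[picks["worst"]] -= 1

print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```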

Designing a MaxDiff Model

There are generally three approaches you can take to creating your MaxDiff design:

  1. You can build a simple Excel template that generates random sets, eliminate any sets with duplicate items, and rely on the law of large numbers to balance everything out. This is the easiest approach, but also the most dangerous: it is very easy to end up with an unbalanced design that shows some items more often than others, or that never directly compares certain pairs of items (see the sketch below).
  2. You can use a commercial tool specifically to create a design. This costs money, of course, but will give you a reliable design with minimal effort. (You will still need to program your design into your survey software.)
  3. You can use a commercial tool that combines the design task with the question interface and reporting in a single package. (That is what Datagame MaxDiff Rankifier does.)

If you’re not able to purchase a commercial tool, you may want to try out our free MaxDiff Model Designer. It will create experimental designs for you.
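If you do decide to roll your own design along the lines of option 1, at least check how balanced the result is before fielding it. The following is a rough Python sketch, not a substitute for a proper design tool; the item names, set size, and number of sets are placeholders. It generates random sets and reports item exposure plus any pairs that are never shown together, which is exactly the imbalance that makes the random approach risky:

```python
import random
from collections import Counter
from itertools import combinations

def random_maxdiff_design(items, set_size=4, num_sets=15, seed=1):
    """Generate random MaxDiff sets and report how balanced they are.

    This is the quick-and-dirty approach from option 1: it relies on
    chance, so inspect the balance report before fielding anything."""
    rng = random.Random(seed)
    sets = [rng.sample(items, set_size) for _ in range(num_sets)]

    # How many times each item appears across all sets.
    exposure = Counter(item for s in sets for item in s)

    # Which pairs of items are never shown together (never directly compared).
    seen_pairs = {frozenset(p) for s in sets for p in combinations(s, 2)}
    never_compared = [p for p in combinations(items, 2)
                      if frozenset(p) not in seen_pairs]
    return sets, exposure, never_compared

items = [f"Item {i}" for i in range(1, 21)]       # 20 items to rank
sets, exposure, never_compared = random_maxdiff_design(items)
print("Exposure per item:", dict(exposure))
print("Pairs never directly compared:", len(never_compared))
```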

Programming your MaxDiff Interactions

Once you have your design ready, you’ll need to create a series of survey pages: one MaxDiff question per page. These pages will look nearly identical, with the same question text and labels; only the items being evaluated will change. For consistency and respondent efficiency, one column should always be the “most important” selection grid and the other should always be the “least important” grid. The question text itself simply prompts the user to pick their most and least important items, or most and least interesting, or whatever other descriptive scale you’re working with.
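If you’re scripting the pages yourself, the structure is simple enough to generate directly from the design. Here is a minimal, hypothetical Python sketch; the prompt wording and column labels are placeholders you would adapt to your own scale:

```python
# Each design set becomes one survey page; only the items change.
design = [
    ["Feature A", "Feature B", "Feature C", "Feature D"],
    ["Feature A", "Feature C", "Feature E", "Feature F"],
]

PROMPT = ("Of the items below, which is MOST important to you, "
          "and which is LEAST important?")

def build_pages(design, prompt=PROMPT):
    pages = []
    for page_number, subset in enumerate(design, start=1):
        pages.append({
            "page": page_number,
            "question_text": prompt,                  # identical on every page
            "columns": ["Most important", "Least important"],
            "items": subset,                          # the only thing that varies
        })
    return pages

for page in build_pages(design):
    print(page["page"], page["items"])
```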

If you’re using a fully integrated tool such as Datagame, no programming is required at this step: the engine will automatically present your design to each user.

Analyzing your MaxDiff Data

After you’ve collected your dataset, there are a couple of approaches you can use for analyzing the data.

  1. Raw totals (counts analysis): You can simply add up the number of times an item was selected as most or least important and rank-order the items accordingly. This works with the aggregated data, but won’t give you actionable information per respondent. That said, this task can be accomplished quite easily in Excel or any other analysis tool (see the sketch after this list).
  2. Hierarchical Bayes (HB): Entire white papers and technical documents can be (and have been) written about Hierarchical Bayes. The abridged version is that HB analysis builds a probability model that lets you infer the probable choice between any two items, for each individual respondent. The primary benefit is that each person in the dataset ends up with their own full set of scores for every item.
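As an illustration of the counts approach, here is a small Python/pandas sketch. The column names and data are hypothetical; you would map them to whatever long-format export your survey platform produces:

```python
import pandas as pd

# Hypothetical long-format export: one row per respondent x set x item,
# flagging whether that item was picked as "most" or "least" in that set.
data = pd.DataFrame({
    "respondent":   [1, 1, 1, 1, 2, 2, 2, 2],
    "item":         ["A", "B", "C", "D", "A", "B", "C", "D"],
    "picked_most":  [1, 0, 0, 0, 0, 1, 0, 0],
    "picked_least": [0, 0, 0, 1, 0, 0, 1, 0],
})

counts = data.groupby("item").agg(
    times_shown=("item", "size"),
    most=("picked_most", "sum"),
    least=("picked_least", "sum"),
)
# Net score: wins minus losses, normalized by how often the item was shown.
counts["net_score"] = (counts["most"] - counts["least"]) / counts["times_shown"]
print(counts.sort_values("net_score", ascending=False))
```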

Doing your own HB analysis is no junior undertaking, but if you need respondent-level scores it’s unavoidable.

OK, so why did you gamify MaxDiff?

The traditional user interaction for MaxDiff works fine from a methodology perspective, but it has one major drawback: the experience is repetitive and boring. Asking a person to repeat the same task 15 or 20 times and expecting the same degree of attentiveness each time is optimistic; past a breaking point, data quality degrades severely. That drawback carries two consequences. First, it increases the survey dropoff rate (which reduces response rate, which increases recruiting costs). Second, it hurts user satisfaction with the survey experience, which damages the long-term quality of your recruiting channel.

We’ve found that by simply swapping in the MaxDiff Rankifier game where a traditional question battery would have lived, we see significant improvements to both response rate and user satisfaction. Not convinced? See the game demo below. We’ve also found that integrating the experimental design, user experience, and analytics into a seamless package makes the whole approach much more valuable, easier to use, and practical to deploy more often.

Learn more about MaxDiff Rankifier and other Survey Gamification Games
