Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open at the Life Direct website. She is comparison shopping – looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. These are priced marginally differently per month. She has heard good things about some insurance companies but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they really like the best. “Of course,” says the salesman as he starts to fill in the sales order form, “we haven’t discussed your options yet. Do you prefer automatic or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but rather to an astonishingly complex human algorithm which somehow handles what we might call menu-based choice, or MBC.

How complex? One estimate I have seen is that the typical burger menu board – from which, during the 90 seconds you spend in the queue, you must make a choice – offers in excess of 2 million possible combinations and permutations for a typical $15 order. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal tastes, one’s own sense of value and the various conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop. They become an efficient way of dealing with information overload.

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis.
Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are considered not discretely, but jointly – hence con-joint.
Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical example of this kind of study gave respondents a set of 16, 24, or 32 cards on which various combinations and permutations of product description were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter came in four different price points, low or medium saltiness, and super crunchy, regular crunchy or smooth, then we have 5 x 4 x 2 x 3 = 120 potential combinations of product to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combo of price, brand, flavour and so on.
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors most drive consumer decisions (price is often the main driver), and the degree to which consumers trade off, say, a preferred brand at a higher price against a lower-priced offer from a brand we don’t trust so well.
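
As a sanity check on the combinatorics above, here is a minimal Python sketch – with invented brand and price labels – that enumerates the full 120-combination space. A real study would reduce it with a carefully balanced fractional design, not the random sample shown here.

    import itertools
    import random

    brands    = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]
    prices    = ["$3.50", "$4.00", "$4.50", "$5.00"]
    saltiness = ["low salt", "medium salt"]
    textures  = ["super crunchy", "regular crunchy", "smooth"]

    # the full design space: 5 x 4 x 2 x 3 = 120 possible product cards
    full_space = list(itertools.product(brands, prices, saltiness, textures))
    print(len(full_space))                   # 120

    # stand-in for a properly balanced 32-card subset
    cards = random.sample(full_space, 32)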

Conjoint has the advantage of boiling each variable down to utility scores, and – since price is one of these variables – allowing us to put a price-tag on brand appeal, or flavour.
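
To illustrate the price-tag idea with invented part-worth numbers (not output from any real study): once we know how much utility a dollar is worth, a gap between two brands’ utilities converts straight into dollars.

    # Invented part-worth utilities, for illustration only.
    price_utils = {3.50: 0.60, 5.00: -0.60}      # utility at two price points
    utils_per_dollar = (price_utils[3.50] - price_utils[5.00]) / (5.00 - 3.50)

    brand_utils = {"trusted brand": 0.50, "unknown brand": -0.30}
    gap = brand_utils["trusted brand"] - brand_utils["unknown brand"]
    print(f"The trusted brand is worth about ${gap / utils_per_dollar:.2f} extra a jar")  # ~$1.00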

Even so, the paper-based system – still requiring at least 32 cards to give us enough data on the peanut butter survey – places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations. First, the computer approach eased the cognitive load by offering three or four cards at a time, so that respondents could more easily choose their preferred option (or none of these) over 9 or 10 iterations. Another innovation – useful in some situations, though not universally – is adaptive conjoint, which, as it cycles a respondent through a series of choice exercises, may quickly decide that this respondent never chooses the Sanitarium brand, and so begins testing variations among the preferred ETA and Pam’s brands instead. It focuses on the useful part of the choice landscape.
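
A toy illustration of that pruning idea – the gist only, not Sawtooth’s actual algorithm:

    def prune(brands, rejection_counts, threshold=3):
        """Drop any brand this respondent has now passed over `threshold` times."""
        return [b for b in brands if rejection_counts.get(b, 0) < threshold]

    brands = ["Sanitarium", "ETA", "Pam's"]
    rejections = {"Sanitarium": 3, "ETA": 0, "Pam's": 1}
    print(prune(brands, rejections))   # ['ETA', "Pam's"] - later tasks focus here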

These approaches have been increasingly honed and refined, and I have always admired the developers of Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products. They moved away from favouring Adaptive Conjoint a few years ago.

Up until about 2010 the software offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However, in the last decade or so, with the rise of e-commerce and the presence of bundled offers (think how phone, energy and broadband packages are now being bundled), classic choose-one-card conjoint only goes some of the way to explaining what goes on in the consumer mind.

Enter MBC.

MBC is a Sawtooth product, and a particularly expensive piece of software. I recently forked out around NZ$12,000 for a copy, and dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and it has a genuinely steep learning curve.

In my case, I’m just lucky that I have worked with conjoint for a few years now, and conceptually can see where this is coming from. But even so, I can think of no software that I have dealt with in 22 years that has required such a demanding user experience. Luckily by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want – and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.

And that’s the point of MBC: to emulate as closely as possible real choice situations. We might discover that consumer choice architecture radically transforms itself depending on whether the price point is $10, $12 or $15. Sawtooth’s experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or a Double Whopper, but regardless, the shopper may always prefer to have a Coke of a certain size. In other words, sometimes there is no trade-off at all – or there is one, but it applies only to the fries.

In a typical example the respondent is shown 10 variations of menu boards, and each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, it will cost the client somewhat more to conduct an MBC survey compared to a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to be tested with small sample sizes. A thousand respondents may be required as a minimum, each doing ten choices to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software then has enough information to fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break down the survey into constituent questions about price, taste, brand, options etc – that is, in considering these elements not jointly, but singly.

Now comes the good part. By assembling this data, you could more or less model the optimum menu board. If I were Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20, but something closer to $9.80 – a lift in revenue of around 6.5%, willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on the demand for other options.
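
Here is a crude sketch of that what-if logic, with invented prices and utilities standing in for the individual-level model MBC would actually estimate. Re-price an item, re-run the simulated $10 shopper, and the spend moves:

    import itertools

    def best_order(prices, utils, budget=10.0):
        """Return (utility, spend, items) of the utility-maximising order under budget."""
        # a real simulator would average over many respondents' utilities;
        # this sketch uses one shared utility function
        best = (0.0, 0.0, ())
        for r in range(1, len(prices) + 1):
            for combo in itertools.combinations(prices, r):
                cost = sum(prices[i] for i in combo)
                if cost <= budget:
                    util = sum(utils[i] for i in combo)
                    if util > best[0]:
                        best = (util, cost, combo)
        return best

    prices = {"burger": 6.00, "fries": 2.50, "coke": 2.00, "extra cheese": 1.00}
    utils  = {"burger": 5.0,  "fries": 2.0,  "coke": 1.8,  "extra cheese": 0.6}
    print(best_order(prices, utils))    # shopper stops at $9.50
    prices["fries"] = 2.00              # what-if: shave 50c off the fries
    print(best_order(prices, utils))    # now the full $10.00 order wins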

How accurate is MBC?

As with any conjoint approach, the accuracy is largely dependent on the questionnaire design, and on the survey design – which can be tested before it goes into the field, using random data run through the MBC simulator.

If I were making a landscape map of New Zealand and had 10,000 trig points, I could probably come up with a pretty accurate picture of the physical landscape. There would be enough points to suggest the Southern Alps, and the ruggedness of the King Country, for example. But it wouldn’t be totally granular or perfect, and I would be extrapolating a lot of the story.

So similarly, given that we are testing, say, 2 million combinations of menu with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual data (held out for comparison) and found the software chooses the right options with 80% accuracy or thereabouts. So for my money, MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and explaining the interactions that are going on.
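
The holdout test itself is simple to sketch (toy data below): keep some observed choices out of the estimation, predict them, and count the hits.

    def hit_rate(predicted, actual):
        """Share of holdout tasks where the model's top pick matches the real choice."""
        return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

    predicted = ["whopper", "wrap", "whopper", "fish", "wrap"]
    actual    = ["whopper", "wrap", "fish",    "fish", "wrap"]
    print(f"{hit_rate(predicted, actual):.0%}")   # 80%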

Currently I’m employing the software for a client who is looking at bundled products in a particularly complex market – that’s why I bought the software. There is simply no other tool available to do the job. Without giving any client information away, I will within a few weeks disguise the data and make available a case study to show how MBC works, and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as they stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620

A solution to the problems with ranking questions. My wine choice system revealed.

How did I choose this wine and win the compliments of the wine-waiter? My sad secret is revealed, and shows why ranking questions don’t work.

I have never liked ranking questions. For a start, they seem to place an unusually high cognitive burden upon respondents. After a lifetime of being asked to score things out of five, and to work efficiently through batteries of rating-style questions, ranking questions loom like a sudden speed bump. Any efficient respondent hits the thing going far too fast, and their suspension – or tolerance for the questionnaire – bottoms out. Clunk!

It isn’t just because there is confusion in the typical respondent about the difference between rating and ranking. That’s one cause of the problems. (Heaven knows, another is the “meaning” of the numbers now that a “5” means fifth ranked instead of an “excellent.”)

Another cause of the problem is that we often have too many items to rank. I call this the wine list effect. If you go into a restaurant and the waiter asks you whether you have selected a wine yet, you’ll know the feeling I generally experience. There’s a list of 30 white wines. You don’t know any of them, but you know three things.

  1. You prefer Chardonnay over Riesling.
  2. Anything over $60 a bottle is ridiculous.
  3. On the other hand, selecting the cheapest wine will simply make you look mean. Avoid that one.

My own wine-choosing rules are quite efficient, and present quite a good cognitive shortcut, as all heuristics are meant to do. The rules quickly eliminate a dozen of the 30 wines on offer. The remaining wines I am, of course, ill-equipped to choose from – I am indifferent to them. So at this point I quickly point to one of the remaining 18 candidates. It’s probably a pretty good drop, and if I am quick enough in my selection I maintain an appearance of being knowledgeable.

“Good choice Sir,” says the waiter. (And with those words he has just earned his tip for the evening!) Everybody wins.

Well, that’s my strategy revealed. That’s how most people rank most things in life. As soon as the shortlist grows longer than five or six items, we start applying a few blanket rules or heuristics. Our brains are simply not engineered to conduct a fully rational, one-by-one ranking process across seven or more competing products or services or attributes.

So what kind of data do you get when you ask a respondent in your questionnaire to rank nine service attributes, or 11 brands, or a dozen product configurations from best down to worst?

It is just too complicated!

And supposing you had 1000 respondents diligently ranking these items from one down to 12 (and we are assuming that they each fully understand that one means best, and 12 means worst), what sort of data do we get? Is the gap between the first and second attribute the same size as the gap between the second and third attribute, et cetera? Can we tell whether the top three attributes are clear leaders? Or have they nudged just ahead of the pack?

Likewise, do we know whether an attribute has failed to impress – or whether in fact it has actively turned off the respondents?

Ranking questions fail to explain or illuminate what’s really going on in the consumer mind. You don’t know whether I rejected the Champagne because I don’t like Champagne, or because it was way over $60 per bottle, or whether I didn’t reject it at all, but rather chose something even better.

Is there an answer to this problem? One elegant solution – it isn’t perfect – is Max-Diff. In a typical exercise you may have a dozen variables which you wish to see ranked meaningfully.

Max-Diff breaks the cognitive process down into maybe seven or eight chunks. Each time the respondent is presented with four variables and asked to choose the most important, as well as the least important. They then repeat the exercise, but with different combinations of variables – they may do this seven or eight times. This gets us over the cognitive problems associated with ranking questions.

It also gets us over the analytical problems. Here – and I’m thinking of the Sawtooth Max-Diff module – each variable in question is ultimately scored with an index that clearly shows the relative preference or rejection. I can see the degree to which each was chosen as “best” or as “worst” – or I can see the net score (best minus worst).
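
A minimal sketch of that count-based scoring, using made-up choice tasks. (Sawtooth’s module also offers model-based estimation; best-minus-worst counts are the simple view.)

    from collections import Counter

    tasks = [
        {"best": "price",   "worst": "packaging"},
        {"best": "flavour", "worst": "packaging"},
        {"best": "price",   "worst": "brand"},
    ]

    best  = Counter(t["best"] for t in tasks)
    worst = Counter(t["worst"] for t in tasks)
    net   = {v: best[v] - worst[v] for v in set(best) | set(worst)}
    for variable, score in sorted(net.items(), key=lambda kv: -kv[1]):
        print(variable, score)   # price 2, flavour 1, brand -1, packaging -2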

Thus we can see that Variable 1 is easily preferred – and by a big margin – over the second-ranked variable. From the data we might understand that six of the 12 variables we were testing cluster together – neither particularly liked, nor disliked. In other words we get to see something of the thinking that goes on behind the ranking scores. We understand better the nature of the consumer decisions.

If you applied this software to my various wine list selections over the past 10 years, you would understand that Duncan Stuart – wine-wise – is driven by a rejection of high prices, and a rejection of certain varieties. By contrast, the sad facts would reveal, this writer is driven neither by supreme wine knowledge, nor by a fail-safe palate. There: Max-Diff has just revealed my dodgy wine selection secrets.

Max-Diff is an easy module to use, it works well in online surveys, and the analysis is blindingly simple. As a research tool it fulfils the basic need we have to understand human choice-making. It understands that people are not all that mathematical in their approach, but are intuitive and sometimes less logical than the wine waiter and assorted guests might ever suspect.

Could an old-fashioned Italian grocery store be the future of grocery buying?

Montepulciano, Italy – this micro store gave us foodie moments that don’t exist in the mass marketing model of the supermarkets.

Sometimes, during our recent holiday in northern Italy, we needed the detective skills of a Sherlock Holmes to find the local grocery store. Having been brought up to look for a vast car park and a well-lit supermarket as the signatures of fresh food buying, it was disconcerting to live in a series of villages – in Tuscany and at Lake Garda – that appeared to be closed up for the season. Where do locals go for their daily bread? The answer, as Holmes would have told us, is “alimentari” – the Italian word for the corner food store. Go past one of these during siesta hour and all you’d see are shuttered doors.

These little shops, some of them little bigger than a hole-in-the-wall liquor joint, turned out to be more like Aladdin’s cave when it came to tasty produce. We were never fluent in Italian, but the store owners – once it was our turn to be served – gave us their full, undivided attention. When we asked about the cheese, for example, they asked us in turn: local?

They meant the cheese. You could generally choose from a range of half a dozen non-local varieties, or from half a dozen cheeses that represented the locality. The same was true of the prosciutto, and the bread of course was locally baked that day. Each day, apart from our traveller’s budget breakfast of yoghurt and grapes, we dined on simple, artisanal, and fully delicious foods. I ate a lot of pasta, a ton of cheese, a truckload of bread – and over the five weeks I actually lost 5 kg. The diet had no added sugars, no added gluten, and none of the other bogus flavourings.

In fact it struck me one day that far from experiencing a traditional European grocery-buying process, I was in fact getting a taste of the future of FMCG.

What the supermarket model does is actually slow FMCG down. Instead of being delivered daily, so-called fresh bread is delivered every two or three days – and to compensate, and to retain that fresh fluffy feel, the bakers add gluten, sugar and various other nasties to simulate the fresh bread experience. Fail. The first thing we chose to do upon returning to New Zealand was to purchase one of those breadmakers, and to never again buy the crap we purchased in the past from the supermarkets. The fresh Italian bread was really that life-changingly good!

But the same comment could be applied also to the local-ness of the products. We travelled extensively in northern Italy, and in each village the specialty foods – for example those cheeses – differed from region to region. Very often the manager of the food store knew the cheesemaker. Likewise he or she knew the bakers who delivered product fresh each morning.

This was the polar opposite of the New Zealand supermarket experience. Consider well-known New Zealand brands which are in fact grown, processed and packaged offshore. Consider health products – fish oil for example – which say “fresh New Zealand” on the pack, but originate, months earlier, in factories based in South America. How can consumers enjoy the rich variety that comes from different regions, and from different seasons, when the supermarkets seek – for whatever reason – to homogenise their choices? In supermarket land, Braeburn apples grow 365 days a year. Strawberries are never out of season, but by the same token never quite in season either. The bread, as I said, is a simulation of the real thing.

Supermarkets represent the pinnacle of the production, distribution and marketing model circa 1959. Supermarkets were an amazing thing 60 years ago, but now in an age of RFID, social media, customised communications channels et cetera – not to mention a pickier, health-conscious society – perhaps the old-fashioned alimentari is closer to the future than the mass marketing model created during the age of the Jetsons. Support local, I say. And start enjoying your food!


If there’s one piece of feedback I get most from respondents to questionnaires I’ve written, it is that I don’t always give them Don’t Know as an option.  I think there are times when it is useful to exclude this option, and times when Don’t Know should be mandatory in a questionnaire. 

Why would I exclude Don’t Know? I do so when the question is of a lower order – for example where I am trying to get a rough measure on a minor issue. From my reading of the literature, the exclusion of don’t know will prod a few more people to express which way they lean – and frankly the more who do so, the easier it is to analyse the data. But this comes at a cost.

Research studies have suggested that there is some reluctance by respondents, at least in some surveys, to admit that they don’t know which way they stand. To include Don’t Know as an option makes it explicitly acceptable to tick that box if that’s the way the respondent feels. What we may miss is an indication of which way the respondent might be leaning – and US political polls very frequently ask people who Don’t Know a subsequent question: yes, but which way do you lean? And a high proportion of those who said they don’t know, then indicate that actually they lean this way or the other.

So as I see it, there are competing arguments about the inclusion of Don’t Know.

But there are circumstances in which I always try to include a Don’t Know option: those where I am trying to get an accurate measure not of general attitudes, but of projected behaviours. Which brand will you buy? Which way would you vote?

I have a rule of thumb that for any given question around 10 to 15% of respondents can be counted on to be unsure which box to tick. If that’s true, then we have an effect that rivals or even exceeds the variance that may be caused by the survey’s margin of error.  In other words, if you don’t provide a Don’t Know option when it matters, you could easily be invalidating your own conclusions. That lack of opinion may be very powerful stuff.
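
The comparison is easy to verify. Assuming a simple random sample of 1,000 and the worst-case 50/50 split, the classic 95% margin of error comes out at about plus or minus 3.1% – comfortably smaller than a 10-15% block of genuinely unsure respondents:

    import math

    n, p = 1000, 0.5                          # worst case: a 50/50 split
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"+/-{moe:.1%}")                    # +/-3.1%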


The big thing we forget to measure

In our market research reports we generally make a concerted stab at the idea that our data is both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right, or horribly wrong. If you were the brand or marketing manager for Malaysia Airlines this year, I really wonder if your standard market research measures – brand awareness, consideration, advertising awareness et cetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of a few months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you think honestly that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether or not this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. It looked something like the chart above. Then I could set my temperature parameter – below x° – and on that basis calculate, fairly reasonably, that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for any weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
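
For readers without @Risk, the same exercise can be sketched in a few lines of standard Python. The temperature figures below are invented stand-ins for the real 30-year series:

    from statistics import NormalDist

    # Invented summary of 30 years of mean summer temperatures (deg C).
    mu, sigma = 22.0, 1.5
    dismal_cutoff = 20.7                      # the "below x degrees" parameter

    p_dismal = NormalDist(mu, sigma).cdf(dismal_cutoff)
    print(f"Chance of a dismal summer: {p_dismal:.0%}")   # about 19% with these figures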

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, then we could better help them to make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration – yet we treat many of the other risks (weather, currency or price fluctuations, whether or not supermarkets choose to stock us, whether or not some kind of health scare will affect our category, et cetera) as if these were off-limits and outside our scope.

Though these data are not outside our scope at all. The data may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar, and found that on wet, cool days the pattern of drinks consumption was quite different to that on hot sunny days. It wasn’t just a question of volume. On cool days people switched from CSDs – slightly – towards juices, as well as choc-o-milk type drinks.

So if I were a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average, or cool – then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range weather forecast told me that indeed the weather was going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool that I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn.  Since using the software, my outlook toward what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work, bravo! But sometimes the evidence comes from the weather office, or from the war zone of the Ukraine.

The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, and one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it, or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film that, actually, nobody wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s three top-grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking there: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we only ask about what things people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed, slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd much more accurately than most of our surveys could ever hope to achieve. We should learn from that.

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.



A solution to the increasing volume and complexity of research reporting is to increase our story-telling skills. Here are 10 useful guidelines.

Storytelling has become one of the hot topics in business circles in the last couple of years. One reason for this is the sheer explosion of the amount of information that must be processed by organisations and communicated to their various stakeholders. By some measures the amount of data in this world is growing by something like 45 per cent per annum. So how do business people communicate all this information?

Market researchers, before the age of the PC and the datashow projector, used to communicate by two means only. One was to physically get up, shuffle papers and deliver what amounted to a lecture to the client. The second was to present a written report. We were famous for them, and even until recently market research firms were criticised for their delivery of doorstopper reports.
No wonder so much of our work ended up populating the bottom drawer of the client’s desk. It was like this from the early decades of the 20th century, when pioneer Charles Parlin would submit reports hundreds of pages long, right through to the 1980s, when the advent of the PC and PowerPoint began to change the way we told our stories to our clients.

At first the use of visuals and a PowerPoint medium was an exciting new thing for market researchers. The medium suits our use of statistical charts, though most senior professionals will remember the heady days when assistants would come charging into their office saying ‘look at this!’ and show how they’d used clipart to help deliver the visual metaphor to whatever was going on inside the data. Fortunately the fad of adding whoosh sound effects passed quickly.

But did it lead to better story telling? By and large the answer is no. Over time market research slide decks have turned into gargantuan productions showing slide after slide of pies and bars. In short this process has commoditised a lot of market research. Many senior researchers may deny this but their staff gauge the success of their productive day by the number of slides they have produced. Presentations are described and measured by being a deck of 60 or being a major “120+” kind of presentation. Whole MR organisations are structured around the production and delivery of these slide decks.

This is a tragedy. Technology has led us to focus more on presenting greater volumes of supporting statistical evidence rather than the quality of insight delivered. With the amount of data increasing exponentially the problem is only getting worse.

Volume is not the only issue. The typical insights we deliver as market researchers in 2014 are, surely, deeper and more complex than the insights delivered 20 or 30 years ago. I remember joining a very good market research firm in the 1990s and in the bowels of the filing room I discovered a set of political polling reports from the 1970s. The charts were rudimentary and hand drawn. The reports were very basic. There was no segmentation work, nor any kind of underlying driver analysis: there was nothing except simple descriptive statistics.

Today statistical analysis may be quite advanced and require some explanation in order for clients to understand how we have reached our conclusions. Researchers may also be dealing with several streams of data, including sales data, consumer survey data and verbatim feedback collected by the client’s own call centre. These various rivers of information may be compiled into one particularly rich report that goes beyond descriptive statistics and into the world of strategic thinking or what-if modelling.

At this point our reports may get bogged down not just in absolute volume but growing complexity as well. The solution to this problem is surely not “the same, but more of it.” We require a step-change in our reporting style, and I’m not alone in arguing that we need to shift from evidence-based reporting toward a story-telling emphasis.

My own uncle first alerted me to this problem back in the 1990s when he was an engineer in charge of major hydro projects worldwide. Montreal-based Uncle Rod told me a true story about how he had received an urgent phone call from Hugo Chavez, then president of Venezuela. The President wanted help to decipher a huge report about where to build a major hydro dam. The report had been put together by acknowledged experts in hydro construction and civil engineering. They considered the financial, engineering, geo-technical and social costs attached to each option. In short, the report, which was hundreds of pages long, set out the upsides and downsides of two competing locations. My uncle explained to the President that the authors of the report had practically written the book on these kinds of complex decisions. “That’s the problem!” exclaimed Chavez, “they wrote a book. All I want is the answer!”

My uncle told me the story to impart two lessons. First he wanted to show me that even with billion-dollar decisions such as hydro projects, and the Venezuelan project is one of the 10 biggest in the world, one can get too bogged down in decimal points. To paraphrase those hundreds of pages of expertise, the choice between Location A and Location B was about 50-50. In the end, the experts should have had the courage to put it in those simple terms. The second point was that the report was too big and too technical for the audience. Hugo Chavez was no fool, quite the contrary, but neither was he a qualified engineer. As he said, all he wanted was to make a decision.

Market researchers think long and hard about the engagement level of respondents to the surveys we conduct. We are fully aware in questionnaire construction that we must keep things simple, brief, easily understood as well as engaging. Yet, at the same time, many of us fail to think of our reporting along these same terms. Why do we need to show page after page of pie charts? What is the benefit of making a deck 90 slides long? What processes do we implement to boil down all our information into one easily understood story that passes the Hugo Chavez test?

Here is where storytelling technique becomes a useful tool in the armoury of the professional market researcher. Many organisations instil presentation skills by giving younger researchers practice internally and then in front of clients in the process of sharing decks of PowerPoint slides – but this training process only covers half the story. We get very good at presenting, but the stories we present are underdeveloped or dull and overcomplicated.

Yet stories are an elegant solution to the problem of too much information. Humans are wired to process stories and understand them. Stories act as a kind of cognitive coathanger on which we can drape emotions, characters as well as the sense of actions and consequences that are the hallmark of human dramas.

Even a four-year-old can hear the story of Little Red Riding Hood and gasp in the knowledge that Grandma’s house is now occupied by a wolf. In doing so that four-year-old is handling irony, and processing a moral universe that is in fact quite complicated. I doubt if a deck of thirty PowerPoint slides showing pie charts (and various KPIs) of right and wrong could impart the same level of wisdom. Aesop’s fables are another example of simple stories being able to impart rich life lessons.

And get this. A pre-schooler may not have the mathematical skills to interpret statistical charts, but even at age 5 they have the intellectual horsepower to comprehend the complexities and film grammar of a two-hour movie. The storytelling techniques of moviemaking should in fact package up the rich and complicated story that comes out of our market research work.

So what are the basics of filmmaking? What storytelling techniques do script writers and directors and film editors use to keep us engaged for 15 gripping weeks of a TV series such as Breaking Bad?

My own career, as it turns out, was blessed by the fact that I spent eight years in TV drama scripting. I was a script editor and writer for a host of shows, predominantly soaps and cop dramas. This early career was entertaining and made for great dinner party conversation, though to be honest, by the time I quit television in my early 30s I felt as if the experience had taken me down a professional cul-de-sac.

Not so, as it turned out. Over those eight years I was immersed in the world of storytelling and never realised what a universal skill-set this turned out to be, at least not until recently. So here’s my list of ten techniques that are useful for market research storytelling.

1. Include some back story. Before you launch into the main thrust of the report, it helps to recapture why the research was conducted in the first place. In a recent report for a bank I recounted how during the observational research project we had witnessed a customer who attempted to open an account, but failed in their quest. It was a minor drama compared to the bigger questions we were going to explore in the report, but the incident illustrated how even small and incidental details contributed to a failure by the bank. For the sake of two minutes the bank forfeited the lifetime value of their customer. So I framed the report in terms of this incident. My subtitle for the report was: The two minutes that cost $50,000. That little back story framed the rest of the discussion: it set the theme.

2. Develop good characters. Whether qualitative or quantitative, professional research prides itself on keeping respondents anonymous. For the sake of privacy this anonymity is a good thing, but it makes for lousy storytelling. This is why I love verbatim questions in my questionnaires. Without naming names I can refer to the lady who complained about the coffee. Without divulging identifying details I can refer to the grumpy old guy who just wouldn’t be pleased. In script writing good characterisation does not come out of demographic descriptions, it comes out of the decision-making by these characters. The Denzel Washington character in the train movie Unstoppable can be described demographically, but what makes him interesting and trustworthy are the decisions he makes along the way. The same in our data: here is the lady who is prepared to pay a premium price! Over there, the customer who yearns for the old-style products. By introducing a few of these characters into our narrative we can explain later results quite simply. Instead of pointing to slide after slide of NPS scores, we may simply conclude that the new strategy got the thumbs down from Mr Grumpy. Everybody in the room gets it.

3. Find suitable metaphors. Sometimes very complicated things can be explained by using a good simple illustration. When asked to explain a factor analysis, I ask the audience to picture a new kitchen device called the un-blender. Where a blender turns diverse ingredients into gray statistical soup, an un-blender starts off with grey soup and after 30 seconds reveals the underlying ingredients: the factors that made up the soup. So far my layman’s explanation of factor analysis has received warm reviews from all my clients including, uh oh, two PhDs in statistics. Far better the metaphor that gives the gist than the full technical explanation.

4. Structure the story very carefully. One of the biggest challenges in film writing is to find a structure that produces a compelling tale. I quite like movies where two or three different strands either click together or collide just before the end of the movie. When you have 45 minutes to convey the rich discovery and the insights of a research project, you have the same time available to you as the writers of, say, an episode of CSI or Law And Order. In other words you have room to introduce a couple of twists and turns as you piece together the bloodstains, fingerprints and ballistic details required to reach a conclusion. Clients don’t mind if in the course of that presentation you show them a little bit about your forensic techniques. Your audience doesn’t mind seeing some of the story behind the main story. When we put together cop shows, the question of whodunit was always less interesting than the question of how the cops find the guilty party. Market research follows the same narrative arc.

5. Involve the audience. The audience of the drama can at any one moment be either up with the play, ahead of the play or behind the action. A skilled storyteller varies the pace so that sometimes the audience knows what is coming around the bend before our main character does. “Don’t go down the alley!” we yell at the hero. “There’s a bad guy waiting for you with a gun!” We love those moments, at least in moderation. If we get ahead of the protagonist too often however we begin to wonder why we are bothering to watch such a klutz.

On the flip side, sometimes the hero does things and we don’t understand what he or she is up to. All will be revealed later! In TV storylining we used to refer to these as mysterioso moments. A few of these add spice to the drama, and they allow the audience to revel in the intelligence of the protagonist. At other times within the movie, we are simply up with the play, neither ahead of nor behind the protagonist.

Alfred Hitchcock was a master of control when it came to these three audience statuses. Within a heartbeat he could take us from being ahead of the action to being 12 steps behind. Just when we think we’ve figured everything out, we realise we are embroiled in something much bigger and more complicated! Now I’m not suggesting that market researchers go for that effect too often. But there is a lot to be said for having a kind of rhythm between the lean-back-and-listen elements of the presentation and the lean-forward moments in which the audience is challenged. Rhetorical questions, for example, signify a change in audience status.

6. Remind the audience of what’s at stake. Don’t forget we are in the business of providing the information required for our clients to make important and sometimes very expensive decisions. If we work in FMCG, then perhaps we need to remind the client that in this business some 80% of new product launches fail. The stakes are high! One reason I used the story of the lost bank customer was that I wanted to reinforce that our modest project was not about measuring customer resources at the bank, but about mitigating the risk of failure. I wanted that top of mind, so that even the prosaic bar charts I had to present were contextualised by what was at stake.

7. Seek storytelling variety. When I worked on a cop show in Australia we used to crank out two episodes every single week. As a group of storyliners we recognised that cops only do a certain number of things. They examine crime scenes, they grill the bad guys, they chase suspects, they observe from the anonymous grey van parked over the road. We boiled this down to eight modes of behaviour, and we made sure that in any given episode of the cop show each mode was used no more than once. In other words we didn’t have a car chase followed by a foot chase. Or an interrogation scene followed later by another one. In marketing research reporting we also have a shortlist of reporting modes. This is why I get critical when I see a deck of slides that features a whole stream of descriptive charts, followed by yet another stream of descriptive charts. It is useful to break down our reports into chapters, and for each chapter to be fundamentally quite different from those previous. So after introducing what’s at stake in chapter 1, I might present a series of descriptive slides in chapter 2 before searching for strategies using different techniques in chapters 3 and 4. This keeps the storytelling interesting.

8. Don’t be afraid to develop a theme of a deeper nature. Very often in market research we get to study a subject but in the course of that study we ruminate on deeper material. We may be tracking the performance of the brand but at the same time we are witnessing a shift in the zeitgeist. Development of such scenes in the movie or TV program adds a lot of richness to the storytelling. We are not just witnessing a story about a person; we are reflecting on the human condition. In my own presentations I refer to these parts as the “I’ve been thinking” zone, and it may consist of a single photo, or a discussion about a relevant and fascinating book that I have been reading in conjunction with the research study. Sometimes these pauses in the narrative spark a much greater discussion than might be expected. Just as the film will resonate with the public because it seems to capture the mood of the audience, so a thematic discussion may capture and resonate with the mood of the client.

9. Action is better than talk. Charts are stepping stones in a forward moving narrative. Your headlines ought to be spiced up with verbs, your summary findings should lead to consequences. You’re building a case and working toward a verdict.

10. Finally, good storytelling always has an authenticity. What made that terrific, nail-biting movie Captain Phillips so genuinely exciting was the absolute authenticity of the Somali pirates. It was brilliant casting. The director allowed a degree of improvisation from his cast also, so that the scenes were never over-polished or slick. Those actors didn’t look like they were acting; they were the real thing: lean and desperate. In market research reports our statistics and charts and evidence and conclusions must all reek of similar authenticity. I quite frequently add an anecdote in my presentation about my own journey of doubt during the project, or about the difficulties of fieldwork. I will mention the fact that one respondent, a sleepless parent perhaps, completed the online survey at three o’clock in the morning, or that another respondent wrote 1200 words in response to a question about why they would recommend a product or service. I want the client to breathe-in and smell the reality of our work.

Storytelling places on us one demand that challenges many corporate style guides. Your firm may specify a certain tone, look and feel to its reports. I find storytelling by nature is more personal than that. A good story requires the emotional investment of the storyteller. In an age of more data than conventional reporting systems can deal with, storytelling demands that you lay your heart bare in the telling and sharing of the tale. You up for that?

Choices depend on different rules

Last week I invested US$12,000 in new software. That’s a ridiculous amount – frankly, about three times what my car is worth. Strangely, the decision to invest in this software was very simple. It was fuelled by an ideal: am I interested in doing what I do to the highest standard that I can? The answer is an idealistic yes. Idealistic for sure. My financial adviser and good partner for 34 years just shook her head.

The software, by the way, included Sawtooth’s well-known Max-Diff module as well as their pricey but promising MBC module which takes choice modelling to a whole new level. In terms of learning new technology, MBC promises to stretch me to the limit. It is neither intuitive, nor pretty. But more about that in a later blog.

As soon as I had Max-Diff out of the box I used it on a client survey as part of a conjoint exercise. I’ve long been a fan of conjoint because it emulates realistic decision situations that people face in real life. We learn not just what they choose, but the architecture of their decision-making as well.

In this case I could compare the two approaches. In effect, the conjoint choice modelling and the Max-Diff exercise ran in parallel, testing more or less the same variables, but using different approaches.

With conjoint the respondent chooses (online) from a small array of cards, each with a different combination of attributes and features. They select the optimal one.

With Max-Diff the respondent chooses their favourite combination, as well as the least favourite.

Were the results similar? Well yes, they converge on the same truths, generally, but the results also revealed telling differences. One of the least important attributes, according to conjoint, proved to be one of the most important attributes according to Max-Diff. How could this be?

The lesson went back to some wonderful insights I learned from Alistair Gordon when we were working on the subject of heuristics – those rules of thumb that people use to evaluate complicated choices.

Most of us, when asked “how do people make choices?” figure that mentally we prepare a list, based on the variables, and we set about finding the best: in fact conjoint is predicated on exactly this process.

But Alistair introduced me to a fabulous concept: the veto rule. Put simply, if I were choosing between one brand of breakfast cereal and another, I may have a number of variables that contribute to optimality (flavour, naturalness, organic-ness), and no doubt my brain has worked up a complex algorithm that balances these things against the presence of raisins, puffed wheat, stone-ground oats and dried apricots. Good luck trying to model that!

But I also have a few simple veto rules. If a competing breakfast cereal contains more than x% of sugar, then bingo – I drop it from the list of competitors.
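
The contrast is easy to sketch, with invented cereals and weights: a veto rule screens first, and only then do the compensatory trade-offs kick in.

    def choose(cereals, max_sugar_pct=20):
        # stage 1: the veto rule - drop anything over the sugar limit outright
        survivors = [c for c in cereals if c["sugar_pct"] <= max_sugar_pct]
        # stage 2: compensatory trade-offs among whatever survives
        return max(survivors, key=lambda c: 2.0 * c["flavour"] + 1.0 * c["naturalness"])

    cereals = [
        {"name": "choco pops", "sugar_pct": 35, "flavour": 9, "naturalness": 2},
        {"name": "muesli",     "sugar_pct": 12, "flavour": 6, "naturalness": 8},
        {"name": "wheat bisk", "sugar_pct": 3,  "flavour": 5, "naturalness": 9},
    ]
    print(choose(cereals)["name"])   # muesli - choco pops vetoed despite top flavour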

This explained why some variables scored as important with Max-Diff, but scarcely registered with conjoint. Among the variables were a few conditions that might be described as veto conditions. Those who use Max-Diff alone seldom discuss these different effects.

So which approach – conjoint or Max-Diff – should one use? As ever, I think one should try both. My favourite research metaphor is about the blind men and the elephant, each discovering a different aspect of the animal, and each giving a different version of events. They are all correct, even if they have different answers. Together, they converge on the same answer: the whole elephant.

I do like the way research tools can give us these honest, statistically reliable, yet conflicting answers. They give us pause for thought, and they highlight the fact that numbers are merely numbers: quite useless without confident interpretation.

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed a well-expressed closure to my fumbling argument from 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply be overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre-of-gravity in New Zealand is located client side, and not on the side of the market research industry. The same could be said of Big Data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee of one of New Zealand’s leading MR firms (and one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or at best showing single-digit growth. In other words most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to remain relevant, then it has got to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled” – but I’m afraid the leading research firms are being simply outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip the team with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work even made a difference. The old production line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always done them; or perhaps senior managers, having grown up with Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I were a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline of our industry.

What can market research firms do to rectify this problem?  Here are 4 suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation that comes from your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value-add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys, and qualitative work, with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off-course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.




The role play of respondents. The Pantomime of Polls.

Used car, low mileage…contact your local, friendly Member of Parliament

I do think a lot of social opinion research misses a big point. We’re not supposed to love politicians. It’s our role as voters to distrust them. We revel in it. That’s how we’re wired. Sure, we trudge off to the polling booth to vote for them. And of course we devote hours immersed in the news cycles each month following their every move. But trust them??

And such is the disconnect between many earnest poll questions and the realities of public opinion. Today a trusted colleague, a professional I greatly respect, tweeted a link to a disturbing new poll. New polls are always disturbing. (They play a role too.) The figures said that few of us trust either the Government or market researchers with our data. Maybe I’m just getting ho-hum about having my profession bagged in every latest disturbing study, but I wondered whether my colleagues had missed something in their research.

For sure, it is one thing to know that only 30% of us trust our Governments with our data. (Frankly I’m surprised that it is that much.) But if the other 70% have some distrust, or serious distrust, on the issue – do they actually give a damn about it?

In other words we are used to measuring the breadth of sentiment, but often we do little to measure the depth of sentiment. Have those 70% written to their MP about Privacy? Have they signed a petition? Shared an angry tweet? Marched down main street?

Polls going back decades have demonstrated the fixed social role the public allocates to politicians and to us pollsters. We’re always down near the bottom of the trusted-professions list: cellar-dwellers with our favourite bad guys, the used car salesmen. But these results are a role play; a social construct – they are part of our culture. So I’m no longer shocked or amused about who wears the bad hats in this public play.

In pantomime we know there’s going to be a big bad wolf. I do think it is time to ask deeper questions that dig beneath these paradoxes:

  • If you distrust retailers (or car salespeople) – how come you still buy from them?
  • If you “hate the media”, how come you watch the news and TV for as many hours per week as you did in the 1970s?
  • If you distrust polls, how come you read them?

These are paradoxes that, were we to find some answers, would reveal a lot about the trade-offs and real values of the public we love to measure.