ASKING THE RIGHT QUESTIONS

There’s been a certain amount of discussion in the market research blogosphere and twittersphere about “web 2.0” research in opposition to “traditional” research.

On the one hand you have the idea that while social applications (like online communities or Twitter) have plenty of research potential, they may fall short of the standards of quality established by and for the existing research industry. On the other, there’s the feeling that these applications are the future of the business – that research communities, or listening to consumers, will come to dominate market research in the next decade.

The conversation has all been good-natured, as everyone involved can see everyone else’s point: extremism has rarely been part of the researcher’s make-up.

I think there’s a wider question that needs asking: who is going to do research in future? Existing agencies? Clients? Anybody? Nobody? What’s interesting to me isn’t the new methodologies exactly; it’s the changed consumer context they’ve arisen in.

This post is my stab at making sense of some of the issues around “research 2.0”. It’s long, for which I apologise: I’ve divided it into chunks, which might make it more manageable.

QUALITY AND SCARCITY

Here’s a quote from Clay Shirky’s superb book about group interaction online, Here Comes Everybody:

“Much of the time the internal consistency of professional judgement is a good thing – not only do we want high standards of education and competence, we want those standards created and enforced by other members of the same profession… Sometimes, though, the professional outlook can become a disadvantage, preventing the very people who have the most at stake – the professionals themselves – from understanding major changes to the structure of their profession. In particular, when a profession has been created as a result of some scarcity, as with librarians or television programmers, the professionals are often the last ones to see it when that scarcity goes away. It is easier to understand that you face competition than obsolescence.”

Shirky here is leading up to a discussion of journalism, and the revolutionary effects the Internet has had on it. Later in the book he talks about encyclopaedias and software operating systems, and the game-changing effects that Wikipedia and Linux have respectively had on them.

The professional arguments against online journalism, and against Wikipedia and open-source software, have all used “quality” as a major plank of their defence. With varying results. Encyclopaedia Britannica is opening its content to its readers. Professional journalism puts up with regular predictions of its demise. Microsoft and other non-open software firms endure or prosper, but the idea that open-source software is lower quality has been generally discredited.

Quality, in other words, is something professionals naturally reach for when explaining why what they do is valuable. But it’s not necessarily a winning move. My gloss on Shirky’s argument is that professionals tend to confuse quality and scarcity.

Scarcity leads to professionalisation, and professionalisation leads to consistency, and this consistency comes to define “quality”. Professionals then convince themselves that the people buying their services are paying for this quality, rather than the scarcity it evolved out of. Sometimes it may be true.

THE INFORMATION GAME

So what does this have to do with Market Research? Well, in order to gauge the prospects of the research profession, we ought to look for things it does where the Internet may have removed pre-existing scarcities.

Market research is a business based on selling information about people – usually about their opinions and behaviour, and usually self-reported. But let’s break down what it actually does.

It solicits information. (By asking questions, leading discussions, etc.)

It creates information. (By asking for reactions to things which participants could not otherwise be exposed to – e.g. concept tests, ad tests.)

It collates information. (By doing this with a number of people – which may be small or large depending on the technique used.)

It validates information. (By using standardised methods, representative samples, weighting, etc. – there’s a small sketch of weighting after this list.)

It analyses information. (By interpreting it using quantitative or qualitative tools to draw conclusions and form recommendations.)

It delivers information. (In the form of, e.g., reports, workshops, tailored presentations.)
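To make the weighting step mentioned above a little more concrete, here is a minimal sketch in Python. Everything in it – the age bands, the population shares, the agreement rates – is hypothetical, and real weighting schemes (rim weighting or raking across several variables at once) are considerably more involved; the point is only to show how a skewed sample gets rebalanced towards known population proportions.

```python
# Minimal post-stratification weighting sketch. All figures are hypothetical.

# Share of each age band in the population vs. counts in the achieved sample.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 150,  "35-54": 250,  "55+": 100}   # n = 500

n = sum(sample_counts.values())
sample_share = {g: c / n for g, c in sample_counts.items()}

# Weight each group so the sample profile matches the population profile.
weights = {g: population_share[g] / sample_share[g] for g in sample_counts}

# Hypothetical agreement with some survey statement, by group.
agree_rate = {"18-34": 0.62, "35-54": 0.48, "55+": 0.41}

unweighted = sum(sample_counts[g] * agree_rate[g] for g in sample_counts) / n
weighted = sum(
    sample_counts[g] * weights[g] * agree_rate[g] for g in sample_counts
) / sum(sample_counts[g] * weights[g] for g in sample_counts)

print(f"Unweighted agreement: {unweighted:.1%}")   # over-represents 35-54s
print(f"Weighted agreement:   {weighted:.1%}")
```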

DATA, DATA EVERYWHERE

Who is this information available to, and useful for? The clients who pay for it. But not just any clients: economies of scale have meant that market research has developed at an industrial level. So it’s not (until recently) been possible for an individual to go to a research shop and get something researched in the way that an individual might go to a print shop and get something printed.

This has now changed: tools that do many of the above things are available for free, or close to free, and available to anyone. The set of people who can do some kind of market research has vastly widened, and now includes many private individuals as well as companies large and small. This “DIY” research is one area of change seen as a threat by some agencies, in that it removes previously existing scarcities: the ability to solicit and collate information at scale.

But it’s far from the only factor reducing scarcities in research. Market Research is also having to cope with changes in how its participants create data and in how its clients obtain it – changes which threaten research’s status as a middleman between participant and client.

People – at least those with web access – now have far more opportunities to communicate publicly about products, brands, their preferences, their lives and lifestyles. While far less structured than traditional research data, this stuff is data nonetheless. In such a situation, the promise of research to the participant – the opportunity to influence a brand or product – becomes flimsier.

Meanwhile, people are also leaking data – leaving data trails via all their online and most of their offline transactions. This trend parallels the rise of the web but isn’t directly caused by it – credit card companies, for instance, don’t need the web to amass and crunch megadata. The quantity of user data being generated automatically – whether online or off – dwarfs anything research companies could dream of providing.

DIY research, data trails, and the ubiquity of customer feedback: a three-pronged attack on the scarcity of data. In fact there is more information about people around – much of it in public – than there has ever been before.

DOES INFORMATION WANT TO BE VALID?

So let’s take another look at that list of “what research does” and see exactly where the scarcities it’s relied on are threatened.

Soliciting information: Spontaneous generation of data makes this obsolete in some cases; DIY research reduces scarcity and cost further in others. Yes, the information isn’t in the neat formats created by market research, but it’s consumer information nonetheless.

Creating original information: Because access to test materials can be limited, spontaneously generated data isn’t an issue. DIY tools allow more potential for testing in-house. The danger here is that, given the speed with which data can be generated, testing by trial launch (the “perpetual beta” mentality) can be more efficient than going through a research process. This especially applies in areas like web optimisation – see the sketch after this list.

Collating information: Spontaneous generation of data can generally produce the raw numbers of responses needed. With DIY tools you do still need to be able to publicise the existence of your research.

Validating information: Spontaneous voluntary generation of data by consumers can’t replace validated samples – this is still a scarcity. For “leaked” data, however – customer or user databases – the sample is often identical to the universe. DIY research tools also generally fall short when it comes to providing a valid sample. And some groups remain hard to reach – business decision makers being an excellent example.

Analysing information: Voluntary consumer data (the output of an online community, for instance) will often include its own analysis, though this is hardly impartial! DIY quant analytics tools are available but are often crude and cumbersome. In general, data still needs to be analysed and interpreted, and this is still a scarcity.

Delivering information: As with analysis, there are some DIY data display and delivery tools available, but this can’t be considered a lost scarcity.
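To illustrate the “testing by trial launch” point from the list above: in web optimisation the question often reduces to whether a candidate page converts better than the current one, judged on live traffic rather than a research sample. The sketch below runs a simple two-proportion z-test on made-up traffic numbers; real optimisation platforms use more sophisticated machinery (sequential tests, multi-armed bandits), so treat this as an assumption-laden illustration rather than a recipe.

```python
# "Testing by trial launch": compare two live page variants on conversion rate.
# Traffic and conversion figures are hypothetical.
from math import erf, sqrt

visitors_a, conversions_a = 10_000, 520   # current page
visitors_b, conversions_b = 10_000, 575   # candidate page

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled rate and standard error under the null hypothesis of no difference.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```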

So it’s clear to me that – with the partial exception of testing – the further along the research process you go, the less the scarcities market research relies on have been eroded. Technology makes soliciting and collating information extremely easy. Validation, analysis and delivery of that information are the areas where market research still appears to enjoy the advantage of scarcity.

But does it? Here we come back to my original question: who is going to do research in future?

FIVE CHALLENGES FOR RESEARCH

The problem for market research isn’t just that the front end of its business model – access to consumers and their information – is no longer a scarcity. The back end of the research model – analysis and presentation of data – isn’t something the research industry “owns”. In fact, as anyone who’s been to a research conference in the last decade could tell you, the industry is fairly self-lacerating when it comes to perceived failures in the level of insight, actionable analysis and sparky delivery it provides.

And this isn’t even the end of the problem. If the rarity of top-flight analysis and delivery skills in the industry is an issue even when dealing with traditional research data, how much more of an issue must it be when dealing with data sources it has far less experience in? The networked output of a community, for instance – or colossal customer databases? Simply being a market research agency doesn’t automatically bestow analytic competence when it comes to the very wide-ranging data sources clients are now working with.

So the industry faces multiple challenges.

Redefine its value to participants: Its role as an information middleman is losing credibility. It needs to persuade participants that it can amplify their voices, and that taking part will be an enjoyable, rewarding use of their time.

Make the argument for validation: Researchers are putting their own houses in order in terms of data quality – the next step is to go in hard with this argument in the marketplace, with compelling examples of the benefits of valid, robust data that don’t simply sound defensive or like a justification for higher costs. For instance, don’t damn DIY research – embrace it, the better to educate its users.

Understand the new data sources: Market researchers should be fantastically placed to understand the metrics that describe networks, or influence – there’s a tiny illustration after this list. They should know better than anyone else how to evaluate the information that comes out of a group, and the nuances and cultures of mass participation. Right now I’m not sure they do. (And I’ve not even mentioned neuroscience and other behavioural innovations in this essay!)

Invest in analysis and delivery: Market researchers ought to know data better than anyone – and knowing data includes drawing out insights, relating it to clients, and presenting it magnificently. In my limited experience, where there has been an emphasis on these things it has come at the expense of the data – fatally separating the sizzle from the steak.

Anticipate: Consumers aren’t going to stop offering, or leaking, information – but the devices and networks they use to do it aren’t going to remain static. Because it’s an industry founded on the idea of mass behaviour, research has tended to wait until technologies hit mass adoption before it finds much use for them. The risk is that adoption curves now will be short and steep, particularly when it comes to mobile technology. The research industry needs to be placing bets and preparing for future developments – allowing a “skunk works” mentality so it’s as ready as it can be for upcoming changes.
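To give one small illustration of the “metrics that describe networks, or influence” mentioned in the third challenge above: the sketch below computes degree and betweenness centrality for an invented who-talks-to-whom graph, using the networkx package. The names and connections are made up; the point is simply that the measures themselves are neither exotic nor hard to obtain.

```python
# Centrality as a rough proxy for influence in a hypothetical online community.
# Requires the third-party networkx package.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("ann", "bob"), ("ann", "cat"), ("ann", "dan"),
    ("bob", "cat"), ("dan", "eve"), ("eve", "fay"),
])

# Degree: how many direct connections a member has.
# Betweenness: how often a member sits on the shortest path between two others,
# i.e. how much of the community's conversation flows through them.
betweenness = nx.betweenness_centrality(g)
for name, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{name:>4}  degree={g.degree(name)}  betweenness={score:.2f}")
```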

Businesses will always need information about people. But I genuinely believe that the existence of a “research industry” dedicated solely to generating and selling that information is not an inevitability. Just because it has been the right answer to the information problem for 50 years or so doesn’t give it a lock on answering that problem – particularly if it fails to rebalance itself so it’s addressing genuine scarcities and refreshing its reasons for being.

I hope you’ve found my ideas about its challenges and problems stimulating.