Awareness is Everything

Monday, September 21, 2009

Social Media and Market Research

Earlier this month Brad Bortner posed this question in The Forrester Blog For B2B Market Research Professionals:
"Are MROCs [Market Research Online Communities] the next big thing in market research, and will they eventually take measurable share from traditional qualitative research?"
Sentient Services has been working in online qualitative research for a few years now through asynchronous bulletin-board focus groups. While you give up a lot in moving away from face-to-face interaction (body language, vocal intonation, etc.), an asynchronous online group offers several strengths of its own:
  1. Fewer time constraints – respondents have more time to think, can look up notes, and can complete “homework” assignments. Additionally, we can let side conversations run and see whether a tangent yields additional insight.
  2. Broader coverage – asynchronous participation means respondents aren’t locked into a 6-8pm ET session, making time zones a non-issue and breaking down some geographic boundaries.
  3. Bigger groups – we’re not limited by the capacity of a conference room, so we can “seat” at least 12 participants per group (vs. 6-8 participants in a traditional group).
While we’ve previously scrutinized sample quality when using social networks for market research, as well as the value of LinkedIn’s polling features, I believe MROCs (private online communities focused on research) are simply an extension of online methodologies we already use. Instead of being recruited for a single online discussion or survey, participants are recruited to provide continuing feedback on a variety of topics. And MROCs will affect both qualitative and quantitative research – it’s just as easy to host a survey in an online community as it is a forum discussion.

What are your thoughts on MROCs? What other evolutions do you foresee in the research industry?



Tuesday, September 18, 2007

Sometimes It Counts To Count -- And Sometimes It Doesn't

A couple of very good posts on the uses of qualitative vs. quantitative data have come my way over the past few weeks. Both of these are well worth looking into:

Andrew Hargadon, author of How Breakthroughs Happen: The Surprising Truth About How Companies Innovate, wrote this on his blog about “the virtues of qualitative research”:

“Qualitative research is, at its heart, an attempt to understand how people (or fish) interpret their reality and as a result make it. Anyone who has both looked at manufacturing statistics and wandered the factory floor knows that you can learn a lot by watching and talking to the workers about their work and their lives. And so, when you decide you want your company to be more innovative--and you decide to reward those who are ‘innovative’--you need to be very careful how you are measuring innovation.”
In an unrelated post, Bob Sutton, author of The No-Asshole Rule, discusses on his blog “evidence-based management,” the current focus of many management gurus (including Sutton himself):

“Managers and the business press seem to automatically assume that quantitative evidence is always the best evidence....The message seems to be that evidence-based management means management by quantitative data. I reject that thought, and have always believed that there are times when qualitative data are more powerful, valid, and useful for guiding action than quantitative data.”

Sutton goes on to describe three areas in which he feels it is essential that companies use qualitative data:

1. When you don’t know what to count.

My take: This is what we often use qualitative techniques for in market research. If you don’t know what specific kinds of answers you're looking for, you can’t even construct a questionnaire with closed-ended questions. So, often the first step is a qualitative study that allows you to understand what’s important and even begin to understand what can and should be measured.

2. When you can count it, but it doesn’t stick.

My take: It seems to me that this is the least compelling of Sutton's reasons, if only because so many companies seem to want numbers of some kind before they’ll make a decision. But it is true that compelling stories and images, which can only come from qualitative research, can be very persuasive.

3. When what you can count doesn’t count.

I don’t have anything to add here that’s better than Sutton’s observation: “In the hunt for and obsession with what can be counted, the most important evidence is sometimes overlooked. As Einstein said, ‘Not everything that counts can be counted, and not everything that can be counted counts.’ ”



Sunday, July 22, 2007

A sideways smack at the focus-group methodology

Focus groups -- the most picked-on market research methodology of them all -- got a sideways smack from James Surowiecki in the New Yorker's July 9 edition. I say "sideways" because the point of the article was not to dump on focus groups, but to tout the much more fashionable prediction markets as a potential method for identifying hits and misses in the consumer market space. The slap at focus groups was just a by-product:
"Prediction markets avoid many of the faults of focus groups, which tend to be dominated by the loudest and most opinionated people, to be driven toward consensus decision, and to discourage disagreement, making them of limited usefulness. ('Seinfeld,' famously, was a complete bust with focus groups.)"
First, surely someone can come up with something newer than Seinfeld about which focus group respondents were wrong. That example is aging.

Second, note that when people complain about focus groups, the complaint is always that the groups were wrong in predicting a product's success (usually) or failure (occasionally). Yet among professional qualitative researchers, the idea that you might use focus groups to predict *anything* is ludicrous. So, once more, repeat after me: Focus groups help you discover why and how -- not how many. The job of quantifying and predicting falls to quantitative research.

Here's a similar example from another New Yorker writer, Malcolm Gladwell. In his book Blink he tells the story of the Aeron chair: Apparently in the early 1990s, when the concept for the Herman Miller Aeron chair was being researched, focus groups were held with "facility managers and ergonomic experts" -- the target market. These group participants mostly did not like the odd aesthetics of the chair. Gladwell gives the Herman Miller folks a big pat on the back for going forward with the launch anyway. And what happened? Quoting Gladwell:
"In the beginning, not much. The Aeron, after all, was ugly....however, in California and in New York, in the advertising community and in Silicon Valley, it became a kind of cult object....It began to appear in films and television commercials....and by the end of the 1990s....was the best-selling chair in the history of the company."
I'd like to point out here that this is *exactly* what the focus group research indicated. Not predicted -- but indicated. The target market didn't like the chair and wasn't likely to adopt it. That is a finding -- not an implication, insight, or recommendation. (The fact that the focus group did not like Seinfeld was also a finding, not a recommendation.)

A smart qualitative researcher would have been able to dig deeper and get a sense of what those findings meant. It's *not* about whether the groups hate the Aeron or Seinfeld -- it's about gaining insights that would help Herman Miller put together a product-launch strategy for the Aeron, or about understanding the kinds of reactions Seinfeld elicits from a specific small demographic. The mistake comes with premature closure. The findings are not insights or recommendations. They are not quantified. They are not predictions.

The purpose of focus groups is not to predict or quantify. The purpose is to understand how and why, to get the "lay of the land," to come to an understanding of the opinions of a particular, very small group. There are lots of ways to do this, of course -- observational research, such as traditional ethnography or an online hybrid; one-on-one interviews; interviews and groups online, in virtual worlds; interviews and groups with target markets, with existing customers. Gathering 8 to 12 strangers in a room and asking previously written, non-directed questions of them, leading them into a group discussion, is a perfectly legitimate method -- as long as it's done correctly.

The prediction market, as cool and up-to-the-moment as it is, won't give you the depth of insight into people's likes, dislikes, and motives that qualitative research will. But I can totally see how it would be way better for predicting who'll win the Oscars, and it may also be better at predicting how well a given book will sell (the subject of the Surowiecki article).

I'll give Surowiecki the last word, because despite his sideways slap at focus groups, the passage below indicates that he *does* "get" consumer research -- like a lot of others, he just can't resist smacking the focus-group methodology around a little.
"The collective intelligence of consumers isn’t perfect—it’s just better than other forecasting tools. The catch is that to get good answers from consumers you need to ask the right kinds of questions; asking the market to predict how many copies a book will sell, which requires predicting how a wide readership will behave, is better than asking the market to predict which manuscript will get a book deal, which requires predicting the decisions of a small number of editors. (The Simon & Schuster experiment with MediaPredict, unfortunately, focusses more on the latter.) And you need a critical mass of people to participate."

