
Market Research Survey Silliness

I’ve spent more than a decade working in market research, both quantitative and qualitative, and still more years working with market research data in one form or another.  So, I’ve seen maybe not it all, but a whole lot.  I have managed and analyzed hundreds of surveys conducted by mail, on the phone, and online.  I’ve attended and moderated hundreds of focus groups.  I’ve managed segmentation surveys and discrete choice surveys, and I’ve randomized this and that and versioned that and this.  I’ve applied the unknowledgeable management consultant’s favorite – conjoint analysis – and regressed and forecasted.  I’ve observed natural consumer behaviors, stimulated behaviors, and followed people while they operated cars, computers, and cell phones, and while they dined.  I’ve talked to people about death and dying and about surviving.  I’ve talked to people about how they shop, why they shop, and why they don’t like to bend over to pick up items on the bottom shelf when they shop.

My career gave me a fascination with consumer behavior, from the little decisions consumers make to the big, life-changing ones.  People are fascinating, and what makes it even more interesting, they very often don’t do what they say they do – imagine that!  Yes, consumers are captivating, and indeed, understanding them and catering to them is a product’s best route to success.

I’m in a bit of a different professional position these days.  I use market research data, but it is of the most basic kind.  I currently conduct and use some customer satisfaction and general feedback data, with a few qualitative interactions thrown in for good measure.  Yet as a supporter of market research and a consumer myself, I continue to complete surveys, whether it’s a phone call at night, an online survey, or one in the mail, provided it is a legitimate survey and not a sales pitch for a product or a politician disguised as one.  I continue to respond out of a sense of duty to my former profession and just to be helpful to the organizations out there attempting so dutifully to figure out their customers.  I continue to support data collection, even when it is excruciatingly painful.  And that brings me to this post.

Earlier this fall, I received a survey invitation via email for the purpose of providing feedback on school supplies.  What timing!  I had just finished purchasing school supplies for my son, and I will be in this market for the next 16 years, until my youngest gets through high school.  This survey, I thought, would be interesting.

I go through the normal procedure of following the link.  I answer several demographic questions about my age, gender, household make-up, ages of my kids, and income level.  I answer these questions without hesitation because I know that I am one of probably 1,000+ answering them and that the 25-year-old analyst reading this data is not going to care about who I actually am (nope, not even my income level), for I am just one of a thousand.  (Just so you know, analysts care about percentages of the total sample; they don’t care about individuals in these instances.  Yes, you are special, but so are the other 999 people completing the survey.  It’s okay to let people know your household income and whether you are male or female, etc. – trust me!  Now if they ask for a Social Security or credit card number, that’s different:  shut the survey down and/or hang up the phone – that’s not market research, that’s a scam.)

So I’ve now answered the demographic questions, and indeed, I am one of the target customers from whom feedback is desired.  I am asked to continue with the survey.  I then answer several questions about which products and brands of school supplies I bought, which ones I considered, and at which stores I purchased them.  I was asked how much I spent and how much I had intended to spend.  It was a nicely designed survey, and I felt like I was really cruising through it and helping out.  And then it began – I fell into survey hell.

First, there was a battery of questions for every single product I bought and every single brand I considered.  Then there were batteries of questions for impressions of brands I had not considered.  The batteries were endless and included statements where I was to rate my agreement with things like:  “I feel like a leader for having bought this brand.”  “Others will admire me for buying this brand.”  “This brand is like me.”  “This brand is the only brand for my child.”  “This brand will make my child more successful.”  “I talked to friends and colleagues about this brand.”  “I encouraged others to buy this brand” – and many more questions along those lines.  Multiply these brand behavioral questions by the number of brands I considered and/or was aware of, and again by the number of back-to-school supplies I ticked off the list.  This was, in fact, a survey that I may be completing for the next 16 years!

Yes, I was in survey hell, and not just at the surface.  I was in the hottest, most uncomfortable level with no hope of getting back to the surface anytime soon.  It is now several weeks later, and having taken some time to recover from this experience, I have some advice to offer those of you developing surveys for your product or service or working with organizations that are doing so.

First of all, good market researchers/questionnaire designers realize that you never, I mean never, put people through an endless list of questions on everything they did.  You randomly select a number of products for someone to evaluate.  Why?  Because I felt committed to answering the full survey, but by the end, I was clicking answers just to get through it (sorry about that – I still feel the guilt of providing bad data, but I just couldn’t take it anymore); others will simply exit the survey.  There are several questionnaire programs that will randomly select a handful of items for a consumer to evaluate when a typical consumer has purchased or considered more than 5 or 6 items, and these programs will ensure that you have enough sample to analyze each item.  Ask consumers to evaluate more than 5-6 items with the associated statement batteries, and your data will be flawed.  Good researchers protect the validity of their data and the sanity of their respondents.
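For the programmers among the survey designers, the selection logic those questionnaire programs use can be sketched in a few lines.  This is a minimal illustration, not the code of any particular survey platform; the function name, the five-item cap, and the “least-exposed items first” balancing rule are my own assumptions about how such a program might work:

```python
import random
from collections import Counter

MAX_ITEMS_PER_RESPONDENT = 5  # cap the battery length to protect data quality

def select_items_to_rate(purchased, exposure_counts, rng=random):
    """Pick at most MAX_ITEMS_PER_RESPONDENT of this respondent's items.

    Ties are broken randomly, but items shown to the fewest prior
    respondents are favored, so every item accumulates enough sample
    to analyze on its own.
    """
    if len(purchased) <= MAX_ITEMS_PER_RESPONDENT:
        chosen = list(purchased)
    else:
        pool = list(purchased)
        rng.shuffle(pool)                 # randomize order first...
        pool.sort(key=lambda item: exposure_counts[item])  # ...then least-exposed first
        chosen = pool[:MAX_ITEMS_PER_RESPONDENT]
    for item in chosen:
        exposure_counts[item] += 1        # record that these items were shown
    return chosen
```

The point of the exposure counter is that no respondent sees more than five batteries, yet across a thousand respondents every product still gets rated often enough to report on.
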

Secondly, let’s think about the behavioral and brand leader issues related to buying a PENCIL!  Buying pencils and erasers is much different than buying, for example, a car, or apparel, or kitchen appliances or major electronics.  The latter are big-ticket or emotional purchases that consumers may use to define themselves and that they may talk about with friends and colleagues.  If I’m going to be a brand leader or influencer, it’s going to be with these types of ego purchases – a pack of standard #2 pencils?  Not so much.  You might do some brand loyalty and behavioral segmentation on backpacks or on PDAs or other higher-profile supplies, but on pencils?  I’ve no clue what brand I even purchased, for goodness’ sake!  But I was still presented with the battery of questions because I was familiar with some pencil brands.  Whoever designed this survey somehow believed it was legitimate to apply the brand and behavioral attributes/measurements used to describe the purchase process of a muscle car ($40,000) to the purchase of a pencil (40 cents).  Does that seem logical?  Remember that analogy:  the clock-builder will know how the clock is built, how it keeps nearly precise time, how the gears (now chips) work together, and about when it will need repair.  The consumer will know – what time it is.  I’m guessing the same analogy can be applied to the #2 pencil.  Good researchers question their logic and the logic of their clients, who often get so caught up in their product that they don’t understand its role and importance in the lives of everyday consumers.

Third, and this one is important, good market researchers need to understand how the purchase is made prior to measuring and interpreting the actual purchase.  The survey should have asked me how I made decisions about the purchase of school supplies.  You see, my school district, and all the ones around me (I talked to other parents – yup, I’m an “influencer”), gave me a list of supplies for my elementary-aged son.  They asked for specific brands and sizes and colors (emphasizing “washable” products, of course) because what happens is that we give the entire bag of supplies to the teacher, who puts them in community bins.  The kids, when they need one, just get their pencil or marker from the bin.  Then they return it when they are done using it.  It’s a great idea – things don’t get lost, there’s no need to label, there are no equity issues (all the same brand and size and color), and the kids who cannot afford supplies are not left without.  So all this “brand leader/influencer” stuff the surveyor forced upon me?  Unfortunately, it’s all bad data.  If this bin trend for elementary students is a widespread one, the individuals who should be surveyed are the teachers who are making the brand decisions, not me.  Good researchers do some research before they do the research.

So if you are a product manager or a market research consultant, I beg you, please consider your product, the product purchase process, and your consumers.  Make it easy for us to provide you good-quality data.  Fewer and fewer people are responding to surveys, so please don’t push away those of us who still do respond.  And I should also say, lest I receive emails from pencil managers, I’ve nothing against the #2 pencil.  Why, I am one of the few adults who still uses a pencil – not a mechanical pencil that breaks every time I use it, but a real, sturdy, old-fashioned pencil that requires a sharpener.  Hmm, I wonder what type of consumer segment that puts me in…

Silos, Ladders, and Customer Feedback

On the drive into work recently, I was listening to an automotive dealer lauding his company because it had the intestinal fortitude to kill a product that customers clearly told the company they did not want and would not buy.  Ironically, later that morning, I received an email from a former colleague who is still in the business of consumer research.  He sent me a link to the Detroit Free Press article on the subject, accompanied by a comment stating his amazement that an organization would actually kill a product that customers were already heckling before it was ready for sale.  While many would think this a common-sense approach for any company that depends on customers to buy its products, it is not uncommon for corporations to, well, “dismiss” the customer.

My colleague and I have spent years “listening to customers” and reporting their opinions to countless organizations.  Our successful clients were those that took customer messages, incorporated them into their product design, retail environments, customer service centers, and even their parking lot design, and made changes quickly.  Our less successful clients were those who told us that we obviously recruited the wrong customers to our focus groups, or sampled the wrong 1,000 respondents when we were conducting quantitative research, because of course, these organizations knew their customers, and they knew the customer would like what they built.  Why, they were just completing the consumer research because they had to check the box on the product development flow chart that said they had done research with the target audience.  Consumer research was really just a formality on the way to moving forward.

In fact, we had one organization that scheduled what seemed like endless focus groups in two cities for the purpose of evaluating a new product idea.  We recruited carefully to ensure that we had the right customer – the customer this product was to target.  Group after group declared the product ugly, pointed out several problems, and advised the organization that they would not buy it.  They threatened to leave the brand and explore competitor products.

The message was clear:  don’t produce this!  However, our client informed us that we must have recruited the wrong people.  The 100 or so people who evaluated this product couldn’t have been the right people.  “We know our customer, and these people are not our customers.”  The solution was to conduct several more focus groups with several hundred more people in three more cities.  In the last city, the reaction to the product was “take it or leave it.”  The customers weren’t as vehemently negative.  And while we cautioned that the region of this last city is known for more “polite” communication, the absence of absolute negativity from the customers there was all that was needed to give the client permission to proceed.  They did, and the product, unfortunately but predictably, failed.

Now lest you think that this occurs only in large organizations:

  • From a 24-year-old technology developer during a focus group on search engines with baby boomer women who couldn’t see the benefit in the proposed technology:  “I know search engines and I know baby-boomer women (perhaps his mother?), and what they want.  These women in this group just don’t understand the technology enough to understand how much they need it.”  Ah, if the customer can’t understand your product, grasshopper, it’s dead in the water.  And yes, the technology, whatever it was, lost its venture capital funding.
  • From a youngish consultant in social media marketing:  “I tell all my clients to throw away their traditional marketing methods.  There is no reason to conduct direct mailings or run broadcast advertisements.”  Hmmm, I just hope his clients are all targeting moderate to high-end urban audiences that can be and want to be connected all the time by voice or Internet technologies – and who will opt in to receive marketing messages.  That’s everybody, right?  I’m not betting on a lot of repeat clients for this gentleman consultant.

Leaders in organizations, big and small, just like celebrities and politicians, tend to surround themselves with people who work well together and thus tend to agree with each other.  And the group begins to drink its own Kool-Aid.  Product development groups are often put together because they are high-functioning as a group – at the start.  It would seem, however, that they quickly become isolated.  They become silos on their own farm, isolating themselves from those they are serving.

It also follows that those who have had success in the past tend to climb the corporate ladder accompanied by some legendary success stories.  These climbers gain more power in the organization, and colleagues with less power tend not to want to report things that are not in agreement with the dude on top of the ladder.  I recall a conversation with a brand consultant on a flight home from NYC.  He told me he had gone to the city to deliver some bad news to a corporation regarding some customer feedback.  The process was to present to the product team, and the product team would then either have the consultant re-present with the CEO in the room or deliver the message themselves.  The team’s response to his findings:  “We can’t tell Mrs. Johnson that.  She would be furious.”  So the product team decided to pretend that the findings were never collected, and the consultant was heading home a day early and a bit dazed.  We laughed as we predicted that he would not be hired again by that organization, because it is easier to shoot the messenger than to deliver the message (or to pretend the message never happened).

It’s hard to listen to customers.  They are needy and always looking for something better, and for the most part, in most cities, they tell it like it is.  They aren’t very loyal anymore either.  It takes guts to listen to the customer.  And it takes still more guts to deliver the customer’s message to those who need to hear it – to those who have the power to pull the plug on a project and either make modifications or start all over.  And those with power don’t make it easy to deliver negative feedback, particularly when it’s a “pet project” that Mr. Power has wanted to develop for the past 20 years.  And it’s hard for me to advise people climbing the ladder to keep delivering hard truths to those at the top.  The more successful customer-driven organizations, of course, embrace delivery from all rungs; but still too many just kick the messengers off the ladder and send them home early.

So for those of you at the top of the ladder (and those of you climbing the ladder, when you get there), be sure to embrace dissonance.  Invite employees and your over-worked consultants, particularly those focused on consumer research, to disagree and to deliver the good, the bad, and the downright jeered.  If you rarely hear disagreement or every idea you present is met with, “That’s a great idea, the customer will love it,” you’re probably living in a silo and you’re likely thwarting honest feedback.  When presented with unexpected customer feedback, take action to make sure the findings aren’t an anomaly, but if you continue to hear customers jeer and threaten to leave your farm, it’s a good bet it’s time to plow up the crop and replant.  And congratulations to General Motors for plowing up the field on the Buick crossover that customers said they would not buy!  I think I just heard a silo crumble!