7 user research myths and mistakes

Finding the holes in qualitative and quantitative testing.



I can’t tell you how often I hear things from engineers like, “Oh, we don’t have to do user testing. We’ve got metrics.” Of course, you can almost forgive them when the designers are busy saying things like, “Why would we A/B test this new design? We know it’s better!”

In the debate over whether to use qualitative or quantitative research methods, there is plenty of wrong to go around. So, let’s look at some of the myths surrounding qualitative and quantitative research, and the most common mistakes people make when trying to use them.

My favorite qualitative research myths and mistakes

Just to make sure we’re all on the same page, when I talk about qualitative research, I mean methods that involve talking with real humans. Qualitative research never involves gathering statistically significant amounts of data.

User research == usability testing

About 90% of the time when people tell me that they want to do user research, they mean a very specific type of user research: usability testing. In other words, they’re running tests on their product or a prototype to see where people are struggling.

Don’t get me wrong. This is an incredibly important type of user research. However, it’s only one method out of many.

You don’t need to have a product or a prototype to do qualitative user research. Qualitative research can start as soon as you have an idea or a market or a single potential user. In fact, you should start learning about your user and your user’s problems long before you ever start to build anything at all. Interviewing potential users about their problems and their use of other products will make the first thing you build significantly more likely to succeed.

Of course, you should still usability test the hell out of it.

Users don’t know what they want

The most common reason people give for not talking to users is that “users don’t know what they want.” While that’s sometimes true, it’s not a good reason for not talking to them. It’s just a good reason for not asking them to tell you exactly what they want.

Instead, ask people about their problems. Ask them to tell you stories about how they use other products and how they make buying decisions. Ask them when they use specific products. Is it on the train? In the car? At their desks? At work? Ask them about their lives.

Users might not be great at telling you what new product they’re definitely going to use, but they’re great at telling you about themselves, and that is a very good thing for you to understand if you’re making a product for them.

Qualitative research can be done on anybody

Unfortunately, when I tell entrepreneurs they should talk to people before they have a product, they tend to take me quite literally. I would like to take this opportunity to point out that talking to your brother doesn’t count.

If you’re trying to learn about the people who are going to be your users, you should talk to people who might honestly be your users. If your brother isn’t in your target market, then his opinion about your product isn’t worth much. It’s probably also not worth much because he loves you and wants you to be happy, but he’s a little worried that if this startup thing doesn’t work out, you’ll be sleeping on his couch.

When interviewing potential users, take some time to find the right people. Look for people who have the problem that your product solves or who have searched for similar products in the past. You’ll get much better feedback much more quickly.

But the participant said they’d buy the product!

Even when you find the right people to interview, you’ll be disappointed to learn that there is one question you must never ask. Sadly, it’s probably the single most asked question in early user research: “Will you buy this product?”

For some reason, the myth persists that people can tell you whether they will buy a specific product that has not yet been designed or built. This is a lot like asking people to predict the future, and it has a similar rate of accuracy.

Qualitative research does not mean simply showing somebody a landing page or a prototype and asking, “would you buy this?” In fact, doing this is the worst kind of self-deception because people will overwhelmingly say “yes” just because they want to seem nice.

My favorite quantitative research myths and mistakes

Quantitative research involves things like A/B testing, funnel metrics, and analytics. It’s the process of gathering information about what’s actually happening in an existing product.
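As a tiny illustration of what funnel metrics look like in practice, here is a hedged sketch; the step names and counts are invented for the example.

```python
# Hypothetical funnel: each step is (name, number of users who reached it).
funnel = [
    ("visited landing page", 10_000),
    ("signed up", 2_400),
    ("completed onboarding", 1_500),
    ("made a purchase", 300),
]

def step_conversions(funnel):
    """Return each step's conversion rate relative to the previous step."""
    rates = []
    for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
        rates.append((f"{prev_name} -> {name}", count / prev_count))
    return rates

for label, rate in step_conversions(funnel):
    print(f"{label}: {rate:.1%}")
```

Looking at where the step-to-step rates drop is what tells you what’s actually happening in an existing product, as opposed to what you hoped was happening.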

Metrics remove the need for design

For some reason, people think that A/B testing means that you can randomly test whatever crazy idea pops into your head. They envision a world where engineers algorithmically generate feature ideas, build them all, and then just measure which one does best.

This is not how the world works.

A/B testing only specifies that you need to test new designs against each other or against some sort of a control. It says absolutely nothing about how you come up with those design ideas.

As I mentioned, the best way to come up with great products is to go out and observe users and find problems that you can solve, and then use good design processes to solve them. When you start A/B testing, you’re not changing anything at all about that process. You’re just making sure that you get metrics on how those changes affect real user behavior.

Nothing about A/B testing determines what you’re going to test. A/B testing two crappy experiences does, in fact, lead to a final crappy experience.

That’s why design is still incredibly important. A/B testing can’t replace it. It can only make it possible to measure a design’s impact.

Good design can’t be measured

This leads me to the other side of the debate. It’s almost always a designer who is screaming that “good design can’t be measured.”

Well, of course it can.

Design doesn’t happen in a vacuum. It always has a goal. Generally, the goal involves changing user behavior in a way that has a positive long-term effect on a business — specifically, the business that is paying the designer’s salary.

With the tools available to us, we can now measure how our design changes impact actual user behavior. We can see whether a site redesign increased sales or a typographical change increased time on site. We can understand whether an onboarding flow change improved conversion or a new feature increased engagement.

This is important because sometimes the things we’re sure will work, don’t. We need to know that in order to not keep making the same mistakes over and over until we go out of business with a beautifully designed but totally ineffective product.
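To make “measuring a design’s impact” concrete, here is a minimal sketch of the arithmetic behind one common A/B comparison: a two-proportion z-test on conversion counts. The numbers are invented, and a real experiment should use a proper statistics library and a sample size decided in advance.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?

    conv_* are conversion counts, n_* are users per arm.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical result: 5,000 users per arm, 400 vs. 480 conversions.
z = two_proportion_z(400, 5000, 480, 5000)
print(f"z = {z:.2f}")
```

A |z| above roughly 1.96 corresponds to significance at the 5% level, which is one common (if imperfect) threshold for deciding a design change actually moved user behavior rather than noise.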

Only A/B testing optimizations

The biggest mistake I see in quantitative research is the total misuse of A/B testing. People spend a huge amount of time setting up an A/B testing framework only to use it for tiny optimizations like button color or text changes.

Sure, once in a great while changing a button label will have an enormous impact on conversion, but the majority of the time, these sorts of little tweaks have only a minor effect on user behavior.

This doesn’t mean that you shouldn’t be A/B testing, though. It just means you should be doing it better. You can A/B test far more than just small, intra-page changes. For example, you can roll out big new features to a portion of your user base and see whether there are improvements in key metrics for that group. You can change entire onboarding flows for half of your new users. You can A/B test entire product redesigns.

In fact, you should test those types of things. Don’t reserve A/B testing for the smallest, most isolated of changes. Those are the things least likely to have a significant impact on your bottom line. Make sure that you’re checking the impact of all of your major changes as well.

How to use them correctly

So, now that we’ve looked at a lot of the things people get wrong about qualitative and quantitative research, what’s the right way to use them?

I’ll be addressing that in my O’Reilly webcast, “Qualitative vs. Quantitative: Using Better Data to Improve Performance and Design,” on March 11 at 10 a.m. PST. If you want to learn some methods for effectively combining qualitative and quantitative research, join me then.

Editor’s note: this post is part of our ongoing exploration into end-to-end optimization.




  • This is an awesome article. I see a lot of these problems myself, and it’s a shame so many of these myths are still so prevalent.

    The one thing that I’m not sure I quite agree with is the idea that you should regularly A/B test large changes. You can do this, and it will tell you which version performs better, but it won’t tell you why – and the why is important for future improvements of the app/service/website you’re working on.

    If you’re testing small changes, it’s easy to conclude what it was that made the difference. If you’re testing large changes, it’s impossible to tell that without qualitative data to go with it.

    This increases the chances that you’re either going to come to the wrong conclusion about *why* that change was better, or you’ll repeat the original design mistake that you made elsewhere, because you haven’t identified it as a mistake.

    When I’m testing large changes, I prefer to prototype and then usability test before the change goes live. This also means you aren’t releasing things that have a worse experience than the one you started with.

    If you wanted to quantitatively validate that the change that performed well in usability testing was a good one, then using A/B testing would be a good way to do that. But, personally, I wouldn’t recommend heading straight for A/B testing for large changes.

  • JoeOvercoat

    Good advice on this page, applicable to almost any developmental effort. For the DoD bunch, that means the guys and gals that will actually operate the system, peeps.