January 26, 2023

How to Use Surveys in Design Research - Part I: Pitfalls and Benefits

Lilla Schmidt

Surveys. We love them or we love to hate them. Why are they so divisive? Well, it would be hard to find a more misused UX research method than the survey.

On the one hand, due to their low cost and time requirements, surveys are often the go-to research method for many product teams. On the other hand, we also see researchers with a qualitative background who resent surveys and argue that the data they collect is biased at best and uninformative at worst. So which side is right?

We would argue that the best results come from striving for a balance. The goal of this article is to show researchers working in product teams how they might achieve that. As a starting point, we will see which practices are best avoided if you do not want to raise eyebrows. We will follow up in a second part with suggestions on how to do surveys better. Lastly, we will show you some inspiring use cases of UX surveys. Let’s dig in.

Surveys done wrong(ish)

“There’s no such thing as a stupid question”. We have all heard this phrase a million times. And generally speaking, curiosity is an admirable trait.

But ask any researcher, or for that matter anybody else whose job is to collect reliable data from people. Reporters, detectives or psychologists - you name it. They will definitely tell you that some ways of asking questions are more fortunate than others. But before we get to those, let’s first see what happens when questions are asked in an unfortunate way.

1. Sending out a random group of questions into the world wide web

As a researcher, you have some research questions to clarify. But you do not have the resources (such as time, incentives or tools) to actually answer them. And recruiting specific target groups for a proper qualitative study feels tedious. So why not simply collect your research questions in a free survey tool and post them to a big enough online group? It is such an easy, fast and cheap way to get data.

Indeed, if you join any UX-related Facebook or Slack group, you will not have to wait long to see a new survey pop up. Usually, it is followed by requests to fill it in. Anyone can do it, as long as they do it for free. And soon. Obviously, the survey will not take more than 10 minutes. Maybe 15. But that is max. Pinky-promise.

But just as cooking a good risotto requires more than pouring butter and rice into water, a list of questions randomly put together will not result in a good survey. Without a clear focus on both the goals and the target participants, it is better not to conduct a study at all. There, we said it.

Example of unstructured list of questions in a survey - source: writer's own collection

Think about it for a second. And be honest with yourself. Do you really believe that any randomly collected opinion leads to equally valid product decisions? Imagine collecting the data in person. By just randomly ringing your neighbours. One of them is a grumpy teenager. The other is a kind grandma. Who is more likely to answer your questions in a relevant way? Although there is always a chance that this grandma will provide valuable insights about TikTok’s usability, most of the time her opinion will be just a distraction. This is the huge risk you run when anybody can fill in your survey.

And even if the sampled grandma could be a future target customer... are you sure she understands your questions? When you sample a wide range of people, there will be a wide range in how well they understand your questions. This affects the reliability of your data.

And we have not even talked about the fact that a random list of questions can mean that respondents’ motivation changes while they fill it in. The order of questions influences how people answer. The type of question influences how people answer. Even the weather influences how people answer questions.

Are you sure you considered all this before releasing your survey to the public?

2. Surveys used as fortune-telling tools

We all want to predict the future. Especially when we work on new products or services. Maybe that is why one of the most common mistakes we see in surveys is the wide-ranging application of conditional questions. They usually sound something like “Imagine you are buying a rocket...”, “How would you behave if you became…?”, “Would you like this app if you were a new mother…?”.

The worst part of these questions is that they result in presentable conclusions, such as “80% of females above 30 who live in big cities would buy a rocket”. So should we start building these rockets? Not so fast.

People are extremely good at imagining futures. Especially ones that are flattering to their self-image. But doing what they have imagined? The picture is not so rosy anymore. You see, with conditional questions we only get answers about people’s intentions. If you are interested in actual behaviour, it is best to refrain from using them.

Example of “brainstorming” questions used in surveys - source: writer's own collection

3. Surveys used as unmoderated interviews

Repeat after us: “Surveys are not written interviews”. We cannot tell you how many times we have seen somebody post a list of open-ended questions and call it a day. And every time this happens, as researchers, we feel that something dies inside of us.

Believe us, we get it. Traditional interviews are sometimes hard to conduct. You may not have time to speak with every participant. You may not have the budget for incentives. Or you do not want to struggle with recruitment.

But first of all: is it ethical to ask unassuming volunteers to do your dirty work and answer long questions without giving them anything back? And it is not just about the money. In professional, well-led user interviews participants also get the chance to interact with the researcher. They share their issues or needs and we, researchers, make them feel heard. It is a good, productive experience. That is not the case when typing answers into a faceless survey.

Besides the ethical considerations, there is the question of data quality. With open-ended questions, you just have to accept the answers at face value. Even if they are not very elaborate. Or simply not clear. Only when you get to data analysis do you realise how hard it is to deal with dozens of answers that vary in quality. By the time you finish the analysis, you will probably regret using a survey as an interview replacement.

Example for surveys used as an in-depth-interview substitute - source: writer's own collection

Surveys done right

After seeing how not to run surveys, let's move on to calmer waters. Let's see what helps to create a valid, reliable and insightful survey.

Surveys, as most researchers define them, are essentially a set of questions (a questionnaire, or multiple questionnaires applied together) administered to your target group. They are designed, and later analysed, to measure attitudes, opinions and certain behaviours, or their prevalence, in those target groups. We know, it is a long definition. But we need solid foundations in order to achieve great results. Let’s see what this means in practice.

The right time for surveys

There are certain types of research questions for which you should start thinking about using surveys. One typical situation is when you need to find out how frequently, or how widely, a certain behaviour or attitude occurs. Here you can go into detail by grouping your different types of customers. For example, with surveys you might measure how many of the users who have seen your listings page found the listings attractive. Or, at a later stage of the journey, you might measure how big a proportion of your users remember a certain element from the listings page.
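
To make the “how many” question concrete, here is a minimal sketch of how such a proportion could be estimated from survey answers. The question wording, the 1-5 scale and every number below are invented for illustration; they are not from a real study.

```python
import math

# Hypothetical answers to a single question:
# "How attractive do you find the listings?" (1 = not at all, 5 = very)
# Each number is one respondent's answer; the data is made up.
responses = [5, 4, 2, 5, 3, 4, 4, 1, 5, 4, 3, 5, 4, 2, 4, 5, 3, 4, 4, 5]

# Treat answers of 4 or 5 as "found the listings attractive".
attracted = sum(1 for r in responses if r >= 4)
n = len(responses)
proportion = attracted / n

# Rough 95% margin of error for a proportion (normal approximation);
# with real data you would also check the sample size and sampling method.
margin = 1.96 * math.sqrt(proportion * (1 - proportion) / n)

print(f"{proportion:.0%} of respondents found the listings attractive "
      f"(±{margin:.0%}, n={n})")
```

The point is not the statistics: it is that a well-targeted survey gives you a number you can track, segment by customer group and revisit after a redesign.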

Besides measuring quantity, surveys are also well suited to answering how people feel about certain things. This is not to be confused with the bad use case of choosing surveys to explore the whys of behaviour.

Let us illustrate the difference. If somebody asks you how much you like pistachio ice cream, or whether you prefer pistachio ice cream to lemon, it is really easy to say. You love pistachio ice cream a lot and you prefer it to lemon. You can probably also recall how many times you have eaten pistachio ice cream in the past couple of weeks. Recalling takes a couple of seconds. Yes. Can you choose your answers from a drop-down list accurately? Absolutely.

But if we ask you why pistachio ice cream is so tasty to you, it becomes harder to give a true answer. Maybe it reminds you of family holidays, maybe you prefer cream-based ice creams, or maybe your go-to ice cream shop simply has a killer recipe. Maybe it is a mix of these. How likely is it that you will type all of this into a survey? Especially one you received on your mobile phone from a stranger? Not that likely.

As we can see, surveys are not a “wrong” method, but applying them to answer the right question is crucial. We get thick, emotional, deep data from interviewing a handful of people. But sometimes we do need to measure attitudes and quantities on a large scale. In those cases, as we have seen with the ice cream example, you really would only waste time with qualitative methods.

Additionally, going large scale can help you balance your qualitative data. Yes, we are talking about confirmation bias in product discovery. When your team makes important, long-term product decisions, surveys can help you validate how much your customers actually exhibit the behaviour you have been focusing on.

Before we move on, we want to mention that surveys can be paired with other quantitative methods too. For example, you might use analytics to track the success of a webpage’s design. But to get at not just what people do but how they feel about what they did, surveys are the most cost- and time-effective method.

Before your very first survey

You might be lucky enough to come into product design with a strong quantitative background. Some of us have graduated from research-heavy programs. In those cases, we are already familiar with topics like writing unbiased questions, data-cleaning techniques or statistical methods.

But UXR is a colourful field. It may very well be that your strengths lie elsewhere. In that case, we do not suggest jumping straight into your first survey. First, take the time to prepare properly. Learn about survey methods. This does not mean you have to go back to university. Or that you should spend your money on expensive courses. There are several reliable online courses that can be done at your own pace. You can audit courses for free on both Coursera and edX. Surveys are also covered in IDF’s quantitative course. These courses are a good way to hone your craft and to become more confident in the reliability of your results.

And do not forget: you are not alone. Survey creation is not an innate ability, but something you have to learn and practice. If we can advocate for one thing and one thing only, it is this: do not shy away from turning to other researchers who are more experienced in this area. Even if you are the sole researcher within your organisation, you can reach out to others via Facebook groups, Slack channels or LinkedIn. In our experience, UX research is a field where most people understand the struggle of starting out, and they are usually happy to help.

Additionally, in “part two” of this article we will also share the processes we follow when we create our own surveys. Stay tuned.

Smart examples of using surveys in UX research

Lastly, we would like to share our favourite survey practices. We have either seen them applied by our fellow researchers here at UX Studio, or learned about them from colleagues’ share-outs at UX conferences.

  • Quantification of user groups: you might have identified certain personas/mental models/user groups during your interviews. But how would you answer questions such as how valuable a certain persona is for your business? Small-sample qualitative methods are not designed to answer this. Surveys, on the other hand, can be used to assess the size of the identified user groups.
  • Prioritisation - stack ranking: “Nothing in life is as important as you think it is, while you are thinking about it” - says Daniel Kahneman. He calls this the focusing illusion. Sadly, it can lead to unsuccessful products even in teams that include discovery and testing in their product development. This comes from the fact that during interviews we ask users about certain problems. When we observe them recalling a highly frustrating problem, it makes sense to prioritise solving it. But how certain are we that these problems are indeed the most important ones for our target customers, and not just the ones that happened to be the focus of our interviews? Customer problem stack ranking is a method developed by Shreyas Doshi. It asks your customers to actually rank their problems. This means that by running this smart survey you can base your feature prioritisation on quantitative data (see the sketch after this list). Who would not want that?
  • Feature success: surveys are not just useful in the discovery phase but also after introducing a new feature or design. At that point, it can be hard to see how users really feel about the change just from behavioural data. For example, did the time spent on a certain page increase because they did not find what they were looking for? Or were they actually enjoying our new content? Quick intercept surveys can help you achieve one of the main goals of UX research: to reduce this uncertainty for your product teams. To track over time how users feel about product changes, you can run surveys on your webpage to measure your crucial goals.
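
To show what “basing prioritisation on quantitative data” might look like once the ranked answers are in, here is a minimal sketch of one way to aggregate stack-ranking responses. The problem names, the responses and the simple Borda-style scoring are all assumptions made for illustration; they are not part of Doshi’s own write-up of the method.

```python
from collections import defaultdict

# Hypothetical stack-ranking answers: each respondent ordered the same set
# of problems from most painful (first) to least painful (last).
responses = [
    ["slow search", "confusing filters", "missing photos", "unclear pricing"],
    ["unclear pricing", "slow search", "confusing filters", "missing photos"],
    ["slow search", "unclear pricing", "missing photos", "confusing filters"],
    ["confusing filters", "slow search", "unclear pricing", "missing photos"],
]

# Borda-style scoring: with k problems, a first-place ranking earns k points,
# second place earns k - 1, and so on.
scores = defaultdict(int)
for ranking in responses:
    k = len(ranking)
    for position, problem in enumerate(ranking):
        scores[problem] += k - position

# The problems with the highest totals were, on average, ranked as most painful.
for problem, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{problem}: {score}")
```

With a few hundred responses instead of four, a ranking like this is a much stronger basis for the roadmap than the handful of complaints that happened to come up in interviews.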

To sum it up

By now, we hope you are excited to add surveys to your toolbox. And you should be: surveys are a great tool to quantify qualitative insights, to gather continuous and cheap feedback about people’s attitudes and to help to prioritise important decisions. That is, when surveys are done properly.

We hope that after reading this article you feel more confident deciding when you should and should not apply them. Remember, the only thing worse than going into product development blindly is going in with false insights. To help you further with this, we will follow up with an exact step-by-step guide to creating your own surveys.

At the same time, do not stress too much. Hone your craft, ask more experienced researchers, try the learnings in practice and do it better next time.

And have fun while you do it. Stay tuned!