Things to pay attention to when designing a survey: Lessons learned from two past research studies
If you have ever worked in UX research or market research, you’ve probably designed a survey, either to screen the participants you want to recruit for a study or to collect their feedback on a product. Many articles online already cover the challenges of designing these surveys and what to pay attention to. In this article, I’d like to walk through two challenges I encountered when designing surveys in my own past research, to give you a more concrete sense of how these challenges can affect your results.
First story: Clear and quantifiable measures for survey answers are crucial for recruiting the right participants
Once you have identified your target audience and started recruiting, you’ll want to recruit only people who fit your screening criteria. Otherwise, you might spend extra time running sessions with participants who are not your target audience. Unfortunately, exactly that happened to me in the past, due to an error I made while designing a screening survey to recruit participants.
One usability study I conducted in the past aimed to find the usability issues that people new to Duolingo* and unfamiliar with (or completely new to) Korean or Japanese would encounter during their first two lessons on Korean or Japanese characters. Our target audience needed to be people who had never used Duolingo before, wanted to learn Japanese or Korean, and had limited or no knowledge of Japanese or Korean characters. After defining our target audience, I designed and distributed the screening survey online. We finished recruiting, thinking everything had gone well, and started our first usability testing session on Duolingo’s Japanese character-learning lessons. Here’s where the problem arose: our first participant, who had indicated having limited experience with Japanese, actually knew all the hiragana and katakana characters; the only thing she didn’t know well was kanji.

I then realized there was something wrong with our survey design, so I looked back at the question I designed to ask about participants’ language proficiency (as shown below).

The error I made when designing the survey was failing to provide a clear definition of “characters” and a quantifiable measure of “some.” When writing the question, I forgot that kanji could also be interpreted as Japanese characters, while my teammates and I used “some characters” to mean knowing only a few hiragana or katakana characters. A more accurate way to recruit someone with very limited experience in Japanese would be to reword the answer option like this: “I can read or write fewer than 10 hiragana and katakana characters and have no knowledge of kanji.” (Those who know kanji are usually more advanced Japanese learners.) And, of course, we should have provided a clear definition of characters for each answer, spelling out hiragana, katakana, and kanji.
Since we had already distributed the survey and completed recruitment, we could not change it. Instead, we decided to conduct the study only with those who had indicated no experience with Japanese or Korean at all, and all subsequent sessions went smoothly. This error taught me a valuable lesson: when designing survey questions, provide quantifiable measures for answers that would otherwise be vague. It also helps to pilot the survey internally with your team to catch mistakes like this before launch.
Second story: Social desirability can influence the survey data you collect and should always be taken into account when designing a survey
One of my past market and user research studies aimed to determine whether sustainable packaging affects customers’ choice of meal kit services, and to collect users’ feedback and opinions on sustainable packaging for these services. I designed the survey and distributed it online to people who currently use, or have used, HelloFresh (a meal kit service). Although I won’t go over all the survey questions here, I will focus on two questions that produced a very interesting (and somewhat conflicting) result.
The first question in the survey asked participants to rate, on a Likert scale, the importance of various factors when selecting a meal kit service: customer reviews, food quality, menu selection, modification and cancellation policy, packaging, and price. After collecting the responses, I generated a visualization of the Likert scale results, which showed that the top three factors for customers when choosing a meal kit service were menu selection, customer reviews, and price. Sustainable packaging received the lowest score, indicating that most consumers viewed it as a neutral factor that wouldn’t influence their decision much.
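For readers who want to reproduce this kind of summary, here is a minimal sketch of how the Likert responses could be aggregated and plotted with pandas and matplotlib. The file name, column names, and 1–5 coding are assumptions for illustration, not the exact setup from my study.

```python
# Minimal sketch: aggregating Likert-scale importance ratings per factor.
# Assumes a CSV where each factor column holds a 1-5 rating
# (1 = not at all important, 5 = extremely important).
# The file name and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

FACTORS = [
    "customer_reviews", "food_quality", "menu_selection",
    "modification_cancellation_policy", "packaging", "price",
]

responses = pd.read_csv("meal_kit_survey.csv")

# Mean importance per factor, sorted so the chart reads bottom-up.
means = responses[FACTORS].mean().sort_values()

means.plot(kind="barh")
plt.xlabel("Mean importance rating (1-5)")
plt.title("Importance of factors when choosing a meal kit service")
plt.tight_layout()
plt.show()
```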

However, the interesting part is that another question in the survey asked former HelloFresh users about their reasons for ending their subscription, and in those responses “sustainable packaging” ranked much higher, coming in third place among reasons to cancel.
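If the cancellation question is a multi-select (“check all reasons that apply”), the ranking can be produced by counting how often each reason was selected. The sketch below assumes the survey tool exports the selected reasons as one semicolon-separated string per respondent; the column name is hypothetical.

```python
# Minimal sketch: ranking multi-select cancellation reasons by frequency.
# Assumes one semicolon-separated string of reasons per respondent
# (the CSV and column names are hypothetical).
import pandas as pd

responses = pd.read_csv("meal_kit_survey.csv")

reason_counts = (
    responses["cancellation_reasons"]
    .dropna()                 # skip respondents who never cancelled
    .str.split(";")           # one list of reasons per respondent
    .explode()                # one row per selected reason
    .str.strip()
    .value_counts()           # frequency ranking, most common first
)
print(reason_counts)
```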

The results we obtained from consumers for these two questions do not seem to align with each other, and one reason for the conflict may be “social desirability.” Even though we emphasized at the beginning of the survey that responses were confidential, people still tended to overstate the importance of sustainability when the question asked about it directly, rather than through a rating.
To be honest, I still haven’t found an approach that fully resolves the “social desirability” tendency in survey answers. However, based on the slightly different answers we obtained from these two questions in my research study, it seems that “indirect questioning” can elicit more honest feedback: instead of asking directly which factors play a role, asking respondents to rate the importance of many different factors may produce more truthful responses.
Overall, we need to take “social desirability” into consideration when designing surveys and find approaches that enable people to answer more honestly.
*If you would like to learn more about the usability testing project on Duolingo, please feel free to visit this link: https://www.yimengwux.com/duolingo-and-foreign-language-learning