How to combine user insights with the science of human behavior in the design process

Incorporating a brief social-science literature review into the design process

Riikka Iivanainen
Bootcamp

--

I was recently in a job interview discussing the role of user insights in the design process. After I shared my perspective, I asked the interviewer to share his. He said that they try to apply customer insight “to anything we can think of.”

Part of me was happy about the response: an organization truly embracing a human-centered approach! The other part of me was thinking: to anything?

I’ve become hesitant about accepting customer and user insight as an answer to everything. My hesitation grew while I was working as a UX researcher at a health tech company; over the months, I understood that the science of human behavior was crucial for making good design choices.

I eventually landed on the following insight: There’s a time for interviewing users, validating concepts with them, and testing usability. But there’s also a time for looking up existing research on the design problem — reading academic articles or other types of literature.

But how exactly should you combine these two sources of insight in the design process?

In this article, I’m going to explore this question by outlining a classic design process — with a small literature review baked in — and presenting an imaginary case study. It’s part of a series on understanding the difference between designing for a good experience and designing for behavior change.

The following paragraphs may be especially relevant to you if you’re developing a (primarily) digital product in the field of health, well-being, or coaching. But I believe them to be relevant to anyone wishing to harness social-science research in product development.

Ready? Let’s go.

Design process including a brief social-science literature review

The following is my take on a design process that includes a brief social-science literature review (spoiler alert: it’s in step 4). The process is largely based on a classic model of the design process: the double diamond.

Step 1: Identify a performance metric which, if improved upon, brings you closer to a business objective.

Yes, don’t just choose any problem, but one that provides value.

Step 2: Explore the problem space.

  • Explore the human side of the problem: How are your users behaving and why? What are they struggling with? In what kind of environment/situation are they making decisions regarding the design problem?
  • And the data side: What kind of usage patterns can you identify in product data?

Step 3: Scope down the problem.

Specify the problem you want to tackle based on the qualitative and quantitative insights you’ve collected.

Step 4: Explore potential solutions.

How might you solve the problem?

  • Search for any relevant research that could help you solve the problem.
  • Use your creativity to come up with solution ideas.

Step 5: Test and refine ideas.

Or hypotheses. Teresa Torres has written a lot about the value of testing hypotheses instead of ideas. In any case, sketch, test, and refine. And then do it all over again.

The classic double diamond model for the design process. Image from Wikipedia.

As you can tell, exploring relevant (social-science) research didn’t come up until step 4. That’s because for more “traditional” design problems — ones that aren’t initially about behavior change — the science often comes in during the ideation phase. In the double diamond, that would be the divergent phase of “designing things right.”

The science of human behavior could, of course, be relevant in other phases as well — especially if the problem itself relates to human behavior. You could do some reading in the “designing the right thing” part of the process: while exploring the problem space (What helps people resist temptations?) or scoping down the problem (What are the most effective ways to improve willpower?).

You may also benefit from reading at the very end of the design process. Once you’ve selected an idea to refine, you may again find yourself trying to influence user behavior. This offers another opportunity to explore the research on, for example, human decision-making. (Check out this article for more on how to identify whether you’re dealing with a behavior-change related design problem.)

If the above description left you feeling confused, don’t worry. Next, we’ll look at an imaginary case study to make it more concrete (is that an oxymoron?).

An imaginary case study: Improving conversion rates in a digital mental health coaching app

Imagine you work for a digital mental health coaching company like Mindshine or Mojo. OK? Now, let’s look at the case study.

1. Identifying a performance metric to improve upon

Your team has identified a relevant performance metric to improve upon: only 20% of the people who download the app start their first coaching course. Your goal is to get the metric up to 50%.

Your team also has a hypothesis: If you achieve this goal, you will have more people subscribing to the app after the seven-day trial; you believe that once people try out the offering, they want to stay — the coaching content is simply so good.

How could your team go about doing that?

2. Exploring the problem space

First, you need to understand why people are dropping off. You start by doing some interviews with the people who’ve recently downloaded the app. The interviews point to the following main issues.

1. Users download the app out of mere curiosity and simply forget about it:

“A friend recommended it to me, so I downloaded it. But then I forgot about it and when I went back, the trial had already ended.”

2. Users feel overwhelmed by the course selection and don’t know where to start — so they don’t:

“All the courses looked so interesting. So I just kept scrolling and scrolling and reading descriptions. Like ooh, that gratitude course sounds good. But I’ve also always wanted to start meditating. Then again, I could use some more self-compassion.”

Next, you search the product data for any patterns in the behavior of first-time users.

The analytics reveal the following insights:

  1. For some users, the clicks drop off dramatically after the first download.
  2. For others, there’s lots of scrolling and clicking back and forth.
  3. The first-time users who start a course are 15% more likely to purchase a subscription than those who don’t.
  4. The first-time users who finish at least one mini-course during the trial are 40% more likely to purchase a subscription.
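To make the data side of step 2 concrete, here’s a minimal sketch of how insights like these could be computed from raw event logs. Everything here is invented for illustration: the event names (`download`, `start_course`, `subscribe`) and the tiny sample dataset are assumptions, not an actual analytics setup — in practice you’d pull these numbers from your product-analytics tool or data warehouse.

```python
from collections import defaultdict

def funnel_rates(events):
    """Given (user_id, event_name) pairs, compute:
    - the share of downloaders who start a course,
    - the subscription rate among users who started a course,
    - the subscription rate among users who didn't."""
    users = defaultdict(set)
    for user_id, event in events:
        users[user_id].add(event)

    downloaded = {u for u, seen in users.items() if "download" in seen}
    started = {u for u in downloaded if "start_course" in users[u]}
    subscribed = {u for u in downloaded if "subscribe" in users[u]}

    start_rate = len(started) / len(downloaded)
    sub_rate_started = len(started & subscribed) / len(started) if started else 0.0
    not_started = downloaded - started
    sub_rate_not_started = (len(subscribed - started) / len(not_started)
                            if not_started else 0.0)
    return start_rate, sub_rate_started, sub_rate_not_started

# Hypothetical sample: five downloaders, two start a course, two subscribe.
events = [
    ("a", "download"), ("a", "start_course"), ("a", "subscribe"),
    ("b", "download"), ("b", "start_course"),
    ("c", "download"),
    ("d", "download"), ("d", "subscribe"),
    ("e", "download"),
]

start_rate, sub_started, sub_not_started = funnel_rates(events)
print(start_rate)  # 2 of 5 downloaders started a course -> 0.4
```

Comparing `sub_started` to `sub_not_started` is what lets you say things like “users who start a course are X% more likely to subscribe” — the kind of segment comparison the insights above rest on.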

3. Scoping down the problem

Your team decides to tackle the user pain point “I don’t know where to start” first. The choice is based on the hypothesis that the users who actively explore the selection are more motivated and, thus, more likely to subscribe. Moreover, the product analytics indicate that increasing the number of people starting (and finishing) a course could improve the conversion rate.

4. Exploring potential solutions

You formulate the following How Might We question: How might we motivate first-time users to choose a course, whichever one it is?

This is where the science comes in. You decide to explore the existing research on decision-making biases and intrinsic motivation.

As you read, you learn that…

  • People are more likely to make a choice from a smaller rather than a larger set of options.
  • People are more likely to make a choice when it feels urgent.
  • People are more likely to make a choice they see others making.
  • Letting people choose from a set of options instead of forcing a choice boosts perceived autonomy, making them more likely to stick with their choice.

Based on these insights, your team comes up with multiple solution ideas, the following three being the most promising:

  1. Highlighting the most popular courses of all time
  2. Creating a “featured” course only open for a limited time
  3. Offering three course suggestions based on a brief questionnaire that identifies the user’s main interests and challenges

5. Testing and refining ideas

Now you need to choose an idea to start testing. Idea 1 is promising, but it mainly rests on social proof. Idea 2 is interesting, but quite conceptual and laborious. You choose to go with idea 3, because it taps into several motivational factors.

This is where the case ends. Obviously, you’re only getting started; you still have a lot of sketching, testing, and refining to do. But that’s the topic of another article.

“Trying to apply customer insight to anything we can think of” is a great starting point. But once you’ve gotten to know your customers and their pain points, it’s good to pause and consider other sources of ideas. The existing science on human behavior and decision-making may be just what you’re looking for.

--

Writer, content designer, and user researcher fascinated by the human mind and behavior. I study (social) psychology for fun and love to tell stories.