I used ChatGPT for user research and here is what I learned
Today, I’m going to walk you through my experience playing around with ChatGPT, my recommendations for how to utilize this tool effectively, and the drawbacks that may bubble beneath the surface.

Generative AI is making splashes in the tech world, and it feels like the industry is barreling toward a future of fully integrated AI experiences. As we learn about how generative AI impacts our enterprise users, it leaves me wondering how I can leverage this technology to become a better, stronger researcher.
Many of us ponder the theoretical capabilities of AI to create research plans, write interview questions, build surveys, analyze data and uncover insights. The novelty of AI integration gives me hope that we can amplify the needs of our users, tackle product strategy, and of course build things that matter.
Looming behind the allure of ChatGPT sits an interesting question: how does generative AI affect the human psyche? What is its impact on our critical thinking, brain development, and perhaps even how we retain information in memory? How does AI change the way we work? A researcher’s brain may be our most powerful tool, allowing us to draw insights, drive impact, and influence product direction. So how does generative AI impact the way we do research?
How do we use ChatGPT for User Research?
ChatGPT sources information across the web and is a cauldron of knowledge for you to sift through. However, as with most spells, effective utilization requires your experience, wisdom and magic.
You may be inclined to use ChatGPT when you’ve just been acquainted with a research topic. Your colleagues and mentors are a wonderful resource for feedback and ideas, and you can also make use of the wealth of information that ChatGPT has access to. This can be useful in situations where you are the sole researcher, or you don’t have another person to brainstorm with.
I modeled our thought experiment after an exploratory project I completed in December. For the purposes of this article, let’s focus on one of the research questions, to explore user expectations around AI in developing software applications.
To see how ChatGPT would serve as a discussion partner during exploration of a problem space, I asked a series of questions, starting very broadly with prompts and questions such as:

— I want to learn about [description of topic]
— How would you research this topic?
— What user research methods would be appropriate for this? Why?
A great time to incorporate ChatGPT is once you’ve spent some time in the problem space. This gives you the opportunity to challenge your ideas, compare methods, and explore benefits and drawbacks that you may not have thought of. Leverage ChatGPT and ask questions such as:
— Why did you identify [list provided options] as your top 3 methods?
— In what order would you suggest conducting these methods?
— I’m interested in some of the unmoderated methods you’ve listed, [name of method] in particular. Could you provide an in-depth description of how [name of method] would be useful for this study?
— How can pairing [name of method] with [name of method] be beneficial?
ChatGPT should generate what is essentially the textbook definition of a method, and a high-level description of how it is typically used. What you may learn from this process is that you may need to rephrase or continue to prompt ChatGPT until it provides the level of detail you need or answers the question you actually have. In the same way that people misinterpret questions and language, ChatGPT is not immune to this.
It’s almost like when you ask a question during a presentation, and once you get an answer, you find yourself wondering what you even asked.

Once the chat generated what feels like a good starting point for our thought experiment, I thanked the chat and asked it to generate a research plan for this study.

Writing a Research Plan
Possibly one of the most important documents you could create, the research plan acts as a conduit for communication between you and your stakeholders. Crafting a document that is detailed yet succinct is critical to facilitating your research.
In addition to resource-related constraints such as budget, timeline, and availability, there may also be roadblocks in terms of project clarity. Your stakeholders could have competing priorities, and your research questions could be undefined. ChatGPT produces what I consider to be a great start, but doesn’t have the granularity and level of detail that I seek for my project work. To mitigate this, I put on the hat of a sprightly researcher interested in learning more and focused on theoretical project constraints.
To help sculpt a meaningful response, you could ask a series of questions that you would expect to be asked during a method brainstorm, such as:
— Due to budget constraints, conducting multiple moderated research studies concurrently with a large number of participants may not work. What suggestions would you have to pair the User Interview with instead?
— I was actually thinking about doing [name of method], could you compare the benefits and drawbacks of using [name of first method] or [name of second method] paired with the User Interviews?
— Would [name of method] be pragmatic for a 4 week turnaround time?
— I want to use [name of method] to explore [research question], could I also use this to explore [second research question]?
The responses you get from ChatGPT offer a starting point to draft your research plan. I think that ChatGPT serves as a great tool to discuss methods you might be considering for the problem space, especially when your social resources are limited, i.e., if you’re the only researcher on your team, or your colleagues are unable to provide feedback at this time.
Building your study
With the mock constraints provided to ChatGPT, the suggested study incorporated semi-structured interviews and a survey questionnaire.
With the intent of exploring what ChatGPT can do for us, I asked ChatGPT to generate survey questions. Of course, this took refinement, additional prompts, and much tweaking. I ultimately took what was useful, added what I felt to be missing, and adjusted question types and wording to be in my voice. We ended up with a short questionnaire with a mix of open ended and closed ended response types.
Just as ChatGPT remained generic in choosing a method and writing a research plan, it stays generic when you ask it to generate research goals, survey questions, or interview questions.
It’s important not to take these responses at face value, because they lack a human touch. When using ChatGPT to supplement your work, be sure to thoroughly review the responses and adjust them to your project’s needs.

Qualitative research is a touchpoint with users that thrives on fostering emotional connection, building rapport, and making users comfortable. The knowledge, skills, and abilities that define a researcher influence the work we produce and the questions we ask, and that makes all the difference.
Data Analysis
Data privacy is a key consideration when it comes to User Research tooling and using external platforms for housing data and analysis. Many companies restrict the use of generative AI and ChatGPT due to the risks surrounding a company’s proprietary information. This is in part due to the lack of transparency around how data models are currently being built. Researchers ought to be mindful of how our research data is stored in order to keep it safe.
For those reasons, I could not upload transcripts of the user interviews I conducted last quarter. For our thought experiment, I conducted mock interviews and threw our short survey into Qualtrics, which has a feature that generates test responses, such as Likert scale responses.

In an attempt to understand the limitations of ChatGPT, I tested a couple of prompts. Starting with the entire quantitative mock dataset, I copied and pasted all the mock data without a prompt. This led to an error asking me to regenerate or shorten the prompt.

Then, I selected our first survey question in the dataset, copied all the cells, and pasted them into ChatGPT, again without a prompt. In response, ChatGPT recognized that what I had sent was from a Likert scale question, but asked me to clarify what I was asking it to do.

When asking ChatGPT to analyze the data, the responses felt like I was asking it to draw something from memory. ChatGPT requires a direct prompt to generate average ratings and percentage of responses.

I actually ran into something interesting, where ChatGPT was unable to generate percentages for each rating. Instead, it provided me the Python code that would have been used to calculate this information. While this is not exactly what I was looking for, it gave me some comfort to know how the generative AI makes these calculations.
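For the curious, the calculation itself is straightforward. Here is a minimal sketch of what that generated Python might look like; the ratings below are hypothetical mock data, not my actual dataset:

```python
from collections import Counter

# Hypothetical mock responses to one 5-point Likert question
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(ratings)
total = len(ratings)

# Average rating, and percentage of responses per rating point
average = sum(ratings) / total
percentages = {r: 100 * counts[r] / total for r in sorted(counts)}

print(f"Average rating: {average:.2f}")
for rating, pct in percentages.items():
    print(f"Rating {rating}: {pct:.0f}%")
```

Seeing the logic spelled out like this is arguably more trustworthy than a number with no working shown.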

In addition to quantitative data, I wanted to explore how ChatGPT could handle open ended responses from a survey, user feedback from an interview or behavioral data from an observational study.
To simulate the User Interviews that would be paired with our short survey questionnaire, I interviewed my peers, and created a small data set of qualitative feedback that could be fed into ChatGPT.
The chat summarized each user’s response at a high level, and also provided me with common themes across the participants. However, what I found was that the common themes and key takeaways would change depending on how I’d prompt ChatGPT.
As we learned earlier, it’s important to note that ChatGPT needs to be prompted with specific and clear directions. Give the chat all the context, details, and requirements necessary to elicit a meaningful or useful response, for example:
— Please conduct thematic analysis on these five users’ responses
— Use sentiment analysis to determine the overall sentiment of users on [topic]
— What are the common themes across these participants?
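For illustration only, a naive keyword-matching pass can mimic the shape of the output such prompts produce: a count of themes across participants. This is nothing like what ChatGPT does internally, and real thematic analysis is interpretive work; the responses and theme keywords below are hypothetical:

```python
# Hypothetical participant responses from the mock interviews
responses = {
    "P1": "The AI suggestions saved me time, but I worry about accuracy.",
    "P2": "I don't trust the generated code without reviewing it myself.",
    "P3": "It speeds up boilerplate, though privacy of our data concerns me.",
}

# Hypothetical themes, each defined by a few keyword phrases
themes = {
    "efficiency": ["saved me time", "speeds up"],
    "trust": ["worry", "trust", "accuracy"],
    "privacy": ["privacy"],
}

# Count how many participants touch on each theme
theme_counts = {
    theme: sum(any(kw in text.lower() for kw in keywords)
               for text in responses.values())
    for theme, keywords in themes.items()
}
print(theme_counts)
```

The counting is the easy part; the researcher’s judgment lies in deciding what the themes are in the first place.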
While ChatGPT identified many of the key insights that I outlined from the data on my own, the inconsistency in ChatGPT’s responses leads me to believe there is still a way to go before we can be confident in the accuracy of its answers. When leveraging tools to simplify our processes, remember that the power of research lies in producing thoughtful, accurate, and representative results.
When it comes to the efficiency of using ChatGPT and finding the porridge that’s just right, you may find it takes less time and effort to do it yourself.
Generating Action Items and Next Steps
One of my favourite parts of the research process is providing teams with next steps and opportunities for future research. I sit on a team that works both in the embedded (within products) and horizontal (think internal consulting) engagement model, and providing concrete next steps for future research is often a great way to advocate for UX and research.
Curiosity got the best of me, so I started to ask ChatGPT what it thought the action items would be, what our next steps could be, what follow-up research we should do, and so on.
What I received was vague, and a bit lifeless. It lacked impact and much of what was generated didn’t feel actionable or relevant. This in part is due to the limited information that I provided the chat. When building next steps for your team and for your stakeholders, you can take any number of things into account such as:
- Product roadmaps and information about upcoming releases or updates
- Upcoming and concurrent research
- Research capacity and availability
- Budget and financial constraints
And much more. Without this context, and without all the information that is business and project relevant, ChatGPT will be unable to provide you a meaningful response.
Reflections
Understanding how we can use AI to our benefit is the way forward, and I’d be remiss to think otherwise. There are of course limitations to the technology and how much it can actually do for you, but this is where your personal touch and our human intelligence comes to play. We as researchers have a unique privilege of engaging with our users. We get to watch them interact with products, actually see what they’re struggling with, and hear from them directly about what sparks joy and what their pain points are. We also get to think deeply about problems with the context of our company culture and the needs of our organization.
As generative AI’s power lies in the context you’re able to provide it, we must recognize that much of this knowledge and context is difficult to share with a generative AI chat. ChatGPT will not always be able to solve your problems and derive all the insights you’re looking for. Its lack of detail in responses isn’t the fault of ChatGPT, but rather a limitation born from the insufficient context that we are allowed to share with the chat. And this may be a persistent reality, as many companies restrict what data can be shared openly with generative AI and outside company channels. Moreover, as researchers, we have the responsibility to protect participant privacy and keep data secure.
A question I’d ask is how we can leverage these tools while also taking great care with data privacy and avoiding bias.
One of the things we’re always taught to recognize is that researchers may bring bias into their work. Similarly, we need to be cognizant of the potential for bias to creep into algorithms and datasets. When we use generative AI, we aren’t able to see where its data is sourced. The uncertainty we have towards ChatGPT, or generative AI and what it’s doing, will only grow with the volume of data it consumes.

As more data is thrown into the cauldron, it becomes increasingly difficult to identify the source of the information and validate the output it generates. When we are given information from a colleague or mentor, do we take this information at face value? Or do we ask questions, ask for sources, and make an effort to understand where it comes from?
I suppose ethical data modeling is a topic for another time.
Despite the limitations of the technology, potential bias and inaccuracies, generative AI is the path forward. Given ChatGPT’s ability to gather information at the speed of light, I believe that as long as we stay curious and ask questions, AI will bolster our ability to impact products with meaningful, research-informed, strategic business decisions.
Key Takeaways
- Generating prompts — Prompt generation may be your primary hurdle. You may need to regenerate and rephrase your ChatGPT prompts to get what you need for your projects.
- Accuracy — ChatGPT is well informed on research methods and typical sample sizes, but it may not be able to generate accurate suggestions based on nuances you may consider, including timeline, feasibility, and even data privacy and legal logistics. ChatGPT does not change the way it responds when asked to look at the problem space from a “user researcher’s” perspective.
- Understand the limitations of the technology — For example, it may not be able to tackle large datasets due to the character limitations of the chat. As of January 26th, 2024, the character limit is 4096 characters.
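One practical workaround for the character limit is to split a dataset into pieces before pasting it into the chat. A rough sketch, assuming the 4096-character figure above (actual limits vary by model and interface); the function name is my own:

```python
def chunk_text(text, limit=4096):
    """Split text into chunks no longer than `limit` characters,
    breaking on line boundaries. A single line longer than the
    limit is kept whole rather than split mid-line."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        # Start a new chunk when adding this line would overflow
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

You would then paste each chunk into the chat in turn, telling ChatGPT to wait for all parts before analyzing.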
Jenni’s Recommendations
- Place Value in Human Intelligence — Stay curious and ask questions. Leverage your peers and those around you and discuss the AI suggestions the same way you would discuss methodology without AI. Your perspectives, your experience, and your knowledge go a long way. Instill confidence in yourself and in others about what you bring to the table in the era of AI.
- Adopt a Learning Mindset — Embrace change, and stay open minded to what generative AI has to offer. You could easily learn a new way to tackle a problem that differs greatly from how things are typically done. The data model is learning just as much as you are. You can inform ChatGPT of something new, letting it know of other research or business related considerations to keep in mind.
- Retain a Healthy Amount of Skepticism towards AI — It’s our responsibility to be informed of what AI is capable of and what it can and cannot do. Respect the value that AI brings to the table, while advocating for your own skills and abilities. Stay in a growth mindset, and upskill yourself constantly.
Special thanks to my colleagues Rachel Chen, Vanny Chao, Frank Beege and Dana Lin for providing valuable feedback on the creation of this piece.
Jenni Chan is a User Experience Researcher and ERG leader at SAP. With a background in Human Computer Interaction, Design, Psychology, and Fine Art, she builds learning opportunities and uses her platform to advocate for inclusive practices for research.