Most of us have asked LLMs to be our UX assistants by now — helping us plan projects, review flows, and critique UX copy, in minutes instead of hours. But what if AI could act as users themselves? Could data gathered from synthetic LLM-generated users augment traditional research, offering a faster, cheaper path to quality insights? And if so, what are the pros and cons of this renegade approach?
In this post we’ll break down what “synthetic users” are, how to use them responsibly, and where they might be able to (and definitely shouldn’t) replace research with human participants.
What are synthetic users?
Basically, AI-generated representations of users. These entities simulate user personas by predicting what a given persona might say or do, based on the data you feed them.
Synthetic users can be created using regular LLMs (like GPT-4o or Gemini), or with dedicated platforms like syntheticusers.com, which typically include more tailored industry-specific data sets and pre-made examples.
Unlike traditional, static user personas, synthetic users can engage in live conversation, generating responses in real time.
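To make that concrete, here’s a minimal sketch of what a synthetic user can look like in practice, using the OpenAI Python SDK with GPT-4o. The persona details are invented for illustration — swap in your own:

```python
# A minimal "synthetic user": a persona system prompt plus a chat model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical persona -- ideally grounded in your own research data.
persona = (
    "You are Maria, a 34-year-old freelance designer who travels often, "
    "is comfortable with tech, but gets impatient with cluttered interfaces. "
    "Answer interview questions in the first person, as Maria would."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Describe the last time you booked a trip online, and how you felt."},
    ],
)
print(response.choices[0].message.content)
```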
Sounds dubious, right? I’m skeptical too. But can they be useful as a side dish to the real deal? Let’s suss it out.
Why use synthetic users?
Before we get into the cons, synthetic users do have some pretty compelling advantages in the right context:
Speed
No recruitment, scheduling, or NDAs.
Scale
Generate diverse perspectives at a fraction of the effort. Struggling to recruit visually impaired users for accessibility testing? This could be the way.
Cost
Dramatically cheaper than running real studies.
Risk-free testing
Useful for exploring sensitive topics where real users might hesitate or feel uncomfortable.
Early exploration
Great for pre-research, helping you refine study directions before talking to actual users.
Real-world example:
A product team testing a new mental health app for patients with schizophrenia could use synthetic users to explore early-stage concept reception. This could help them identify potential concerns and triggers before running real interviews, protecting patients’ interests and freeing up time and resources for other parts of the development process.
Where synthetic users can work well
Discovery research
Need fast, directional input on an unfamiliar feature? Synthetic users can surface themes that help you prepare for conversations with real users.
Concept testing
Before building a prototype, get a quick gauge of how an idea might be received.
Hard-to-find users
Niche audiences (think astrologers, surgeons) can be difficult to reach; synthetic users can provide an accessible stand-in for first-pass insights.
Testing extreme scenarios
Take a closer look at accessibility issues and worst-case failure states (😱) without real-world risk.
Real-world example:
A fintech startup could use synthetic users to explore how underbanked populations might interact with a new savings feature, identifying friction points before starting field testing.
And now for the juicy part…
Where synthetic users can fail
Can’t feel emotions or frustrations
AI (and therefore synthetic users) have no lived experience and can’t “think”. They’re just sophisticated pattern recognition, simulating our behavior and communication. This can lead to the shallow, generic, overly rationalised responses we’ve come to associate with LLMs (I’ll never say “delve” again).
Bias
As with any LLM output, synthetic users reflect biases in training data, often over-representing dominant narratives while underrepresenting marginalised perspectives. Learn more about this serious ethical issue here.
False confidence
AI is a desperate people pleaser. If prompted poorly, it generates idealised rather than realistic responses (recently GPT-4o had the gall to assure me that users always come to appreciate AI. Someone’s been sipping the Altman kool aid 😉).
No real-world context
Synthetic users may describe using a product, but they can’t physically interact with it, meaning they miss tactile and usability issues.
Real-world example:
Going back to the mental health app example, synthetic users could completely miss a poor button placement that makes interactions frustrating. Real users would likely notice this instantly.
---
Best practices for using synthetic users
Basically, if you’re going to use synthetic users, use ‘em wisely.
Here’s how:
Use proper user personas, and upload existing “real” user research data as context
Anchoring the AI in a specific, relevant context reduces generic, surface-level responses.
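Here’s one way that anchoring might look in code. This is a rough sketch, assuming the OpenAI Python SDK; the file path and persona are placeholders for your own research material:

```python
# A sketch of grounding the synthetic user in real research data.
# The file path below is a placeholder for your own transcripts or findings.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

real_findings = Path("research/interview_highlights.txt").read_text()

system_prompt = (
    "You are Maria, a 34-year-old freelance designer. Stay in character.\n"
    "Ground your answers in the patterns from these real interview excerpts, "
    "rather than in generic assumptions:\n\n" + real_findings
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What usually stops you from finishing a booking?"},
    ],
)
print(reply.choices[0].message.content)
```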
Ask open-ended questions (just like in real life)
Don’t just let the AI agree with you. As with humans, open-ended questions encourage richer, more realistic responses from synthetic users…
👍 “Describe the last time you used this feature, and how you felt.”
👎 “How frustrating is the feature for you?”
Ask for journeys
Instead of asking what synthetic users “like,” ask how they achieve their goals. This shifts the focus to actual workflows and yields more actionable insights.
👍 “Walk me through how you would use this app to plan a trip.”
👎 “What do you like about this flow?”
Test edge cases and contradictions
Ask the AI uncomfortable questions to uncover limitations. This helps counter AI’s tendency to be overly positive unless deliberately challenged.
👍 “If you had to explain why someone wouldn’t use this product, what would you say?”
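Putting the last three practices together, here’s a sketch of a short scripted “interview” that keeps the conversation history so the persona’s answers stay consistent across questions (again assuming the OpenAI Python SDK; the persona is a placeholder):

```python
# A short scripted "interview" combining open-ended, journey, and
# edge-case questions, keeping history so answers stay consistent.
from openai import OpenAI

client = OpenAI()

persona = (  # placeholder persona; use your own, grounded in real data
    "You are Maria, a 34-year-old freelance designer. Answer in the first "
    "person and stay in character, even when challenged."
)

questions = [
    "Describe the last time you used this feature, and how you felt.",
    "Walk me through how you would use this app to plan a trip.",
    "If you had to explain why someone wouldn't use this product, what would you say?",
]

messages = [{"role": "system", "content": persona}]
for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```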
Always validate with real users
Never let synthetic research stand alone; it’s fake! Think of it as a hypothesis generator rather than a final source of truth. Done well, it might mean you only need to speak to three real users instead of ten.
Wrap up
Synthetic users can be a really useful tool, but they’re no replacement for real users. They offer fast, scalable insights to layer on top of existing real user research, but they lack real human complexity, emotion, and unpredictability.
Try them out for early-stage research, concept validation, and edge case testing, but always validate any findings with real people before making key decisions.
When might synthetic users be useful in your research workflow?
//Pete