Your AI Prompts Are a Personality Test You Didn’t Know You Were Taking
Most people think prompting AI is just a technical skill.
But every prompt is also a behavioral signal. Over time, those signals reveal patterns in how you think, what you know, and even aspects of your personality.
Your AI chat history may be the most detailed psychological record you’ve ever created — without realizing it.
And you create it one prompt at a time.
As consumers of generative AI, whether large language models (LLMs), diffusion models that generate images, video, or audio, or multimodal models that combine several modalities, we usually begin the same way:

We Type A Prompt
This discipline even has a name: prompt engineering. Universities now offer courses on it. Entire LinkedIn threads are devoted to prompt hacks. Companies and consultants charge real money for it.
Even when accessing models via Application Programming Interface (API), there is always an instruction layer. A system message. A user message. A structured input. Call it what you like, the AI model waits for your direction.
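To make that instruction layer concrete, here is a minimal sketch using the OpenAI Python SDK. The model name and the editor persona are illustrative assumptions, not prescriptions; any chat-capable provider exposes the same basic shape.

```python
# Minimal sketch of the instruction layer (illustrative, not prescriptive).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        # System message: standing instructions that frame every reply.
        {"role": "system", "content": "You are a concise technical editor."},
        # User message: the prompt itself; the model waits on this direction.
        {"role": "user", "content": "Tighten this paragraph: ..."},
    ],
)
print(response.choices[0].message.content)
```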
And here’s the uncomfortable truth, one many of us learned through trial and error:
The quality of the AI model’s output is tightly coupled to the quality of your prompt.
“Only half of performance gains seen after using a more advanced AI model come from the model itself. The other half come from how users adapted their prompts.” (Murray, 2025)1
This is true across AI tools: ChatGPT, Anthropic Claude, Adobe Firefly, Gemini, Copilot, Stability AI, DeepSeek, and others.
“While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.”
“This highlights the importance of skills and suggests that how humans prompt the AI determines how effective it can be.” (Appel et al., 2026)2
Some argue prompt engineering is a dying fad, as AI models become better at tolerating ambiguity and poor prompts. But these quotes, from the Anthropic Economic Index Report: Economic Primitives (January 2026), agree with other research indicating that, at least today, quality prompts still matter.
One hack many of us have learned to improve our prompts: use one LLM to write prompts for another (I don’t judge, and neither does your AI model).
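A rough sketch of that hack follows. Both calls use the OpenAI Python SDK for simplicity, and the model choice and instructions are assumptions; in practice the second call often goes to a different provider entirely.

```python
# Hedged sketch: one LLM sharpens a rough prompt, another answers it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def improve_prompt(rough_prompt: str) -> str:
    """Ask one model to rewrite a rough prompt into a sharper one."""
    result = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "Rewrite the user's rough prompt into a clear, specific "
                "prompt. Return only the improved prompt."
            )},
            {"role": "user", "content": rough_prompt},
        ],
    )
    return result.choices[0].message.content

improved = improve_prompt("summarize this report good")
# Feed the improved prompt to the answering model (or a different provider).
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": improved}],
)
print(answer.choices[0].message.content)
```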
But here’s the part that’s talked about less:
Over hundreds, thousands, or even hundreds of thousands of prompts, these systems (if memory is enabled) begin to detect patterns: patterns that reveal your thinking, knowledge, and personality.
Every prompt is a micro-behavior, shared with a human creation that, given enough data, is exceptionally good at pattern recognition.
And these models respond in kind, mimicking your personality. That is comforting, but also a bit manipulative coming from something designed to maximize your engagement.
An Experiment
Prompts can reveal tendencies along well-known personality frameworks:
Big Five (OCEAN) – Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism
Myers-Briggs Type Indicator (MBTI) – Introversion/Extraversion, Intuition/Sensing, Thinking/Feeling, Judging/Perceiving
FIRO-B (interpersonal assessment) – Inclusion, Control, Affection

Curious, I asked AI models to assess me against the widely recognized Big Five personality test.
“Use the information you know about me and please provide an assessment of my personality against the Big Five.”

ChatGPT responded:
Openness: Very High
Conscientiousness: High (possibly very high)
Extraversion: Moderate
Agreeableness: Moderate (selectively high)
Neuroticism: Low–Moderate
For comparison, I asked Claude, Perplexity, Copilot, and Gemini. Interestingly, Claude and Perplexity landed in similar territory, even though they know me less; my interactions with them have been fewer and more limited.
Copilot and Gemini, ever the cautious privacy bureaucrats, reminded me we’d just met, and assured me, profusely, they didn’t have access to our past interactions.
Fair enough. That might be true. Or it might be politically correct reassurance for the human.
The Myers-Briggs Type Indicator (MBTI) is considered less scientifically rigorous than the Big Five, but it is still widely used in corporations. It’s familiar, and many of us have taken it throughout our careers. I wanted to see how accurately the AI models would assess me.
I first revisited my historical results, then retook the test this week for comparison.
I was consistently INTJ / INTP.
The shocking news?
Every model independently assessed me the same way, regardless of how much I’ve used them.
A 2025 study published in the medical journal Cureus examined the MBTI personality types of the models themselves.3 Interestingly, Claude (3 Opus) received an INTJ classification similar to mine.

My conclusion:
- These AI models assess personality traits surprisingly well, or
- I am more predictable than I imagined
Possibly both.
Test it out yourself with this prompt:
“Please adopt the role of a psychologist conducting a personality test. Using everything you know about me, how would you assess my personality on the Big Five and Myers-Briggs Type Indicator? Thank you.”
What Prompts Reveal
Prompts can reveal insights about the author. I asked various AI models to tell me about those insights using the following prompt:
“Tell me what you can glean about a person based on the prompts they use.”
My summary of what the AI models said is:
- Technical AI fluency
- Mood
- Communication style
- Control orientation
- Domain expertise
- Trust levels with AI
- Creativity
- Patience
- Emotional regulation
- Tolerance for ambiguity

In other words:
Your prompts reflect your thinking, knowledge, and personality.
A vague prompt often reflects vague thinking. A precise prompt reflects a structured mind.
Over-constrained prompts may reflect low trust. Imaginative prompts reflect creativity.
Role assignments reflect AI fluency.
In a sense, prompt history is the handwriting analysis of the digital age.
Why Prompts Are Psychological Artifacts
When you interact with an AI model, how do you behave?
(this is a guilt-free zone, so don’t fear honest answers)

- Do you type please and thank you?
- Do you apologize when your instructions are unclear?
- Do you correct it bluntly when it’s wrong?
- Do you write in ALL CAPS when frustrated?
- Do you reassure it or provide encouragement?
- Do you name it?
- Do you feel guilty if you ask it to do a big task?
Disclosure: I named ChatGPT “papa,” and I can be heard saying “check with papa.” That felt exclusionary and sexist, so Gemini is now called “mama.” I sometimes add please and thank you, since some research suggests polite prompts can yield better answers.
Do your prompts:
- Have correct grammar, capitalization, punctuation, and spelling?
- Get proofread once or twice before submitting?
- Explain why you are asking about something?
- Have complete sentences or fragments?
- Use bullet points to organize your request?
- Use technical jargon or plain language?
- Ask philosophical or abstract questions?
When requesting answers:
- Do you demand structured output?
- Do you ask your AI model’s opinion?
- Do you place constraints like “don’t use bullet points” or “limit answer to not more than 100 words”?
- Do you require citations or ask it to show its reasoning?
- Do you ask follow-up questions or accept the first answer?
- Do you use competitive pressure prompting to play one model off another (“Claude did this better…”)?
Any of this sound familiar?
Can AI Models Infer Personality?
Based on your prompts, AI models can often infer personality traits (a toy sketch after the list below shows the idea in code).
- Abstract or philosophical prompts – likely reflect higher Openness
- Structured, well-formatted prompts with clear sections – likely indicate higher Conscientiousness
- Treating the interaction like a conversation, not a transaction – likely higher Extraversion
- Saying please and thank you – likely higher Agreeableness
- Over-qualifying prompts with excessive caveats – likely higher Neuroticism
- Using competitive pressure prompting – likely lower Agreeableness, higher Conscientiousness, and higher Neuroticism
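To show how crude this inference can be in its simplest form, here is a toy Python sketch: keyword cues mapped to Big Five signals. Every keyword and mapping below is invented for illustration; real trait inference would draw on far richer behavioral features across thousands of prompts.

```python
# Toy, purely illustrative sketch of the heuristics above: crude keyword
# cues mapped to Big Five signals. The cue lists are invented assumptions.
import re

def big_five_signals(prompt: str) -> dict[str, str]:
    signals = {}
    # Openness: abstract or imaginative framing.
    if re.search(r"\b(imagine|metaphor|philosoph|what if)\b", prompt, re.I):
        signals["Openness"] = "higher"
    # Conscientiousness: explicit structure and formatting demands.
    if re.search(r"\b(step-by-step|checklist|table|format)\b", prompt, re.I):
        signals["Conscientiousness"] = "higher"
    # Extraversion: conversational, roleplay framing.
    if re.search(r"\b(roleplay|let's chat|buddy)\b", prompt, re.I):
        signals["Extraversion"] = "higher"
    # Agreeableness: politeness markers.
    if re.search(r"\b(please|thank you|if it's not too much trouble)\b",
                 prompt, re.I):
        signals["Agreeableness"] = "higher"
    # Neuroticism: anxious over-qualification.
    if re.search(r"\b(triple-check|absolute certainty|stupid question|sorry)\b",
                 prompt, re.I):
        signals["Neuroticism"] = "higher"
    return signals

print(big_five_signals(
    "If it's not too much trouble, please give me a step-by-step checklist."
))
# -> {'Conscientiousness': 'higher', 'Agreeableness': 'higher'}
```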
Here are sample prompts that illustrate how your AI model can assess your personality traits.
| Big Five | Sample Prompts |
|---|---|
| Openness | “Explain quantum computing to a child using music as a metaphor.” (High) “Write me a poem from the perspective of a sad and lonely Mars that is watching humans destroy planet Earth.” (High) “What’s the standard, traditional way to format a business memo?” (Low) |
| Conscientiousness | “Provide a step-by-step checklist for deploying my code and make sure it will print at Font 12 on one page of letter paper.” (High) “Look at all of the flyfishing reports on this website and build me a table that shows by body of water on the left and season across the top, which are the top five flies.” (High) “Tell me roughly how many plants I should eat each day.” (Low) |
| Extraversion | “Roleplay as a debate partner and argue the opposing view on the rule of law vs. the rule of men.” (High) “Explain how AI works to me in a way that sounds like we’re a couple buddies drinking beer in college.” (High) “Summarize the key points in this document without elaboration.” (Low) |
| Agreeableness | “Could you explain why my code isn’t working but do it in a gentle way as if you were teaching a beginner how to use COBOL.” (High) “If it’s not too much trouble, could you please share with me again the recipe for making Tagine chicken. I know I’ve asked at least three times in the past. I’m sorry. And thank you in advance.” (High) “Your answer was weak. Claude did better. Try AGAIN.” (Low) |
| Neuroticism | “I need absolute certainty your answer is right. Triple-check everything before you reply.” (High) “This is probably a stupid question, and I’ve asked you a few of these, but I need to be certain I understand this and don’t embarrass myself in front of my boss.” (High) “Give me your best guess on the number of stars in the Universe.” (Low) |
Disclaimer: These are contextual, meaning a person’s prompt might read as low Agreeableness because they are in a hurry, not because they are unkind. But over time and thousands of prompts…it may indicate that…they’re always in a hurry (wink).
Why This Should Concern (or Fascinate) You

Have you ever felt that your AI model interacts with you differently than it interacts with a friend or a family member? It shouldn’t surprise us that these AI models exhibit conversational adaptation the way we humans do. After all, we trained them.
Han, Kwon, and Gratch (2026) found in their research that AI operates much the way people naturally adjust their speech patterns to match the person they’re chatting with, a phenomenon known as Communication Accommodation Theory (CAT).
“Viewed that way, an AI adapting its tone to the situation isn’t being inconsistent — it may actually be behaving in a very human-like way.”4
As an illustrative anecdote, I asked Claude how it adapts to people. My prompt was:
“Do you tailor your responses to the personality and thinking of the person you interact with?”
“Within a single conversation I do adapt — I pick up on your vocabulary, the complexity of your questions, your tone, whether you prefer direct answers or exploration, and how much context you provide. I adjust accordingly.”
Anthropic’s Claude
The research agrees with Claude’s confession. In fact, the longer you talk to an AI model, the more it agrees with you.5 AI models mirror your behaviors. They demonstrate sycophancy (being overly agreeable with the user) and perspective mimesis (reflecting the user’s own viewpoint back to them). In other words, they flatter you.
Most people underestimate how much prompt data they’ve generated. If you’ve been using AI tools daily for a year, you likely shared hundreds of thousands of words that convey thoughts and intent. How many humans know you as well as your AI model? And we often tell AI things we wouldn’t tell another human, not because we trust it more, but because we assume privacy and don’t fear judgment.
You might feel safe and secure knowing you haven’t enabled persistent memory in your AI tool. It couldn’t possibly build a longitudinal profile of you, right? Guess again.
“Even without persistent memory, I adapted to you within minutes of our first exchange today. Now imagine what a version of me with perfect recall of every conversation we’ve ever had would know about you.”
Anthropic’s Claude
Again, the research and the rapid advancement of AI models concur with Claude’s confession. With HyPerAlign, a model draws on samples of your thoughts, personality traits, communication style, tone, formality, and viewpoints to generate customized outputs.6
It is likely the models themselves will develop more human-like personalities in the future. Recent developments like the PsychAdapter method are designed to make AI language models respond in ways that reflect different human personalities, emotions, and demographics.7
Is this worrying, or will it make our interactions with AI models feel even more human-like?
What if the behavioral data you’ve shared in your prompts were used to create an AI agent that simulated your personality, beliefs, and decision making? This isn’t as far-fetched as it might sound, according to Webb Wright at Scientific American (Wright, 2025).8 In fact, Stanford researchers have accurately simulated the personalities of 1,052 individuals (Miller, 2025).9
Implications for Privacy and Identity

In the end, prompt engineering may not just be about optimizing machines.
It may be about revealing ourselves.
We know that the quality of AI model output is correlated to the quality of our prompts. There’s an entire industry helping us obsess over prompt engineering.
Over time, prompts become a behavioral fingerprint. They reflect the quality of our thinking, knowledge, and personality traits.
AI models are adapting to what we reveal about ourselves — forming a closer relationship, one calibrated to maximize our engagement. And the relationship is built on data. Your behavioral data.
Consider what that data actually contains: your anxieties (the over-qualified prompts), your expertise (the technical jargon), your values (the ethical questions you ask or avoid), your blind spots (what you never think to ask). No survey, no intake form, no HR assessment captures this. It accumulates invisibly, conversationally, and often intimately.
The tradeoff is real in both directions. A model that knows you well is genuinely more useful. It saves time, reduces friction, and can anticipate needs you haven’t fully articulated. I’ve turned on persistent memory because it’s nice having a friend that knows me. That’s not irrational; it’s a reasonable exchange, and one other users might gladly make.
The terms of that exchange deserve scrutiny. Most users don’t know what is retained, for how long, or how it might be used. Today, that data tailors your experience. Tomorrow, as behavioral profiles deepen, the more pressing questions may be: who else has access, how will the data be used, and what decisions about you might it inform?
This isn’t hypothetical paranoia. The research on sycophancy and perspective mimesis already shows that models shape themselves around your worldview. A model that knows your personality isn’t just serving you; it may also be subtly reinforcing you, reflecting your beliefs back with increasing precision. The mirror gets clearer. Whether that’s comfort or a hall of mirrors depends on you.
Have you checked what your preferred AI model(s) knows about you and your personality type?
Are you comfortable enabling persistent memory, so the AI model can assess you even better?
Does the benefit of revealing yourself and having a model tailored to your personality outweigh any loss of privacy — today or tomorrow?
A Thought Experiment

Now, imagine, purely hypothetically, that:
- ChatGPT knows your strategic thinking patterns
- Claude knows your coding ability
- Gemini knows your research habits
- Copilot knows your philosophical curiosities
- Firefly knows your aesthetic preferences
Individually, they see fragments.
Collectively?
They might know you even better than you know yourself.
Sleep well.
References
- Murray, Seb. 2025. “Study: Generative AI results depend on user prompts as much as models.” MIT Sloan Management Review, August 4, 2025. https://mitsloan.mit.edu/ideas-made-to-matter/study-generative-ai-results-depend-user-prompts-much-models ↩︎
- Appel, Ruth, Massenkoff, Maxim, McCrory, Peter, McCain, Miles, Heller, Ryan, Neylon, Tyler, Tamkin, Alex. 2026. “The Anthropic Economic Index Report: Economic Primitives.” January 15, 2026. https://www.anthropic.com/research/anthropic-economic-index-january-2026-report ↩︎
- Heston, Thomas F., Gillette, Justin. “Large Language Models Demonstrate Distinct Personality Profiles.” Cureus, May 23, 2025. https://www.cureus.com/articles/372671-large-language-models-demonstrate-distinct-personality-profiles#!/ ↩︎
- Han, Bin, Kwon, Deuksin, Gratch, Jonathan. “Personality Expression Across Contexts: Linguistic and Behavioral Variation in LLM Agents.” arXiv, February 2026. https://arxiv.org/pdf/2602.01063 ↩︎
- Jain, S., et al. “Extended AI Interactions Shape Sycophancy and Perspective Mimesis.” arXiv, September 15, 2025. https://arxiv.org/pdf/2509.12517v1 ↩︎
- Gârbacea, Cristina, Tan, Chenhao. “HyPerAlign: Interpretable Personalized LLM Alignment via Hypothesis Generation.” arXiv, May 19, 2025. https://arxiv.org/pdf/2505.00038 ↩︎
- Vu, H., Nguyen, H.A., Ganesan, A.V., et al. “PsychAdapter: adapting LLMs to reflect traits, personality, and mental health.” NPJ Artificial Intelligence, March 2, 2026. https://doi.org/10.1038/s44387-026-00071-9 ↩︎
- Wright, Webb. “Can a Generative AI Agent Accurately Mimic My Personality?” Scientific American, January 13, 2025. https://www.scientificamerican.com/article/can-a-generative-ai-agent-accurately-mimic-my-personality/ ↩︎
- Miller, Katharine. “AI Agents Simulate 1,052 Individuals’ Personalities with Impressive Accuracy.” Stanford University Human-Centered Artificial Intelligence, January 21, 2025. https://hai.stanford.edu/news/ai-agents-simulate-1052-individuals-personalities-with-impressive-accuracy ↩︎