Insights
AI isn't Stealing Your Job. Your Laziness Is.
AI can speed up design and decision-making, but speed without direction is just expensive wandering. I decided to explore real-world examples of AI’s UX misfires, why human judgment still matters, and how to use AI without outsourcing your thinking.



We are getting closer and closer to 2026, and the AI hype cycle is in full swing.
Scroll through LinkedIn and you’ll see endless posts about the latest prompt hack, the newest AI productivity tool, or how someone “used ChatGPT to save 20 hours this week.” What you don’t see as often is the uncomfortable question: Did any of that output actually solve the right problem?
Because here’s the truth: AI is brilliant at producing answers. It’s terrible at knowing if those answers matter.
The threat to your job isn’t AI itself. It’s the people who know how to ask it better questions, and who understand that a machine’s first answer is almost never the right one. Most of all, it’s the people who remember that every user is unique.



The Real Danger is Human Complacency
The pattern is everywhere.
Someone feeds a vague request into ChatGPT or a design automation tool. Seconds later, they get an output that looks polished enough to pass as final. And instead of verifying it, they drop it into their report, send it to their manager and colleagues, and tell themselves they’re “working smarter, not harder.”
No research. Little to no user feedback. No reality check… until it’s too late.
In UX, I’ve seen teams generate “data-driven” personas using AI without speaking to customers. I’ve seen flows redesigned around AI-suggested journeys that were based on assumptions, not actual behavior. And it’s not just a design problem. The same thing is happening in marketing, sales, and HR.
This isn’t AI taking your job. This is you handing it over because you stopped doing the hard thinking. You stopped using design thinking. Maybe you have too much on your plate, or maybe you lost a little empathy after too many interviews. Whatever the cause, you can get it back.
This Isn’t Just a UX Problem
You don’t have to be a designer to fall into the “first answer” trap.
Marketers do it when they take AI copy without fact-checking. Sales teams do it when they send AI-written pitches without tailoring to the client. Leaders do it when they base strategy on AI-generated summaries instead of verified data.
In every field, the people who win with AI will be the ones who keep the human factor alive by questioning, validating, and steering the work.



Why We Let This Happen
AI output feels convincing.
It’s fast, it’s smart, and it’s written in an authoritative tone. Humans are wired to trust that, but AI isn’t an oracle. It’s a pattern generator. It has no concept of whether what it’s saying will work in the real world, or even whether it’s saying the right thing at all. When the speed of execution goes from weeks to seconds, the bottleneck moves. It’s no longer “how fast can we make this?” It’s “are we even making the right thing?”
If you can’t answer that, no amount of AI-generated polish will save you.
Real-World Examples of AI’s UX Missteps
1. AI That Doesn’t Admit Its Own Limits
A recent analysis shows AI struggles with transparency about what it can’t do. It often tackles tasks it’s not designed for, like summarizing a 200‑page report, without warning the user it may produce a flawed or incomplete answer. That overconfidence without guardrails undermines trust and increases risk in UX decisions. Not to mention how readily the human follows along.
2. Shallow Insights from AI Research Tools
Nielsen Norman Group tested AI-powered UX research tools that promise to “analyze your data in seconds.” The reality? Many spat out generic, surface-level takeaways that ignored context and missed key user behavior patterns. Teams relying on those tools without deeper validation risk designing solutions to problems that don’t exist.
3. The Chatbox Default
Look at most new AI products today, and you’ll see the same pattern: a chat interface. This default design choice might be fine for certain cases, but it ignores countless opportunities for more effective interaction models. It may seem like the simplest experience to offer users, but we’ve all used an AI tool and imagined a “nice-to-have” feature that seems obvious yet never gets built.
The Human Factor AI Can’t Replace
AI can scale execution. It can’t replace judgment. That’s still on us.
Here’s what humans bring to the table that AI doesn’t:
Problem-Finding: AI can solve any problem you give it. But figuring out the right problem, that’s human work. It requires observation, empathy, and connecting dots that aren’t in a dataset. In other words, design thinking.
Context Awareness: AI works from what it has seen. It doesn’t understand your brand, your customer quirks, or the subtle constraints that shape real-world use.
Critical Thinking: Machines can’t smell when something’s “off.” People can… if they’re paying attention.
These skills aren’t “nice to have” anymore. They are your insurance policy against becoming irrelevant in a world where everyone has the same AI tools.



Design Thinking Is Critical
Empathize
Get to know your users. Observe them in context, ask open questions, and look for pain points they may not even articulate. Challenge your own thinking and be willing to be wrong. That’s the best part!
Define the Problem
Sift through what you learned and frame the real problem. Keep it specific, actionable, and focused on the user’s needs rather than your assumptions. Run a quick design sprint if needed and collect others’ ideas and feedback.
Ideate
Explore a wide range of possible solutions. Brainstorm without self-censoring, then narrow down to the most promising ideas. This can be hard when you feel you have the solution already in your head. Step away from the problem then come at it with a fresh mindset.
Prototype & Test
Create quick, low-cost representations of your ideas. Sketch, wireframe, or mock up just enough to test the concept. If you can pull together a quick Figma prototype without burning too much time, go for it. Put that prototype in front of users and watch what they do, not just what they say, and collect that all-important feedback.
Iterate
Refine the idea based on what you learned. Sometimes you loop back to redefining the problem if you uncover something new. “Failing fast” is just how this job works. Don’t fight it, embrace it. Help others within your org understand that this is part of the process.
Final Thought
AI isn’t a magic wand, and it isn’t the villain. It’s the fastest junior teammate you’ll ever have, and like any junior, it needs direction, feedback, and oversight.
If you take its first answer and run with it, you’re not collaborating with AI. You’re outsourcing your thinking. Do that long enough, and AI won’t have to take your job. You’ll have already handed it over.