Leadership
Companies Not Listening to UX Is Usually a Data Problem
If you've worked in UX or product long enough, you've probably had this experience.
You do the work, talk to users, and uncover patterns that are obvious once you finally see them. Then you bring those findings back to the team with a clear sense of what's broken and why.
And nothing happens.
The roadmap doesn't change. The decision was already made. The meeting ends with a polite nod and a vague promise to "keep it in mind." A month later, the same problems show up again, only now they're more expensive.
This is usually framed as a respect problem. Leadership doesn't value UX, product doesn't listen, engineering doesn't care. Sometimes that's true, but more often it's that you aren't speaking their language.
Most organizations don't actually know how to make decisions with evidence. Design just happens to be where that weakness becomes most obvious.
The Real Problem
In low-maturity organizations, decisions run on instinct and urgency. Metrics exist, but they're hard to find or aren't being gathered at all. Or they're scattered across siloed teams: Marketing owns some numbers, Product owns others, and Support has its own dashboards. Nobody has a clean line of sight from user behavior to business outcomes.
So when research shows up, it doesn't slot into an existing decision system. It competes with opinions, deadlines, and executive intuition. Even good insights feel abstract because the organization doesn't have a shared way to use them.
That's why research often gets labeled "interesting" instead of "actionable." The company hasn't learned how to use it.
If you're operating in that environment, your role quietly changes from the person who understands your users to the person who keeps bringing up what users need while everyone else focuses on business metrics. You're no longer just responsible for understanding users. You're responsible for making that understanding usable.



Why This Costs More Than Projects
Before getting into tactics, I want to name what this environment does to people over time.
When your work consistently fails to influence outcomes, it's easy to internalize that as personal failure. You start questioning your ability to communicate. Your credibility. Your value. Many designers assume they aren't persuasive enough or strategic enough.
In reality, they're trying to operate like professionals inside systems that don't support professional decision-making.
That disconnect is why so many talented designers burn out or leave roles feeling vaguely broken. The work never gets the chance to matter.
Learning to translate evidence into decisions is about sustainability as much as influence.
How to Operate Inside This Reality
One of the biggest mistakes I see designers and researchers make is treating insight delivery as the goal. They run studies, synthesize findings, and present what they learned as if clarity alone should be persuasive.
It rarely is, as we all find out in the end.
What stakeholders actually need is help making a decision. They need to understand what changes as a result of what you learned. Without that, even the most rigorous research becomes something they file away to support a point they already wanted to make.
When I want research work to be used, I don't start with findings. I start with the decision that's stuck. Then I work backward. What decision are we trying to make? What uncertainty is blocking it? What did we learn that reduces that uncertainty? And what does that mean we should do next?
That framing forces prioritization. It also forces you to let go of insights that are true but irrelevant to the moment. It's exactly how you'd approach working with users, so why not do the same internally?



Finding Data When There Isn't Any
Data is usually where this all breaks down.
Designers are taught to respect rigor, and that's a good thing. But in many organizations, the data you'd like to have simply doesn't exist. Analytics are incomplete. Instrumentation is poor. Legal or compliance restricts access. Dashboards are political or even unreliable.
Waiting for perfect data in those conditions is a luxury you can't afford.
When direct measurement isn't available, the next best thing is proxy metrics. Not as a placeholder, but as a deliberate choice. Sometimes you need to get creative and put in the work.
Support tickets tell you where users are confused. Call length tells you which tasks are hard. Training time reveals system complexity, and rework reveals misalignment. These aren't perfect measures of experience, but they already matter to leadership, and they're better than showing up with complaints and no numbers.
This is where AI has genuinely changed the work. Not in the ways most headlines suggest, but in the tedious parts: pulling themes from hundreds of support tickets, clustering feedback into patterns, flagging anomalies in call data. Work that used to take days now takes hours. That speed matters because it lets you show up to the conversation with evidence instead of intuition, which is more than most people in a company can do.
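To make that concrete, here's a minimal sketch of pulling themes from a ticket export. It uses plain TF-IDF clustering rather than an LLM, and the file name, column name, and cluster count are all assumptions you'd swap for your own data:

```python
# Minimal theme-extraction sketch. "support_tickets.csv" and its "text"
# column are placeholders -- point these at your own export.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = pd.read_csv("support_tickets.csv")["text"].dropna()

# Turn free-text tickets into TF-IDF vectors, dropping common stopwords.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(tickets)

# Group tickets into rough themes. Eight clusters is a guess; try a few
# values and keep whichever grouping is most interpretable.
km = KMeans(n_clusters=8, random_state=42, n_init=10)
labels = km.fit_predict(X)

# Print the most characteristic terms per cluster so a human can name
# each theme -- this is where your judgment comes back in.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[-8:][::-1]]
    print(f"Theme {i} ({(labels == i).sum()} tickets): {', '.join(top)}")
```

The printed terms are a starting point for naming themes, not a finished readout. Whatever tool does the grouping, the interpretation stays with you.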
That scale also adds confidence. When you can say "I reviewed 400 tickets and here's what I found" instead of "I looked at a sample and noticed something," the conversation shifts. You're not asking leadership to trust your gut; you're showing them the pattern and letting them decide what to do about it.
The judgment is still yours, though; AI only handles the labor. If you let it do more than that, the work starts to sound like you're reading back whatever the model gave you, with no conviction behind your words. Used well, it buys you time to synthesize instead of transcribe, and to run one more study instead of shipping half-baked conclusions.
Once the conversation is anchored in data you can really speak to, you're no longer arguing about opinions. That's where trust starts to build, and with that trust comes real influence on the roadmap.



Aligning Before the Work Starts
One of the most effective things you can do in a low-maturity org is align on success before you do the work.
This doesn't require a massive framework. A single page is usually enough. What business goal are we trying to move? Which KPI represents progress on that goal? Which part of the experience influences it? How will we know if it changed?
You don't present this as a UX artifact. You build it collaboratively with product and leadership. The agreement matters more than the document itself.
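A hypothetical version for, say, a checkout redesign might read: the business goal is reducing abandoned carts, the KPI is checkout completion rate, the part of the experience under study is the payment step, and success is a measurable lift in completion within the quarter. The specifics there are invented; the shape is what matters.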
Later, when findings are uncomfortable or inconvenient, you're not advocating for UX. You're pointing back to a shared definition of success that everyone agreed to upfront. This makes it harder for them to dismiss your work.
Making Research Feel Like Theirs
When research is disputed internally, designers and researchers often assume the problem is sample size or methodology. Sometimes it is, but more often, it's ownership.
People trust what they help create. What they were a part of.
When research arrives fully formed, it can feel imposed, especially if it challenges existing plans or mindsets. When stakeholders have seen users struggle with their own eyes, the tone changes. The debate shifts from whether the problem exists to how to address it.
I try to involve stakeholders early whenever I can. Not to run sessions or influence outcomes, but to observe. One or two sessions is usually enough. I also share early patterns before final conclusions, so nothing feels like a surprise.
By the time the readout lands, it's familiar. Familiarity breeds trust. I know it can be hard for stakeholders to find time in their busy schedules, so be willing to work around their calendars when possible.



ROI Doesn't Need to Be Perfect
You don't need perfect math, but you do need defensible math.
Executives make decisions based on ranges and probabilities. They want to understand scale and risk. You need to be able to confidently answer questions and push back with data, not your opinion.
A simple model works. How often does this happen? How many people are affected? What does it cost when it goes wrong?
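As a back-of-the-envelope sketch in code, with every number invented purely for illustration:

```python
# Rough cost model with illustrative numbers only -- swap in your own
# frequency, reach, and unit cost before showing this to anyone.
error_rate = 0.05          # 5% of checkout attempts hit the issue
monthly_attempts = 20_000  # how many people are affected
cost_per_failure = 8.00    # support call plus rework, in dollars

monthly_cost = error_rate * monthly_attempts * cost_per_failure
print(f"~${monthly_cost:,.0f}/month, ~${monthly_cost * 12:,.0f}/year")
# -> ~$8,000/month, ~$96,000/year
```

If leadership believes the error rate is 2% rather than 5%, rerun it with their number. The conversation is still about scale and risk, not about whose opinion wins.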
If someone challenges the numbers, that's not a failure. It means they're engaged and you have their attention. You've moved the conversation into business territory, which is where UX needs to be much more often.
If You're Stuck Right Now
Don't try to fix the organization.
Pick one workflow with clear business impact. Choose one or two proxy metrics you can access easily. Then run a small study tied to a real decision, and align on what success looks like before you present anything.
Ship the recommendation. Measure what you can. Move on. Keep it simple, stupid.
Influence compounds slowly, then all at once.
The Long Game
Some organizations never mature. They churn through designers and researchers without ever learning how to use them. No article I write can fix that.
But many companies sit right in the middle. They aren't hostile to UX, but they're underdeveloped. In those environments, the people who learn to translate evidence into decisions tend to outlast the others. They reduce risk, clarify tradeoffs, and help the business move forward with fewer blind spots.
That's what design leadership looks like in practice. Making it easier for people to decide.