Fixing the Blind Spots of Accessibility and AI

The tech world loves a good rising-star story, and right now, AI is its golden child. Every keynote, every blog post, every puff piece hypes it like it’s the second coming. The thing that’s finally going to “unlock access for everyone.”

And sure, it’s a beautiful dream. Automatic captions for every meeting. Instant image descriptions for the blind. Conversational interfaces that all “just work.”

But here’s the ugly, unsexy truth: AI isn’t democratizing accessibility. It’s amplifying bias, encoding exclusion, and scaling ableism at speeds we’ve never seen before.

The New York City Bar Association’s recent task-force report didn’t mince words. It laid out exactly how these shiny systems are failing disabled users … and why they’ll keep failing unless we burn the current playbook and start over.

The Lie of the “Average User”

AI doesn’t see people. It sees data. It sees probabilities. It sees clusters and patterns and the gravitational pull of “most common.”

And when you build systems around “most common,” you quietly decide that anyone who doesn’t fit that mold doesn’t matter.

  • If your speech doesn’t match the cadence of a corporate training dataset? You’re “lost in the ether.”

  • If your wheelchair shows up in less than 0.01% of the images scraped for a model? You’re invisible.

  • If your experience of autism, deafness, blindness, or chronic illness doesn’t match the “normal” cliché? The system doesn’t know you exist.

I’ve seen it firsthand. I’ve run research studies on captioning and voice interfaces, trying to understand how Deaf and hard-of-hearing users navigate our products. They weren’t just bumping into accessibility gaps — they were hitting full-on brick walls. A voice assistant that never understood an impaired user’s non-standard speech. A caption system that mangled someone’s speech so badly that the transcript came out reading like a string of insults.

And these aren’t edge cases. These are people. People the tech claimed to serve.

The Stereotype Factory

Ask a generative model to “describe a blind person” or “write a story about someone with cerebral palsy,” and what you get isn’t innovation. It’s a stereotype factory.

Blindness? It spits out “inspirational” tropes — the blind man who “overcomes his limitations” to climb Everest.

Cerebral palsy? It’s always some plucky kid “defying the odds” with a smile plastered on their face.

Autism? White, male, socially awkward, hoodie … always a hoodie.

This reduces entire communities to caricatures, erasing diversity and flattening humanity into digestible clichés that are easier for a system to generate than reality is to understand.

I call it digital ableism: the baked-in bias that treats disabled users like glitches in the system instead of human beings worthy of design, dignity, and nuance.

I’ve Seen the Harm

I’ve been deep in this space for years, leading design systems, conducting research with Deaf, hard-of-hearing, and speech-impaired communities, and sitting with users as they tried to make sense of tech that swore it was built for them.

I’ve seen interpreting systems garble vital medical conversations until patients lost confidence in their providers. Interfaces that lock out blind users because a button was never labeled in code. AI-powered “accessibility” tools that worked great in the staging environment but failed in the chaos of the real world.
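
A concrete illustration of that second failure: a screen reader can only announce what the code exposes. Here’s a minimal React/TypeScript sketch (the framework and component are my own illustrative choices, not taken from any product named here) showing how an icon-only button with no accessible name becomes a dead end, and how little it takes to fix:

```tsx
import React from "react";

// Broken: an icon-only button with no accessible name. A screen reader
// announces it as just "button," so a blind user has no idea what it does.
export function SubmitButtonBroken({ onSubmit }: { onSubmit: () => void }) {
  return (
    <button onClick={onSubmit}>
      <svg aria-hidden="true" width="16" height="16">
        <path d="M2 8h12M8 2l6 6-6 6" />
      </svg>
    </button>
  );
}

// Fixed: the same button with an explicit label. Assistive tech can now
// announce "Submit form, button," and the control is actually usable.
export function SubmitButtonFixed({ onSubmit }: { onSubmit: () => void }) {
  return (
    <button onClick={onSubmit} aria-label="Submit form">
      <svg aria-hidden="true" width="16" height="16">
        <path d="M2 8h12M8 2l6 6-6 6" />
      </svg>
    </button>
  );
}
```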

This isn’t theoretical. These are failures that cost people their independence, their jobs, their safety. And the kicker? These failures scale. Fast.

The Accessibility Mirage

To be clear, AI can be powerful for accessibility. Products like Be My Eyes and Microsoft’s Seeing AI have shown that pairing intelligent systems with human oversight can unlock incredible value. Captioning platforms have changed lives by making phone conversations accessible in real time for seniors who are learning to live with limited hearing.

But the minute you strip out the human element? Accuracy drops. Trust collapses.

We’ve seen this with transcription tools like Otter.ai and Google Meet captions. They’re good until they’re not. A single bad transcription can derail a meeting, a medical appointment, or a legal conversation.

This is why human oversight isn’t optional. It’s the safety net that keeps accessibility from becoming another “move fast and break things” casualty.
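
What “human oversight baked in” can look like in practice, as a purely illustrative sketch: caption segments the recognizer is unsure about get routed to a human captioner instead of being displayed as fact. Every name, type, and threshold below is hypothetical; none of the tools mentioned above publish these internals.

```ts
// Hypothetical shape of a recognizer's output: text plus a confidence score.
interface CaptionSegment {
  text: string;
  confidence: number; // 0.0–1.0, as reported by an assumed ASR engine
}

interface RoutedSegment extends CaptionSegment {
  needsHumanReview: boolean;
}

// Illustrative threshold only; a real deployment would tune this per context
// (medical and legal conversations warrant far stricter review).
const REVIEW_THRESHOLD = 0.85;

export function routeSegment(segment: CaptionSegment): RoutedSegment {
  return { ...segment, needsHumanReview: segment.confidence < REVIEW_THRESHOLD };
}

// Usage: low-confidence segments are flagged for a human, not shown silently.
const segments: CaptionSegment[] = [
  { text: "Take one tablet twice daily", confidence: 0.97 },
  { text: "Take one tablet [unclear] daily", confidence: 0.62 },
];

for (const routed of segments.map(routeSegment)) {
  console.log(
    routed.needsHumanReview
      ? `Send to human captioner: "${routed.text}"`
      : `Display: "${routed.text}"`
  );
}
```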

Designers and Engineers, This Is Your Wake-Up Call

If you’re building AI today, you’re either part of the solution or you’re quietly reinforcing the problem. There’s no neutral ground.

Ask your product team how they’re handling accessibility for users who are neurodivergent. Go ahead … I’ll hold your spot 👈.

…So, let me guess: silence. Maybe a nervous laugh. Maybe a vague “we’ve talked about it.” But no plan. No research. Nothing.


And that’s the industry in a nutshell. Companies are happy to tick the box that asks, “Do you have a disability?” when you apply for a job. They’ll flag ADHD or autism on their HR paperwork to look compliant. But when it comes to making sure their products actually work for neurodivergent users who process information differently, who may need consistent navigation, clear content hierarchies, or even just predictable motion patterns, it’s a black hole.

I’ve seen it up close. During a usability session, I watched a neurodivergent participant try to use a critical workflow in our app. The interface was “clean,” “modern,” “beautiful” — everything designers pat themselves on the back for, but it was also overloaded with shifting panels, hidden actions, and animations that triggered motion sensitivity. The participant froze and, visibly frustrated, said, “I can’t even figure out where to start.” That wasn’t their failure. That was ours.

Accessibility in a digital product isn’t just the right colors and text size. It’s designing systems that recognize the spectrum of human cognition. Ignoring that isn’t a small oversight. It’s ignorance. If you’re not designing for these users, you’re actively designing against them.

Practical Steps for Teams

  1. Include Neurodivergent Users in Research – Don’t just recruit “average” participants. Seek out those with ADHD, autism, or cognitive differences. Their feedback uncovers gaps your standard usability tests never will.

  2. Design for Consistency and Clarity – Predictable navigation, consistent placement of elements, and clear content hierarchies are lifelines for neurodivergent users.

  3. Audit Motion and Animation – Overly complex motion can overwhelm users or make interfaces unusable outright. Respect the system-level reduced-motion preference and provide reduced-motion modes by default (see the sketch after this list).

  4. Use Plain Language – Jargon-heavy or complex instructions alienate users. Write as if clarity is a feature … because it is.

  5. Keep Feedback Loops Open – Create easy, accessible ways for neurodivergent users to share ongoing feedback.
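
On the motion point specifically, browsers already expose the user’s system-level preference, so a product never has to guess. A minimal TypeScript sketch follows; the CSS class name is an assumption, so wire it to whatever your stylesheet uses to gate non-essential animation:

```ts
// prefers-reduced-motion is a standard media query; matchMedia lets us read it
// and react if the user changes the setting mid-session.
const reducedMotionQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(reduce: boolean): void {
  // Hypothetical hook: stylesheets key non-essential animation off this class.
  document.documentElement.classList.toggle("reduced-motion", reduce);
}

applyMotionPreference(reducedMotionQuery.matches);
reducedMotionQuery.addEventListener("change", (event) => {
  applyMotionPreference(event.matches);
});
```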

The False Choice

One of the biggest lies in tech is that accessibility slows innovation. That you can’t go fast if you’re designing for everyone.

That’s bullshit.

The reality is, building inclusively makes your systems better — more robust, more flexible, more resilient. Accessibility isn’t a drag; it’s optimization. It forces clarity in your data, consistency in your UX, and accountability in your outputs.

And if you don’t do it now, you’ll pay for it later — in lawsuits, in PR disasters, and in products that nobody trusts.

The Cost of Doing Nothing

Ignore this, and we’re staring down a future where AI hardcodes inequity into every system we touch.

Imagine job screening tools that silently filter out candidates with non-standard speech. Banking algorithms that deny loans to disabled applicants because their profiles don’t match the “average.” Healthcare AI that deprioritizes patients with complex needs because they don’t fit a “normal” statistical pattern.

This isn’t hypothetical. These failures are already happening. They just don’t make the slide decks.

Burn the Playbook

The old way — build fast, test later, patch accessibility after the fact — is dead. Or at least it should be.

The new playbook looks like this:

  • Inclusion by default.

  • Representation in data.

  • Accessibility checks in every build pipeline (sketched below).

  • Human oversight baked in.

  • Feedback loops with real users.
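
One hedged example of what “in every build pipeline” can mean: an automated check that fails the build when a rendered view has detectable violations. The sketch below assumes a React codebase with jest, @testing-library/react, and jest-axe already configured; the component under test is hypothetical. Automated checks catch only a slice of real accessibility problems, so they complement, never replace, testing with disabled users.

```tsx
import React from "react";
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { CheckoutForm } from "./CheckoutForm"; // hypothetical component under test

expect.extend(toHaveNoViolations);

test("checkout form has no detectable accessibility violations", async () => {
  const { container } = render(<CheckoutForm />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```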

And if that feels like too much work? Then you have no business building systems that claim to serve everyone.

This Isn’t About Pity — It’s About Power

Accessibility isn’t charity. It’s not about being “nice” or “inclusive.” It’s about building power into systems that have historically stripped it away.

When AI is built with disability at its core, it doesn’t just “help the disabled.” It makes the entire system stronger. Smarter. More humane.

And that’s the point.

The Bottom Line

AI is either going to be the biggest leap forward for accessibility since the smartphone or the largest mass-exclusion event in digital history.

Which way it goes depends entirely on the choices we make right now.

  • Keep building for the mythical “average user,” and we’ll scale ableism into every corner of the digital world.

  • Build with ALL users’ needs at the center, and we’ll create systems that actually deliver on the promise of equity.

The clock is ticking. The hype train is screaming down the tracks. And if we don’t pull the brake, we’ll end up exactly where the data tells us we will: a future where the people who most need access are the first to be shut out.

