Categories
Accessibility, Content Creation, Disability Representation, Living with Sight Loss

AI Accessibility Barriers: When Tools That Help Get Blocked

There is a quiet revolution happening in accessibility, and most people haven’t noticed it yet.

For those of us who are blind, visually impaired, or living with other disabilities, artificial intelligence has started doing something that decades of legislation and good intentions only partially managed: it has begun to reduce the relentless, grinding effort that everyday digital tasks demand. Not perfectly, not completely, but meaningfully.

Then, just as we start to settle into that relief, the door gets closed again.

A Thousand Small Frictions

If you are fully sighted and move through the digital world without much friction, it can be hard to appreciate what it actually costs a disabled person to complete tasks that you do without thinking. Every poorly labelled button, every inaccessible form, every image without alt text, every website that doesn’t work well with a screen reader — these aren’t minor annoyances. They are a tax. A tax paid in time, in energy, in frustration, levied dozens or hundreds of times a day.

AI has started to quietly reduce that tax. Not by fixing the broken websites (those remain broken), but by giving me a layer on top of them that helps me navigate, compose, summarise, and act with far less effort.

What I Started Using Claude For

I write and publish posts regularly. I’ve been using AI to help me draft content and review it before it goes out. That part works well. But the final steps — entering the post into the publishing platform, selecting the right image, applying the correct tags — those remained my task. Manual steps, done with a screen reader, inside a platform that isn’t always as accessible as it could be.

A few weeks ago, I came across Claude Cowork, a desktop tool that lets Claude work more directly alongside you, using browser tools to interact with web apps on your behalf. I’ll be honest; I wasn’t expecting much. Another subscription, another product that might not quite fit. But I was curious enough to try.

The first time, I walked Claude through the process step by step. It watched, learned, and did exactly what I needed. The friction I had been absorbing for years, all the clicks, the tab-navigation, the second-guessing of unlabelled fields, just wasn’t there anymore. We turned the workflow into a repeatable skill, and for several publishing runs it worked smoothly. A genuine sense of relief is probably the best way I can describe it.

Why AI Accessibility Barriers Keep Happening

I came to publish another post, went through the same process, and hit a wall. The publishing platform had updated its settings to block the kind of browser interaction that Claude uses. The automation stopped working.

I understand, intellectually, why companies do this. Blocking automated browser access is a reasonable defence against bots, scrapers, and bad actors. These are legitimate concerns, and the people making those decisions are not setting out to cause harm.

But understanding the reasoning doesn’t make the impact any less real.

For me, in that moment, it wasn’t a technical inconvenience. It was a step backwards. The effort I had stopped expending had come back, without warning, because of a decision made somewhere in the platform’s infrastructure with no thought — I suspect — for users like me. That feeling, familiar to most disabled people, landed with its usual weight: you don’t quite matter enough for this to have been considered.

Finding a Way Through AI Accessibility Barriers

I didn’t give up. After a conversation with Claude, I found a path forward using Microsoft Playwright, a developer tool that allows browser automation in a different way. It requires a bit more setup, but it gets me back to something close to where I was.
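For anyone curious what that Playwright route looks like, here is a minimal sketch of the kind of step involved. The URL, field label, and button name are hypothetical placeholders, not my actual publishing platform, and it assumes Playwright is installed (`pip install playwright`, then `playwright install chromium`):

```python
# Minimal Playwright sketch: open a (hypothetical) publishing page and
# perform the kind of step Claude had been handling for me.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # headless=False keeps the browser window visible, which matters if,
    # like me, you follow along with residual vision or magnification
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://example.com/new-post")        # placeholder URL
    page.get_by_label("Title").fill("My next post")  # needs a proper label
    page.get_by_role("button", name="Publish").click()
    browser.close()
```

There is a small irony worth noting: locators like `get_by_label` and `get_by_role` only work when the page uses proper labels and roles, which is exactly the same markup that makes a page work with a screen reader.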

An accessibility workaround shouldn’t require that level of technical problem-solving. But here we are.

I’ve also reached out to the platform directly to explain the situation. My experience with them has been constructive in the past — they’ve listened when I’ve raised barriers before — and I’m hopeful this will be no different. I’m not writing this to criticise them specifically; they’re not named here because this isn’t about one company. It’s about a pattern.

How to Reduce AI Accessibility Barriers

When companies make decisions about AI access, bot prevention, and automation, accessibility is rarely part of the conversation. These policies are written to protect the product and the platform. That is understandable. But the unintended consequence is that disabled users, who are often the people most reliant on assistive automation, bear the cost.

This is the same pattern we have seen with captchas that screen readers can’t navigate, with two-factor authentication flows that assume everyone can read a screen, with apps that disable copy-paste in ways that break assistive technology. Each individual decision might seem defensible. Together, they add up to a world that keeps telling disabled people: this wasn’t built with you in mind.

AI automation tools are, for many of us, assistive technology. Blocking them needs to be treated with the same care and consideration as blocking any other accessibility accommodation.

Honest About My Own Tools

There is something I need to say while I am on the subject. Claude — the very tool I have been using throughout this process — is not fully accessible itself. I am only able to use it at all because I have some residual vision. Without that, the current interface would present real barriers of its own.

I think it is important to say that plainly. The same case I am making here, about companies needing to consider disabled users before they make decisions that affect access, applies to Anthropic too. More work is needed to ensure Claude is genuinely accessible to everyone, not just those of us who happen to have some sight left or who are technically minded enough to find workarounds. That work matters, and it needs to happen.

This is not a reason to dismiss the tool. It is a reason to keep pushing.

What I’d Ask For

I’m not asking companies to abandon security. I’m asking them to include disabled users in the conversation before they make changes that affect accessibility.

A few practical things would help: accessibility impact assessments before rolling out bot-blocking changes; talking to disabled users before those changes ship, not after; exception pathways or alternative routes for users who rely on assistive automation; and, when someone reaches out to tell you that a change has created a barrier, listening.

The technology exists to remove AI accessibility barriers that disabled people have been navigating for years. That’s worth protecting.

Tell Me What You Think

Have you hit a similar wall — an AI tool or automation that was helping you, suddenly blocked? I’d really like to hear your experience. And if you work at a company making these kinds of decisions, I’d especially love to talk.

Tell me what you think in the comments below or on X @timdixon82

By Tim Dixon

Tim Dixon has worked in IT for over 20 years, specifically within the Testing, Inspection and Certification industry. Tim has Cone Dystrophy, a progressive sight loss condition that affects his central vision and colour perception and makes him sensitive to light. He likes to share his experience of life and how he navigates the abyss of uncertainty.

Follow Tim Dixon on LinkedIn