Artificial Intelligence (AI) tools, such as the language tool ChatGPT and the image generator Midjourney, hold promise in aiding individuals with disabilities. They can summarize content, craft messages, and describe images, potentially making digital content more accessible. However, their effectiveness is not without question: their propensity for inaccuracies, logical shortcomings, and ableist biases is a real concern.
A study conducted by the University of Washington, involving seven researchers, delved into the utility of these AI tools for accessibility. This three-month autoethnographic study drew upon the experiences of individuals both with and without disabilities. The findings, presented at the ASSETS 2023 conference in New York, highlighted the mixed results of AI tools in improving accessibility.
The study tested the tools across a range of use cases, including generating images, writing Slack messages, summarizing writing, and improving the accessibility of documents. The results revealed that while the AI tools had moments of success, they also displayed significant problems in most situations.
The research was presented through seven vignettes, often amalgamating experiences to maintain anonymity. One account detailed the experience of “Mia”, who used AI tools to assist with work during bouts of intermittent brain fog. The tools were sometimes accurate but often gave incorrect answers. In one instance, an AI tool subtly altered the argument of a paper, an error that initially went unnoticed. Such subtle mistakes highlight some of the most insidious problems with AI.
However, the same AI tools proved useful in creating and formatting references for a paper. Despite making mistakes, the technology helped reduce the cognitive load of the task.
Other tests revealed similarly mixed results. An autistic author found AI useful in crafting Slack messages, increasing their confidence in interactions, even though peers found the messages robotic. AI tools were also tested in increasing the accessibility of content, such as tables for a research paper or a slideshow for a class. While the AI programs could state accessibility rules, they could not consistently apply them.
An author with aphantasia (an inability to visualize) found image-generating AI tools helpful in interpreting imagery from books. However, the same tool failed to accurately create an illustration of ‘people with a variety of disabilities looking happy but not at a party’, instead producing ableist incongruities.
The study highlighted the need for further research into solutions to the problems it revealed. One complex issue is finding ways for people with disabilities to validate the output of AI tools; given how frequently the tools introduce errors, research into accessible validation is crucial.
As AI continues to evolve and integrate into our lives, it’s essential to ensure that it serves everyone and does not leave anyone behind. The tools have potential, but they also have room for improvement, particularly in terms of accessibility. It’s a journey, one that requires patience, empathy, and meticulous attention to detail. And we must be blunt: we cannot afford to ignore the issues that this study has brought to light.
Remember, health isn’t just about the physical; it’s also about the social and mental aspects. Accessibility is a part of that. So let’s keep pushing for better, for a world where AI tools are not just helpful, but also respectful and inclusive.