By Kate Kaye, Protocol
“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.
But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.
“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.
Whether they're struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, people with disabilities face immense everyday obstacles to a satisfying digital experience.
Needless to say, technology companies have tried to step in, often promising more than they deliver to users and businesses hoping that automated tools can break down barriers to accessibility. Although automated tech used to check website designs for accessibility flaws has been around for some time, companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can do it for large enterprises that need to manage thousands of webpages and pieces of app content.
Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.
Why Capital One is betting on accessibility AI
For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.
“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.
Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”
One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held on Thursday, the annual worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.
Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.
It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.
Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that makes it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.
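Evinced hasn't published its detection logic, but two of the checks described above are precisely defined in the WCAG standard and easy to illustrate. The TypeScript sketch below is a hypothetical illustration, not Evinced's code, and its function names are invented for this example: it computes the WCAG 2.1 contrast ratio between a foreground and background color and flags images that lack alt text.

```typescript
// Hypothetical sketch of two basic automated accessibility checks, loosely
// modeled on what scanners like Evinced's look for. Not Evinced's actual code.

// WCAG 2.1 relative luminance of an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const linear = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// WCAG contrast ratio between two colors (always >= 1; higher is better).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Images without alt text give a screen reader nothing to announce.
function findMissingAltText(root: Document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll("img")).filter(
    (img) => !img.hasAttribute("alt"),
  );
}

// Dark gray on white passes WCAG AA (>= 4.5:1 for normal text); light gray fails.
console.log(contrastRatio([85, 85, 85], [255, 255, 255]).toFixed(2));    // "7.46"
console.log(contrastRatio([200, 200, 200], [255, 255, 255]).toFixed(2)); // "1.67"
```

WCAG's AA level requires a contrast ratio of at least 4.5:1 for normal-size text, which is why the light-gray example above would be flagged.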
“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.
The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.
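The article doesn't describe how that contextual analysis works under the hood. One simple heuristic in the same spirit, sketched below as a hypothetical TypeScript check rather than Evinced's actual method, flags links that behave like buttons: screen readers announce a link as navigation and a button as an action, so a "Pay bill" control marked up as a link sets the wrong expectation.

```typescript
// Hypothetical heuristic: anchors that navigate nowhere but carry a click
// action are usually buttons in disguise (e.g., "Pay bill") and should be
// real <button> elements so assistive tech announces them as actions.
function findLinksActingAsButtons(root: Document): HTMLAnchorElement[] {
  return Array.from(root.querySelectorAll("a")).filter((anchor) => {
    const href = anchor.getAttribute("href") ?? "";
    const goesNowhere =
      href === "" || href === "#" || href.startsWith("javascript:");
    const actsLikeButton =
      anchor.hasAttribute("onclick") || anchor.getAttribute("role") === "button";
    return goesNowhere && actsLikeButton;
  });
}
```

Static inspection like this can't see handlers attached with addEventListener, which is one reason automated checks are typically paired with human review.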
Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to people’s specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they can pose a seizure risk. The company raised $28 million in venture capital funding last year.
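accessiBe hasn't published how its animation-stopping option is implemented; the snippet below is a rough, hypothetical sketch of how an overlay might freeze motion on a page by pausing videos and suppressing CSS animations.

```typescript
// Hypothetical sketch of an overlay's "stop animations" switch. Not
// accessiBe's code; a real product would also handle GIFs, autoplaying
// embeds and content added to the page later.
function stopMotion(doc: Document): void {
  // Pause every <video> and keep autoplay from restarting it.
  doc.querySelectorAll("video").forEach((video) => {
    video.pause();
    video.autoplay = false;
  });

  // Freeze CSS animations and transitions site-wide.
  const style = doc.createElement("style");
  style.textContent = `
    *, *::before, *::after {
      animation-play-state: paused !important;
      transition: none !important;
    }`;
  doc.head.appendChild(style);
}
```

Browsers also expose a standards-based alternative, the prefers-reduced-motion media query, which lets sites honor a user's operating-system motion setting without any overlay.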
Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.
Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.
“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”
In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.
Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.
“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don’t include good indications of non-linguistic elements of the media,” Long said.