Deaf since birth, Paul Meyer has used human interpreters and captioners to communicate with colleagues during his almost three-decade career in HR and technical recruiting.
But when companies began relying more on video conferencing during the pandemic, he noticed a worrying trend. As meetings moved online, AI-driven transcription software became a regular feature. And as that technology became part of everyday business, some employers assumed it could be deployed elsewhere, for example to replace human interpreters.
The problem, according to Meyer, is that the technology has faults that employers are not aware of, and these are making life harder for deaf workers.
“The company thought the AI technology for captioning was perfect. They were confused why I was missing a lot of information,” he says.
Speech-recognition technology, which became available in workplaces in the 1990s, has vastly improved and created new opportunities for disabled people to have conversations when an interpreter is not available.
It is now becoming more widely used by hearing people as a productivity tool that can, for example, help teams summarise notes or generate transcripts of meetings. According to Forrester Research, 39 per cent of workers surveyed globally said their employers had started using or planned to incorporate generative AI into video conferencing. Six out of 10 workers now use online or video conferencing weekly, a figure that has doubled since 2020.
The increased prevalence has many positives for deaf workers, but some warn that these tools could be detrimental to disabled people if employers fail to understand their limitations. One concern is the assumption that AI can replace trained human interpreters and captioners. The worry is compounded by a historic lack of input from disabled people into AI products, even some that are marketed as assistive technologies.
Speech-recognition models often fail to understand people with irregular or accented speech, and can perform poorly in noisy settings.
“People have false ideas that AI is perfect for us. It is not perfect for us,” Meyer says. He was let go from his job and believes the lack of proper accommodations made him an easy target when the company downsized.
Some companies are now looking to improve voice-recognition technology — through efforts such as training their models on a broader spectrum of speech.
Google, for example, began collecting more diverse voice samples in 2019 after it recognised its own models did not work for all of its users. It released the Project Relate app on Android in 2021, which collects individual voice samples to create a real-time transcript of a user’s speech. The app is aimed at people with non-standard speech, including those with a deaf accent, ALS, Parkinson’s disease, cleft palate and stutter.
In 2022, four other tech companies — Amazon, Apple, Meta and Microsoft — joined Google in research led by the Beckman Institute at the University of Illinois Urbana-Champaign to collect more voice samples that will be shared among them and other researchers.
Google researcher Dimitri Kanevsky, who has a Russian accent and non-standard speech, says the Relate app allowed him to have impromptu conversations with contacts, such as other attendees at a mathematics conference.
“I became much more social. I could communicate with anybody at any moment at any place and they could understand me,” says Kanevsky, who lost his hearing aged three. “It gave me such an amazing sense of freedom.”
A handful of deaf-led start-ups — such as OmniBridge, supported by Intel, and Techstars-funded Sign-Speak — are working on products that focus on translating between American Sign Language (ASL) and English. Adam Munder, the founder of OmniBridge, says that while he has been fortunate at Intel to have access to translators throughout the day, including while walking through the office and in the canteen, he knows many companies do not offer such access.
“With OmniBridge, it could fill in those hallway and cafeteria conversations,” Munder says.
But despite the progress in this area, there is concern about the lack of representation of disabled people in the development of some more mainstream translation tools. “There are a lot of hearing people who have established solutions or tried to do things assuming they know what deaf people need, assuming they know the best solution, but they might not really understand the full story,” Munder says.
At Google, where 6.5 per cent of employees self-identify as having a disability, Jalon Hall, the only Black woman in Google’s deaf and hard-of-hearing employee group, led a project beginning in 2021 to better understand the needs of Black deaf users. Many she spoke to use Black ASL, a variant of American Sign Language that diverged largely due to the segregation of American schools in the 19th and 20th centuries. She says the people she spoke to did not find Google’s products worked as well for them.
“There are a lot of technically proficient deaf users, but they don’t tend to be included in important dialogues. They don’t tend to be included in important products when they’re being developed,” says Hall. “It means they’ll be left further behind.”
In a recent paper, a team of five deaf or hard-of-hearing researchers found that a majority of recently published sign language studies failed to include deaf perspectives. The studies also did not use data sets that represented deaf individuals, and they made modelling decisions that perpetuated biases about sign language and the Deaf community. Those biases could become an issue for future deaf workers.
“What hearing people, who do not sign, see as ‘good enough’ might lead to the baseline for bringing products to the market becoming fairly low,” says Maartje De Meulder, senior researcher at the University of Applied Sciences Utrecht in the Netherlands, who co-authored the paper. “That is a concern, that the tech will just not be good enough or not be voluntarily adopted by deaf workers, while they are being required or even forced to use it.”
Ultimately, companies will need to prioritise improving these tools for people with disabilities. Google has yet to incorporate advancements in its speech-to-text models into commercial products, despite researchers reporting that they have reduced the error rate by a third.
Hall says she has received positive feedback from senior leaders on her work but no clarity on whether it will affect Google’s product decisions.
As for Meyer, he hopes to see more deaf representation and tools designed for disabled people. “I think that an issue with AI is that people think it will help make it easier for them to talk to us, but it may not be easy for us to talk to them,” Meyer says.