Google’s Lookout app, which helps people who are blind or low-vision identify objects and read documents, is now using AI to generate more detailed image descriptions.
As part of the update, users can upload an image and get more precise information about what's depicted, even if it has no caption or alt text. (Alt text describes an image and can be read aloud by a screen reader for people who are visually impaired.) Users can also ask follow-up questions about an image, either by typing or by voice, to surface additional detail and get a clearer sense of what's shown.
Google says the new Image Q&A feature in Lookout works on social media photos without alt text, pictures in group chats and images from someone's camera roll.
“There’s so much visual media that’s pervasive, and it’s so often inaccessible,” Lookout Product Manager Scott Adams told CNET. “We’re trying to make this more inclusive and really give folks another tool, another option for interacting with that…visual world.”
The Image Q&A feature launches Tuesday in the US, UK and Canada, in English only, before eventually rolling out globally in more languages.
“We really wanted to make sure that we’re giving it to enough folks and we can build some real confidence around how people are using it, what they’re experiencing, what we should improve, before we roll it out further,” Adams said. “If we launch globally, we also want to make sure that people in different cultures and in different dress are happy with the way the model is handling photos made from their environment.”
Lookout is also adding 11 new languages, including Chinese, Japanese and Korean. It now supports a total of 34 languages.
Lookout is one of several apps and features from Google designed to improve digital accessibility. In recent years, the search giant has also unveiled Project Relate, which is designed to help people with speech impairments more easily communicate with others; Live Transcribe, which offers real-time speech-to-text transcriptions for people who are deaf or hard of hearing; and Sound Notifications, which alerts people with hearing loss about “critical household sounds” like appliances beeping, water running and dogs barking. It also launched a new accessibility feature for Pixel phones last year to help blind and low-vision users take selfies.
Other tech companies have also released their fair share of accessibility updates. Apple’s People Detection feature lets blind and low-vision iPhone and iPad users know how close someone is to them, and the company added Live Captions to the iPhone, iPad and Mac to help people follow along with audio and video on FaceTime, video conferencing apps and streaming media. Meanwhile, fellow tech giant Amazon recently added a feature that lets Amazon Fire TV customers with hearing loss stream audio directly to their hearing implants. It also released a feature called Dialogue Boost, which makes it easier for viewers to hear dialogue above background music and effects in a show or movie.