By Tom Fleischman, Cornell Chronicle

Artificial intelligence is touching nearly every aspect of life – including assistive technology for blind and low-vision (BLV) individuals.

And just like in other arenas, the AI used to assist BLV people is good – but far from perfect.

In a study involving 20 vision-impaired participants, researchers at Cornell Tech found that the large language model-enabled application they developed to help BLV individuals interpret their surroundings worked well for general “What is this?” questions, but struggled when asked to provide more detailed assistance in complex tasks, such as describing artistic pieces. The researchers also proposed nine “skills” that would improve the models powering these apps.

Read more in the Cornell Chronicle.