Microsoft and its partners are working to shrink the "data desert" that limits how accessible AI is to people with disabilities.

AI-based tools such as computer vision and voice interfaces have the potential to change the lives of people with disabilities, yet the underlying models are rarely built with data collected from those people themselves. Microsoft is working with several nonprofit partners to help make these tools reflect the needs and everyday realities of people living with conditions such as blindness and limited mobility.

Consider, for example, a computer vision system that recognizes objects and can describe, say, what is on a table. That algorithm was most likely trained on data collected by sighted, standing people, so a wheelchair user trying to do the same thing may find that the system performs poorly from their lower vantage point. Similarly, a blind person may not know to hold the camera steady in the right position long enough for the algorithm to work, and so must resort to trial and error. And facial recognition systems may be trained on few, if any, faces with features such as ventilator tubes, sip-and-puff controllers, or head-mounted gear, which can severely degrade accuracy if the system has never seen anything like them.

So Microsoft today announced a handful of efforts, undertaken with advocacy groups, to chip away at this "data desert" that limits how inclusive AI can be. The first is a collaboration with Team Gleason, an organization created to raise awareness of the degenerative neuromotor disease amyotrophic lateral sclerosis (ALS). Their concern is precisely the facial recognition issue described above: people with ALS have a wide variety of symptoms and assistive technologies that can confound algorithms that have never seen them before. That becomes a real problem if a company wants to ship gaze-tracking software that relies on face recognition, as Microsoft does.

Project Insight is the name of the new joint effort with Microsoft, which will collect facial imagery from volunteer users with ALS as they go about their daily lives. Over time, that facial data will be integrated into Microsoft's existing Cognitive Services, but it will also be released freely so that others can use it to improve their own algorithms. The goal is to release the dataset by the end of 2021.

The other area needing improvement is data captured from the point of view of people with low vision or people who use wheelchairs, and there are two efforts aimed at addressing it. One is Microsoft's project with City, University of London, the Object Recognition for Blind Image Training (ORBIT) project, which is building a dataset of everyday objects captured with smartphone cameras so recognition systems can learn to identify them. Unlike other datasets, however, this one will come entirely from blind users.
