Microsoft to Remove Face Analysis Tools in Push for Responsible A.I.
Academics and activists have raised concerns for years about facial analysis software that claims it can identify an individual’s gender, age and emotional state.
Microsoft acknowledged some of these criticisms on Tuesday and said it would remove features for detecting, analyzing and recognizing faces from its artificial intelligence services. The features will stop being available to new users this week and will be phased out for existing users within a year.
These changes are part of Microsoft’s push to tighten control over its artificial intelligence products. After a two-year review, the company published its “Responsible AI Standard,” a 27-page document that outlines requirements for A.I. systems to ensure they do not have a harmful impact on society.
These requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”
Technology that could be used to make decisions about an individual’s access to education, financial services or employment is reviewed by a team led by Natasha Crampton, Microsoft’s chief responsible A.I. officer.
Microsoft was particularly concerned about its emotion recognition tool, which labels someone’s expression as anger, contempt, disgust, fear or happiness.
Ms. Crampton said there is a great deal of cultural, geographic and individual variation in how people express themselves. That raised reliability concerns, she said, along with the larger question of whether facial expression is a reliable indicator of a person’s inner emotional state.