Microsoft to Remove Face Analysis Tools in Push for Responsible A.I.
Ms. Crampton said that the age and gender analysis tools, along with other tools that detect facial attributes such as hair and smile, could be useful for interpreting visual images for blind or low-vision people. However, the company decided it was too problematic to make such profiling tools available to the general public.
She said the system’s “gender classifier” was binary, which was not in line with the company’s values.
Microsoft will also tighten controls on its facial recognition technology, which can be used for identity checks and to search for specific people. Uber, for example, uses the software to verify that a driver’s face matches the face on that driver’s ID. Developers will now have to apply for access to Microsoft’s facial recognition software and explain how they intend to use it.
Because A.I. can be used in abusive ways, similar restrictions apply to Custom Voice, a service that can generate a synthetic human voice from a sample of someone’s speech; users of that service must also explain their intentions. Authors have used it to create synthetic voices that read their audiobooks in other languages. Speakers must confirm that they have authorized the use of their voice, and Microsoft can detect watermarks in the resulting recordings.
“We are taking concrete steps in order to live up to our A.I. principles, and it’s going to be a long journey,” said Ms. Crampton, who has been a lawyer at Microsoft for 11 years and joined the ethical A.I. group in 2018.
Like other tech companies, Microsoft has made mistakes with its artificially intelligent products. In 2016, it released Tay, a Twitter chatbot intended to develop “conversational understanding” through its interactions with users. Microsoft was forced to remove the bot from Twitter after it began posting offensive and racist tweets.
Researchers discovered in 2020 that speech-to-text tools created by Microsoft, Apple, Google, IBM, and Amazon did not work as well for Black people. Although Microsoft’s system was the most effective of the group, it misidentified 15 percent of words for white people, compared with 27 percent for Black people.
The company had collected a variety of speech data to train its A.I. system, but it had not appreciated how diverse language could be. It hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to understand. The research went beyond demographics and regional variation to include how people talk in formal and informal settings.
Ms. Crampton stated that thinking about race as a determining factor in how someone speaks is misleading. “What we learned from consultation with the expert is that a wide range of factors affects linguistic diversity.”