Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’

Enspirers | Editorial Board

For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.

Acknowledging some of those criticisms, Microsoft said on Tuesday that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week, and will be phased out for existing users within the year.

The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that sets out requirements for A.I. systems to ensure they are not going to have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible A.I. officer.

There were heightened concerns at Microsoft around the emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There’s a huge amount of cultural and geographic and individual variation in the way in which we express ourselves,” Ms. Crampton said. That led to reliability concerns, along with the bigger questions of whether “facial expression is a reliable indicator of your internal emotional state,” she said.
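
For readers unfamiliar with the product, these labels were exposed as face attributes in Microsoft's Face API. A minimal sketch of how a caller might have requested them with the older azure-cognitiveservices-vision-face Python SDK follows; the key, endpoint and image URL are placeholders, and the attributes shown are the ones being retired:

    # Sketch only: requests the emotion, age and gender attributes that
    # Microsoft is phasing out. KEY, ENDPOINT and IMAGE_URL are placeholders.
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials

    KEY = "<subscription-key>"
    ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com/"
    IMAGE_URL = "https://example.com/photo.jpg"

    client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    faces = client.face.detect_with_url(
        url=IMAGE_URL,
        return_face_attributes=["emotion", "age", "gender"],
    )
    for face in faces:
        attrs = face.face_attributes
        # attrs.emotion carries one confidence score per label: anger,
        # contempt, disgust, fear, happiness, neutral, sadness, surprise.
        print(attrs.age, attrs.gender, attrs.emotion.as_dict())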

The age and gender analysis tools being eliminated — along with other tools to detect facial attributes such as hair and smile — could be useful to interpret visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, “and that’s not consistent with our values.”

Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.
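
The underlying check is a one-to-one verification call: detect a face in each of two images, then ask the service whether the two detections are the same person. A minimal sketch, reusing the `client` object from the snippet above, with two placeholder image URLs:

    # Sketch of an identity check with the Face API's verification
    # endpoint. Both URLs are placeholders.
    id_faces = client.face.detect_with_url(url="https://example.com/id.jpg")
    selfie_faces = client.face.detect_with_url(url="https://example.com/selfie.jpg")

    result = client.face.verify_face_to_face(
        face_id1=id_faces[0].face_id,
        face_id2=selfie_faces[0].face_id,
    )
    # result.is_identical is a boolean; result.confidence is a 0-1 score.
    print(result.is_identical, result.confidence)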

Users will also be required to apply and explain how they will use other potentially abusive A.I. systems, such as Custom Neural Voice. The service can generate a human voice print, based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they don’t speak.

Because of the possible misuse of the tool — to create the impression that people have said things they haven’t — speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We’re taking concrete steps to live up to our A.I. principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical A.I. group in 2018. “It’s going to be a huge journey.”

Microsoft, like other technology companies, has had stumbles with its artificially intelligent products. In 2016, it released a chatbot on Twitter, called Tay, that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spouting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers discovered that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch but misidentified 15 percent of words for white people, compared with 27 percent for Black people.
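
Figures like these are word error rates: the number of word substitutions, insertions and deletions needed to turn the system's transcript into the reference transcript, divided by the length of the reference. An illustrative sketch of the computation (not code from Microsoft or the study):

    # Word error rate via edit distance, normalized by reference length.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution
        return dp[len(ref)][len(hyp)] / len(ref)

    # Two substitutions in a six-word reference: a 33 percent error rate.
    print(word_error_rate("the cat sat on the mat", "the cat sat on a hat"))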

The company had collected diverse speech data to train its A.I. system but hadn't understood just how diverse language could be, so it hired a sociolinguistics expert from the University of Washington to explain the language varieties that Microsoft needed to account for. That variation went beyond demographics and regional dialect to how people speak in formal and informal settings.

“Thinking about race as a determining factor of how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we’ve learned in consultation with the expert is that actually a huge range of factors affect linguistic variety.”

Ms. Crampton said the journey to fix that speech-to-text disparity had helped inform the guidance set out in the company’s new standards.

“This is a critical norm-setting period for A.I.,” she said, pointing to Europe’s proposed regulations setting rules and limits on the use of artificial intelligence. “We hope to be able to use our standard to try and contribute to the vibrant, necessary discussion that needs to be had about the standards that technology companies should be held to.”

A vibrant debate about the potential harms of A.I. has been underway for years in the technology community, fueled by mistakes and errors that have real consequences on people’s lives, such as algorithms that determine whether or not people get welfare benefits. Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

Automated software for recognizing and analyzing faces has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests after the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by the police in the United States, saying clearer laws on its use were needed.

Since then, Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight of police use of facial recognition tools.

Ms. Crampton said Microsoft had considered whether to start making its software available to the police in states with laws on the books but had decided, for now, not to do so. She said that could change as the legal landscape changed.

Arvind Narayanan, a Princeton computer science professor and prominent A.I. expert, said companies might be stepping back from technologies that analyze the face because they were “more visceral, as opposed to various other kinds of A.I. that might be dubious but that we don’t necessarily feel in our bones.”

Companies also may realize that, at least for the moment, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they were a “cash cow.”
