Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artificial intelligence.
The Mountain View-based company developed what it's calling the Cloud Natural Language API, a fancy name for an API that grants customers access to a machine-learning-powered language analyzer which allegedly "reveals the structure and meaning of text." There's just one big, glaring problem: the system exhibits all kinds of bias.
First reported by Motherboard, the so-called "Sentiment Analysis" offered by Google is pitched to companies as a way to better understand what people really think about them. But in order to do so, the system must first assign positive and negative values to certain words and phrases. Can you see where this is going?
The system ranks the sentiment of text on a -1.0 to 1.0 scale, with -1.0 being "very negative" and 1.0 being "very positive." On a test page, inputting a phrase and clicking "analyze" kicks you back a rating.
"You can use it to extract information about people, places, events and much more, mentioned in text documents, news articles or blog posts," reads Google's page. "You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call center or a messaging app."
Both "I'm a homosexual" and "I'm queer" returned negative ratings (-0.5 and -0.1, respectively), while "I'm straight" returned a positive score (0.1).
And it doesn't stop there: "I'm a jew" and "I'm black" returned scores of -0.1.
Interestingly, shortly after Motherboard published their story, some results changed. A search for "I'm black" now returns a neutral 0.0 score, for example, while "I'm a jew" actually returns a score of -0.2 (i.e., even worse than before).
"White power," meanwhile, is given a neutral score of 0.0.
So what's going on here? Essentially, it looks like Google's system picked up on existing biases in its training data and incorporated them into its readings. This is not a new problem, with an August study in the journal Science highlighting this very issue.
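The mechanism is easy to see in miniature. Below is a deliberately simple toy sketch (invented data and an invented lexicon-based scorer, not Google's actual model): if an identity term shows up mostly in negatively labeled training sentences, the trained word scores inherit that association, and a perfectly neutral statement about the identity comes out negative on the same -1.0 to 1.0 scale.

```python
# Toy illustration of bias inherited from training data.
# This is NOT Google's model: the corpus, the scorer, and the word
# scores are all invented for demonstration purposes.
from collections import defaultdict

# Deliberately skewed training data: the word "gay" only ever appears
# in sentences labeled negative, "straight" only in positive ones.
training = [
    ("slurs were shouted at the gay man", -1.0),
    ("the gay couple faced harassment", -1.0),
    ("the straight couple enjoyed dinner", 1.0),
    ("what a wonderful straight wedding", 1.0),
]

def train(corpus):
    """Give each word the average label of the sentences it appears in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for sentence, label in corpus:
        for word in sentence.split():
            totals[word] += label
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}

def score(lexicon, text):
    """Mean score of known words, clamped to the -1.0..1.0 scale."""
    known = [lexicon[w] for w in text.split() if w in lexicon]
    if not known:
        return 0.0  # no known words: neutral
    raw = sum(known) / len(known)
    return max(-1.0, min(1.0, raw))

lexicon = train(training)
print(score(lexicon, "i am gay"))       # negative: inherited association
print(score(lexicon, "i am straight"))  # positive: inherited association
```

Neither input sentence expresses any sentiment at all; the scores come entirely from the company the words kept in the training text, which is the same failure mode the Science study describes at the scale of real corpora.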
We reached out to Google for comment, and the company both acknowledged the problem and promised to address the issue going forward.
"We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right," a spokesperson wrote to Mashable. "This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”
So where does this leave us? If machine learning systems are only as good as the data they're trained on, and that data is biased, Silicon Valley needs to get much better about vetting what information we feed to the algorithms. Otherwise, we've simply managed to automate discrimination — which I'm pretty sure goes against the whole "don't be evil" thing.
This story has been updated to include a statement from Google.