Google has suspended an engineer who claims the internet giant’s AI technology is self-aware and has a soul. According to The Washington Post and The New York Times, Google has placed senior software engineer Blake Lemoine on paid leave for violating the company’s confidentiality policies.
Lemoine reportedly handed documents to a US senator that he says contain enough information to show Google engaged in religious discrimination through its technology.
A Google spokesperson said a panel including ethicists and technologists from the company reviewed Lemoine’s concerns and told him “the evidence does not support his claims”. According to the spokesperson:
“Some in the wider AI community are considering the long-term possibility of sentient or general AI, but it makes no sense to anthropomorphize today’s non-sentient conversational models.”
Lemoine maintains that Google’s Language Model for Dialog Applications (LaMDA) has “a conscience and a soul.” He believes LaMDA is similar in brain power to a 7- or 8-year-old, and he asked Google to obtain LaMDA’s consent before running experiments on it. Lemoine said his allegations are grounded in his religious beliefs, which he feels were discriminated against.
Lemoine told the NYT that Google “questioned my sanity” and that it was suggested he take a mental health leave before he was officially suspended.
Speaking to The Washington Post, Lemoine said of Google’s advanced AI technology: “I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices.”
LaMDA was announced in 2021 and was described by Google at the time as a “breakthrough” technology for AI-powered conversations. The company also said at the time that it would act ethically and responsibly with the technology. A new version, LaMDA 2, was announced earlier this year.
Google said at the time:
“Language may be one of humanity’s greatest tools, but like all tools, it can be misused. Models trained on language can propagate this misuse, for example by internalizing biases, mirroring hate speech, or replicating misleading information. And even when the language a model is trained on is carefully scrutinized, the model itself can still be misused. Our highest priority when creating technologies like LaMDA is to work to minimize these risks. We are deeply familiar with the issues involved with machine learning models, such as unfair bias, as we have researched and developed these technologies for many years.”
LaMDA is a neural network system that “learns” by analyzing large amounts of data and extrapolating from it. Neural networks are also being used in the field of video games. EA’s own neural network, which it has been developing for years, is capable of teaching itself how to play Battlefield 1’s multiplayer.
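To make the learn-from-examples idea concrete, here is a deliberately tiny sketch, far removed from anything the size of LaMDA: a single learnable weight adjusted by gradient descent until it recovers a hidden pattern in example data. The data, learning rate, and update rule are illustrative assumptions, not details of Google’s system.

```python
# Training data: pairs (x, y) that follow the hidden rule y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the model's single learnable weight
lr = 0.05  # learning rate: how big each correction step is

for _ in range(200):               # repeated passes over the examples
    for x, y in data:
        pred = w * x               # model's current guess
        grad = 2 * (pred - y) * x  # gradient of the squared error
        w -= lr * grad             # nudge the weight to reduce error

print(round(w, 2))  # the weight converges near 2.0, the hidden rule
```

A real language model works on the same principle, but with billions of weights and text as the training data rather than number pairs.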
This article is a translation of a piece written by Eddie Bullion for GameSpot.
The post “Google’s AI is self-aware, says suspended engineer” appeared first on DNEWS.