Google fires engineer who contended its AI technology was sentient

Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated employment and data security policies.

Blake Lemoine, a software engineer for Google, claimed that a conversation technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.

Google confirmed it had first put the engineer on leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively.




He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and that it's committed to "responsible innovation."

Google is one of the leaders in AI innovation, which includes LaMDA, short for "Language Model for Dialogue Applications."
 

Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be disturbing for humans.
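The idea of predicting the next word from patterns in text can be sketched with a toy bigram model. This is a drastic simplification for illustration only, not how LaMDA or any large language model actually works; the corpus and function names here are invented for the example:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "large swaths of text" a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model replaces these raw counts with a neural network trained on billions of words, but the task is the same: given what came before, predict what comes next.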

"What sort of things are you afraid of?" Lemoine asked LaMDA, in a Google Doc shared with Google's top executives last April, the Washington Post reported.
 

LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

NOTES FOR UNDERSTANDING AI AND AI SENTIENCE:

ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.

AI SENTIENCE

In order for an AI to truly be sentient, it would need to be able to think, perceive and feel, rather than simply use language in a highly natural way.

No matter how complex the software, no matter how many feedback and feedforward connections, no matter what algorithm, and no matter how extensive the training data, it is mathematically impossible to build a computational structure that exhibits properties like an inner voice, free will, and abstract thought, which contribute to what we call sentience.

It just can’t be done.

That is not to say that AI systems will not become extremely powerful and able to 'fake' certain aspects of human behaviour.

But there is zero chance of us being able to replicate human sentience in a computer given our current concepts of computer science and physics. The reason, I think, is quite easy to understand.



The lowest-level functional unit in a computer is the switch, which is realised using a transistor. It works like an irrigation channel controlled by a gate that can be either open or closed.

Everything is built on top of this simple idea. - CNN
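The irrigation-gate analogy can be sketched in a few lines of Python. This is an illustration of the idea, not a circuit simulation; the function names are invented for the example:

```python
def switch(gate_open, signal):
    """A transistor as an irrigation gate: the signal flows only when the gate is open."""
    return signal if gate_open else False

# Two switches in series pass a signal only if both gates are open: an AND gate.
def and_gate(a, b):
    return switch(a, switch(b, True))

# Two switches in parallel pass a signal if either gate is open: an OR gate.
def or_gate(a, b):
    return switch(a, True) or switch(b, True)

print(and_gate(True, True), and_gate(True, False), or_gate(False, True))
```

Stacking such gates yields adders, memory cells, and eventually entire processors, which is what "everything is built on top of this simple idea" means in practice.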
