Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had become sentient after he exchanged thousands of messages with it.
Google confirmed that it first placed the engineer on leave in June. The company said it dismissed Lemoine’s “baseless” claims only after reviewing them extensively. He reportedly worked at Alphabet for seven years. In a statement, Google said it takes AI development “seriously” and is committed to “responsible innovation.”
Google is one of the leaders in AI innovation, and that work includes LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns and predicting word sequences from large swaths of text, and the results can be unsettling to humans.
LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
But the broader AI community holds that LaMDA is nowhere near that level of consciousness.
This isn’t the first time Google has faced an internal struggle over its foray into artificial intelligence.
“It is regrettable that despite lengthy engagement on this topic, Blake chose to persistently violate clear employment and data security policies, which include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.