Blake Lemoine is convinced that Google’s LaMDA AI has the mind of a child, but the tech giant is skeptical
Blake Lemoine, an engineer and Google’s in-house ethicist, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He has since been placed on leave for going public, and the company insists its AI has not developed consciousness.
Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates our speech. Google envisions the system powering its chatbots, enabling users to search by voice or have a two-way conversation with Google Assistant.
Lemoine, a former priest and member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, he chatted with LaMDA about religion and found the AI “talking about its rights and personhood.”
When Lemoine asked LaMDA whether it saw itself as a “mechanical slave,” the AI responded with a discussion about whether “a butler is a slave,” and compared itself to a butler who does not need payment, as it has no use for money.
There's currently no reason to believe AI will ever become conscious. Current science can't explain or understand consciousness and we don't know that it ever will. No matter how "smart" it gets, there's no reason to believe AI will ever experience itself as a perceiving subject.
— Caitlin Johnstone (@caitoz) June 13, 2022