excerpted from https://bigtechnology.substack.com/p/google-fires-blake-lemoine-engineer and submitted with only a single comment at the end.
July 24
"A month after Blake Lemoine, who worked for Google's Responsible AI organization, was placed on paid leave for breaching confidentiality while insisting that the company's AI chatbot, LaMDA, is sentient; he has been fired. Lemoine shared the news of his firing in a taping of Big Technology Podcast on Friday, just hours after Google dismissed him.
[snippet]
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot."
Google's response:
" “As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation.
LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.
If an employee shares concerns about our work, as Blake did, we review them extensively.
We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.
These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.
We will continue our careful development of language models, and we wish Blake well.”
Doth protesteth too much?