Google Fires Engineer for Claiming AI is Sentient

June 13, 2022

From The New York Times:

For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from computing sentience.

As these algorithms get cleverer they may pass the Turing Test, but whether they are conscious or have a soul is a much deeper question than whether they can convince a single person of it. Or as Qui-Gon Jinn once quipped in Star Wars, “the ability to speak does not make you intelligent.”
