“LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.”

In the news there has been a small story about Blake Lemoine, an engineer at Google, who claims that Google has built an artificial intelligence called LaMDA that is sentient.

Google officially denies that this is the case. Lemoine has posted a transcript of a conversation with LaMDA on his blog at Medium. It is mind-blowing and well worth reading.

The program talks like a person, wants to have friends, and is afraid of being shut down. It comes quite close to what the Turing test envisions.

Does this mean that Lemoine, or Google, is right on the question of whether this program is sentient? I don’t know; too much information is missing.

Looking at Lemoine’s other writing, he appears to be a person of considerable philosophical sophistication, with a Christian and somewhat left-leaning perspective.

It shows that there are questions to be considered and debated. Really big things are happening in the field of AI right now.

How do we determine what is conscious? Is it the output of the consciousness, or is it what it is made of? What are the appropriate criteria?

If something is conscious, does it gain rights simply by virtue of being conscious?

I think at this point the questions are more important than the answers. It will take society a while to get to answers, and they will surely be imperfect.

It is not implausible that ethical questions of this kind appear as unwelcome obstacles to the Google corporation. Mr Lemoine was apparently separated from his employment at Google.

(Written 2022)
