
LaMDA, the machine that is like ‘a seven-year-old kid’: can a computer have consciousness?

A Google engineer believes he had a conversation with an artificial intelligence system capable of independent thought. Although the scientific community has scoffed at the idea, advances in AI will lead to ‘uncomfortable debates’ in the future

Artificial Intelligence
Manuel G. Pascual

If we were to hand Isaac Newton a smartphone, he would be completely captivated. He wouldn’t have the faintest idea how it worked and one of the greatest scientific minds in history would quite possibly start talking of witchery. He might even believe he was in the presence of a conscious being, if he came across the voice assistant function. That same parallel can be drawn today with some of the advances being made in artificial intelligence (AI), which has achieved such a level of sophistication that on occasion it can shake the very foundations of what we understand as conscious thought.

Blake Lemoine, a Google engineer working for the tech firm’s responsible AI department, appears to have fallen into this trap. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told The Washington Post in an interview published last week. The 41-year-old engineer was talking about LaMDA, a Google chatbot generator (a system for building chatbots: programs that hold written conversations online as if they were human). Last autumn, Lemoine took on the task of talking to the program to determine whether it used discriminatory or hate-inciting language. The conclusions he drew have shaken the scientific world: he believes that Google has succeeded in creating a conscious program, with the power of independent thought.

Is that possible? “Anyone who makes this sort of claim shows that they have never written a single line of code in their lives,” says Ramón López de Mántaras, director of the Artificial Intelligence Research Institute at the Spanish National Research Council. “Given the current state of this technology, it is completely impossible to have developed self-conscious artificial intelligence.”

However, that does not mean that LaMDA is not extremely sophisticated. The program uses a neural network architecture based on Transformer technology, loosely inspired by the workings of the human brain, to autocomplete written conversations, and it has been trained on billions of texts. As Google vice president and head of research Blaise Agüera y Arcas explained in a recent essay in The Economist, LaMDA draws on 137 billion parameters to decide which response is most likely to fit the question it has been asked. That allows it to formulate sentences that could pass as having been written by a person.
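To make the idea of “picking the most probable response” concrete, here is a minimal sketch of how a Transformer-based dialogue model produces a reply. LaMDA itself is not publicly available, so the example uses the small, openly released DialoGPT model as a stand-in; the prompt is invented and the mechanics are simplified, but the principle is the same: the model predicts likely next words, one after another, without understanding any of them.

```python
# Minimal sketch: a Transformer language model assigns probabilities to
# possible continuations of a conversation and samples a likely reply.
# DialoGPT is used here only as a publicly available stand-in for LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message, ending with the token the model expects.
prompt = "Do you ever feel lonely?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a continuation token by token, sampling from the model's
# probability distribution over next words.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens and turn them back into text.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

The output often reads as fluent, even personable, which is precisely why such systems can give the impression of a mind behind the words.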

But as López de Mántaras points out, although it may be able to write as though it were human, LaMDA doesn’t know what it is saying: “None of these systems have semantic understanding. They don’t understand the conversation. They are like digital parrots. We are the ones who give meaning to the text.”

Agüera y Arcas’ essay, which was published just a few days before Lemoine’s interview in The Washington Post, also highlights the incredible precision with which LaMDA is able to formulate responses, but the Google executive offers a different explanation. “AI is entering a new era. When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect,” he wrote. LaMDA is a system that has made impressive advances, he says, but there is a world of difference between that and consciousness. “Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.”

Researchers Timnit Gebru and Margaret Mitchell, who headed Google’s Ethical AI team, warned in 2020 that something similar to Lemoine’s experience would occur, and co-signed an internal report that led to them being fired. As they recapped in an opinion piece in The Washington Post last week, the report underlined the risk that people might “impute communicative intent to things that seem humanlike,” and that these programs “can lead people into perceiving a ‘mind’ when what they’re really seeing is pattern matching and string prediction.” In Gebru and Mitchell’s view, the underlying problem is that because these tools are fed millions of unfiltered texts taken from the internet, they can reproduce sexist, racist or discriminatory language in their responses.

Is AI becoming sentient?

What led Lemoine to be seduced by LaMDA? How did he come to the conclusion that the chatbot he was conversing with is sentient? “There are three layers that converge in Blake’s story: one of these is his observations, another is his religious beliefs and the third is his mental state,” a noted Google engineer who has worked extensively with Lemoine told EL PAÍS, under the condition of anonymity. “I think of Blake as a clever guy but it is true that he hasn’t had any training in machine learning. He doesn’t understand how LaMDA works. I think he got carried away by his ideas.”

Lemoine, who was placed on paid administrative leave by Google for violating the company’s confidentiality policy after going public, defines himself as an “agnostic Christian” and a member of the Church of the SubGenius, a post-modern parody religion. “You could say that Blake is a bit of a character. It is not the first time he has attracted attention within the company. To be honest, I would say that in another company he would have been fired long ago,” says his colleague, who is unhappy about the way the media is exploiting Lemoine. “Beyond the silliness, I’m glad the debate has come to light. Of course, LaMDA is not conscious, but it is evident that AI will keep becoming more capable, and we will have to rethink our relationship with it.”

Part of the controversy surrounding the debate has to do with the ambiguity of the terminology employed. “We are talking about something that we have not yet been able to agree on. We don’t know exactly what constitutes intelligence, consciousness and feelings, nor whether all three elements need to be present for an entity to be self-aware. We know how to differentiate between them, but we don’t know how to precisely define them,” says Lorena Jaume-Palasí, an expert in the ethics and philosophy of law as applied to technology, who advises the Spanish government and the European Parliament on matters related to AI.

Anthropomorphizing computers is very human behavior. “We do it all the time with everything. We even see faces in clouds or mountains,” says Jaume-Palasí. When it comes to computers, we also draw on Europe’s rationalist heritage. “In line with the Cartesian tradition, we tend to think that we can delegate thought or rationality to machines. We believe that the rational individual stands above nature and can dominate it,” says the philosopher. “It seems to me that the discussion as to whether an AI system is conscious or not is steeped in a tradition of thought in which we try to extrapolate onto technologies characteristics that they do not have and cannot have.”

The Turing Test has been outdated for some time now. Formulated in 1950 by the famous mathematician Alan Turing, the test consists of asking a machine and a human a series of questions; it is passed if the interlocutor is unable to determine whether it is the person or the computer that is answering. Others have been put forward more recently, such as the 2014 Winograd schema challenge, which relies on commonsense reasoning and world knowledge to answer the questions satisfactorily (see the example below). To date, nobody has developed a system able to beat it. “There may be AI systems that are able to trick the judges asking questions. But that does not prove that a machine is intelligent, only that it has been well-programmed to deceive,” says López de Mántaras.
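To see why this kind of test resists mere pattern matching, consider the classic schema below, a small illustrative sketch based on Terry Winograd’s original example rather than an item from the official 2014 challenge. The two sentences differ by a single word, yet the pronoun “they” refers to a different group in each, and answering correctly requires everyday knowledge about permits and violence rather than statistics over word strings.

```python
# A classic Winograd schema (Winograd's original example). Changing one word
# flips the referent of "they", so a correct answer requires commonsense
# knowledge, not just statistical pattern matching over text.
schemas = [
    {
        "sentence": ("The city councilmen refused the demonstrators a permit "
                     "because they feared violence."),
        "question": "Who feared violence?",
        "answer": "the city councilmen",
    },
    {
        "sentence": ("The city councilmen refused the demonstrators a permit "
                     "because they advocated violence."),
        "question": "Who advocated violence?",
        "answer": "the demonstrators",
    },
]

# Print each sentence with the question a system would have to answer.
for item in schemas:
    print(item["sentence"])
    print(f'  Q: {item["question"]}  A: {item["answer"]}')
```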

Will we one day witness general artificial intelligence? That is to say, AI that equals or even exceeds the human mind, that understands context, that is able to connect elements and anticipate situations as people do? This question has long been a subject of speculation within the industry. The consensus of the scientific community is that if such artificial intelligence ever does come into being, it is very unlikely to do so before the end of the 21st century. It is likely, though, that the constant advances being made in AI development will lead to more reactions like that of Blake Lemoine (although not necessarily ones as histrionic). “We should be prepared to have debates that will often be uncomfortable,” says his former Google colleague.

