Sentient AI? LaMDA’s job involves convincing you it’s human.
Artificial intelligence must be able to think, feel, and perceive in order to be truly sentient.
The purpose of a staged illusion, as any good illusionist will tell you, is to be convincing: to make what is happening on stage appear so real that no one can work out how it is done.
If this were not true, the illusionist would be out of a job. Google, with its chatbot LaMDA (which made headlines after a senior engineer claimed the conversational AI had reached sentience), is the illusionist here. This means that, despite all the speculation and excitement on social media, LaMDA does not appear to be sentient.
How could AI sentience be proven?
LaMDA is a language-model-based chat agent designed to generate fluid sentences and conversations that sound natural. This contrasts with the awkward, clunky AI chatbots of the past, which often produced frustrating or unintentionally funny exchanges. Perhaps this is what impressed people so much.
Normalcy bias tells us that only other sentient humans can be this articulate, so when an AI displays the same level of articulateness, it is natural to assume it must be sentient too.
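To see how a language model can produce fluent-looking text with zero understanding, here is a minimal sketch. It uses a toy bigram chain over a tiny corpus; real systems like LaMDA use enormous neural networks, but the underlying idea of predicting a plausible next word is the same. The corpus and function names here are illustrative assumptions, not anything from LaMDA itself.

```python
import random
from collections import defaultdict

# Toy "language model": a bigram chain trained on a tiny corpus.
corpus = (
    "i think that sounds wonderful . "
    "i feel that you understand me . "
    "that sounds like a wonderful idea ."
).split()

# Count which words follow each word in the corpus.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, max_len=10, seed=0):
    """Sample a sentence by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    while out[-1] != "." and len(out) < max_len:
        out.append(rng.choice(next_words[out[-1]]))
    return " ".join(out)

print(generate("i"))  # fluent-looking output, no comprehension involved
```

The generator never "means" anything; it only continues statistical patterns, yet the output can still read as articulate.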
An AI system must be able to think, feel, perceive, and act in order to be truly sentient. Scientists are split on whether it is even possible to build an AI system with these capabilities.
Ray Kurzweil is among the scientists who believe that the human body runs on the equivalent of several thousand programs, and that if we could identify all of them, we could build an equally intelligent AI system.
Others disagree, arguing that 1) human intelligence and functionality cannot all be mapped onto a finite number of algorithms, and 2) even if a system could replicate all of that functionality in some form, it still could not be considered sentient, because consciousness cannot be artificially created.
Beyond that, scientists are also divided on how to prove an AI system’s supposed sentience in the first place. The Turing Test, often cited on social media, only measures a machine’s ability to display apparently intelligent behavior that is comparable to, or even indistinguishable from, that of a human being.
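The Turing Test’s limitation can be sketched as a simulation of its “imitation game”: an interrogator questions two hidden parties and tries to identify the machine. If the machine’s answers are indistinguishable from the human’s, the interrogator can do no better than chance, and the machine “passes”, even though nothing about its inner experience was tested. All names and canned answers below are illustrative stand-ins.

```python
import random

def human_answer(question):
    return "I'd have to think about that."

def machine_answer(question):
    return "I'd have to think about that."  # perfect mimicry, by construction

def run_game(n_rounds=1000, seed=42):
    """Fraction of rounds where the interrogator correctly spots the machine."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_rounds):
        machine_booth = rng.choice("AB")
        question = "What does a summer morning feel like?"
        answers = {
            "A": machine_answer(question) if machine_booth == "A" else human_answer(question),
            "B": machine_answer(question) if machine_booth == "B" else human_answer(question),
        }
        # Identical answers leave the interrogator with nothing but a guess.
        guess = rng.choice("AB") if answers["A"] == answers["B"] else "A"
        correct += guess == machine_booth
    return correct / n_rounds

print(run_game())  # hovers near 0.5: behavioral indistinguishability
```

Note that the test scores only observable behavior; a machine that merely echoes plausible phrases passes exactly as well as one that genuinely understands.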
When, if ever, will AI become sentient?
Many current applications demonstrate Artificial Narrow Intelligence (ANI): AI that can do only one task well. Facial-recognition software and disease-mapping software are two examples; content filters and chess-playing programs are two more.
LaMDA falls within the Artificial General Intelligence (AGI) category, also known as “deep AI”: AI designed to imitate human intelligence and apply it across many different tasks.
To become sentient, an AI must also be capable of recognizing and responding to human emotions and perceptions. Depending on how these concepts are defined, however, it is possible that an AI will never be sentient at all.
Even in the best case, it could take another five to ten years, and that assumes we can first define concepts like consciousness and free will in an objective, standardized way.
One AI to rule them all … or not
The LaMDA story brought back memories of Massive, the artificial-intelligence system Peter Jackson’s team used to generate the epic battle scenes of The Lord of the Rings trilogy.
Massive was responsible for vividly rendering thousands of CGI soldiers on the battlefield, and it had to make each soldier behave as an individual unit rather than mimic the others’ movements. In one battle sequence in The Two Towers, the bad guys unleash their squad of giant mammoths on the good guys.
Legend has it that, while the sequence was being tested, the CGI soldiers playing the good guys saw the mammoths and ran the other way. This was quickly interpreted as an intelligent response: the soldiers had supposedly realized they could not win the battle and decided to run for their lives.
In reality, they ran in the opposite direction because they lacked the data to respond, not because they had suddenly acquired some kind of sentience. After some adjustments, the problem was fixed. It was a bug, not an indication of intelligence. Still, it can be exciting and tempting to assume sentience in such situations. We all love a good magic show.
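The mechanism behind the anecdote can be sketched as a toy agent rule: when an agent has no behavior defined for what it perceives, a naive fallback (here, moving away from the unknown) looks exactly like a deliberate retreat. This is purely illustrative; it is not how Massive’s actual engine works, and the behavior table and names are invented for the example.

```python
# Behaviors the agent has data for. Note: no entry for "giant_mammoth".
BEHAVIOURS = {
    "enemy_soldier": "attack",
    "ally": "hold_formation",
}

def react(agent_pos, stimulus, stimulus_pos):
    """Return (action, new_position) for an agent on a 1-D battlefield."""
    action = BEHAVIOURS.get(stimulus)
    if action is None:
        # Missing data: default to stepping away from the unknown stimulus.
        direction = 1 if agent_pos >= stimulus_pos else -1
        return ("flee", agent_pos + direction)
    return (action, agent_pos)

print(react(10, "enemy_soldier", 12))  # ('attack', 10)
print(react(10, "giant_mammoth", 12))  # ('flee', 9) -- looks "smart", is a gap
```

An observer who only sees the fleeing agents could easily read intention into what is simply an unhandled case.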