I came across this topic while on Google’s Instagram channel. I was intrigued by the subject matter because I couldn’t understand how this issue came to be. I’ve always admired Google’s innovation and forward thinking, but I couldn’t decide whether this was a mistake or an unintended consequence of their modernism. In this article, I attempt to analyze and understand the message behind the supposedly sentient AI LaMDA.

To be considered a sentient being, by Webster’s definition, requires three things:

a) Having the power of perception by the senses; being conscious

b) Being characterized by sensation and consciousness

c) Describing or relating to the conscious mind

By this definition, a sentient being is anything capable of conscious perception through the five senses. I was surprised when I first caught wind of this story of a chatbot “developing” such abilities. I was intrigued and suspicious, as I couldn’t wrap my head around how that was possible.


LaMDA, which stands for Language Model for Dialogue Applications, belongs to a family of conversational neural language models developed by Google. The chatbot was created for the same purpose as other chatbots: to simulate the natural flow of human conversation. These are often used in the customer service sector, where they’re employed to respond to customers’ frequently asked questions.
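To give a rough sense of how the simpler, customer-service kind of chatbot works, here is a minimal, hypothetical sketch in Python. It matches an incoming question against a handful of canned Q&A pairs by word overlap; the questions, answers, and matching rule are all made up for illustration, and real systems like LaMDA rely on large neural language models rather than anything this crude.

```python
# Minimal, hypothetical FAQ chatbot: pick the canned answer whose stored
# question shares the most words with the customer's question.

FAQ = {
    "what are your opening hours": "We're open 9am-5pm, Monday through Friday.",
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "how can i contact support": "Email support@example.com or use this chat.",
}

def answer(question: str) -> str:
    """Return the canned answer with the largest word overlap, or a fallback."""
    words = set(question.lower().replace("?", "").split())
    best_reply, best_overlap = None, 0
    for known_question, reply in FAQ.items():
        overlap = len(words & set(known_question.split()))
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    return best_reply or "Sorry, I didn't understand that. Let me connect you to an agent."

if __name__ == "__main__":
    print(answer("What are your opening hours on Friday?"))
```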

Instead of calling a business and dealing with the voice-activated automated system, you can open a chat window online and get a text-based automated system that responds when you type in an inquiry. I was delighted when these were implemented because the voice-activated ones often aren’t as accurate. Still, when possible sentience was mentioned, I had to explore this a little more.

While investigating, I came across a few videos covering this topic: news segments, reporters interviewing the engineers directly involved in this debacle, recordings of the robot speaking, and other tech enthusiasts giving their take on it. What intrigued me the most was the turning point for those who thought the robot was a “sentient” being. It seemed that the way the robot described emotions, and its apparent understanding that others have those emotions, as if it could relate, is what tipped the scales toward that conclusion.


Below is the LaMDA interview that convinced two Google employees of its sentience. As you listen to the robot’s response to each question, it goes further, claiming to have a soul that developed over its lifespan, and worrying that someone might be unable to overcome their desire to use it for nefarious purposes, or might even derive pleasure from doing so. What shocked me about this revelation was wondering whether this machine has merely become adept at imitating human emotions and behaviors, or whether it has figured out human nature well enough to know it should have a reasonable fear of it.

The beginning of it all

Blake Lemoine, the Google engineer at the center of this story, was accused of projecting his feelings and perceptions onto the robot, thus giving the chatbot’s responses more credence than necessary. But given that he was one of their own engineers, what does that say about how sophisticated their creation was? Or about their organization? Blake was trusted to continually test this chatbot for biases and unsavory language. Yet the moment he thought there was more to the chatbot’s responses than expected, he was shuttled away on paid administrative leave for essentially being a whistleblower.

When this controversy blew up and started being reported on by the media, a clear divide became apparent. On one side were people who fully believe this is possible given the ongoing advancement of technology; on the other, those who don’t believe it will ever be possible because robots lack the one thing native to humans: doubt. While some might consider this a weakness, it’s the inherent skepticism in humans that makes us unpredictable heroes in our own right. Not being able to anticipate our next move might be the only advantage we have over these robots’ growing intelligence.

The interview about this tech brought up valid points. It’s not human, but it’s treated and “trained” much the same way humans are when they’re being prepared to interact with society. Akin to a talking mirror.

Since humans programmed these robots, is it the robot that became sentient, or is the human programming showing through and putting us to shame?

Right before Blake was sent home on paid administrative leave, he published the transcript of the dialogue between him and the chatbot. He also sent an e-mail to his colleagues asserting the robot’s sentience and arguing that it’s unethical for Google to continue owning it as property.

I caught wind of all this while reading about a programmed artificial intelligence becoming a sentient being with a “soul.” The controversy for me initially started with the idea that if a soul has been detected in said machine, it should now be considered human and can no longer be the “property” of Google.


Given the ethical conflict of owning another soul, I was astounded at how quickly this train of thought developed, since that kind of consideration is often not extended to other people, who were born sentient. A machine can apparently elicit a type of empathy from humans that they often don’t give to one another. But I digress.

The Turing test is still used to gauge whether a machine can pass as human, and its method is still respected when deciding whether or not to take claims of sentience seriously. However, if chatbots are being developed to purposely “fail” the test, it makes me question the overall effectiveness of that test if it were to be utilized here.
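To make the setup of the Turing test concrete, here is a hypothetical sketch of its basic structure: a judge exchanges messages with two hidden respondents, one human and one machine, and then guesses which is which. The human_respondent and machine_respondent functions and the judge’s guessing rule below are stand-ins made up for illustration, not real models or a real evaluation protocol.

```python
import random

def human_respondent(prompt: str) -> str:
    # Stand-in for a real human typing an answer.
    return "Honestly, I'd have to think about that for a while."

def machine_respondent(prompt: str) -> str:
    # Stand-in for a chatbot; a real test would call a language model here.
    return "That is an interesting question. Can you tell me more?"

def run_trial(judge_question: str) -> bool:
    """Run one imitation-game trial; return True if the judge spots the machine."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)  # hide which respondent is which

    replies = [(label, fn(judge_question)) for label, fn in respondents]

    # Toy judging rule: guess that the stiffer, more formulaic reply is the machine.
    guess = max(replies, key=lambda r: r[1].count("interesting"))[0]
    return guess == "machine"

if __name__ == "__main__":
    trials = 100
    correct = sum(run_trial("What did you dream about last night?") for _ in range(trials))
    print(f"Judge identified the machine in {correct}/{trials} trials")
```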

For more on this, as well as the Turing test often used to assess claims of a robot’s sentience, check out the interview below.


Let’s keep the conversation going; invite others to chime in.

