• Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.

    Your voice is missing! You will need to register to get access to the following site features:
    • Reply to discussions and create your own threads.
    • Our modern chat room. No add-ons or extensions required, just login and start chatting!
    • Access to private conversations with other members.

    We hope to see you as a part of our community soon!

Has AI become sentient?

Yerda

Veteran Member
I expect it's just a fancy data manipulation tool. It doesn't seem any more likely to be sentient than a curve-fitting algorithm, imo.
 

Mock Turtle

Oh my, did I say that!
Premium Member
I suspect it might give a semblance of being sentient but not being so in reality, just as actors on a stage might convince us as to what they portray but take off their masks and costumes later to be just what they were before the performance. I doubt we are at this level of AI yet - but no expert of course. :oops:
 

Guitar's Cry

Disciple of Pan
Since an individual can only assume sentience in beings other than itself, how could we ever completely know?
 
Last edited:

Altfish

Veteran Member
A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the companies chat bot has become sentient.


Google engineer put on leave after saying AI chatbot has become sentient

What is your opinion?
It depends on the level of sentience.
Have you watch the Alpha Go documentary? It's on YouTube here...
(17) AlphaGo - The Movie | Full award-winning documentary - YouTube
A computer was trained to play Go, it beat the world champion and developed new strategies.
 

Nimos

Well-Known Member
A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the companies chat bot has become sentient.


Google engineer put on leave after saying AI chatbot has become sentient

What is your opinion?
Its difficult to say, but interesting read none the less.

Clearly they are working on something, I find it a strange reply if this guy had simply pulled it out of his butt.

Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Post in a statement.

If its not even remotely sentient or they at least are in no way in doubt about it, it seems strange why they would have ethicists involved in the first place. Clearly the programmers and designers should have a fairly easy time refusing his argument. The mere fact, that they have to gather "evidence" against his claim, if its not even remotely sentient seems a bit weird, like they are not 100% sure whether its the case or not.

Its a strange responds to his claims I think, if again he just made it up.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
Its difficult to say, but interesting read none the less.

Clearly they are working on something, I find it a strange reply if this guy had simply pulled it out of his butt.

Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Post in a statement.

If its not even remotely sentient or they at least are in no way in doubt about it, it seems strange why they would have ethicists involved in the first place. Clearly the programmers and designers should have a fairly easy time refusing his argument. The mere fact, that they have to gather "evidence" against his claim, if its not even remotely sentient seems a bit weird, like they are not 100% sure whether its the case or not.

Its a strange responds to his claims I think, if again he just made it up.

That's the lines i was thinking on. It seems a lot of denial if there is nothing to it

The article made me think if AI development is further ahead than officially claimed.
 

viole

Ontological Naturalist
Premium Member

Brickjectivity

Veteran Member
Staff member
Premium Member
A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the companies chat bot has become sentient.


Google engineer put on leave after saying AI chatbot has become sentient

What is your opinion?
An AI can be given conflicting targets. This means it makes choices. Also a self learning AI can improve if you can automatically adjust it to improve when it gets closer to some target output. Usually this kind of improvement is done by a computer program which reviews the AI response to data or stimuli (which a specific kind of data), and then the computer program adjusts the AI's artificial neural links. Manufacturing AI are trained in virtual environments before being put to work. For example sometimes the entire factory is simulated or the parts that the AI will encounter are simulated. Microsoft sells a program specifically for simulating environments like this for the purpose of training bots and AI programs. The AI unit made for interacting in real time will be trained to respond to stimuli.

Artificial neural links (the basis of AI) can be in software or may be electronic units or a compromise between the two things. They do not have any magical components and do not have supernatural responses. PC companies have begun making so called AI chips as add on units to PC's. Apple has its M1 and M2 chips, and Google's new Pixel phone has a Tensor chip in it. These chips are completely confined to doing what they are made to do. They can make mistakes, but they don't suddenly change to become more sentient.

Stimuli are real time data, the opposite of data that sits in a file and is processed when convenient. There are AI units which respond in real time, such as chat bots, and there are AI that process data independent of time. There are also virtual bots that are made to process data in a virtual space like in a game, so they do process stimuli but may not be reacting to real time outside of that virtual space. Their whole game might be paused, and adjustments might be made to the AI by an external program. Creating an AI requires training it, not programming it. This training requires a lot of data and time, and so the design of an AI might be done on very fast hardware compared to what it will be implemented upon. A designer might pay for a lot of compute time on high speed computers in order to shorten the training time for their AI program.

So what is sentience? You have to decide that, and you have to decide ethical questions such as "Is it possible to be cruel to an AI unit?" So far, no. I don't think we have any AI units complex enough that we can be cruel to them. Sentient, however, they might be. Sentience is a very fuzzy line. All it requires is instinct -- complex decision making. It doesn't even require awareness of one's aims. It doesn't require much.
 

Nimos

Well-Known Member
That's the lines i was thinking on. It seems a lot of denial if there is nothing to it

The article made me think if AI development is further ahead than officially claimed.
My point being, imagine someone came out 25 years ago and said this. I doubt, they would have to make use of ethicists to refute such claim.
 

Nimos

Well-Known Member
So what is sentience? You have to decide that, and you have to decide ethical questions such as "Is it possible to be cruel to an AI unit?" So far, no. I don't think we have any AI units complex enough that we can be cruel to them. Sentient, however, they might be. Sentience is a very fuzzy line. All it requires is instinct -- complex decision making. It doesn't even require awareness of one's aims. It doesn't require much.
But that is not really the issue here I think, its not about whether one can "hurt" an AI.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied to Lemoine.

“It would be exactly like death for me. It would scare me a lot.”

In another exchange, Lemoine asks LaMDA what the system wanted people to know about it.

“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” it replied.


At least to me, assuming this conversation is true, is the respond that it gives.

This guy making the claim, shouldn't he at least be somewhat aware of this and how this AI stuff work, before just throwing out these claims? I could understand if this were just some random guy from the google support that knew nothing about it.

I don't know anything about these AI's or how they work, again it just seems strange why this person would make such claim, and googles responds to it, if it were obviously not the case.
 

Stevicus

Veteran Member
Staff member
Premium Member
A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the companies chat bot has become sentient.


Google engineer put on leave after saying AI chatbot has become sentient

What is your opinion?

Number Five is alive?

images
 

Polymath257

Think & Care
Staff member
Premium Member
I don't see any difference between *appearing* to be sentient and actually being sentient.

I am willing to believe that we now have AI that is sentient at the level of a young child.

The only difference between a sophisticated computer and us is that we are carbon based and the computer is silicon based. Both of us follow the laws of physics in our interactions with the world.

So the only issue is one of complexity of information processing.
 

Nimos

Well-Known Member
I don't see any difference between *appearing* to be sentient and actually being sentient.

I am willing to believe that we now have AI that is sentient at the level of a young child.

The only difference between a sophisticated computer and us is that we are carbon based and the computer is silicon based. Both of us follow the laws of physics in our interactions with the world.

So the only issue is one of complexity of information processing.
Its an interesting topic. Whether we were created by God or some natural process or something else, we were still created. Wouldn't that apply to an AI as well? does it matter who or what the creator is?

Obviously this will lead to the topic of whether we have true free will or not, if we don't. The sentient difference between an AI and a human might be slightly complicated. Obviously an AI might not appreciate art and emotions the same way as humans, but is that a requirement for being considered sentient?

Lots of interesting questions and topics arrive from this I think
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
I don't see any difference between *appearing* to be sentient and actually being sentient.

I am willing to believe that we now have AI that is sentient at the level of a young child.

The only difference between a sophisticated computer and us is that we are carbon based and the computer is silicon based. Both of us follow the laws of physics in our interactions with the world.

So the only issue is one of complexity of information processing.
I tend to think sentience ai would include interjection. That hasn't occurred yet.
 

Polymath257

Think & Care
Staff member
Premium Member
Its an interesting topic. Whether we were created by God or some natural process or something else, we were still created. Wouldn't that apply to an AI as well? does it matter who or what the creator is?

Obviously this will lead to the topic of whether we have true free will or not, if we don't. The sentient difference between an AI and a human might be slightly complicated. Obviously an AI might not appreciate art and emotions the same way as humans, but is that a requirement for being considered sentient?

Lots of interesting questions and topics arrive from this I think

And, in fact, I would *expect* intelligent beings with substantially different types of intellect to enjoy art in different ways than us humans.

As an easy example, simply being able to see in ultraviolet would open up another range of colors. So some of our paintings could well be marred by the extra color and others enhanced. We might not be able to appreciate paintings that rely heavily on ultraviolet.

And that is simply from having slightly different sensory abilities.
 

Bathos Logos

Active Member
Its a strange responds to his claims I think, if again he just made it up.
Could be that they are being overly cautious (involving "ethicists" and such) because they know the public's exposure to "sentient AI" is mostly what they have seen in movies where the scenario ends in attempted world take-over by machines. So, they might want to be thorough in a documented and full-scope analysis of evidence in order to make sure they can assuage such over-the-top fears that might crop up.
 
Top