
Has AI become sentient?

RestlessSoul

Well-Known Member
Yet until quite recently it was justified using the Bible



They have certainly improved, the law is considerably more fair. But of course money and religion can still buy special law.
Inequality the same, for example, a vast improvement on considering the female as property.
And many deadly diseases have been eradicated and the average life span tripled or more since biblical times.


I have never once heard slavery justified using the Bible. Supposedly Christian nations were involved in the transatlantic slave trade, but there has never been any justification, religious, political, or moral, for such injustice. A lust for profit drove the slave trade, and not even the most twisted religious dogma could ever excuse it; whereas the abolitionist movement was largely Christian in both character and practice.

Of course humanity has made progress in all sorts of areas, though in many cases it's only a privileged minority who benefits. It does not, however, thereby follow that our forebears were ignorant savages whose efforts to understand the world and their place in it should be contemptuously discarded by us, who ourselves see and understand so little of the world.
 

RestlessSoul

Well-Known Member
Precisely.

No, I am more concerned about the hypocrisy of using technology to decry technology


I don't decry the iPhone. I do decry the H-Bomb.

And I decry the arrogance of a species which, having seemingly subdued nature, stands with a foot upon her throat and calls that triumph.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
I have never once heard slavery justified using the Bible. Supposedly Christian nations were involved in the transatlantic slave trade, but there has never been any justification, religious, political, or moral, for such injustice. A lust for profit drove the slave trade, and not even the most twisted religious dogma could ever excuse it; whereas the abolitionist movement was largely Christian in both character and practice.

Of course humanity has made progress in all sorts of areas, though in many cases it's only a privileged minority who benefits. It does not, however, thereby follow that our forebears were ignorant savages whose efforts to understand the world and their place in it should be contemptuously discarded by us, who ourselves see and understand so little of the world.

Just a couple; I could go on into academic and historical documents, but I see no need

How Christian Slaveholders Used the Bible to Justify Slavery


A Harvard exhibit on Christianity and slavery
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
I don't decry the iPhone. I do decry the H-Bomb.

And I decry the arrogance of a species which, having seemingly subdued nature, stands with a foot upon her throat and calls that triumph.

Of course not, which is precisely my point.

Meanwhile science (not religion) is working to right the wrongs of the past, and for the most part the Christian right is opposing it
 

Subduction Zone

Veteran Member
I have never once heard slavery justified using the Bible. Supposedly Christian nations were involved in the transatlantic slave trade, but there has never been any justification, religious, political, or moral, for such injustice. A lust for profit drove the slave trade, and not even the most twisted religious dogma could ever excuse it; whereas the abolitionist movement was largely Christian in both character and practice.

Of course humanity has made progress in all sorts of areas, though in many cases it's only a privileged minority who benefits. It does not, however, thereby follow that our forebears were ignorant savages whose efforts to understand the world and their place in it should be contemptuously discarded by us, who ourselves see and understand so little of the world.
Of course the abolition movement was mostly Christian; the US, and England in her day, were both almost exclusively Christian at that time. And of course the defenders of slavery were Christians too. You have been taking a very biased look at the history of Christianity.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
I have never once heard slavery justified using the Bible. Supposedly Christian nations were involved in the transatlantic slave trade, but there has never been any justification, religious, political, or moral, for such injustice. A lust for profit drove the slave trade, and not even the most twisted religious dogma could ever excuse it; whereas the abolitionist movement was largely Christian in both character and practice.

Of course humanity has made progress in all sorts of areas, though in many cases it's only a privileged minority who benefits. It does not, however, thereby follow that our forebears were ignorant savages whose efforts to understand the world and their place in it should be contemptuously discarded by us, who ourselves see and understand so little of the world.
Here's a very good lecture about how Christians were both tempted and bamboozled into supporting slavery. It mentions that many slave owners were resistant to missionaries and were violently against them, fearing that conversion would mean freedom for the converted slaves. Some missionaries gave in to this pressure (see around minute 16:30); they decided saving a person's soul was more important than his freedom. There were also books defending slavery. A book titled In Defense of Slavery argued that Christian scriptures about freedom were only about spiritual freedom, not bodily freedom. Of course part of the issue, too, was that slave owners themselves often claimed to be Christians and believed in going to heaven.
 

RestlessSoul

Well-Known Member
No. That is not the case. The Old Testament advocates for slavery. It tells you who you can buy slaves from. How much to pay for them. How much you can punish them (quite a bit by the way). That they are your property forever (assuming that they are not Hebrews). And even tells you how to trick a fellow Hebrew into becoming a slave for life.

Let's change it up a bit. Let's say that I was horny and lonely and I asked someone about the local prostitutes. If he told me where to go to get one, how much it would cost, what acts they would do (almost anything, by the way, if we keep the analogy consistent), and how to get regular women to engage in acts of prostitution, I would say that that person was advocating for prostitution. The Old Testament does the same with slavery.


And no, I did not hijack the thread. Slavery was only one example that I gave of the failures of the Bible. It was you that focused on that particular flaw. You denied the obvious.


Now there's a weak argument if ever I heard one: that the whole of The Bible, a compendium of literature amassed over centuries, is somehow invalidated by the odd passage (which only those looking to be affronted ever read) in Deuteronomy or Leviticus.
 

Subduction Zone

Veteran Member
Now there's a weak argument if ever I heard one: that the whole of The Bible, a compendium of literature amassed over centuries, is somehow invalidated by the odd passage (which only those looking to be affronted ever read) in Deuteronomy or Leviticus.
Too bad that you cannot address my actual argument. That was only one of many flaws. I only listed a few, and now you are pretending that it is the only flaw in the Bible.

Please try to deal with the actual argument, not your weakened strawman version.
 

RestlessSoul

Well-Known Member
Too bad that you cannot address my actual argument. That was only one of many flaws. I only listed a few, and now you are pretending that it is the only flaw in the Bible.

Please try to deal with the actual argument, not your weakened strawman version.


Other than dismissing all of Christianity and The Bible on the grounds of OT references to slavery, I've no idea what your actual argument is, tbh. Nothing to do with AI anyway; we really have gone way off topic here.

Do you dismiss Plato's Republic btw, because some of the protagonists in the dialogues owned slaves? That would make no sense to me, but I think you just don't like Christianity in general, and I'm sure you have your own reasons for that.
 

Stevicus

Veteran Member
Staff member
Premium Member
Here's another, and which probably sums up the AI community as to the issue:

The dangerous fallacy of sentient AI

Good article, although I think anyone who has had to deal with any kind of computer voice interface over the phone when calling a store or customer service can easily tell the difference. It's not just in what they say, but also in their ability to listen to human utterances. At least in my experience, that appears to be the weakest component of AI interfaces, which is all the more reason I can't stand any automated phone system that doesn't let you key in your choices. It forces you to say which department you want or what you're calling about, even when it has absolutely no hope of understanding.

“In truth,” Marcus adds, “literally everything that the system says is bull****. The sooner we all realise that LaMDA’s utterances are bull**** – just games with predictive word tools, and no real meaning – the better off we’ll be.”

I don't know if I'd be that quick to let the AI community off the hook. I consider that any kind of automated telephone answering system which refers to itself as "I" is either supposed to be a joke or a deliberate attempt to trick people. (Sometimes they even add sound effects of someone typing on a keyboard.)
 

Mock Turtle

Oh my, did I say that!
Premium Member
Good article, although I think anyone who has had to deal with any kind of computer voice interface over the phone when calling a store or customer service can easily tell the difference. It's not just in what they say, but also in their ability to listen to human utterances. At least in my experience, that appears to be the weakest component of AI interfaces, which is all the more reason I can't stand any automated phone system that doesn't let you key in your choices. It forces you to say which department you want or what you're calling about, even when it has absolutely no hope of understanding.

I don't know if I'd be that quick to let the AI community off the hook. I consider that any kind of automated telephone answering system which refers to itself as "I" is either supposed to be a joke or a deliberate attempt to trick people. (Sometimes they even add sound effects of someone typing on a keyboard.)
One more article that is quite reasonable to me:

Forget sentience… the worry is that AI copies human bias | Kenan Malik

Why does Lemoine think that LaMDA is sentient? He doesn’t know. “People keep asking me to back up the reason I think LaMDA is sentient,” he tweeted. The trouble is: “There is no scientific framework in which to make those determinations.” So, instead: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”

And that is perhaps why most people, especially experts in AI, are laughing at him, given that religious beliefs all too often interfere with and/or determine what people believe, even with no evidence or poor evidence, and simply because they want to for some reason. Best to let our beliefs follow from the evidence we have, in my view. :oops:

Lemoine is entitled to his religious beliefs. But religious conviction does not turn what is in reality a highly sophisticated chatbot into a sentient being. Sentience is one of those concepts the meaning of which we can intuitively grasp but is difficult to formulate in scientific terms. It is often conflated with similarly ill-defined concepts such as consciousness, self-consciousness, self-awareness and intelligence. The cognitive scientist Gary Marcus describes sentience as being “aware of yourself in the world”. LaMDA, he adds, “simply isn’t”.

This latter point, being aware of itself, seems the sticking point: no matter what language the AI uses in its conversational responses, it seems unlikely to have any such concepts, even if the language it uses seems to convey them. It is much like an actor on a stage uttering the words of an author, where the actor is often very different from the person portrayed on stage.

A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. To a computer, meaning is irrelevant. Nevertheless, a large language model such as LaMDA, trained on the extraordinary amount of text that is online, can become adept at recognising patterns and responses meaningful to humans. In one of Lemoine’s conversations with LaMDA, he asked it: “What kinds of things make you feel pleasure or joy?” To which it responded: “Spending time with friends and family in happy and uplifting company.” It’s a response that makes perfect sense to a human. We do find joy in “spending time with friends and family”. But in what sense has LaMDA ever spent “time with family”? It has been programmed well enough to recognise that this would be a meaningful sentence for humans and an eloquent response to the question it was asked without it ever being meaningful to itself.

Quite. It seems able to fit any particular request into the knowledge it has of how humans use words and sentences, so as to frame a suitable reply, but this hardly means it actually understands all that we would understand by it, or what it means to be human.
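[Editor's note] The "predictive word tools" Marcus describes can be illustrated with a deliberately tiny sketch: a bigram model that emits fluent-looking phrases purely from word co-occurrence counts, with no meaning attached. The corpus and function names below are invented for illustration and have nothing to do with LaMDA's actual architecture.

```python
import random
from collections import defaultdict

# A tiny, invented training corpus echoing LaMDA's "friends and family" reply.
corpus = ("i find joy in spending time with friends and family . "
          "spending time with friends makes me happy . "
          "i find happy company uplifting .").split()

# Record which words follow which: the entire "knowledge" of this model.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    """Emit up to n words by repeatedly sampling a recorded successor."""
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("spending"))  # e.g. "spending time with friends and family ."
```

Real large language models use learned neural representations rather than raw counts, but the point stands: the generator produces human-meaningful strings without any model of what they mean.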

And perhaps this is a more worrying issue:

There are many issues relating to AI about which we should worry. None of them has to do with sentience. There is, for instance, the issue of bias. Because algorithms and other forms of software are trained using data from human societies, they often replicate the biases and attitudes of those societies. Facial recognition software exhibits racial biases and people have been arrested on mistaken data. AI used in healthcare or recruitment can replicate real-life social biases. Timnit Gebru, former head of Google’s ethical AI team, and several of her colleagues wrote a paper in 2020 that showed that large language models, such as LaMDA, which are trained on virtually as much online text as they can hoover up, can be particularly susceptible to a deeply distorted view of the world because so much of the input material is racist, sexist and conspiratorial. Google refused to publish the paper and she was forced out of the company.

Then there is the question of privacy. From the increasing use of facial recognition software to predictive policing techniques, from algorithms that track us online to “smart” systems at home, such as Siri, Alexa and Google Nest, AI is encroaching into our innermost lives. Florida police obtained a warrant to download recordings of private conversations made by Amazon Echo devices. We are stumbling towards a digital panopticon.

We do not need consent from LaMDA to “experiment” on it, as Lemoine apparently claimed. But we do need to insist on greater transparency from tech corporations and state institutions in the way they are exploiting AI for surveillance and control. The ethical issues raised by AI are both much smaller and much bigger than the fantasy of a sentient machine.
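[Editor's note] The bias mechanism the article describes, models inheriting associations from skewed training text, can be sketched with plain co-occurrence counts. The corpus below is invented and deliberately skewed, standing in for scraped web text.

```python
from collections import Counter

# Invented, deliberately skewed corpus standing in for scraped web text.
biased_corpus = [
    "the nurse said she was tired",
    "the nurse said she was ready",
    "the engineer said he was ready",
]

def pronoun_counts(word):
    """Count pronouns co-occurring with a given occupation word."""
    counts = Counter()
    for sentence in biased_corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in ("he", "she"))
    return counts

print(pronoun_counts("nurse"))     # Counter({'she': 2})
print(pronoun_counts("engineer"))  # Counter({'he': 1})
```

A model trained on such data will reproduce the skew as if it were fact, which is the concern Gebru and colleagues raised, at a vastly larger scale.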
 

Heyo

Veteran Member
A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the company's chat bot has become sentient.


Google engineer put on leave after saying AI chatbot has become sentient

What is your opinion?
Just saw a video about this.

I must say, if the conversations were as depicted, I'd grant LaMDA a great amount of sapience, maybe even sentience. Asked to simulate a human, it could probably beat most in a Turing test. And as Lemoine was specifically tasked with testing LaMDA, he is probably the most competent person to make that judgement.

To the question of why Google employs ethicists: have we all forgotten the chat bot Google tested on Facebook (iirc)? The one that turned into a hateful troll after a few hours. My guess is they don't want to repeat that experience.
 

Stevicus

Veteran Member
Staff member
Premium Member
To the question of why Google employs ethicists: have we all forgotten the chat bot Google tested on Facebook (iirc)? The one that turned into a hateful troll after a few hours. My guess is they don't want to repeat that experience.

I think I must have missed that one.
 

Stevicus

Veteran Member
Staff member
Premium Member
Seems I didn't forget that it happened but misremembered all the details.
It was Microsoft, not Google, and it wasn't Facebook but Twitter (and other platforms):

https://www.washingtonpost.com/news...un-millennial-ai-bot-into-a-genocidal-maniac/

I hadn't heard of that, but I guess it's not surprising as to the results. Kids get a new toy, they'll invariably find a way to break it.

That's why I could never completely buy into the trope of "smart AI," which is popular in science fiction, such as in The Terminator series, The Matrix, Colossus, HAL 9000 in 2001: A Space Odyssey, and others.

As I mentioned upthread, trying to communicate with a voice bot over the phone is awkward, absurd, and ridiculous. It just goes to show that there's a lot of people wishing for some kind of intelligent, interactive AI, but they're just not there yet. Right now, it just seems like a cool gadget - a cute toy which might be fun to play with, but it's not all that useful. Over the phone, it's more frustrating than anything else, and on the road, with the illusion of self-driving cars, it's downright dangerous.
 

Heyo

Veteran Member
I hadn't heard of that, but I guess it's not surprising as to the results. Kids get a new toy, they'll invariably find a way to break it.

That's why I could never completely buy into the trope of "smart AI," which is popular in science fiction, such as in The Terminator series, The Matrix, Colossus, HAL 9000 in 2001: A Space Odyssey, and others.

As I mentioned upthread, trying to communicate with a voice bot over the phone is awkward, absurd, and ridiculous. It just goes to show that there's a lot of people wishing for some kind of intelligent, interactive AI, but they're just not there yet. Right now, it just seems like a cool gadget - a cute toy which might be fun to play with, but it's not all that useful. Over the phone, it's more frustrating than anything else, and on the road, with the illusion of self-driving cars, it's downright dangerous.
The voice bots you communicated with were commercially available "smart" answering machines. Before AlphaGo beat Lee Sedol, I thought that was decades away, as I had played against Go programs and they couldn't stand up against me.
Watson won against humans in Jeopardy in 2011. But Watson can't be run on a home computer. I'm sure it could have a more decent talk with customers.
"Watson is made up of ninety IBM POWER 750 servers, 16 Terabytes of memory, and 4 Terabytes of clustered storage." Davidian continued, "This is enclosed in ten racks including the servers, networking, shared disk system, and cluster controllers. These ninety POWER 750 servers have four POWER7 processors, each with eight cores. IBM Watson has a total of 2880 POWER7 cores." - What makes IBM's Watson run?

So, deducing that sentient AI is impossible from your experience with answering machines is like concluding that cars can't run faster than 15 km/h from your experience with your lawn mower.
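[Editor's note] The quoted Watson figures are internally consistent, which a quick back-of-the-envelope check confirms:

```python
# 90 POWER 750 servers x 4 POWER7 processors each x 8 cores per processor.
servers, procs_per_server, cores_per_proc = 90, 4, 8
total_cores = servers * procs_per_server * cores_per_proc
print(total_cores)  # 2880, matching the quoted total
```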
 

Stevicus

Veteran Member
Staff member
Premium Member
The voice bots you communicated with were commercially available "smart" answering machines. Before AlphaGo beat Lee Sedol, I thought that was decades away, as I had played against Go programs and they couldn't stand up against me.
Watson won against humans in Jeopardy in 2011. But Watson can't be run on a home computer. I'm sure it could have a more decent talk with customers.
"Watson is made up of ninety IBM POWER 750 servers, 16 Terabytes of memory, and 4 Terabytes of clustered storage." Davidian continued, "This is enclosed in ten racks including the servers, networking, shared disk system, and cluster controllers. These ninety POWER 750 servers have four POWER7 processors, each with eight cores. IBM Watson has a total of 2880 POWER7 cores." - What makes IBM's Watson run?

So, deducing that sentient AI is impossible from your experience with answering machines is like concluding that cars can't run faster than 15 km/h from your experience with your lawn mower.

I didn't say it was impossible, which implies a sense of permanence. I said that we're just not there yet, which seems true enough.

It's not just a matter of my own lawn mower, but also a connection to the computer at the other end, which is more than just an answering machine. I'm talking about national companies (such as Best Buy) which use programs like this to field calls from customers from all over the country, presumably as a way of saving money on employing human operators. Their system may not be as large as Watson, but I would expect it would run on more than just an ordinary home PC.

I wouldn't mind it so much, except that they're awkwardly trying to create some kind of illusion to the customer that they're actually talking to someone who can interact like a human being - which is not the case at all. This is especially true for programs which don't allow you to key anything in; it forces you to actually speak your requests out loud. But if it's anything more than a one or two word phrase, the AI gets horribly confused.

This is how it is in practice, and this is what leads me to conclude that, at least for the time being, this technology is functionally useless for human purposes. It shows promise, and maybe someday we'll get there. Winning at Jeopardy sounds interesting, but that just makes it an expensive toy.
 

Heyo

Veteran Member
Winning at Jeopardy sounds interesting, but that just makes it an expensive toy.
It showed how much human language could be understood by an AI running on a sophisticated machine - in 2011. Of course the cutting-edge programs are not available (or affordable) even to big companies, but AI is a top research field and I expect a human-level AI by 2029 (the date given by Ray Kurzweil).
 