
AGI creeping in the back door?

Mock Turtle

Oh my, did I say that!
Premium Member

No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems’ abilities go far beyond what they were trained to do — and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines’ technique is different.

This will no doubt be an issue - when such doesn't correspond with what most humans tend to believe. So will AI be on our side or against us? And how will it deal with all the various religious beliefs?

At one level, researchers understand GPT (short for generative pretrained transformer) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned — it is a “stochastic parrot,” in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up their users’ marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
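The "choose the most likely next word" idea in the passage can be sketched with a toy example. Everything here is invented for illustration: a real LLM learns billions of parameters from text rather than using an explicit lookup table like this.

```python
import random

# Invented toy distribution: probability of the next word given the
# previous two words. A real LLM encodes something like this implicitly
# in its learned weights; it has no explicit table.
TOY_MODEL = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
}

def next_word(context, greedy=True):
    """Pick a continuation for the last two words of `context`."""
    dist = TOY_MODEL.get(tuple(context[-2:]))
    if dist is None:
        return None
    if greedy:  # take the single most likely word
        return max(dist, key=dist.get)
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]  # or sample

print(next_word(["the", "cat"]))  # prints "sat"
```

Repeating the call with the chosen word appended to the context is, at this cartoon level, how a passage gets completed one word at a time.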

So, not so easy to dismiss AI as some rubbish in, rubbish out algorithm perhaps.

That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do — but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute. “It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world — although I do not think that it is quite like how humans build an internal world model,” says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too, however. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn’t just parroting the Internet. Rather it was performing its own calculations to reach the correct answer. Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in — a tool ChatGPT can use when answering a query — that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context — a situation similar to how nature repurposes existing capacities for new functions.
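For reference, the number Millière asked for is easy to check directly. A straightforward iterative version, using the common convention F(1) = F(2) = 1 (which number counts as the "83rd" depends on the indexing convention chosen), is:

```python
def fib(n):
    """Return the n-th Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(83))  # 99194853094755497
```

The point of the demonstration is that an LLM has no loop or working memory like this; producing the right answer only when walked through the program step by step is what suggests improvised computation rather than recall.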

Perhaps a bit more worrying - as to giving any AI more power than necessary?

Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI — the term for a machine that attains the resourcefulness of animal brains — these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. “They’re indirect evidence that we are probably not that far off from AGI,” AI researcher Ben Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI’s plug-ins have given ChatGPT a modular architecture a little like that of the human brain. “Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function,” says M.I.T. researcher Anna Ivanova.

So perhaps AGI is not so far away as many imagine? Any thoughts?

And one can see why so many are worried as to AI being a threat to humans, even if many of those polled might not know too much as to relevant factual information:

AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll

According to the data, 61% of respondents believe that AI poses risks to humanity, while only 22% disagreed, and 17% remained unsure. Those who voted for Donald Trump in 2020 expressed higher levels of concern; 70% of Trump voters compared to 60% of Joe Biden voters agreed that AI could threaten humankind. When it came to religious beliefs, Evangelical Christians were more likely to "strongly agree" that AI presents risks to humanity, standing at 32% compared to 24% of non-Evangelical Christians.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.



The push for regulation on AI is forthcoming.


I think it's necessary now.
 

Stevicus

Veteran Member
Staff member
Premium Member




It's a technology, and as with anything, it could be quite dangerous in the wrong hands. Just like the kind of danger which has existed since humans learned how to split the atom and make nuclear weapons. I'm still more worried about the humans than the machines, but the machines can still be pretty devastating.

One problem with machines is when they're not built with a proper "on/off" switch. There are some devices I've encountered which cannot be turned on or off without a remote control. I recall a story a while back where a school had its lights on 24/7 and couldn't turn them off, because they were controlled by some computer system that no one knew how to use, and the company that designed the software had gone out of business.

As long as we have the power to flip the switch and turn it off, then AI shouldn't be a threat to humans.
 

Mock Turtle

Oh my, did I say that!
Premium Member
This appears to be worth looking at, although currently I haven't seen it all - Harari on AI:

 

Stevicus

Veteran Member
Staff member
Premium Member
Except, I’m sure similar was said about the Internet and some 25 years later, it is obvious to most that if we were to do that now, we’d all be pretty buggered, no?

Well, if the chips are down and the red light is blinking - and we have to choose between turning off the internet or facing nuclear devastation, I'd rather lose the internet than everything else.
 

PureX

Veteran Member
Any AI regulation will be created and imposed by the rich, for the purpose of serving their bottomless greed, as all new technology is. (Just look at the internet.) They will use it to further confuse, divide, and exploit the 'common rabble' for their own gain just as they always have. And it will work, because too many people are still unable to recognize the difference between what is genuine and what is fake. And they don't really even care.
 

Hermit Philosopher

Selflessly here for you
Well, if the chips are down and the red light is blinking - and we have to choose between turning off the internet or facing nuclear devastation, I'd rather lose the internet than everything else.
Which obviously means losing access to bank accounts, health records, contracts, communication with suppliers, governments, and most personal payment methods in general, to mention a few rather vital things.
 

Nakosis

Non-Binary Physicalist
Premium Member
Since AI can't feel emotions it will have no compassion for humanity.
 

Hermit Philosopher

Selflessly here for you
Telephones, typewriters, fax machines, and file cabinets would still exist.
Yet, if there’s no personnel at the other end of a phone line, none to open your letters, no fax number or fax-machine in their office and their files on you are all digital and saved on their cloud; your old gadgets won’t help you access much.

Anyhow, I’m sure you get my point: once new technology is incorporated into the running of important stuff, it can’t simply be “turned off” without significant consequences to everyone.

There will come a point in the not too distant future, when AI is one of those.
 

Stevicus

Veteran Member
Staff member
Premium Member
Yet, if there’s no personnel at the other end of a phone line, none to open your letters, no fax number or fax-machine in their office and their files on you are all digital and saved on their cloud; your old gadgets won’t help you access much.

Anyhow, I’m sure you get my point: once new technology is incorporated into the running of important stuff, it can’t simply be “turned off” without significant consequences to everyone.

There will come a point in the not too distant future, when AI is one of those.

Sure, I get it, and I'm not ready to turn it off yet either. However, my point was that if we come down to the line and have to make a choice between a chaotic partial destruction or total destruction, partial destruction would be the lesser of two evils.
 

Mock Turtle

Oh my, did I say that!
Premium Member
Since AI can't feel emotions it will have no compassion for humanity.
Not sure if this is true, given that some recent news seemed to indicate they had better bedside interactions with patients than doctors. :oops:
 

Nakosis

Non-Binary Physicalist
Premium Member
I think it's a good thing AI has no emotions towards humans. Otherwise I'd be much more concerned.

Ok but if an AI concludes that humans are unnecessary there will be nothing to counter that.

I suppose at this point it may see humans are necessary for its survival. There may come a time that is not the case. Or perhaps even come to see humans as an impediment to its continuation.
 

Nakosis

Non-Binary Physicalist
Premium Member
Not sure if this is true, given that some recent news seemed to indicate they had better bedside interactions with patients than doctors. :oops:

It's faux emotion. Like what a psychopath would be able to imitate. AI doesn't have the hardware to feel emotions like humans do.
 

Mock Turtle

Oh my, did I say that!
Premium Member
Haha, trust Harari to be one of the first out of the blocks on this. Must be a book or two in this. :cool:

Does he make any good points?
Can't remember how much I have seen of it (about 12 minutes apparently), but I'm sure he will milk it. :D

I have an electrical issue at the moment - the TV is virtually non-functional - such that anything I want to watch has to go via the internet, and my internet allowance is banging over the limit - so rationing (by me) is likely. :eek:
 