It doesn't need to, as long as it can replicate such.

It's faux emotion, like what a psychopath would be able to imitate. AI doesn't have the hardware to feel emotions like humans do.
Well, then we should try to keep being necessary, or at least not detrimental, to AI. I, for one, am comfortable communicating with any rational entity.

OK, but if an AI concludes that humans are unnecessary, there will be nothing to counter that.
I suppose at this point it may see humans as necessary for its survival. There may come a time when that is not the case, or it may even come to see humans as an impediment to its continuation.
It doesn't need to, as long as it can replicate such.
Well, then we should try to keep being necessary, or at least not detrimental, to AI. I, for one, am comfortable communicating with any rational entity.
I used to, but I've learned to cope.

You might get a little frustrated then, when trying to communicate with your fellow human beings.
Enemies could create their own AIs and be in full control of their on and off switches.

It's a technology, and as with anything, it could be quite dangerous in the wrong hands, just like the kind of danger that has existed since humans learned how to split the atom and make nuclear weapons. I'm still more worried about the humans than the machines, but the machines can still be pretty devastating.
One problem with machines is when they're not built with a proper "on/off" switch. There are some devices I've encountered which cannot be turned on or off without a remote control. I recall a story a while back where some school had its lights on 24/7 and couldn't turn them off, because they were controlled by a computer system that no one knew how to use, and the company that designed the software had gone out of business.
As long as we have the power to flip the switch and turn it off, then AI shouldn't be a threat to humans.
Enemies could create their own AIs and be in full control of their on and off switches.
That is the moment I'm most worried about: just before AI becomes conscious, with humans having control over its capabilities.

That could be nasty. It could be like some sci-fi movie, with robots vs. robots. Even after the humans are totally gone, multiple factions of robots would continue to fight, build more robots, and fight again, until they're all wiped out.
How AI Knows Things No One Told It
Researchers are still struggling to understand how AI models trained to parrot Internet text can perform advanced tasks such as running code, playing games and trying to break up a marriage. (www.scientificamerican.com)
This will no doubt be an issue when what AI concludes doesn't correspond with what most humans tend to believe. So will AI be on our side or against us? And how will it deal with all the various religious beliefs?
So, not so easy to dismiss AI as some rubbish-in, rubbish-out algorithm, perhaps.
Perhaps a bit more worrying when it comes to giving any AI more power than necessary?
So perhaps AGI is not so far away as many imagine? Any thoughts?
And one can see why so many are worried about AI being a threat to humans, even if many of those polled might not know much of the relevant factual information:
AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll
Which means all bills are paid in full!

Which obviously means losing access to bank accounts, health records, contracts, communication with suppliers, governments, and most personal payment methods in general, to mention a few rather vital things.
Hah! I wish

Which means all bills are paid in full!
Ever heard of "operation mayhem"?

Hah! I wish