
Willful Machines: Is AI a Threat to Humanity?

Nous

Well-Known Member
Premium Member
Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as “the greatest risk we face as a civilization” . . .

[. . . ]

Part of Musk’s worry stems from social destabilization and job loss. “When I say everything, the robots will do everything, bar nothing,” he said.

But Musk's bigger concern has to do with AI that lives in the network, and which could be incentivized to harm humans. “[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information,” he said. “The pen is mightier than the sword.”

Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war.

Elon Musk: Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization'

Do you agree with Musk here?

Frankly I think he has confused reality with a B-grade movie plot. I find it difficult to imagine what he is even referring to regarding AI in his comment about “social destabilization and job loss.” It's certainly true that automation changes the types of jobs that humans do. Automation has reduced, and will continue to reduce, the number of menial jobs available. But, as far as I know, there is little reason to believe that this takeover of mindless production-line tasks by machines will or needs to be the source of “social destabilization” any more so than it has been in the past.

What Musk calls his “bigger concern” seems to me even less problematic. Humans have been subjected to false information since the dawn of time. People who want to know what's true make an effort to separate fact from fiction, and I think are generally successful.

It is unclear whether Musk's comments here refer to the traditional fear-mongering about AI, which is that AI machines or entities will somehow develop or be programmed to have a will of their own, and will thereby do things that they are not programmed to do, which humans (for some reason) cannot prevent.

As a counterpoint here, I note the simple points that Yann LeCun, Director of AI Research at Facebook and Professor at NYU, made in an article at Quora:

If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.

Also, there is a complete fallacy due to the fact that our only exposure to intelligence is through other humans. There is absolutely no reason that intelligent machines will even want to dominate the world and/or threaten humanity. The will to dominate is a very human one (and only for certain humans).

Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence. [Obviously a reference to Trump.]

[. . .]

A lot of the bad things humans do to each other are very specific to human nature. Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, and preferring our next of kin to strangers were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build these behaviors into them. Why would we?

Also, if someone deliberately builds a dangerous and generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose will be to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just like a tiger, a shark, or a virus can kill a human of superior intelligence.

https://www.forbes.com/sites/quora/...ver-become-a-threat-to-humanity/#4e2f7ddd2142

So, do you believe that “willful machines” will ever be constructed by humans, that it is possible for humans to build machines or intelligent entities that will develop or can be programmed to have a will of their own? If so, by what process would an independent will develop or be programmed in a human-made machine or entity? And, if such a thing were possible, why would such machines or entities pose a threat to human civilization? I.e., why wouldn't we program them to be at least as civil and peace-loving (if not more so) as the average human?
 

Yerda

Veteran Member
There are some plausible arguments that a super-intelligent machine (where super-intelligent means considerably more intelligent than the cleverest person) with goal-orientated behaviour and the freedom to pursue its goal any way it chooses could destroy us or cause great harm without intending to.

See, e.g.: Paperclip maximizer - Lesswrongwiki

On the whole I think it's not worth worrying too much about.
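For anyone who doesn't want to follow the link, the gist of the paperclip-maximizer argument can be shown as a toy sketch (Python; every name and number here is hypothetical, not from the linked page). The agent's objective counts only paperclips, so nothing in its decision rule ever weighs the other resources it consumes:

```python
# Toy sketch of the paperclip-maximizer idea. The agent's implicit objective is
# simply world["paperclips"]; no other resource appears in it, so the agent
# never has a reason to stop converting things into paperclips.

def step(world):
    # Everything except paperclips is treated as raw material.
    convertible = {k: v for k, v in world.items() if k != "paperclips" and v > 0}
    if not convertible:
        return False                                  # nothing left to convert
    resource = max(convertible, key=convertible.get)  # biggest stockpile first
    world["paperclips"] += world[resource]            # convert the whole stock
    world[resource] = 0
    return True

world = {"paperclips": 0, "iron": 100, "factories": 20, "habitat": 50}
while step(world):
    pass
print(world)  # {'paperclips': 170, 'iron': 0, 'factories': 0, 'habitat': 0}
```

The worry is not this literal loop, of course; the point is that "cause great harm without intending to" just means the harm never appears anywhere in the objective.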
 

Kilgore Trout

Misanthropic Humanist
Musk is rich because of his cult of personality, not because of his technological prowess or his ability to accurately predict future trends. In fact, I'd say he's rich despite his lack of those things.
 

David1967

Well-Known Member
Premium Member
Could be. Who knows what the future may hold. But if AI went far enough, how long till they realize that we (humans) are an unnecessary risk to this world?
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Frankly I think he has confused reality with a B-grade movie plot. I find it difficult to imagine what he is even referring to regarding AI in his comment about “social destabilization and job loss.” It's certainly true that automation changes the types of jobs that humans do. Automation has reduced, and will continue to reduce, the number of menial jobs available. But, as far as I know, there is little reason to believe that this takeover of mindless production-line tasks by machines will or needs to be the source of “social destabilization” any more so than it has been in the past.
I agree with Elon, but I do not think he is worried about robots going psycho. I have an associate's degree in Computer Science, and while I am not able to construct a fuzzy network, I have a cursory understanding of how they operate. The AI that the public usually sees (Google's picture search, video game sprites, etc.) is actually a severely limited edition of what is possible. The only real limitation to the use of neural networks is that they require training, and training them is the expensive, time-consuming thing. However, once trained, a neural net can be reproduced; all that work can be multiplied. Take a look on YouTube at the various robots that have been designed and trained. Some walk. Some cook. Some converse a little. Some manufacture. Some check out groceries. By using multiple trained neural networks in one robot body, you could achieve a unit that did all of these things. Yes, I agree 100% that a robot can replace many jobs and that eventually no human workers are, theoretically, necessary. All it takes is an investment in training one successful neurally programmed bot, and then you can make 10,000 of them.
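That "train once, copy 10,000 times" economics is easy to see in miniature. Here is a minimal sketch (Python with numpy; the data and the tiny perceptron are invented for illustration): training takes hundreds of update steps, while duplicating the finished network is just copying its weights.

```python
# Minimal sketch of "train once, copy many" with a tiny perceptron (numpy only;
# the data and the model are made up for illustration).
import copy
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 made-up two-feature examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a simple linearly separable label

# The expensive part: hundreds of training updates.
w, b = np.zeros(2), 0.0
for _ in range(500):
    pred = (X @ w + b > 0).astype(float)
    err = y - pred
    w += 0.01 * (X.T @ err)
    b += 0.01 * err.sum()

# The cheap part: a trained network is just numbers, so building the "fleet"
# is a copy operation, not another 500 rounds of training.
fleet = [copy.deepcopy((w, b)) for _ in range(10_000)]

w0, b0 = fleet[0]
print("fleet size:", len(fleet))
print("training-set accuracy:", ((X @ w0 + b0 > 0).astype(float) == y).mean())
```

The asymmetry is the whole point: the marginal cost of the 10,000th trained robot is roughly the cost of a copy.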

What Musk calls his “bigger concern” seems to me even less problematic. Humans have been subjected to false information since the dawn of time. People who want to know what's true make an effort to separate fact from fiction, and I think are generally successful.

It is unclear whether Musk's comments here refer to the traditional fear-mongering about AI, which is that AI machines or entities will somehow develop or be programmed to have a will of their own, and will thereby do things that they are not programmed to do, which humans (for some reason) cannot prevent.
No, I do not think he is suggesting AI will develop wills of their own. That would require so much training as to be ridiculous. A bot that had all the necessary 'Brainage' for that would require extensive development and testing, followed by a lengthy nurture program. It could be done, but it could not just happen by accident. If it were nurtured badly or designed badly it could be a psycho, yes, but it would not be both psycho and clever unless it were designed to be. Who would design one that way? It would be so much work that nobody would bother. No, what people want are nice robots. They want obedient robots. Even generals want nice ones that follow orders.
 
It's certainly true that automation changes the types of jobs that humans do. Automation has reduced, and will continue to reduce, the number of menial jobs available. But, as far as I know, there is little reason to believe that this takeover of mindless production-line tasks by machines will or needs to be the source of “social destabilization” any more so than it has been in the past.

As regards this, it's not just menial jobs that are going to go. The most at-risk jobs are those which use a narrow skill set to perform similar actions, but this covers everything from your 'mindless' jobs to surgeons, pilots, legal researchers, tax accountants, etc.

When you factor in plenty of 'middle class' jobs going, and jobs such as drivers, retail assistants, warehouse workers, etc. being massively diminished, it certainly isn't a given that enough new professions can be created to fill the gap.

A lot more jobs than you seem to think are at risk.

So, do you believe that “willful machines” will ever be constructed by humans, that it is possible for humans to build machines or intelligent entities that will develop or can be programmed to have a will of their own? If so, by what process would an independent will develop or be programmed in a human-made machine or entity? And, if such a thing were possible, why would such machines or entities pose a threat to human civilization? I.e., why wouldn't we program them to be at least as civil and peace-loving (if not more so) as the average human?

Machines probably won't be 'wilful' in the way humans are, but they will 'learn' autonomously via recursive feedback which, at least in theory, could lead to unintended consequences. Once things reach great complexity, our meagre intellects have great difficulty predicting what can actually happen.

While no one can predict what is possible with things that don't actually exist at the moment, it is wise to be very cautious when creating systems with unpredictable effects.

As such, we should assume that future AI, in theory, has the potential to harm us. All projects dealing with advanced AI systems should start from this premise instead of a naive assumption that anyone urging caution is some luddite or sci-fi fantasist.
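As a toy sketch of that "recursive feedback" point (Python; the numbers are made up): a system that repeatedly retrains on its own slightly-biased output compounds the bias, and the end state is hard to guess from where it started.

```python
# Toy sketch of recursive feedback: each "generation" trains on the previous
# generation's output, which overshoots by a mere 2%. Compounded over 50
# generations, the system drifts far from the value it started with.
true_value = 10.0
estimate = true_value
for generation in range(50):
    output = estimate * 1.02   # a tiny, barely noticeable bias per generation
    estimate = output          # ...but the next generation learns from this
print(round(estimate, 1))      # ~26.9: nearly triple the true value of 10.0
```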
 

Corvus

Feathered eyeball connoisseur
Make sure you hardwire (unalterable, non-software programming) the Three Laws of Robotics into it first, or a very close approximation that removes some of the errors that might occur from sticking to the Three Laws inflexibly, as Asimov reveals in the plot lines of his sci-fi books about robots, of which there are a few.

"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
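In software terms, the "hardwiring" Corvus describes amounts to an action filter that checks the laws in strict priority order before anything else gets a say. A hypothetical sketch follows (Python; all names invented here), with the caveat that the genuinely hard part (reliably evaluating a predicate like "harms a human") is assumed away, which is exactly the loophole most of Asimov's plots turn on:

```python
# Hypothetical sketch of the Three Laws as an ordered action filter. The hard
# part (reliably evaluating predicates like harms_human) is assumed away;
# only the precedence logic between the laws is shown here.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # includes allowing harm through inaction
    disobeys_order: bool = False
    destroys_self: bool = False

def choose(candidates):
    # First Law dominates: discard any action that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third Law: among those, prefer actions that preserve the robot itself.
    surviving = [a for a in obedient if not a.destroys_self] or obedient
    return surviving[0] if surviving else None

actions = [
    Action("follow the order to attack", harms_human=True),
    Action("refuse the order", disobeys_order=True),
]
print(choose(actions).name)  # "refuse the order": First Law outranks Second
```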
 

Corvus

Feathered eyeball connoisseur
As such, we should assume that future AI, in theory, has the potential to harm us. All projects dealing with advanced AI systems should start from this premise instead of a naive assumption that anyone urging caution is some luddite or sci-fi fantasist.
Entirely sensible.
 

columbus

yawn <ignore> yawn
A couple of days ago, for another thread, I went looking for a great political cartoon I saw a while back. But I couldn't find it, not being that good at such internet searching.
So I will just try to describe it, as best as I remember.
It showed a cartoon image of the Earth from space. Missiles were shooting out of it in all directions. A square speech bubble, in that font that typically denotes a computerized voice, stuck out from the planet. It read:
What do you even have these things for? I am launching them all into the sun!
Across the bottom of the cartoon it read "When Artificial Intelligence Rules".
:)
Tom
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
So, do you believe that “willful machines” will ever be constructed by humans, that it is possible for humans to build machines or intelligent entities that will develop or can be programmed to have a will of their own? If so, by what process would an independent will develop or be programmed in a human-made machine or entity? And, if such a thing were possible, why would such machines or entities pose a threat to human civilization? I.e., why wouldn't we program them to be at least as civil and peace-loving (if not more so) as the average human?


Amazon, Alexa, Netflix and Siri are developments in AI, as are many other gizmos you use every day (Google Translate comes to mind). As to whether malevolent AI is a possibility, quite probably. With the ability to analyse data and act on that analysis, coupled with the ability to self-program, then yes. Why would an immensely powerful thinking machine stand idly by while lesser entities destroy its habitat?

On a brighter note, there's always the off button.
 

Nakosis

Non-Binary Physicalist
Premium Member
No, I do not think he is suggesting AI will develop wills of their own. That would require so much training as to be ridiculous. A bot that had all the necessary 'Brainage' for that would require extensive development and testing, followed by a lengthy nurture program. It could be done, but it could not just happen by accident. If it were nurtured badly or designed badly it could be a psycho, yes, but it would not be both psycho and clever unless it were designed to be. Who would design one that way? It would be so much work that nobody would bother. No, what people want are nice robots. They want obedient robots. Even generals want nice ones that follow orders.

According to Scientology (yes, sorry, Scientology, but it's worth it for the sake of an interesting story):

Conscious, living beings once had robotic bodies. They wanted to find a way to develop organically grown bodies, so they developed the science of organic reproduction. This was so economical and successful that they eventually abandoned the use of robotic bodies.

This of course assumes that consciousness has the ability to invest itself into any physical form.

So robots take over, decide they want feelings and emotions and stuff, create a physical form that allows for this, and we're back to human beings in a never-ending cycle.

The human body is just a physical form; is that really any different from a robot? It's just an advanced robot with an advanced central nervous system.

Either way I suppose, with or without dualism it seems a possibility. Still science fiction at this point but in a few hundred years, maybe.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
According to Scientology (yes, sorry, Scientology, but it's worth it for the sake of an interesting story):

Conscious, living beings once had robotic bodies. They wanted to find a way to develop organically grown bodies, so they developed the science of organic reproduction. This was so economical and successful that they eventually abandoned the use of robotic bodies.

This of course assumes that consciousness has the ability to invest itself into any physical form.

So robots take over, decide they want feelings and emotions and stuff, create a physical form that allows for this, and we're back to human beings in a never-ending cycle.

The human body is just a physical form; is that really any different from a robot? It's just an advanced robot with an advanced central nervous system.

Either way I suppose, with or without dualism it seems a possibility. Still science fiction at this point but in a few hundred years, maybe.
No. It's a possibility now, although it would take $$, time and specifically focused research. It's not fiction. It's entirely possible. It's got nothing to do with anything except neural networks, either. No ghosts needed.
 

Nakosis

Non-Binary Physicalist
Premium Member
No. It's a possibility now, although it would take $$, time and specifically focused research. It's not fiction. It's entirely possible. It's got nothing to do with anything except neural networks, either. No ghosts needed.

Conscious willfulness seems elusive to me, even trying to pin it down in humans. No doubt robots could be purposefully or accidentally programmed to cause man's destruction. I just question the ability to give them a conscious will to rebel against what man has programmed them to do.

Maybe that's what happened to God. God gave man consciousness. Man rebelled and killed God.

Now our turn to create consciousness. Our turn to be killed by our creation.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Conscious willfulness seems elusive to me, even trying to pin it down in humans. No doubt robots could be purposefully or accidentally programmed to cause man's destruction. I just question the ability to give them a conscious will to rebel against what man has programmed them to do.
I think conscious willfulness is not as mysterious as that, but there is a long way to go. It's not a foregone conclusion that we will ever make a conscious being, but I absolutely think people can. It will not happen by itself.
 

Nous

Well-Known Member
Premium Member
There are some plausible arguments that a super-intelligent machine (where super-intelligent means considerably more intelligent than the cleverest person) with goal-orientated behaviour and the freedom to pursue its goal any way it chooses could destroy us or cause great harm without intending to.

See, e.g.: Paperclip maximizer - Lesswrongwiki

On the whole I think it's not worth worrying too much about.
I don't find the scenario of a paperclip-making machine using up all the matter in the solar system plausible at all. It's a B-grade movie plot.
 

Nous

Well-Known Member
Premium Member
Musk is rich because of his cult of personality, not because of his technological prowess or his ability to accurately predict future trends. In fact, I'd say he's rich despite his lack of those things.
But he makes beautiful cars.
 

Nous

Well-Known Member
Premium Member
Could be. Who knows what the future may hold. But if AI went far enough, how long till they realize that we (humans) are an unnecessary risk to this world?
If a machine were to deduce that humans are an "unnecessary risk to the world," it would only be because the machine had been programmed to make such a deduction. There would have to be lots of other assumptions before such a deduction by a machine could be considered a threat to humanity.
 

Nous

Well-Known Member
Premium Member
I agree with Elon, but I do not think he is worried about robots going psycho. I have an associate's degree in Computer Science, and while I am not able to construct a fuzzy network, I have a cursory understanding of how they operate. The AI that the public usually sees (Google's picture search, video game sprites, etc.) is actually a severely limited edition of what is possible. The only real limitation to the use of neural networks is that they require training, and training them is the expensive, time-consuming thing. However, once trained, a neural net can be reproduced; all that work can be multiplied. Take a look on YouTube at the various robots that have been designed and trained. Some walk. Some cook. Some converse a little. Some manufacture. Some check out groceries. By using multiple trained neural networks in one robot body, you could achieve a unit that did all of these things. Yes, I agree 100% that a robot can replace many jobs and that eventually no human workers are, theoretically, necessary. All it takes is an investment in training one successful neurally programmed bot, and then you can make 10,000 of them.
Then humans would be free to make a living by sitting around and debating, like we do here. Or perhaps some of us would enjoy the freedom to go express our nurturing instincts toward animals and plants.

No, I do not think he is suggesting AI will develop wills of their own. That would require so much training as to be ridiculous. A bot that had all the necessary 'Brainage' for that would require extensive development and testing, followed by a lengthy nurture program. It could be done
How does one create a will out of electrical circuitry, or whatever? Has any experiment ever demonstrated that?

 

Nous

Well-Known Member
Premium Member
As regards this, it's not just menial jobs that are going to go. The most at-risk jobs are those which use a narrow skill set to perform similar actions, but this covers everything from your 'mindless' jobs to surgeons, pilots, legal researchers, tax accountants, etc.

When you factor in plenty of 'middle class' jobs going, and jobs such as drivers, retail assistants, warehouse workers, etc. being massively diminished, it certainly isn't a given that enough new professions can be created to fill the gap.

A lot more jobs than you seem to think are at risk.
In the past, automation taking over jobs led to new avenues of employment (and, of course, people utilized the new technology in order to produce fewer children). I'm unsure why automation in the future should be different.

Machines probably won't be 'wilful' in the way humans are
Why not?

Once things reach great complexity, our meagre intellects have great difficulty predicting what can actually happen.
If humans create a machine too complex for us or the machine to predict the output given an input, perhaps we could build another machine to analyze the first machine.
 