Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as “the greatest risk we face as a civilization” . . .
[. . . ]
Part of Musk’s worry stems from social destabilization and job loss. “When I say everything, the robots will do everything, bar nothing,” he said.
But Musk’s bigger concern has to do with AI that lives in the network, and which could be incentivized to harm humans. “[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information,” he said. “The pen is mightier than the sword.”
Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war.
Elon Musk: Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization'
Do you agree with Musk here?
Frankly, I think he has confused reality with a B-grade movie plot. I find it difficult to imagine what he is even referring to in his comment about AI causing “social destabilization and job loss.” It's certainly true that automation changes the types of jobs that humans do. Automation has reduced, and will continue to reduce, the number of menial jobs available. But, as far as I know, there is little reason to believe that this takeover of mindless production-line tasks by machines will be, or needs to be, a source of “social destabilization” any more than it has been in the past.
What Musk calls his “bigger concern” seems to me even less problematic. Humans have been subjected to false information since the dawn of time. People who want to know what's true make an effort to separate fact from fiction, and I think they are generally successful.
It is unclear whether Musk's comments here refer to the traditional fear-mongering about AI, which is that AI machines or entities will somehow develop, or be programmed to have, a will of their own, and will thereby do things that they are not programmed to do and which humans (for some reason) cannot prevent.
As a counterpoint here, I note the simple points that Yann LeCun, Director of AI Research at Facebook and Professor at NYU, made in an article at Quora:
If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.
Also, there is a complete fallacy that arises from the fact that our only exposure to intelligence is through other humans. There is absolutely no reason that intelligent machines will even want to dominate the world and/or threaten humanity. The will to dominate is a very human one (and only for certain humans).
Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence. [Obviously a reference to Trump.]
[. . .]
A lot of the bad things humans do to each other are very specific to human nature. Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, preferring our next of kin to strangers, etc. were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build them in. Why would we?
Also, if someone deliberately builds a dangerous and generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose will be to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark, or a virus can kill a human of superior intelligence.
https://www.forbes.com/sites/quora/...ver-become-a-threat-to-humanity/#4e2f7ddd2142
So, do you believe that “willful machines” will ever be constructed by humans? That is, is it possible for humans to build machines or intelligent entities that will develop, or can be programmed to have, a will of their own? If so, by what process would an independent will develop, or be programmed, in a human-made machine or entity? And, if such a thing were possible, why would such machines or entities pose a threat to human civilization? I.e., why wouldn't we program them to be at least as civil and peace-loving as the average human, if not more so?