ecco
Veteran Member
Whatever led you to think real life offended me?

Sorry if real life offended you.
Whatever led you to think real life offended me?
Computers already out-think humans to make the best use of their time.
I have no idea what you are referring to.

I see no other reason you shouldn't gripe about a relevant post on a public thread.
Is Kaku correct that AI will be dangerous as AI machines attain self awareness?
Who is right about AI: Mark Zuckerberg or Elon Musk?
I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.
Kaku is a pop-scientist/pseudo-scientist. And a very annoying one at that.
Kaku is a pop-scientist/pseudo-scientist. And a very annoying one at that.

A pseudo-scientist? What do you base your opinion on? And how does he annoy you?
We're a long way off from AI. Self-recognition is different from self-awareness. Self-awareness has more nuances to it: it means the ability to identify yourself as an entity separate from the environment, and to question it. I'm not sure AI will ever achieve this state, because we have emotions and responses to pleasurable versus painful stimuli. These may be necessary elements for AI to develop. I'm not sure logic alone is enough to spark the desire to contemplate one's existential self.
Here, you're describing simulating AI, not actual AI. I suppose it's possible there's no difference, but I argue that actual AI has to come to grips with its existential nature. This, I think, is not something you can program.

Even the most basic avatars from computer games of 35 years ago had something called "hit points". When these were diminished, the avatar, "feeling the pain", could no longer fight as well. When all were gone, the avatar died.
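The "hit points" mechanic described above can be sketched in a few lines. This is a hypothetical illustration, not code from any actual game; the `Avatar` class and its numbers are invented. Damage reduces fighting effectiveness, and at zero hit points the avatar dies.

```python
class Avatar:
    """Toy avatar whose fighting ability degrades with damage (hypothetical)."""

    def __init__(self, hit_points=100):
        self.hit_points = hit_points

    @property
    def alive(self):
        return self.hit_points > 0

    def attack_strength(self, base=10):
        # "Feeling the pain": effectiveness scales with remaining hit points.
        return base * self.hit_points / 100 if self.alive else 0

    def take_damage(self, amount):
        self.hit_points = max(0, self.hit_points - amount)


hero = Avatar()
hero.take_damage(60)
print(hero.attack_strength())  # 4.0 -- weakened, but still fighting
hero.take_damage(40)
print(hero.alive)              # False -- all hit points gone
```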
In 1991, Sid Meier came out with a game called Civilization. At its most basic, the human plays against a number of AI "leaders". The leaders all have different personalities: some are more aggressive, some more deceitful, some more adventurous, and so on. These AI leaders react to circumstances. I'm not saying they experience pleasure and pain in the same way that humans do. But if you walk across the territory of an aggressive leader, you will probably start a war. As wars go on, leaders feel the weight of their losses, evaluate them against gains, and may sue for peace.
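The "weigh losses against gains" behavior could be approximated by a rule of roughly this shape. This is purely a guess at the flavor of such logic, not Civilization's actual code; the function name and the aggressiveness scaling are invented for the sketch.

```python
def wants_peace(losses, gains, aggressiveness):
    """Decide whether an AI leader sues for peace (hypothetical rule).

    aggressiveness is in (0, 1]: a higher value discounts losses more,
    so the leader tolerates a worse war before folding.
    """
    return losses * (1 - aggressiveness) > gains


# A timid leader gives up; an aggressive one fights on, same war.
print(wants_peace(losses=50, gains=10, aggressiveness=0.3))  # True
print(wants_peace(losses=50, gains=10, aggressiveness=0.9))  # False
```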
What our brains do with electrical impulses and chemicals, avatars do with 1s and 0s. "Fuzziness" can be established with just 1s and 0s.
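One way to read that claim: "fuzzy" degrees of truth are just numbers between 0 and 1, and computers store those as ordinary binary bit patterns. A minimal sketch of a fuzzy membership function (the `warm` thresholds here are arbitrary, chosen only for illustration):

```python
def warm(temp_c):
    """Degree, in [0, 1], to which temp_c counts as 'warm' (illustrative)."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15  # linear ramp between the two thresholds


print(warm(10))    # 0.0 -- not warm at all
print(warm(17.5))  # 0.5 -- somewhat warm
print(warm(30))    # 1.0 -- fully warm
```

The crisp input yields a graded answer, yet everything underneath is still 1s and 0s.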
For now brains have more cells than computers have bits. For now.
This site has a great visualization of the increases in processing power since ~1960:
https://www.visualcapitalist.com/visualizing-trillion-fold-increase-computing-power/
The 7090 at the top of the chart cost $2.9 million (equivalent to $19 million in 2018) and had a blazing speed of about 2 million flops.
By comparison, a PS4 can churn out 1,800 billion flops and costs about $300.
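Using just those two quoted figures, the back-of-envelope ratios work out as follows (a sketch; the numbers are the ones stated above, nothing more):

```python
flops_7090 = 2e6      # IBM 7090: ~2 million flops
cost_7090 = 19e6      # ~$19 million in 2018 dollars
flops_ps4 = 1.8e12    # PS4: ~1,800 billion flops
cost_ps4 = 300        # ~$300

speedup = flops_ps4 / flops_7090
value_gain = (flops_ps4 / cost_ps4) / (flops_7090 / cost_7090)

print(f"{speedup:,.0f}x faster")                    # 900,000x faster
print(f"~{value_gain:,.0f}x more flops per dollar")
```

Roughly 900,000 times the raw speed, and on the order of tens of billions of times more computing per dollar.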
Here, you're describing simulating AI, not actual AI. I suppose it's possible there's no difference, but I argue that actual AI has to come to grips with its existential nature. This, I think, is not something you can program.
No, I am describing simulating I (intelligence). Since it is a simulation, we can refer to it as A (artificial).

The opposite of artificial intelligence is natural intelligence. We are naturally intelligent. I suppose this term is convoluted, because when I say AI, I mean sentience, not just something simply following algorithms. Simulating intelligence or emulating it is quite different from actually being intelligent. I have to direct you to the Chinese room problem. How do we know, or can we ever know, that something is sentient? This is what I'm talking about.
This is all very impressive, but it did not address what I said. They said DeepMind played millions of games to improve, and compared its play-style to intuition. I'm not entirely sure how far any so-called AI can get with this. Intuition may not work this way; it may use an accumulation of emotions, experience, and taking chances (i.e., gut feelings). No doubt they are getting the experience part, but they're missing vital ingredients for actual AI.

What does "existential nature" even mean? Self-awareness? Did you not get the point I was making about the AI leaders in Civilization?
Anything can be programmed.
A computer was programmed to win at chess. A later version had the computer "learning" to play chess. A computer has now been programmed to learn to play Go.
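The "learning to play" idea can be illustrated with a toy self-play learner. This is emphatically not how Deep Blue or AlphaGo work; it is a minimal sketch of the principle, using the much simpler game of Nim (10 stones, take 1-3 per turn, whoever takes the last stone wins), with all names invented for the example.

```python
import random

random.seed(0)
Q = {}  # (stones_remaining, move) -> learned value for the player to move


def best_move(n):
    """Greedy move from the learned values (defaults to 0 for unseen moves)."""
    moves = [m for m in (1, 2, 3) if m <= n]
    return max(moves, key=lambda m: Q.get((n, m), 0.0))


def train(episodes=30000, alpha=0.1, epsilon=0.2):
    """Self-play: both sides share Q, explore with probability epsilon."""
    for _ in range(episodes):
        n, history = 10, []  # history holds one (state, move) per ply
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            m = random.choice(moves) if random.random() < epsilon else best_move(n)
            history.append((n, m))
            n -= m
        # Whoever moved last took the last stone and won. Credit each move
        # with the final outcome from its own mover's perspective.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward  # players alternate each ply


train()
print(best_move(5))  # after training this should be 1: leave a multiple of 4
```

No chess-style evaluation function is hand-coded here; the program discovers the "leave a multiple of 4" strategy purely from the outcomes of its own games.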
... Self-recognition is different from self-awareness. Self-awareness has more nuances to it. It means the ability to identify yourself as an entity separate from the environment and question it. ...
Yes, I think this is an important part of becoming self-aware: the drive to avoid pain and seek pleasure, and how this relates to the self. Basically, desire is, I think, a necessary property.

To me, self-awareness is one step more. Self-awareness of separate selves inevitably leads to pain. The individual self, empowered by true intelligence, however, has the competence to see through the veil of separateness and overcome pain.