
Movies on Artificial Intelligence (AI)

DawudTalut

Peace be upon you.
This is a subject that's getting closer and closer to the religious/philosophical themes of this website. If we ever do develop fully self-aware AI, what will happen? Will it be the end of the human race, a cyborg version of it, a symbiotic relationship (and for how long?), or is it impossible or immoral in the first place?

Some of the most salient and prescient(?) examples of it IMNTBHO are:

2001: A Space Odyssey
AI: Artificial Intelligence
Her (my favorite)
Ex Machina, which opens wide in April. Currently 94% on Rotten Tomatoes and 8.0 on IMDb.
Peace be on you.
There is no need to be worried at a practical level. If these things were going to have any huge effect, poverty in Africa and the super-militancy of ISIS etc. would have been eliminated by now... But sane governments need to keep a tight check on their development.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Bill Gates recently said that he considered AI to be a threat, but that was a press release. I think most people don't understand how AI works. AI is a threat, but not the threat of an AI going crazy. The real and direct threat is the use of AI.

Current A.I. tech is capable of the following:
  • Identify an animal or other object and fire a projectile at it
  • Traverse to any point on a map unmanned
  • Fly an aircraft without direct human control
  • Sort through objects at high speed
  • Self-repair or repair other AI units
  • Automatically analyze many simple systems
  • Run and monitor chemical processing
  • Toss objects and catch them
  • Assemble materials into a system or weapon
  • Print parts, cut parts, and test for QA
  • Understand audible language and listen for key words or phrases
  • Read sign language (theoretically; I don't know if anyone has a system that does this)
  • Describe visual information audibly or as text
  • Navigate some limited kinds of obstacle courses without human assistance
  • Use computing power and other techniques to see through walls
  • Detect motion
  • Detect smells and sounds
  • Detect emotional cues
  • Calculate distances, shapes, intercept coordinates, volumes, and lengths
  • Follow fairly complex programmed steps
  • Recognize faces (see the sketch after this post)
  • Many other things
So the threat of AI is real along with its usefulness. It doesn't have to become a raging lunatic master computer to be dangerous.
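
For a sense of how routine some of these items already are, here is a minimal face-detection sketch (detection being a simpler cousin of the full recognition the list mentions). It uses OpenCV's bundled, pre-trained Haar cascade; the image filename is just a placeholder:

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Assumes `pip install opencv-python`; "photo.jpg" is a placeholder path.
import cv2

# Load the pre-trained frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")                 # read the image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Returns one (x, y, width, height) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")

for (x, y, w, h) in faces:                    # draw a box around each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", img)
```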
 

Ouroboros

Coincidentia oppositorum
Bill Gates recently said that he considered AI to be a threat, but that was a press release. I think most people don't understand how AI works. AI is a threat, but not the threat of an AI going crazy. The real and direct threat is the use of AI.
...
So the threat of AI is real along with its usefulness. It doesn't have to become a raging lunatic master computer to be dangerous.
Agree.

The biggest threat really is that we'll become lazy, dumb, and bored. AI, robots, and automation are replacing us in areas where we have to think and work hard. Who cares why 1+1=2 when a calculator can do it for you. :D

We'll become so dependent on technology that we forget how it works, and when the day comes to repair it, we'll look to the robots to do it, but if they're broken too...
 

illykitty

RF's pet cat
Came back yesterday from the cinema; I went to see Ex Machina (it's out here in the UK). I thought it was a good movie, but... well, I don't want to spoil it. Let's just say it didn't end how I was expecting it to. Perhaps I'll discuss it, if I remember to, in a few months when it's out elsewhere as well.

Now, with AI, I'm not really scared of it. With all these movies, I'm sure people have already considered some possible repercussions. I think they can be of great help in the future. I'm more wary of the human users, tbh. Perhaps there can be safety protocols? I'm not qualified to say... I'm just a layperson very much into reading future-related blogs and websites. :D
 

ThePainefulTruth

Romantic-Cynic
Bill Gates recently said that he considered AI to be a threat, but that was a press release. I think most people don't understand how AI works. AI is a threat, but not the threat of an AI going crazy. The real and direct threat is the use of AI.
...
So the threat of AI is real along with its usefulness. It doesn't have to become a raging lunatic master computer to be dangerous.

Those are all still just advanced computer functions. We've had missiles that can target a specific spot on Earth for decades. As you say, it's how they're used. AI is a different question: an artificial, self-aware intelligence that understands its mortality and ours. Will they be able to lie, and will they freak out like HAL when they're about to be disconnected?

Agree.

The biggest threat really is that we'll become lazy, dumb, and bored. AI, robots, and automation are replacing us in areas where we have to think and work hard. Who cares why 1+1=2 when a calculator can do it for you. :D

We'll become so dependent on technology that we forget how it works, and when the day comes to repair it, we'll look to the robots to do it, but if they're broken too...

Only some of us will surrender to sloth. If we all do, well then, we'll just turn into plants.

Came back yesterday from the cinema; I went to see Ex Machina (it's out here in the UK). I thought it was a good movie, but... well, I don't want to spoil it. Let's just say it didn't end how I was expecting it to. Perhaps I'll discuss it, if I remember to, in a few months when it's out elsewhere as well.

But was it a satisfying ending? The ending is, for me, the critical part of a movie. It doesn't have to be happy or sad or whatever, just not something that leaves you hanging.

Now, with AI, I'm not really scared of it. With all these movies, I'm sure people have already considered some possible repercussions. I think they can be of great help in the future. I'm more wary of the human users, tbh. Perhaps there can be safety protocols? I'm not qualified to say... I'm just a layperson very much into reading future-related blogs and websites. :D

I put together a paper on the several themes in the movie Her, one being AI. We all seem to bow to the speed and accuracy of computers, thinking that it makes them superior--which they are in those areas. But we bring emotion and passion to the table, which provide the motivation to pursue our goals. We have both emotion and reason, though we struggle to balance them. Without emotions, a computer's mortality wouldn't break into its consciousness as anything other than a bit of data. As Samantha told Theo in Her, he taught her how to want. Without emotions, a computer will never be a true AI. It wasn't until the second or third viewing that that mundane statement got across to me what was going on--we take our emotions so much for granted, as well as our full self-awareness.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Those are all still just advanced computer functions. We've had missiles that can target a specific spot on Earth for decades. As you say, it's how they're used. AI is a different question: an artificial, self-aware intelligence that understands its mortality and ours. Will they be able to lie, and will they freak out like HAL when they're about to be disconnected?
AI does not have the potential to become self-aware any time soon. Electronics can imitate neurons, but not quickly enough and not at the scale that brains achieve. This has to do with the number of connections needed between artificial neurons and with how they are trained. You can make a large neural net that operates very slowly, or a small one that operates in real time. Training a small neural net to do one or two tricks (like recognizing letters) is doable, but training a large net to accomplish multiple tasks is much harder. Adding a task does not simply mean adding neurons, either.

Information in a neural net is not easily transferable to a dissimilar neural net, nor can you sew two nets together to add their abilities. You can train them to work together -- but how you go about doing that is not easy to determine. It's because the relationships between neurons do not have something called 'linearity' like other electronics have. Electric circuits have linearity, and that is what enables them to be added together as separate components into systems. You can 'add' two resistors to make one big resistor with twice the resistance, or two circuit boards to make a better, more complex circuit board, but if you 'add' two neural cells you don't get twice the intelligence. Neural nets don't enhance one another in a linear, additive way. Instead, a neural net must be trained, as it is, for every task that it will be given. If it is going to do two things, it must be trained to do both. If you want it to do a lot, you have to make a huge net and devote a lot of time and energy to training it (and hope you aren't wasting your time). You can't train two nets separately and then glue them together (mostly). This is one of the things that researchers are working to change, but for now there are practical limits.
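
A toy illustration of that last point, with every size and number invented for the demo: the NumPy sketch below trains one tiny network on task A (XOR), then keeps training the same weights on task B (AND). Accuracy on task A typically collapses, which is the "it must be trained to do both" problem in miniature.

```python
# Toy demo (NumPy only): a tiny neural net trained on one task tends to
# lose it when retrained on another -- abilities don't simply add.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def train(X, y, steps=8000, lr=1.0):
    global W1, b1, W2, b2
    for _ in range(steps):
        h, out = forward(X)
        d_out = (out - y) * out * (1 - out)   # backprop for squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

def accuracy(X, y):
    return np.mean((forward(X)[1] > 0.5) == y)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([[0], [1], [1], [0]], dtype=float)  # task A
y_and = np.array([[0], [0], [0], [1]], dtype=float)  # task B

train(X, y_xor)
print("XOR accuracy after task A:", accuracy(X, y_xor))  # typically 1.0
train(X, y_and)
print("AND accuracy after task B:", accuracy(X, y_and))  # typically 1.0
print("XOR accuracy after task B:", accuracy(X, y_xor))  # typically degraded
```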
 

illykitty

RF's pet cat
But was it a satisfying ending? The ending is, for me, the critical part of a movie. It doesn't have to be happy or sad or whatever, just not something that leaves you hanging.

I didn't think it left you hanging (well, depends what you mean by that); it just wasn't what I was expecting. I enjoyed it regardless.

Maybe it's just because I was trying to guess what would happen, since a friend said they were doing so, and the ending ended up being different from their guess... I'll have to see if he had a similar conclusion to mine.
 

freethinker44

Well-Known Member
Lol you can't program consciousness into a computer... that's like common sense. You could have a computer the size of the universe, but if all it's doing is a very complicated calculation, it will simply do the calculation faster.

1+1 = 2 no matter if it's done in 100 seconds or 10^-20 seconds.
However, 1+1 =/= awareness, volition, and subjective experience.

This is kind of a straw man. The field of AI is way past doing complex math.

Kind of a tangent here, but this reminds me of an MIT lecture about AI. There was an anecdote about an AI skeptic and a calculus program. The skeptic was shown a difficult calculus problem and asked whether he would consider a computer intelligent if it could solve it. He said yes, it would be an intelligent machine. So they showed him a program that did indeed solve the problem correctly, and the guy was amazed. Then the researcher explained how the program worked, and the guy retracted his amazement, saying, "this computer isn't intelligent after all; it solves the problem the same way I do."
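
As an aside, the kind of program in that anecdote is off-the-shelf today. A few lines of SymPy, a real symbolic-math library, solve textbook calculus symbolically; the integral chosen here is arbitrary:

```python
# Symbolic calculus in a few lines with SymPy -- a modern descendant of
# the program in the anecdote. The integral is an arbitrary example.
import sympy as sp

x = sp.symbols("x")
expr = x**2 * sp.exp(x)

antiderivative = sp.integrate(expr, x)        # indefinite integral
definite = sp.integrate(expr, (x, 0, 1))      # definite integral over [0, 1]

print(antiderivative)          # (x**2 - 2*x + 2)*exp(x)
print(sp.simplify(definite))   # E - 2
```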
 

ThePainefulTruth

Romantic-Cynic
AI does not have the potential to become self-aware any time soon. Electronics can imitate neurons, but not quickly enough and not at the scale that brains achieve.
...
You can't train two nets separately and then glue them together (mostly). This is one of the things that researchers are working to change, but for now there are practical limits.

I invite you to read up on quantum computers. In any case, we've almost always way over- or underestimated advancement in science.
 

ThePainefulTruth

Romantic-Cynic
I didn't think it left you hanging (well, depends what you mean by that); it just wasn't what I was expecting. I enjoyed it regardless.

Maybe it's just because I was trying to guess what would happen, since a friend said they were doing so, and the ending ended up being different from their guess... I'll have to see if he had a similar conclusion to mine.

Thanks. And by hanging I mean open-ended, unanswered questions and plot holes, especially pointless ones, and anything else that works against a satisfying ending.
 

MD

qualiaphile
This is kind of a straw man. The field of AI is way past doing complex math.

Kind of a tangent here, but this reminds me of an MIT lecture about AI. There was an anecdote about an AI skeptic and a calculus program. ...

Not really. Since we don't even understand the very nature of consciousness, the claim that we will be able to program it into our computers sounds so ridiculous that I'm surprised people aren't called out as fools for saying it. I suppose all the hype fools most people, and then you have Kurzweil, who dreams of immortality with AI gods and whatnot; the whole thing is so ridiculous that it's becoming some sort of religion.

There are constant advances being made in which the brain is shown to be more and more complex. Last October they found that dendrites process information; now they're finding that most dendritic spines process information. And I haven't done much since my undergrad days of C++, but in the end computers are built for algorithmic processes. The brain works on non-algorithmic sharing of information. So the very basic architecture of computers is incompatible with the structure of the brain.

Nice story btw, but totally irrelevant to my point. Calculus is still a formal process. Even IF we understand everything in the brain, and IF we change the way computers work, and IF we build an analog computer, and IF we solve the problems of consciousness (because through consciousness we will understand volition and meaning), then we still have to train and raise an AI from the beginning, in a culture, with a body, interacting with the world, to prevent it from going mad.

Without all of this, a fully functional AI is no more realistic than flying through a black hole; heck, we at least know more about the physics of black holes than we know about consciousness. Not to mention that in all the time it takes to develop an AI, we will have developed genetic modification as well as neural implants to augment our own intelligence, so that will push our own intelligence farther.
 

freethinker44

Well-Known Member
Not really. Since we don't even understand the very nature of consciousness, the claim that we will be able to program it into our computers sounds so ridiculous that I'm surprised people aren't called out as fools for saying it.
That's a two-way street. If we can't claim we will be able to create a conscious computer simply because we don't understand consciousness yet, then, for the same reason, you can't claim it won't happen. The best you can do is claim we can't do it now.

There are constant advances being made in which the brain is shown to be more and more complex. ...
Not to mention that in all the time it takes to develop an AI, we will have developed genetic modification as well as neural implants to augment our own intelligence, so that will push our own intelligence farther.
This is another kind of fallacy about AI. People get too hung up on the "intelligence" part of AI, and they'll accept no less than some super-intelligent human-brain computer. This is true only if a human brain is needed to produce intelligence, but it's not. There are different levels of intelligence and even different kinds of intelligence. (Another side note: people are always worried about a traditional neural-net intelligence, but what they should be worried about is swarm intelligence. Modelling intelligence on a swarm of wasps is far more terrifying, and easier to accomplish, than modelling it on the human brain.) I was hinting at this fallacy in the anecdote of the calculus program, and really it is the logical fallacy of "moving the goalposts". Whenever AI reaches a milestone, the victory is discarded out of hand and the goals reset. Once people learn how an AI accomplished a task, it isn't considered intelligent anymore. When people do this I always picture them about to be killed by terminators, saying, "oh, that's not intelligent, it hunts people the same way I would."
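
To make the swarm remark concrete, here is a minimal particle swarm optimization sketch, a toy with made-up parameters rather than anyone's real system. Each particle follows two dumb rules (drift toward its own best spot and toward the swarm's best spot), yet the swarm reliably homes in on the target:

```python
# Minimal particle swarm optimization (PSO): dumb agents, competent swarm.
import numpy as np

rng = np.random.default_rng(42)

def f(p):
    # Toy objective: squared distance from the point (3, -2); minimum is 0.
    return np.sum((p - np.array([3.0, -2.0])) ** 2, axis=-1)

n, dim = 30, 2
pos = rng.uniform(-10, 10, size=(n, dim))  # particle positions
vel = np.zeros((n, dim))                   # particle velocities
best_pos = pos.copy()                      # each particle's best-so-far
best_val = f(pos)
g_best = best_pos[np.argmin(best_val)]     # swarm's best-so-far

for step in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # The two simple rules: inertia plus pulls toward personal/global bests.
    vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
    pos = pos + vel
    vals = f(pos)
    improved = vals < best_val
    best_pos[improved], best_val[improved] = pos[improved], vals[improved]
    g_best = best_pos[np.argmin(best_val)]

print("Swarm's best point:", g_best)       # ends up close to (3, -2)
```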
 

MD

qualiaphile
That's a two-way street. If we can't claim we will be able to create a conscious computer simply because we don't understand consciousness yet, then, for the same reason, you can't claim it won't happen. The best you can do is claim we can't do it now.

What a rubbish response; it doesn't matter if I don't understand consciousness, because I'm not trying to create it. Without understanding it, we can't program it. And I never claimed it can't happen; I said it won't happen for centuries. This is why several AI winters have occurred: computer scientists oversimplify biology and philosophy without understanding how incredibly complex and powerful they are. The human brain is the ultimate product of 2-3 billion years of evolution. You can't replicate that in 100 years, no matter how fast your machine is. As we approach 2045 and no singularity is on the horizon, it will result in such a deep winter for AI that I think as a civilization we will probably feel that it is impossible, and the funding will dry up considerably.

This is another kind of fallacy about AI. People get too hung up on the "intelligence" part of AI, and they'll accept no less than some super-intelligent human-brain computer. ...

If something is algorithmic, it can be replicated given enough computing power. But things like volition are not algorithmic, and without volition there is no AI, just a very complex abacus. That's why it will not be programmed in any computer we have now: the architectures of all computers are different from the brain, which is the only thing in the universe that we know of that has volition.

I never said there won't be AI; I do think there will be machines that think on the level of insects or even rodents in my lifetime. And by think I mean actually emulate, rather than simulate, behavior. But to create a super-intelligent machine which does not go mad and has purpose is so far beyond anything the AI folks have that it will take centuries. Quoting Kurzweil won't really help their cause, since he's been pretty much debunked by neuroscientists.
 

Ouroboros

Coincidentia oppositorum
In 1970 there was a good effort in "Colossus: The Forbin Project".
I finally watched it.

A perfect example of:
1. The super-evil AI computer taking over the world
2. The super-stupid scientist who can't see further than his own ego
3. The super-gullible and naive president/government letting the scientists create a machine that takes over control; then, when the computer demands more, they just give it to it, and suddenly they're surprised when things go wrong. Duh!
 

Ouroboros

Coincidentia oppositorum
What a rubbish response; it doesn't matter if I don't understand consciousness, because I'm not trying to create it. Without understanding it, we can't program it.
That's the core of the problem. You can't program AI, but you can still create it. The way you make it happen is far different from using algorithms and linear thinking. It has to be a more holistic approach.
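
As a small illustration of "created rather than programmed" (a toy analogy only): in the perceptron sketch below, nobody writes down the rule for OR anywhere. The behavior emerges from a training loop over examples, which is the closest thing in code to growing an ability rather than specifying it.

```python
# A behavior nobody wrote down: a perceptron learns OR from examples.
# The "program" is only the learning rule; the logic ends up in the weights.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])       # target behavior: logical OR

w = np.zeros(2)                  # weights start out knowing nothing
b = 0.0

for epoch in range(10):          # classic perceptron learning rule
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (target - pred) * xi
        b += 0.1 * (target - pred)

for xi in X:                     # the learned weights now reproduce OR
    print(xi, "->", 1 if xi @ w + b > 0 else 0)
```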
 