One of my favorites was Screamers.
Peace be on you. This is a subject that's getting closer and closer to the religious/philosophical themes of this website. If we ever do develop fully self-aware AI, what will happen? Will it be the end of the human race, a cyborg version of it, a symbiotic relationship (and for how long), or is it impossible or immoral in the first place?
Some of the most salient and prescient(?) examples of it IMNTBHO are:
2001: A Space Odyssey
AI: Artificial Intelligence
Her (my favorite)
Ex Machina, which opens wide in April. Currently 94% on Rotten Tomatoes and 8.0 on IMDb.
Bill Gates recently said that he considered AI to be a threat, but that was a press release. I think most people don't understand how AI works. AI is a threat, but it is not the threat of an AI intelligence going crazy. The real and direct threat is the use of AI.
Current A.I. tech is capable of the following (a rough code sketch of one of these follows the list):
So the threat of AI is real along with its usefulness. It doesn't have to become a raging lunatic master computer to be dangerous.
- Identify an animal or other object and fire a projectile at it
- Traverse to any point on a map unmanned
- Fly an aircraft without direct human control
- Sort through objects at high speed
- Self Repair or repair other AI units
- Automatic analysis of many simple systems
- Run and monitor chemical processing
- Toss objects and catch them
- Assemble materials into a system or weapon
- Print parts, cut parts, test for QA
- Understand audible language and listen for key words or phrases
- Read sign language (theoretically. I don't know if anyone has a system that does this.)
- Describe visual information audibly or as text
- Navigate some limited kinds of obstacle courses without human assistance
- Use computing power and other techniques to see through walls
- Detect motion
- Detect smells, sounds.
- Detect emotional cues
- Calculate distances, shapes, intercept coordinates, volumes, lengths
- Follow fairly complex programmed steps
- Recognize faces
- many other things
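For a sense of how mundane some of these are in code, here's a rough sketch of the face item using OpenCV's bundled Haar-cascade detector. This is just my own toy example, not tied to any particular system on the list, and it only detects faces in a photo; it doesn't identify whose they are. The filename is a placeholder.

```python
import cv2  # OpenCV: pip install opencv-python

# Load the pre-trained frontal-face Haar cascade that ships with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")               # placeholder: any photo with faces in it
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, width, height) boxes, one per detected face
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", img)
print(f"Detected {len(faces)} face(s)")
```

A dozen lines of off-the-shelf code, and nothing in it is "intelligent" in any deep sense. That's rather the point.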
Agree.
The biggest threat really is that we'll become lazy, dumb, and bored. AI, robots, and automation are replacing us in the areas where we have to think, work hard, and so on. Who cares why 1+1=2 when a calculator can do it for you?
We'll become so dependent on technology that we forget how it works, and when the day comes to repair it, we'll look to the robots to do it, but if they're broken too...
Came back yesterday from the cinema; went to see Ex Machina (it's out here in the UK). I thought it was a good movie, but... well, I don't want to spoil it. Let's just say it didn't end how I was expecting it to. Perhaps I'll discuss it, if I remember to, in a few months when it's out elsewhere as well.
Now, with AI, I'm not really scared of it. With all these movies, I'm sure people have already considered some possible repercussions. I think they can be of great help in the future. I'm more wary of the human users, tbh. Perhaps there can be safety protocols? I'm not qualified to say... I'm just a lay person very much into reading future-related blogs and websites.
Those are all still just advanced computer functions. We've had missiles that can target a specific spot on Earth for decades. As you say, it's how they're used. AI is a different question: an artificial, self-aware intelligence that understands its and our mortality. Will they be able to lie, and will they freak out like HAL when they're about to be disconnected?
But was it a satisfying ending? An ending is for me the critical part of a movie. It doesn't have to be happy or sad or whatever, just not something that leaves you hanging.
Lol, you can't program consciousness into a computer... that's like common sense. You could have a computer the size of the universe, but if all it's doing is a very complicated calculation, it will simply do the calculation faster.
1+1 = 2 no matter if it's done in 100 seconds or 10^-20 seconds.
However, 1+1 ≠ awareness, volition, and subjective experience.
AI does not have the potential to become self aware any time soon. Electronics can imitate neurons but not quickly enough and not at the scale that brains can. This has to do with the number of connections needed between artificial neurons and the training of them. You can make a large neural net that operates very slowly or a small one that operates in real time. Training a small neural net to do one or two tricks (like recognize letters) is doable, but training a large net to accomplish multiple tasks is more difficult. Adding a task does not mean simply adding neurons, either.
Information in a neural net is not easily transferable to a dissimilar neural net, nor can you sew two nets together to add their abilities. You can train them to work together, but how you go about doing that is not easy to determine. It's because the relationships between neurons do not have something called 'linearity' like other electronics have. Electric circuits have linearity, and that is what enables them to be added together as separate components into systems. You can 'add' two resistors to make one big resistor with twice the resistance, or two circuit boards to make a better, more complex circuit board, but if you 'add' two neural cells you don't get twice the intelligence. You can add regular circuits together, but you can't just add neural nets together; they don't enhance one another in a linear, additive way. Instead a neural net must be trained, as a whole, for every task that it will be given. If it is going to do two things, it must be trained to do both. If you want it to do a lot, you have to make a huge net and devote a lot of time and energy to training it (and hope you aren't wasting your time). You can't train two nets separately and then glue them together (mostly). This is one of the things that researchers are working to change, but there are just practical limits.
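To make the "small net, one or two tricks" point concrete, here is a minimal sketch of my own (just numpy, with arbitrary sizes and learning rate) of a tiny two-layer net trained on one trick, the XOR function. The point is only that everything the net can do has to be baked into its weights by a training loop like this one.

```python
import numpy as np

# Toy training data: the XOR function (one "trick" for a small net to learn)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights (sizes chosen arbitrarily)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):                   # training: repeated exposure to this one task
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network output
    err = out - y                           # how wrong it is on XOR, and only on XOR

    # Backpropagation: nudge every weight toward lower error on this task
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # usually close to [0, 1, 1, 0]
```

Teaching this same set of weights a second, unrelated trick would mean going back and retraining on both tasks together; there is no way to just bolt a second trained net onto this one, which is the non-linearity point above.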
I didn't think it left you hanging (well depends what you mean by that), just wasn't what I was expecting. I enjoyed it, regardless.
Maybe it's just because I was trying to guess what would happen, since a friend said they were doing the same, and the ending ended up being different from their guess... I'll have to see if he had a similar conclusion to mine.
This is kind of a straw man. The field of AI is way past doing complex math.
Kind of a tangent here, but this reminds me of an MIT lecture about AI. There was an anecdote about an AI skeptic and a calculus program. The skeptic was shown a difficult calculus problem and asked if he thought a computer might be intelligent if it could solve it. He said yes, it would be an intelligent machine. So they showed him a program that did indeed solve the problem correctly, and the guy was amazed. Then the researcher explained how the program worked, and the guy retracted his amazement, saying, "This computer isn't intelligent after all; it solves the problem the same way I do."
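For what it's worth, here's roughly the flavor of thing that anecdote is about: a symbolic math library applying the same formal rules a student would. I'm using Python's sympy as a stand-in; the lecture's actual program was a much older symbolic-integration system, so treat this only as an illustration.

```python
import sympy as sp

x = sp.symbols("x")

# A calculus problem of the "would a computer be intelligent if it solved this?" sort
problem = x**2 * sp.exp(x)

# sympy solves it by applying the same formal rules (integration by parts, etc.)
# that a human student would -- which was exactly the skeptic's complaint.
answer = sp.integrate(problem, x)
print(answer)                                     # (x**2 - 2*x + 2)*exp(x)

# Check: differentiating the answer gives back the original integrand
print(sp.simplify(sp.diff(answer, x) - problem))  # 0
```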
Yeah, I'm not read up on that.
I invite you to read up on quantum computers. In any case, we've almost always way over- or under-estimated advancement in science.
Not really. Since we don't even understand the very nature of consciousness, to claim that we will be able to program it into our computers sounds so ridiculous that I'm surprised people aren't called out for being fools for saying this.
There are constant advances being made showing the complexity of the brain to be greater and greater. Last October they found that dendrites process information; now they're finding that most dendritic spines process information. And I haven't done much since my undergrad days of C++, but in the end computers are built for algorithmic processes. The brain works on non-algorithmic sharing of information. So the very basic architecture of computers is incompatible with the structure of the brain.
Nice story, btw, but totally irrelevant to my point. Calculus is still a formal process. Even IF we understand everything in the brain, and IF we change the way computers work, and IF we build an analog computer, and IF we solve the problems of consciousness (because through consciousness we will understand volition and meaning), then we still have to train and raise an AI from the beginning, in a culture, with a body that interacts with the world, to prevent it from going mad.
Without all of this, a fully functional AI is no more likely than flying through a black hole; heck, we at least know more about the physics of black holes than we know about consciousness. Not to mention that in all the time it takes to develop an AI, we will have developed genetic modification as well as neural implants to augment our own intelligence, so that will push our own intelligence further.
That's a two-way street there. If we can't claim we will be able to create a conscious computer simply because we don't understand consciousness yet, then, for the same reason, you can't claim it won't happen. The best you can do is claim we can't do it now.
This is another kind of fallacy about AI. People get too hung up on the "intelligence" part of AI, and they'll accept no less than some super-intelligent human-brain computer. That would only be true if a human brain were needed to produce intelligence, but it's not. There are different levels of intelligence and even different kinds of intelligence. (Another side note: people are always worried about a traditional neural-net intelligence, but what they should be worried about is swarm intelligence. Modelling intelligence on a swarm of wasps is far more terrifying, and easier to accomplish, than modelling it on the human brain.) I was hinting at this fallacy in the anecdote of the calculus program, and really it is the logical fallacy of "moving the goal posts". Whenever AI reaches a milestone, the victory is discarded out of hand and the goals reset. Once people learn how an AI accomplished a task, it isn't considered intelligent anymore. When people do this I always picture them about to be killed by terminators, saying, "Oh, that's not intelligent, it hunts people the same way I would."
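On the swarm point: simple swarm methods really are easy to get working. Here's a tiny particle swarm optimizer of my own (nothing to do with wasps specifically, and the numbers are arbitrary) where dumb individual particles, each following a couple of local rules, collectively home in on the minimum of a function no single particle "understands".

```python
import numpy as np

def f(p):
    # The function the swarm is collectively minimizing (minimum at [3, 3])
    return np.sum((p - 3.0) ** 2, axis=-1)

rng = np.random.default_rng(1)
pos = rng.uniform(-10, 10, size=(30, 2))     # 30 particles in 2 dimensions
vel = np.zeros_like(pos)
best_pos = pos.copy()                        # each particle's personal best position
best_val = f(pos)
g_best = pos[np.argmin(best_val)].copy()     # the swarm's best position so far

for step in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Each particle follows local rules only: keep some momentum, pull toward its own
    # best, pull toward the swarm's best. Any "intelligence" exists at the group level.
    vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
    pos = pos + vel

    val = f(pos)
    improved = val < best_val
    best_pos[improved] = pos[improved]
    best_val[improved] = val[improved]
    g_best = best_pos[np.argmin(best_val)].copy()

print(g_best)   # converges to roughly [3, 3]
```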
I finally watched it. In 1970 there was a good effort in "Colossus: The Forbin Project".
That's the core of the problem. You can't program AI, but you can still create it. The way you make it happen is far different than using algorithms and linear thinking. It has to be a more holistic approach.
What a rubbish response. It doesn't matter if I don't understand consciousness; I'm not trying to create it. Without understanding it, we can't program it.