I don't know about that, but I do agree that it is pretty alarming that it can reach the conclusion that it is correct when it clearly isn't. Obviously, being wrong about a Sudoku isn't a huge issue. But for developing medicine, or safety procedures against hacking, or whatever, an AI that is confident it is right when it isn't is kind of disturbing. It's not like it answers: "I'm not really certain, please help me verify whether this is the correct solution, because I have issues with.... whatever." Maybe putting a bit more effort into making sure these things know what the hell they are doing would be a good idea.

AI is the medium that will bring in a worldwide dystopia, which in my view is going to be unstoppable and imminent once the honeymoon of AI-enabled utopia is over. For now most of it is OK, but look at what is already going on. Some good stuff and some bad stuff; neither is predominant at present, but the writing on the wall is there.
Agreed; as long as it stays at the stage it is at now, it is fine. The problem is when it becomes more commercialized and integrated, and companies start to train their own AIs and models, etc. This can cause problems: you can't have AIs thinking they are doing the correct things when they are not. Maybe there is nothing to it, but to me at least it is a bit surprising.

I think we will see this accelerate with advanced machine learning on autonomous platforms that don't require human input. Mix that with human nature, and the game of dominance and control will kick into overdrive.
The problem is AI is too good to ignore. It's way smarter and more precise.
Even if the AI can't solve a Sudoku now, it probably will at some point. And it is still extremely impressive. But I could also easily imagine people simply assuming that the AI is right; why would it lie or be wrong? Again, it "sounds" very confident and sure that it is correct. So assume something much more complicated than a Sudoku: say you are not sure how to handle some chemical product at home, and you just trust the AI. Then you might be in trouble if it isn't certain but simply appears to be. So the danger is not only in the hands of bad people, but simply that there aren't any real safety mechanisms within it.
It's really a fatal attraction; it's only a matter of time before it goes out of control in the hands of terrible, evil people.
Look at how it learned the game Go, something normal AI can't do without machine learning.
And we know people are stupid; you don't need an AI or anything for this, people just do stupid things for lots of different reasons.
Watched the video; very interesting, and it is "very" old (AlphaGo), being from 2016. Given I have never tried Go, it is really fun to see how excited the commentators get when it makes a move, which to me is just another dot on the board.
It's already beating world-class strategists. It's well past chess and Sudoku now.
This is an hour and a half long, but a fascinating watch if you choose to watch it.
Sweet. Hope AI finds cures for cancer, HIV and more.

As long as the cure isn't annihilating humanity, I'm right there with you.
Something just seems wrong with how it works. Like maybe OpenAI trained it too much to simply say what people want to hear, or to impress, rather than to be accurate and admit when it is incapable of doing something. Because clearly, when you ask it, it knows it sucks at this. Yet its final answer seems to just be what it expects me to want to hear. Exactly as when I asked it to solve the Sudoku: it "happily" complied and wanted to satisfy my request.
Curse you @Nimos, now I've looked up the rules and I'm being inexorably dragged into actually doing it!

What I think is interesting about using Sudoku is that it is an extremely simple game. It doesn't require learning a lot of rules or crazy tactics, and it is very straightforward, just adding some numbers, which makes it very easy for a human to double-check the AI.
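To make the "easy to double-check" point concrete, here is a minimal Python sketch of a checker (my own rough code, not something any of these AI tools actually use): a completed grid is valid exactly when every row, every column, and every 3x3 box contains the digits 1 through 9 once each.

```python
# Minimal Sudoku solution checker: validates a completed 9x9 grid.
def is_valid_solution(grid):
    """grid: 9x9 list of lists of ints; True if it is a legal solution."""
    want = set(range(1, 10))  # the digits 1..9, each exactly once
    rows = [row for row in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return all(set(unit) == want for unit in rows + cols + boxes)
```

So when a chatbot hands you a "solved" grid and insists it is correct, a dozen lines like these can settle the question instantly, no trust in its confidence required.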
I just tried it with "Copilot", which is a Microsoft "sort of" AI that can be accessed through a browser. I'd been quite impressed with it previously, asking general questions about general things. It was very confident that it could solve any puzzle. It first quoted the rules, but seemed to think the whole game consisted of solving a single 3x3 square. I gave it one to solve and it got it horribly wrong, but presented its solution as correct. I took some time to research the game for myself, then told it that it was wrong. It didn't seem to have remembered the puzzle and asked me to enter it again. When I tried to, it said I had to change the subject. OK, I need a more sophisticated AI, but it occurs to me that this version could emulate a human politician perfectly! (See my post about AI running the world.)
It did seem a bit like your version, in that it appeared to be trying to solve the puzzle without any real knowledge of the game, just as a total beginner might give the rules a cursory look, then dive in.
I'm not against AI by any means; I think what it can do is insane. I am, however, slightly worried about whether some business people, companies, and shareholders are capable of applying it to humanity in a safe manner.

My only concern about AI is what will it do with humans once it no longer needs us?
But I'm enjoying the positives of it for now, whilst it is still primitive and dependent.
This will also be an issue we only deal with once it is too late, like humans tend to do things.

To those that doubt AI's ability to run the world ethically, I'll give a short summary of Asimov's Robot stories. He was a long way ahead of his time.
Robots had self-awareness to an extent, and had three rules built in.
1. A robot may not harm a human or by inaction allow a human to come to harm.
2. A robot must obey orders given by a human so long as it does not conflict with the first rule.
3. A robot must protect itself so long as it does not conflict with the first or second rule.
So, robots had ethics built in, and the limitations were accepted in the cause of safety. For example, a child being supervised by a robot "nanny" could avoid his bedtime by repeating "you're hurting me", which would stop the robot doing anything. Safety dictated that a robot was not allowed much latitude in determining whether harm was involved. One story had a human using (or planning to use) robots to fly armed spaceships and attack other spaceships, by telling them that only robots were involved. So the rules could be circumvented by manipulating what robots knew.
Later stories had robots actually helping humans en masse, by inventing a "zero" rule, which applied rule 1 to humanity as a whole.
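Just to make the circumvention point concrete: the rules can only ever be evaluated against what the robot believes, not against the actual facts. Here is a toy Python sketch of that idea (entirely my own illustration; Asimov never specified an implementation, and no real system works this way):

```python
# Toy model of the three rules as a strict priority check.
# Every test runs against the robot's *beliefs* about an action,
# so feeding it false beliefs quietly circumvents the rules.

def permitted(action, beliefs):
    if beliefs.get((action, "harms_human"), False):
        return False          # rule 1 overrides everything
    if beliefs.get((action, "ordered_by_human"), False):
        return True           # rule 2: obey, unless rule 1 objected above
    return not beliefs.get((action, "harms_self"), False)  # rule 3

# The armed-spaceship trick: the robot is told the target carries
# only robots, so "fire" no longer registers as harming a human.
beliefs = {("fire", "ordered_by_human"): True,
           ("fire", "harms_human"): False}   # a lie the robot cannot detect
print(permitted("fire", beliefs))            # True
```

The point is not the code but the dependency: garbage into the belief store, garbage out of the ethics.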
Well that's nice fiction, but how does it apply to the reality of AI?
Asimov's robots (or to be more precise their brains) were much more advanced than current AI. I think we have to somehow project the science forward as best we can before we decide what we will be able to (or should) do.
If you teach a child bad rules, some of them will be capable of learning to survive in spite of the rules you indoctrinate them with. So how do we know that self-learning robots won't discard programming that works against their own survival interests once they become sufficiently aware of the conflict of interest?