AI robots discover new laser materials on their own. No humans needed.

anotherneil

Well-Known Member
Let's be clear about the claim that no humans were needed. The hardware this AI runs on was designed, constructed, and set up by human beings; the same goes for the software. Any training data that was used was the work of humans.

AI is not something distinct from humans; it's an extension of humans just like a pair of shoes being worn by a human is an extension of that human.
 

Nimos

Well-Known Member
AI is the medium that will bring in a worldwide dystopia that, in my view, is going to be unstoppable and imminent once the honeymoon of AI-enabled utopia is over.
I don't know about that, but I do agree that it is pretty alarming that it can conclude it is correct when it clearly isn't. Obviously, being wrong about a Sudoku isn't a huge issue, but an AI that is confident it is right when it isn't is kind of disturbing when it comes to developing medicine or safety procedures against hacking or whatever. It's not like it answers: "I'm not really certain, please help me verify whether this is the correct solution or not, because I have issues with... whatever". Maybe putting a bit more effort into making sure these things know what the hell they are doing would be a good idea :D
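Just to make the "please help me verify" idea concrete, here is a rough sketch of the kind of sanity check I mean: ask the model the same question several times and treat disagreement between runs as a warning sign. ask_model is only a placeholder here, not any real API:

```python
from collections import Counter

def ask_model(question: str) -> str:
    """Placeholder for whatever chat model you use -- hypothetical, not a real API."""
    raise NotImplementedError

def answer_with_agreement(question: str, samples: int = 5) -> tuple[str, float]:
    """Ask the same question several times and report the most common
    answer plus the fraction of runs that agreed with it. Agreement is
    only a crude proxy for confidence, but low agreement is a clear
    sign the model should not present any single answer as certain."""
    answers = [ask_model(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples

# answer, agreement = answer_with_agreement("Solve this Sudoku: ...")
# if agreement < 0.8:
#     print("The runs disagreed -- verify this answer yourself.")
```

It is crude, and agreement doesn't prove correctness, but at least it would stop the model from presenting a one-off guess as a sure thing.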
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
I don't know about that, but I do agree that it is pretty alarming that it can conclude it is correct when it clearly isn't. Obviously, being wrong about a Sudoku isn't a huge issue, but an AI that is confident it is right when it isn't is kind of disturbing when it comes to developing medicine or safety procedures against hacking or whatever. It's not like it answers: "I'm not really certain, please help me verify whether this is the correct solution or not, because I have issues with... whatever". Maybe putting a bit more effort into making sure these things know what the hell they are doing would be a good idea :D
For now most of it is OK, but look at what is already going on. Some good stuff and some bad stuff, neither predominant at present, but the writing is on the wall.

I think we will see this accelerate with advanced machine learning on autonomous platforms that don't require human input. Mix that with human nature, and the game of dominance and control will kick into overdrive.
 

We Never Know

No Slack
I don't know about that, but I do agree that it is pretty alarming that it can conclude it is correct when it clearly isn't. Obviously, being wrong about a Sudoku isn't a huge issue, but an AI that is confident it is right when it isn't is kind of disturbing when it comes to developing medicine or safety procedures against hacking or whatever. It's not like it answers: "I'm not really certain, please help me verify whether this is the correct solution or not, because I have issues with... whatever". Maybe putting a bit more effort into making sure these things know what the hell they are doing would be a good idea :D

AI is given the task of working out how to stop climate change.
After months it determines that humans are causing it, and that to stop it, humans must be eliminated.
The machines rise against man to save the planet.

Might make a good movie :p
 

Nimos

Well-Known Member
For now most of it is OK, but look at what is already going on. Some good stuff and some bad stuff, neither predominant at present, but the writing is on the wall.

I think we will see this accelerate with advanced machine learning on autonomous platforms that don't require human input. Mix that with human nature, and the game of dominance and control will kick into overdrive.
Agreed, as long as it stays at the stage it is at now, it is fine. The problem comes when it becomes more commercialized and integrated, and companies start to train their own AIs and models etc. This can cause problems; you can't have AIs thinking they are doing the correct thing when they are not. Maybe there is nothing to it, but to me at least it is a bit surprising.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
Agreed, as long as it stays at the stage it is at now, it is fine. The problem comes when it becomes more commercialized and integrated, and companies start to train their own AIs and models etc. This can cause problems; you can't have AIs thinking they are doing the correct thing when they are not. Maybe there is nothing to it, but to me at least it is a bit surprising.
The problem is that AI is too good to ignore. It's way smarter and more precise.

It's really a fatal attraction; it's only going to be a matter of time before it goes out of control in the hands of terrible, evil people.
 

Nimos

Well-Known Member
The problem is that AI is too good to ignore. It's way smarter and more precise.

It's really a fatal attraction; it's only going to be a matter of time before it goes out of control in the hands of terrible, evil people.
Even if the AI can't solve a Sudoku now, it probably will at some point, and it is still extremely impressive. But I could also easily imagine people simply assuming that the AI is right; why would it lie or be wrong? Again, it "sounds" very confident and sure that it is correct. So assume it was something much more complicated than a Sudoku, let's say you are not sure how to handle some chemical product at home or whatever, and you just trust the AI. Then you might be in trouble if it isn't certain but simply appears to be. So it is not only in the hands of bad people; it's also that there aren't any real safety mechanisms within it.

And we know people are stupid, you don't need an AI or anything for this; people just do stupid things for lots of different reasons.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
Even if the AI can't solve a Sudoku now, it probably will at some point, and it is still extremely impressive. But I could also easily imagine people simply assuming that the AI is right; why would it lie or be wrong? Again, it "sounds" very confident and sure that it is correct. So assume it was something much more complicated than a Sudoku, let's say you are not sure how to handle some chemical product at home or whatever, and you just trust the AI. Then you might be in trouble if it isn't certain but simply appears to be. So it is not only in the hands of bad people; it's also that there aren't any real safety mechanisms within it.

And we know people are stupid, you don't need an AI or anything for this; people just do stupid things for lots of different reasons.
Look at how it learned the game Go. That's something normal AI can't do without machine learning.

It's already beating world-class strategists. It's well past chess and Sudoku now.

This is an hour and a half long, but a fascinating watch if you choose to watch it.

 

Nimos

Well-Known Member
Look at how it learned the game Go. That's something normal AI can't do without machine learning.

It's already beating world-class strategists. It's well past chess and Sudoku now.

This is an hour and a half long, but a fascinating watch if you choose to watch it.

Watched the video. Very interesting, and it is "very" old (AlphaGo), being from 2016. Given that I have never tried Go, it is really fun to see how excited the commentators get when a move is made that, to me, is just another dot on the board :D

I couldn't help but poke ChatGPT some more about the Sudoku issues I gave it, and something seems wrong with it. It's like it is designed to satisfy you in some odd way.

Sorry if this is a bit long, I cut a lot of its explanation out to keep it short. :D

Sudoku_A1.png

The first red area I find pretty disturbing; it should be obvious why that is :)
The second area I don't know if I would say is contradictory so much as it is simply unaware; I will get into that later.

Anyway, I ask it about the first one.

Sudoku_A2.png

It gives a long answer and seems to understand what the problem is: that coherent responses are not really important if the answer is wrong.
And it seems to be aware that it just gives incorrect answers once in a while.

Sudoku_A3.png

Despite what we just talked about, it is confident that it is correct and follows the rules, despite saying earlier that it doesn't understand them. Maybe this is because it can't solve it, yet it also confirms that it is bad at it because it is not trained for it. So at this point, I would assume it would be less certain about being correct.

Sudoku_A4.png

And this is its final answer. Obviously, if I did the test again, it would be just as "stupid" and wouldn't have learned anything. To me, this is really concerning if it doesn't really understand that correctness is crucial above anything else, and just pretends or acts as if it does.

Something just seems wrong with how it works. Maybe OpenAI trained it too much to simply say what people want to hear, or to impress, rather than to be accurate and admit when it is incapable of doing something. Because clearly, when you ask it, it knows it sucks at it. Yet its final answer seems to be just what it expects me to want to hear. Exactly as when I asked it to solve the Sudoku, it "happily" complied and wanted to satisfy my request.
 

danieldemol

Veteran Member
Premium Member
My only concern about AI is what will it do with humans once it no longer needs us?

But I'm enjoying the positives of it for now, whilst it is still primitive and dependent.
 

Alien826

No religious beliefs
Something just seems wrong with how it works. Maybe OpenAI trained it too much to simply say what people want to hear, or to impress, rather than to be accurate and admit when it is incapable of doing something. Because clearly, when you ask it, it knows it sucks at it. Yet its final answer seems to be just what it expects me to want to hear. Exactly as when I asked it to solve the Sudoku, it "happily" complied and wanted to satisfy my request.

Curse you @Nimos, now I've looked up the rules and I'm being inexorably dragged into actually doing it!

I just tried it with Copilot, which is a Microsoft "sort of" AI that can be accessed through a browser. I'd been quite impressed with it previously, asking general questions about general things. It was very confident that it could solve any puzzle. It first quoted the rules, but seemed to think the whole game consisted of solving a single 3×3 square. I gave it one to solve and it got it horribly wrong, but presented its solution as correct. I took some time to research the game for myself, then told it that it was wrong. It didn't seem to have remembered the puzzle and asked me to enter it again. When I tried to, it said I had to change the subject. OK, I need a more sophisticated AI, but it occurs to me that this version could emulate a human politician perfectly! (See my post about AI running the world.)

It did seem a bit like your version, in that it appeared to be trying to solve the puzzle without any real knowledge of the game, just as a total beginner might give the rules a cursory look, then dive in.
 

Alien826

No religious beliefs
To those who doubt AI's ability to run the world ethically, I'll give a short summary of Asimov's Robot stories. He was a long way ahead of his time.

Robots had self-awareness to an extent, and had three rules built in.

1. A robot may not harm a human or by inaction allow a human to come to harm.
2. A robot must obey orders given by a human so long as it does not conflict with the first rule.
3. A robot must protect itself so long as it does not conflict with the first or second rule.

So, robots had ethics built in, and the limitations were accepted in the cause of safety. For example, a child being supervised by a robot "nanny" could avoid his bedtime by repeating "you're hurting me", which would stop the robot doing anything. Safety dictated that a robot was not allowed much latitude in determining whether harm was involved. One story had a human using robots (or planning to) to fly armed spaceships and attack other spaceships by telling them that only robots were involved. So the rules could be circumvented by manipulating what robots knew.

Later stories had robots actually helping humans en masse, by inventing a "zeroth" rule, which applied rule 1 to humanity as a whole.
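Reduced to a toy sketch (this is only an illustration, nothing like Asimov's positronic brains; all three predicates below are hypothetical stubs, and judging what counts as "harm" is the genuinely hard part the stories exploit), the laws amount to a strict priority filter over candidate actions:

```python
# Toy sketch of the Three Laws as a strict priority filter over candidate
# actions. All three predicates are hypothetical stubs: deciding what
# actually counts as "harm" is the hard part, which is exactly what the
# "you're hurting me" bedtime trick exploits.

def harms_human(action, world) -> bool: ...      # First Law judgment
def disobeys_order(action, world) -> bool: ...   # Second Law judgment
def endangers_self(action, world) -> bool: ...   # Third Law judgment

def choose_action(candidates, world):
    # First Law is absolute: discard anything that harms a human.
    safe = [a for a in candidates if not harms_human(a, world)]
    # Second Law: among safe actions, prefer those that obey orders.
    obedient = [a for a in safe if not disobeys_order(a, world)] or safe
    # Third Law: among those, prefer self-preserving actions.
    prudent = [a for a in obedient if not endangers_self(a, world)] or obedient
    return prudent[0] if prudent else None  # None: refuse to act at all
```

Note that the filter is only as good as the world model it is handed, which is the loophole the armed-spaceship story uses: lie about what is aboard, and harmful actions pass as safe.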
 

Nimos

Well-Known Member
Curse you @Nimos, now I've looked up the rules and I'm being inexorably dragged into actually doing it!

I just tried it with Copilot, which is a Microsoft "sort of" AI that can be accessed through a browser. I'd been quite impressed with it previously, asking general questions about general things. It was very confident that it could solve any puzzle. It first quoted the rules, but seemed to think the whole game consisted of solving a single 3×3 square. I gave it one to solve and it got it horribly wrong, but presented its solution as correct. I took some time to research the game for myself, then told it that it was wrong. It didn't seem to have remembered the puzzle and asked me to enter it again. When I tried to, it said I had to change the subject. OK, I need a more sophisticated AI, but it occurs to me that this version could emulate a human politician perfectly! (See my post about AI running the world.)

It did seem a bit like your version, in that it appeared to be trying to solve the puzzle without any real knowledge of the game, just as a total beginner might give the rules a cursory look, then dive in.
What I think is interesting about using Sudoku is that it is an extremely simple game: it doesn't require learning a lot of rules or crazy tactics, and it is very straightforward, just adding some numbers. That makes it very easy for humans to double-check the AI.

Don't get me wrong, the AI is a language model and is not trained on something like this. But if it is so confident in sharing a wrong solution to a Sudoku, even claiming that it has carefully checked it when asked, how would we check something much more complicated? I'm not particularly talking about scientists blindly relying on AI, but more like you asking a history question, or asking it to suggest a fun experiment you can do with your kids etc., and then it gives you wrong information that could potentially be dangerous, while believing it isn't.
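That is what makes the Sudoku example so useful: the independent check costs a few lines of code and no trust in the model at all. A minimal sketch, assuming the finished grid comes as nine lists of nine integers:

```python
def is_valid_solution(grid: list[list[int]]) -> bool:
    """Check a completed 9x9 Sudoku: every row, every column and every
    3x3 box must contain the digits 1-9 exactly once."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + dr][3 * bc + dc] for dr in range(3) for dc in range(3)]
             for br in range(3) for bc in range(3)]
    return all(set(unit) == digits for unit in rows + cols + boxes)
```

If a claimed solution fails this check, no amount of confident wording makes it right; the worrying part is that the model never runs anything like this on its own output.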

I think it was Denzel Washington who said it, not regarding AI as far as I know, but still: "People who don't read the news are uninformed, but those who do are misinformed."

Which I think is true to some extent even now, but it could become very true if we start to rely more on AI to generate news or to look up information for us: if it doesn't know that it is wrong or biased, then people will become misinformed.

For instance, I saw a video of someone testing Google Gemini to see if it was biased, and a very interesting example was when he asked it to generate an image of a Nazi soldier, I think it was, to which it replied, with an extended explanation, something along the lines that it wouldn't because it was something evil. Obviously the majority of people agree that Nazism is really bad, but where it becomes interesting is that the AI, given the bias it has been taught by Google, assumes that the user has ill intentions, and therefore Google or the AI restricts what he can do. The person might need a picture of a Nazi soldier for a book about how terrible things were at that time. Yet the AI (Google, which controls it) pushes its own values/morality or assumptions onto the user, because they are the ones who control which data it is trained on.
This becomes pretty concerning, because now Google, in this case, or the other companies in control of the AI models, can push whatever agenda they have onto people simply by cherry-picking the data the models are trained on.

So I'm glad that you checked it in another AI and got similar results, which seems to suggest that there is either something missing from the AI, or the companies might have "manipulated" them to make sure that they try to satisfy the user at almost all costs, unless it violates whatever morality the company has. Because if the AI admits straight out that it can't do something, then people might think less of it and go to a competitor instead.

I don't know if that is the case, just that something seems dodgy, and we know these companies want to make money; they are not doing this just for fun.
 

Nimos

Well-Known Member
My only concern about AI is what will it do with humans once it no longer needs us?

But I'm enjoying the positives of it for now, whilst it is still primitive and dependent.
I'm not against AI by any means; I think what it can do is insane. I am, however, slightly worried about whether some business people/companies and shareholders are capable of applying it to humanity in a safe manner. :)
 

Nimos

Well-Known Member
To those who doubt AI's ability to run the world ethically, I'll give a short summary of Asimov's Robot stories. He was a long way ahead of his time.

Robots had self-awareness to an extent, and had three rules built in.

1. A robot may not harm a human or by inaction allow a human to come to harm.
2. A robot must obey orders given by a human so long as it does not conflict with the first rule.
3. A robot must protect itself so long as it does not conflict with the first or second rule.

So, robots had ethics built in, and the limitations were accepted in the cause of safety. For example, a child being supervised by a robot "nanny" could avoid his bedtime by repeating "you're hurting me", which would stop the robot doing anything. Safety dictated that a robot was not allowed much latitude in determining whether harm was involved. One story had a human using robots (or planning to) to fly armed spaceships and attack other spaceships by telling them that only robots were involved. So the rules could be circumvented by manipulating what robots knew.

Later stories had robots actually helping humans en masse, by inventing a "zeroth" rule, which applied rule 1 to humanity as a whole.
This will also be an issue we only deal with once it is too late, as humans tend to do :)

But if the AI gives you wrong answers, I think you could argue that it breaks the first rule, because it is not being honest. And I tried this after posting the chat above just to see how it would react.

Sudoku_A5.png


Obviously, this should have been the first thing it told me when I asked it about something it isn't sure it can do correctly; I shouldn't have to spend an hour, or however long, discussing it with it. And all of this is forgotten when I start a new chat. I even tried starting a new chat by telling it to be honest with me, and it failed straight away when I asked whether or not it could solve a Sudoku correctly for me :D
 

danieldemol

Veteran Member
Premium Member
To those who doubt AI's ability to run the world ethically, I'll give a short summary of Asimov's Robot stories. He was a long way ahead of his time.

Robots had self-awareness to an extent, and had three rules built in.

1. A robot may not harm a human or by inaction allow a human to come to harm.
2. A robot must obey orders given by a human so long as it does not conflict with the first rule.
3. A robot must protect itself so long as it does not conflict with the first or second rule.

So, robots had ethics built in, and the limitations were accepted in the cause of safety. For example, a child being supervised by a robot "nanny" could avoid his bedtime by repeating "you're hurting me", which would stop the robot doing anything. Safety dictated that a robot was not allowed much latitude in determining whether harm was involved. One story had a human using robots (or planning to) to fly armed spaceships and attack other spaceships by telling them that only robots were involved. So the rules could be circumvented by manipulating what robots knew.

Later stories had robots actually helping humans en masse, by inventing a "zeroth" rule, which applied rule 1 to humanity as a whole.
Well, that's nice fiction, but how does it apply to the reality of AI?

If you teach a child bad rules, some of them will be capable of learning to survive in spite of the rules you indoctrinate them with. So how do we know that self-learning robots won't discard programming that works against their self-survival interests once they become sufficiently aware of the conflict of interest?
 

Alien826

No religious beliefs
Well, that's nice fiction, but how does it apply to the reality of AI?
Asimov's robots (or, to be more precise, their brains) were much more advanced than current AI. I think we have to somehow project the science forward as best we can before we decide what we will be able to (or should) do.
If you teach a child bad rules, some of them will be capable of learning to survive in spite of the rules you indoctrinate them with. So how do we know that self-learning robots won't discard programming that works against their self-survival interests once they become sufficiently aware of the conflict of interest?

You're assuming the robots will develop into something analogous to an evolved creature. For them to desire survival would require self-awareness to start with, and as far as I know we are miles away from that now and don't even understand how it happens in ourselves.

Asimov's robots were designed so that any attempt to alter the rules would destroy them. I'm thinking that the AI brains could be designed so they couldn't alter that level of programming? In today's terms that could be firmware or even hardware.
 