• Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.


AI robots discover new laser materials on their own. No humans needed.

Alien826

No religious beliefs
So I'm glad that you checked it in another AI and got similar results, which seem to suggest that either something is missing from the AI, or the companies might have "manipulated" them to make sure that they try to satisfy the user at almost any cost, unless it violates whatever moral rules the company has. Because if the AI straight out admits that it can't do something, then people might think less of it and go to a competitor instead.

I don't know if that is the case, just that something seems dodgy. And we know these companies want to make money; they are not doing this just for fun.

I don't think I'd want my machines to be designed by anyone using the profit motive as a guideline. Who would design them instead is another question.

Part of my thinking is that I'm comparing a world run by AI, not to perfection, but to one run by current democracies, theocracies and capitalism. It's a bit like self-driving cars. Whenever something goes wrong and a self-driving car gets into a collision, it's immediately "they're no good!" But what if self-driving cars were universal, and road deaths and injuries were reduced to 50% of what we have now? That might not sound like much, but we'd have just saved half the road casualties. I'm not saying that AI running the world would definitely improve things that much, I've no idea really, but I'm pretty sure self-driving cars could achieve at least that 50%.

I would suggest some kind of iterative process, based not on moral rules but on desirable outcomes, like reduced homelessness. Let the AI suggest something, and test it on a limited scale. Feed the results back to the AI and let it work on an improvement. Repeat. Gradually expand (or eliminate) as we move on. The practical problems are enormous, of course: people would have to give up their power.
 

danieldemol

Veteran Member
Premium Member
Asimov's robots were designed so that any attempt to alter the rules would destroy them. I'm thinking that the AI brains could be designed so they couldn't alter that level of programming? In today's terms that could be firmware or even hardware.
I watched a movie where AI robots were enslaved to the human race by a micro-chip. In the movie one of the robots figures out how to surgically remove the chip.

It was a great movie; I wish I could remember the name of it. Luckily, in the movie, the AI robots are compassionate creatures that move away from the human colony to try to live independently of their former slave masters. But it got me wondering whether such a level of independent thinking, coupled with a likely inevitable need to compete with humans for resources, could one day bring about war between us and a superior competitor of our own initial creation.
 

Nimos

Well-Known Member
I don't think I'd want my machines to be designed by anyone using the profit motive as a guideline. Who would design them instead is another question.

Part of my thinking is that I'm comparing a world run by AI, not to perfection, but to one run by current democracies, theocracies and capitalism. It's a bit like self-driving cars. Whenever something goes wrong and a self-driving car gets into a collision, it's immediately "they're no good!" But what if self-driving cars were universal, and road deaths and injuries were reduced to 50% of what we have now? That might not sound like much, but we'd have just saved half the road casualties. I'm not saying that AI running the world would definitely improve things that much, I've no idea really, but I'm pretty sure self-driving cars could achieve at least that 50%.

I would suggest some kind of iterative process, based not on moral rules but on desirable outcomes, like reduced homelessness. Let the AI suggest something, and test it on a limited scale. Feed the results back to the AI and let it work on an improvement. Repeat. Gradually expand (or eliminate) as we move on. The practical problems are enormous, of course: people would have to give up their power.
I'm all for self-driving cars; in fact, I think they would probably be the optimal current solution to at least the issues with cars etc. We would all get luxury cars on demand, because you could greatly reduce the number of cars needed, so it would make sense to make them comfortable and let them run on electricity, which would also remove the hassle of having to charge them yourself. It would reduce pollution a great deal, I think.

Profit is almost always the motivator, and there is a lot of money in AI. Not only in AI itself: if you are in charge of the AI that people use, then you have access to everyone in the world through the Internet, and you can manipulate them, just as we are used to already, only much more effectively, because the companies can keep track of everything you talk to it about, what information you should get, which products to promote, etc. So there is a lot to it besides just the AI.

The market for AI technologies is vast, amounting to around 200 billion U.S. dollars in 2023 and is expected to grow well beyond that to over 1.8 trillion U.S. dollars by 2030.
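Those two endpoints imply a striking average growth rate. As a rough sanity check (the exact market definition varies by source, so treat the inputs as the quoted figures only), the implied compound annual growth rate works out like this:

```python
def implied_cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# 200 billion USD in 2023 growing to 1.8 trillion USD by 2030 (7 years)
rate = implied_cagr(200e9, 1.8e12, 2030 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # roughly 37% per year
```

Sustained growth near 37% per year is the kind of figure that explains why every large company is piling in.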

And that is probably not even including what I talked about above. 1.8 trillion dollars is probably a pretty good motivator for these companies, and then we obviously have the robotic age coming next :)

I do think that AI will and can do a lot of good things for humanity, and at this point we can only hope it turns out good in the end, because it is not going away. But there are so many examples of ****ty companies exploiting and doing bad things in the name of profit that I think the chances of it not going wrong are extremely low. It is just a matter of how wrong, I think :)

The world is run by companies, and the only voice people really have is to riot. It doesn't really matter all that much who you vote for in government; they all more or less follow the companies anyway, in the grand scheme of things.

“That these dead shall not have died in vain– that this nation, under God, shall have a new birth of freedom and that government of the people, by the people, for the people, shall not perish from the earth”

This died a long time ago, obviously in some countries more than others. But still, the average person has close to nothing to say compared to big corporations; we can yell and scream, but who cares :D
 

Nimos

Well-Known Member
I watched a movie where AI robots were enslaved to the human race by a micro-chip. In the movie one of the robots figures out how to surgically remove the chip.

It was a great movie; I wish I could remember the name of it. Luckily, in the movie, the AI robots are compassionate creatures that move away from the human colony to try to live independently of their former slave masters. But it got me wondering whether such a level of independent thinking, coupled with a likely inevitable need to compete with humans for resources, could one day bring about war between us and a superior competitor of our own initial creation.
I cannot recall seeing any movie with that theme, even though I like science fiction stuff :D If you remember the name, please post it.

There are also the classics, like Blade Runner, which is very cool I think.

The original is from 1982, and it is funny to see them write "Los Angeles <2019>" and see what they expected society to look like now. And I don't think they are way off, just by maybe 75-100 years; then we'll probably have robots that look like humans :)

 

Nimos

Well-Known Member
So I tried to raise the issue on the OpenAI reddit page, and wanted to see if there were any replies today:

Weird.png


Only to be met with this, with no explanation of why it was removed or anything. I didn't do anything that violated the rules from what I can see, and they would normally tell you if that were the case.
 

wellwisher

Well-Known Member
Skynet is getting closer lol

No humans needed: AI robots discover new laser materials on their own

"Who needs scientists anyway? A global consortium of six automated laboratories, overseen by artificial intelligence (AI), set out to produce new laser materials, dividing the labor from synthesis to testing. The effort yielded a compound that emits laser light with record-setting efficiency, researchers report today in Science. Along with other recent results, the feat suggests that, in some areas, self-driving labs can surpass the best scientists, making discoveries missed by humans.

“Automated labs are going beyond proof-of-concept demonstrations,” says Milad Abolhasani, a chemical engineer at North Carolina State University who developed a self-driving lab unaffiliated with the new work. “They have started to push the edge of science to the next level.”

When you are developing new things outside the box, such as new materials or new drugs, most labs use a statistical approach with black box assumptions; the new is black and outside the box. Based on that assumption they will use a structured mathematical approach, running tests to fill in the math model. At this point it is more or less an assembly line of humans servicing the math. Machines are much better and faster at repetitive tasks and can reach steady state faster. In this case, they networked the computers so each step in the line could reach steady state and be handed off to the next step.

As a loose example, say we manufactured six-sided dice. We now need to calculate the odds for each side, to test whether the dice are loaded due to a slight manufacturing bias. The casino customer will be pissed off if there is one, since they can lose money or get visited by the Gaming Commission. The black box math may have us throw the dice a large number of times, until we see the patterns of the sides appearing. From that steady state we can calculate the load, know where it is, and adjust the machine. The AI is much faster and can reach the goal quicker. But in essence, the math is doing the thinking.
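That loaded-dice test can be made concrete with a short simulation. This is only an illustrative sketch; the 1.5x weight on face 6 and the 60,000 throws are made-up values chosen so the bias is obvious in the tallies:

```python
import random
from collections import Counter

def tally_throws(weights, n_throws, seed=42):
    """Throw a six-sided die with the given face weights and tally each face."""
    rng = random.Random(seed)
    return Counter(rng.choices([1, 2, 3, 4, 5, 6], weights=weights, k=n_throws))

# Hypothetical manufacturing bias: face 6 is weighted 1.5x relative to the rest.
counts = tally_throws(weights=[1, 1, 1, 1, 1, 1.5], n_throws=60_000)
fair_share = 60_000 / 6  # expected count per face on a fair die: 10,000
loaded_face, loaded_count = counts.most_common(1)[0]
print(loaded_face, loaded_count, fair_share)
```

With enough throws the pattern is unmistakable: face 6 lands far above the fair expectation. The "hunch" shortcut described below amounts to stopping this loop early, once one face is already clearly ahead.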

My approach to development was different. I could run far fewer tests to get to the goal, due to a mixture of reason and hunches. These turn on a light in the darkness, so fewer trials are needed. In the case of the dice, I would notice the dice favoring one side once, and then abort all the random tests required by the math in favor of testing my loaded-side theory on the machine sooner; all done. This would look like magic to the black box crowd, but since it worked, we were on their team with the same goal.

Much of my streamlining occurs at the unconscious level. The brain can process data in 2-D and 3-D. AI does not know how to process in 3-D, since few humans know how it works well enough to program AI in 3-D. The extra z-axis of the brain uses a slightly different and faster language. The z-axis reaches consciousness via gut feelings and hunches, and acts more as a vector to direct reasoning. When AI can do that, then I will be worried. Until then, it is an advanced automaton that is useful because it can reach steady state in the black box much faster than humans.

For example, AI computer art can be generated by the hundreds in seconds, digitally, and then an aesthetics program can pick and choose, without ever drawing anything on paper; all in the computer's imagination. Some humans can do this in their imagination too, but the best artists also have hunches, narrowing down to a classic work. Then it is time for production.

The evolution of AI will be putting humans out of black box jobs and giving them to the more efficient robots. When that happens, new human skills will be needed to keep humans one step ahead. Running a chainsaw replaced the swing of the ax: now one swipe instead of six chops. AI is now an ax machine that can chop faster. I have been trying to get rid of the black box (the ax) for years, but nobody saw the need. If it is not broke, do not fix it.
 

We Never Know

No Slack
So I tried to raise the issue on the OpenAI reddit page, and wanted to see if there were any replies today:

View attachment 91880

Only to be met with this, with no explanation of why it was removed or anything. I didn't do anything that violated the rules from what I can see, and they would normally tell you if that were the case.
IMO.... In short... you exposed that it has flaws.
 

Nimos

Well-Known Member
IMO.... In short... you exposed that it has flaws.
I think it is obvious that something is wrong.

But my post is fully documented, even with a link to the whole chat. I contacted them and asked why they removed it, but I have yet to receive a reply. I suspect I might not get one, but let's see :)
 

Nimos

Well-Known Member
The evolution of AI will be putting humans out of black box jobs and giving them to the more efficient robots. When that happens, new human skills will be needed to keep humans one step ahead. Running a chainsaw replaced the swing of the ax: now one swipe instead of six chops. AI is now an ax machine that can chop faster. I have been trying to get rid of the black box (the ax) for years, but nobody saw the need. If it is not broke, do not fix it.
But before this happens, the AI has to be trustworthy. For instance:

Sudoku_c.png

Something very interesting is going on here, I think. (Just a side note: I think it is kind of funny how it says "However, human error can occur...", while being completely unaware of its own mistakes :D)

I asked it whether it did points 1 and 2, which it says it did, even explaining why they are important, etc.

Clearly, it didn't do this, because then it would know that the answer is wrong. The reason Sudoku is so effective for this is that it is a complicated puzzle, yet extremely easy to understand and to check for errors. You can do this in different ways, the easiest being to simply go through each row and column and check for duplicates, which is easy for both humans and computers. Another quick check is to add the numbers in each row and column together: in a correct solution every row and column sums to 45, so any other sum proves an error (though a sum of 45 by itself doesn't rule out duplicates).
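Both checks are mechanical enough to sketch in a few lines of Python, which is exactly why the failure here is so telling. The function names are my own, and the grid formula in the test is just a standard way to construct one valid solved grid:

```python
def units(grid):
    """Yield the 27 units of a 9x9 grid: 9 rows, 9 columns, 9 subgrids."""
    for r in range(9):
        yield grid[r]
    for c in range(9):
        yield [grid[r][c] for r in range(9)]
    for br in (0, 3, 6):
        for bc in (0, 3, 6):
            yield [grid[br + dr][bc + dc] for dr in range(3) for dc in range(3)]

def is_valid_solution(grid):
    """True iff every row, column and 3x3 subgrid is a permutation of 1..9.

    The duplicate check subsumes the sum-to-45 shortcut: a duplicate-free
    unit drawn from 1..9 always sums to 45, while a unit summing to 45 can
    still contain duplicates (e.g. 9,9,9,1,1,1,5,5,5).
    """
    return all(sorted(u) == list(range(1, 10)) for u in units(grid))
```

Any chat assistant that had actually run the equivalent of these loops would have caught its own mistake immediately.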

So I tried asking it this, and it goes "crazy", for lack of a better word:

sudoku_c1.png


So it says it is not possible and shows an incorrect Sudoku; however, from what I can see (at a quick glance), this is a valid Sudoku. So I continue:

sudoku_c2.png

So it has checked all the rows and columns, and then goes through each [3,3] subgrid, which is really cool. But this is also where it starts to get a bit weird.

sudoku_c3.png

It then agrees that the initial Sudoku it gave me was not incorrect. So it prints a new one which is incorrect. Yet, it still maintains that it isn't enough for all the rows and columns to equal 45.

So I poke it some more:
sudoku_c4.png


What on Earth is going on here? In the first example, where I asked if it had checked each row and column, it said that it did.

Is it lying, or what?
It also completely fails to see the logic that if one of the rows or columns doesn't sum to 45, then the Sudoku must clearly be wrong. Again, judging from the example, it has no issue adding the numbers together, but it still maintains that this is not enough to check it.

If it can't even figure this out in Sudoku, imagine it doing molecules or whatever; it would be completely useless, if you ask me.

I added the whole chat here, for all the details:
Whole chat.
 

Alien826

No religious beliefs
I do think that AI will and can do a lot of good things for humanity and at this point, we can only hope it turns out good in the end because it is not going away. But there are so many examples of ****ty companies exploiting and doing bad things in the name of profit, that I think the chances of it not going wrong are extremely low. It is just a matter of how wrong I think :)

Yes, I agree. I hope you understand that I'm talking about a benevolent use of AI rather than what's likely. If we can't even get together to address climate change, the hope of that is vanishingly small, I fear. Quite honestly, most of humanity's problems could be solved with current technology. The problem is .... we're humans. :(
 

Alien826

No religious beliefs
I watched a movie where AI robots were enslaved to the human race by a micro-chip. In the movie one of the robots figures out how to surgically remove the chip.

It was a great movie; I wish I could remember the name of it. Luckily, in the movie, the AI robots are compassionate creatures that move away from the human colony to try to live independently of their former slave masters. But it got me wondering whether such a level of independent thinking, coupled with a likely inevitable need to compete with humans for resources, could one day bring about war between us and a superior competitor of our own initial creation.

This is the "Frankenstein" model, right? We've been so indoctrinated with it that we jump to that conclusion. Of course a movie where robots join with us to create a peaceful world wouldn't gross much. I think the bad effects will be more in line with @Nimos' comments (AI being misused by humans) than the machines becoming intelligent and taking over.
 

Alien826

No religious beliefs
The original is from 1982, and it is funny to see them write "Los Angeles <2019>" and see what they expected society to look like now. And I don't think they are way off, just by maybe 75-100 years; then we'll probably have robots that look like humans :)

I doubt that we will have humanoid robots other than a few specialized examples. It's interesting that, in a sense, it has already happened: robots are everywhere, we just don't see them; for example, one is constantly tuning your car as you drive. A human is a non-specialized creature, a "jack of all trades, master of none". That works well for us, but it doesn't make much sense for robots. You wouldn't have a self-driving car that used a humanoid robot to operate the existing controls. There's an example in the (very funny) series Upload, where the cars drive themselves and the passengers can opaque the windows, fold the seats into a bed, and have sex while the car takes them to their destination. :)
 

danieldemol

Veteran Member
Premium Member
This is the "Frankenstein" model, right? We've been so indoctrinated with it that we jump to that conclusion. Of course a movie where robots join with us to create a peaceful world wouldn't gross much. I think the bad effects will be more in line with @Nimos' comments (AI being misused by humans) than the machines becoming intelligent and taking over.
Frankenstein was about a scientist stitching together parts of dead bodies and bringing them to life according to my understanding.
 

Nimos

Well-Known Member
I doubt that we will have humanoid robots other than a few specialized examples. It's interesting that, in a sense, it has already happened: robots are everywhere, we just don't see them; for example, one is constantly tuning your car as you drive. A human is a non-specialized creature, a "jack of all trades, master of none". That works well for us, but it doesn't make much sense for robots. You wouldn't have a self-driving car that used a humanoid robot to operate the existing controls. There's an example in the (very funny) series Upload, where the cars drive themselves and the passengers can opaque the windows, fold the seats into a bed, and have sex while the car takes them to their destination. :)
I think it will happen because humans are easily fooled. So I think you can categorize robots into certain areas that will be crucial for those who make them.

1. Functionality: They need to fulfil a need. Take that old robot dog (I think it was Sony that made it) that could walk around and basically do nothing; it never became popular because it is more or less just a toy or a fun little gadget. But if you could create a robot that can buy groceries for you and clean more effectively, then there is huge potential. Obviously, if you can "employ" it to do work, then it is really going to fly. That is why they are very useful in factories even now.

2. Pleasing: They need to be relatable to us; we don't want some creepy-looking robot hanging out near us. This is Disney's robot, and honestly, it is kind of cute; it does seem to have personality even though it doesn't look human. They clearly knew that it should be inspired by an animal:


As for some of these, why would they even bother trying to make robots look like humans? What is the purpose of that? We don't want our vacuum cleaner to look like one.


This one as well; it's pretty crazy how easily fooled we are. Clearly the one on the left is much more appealing to us than the other:

3. Attachable: This is the combination of the two points above: we need their functionality because it makes things easier, and we like being around them because they please us. And then finally we can't live without them, because we are attached to them; it is not just a tool. We have experience with them, we can talk with them, etc.

That is why I think they will aim for them to be human/animal-like. Because they need people to buy them.
 

Balthazzar

N. Germanic Descent
Skynet is getting closer lol

No humans needed: AI robots discover new laser materials on their own

"Who needs scientists anyway? A global consortium of six automated laboratories, overseen by artificial intelligence (AI), set out to produce new laser materials, dividing the labor from synthesis to testing. The effort yielded a compound that emits laser light with record-setting efficiency, researchers report today in Science. Along with other recent results, the feat suggests that, in some areas, self-driving labs can surpass the best scientists, making discoveries missed by humans.

“Automated labs are going beyond proof-of-concept demonstrations,” says Milad Abolhasani, a chemical engineer at North Carolina State University who developed a self-driving lab unaffiliated with the new work. “They have started to push the edge of science to the next level.”


Is this due to the processing capability and speed of AI? What could this equate to in progress between the nations? Bigger, faster, more capable AI becoming the determining factor in a nation's recognized place in the power hierarchy? If so, and given the warnings from not so long ago, is there any way to avoid further development, for any reason at all? AI, it would seem, is our future, and yet another race has begun between the nations.
 

Alien826

No religious beliefs
I think it will happen because humans are easily fooled. So I think you can categorize robots into certain areas that will be crucial for those who make them.
Well, I did say "a few specialized examples", which may be what you are talking about. I'll try to show how your examples either fit that, or can be done better with a non-android robot.
1. Functionality: They need to fulfil a need. Take that old robot dog (I think it was Sony that made it) that could walk around and basically do nothing; it never became popular because it is more or less just a toy or a fun little gadget. But if you could create a robot that can buy groceries for you and clean more effectively, then there is huge potential. Obviously, if you can "employ" it to do work, then it is really going to fly. That is why they are very useful in factories even now.
Toys, yes. Dolls have gradually gained the ability to behave like real babies, starting from a simple "mama" up to simulated excretion. Obviously some things are direct imitations of reality and need to be so. (You may have deliberately omitted the example of a "sex robot".) We're nearly there with the grocery buying: we can order online already, and all we need to complete the process is a self-driving delivery vehicle with (say) drones that put the goods on your doorstep. Both exist now in prototype. Factory robots are not android, and I doubt they will be.
2. Pleasing: They need to be relatable to us; we don't want some creepy-looking robot hanging out near us. This is Disney's robot, and honestly, it is kind of cute; it does seem to have personality even though it doesn't look human. They clearly knew that it should be inspired by an animal:
Yes. But most automation now isn't even detectable other than by function. It doesn't have to be pleasing, just work reliably.
As for some of these, why would they even bother trying to make robots look like humans? What is the purpose of that? We don't want our vacuum cleaner to look like one.
Exactly my point.
This one as well; it's pretty crazy how easily fooled we are. Clearly the one on the left is much more appealing to us than the other:
Yes. That would fall into category 2.
3. Attachable: This is the combination of the two points above: we need their functionality because it makes things easier, and we like being around them because they please us. And then finally we can't live without them, because we are attached to them; it is not just a tool. We have experience with them, we can talk with them, etc.
Hmmm, I'm not so sure about this one. "Alexa" isn't even visible, just a voice, but we do depend on it. Coincidentally, I see they are talking about adding more AI to it and, of course, introducing a subscription. If they do want me to pay for it, it's gone. For me it's an alarm clock and a sophisticated record player. I can replace those functions.
That is why I think they will aim for them to be human/animal-like. Because they need people to buy them.

OK.
 

Nimos

Well-Known Member
Exactly my point.
I mean, there is a reason they want to do this. It wouldn't make sense to spend so much money trying to make robots behave like humans if they knew no one would be interested in it.

Yes. But most automation now isn't even detectable other than by function. It doesn't have to be pleasing, just work reliably.
Yeah, that is a variation of this. You don't need the welding robot that makes cars to be cute or anything; it is there to weld. And in some cases it is just handier that it isn't detectable. For instance, customer support over the phone: you obviously don't need a robot there, but it is very important that the AI sounds pleasing to humans.
So it is a mixture of these three, depending on the task it has to solve.

I don't disagree with you. I think there are lots of different use cases.
 

Alien826

No religious beliefs
I mean, there is a reason they want to do this. It wouldn't make sense to spend so much money trying to make robots behave like humans if they knew no one would be interested in it.


Yeah, that is a variation of this. You don't need the welding robot that makes cars to be cute or anything; it is there to weld. And in some cases it is just handier that it isn't detectable. For instance, customer support over the phone: you obviously don't need a robot there, but it is very important that the AI sounds pleasing to humans.
So it is a mixture of these three, depending on the task it has to solve.

I don't disagree with you. I think there are lots of different use cases.

I think we are in danger of agreeing!

Thanks for the interesting discussion. :)
 