
When we can't keep up anymore, then what? (AI)

Nimos

Well-Known Member
I was watching an interview with an expert in AI technology, and given the current trend, these systems are already at a point where they are more intelligent than the majority of people. But what happens when they become 1000 times more intelligent than any human?

For instance, imagine the average person talking to Einstein as he tries to explain some scientific concept. The majority of us who are not well-versed in science might go, "I have no clue what you are talking about."

Now imagine an AI that is 1000 times more intelligent than Einstein; even he would probably have no clue what it was talking about, simply because the gap in intelligence would be so massive that it would basically be two different worlds.

We already know by now that the people working with AI have no clue exactly what these systems are capable of, that they seem to learn far faster than expected, and that they turn out to be capable of things their creators didn't know about. We also know that these AIs are being integrated into everything at the moment, and that it is a race between both companies and countries, where I doubt there is a whole lot of control over what exactly is being done. How would you even check, when even the creators have no clue what these systems can really do?

We have already seen a lot of examples, even though we are only in the starting phase. For instance, I heard recently that someone had used AI to fake voices, images, etc. to stage an abduction and demand money. This just adds to the already long list of things these AIs can and will be used for, and again, this is only the beginning of the AI era.

The question is how long humans can stay at the helm. If there is something 1000 times or more intelligent than us, how can we control it, or the use of it, when we might not even understand it to begin with?

If anyone is interested, the interview can be seen here:
 

Nakosis

Non-Binary Physicalist
Premium Member
I was watching an interview with an expert in AI technology, and given the current trend, these systems are already at a point where they are more intelligent than the majority of people. But what happens when they become 1000 times more intelligent than any human?

For instance, imagine the average person talking to Einstein as he tries to explain some scientific concept. The majority of us who are not well-versed in science might go, "I have no clue what you are talking about."

Now imagine an AI that is 1000 times more intelligent than Einstein; even he would probably have no clue what it was talking about, simply because the gap in intelligence would be so massive that it would basically be two different worlds.

We already know by now that the people working with AI have no clue exactly what these systems are capable of, that they seem to learn far faster than expected, and that they turn out to be capable of things their creators didn't know about. We also know that these AIs are being integrated into everything at the moment, and that it is a race between both companies and countries, where I doubt there is a whole lot of control over what exactly is being done. How would you even check, when even the creators have no clue what these systems can really do?

We have already seen a lot of examples, even though we are only in the starting phase. For instance, I heard recently that someone had used AI to fake voices, images, etc. to stage an abduction and demand money. This just adds to the already long list of things these AIs can and will be used for, and again, this is only the beginning of the AI era.

The question is how long humans can stay at the helm. If there is something 1000 times or more intelligent than us, how can we control it, or the use of it, when we might not even understand it to begin with?

If anyone is interested, the interview can be seen here:

Were we ever really at the helm? Seems to me we are simply a "victim" of evolution, not the master of it.
This is just another step in the chain of evolution.
The only thing we currently possess that AI does not is the experience of self-awareness.

I suspect, predict, I suppose, that we will eventually find a way to integrate our self-awareness with AI.
We are kind of already doing that but in the future perhaps a more direct connection will exist.
Neuralink
 

PoetPhilosopher

Veteran Member
At this point, I'm skeptical AI will face that 1000 times growth of intelligence without somehow caving in, or making itself "dumber" in the process.

People talk about AI taking over. But at times, as it is - it won't even follow a simple command.
 

Nimos

Well-Known Member
Teach it about Buddhism, and the preciousness of sentient beings?
I honestly think that religion, and especially something like Buddhism, might benefit from the AI era. As society becomes more and more technological, I think a lot of people will feel disconnected from both themselves and from nature, and spirituality is going to be something that a lot of people will turn to.

But on the side of AI, I don't think they will be spiritual, at least not in a true sense; they will rather focus on effectiveness as the ultimate goal. So the question is whether such a highly intelligent AI would even remotely share the same values as humans. Its value system probably wouldn't be based on life, or at least not on how biological life sees it.

It would probably be like humans trying to understand the "value" system of a rock or a tree; and why should its values be grounded in biology when it is not biological?
 

PoetPhilosopher

Veteran Member
At this point, I'm skeptical AI will face that 1000 times growth of intelligence without somehow caving in, or making itself "dumber" in the process.

People talk about AI taking over. But at times, as it is - it won't even follow a simple command.

One example of this is the transition from ChatGPT-3.5 to ChatGPT-4: when it works, the newer model is sometimes smarter, but it also seems more prone to being stubborn.
 

Nimos

Well-Known Member
Were we ever really at the helm? Seems to me we are simply a "victim" of evolution, not the master of it.
This is just another step in the chain of evolution.
The only thing we currently possess that AI does not is the experience of self-awareness.

I suspect, predict, I suppose, that we will eventually find a way to integrate our self-awareness with AI.
We are kind of already doing that but in the future perhaps a more direct connection will exist.
Neuralink
I mean the helm of civilization: if you have an AI 1000 times more intelligent than a human, why would you put a human in charge? In the video they also talk about emotions; in fact, they talk about a lot of things. The title about climate change is a bit misleading, but it is very interesting despite that.

But in regard to emotions: if these AIs are as intelligent as predicted, they will also be much more emotional than we are, though probably in a different way. The interviewee gives the example of how human intelligence makes us more emotional than, for instance, a fish. If an AI is 1000 times more intelligent, its understanding or range of emotions would probably far exceed ours.
 

Nimos

Well-Known Member
At this point, I'm skeptical AI will face that 1000 times growth of intelligence without somehow caving in, or making itself "dumber" in the process.

People talk about AI taking over. But at times, as it is - it won't even follow a simple command.
Yeah, that is true, but I also think this is a misunderstood way of looking at it.

The issue at the start is not the AI itself. We are only at the beginning, so the first thing that is going to happen is that one person using AI will replace 5-10 people, simply because of the increase in productivity. Eventually, that person will probably be replaced as well as AI improves.

I think the best way of looking at it, except massively sped up, is like someone saying that cars were never going to replace horse carriages; as the technology improved and productivity increased, the carriages never had a chance. This is probably the most intense "arms" race in human history, I would assume. No one, neither companies nor countries, can afford to lose it; the consequences could be disastrous. We only need to look at the biggest powers in the world: the US won't lose to China or Russia, and they won't lose to the US and EU, etc. Even big companies could potentially go bankrupt if someone else creates a super solution, so they have to invest in this like maniacs.
 

PoetPhilosopher

Veteran Member
Yeah, that is true, but I also think this is a misunderstood way of looking at it.

The issue at the start is not the AI itself. We are only at the beginning, so the first thing that is going to happen is that one person using AI will replace 5-10 people, simply because of the increase in productivity. Eventually, that person will probably be replaced as well as AI improves.

I think the best way of looking at it, except massively sped up, is like someone saying that cars were never going to replace horse carriages; as the technology improved and productivity increased, the carriages never had a chance. This is probably the most intense "arms" race in human history, I would assume. No one, neither companies nor countries, can afford to lose it; the consequences could be disastrous. We only need to look at the biggest powers in the world: the US won't lose to China or Russia, and they won't lose to the US and EU, etc. Even big companies could potentially go bankrupt if someone else creates a super solution, so they have to invest in this like maniacs.

I see.

I'm kind of looking at it more like a human, I guess. To have a 100 IQ is okay. To have a 120 IQ may even make you more functional, in general. To have a 200 IQ, it becomes complicated: there are things you might do well, and things you don't.

Hopefully, AI doesn't have the same limitations.
 

Nimos

Well-Known Member
One example of this is the transition from ChatGPT-3.5 to ChatGPT-4: when it works, the newer model is sometimes smarter, but it also seems more prone to being stubborn or getting things wrong.
Again, as I have said to others, we are in the very first stages of AI. What you see now is nothing compared to the potential of this technology, and I think the real danger is that people see it as a gimmick rather than realizing they are being outmatched by something they simply won't be able to compete with.

Take a simple example: AI can already simulate whatever voice you want and translate pretty much any text into another language in a few seconds. It's pretty obvious what that means for anyone working in translation; they have no chance whatsoever.
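Just to give a sense of how little effort this takes nowadays, here is a rough sketch using the open-source Hugging Face transformers library; the default English-to-German model it downloads, and the example sentence, are just my illustration, not anything from the interview.

Code:
# Minimal machine-translation sketch (illustrative only).
# Assumes: pip install transformers sentencepiece torch
# The default English-to-German model is downloaded on first use.
from transformers import pipeline

translator = pipeline("translation_en_to_de")
result = translator("The translation industry is changing very quickly.")
print(result[0]["translation_text"])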
 

Stevicus

Veteran Member
Staff member
Premium Member
I was watching an interview with an expert in AI technology, and given the current trend, these systems are already at a point where they are more intelligent than the majority of people. But what happens when they become 1000 times more intelligent than any human?

For instance, imagine the average person talking to Einstein as he tries to explain some scientific concept. The majority of us who are not well-versed in science might go, "I have no clue what you are talking about."

Now imagine an AI that is 1000 times more intelligent than Einstein; even he would probably have no clue what it was talking about, simply because the gap in intelligence would be so massive that it would basically be two different worlds.

We already know by now that the people working with AI have no clue exactly what these systems are capable of, that they seem to learn far faster than expected, and that they turn out to be capable of things their creators didn't know about. We also know that these AIs are being integrated into everything at the moment, and that it is a race between both companies and countries, where I doubt there is a whole lot of control over what exactly is being done. How would you even check, when even the creators have no clue what these systems can really do?

We have already seen a lot of examples, even though we are only in the starting phase. For instance, I heard recently that someone had used AI to fake voices, images, etc. to stage an abduction and demand money. This just adds to the already long list of things these AIs can and will be used for, and again, this is only the beginning of the AI era.

The question is how long humans can stay at the helm. If there is something 1000 times or more intelligent than us, how can we control it, or the use of it, when we might not even understand it to begin with?

If anyone is interested, the interview can be seen here:

I guess it depends on what "more intelligent" actually means. Our computers and phones already have access to tons of information, books, subjects of study, etc., that I don't know and couldn't possibly digest or assimilate in a single lifetime. These devices can remember things much better than I can, too.

Machines are already stronger and faster than humans - and at the time, some might have seen that as a threat to human existence. But even being stronger or faster doesn't make them human. It just makes them better machines. This is just as true if they're more intelligent as well.

There are a lot of ways we can use machines and technologies in hostile, negative, and self-destructive ways. People learned that when they were told of the existence of nuclear weapons and their devastating consequences, and also from the long-term effects of industrialism on climate change and the threat that poses. But this video is saying that AI is worse than climate change?

I think AI, just like any tool, can be misused and abused by people with a nefarious and malignant agenda. What seems inevitable is that there could be some kind of AI "arms race" in play, but if that's the case, then all major powers are in a position to have to keep researching and developing their AI capabilities, lest they get caught behind.

I never really could get into popular culture notions (fed by science fiction) that AI could ever become "sentient" or independent in thought, as if it were the emergence of a new lifeform.
 

Nimos

Well-Known Member
I see.

I'm kind of looking at it more like a human, I guess. To have a 100 IQ is okay. To have a 120 IQ may even make you more functional, in general. To have a 200 IQ, it becomes complicated: there are things you might do well, and things you don't.

Hopefully, AI doesn't have the same limitations.
In the interview, he said that it has an IQ of around 155 at the moment (from what I remember). But again, one has to realise that the way this AI develops is not linear but exponential, and we are only at the very start of that exponential curve.
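To make the linear-versus-exponential point concrete, here is a tiny toy calculation; the starting score, yearly gain, and doubling time are numbers made up purely for illustration, not figures from the interview.

Code:
# Toy comparison of linear vs. exponential improvement (made-up numbers).
start = 100         # hypothetical capability score today
linear_gain = 10    # linear: +10 points per year

for year in range(11):
    linear = start + linear_gain * year
    exponential = start * 2 ** year      # exponential: doubles every year
    print(f"year {year:2d}: linear = {linear:4d}, exponential = {exponential:7d}")

# After 10 years the linear curve reaches 200; the doubling curve reaches 102,400.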
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
At this point, I'm skeptical AI will face that 1000 times growth of intelligence without somehow caving in, or making itself "dumber" in the process.

People talk about AI taking over. But at times, as it is - it won't even follow a simple command.
That's how Machine learning works.

AI is dumb as a rock at first but learns from its mistakes until it becomes unbeatable.

 

Nimos

Well-Known Member
I guess it depends on what "more intelligent" actually means. Our computers and phones already have access to tons of information, books, subjects of study, etc., that I don't know and couldn't possibly digest or assimilate in a single lifetime. These devices can remember things much better than I can, too.

Machines are already stronger and faster than humans - and at the time, some might have seen that as a threat to human existence. But even being stronger or faster doesn't make them human. It just makes them better machines. This is just as true if they're more intelligent as well.

There are a lot of ways we can use machines and technologies in hostile, negative, and self-destructive ways. People learned that when they were told of the existence of nuclear weapons and their devastating consequences, and also from the long-term effects of industrialism on climate change and the threat that poses. But this video is saying that AI is worse than climate change?

I think AI, just like any tool, can be misused and abused by people with a nefarious and malignant agenda. What seems inevitable is that there could be some kind of AI "arms race" in play, but if that's the case, then all major powers are in a position to have to keep researching and developing their AI capabilities, lest they get caught behind.

I never really could get into popular culture notions (fed by science fiction) that AI could ever become "sentient" or independent in thought, as if it were the emergence of a new lifeform.
He gives an explanation here of what he means by intelligence (I have linked it with timestamp):


And it kind of goes into the question of what exactly life is. People have different definitions of this, and also of what it means to be sentient. I don't have an answer, because in some ways we look at ourselves as being alive and sentient. But if an AI behaves, learns, and problem-solves the same way we do, is it alive and sentient? Is biological tissue a requirement for life or sentience?

I think my biggest issue is that I have a difficult time really accepting that these AIs are not just pattern-driven, meaning that whatever they rely upon is probably hardcoded somewhere. To me, at least, that complicates things a bit in regard to the definitions.
 

Nimos

Well-Known Member
That's how Machine learning works.

AI is dumb as a rock at first but learns from its mistakes until it becomes unbeatable.
Yes, that is machine learning, but that is not quite how these AIs work; they don't just test all possible solutions in that way. However, you can look at AI as the next step, just before AGI (which could be considered a replication of the human brain in the form of a computer).

He gives an example at the beginning of the interview, from when he was working at Google and they tried to make some robotic arms pick up children's toys, which they couldn't do. At some point he noticed that one of them had managed to pick up a yellow ball, I think, after I don't know how long. When he came back on Monday, all of the robotic arms could pick up the yellow ball, and eventually they could pick up all the different toys. That is what led him to leave Google.

So the way they learn is like a child trying to fit a cylinder into a wooden box with different-shaped holes. The kid will try to put it in the star-shaped hole and so on, but eventually it will figure it out. It is much the same with AI: it is not programmed to solve the specific task, but to learn it.

See the clip I posted just above; it explains it better than I can :D
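As a rough illustration of "learning the task rather than being programmed to solve it", here is a toy trial-and-error loop in the spirit of the shape-sorter analogy; it is not the method from the interview, and the shapes, reward, and learning rule are all invented for the example.

Code:
import random

# The learner does not know which hole fits the cylinder; it just tries,
# gets a reward when an attempt works, and gradually prefers what worked.
holes = ["star", "square", "triangle", "circle"]
value = {h: 0.0 for h in holes}   # learned estimate of how well each hole works

def attempt(hole):
    # The environment (unknown to the learner): only the circle accepts the cylinder.
    return 1.0 if hole == "circle" else 0.0

for trial in range(200):
    if random.random() < 0.1:                        # sometimes explore at random
        choice = random.choice(holes)
    else:                                            # otherwise exploit the best guess so far
        choice = max(holes, key=lambda h: value[h])
    reward = attempt(choice)
    value[choice] += 0.1 * (reward - value[choice])  # nudge the estimate toward the outcome

print(max(holes, key=lambda h: value[h]))            # after enough trials: "circle"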
 

Stevicus

Veteran Member
Staff member
Premium Member
He gives an explanation here of what he means by intelligence (I have linked it with timestamp):


And it kind of goes into the question of what exactly life is. People have different definitions of this, and also of what it means to be sentient. I don't have an answer, because in some ways we look at ourselves as being alive and sentient. But if an AI behaves, learns, and problem-solves the same way we do, is it alive and sentient? Is biological tissue a requirement for life or sentience?

I think my biggest issue is that I have a difficult time really accepting that these AIs are not just pattern-driven, meaning that whatever they rely upon is probably hardcoded somewhere. To me, at least, that complicates things a bit in regard to the definitions.

Interesting. I don't have time to watch the entire 2-hour video, but I did watch for a few minutes.

I suppose if one can demonstrate "sentience" in AI, then it might be considered "alive." I'm reminded of a Star Trek: TNG episode "The Measure of a Man," where there was a tribunal questioning whether Data was truly a sentient, artificial lifeform. Earlier, he was compared to Pinocchio, a wooden puppet who wanted to be a real boy. Will AI become the "Pinocchio" of our age? Will its nose grow if it lies?

The other side of this is the idea that human intelligence is somehow being replicated technologically and electronically, but how much do we really know about how the human mind actually works? Whatever they're building, I can't see how it would be actual "human" intelligence. It would still be a machine. Pinocchio will always remain Pinocchio. But that doesn't mean it wouldn't have value.
 

Nimos

Well-Known Member
One example of this is the transition from ChatGPT-3.5 to ChatGPT-4: when it works, the newer model is sometimes smarter, but it also seems more prone to being stubborn or getting things wrong.
This is just another example of AI being used, this time within the art industry.

[Attached image: AI_face.jpg]

So the AI has turned the drawing on the left into the image on the right. And this technology is also still only in its beginning.

This is a higher-quality image of an AI-generated human, created simply by typing in a text prompt.
[Attached image: AI_face1.jpg]


So again, it doesn't require a lot of imagination to figure out what impact this could have on the art/photography industry as a whole. And there are lots of examples of this; it is very easy to do and to get high-quality images. You don't have to travel to a location, set up all the equipment, hire a big crew, etc. You just type what you want into a text prompt and you get the image.
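For anyone curious how little is involved on the user's side, here is a rough sketch of that prompt-to-image step using the open-source diffusers library; the model name, prompt, and GPU assumption are just an example, not the exact tool used for the images here.

Code:
# Rough text-to-image sketch (illustrative; assumes: pip install diffusers
# transformers torch, a CUDA GPU, and the Stable Diffusion v1.5 weights).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "studio photo of a woman wearing a red knitted sweater, soft lighting"
image = pipe(prompt).images[0]   # the whole "photo shoot" is one call
image.save("ai_portrait.png")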

Imagine you were a clothing designer; what does it matter, if people can't tell the difference anyway? (Everything in the image below is AI generated.)

[Attached image: AI_girl.jpg]
 

PoetPhilosopher

Veteran Member
This is just another example of AI being used, this time within the art industry.

View attachment 81447
So the AI has turned the drawing on the left into the image on the right. And this technology is also still only in its beginning.

This is a higher-quality image of an AI-generated human, created simply by typing in a text prompt.
View attachment 81448

So again, it doesn't require a lot of imagination to figure out what impact this could have on the art/photography industry as a whole. And there are lots of examples of this; it is very easy to do and to get high-quality images. You don't have to travel to a location, set up all the equipment, hire a big crew, etc. You just type what you want into a text prompt and you get the image.

Imagine you were a clothing designer; what does it matter, if people can't tell the difference anyway? (Everything in the image below is AI generated.)

View attachment 81449

AI does well on simple things, but when you give it more complex commands it can currently do things like generate people with three arms.
 

Stevicus

Veteran Member
Staff member
Premium Member
This is just another example of AI being used, this time within the art industry.

View attachment 81447
So the AI has turned the drawing on the left into the image on the right. And this technology is also still only in its beginning.

This is a higher-quality image of an AI-generated human, created simply by typing in a text prompt.
View attachment 81448

So again, it doesn't require a lot of imagination to figure out what impact this could have on the art/photography industry as a whole. And there are lots of examples of this; it is very easy to do and to get high-quality images. You don't have to travel to a location, set up all the equipment, hire a big crew, etc. You just type what you want into a text prompt and you get the image.

Imagine you were a clothing designer; what does it matter, if people can't tell the difference anyway? (Everything in the image below is AI generated.)

View attachment 81449

I sometimes wonder if a lot of the panic over AI is coming from those motivated by self-interest. After all, considering all the possibilities here, a lot more than simply menial or blue-collar workers will be replaceable. It seems fashion designers, writers, computer programmers, doctors, lawyers, and any number of other occupations could conceivably be put out of work.

Nobody cried all that much when John Henry was defeated by the steam drill, and as machines replaced human labor, it seemed to affect mainly those at the lower end of the economic spectrum. But it was seen as progress. When factory workers and others were thrown out of work due to outsourcing, a lot of people said, "Well, they can just learn new skills and get different jobs doing something else." There were many who did just that. Meanwhile, society has adjusted to paying at the pump, using automatic teller machines, and self-checkout at stores, and now there are even totally automated fast-food restaurants and driverless taxis.

But now, it's more than just the lower-paid jobs at risk; now there are many more professions and jobs which could conceivably be replaced by machine. And that probably scares a lot of people in certain occupations.
 