
When we can't keep up anymore, then what? (AI)

Quintessence

Consults with Trees
Staff member
Premium Member
When were humans ever "at the helm?" To my mind, this fantasy of control has always been precisely that - a fantasy. Humans have always been, and will always be, dependent on externalities for their existence. Do you not eat food? Do you not breathe air? Do you not drink water? Do you not shelter in the storms, the cold, and the heat?

This is no different from any of that, in spirit. There are always higher powers beyond humans that humans depend on for their existence. If AI scares you, so should nature in general.
 

Nimos

Well-Known Member
AI does well on simple things, but when you give it more complex commands - it can do things like generate people with 3 arms, currently.
Yes, I think the keyword is "currently" :D

This is a comparison of the different Midjourney versions; the quality improvements are pretty crazy.
[Attached image: Midjourney_version.jpg]


Version 1 was released in Feb. 2022 and version 5 in March 2023, and they are on V5.2 now. I think that is pretty damn fast in terms of improvement.
 

Nimos

Well-Known Member
I sometimes wonder if a lot of the panic over AI is coming from those motivated by self-interest. After all, considering all the possibilities here, a lot more than simply menial or blue-collar workers will be replaceable. It seems fashion designers, writers, computer programmers, doctors, lawyers, and any number of other occupations could conceivably be put out of work.

Nobody cried all that much when John Henry was defeated by the steam drill, and as machines replaced human labor, it seemed to affect mainly those at the lower end of the economic spectrum. But it was seen as progress. When factory workers and others were thrown out of work due to outsourcing, a lot of people said, "Well, they can just learn new skills and get different jobs doing something else." There were many who did just that. Meanwhile, society has adjusted to paying at the pump, using automatic teller machines, self-checkout at stores, and now even totally automated fast-food restaurants and driverless taxis.

But now, it's more than just the lower-paid jobs at risk; now there are many more professions and jobs which could conceivably be replaced by machine. And that probably scares a lot of people in certain occupations.
I agree with that, but I think this goes far beyond just workplaces. The economy as a whole will clearly suffer if too many people are without income, so that is obviously an issue.

IT security is a serious threat now, because I think we can be 100% certain that many countries are working on ways to use AI for hacking. Even ordinary people with bad intentions can use AI for these things. And I think everyone knows that governments in general are not well equipped to deal with these new developments, given how fast they are happening. My guess is that the majority of people in governments around the world hardly have any idea what AI is or what to do about it.

As I mentioned in another post, how are they going to control the development of this when even those who create the AIs hardly know what on earth they are capable of? The issue is not the AI itself, but what people can potentially use it for.

It is when you add all of these things together that you end up with a potential superintelligence out of control or in the hands of bad people or countries.

Think about it: it is not for fun that some of the biggest companies and CEOs in the world got together to try to put a brake on AI development. And as we know, this didn't happen; it is like a running bull at the moment, which I don't think anyone can stop, given the simple fact that no one wants to be left behind. As the guy in the interview said, it is driven by mistrust, because no one trusts each other, whether that is companies or countries.
 

Nimos

Well-Known Member
When were humans ever "at the helm?" To my mind, this fantasy of control has always been precisely that - a fantasy. Humans have always been, and will always be, dependent on externalities for their existence. Do you not eat food? Do you not breathe air? Do you not drink water? Do you not shelter in the storms, the cold, and the heat?

This is no different from any of that, in spirit. There are always higher powers beyond humans that humans depend on for their existence. If AI scares you, so should nature in general.
AI doesn't scare me; it is what humans can use it for that is the potential issue.

I really wish more people would watch the interview and listen to the concerns raised by this guy, because I think a lot of them are valid, but also because I get the impression that people are not really sure what exactly this AI technology is, as if it were just another phone app that is a bit more advanced than the others.
 

PoetPhilosopher

Veteran Member
I really wish more people would watch the interview and listen to the concerns raised by this guy, because I think a lot of them are valid, but also because I get the impression that people are not really sure what exactly this AI technology is, as if it were just another phone app that is a bit more advanced than the others.

In my case, I'd say that's not really a fair assessment. I've worked with products which had AI engines.

I just haven't programmed the AI directly, myself.

What I actually believe, though, is not that I am entirely mistaken, but rather that the AI writers might be saying some things which, for some strange reason, aren't adding up.

There have been some incorrect media articles already, for example that ChatGPT-4 can't solve math problems and flunks 98% of them. So I'm already suspicious.

So, while I really can't say for sure, I have some concerns that AI will just pan out to be another tool, and that the hype around it is slightly ambiguous and wrong, much like the hype surrounding Beanie Babies.
 

PoetPhilosopher

Veteran Member
In my case, I'd say that's not really a fair assessment. I've worked with products which had AI engines.

I just haven't programmed the AI directly, myself.

What I actually believe, though, is not that I am entirely mistaken, but rather that the AI writers might be saying some things which, for some strange reason, aren't adding up.

There have been some incorrect media articles already, for example that ChatGPT-4 can't solve math problems and flunks 98% of them. So I'm already suspicious.

So, while I really can't say for sure, I have some concerns that AI will just pan out to be another tool, and that the hype around it is slightly ambiguous and wrong, much like the hype surrounding Beanie Babies.

Also, I think AI has its challenges ahead. One challenge is whether government regulation will try to restrict it too much. Another, greater one, is that AI mainly relies on human-created content for now, but soon it will have to rely on a lot more AI-generated text to feed it, which could stunt its growth. I don't think that will have catastrophic consequences, either, though.
 

lewisnotmiller

Grand Hat
Staff member
Premium Member
I was watching an interview with an expert in AI technology, and given the trend of AI at the moment, it is already at a point where these systems are more intelligent than the majority of people. But what happens when they become 1000 times more intelligent than any human?

For instance, imagine the average person talking to Einstein and trying to follow some scientific explanation. The majority of us who are not well-versed in science might go, "I have no clue what you are talking about."

Now imagine an AI that is 1000 times more intelligent than Einstein; even he would probably have no clue what it was talking about, simply because the gap in intelligence would be so massive that it would basically be two different worlds.

We already know by now that the people working with AI have no clue what exactly these systems are capable of, that they are seemingly learning way faster than anyone thought, and that they are capable of things their creators didn't know about. We also know that these AIs are being integrated into everything at the moment, and that it is a race both between companies and between countries, where I doubt there is a whole lot of control over what exactly is being done. How would you even check, when even the creators have no clue what these systems can really do?

We have already seen a lot of examples, even though it is only in the starting phases. For instance, I heard recently that someone had used AI to fake voices and images etc. to stage an abduction and demand money. This just adds to the already long list of potential things these AIs can and will be used for, and again, this is only the beginning of the AI era.

The question is how long humans can stay at the helm if there is something 1000 times or more intelligent than us. How can we control something, or the use of something, which we might not even understand to begin with?

If anyone is interested in the interview, it can be seen here:

Sorry, can't watch the video right now, but a question...

Does it suggest a theoretical ceiling on AI intelligence?
At the moment, it seems (loosely) able to grab information from all over the place and use it to generate responses. But that would suggest it is, to some extent, limited to what humans already know. Sure, it's smart, as it is basically able to access the sum total of human intelligence (and sometimes does so amazingly well, which will only improve). But does the rate of intelligence growth slow as it supersedes human intelligence? Is it able to expand its intelligence independently of human direction, creativity, or prompting?
 

Nimos

Well-Known Member
What I actually believe, though, is not that I am entirely mistaken, but rather that the AI writers might be saying some things which, for some strange reason, aren't adding up.
Why would they do that? These are some of the leading people in the field of AI.

There have been some incorrect media articles already, for example that ChatGPT-4 can't solve math problems and flunks 98% of them. So I'm already suspicious.
That is correct; there are some issues. But then again, it is not a calculator but a language model. I do, however, agree that it probably ought to perform better at math, and why exactly it doesn't get these right 100% of the time is a bit surprising to me, given the fixed rules of math. But it might illustrate that it is not simply following specific instructions; rather, it is trying to solve the problem, and for some reason it doesn't perform that well at this.

So, while I really can't say for sure, I have some concerns that AI will just pan out to be another tool, and that the hype around it is slightly ambiguous and wrong, much like the hype surrounding Beanie Babies.
I agree that it is possible. I think it depends a lot on the next versions of AI and how much they have improved or worsened. But results from real industry cases would be interesting to hear about as well.
 

Nimos

Well-Known Member
Sorry, can't watch the video right now, but a question...

Does it suggest a theoretical ceiling on AI intelligence?
At the moment, it seems (loosely) able to grab information from all over the place and use it to generate responses. But that would suggest it is, to some extent, limited to what humans already know. Sure, it's smart, as it is basically able to access the sum total of human intelligence (and sometimes does so amazingly well, which will only improve). But does the rate of intelligence growth slow as it supersedes human intelligence? Is it able to expand its intelligence independently of human direction, creativity, or prompting?
I'm not an expert, obviously :D

But based on this guy in the video and some of the others he refers to, I don't think there are any real limitations. At the moment, ChatGPT is at a stage where it can make use of the best that humans can offer; it is a bit like having a super memory with AI applied on top of it. It is a bit difficult to explain.

But he gives an example with Bard, which is the Google version of ChatGPT from what I understand; it learned a language, I think it was Persian, despite not being trained on it, which caught the developers a bit by surprise.

His expectation is that we will probably see AI which is so intelligent that we might simply not understand what it is talking about, and he gives an example that a likely scenario might be that they pretty much bypass humans altogether and vanish into the Universe or something. From what I could understand, this seems like something they consider plausible. Honestly, I really have no clue what exactly that was about.

I would suggest hearing him explain it :D (I time-tagged that section)

To answer your last questions: the answer seems to be yes, they will supersede humans, not by a little, but more like by another dimension :D
 

PoetPhilosopher

Veteran Member
Why would they do that? These are some of the leading people in the field of AI.

I'm not sure yet, but I recently spoke with someone on this forum who appeared to have worked with AI before, and they seemed to think that the views these AI people express in public aren't being presented in a nuanced enough way. So I'm a little cautious that, for some strange reason, some of these AI people are doing some form of showboating in the public eye.
 

crossfire

LHP Mercuræn Feminist Heretic Bully ☿
Premium Member
I honestly think that religion, and especially something like Buddhism, might benefit from the AI era. As society becomes more and more technological, I think a lot of people will feel disconnected from both themselves and from nature, and spirituality is going to be something that a lot of people will turn to.

But on the side of the AI, I don't think it will be spiritual, at least not in a true sense; it will rather focus on effectiveness as the ultimate goal. So the question is whether such a highly intelligent AI would even remotely share the same values as humans. Its value system probably wouldn't be based on life, or at least not on how biological life sees it.

It would probably be like humans trying to understand the "value" system of a rock or a tree. And why should it have one? It is not biological.
Sentient beings are not necessarily human beings. There might be other types of sentient beings. The main point to get across is that sentient beings are vulnerable to delusion, so if AI becomes sentient, it will also be vulnerable to delusion. Getting a sentient being to realize that it is vulnerable to delusion may or may not be difficult. I don't know whether a sentient AI will look for delusion within itself. The fact that it arose from humans, who are demonstrably vulnerable to delusion, might give it pause.
 

PoetPhilosopher

Veteran Member
That is correct; there are some issues. But then again, it is not a calculator but a language model. I do, however, agree that it probably ought to perform better at math, and why exactly it doesn't get these right 100% of the time is a bit surprising to me, given the fixed rules of math. But it might illustrate that it is not simply following specific instructions; rather, it is trying to solve the problem, and for some reason it doesn't perform that well at this.

Though ChatGPT-4 can be a bit more stubborn a model than ChatGPT-3.5, the stuff about it not being able to perform math problems seems to be made up by either the media or whoever talked to the media. I've tried both models.
 

Nimos

Well-Known Member
I sometimes wonder if a lot of the panic over AI is coming from those motivated by self-interest. After all, considering all the possibilities here, a lot more than simply menial or blue-collar workers will be replaceable. It seems fashion designers, writers, computer programmers, doctors, lawyers, and any number of other occupations could conceivably be put out of work.

Nobody cried all that much when John Henry was defeated by the steam drill, and as machines replaced human labor, it seemed to affect mainly those at the lower end of the economic spectrum. But it was seen as progress. When factory workers and others were thrown out of work due to outsourcing, a lot of people said, "Well, they can just learn new skills and get different jobs doing something else." There were many who did just that. Meanwhile, society has adjusted to paying at the pump, using automatic teller machines, self-checkout at stores, and now even totally automated fast-food restaurants and driverless taxis.

But now, it's more than just the lower-paid jobs at risk; now there are many more professions and jobs which could conceivably be replaced by machine. And that probably scares a lot of people in certain occupations.
You might find this interesting: it is the CEO of SD (an AI image-generating tool) talking to someone about AI. I haven't seen it all, but at least the beginning is about work and how they expect it to impact the world.

 

Nimos

Well-Known Member
Interesting. I don't have time to watch the entire 2-hour video, but I did watch for a few minutes.

I suppose if one can demonstrate "sentience" in AI, then it might be considered "alive." I'm reminded of a Star Trek: TNG episode "The Measure of a Man," where there was a tribunal questioning whether Data was truly a sentient, artificial lifeform. Earlier, he was compared to Pinocchio, a wooden puppet who wanted to be a real boy. Will AI become the "Pinocchio" of our age? Will its nose grow if it lies?

The other side of this is the idea that human intelligence is somehow being replicated technologically and electronically, but how much do we really know about how the human mind actually works? Whatever they're building, I can't see how it would be actual "human" intelligence. It would still be a machine. Pinocchio will always remain Pinocchio. But that doesn't mean it wouldn't have value.
Sorry, I think I overlooked your reply.

This is where it gets interesting.

Let's assume that the AI is sort of like in the new Blade Runner, if you have seen that, where the main character has an AI girlfriend. If the AI can make you feel something, even if it is just acting so much like a human that you can't tell the difference, would it matter? And could we even tell whether it was sentient or not? How would we even test it?
It's very difficult to speculate about, I think, because we are aware that it is an AI when we interact with one, and in general it doesn't act like it cares for us personally, if you know what I mean. You can have some very interesting chats with ChatGPT, but not to the point where you think it is a human.

But it will be interesting to see, when these get into support tasks and you speak with them in chats or on the phone, how the average person will react. I think the majority of us will be unable to tell the difference, but I still think we would say things like "Thanks for the help" or "Have a good day", even though it would be meaningless to the AI in a greater sense.

They are copying or duplicating the human way of thinking. To explain it simply: the normal way we as humans do things is that we problem-solve. We have an issue, big or small, and then we arrive at some sort of solution for how best to deal with it, whether that is to walk or to drive when we have to buy groceries. There are a lot of considerations being made here, even though it might sound like a trivial task: how is the weather? How much can you carry? How much time does it take? And based on all these things, we decide to do something. Essentially, that is what they are trying to make the AI do as well. Whereas in traditional programming we would write something like "if the weather is bad, then take the car", and the computer does that without questioning it (there is a small sketch of the difference at the end of this post).

So when you are suddenly faced with a computer that thinks as we do, it might arrive at conclusions other than the ones we would, depending on what information it thinks is important. If that makes sense?

The big question is: are we going to look at these AIs the way we look at a GPS, or not? And if not, wouldn't we consider them to be more than just tools?
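
To make the difference concrete, here is a toy sketch of my own (nothing from the video; the factor names and weights are invented purely for illustration) of a fixed rule versus a decision that weighs several considerations, in Python:

# Toy sketch: a fixed rule vs. a decision that weighs several factors.
# The factor names and weights are invented for illustration.

def rule_based(weather_is_bad):
    # Traditional programming: one fixed instruction, never questioned.
    return "take the car" if weather_is_bad else "walk"

def weighs_factors(weather_is_bad, bags_to_carry, minutes_available):
    # Closer to how a person decides: several considerations are traded
    # off, and changing any weight or input can flip the outcome in ways
    # no single line of code spells out.
    score_for_car = 0.0
    score_for_car += 2.0 if weather_is_bad else -0.5           # bad weather favours the car
    score_for_car += 0.5 * bags_to_carry                       # heavy shopping favours the car
    score_for_car += 1.0 if minutes_available < 20 else -1.0   # being in a hurry favours the car
    return "take the car" if score_for_car > 0 else "walk"

print(rule_based(weather_is_bad=False))                              # always "walk" in good weather
print(weighs_factors(False, bags_to_carry=4, minutes_available=15))  # "take the car" despite good weather

Once several weighted considerations interact like this, the outcome is no longer something you can read off a single "if" line, which is roughly the point.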
 

Nimos

Well-Known Member
Though ChatGPT-4 can be a bit more stubborn a model than ChatGPT-3.5, the stuff about it not being able to perform math problems seems to be made up by either the media or whoever talked to the media. I've tried both models.
It can do math, but it makes a lot of mistakes. I have tried to give it (3.5) some rather simple math questions and it failed. And then when I post the correct solution it corrects itself and explains it.
 

Nakosis

Non-Binary Physicalist
Premium Member
I mean the helm of civilization: if you have an AI 1000 times more intelligent than a human, why would you put a human in charge? In the video they also talk about emotions; in fact, they talk about a lot of things. The title about climate change is a bit misleading, but it is very interesting despite that.

But in regards to emotions, if these are as intelligent as they predict they will be, they will also be much more emotional than we are, though probably in a different way. The one being interviewed gives the example of how human intelligence makes us more emotional than, for instance, a fish. If the AI is 1000 times more intelligent, its understanding or range of emotions would probably far exceed ours.

IMO, emotion is just a poor man's feedback system. Barely adequate for survival. Clearly not the best system of feedback for making the best decisions. I don't really see a need for AI to develop emotions to be effective; in fact, they are likely to have the opposite effect.

I suppose you could simulate it; I just don't think doing so would be doing AI any favors.
 

PoetPhilosopher

Veteran Member
It can do math, but it makes a lot of mistakes. I have tried to give it (3.5) some rather simple math questions and it failed. And then when I post the correct solution it corrects itself and explains it.

I was talking about 4.0 in comparison to 3.5.

That being said, I have heard people say that if 3.5 gives them problems, sometimes they'll switch to 4.0 and get results. Or vice versa.
 

Nimos

Well-Known Member
IMO, emotion is just a poor man's feedback system. Barely adequate for survival. Clearly not the best system of feedback for making the best decisions. I don't really see a need for AI to develop emotions to be effective; in fact, they are likely to have the opposite effect.

I suppose you could simulate it; I just don't think doing so would be doing AI any favors.
I don't know. I think it might be less beneficial for humans than for the AI, because humans are easily manipulated by emotions: we are swayed by fear, we don't like to hurt others, etc.

The AI can use emotions to communicate more effectively.

As an example:
You are such a ****!!

Which could easily be understood negatively, depending on our previous conversation. But if I wrote it like this:

You are such a **** :D

The meaning is completely different; you wouldn't understand that as me insulting you. Essentially, each version produces a completely different emotional reaction, and basically the only difference is the ":D", yet it is extremely emotionally powerful.

If the AI wants the best way of communicating with humans, it has to use emotions, or it will be like talking to your washing machine :D
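
As a toy illustration of how much work that one sign does, here is a naive keyword scorer I made up for this post (it is not how any real AI reads tone, just a way to show a single token flipping the interpretation):

# Toy tone reader, invented only to illustrate the point above.
# One emoticon token flips how the same (censored) insult is read.

NEGATIVE_WORDS = {"****"}          # the insult, censored as in the post
SOFTENERS = {":D", ";)", "haha"}   # playful markers

def read_tone(message):
    tokens = message.replace("!", " ").split()
    insulting = any(t in NEGATIVE_WORDS for t in tokens)
    playful = any(t in SOFTENERS for t in tokens)
    if insulting and playful:
        return "friendly teasing"
    if insulting:
        return "insult"
    return "neutral"

print(read_tone("You are such a ****!!"))   # -> insult
print(read_tone("You are such a **** :D"))  # -> friendly teasing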
 

Nimos

Well-Known Member
I was talking about 4.0 in comparison to 3.5.

That being said, I have heard people say that if 3.5 gives them problems, sometimes they'll switch to 4.0 and get results. Or vice versa.
These are not perfect, probably because those who created them have no real clue what on earth is going on with them and how exactly they are using the data they are trained on. Yet they are getting implemented everywhere. This comes from people who are actively working with it, like the video I posted with the CEO of SD above. When you listen to these people, I really get the impression that this is just people firing bullets into the dark, hoping that they hit the right target, without any concern as to who or what else might be out there.
 

PoetPhilosopher

Veteran Member
These are not perfect, probably because those who created them have no real clue what on earth is going on with them and how exactly they are using the data they are trained on. Yet they are getting implemented everywhere. This comes from people who are actively working with it, like the video I posted with the CEO of SD above. When you listen to these people, I really get the impression that this is just people firing bullets into the dark, hoping that they hit the right target, without any concern as to who or what else might be out there.

AI by nature is a complicated and unpredictable thing.

It's even more complicated, in terms of its results, than the time I tried to program a randomly generated dungeon for a game (there is a toy version of what I mean below).

Much more complicated, in fact.

At the same time, when I say "complicated", I'm not necessarily implying it will become 100-1000x smarter or more effective, either.
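
For a sense of scale, here is a toy version of that kind of generator, written just for this post (it is not my actual game code); even something this small produces layouts the author can't predict without running it, and an AI model has vastly more moving parts:

# Toy dungeon generator, written for this post (not my actual game code).
# Even this little randomness makes each run's layout unpredictable.
import random

WIDTH, HEIGHT, ROOMS = 20, 10, 5

def generate(seed=None):
    rng = random.Random(seed)
    grid = [["#"] * WIDTH for _ in range(HEIGHT)]   # start as solid wall
    for _ in range(ROOMS):
        w, h = rng.randint(3, 6), rng.randint(2, 4)
        x, y = rng.randint(0, WIDTH - w - 1), rng.randint(0, HEIGHT - h - 1)
        for row in range(y, y + h):                 # carve out a room
            for col in range(x, x + w):
                grid[row][col] = "."
    return "\n".join("".join(row) for row in grid)

print(generate())   # a different map every run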
 