
When we can't keep up anymore, then what? (AI)

Stevicus

Veteran Member
Staff member
Premium Member
Sorry, I think I overlooked your reply.

This is where it gets interesting.

Let's assume that the AI is sort of like in the style of the new Blade Runner, if you have seen that, where the main character has an AI girlfriend. If the AI can make you feel something, even if it is just acting so much like a human that you can't tell the difference, would it matter? And could we even tell whether it was sentient or not? How would we even test it?

I've seen the old Blade Runner. I didn't know there was a new one. But I saw Blade Runner as another variation on Pinocchio, albeit with a lot of cool visual and special effects.

Another AI-related sci-fi was the more comedic Short Circuit, in which a robot is struck by lightning and somehow gains a sense of awareness and sentience, which it is able to articulate and assert. "Number Five is alive," as the line went. As it was a robot, it certainly didn't look human at all, but there's a scene where his creator is talking with him and questioning him, in order to test whether he's really sentient. The clincher, apparently, was when he told a joke and Number Five burst out laughing, which was viewed as confirmation that he was, in fact, sentient and alive.

So, perhaps a concept like humor might be beyond AI's capabilities.

It's very difficult to speculate about, I think, because we are aware that it is an AI when we interact with it, and in general it doesn't act like it cares for us personally, if you know what I mean. You can have some very interesting chats with ChatGPT, but not to the point where you think it is a human.

But it will be interesting to see, when these get into support tasks and you speak with them over chat or phone, how the average person will react. I think the majority of us will be unable to tell the difference, but I still think we would say things like "Thanks for the help", "Have a good day", etc., even though it would be meaningless to the AI in a greater sense.

I find most computerized attempts at customer service to be woefully lacking. I've tried communicating with organizations and companies through their customer service or support lines, and I find that, unless you're calling about something very basic and easy, it's a very befuddling and frustrating experience. It still becomes necessary to speak to a human, because the AI interface over the phone just isn't there yet - not even close. I know this from my own experience.

They are copying or duplicating the human way of thinking. To explain it a bit simply, the normal way we as humans do things is that we problem-solve: we have an issue, big or small, and we arrive at some sort of solution for how best to deal with it, whether that is to walk or drive when we have to buy groceries. There are a lot of considerations being made here, even though it might sound like a trivial task: How is the weather? How much can you carry? How much time does it take? And based on all these things we decide to do something. Essentially, that is what they are trying to make the AI do as well, whereas in traditional programming we would write something like "if weather is bad, then take car" and the computer does that without questioning.

So when you suddenly are faced with a computer that thinks as we do, it might arrive at other conclusions than we will, depending on what information it thinks is important. If that makes sense?
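Just to illustrate the contrast very roughly, here is a toy sketch. The factors, weights and function names are completely made up on my part; this is not how any real AI is built, only the difference between a fixed rule and weighing things up:

def choose_transport_rule(weather_is_bad):
    # Traditional programming: one hard-coded rule, nothing is weighed.
    return "car" if weather_is_bad else "walk"

def choose_transport_weighing(weather_score, load_kg, minutes_available):
    # "Human-style" decision: score an option against several considerations
    # and pick whatever comes out best, which may not be what the programmer expected.
    walk_score = weather_score - load_kg / 5 + minutes_available / 30
    car_score = 1.0  # the bland, always-available baseline
    return "walk" if walk_score > car_score else "car"

print(choose_transport_rule(weather_is_bad=True))  # -> "car", always, no questions asked
print(choose_transport_weighing(weather_score=2.0, load_kg=3, minutes_available=60))  # -> "walk"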

The big question is whether we are going to look at these AIs the way we do a GPS or not. And if not, wouldn't we consider them to be more than just tools?

The human way of thinking is not necessarily the most efficient. We're not machines, and sometimes, humans can be distracted from a given task or start daydreaming. Sometimes, physical conditions can affect our thinking processes, such as hunger, thirst, sleeplessness, pain - things that AI would have no knowledge or experience with.

Human minds can also be cluttered with a lot of random memories. I find I end up remembering things I would rather forget, and forgetting things I would like to remember. I wish my brain had the ability to "save" and "delete" files like we have with computers.
 

Nimos

Well-Known Member
Thinking it over, I think the only way it might actually work is to give Universal Basic Income to the people who are working as well.

Otherwise, it could create problems, I think.
You could have different incomes between people, but I don't see how it could be a functional economic system when there is a fixed income, because we still have to buy stuff from each other, like natural resources, which I guess could be traded instead without ever mentioning the word money. Basically, a you-get-one-apple-and-I-get-one-orange kind of deal. But this leaves certain countries in a pickle if they don't have valuable resources.

I honestly have no clue how things should function when this happens, but I can't see how it won't happen. AI is here, and in the video about Tesla, Elon Musk makes it very clear that he believes this is the future of Tesla. I think he is right: everyone would want robots, and as he says, the market could be billions of robots, probably 2 to 1 robots vs. humans, if they can even manage that. Eventually they are going to be more agile, stronger, more precise, etc. than any human, so why wouldn't you get them to build buildings, clean the streets, serve burgers at McDonald's - basically any task that a human could do?
 

Nimos

Well-Known Member
I've seen the old Blade Runner. I didn't know there was a new one. But I saw Blade Runner as another variation on Pinocchio, albeit with a lot of cool visual and special effects.

Another AI-related sci-fi was the more comedic Short Circuit, in which a robot is struck by lightning and somehow gains a sense of awareness and sentience, which it is able to articulate and assert. "Number Five is alive," as the line went. As it was a robot, it certainly didn't look human at all, but there's a scene where his creator is talking with him and questioning him, in order to test whether he's really sentient. The clincher, apparently, was when he told a joke and Number Five burst out laughing, which was viewed as confirmation that he was, in fact, sentient and alive.

So, perhaps a concept like humor might be beyond AI's capabilities.

This is from the "new" one, with his AI girlfriend, which could potentially work something like that. I do, however, think that they will be physical robots.


I think the goal will be to make them as human-like as possible because that is appealing to us and that is what we want to interact with.


These are early stages, but I don't think anyone is in doubt about what the aim is when you see this. Maybe when this AI really takes hold and becomes super intelligent, it might be able to solve the problems with how these robots walk and move in general.

The human way of thinking is not necessarily the most efficient. We're not machines, and sometimes, humans can be distracted from a given task or start daydreaming. Sometimes, physical conditions can affect our thinking processes, such as hunger, thirst, sleeplessness, pain - things that AI would have no knowledge or experience with.

Human minds can also be cluttered with a lot of random memories. I find I end up remembering things I would rather forget, and forgetting things I would like to remember. I wish my brain had the ability to "save" and "delete" files like we have with computers.
They do actually drift, or hallucinate as they call it, when they suddenly come up with weird or wrong things.

While leading AI experts aren't entirely sure what causes hallucinations, there are several factors that are often cited as triggers. First, hallucinations can occur if the training data used to develop the model is insufficient or includes large gaps leading to edge cases that are unfamiliar to the model.

But you can read more about it here:

Whether they will keep doing this as the datasets expand, I don't know.

Sure, they would have to emulate emotions, but whether they have to react to them I don't know. Take a robot firefighter: we don't want it to be afraid of fire, we want it to go into the fire, put it out, and save whoever might be stuck in there.

I actually saw a video about human thinking vs. AI, and I think it is a benefit that humans think the way we do rather than the way an AI does; the two would probably work well together, especially in fields like the sciences. But when it comes to most tasks, humans can't compete; it is just a biological limitation. We need sleep, rest, food, vacations, etc. So in that sense, there is not going to be any competition once robots can move like we can.

It will probably be possible in the future to save and delete things from the brain.

Artificial intelligence can create images based on text prompts, but scientists unveiled a gallery of pictures the technology produces by reading brain activity. The new AI-powered algorithm reconstructed around 1,000 images, including a teddy bear and an airplane, from these brain scans with 80 percent accuracy.

So it's not unthinkable that you could stimulate the brain somehow to get rid of certain memories or whatever. Probably not any time soon, but there is so much crazy stuff going on that I think the biggest mistake is to assume that something is not possible.

In theory, you could make a digital version of yourself, which might be somewhat primitive, but the tools already exist. You can use AI generative graphics to scan yourself and pictures you might have taken over the years, so it can recreate you visually and even some of your memories. You can make it emulate your voice as well, and lastly you could install an AI locally on your machine and train it on yourself: your life experiences, desires, personality, beliefs, etc.
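Just to show roughly what I mean by the "train it on yourself" part, here is a sketch. Everything in it is an assumption on my part: the file name "my_life.txt", the tiny gpt2 model standing in for whatever you can actually run locally, and the training settings. It is a toy, not a recipe:

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for any small language model you can run locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "my_life.txt" is a hypothetical plain-text dump of your own writing:
# old posts, diaries, emails, notes about your beliefs and experiences.
dataset = load_dataset("text", data_files={"train": "my_life.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_digital_self",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # Causal language modelling: the model simply learns to continue your text.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Afterwards you can ask "yourself" something.
prompt = "What do I really think about AI companions? "
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0]))

Whether the result would actually sound like you is another question, but the point is that none of this requires exotic tools.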

It sounds crazy to even think about, but you could actually do it if you wanted.

This girl apparently made an AI version of herself, and she makes a good amount of money on it :D
 

Stevicus

Veteran Member
Staff member
Premium Member

AI might be an effective tool with numerous potential applications, but I just can't see it as a substitute for genuine human connections and relationships. I never could see or understand the fascination that some people have with these "sex dolls." It seems there would be easier and cheaper ways to masturbate, but to each their own. An "AI girlfriend" seems along the same lines. That Blade Runner clip also looked kind of freaky.

If they can feel emotions, like love, then they might also become insanely jealous.
 

Nimos

Well-Known Member
AI might be an effective tool with numerous potential applications, but I just can't see it as a substitute for genuine human connections and relationships. I never could see or understand the fascination that some people have with these "sex dolls." It seems there would be easier and cheaper ways to masturbate, but to each their own. An "AI girlfriend" seems along the same lines. That Blade Runner clip also looked kind of freaky.

If they can feel emotions, like love, then they might also become insanely jealous.
I'm not saying whether I agree that these things are a good idea or not, simply that it will happen. If people can be satisfied with just a person on a video and a chat, then I don't see how they couldn't be with an AI like in the Blade Runner video, or with an actual robot.

I think my initial question or point was whether we will even care when it comes to it. At the moment it is all new, so it is very awkward and weird. But when the first porn images and movies appeared, I would imagine that the majority of people saw them as filth and weird as well, because they pushed the norms or the boundaries. Then sex toys emerged, and that was probably weird as well. But looking at it today, hardly anyone thinks twice about there being porn; it is basically one click away, freely available, and I think the majority of people find it pretty normal.

These AIs/robots just add another level to all of this, a more intimate side that you won't find in porn. But if these AIs/robots can eventually fulfil a need in humans, is it really any different from, for instance, some people treating their animals as if they were their children and talking to them as if they were, and so forth? I find that weird, which might be because I don't have any pets, but not to the point where I don't understand why they do it.
So if you could get an AI/robot that was "designed" to match you, that cared to listen to you, wouldn't cheat, challenged you intellectually at the right level, etc., and we assume that these reach the point of something like in Blade Runner, then I don't know how humans would react. I think it is too early to draw a conclusion about how one would react if you essentially couldn't tell the difference.

It seems more acceptable for people to treat animals like "humans", while treating a robot like one is not. I'm not saying whether I agree or not, because again I have no clue what to really think about it, except that I think our view will change a lot, simply because it always has, so I see no reason why that would suddenly change.

Another example, even though it is a long time ago now: do you remember Second Life? It was a game of sorts where people would basically live virtual lives; they created shops, bought houses, etc. It was pretty crappy by today's standards, but I think it demonstrated that a lot of people really got into this. So it doesn't take much for people to get hooked on this, and the new things that are about to hit are far beyond Second Life.

I just looked it up, and it apparently still exists; I'm not sure how popular it is. But people can go there and party and do whatever they do in there.

 