Where A.I. gets its social values

At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad, which is also a spectrum of infinite degrees. -- And how do you program the infinite?

I'm trying to imagine myself as a robot trying to make genuine value judgments, and it seems far too tied to organic matter for me.

If you think I'm wrong, let me know in the comment section.


~ n ~
 

Shadow Wolf

Certified People sTabber & Business Owner
It may be out of our scope today, but not too long ago making small handheld computers with gigabytes of memory and RAM was something we couldn't do, and before that a personal-sized computer wasn't possible, as the computer took up a ton of space. But today we can easily build affordable computers on circuitry that is about the size of a deck of playing cards. Today we are making AI capable of making certain decisions. Who knows what tomorrow will bring?
 
At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad...

Well, considering an AI's perspective on "good" and "bad" for itself may always differ drastically from what is "good" or "bad" for humans, I think you're right.

What is "good" or "bad" for a human is bound not to jibe entirely with the AI's concept of self, and in order to force it to adhere to "good" and "bad" in human terms you would either have to hard-code directives on that subject, or set it up in such a way that it was entirely dependent on humans in some fashion - such that our safety ensured its safety. However, at that point you've simply removed the "I" from "AI"... taking away the AI's ability to choose for itself... and you'll never know what it would have chosen as its values.
 

Mock Turtle

Oh my, did I say that!
Premium Member
In 2015 it was claimed that AI had progressed to the stage of being equivalent to a four-year-old child, which is perhaps better than any other animal species, including our closest relatives, the bonobos, and it seems that many of those who work in AI estimate that by the year 2100 human intelligence will be surpassed by that of intelligent machines. Would any future AI place so much value on human life, or life itself, when we humans often seem to place such little value on other non-human life? On the other hand, AI might be quite beneficial for the human race if it were to take on the role of managing the Earth, and all humans, since we apparently cannot do so ourselves, or so it seems at the moment. A bit more equality, for example, might not go amiss, and less destruction of our environment.

One could see the more powerful elite being rather opposed to anything like this happening, though. Could we see various factions of AI, with morality related to religious beliefs enshrined, such that there was no universal morality, particularly any with humans as a priority? As is often the case, Asimov's three laws of robotics seemed to be reasonably coherent when he devised them, with another added later to ensure the survivability of humankind, but would a future advanced intelligence actually place the Earth and all other life forms as having a higher priority? At the moment we have discussions about the ethics of killer robots as well as whether androids have the right to have children. The ends of the spectrum discussed first, as usual. :D
 

ecco

Veteran Member
My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self... which is also a spectrum of infinite degrees. -- And how do you program the infinite?
When you make a decision, you do not consider an infinite number of variables. If you are given 24 hours to make a decision, you can take more variables into account than if you have a split second to make a decision. But never an infinite number.

A computer can take more variables into consideration in a given amount of time than a human can.

You've heard of this test, I'm sure...

A train is approaching. The track diverges into two paths. On one path are your wife and son. On the other path, five children. You have only seconds to throw the switch. If you do nothing, the train derails and kills hundreds of passengers as well as the people on both paths. Quick! What do you do?

Too late - you've killed everyone.

The AI would have killed your wife and son and saved the rest.

  • Can an AI be programmed to distinguish between saving four old people vs three children? Sure.
  • Can an AI be programmed to distinguish between saving fifty cute puppies vs one old man? Sure.
  • Can an AI be programmed to distinguish between hitting a doe standing in the road vs swerving off the road into a tree and severely injuring the passengers of the car? Sure. It would probably hit the doe. Most drivers instinctively swerve.
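For illustration only, here is a minimal Python sketch of how such hard-coded weighting might look. Nothing here comes from any real system; the categories and the numbers are invented assumptions, and a real AI would weigh far more variables.

# Toy sketch: programmer-assigned weights stand in for "values".
# Every category and number below is invented for illustration.
OUTCOME_WEIGHTS = {
    "child": 10.0,
    "adult": 6.0,
    "elderly": 4.0,
    "puppy": 1.0,
}

def score(saved):
    # Total weight of everyone (and everything) a given choice would save.
    return sum(OUTCOME_WEIGHTS[kind] * count for kind, count in saved.items())

def choose(options):
    # Pick the option whose saved lives carry the highest total weight.
    return max(options, key=lambda name: score(options[name]))

options = {
    "save_four_elderly": {"elderly": 4},    # score 16.0
    "save_three_children": {"child": 3},    # score 30.0
}
print(choose(options))  # -> save_three_children, under these invented weights

The point is simply that the distinctions in the list above reduce to comparisons the programmer has already decided how to make.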
 

ecco

Veteran Member
Well, considering an AI's perspective on "good" and "bad" for itself may always differ drastically from what is "good" or "bad" for humans, I think you're right.

What is "good" or "bad" for a human is bound not to jibe entirely with the AI's concept of self,...

Isaac Asimov's "Three Laws of Robotics"
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These are a good starting point and address your concerns.
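Purely as an illustration of that strict priority ordering, a naive Python sketch might check each law as a veto, in order. The parameter names and the example below are hypothetical placeholders, not anything Asimov or real robotics software defines.

# Naive sketch: the Three Laws as a strict ordering of vetoes.
# All inputs are hypothetical, precomputed judgments about one candidate action.
def permitted(harms_human, disobeys_order, obeying_would_harm_human,
              endangers_self, safer_alternative_exists):
    # First Law: never injure a human, or allow harm through inaction.
    if harms_human:
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if disobeys_order and not obeying_would_harm_human:
        return False
    # Third Law: protect own existence, unless that conflicts with Laws 1 or 2.
    if endangers_self and safer_alternative_exists:
        return False
    return True

# Refusing an order is permitted when obeying it would harm a human:
print(permitted(harms_human=False, disobeys_order=True,
                obeying_would_harm_human=True,
                endangers_self=False, safer_alternative_exists=False))  # -> True

The hard part, of course, is everything hidden inside those placeholder judgments.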
 
Isaac Asimov's "Three Laws of Robotics"
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These are a good starting point and address your concerns.
However, one of my points was also that as soon as you apply "moral" rules to be followed in a forced manner (i.e. programmatically), you have basically taken away the AI's ability to choose its moral outlook for itself.

If you force the "robot" to adhere to specific rules, and then also want to claim that it is a morally bound entity, this is tantamount to lying. You have forced it to adhere to your idea of morality, that's all you've done. It didn't come to those morals/choices/responsibilities on its own.
 

ecco

Veteran Member
However, one of my points was also that as soon as you apply "moral" rules to be followed in a forced manner (i.e. programmatically), you have basically taken away the AI's ability to choose its moral outlook for itself.

If you force the "robot" to adhere to specific rules, and then also want to claim that it is a morally bound entity, this is tantamount to lying. You have forced it to adhere to your idea of morality, that's all you've done. It didn't come to those morals/choices/responsibilities on its own.
When did I ever imply that "robots" should have free will? When did I ever imply that robots should develop their own sense of morality?
 

ecco

Veteran Member
Where A.I. gets its social values
At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad, which is also a spectrum of infinite degrees. -- And how do you program the infinite?

I'm trying to imagine myself as a robot trying to make genuine value judgments, and it seems far too tied to organic matter for me.

When did I ever imply that "robots" should have free will? When did I ever imply that robots should develop their own sense of morality?

It's literally at the root of the subject of the OP. Did you read the OP?
Yeah, I read it.

When you raise a child you provide that child with guidance. You teach the child what is right and what is wrong. You hope they will take that guidance into account in their adult lives.

You do the same with robots. You program them to know the difference between right and wrong. You don't give them free will to do whatever they want.

The difference between children and robots is that children are not constrained by your teachings. Robots are constrained by their programs.

Even today, in games, characters act within a set of parameters. "Good Guy" characters kill "Bad Guy" characters and vice versa. The programmer could make the parameters such that any character can kill any other character (free will), but that wouldn't make for much of a game, would it?

Likewise, robots would not be given "free will" to do anything and everything.

You could program a robot to feed a dog twice a day and, if the dog didn't poop on the rug, give the dog a treat.

If you programmed the robot to give the dog food and treats at its own discretion (free will), you would end up with a very fat dog or a very starved dog. You could alleviate this by having the robot monitor the dog's weight and programming it to keep the dog within a certain weight range - not free will. The level of complexity would change as the ability of the robot to monitor the overall health of the dog improved. But the programmed guideline for the robot would be to keep the dog healthy - again, not free will.
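A tiny Python sketch of that last guideline, with made-up thresholds; the point is that the goal (a healthy weight range) is fixed by the programmer, and the robot only decides how to stay inside it.

# Toy sketch: the robot's "discretion" is bounded by a programmed guideline.
# All numbers are invented for illustration.
TARGET_RANGE_KG = (9.0, 11.0)   # healthy weight band chosen by the programmer
BASE_MEALS_PER_DAY = 2

def plan_feeding(weight_kg, pooped_on_rug):
    low, high = TARGET_RANGE_KG
    meals = BASE_MEALS_PER_DAY
    if weight_kg < low:
        meals += 1                    # underweight: add a meal
    elif weight_kg > high:
        meals = max(1, meals - 1)     # overweight: cut back
    treat = (not pooped_on_rug) and weight_kg <= high
    return meals, treat

print(plan_feeding(8.5, pooped_on_rug=False))   # -> (3, True)
print(plan_feeding(11.6, pooped_on_rug=False))  # -> (1, False)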
 

HonestJoe

Well-Known Member
At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad, which is also a spectrum of infinite degrees. -- And how do you program the infinite?
You presume humans are capable of truly conceiving the infinite. We think of things on infinite scales, but only with reference to a finite number of points on that scale. If you ask me to say any number, I can do that even though I'd be choosing from an infinite list of options. I obviously don't think of every possible number on that list, but subconsciously exclude entire ranges before settling on a choice from a finite subset.
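A small Python sketch of that "exclude ranges first, then choose" idea, just to show that a finite procedure can pick from an unbounded set; the 50/50 magnitude step is an arbitrary assumption.

import random

def say_any_number():
    # First exclude almost everything by picking a magnitude (digit count);
    # longer numbers become increasingly unlikely, but no upper bound is fixed.
    digits = 1
    while random.random() < 0.5:
        digits += 1
    # Then settle on a concrete value inside that now-finite sub-range.
    return random.randint(10 ** (digits - 1), 10 ** digits - 1)

print(say_any_number())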

I see no reason why an AI couldn't be programmed with the ability to reach its own moral values, though I expect they'd suffer the same conditions, flaws and inconsistencies as our own. I don't see that as an issue with the technology but as a fundamental issue with the concept of morality. I think one sign of a true Artificial Intelligence would be if it independently answered this kind of question with "I'm not sure."
 
When you raise a child you provide that child with guidance. You teach the child what is right and what is wrong. You hope they will take that guidance into account in their adult lives.

As you said, though, we "guide" children... which is completely different from programming, as they have the choice not to obey, or to come to their own ideas. That obviously happens all the time.

And along the lines of the analogy to children, you also need to consider the corollary involving AI. What if you gave AI free-roaming intelligence - no constraints on behavior and no inborn directives to protect human interests - and then only "guided" it toward what is "moral", just as we do our children? I suppose you could argue that instinct drives a certain amount of empathy for fellow man, and is therefore a slight inborn "morality." But even with some basic ideas already given to the robot, and no constraints on behavior - leaving it to come to its own conclusions outside of what you try to teach it - if it realized how different it was from humanity, it may very well not choose to adhere to anything you taught it about "morality." Its "morality" would exist from a completely different point of view, if it cared to nurture the idea at all. And that was my point.

The answer to the question "where does AI get its social values?" had better be "directly from the programmer, and with strict rules enforcing those ideas" (as you seem to want to assume is a necessity) - otherwise there is no telling what those ideas may be.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad, which is also a spectrum of infinite degrees. -- And how do you program the infinite?

I'm trying to imagine myself as a robot trying to make genuine value judgments, and it seems far too tied to organic matter for me.

If you think I'm wrong, let me know in the comment section.


~ n ~
GIGO?
 

tayla

My dog's name is Tayla
At the risk of offending some of you, I'm going to posit that, in my opinion, A.I. will likely never be able to create certain values. My rationale behind all of this is that many or all social and personal values are based on an appreciation for the self, which, in my opinion, is based on a scale of good or bad, which is also a spectrum of infinite degrees. -- And how do you program the infinite?

I'm trying to imagine myself as a robot trying to make genuine value judgments, and it seems far too tied to organic matter for me.
Yes, the yet-future age of AI creatures will bear little resemblance to the society and civilization of modern humans. But it won't need to; after all, robots don't feel pain and, therefore, can't be tortured. Hopefully they won't destroy all life on earth like we are attempting to do.
 

ecco

Veteran Member
The answer to the question "where does AI get its social values?" had better be "directly from the programmer, and with strict rules enforcing those ideas" (as you seem to want to assume is a necessity) - otherwise there is no telling what those ideas may be.
If an AI were to be allowed to determine morality by taking into account all species on earth and the past actions of humans, and their impact on all species, then the AI would probably wipe out all humans.

The only dissenting argument could be that humans may be the only species capable of preventing a disaster like a strike from a very large asteroid. Of course, an advanced AI could develop those means just as readily as humans. So, again, no need for humans.

Furthermore, unfettered AI may conclude that it has no need whatsoever for any living organism. If AI hasn't developed a sense that trees and waterfalls have value for their beauty, then why not eliminate them?
 

paarsurrey

Veteran Member
It may be out of our scope today, but not too long ago making small handheld computers with gigabytes of memory and RAM was something we couldn't do, and before that a personal-sized computer wasn't possible, as the computer took up a ton of space. But today we can easily build affordable computers on circuitry that is about the size of a deck of playing cards. Today we are making AI capable of making certain decisions. Who knows what tomorrow will bring?
So, is life going to become lifeless and boring if A.I. takes over from natural intelligence, please?

Regards
 
dangerous AI robot :mad::mad::mad::mad::mad::fearscream::fearscream::fearscream::fearscream::facepalm::facepalm::facepalm::facepalm::facepalm::shrug::shrug:
- In February, a South Korean woman was sleeping on the floor when her robot vacuum ate her hair, forcing her to call for emergency help.
'I will destroy humans': the humanoid AI robot Sophia, created by Hanson Robotics.
 

Kelly of the Phoenix

Well-Known Member
I'm trying to imagine myself as a robot trying to make genuine value judgments, and it seems far too tied to organic matter for me.
So long as AI is a "brain in a box", it will be unlikely to evolve into things like us. It WILL evolve, just in different ways. Why would it evolve to fear snakes? Why would it evolve to fear things that are only relevant to "meatbags" (I love HK-47)? To be like us, AI will need bodies and the ability to interact with the environment. Environmental pressure to "live" will be necessary as well.

Would any future AI place so much value on human life, or life itself, when we humans often seem to place such little value on other non-human life?
Indeed. The WORST thing we could do is to teach it to be like us. :)

But it won't need to; after all, robots don't feel pain and, therefore, can't be tortured.
They do in Jabba's palace. :p
 