Where A.I. Gets Its Social Values
At the risk of offending some of you, I'm going to posit that A.I. will likely never be able to create certain values. My rationale is that many or all social and personal values are based on an appreciation for the self, which in turn rests on a scale of good and bad -- a spectrum of infinite degrees. And how do you program the infinite?
I'm trying to imagine myself as a robot trying to make genuine value judgments, and the whole thing seems too rooted in organic matter for me.
When did I ever imply that "robots" should have free will? When did I ever imply that robots should develop their own sense of morality?
It's literally at the root of the subject of the OP. Did you read the OP?
Yeah, I read it.
When you raise a child you provide that child with guidance. You teach the child what is right and what is wrong. You hope they will take that guidance into account in their adult lives.
You do the same with robots. You program them to know the difference between right and wrong. You don't give them free will to do whatever they want.
The difference between children and robots is that children are not constrained by your teachings. Robots are constrained by their programs.
Even today, in games, characters act within a set of parameters. "Good Guy" characters kill "Bad Guy" characters and vice versa. The programmer could widen the parameters so that any character can kill any other character (free will), but that wouldn't make for much of a game, would it?
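A minimal sketch of what such parameters might look like (the names `Faction` and `can_attack` are my own, purely for illustration):

```python
from enum import Enum

class Faction(Enum):
    GOOD = "good"
    BAD = "bad"

def can_attack(attacker: Faction, target: Faction) -> bool:
    # Characters may only attack members of the opposing faction.
    # Widening this rule to "return True" would let anyone kill
    # anyone -- closer to "free will", but a worse game.
    return attacker != target

print(can_attack(Faction.GOOD, Faction.BAD))   # True
print(can_attack(Faction.GOOD, Faction.GOOD))  # False
```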
Likewise, robots would not be given "free will" to do anything and everything.
You could program a robot to feed a dog twice a day and, if the dog didn't poop on the rug, give the dog a treat.
If you programmed the robot to give the dog food and treats at its own discretion (free will), you would end up with a very fat dog or a very starved dog. You could alleviate this by having the robot monitor the dog's weight and programming it to keep the dog within a certain weight range - not free will. The level of complexity would grow as the robot's ability to monitor the dog's overall health improved. But the programmed guideline for the robot would be to keep the dog healthy - again, not free will.
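A toy sketch of that last idea - a feedback loop that keeps the dog inside a target weight range instead of letting the robot feed "at its own discretion" (all names and numbers here are made up for illustration):

```python
TARGET_MIN_KG = 9.0   # hypothetical healthy range for this dog
TARGET_MAX_KG = 11.0

def plan_feeding(weight_kg: float) -> str:
    """Decide the day's ration from the dog's measured weight.

    The robot has no discretion here: the rule keeps the weight
    inside a programmed range - guidance, not free will.
    """
    if weight_kg < TARGET_MIN_KG:
        return "increase ration"   # underweight: feed more
    if weight_kg > TARGET_MAX_KG:
        return "reduce ration"     # overweight: feed less
    return "normal ration"         # in range: two meals, as programmed

for w in (8.5, 10.2, 12.1):
    print(w, "->", plan_feeding(w))
```

However sophisticated the health monitoring gets, the structure stays the same: the goal ("keep the dog healthy") is fixed by the programmer, and the robot only chooses among actions that serve it.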