Alien826
No religious beliefs
So I'm glad that you checked it with another AI and got similar results. That seems to suggest either that something is missing from the AI, or that the companies may have "manipulated" them to satisfy the user at almost any cost, unless doing so violates whatever morality the company has. Because if the AI admitted outright that it can't do something, people might think less of it and go to a competitor instead.
I don't know if that's the case, just that something seems dodgy, and we know these companies want to make money; they're not doing this just for fun.
I don't think I'd want my machines designed by anyone using the profit motive as a guideline. Who would design them instead is another question.
Part of my thinking is that I'm comparing a world run by AI not to perfection, but to one run by current democracies, theocracies, and capitalism. It's a bit like self-driving cars. Whenever something goes wrong and a self-driving car gets into a collision, the reaction is immediately "they're no good!" But what if self-driving cars were universal and road deaths and injuries were reduced to 50% of what we have now? That might not sound impressive, but we'd have just saved half the road casualties. I'm not saying that AI running the world would definitely improve things that much, I've really no idea, but I'm pretty sure self-driving cars could achieve at least 50%.
I would suggest some kind of iterative process, based not on moral rules but on desirable outcomes, like reduced homelessness. Let the AI suggest something, test it on a limited scale, feed the results back to the AI, and let it work on an improvement. Repeat, gradually expanding (or eliminating) as we go. The practical problems are enormous, of course. People would have to give up their power.