I should have underlined and bolded the word foundation. I believe he's implying that science CAN give us a foundation.
Now I've absolutely no idea what you were getting at in your previous post
But we both agree he thinks science can produce a foundation for values so it doesn't matter. I also think he believes that science will give a foundation for morals that are within a kind of moral 'Overton window' of his.
This is why I'm sceptical of "scientific moralities": many have been proclaimed, and they all magically seem to more or less align with the values the person proposing them already holds.
I think it's your job to explain why value pluralism is not as flexible as you think I think it is. I believe it is far more flexible than I think you think it is.
(see previous post I tagged you in for why it doesn't tend towards liberalism)
Can we agree that we're talking about "most" people, not all? For example, can we agree that we don't need to get psychopathic murderers to buy into a universal morality?
As for the rest of the above, you're making big claims concerning what's not possible. Do you agree that generally speaking it's immensely difficult to prove a negative or to prove a thing does not or cannot exist?
Yes, that's fine, it's what I meant anyway. Can we also agree when I say something is impossible, this allows for the sliver of philosophical doubt always present?
I can't prove Jesus won't return tomorrow and fix all our problems, but all available evidence shows this to be highly improbable and not really worth entertaining as a serious prospect.
I find both the 2nd coming and the idea we, unlike any other animal, can somehow transcend our nature through sheer force of will to be absurd, and putting faith in either happening at the expense of actual, practical solutions to be harmful folly.
I've probably posted this 100 times, but it sums it up better than I could.
"Bertie [Bertrand Russell] sustained simultaneously a pair of opinions ludicrously incompatible. He held that human affairs are carried on in a most irrational fashion, but that the remedy was quite simple and easy, since all we had to do was carry them on rationally."
John Maynard Keynes
You can't save an irrational animal based on a solution that expects it to act rationally.
Why would doctors and societies all over the world decide that broken legs are worth fixing? I would say that there are universal values driving that decision, no?
Because humans care about those they care about, and people can earn resources for performing tasks.
Humans caring about those they care about also drives terrorism, nepotism, bigotry, etc. too.
The same instincts drive very different behaviours.
I'm not sure I agree that a growing understanding of morality would have no intrinsic direction. I think supporting the well-being of conscious creatures is a likely example of such an intrinsic direction.
Knowledge and tech progress. We won't go back, en masse, to believing in geocentrism. Knowledge and tech are not intrinsically humanising though. Medicine cures people, and people use the same tech to make biological weapons.
Any moral "progress" can be lost instantly though. Look at Ukraine, or Iraq for what happens when there is a breakdown in society.
If there were a global environmental and economic collapse related to climate change, do you really think powerful countries would act for the good of humanity, rather than for the good of themselves?
Will Americans and Chinese go hungry just to feed some folk in Chad and Somalia? Or will they use their militaries to get what they need if it comes down to that?
What do you think?
How does science's current take on the concept of "humanity" relate to universal morality? And if "humanity" isn't a label you want us to use, then what term should we use when discussing the ideas and behaviors of the collection of humans on the planet?
Like all language, it depends on the context. Words only have meaning in context.
If someone says we need to ban oil "for the good of humanity", that is nonsense; there is no humanity in that sense. It's like saying "god wills it".
If you say we shouldn't expect humanity to become rational any time soon, that's fine, as it just means the aggregate of humans based on a characteristic shared by all.
It sounds to me as though you're admitting that we humans have selfish, tribal and irrational natures universally, correct? If those characteristics are universal, why couldn't it be that we also share morals universally? Most of us have functioning mirror neurons. Most of us choose to take care of our offspring. Most of us agree that murder is wrong. It strikes me that we have a lot in common, despite our shortcomings.
I agree we aren’t blank slates and share many things in common.
Some of these include:
Adaptation to environment and cultural conditioning
Irrationality
A view of the world that tends towards confirmation of what we already believe
An aversion to emotionally displeasing facts
In-group bias
etc.
Collectively (as in not every individual may display these at any given time, but they are common in any group) these include:
A propensity to violence
Hatred and prejudice
Jingoism
etc.
Some of the things we share universally are what prevent "humanity" from ever being united to any real degree. And I would say all of the evidence available supports this.
I know you said we only need to "unite enough", but given that religions that unite up to 20% of the world's population (to some degree), across ethnic, national, linguistic and cultural boundaries are in your view "divisive", what do you mean by uniting just enough?
Of course we can, and do, cooperate transactionally based on common interests, but religions don't really prevent this any more than alternative value systems do. Even the crusades/crusader states were dependent on transactional relationships with Muslims.
What is your vision of a realistic global order that is more united? What values will underpin it?
(I believe we can create a world that is a bit less antagonistic, but not one that is substantially more united)