I’m happy to view science as a subset of philosophy. It is more natural philosophy 2.0 (give or take), though, rather than philosophy 2.0.
Much of the rest of philosophy is still as it was, as it cannot be turned into science in any recognisable sense of the term (logic, ontology, ethics, epistemology, etc.)
The last time I checked, none of these disciplines (logic, ontology, ethics, epistemology, etc.) can be done without human beings.
You have given me a small concession and seem to agree that there was a change in the way natural philosophy was conducted. A change that evolved throughout the scientific revolution to what it has become now. You don’t seem to want to concede what that change consisted of.
My thesis is that the change is essentially the acknowledgement of human fallibility and the need to mitigate that fallibility. That’s it. While this change was embraced wholeheartedly by natural philosophy after its evolution, this revolution and subsequent evolution did not spread to the remainder of philosophy. Why?
I’m more concerned with the accuracy, but it’s just that you don’t seem to have any issue with accepting that many areas of science are far less reliable than others. It’s a pretty straightforward argument after all.
*All* the factors that impact accuracy for study and investigation under a scientific framework are the exact same factors that will apply to *anyone* looking to study and investigate those questions or problems. The only difference will be that within a scientific framework, active effort will be made to mitigate human error in the investigative process.
I don’t have an issue with these limitations because there is no way to magically wave them away. What is critical is that these limitations are openly acknowledged and appropriate degrees of confidence are assigned to the work product that reflect that reduced accuracy. A scientific framework provides that.
Scope and accuracy are related though. The further away from the traditional, hard sciences we apply scientific methods the less reliable they become.
Scope and accuracy are not entirely independent variables.
This reflects a misconception on your part. Means and methods *vary* and are specifically designed to meet the requirements necessary to address the problem at hand in a manner that also mitigates human fallibility in addressing that specific problem to the best of our capabilities. The means and methods necessary to investigate the Krebs cycle involved in cellular respiration are not the same means and methods needed to study the behavior of macaques in their natural habitat, nor those necessary to investigate a recently discovered archaeological site.
Lack of accuracy is solely a reflection of the difficulties associated with the specific question. The larger the system and the greater the number of variables involved, the more difficult it will be to create accurate and predictive models. Limitations on our ability to gain perspective also impact accuracy. For example, we currently are unable to fly to distant galaxies and observe them as we can observe and investigate Earth, and given the distances involved, accuracy suffers accordingly. And last, as I have said earlier, human behavior adds another order of magnitude to the difficulty because human behavior is not fixed but dynamic. The properties and characteristics of atomic elements or that of gravity do not change. They are fixed. Which means your “hard” sciences really should be referred to as “easy” sciences as compared to studying human behavior, yes?
As to scope, except for personal preferences that have no material effect on others or result in self-harm, there is no limitation on when to apply mechanisms necessary to mitigate human error and fallibility whenever human beings are involved. Wouldn’t you agree?
Overall though, I focus on how science exists as a human activity and how it is used and misused in the real world. Too many people who object to the term scientism tend to talk about science in normative terms relating to how things should work.
How we use what we learn within a scientific framework is a political issue. It is a matter of negotiating subjective preference. Is there a reason to make these subjective political decisions outside our current fallibility-mitigated scientific understanding of the world and ourselves, or to purposefully disregard it in our subjective decision making? If so, why?
I’d say here you misunderstand his arguments, in the same way you seem to misunderstand what certain people are arguing in this thread.
1, 2 and 5 may involve charlatanism, but are really just about the prestige of science in the modern world and a tendency to want to make things more “scientific”. This is often done with very serious, scholarly and well-meaning intentions.
You are worried about people presenting opinion and subjectivity as objective fact, but science is a far greater source of this than formal philosophy these days.
I’d say that, in a not insignificant number of cases, the social sciences are impacted by the personal beliefs of the researchers.
Without corroborated studies I can’t speak to how significant the problem of personal belief being injected into social sciences is, but I certainly am aware of it anecdotally. I am happy to concede that the problem exists, yet voila, we *see* and *acknowledge* the problem, and now steps can and have been taken to mitigate that problem, correct? That is how the process works. That is the whole point of working within a scientific framework. For example, psychology has come a long way since Sigmund Freud, in my opinion. Is it perfect or error free? No, of course not, as no human endeavor is. But as with all knowledge pursuits conducted within a scientific framework of error mitigation, we see continuous incremental improvement.
I will not concede that misrepresenting subjectivity is more prevalent in science than philosophy nor that philosophy even addresses the issue as is the case with science. I would be interested to see if a reliable study was done on the subject. Be that as it may, the first issue would be that much of what is left to philosophy these days falls to the study of purely analytic abstract systems with no requirement to remain synthetic to the real world, or those fields that consist primarily of subjective preference such as Aesthetics, and Morals/Ethics which are rampant with personal belief yet are not acknowledged to be so. Theology represents additional problems, but let’s not go there.
What is left to philosophy? The philosophy of metaphysics? How is that not rife with personal beliefs outside of a scientific framework? Philosophy of the mind and consciousness? Ditto.
Case in point.
Many people think a scientific approach to ethics would be an improvement (and usually think it would support their ethical values).
Science can play a role in identifying people’s preferences and the best way to achieve these, but this relates as much to policy as it does to ethics.
Science cannot tell you which of many competing values are superior though. For example, should we be utilitarians? If we are, where do we draw the line between greater good and individual rights?
So I can’t in any way see this as representing “scientific ethics”, and by adding the label to give greater credibility, it muddies the water between subjective preference and objective fact. Many would accept moral philosophy has many subjective variables after all.
We can see the problems of 'scientific' ethics in the past, where things like social Darwinism and Marxism had ethical principles believed to be scientific and thus objective truths.
“Scientific” ethics can be a way to turbocharge error.
These would also be examples of scientism, where scientific principles were applied beyond their effective boundaries. We would call these pseudoscience nowadays, but that is not how they were always seen at the time (although scientism as a pejorative was basically invented to critique Marxist pretensions of objectivity and rigour when creating their social and historical theories). Social Darwinism and eugenics were widely accepted within the respectable scientific community, though.
Unsurprisingly, scientific ethics are not inherently humanistic or positive, they simply reflect the values of those who are creating them with a veneer of scientistic objectivity.
The problems you describe are nothing more than blatant expressions of human fallibility. Whether it is human beings attempting to bolster arguments advocating for their subjective preference by inappropriately portraying those subjective preferences as objectively scientific conclusions, or they bolster their arguments by claiming they comport with “objective” religious authority, or they bolster their arguments by simply assuming a set of universal axiomatic principles (known a priori or through intuition) that provide the “logical” foundation for them to be seen as objectively true, in each case it is fallible human beings being fallible.
What discipline, what philosophical framework actively works to mitigate this very problem and does so successfully?
The -ism matters because it is something humans do that needs a label so we can try to avoid it.
Because of the status of science in the modern world, and the fact that labelling things 'scientific' functions in a manner similar to labelling them 'objectively true' we need to understand that people often overestimate the accuracy of scientific knowledge in many fields.
And we need to accept the limited utility of science in fields where it is not reliable, and that in complex domains (let’s say economics), trying to force reality into a form that can be quantified “scientifically” often misses out on or distorts reality in a manner that can render the information actively harmful (there are plenty of examples of “scientific” theories leading to errors and even financial crises, often because they make people overconfident in their accuracy).
This has essentially been addressed above, but I want to address the last part below separately.
Sometimes non-scientific insights, expertise, experiences, heuristics and so forth are the best we have, but when people think "more scientific = better" we increase the chances of error.
I find this astonishing. Are there never any negative consequences to “non-scientific insights, expertise, experiences, and heuristics”? Is it your argument, perhaps, that we should go back to relying on the expertise and advice of the village shaman, soothsayers, and augurs?
What is the metric that informs us that “non-scientific insights, etc” are “the best we have”?
What I am seeing here is a means by which one subjective preference can be justified as correct, appropriate, or superior to any other subjective preference by assigning a subjective mantle of authority to the “non-scientific source” corroborating the subjective preference. In other words, a framework of unmitigated human fallibility.