Augustus
…
This is an example of science at work, though, isn’t it? Lack of replication highlights a problem that is now identified and can be addressed. This is fallibility mitigation at work, is it not?
Your approach treats science normatively, whereas my focus is on what its impacts are in the real world.
There is nothing wrong with fallibility mitigation, but we need to recognise the fallibility of fallibility mitigation itself and consider what that means for knowledge acquisition.
Being well informed requires the acquisition of correct knowledge and the avoidance of false information, particularly false information that you hold with confidence.
If studies replicate at below 50% in a given field, what does that say about the value of that field in producing knowledge? There is an argument that you are better off not reading anything at all, as you would learn more error than truth.
What would you say? You can't verify these findings yourself, so you trust that the process is reasonably good at separating truth from error. What is an "acceptable" error rate before a field has a problem regarding the knowledge value of its output?
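To make the "more error than truth" point concrete, here is a rough back-of-envelope sketch (the replication rates and the assumption that a reader simply believes everything they read are illustrative assumptions on my part, not data):

# Back-of-envelope: if a reader absorbs findings from a field and believes
# them all, how many true vs false beliefs do they end up with at a given
# replication rate? (Treating "replicates" as a rough proxy for "true".)
def beliefs_acquired(n_findings, replication_rate):
    true_beliefs = n_findings * replication_rate
    false_beliefs = n_findings * (1 - replication_rate)
    return true_beliefs, false_beliefs

for rate in (0.4, 0.5, 0.6):
    t, f = beliefs_acquired(100, rate)
    print(f"replication rate {rate:.0%}: ~{t:.0f} true, ~{f:.0f} false beliefs")

# Below a 50% replication rate, false beliefs outnumber true ones unless the
# reader heavily discounts what they read.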
Some of these errors may be picked up years or decades later, and some of the people who had learned them as 'knowledge' might correct their beliefs (although we know that correcting false beliefs is not a clean and easy process). Most people will retain significant amounts of false information, though, and this will affect aspects of their behaviour.
When people study psychology, are they being told "we don't really know if this study is correct" or "if you read a new study, the chances are it is wrong"? People who make decisions based on these findings aren't doing so while assuming the findings are, on the balance of probabilities, false (or at least quite likely to be false).
Regardless of what should normatively happen, in reality people put a high value on things deemed scientific and thus gain confidence in the accuracy of the information.
In this case the unreliable sciences become a significant source of error, and this is a non-trivial problem.
If they can’t be studied scientifically, then you are saying they can’t be studied at all. So yes, of course there are things that are currently beyond our ability to adequately address or even speculate on, and in such cases all we can do is simply say we do not know.
We can't study the demarcation problem scientifically, or create ethical codes scientifically. I still think we can study them and make meaningful statements regarding them.
I do not agree that we should limit the scope of when to apply the demarcation and fallibility mitigation tools found within a scientific framework when seeking knowledge, nor should we suspend rational skepticism in such endeavors.
My premise is that classic and traditional Philosophy and Theology do not incorporate within the discipline a framework, mechanisms, or a set of principles and standards that actually mitigate the inherent fallibility of the philosopher.
You express this as if people choose not to apply "fallibility mitigation tools found within a scientific framework" to certain areas, rather than that in certain areas we simply can't apply them.
Earlier you acknowledged that we couldn't say scientifically that utilitarianism is better than virtue ethics.
When philosophers of science discuss the demarcation problem, what "fallibility mitigation tools" do you think they are not applying that they should be applying?
When you define what you think science is, what "fallibility mitigation tools" do you think you apply that they do not? Ditto for your ethical values.
The issue I take with your position is your singular focus on how investigators operating within a scientific framework get things wrong. I agree, and ask how it could be otherwise, as it is human beings doing the investigating. What is at question is whether the mechanisms put in place to address this inherent fallibility work, and work in such a way as to allow actual progress to be made in addressing and answering the questions we put before ourselves.
Far from being my singular focus, it is not even my main focus. My main focus is on the consequences of these errors in the real world: how do we make decisions in a world we only partially understand and can only minimally control?
Your singular focus on errors in science does not acknowledge errors in “non-science” and whether such errors in “non-science” are effectively addressed. Is it your position that “non-science” disciplines are error-free, or that they are equally or more adept at addressing and mitigating human error in the knowledge acquisition process? If that is your position, then make the case.
That's not remotely my position, and I have explicitly stated the opposite multiple times.
My only complaint is an incomplete presentation of available evidence regarding “non-scientific intuitive experts”.
As I never made any point about "non-scientific intuitive experts" in a general sense, it is a fallacious comparison. Again, your assumptions about motivations may be clouding your judgement at times.
You don't really think that noting that a salesperson or an effective political leader may indeed have a degree of expertise beyond what can be quantified and replicated scientifically is akin to advocacy for soothsayers and shamans, do you?
Going back to the distinction between technical and practical knowledge from an earlier post, is your view that practical knowledge does not exist and only technical knowledge does?
Understanding the factors that may instantly wipe out long-term stability is all part and parcel of addressing this complex and difficult problem. The difficulty should not be an excuse to simply throw up our hands and not even try. Perhaps your argument is that the very act of trying will only result in things getting worse, and if so, I would disagree.
Who said anything about not trying? It is just that limiting our attempts at understanding to whatever inaccurate and highly incomplete scientific studies can say at this time will not give us the best chance at success.
The best (or least bad) solutions will contain significant amounts of subjective insights that may come from science, philosophy, history, literature, tradition, personal experience or whatever else is at hand.
We won't have the luxury of being able to "objectively" judge between competing claims either; we just have to hope that whoever is making the decisions happens to be an insightful observer of human society, as we can't simply "trust the science".
Overestimation of the accuracy of data and conclusions is simply a manifestation of human error and fallibility. If you want to give the specific error of overestimating accuracy the label ‘scientism’, then be my guest; however, I do not see it as useful.
Yes, and scientism is a human failing. Unless we retreat to normative abstractions and how things should work or might work in some far off future, all we can do is look at the world we live in.
In the world we actually live in, people overestimate the accuracy of scientific findings and overestimate science's potential to solve certain problems we face. This causes problems, and we should aim to mitigate them.