
Who here believes in "Scientism"?

MikeF

Well-Known Member
Premium Member
I like the definition of scientism, but not versions that include the word "excessive": by circular reasoning, that word would imply that scientism is automatically false. I would, however, amend the definition to say that science is the ONLY way. As I said before, you and I already agree that science is the best (in my view, that means the most reliable).

I don't take issue with science being considered the best way to render truth about the world and reality. So, for sake of a substantive argument, I say we focus on the "ONLY" part of the claim. I fully grant you that it's the best and most reliable source of knowledge we have at our disposal.

I'm fine with it. Our working definition of scientism then is: science (broadly) is the only way to render truth about the world and reality.
We don't need to bring up the role of philosophy in the formation of the sciences. Sure, it happened that way. But that was historical accident as much as it was anything. "Philosophy is a systematic study of general and fundamental questions concerning topics like existence, reason, knowledge, value, mind, and language. It is a rational and critical inquiry that reflects on its own methods and assumptions."

That seems like a good definition that we can both agree to for purposes of our conversation.



Agreed.

Good here.


I like this definition, but I have some issues with it. There are different kinds of sciences. Some are heavily empirical, and others are not. If we were to accept that definition wholesale, then you'd pretty much win the debate. What many propose when they advance scientism is the ability of the natural sciences to explain the world. Even if the social sciences do a good job of explaining a great many things, a proponent of scientism would say something like "all knowledge is reducible to the kind of statements that the physical sciences make."

I'm not trying to be unfair here, nor do I want to adjust definitions to give my position an unfair advantage. But there are several philosophical assumptions in that definition that I would want to clarify before agreeing with it. For example, economics understands the world in an entirely different way than physics does, and that should be accounted for. It IS relevant to our discussion to examine those differences. I DO agree with most of what that definition says, but I would like to iron out some kinks.

Ok, I’m up for some ironing.

One of the things I think needs to be ironed out is this notion of "natural sciences": why the category was created, what 'natural' really means in this context, what then falls outside this concept of 'natural sciences', and why.

These are the only two definitions that I say are insufficient for the purposes of our debate. They are fine as Wikipedia definitions, but they are unclear for our specific purposes here. There are several competing theories of knowledge, each with its own strengths and weaknesses.

And truth? Nah. No Wikipedia definition is going to cut the mustard, nor does it need to. We just need to deal with knowledge (and whether things like science and philosophy can provide us with such things). I say we can just use our intuitive concepts of truth to work our way through the debate. "Truth describes something that IS the case. Falsities or errors describe things that are NOT the case." That's the definition I propose we work with for purposes of our debate.

Feel free to amend and/or criticize any of my revised definitions. I think we will arrive at agreement on these definitions sooner rather than later.

Ok, we need some work on deciding what we mean by ‘knowledge’. I accept your proposal regarding ‘truth’.

I would prefer to do this in a separate thread. Or, at least, I think it is too massive of a question... such that it would constantly pull our attention away from the subject we are discussing now: scientism. I AM DOWN to discuss it. But I don't think it will help us clarify any possible problems with scientism... which is what we are trying to do here.

I'm working on your other post now. Gimme some time. But let's keep working toward agreement on definitions, if that's still an issue, until I get that response completed.
No problem. :)
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
It is not possible for Metaphysical Naturalism to research, study, or falsify the existence or nature of any Gods, or of the numerous possible beings such as angels, demons, or fairies.

Metaphysical naturalism can study literally anything supported by evidence, so this seems like quite the condemnation of gods, angels, demons and fairies.
 

vulcanlogician

Well-Known Member
Ok, I’m up for some ironing.

One of the things I think needs to be ironed out is this notion of "natural sciences": why the category was created, what 'natural' really means in this context, what then falls outside this concept of 'natural sciences', and why.

So the separation I would want to make is between the "hard" sciences, physics and the like, which require direct empirical evidence to support their claims.

And the softer sciences, social sciences, things like economics and sociology and the like... which rely heavily on empirical fact to support their claims, but can also rely heavily on models and sources of information that are not purely empirical. This means that such-and-such a model can be rejected or accepted for various reasons, and these reasons aren't based on empirical data.

Even further down the line, we have the humanities (history, philosophy, literature...). The methods for advancing a thesis in one of these fields usually focus on argumentation. Evidence plays a role, sure. But it is not the chief role of these academic disciplines to gather empirical evidence.

The crux of my argument lies here. I think disciplines like history and literature CAN furnish us with genuine and valuable knowledge. They ARE systematic disciplines, but I'd hardly call them sciences.

If you want to call them sciences, we can all get our hats and go home. Because if we are going to call those disciplines sciences, then I TOO am a proponent of scientism, and we pretty much agree on everything.

But if the humanities are NOT sciences in your view, then I stand by my position that we can gain knowledge from something other than the sciences.

Do you have any issues so far with the way I've "ironed out" this particular definition?

Ok, we need some work on deciding what we mean by ‘knowledge’. I accept your proposal regarding ‘truth’.

We could go with Plato's classic definition of knowledge: "justified true belief." Sure, plenty of philosophers have poked a few holes in it over the centuries, but it remains afloat, and for most discussions it gets the job done.

It even works well with science. Let's say a scientist is calculating the mass of one of Jupiter's moons. He makes observations which lead him to reach the justifiable conclusion that it has mass x (justified). The mass of the moon actually IS, roughly, what he calculates it to be (true). And the scientist adds this fact, the moon's mass, to the things he believes about the world (belief).

Anyhoo, I'm sure we can work out a decent definition of knowledge to use. Just let me know.

Now I'm gonna tackle the other long post you put down before.
 

vulcanlogician

Well-Known Member
I’m neither a historian nor an archeologist, so what I am about to say should be taken in that light, but to my mind, an archeologist is going to work to determine the meaning and purpose behind the artifacts that they uncover.

Of course. And one thing that makes them well-equipped to do that is they read a LOT of history. They don't just uncover an urn and figure out everything about it from scratch.

I see the archeologist doing residue testing on pot shards and while many shards may come up negative, two sites might have a shard showing wine, or there may be shards of the same type with 10 different residues showing the pot to be multipurpose. In any case, the response to a negative result would simply be, “the result was negative for wine”. It didn’t mean it never had wine, nor that we will never know if wine was used in these types of pots. It is just known that that pot or shard did not have measurable wine residue when tested.

So where does the historian fit in? I see the archeologist building the most detailed picture possible of each site and placing it within some time span. I see the historian speaking more to historical events, human interactions, and their consequences. The archeological detail is used by the historian to set the scene and establish the conditions in which historical events play out over some time period.

So this question gets into the demarcation issues I brought up in my previous post about how to define the sciences. All the archeologist will be able to say is "I found an urn, and it had no wine residue." But a historian will look at writings from the contemporary era and learn that the wine-ceremony urns had an inscription of the wine god on them. One historian may publish a paper arguing that the inscription was of the wine god, citing x, y, and z reasons based on ancient documents or nearby hieroglyphs. Another historian may publish a paper rejecting this thesis. This historian may cite documents mentioning that urns were commonly inscribed with images denoting who owned them. She would likewise present arguments x, y, and z to support her thesis. You can see how we've departed from science here a bit, can't you? And empiricism only plays a minor role in the formulation of these theories. If the archaeologist discovers wine residue, we can all go home. But history is a far more murky affair. We have to rely on our wits, logical deduction, and what little information we have. Then we form historical theories. THEN those theories compete with one another.

If we agree on one thing, it HAS to be that there is a huge difference in the methodologies used by physicists and historians.

Models can also play a huge role in our drawing conclusions about history. We've all heard the narrative of the assassination of Archduke Ferdinand being the cause of WW1. But an "economic determinist" would cite many political and economic factors which would have eventually led to a war-- whether our darling Archduke managed to evade being killed or not. Historians use these models to great effect to make claims about historical events. But again, how empirical are we being here? We're a good ways from physics and its methodologies at this point, yet I would argue, we are still gaining knowledge about the subject matter.

That's all for now. Let me know if we need to iron out definitions or clarify anything else before we proceed further into the debate.

(Which is a rather fun and thought-provoking debate I must say.)
 
To start, I don't think in terms of "natural sciences". That is an anachronistic concept in my view. The purview of science is understanding Reality, with Homo sapiens being a part of Reality.

I do not see one sub-specialty of science as more or less reliable than any other.

You really don't see any difference between studying things that exist independently of human awareness of them, and things that only exist because humans have created them as categories, ideas, and concepts? Fields where analysis cannot be abstracted from significant subjectivity and value preference, and where these things often cannot be studied directly but only via subjective human verbal responses to questions about them?

You don't think that the far better track record in the former and these obvious structural differences should make us consider the former to be more reliable?

I have to disagree with you there.

Touching on the quoted 50% failure of repeatability, I do not see that as a problem, per se. That is the whole dang point of science to begin with. It is the repeatability that provides confidence in the findings. When a study can't be reproduced, that is what signals to us there may be problems with the original finding. And there will be problems, because science is hard, the problems can be tough to get a handle on, and it is fallible human beings engaged in this activity. We in society should not be making life choices based on studies conducted at the cutting edge of discovery. We should probably confine our confidence to things that fall within the realm of undergrad studies in a particular field.

If the repeatability is what provides confidence, what lesson should we learn from a consistent track record of most studies not repeating?

Also, the research is often being carried out by those teaching the undergraduates and is being taught in undergraduate classes. This is not hyper-advanced theoretical physics on the cutting edge of what is humanly possible.

It's generic social psychology, not pushing any boundaries, using the "tried and trusted" methods of the field.

Some of the studies that fail to be replicated are among the "classics" of the discipline, and it's not like you can say "well, it's 5 years old now, so we can trust that it is accurate and that lots of people have repeated the experiment and found it to be accurate." Other studies are only pertinent because they are current, so we have to use them (or not) while they are relatively fresh off the press.

At what point can we start making choices based on material published in this discipline? How should we identify scientific experts in this field, given these scientific experts are the ones creating these flawed studies, conducting the peer-review, promoting them as being meaningful findings and teaching them to those starting out in the field?

I'd say the credentials of an industrial chemist would be pretty reliable regarding the accuracy of their views on industrial chemistry. On the other hand, would you be confident that a highly credentialed social psychologist would be a good source of advice?

In the case of the former, it is near impossible that someone without a demonstrable background in chemistry could be a better source of advice; in the case of the latter, I'm not even sure their credentials can tell you much at all about the likelihood that they are a good source of advice.

The current assumption is that they would be an excellent source of advice as they are a trusted scientist in a rigorous scientific field.

Imo, they should be viewed more akin to the journalist who works for the newspaper that prints 50% real news and 50% fake news. Unless you know which is which, there is little value in their insights.
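The journalist analogy can be made precise with a toy Bayesian calculation. This is my own illustration, not anything from the thread: under the simplifying assumption that a source asserts true claims with some fixed reliability r (and false ones the rest of the time), Bayes' rule shows that a 50%-reliable source carries literally zero evidential weight.

```python
def posterior_true(prior: float, reliability: float) -> float:
    """P(claim is true | source asserts it), by Bayes' rule.

    Assumes the source asserts a true claim with probability
    `reliability` and a false one with probability 1 - reliability.
    """
    num = reliability * prior
    den = reliability * prior + (1 - reliability) * (1 - prior)
    return num / den

# Starting from a neutral prior of 0.5, see how much an assertion
# from the source should move our confidence in the claim:
for r in (0.5, 0.7, 0.9, 0.99):
    print(f"reliability {r:.2f} -> posterior {posterior_true(0.5, r):.3f}")

# At reliability 0.5 the posterior equals the prior: the report
# tells you nothing at all. Confidence only grows as r does.
```

With a neutral prior, the posterior simply equals the reliability, which is why a 50/50 source leaves you exactly where you started, while a 90%-reliable one justifies 90% confidence.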

What should be of more concern is if repeat studies by researchers other than the original are never done. The issue here may be related to funding and tight budgets, as one possible reason. As the federal government is a major source of research funding, I think they have some responsibility in setting study requirements, including follow-up studies. Solving these issues, though, is a different topic.

At present, the repeat studies are mostly not being done, at least not officially, in a way that results in published refutations and the retraction of the original paper.

Talking about how things could be done differently in the future does not solve the problem as it exists in the present, and does not act as a bulwark against scientism in the present.

So while I agree there is insufficient skepticism of cutting-edge research (among the public), I don't know what the solution to that would be. By no means, however, would that solution be to not engage in scientific discovery. My complaint regarding your definition and use of the term 'scientism' mainly revolves around this notion of limiting the scope of science. If, as is my contention, a scientific approach, or working within a scientific framework, is simply approaching questions and problems from a position of rational skepticism, with every effort made to maintain objectivity and mitigate human fallibility in the investigative process, why would we not want to approach our pursuit of knowledge in this way regardless of the subject in which knowledge is sought?

You can happily rest in the fact that, as previously noted, I don't believe in "limiting" the scope of the sciences ;)

Presenting it like this would be a misunderstanding of my view.

I support acknowledging that certain questions cannot be answered scientifically in the normal usage of the term (as you yourself have accepted), that studies in certain areas are so unreliable that they don't deserve the privileged status we afford to scientific knowledge in other fields, and that this information may not be particularly useful for real-world decision making.

The idea that the insufficient scepticism only exists in the public and that many practitioners in these fields do not overestimate the value and rigour of their discipline seems misguided to me too. Even if they do accept the weaknesses of their discipline, how many do you think doubt their own personal status as the 'good ones' able to overcome the limitations of their peers? Most people think they are smarter than the average person after all.

Historians, philosophers, journalists, etc. apply rational scepticism to their fields, but I wouldn't say historians, philosophers and journalists were scientists simply because they were rational sceptics who tried to mitigate their errors.

This goes back to the Demarcation Problem, but saying we cannot demarcate science from "not science" scientifically is no more an attempt to "limit" science than noting that you can't drive a car to the moon would be an attempt to "limit" your ability to drive a car (even if someone else started calling a space shuttle a "car" and saying flying was a form of "driving").

So can we finally put to bed the idea I propose "limiting" the sciences? :)

Scientism is simply not applying the same standards of rational scepticism to the scope and methods of science as one might apply to other questions, and underestimating methodological, human, structural and institutional fallibility in certain scientific fields.

(disregard of other fields can be a consequence of this as it overstates the ability to get a “scientific” answer)

This is why I find it problematic when people treat science normatively, based on how it is supposed to work, instead of based on how it functions in the real world. It seems a way to avoid thinking sceptically about what actually happens (and, ironically, about the limitations of human reason).
 
Let’s set aside what is being discussed for a moment and speak of cases generally.

Is it your position that a set of factual statements *cannot* be presented in an emotional way? Can a fact or set of facts elicit an emotional response?

Is it your position that one cannot take a limited subset of facts from a larger set of interrelated facts in order to engender an emotional response that may not occur when viewing all the relevant facts as a whole?

Can one attach emotionally laden adjectives to facts to facilitate a desired emotional response?

If the answer to these questions is yes, then a set of presented facts can and should be evaluated for an appeal to emotion, or any other type of bias or potential to introduce bias.

Using the methods of classical rhetoric, just about any example could appeal to ethos (credibility), pathos (emotion) or logos (logic).

Understanding which appeals someone is making often requires a subjective interpretation of their purpose, and particularly audience.

A lot would depend on the audience. When I make long-ish posts on page 50 of a thread, I assume that basically no one is reading them. There probably aren't hordes of people waiting with bated breath for Augustus' next post on scientism, and I doubt the crime rate is going to drop in the 10 minutes after each post as people crowd round their screens and forget all other aspects of their lives. I mean, other than you and maybe @vulcanlogician, probably no one reads them, maybe 1-2 more if I am lucky.

As such, I don't think an emotional appeal would be particularly relevant for my audience. If, hypothetically, I were writing an emotional appeal for a mass audience, I'd make far more of an effort to conjure up emotive images with visualisable examples rather than a fairly abstract reference to the idea that racialism (as in the hierarchical structuring of races) and eugenics were common "scientific" beliefs among the educated classes, including progressives.

It would be pretty easy to add a lot of emotionally laden adjectives, but I simply described it as a catastrophic error, which I would say is reasonable and not hyperbolic given the historical reality.

If I didn't think an emotive appeal would work, trying it would damage both my ethos and the logos of my arguments.

When trying to interpret the motives of others, we all fall prey to our own biases as our brain is often simply trying to find reasons to reject the opposing view. If the purpose is honest discussion, assuming less than ideal motivations behind someone else's actions is problematic as it's hard to identify if we are being fair, or we are the ones being biased.

Now, I'm certainly not going to pretend I'm perfect in this regard, but unless it is absolutely unambiguous and we can't address the issue without mentioning it, it is usually better to give a post the most generous reading rather than the one we might be tempted to assume.
 
My preference is to have the abstractions we use not conflict with or contradict reality. Beyond that, it becomes a matter of what is deemed acceptably useful (i.e., it would not be acceptable if it were great for a few but terrible for most, that kind of thing).

As to your point that an institution can make it irrational not to accept some concept or belief, my counter-point was that I agree, but it is not limited to your example in science, but elsewhere, including religion which might be said to have an even stronger effect in this regard than science does. The difference between science and other institutions is that science permits challenges to these concepts as a principle and pathways and mechanisms to make such challenges. This is not universally true outside of a scientific framework. We human beings go astray at times (often?), whatever the sphere of human endeavor. Science has institutional mechanisms to help steer us back on track, so to speak.

Again, I've never made a claim that other areas are exempt from this.

Many Christians were opposed to eugenics for the same reason they are now opposed to abortion. From a utilitarian perspective, they were probably 'right' in the first case and 'wrong' in the latter.

They were equally "irrational" in the foundations for their beliefs in both cases.

The more "rational sceptical" argument in each case doesn't necessarily lead to the best outcome.

This is why we need to apply sufficient scepticism to the outputs of unreliable sciences.


My attitude is that, as you acknowledge above, human fallibility manifests itself in *every* field, and consequently, *every* field should have mechanisms by which to identify and mitigate that fallibility and the errors it produces. We can and should make distinctions between fields and identify those that 1) do exactly this, and 2) do it well.

There is nothing I disagree with here, but it is just a statement of what is normatively desirable.

If we look at it positively, many sciences are not doing it well, and we should highlight these and make sufficient allowances for this when considering their outputs.

Before, I gave you the example of a newspaper that was 50% fact and 50% fake news. Roughly what would those percentages have to improve to before you would recommend people use its outputs to make consequential decisions?

We must *always* maintain a skeptical eye universally towards any explanations of how, or why, or purpose, or to prescriptions and proscriptions. How, then, is this best accomplished? To date, it is my view that it is best accomplished within a scientific framework.

Yes, but scientism is not a critique of normative science, but of individual attitudes toward science's ability to produce reliable knowledge in all fields.

I agree with most of what you say is normatively desirable, but I don't think this describes or addresses the reality we actually have today.
 

vulcanlogician

Well-Known Member
I mean other than you and maybe @vulcanlogician, probably no one reads them, maybe 1-2 more if I am lucky.

As Spinoza once said, "All things excellent are as difficult as they are rare."

So never use popularity as a metric for value. You are LUCKY if only one or two people take interest in what you are saying. Otherwise, you may as well be a makeup TikToker who gets a million views a day. Lemme tell you, you are better than that.
 

MikeF

Well-Known Member
Premium Member
You really don't see any difference between studying things that exist independently of human awareness of them, and things that only exist because humans have created them as categories, ideas, and concepts? Fields where analysis cannot be abstracted from significant subjectivity and value preference, and where these things often cannot be studied directly but only via subjective human verbal responses to questions about them?

You don't think that the far better track record in the former and these obvious structural differences should make us consider the former to be more reliable?

I have to disagree with you there.

Noted.

I see astrophysics, cell biology, primate behavior, climatology, etc. as all different from one another. Any issues regarding reliability I see as problem-specific. Some things are harder to get a handle on than others, and there is no getting around that; hence knowledge acquired within a scientific framework is held with varying degrees of confidence. Does subjective human behavior, with its abstract systems, create a significant challenge? Of course it does. But tackling problems in any field, if we're looking for objective answers, requires investigating them within a framework that works to maintain objectivity in the process and mitigate fallibility to the best of our abilities. Everything approached in such a way has been approached scientifically, in my view.

I think there is another issue at stake in trying to isolate “social sciences” from the rest of science in that it loses sight of the fact that we are studying the behavior of a biological organism. We have yet to fully understand all there is to know regarding the functioning of our very complex central nervous system. Understanding how and why we behave the way we do should be seen as requiring an integrative approach, seeing all the related subspecialties as working on their piece of the same integrated problem, human behavior. I would also say, we should not see Homo sapiens as entirely unique and separate from the rest of the animal kingdom either. We should (and do) also study how human behavior relates and compares to that of other species as well.

You can happily rest in the fact that, as previously noted, I don't believe in "limiting" the scope of the sciences ;)

So can we finally put to bed the idea I propose "limiting" the sciences? :)

Well, then it seems our discussion is at an end as your definition of ‘scientism’ has fallen apart.

All that science cannot speak objectively about cannot be spoken objectively about by any other claimed avenue for finding objective truth. This problem then would not be uniquely related to science and thus perhaps we need to coin a separate -ism defined as a belief that one’s subjective preference is objective. It really doesn’t matter how the belief is rationalized.

As to “excessive belief in accuracy”: besides the fact that there are qualitative as well as quantitative aspects to what constitutes knowledge, and that some of our understanding can only be in terms of probabilities, what your complaint boils down to is the holding of scientific work product, from any sub-specialty, with an unjustified level of confidence. When this occurs, it is simply an expression of human error, and as with all types of errors injected by fallible human investigators or outside observers, it requires mitigation to the best of our abilities. Error is expected, as we can’t take human beings out of the equation. Pointing out all the ways mistakes are made in scientific inquiry misses the whole point of conducting the investigation within a scientific framework to begin with. If, despite human fallibility, this framework manages to provide a level of objectivity not available otherwise, and mitigates fallibility to an extent not possible with any other proposed avenue to knowledge, then it is the best available avenue we have for rendering truth.
 

MikeF

Well-Known Member
Premium Member
Using the methods of classical rhetoric, just about any example could appeal to ethos (credibility), pathos (emotion) or logos (logic).

Understanding which appeals someone is making often requires a subjective interpretation of their purpose, and particularly audience.

A lot would depend on the audience. When I make long-ish posts on page 50 of a thread, I assume that basically no one is reading them. There probably aren't hordes of people waiting with bated breath for Augustus' next post on scientism, and I doubt the crime rate is going to drop in the 10 minutes after each post as people crowd round their screens and forget all other aspects of their lives. I mean, other than you and maybe @vulcanlogician, probably no one reads them, maybe 1-2 more if I am lucky.

As such, I don't think an emotional appeal would be particularly relevant for my audience. If, hypothetically, I were writing an emotional appeal for a mass audience, I'd make far more of an effort to conjure up emotive images with visualisable examples rather than a fairly abstract reference to the idea that racialism (as in the hierarchical structuring of races) and eugenics were common "scientific" beliefs among the educated classes, including progressives.

It would be pretty easy to add a lot of emotionally laden adjectives, but I simply described it as a catastrophic error, which I would say is reasonable and not hyperbolic given the historical reality.

If I didn't think an emotive appeal would work, trying it would damage both my ethos and the logos of my arguments.

When trying to interpret the motives of others, we all fall prey to our own biases as our brain is often simply trying to find reasons to reject the opposing view. If the purpose is honest discussion, assuming less than ideal motivations behind someone else's actions is problematic as it's hard to identify if we are being fair, or we are the ones being biased.

Now, I'm certainly not going to pretend I'm perfect in this regard, but unless it is absolutely unambiguous and we can't address the issue without mentioning it, it is usually better to give a post the most generous reading rather than the one we might be tempted to assume.

I seem to have touched a nerve and offended you. Given that, I shall not try to defend my analysis and leave things here.
 

MikeF

Well-Known Member
Premium Member
Again, I've never made a claim that other areas are exempt from this.

Many Christians were opposed to eugenics for the same reason they are now opposed to abortion. From a utilitarian perspective, they were probably 'right' in the first case and 'wrong' in the latter.

They were equally "irrational" in the foundations for their beliefs in both cases.

The more "rational sceptical" argument in each case doesn't necessarily lead to the best outcome.

This is why we need to apply sufficient scepticism to the outputs of unreliable sciences.




There is nothing I disagree with here, but it is just a statement of what is normatively desirable.

If we look at it positively, many sciences are not doing it well, and we should highlight these and make sufficient allowances for this when considering their outputs.

Earlier I gave you the example of a newspaper that was 50% fact and 50% fake news - roughly how much would those percentages have to improve before you would recommend people use its output to make consequential decisions?



Yes, but scientism is not a critique of normative science; it concerns individuals' attitudes to science's ability to produce reliable knowledge in all fields.

I agree with most of what you say is normatively desirable, but I don't think this describes or addresses the reality we actually have today.

That we fall short of our goals is to be human. I laud all skepticism and challenge regarding scientific work product. That, after all, is a critical and required element of this normative (as you label it) process. Since these inherent shortcomings cannot be cured, all that seems left to us is to continue to refine and improve this process, this scientific framework, which continues to provide demonstrable results, results that have not been achieved by other means.

I guess it is not clear to me if you envision viable alternatives to this process. In other words, despite the shortcomings, what else are we going to do?
 

MikeF

Well-Known Member
Premium Member
So the separation I would want to make is between the "hard" sciences, physics and the like, which require direct empirical evidence to support their claims.

The softer sciences, social sciences, things like economics and sociology and the like... which rely heavily on empirical fact to support their claims, but can also rely heavily on models and sources of information that are not purely empirical. This means that such-and-such a model can be rejected or accepted for various reasons, and these reasons aren't based on empirical data.

Even further down the line, we have the humanities. (History, philosophy, literature....) The methods for advancing a thesis in one of these fields usually focus on argumentation. Evidence plays a role, sure. But it is not the chief role of these academic disciplines to gather empirical evidence.

The crux of my argument lies here. I think disciplines like history and literature CAN furnish us with genuine and valuable knowledge. They ARE systematic disciplines, but I'd hardly call them sciences.

As an experiment, I’m going to rewrite the above using different phrasing (hopefully synonymous or at least not materially incorrect) to see how it might affect our perception of the ideas being expressed:

“So the separation I would want to make is between the "easy" sciences (working solely with phenomena that exhibit fixed properties and characteristics), physics and the like, which require rendered truths to support its claims.​
The “more difficult” sciences (working with dynamic phenomena that have mutable characteristics that may not remain constant under identical conditions, in addition to fixed property phenomena), social sciences, things like economics and sociology and the like... which rely heavily on rendered truth to support its claims, but also can rely heavily on models and sources of information that are not true. This means that such-and-such a model can be rejected or accepted for various reasons, and these reasons aren't based on rendered truths.
Even further down the line, we have the humanities. (History, philosophy, literature....) the methods for advancing a thesis in one of these fields usually focuses on argumentation. Rendered truths play a role, sure. But it is not the chief role of these academic disciplines to render truths.​
The crux of my argument lies here. I think disciplines like history and literature CAN render truths. They ARE systematic disciplines, but I'd hardly call them sciences.”​

What do you think? I’m sure you have a few objections to the changes (perhaps to all).

In any event, I feel my changes highlight where I found concern in your original version. In your description of “social sciences”, you describe modeling as not based solely on empirical information. If that is the case, how would you describe the non-empirical aspects of modeling? You also include “other sources of information” that are non-empirical, and I wonder if you could elaborate on what those may be and why they would be included in any analysis.

Last point: you describe the humanities as having the capacity to render truths, but say that doing so is not their goal or intent. My question here would be: if truths are rendered in these disciplines, what provides the confidence, in those instances where truth is claimed, that it has actually been rendered, and how is it demarcated from the remainder of the work product produced within the humanities?

If you want to call them [history, philosophy, literature] sciences, we can all get our hats and go home. Because (if we are going to call those disciplines sciences, I TOO am a proponent of scientism) and we pretty much agree on everything.

But if the humanities are NOT sciences in your view, then I stand by my position that we can gain knowledge from something other than the sciences.

I would argue that anyone and everyone has some capacity to render truth. It is my contention that we are in fact all born amateur scientists. Babies, from the moment they are born begin to exercise amateur empiricism, their experiences starting the process of building an understanding of the world around them, building a base of knowledge.

So, in whatever activity you care to name, knowledge can be rendered. What is at issue is by what means do we establish confidence in what we have determined to be knowledge given the inherent fallibility of Homo sapiens? How, in any of these activities can we be sure we have gained actual knowledge?

In my view, if, as a result of our inherent fallibility, we have to be skeptical of everything we claim as regards rendered truth, each claim has to be evaluated within a framework that strives to maintain objectivity and actively mitigates human fallibility. It is that framework that determines and assigns a level of confidence to the claimed knowledge.

If the activity does not incorporate the necessary mechanisms to appropriately assign confidence to knowledge claimed within the activity, then such claimed knowledge cannot be held with any objective confidence unless and until it has been appropriately evaluated in this way outside of the activity.

Scientific inquiry or working within a scientific framework simply means to establish objectivity and mitigate human fallibility to best abilities, using whatever means and methods are necessary and most appropriate for the specific question at hand, regardless of our abstract categorization of subjects.

We could go with Plato's classic definition of knowledge: a "justified, true belief." Sure, plenty of philosophers have poked a few holes in it over the centuries, but it still remains afloat. And for most discussions, it gets the job done.

It even works well with science. Let's say a scientist was calculating the mass of one of Jupiter's moons. He makes observations which lead him to reach the justifiable conclusion that it has x mass. (justified) The mass of the moon actually IS (roughly) what he calculates it to be. (true) And the scientist adds this fact (the moon's mass) to the things he believes about the world. (belief)

Anyhoo, I'm sure we can work out a decent definition of knowledge to use. Just let me know.

Now I'm gonna tackle the other long post you put down before.

I don’t think Plato’s conception is going to work for me, no offense to Plato. “Justified” to me, can end up just being “rationalized” and doesn’t take into account inherent human fallibility.

Here is my working definition I formulated when I began participating in the forum:

Knowledge is defined as rational expectation based on experience.

Let me know what you think and feel free to rip into it given your experience with other definitions and models of knowledge. :)
 

MikeF

Well-Known Member
Premium Member
Of course. And one thing that makes them well-equipped to do that is they read a LOT of history. They don't just uncover an urn and figure out everything about it from scratch.

So this question gets into the demarcation issues I brought up in my previous post about how to define the sciences. All the archeologist will be able to say is "I found an urn, and it had no wine residue." But a historian will look at writings from the contemporary era and learn that the wine-ceremony urns had an inscription of the wine god on them. One historian may publish a paper arguing that the inscription was of the wine god -citing x, y, and z reasons... based on ancient documents or nearby hieroglyphs. Another historian may publish a paper rejecting this thesis. This historian may cite documents that mention that urns were commonly inscribed with images denoting who was the owner of those urns. She would likewise present arguments x, y, and z to support her thesis. You can see how we've departed from science here a bit, can't you? And empiricism only plays a minor role in the formulation of these theories. If the archaeologist discovers wine residue, we can all go home.

Interesting. You have archeologists reading historians' summations and interpretations of source documents relevant to the period and site in question, whereas I imagined the archeologists digging into the source documentation themselves.

I guess we’ll have to wait and see if some actual historians and archeologists chime in and clarify this for us.

As to departing from empiricism, it seems to me that the source documents empirically exist, the hieroglyphs empirically exist. It seems to me, in all cases, each piece of empirical evidence is assigned some level of confidence that only grows when corroborated from other pieces of empirical evidence. This seems quite scientific in my view.

I am curious as to how empiricism, establishing a piece of information and assigning a level of confidence to it, only plays a minor role in formulating archaeological and historical theories. Does the majority of the theory consist of unsupported opinion?

But history is a far more murky affair. We have to rely on our wits, logical deduction, and what little information we have. Then we form historical theories. THEN those theories compete with one another.

"Quantum gravity is a far more murky affair. We have to rely on our wits, logical deduction, and what little information we have. Then we form scientific theories. THEN those theories compete with one another."

The scientific theories compete until some new observed phenomenon either supports or refutes one, some, or all of the competing theories.

This, too, can be true in archeology and history, as new material may be uncovered that sheds additional light, either confirming or refuting the current standing theories.

If we agree on one thing, it HAS to be that there is a huge difference in the methodologies used by physicists and historians.

And an equally huge difference between physicists and cell biologists, etc etc. Means and methods are specific to the question at hand and must, by the very nature of the differences inherent in the problems being addressed, be unique and appropriate to the specific circumstances under investigation.

What makes something scientific is employing the necessary means and methods specific to the problem within a framework that both maintains objectivity and mitigates human fallibility to best abilities, thus providing some level of confidence in the work product produced.

Models can also play a huge role in our drawing conclusions about history. We've all heard the narrative of the assassination of Archduke Ferdinand being the cause of WW1. But an "economic determinist" would cite many political and economic factors which would have eventually led to a war-- whether our darling Archduke managed to evade being killed or not. Historians use these models to great effect to make claims about historical events. But again, how empirical are we being here? We're a good ways from physics and its methodologies at this point, yet I would argue, we are still gaining knowledge about the subject matter.

Indeed, how empirical is it? It all depends on the confidence in, and completeness of, the data set used to make the model. When analyzing the results of these models, what criteria are used to establish that such historical models are working to great effect? What provides or instills confidence in them?

If they are simply mental thought exercises engaged in for fun, then that’s great. If you are making any claims that truths are being rendered, then there must be some means by which we can determine some level of confidence in what the models produce. Yes?

(Which is a rather fun and thought-provoking debate I must say.)

Ditto.
 
Well, then it seems our discussion is at an end as your definition of ‘scientism’ has fallen apart.

All that science cannot speak objectively about cannot be spoken objectively about by any other claimed avenue for finding objective truth. This problem then would not be uniquely related to science and thus perhaps we need to coin a separate -ism defined as a belief that one’s subjective preference is objective. It really doesn’t matter how the belief is rationalized.

This is akin to saying racism and sexism are both forms of bigotry so there is no need to have separate terms for them.

There is a difference between excessive belief due to religious fundamentalism and that due to scientism. Both can cause harm, but in different ways. It makes no sense to insist we need a one-size-fits-all term that is far less precise and meaningful.

We should recognise that scientific information underpins narratives and worldviews, the same as religious principles do, and that many people privilege scientific information and afford it great trust across the board.

We have all kinds of words that point to real phenomena, what is special about scientism that we cannot give it a name that points to a real world behaviour?

As to “excessive belief in accuracy”, besides the fact that there are qualitative aspects to what constitutes knowledge as well as quantitative, and that some of our understanding can only be in terms of probabilities, what your complaint boils down to is the holding of scientific work product, from any sub-specialty, with an unjustified level of confidence. When this occurs it is simply an expression of human error, and as with all types of errors injected by fallible human investigators or outside observers, it requires mitigation to the best of our abilities. Error is expected, as we can't take human beings out of the equation.

Pointing out all the ways mistakes are made in scientific inquiry misses the whole point of conducting the investigation within a scientific framework to begin with. If, despite human fallibility, this framework manages to provide a level of objectivity not available otherwise, and mitigates fallibility to an extent not possible with any other proposed avenue to knowledge, then it is the best available avenue we have for rendering truth.

Again you are returning to normative abstractions of what should happen in science. You are not addressing what actually happens directly and why we should (or should not) be concerned about it. What you are really doing is saying how you think people should avoid scientistic thinking, but that's like analysing the impact of racism on society by saying "it's simple, folk just shouldn't be racist". I agree with that idea of course, but it says little about the present reality.

Scientism is an expression of human error, yes. It is not a critique of normative science. So we should be able to call a spade a spade rather than trying to obscure it behind normative descriptors of how things should work in theory. Published findings in unreliable fields are not published with the proviso that, on balance of probabilities, the findings should be assumed to be false. Experts in any field are likely to overstate their expertise and the validity of their knowledge as humans tend towards overconfidence, particularly in their own abilities. Even those who recognise problems in their field think they can rise above it.

They may acknowledge limitations, but the scientists assume that they are the "good" ones in their field who yield meaningful results. In fact, the system encourages people to make overconfident claims and generate "interesting" findings as that is what is needed to gain status, funding and job security and progression.

Something I don't think you have really addressed, other than by describing how things should work normatively and asserting that everything is really working as expected.

At what point does a field stop being a meaningful or reliable producer of knowledge?

If you have a field that produces 25% correct and 75% incorrect information, what value does information in that field have? Can we say it is a producer of knowledge or should we simply treat it as an attempt at methodological refinement which should not really be trusted for public consumption at this stage, basically akin to a medicine in the research phase that may yield fruit years in the future?

The current state of affairs is that, for many people, these fields are reliable producers of knowledge. Do you agree?

If we have current policies that are imperfect but work reasonably well, and someone was suggesting reforms based on a range of findings from this field because they were "scientific", how should we view this?

In reality, they are not going to run every test again and again until they can be confident in the knowledge. Many people "trust the science", even when the field is massively unreliable.

Do you really think people look at the output of these sciences the same way they would view a newspaper they knew was 25% real news and 75% fake news? Do you really think this is just a public problem and people in the field genuinely assume most of their field, including their own work, is just the equivalent of fake news?

Proposing a normative solution to this problem is irrelevant; if a solution needs to be proposed, then the problem must exist and can be examined as is.


I guess it is not clear to me if you envision viable alternatives to this process. In other words, despite the shortcomings, what else are we going to do?

I'm happy to explain this, but would be more meaningful if you could address the questions above on the current state of affairs regarding how people view the unreliable sciences in the present.

Do you accept that many people do overestimate their reliability? Or would you say there are very few people who insufficiently apply principles of rational scepticism towards the unreliable sciences?
 
So the separation I would want to make is between the "hard" sciences, physics and the like, which require direct empirical evidence to support their claims.

The softer sciences, social sciences, things like economics and sociology and the like... which rely heavily on empirical fact to support their claims, but can also rely heavily on models and sources of information that are not purely empirical. This means that such-and-such a model can be rejected or accepted for various reasons, and these reasons aren't based on empirical data.

Even further down the line, we have the humanities. (History, philosophy, literature....) The methods for advancing a thesis in one of these fields usually focus on argumentation. Evidence plays a role, sure. But it is not the chief role of these academic disciplines to gather empirical evidence.

The crux of my argument lies here. I think disciplines like history and literature CAN furnish us with genuine and valuable knowledge. They ARE systematic disciplines, but I'd hardly call them sciences.


There are things that can be studied directly and that exist outside of human awareness of them, and others that can only be studied based on concepts, ideas, models and theories that we have created. So there is always a linguistic layer between us and "reality".

All the archeologist will be able to say is "I found an urn, and it had no wine residue." But a historian will look at writings from the contemporary era and learn that the wine-ceremony urns had an inscription of the wine god on them. One historian may publish a paper arguing that the inscription was of the wine god -citing x, y, and z reasons... based on ancient documents or nearby hieroglyphs. Another historian may publish a paper rejecting this thesis. This historian may cite documents that mention that urns were commonly inscribed with images denoting who was the owner of those urns. She would likewise present arguments x, y, and z to support her thesis. You can see how we've departed from science here a bit, can't you? And empiricism only plays a minor role in the formulation of these theories. If the archaeologist discovers wine residue, we can all go home. But history is a far more murky affair. We have to rely on our wits, logical deduction, and what little information we have. Then we form historical theories. THEN those theories compete with one another.

Recent examples include women buried in graves with typically male grave goods.

Someone will say this is an example of a non-binary or trans Viking (a point that would never have been made 50 years ago), another might say it was a woman who achieved high status and thus was honoured with a grave reflecting the masculine trappings of power, another might argue it was a sacrificed slave, complete with her owner's offerings to the gods.

I love history, but we can never look at it objectively, especially older history. But even modern history is imbued with myth, as narrating events requires some degree of ideologically influenced curation of them.

A myth is not a falsehood. Rather, a myth is a sophisticated social representation; a complex relationship between history, reality, culture, imagination and identity...

We are predisposed to see order, pattern and meaning in the world, and we find randomness, chaos and meaninglessness unsatisfying. Human nature abhors a lack of predictability and the absence of meaning. As a consequence, we tend to ‘see’ order where there is none, and we spot meaningless patterns when only the vagaries of chance are operating...

once a person has (mis)identified a random pattern as a ‘real’ phenomenon, it will not exist as a puzzling, isolated fact about the world. Rather, it is quickly explained and readily integrated into the person’s pre-existing theories and beliefs.
R Howells - The myth of the Titanic

We can try to give an accurate picture of the past, all the while accepting we are limited in our ability to do this and our histories will always be, in part, cultural constructs.

This differs from the study of industrial chemistry where, within reason, I can be pretty close to objective.

The social sciences are somewhere in between something like history and industrial chemistry as they often rely on narrative and cultural constructs.


Models can also play a huge role in our drawing conclusions about history. We've all heard the narrative of the assassination of Archduke Ferdinand being the cause of WW1. But an "economic determinist" would cite many political and economic factors which would have eventually led to a war-- whether our darling Archduke managed to evade being killed or not. Historians use these models to great effect to make claims about historical events. But again, how empirical are we being here? We're a good ways from physics and its methodologies at this point, yet I would argue, we are still gaining knowledge about the subject matter.

I'd say we can also learn general lessons from history, even if it is subjective whether or not the specific case in question is an accurate example.

If we looked at the abolition of the slave trade, people could make competing claims about changing economic conditions being the driver, or evangelical religious revivalism, or growing resistance from enslaved people, or imperial rivalry, or imperialism itself enforcing values on weaker nations.

All of these would describe things that could be considered to have played a role, and that describe general trends in world history, even if some of them may be more marginal than others in the actual process.

We can learn from these processes and see how similar phenomena might impact things today, even if they weren't all highly relevant to that specific situation.

Another way of thinking about history is, as some wit once said, "We need to learn from our past mistakes as we don't have enough time to make them all ourselves".

I agree that these would constitute knowledge, even if the narrative histories that tell them are not objectively true. Historically, history was often a discipline for communicating moral or practical lessons anyway (as well as an exercise in propaganda for the powerful).
 
I seem to have touched a nerve and offended you. Given that, I shall not try to defend my analysis and leave things here.

Not at all. I post out of enjoyment of discussion. Nothing said here will ever offend me, it's just an entertainment medium of no real consequence. It was just a long winded way of explaining I'm addressing you specifically as I doubt anyone else reads, and an emotional appeal would make little sense to someone who disagrees in the first place.

I'm happy for people to critique me all they like, if I disagree I'll explain why.

I tolerate @shunyadragon after all, who is the ultimate in bad faith arguing and who in true Scooby Doo style has a long running campaign to unmask me as a secret Christian who would have got away with it if it had not been for his intrepid sleuthing ;)

That doesn't offend me, I just find it funny.

I generally think you try to argue in good faith, although I sometimes think your presumptions as to motivation influence your judgement and interpretation. None of us here are beyond reproach though, and we all have our biases and presumptions.
 

shunyadragon

shunyadragon
Premium Member
Not at all. I post out of enjoyment of discussion. Nothing said here will ever offend me, it's just an entertainment medium of no real consequence. It was just a long winded way of explaining I'm addressing you specifically as I doubt anyone else reads, and an emotional appeal would make little sense to someone who disagrees in the first place.

I'm happy for people to critique me all they like, if I disagree I'll explain why.

I tolerate @shunyadragon after all, who is the ultimate in bad faith arguing and who in true Scooby Doo style has a long running campaign to unmask me as a secret Christian who would have got away with it if it had not been for his intrepid sleuthing ;)

That doesn't offend me, I just find it funny.

I generally think you try to argue in good faith, although I sometimes think your presumptions as to motivation influence your judgement and interpretation. None of us here are beyond reproach though, and we all have our biases and presumptions.
I have difficulty tolerating @Augustus after all, who is the ultimate in bad faith arguing and who in true Scooby Doo style has a long running campaign to present a paranoid selective belief in opposition to science.
 

MikeF

Well-Known Member
Premium Member
I generally think you try to argue in good faith, although I sometimes think your presumptions as to motivation influence your judgement and interpretation. None of us here are beyond reproach though, and we all have our biases and presumptions.

I will say that it is my express goal to argue in good faith and feel like I am at least trying. It is also my express goal to *know* what is true (to the extent that is possible) and not simply believe something to be true. Nor is it my intent or desire to instill false or untrue beliefs in others. I also acknowledge that I, like everyone else, have biases, subjective preferences, can be misinformed, hold ideas or concepts that are false and am in no way beyond reproach. :)

I guess I would ask what exactly it means to you to *not* argue in good faith, or to argue in bad faith. Is it arguing in bad faith to argue from a set of biases and subjective preferences that one holds for some set of *reasons* that validate them for the individual, or to argue from misinformation or with ideas and concepts that are false if the individual is unaware of their incompleteness or falsity?

I personally do not see arguments formulated and expressed from within one’s set of both conscious and unconscious biases as automatically qualifying as having argued in bad faith. However, I also do not think it is in bad faith to acknowledge bias and subjective preference in a conversation in order to clearly and openly demarcate between what constitutes facts in the discussion and what constitutes subjective preference or some form of bias.

What, then, constitutes arguing in bad faith for me? I guess, off the top of my head, I would say that intentionally and knowingly arguing false ideas or concepts as true or in some way purposefully misrepresenting the state of things would constitute arguing in bad faith.

Now, in full acknowledgement of all my shortcomings and inadequacies, it has been my *subjective perception*, whether or not that subjective perception is objectively valid, either in whole or in part, that whenever bias is mentioned in reference to reference material you provide or to your personal comments, you initiate a vigorous defense (which is fine), but also characterize the mere broaching of the subject as an act of arguing in bad faith and sometimes as a personal attack. From my flawed subjective perspective, this essentially makes the topic of bias and its relevance to a discussion taboo in any conversation with you. If it is agreed that *everyone*, from a Nobel laureate to Joe Schmo on an internet forum, has bias and subjective preference, how is it *ever* not relevant to explore how it may be being expressed in a discussion? I can’t accept that to address what everyone agrees is present and active is somehow an act of bad faith.

You in turn observe that my presumptions as to motivation influence my judgment and interpretation. Being the admittedly flawed and imperfect creature that I am, I will not argue that I do not make such presumptions, or that, once made, they do not influence my judgment and interpretation. I would ask, however, whether it is possible, in at least some instances, that it is a subjective perception that is involved, and not always and/or solely a presumption, that is influencing my judgment and interpretation. It would seem to me that if bias, subjective preference, or some motive is *ever* being expressed, surely it is possible for it to be perceived, at least by *someone*, even if such a skill proves to be completely unavailable to myself. If it is ever possible to perceive motive, bias, and subjective preference in another, then surely it is appropriate to evaluate whether what has been subjectively perceived is *actually* and objectively present. Are we not *supposed* to receive *all* information with a critical eye, through a filter of rational skepticism? If not, how are we ever to recognize bias, subjective preference, motive, misinformed or misleading information, or information that is simply false?

In light of all of the above, I do not see it as useful or appropriate to consider the subject of bias or subjective preference as taboo, nor would I consider broaching the subject to automatically constitute arguing in bad faith.

Not at all. I post out of enjoyment of discussion. Nothing said here will ever offend me; it's just an entertainment medium of no real consequence. It was just a long-winded way of explaining that I'm addressing you specifically, as I doubt anyone else reads this, and an emotional appeal would make little sense to someone who disagrees in the first place.

One way to interpret that is that you are simply talking to yourself, using another simply as a prompt for soliloquy on a particular subject or point. :)

That has not been my subjective perception, but perhaps it is the case.

I'm happy for people to critique me all they like, if I disagree I'll explain why.

Ok then. I’ll try to remember that. :)

I tolerate @shunyadragon after all, who is the ultimate in bad faith arguing and who in true Scooby Doo style has a long running campaign to unmask me as a secret Christian who would have got away with it if it had not been for his intrepid sleuthing ;)

That doesn't offend me, I just find it funny.

Not speaking to this case specifically, but I can see how this might present a conundrum. If there were a secret Christian who was intent on maintaining a persona as an explicit atheist (for whatever reason), how does one distinguish between the fervent claims of being an atheist made by an actual atheist and the fervent yet false claims of atheism made by an actual secret Christian?

I suppose it doesn’t really matter, for a rose, by any other name, still smells as sweet (or sour, depending on preferences). All one can do is address directly the words and actions as found in context. Is it wrong to point out similarities between a contextual set of words and actions and those expressed by others? Not necessarily. Things can be learned through such comparing and contrasting, in my view.
 
I guess I would ask what exactly it means to you to *not* argue in good faith, or to argue in bad faith. Is it arguing in bad faith to argue from a set of biases and subjective preferences that one holds for some set of *reasons* that validate them for the individual, or to argue from misinformation or with ideas and concepts that are false if the individual is unaware of their incompleteness or falsity?

Arguing in good faith means trying your best to understand your opponent's perspective, presenting it accurately, interpreting it charitably when in doubt (within reason, as we are time-limited), trying to respond to the whole argument (or at least its key points), etc.

Bad faith would be misrepresenting ideas either on purpose or with complete disregard for accuracy, interpreting words in the most uncharitable manner, wilfully ignoring corrections (or my personal favourite, when you correct their misrepresentation they accuse you of "moving the goalposts"), repeatedly ignoring key arguments while finding something minor to quibble over, etc.

We are all biased, and this medium of communication is perfectly designed for miscommunications and misunderstandings, so these alone don't make something bad faith. Although, if you have to correct the same misunderstanding half a dozen times, you might start to assume they are not really trying their best to accurately present your views.


One way to interpret that is that you are simply talking to yourself, using another simply as a prompt for soliloquy on a particular subject or point. :)

That has not been my subjective perception, but perhaps it is the case.

I post for a variety of reasons, but will sometimes make an argument because it is something I'd like to think about, and discussion focuses the mind.

In real life, I rarely discuss the kind of things I discuss here.

Not as much as I'd like, as those kinds of topics aren't that common here. Sometimes I will make an argument because I want to read something and it gives me the motivation, as I acquire books and articles at about 100 times the rate I read them.

There are lots of reasons for posting, but we are mostly here for entertainment (intrinsically motivated learning is also a form of entertainment imo)

Now, in full acknowledgement of all my shortcomings and inadequacies, it has been my *subjective perception*, whether or not that subjective perception is objectively valid, either in whole or in part, that whenever bias is mentioned in reference to reference material you provide or to your personal comments, you initiate a vigorous defense (which is fine), but also characterize the mere broaching of the subject as an act of arguing in bad faith and sometimes as a personal attack. From my flawed subjective perspective, this essentially makes the topic of bias and its relevance to a discussion taboo in any conversation with you. If it is agreed that *everyone*, from a Nobel laureate to Joe Schmo on an internet forum, has bias and subjective preference, how is it *ever* not relevant to explore how it may be being expressed in a discussion? I can’t accept that to address what everyone agrees is present and active is somehow an act of bad faith.

You in turn observe that my presumptions as to motivation influence my judgment and interpretation. I will not argue that, being the admittedly flawed and imperfect creature that I am, I do not make such presumptions, or that, once made, they do not influence my judgment and interpretation. I would ask, however, whether it is possible that in at least some instances it is a subjective perception, and not always and/or solely a presumption, that is influencing my judgment and interpretation. It would seem to me that if bias, subjective preference, or some motive is *ever* being expressed, surely it is possible for it to be perceived, at least by *someone*, even if such a skill proves to be completely unavailable to myself. If it is ever possible to perceive motive, bias, and subjective preference in another, then surely it is appropriate to evaluate whether what has been subjectively perceived is *actually* and objectively present. Are we not *supposed* to receive *all* information with a critical eye, through a filter of rational skepticism? If not, how are we ever to recognize bias, subjective preference, motive, misinformed or misleading information, or information that is simply false?

While nothing is wrong, in general, with what you say, we have one major benefit when we are personally accused of having a certain motivation: we can easily see which accusations are very far from the truth (even if we often can't see our specific biases, we know which ones are so wide of the mark as to be easy to rule out).

Me mentioning your potential biases was largely the result of having been unsuccessful in getting you to stop repeating the same error by directly stating the reasons it was incorrect. It was just a different tack.

How do you respond when someone says something about you that you know to be incorrect? What do you do if they keep repeating it?

Anyway, I'm happy to say that, on RF I have been accused of:

Pro-science bias
Anti-science bias
Pro-philosophy bias
Pro-woo bias
Bias against religion because I'm an atheist
Bias against atheism
Bias because I'm religious
Pro-Christian bias
Pro-Muslim bias
Anti-Muslim bias
Bias because I am a Muslim
Bias because I am a Christian
Bias because I am a Protestant
Bias because I am a Catholic
Anti-Catholic bias
Anti-pagan bias
Bias because I am a conservative
Bias because I am a liberal
Anti-communist bias
Anti-capitalist bias
etc. etc.

:D

(Many of these for presenting absolutely standard positions and supported with peer-reviewed scholarship. It's almost as if, in general, people have a tendency to assume bias on the part of people who disagree with them...)

So I tend to find it less than thrilling to have to correct claims of bias from a perspective that I don't actually hold.

So, for example, if you argue I want to "limit science" and shield philosophy from rational scepticism, I know this to be false and that any imagined motivation behind my desire to do these must also be false.

What I am arguing is that we need more rational scepticism regarding the unreliable sciences, how we deal with their outputs, and how we should operate in areas where we don't have reliable scientific information (and will not have it for the foreseeable future).

(also I never see anything here as a "personal attack" or take offence as it's just an internet forum, so no need to worry about that ;) )

Not speaking to this case specifically, but I can see how this might present a conundrum. If there were a secret Christian who was intent on maintaining a persona as an explicit atheist (for whatever reason), how does one distinguish between the fervent claims of being an atheist made by an actual atheist and the fervent yet false claims of atheism made by an actual secret Christian?

I suppose it doesn’t really matter, for a rose, by any other name, still smells as sweet (or sour, depending on preferences). All one can do is address directly the words and actions as found in context. Is it wrong to point out similarities between a contextual set of words and actions and those expressed by others? Not necessarily. Things can be learned through such comparing and contrasting, in my view.

Usually folk who adopt a fake religious/irreligious persona to troll do so very clumsily and for a handful of posts before getting bored. It's not like it brings any real long-term benefit for people interested in meaningful discussions with community members.

Re: "Is it wrong to point out similarities a contextual set of words and actions have with those expressed by others?" I'd say it's something we should be careful about as we assume unrelated beliefs "cluster" (for example pro-gun and anti-abortion).

While these clusters may reflect actual trends, the assumption causes half of the misunderstandings on RF.

A key example would be scientism, which, as I noted before, is a magical word that makes many people instantly jump to conclusions and become completely unable to understand pretty basic concepts that they would likely agree with in any other circumstance. The reasoning goes: fundies misuse the term scientism, therefore any person who uses it must share some of the same agendas as the fundies (despite it being a standard academic term in use for 50+ years prior to being appropriated for religious apologetics).
 

MikeF

Well-Known Member
Premium Member
This is akin to saying racism and sexism are both forms of bigotry so there is no need to have separate terms for them.

There is a difference between excessive belief due to religious fundamentalism and that due to scientism. Both can cause harm, but in different ways. It makes no sense to insist we need a one-size-fits-all term that is far less precise and meaningful.

We should recognise that scientific information underpins narratives and worldviews, the same as religious principles do, and that many people privilege scientific information and afford it great trust across the board.

We have all kinds of words that point to real phenomena; what is special about scientism that we cannot give it a name that points to a real-world behaviour?



Again you are returning to normative abstractions of what should happen in science. You are not addressing what actually happens directly and why we should (or should not) be concerned about it. What you are really doing is saying how you think people should avoid scientistic thinking, but that's like analysing the impact of racism on society by saying "it's simple, folk just shouldn't be racist". I agree with that idea of course, but it says little about the present reality.

Scientism is an expression of human error, yes. It is not a critique of normative science. So we should be able to call a spade a spade rather than trying to obscure it behind normative descriptors of how things should work in theory. Published findings in unreliable fields are not published with the proviso that, on balance of probabilities, the findings should be assumed to be false. Experts in any field are likely to overstate their expertise and the validity of their knowledge as humans tend towards overconfidence, particularly in their own abilities. Even those who recognise problems in their field think they can rise above it.

They may acknowledge limitations, but scientists assume that they are the "good" ones in their field who yield meaningful results. In fact, the system encourages people to make overconfident claims and generate "interesting" findings, as that is what is needed to gain status, funding, job security, and progression.

Something I don't think you have really addressed, other than by describing how things should work normatively and asserting that everything is really working as expected.

At what point does a field stop being a meaningful or reliable producer of knowledge?

If you have a field that produces 25% correct and 75% incorrect information, what value does information in that field have? Can we say it is a producer of knowledge or should we simply treat it as an attempt at methodological refinement which should not really be trusted for public consumption at this stage, basically akin to a medicine in the research phase that may yield fruit years in the future?

The current state of affairs is that, for many people, these fields are reliable producers of knowledge. Do you agree?

If we have current policies that are imperfect but work reasonably well, and someone was suggesting reforms based on a range of findings from this field because they were "scientific", how should we view this?

In reality, they are not going to run every test again and again until they can be confident in the knowledge. Many people "trust the science", even when the field is massively unreliable.

Do you really think people look at the output of these sciences the same way they would view a newspaper they knew was 25% real news and 75% fake news? Do you really think this is just a public problem and people in the field genuinely assume most of their field, including their own work, is just the equivalent of fake news?

Proposing a normative solution to this problem is irrelevant; if a solution needs to be proposed, then the problem must exist and can be examined as is.




I'm happy to explain this, but it would be more meaningful if you could address the questions above on the current state of affairs regarding how people view the unreliable sciences in the present.

Do you accept that many people do overestimate their reliability? Or would you say there are very few people who insufficiently apply principles of rational scepticism towards the unreliable sciences?

The scientific framework is the tool that enables us to demarcate between facts, subjective preferences, and imaginative woo in a manner that strives to be as objective as possible and that seeks to actively mitigate human fallibility. This tool cannot work if it is not used. If you agree, then there is no limit to the scope in which this tool is used, for it is required to figure out exactly what it is we are talking about, and if there is no limit in scope then your definition of scientism falls apart.

How to improve this necessary tool and how to improve people’s understanding of how this tool works would be a different topic and a different thread in my view.
 