• Welcome to Religious Forums, a friendly place to discuss all religions.


Are there any good arguments for God?

Tiberius

Well-Known Member
Neither is religion/spirituality.

That's not a science at all, is it?

I mean, we can at least get people to agree about the majority of things about George Washington. Where he was born, and when. What he did. All that sort of stuff. We don't get people insisting he was a plumber or a farmer. We agree that he was the President.

OK, then anecdotal evidence is above 'worthless'.

When you can get people saying the same thing, MAYBE.

But religion/spirituality doesn't do that, does it? How many different religions are there? Too many to count.

The argument I would make for God is certainly more complex than the argument for George Washington, but I would argue the best analysis of the evidence and argumentation leads to belief in a God concept. (I discussed this in my first post, #202 in this thread.)

Given that most if not all of the sources for George Washington are in agreement and most if not all of the sources for God are not, I don't think it works.

And your argument in post 202 is just a claim, and you readily admit that you can't provide "normal" evidence for it, but must instead rely on " 'beyond the normal' evidence that consciousness can not be explained materialistically". In other words, you got nothing testable. (Of course, if you disagree, please show me a way I can verify your claims.)
 

Tiberius

Well-Known Member
For the sake of simplicity, I won't get into the issues with NHST and other problems (mostly placebo related) as to why this doesn't work as well as we'd like.

I'm not aware of any problems, but if you can, give me a source. And given the vast amount of knowledge about the universe it has provided, I'd say it works pretty well.

But before I get into the real issues:


Now you're just insulting me! :)

At least you're not saying I'm wrong!

Let's examine scientific testing. I'm not going to check my memory here, but I believe it was, oddly enough, exactly a century before Einstein's 1905 Nobel Prize-winning work on the photoelectric effect that Young "proved" light was a wave. He demonstrated that light behaved in ways that no particle could (although Newton's reputation and preference for the particle/corpuscular theory of light meant acceptance of Young's view took time). By the close of the 19th century, not only had Young's view been vindicated, but the notion of light (and the electromagnetic spectrum) as consisting of waves was central to physics. A century of scientific testing had shown not only that light was a wave, but that it was so definitely a wave that the most successful framework in physics since Newton's mechanics required this (that framework being electromagnetism).
Then, in 1905, Einstein showed that light was composed of particles.
If scientific testing were as simple as you state, physicists would have laughed at any attempt to restore the "corpuscular" (particle) view of light. Einstein would have been dismissed. But let's imagine that some scientists, wishing to show him to be wrong rather than rely on established "fact", decided to put his explanation of the photoelectric effect to the test (which was done, actually). They would find evidence that Einstein was, indeed, correct: light is made up of parts.

But this presents a very, very big problem. After all, a century of scientific testing, and indeed a theory central to all of physics (at the time), held that light was a wave. Now, "The Scientific Method" (which we don't actually practice, at least not as taught) holds that this means we have to perform tests to see which of the opposing hypotheses (light as a wave vs. light as a particle) is the correct one. If scientists had actually done this, had actually followed the naïve, simplified "scientific method" taught in primary school and to undergraduates, they would still be arguing to this day over whether light was a wave or a particle.

Luckily, the question was so simplistic and the evidence so clear that, despite the best efforts of the greatest physicists of the period, only one conclusion was possible: any scientific test to determine whether light was a wave or a particle was bound to fail, because the entire theoretical framework which stated that something could be either a particle or a wave, but not both, was ITSELF wrong. Completely wrong. Nothing was either a wave or a particle.

Your argument depends on a false dichotomy - light being either a particle or a wave. The experiments clearly showed that light behaved as a particle in some cases and as a wave in others, so your dichotomy would have quickly been discovered to be false.

Also, what do you think the "primary school scientific method" is, and how is it not actually being practiced? Please provide examples.

Unfortunately, things in the sciences are rarely so simple and clear when it comes to deciding whether the evidence you get is because you tested the right question the right way, or because your theoretical assumptions falsely dictated how you would ask a question or how you would test it (or both), or even because your methods (statistical, instrumental, etc.) were flawed.

That's what peer review is for.

Thus, rather basic, fundamental theories of the nature of cognition which are mutually exclusive and incompatible have been tested and supported for almost half a century now (and the older view of cognition has been around since the beginning of the cognitive sciences).
The primary methodology used for scientific testing across the sciences (from particle physics to medicine to the behavioral sciences that begat the method) is NHST (null hypothesis significance testing, a.k.a. significance testing, a.k.a. statistical decision theory). It is a combination of two radically opposed approaches to statistics and data analysis (the approach of Sir Ronald Fisher on the one hand, and that of Pearson & Neyman on the other) that has been criticized as fundamentally flawed since before its inception. It is the standard methodology taught to researchers today, despite the fact that the many hundreds of criticisms of the paradigm as fatally flawed go almost completely unanswered. In some cases those who have tried to address them, such as an APA task force, have been met not only with apathy but with the APS (and even particle physicists) more generally adopting this welding of two opposed statistical testing paradigms as standard practice in and beyond the social/behavioral sciences.

This is a lot of words, but doesn't say much. What are the flaws? What are the criticisms? You don't even provide a link to a source to support your claims. All it says is, "Some people think science is flawed, and the science guys ignore them." And a few name drops.

Currently in the "hardest" of sciences (physics) there exists a fundamental dispute. It isn't over experiments or theory. It is over how to do physics and what physics is or should be. On one side are the anthropic physicists, who believe that our best theories and evidence make it clear that there is no possible way the classical, reductionist model can succeed and no "theory of everything" that could exist. They regard those (an increasing minority) who stick to the goal of such a theory and the reductionist approach as suffering from what is basically a religious bias (or something like religious bias). Those with ACTUAL religious bias accuse the anthropic physicists of opting for the anthropic solution over God because they are biased against anything resembling evidence for a creator. And the (ever-dwindling) supporters of the classical approach join those inclined to see evidence for a created, or at least "special", universe in physics/cosmology in their critique of the anthropically inclined as too willing to accept mathematical aesthetics as evidence, and too willing to opt for solutions other than our universe being "special" in order to bias the evidence by fixing the models/equations.
And an even smaller minority accuses basically everybody else of being too willing to see reality in the mathematical models that were irreparably separated from physical reality by quantum mechanics a century ago, a divide which particle physics, cosmology, etc., have only widened (this is my position).

You're absolutely right! There are people who disagree over science, therefore God is likely, since we obviously can't trust what the science guys say!

In the "hardest" science, therefore, we find as mainstream theories such frameworks as string theory, quantum gravities, inflationary cosmology, supersymmetry, dark energy/matter, and many more theories or notions lacking any empirical support and often even the capacity FOR empirical support.

Please provide a source that says that one of these things is being presented as FACT (rather than as an interesting avenue to explore that may be right, but also may not be) without any supporting evidence.

Historians have more evidence, in general, than exists for M-theory or even any of the theories of gravitation.

lol, more evidence than gravitation.

Are you really saying there is not much evidence for relativity? lol
 

LegionOnomaMoi

Veteran Member
Premium Member
I'm not aware of any problems, but if you can give me a source.
I've written about this (and provided sources) elsewhere:
How Hypothesis Testing Doesn’t Test Hypotheses
Part I of Part II: Null Hypothesis Significance Testing (NHST)
The Cult of Statistical Significance: Reader Recommended in Research Review’s Review
and so on. But to give you a more comprehensive bibliography, see here:
402 Citations Questioning the Indiscriminate Use of Null Hypothesis Significance Tests in Observational Studies
with some additions from my bibliography:

Gigerenzer, G., Krauss, S., & Vitouch, O. (2004). The null ritual. In D. Kaplan (Ed.). (2004). The Sage handbook of quantitative methodology for the social sciences (pp. 391–408).
Hubbard, R., & Lindsay, R. M. (2008). Why P values are not a useful measure of evidence in statistical significance testing. Theory & Psychology, 18(1), 69-88.
Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56(1), 16.
Kline, R. B. (2013). Beyond Significance Testing: Statistics Reform in the Behavioral Sciences. American Psychological Association.
Lambdin, C. (2012). Significance tests as sorcery: Science is empirical—significance tests are not. Theory & Psychology, 22(1), 67-90.
McCloskey, D. N., & Ziliak, S. T. (2009). The Unreasonable Ineffectiveness of Fisherian "Tests" in Biology, and Especially in Medicine. Biological Theory, 4(1), 44.
Taagepera, R. (2008). Making Social Sciences More Scientific: The Need for Predictive Models. Oxford University Press.
Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. University of Michigan Press.
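To make concrete what these papers are criticizing, here is a minimal sketch of the NHST ritual itself, in Python (the data, the effect size, and the 0.05 cutoff are all illustrative, not taken from any of the studies above):

```python
import math
import random

def z_test_pvalue(sample_a, sample_b):
    """Two-sided z-test for a difference in means (large-sample approximation).

    Null hypothesis: both samples come from populations with equal means.
    Returns the p-value: the probability, *assuming the null is true*, of a
    mean difference at least this extreme. NHST "rejects the null" when the
    p-value falls below an arbitrary cutoff (conventionally 0.05).
    """
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)  # standard error of the difference
    z = (mean_a - mean_b) / se
    # Two-sided tail probability of the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# A tiny (but real) effect becomes "statistically significant" with enough
# data, even though it may be practically meaningless -- one of the standard
# criticisms of the ritual.
random.seed(0)
treated = [random.gauss(0.05, 1.0) for _ in range(50_000)]
controls = [random.gauss(0.00, 1.0) for _ in range(50_000)]
print(z_test_pvalue(treated, controls))  # far below 0.05 despite a trivial effect
```

A vanishingly small p-value here says only that the data are improbable under the null; it says nothing about whether the effect matters, which is a recurring theme in the citations above.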

Your argument depends on a false dichotomy - light being either a particle or a wave.
No, my argument depends upon physicists having believed that false dichotomy, and upon that dichotomy being false. The problem wasn't simply that it was false, but that the experiments, and the interpretations of their outcomes, weren't actually just a matter of testing; they rested upon theoretical assumptions. This is always true, but we are seldom faced with such clear evidence that our tests will always fail until we completely revise our theory (in fact, this basically never happens).

The experiments clearly showed that light behaved as a particle in some cases and as a wave in others, so your dichotomy would have quickly been discovered to be false.
The experimental tests have not, cannot, and will not ever show this. First, because it is impossible to "behave" like a particle in any sense that physicists even up to Einstein, Bohr, Heisenberg, etc., would have recognized (everything WAS either particles or waves). Second, because the orthodox interpretation of quantum mechanics is that experimental tests CANNOT even in theory determine what anything at the subatomic scale "behaves" like: quantum physics simply allows us to predict the outcomes of experiments; it DOES NOT tell us anything about the nature of reality other than these outcomes (I VEHEMENTLY object to this interpretation). Third, this isn't MY dichotomy. It was the empirically, scientifically PROVEN dichotomy of the entirety of modern physics (right up until the early-to-mid 20th century). The debate dates back to the ancient Greeks, but it wasn't until the modern era and scientists like Galileo, Newton, Laplace, etc., that such hypotheses were rigorously tested. Finally, the wave-particle duality language was thrown in as an ad hoc (and embarrassing) linguistic device by physicists desperate to cling to the doomed model of classical physics. A "particle" that "behaves" like a "wave" is akin to a dead cat acting as if it were alive. It is by definition paradoxical.
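For reference (these are standard textbook relations, not anything unique to my argument), the Planck-Einstein and de Broglie relations are what formally weld the two pictures together, defining each "particle" quantity in terms of a "wave" quantity:

```latex
E = h\nu = \hbar\omega, \qquad p = \frac{h}{\lambda} = \hbar k
```

Energy and momentum (particle notions) are given directly in terms of frequency and wavelength (wave notions); that is exactly the paradoxical welding described above, stated as equations.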

Also, what do you think the "primary school scientific method" is, and how is it not actually being practiced? Please provide examples.
I have a section on one of my blogs devoted to this:
The Scientific Method

That's what peer review is for.
That's what peer review can't address and hasn't. It's why there are decades of peer-reviewed studies that are absolutely in fundamental conflict (some of which I've worked on), why we are lucky that the wave/particle debate was so simple and the evidence contradicting the entire model of physics at the time so clear, and why decades of peer-reviewed studies on the utter failure of NHST have been ignored even as the paradigm has gained in popularity.


This is a lot of words, but doesn't say much. What are the flaws? What are the criticisms? You don't even provide a link to a source to support your claims.
Ok, some additional citations (I intended to include only those that I could find links for, but discovered upon testing that some were broken; please let me know if any of the links don't work or if you would like copies of those that I couldn't find working links for and I will attach the papers).
Ambaum, M. H. P. (2010). Significance tests in climate science. Journal of Climate, 23(22), 5927-5932.
Branch, M. (2014). Malignant side effects of null-hypothesis significance testing. Theory & Psychology, 24(2), 256-277.
Gill, J. (1999). The insignificance of null hypothesis significance testing. Political Research Quarterly, 52(3), 647-674.
Gliner, J. A., Leech, N. L., & Morgan, G. A. (2002). Problems with null hypothesis significance testing (NHST): what do the textbooks say?. The Journal of Experimental Education, 71(1), 83-92.
Hunter, J. E. (1997). Needed: A ban on the significance test. Psychological Science, 8(1), 3-7.
Killeen, P. R. (2005). An alternative to null-hypothesis significance tests. Psychological science, 16(5), 345-353.
Orlitzky, M. (2011). How can significance tests be deinstitutionalized?. Organizational Research Methods, 1094428111428356.
Rozeboom, W. W. (1960). The fallacy of the null-hypothesis significance test. Psychological bulletin, 57(5), 416.
Schweder, T., & Norberg, R. (1988). A Significance Version of the Basic Neyman-Pearson Theory for Scientific Hypothesis Testing [with Discussion and Reply]. Scandinavian Journal of Statistics, 225-242.
Stang, A., Poole, C., & Kuss, O. (2010). The ongoing tyranny of statistical significance testing in biomedical research. European journal of epidemiology, 25(4), 225-230.
Thompson, B. (2004). The “significance” crisis in psychology and education. The Journal of Socio-Economics, 33(5), 607-613.
Yoccoz, N. G. (1991). Use, overuse, and misuse of significance tests in evolutionary biology and ecology. Bulletin of the Ecological Society of America, 72(2), 106-111.


You're absolutely right! There are people who disagree over science, therefore God is likely, since we obviously can't trust what the science guys say!
I am a "science guy".



Please provide a source that says that one of these things is being presented as FACT (rather than as an interesting avenue to explore that may be right, but also may not be) without any supporting evidence.
In general, we don't provide anything as "fact" or "proven". Even in the hardcore, computational aspects of scientific fields (e.g., the Hodgkin-Huxley model or Bell's inequality) "proofs" are hotly disputed because they are context-ridden and theory-laden.

lol, more evidence than gravitation.
This is perhaps THE unsolved problem in physics: the two best supported theories in practically all of science, quantum mechanics and general relativity, disagree. In general relativity, gravitation doesn't really exist at all (it is spacetime geometry), whereas in quantum mechanics it is essentially non-existent, and in extensions of quantum mechanics (e.g., QFTs/particle physics) it is the graviton field, which lacks any and all empirical support.

Are you really saying there is not much evidence for relativity? lol
No. Although modern physics suggests that what reality IS isn't what we experience. I came to physics for the most part after already working as a scientist, so I am less inclined than those who started out in physics to accept such sentiments as these:

"The notion of Physical Object is Untenable”
D’Ariano, G. M. (2015). It from Qubit. In It From Bit or Bit From It? (pp. 25-35). Springer.

"We now know that the moon is demonstrably not there when nobody looks."
Mermin, N. D. (1981). Quantum mysteries for anyone. The Journal of Philosophy, 78(7), 397-408.

“The only reality is mind and observations”
Henry, R. C. (2005). The mental universe. Nature, 436(7047), 29.

“Our external physical reality is a mathematical structure”
Tegmark, M. (2008). The mathematical universe. Foundations of Physics, 38(2), 101-150.

"The laws of quantum physics are in conflict with a classical world, in particular, with local and macroscopic realism as characterized by the violation of the Bell and Leggett-Garg inequalities, respectively."
Kofler, J., & Brukner, Č. (2008). Conditions for quantum violation of macroscopic realism. Physical review letters, 101(9), 090403.

"It is generally believed that quantum physics refutes realism, materialism, determinism, and perhaps even rationality. These beliefs, central to the so-called Copenhagen interpretation, were held by the very fathers of the new physics, particularly Niels Bohr (1934), Max Born (1953), Werner Heisenberg (1958), and Wolfgang Pauli (1961)."
Bunge, M. (2012). Does Quantum Physics Refute Realism, Materialism and Determinism? In Evaluating Philosophies (Boston Studies in the Philosophy of Science) (pp. 139-149). Springer.
 

Yerda

Veteran Member
It's just confusing because you claimed their experiences were enough to convince you to keep an open mind about what they said.
Aye. That's it.

Tiberius said:
Measure the dryness of their mouth if you want. Check for signs of dehydration. Get them the drink and see if they drink it or not.

In any case, your analogy isn't the best, because a glass of water is a long way removed from the fundamental nature of the universe.
True. I was using it to show that we often depend on what you call anecdotal evidence.

Tiberius said:
By scientific testing. If you have a control group which does not meditate and another group that does meditate, and the changes occur only in the group that meditates, then it is evidence that the meditative state is the cause. Further testing with greater control of the variable (whether the people are meditating or not) will give more accurate answers.

Really, science isn't that hard.
Fair enough.
 

atanu

Member
Premium Member
Only my own personal interpretation, which, I am fully aware, is not demonstrable to anyone else.

Yeah. How do you then expect that I can communicate to you my experience of my own "I"?

But based on all available evidence, we are each individuals that exist.

But this is getting a little off topic...

It is not off topic. I asked: Do you exist on your own? Did you give rise to your "I" sense? Are you in control of your "I" sense?

I do not think you have answered.
 

Aiviu

Active Member
1.) Would you like to translate this into clear english please?

2.) If you can't see the evidence until you believe, then it sounds to me like getting people to believe so they will be less critical of weak arguments supporting the claim.

3.) No, evidence is based on reality.

4.) That's not a question.

5.) And my experiences are unique to me, and thus anything that comes from them is objective.

1.) I don't want to sound offensive, but your "logic" is not as intelligent as you think. It needs a bit more. What?! You want to know what "more"? But... but I thought you were clever. Or at least you seem to be smart... OK, a little help: what about being positive? Read your OP [... perfectly willing to ...], but now your OP and further answers are deconstructive.
2.) That was not very clever of you. Go on and be "critical" with others but not with yourself... What I took for your cleverness turns out to be your weakness.
3.) Evidence is science. It's not there to prove God's existence. Could you provide evidence for someone/something that is unknown to you? No! You need a reason to hope that there is. If you don't have a reason, then you waste your time by searching for evidence. Please read Goethe's "Faust". Faust is almost similar to you.
4.) You didn't even read what I wrote. You require evidence from others... and I say the question of evidence has to be asked of yourself. And by evidence I understand something different in a topic about something which obviously doesn't exist to you.
5.) Wow. You are so cruel to yourself... you are only an object to yourself.

You keep your argumentation deconstructive. It feels like Eristic Dialectic.
And yes, you won. I am so done with your cleverness.
 

George-ananda

Advaita Vedanta, Theosophy, Spiritualism
Premium Member
That's not a science at all, is it?
I have certainly not been claiming religion/spirituality is like the hard sciences!

When you can get people saying the same thing, MAYBE.
Good, at least now you see anecdotal evidence can be in the game of understanding the universe.
But religion/spirituality doesn't do that, does it? How many different religions are there? Too many to count. Given that most if not all of the sources for George Washington are in agreement and most if not all of the sources for God are not, I don't think it works.
Ah, but religion/spirituality may be a much more complex thing than the existence of George Washington, so the anecdotal evidence requires more thought and consideration. Religion/spirituality is integrated with culture, etc., so it takes more analysis. Careful and rational consideration of bodies of anecdotal evidence is part of that process (neither blind acceptance nor blind dismissal).

And your argument in post 202 is just a claim, and you readily admit that you can't provide "normal" evidence for it, but must instead rely on " 'beyond the normal' evidence that consciousness can not be explained materialistically". In other words, you got nothing testable. (Of course, if you disagree, please show me a way I can verify your claims.)
Such rigorous testing is just applicable to the hard sciences. Religion/spirituality is much more complex and requires consideration and rational thought as opposed to physical tests.
 

outhouse

Atheistically
Religion/spirituality is much more complex and requires consideration and rational thought as opposed to physical tests

This is true. Because it originates in human emotions and conscious thought, mythology and imagination and theology. It does not exist outside human thought.
 

Tiberius

Well-Known Member
I've written about this (and provided sources) elsewhere:
How Hypothesis Testing Doesn’t Test Hypotheses
Part I of Part II: Null Hypothesis Significance Testing (NHST)
The Cult of Statistical Significance: Reader Recommended in Research Review’s Review
and so on. But to give you a more comprehensive bibliography, see here:
402 Citations Questioning the Indiscriminate Use of Null Hypothesis Significance Tests in Observational Studies
with some additions from my bibliography:

Gigerenzer, G., Krauss, S., & Vitouch, O. (2004). The null ritual. In D. Kaplan (Ed.). (2004). The Sage handbook of quantitative methodology for the social sciences (pp. 391–408).
Hubbard, R., & Lindsay, R. M. (2008). Why P values are not a useful measure of evidence in statistical significance testing. Theory & Psychology, 18(1), 69-88.
Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56(1), 16.
Kline, R. B. (2013). Beyond Significance Testing: Statistics Reform in the Behavioral Sciences. American Psychological Association.
Lambdin, C. (2012). Significance tests as sorcery: Science is empirical—significance tests are not. Theory & Psychology, 22(1), 67-90.
McCloskey, D. N., & Ziliak, S. T. (2009). The Unreasonable Ineffectiveness of Fisherian "Tests" in Biology, and Especially in Medicine. Biological Theory, 4(1), 44.
Taagepera, R. (2008). Making Social Sciences More Scientific: The Need for Predictive Models. Oxford University Press.
Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. University of Michigan Press.

Using the example given in the first link (the Euphorian drug), there are ways to help minimise the effect.

The proposed trial in the link only has two groups, a group of depressed people given Euphorian and a group of depressed people given a placebo. However, I would design the experiment a little differently, with more groups.

A group of depressed people given Euphorian and told nothing.
A group of depressed people given a placebo and told nothing.
A group of depressed people given Euphorian and told that it is Euphorian.
A group of depressed people given Euphorian and told it is a placebo.
A group of depressed people given a placebo and told that it is Euphorian.
A group of depressed people given a placebo and told that it is a placebo.
A group of depressed people given nothing.
In short, I think that proper construction of the trial technique can minimise the risk of incorrect data.

One of the criticisms mentioned was, "Ignore that this difference could be simply because the people in the placebo group were more severely depressed, or that the treatment group had participants that are more prone to the placebo effect, or any number of reasons." Couldn't this be controlled for by simply running the experiment again and changing who's in each group? For example, if you run a trial with 1000 people and randomly assign each person a number from 1 to 1000, each person will have a unique number. In the first trial, Group A is made up of people 1-500 and Group B of people 501-1000. Then repeat the trial, but change the groups so Group A is all even-numbered people and Group B is all odd-numbered people.
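A minimal Python sketch of that reassignment idea (the trial is hypothetical; I've used a fresh random split per replication rather than the fixed even/odd rule, since re-randomizing is the standard way to stop any accidental group imbalance from recurring):

```python
import random

def assign_groups(participant_ids, seed):
    """Randomly split participants into two equal-sized groups.

    Re-running the trial with a different seed gives an independent split,
    so an imbalance in the first run (e.g. the more severely depressed
    people clustering in one group) is unlikely to recur on replication.
    """
    ids = list(participant_ids)
    rng = random.Random(seed)  # seeded so each replication is reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

# First trial: one random split of 1000 participants.
group_a, group_b = assign_groups(range(1, 1001), seed=1)
# Replication: a different, independent split of the same participants.
group_a2, group_b2 = assign_groups(range(1, 1001), seed=2)
print(len(group_a), len(group_b))  # 500 500
```

If an effect survives several such re-randomized replications, it is much harder to attribute it to a lucky (or unlucky) initial grouping.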

No, my argument depends upon physicists having believed that false dichotomy, and upon that dichotomy being false. The problem wasn't simply that it was false, but that the experiments, and the interpretations of their outcomes, weren't actually just a matter of testing; they rested upon theoretical assumptions. This is always true, but we are seldom faced with such clear evidence that our tests will always fail until we completely revise our theory (in fact, this basically never happens).

Why would they believe it is a false dichotomy? The data would clearly show that light behaves as a particle and ALSO as a wave. They would very quickly realise that neither viewpoint is completely correct.

The experimental tests have not, cannot, and will not ever show this. First, because it is impossible to "behave" like a particle in any sense that physicists even up to Einstein, Bohr, Heisenberg, etc., would have recognized (everything WAS either particles or waves). Second, because the orthodox interpretation of quantum mechanics is that experimental tests CANNOT even in theory determine what anything at the subatomic scale "behaves" like: quantum physics simply allows us to predict the outcomes of experiments; it DOES NOT tell us anything about the nature of reality other than these outcomes (I VEHEMENTLY object to this interpretation). Third, this isn't MY dichotomy. It was the empirically, scientifically PROVEN dichotomy of the entirety of modern physics (right up until the early-to-mid 20th century). The debate dates back to the ancient Greeks, but it wasn't until the modern era and scientists like Galileo, Newton, Laplace, etc., that such hypotheses were rigorously tested. Finally, the wave-particle duality language was thrown in as an ad hoc (and embarrassing) linguistic device by physicists desperate to cling to the doomed model of classical physics. A "particle" that "behaves" like a "wave" is akin to a dead cat acting as if it were alive. It is by definition paradoxical.

I'm sorry, but are you saying that experiments can NOT show that light can behave like a particle and can also act like a wave?

I have a section on one of my blogs devoted to this:
The Scientific Method

I see nine posts there. Please show me which one describes this "primary school scientific method."

That's what peer review can't address and hasn't. It's why there are decades of peer-reviewed studies that are absolutely in fundamental conflict (some of which I've worked on), why we are lucky that the wave/particle debate was so simple and the evidence contradicting the entire model of physics at the time so clear, and why decades of peer-reviewed studies on the utter failure of NHST have been ignored even as the paradigm has gained in popularity.

Are you suggesting that errors in your experimental technique can not be found by getting other people to examine your work, or getting other people to replicate your experiments?

Ok, some additional citations (I intended to include only those that I could find links for, but discovered upon testing that some were broken; please let me know if any of the links don't work or if you would like copies of those that I couldn't find working links for and I will attach the papers).
Ambaum, M. H. P. (2010). Significance tests in climate science. Journal of Climate, 23(22), 5927-5932.

This seems to be talking about results being used incorrectly and being misunderstood, not an actual problem with the technique used to get the results.

Branch, M. (2014). Malignant side effects of null-hypothesis significance testing. Theory & Psychology, 24(2), 256-277.

This seems to be common sense. Using our earlier example, we can say that Euphorian leads to decreased depression, but that doesn't mean that decreased depression means the person has been using Euphorian. Like I said earlier, care when constructing experiments can make sure that this erroneous conclusion is avoided.

Gill, J. (1999). The insignificance of null hypothesis significance testing. Political Research Quarterly, 52(3), 647-674.

This is talking about social sciences. I'm talking more about hard sciences such as physics.

Gliner, J. A., Leech, N. L., & Morgan, G. A. (2002). Problems with null hypothesis significance testing (NHST): what do the textbooks say?. The Journal of Experimental Education, 71(1), 83-92.

Again, this is talking about social sciences.

Hunter, J. E. (1997). Needed: A ban on the significance test. Psychological Science, 8(1), 3-7.

This is talking about psychology, a social science.

Killeen, P. R. (2005). An alternative to null-hypothesis significance tests. Psychological Science, 16(5), 345-353.

Again, this is talking with regards to psychology.

Orlitzky, M. (2011). How can significance tests be deinstitutionalized? Organizational Research Methods, 1094428111428356.

This is talking with regards to social sciences.

Rozeboom, W. W. (1960). The fallacy of the null-hypothesis significance test. Psychological Bulletin, 57(5), 416.

This is talking with regards to psychology.

Schweder, T., & Norberg, R. (1988). A Significance Version of the Basic Neyman-Pearson Theory for Scientific Hypothesis Testing [with Discussion and Reply]. Scandinavian Journal of Statistics, 225-242.

I am unable to comment on this as there is no link.

Stang, A., Poole, C., & Kuss, O. (2010). The ongoing tyranny of statistical significance testing in biomedical research. European Journal of Epidemiology, 25(4), 225-230.

Alas, I don't have time to read this right now.

Thompson, B. (2004). The “significance” crisis in psychology and education. The Journal of Socio-Economics, 33(5), 607-613.

This is with regards to psychology.

Yoccoz, N. G. (1991). Use, overuse, and misuse of significance tests in evolutionary biology and ecology. Bulletin of the Ecological Society of America, 72(2), 106-111.

Again, no link, so I can't comment.
 

Tiberius

Well-Known Member
I am a "science guy".

In what field?

In general, we don't provide anything as "fact" or "proven". Even in the hardcore, computational aspects of scientific fields (e.g., the Hodgkin-Huxley model or Bell's inequality) "proofs" are hotly disputed because they are context-ridden and theory-laden.

I know that, but I think you also know what I meant. Can you show me that any of those ideas is being presented as equivalent to, say, gravity?

Perhaps THE unsolved problem in physics (the two best-supported theories in practically all of science, quantum mechanics and general relativity, disagree: in general relativity, gravitation doesn't really exist at all, whereas in quantum mechanics it is essentially non-existent, and in extensions of quantum mechanics, e.g., QFTs/particle physics, it is the graviton field, which lacks any and all empirical support).

But relativity has withstood every effort to prove it wrong, and it gives fantastically accurate results.

No. Although modern physics suggests that what reality IS isn't what we experience. I came to physics for the most part after already working as a scientist, so I am less inclined than those who started out in physics to accept such sentiments as these:

I'm not saying that reality is what we experience. What we experience is only our interpretation of reality.
 

LegionOnomaMoi

Veteran Member
Premium Member
I have to apologize because I don't have the time to address all of the great comments at the moment (I will get to them) and I mainly wanted to say one thing about significance testing.
In what field?
Neuroscience (or at least that was my field; it's now more physics than anything else). My specialty is the dynamics and physics of complex systems (and research methods, which is where I get most of my money).

This is talking about social sciences. I'm talking more about hard sciences such as physics.
Neuroscience is one of those fields where I have a lot of colleagues with backgrounds in psychology and many with backgrounds in physics, computer science, engineering, etc. I used to lord it over the psychology bunch that we in the hard sciences (or in the hard science approach to the brain) didn't have to deal with the methodological problems that plague the social sciences. That was before my work in particle physics. Consider a fairly standard text on statistical testing in particle physics:
Lista, L. (2016). Statistical Methods for Data Analysis in Particle Physics. Springer.

From the chapter on hypothesis testing:
"A key task in most of physics measurements is to discriminate between two or more hypotheses on the basis of the observed experimental data.
A typical example in physics is the identification of a particle type (e.g.: as a muon vs pion) on the basis of the measurement of a number of discriminating variables provided by a particle-identification detector (e.g.: the depth of penetration in an iron absorber, the energy release in scintillator crystals or measurements from a Cherenkov detector, etc.)...
In statistical literature when two hypotheses are present, these are called null hypothesis, H0, and alternative hypothesis, H1."

This probably sounds fine. For those of us who know the terminology, however, it sounds like a death-knell for the hard sciences. The methodology used in particle physics to determine whether e.g., we found the Higgs is borrowed from...(cue dramatic music)...the social sciences. In fact the second section in that chapter talks about Fisher's linear discriminant analysis. Fisher was a founder of the hypothesis testing paradigm (the other founders were radically opposed to his views, which means that the foundations of significance testing contain two radically opposed views haphazardly flung together). All the founders were social scientists. In a text for social scientists, Making Social Sciences More Scientific: The Need for Predictive Models, an economist with a physics background takes social sciences to task for their reliance on this flawed paradigm (significance testing) by comparing it to the "real" scientific method of models used in the hardest of the sciences: physics. Although the book is rather flawed in a number of ways, this was more or less my position. In physics, theories/models/frameworks are judged upon how well they predict results. We don't need to determine whether a pill outperforms a placebo using significance tests with a whole host of assumptions about the nature of the data, the participants, and the experimental design. Right?
Wrong. Because quantum mechanics forever severed our ability to directly connect observable reality with experimental reality (in quantum/particle physics, "observables" are mathematical objects with very particular properties that are in no way like their classical counterparts), we can't ever be sure of how much the results of experiments are due to the mathematics vs. observation (because mathematics determines what we observe as it doesn't in classical physics or everyday life). This is partly why particle physicists have adopted a methodology developed in the social sciences in order to determine whether they have detected a muon, pion, or nothing. The most fundamental level of reality is now tested by physicists using the methods developed for behaviorists in the social sciences and taught to current undergrads in psychology and sociology in universities around the world.
You won't find as many criticisms of NHST in the physics literature as you will in the medical literature, and nothing like the criticisms found in the social sciences. This isn't because physics is a hard science. It's because this paradigm was DEVELOPED in the social sciences and physicists borrowed it out of necessity. Reference texts and graduate textbooks in physics now cite well-known works by behavioral scientists from the 30s and 40s. I was extremely depressed upon finding this out.
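To make the pill/placebo point concrete, here is a minimal sketch of one common form of NHST, a permutation test on two groups. This is an illustration only: the drug name "Euphorian" is borrowed from earlier in the thread, and the scores and group sizes are invented.

```python
# Hedged sketch of NHST via a permutation test. "Euphorian" and the
# score data are invented for illustration; not from any real trial.
import random
import statistics

random.seed(42)

# Hypothetical improvements in depression score (higher = better).
euphorian = [7, 9, 6, 8, 10, 7, 9, 8]
placebo   = [5, 6, 4, 7, 5, 6, 5, 4]

observed = statistics.mean(euphorian) - statistics.mean(placebo)

# Under H0 (no drug effect), group labels are exchangeable: shuffle the
# pooled scores many times and count how often chance alone produces a
# difference at least as large as the observed one.
pooled = euphorian + placebo
n = len(euphorian)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p < {max(p_value, 1/trials):.4f}")
```

Note what the p-value answers and nothing more: how often label-shuffling alone reproduces the observed difference. The design, the assumptions about the data and participants, and the interpretation all sit outside the procedure.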
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
I argue for a non-dual (God and creation are not-two) pantheistic God concept. God is the core of all consciousness in the universe. The argument I make unfortunately does not fit into a short reply post but it starts with 'beyond the normal' evidence that consciousness can not be explained materialistically.
Sounds like an argument from ignorance (unless you've examined all possible materialistic explanations - have you?).

From there it includes the insights of those who perceive beyond the normal and advanced souls that take birth for the purpose of leading us to the truth.
IOW, people whose claims can't be rationally evaluated?
 

George-ananda

Advaita Vedanta, Theosophy, Spiritualism
Premium Member
Sounds like an argument from ignorance (unless you've examined all possible materialistic explanations - have you?).
I would need to examine all possible materialistic explanations (of course an impossible task) only if I was claiming proof of my position. I am only claiming the 'most reasonable' position when all evidence and argumentation is considered.

IOW, people whose claims can't be rationally evaluated?
Anecdotal evidence can be rationally considered for things like quantity, quality and consistency. We learn about history, etc. by rational consideration of anecdotal evidence. Anecdotal evidence should not be blindly accepted nor blindly dismissed. Rational consideration should be used.
 
Last edited:

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
I would need to examine all possible materialistic explanations (of course an impossible task) only if I was claiming proof of my position. I am only claiming the 'most reasonable' position when all evidence and argumentation is considered.
By claiming your position to be the "most reasonable", you aren't claiming perfect knowledge of all materialistic explanations, but you are claiming to know them to a reasonably high degree of certainty in a general sense. I would argue that this is an impossible task.

... and if you haven't done that much, then I disagree that your position is the "most reasonable".

Anecdotal evidence can be rationally considered for things like quantity, quality and consistency. We learn about history, etc. by rational consideration of anecdotal evidence. Anecdotal evidence should not be blindly accepted nor blindly dismissed. Rational consideration should be used.
It isn't just a matter of it being anecdotal, if by "beyond the normal", you mean something like "beyond that which we can confirm rationally", which I think you do.

Some anecdotal evidence can be checked: for instance, if a historical account says that a battle of the War of 1812 was fought in a particular place, you could go there, look for buried musket balls and other physical signs, and when you find them, take this as support for the account. OTOH, you can't do this for most - if any - paranormal claims; in those cases, you're generally talking about blindly taking people at their word.
 

George-ananda

Advaita Vedanta, Theosophy, Spiritualism
Premium Member
By claiming your position to be the "most reasonable", you aren't claiming perfect knowledge of all materialistic explanations, but you are claiming to know them to a reasonably high degree of certainty in a general sense. I would argue that this is an impossible task.

... and if you haven't done that much, then I disagree that your position is the "most reasonable".
In life, no one is so stringent in their beliefs that they refuse to accept as reasonable any knowledge beyond what has been proved in the hard physical sciences. We could not function with that level of stringency. Judging 'reasonableness' is part of necessary human intelligence.
It isn't just a matter of it being anecdotal, if by "beyond the normal", you mean something like "beyond that which we can confirm rationally", which I think you do.
No, by 'beyond the normal' I am referring to events including elements outside of our normal/familiar three-dimensional physical world. And there is no perfect definition (before we go there).
Some anecdotal evidence can be checked: for instance, if a historical account says that a battle of the War of 1812 was fought in a particular place, you could go there, look for buried musket balls and other physical signs, and when you find them, take this as support for the account. OTOH, you can't do this for most - if any - paranormal claims; in those cases, you're generally talking about blindly taking people at their word.
I said already that we should not 'blindly accept' but should rationally consider, using all information and evidence at our disposal. This includes considering things like the quantity, quality and consistency of the evidence, all pertinent theories, etc.
 

LegionOnomaMoi

Veteran Member
Premium Member
Please show me which one describes this "primary school scientific method."

I'll do you one better. I'll give you descriptions/criticisms from the literature rather than one of my blogs:

“Around the middle of the 20th century, the Scientific Method was offered as a template for teachers to emulate for the activity of scientists (National Society for the Study of Education, 1947). It was composed of anywhere from five to seven steps (e.g., making observations, defining the problem, constructing hypotheses, experimenting, compiling results, drawing conclusions). Despite criticism beginning as early as the 1960s, this oversimplified view of science has proven disconcertingly durable and continues to be used in classroom today”
Windschitl, M. (2004). Folk theories of “inquiry:” How preservice teachers reproduce the discourse and practices of an atheoretical scientific method. Journal of Research in Science Teaching, 41(5), 481-512.

“One of the most widely held misconceptions about science is the existence of the scientific method. The modern origins of this misconception may be traced to Francis Bacon’s Novum Organum (1620/1996), in which the inductive method was propounded to guarantee ‘‘certain’’ knowledge. Since the 17th century, inductivism and several other epistemological stances that aimed to achieve the same end (although in those latter stances the criterion of certainty was either replaced with notions of high probability or abandoned altogether) have been debunked, such as Bayesianism, falsificationism, and hypothetico-deductivism (Gillies, 1993). Nonetheless, some of those stances, especially inductivism and falsificationism, are still widely popularized in science textbooks and even explicitly taught in classrooms. The myth of the scientific method is regularly manifested in the belief that there is a recipelike stepwise procedure that all scientists follow when they do science. This notion was explicitly debunked: There is no single scientific method that would guarantee the development of infallible knowledge (AAAS, 1993; Bauer, 1994; Feyerabend, 1993; NRC, 1996; Shapin, 1996).” (emphases added)
Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. (2002). Views of nature of science questionnaire: Toward valid and meaningful assessment of learners’ nature of science. Journal of Research in Science Teaching, 39, 497–521.


"The model of ‘scientific method’ that probably reflects many people’s understanding is one of scientific knowledge being ‘proved’ through experiments...That is, the ‘experimental method’ offers a way of uncovering true knowledge of the world, providing that we plan our experiments logically, and carefully collect sufficient data. In this way, our rational faculty is applied to empirical evidence to prove (or otherwise) scientific hypotheses. This is a gross simplification, and misrepresentation, of how science actually occurs, but unfortunately it has probably been encouraged by the impoverished image of the nature of science commonly reflected in school science." (emphasis added)
Taber, K. S. (2009). Progressing Science Education: Constructing the Scientific Research Programme into the Contingent Nature of Learning Science (Science & Technology Education Library Vol. 37). Springer.


"a focus on practices (in the plural) avoids the mistaken impression that there is one distinctive approach common to all science—a single “scientific method”—or that uncertainty is a universal attribute of science. In reality, practicing scientists employ a broad spectrum of methods" (emphasis added)
Schweingruber, H., Keller, T., & Quinn, H. (Eds.). (2012). A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Committee on a Conceptual Framework for New K-12 Science Education Standards. National Research Council’s Board on Science Education, Division of Behavioral and Social Sciences and Education.


There is no scientific method in the sense that there is no linear sequence, no set of steps, and no procedure that accurately describes even a simplistic model of scientific inquiry. The Scientific Method as such is a myth:

“Myth of 'The Scientific Method’
This myth is often manifested in the belief that there is a recipe-like stepwise procedure that typifies all scientific practice. This notion is erroneous: there is no single ‘‘Scientific Method’’ that would guarantee the development of infallible knowledge. Scientists do observe, compare, measure, test, speculate, hypothesize, debate, create ideas and conceptual tools, and construct theories and explanations. However, there is no single sequence of (practical, conceptual, or logical) activities that will unerringly lead them to valid claims, let alone ‘‘certain’’ knowledge”
Abd‐El‐Khalick, F., Waters, M., & Le, A. P. (2008). Representations of nature of science in high school chemistry textbooks over the past four decades. Journal of Research in Science Teaching, 45(7), 835-855.

“A key myth...is a belief in a universal scientific method. As with many myths, those who hold to it are startled when they discover its inaccuracy; those who know it is a myth are surprised by its persistence in textbooks, curricula, and lesson plans. I've seen teachers become visibly shaken when they learn the scientific method is a myth. I've also heard aspirants to a teacher education program say they studied the scientific method in preparation for their application interviews. Somehow the myth of the scientific method lives on and not only within the realm of the science classroom. The persisting mythology of a scientific method is viewed as a problem within educational research (Rowbottom & Aiston, 2006) as well as for those who teach science.”
Settlage, J. (2007). Demythologizing science teacher education: Conquering the false ideal of open inquiry. Journal of Science Teacher Education, 18(4), 461-467.


What's amazing is that the criticisms of this presentation of a single method of "steps" (e.g., formulate hypothesis, develop a way to test it, try to prove it wrong, if confirmed it becomes "theory") are almost as old as the notion itself:


“Nothing could be more stultifying, and, perhaps more important, nothing is further from the procedure of the scientist “than a rigorous tabular progression through the supposed ‘steps’ of the scientific method, with perhaps the further requirement that the student not only memorize but follow this sequence in his attempt to understand natural phenomena"
Harvard Committee. (1945). General education in a free society: Report of the Harvard Committee. Cambridge: Harvard University Press.


I use the particle/wave example to show how dependent hypotheses, experiments, and the interpretations of findings are upon theory. Physicists were not aware that they were assuming that physical systems were all either particles or waves. They thought that's just how things were (and obviously so). Thus when Young showed light behaved like a wave, that should have settled the matter. Particles do not and cannot behave like waves, and there is no third option (so it was thought). It turns out that nothing is either particles or waves, that this obvious reality was a false assumption intrinsic to all theories in physics, and no test could confirm that light (or anything else) was actually composed of particles or waves.

In fact, the assumption that things are composed of particle-like elements continues to play a huge role in physics. Quantum mechanics, according to the orthodox interpretation, provides us with a statistical method for predicting experimental outcomes. Physical systems are mathematical entities that live in an abstract (often infinite-dimensional) space with no known relationship to any actual "physical" system. However, QM suffers from a serious drawback: it is not relativistic. Early attempts to develop a relativistic quantum physics were hampered by the mass-energy equivalence of special relativity and the extreme oscillations & fluctuations of energy in quantum mechanics. This means that quantum processes should allow for the creation of new quantum "entities" essentially ex nihilo.

Particles in modern physics are simply quantized "units" (not necessarily of things). It is an assumption that these units exist as point-particles in some field, and that assumption not only drives the nature of discoveries but what we say these discoveries are. We introduce "virtual" particles into equations and models to balance them, yet these particles aren't virtual (they are causally efficacious, if one is to interpret physical theory as being, well, physical). They do not exist as waves at all and cannot (the wave equation of quantum mechanics cannot allow for the creation of "new" particles and was abandoned early on as a means to get to a relativistic quantum physics; it was replaced by a new, quantum theory of fields). In the standard model of particle physics, the nonlocal, "wave"-like nature of quantum systems is replaced by the nonlocality of fields. Particles are regained by assumption and modern physics proceeds by interpreting and developing the mathematical theory in terms of this assumption. This divide is seen most clearly in certain fields of physics (such as cosmology, astrophysics, particle physics, theoretical physics, etc.) in which much research goes into developing theories that are only “testable” mathematically (e.g., inflation models in cosmology, M-theory, supersymmetry, etc.).




Are you suggesting that errors in your experimental technique cannot be found by getting other people to examine your work, or getting other people to replicate your experiments?

That is quite possible, yes. It is often the case that disagreements aren’t resolved by experiments because replication is irrelevant: there exists disagreement as to the nature and implications of the findings even granting that the experiments can be replicated. For example, the biomedical model of mental health was created rather suddenly in the early 80s. It assumed that underlying each mental illness was a distinct pathology. Psychiatrists thought that the medical evidence would come as our understanding of brain function and physiology increased. Instead, this evidence has shown a surprising degree of similarity between very diverse diagnoses. Yet because the evidence is interpreted in terms of the assumed theory by proponents and without it by critics, experiments have little effect on the debate. There are more extreme examples (e.g., how experiments “support” pseudoscience because reproducible studies make wild assumptions about logical relations between designs, outcome, and interpretation) and less extreme, and while disagreements aren’t generally solved by reproducibility they do get hashed out as more general evidence accumulates (although not always as they should).


This is talking about social sciences. I'm talking more about hard sciences such as physics.

Where do you think the hard sciences got this paradigm? The social sciences. Time was, such methods weren't needed in the harder sciences, which were mostly guided by physical intuition, clearer results, simpler systems, etc. This is no longer true (I challenge anybody who argues so to give me an intuitive account of quantum field theory). The logical issues with NHST don't change because it is used in medicine, climate science, or physics vs. sociology.
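A quick simulation shows why the logic is field-independent: the 5% Type I error rate is a property of the testing procedure itself, not of whatever the data happen to represent. This is a hedged sketch using synthetic Gaussian noise, not data from any real experiment.

```python
# Sketch: when H0 is true by construction, a two-sided test at
# alpha = 0.05 rejects about 5% of the time, whatever field the data
# come from. Synthetic data only; all numbers here are illustrative.
import math
import random

random.seed(0)

n, experiments = 30, 20_000
rejections = 0
for _ in range(experiments):
    # H0 is TRUE by construction: pure noise with mean 0, sigma 1.
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)  # z-statistic with known sigma
    if abs(z) > 1.96:                      # two-sided test, alpha = 0.05
        rejections += 1

rate = rejections / experiments
print(f"false-positive rate with H0 true: {rate:.3f}")
```

The procedure controls this error rate and only this error rate; whether a 5% long-run false-positive guarantee is the question a sociologist, a physician, or a particle physicist actually wants answered is exactly what the criticisms cited above dispute.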
 

LegionOnomaMoi

Veteran Member
Premium Member
But relativity has withstood every effort to prove it wrong, and it gives fantastically accurate results.
Actually, relativity contradicts itself (depending upon how we understand it). Special relativity implies that nothing can go faster than light (or at least whatever does or can, it must be massless). General relativity, combined with observations, tells us that galaxies are "going" faster than the speed of light. This paradox is resolved by understanding the superluminal constraint of special relativity in terms of the general theory.
Gravity isn't. Both special and general relativity are incompatible with ALL RESULTS from the microcosmic realm. It is FANTASTICALLY WRONG to say that EITHER theory gives us ANY "accurate results" here. The ENTIRETY OF QUANTUM PHYSICS had to be redesigned to make quantum theory compatible with SPECIAL relativity, but ANY AND ALL ATTEMPTS to make the superior theory of general relativity compatible with ANY AND ALL experimental results from the microscopic realm (or, indeed, with the entire standard model of particle physics) have and will fail. General relativity is incompatible with ALL OF QUANTUM PHYSICS and ALL of the results from ANY experiments with subatomic systems (and many results with macroscopic systems).
 

Tiberius

Well-Known Member
Neuroscience is one of those fields where I have a lot of colleagues with backgrounds in psychology and many with backgrounds in physics, computer science, engineering, etc. I used to lord it over the psychology bunch that we in the hard sciences (or in the hard science approach to the brain) didn't have to deal with the methodological problems that plague the social sciences. That was before my work in particle physics. Consider a fairly standard text on statistical testing in particle physics:
Lista, L. (2016). Statistical Methods for Data Analysis in Particle Physics. Springer.

From the chapter on hypothesis testing:
"A key task in most of physics measurements is to discriminate between two or more hypotheses on the basis of the observed experimental data.
A typical example in physics is the identification of a particle type (e.g.: as a muon vs pion) on the basis of the measurement of a number of discriminating variables provided by a particle-identification detector (e.g.: the depth of penetration in an iron absorber, the energy release in scintillator crystals or measurements from a Cherenkov detector, etc.)...
In statistical literature when two hypotheses are present, these are called null hypothesis, H0, and alternative hypothesis, H1."

This probably sounds fine. For those of us who know the terminology, however, it sounds like a death-knell for the hard sciences. The methodology used in particle physics to determine whether e.g., we found the Higgs is borrowed from...(cue dramatic music)...the social sciences. In fact the second section in that chapter talks about Fisher's linear discriminant analysis. Fisher was a founder of the hypothesis testing paradigm (the other founders were radically opposed to his views, which means that the foundations of significance testing contain two radically opposed views haphazardly flung together). All the founders were social scientists. In a text for social scientists, Making Social Sciences More Scientific: The Need for Predictive Models, an economist with a physics background takes social sciences to task for their reliance on this flawed paradigm (significance testing) by comparing it to the "real" scientific method of models used in the hardest of the sciences: physics. Although the book is rather flawed in a number of ways, this was more or less my position. In physics, theories/models/frameworks are judged upon how well they predict results. We don't need to determine whether a pill outperforms a placebo using significance tests with a whole host of assumptions about the nature of the data, the participants, and the experimental design. Right?
Wrong. Because quantum mechanics forever severed our ability to directly connect observable reality with experimental reality (in quantum/particle physics, "observables" are mathematical objects with very particular properties that are in no way like their classical counterparts), we can't ever be sure of how much the results of experiments are due to the mathematics vs. observation (because mathematics determines what we observe as it doesn't in classical physics or everyday life). This is partly why particle physicists have adopted a methodology developed in the social sciences in order to determine whether they have detected a muon, pion, or nothing. The most fundamental level of reality is now tested by physicists using the methods developed for behaviorists in the social sciences and taught to current undergrads in psychology and sociology in universities around the world.
You won't find as many criticisms of NHST in the physics literature as you will in the medical literature, and nothing like the criticisms found in the social sciences. This isn't because physics is a hard science. It's because this paradigm was DEVELOPED in the social sciences and physicists borrowed it out of necessity. Reference texts and graduate textbooks in physics now cite well-known works by behavioral scientists from the 30s and 40s. I was extremely depressed upon finding this out.

Just for simplicity's sake, are you able to sum up the problems in a few sentences? And maybe describe some solutions that have been proposed?
 

LegionOnomaMoi

Veteran Member
Premium Member
Just for simplicity's sake, are you able to sum up the problems in a few sentences? And maybe describe some solutions that have been proposed?
Sure. The problem is that hypothesis testing/NHST is fundamentally flawed. The solution is to rid ourselves of it (as has been proposed for many decades).
Another problem is that there is no The Scientific Method, and we should stop teaching that this nonsense exists.
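One of the oldest of those criticisms (it goes back at least to Rozeboom's 1960 paper cited earlier) can be shown with a few lines of arithmetic: a p-value is P(data at least this extreme | H0), not P(H0 | data), and conflating the two can be badly wrong. The prior and power below are invented numbers chosen only to make the gap vivid.

```python
# Sketch of the inverse-probability fallacy in NHST. All inputs are
# invented for illustration; treating the tail probability as a
# likelihood is itself a simplification.
prior_h0 = 0.9   # assumed: most tested hypotheses are null
p_value = 0.04   # "significant" at the conventional 0.05 level
power = 0.5      # assumed: P(result this extreme | H1)

# Bayes' theorem for P(H0 | data):
posterior_h0 = (p_value * prior_h0) / (
    p_value * prior_h0 + power * (1 - prior_h0)
)
print(f"p = {p_value}, yet P(H0 | data) is about {posterior_h0:.2f}")
```

With these (invented) inputs, a "significant" p of 0.04 coexists with a roughly 42% chance that the null is true, which is why a p-value alone cannot carry the inferential weight routinely placed on it.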
 