
charlie sc

Well-Known Member
You cherry-picked the evidence to focus on skeptic Ray Hyman's calculations from 1985 and then jumped to the conclusion that studies below the independently significant 5% level "found nothing."
I’m not even talking about psi. I’m trying to see if you use your logic in all areas. In other words, I’m trying to determine if you’re logically coherent or rational. It didn’t matter if I used a meta analysis for psi or for alcohol addiction, etc. I am curious if you will also call any study that is insignificant in a meta analysis low quality.

Btw, I don’t even know what you mean by the "jumped to the conclusion that studies below the independently significant 5% level 'found nothing'" part. Expound on this. Are you saying that independently they found significance, but after Hyman's Bonferroni correction they didn't?
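For readers unfamiliar with the correction being argued about: a Bonferroni correction divides the significance threshold by the number of tests, so studies that clear 5% individually can fail the corrected threshold. A minimal sketch with made-up p-values (illustrative only, not the actual ganzfeld data):

```python
# Illustrative only: invented p-values, not the actual ganzfeld studies.
p_values = [0.008, 0.03, 0.04, 0.20, 0.60]
alpha = 0.05
k = len(p_values)

# Individually, each study is tested at the 5% level.
individually_sig = [p for p in p_values if p <= alpha]

# Bonferroni: to hold the family-wise error rate at 5% across k tests,
# each study must instead clear alpha / k.
bonferroni_sig = [p for p in p_values if p <= alpha / k]

print(individually_sig)  # [0.008, 0.03, 0.04] pass at 0.05
print(bonferroni_sig)    # only [0.008] survives 0.05 / 5 = 0.01
```

This is the sense in which a correction can turn "significant" studies into non-significant ones without the data changing.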

And you found unworthy of comment that "Six of the 10 investigator groups reported significant outcomes, and cumulation by investigator yielded a composite Z of 6.16."
Where did you get this from?
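For what it's worth, a "composite Z" of the kind quoted above is typically computed by Stouffer's method: sum the per-study Z scores and divide by the square root of their count. A minimal sketch with hypothetical Z scores (invented, not the actual ten investigator groups):

```python
import math

# Hypothetical per-investigator Z scores (invented for illustration;
# not the actual ten investigator groups from the meta-analysis).
z_scores = [2.1, 1.8, 2.5, 0.4, 1.9, 2.2, -0.3, 1.7, 2.0, 1.1]

# Stouffer's method: sum the Z scores and divide by sqrt(k).
composite_z = sum(z_scores) / math.sqrt(len(z_scores))
print(round(composite_z, 2))  # 4.87 for these invented scores
```

The point of the method is that several individually modest Z scores can cumulate into a very large composite Z.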
 

joe1776

Well-Known Member
I’m not even talking about psi. I’m trying to see if you use your logic in all areas. In other words, I’m trying to determine if you’re logically coherent or rational.
:D

It didn’t matter if I used a meta analysis for psi or for alcohol addiction, etc. I am curious if you will also call any study that is insignificant in a meta analysis low quality.
I'm not sure if you still don't understand that my reference to quality was directed toward mainstream publishing or whether you're making a deliberate attempt to re-frame the argument.

Btw, I don’t even know what you mean by the "jumped to the conclusion that studies below the independently significant 5% level 'found nothing'" part. Expound on this. Are you saying that independently they found significance, but after Hyman's Bonferroni correction they didn't?
No. When you say they "found nothing", that would lead unbiased readers to think that the result was zero in 55% of the studies, not merely below the statistically significant 5% standard.
 

charlie sc

Well-Known Member
Uh huh.

I'm not sure if you still don't understand that my reference to quality was directed toward mainstream publishing or whether you're making a deliberate attempt to re-frame the argument.
Why would you only direct that meta-analysis to mainstream publishing and not others? Not finding significance is not finding significance.

No. When you say they "found nothing", that would lead unbiased readers to think that the result was zero in 55% of the studies, not merely below the statistically significant 5% standard.
5% and below is the standard for accepting an alternative hypothesis...
 

joe1776

Well-Known Member
Why would you only direct that meta-analysis to mainstream publishing and not others? Not finding significance is not finding significance.
We are not on the same page. I thought your previous remarks referred to a point of earlier disagreement. So, scratch my previous response.

5% and below is the standard for accepting an alternative hypothesis...
Yes, but 4% is better than zero ("They found nothing" is what you wrote.)
 

charlie sc

Well-Known Member
We are not on the same page. I thought your previous remarks referred to a point of earlier disagreement. So, scratch my previous response.
ok.

Yes, but 4% is better than zero ("They found nothing" is what you wrote.)
I'm not sure I understand. For a study's hypothesis to be accepted, the alternative hypothesis (e.g. psi), in the inferential statistics section, needs to show a <=5% probability that the result arose by chance. You could technically state it the other way around, as a >=95% probability that it was not by chance, but the latter is never used. Many times the threshold is lowered further in the calculations, for numerous reasons, to reduce false positives.
If a study comes in above 5%, say at 5.1%, then it's considered not significant. In other words, it does not support the alternative hypothesis. One may say in the discussion section that it was close, but you aren't allowed to say there was significance. A study that found 5.1% would support the null hypothesis, not the alternative. Also, if a study finds no significance, the effect size is wholly disregarded.
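The decision rule being described can be sketched in a few lines (the 5% cutoff and the wording are conventions, not universal rules):

```python
def significance_verdict(p, alpha=0.05):
    """Conventional verdict on a single study's p-value."""
    if p <= alpha:
        return "significant: supports the alternative hypothesis"
    return "not significant: the null hypothesis is retained"

print(significance_verdict(0.04))   # 4% clears the 5% threshold
print(significance_verdict(0.051))  # 5.1% does not, however close
```

Note the sharpness of the cutoff: 5.1% gets the same verdict as 60%, which is exactly why "found nothing" and "fell short of significance" read so differently.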

So when you're talking about 4% or others, I don't know what you mean. You'll need to explain it in more detail.
 

joe1776

Well-Known Member
ok.


I'm not sure I understand. For a study's hypothesis to be accepted, the alternative hypothesis (e.g. psi), in the inferential statistics section, needs to show a <=5% probability that the result arose by chance. You could technically state it the other way around, as a >=95% probability that it was not by chance, but the latter is never used. Many times the threshold is lowered further in the calculations, for numerous reasons, to reduce false positives.
If a study comes in above 5%, say at 5.1%, then it's considered not significant. In other words, it does not support the alternative hypothesis. One may say in the discussion section that it was close, but you aren't allowed to say there was significance. A study that found 5.1% would support the null hypothesis, not the alternative. Also, if a study finds no significance, the effect size is wholly disregarded.

So when you're talking about 4% or others, I don't know what you mean. You'll need to explain it in more detail.
I agree I could have explained my objection better than I did.

To understand it, you need to recall what I explained a few times earlier: I make my arguments in debate forums such as this as though I'm writing for unbiased readers.

Your statement "they found nothing in 55% of the studies" would likely mislead those readers unless they were familiar with what you just explained in the quoted post. They would have been likely to think that there was zero effect in 55% of the studies, which would have been much more persuasive in their minds than the explanation you just gave. In other words, they would have been misled.
 

charlie sc

Well-Known Member
Your statement "they found nothing in 55% of the studies" would likely mislead those readers unless they were familiar with what you just explained in the quoted post. They would have been likely to think that there was zero effect in 55% of the studies, which would have been much more persuasive in their minds than the explanation you just gave. In other words, they would have been misled.
I'm not sure why you're concerned about others, since you're talking to me and I'm talking to you. Anyway, for scientific purposes the failed replications, or non-significant studies, could not support their hypothesis. So I'm fairly correct in saying they found nothing in the context of the theory they're attempting to demonstrate.

What I'm trying to get at here is that you judged the failed replications in the 100-replication study as low quality, or the originals as low quality. There does not seem to be anything more to it than the logical argument I presented regarding your rationality. Therefore, I see no reason why, for you, any failed study in a meta-analysis, or its original, wouldn't also be considered low quality. I'm trying to understand how you classified the originals of the 100-replication study, or perhaps it's this simple.
 

joe1776

Well-Known Member
I'm not sure why you're concerned about others, since you're talking to me and I'm talking to you.
That device prevents me from making arguments that only people who agree with me would find convincing (preaching to the choir). There was a lot of that in this thread. It also prevents frustration when our arguments fail to convince our opponents.

What I'm trying to get at here is that you judged the failed replications in the 100-replication study as low quality, or the originals as low quality.
For the fourth and final time: I made no judgment of the individual studies. I faulted the publishers for publishing studies, 64% of which didn't replicate.
 

charlie sc

Well-Known Member
For the fourth and final time: I made no judgment of the individual studies. I faulted the publishers for publishing studies, 64% of which didn't replicate.
Errrr, but this happens all the time in science. This is why we have systematic reviews and meta-analyses. If you hadn't noticed, failed replications also occur in your beloved parapsychology.
 

charlie sc

Well-Known Member
If what you're saying is true, why are science journalists saying there is a "replication crisis?"
Btw, if you didn't know, the same team did the meta-analysis AND the 100 replications. So they did quite a bit.

First, the 100-replication meta-analysis copied the 100 studies exactly. This is fairly unusual because, usually, psychologists will replicate and attempt to improve upon the research by adding variables, which means that if a replication fails it might be disregarded for that very reason. Second, they replicated individual studies and individual theories. So the conclusion of the research is that it's more likely an original, single piece of work will (1) not be able to explain its methodology concisely - this is demonstrably supported by the results, because the studies most likely to replicate significantly were the ones with simple instructions, (2) have had a small sample size, meaning the result may carry a false positive, (3) have had something go wrong with its ecological validity, and (4) carry some biases. Third, in light of the second point, the scientific method needs to be improved in order to remedy this. There are numerous solutions, but the easiest is that no one, especially psychologists, should take single papers seriously unless they've been replicated, replications should be exact, and instructions should be as simple and easy to understand as possible.
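The small-sample point in (2) can be illustrated with a toy simulation: if underpowered originals get "published" only when significant and are then replicated exactly at the same sample size, the replication rate comes out well below 100% even with honest methods. A sketch under those assumed conditions (every number here is invented):

```python
import random

random.seed(0)  # deterministic toy numbers

def is_significant(effect, n):
    """Crude two-group z-test with known unit variance, alpha = 0.05.
    Returns True when |z| > 1.96."""
    a = [random.gauss(effect, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2.0 / n) ** 0.5
    return abs(diff / se) > 1.96

# Invented literature: half the originals test a real but modest effect,
# half test a null effect; every study uses a small sample.
n_small, trials = 20, 2000
published = []  # only significant originals get "published"
for _ in range(trials):
    effect = random.choice([0.4, 0.0])
    if is_significant(effect, n_small):
        published.append(effect)

# Exact replications at the same small sample size:
replicated = sum(is_significant(e, n_small) for e in published)
rate = replicated / len(published)
print(f"published: {len(published)}, replication rate: {rate:.0%}")
```

With small samples, a real effect only reaches significance a minority of the time, and the null studies that got lucky almost never replicate, so the published record replicates far less often than naive readers would expect.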
 

ecco

Veteran Member
I faulted the publishers for publishing studies, 64% of which didn't replicate

  • People do experiments.
  • If the experiments are reasonably well documented and have a sound footing, they may get published in peer-review publications.
  • Other people read of the experiments.
  • Some of these other people attempt to replicate the experiments.
  • These people publish their results.
  • Some other people evaluate the published results in what can be called a meta-study.
  • This meta-study determines how many experimenters could or could not replicate the findings of the original experimenter.

If the original study did not get published there is no way many other people could have tried to replicate the study. In the example you discuss, 64% of additional experimenters could not replicate the original findings.

By what logic can you fault the publishers for publishing studies that, as it later turned out, could only be replicated 36% of the time?
 

joe1776

Well-Known Member
Btw, if you didn't know, the same team did the meta-analysis AND the 100 replications. So they did quite a bit.
I did know that and agree.

First, the 100-replication meta-analysis copied the 100 studies exactly. This is fairly unusual because, usually, psychologists will replicate and attempt to improve upon the research by adding variables.
It was unusual because the task was unusual: measuring the extent of the problem.

Third, in light of the second point, the scientific method needs to be improved in order to remedy this. There are numerous solutions, but the easiest is that no one, especially psychologists, should take single papers seriously unless they've been replicated, and replications should be exact.
That would help, no doubt. But my guess is that psychologists just don't have a handle yet on how to do solid research. I've been following their research on morality for years now. I'm not impressed with the work being done, but explaining why would involve a very long post.
 

ecco

Veteran Member
If what you're saying is true, why are science journalists saying there is a "replication crisis?"
Replication crisis - Wikipedia

Your link - my emphasis
The replication crisis has been particularly widely discussed in the field of psychology (and in particular, social psychology) and in medicine, where a number of efforts have been made to re-investigate classic results, and to attempt to determine both the reliability of the results, and, if found to be unreliable, the reasons for the failure of replication.
Psychology is the scientific study of the mind and behavior.

Hence, it incorporates your beloved psi. You are criticizing the processes of your own belief system and don't even realize it. That's really sad.
 

charlie sc

Well-Known Member
It was unusual because the task was unusual: measuring the extent of the problem.
Why? In the paper they even say the greatest sceptics of science are scientists. There's always room for improvement, but they'd need to show there's a problem in the first place.

That would help, no doubt. But my guess is that psychologists just don't have a handle yet on how to do solid research. I've been following their research on morality for years now. I'm not impressed with the work being done, but explaining why would involve a very long post.
I don't know any psychology theories on morality. Can you cite them?

The theories I do know about are pretty substantiated. Every other replication almost always shows significance.
 

joe1776

Well-Known Member
By what logic can you fault the publishers for publishing studies that, as it later turned out, could only be replicated 36% of the time?
Logically, those journals have a purpose, do they not? If the studies they publish cannot be expected to be science we can rely on, what is their purpose?
 

joe1776

Well-Known Member
I don't know any psychology theories on morality. Can you cite them?

The theories I do know about are pretty substantiated. Every other replication almost always shows significance.
You don't know any psychology theories on morality, but those you know about are pretty substantiated?o_O

Rationalist theories have dominated for years despite the fact that they don't hold up logically and there's no science to support them. The intuitionist theories are on the right track and science is supporting them but the studies are **** poor. Social scientists as yet don't have an elegant theory to explain how we make moral judgments.
 

charlie sc

Well-Known Member
You don't know any psychology theories on morality, but those you know about are pretty substantiated?o_O
Yes. Since you seem surprised, why don't you cite some for me? Hmmmm?

Rationalist theories have dominated for years despite the fact that they don't hold up logically and there's no science to support them. The intuitionist theories are on the right track and science is supporting them but the studies are **** poor. Social scientists as yet don't have an elegant theory to explain how we make moral judgments.
Okay... I still don't know what you're talking about. Why don't you give me some references or cite something ;) ?
 

joe1776

Well-Known Member
Okay... I still don't know what you're talking about. Why don't you give me some references or cite something ;) ?
I think your only interest in asking this is in wasting my time. But on the off chance I'm wrong, here's one link to get you started. It's a primer. Then, if you're still interested, search the names Haidt, Greene and Bloom off that site. All are social scientists who have done research on moral intuition.

The New Science of Morality | Edge.org
 

charlie sc

Well-Known Member
I think your only interest in asking this is in wasting my time. But on the off chance I'm wrong, here's one link to get you started. It's a primer.
Nope, no link present. Hmmm, interesting how asking for a citation from you is wasting your time. Okay, lol.

Then, if you're still interested, search the names Haidt, Greene and Bloom off that site. All are social scientists who have done research on moral intuition.
So, you want me to examine people, not studies in psychology. Okay...
 