The most common way stats are misused is to amplify their ambiguity.
For every political speech, news report, online article, etc., that uses stats, there are a dozen studies published in countless journals across the sciences. The problem is that the social & behavioral sciences (as wonderful as they have been at developing experimental paradigms) never had many researchers with a background in mathematics (with the exception of economists, and even they are the butt of jokes as failed mathematicians). In fact, the emphasis on the logic of experimental design was partly motivated by the limited capacity to employ sophisticated statistical techniques.
That has all changed. Across the board, from the mathematically
disinclined researchers in the medical sciences to the sociologists and psychologists, we find increases in graduates as well as (and partly because of) new research areas. This increase in specialization (among other things) drives up the amount of information a student must learn that is more central to a field than research methods (let alone the statistical analyses or models used in those methods). Most importantly, however, employing extremely sophisticated statistical analyses on high-dimensional data sets no longer requires knowing the mathematics behind them: simply hit the right buttons or input a few commands in SPSS, SAS, STATISTICA, MATLAB, etc. So we have an increase in research fields corresponding to an increase in research corresponding to an increase in publications, but none of this is correlated with an increase in time spent on the logic behind data analysis. Instead, undergrads and graduate students learn just enough to recognize general statistical methods given some general type of research question (i.e., which buttons to press).
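To make the point concrete, here is a minimal sketch (in Python with NumPy rather than SPSS or SAS, and with entirely invented data) of how a complete multiple-regression fit can be produced in a couple of commands, without any grasp of the model's assumptions:

```python
import numpy as np

# Invented data: 100 observations, 5 predictors (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + rng.normal(size=100)

# One command produces a full least-squares fit. Nothing here requires
# understanding the assumptions behind the model (linearity,
# independence, homoscedasticity) in order to obtain the numbers.
design = np.column_stack([np.ones(100), X])  # add an intercept column
coef, residuals, rank, _ = np.linalg.lstsq(design, y, rcond=None)
print(coef.round(2))
```

The output looks authoritative either way; whether the model is appropriate for the data is exactly the part the buttons (or commands) cannot answer.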
For example, atheists like to suggest that the secular nation of Sweden is proof that secularism works because its quality-of-life stats are so high.
I've addressed problems with the research behind religiosity claims (including the misuse of stats) on this very forum (e.g.,
Cognitive & Personality Tests or How to Publish Pseudoscience in Mainstream Journals). That doesn't make what you said a matter of statistics.
I found while in college that those who know more about statistics than anyone have a greater capacity to abuse them than anyone.
Did you go to college before the widespread use of statistical software packages? Because I tutored those in college (before I taught; I got paid a small amount by my university as an undergrad to tutor my peers) who knew the least about stats, and this generally meant an equal or lesser capacity to abuse them. I can teach undergrads to churn out highly technical data analyses in a few days. That's because it involves pressing buttons, not understanding anything.
I turned in a paper that justified duck hunting because almost all the money used to buy wetlands to raise ducks on (instead of the farmers selling the land for development) was paid for by duck hunters.
Ok, but this isn't statistical inference, modelling, or analysis.
I have had 12 semester hours of statistics. I never claimed to be an expert, but I do claim to be familiar with them. However, the stats I used are about the most reliable possible.
Great. So what statistics did you use here?
If I wish to know if something exists, the most meaningful stat I can acquire is a data set of those who met the criteria to have witnessed the existence of X.
Statistics are not, in general, ever used to demonstrate the existence of entities, but rather the existence of processes, phenomena, tendencies, etc. Witness testimony is so problematic that the literature on analyses of structured interviews (like cognitive interviews) is filled to the brim with cautionary tales, stringent conditions and methods, and guidance on how to use such testimony only as a secondary method.
On the other hand, the evidence for the countless ways in which all humans are subject to experiential biases is vast indeed.
If I wish to know if Big foot existed I am not going to ask people that live in the Congo.
No, you'd find Big Foot. Witness testimony is for historians and journalists, not for statistical analyses intended to indicate the existence of an entity.
If I wanted to know if it is cold in the Antarctic, I am not going to ask those who had never been there.
The CRU at East Anglia and NASA are probably the main producers of global temperature records, including temperature data for Antarctica. The problem is that spatial coverage, even within specific land regions, is irregular, and thus we require more advanced analyses of measurement data than, e.g., simple averages. For places like Antarctica in particular, measures of ice-sheet growth and decrease are taken often (in various forms, including satellite measurements, which since the 1970s have also been used for hemispheric temperature records thanks to Christy & Spencer's award-winning work using MSU data to give us a locally unbiased measure of temperature).
That's what we use to find out Antarctic temperatures. Not witnesses.
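The point about irregular coverage can be sketched with a toy example (all station values, locations, and grid cells below are invented): when measurement stations cluster in one region, a simple average over-weights that region, whereas averaging within grid cells first, then across cells, counts each region once.

```python
import numpy as np

# Invented toy data: 8 stations clustered on a mild coast (~-5 C),
# 2 stations in a cold interior (~-30 C).
temps = np.array([-5, -4, -6, -5, -5, -4, -6, -5, -30, -29], dtype=float)
# Which (invented) grid cell each station falls in: the coast is split
# across two cells, each interior station has its own cell.
cells = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 3])

# Naive mean: dominated by the densely sampled coastal stations.
naive_mean = temps.mean()

# Gridded mean: average within each cell, then across cells,
# so each region contributes once regardless of station density.
cell_means = [temps[cells == c].mean() for c in np.unique(cells)]
gridded_mean = np.mean(cell_means)

print(naive_mean, gridded_mean)  # the two estimates differ noticeably
```

Real analyses (CRU's and NASA's among them) go well beyond this, with anomaly baselines, area weighting by latitude, and interpolation, but the sketch shows why a simple average of raw station readings is not a defensible regional temperature estimate.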
If I wanted to know how strong the evidence is that the Christian God existed, I am going to ask those who met the requirements to have experienced him.
If these are humans, then they are necessarily fewer than the number of humans who have frequently experienced perceptual/cognitive biases (i.e., experiential biases/misperceptions), because whatever your number is, the frequency of biased interpretations of experiences is 100% among humans.
This exact methodology is used in every realm of discourse, law, and academia.
No, it isn't.