At its core, this topic boils down to what is meant by "evidence."
I define evidence as "Anything that lets us reliably differentiate between ideas that are merely imaginary, and ideas that accurately describe reality."
I do most of my research these days in interdisciplinary fields and quantum foundations (and related areas). A nice thing about foundations is that questions which were largely "taboo" in physics (and often provoked open hostility) regarding what our best theories tell us about reality (or whether, in fact, they do this) are now asked quite readily and even urgently.
In none of my work, nor in the work of anybody else so far as I am able to tell, have I ever encountered a method that meets your standard of evidence.
In other words, your standard would basically rule out empirical studies and scientific research from counting as evidence. It's an unbelievably high standard.
The best framework for creating reliable evidence that humans have identified is the scientific method.
This method is largely a list adapted from a 1910 work by Dewey on critical reasoning entitled How We Think. Of course, the idea that there exists a method that characterizes scientific inquiry is older (e.g., Pearson's The Grammar of Science, or, going back to widely over-simplified but heavily referenced classics, Francis Bacon or even Aristotle). And naïve Popperian ideas have also featured prominently in pre-college as well as introductory college textbooks and in science education more generally (indeed, naïve Popperian falsification is something that a lot of practicing scientists were taught and many preach, at least nominally).
The problem with the textbook presentations and their popular science counterparts is that they are wholly inaccurate, incredibly misleading, and generally harmful to the layperson's understanding of the nature of science (NOS) and scientific inquiry.
For decades now, various scientific societies and government organizations, as well as scholarly research in the area of science education, have issued numerous official publications, papers, literature for educators and policy makers, and much more trying to have this problem addressed (i.e., to rid science classes and science texts of The Scientific Method myth), but to little avail. As evidenced by this forum in recent months alone (not to mention wider social issues among those claiming to support science with slogans like "Science is Real" whilst misrepresenting what we do), the idea that there is such a thing as The Scientific Method persists. Yet, and so you don't have to simply rely on my word here, it's well known this is a crock:
“Despite the resounding message from scientists that context determines method of inquiry, many science teachers continue to instill belief in a common “scientific method”—a myth that is reinforced by the prominence given to “the scientific method” in the introductory chapters of science textbooks."
Wong, S. L., & Hodson, D. (2009). From the horse's mouth: What scientists say about scientific investigation and scientific knowledge.
Science education,
93(1), 109-130.
“It’s probably best to get the bad news out of the way first. The so-called scientific method is a myth” (p. 210)
Thurs, D. P. (2015). Myth 26. That the Scientific Method Accurately Reflects What Scientists Actually Do. In R. L. Numbers & K. Kampourakis (Eds.), Newton's Apple and Other Myths about Science (pp. 210–218). Harvard University Press.
“Scientists and historians do not always agree, but they do on this: there is no such thing as the scientific method, and there never was.” (p. 1)
Cowles, H. M. (2020). The Scientific Method: An Evolution of Thinking from Darwin to Dewey. Harvard University Press.
“If there is one thing that most people think is special about science, it is that it follows a distinctive “scientific method.” If there is one thing that the majority of philosophers of science agree on, it is the idea that there is no such thing as “scientific method.”” (p. 9)
McIntyre, L. (2019). The Scientific Attitude: Defending Science From Denial, Fraud, and Pseudoscience. MIT Press.
In his book Science and Common Sense, famed scientist and former president of Harvard James B. Conant remarks (in a chapter with the provocative title "Concerning the alleged scientific method"): "[t]here is no such thing as the scientific method. If there were, surely an examination of the history of physics, chemistry and biology would reveal it."
In science, a very very supported model, which has made many new and accurate predictions, becomes a "theory."
I hear this all the time, and every time I imagine some alchemical process whereby a tiny hypothesis that has grown by being fed critical experimental tests has finally reached maturation and burst from its chrysalis at some International Conference of Real Scientists in Lab Coats to emerge as a fully grown, beautiful "theory."
Back to reality, though: this is just not how things work. It is true that a great many research papers (probably most) include clear statements of the hypotheses the authors tested (though these statements, and the nature of such papers, actually hide more than they reveal, particularly about the decisions made, the hypotheses formulated and reformulated, the context in which the hypotheses even make sense, etc.). But I would like to see a few clear examples in the scientific literature where somebody proposes or acknowledges that some hypothesis should be cleared for acceptance as a theory. It would be nice if the word were actually, generally used in this sense.
But, of course, it isn't. How "theory" is used in the literature depends heavily on the field. In many sciences there exists a framework or theory that is itself about hypothesis testing. It can't be supported by hypotheses, because it just is a general method for applying classes of statistical tests to data, together with a means of determining whether or not the data support hypotheses (like those accursed p-values, or NHST more generally).
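To make concrete what I mean by NHST-style testing (this is my own toy illustration with simulated numbers, not anyone's actual study), the whole procedure amounts to applying a statistical test to data and comparing the resulting p-value against a conventional cutoff:

```python
# Toy sketch of null-hypothesis significance testing (NHST) with simulated data.
# Nothing here comes from a real study; the groups and effect size are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=100.0, scale=15.0, size=30)  # simulated control measurements
treated = rng.normal(loc=108.0, scale=15.0, size=30)  # simulated treatment measurements

# Welch's t-test: the null hypothesis is that the two population means are equal.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

alpha = 0.05  # the conventional (and much-criticized) significance threshold
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```

The point is that the framework itself (the test statistic, the p-value, the cutoff) belongs to a general theory of statistical inference; it isn't something that "graduates" into a theory by accumulating supported hypotheses.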
In fields like HEP/particle physics, and most areas that rely heavily on quantum field theory, "theory" refers to a Hamiltonian or, more likely, a Lagrangian:
“In practice, “theory” and “Lagrangian” mean the same thing.” (p. 1.)
Kane, G. (2017). Modern Elementary Particle Physics: Explaining and Extending the Standard Model. Cambridge University Press.
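As a concrete illustration (my own, standard textbook material rather than anything specific to Kane's book), the "theory" of quantum electrodynamics just is a Lagrangian density:

```latex
% QED Lagrangian density (sign conventions for the covariant derivative vary by text)
\mathcal{L}_{\mathrm{QED}}
  = \bar{\psi}\left(i\gamma^{\mu}D_{\mu} - m\right)\psi
  - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},
\qquad
D_{\mu} = \partial_{\mu} + ieA_{\mu},
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}
```

Write down that Lagrangian and you have specified the theory; there is no further stage at which it gets promoted from "hypothesis" to "theory."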
Elsewhere in the natural sciences, terms like "model", "theory", "principle", "law", "hypothesis", and "postulate" are frequently used in different ways, used interchangeably, and, most importantly, almost never in the textbook manner:
“The waters of the present discussion are sometimes muddied by the variety of terms used to describe the same thing. We speak, for example, of the gas *laws*, Planck’s quantum *hypothesis*, the Pauli *principle* and the *postulates* of quantum mechanics. Each term highlighted in italics has essentially the same meaning...” (p. 6; italics in original)
Grinter, R. (2005). The Quantum in Chemistry: An Experimentalist's View. Wiley.
Nor is the stepwise, hierarchical view of scientific knowledge progressing from hypothesis to theory (and on to become a Law in old age, perhaps?) an accurate one. In reality, we can't formulate hypotheses without theory, we can't test hypotheses without theory, and in most fields it doesn't even make sense to formulate research questions designed to expand and improve our knowledge about reality without relying on the very theories we are ostensibly in some sense "testing". And even if this were possible, it still wouldn't be true that scientific research involves the kind of transition to "theory" that you describe:
"Theories and laws are sometimes presented as a stepwise progression from hypothesis to theory to law. Books, faculty, and GTAs often refer to the strict objectivity of scientists, and the only acknowledgment of creativity is during experimental design, if that is even a part of the laboratory. Laboratories that focus on cookbook procedures serve to reemphasize these misconceptions by presenting science as purely experimental, with no room for creativity, focusing on objective measures of right and wrong to reach a clear answer.
It is hard to imagine that students could gain an appropriate understanding of NOS under these conditions.” (p. 210; emphasis added)
Schussler, E. E., & Bautista, N. U. (2012). Learning about nature of science in undergraduate biology laboratories. In M. S. Khine (Ed.)
Advances in Nature of Science Research: Concepts and Methodologies (pp. 207-224). Springer.
While every model is imperfect and has explanatory boundaries, the fact that they can predict things we don't know yet about reality is the hallmark of science and the method guiding its discoveries about the facts of reality. This is why we have airplanes, computers, and medicine today.
This is closer to the truth. Prediction is indeed an incredibly important part of scientific inquiry. However, most of the time that we do things like build working airplanes or computers, it has considerably less to do with "facts of reality" and considerably more to do with incorrect/inaccurate pictures of reality that, as approximations, have a useful domain of validity in application. Also, while prediction is key, it is simply not the case that "the fact that [scientists/science] can predict things we don't yet know about reality is the hallmark of science," nor does this guide "discoveries about the facts of reality." You are throwing the term "reality" around quite carelessly, given that your thread topic is about evidence and is specifically concerned with a standard of evidence that tells us about reality itself. We don't yet understand what makes solids solid, because we can't yet approach the kinds of solutions to the quantum many-body problem that would explain why we have "bronze" in a way less qualitative than Aristotle's. Yet we use bronze and other materials, and we use classical mechanics and other theories that do not correspond to reality, when we do things like build airplanes.
Theism in general isn't even a predictive model
For the most part, neither is evolutionary theory (there are huge caveats here, of course, but it is still a core principle of evolutionary theory that there are unpredictable, essentially random elements that preclude predictive models, even setting aside the worry about ascribing teleological status to natural processes). But theism isn't a model. So far, you've laid down a standard of evidence that science can't meet, and for some reason argued that this tells us something about the lack of evidence for theism. Granted, I don't spend a lot of time with theists, but I do know some, and I've been here for years, and most theists don't think of the evidence they rely on as the kind used in the sciences. Also, different sciences use evidence differently (and treat things like theories and models in fundamentally different ways), while there are entire disciplines concerned with understanding reality and with evidence (like history) that don't use scientific methods either.
From everything I've ever seen, forms of evidence that theists propose are unreliable and/or fallacious. A fallacy is a way of reasoning, identified by logicians, that reliably leads to false conclusions.
Fallacies allow for false conclusions or poor arguments; they do not reliably lead to them. An argument can be fallacious (affirming the consequent, say) and still happen to have a true conclusion; what the fallacy guarantees is only that the argument fails to establish it.