Legion I think you fleshed out all the caveats and qualifications to Sunstone's single sentence. But to within the degree of accuracy usually expected of any single sentence, Sunstone's point is valid.
I started that post with a qualification for that very reason. I actually started with a much longer section of caveats and qualifiers to my own post, but when I hit "submit reply" I got an error message saying that network activity was too high and I should try later. Which meant my whole post was deleted. So when I rewrote it, I was much briefer, as I had already put in so much time and gotten nothing in return. Most of that is likely a good thing, as I write too much, and it meant a much less detailed reply to address a single sentence.
However, it also meant that I didn't include all the qualifications I wanted to. In particular, I didn't include the main reason for responding (it took some time to decide to do so): a perhaps clinical level of compulsion when it comes to accuracy, nuance, and detail.
If I weren't a believer in the scientific method, I'd be doing something else, both occupationally and with my free time. Most of the studying I do I call a hobby, as it isn't necessary for what I do.
One unfortunate side effect is that I have too many details about too many topics for someone who has a compulsion when it comes to details and accuracy.
This is also related to my general agnosticism (the result of a blend of skepticism with uncertainty), as part of the compulsion to study comes from a desire for greater certainty.
So on the one hand, I have the fact that I do think that empirical inquiry, logic, rationality, etc., are the way to approach everything, whether historical, scientific, or religious.
On the other hand, I have my skepticism/uncertainty. Which means I question my own degree of certainty about another's system of knowing, or what they believe, or how it is they determine things like truth or accuracy.
The more I have learned and experienced, the more doubt I tend to have about many things.

The lab I worked in had a director, and therefore members (the grad students and doctorates), who strongly opposed a theory of cognition I think is probably correct. An entire seminar consisted of the director having everyone, including the PhDs, read various studies conducted to support the theory I think more accurate (embodied cognition), and then having us try to tear them apart. Many of the criticisms I didn't think were very logical. At one point, when we were reading some of the better, more recent research supporting embodied cognition, the director himself admitted that it was fairly difficult to come up with an explanation that fits the classical cog. sci. view, but said that even were this the case, it would be because our methodology (mostly the various experimental designs used in neuroimaging) was flawed. However, the methods he was referring to are used by everybody: whether or not one believes cognition is embodied, neuroimaging and behavioral research methods don't change.
Basically, when the evidence doesn't fit the theory, we need new methods. So I asked him at one point how one decides that it is the methods which are flawed, rather than the theory. He replied that it's a matter of whether you have a lot of good reasons to expect the experiments to show something, but they don't.
The problem, however, is one pointed out by guys like Quine. We have excellent reasons to think that the brain is responsible for cognition. We have excellent reasons to think that it is the result of electrical signals generated in neurons. We have some evidence to support that a single neuron can generate a "meaningful" part of some neural activity. We have good reasons to think that most of what the brain does relies not on a neuron firing as part of a network (like a computer bit), but that information or "neural code" consists of correlations between the spike trains of neurons, and that most neural firing is noise. The more complex the issues become, from temporal vs. rate encoding, to how coordinated networks work, to how these networks relate to input systems, etc., all the way to how people think, the more room for error there is. Which means that the "good reasons" one has for a theory of cognition are only "good" insofar as one has decided a series of ever more complex findings should be interpreted in a particular way.
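To make the rate vs. temporal encoding distinction a bit more concrete, here is a toy sketch in Python (entirely my own invention; the firing rates, the 5 ms lag, and the noise level are made-up numbers, not anything from the literature). The point is just that two spike trains can have nearly identical firing rates while the information sits in the timing relationship between them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy spike trains over 1000 one-ms bins: True = spike, False = silence.
# (Rates, lag, and noise level are all invented for illustration.)
train_a = rng.random(1000) < 0.02                           # ~20 Hz train
train_b = np.roll(train_a, 5) | (rng.random(1000) < 0.005)  # A delayed 5 ms, plus stray spikes

# Rate code: only the mean firing rate is taken to carry information.
print(f"rates: {train_a.mean() * 1000:.0f} Hz vs {train_b.mean() * 1000:.0f} Hz")

# Temporal code: the *timing* relationship carries information,
# e.g. the correlation between the trains at different lags.
def lagged_corr(a, b, lag):
    """Pearson correlation between train a and train b shifted back by `lag` bins."""
    return np.corrcoef(a.astype(float), np.roll(b, -lag).astype(float))[0, 1]

print(f"corr at lag 0 ms: {lagged_corr(train_a, train_b, 0):+.2f}")
print(f"corr at lag 5 ms: {lagged_corr(train_a, train_b, 5):+.2f}")
```

Under a pure rate code the two trains look nearly identical; under a temporal code, almost everything interesting is in the lag-5 correlation. Which description is "right" is exactly the kind of interpretive decision I mean.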
I am bombarded by news reports of studies, and although I've yet to see one which doesn't distort the research, I've seen plenty for which the actual research was flawed. And it is at times like these that I think about the various philosophers of science who were/are, I think, usually wrong (and the more radical they were/are, the more wrong). Yet there are very important elements to some of their work, and I chose the example I did not just because it is my field or (in the case of the lab director's comment) my experience, but because it highlights the few criticisms by guys like Kuhn, Quine, etc., which (I believe) have an element of truth and are currently serious problems.
One is that because cognitive science is no longer so much an interdisciplinary field as an umbrella category covering many interdisciplinary fields, there is a lot of different work being done, by people with many different backgrounds. In particular, there are a lot of psychologists and social psychologists now doing neuroimaging studies, especially fMRI. Even if we leave aside the proton spins of hydrogen atoms and how this relates to brain hemodynamics, why cerebral blood flow is a good proxy for neural activity, and how nuclear magnetic resonance can create an image, we're still left with some very sophisticated mathematics that very few researchers using fMRI scanning understand. The first issue is how the raw data is processed to get brain images. Most researchers have a set of possible programs they can run and (hopefully) a good idea about when to run which. Some parts of this are not much of an issue, as they don't vary much between subjects or experimental designs. Others are more important.
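Just to give a flavor of the first link in that chain, here is a deliberately crude sketch (a made-up toy, not any real pipeline or package): the scanner never records an image at all; it samples k-space, and a "brain image" only exists after an inverse Fourier transform, which is merely the first of the many processing stages (motion correction, normalization, smoothing, etc.) most end users never look inside:

```python
import numpy as np

rng = np.random.default_rng(0)

# A scanner samples k-space (spatial frequencies), not an image.  Fake
# that here by forward-transforming a toy phantom and adding complex
# acquisition noise.  (Everything here is invented for illustration.)
phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0        # a bright square standing in for "brain"
kspace = np.fft.fft2(phantom)
kspace += rng.normal(scale=5.0, size=kspace.shape) \
        + 1j * rng.normal(scale=5.0, size=kspace.shape)

# Reconstruction: an inverse 2D Fourier transform turns the raw samples
# into the image a researcher eventually sees.
image = np.abs(np.fft.ifft2(kspace))
print(image.shape, f"peak intensity: {image.max():.2f}")
```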
A much bigger problem, and one that is essential across the board, is how one can test the data from the processed images. That is, given that I now have images showing brain activity, how do I determine whether or not they show anything? This involves the selection of one or more statistical methods and/or mathematical models. And thanks to your average requirement of two math courses (an intro stats course during the undergrad years, and multivariate stats during grad school), along with coming across the names of methods used in the literature, researchers know which names of various mathematical/statistical techniques are related to the work they are doing. They do not know much about the actual math, but they are able to plug data in, select "PCA" or "SEM", select which features to include, and out come the results. They now have (usually) one or more alpha levels, and if a certain value is achieved, that is interpreted as statistically significant. Why? Because the probability that they would have gotten these results by accident is very small.
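To see where that plug-and-play reading of alpha levels goes wrong, here is a deliberately dumb illustration (a simple mass-univariate t-test stands in for the fancier PCA/SEM machinery, and every number is invented): feed the standard workflow pure noise and it still hands back hundreds of "significant" voxels:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure noise "activation" data: 20 subjects x 10,000 voxels, zero real signal.
data = rng.normal(size=(20, 10_000))

# The plug-and-play workflow: one t-test per voxel against zero, then flag
# everything with p < alpha as "activation".
t, p = stats.ttest_1samp(data, popmean=0.0, axis=0)
n_sig = int((p < 0.05).sum())

# With 10,000 tests at alpha = 0.05 we *expect* ~500 false positives.
# That is what the alpha level actually means, not "the probability we
# got this by accident is very small".
print(f"{n_sig} of 10,000 noise-only voxels flagged as significant")
```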
Of course, that's only true if they actually understood the underlying logic behind using the mathematics they did, which would require understanding the mathematics. And they don't. But thankfully, even if there happens to be someone who reviews the study and knows a lot about mathematics, many of these studies use models/methods which make it impossible for the reviewer to know whether or not they were used correctly. Because it's easy to say "structural equation modeling" or "dynamic causal modeling", but without having the raw data and knowing exactly what the researchers did (e.g., did they actually say how they defined the prior covariance, or did they just say "...and obtained a positive Lyapunov exponent"?), the reviewer is unable to determine the validity.
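And the prior covariance point is not just a nitpick. Here is a minimal sketch (a closed-form conjugate-normal model of my own making, vastly simpler than an actual DCM, with invented numbers) of how the same data can "show" or "fail to show" an effect depending on a prior the paper never reports:

```python
# Same data, same machinery, opposite conclusions under two undisclosed
# priors.  Conjugate-normal model: prior N(0, tau^2) on the effect,
# likelihood N(effect, sigma^2 / n).  All numbers are hypothetical.
xbar, sigma, n = 0.5, 1.0, 25          # hypothetical observed mean effect

for tau in (0.05, 10.0):               # "tight" vs. "vague" prior sd
    prec = n / sigma**2 + 1 / tau**2   # posterior precision
    mean = (n / sigma**2) * xbar / prec
    sd = prec ** -0.5
    lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
    verdict = "excludes zero" if lo > 0 else "includes zero"
    print(f"prior sd = {tau:>5}: 95% interval [{lo:+.2f}, {hi:+.2f}] -> {verdict}")
```

The tight prior shrinks the effect into the noise; the vague prior leaves it "significant". A reviewer who never sees which prior was used simply cannot tell which conclusion the data support.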
Then there is the issue of design itself. In the volume Foundational Issues in Human Brain Mapping, there is an entire section entitled "The Underdetermination of Theory by Data", a phrase taken from the philosophy of science and a serious problem in the neurosciences. Because even with excellent design, all the right math, perfect subjects, and so forth, so much depends upon what theoretical frameworks were intrinsic from the start.
So, independent of my own personal problems with compulsion, my inability to write a brief post, and my incapacity to stop myself from writing 1,000 lines based on one that another member wrote, the reason I thought it important to respond to that one line (and to your response to me) is that I believe certain limits and/or failures in the empirical approach are partly responsible for some of its negative evaluation.
Even if this is not the case, I believe it is extremely important for those who do research, and/or those who (like me, you, and Sunstone) think that empirical methods are the best methods for understanding the nature of reality and should be used when possible, to keep in mind how this approach can fail and why.
And the examples I gave in my response about what can be safely ignored are the best examples I know of to underscore the importance of keeping these things in mind.