I understand where you're coming from with the different fields mingling together and so on, but I think we're focusing on the wrong things here.
"The authors of the current biology article were put together to write a story on this relation between the right amygdala and the ACC."

They weren't. Current Biology is an academic journal. This was a peer-reviewed study published in a science journal, not a story.
"The authors weren't the ones doing the research (well 1 was), they referenced many previous studies and arrived at this conclusion."

They all were. It started with Firth and Feilden, who went to a neuroscientist (Rees) to get the brains of two politicians scanned. Rees then went to Kanai, who (apparently) already had in his possession the 90 scans from some other study. They all went back to those participants (the people originally scanned) and asked them about political orientation. They then wrote the whole thing up, sent it in for review, and had it published as research.
Let me clear a few things up now:
1) I have worked with fMRI scans and studies, and with others who did.
2) I have read thousands of neuroimaging studies, papers, reviews, etc. Some because I was interested, some for a study (a "literature review"), others for a graduate seminar or for a presentation by some research group, etc.
3) I have seen how researchers publish studies, including how they use references they don't understand and haven't fully read, mathematical methods they don't understand, and include descriptions that are basically lifted from sources they barely read (and couldn't understand if they did).
Part of this is normal. When I started out, I was told about the "standard" fMRI textbook, and I bought and read it. I found out that this wasn't what people did. Mostly, they read the sections that give them some idea of what the various statistical analyses are called, so that they can figure out how to run them in their software. That part is a problem. What isn't a problem is skipping the sections on how MRI actually works, as getting into subatomic physics isn't really necessary. More necessary (and also skipped) are the technicalities linking the signal processing to brain hemodynamics. In other words, I'd bet most grad students and probably even many undergrads know that fMRI doesn't actually measure neural activity, but a proxy for neural activity.
The difference between functional MRI (fMRI) and MRI is that the former allows one to see brain activity by measuring increases in blood flow: more blood, more activity. MRI (which is what the scans the authors had actually were) can't measure function. It uses the same principles for imaging (the spins of hydrogen protons), but it is more like an x-ray in that it is static, not dynamic.
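To make the "proxy" point concrete, here's a minimal sketch (mine, not anything from the paper): the signal fMRI records is the neural event convolved with a slow hemodynamic response, i.e. a delayed, smeared blood-flow stand-in. The double-gamma response used here is the usual textbook approximation, assumed for illustration:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                       # seconds per sample
t = np.arange(0, 30, dt)
# Canonical double-gamma HRF (textbook approximation: peak ~5-6 s,
# late undershoot) -- parameters are common defaults, assumed here.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

time = np.arange(0, 60, dt)
neural = np.zeros_like(time)
neural[int(10 / dt)] = 1.0                     # one brief "neural event" at t = 10 s

bold = np.convolve(neural, hrf)[:len(time)]    # what the scanner measures

print(f"neural event at t = 10.0 s; BOLD peak at t = {time[bold.argmax()]:.1f} s")
```

The event is instantaneous; the measured signal peaks seconds later and lasts far longer, which is exactly why "fMRI measures neural activity" is shorthand at best.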
"If you look at the Reference tab, you'll see that each statement is essentially referenced from people in the neurology field."
Neurology is clinical. This is neuroscience. However, I have looked at the references, and I've read many of them, because I am one of many who have a serious problem with the state of the social & behavioral sciences in general and neuroscience in particular, especially social psychology and social neuroscience (basically, studies like this). One of the biggest names in the field, Diederik Stapel, was fired because he had made up data for decades, and study after study had to be retracted. After that (not to mention other incidents, like Marc Hauser, a guy who used to work where I did until he was canned for fraud), an already increasing level of concern became much greater. Hence studies like this:
"
False-Positive Psychology Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant."
Psychological Science,
22(11), 1359-1366.
Or, on fMRI studies in particular, studies like this one, which appeared in a neuroscience journal on brain imaging methods and concerns the problem of there being too many protocols for producing and analyzing data that the people doing the producing and analyzing don't understand. They just follow them.
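The false-positive paper's point is easy to see in miniature. Here's a toy simulation (my own illustration, not from the paper) of one of the "researcher degrees of freedom" it describes: peek at the data, and if the result isn't significant, add more subjects and test again. Both groups are drawn from the same distribution, so every "significant" result is a false positive, yet the rate lands well above the nominal 5%:

```python
import numpy as np
from scipy.stats import ttest_ind

def flexible_study(rng, start=20, step=10, max_n=50):
    a = list(rng.normal(size=start))
    b = list(rng.normal(size=start))
    while True:
        if ttest_ind(a, b).pvalue < 0.05:
            return True                     # "significant" -> publish
        if len(a) >= max_n:
            return False                    # give up
        a += list(rng.normal(size=step))    # optional stopping: add subjects
        b += list(rng.normal(size=step))

rng = np.random.default_rng(0)
hits = sum(flexible_study(rng) for _ in range(10_000))
print(f"false-positive rate with optional stopping: {hits / 10_000:.3f}")
```

And optional stopping is only one of the flexibilities the paper catalogues; pile on flexible outcome measures, covariates, and exclusion rules, and "anything" really can come out significant.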
"The first referenced article has 4 authors and all were in the neurology field."
They aren't. Arie W. Kruglanski, John T. Jost, and Jack Glaser are all social psychologists. Frank J. Sulloway has a PhD in the history of science.
"Same goes for the second reference and so on."
Yes, the list goes on. For example, when the authors state "One of the functions of the anterior cingulate cortex is to monitor uncertainty [16, 17] and conflicts [18]", they don't tell you that number 17 there is the study "Optimal decision making and the anterior cingulate cortex" from Nature Neuroscience, nor do they tell you that it concludes the primary role of the ACC is not to monitor uncertainty, but "integrating reinforcement information over time rather than in monitoring". This study basically contradicts the view that the ACC "monitor[s] uncertainty", holding instead that it is part of a distributed learning system. Basically, the entire study sets out to show that descriptions of the ACC which center on its role in monitoring are inaccurate, as lesions in this region in monkeys affected the monkeys' ability to remember rewards over time. Now, firstly, this is a serious problem, because cutting into the brains of monkeys to show that the now-deficient monkeys can't be conditioned the way other monkeys can doesn't tell us a single thing about the role of the amount of gray matter in the ACC.
Even better, study 17 cites study 18 in order to say that it is incorrect. The authors cited two studies, one after another, when the first one is devoted to demonstrating that the second is wrong.
"The focus shouldn't be on the current biology summary or results, but on the references as to why the results were determined."
I'm going to go out on a limb here and say that you don't read a lot of academic journals and technical literature. Please don't take that as an insult, because it isn't, nor is it any kind of criticism; it's true of most people, because most people don't go into research. The kind of literature in Current Biology, and the references cited in this study, are often not available for free but require university access (which is how I was able to get many of the studies). Additionally, the academic volumes, monographs, etc., which are equally important to a field, are expensive (I can buy three of them that collectively have fewer pages than a Harry Potter book, yet have spent a thousand dollars). And even the ones that aren't that expensive aren't in bookstores, because nobody is going to go into Barnes & Noble and buy Information Processing by Biochemical Systems: Neural Network-Type Configurations or some equally technical book.
So I can understand where you are coming from. But this was a research study published in a research journal, and the references are standard practice in every single study there is (although it is not supposed to be standard practice to misuse the references). That's how academic literature, whether it is in historical linguistics or cognitive engineering, works.
It's so researchers don't have to re-invent the wheel. They cite sources so that others who don't think they are correct can go to those sources and see where that view comes from and what the evidence for it is.
Unfortunately, too often that isn't done.
Here, for example, it wasn't noticed that the authors cited a 1998 textbook to support using a specific algorithm, even though they don't indicate anywhere which algorithm, or where one can find the one they used in a textbook containing several chapters on such algorithms.
They don't mention why, even though they didn't use functional MRI scans, they separated the white/gray matter in their imaging data using "the Oxford University Centre for Functional MRI of the Brain Software Library" (i.e., FSL).
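For what it's worth, here is roughly what gray/white segmentation amounts to conceptually. FSL's actual tool uses a far more sophisticated model (a hidden Markov random field fitted by EM); this toy k-means version of mine only illustrates the core idea that the tissue classes separate by voxel intensity on a structural T1 scan:

```python
import numpy as np

def segment_tissues(volume, n_iter=20):
    """Naive 3-class k-means on voxel intensities (CSF < GM < WM on T1)."""
    vox = volume[volume > 0].ravel()               # ignore background
    centers = np.quantile(vox, [0.25, 0.50, 0.75]) # rough initial centroids
    for _ in range(n_iter):
        labels = np.argmin(np.abs(vox[:, None] - centers[None, :]), axis=1)
        centers = np.array([vox[labels == k].mean() for k in range(3)])
    return centers                                 # one intensity centroid per tissue

# Fake "T1 volume": three Gaussian intensity populations standing in
# for CSF, gray matter, and white matter.
rng = np.random.default_rng(1)
fake = np.concatenate([rng.normal(m, 5, 10_000) for m in (30, 80, 120)])
print(segment_tissues(fake.reshape(100, 100, 3)))  # ~ [30, 80, 120]
```

The point is that this step operates on structural images, so the library's name isn't the issue by itself; the issue is that the authors never explain their choices.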
Nor do they mention why, when they state "The ROI for ACC was defined as a sphere with a radius of 20 mm centered at (x = 3, y = 33, z = 22) [4, 27]" (ROI = "region of interest"), they cite the sources they do. They are describing where they looked and the size of that region. This is very necessary for fMRI scans, and sometimes for MRI scans, because the neural activity one measures is often confined to very, very small regions. But the smaller the region, the greater the chance you will miss something. So fMRI studies usually cite why they selected the size (actually the volume, measured in voxels) they chose.
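To make that concrete, here is a minimal sketch of what such an ROI definition amounts to in practice: a boolean mask over the image grid. The grid shape, voxel size, and voxel-space center below are placeholders I chose for illustration, not values from the paper (mapping an MNI coordinate like (3, 33, 22) to voxel indices requires the image's affine, which we don't have here):

```python
import numpy as np

def spherical_roi(shape, voxel_size_mm, center_mm, radius_mm=20.0):
    # Build coordinate grids, convert to mm, and keep voxels whose
    # squared distance to the center is within the squared radius.
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    dist2 = sum((g * s - c) ** 2
                for g, s, c in zip(grids, voxel_size_mm, center_mm))
    return dist2 <= radius_mm ** 2                 # True inside the sphere

mask = spherical_roi(shape=(91, 109, 91),          # 2 mm grid (assumed)
                     voxel_size_mm=(2.0, 2.0, 2.0),
                     center_mm=(48.0, 96.0, 58.0)) # hypothetical center, in mm
print(mask.sum(), "voxels inside a 20 mm-radius ROI")  # ~4,200 at 2 mm voxels
```

Notice how fast the voxel count grows with the radius; that is exactly why the choice of radius and center needs a justification, which is what the citations are supposed to supply.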
The problem is that:
1) The researchers didn't do an fMRI study. They had MRI images, not fMRI data.
2) The first study they cite in support didn't use fMRI or MRI, but ERP.
3) The second used fMRI.
4) Neither study gives any reason to use that radius at that center.