They don't need to. Done properly, studies like the one in question (which can't be done properly yet) can infer from samples. Suppose someone gives me a bunch of pages from books, say a million of them. I have no idea which books were used, nor whether each page comes from a different book or whether, in reality, all the pages come from the same book with the page numbers, the language it was translated into, and so on varied.
Now imagine that all these million pages are spread out over the floor. A million pages, especially factoring in translations of the same works, is a tiny, infinitesimal sample of the total number of pages ever printed. Yet suppose I start picking up page after page after page, and somewhere between every 10th and every 3rd page is from Essai philosophique sur les probabilités (or some translation of it), while no two of the remaining pages come from the same text. Long before I have looked at anywhere near the full million pages, I can infer that this collection of pages is almost certainly mostly from Essai philosophique sur les probabilités.
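To make that intuition concrete, here is a minimal simulation sketch of the thought experiment; every number in it (the total page count, the true share of Essai pages, the checkpoints) is an illustrative assumption, not a figure from any actual study:

```python
import random

# Illustrative parameters for the thought experiment (assumptions, not data).
TOTAL_PAGES = 1_000_000
TRUE_SHARE = 0.2  # true fraction of Essai pages: somewhere between 1/10 and 1/3

# Build the "floor": 1 marks a page from the Essai (or a translation of it),
# 0 marks a page from some other text that no second page shares.
floor = [1] * int(TOTAL_PAGES * TRUE_SHARE)
floor += [0] * (TOTAL_PAGES - len(floor))
random.shuffle(floor)

# Pick up pages one at a time and track the running estimate of the
# Essai's share of the whole collection.
hits = 0
for n, page in enumerate(floor, start=1):
    hits += page
    if n in (100, 1_000, 10_000):
        estimate = hits / n
        # The standard error of the estimate shrinks like
        # sqrt(p * (1 - p) / n) -- it depends on n, the number of pages
        # examined, not on TOTAL_PAGES.
        se = (estimate * (1 - estimate) / n) ** 0.5
        print(f"after {n:>6} pages: share ~ {estimate:.3f} +/- {2 * se:.3f} (2 SE)")
    if n >= 10_000:
        break
```

The point of the sketch is that the uncertainty after 10,000 pages is the same whether the floor holds a million pages or a billion: sample size, not population size, drives the inference.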
If I randomly (and carefully randomly) sample from people across cultures around the world, I can do the same. If I control for language, for socioeconomic status, for exposure to literature, for orality, and for any number of other variables, and I still find that certain neurophysiological properties of the nervous system hold true, I don't need to sample all people.
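For what "carefully randomly" sampling while controlling for variables might look like in practice, here is a toy stratified-sampling sketch; every field name, category, and count in it is hypothetical:

```python
import random
from collections import defaultdict

# Hypothetical population records: (person_id, language, ses_band, orality).
# All categories and counts are illustrative assumptions, not real data.
random.seed(0)
population = [
    (i,
     random.choice(["en", "zh", "ar", "sw", "hi"]),
     random.choice(["low", "mid", "high"]),
     random.choice(["oral", "literate"]))
    for i in range(100_000)
]

# Group people by the combination of controlled variables (the stratum) ...
strata = defaultdict(list)
for person in population:
    strata[person[1:]].append(person)

# ... then draw the same number of participants from every stratum, so no
# single mix of language, SES, and orality dominates the sample.
PER_STRATUM = 20
sample = [p for group in strata.values()
          for p in random.sample(group, min(PER_STRATUM, len(group)))]

print(f"{len(strata)} strata, {len(sample)} participants sampled")
```

If a neurophysiological property then shows up at similar rates in every stratum, the controlled variables are unlikely to be what is producing it, and reaching that conclusion never required sampling everyone.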
Quite literally, you are suggesting that we can't know whether humans have hearts because we haven't checked every human.