What Fine-Tuning is (and isn't)
Like so many terms within the sciences, “fine-tuning” is widely misunderstood, used inconsistently even by specialists, and misleadingly suggestive. Fine-tuning, or the fine-tuning problem, intuitively suggests (if true/real) the existence of a “fine-tuner” or designer. It’s true that many specialists have used the term this way, but most aren’t physicists (most are philosophers, theologians, or scientists in unrelated fields). Physicists, on the other hand, tend to use the term differently. To understand the various senses the term conveys, it is useful to find some ground common to them all. At heart, fine-tuning is similar to the Weak Anthropic Principle (WAP). The WAP is the simple and obvious truth that, because we are here, the structures, laws, constants, forces, dynamics, etc., of the universe must be such as to allow for the fact that we are here. Fine-tuning is a bit more involved, but only because it turns out there are a number of properties of the universe that, were any of them slightly different (sometimes by a percent or two, sometimes by less than a billionth of a billionth of a percent), life or the universe itself wouldn’t exist. Yet life and the universe do exist.
Many, if not most, physicists are extremely dissatisfied with these fine-tunings. The most general problem is that not only are the finely-tuned parameters related, but they are often dependent upon the standard model or grand unified theories (GUTs). Most physicists, probably even those who believe in a creator deity, do not like to rely on divine creation to explain the cosmos. However, if the nature of the universe isn’t designed, we can’t simply state that the values required for fundamental constants or other “fine-tuned” parameters are due to chance. Why? Because these values (and even the parameters themselves) differ in different models of the cosmos. We would like to be able to rule out some candidate theories (string theory and its various successors, other alternatives to or extensions of the standard model) in favor of others, and one good way to do this would be to find models that agree with known empirical findings and accepted theories while also solving the “fine-tuning problem.” Put differently, suppose some model based on supersymmetry requires numerous parameters to all be extremely precise for no reason, while Susskind’s cosmic landscape or some other multiverse theory makes finding such parameters expected, because there are literally infinitely many universes with different values for the parameters and we could only find ourselves in one that allowed us to be there (as opposed to all the ones in which life doesn’t exist because it can’t). The general preference is then for the model in which the “fine-tuning” we find is to be expected, rather than one in which the values must be extremely precise for no reason.
You’ll note that I’ve referred to values being extremely precise, but an obvious objection is that there is nothing problematic about precision per se. For example, the speed of light has a very precise value, and if it didn’t we wouldn’t have special relativity. Precision itself clearly isn’t the problem. The problem is probabilistic (of sorts). If there is no reason for the fundamental forces, the initial conditions of the universe, and the other basic, foundational parameters of reality to all be set precisely to values that are “biophilic” or friendly to life, why should countless (indeed, uncountably many) small deviations from any of these yield an inhospitable universe? Let’s make this more precise with two examples from Tegmark:
“Suppose you check into a hotel, are assigned room 1967 and, surprised, note that this is the year you were born. After a moment of reflection, you conclude that this is not all that surprising after all, given that the hotel has many rooms and that you would not be having these thoughts in the first place if you’d been assigned another one. You then realize that even if you knew nothing about hotels, you could have inferred the existence of other hotel rooms, because if there were only one room number in the entire universe, you would be left with an unexplained coincidence.
As a more pertinent example, consider M, the mass of the Sun. M affects the luminosity of the Sun, and using basic physics, one can compute that life as we know it on Earth is only possible if M is in the narrow range 1.6 x 10^30 kg - 2.4 x 10^30 kg - otherwise Earth's climate would be colder than on Mars or hotter than on Venus. The measured value is ~ 2.0 x 10^30 kg. This apparent coincidence of the habitable and observed M-values may appear disturbing given that calculations show that stars in the much broader mass range M ~ 10^29 kg - 10^32 kg can exist. However, just as in the hotel example, we can explain this apparent coincidence if there is an ensemble and a selection effect: if there are in fact many solar systems with a range of sizes of the central star and the planetary orbits, then we obviously expect to find ourselves living in one of the inhabitable ones.”
Tegmark, M. (2004). Parallel universes. In J. D. Barrow, P. C. W. Davies, & C. L. Harper (Eds.), Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity. Cambridge University Press.
So we can imagine (indeed, we must, if we are to advance beyond the standard model, universally acknowledged to be deficient) a variety of possible universes. The odds against the one in which we actually reside are longer than those of getting hit by lightning twice a few seconds after winning several lotteries. However, if, like Tegmark’s hotel rooms or the range of actual stars, our “universe” is really just one universe in a multiverse, then the fact that we find ourselves to have won this incredibly improbable lottery is simply because, out of all the universes in the multiverse, we exist in one in which we can.
This explanation, though, is problematic in two ways. First, it doesn’t adequately deal with the probability space of “possible universes”. That is, if we assume that our universe is all there is, we can’t use Tegmark’s argument because there are no other “hotel room universes”. This is it. Second, it sneakily introduces unconditional probability after setting up the wrong way to understand the conditional probability. Going back to the hotel example, we can calculate the probability that we find ourselves in a particular room simply by dividing 1 by the number of possible hotel rooms. We can’t do this as exactly for our Sun, but because we know a lot about stellar astrophysics we can calculate a decent approximation of the probability. In both cases, though, we’re not interested in the probability of getting any particular hotel room, or of any particular star being the Sun, but rather in this: given that we find ourselves in some hotel room, or orbiting some star, what is the probability that the room’s number equals our birth year, or that the star’s mass equals the M value we measure for the Sun?
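As a rough, back-of-the-envelope illustration of the kind of calculation involved (and of the difference between the two questions), here is a toy sketch using Tegmark’s figures. The uniform prior over the possible stellar-mass range is purely an assumption for the sake of illustration, not a physically motivated distribution of stellar masses:

```python
# Toy back-of-the-envelope sketch using Tegmark's figures (not a real
# astrophysical calculation; the uniform prior below is an assumption
# made purely for illustration).

habitable_min = 1.6e30   # kg, lower bound of the life-permitting range for M
habitable_max = 2.4e30   # kg, upper bound of the life-permitting range for M
possible_min = 1e29      # kg, roughly the smallest stellar mass that can exist
possible_max = 1e32      # kg, roughly the largest stellar mass that can exist

# The (naive) unconditional question: if M were drawn uniformly from the
# possible range, what fraction of that range is life-permitting?
p_habitable = (habitable_max - habitable_min) / (possible_max - possible_min)
print(f"Naive P(M is life-permitting) ~ {p_habitable:.3%}")  # roughly 0.8%

# The (trivial) conditional question: given that observers exist to measure M,
# the probability that M lies in the life-permitting range is 1 by definition,
# which is why conditioning on our own existence answers the wrong question.
print("P(M is life-permitting | observers exist) = 1")
```

The point of the second line of output is that conditioning on our own existence makes the “probability” trivially 1, which is exactly why that conditional question can’t be the interesting one.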
I’ve come to agree that an example by Leslie is far better than either of Tegmark’s, and better than I myself had long thought (much of my opposition, I realized, was due to its misuse by those who argue that fine-tuning “proves” a designer). I use the condensed form given by Rees:
"There are various ways of reacting to the apparent fine tuning of our six numbers. One hard-headed response is that we couldn't exist if these numbers weren't adjusted in the appropriate 'special' way: we manifestly are here, so there's nothing to be surprised about. Many scientists take this line, but it certainly leaves me unsatisfied. I'm impressed by a metaphor given by the Canadian philosopher John Leslie. Suppose you are facing a firing squad. Fifty marksmen take aim, but they all miss. If they hadn't all missed, you wouldn't have survived to ponder the matter. But you wouldn't just leave it at that - you'd still be baffled, and would seek some further reason for your good fortune.”
Rees, M. J. (1999). Just Six Numbers: The Deep Forces that Shape the Universe. Weidenfeld & Nicolson.
It is important to note quickly that Rees’ “six numbers” are far fewer than the number of fine-tuned parameters typically cited, and that they constitute Rees’ “solution”, but also that reducing the problem to these six requires a lot of other assumptions and ignores additional problems (as Rees admits here and in, e.g., Rees, M. J. (2007). Cosmology and the multiverse. In B. Carr (Ed.), Universe or Multiverse? (pp. 57-75). Cambridge University Press). However, the point is that the metaphor gives us the right way to deal with the conditional probability of finding ourselves in a universe just right for life. We don’t ask “given that we won the lottery, what is the probability we did?” or “given that a firing squad of 50 all fired but missed me, what are the chances I survived death by firing squad?” Likewise, we don’t ask “given that we exist, what is the probability we find ourselves in a universe that allows our existence?” Rather, we want to know why, if the universe isn’t designed, so many possible slight differences to its basic make-up would make it unfit for life (or for galaxies, or atoms, or its own survival beyond a few seconds).
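To put the distinction schematically (the notation here is mine, not Rees’), write B for “the fundamental parameters take biophilic values” and O for “observers exist”:

\[
P(B \mid O) = 1 \qquad \text{vs.} \qquad P(B) \ll 1 .
\]

The fine-tuning problem is why the second, not the first, should hold for the universe we actually have, absent either a designer or a selection effect over an ensemble of universes.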
This is a very nuanced issue. On the one hand, we might approach it not in terms of probable universes but by asking why we couldn’t have existed in a universe in which gravitation was much stronger, or the cosmological constant differed by .00000000000000000000000000001. After all, we are assuming that there is no fine-tuner (no designer), so nothing should privilege the values the “fine-tuned” parameters actually take, nor privilege our initial conditions over the infinitely many very similar initial conditions that would have produced a universe which re-collapsed almost immediately or failed to form galaxies. Yet that expectation fails. Why? Why do we not only find ourselves in a universe fundamentally suited for life, but find that life (and in many cases atoms, galaxies, or the survival of the universe beyond a very small interval of time) requires fine-tuning when the universe presumably wasn’t fine-tuned?
On the other hand, we could approach it using probabilities for particular universes. This is a basic starting point for a number of multiverse cosmologies. Or we could use other approaches still. What is quite important, however, is to distinguish all of these from the kinds of arguments some claim fine-tuning suggests, or even count as part of a more general “tuning” of the sort found in creationist/intelligent design arguments. In reality, there is a vast difference.
Intelligent Design, Irreducible Complexity, and the Nature of Actual "Design" Arguments
Those like Behe, Meyer, Dembski, Collins, Schroeder, Polkinghorne, etc., who argue that we couldn’t have evolved without god, or that the nature of life in all its complexity reveals a designer, are falling prey (at least in part) to a common fallacy we as humans are predisposed to: seeing purpose, cause, or design whenever we see organization of the type we normally associate with planning or an organizer. In other words, claims that we can “find god” in the beautiful intricacies of DNA, the incredible complexity of cells, and similar non-god-in-the-gaps fallacies “see” design because of our inclination to equate particular phenomena or structures with design. (The difference between “cells are really complex, therefore god” and a god-in-the-gaps fallacy is a matter of what claims are made on the same evidence: saying that such complexity requires god is a “gaps” argument, while saying that such complexity requires a designer is like “seeing” a face in the Moon or Jesus in some rock formation. In order to process abstract, conceptual information so as to understand language or recognize faces, we MUST ignore details and generalize away from specifics, but this has the downside of “seeing” instantiations of abstractions where they aren’t there.)
In other words, the claim that living systems are “fine-tuned”, in that they consist of millions of intricate structures that seem impossibly complicated to have resulted from chance, is a very different claim from cosmological fine-tuning. First, the structure of DNA or the evolution of a flagellum are specific structures, among a much, much larger set, that some claim are just “too” complicated to be relegated to chance in the way that such claims (either implicitly or explicitly) assume we can relegate, e.g., stalagmites or diamonds. Second, we are talking about instances of elements (systems, phenomena, etc.) in the universe, not the make-up of the universe itself. To see this, think about one way of defeating an argument for a designer based upon apparent irreducible complexity, or upon the chances that evolutionary processes would result in the human brain, or whatever. We might say that the physics of the universe makes these possible, and reduce all such issues to the problems with the probability arguments made by creationist/I.D. proponents. Fine-tuning, though, concerns what makes possible not only the would-be evidence for “design”, but even the survival of the universe long enough for evolution to begin, or long enough to allow for atoms, or for atoms to form at all no matter how long the universe “survived”.
Conclusion:
Essentially, the difference is that between trying to find the correct way to assess the probability of particular, seemingly “designed” things in the universe, given that the universe makes them at least possible, and explaining why we have a universe in which these are possible at all when, under the assumption that there is no designer, we shouldn’t find so many things that must be “just so” in order for anything that seems designed (along with everything that doesn’t) even to exist.
Also, fine-tuning is most frequently used to argue for theories like multiverse cosmologies or particular unified theories rather than for a fine-tuner, whereas arguing for a designer from the “designs” we find in the universe makes no predictions, explains nothing, and advances no models or theories.