As I explained, the entropy of the early universe is determined by the number of accessible quantum states. But for a pure vacuum, there is only one such state. That means the pure vacuum has the lowest possible entropy.
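To spell out the counting, this is just the standard Boltzmann relation:

$$S = k_B \ln \Omega$$

where $\Omega$ is the number of accessible microstates. A pure vacuum has $\Omega = 1$, so $S = k_B \ln 1 = 0$: the lowest entropy possible.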
Well, the point is that those constants may NOT be independent. And assuming they adjust to maximize complexity is far from an ad hoc proposal: maximizing complexity inevitably leads to life.
But this is inherently untestable and hence non-scientific. The proposal that the constants have values that maximize complexity *is* testable.
Well, asking where the laws come from is *always* going to lead to problems. For the most fundamental laws there *cannot* be a more fundamental explanation.
But that is not *at all* what I am proposing. I am not tweaking the constants to produce something as precise as English words on a distant world. I am simply saying the constants adjust to maximize complexity. And *that* inevitably leads to enough complexity to form life.
It *isn't* a 'superlaw'. It is simply a law like every other physical law: it just describes how the constants are determined.
It is certainly *possible* for there to exist a multidimensional civilization that is able to create universes. It is also possible that our universe was created as an art project by an elementary schooler in that civilization. But I find no evidence for any such thing, and my proposal of an *extra* physical law describing how the basic constants are determined is far, far superior as an explanation of the observed complexity of our universe.
Perhaps. Have you done the calculation? Which values of the constants allow for a type of atom that can form complex structures? And if the constants adjust to produce complexity, why would an external agent be required to produce the observed complexity?
Not at all. Again, if the constants adjust to maximize complexity, we would expect atoms like carbon to form, allowing for complex structures. And, again, if complexity is maximized, we would expect such structures to develop into life. There is only *one* problem: the amount of complexity. And that problem is solved by the proposed law that the constants maximize complexity.
On the contrary, design is typically the *worst* explanation unless we know ahead of time that an intelligent agent exists and what that agent's capabilities are. I am NOT making an arbitrary exception: I am proposing another natural law that shows why the observed complexity is what it is.
Nope. For the Rosetta Stone, we already know that there are humans with the capability of making such a stone, that they existed in the area, that Egyptian and Greek were languages of that area, etc.
But if those laws are untestable, they should be thrown out immediately, which is one reason ID is not even under consideration.
I strongly disagree. A simple law stating that the constants are such as to maximize complexity is in line with all sorts of other optimization laws we already use in physics. For example, all of the laws we currently see as fundamental follow from extremizing an action (the integral of a Lagrangian). All that happens with my proposal is that the constants are *also* subject to an optimization for complexity. Again, it is NOT a 'superlaw', but merely another proposed ordinary law, one that fixes the values of the constants as parameters.
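To make the analogy concrete, schematically (and this is only a schematic; $K$ is a stand-in symbol for whatever complexity measure the law would actually use, not something I have derived):

$$\delta S[q] = \delta \int L(q, \dot{q}, t)\, dt = 0 \quad \text{(ordinary dynamics)}$$

$$(c_1, \dots, c_n) = \operatorname{arg\,max}_{c}\, K(c) \quad \text{(proposed selection of the constants)}$$

The first line is the standard stationary-action principle that underlies the known laws; the second just adds one more optimization, this time over the constants themselves rather than over trajectories.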