I still don't know what correspondence you're looking for. In the era of Rutherford messing around with gold foil, we said that the nucleus was a collection of charged spheres, even though we hadn't actually seen them, and later on we saw there were no spheres to be found.
Can you describe for me what the no-cloning theorem is?
The models we use are actually models of the experiment's outcomes before the experiment is performed. We describe the physical system in such a way that its specifications are determined by outcomes which have not yet happened, are determined through statistical means, and are related to anything we do only via a measurement, which requires another mathematical function completely separate from the one we called the system.
Perhaps there's a better way of getting at this. Why do we use amplitudes instead of calculating probabilities directly in QM?
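To make the contrast concrete, here's a minimal sketch (the phases and weights are invented for illustration, not anything from this thread): add the amplitudes first and square afterwards, and an interference term appears that adding probabilities directly could never produce.

```python
# A minimal sketch: two paths to the same detector. Classically we
# would add probabilities; in QM we add complex amplitudes first and
# square the magnitude afterwards.
import numpy as np

amp_slit_1 = np.exp(1j * 0.0) / np.sqrt(2)    # amplitude for the path via slit 1
amp_slit_2 = np.exp(1j * np.pi) / np.sqrt(2)  # amplitude via slit 2, phase-shifted

p_if_we_added_probs = abs(amp_slit_1)**2 + abs(amp_slit_2)**2  # 0.5 + 0.5 = 1.0
p_quantum = abs(amp_slit_1 + amp_slit_2)**2                    # |0|^2 = 0.0

print(p_if_we_added_probs, p_quantum)  # 1.0 vs ~0.0: destructive interference
```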
Or was there some "reality" missing from the wavefunctions that the spheres had?
Yes, of course. All models are wrong. That's why we have statistical mechanics. We know that we're wrong but things get too complicated. The difference is that we called it "statistical mechanics" because we were using probability theory and statistics to simplify some system yet say something meaningful. Here, we're doing much the same, only we are calling the probability function the system rather than describing the system in terms of probabilities.
You know that observation in QM affects the system in non-trivial ways. In fact, you believe that it causes a branching universe. If we want to describe how a quantum system evolves over time, but we know that any observation of the system will disturb it in some non-trivial way (whether "collapsing" or creating "branching histories" or a "branching universe" or whatever), how do we specify the initial state of the system such that we can describe how it evolves?
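To fix notation for what "specify the initial state and evolve it" means in the formalism, here's a minimal sketch; the Hamiltonian and state are my inventions, not anything from this thread:

```python
# A toy sketch: write down psi(0) by fiat (via a preparation procedure)
# and evolve it unitarily with U(t) = exp(-iHt). No measurement occurs.
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # stand-in Hamiltonian (hbar = 1)
psi0 = np.array([1, 0], dtype=complex)                  # the initial-state specification

# Build U(t) from H's eigendecomposition (H is Hermitian)
t = 2.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi_t = U @ psi0        # state at t = 2; still untouched by any observation
print(np.abs(psi_t)**2) # Born probabilities if we were to measure right now
```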
I'm not talking about only this thread, although I am fairly sure you have specifically said that exact thing to me specifically at least once.
If I did, I meant it to be conditional. That is, given the way we are describing the system, the specifications of the system are conditioned by theory, not measurement/observation, and the measurements/observations are related to the system we described via a mathematical function developed independently and in advance. This is because when we set up the experiment, we are describing the state of something that we cannot obtain without altering it in ways that would make the experiment impossible. It's not that no physical system exists, but we have described something that doesn't.
To see that this is obvious in at least one sense, think of the fact that the state vector is said to contain all possible information about the system.
1) How can that be true if the uncertainty principle is true? (See the sketch after these two questions.)
2) If we have completely described the system, and we observe it at some point, why do we need Hermitian operators to tell us what we observe?
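To make question 1 concrete, a toy numerical sketch (spin-1/2, with operators and state chosen purely for illustration): a state vector that is a "complete description" still leaves a non-commuting observable maximally uncertain.

```python
# A fully specified spin-1/2 state is sharp for sigma_z yet maximally
# uncertain for the non-commuting sigma_x.
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

psi = np.array([1, 0], dtype=complex)  # spin-up along z: a "complete description"

mean_x = np.vdot(psi, sigma_x @ psi).real                       # <sigma_x> = 0
var_x = np.vdot(psi, sigma_x @ sigma_x @ psi).real - mean_x**2  # variance = 1

print(mean_x, var_x)  # 0.0 1.0: complete description, indeterminate outcome
```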
I actually meant the probability distribution after you apply the right operators and Born-rule everything. That's 3D (or maybe 4, but who's asking?).
The Born rule:

"The probability that a measurement of an observable A on a system in a state described by a state vector ψ = Σ_i c_i φ_i will yield an eigenvalue α_n is |c_n|^2, where A φ_n = α_n φ_n, A is the operator corresponding to the observable A, and ψ and the φ_i are normalised: (ψ, ψ) = 1 = (φ_i, φ_i)."
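Taken literally, the rule above is a short linear-algebra recipe. A minimal numerical sketch, with an invented Hermitian operator and state (nothing from the quote itself): expand ψ in A's eigenbasis and read off |c_n|^2.

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)  # a Hermitian stand-in observable
alphas, phis = np.linalg.eigh(A)               # A phi_n = alpha_n phi_n

psi = np.array([1, 0], dtype=complex)          # normalised state vector

c = phis.conj().T @ psi                        # c_n = (phi_n, psi)
probs = np.abs(c)**2                           # Born rule: P(alpha_n) = |c_n|^2

for alpha, p in zip(alphas, probs):
    print(f"alpha = {alpha:+.1f}  P = {p:.2f}")  # here each eigenvalue gets P = 0.50
```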
Now I'm asking: the state vector of some pure system is a complete description of the system as an element in Hilbert space. How do we determine which variables are characteristic of the system? Put simply: we have a vector that isn't generalized with some φ_n or a1, a2, etc., but written with actual values. How do we obtain these values?
Also, I'm now kinda intrigued about whether it's possible to "visualize" the same WF in different-dimensional spaces.
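For instance (a speculative sketch; the grid, width, and momentum kick are arbitrary choices of mine): one and the same Hilbert-space vector rendered in two different "spaces", position and momentum, related by a Fourier transform.

```python
import numpy as np

x = np.linspace(-10, 10, 1024)
dx = x[1] - x[0]

psi_x = np.exp(-x**2 / 2) * np.exp(1j * 3 * x)   # Gaussian packet, mean momentum ~3
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)  # normalise

psi_p = np.fft.fftshift(np.fft.fft(psi_x))                   # same vector, momentum picture
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, dx))  # angular-frequency grid

# |psi_x|^2 peaks at x ~ 0; |psi_p|^2 peaks near p ~ 3: two pictures of one ray
print(x[np.argmax(np.abs(psi_x))], p[np.argmax(np.abs(psi_p))])
```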
In honor of Socrates, let's speculate about the space itself (using language that implies "we/us" and questions as if we both didn't know where this was going, when really I do, and it's so annoying that they finally just killed Socrates for doing it). Regardless of dimension, we describe the system in terms of a ray or a vector in a Hilbert space. At the end of an experiment, is the system still in Hilbert space? Are we in Hilbert space? If the answer to both is "yes", then why do we ever talk about some system in terms of anything other than Hilbert space (more specifically, in Euclidean or Minkowski space)? If the answer to the first is "yes", then what does the projection postulate entail in terms of the space the system is in and the values obtained by measurement?
The explanation is: thinking in terms of particles - and spraying paint everywhere as a result - is wrong. The objects being described by QM are not particles (or waves) and do not obey the rules of what particles do, so thinking in terms of molecules "being in two places at once" is a type error - akin to thinking of a point-like cow, or a pressurized shade of blue, or a curious green idea.
That's the standard model you've said we need to get rid of: QM is irreducibly statistical, we don't ask what exists before we measure a quantum system, and we don't try to interpret the measurement (whether as splitting branches, as a spread-out physical system actually collapsing to a point-like state, or as any other interpretation). The issue is that this interpretation (or perhaps non-interpretation is a better description) was developed when physics was still about idealized isolated systems. Schrödinger's cat was the first serious challenge to this logic: it used the quantum formalism, that "it's just math" approach, and proved that a cat can be alive and dead at the same time. You, it seems, wish to keep the standard model but apply post hoc a description for which there is no reason, and without explaining why we used the standard model in the first place (preparing a system through a transcription process in which the system is described in terms of statistical theory, the measurement apparatus, and the measurement process, and then observing this "system" by another step in the measurement process coupled with another mathematical function).

Let's assume some possible world where we are both renowned physicists with the necessary equipment to prepare a quantum system for an experimental procedure. We use the same formalisms and the same design. So why isn't it like rolling dice? That is, even though we're using probabilities, we aren't generalizing them the way we do for rolling dice, where, given an idealized system (dice or quantum) prepared and transcribed in the same way, we can just say the probability of getting snake eyes is constant (as it is in classical probability and statistical mechanics).
There's nothing mysterious about wavefunctions being spread out, or doing bonkers things like what we see in delayed-choice experiments. The mistake is to interpret the wavefunction as the probability of finding a particle there - there is no particle, only an entanglement.
There is everything mysterious about wavefunctions being spread out, because I don't typically describe the probability of getting heads or tails when flipping a fair coin as being spread out. The wave function is a probability function, whether you wish to think of it in terms of finding particles or not. The relative-state interpretations all have the same or a similar problem, from Everett to whomever it was who reincarnated his work (Deutsch? DeWitt?) under a new and improved title (the many-worlds interpretation) to the polymodal omniontologistical interpretation.

Probability functions describe the probability of something. I get heads or tails. In QM, we describe something like the double-slit experiment in terms of the following probabilities: getting one result corresponding (in some way) to detection and to one of the slits, getting a result corresponding (in some way) to the other, and getting a result corresponding to both. But we never get both, and thus if we assume that both occurred we are no longer dealing with probabilities (because all outcomes occur), yet we use them regardless. We are applying probabilistic reasoning without a basis for our probabilistic outcomes.
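To put the "we never get both" point in concrete terms, a toy sketch (the 50/50 weights are invented): sampling single-shot outcomes from Born-rule weights yields exactly one result per run; "both" is never among the outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = [0.5, 0.5]  # |c_1|^2, |c_2|^2, illustrative values
outcomes = rng.choice(["slit 1", "slit 2"], size=10_000, p=probs)

counts = {o: int((outcomes == o).sum()) for o in ["slit 1", "slit 2"]}
print(counts)  # roughly {'slit 1': 5000, 'slit 2': 5000}; never "both"
```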
the universe isn't actually built out of anything, but is merely a linear algebra calculator
As long as it doesn't use Dirac's notation.
Sorry, I assumed you would be aware of the expression "lies-to-children."
I am, although my interpretation of it differs from Wikipedia's, which I find too often reflects the defense of those who distort and then claim it is simplification. The way math is taught and the way psych students learn about neurons are perfect examples.
EDIT: I don't think I've heard this term and was confusing it with 'the noble lie' which, thankfully, is basically the same.