
Free will?

Thief

Rogue Theologian
I am not arguing against free will. I was just curious at your claim that "your free will goes as far as your will." It seemed an odd thing to say unless you distinguish between will and free will.

Even you possess the ability to make storms go away....raise the dead...
feed thousands at a moment's notice....and walk on water....

If you do another's will....are you free?

But if you CAN do such things....are you not more free than most?
 

atanu

Member
Premium Member
There are actual particles and not only photons are used. I have read that in the experiments instruments can be used to literally grab or pick up a particle and send it through the slit and box experiment and determine which box the particle landed in.

My dear friend, IMO, the time is ripe for you to by-pass the thoughts-words 'particle' or 'wave' etc. etc. Time is ripe to meditate on these thoughts-words as mere products of awareness.

:yes:
 

LegionOnomaMoi

Veteran Member
Premium Member
It tested quantum theory
Quantum theory developed as a means to understand this experiment. To say it "tested" quantum theory is like saying I can "test" Newton's theory of gravity by dropping apples. What actually happened is that two different experiments, one which had "proven" that light was a wave and behaved just like classical waves, and another which "proved" it could only be explained in terms of discrete particles, brought classical mechanics to a grinding halt. Because there was nothing in it to explain the dynamics of a system which couldn't be measured. Classical measurement was based on the precept that measurements could be made arbitrarily "gentle", and thus there was no theoretical limit to precision. Which meant that classical systems and classical states were understood via a one-to-one correspondence between a measurement of that state and the property of the system described by the measurement.

Only here, two experiments gave contradictory results, out of which quantum theory was derived, and even the founders were deeply unhappy with the theory, because it wasn't just developed out of two contradictory interpretations, but was a resolution of this contradiction by simply combining two contrary classical entities into one unmeasurable, nonlocal "reality" described as measurable and localized when convenient. But the "shut up and do the math" approach ultimately failed, because this:

Yet they produce the right answers. How can they not describe the states of the quantum system, yet produce the right answers in all tested circumstances?

represents a complete misconstrual of both quantum theory and the quantum mechanics used (in opposition to the theory) to model quantum systems. Perhaps this is why you repeatedly "don't see the problem", and I have failed to successfully explain that the "right results" are always, in every single experiment on any quantum system, determined to be "right" based on how a final state is reached by assuming an initial state we know to be wrong, using a function to describe the system's evolution we know is wrong, and then seeing how destroying the system results in that final state. A state we determined to an unknown degree by the experimental devices themselves, that we understand in terms of an initial state we made up, and that evolved according to specifications of some mathematical function and some set of variables derived before the system existed and which can't (according to quantum theory) describe the quantum system (so we ignore this terminological misuse rather than be left with the problem of having to explain why we are talking about the states of our mathematical system and not ever dealing with the quantum system).



quantum theory tells us that a single electron fired through the slit will land in a given position some proportion of the time.

No it doesn't. Nor does the QM formalism suggest this. First, "firing" a single electron through slits means that before we even reach the detection device we've fundamentally altered the system. That's what the double-slit experiment and later variants tell us. We get one thing if we have one slit, and something completely different with two. Moreover, we can now get both at once using our newer and more sophisticated devices. Second, we can arbitrarily change either the set-up, or the mathematical formalism describing the mapping of states and the variables, or both, such that whatever we decide is the "right" result can be the "wrong" one too, without even changing the experiment. Because all we know is that somehow our experimental set-up involves some probability that we will end up with certain measurements vs. others, but we don't know how. Third, however we calculate the probabilities that the "system" will end up such that we can say something about its final state, this is only possible if we specify the initial state. But we can't specify the state of a system defined by "quantum processes that are literally unobservable". So we invent it.

Do the experiment lots of times, and that's what you will get. It also tells us that this pattern will not appear if you somehow measure which slit the electron went through.

REALLY!!? "Quantum systems exhibit particle- or wavelike behavior depending on the experimental apparatus they are confronted by. This wave-particle duality is at the heart of quantum mechanics. Its paradoxical nature is best captured in the delayed-choice thought experiment, in which a photon is forced to choose a behavior before the observer decides what to measure. Here, we report on a quantum delayed-choice experiment in which both particle and wave behaviors are investigated simultaneously. The genuinely quantum nature of the photon’s behavior is certified via nonlocality, which here replaces the delayed choice of the observer in the original experiment. We observed strong nonlocal correlations, which show that the photon must simultaneously behave both as a particle and as a wave." A Quantum Delayed-Choice Experiment

Huh. And here I thought Science was supposed to be a prestigious journal. Yet not only do the authors present their experiment, they note that it is simply a different approach to what has already been experimentally demonstrated several times:
"Wheeler’s experiment has been implemented experimentally by using various systems, all confirming quantum predictions. In a recent experiment with single photons, a spacelike separation between the choice of measurement and the moment the photon enters the interferometer was achieved.
We explored a conceptually different take on Wheeler’s experiment."

Now, the first problem is, if there were already so many experiments demonstrating that this thought experiment was empirically validated, including one recent experiment on a single photon, why do we need another experiment? Should we be concerned that phlogiston really exists? Or that light requires "aether" as a medium to travel? Or perhaps it's because of the inherent, well-known, documented, and increasingly problematic nature of results which are deemed "right" based on "measurements" of a system's "states" we invented (for the initial), and then mostly invented (for the final). The second problem is why, if your understanding of quantum theory is sound, it does not reflect quantum theory as it has been understood since 1978, when Wheeler first destroyed any attempt at "interpreting" the double-slit experiment in the way you do now.

So, here's your interpretation of the experiment:
quantum theory tells us that a single electron fired through the slit will land in a given position some proportion of the time. Do the experiment lots of times, and that's what you will get. It also tells us that this pattern will not appear if you somehow measure which slit the electron went through.
Here's the interpretation within actual quantum theory: the electron doesn't ever travel through the slits, and whatever pattern we get depends on how many slits we cut and when and how we detect it, not on the actual dynamics of whatever quantum system we "fire". Even more interesting is this:
Also, you better take away Feynman's PhD if constructing classical physics from quantum has "no theorectical basis."

Because his thesis, and the formulation of the path integral presented in his 1948 paper, describe "particles" as at all times taking all possible routes through the interference screen/splitter itself, at the same time (not the "slits"). And when attempts were made to create a way to falsify this, and to determine some "actual" dynamics of some quantum system from the patterns themselves (rather than take Feynman's word for it), Wheeler's thought experiment further supported Feynman and added a twist: it's not just that we "think of" or "describe" a particle as travelling through one or two slits at the same time, or even that it travels through multiple slits as long as we don't look. It's that whatever pattern we get is not the result of the actual paths taken, nor even a result we determined simply by cutting slits so that we would only detect certain paths it didn't actually take (via Feynman); rather, even if we try to detect the "path(s)" after they are taken, they won't be there.

And now we've shown this experimentally. Multiple times. We can "detect" a photon as being in two fundamentally different states at the same time after it's supposed to have already given us the pattern based on the interaction with the splitter.

We determined the pattern by preparing things a particular way, from the number of slits to the "detection" device, and we have no way of ever knowing how to verify the magnitude of error in any experiment with any quantum system. We only know that the error is guaranteed, and is not simply a matter of our "probability function", because this depends itself on the specifications made to what we call the "system", which doesn't correspond with any actual system in any way except to the extent the entire theoretical basis for running the experiments in the first place states that it doesn't.
 

LegionOnomaMoi

Veteran Member
Premium Member
Via quantum theory, which infers it from observation.
Impossible. Quantum theory holds you can't observe the states of quantum systems. However, we need two things to describe a final state: an initial state, and the final. We only get one "observation", which will destroy the system. And as for inference? That tells us that the functions we use to map the "inferred" initial state to the "final" state cannot be correct. Because the functions are deterministic. The specifications on the "system" (i.e., what we write down about the initial "state" of a system we never actually observe) will guarantee particular outcomes. But what Feynman, among many others, has shown (via the same inference you refer to) is that the "final state" cannot correspond to the formalisms we use to map our constructed initial state onto some "final" state we determine to be a "state", even though it is inferred from an initial state of a system we never observed and a function which contradicts all inferences foundational to quantum theory.

You appear to be contradicting yourself.

Which is because, as is well known, quantum mechanics (meaning the actual description of quantum systems and experiments on quantum systems) relies on formalisms that quantum theory holds (via inference) cannot be correct. However, as we need something in order to crank out results, we ignore this. Until it stops producing the results we require. Which was about two decades ago. Since then, the question of what the "wavefunction" really was, and whether the states of quantum systems (or the systems themselves) really corresponded to physical reality or to the actual systems rather than to what we simply call the system, became vastly more important to physicists. Because increasingly sophisticated devices removed the bastion we dreamt up to protect the "classical world" and pretend we could derive classical mechanics from quantum. The same experimental inferentialism which brought about quantum mechanics in the first place showed that the "classical limit" approach we hoped wasn't just circular reasoning was flawed. Because we could detect the same things which led us to develop quantum theory at the macroscopic level.


Depending on which view (in the technical sense) of reality (for there are several) you are operating in, quantum systems are not ontologically indeterministic.
Not depending on quantum theory, though. Whatever model of "what's really going on" one chooses to subscribe to, the inference process you assert is the reason for giving the "states" of the system we do is the same process we use to know that the description of initial states, as well as the functions used to understand "final states", are necessarily wrong. However, they were still useful for some time. But they have become increasingly problematic, and the "little problem" or "pseudo-problem" (Scheinproblem) turned out to be quite serious, and demonstrated that the "classical limit" solution used was flawed all along. And, alas, the Everett et al. "relative-state" approaches to some new understanding of decoherence ("it happens; don't ask where or how or when or why") are either just as flawed, and demonstrated to be so empirically, or are a fuzzy, ill-defined, nebulous treatment which can't possibly achieve what the "classical limit" was supposed to: derive classical mechanics from QM.


In Hilbert space, it is absolutely deterministic
"It"? What? And Hilbert space? A problem with sciences like mine is that we often not only have massive amounts of data, but frequently have to "quantify" user responses. The naive way to do this (which plagues research in psychology and sociology) is to treat linguistic responses as numbers corresponding to a scale constructed by the number of possible responses (e.g., the likert scale). Armed now with actual numbers, most social scientists plug them into some statistical technique from what (they frequently don't even know) is called the generalized linear model and which is utterly inadequate for their use.

An alternative to using the poorly understood and inadequate typical GLM distance measures is to use other distance measures in Minkowski space or some other formulation of multidimensional space, often constructed over the complex numbers. But when we talk about the distinctions between abstract categories based on these distances in some space (Euclidean, Hilbert, Minkowski, whatever) we don't say "nouns are located over here in Hilbert space, while verbs are located over here".
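To make that concrete, here is a minimal sketch (reading "Minkowski" in the data-analysis sense of the p-norm family; the response vectors and the 1-5 coding are invented for illustration):

```python
import numpy as np

def minkowski_distance(a, b, p=2):
    """Minkowski (p-norm) distance between two coded response vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

# Two hypothetical respondents answering five Likert items coded 1-5.
resp_a = [5, 4, 2, 1, 3]
resp_b = [4, 4, 5, 2, 3]

for p in (1, 2, 4):
    print(f"p={p}: distance = {minkowski_distance(resp_a, resp_b, p):.3f}")
```

The point is only that the "distance" between two respondents (and anything built on top of it) changes with the geometry you impose; nothing in the data itself privileges one choice of space.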
All quantum mechanics is deterministic. It's the only way to say anything about the states. "Hilbert space" is simply a particular generalization of Euclidean geometry. And saying that quantum mechanics is "deterministic in Hilbert space" isn't saying anything at all, as all formulations, functions, and other mathematical models of quantum systems require complex planes and some mathematical formalism to describe them; they are also all deterministic.


the time evolution of a closed system is unitary and reversible.
Which conflicts with the theory.
The indeterminism appears when your quantum system is not closed
Like in Everett's interpretation? Or every single current understanding of decoherence? Or every single experiment ever done with quantum systems?

It is only in the ontology of this classical, 4D view of reality that quantum theory is indeterministic.

Really?
1) This "4D reality" isn't "classical" but the result of relativistic physics, which remains at odds (and perhaps in contradiction to) quantum physics. The attempts to tie the two together have tried to incorporate the first most successful theory in physics with the second have either been empirically invalidated or have yet to be.
2) That same "inference" process you refer to the process which gives us the system's "state" (or all of quantum theory for that matter) is what makes it ontologically indeterministic. Neither Everett nor anyone else has changed that. But quantum systems have always been described deterministically. Otherwise we would be utterly unable to say anything about final states.
3) You said you were fine with counterfactual indeterminacy. Which means that not only are quantum systems indeterministic, it is useless to measure them.



Quantum theory dictates that no deterministic model can describe the state produced in the classical view of the universe, for it is indeterministic.
Every single experiment is performed using classical devices in the classical world. If we could deterministically account for quantum systems, we wouldn't need quantum theory. The reason we "infer" that quantum systems exist is because our observations conflict depending on how we set them up. Which means that every single quantum experiment involves determining what measurements we will find in ways we can't know.


Because practical experiments have measurement errors that are far higher than the theoretical requirements.

That's because they make up the measurement errors. They invent them. There are always measurement errors. Do you really believe that measurement error is the measurement problem? The term "classical physics" wouldn't exist if this were true. The "measurement problem" is that we haven't a clue what the measurement error is, because we construct it to start with, based on an inferential process which dictates that we can't define what the measurement error actually is. Period.



Do we ever have that? We have measurement uncertainty to deal with.

But we don't. That's my point.

Your device is made of electrons and protons and sub-atomic particles. It is, ontologically, quantum too.

Yes. Hence the entire problem. We talk about classical limits and measurement errors of systems we construct based on inferences which contradict them, and about a decoherence process we pretend (experimentally) creates "measurement" but which we know (from inference and theory) is fundamentally dependent on the interactions between quantum systems and the "macroscopic reality" we describe in terms of a non-existent classical limit.
If velocity is only relative
It isn't.

There is no explicit splitting, only arbitrarily large divergence.
Which makes every measurement or experiment meaningless.

I'm not sure how this question makes sense in perspective of continuously varying branches.
Because these "branches" came about as a result of "our experience" (the entire attempt was to explain the results of "observation" in terms of the reality of the formalisms) and because in QM, the result is a determined state (while in the developed version of Everett's interpretation, this state is somehow "selected" out of infinitely-many, but somehow also meaningful, such that we can say something about the state of the system we never measured).

Are we sure that Born's rule isn't just an artefact of classical interaction?
We are sure that it is. We are also sure that Everett based his interpretation on it, as did those who developed his initial view.

...? Quantum theory disagrees with itself?
Quantum mechanics is the description and formalisms describing the dynamics of quantum systems in terms of things like "initial" and "final" states. Quantum theory dictates that these descriptions cannot be true.


You're denying that things are themselves here.

I'm not. I'm saying that what we describe as the "things" are defined in contradiction to what makes us think there is a quantum reality at all.
 

idav

Being
Premium Member
My dear friend, IMO, the time is ripe for you to by-pass the thoughts-words 'particle' or 'wave' etc. etc. Time is ripe to meditate on these thoughts-words as mere products of awareness.

:yes:

Life is a universal wave function whether it is or isn't a substance of some sort. The labels help picture what's going on.
 

PolyHedral

Superabacus Mystic
Quantum theory developed as a means to understand this experiment. To say it "tested" quantum theory is like saying I can "test" Newton's theory of gravity by dropping apples.
But you can. It's not even circular if you derived the theory from anything apart from the apple falling in that particular place. (The facts that gravity affects all objects equally regardless of mass, and that Earth's gravity varies little across its surface, both require verification.) It's not a very interesting test, since you have every confidence to think it'll work, but it is nonetheless a test.

Because your theory must (within margins of error) agree with every experiment. Otherwise it is not a complete theory. Every experiment therefore constitutes a test of every theory which comments on the experiment's results.
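As a toy illustration of that sense of "test" (every number below is invented; the drop height, timings, and error bar are assumptions for the sketch):

```python
import math

g_predicted = 9.81                         # m/s^2, Newtonian prediction near Earth's surface
height = 2.0                               # m, assumed drop height
measured_times = [0.65, 0.63, 0.66, 0.64]  # s, invented timing data
timing_error = 0.02                        # s, assumed instrument uncertainty

predicted_time = math.sqrt(2 * height / g_predicted)
mean_time = sum(measured_times) / len(measured_times)

# The "test": does the prediction fall inside the measurement's error band?
agrees = abs(mean_time - predicted_time) <= timing_error
print(f"predicted {predicted_time:.3f}s, measured {mean_time:.3f}s, agrees: {agrees}")
```

Dull, as tests go, but it is still theory meeting data within a stated margin of error.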

A state we determined to an unknown degree by the experimental devices themselves...we are talking about the states of our mathematical system and not ever dealing with the quantum system).
I can describe the double-slit set-up in entirely mathematical terms, and produce a correct answer in entirely mathematical terms. There's only one point where the translation can fail: the translation of the electron gun and slits into geometry we can do quantum in. "Assume an electron (which is understood to be a cloud of complex numbers) travels down a pipe of width w with energy E..." When you do the calculation, Born rule included, you wind up with the probability that the electron strikes the screen in a given place. This probability is correct when the experiment is repeated. Repeating it with any other type of particle also produces correct results.
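Something like the following sketch, assuming a crude two-path (far-field) approximation and invented numbers for the wavelength, slit separation, and screen distance:

```python
import numpy as np

wavelength = 5e-8   # m, made-up de Broglie wavelength
d = 1e-5            # m, assumed slit separation
L = 0.1             # m, assumed distance to the detection screen
k = 2 * np.pi / wavelength

x = np.linspace(-2e-3, 2e-3, 1001)   # candidate positions on the screen

# Path length from each slit to each screen position.
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

# Superpose the two path amplitudes, then apply the Born rule.
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
probability = np.abs(amplitude) ** 2
probability /= probability.sum()     # normalise over the sampled screen positions

print("most likely detection position:", x[np.argmax(probability)])
```

Nothing in it but geometry, complex arithmetic, and the Born rule, which is the point.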

Did we "make up" the geometry of the electron gun? The probabilities? What have we arbitrarily decided that isn't borne out by its accuracy to observed data? :shrug:


First, "firing" a single electron through slits means that before we even reach the detection device we've fundamentally altered the system. That's what the double-slit experiment and later variants tell us.
So if we change the geometry, we wind up with a different answer. I'm not following why we should be surprised - we already know there is wave behaviour.

I also don't see how we've "altered" the system by firing an electron down an electron gun. Since the gun works via voltage differential, it doesn't significantly measure anything about the electron except its velocity, which doesn't have a significant effect on what we're trying to study.

Second, we can arbitrarily change either the set-up, or the mathematical formalism describing the mapping of states and the variables, or both, such that whatever we decide is the "right" result can be the "wrong" one too, without even changing the experiment.
Show me. Preferably in your own words. :p

Because all we know is that somehow our experimental set-up involves some probability that we will end up with certain measurements vs. others, but we don't know how.
You don't know how, because you're denying the realism of quantum mechanics. We end up with the probabilities we do because the wave function dictates we do.

But we can't specify the state of a system defined by "quantum processes that are literally unobservable". So we invent it.
This is very much the point of doing the theory in algebra. We know lots about the initial state - for instance, the particle's "direction" (as much as that makes sense in this context), approximate energy, and rest mass. Do we need anything else?

REALLY!!?
Get back to me on this one. I have a potential answer, but it needs some consideration. :p

Here's the interpretation within actual quantum theory: the electron doesn't ever travel through the slits...
I do mean "fire" in an approximate sense - the most dense part of the cloud "went through" the slit. (But of course we can't measure that without breaking everything.) Since, of course, the cloud technically spans the majority of the universe, it's not going to all pass through the slit.

Also, you appear to have just said, "the results vary based on the experimental setup." Well, ...yes? :shrug:

It's that whatever pattern we get is not the result of the actual paths taken, nor even a result we determined simply by cutting slits so that we would only detect certain paths it didn't actually take (via Feynman); rather, even if we try to detect the "path(s)" after they are taken, they won't be there.
3D probability wavefunctions don't take defined paths, so isn't that the expected result?


Because increasingly sophisticated devices removed the bastion we dreamt up to protect the "classical world" and pretend we could derive classical mechanics from quantum.
A macroscopic quantum result is not a counterexample to deriving classical mechanics from quantum. A counterexample would be a situation where classical and quantum mechanics disagree, and classical is correct. Would you like to cite such a situation?

The wavefunction of a closed system. It evolves in a time-reversible and unitary fashion.
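For what "unitary and reversible" amounts to in the formalism, a minimal sketch with a made-up two-level Hamiltonian (hbar set to 1; the numbers are purely illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Made-up Hermitian Hamiltonian for a two-level closed system.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |0>

t = 2.0
U = expm(-1j * H * t)                        # unitary time-evolution operator
psi_t = U @ psi0

# Unitarity: the norm is preserved; reversibility: evolving backwards recovers the start.
print("norm preserved:", np.isclose(np.linalg.norm(psi_t), 1.0))
print("reversible:", np.allclose(expm(1j * H * t) @ psi_t, psi0))
```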


they are also all deterministic.
Except for the probabilistic element of which value you end up with from your measurement. :shrug:

Which conflicts with the theory.
How? Why?

Like in Everett's interpretation? Or every single current understanding of decoherence?
The universe is closed by definition, so no.

Or every single experiment ever done with quantum systems?
Are those closed?

1) This "4D reality" isn't "classical" but the result of relativistic physics,
Relativity is classical in the context of QM. Also, the unification will definitely be 4D or more, because it has to incorporate both.

3) You said you were fine with counterfactual indeterminacy. Which means that not only are quantum systems indeterministic, it is useless to measure them.
What? It means that quantum systems are indeterministic in a known, predictable way.

The reason we "infer" that quantum systems exist is because our observations conflict depending on how we set them up. Which means that every single quantum experiment involves determining what measurements we will find in ways we can't know.
Your models conflict. Experimental data cannot conflict; it is data. By definition, all observations are correct, because that is the core of science. The fault must lie in our understanding - either of what the experiment is doing, or what the data tells us of the model underlying it.

That's because they make up the measurement errors. They invent them.
...? And no scientist in the world has noticed fabrication of data? (Because your error margin is data as much as your actual values are.)

Do you really believe that measurement error is the measurement problem?
No. I also never said that. Keep your quote contexts straight.

But we don't. That's my point.
I don't mean measurement errors in the uncertainty principle. I mean measurement error as in engineering flaws in your devices.

It isn't.
Excuse me a moment; I must snark.
[snarky image macro]

However, I did choose that particular example explicitly because GTR tells us that velocity is relative. You're going to have to expand if you want me to believe you.

while in the developed version of Everett's interpretation, this state is somehow "selected" out of infinitely-many,...
Everett's interpretation is designed around no "selecting" going on. Every version "happens" to some degree. That's the whole point.

We are sure that it is. We are also sure that Everett based his interpretation on it, as did those who developed his initial view.
So to describe the universal wavefunction, we don't need the Born rule. Hurray. Everything classical is just an approximation anyway. :p

Quantum mechanics is the description and formalisms describing the dynamics of quantum systems in terms of things like "initial" and "final" states. Quantum theory dictates that these descriptions cannot be true.
You are the only one I have ever seen make a distinction.


Also, I have a relevant, simple example! Let me describe the mechanics of a game called Kingdom of Loathing. In it, characters have 3 stats, each of which has one substat level. The idea being that when you accumulate a certain level of a substat, you gain a point in the corresponding stat.

However, the way you gain substats from monster combat is interesting here. Monsters have, through other mechanics, a certain number of experience points they deliver when defeated, which is then distributed between the 3 substats. ...The problem is, this number of experience points is not usually divisible evenly into the stat distribution (which FWIW is always 2:1:1 in some order) and it is not possible to gain a fractional substat point.

To solve this, the devs have been clever - a fraction represents a probability. If the naive division means you should be gaining 2.25 of a stat this turn, you find that you gain 2 of the stat 75% of the time, and 3 of the stat 25% of the time.
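A minimal sketch of that rule (the function name and the long-run check are mine, not the game's):

```python
import random

def award_substat(fractional_gain: float) -> int:
    """Stochastically round a fractional substat gain to a whole number."""
    whole = int(fractional_gain)
    remainder = fractional_gain - whole
    # e.g. 2.25 -> 2 with probability 0.75, 3 with probability 0.25
    return whole + (1 if random.random() < remainder else 0)

# Long-run check: the average awarded amount converges on the fractional value.
gains = [award_substat(2.25) for _ in range(100_000)]
print(sum(gains) / len(gains))   # ~2.25
```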

All of this is, of course, derived by player experiment, with no help from the developers. There is no suggestion that this is going on in the actual in-game text. The reason this is important in the meta-game is because it's possible to increase the monster experience in small increments.

Since the metaphor at this point should be obvious, tell me - what's the difference between this artificial example of a quantized system, and actual quantum physics? :D
 

LegionOnomaMoi

Veteran Member
Premium Member
It's not a very interesting test, since you have every confidence to think it'll work, but it is nonetheless a test.

A test is meaningless unless the interpretive framework (which includes how to set up the experiment) is sound. For example, the director of the lab I worked at until I had to move held a seminar for the grad students and PhDs. Every meeting there would be some studies that everyone was required to read, and a bunch more which were suggested, but only particular presenters had to read one of them. And each time, the director would assign studies (usually neuroimaging) which "confirmed" a particular theory about cognition ("embodied cognition"), and try to get everyone to rip it to shreds (he's been around almost as long as Chomsky, and has a similar reputation within neuropsychology, so he's a bit set in his ways). At one point, however, when we were dealing with newer (and therefore more sophisticated) neuroimaging experimental designs and data analysis, he raised the possibility that even though he believed none of these studies actually showed what the authors stated, it might be that eventually all the "problems" he had identified in each one would be resolved and the experiments would still not find what he felt theory dictated must be true, in which case it would mean there is something wrong with the way we understand something like fMRI data. So I asked "if experiments continually show what you don't think to be true, how do you determine whether or not the problem is with your experimental procedure, vs. your theory itself?" His response was basically that it depends on how "strongly" your theory is supported. Which is basically a mix of how much evidence you think there is and how biased you are towards a particular interpretation. And it's true of every science there is. Every hypothesis is tested within a particular framework, which dictates what is a valid set-up vs. one which will bias your results, and what your interpretation should be.

"Classical" physics had a particular framework. It was deterministic, and involved things like systems of particles or waves or both and how interactions occured. One experiment (Young's) within this framework showed that light was a wave. It confirmed a particular hypothesis within the framework of physics. Einstein did the same, only with the opposite result. All of a sudden, the oldest and most respected/envied leading sciences (with the oldest and most complete record of success, and with the most supported, validated, and empircally sound interpretative framework of the sciences), had an obvious contradiction. Two experiments with results which could not exist within that framework (i.e., all of physics).

But it was worse than that. Because when physicists like Einstein tried to reconcile the results and somehow extend, or alter in a more typical way, the framework within which physics existed, the mix of experiments, debates, and theoretical foundations made things even worse. One of the reasons (other than the history) that physics was held in such esteem (especially by physicists) was that it was "easy" to test. You have a system, you can measure any state with arbitrarily high precision, and you don't have to deal with things like human responses, or the complexity of life, or semi-arbitrary classification schemes, or any number of similar problems just about every other science faced.

And that stopped. Dead. But there was nothing to replace it. Now there is a theory about this "world" which exists, and from which the classical picture somehow emerges, but we can't test it. We can't set up some isolated system, run it, and measure its state (even if this means disturbing it). Because everything we do somehow determines what we are going to find, and we don't know how. That's why this:
I mean measurement error as in engineering flaws in your devices.
is exactly what I meant. We make these up. We don't have any way of knowing what the measurement error is, because the uncertainty principle means that we can't tell in what ways our devices determine the results we will get, other than that we know they have to, and we know they have to in ways we cannot determine.

But we do know one thing. The resolution (the "new" framework) wasn't really new. The hope was that whatever this "weirdness" intrinsic to the subatomic world was, it stayed there. Plus, we found that we could explain the initial results if we simply combined two contradicting aspects of the old framework, smooshed them together, and hoped eventually whatever kinks existed in this solution would be worked out. Only that never happened, nor was it satisfying to begin with. When your fundamental framework says "waves act like this" and "particles act like that", and you find out that neither is true, but that you can continue to explain your results (sort of) by combining the contradicting aspects of your old framework, you're going to run into problems. It's like saying "well, certain observations seem to support the idea that we're the center of the universe, and others that the sun is but is stationary, and still others that it's all moving, so we're going to say all of the above is true." And that's what happened.

You have, I believe, in other threads talked about "logic" being independent of any human invention. That is, something like tertium non datur (excluded middle) is true whether or not we are there to think it. But the solution to the problems with the old framework was to violate that logic. Quite literally (there is a massive amount of literature on ontological "vagueness" and "quantum logic" thanks to this). Because we literally chose a third way: it's not a wave, but it acts like one sometimes, and it's not a particle, but it acts like one, and so we'll say it's both and neither. Why? Because when we combine the mathematical models from the old framework, we can explain our results, even though now we have a theory that fundamentally conflicts with how we are using the math.

Because your theory must (within margins of error) agree with every experiment.

The margins of error are built into the theoretical framework, regardless of the science. Young really did "prove" that light was a wave, and Einstein that it was not. All within the margins of error of that framework. It's just that for the first time, there was clearly a contradiction. The same experiments, using the same methods for determining margins of error, for understanding "measurement", and for interpreting data, led to contradictory conclusions. "Margins of error" is just shorthand for "if we assume that our theoretical basis for understanding things the way we do is correct, then here's how likely it is our results support our theory."

Otherwise it is not a complete theory. Every experiment therefore constitutes a test of every theory which comments on the experiment's results.

If you don't mind answering: you are a computer scientist/programmer, correct? If so, do you conduct research? Not that this would necessarily matter, but the more one tends to work in a research field, the more one tends to be familiar with how possible it is for competing theories to be "complete", supported by experimental data, and exist in contradiction to one another. Interpretation is ubiquitous. Every "observation" entails certain assumptions, whether you are observing a rat in a maze, a computer simulation of a neuron, or working at CERN.


I can describe the double-slit set-up in entirely mathematical terms, and produce a correct answer in entirely mathematical terms.

What you cannot do, however, is determine your margin of error.
 

LegionOnomaMoi

Veteran Member
Premium Member
Did we "make up" the geometry of the electron gun? The probabilities?

Yes. It began by resolving the paradoxical results of two conflicting interpretations by incorporating the deterministic mathematics of classical physics into some hybrid which could explain the results of an experiment. Then came Feynman, who said that our result (the pattern) wasn't just a matter of what we were detecting based on "where" electrons or photons or whatever went, but was simply wrong. We were "observing" by making slits in the first place, which determined what we would find by changing the state of the system before we observed it. The "path integral" means that every pattern we detect is determined by changing the states of the system in a way not reflected in the wavefunctions, the initial specifications, the initial "state", or the final state. The actual trajectories are all possible routes through the screen and to all points on the detector. And Wheeler introduced the idea that even if we tried to "undetermine" the paths by "observing" after the electron or whatever actually took some path, we would never find the "superposition state" or anything "quantum". And with new sophisticated set-ups, we can actually implement Wheeler's "delayed-choice" experiment. We have. Over and over again, in different ways. And each time we find the same thing: we can detect conflicting "states" at the same time, and there is no theoretical limit to how many different conflicting states we could detect at any given time.


we already know there is wave behaviour.
Which we determine. And there is "particle" behavior. Which we also determine. And we can find both, at the same time, by determining them. Which means that every experiment involves an unknown margin of error between initial and final states, because each is determined by every part of the experiment, and all we have are inferences from this indirect treatment of an indeterministic system we treat deterministically.


I also don't see how we've "altered" the system by firing an electron down an electron gun.

Because we never describe this. You quoted Wikipedia before about the "states" of the system. How are our specifications used to transcribe the "system" into a wavefunction such that we have anything to say about whatever it is we "fire"?


You don't know how, because you're denying the realism of quantum mechanics. We end up with the probabilities we do because the wave function dictates we do.

What "realism"? According to the "many-worlds" interpretation, there are no probabilities. We always get every possible result, but we only end up with the single state in our universe. Of course, because that state is determined in advance by the specifications used transcribing the wavefunction, how did we get it out of some infinite possible universes in which we somehow made any "meaningful" measurement which allows us to say anything at all?
We know lots about the initial state - for instance, the particle's "direction" (as much as that makes sense in this context) approximate energy, and rest mass. Do we need anything else?

No. But it would be nice to have any of the above. Because we don't.


3D probability wavefunctions don't take defined paths, so isn't that the expected result?

A probability function describes the probability of something happening. If the wave function describes the actual system, it isn't a probability function. It's a nonlocal system in multiple places at once taking multiple trajectories. The trajectories we say it takes correspond to the way in which we transcribed the "system" to get an initial state and the way in which we determined the trajectories ahead of time, contradicting quantum theory itself.


A macroscopic quantum result is not a counterexample to deriving classical mechanics from quantum. A counterexample would be a situation where classical and quantum mechanics disagree, and classical is correct. Would you like to cite such a situation?

Sure. As soon as you tell me what either means. "Classical" mechanics is the name we give to an approach which we found stopped working at some "fuzzy" point, and which not only didn't work, but was completely contrary to our entire framework. "Quantum" mechanics is the framework we use to describe experiments which contradict the actual theory itself. And when I cited a peer-reviewed journal article in which the author stated this, you said he was wrong because Wikipedia talked about quantum states.
"This is the measurement paradox: the process of measurement cannot be described by standard quantum dynamics."
Ellis, G. F. (2012). On the limits of quantum theory: Contextuality and the quantum–classical cut. Annals of Physics.

You ask for citations, but then you ignore them, so why ask?



Except for the probabilistic element of which value you end up with from your measurement. :shrug:

Which is classical determinism. There's always a "margin for error", and we determine these in quantum systems by pretending we have classical systems.

Inference leads us to believe, as shown by Feynman, Wheeler, and everyone since Bohm, Heisenberg, & Bohr, that quantum systems cannot be observed. Any observation determines the result. Which means we cannot start with a known quantum state, because the only way to know it is to have some way of observing it. But doing so destroys the quantum system. So we invent the state, and use the same inference which informs us that quantum reality is fundamentally, ontologically, and "really" indeterministic to develop mathematical methods which allow us to ignore our own theory and perform experiments which violate the theoretical framework of quantum theory.


The universe is closed by definition, so no.
The indeterminism appears when your quantum system is not closed - unlike the universe - and includes an interface to a classical, measuring object. It is only in the ontology of this classical, 4D view of reality that quantum theory is indeterministic.
This is backwards. Quantum mechanics treats systems as closed, deterministic systems. However, decoherence depends on quantum systems being open:
"over the past three or so decades it has been slowly realized that the isolated-system assumption—which, as we have described above, had proved so fruitful in classical physics and had simply been taken over to quantum physics—had in fact been the crucial obstacle to an understanding of the quantum-to-classical transition. It was recognized that the openness of quantum systems, i.e., their interaction with the environment, is essential to explaining how quantum systems (and thus, assuming that quantum mechanics is a universal physical theory, all systems) become effectively classical: How their “quantumness” seems to slip out of view as we go to larger scales, finally arriving in the world of our experience where cats are either alive or dead but are never observed to be in a superposition of such two classically distinct properties." (Decoherence and the Quantum-to-Classical Transition; Springer, 2007).

The problem with the above is that while it is now generally accepted to be the correct approach (somehow), it doesn't resolve anything. Because "to observe interference effects between components of a superposition state, these components must have not been measured, i.e., which-state information must not be available." (ibid).


What? It means that quantum systems are indeterministic in a known, predictable way.

Which is not indeterminism, and is absolutely not counterfactual indefiniteness. Counterfactual indefiniteness literally means "if I didn't observe what I just did, what I observed wouldn't be there". It's why Einstein asked "is the moon there when you don't look at it?"


By definition, all observations are correct
Can the following describe "correct" observations?
"Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away..."

Someone observed something which wasn't there. That's counterfactual indefiniteness.

...? And no scientist in the world has noticed fabrication of data? (Because your error margin is data as much as your actual values are.)

No, every single physicist knows this. And every single scientist interested in quantum theory learns it.



Excuse me a moment; I must snark.

Hey, if it's done well (and 'twas) no problem.

However, I did choose that particular example explicitly because GTR tells us that velocity is relative.

No, it tells us that the observation is relative. And we know with respect to what. Everett et al. took the determinism of quantum formalisms and said that all possible "states" existed, but the one we ended up with in our universe is the result of our measurement. However, although Everett intended to "take it like it is", the problem is that quantum formalisms are deterministic: we start with an initial state and a function which allows us to "know" at the very least a far smaller probability range than the "many-world" approach dictates is involved in every measurement. Which means we can't get the states we do using this interpretation.


Everett's interpretation is designed around no "selecting" going on.
What are you basing your understanding of Everett on?
 

PolyHedral

Superabacus Mystic
So I asked "if experiments continually show what you don't think to be true, how do you determine whether or not the problem is with your experimental procedure, vs. your theory itself?"
I'm not sure how useful "interpretation" is in terms of predictable physics. Sure, you can disagree with the reality of the wavefunction, or whether or not the cat is "actually" alive and dead simultaneously, but no matter which way you interpret any given physical theory, apples fall, planets spin, and electrons make patterns on screens. Your theory must agree and predict such things, otherwise it is not correct physics. If you have a theory that the discrete electron is delivered to the screen via fickle quantum pixies, that's still valid physics as long as you get the same answers out at the end.
"Classical" physics had a particular framework. It was deterministic, and involved things like systems of particles or waves or both and how interactions occured. One experiment (Young's) within this framework showed that light was a wave.
It confirmed a particular hypothesis within the framework of physics.
Young's experiment with single photons can't be done inside classical physics at all, because the concept of photons doesn't exist there. The only classical construction of light is Maxwell's equations, which produce waves which are infinitely divisible. The fact that both photons and electrons produce discrete dots on the screen which nonetheless form interference-like fringes cannot be formulated in any classical theory's ontology - they deal with point-like particles or continuously variable waves.

Young's experiment can't confirm a classical hypothesis, because classical theories can't describe the result. It didn't say, "Light's a wave," it said, "Your theory doesn't work; try again."

Because everything we do somehow determines what we are going to find, and we don't know how.
We can test it, even though our experimental apparatus is technically not isolated. (We knew that from classical thermo- and electrodynamics anyway.) We do know how, otherwise we wouldn't be able to predict anything.

We make these up. We don't have any way of knowing what the measurement error is, because the uncertainty principle means that we can't tell in what ways our devices determine the results we will get, other than that we know they have to, and we know they have to in ways we cannot determine.
The uncertainty principle is hardly relevant, considering your devices can't measure that accurately. (AFAIK)

When your fundamental framework says "waves act like this" and "particles act like that", and you find out that neither is true, but that you can continue to explain your results (sort of) by combining the contradicting aspects of your old framework, you're going to run into problems.
I thought we threw both out and invented a third class of object that explained everything consistently.

It's like saying "well, certain observations seem to support the idea that we're the center of the universe, and others that the sun is but is stationary, and still others that it's all moving, so we're going to say all of the above is true." And that's what happend.
And we made it consistent, so there! :p

Because we literally chose a third way: it's not a wave, but it acts like one sometimes, and it's not a particle, but it acts like one, and so we'll say it's both and neither.
It'd be inaccurate to say it is both; it is neither - it is a third class of object.

If you don't mind answering: you are a computer scientist/programmer, correct? If so, do you conduct research? Not that this would necessarily matter, but the more one tends to work in a research field, the more one tends to be familiar with how possible it is for competing theories to be "complete", supported by experimental data, and exist in contradiction to one another.
Interpretation is ubiquitous.
Not in computing, it isn't. However, that's because computing as such has no equivalent to physical theories - it's based on mathematics and logic. Even in a practical sub-field like AI, because your theory is written in mathematics, no interpretational ambiguity can be present. (If, meanwhile, there's ambiguity in the semantics, ur doin' it wrong.)

And Wheeler introduced the idea that even if we tried to "undetermine" the paths by "observing" after the electron or whatever actually took some path, we would never find the "superposition state" or anything "quantum".
Wheeler's experiment doesn't let you detect conflicting states at the same time - that would require the same photon to somehow hit the telescopes and the screen at once.

How are our specifications used to transcribe the "system" into a wavefunction such that we have anything to say about whatever it is we "fire"?
I don't understand this question.

We always get every possible result, but we only end up with the single state in our universe.
Because there's many observers, each of which get a consistent universe. (The probabilities re-arise if you stick a probability measure across the wavefunction.)

No. But it would be nice to have any of the above. Because we don't.
We don't know an electron's rest mass?

It's a nonlocal system in multiple places at once taking multiple trajectories.
You're right - the wavefunction is one of complex numbers, but the probability function is trivially constructable from it. What makes you think it's nonlocal, though? :p
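Constructing it really is a one-liner; a sketch over a made-up discrete state vector:

```python
import numpy as np

# Invented (unnormalised) wavefunction amplitudes over four basis states.
psi = np.array([1 + 1j, 0.5, -2j, 0.25 - 0.75j])

# Born rule: probabilities are the squared moduli, normalised to sum to 1.
prob = np.abs(psi) ** 2
prob /= prob.sum()

print(prob, prob.sum())
```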


"Quantum" mechanics is the framework we use to describe experiments which contradict the actual theory itself.
Classical mechanics is also the name of the physical theories that range in complexity up to and including Relativity. Quantum mechanics is the theory describing the behaviour of wavicles.

Which is classical determinism. There's always a "margin for error", and we determine these in quantum systems by pretending we have classical systems.
We determine them by applying the mathematics we've determined produce correct answers. :shrug:

Which means we cannot start with a known quantum state, because the only way to know it is to have some way of observing it. But doing so destroys the quantum system.
Which is fine, since it's our initial state. We can then let it evolve.

"over the past three or so decades it has been slowly realized that the isolated-system assumption—which, as we have described above, had proved so fruitful in classical physics and had simply been taken over to quantum physics—had in fact been the crucial obstacle to an understanding of the quantum-to-classical transition.
Closed quantum systems don't have to transition into classical reality.

How their “quantumness” seems to slip out of view as we go to larger scales, finally arriving in the world of our experience where cats are either alive or dead but are never observed to be in a superposition of such two classically distinct properties." (Decoherence and the Quantum-to-Classical Transition; Springer, 2007).
IOW, the system becomes indeterministic when it interacts with classical objects. Which is what I said.

The problem with the above is that while it is now generally accepted to be the correct approach (somehow), it doesn't resolve anything. Because "to observe interference effects between components of a superposition state, these components must have not been measured, i.e., which-state information must not be available." (ibid).
i.e. you must not have measured it. Isn't this what I've been saying all along?

Which is not indeterminism, and is absolutely not counterfactual indefiniteness. Counterfactual indefiniteness literally means "if I didn't observe what I just did, what I observed wouldn't be there". It's why Einstein asked "is the moon there when you don't look at it?"
The actual words "counterfactual definiteness" mean been able to get a definite answer for an experiment you haven't done yet.

Someone observed something which wasn't there. That's counterfactual indefiniteness.
That's your sensory apparatus playing up, not counterfactual anything.

No, every single physicist knows this. And every single scientist interested in quantum theory learns it.
Shall I fetch the tinfoil hats? :D

No, it tells us that the observation is relative. And we know with respect to what.
Oops, I should've specified 3-velocity. Is that still what you mean?

However, although Everett intended to "take it like it is", the problem is that quantum formalisms are deterministic: we start with an initial state and a function which allows us to "know" at the very least a far smaller probability range than the "many-world" approach dictates is involved in every measurement. Which means we can't get the states we do using this interpretation.
I don't understand what you mean. MWI says you get exactly the number of worlds as there are basis states produced by the measurement. :shrug:


What are you basing your understanding of Everett on?
My understanding seems to have diverged from Everett. :p
 

LegionOnomaMoi

Veteran Member
Premium Member
How can you conflate these statements without being inconsistent?

IOW, the system becomes indeterministic when it interacts with classical objects. Which is what I said.

i.e. you must not have measured it. Isn't this what I've been saying all along?
&
I don't understand what you mean. MWI says you get exactly the number of worlds as there are basis states produced by the measurement. :shrug:

You can't have "states" produced by a "measurement" if you can't measure anything. Also, there's no such such thing as a "closed quantum system" in quantum mechanics such that this makes sense:
Closed quantum systems don't have to transition into classical reality.

Which seems to be again a rather fundamental problem of disconnect here. Which also seems to explain this:

Which is fine, since it's our initial state. We can then let it evolve.

So let me illustrate using a simple example which is basically what that "initial state" and "evolution" consist of. Say I build an infiltration AI system I designate Cyberdyne Systems Model 101 (and for some reason I give it an Austrian accent). But it has no weapons, so I give it a shotgun. I am interested in knowing how well my infiltration AI unit (hereafter referred to as "the Terminator") can hit a target using this shotgun. So I put a bunch of targets in my live-fire house and tell the Terminator to go in & shoot at these.

Then I go in to examine how well the shot spread covers the center of each of the targets. And I realize I made a mistake. If the Terminator was standing two feet away from each target, then I can't really say it performed well. It's practically impossible to miss. If, on the other hand, there were times when the Terminator was firing on the move, from around 50 meters, at targets angled away from its forward direction, and it still managed to get the shot spread fairly close to the center of those targets, that would be really impressive.

For every target, I have some "measurement" which tells me where the shots landed. I also know that for each target, the Terminator had to be within certain spatial parameters and moving at speeds also defined within certain ranges (unlike my plans for my T1000, which can run really fast, the Terminator isn't all that quick). So for every target, I make up where the Terminator was when it fired at that target, the firing angle, the Terminator's speed, etc. I then combine all these into a single "average" score based on my made-up initial states (all the variables I picked out of a far wider set of possibilities which could have been initial states) and what I see on the target.

I do have a "margin for error" in some sense, but it's so incredibly large it's worthless. I have no way of knowing if a "decent" shot was the result of the Terminator carefully getting into a stable position, taking its sweet time aiming, and then firing, all from a distance of 3 meters, or whether that shot was taken on the run with barely a glance while the Terminator was heading full speed at a 130 degree angle away from the target. My initial state is pretty worthless, but without it my measurement is meaningless and I can't get a "final state" (accuracy).


That's what happens in QM. But having made a simple example, I'll now give a far more precise and technical description from Exploring the Quantum: Atoms, Cavities, and Photons (Oxford Graduate Texts; Oxford University Press, 2006):
[scanned image: wave functions in position and momentum spaces]
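In case the scan is hard to read, here is roughly what it contains, in bare LaTeX (my own transcription, so the notation may differ from the authors'):

[code]
|\psi\rangle \;=\; \int dx\;\psi(x)\,|x\rangle,
\qquad
\tilde{\psi}(p) \;=\; \frac{1}{\sqrt{2\pi\hbar}}\int dx\; e^{-ipx/\hbar}\,\psi(x),
\qquad
P(x) \;=\; |\psi(x)|^{2}
[/code]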

"In layman’s language, we may say that the wave function describes the state of the particle suspended, before measurement, in a continuous superposition of an infinite number of possible positions."
(scanning was the only way I knew of to get the math in there). In footnote 3 (and actually in footnote 4 as well), the authors take care to point out that these aren't really "states" in the way we typically use the term. Also note that the "states" of an electron vary widely, and finally that before measurement, there are an infinite number of possible "positions" any given "particle" can be in. So how do we get an initial state that corresponds to anything we actually "prepare" using some devices? After dealing with the formalism the authors explain more of the experimental process itself (italics in original; emphases added):
"Using this apparatus on an arbitrary initial state, an experimenter (the ‘preparer’) measures the value (0 or 1) of the projector and in the case when he finds 1, records that the system is prepared in |psi>. In this way, the preparer knows the system’s state and can make all kinds of probabilistic predictions on the outcome of any measurement performed on it.
Can we say, however, that this quantum state, defined on a unique system, has an ‘objective reality’ ? A reasonable criterion of reality is that any other experimenter (a ‘measurer’, as opposed to the preparer), being given a single copy of this state and not knowing anything about the preparation, should be able to find out what the quantum state is. Clearly, the measurer is, under the conditions we have defined, unable to acquire this information, however clever he is. If he performs a measurement on the system, he obtains partial information, but the state is irreversibly and randomly modified. The probability amplitudes prior to the measurement are irremediably lost and no more information can be acquired about them. Hence, a wave function, or a spin state existing as a single copy cannot be determined by someone who has not prepared it (or who has not communicated with the preparer). This ‘lack of objective reality’ of the wave function of a single quantum system is a fundamental quantum feature (D’Ariano and Yuen 1996) which plays, as we will see, an important role in many quantum information procedures. It has an important corollary, the impossibility of copying exactly an unknown quantum state, a property known as the no-cloning theorem (Wootters and Zurek 1982). If such a copy were possible, quantum mechanics would be inconsistent. By making a large number of identical copies of a given quantum state, one could perform statistical measurements on the copies and deduce from them the wave function of the initial state, in contradiction with the ‘lack of objective reality’ that we have just defined.
We have restricted our analysis so far to ‘pure’ quantum states about which the preparer has maximum information. Usually, the situation is less ideal and even the preparer has an imperfect knowledge of the state and of its subsequent evolution, due to random perturbations or to the unavoidable coupling of the system to its environment."


Notice first that the system's "initial state" is arbitrary, and that how the system is "prepared" is determined after the initial measurement and experiment is over. However, we don't call it an experiment, but "preparation." Yet this preparation ensures that the "initial state" can't exist.

What, however, if we return to this statement:
It tested quantum theory - quantum theory tells us that a single electron fired through the slit will land in a given position some proportion of the time. Do the experiment lots of times, and that's what you will get. It also tells us that this pattern will not appear if you somehow measure which slit the electron went through.

The authors address this as well: "The difficulty arises of course when we ask the question: through which slit did a given atom cross the second screen? This is a natural question for a classical mind but a meaningless one in the quantum world. If no experiment is performed to measure the position of the atom when it crosses the second screen, this position has no physical reality."

And we can also finally address the "isolated system" problem. There is none. "It is merely postulated as a kind of ‘black box’ property of the measuring process, an attribute of the classical character of the meter which prevents us from describing it as a quantum entity".
 

LegionOnomaMoi

Veteran Member
Premium Member
The actual words "counterfactual definiteness" mean being able to get a definite answer for an experiment you haven't done yet.

They don't. The actual phrase refers to a type of conditional ontological statement. Counterfactual meaning "if [some protasis] were true, then [some apodosis] would be true". So, in this case (not simply a counterfactual but counterfactual definiteness, evolving still out of Lewis' modal logic) "if I hadn't looked at the moon, it would still be there."

I'm not sure how useful "interpretation" is in terms of predictable physics.
Absolutely vital.

Sure, you can disagree with the reality of the wavefunction, or whether or not the cat is "actually" alive and dead simultaneously, but no matter which way you interpret any given physical theory, apples fall, planets spin, and electrons make patterns on screens.

Apples fall? What's falling? And electrons also don't make patterns on screens. We can clearly observe this simply by placing a detection device at one of the slits. No more patterns. It's not about the reality of the wavefunction.
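Since we keep coming back to the two slits, here's a bare-bones numerical sketch of that point (an idealized toy I threw together, not a model of any real apparatus): sample dot positions from |psi1 + psi2|^2 and the fringes appear; once which-slit information is available the distribution is effectively |psi1|^2 + |psi2|^2 and the fringes are gone.

[code]
# Idealized two-slit toy (made up for illustration; arbitrary units throughout).
import numpy as np

x = np.linspace(-20.0, 20.0, 2001)       # positions on the screen
k, L, d = 20.0, 50.0, 5.0                # wavenumber, slit-to-screen distance, slit separation

def amplitude(slit_pos):
    r = np.sqrt(L**2 + (x - slit_pos)**2)
    return np.exp(1j * k * r) / np.sqrt(r)

psi1, psi2 = amplitude(-d / 2), amplitude(+d / 2)

fringes    = np.abs(psi1 + psi2) ** 2               # no which-slit information
no_fringes = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # which-slit information available

# Each "electron" lands as a single dot; the pattern (or its absence)
# only shows up in the accumulated statistics of many runs.
p = fringes / fringes.sum()
dots = np.random.default_rng(0).choice(x, size=5000, p=p)
[/code]

The only point of the toy is that the very same single-dot detections do or don't build up a pattern depending on whether the amplitudes are added before squaring.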

Young's experiment with single photons can't be done inside classical physics at all, because the concept of photons doesn't exist there.

Young did an experiment with light. Within the framework of his day, his experiment showed that light was a wave. It resolved the problem. Until Einstein explained another result in a contradictory manner. Which required a new framework, one that was itself internally inconsistent by basic logic and inconsistent with causality, and which, once developed, the next several decades repeated in theory while ignoring in practice. That's how the wavefunctions were developed: in contradiction to quantum theory, so that we could pretend there were isolated systems and a "classical limit" which we derived by calling classical observations "quantum observables".

The only classical construction of light is Maxwell's equations, which produce waves which are infinitely divisible.
Maxwell was only able to do this after Young showed that light was a wave.

The fact that both photons and electrons produce discrete dots on the screen which nonetheless form interference-like fringes cannot be formulated in any classical theory's ontology
Or any other. Because ontology means "what is". And something which is only what we make it behave like has no "reality". Decoherence now relies not on the "classical limit" but on trying to see in what ways and under what conditions we can detect quantum processes in some "fuzzy" range. No more pretend isolated systems. From the intro of Bell's Theorem and Quantum Realism: Reassessment in Light of the Schrödinger Paradox (SpringerBriefs in Physics; Springer, 2012):
"Whatever position one takes on the subject, quantum theory is certainly surprising in its radical break from other fields of physics. Not only does it exhibit indeterminism, but the theory entails an essential denial of objectivity, an abandonment of realism. The latter issue stems from the fact that quantum theory offers little description of physical systems apart from what takes place during measurement processes"

We can test it, even though our experimental apparatus is technically not isolated. (We knew that from classical thermo- and electrodynamics anyway.) We do know how, otherwise we wouldn't be able to predict anything.

No, it just limits what we can say about our predictions. Like in the example of my Terminator system. I know that it fired a shotgun inside the building. But as far as what I say the "initial" and "final" states are, my margin of error is so huge it's meaningless. For a while, the tiny scales of quantum reality, and the lack of anything other than thought experiments to get closer, defended the "classical limit" of vanishing constants. Until they didn't.

Wheeler's experiment doesn't let you detect conflicting behaviours at the same time - that would require the same photon to somehow hit the telescopes and the screen at once.

I linked you to an experiment which did this in a new way, compared to similar experiments which also carried out Wheeler's "thought experiment" empirically. We've been able to do this, and have done it, for years.


I thought we threw both out and invented a third class of object that explained everything consistently.

We didn't "invent" a third class of object. We combined two contradictory formal descriptions of classical objects and created a new formalism.



Not in computing, it isn't. However, that's because computing as such has no equivalent to physical theories - it's based on mathematics and logic.
All of science is. However, computer science does have more than applied research and computability theory. So I was wondering if you happen to work in such a field. But it's not all that important.


Even in a practical sub-field like AI, because your theory is written in mathematics, no interpretational ambiguity can be present. (If, meanwhile, there's ambiguity in the semantics, ur doin' it wrong.)

If there's semantics, there's ambiguity. Mathematics has no semantics. That's what "formal" means. You avoid interpretation because you deal with syntax. As soon as semantics enters the picture, you are dealing with "meaning" which necessarily means interpretation.



We don't know an electron's rest mass?

No.


Classical mechanics is also the name of the physical theories that range in complexity up to and including Relativity. Quantum mechanics is the theory describing the behaviour of wavicles.

Quantum theory is supposed to account for everything. Period. The fact that it isn't clear how to reconcile it with something as fundamental as relativity is a bit of a problem.



Closed quantum systems don't have to transition into classical reality.
Sure. They were doing fine not transitioning before quantum mechanics existed. Then we developed quantum mechanics, which is about calling classical observations "quantum" observations, calling equations "systems", ignoring the actual systems, and describing quantum reality in terms of an inferred and fundamental indeterminacy subsequently ignored in practice so that we get "isolated systems". Until, again, all that stopped working.


Shall I fetch the tinfoil hats? :D

Some reading of physics literature to confirm might serve better. But tinfoil hats are stylish I've heard.
 

PolyHedral

Superabacus Mystic
You can't have "states" produced by a "measurement" if you can't measure anything.
I mean, the number of branches produced by the wavefunction is the same as the number of outcomes that could be produced by a measurement. You don't have to do any measurements to get the splitting.

Also, there's no such such thing as a "closed quantum system" in quantum mechanics such that this makes sense:
Sure there is - the entire universe.

(The Terminator metaphor is being skipped because I don't feel it's accurate to the process, but to explain it to me, I'd have to understand how the metaphor accurately represents the process... so I don't need the metaphor. :p)

(scanning was the only way I knew of to get the math in there).
You can use this if you can speak LaTeX.

In footnote 3 (and actually in footnote 4 as well), the authors take care to point out that these aren't really "states" in the way we typically use the term. Also note that the "states" of an electron vary widely, and finally that before measurement, there are an infinite number of possible "positions" any given "particle" can be in. So how do we get an initial state that corresponds to anything we actually "prepare" using some devices?
I don't know, but the source you quote apparently does, since they see no issue with preparing a state. What's the problem with taking some eigenstate and then doing non-measurement things to it, and that counting as your "preparation"?

Can we say, however, that this quantum state, defined on a unique system, has an ‘objective reality’ ? A reasonable criterion of reality is that any other experimenter (a ‘measurer’, as opposed to the preparer), being given a single copy of this state and not knowing anything about the preparation, should be able to find out what the quantum state is.
I disagree with this criterion, so there. :p

Usually, the situation is less ideal and even the preparer has an imperfect knowledge of the state and of its subsequent evolution, due to random perturbations or to the unavoidable coupling of the system to its environment."
If there is no objective reality, then how can one's knowledge be perfect anyway? :shrug:

Notice first that the system's "initial state" is arbitrary, and that how the system is "prepared" is determined after the initial measurement and experiment is over. However, we don't call it an experiment, but "preparation." Yet this preparation ensures that the "initial state" can't exist.
There's no more arbitrariness in quantum mechanics than there is in any other theory.

The authors address this as well: "The difficulty arises of course when we ask the question: through which slit did a given atom cross the second screen? This is a natural question for a classical mind but a meaningless one in the quantum world. If no experiment is performed to measure the position of the atom when it crosses the second screen, this position has no physical reality."
You're attacking my wording more than my point, I think. I'm using "fired through the screen" as short-hand for the experimental set-up. It doesn't matter whether the particle actually goes through the slits, or jumps all over the universe in such a way that it ends up at the screen anyway.

And we can also finally address the "isolated system" problem. There is none. "It is merely postulated as a kind of ‘black box’ property of the measuring process, an attribute of the classical character of the meter which prevents us from describing it as a quantum entity".
I don't see how that quote demonstrates that there are no isolated systems. :shrug:

They don't. The actual phrase refers to a type of conditional ontological statement. Counterfactual meaning "if [some protasis] were true, then [some apodosis] would be true". So, in this case (not simply a counterfactual but counterfactual definiteness, evolving still out of Lewis' modal logic) "if I hadn't looked at the moon, it would still be there."
It is. It's also everywhere else. :p

Apples fall? What's falling? And electrons also don't make patterns on screens. We can clearly observe this simply by placing a detection device at one of the slits. No more patterns. It's not about the reality of the wavefunction.
And apples don't fall if you stick something under them. You know what I meant. :p I could translate what I meant by apples fall into pure data detected by my senses, but it'd take far too long. You would surely, however, agree such a translation is possible.

Maxwell was only able to do this after Young showed that light was a wave.
But the qualification was "with photons." We didn't know about photons in the original version of the experiment.


Or any other. Because ontology means "what is". And something which is only what we make it behave like has no "reality".
I can make my computer behave differently. Does that mean it has no reality?

"Whatever position one takes on the subject, quantum theory is certainly surprising in its radical break from other fields of physics. Not only does it exhibit indeterminism, but the theory entails an essential denial of objectivity, an abandonment of realism. The latter issue stems from the fact that quantum theory offers little description of physical systems apart from what takes place during measurement processes"
The wavefunction is objectively real. Done.

I linked you to an experiment which did this in a new way, compared to similar experiments which also carried out Wheeler's "thought experiment" empirically. We've been able to do this, and have done it, for years.
Um, yes. The experiment does not show you the same object behaving like a wave and a particle. It shows you that quantum objects can have both behaviours depending on context. Which is what you'd expect.

We didn't "invent" a third class of object. We combined two contradictory formal descriptions of classical objects and created a new formalism.
...forming a third description. Which is not contradictory at all, except with outdated notions of what reality is.

If there's semantics, there's ambiguity. Mathematics has no semantics. That's what "formal" means. You avoid interpretation because you deal with syntax. As soon as semantics enters the picture, you are dealing with "meaning" which necessarily means interpretation.
Yet the program still works without our semantics attached to it. :D

CODATA Value: electron mass!

Quantum theory is supposed to account for everything. Period. The fact that it isn't clear how to reconcile it with something as fundamental as relativity is a bit of a problem.
75% of the universe (3/4 of the forces) isn't bad. :p

Sure. They were doing fine not transitioning before quantum mechanics existed. Then we developed quantum mechanics, which is about calling classical observations "quantum" observations, calling equations "systems", ignoring the actual systems, and describing quantum reality in terms of an inferred and fundamental indeterminacy subsequently ignored in practice so that we get "isolated systems". Until, again, all that stopped working.
Would you like to explain quantum ontology from the ground up, or would you like me to do it and you can pick holes in it?

Some reading of physics literature to confirm might serve better. But tinfoil hats are stylish I've heard.
You are suggesting conspiracy to
 

LegionOnomaMoi

Veteran Member
Premium Member
I mean, the number of branches produced by the wavefunction is the same as the number of outcomes that could be produced by a measurement. You don't have to do any measurements to get the splitting.

The entire point is to understand how the wavefunction "collapse" need not happen, by somehow taking the quantum formalism as providing its own interpretation (which just about everyone has agreed is at least in some way, wholly or partially, indefensible). That is, given any quantum system and some wavefunction with which to talk about initial and final states, Everett's solution was a particular interpretation of Born's rule; an interpretation of probabilities. The problem, however, is that this doesn't resolve the issue at all (which is clear merely from looking at the divergence among those who say they follow Everett).

Because Everett relied on the formalism behind Born's probabilistic explanation along with the projection postulate, but ignored what these "meant" in their typical sense (that is, not probabilities based on the macroscopic measurement and a "state" of the quantum system assigned on the basis of that measurement), this was supposed to remove the problems inherent in combining a unitary evolution (generically) with an entirely separate (and diverging) "projection" (or collapse) upon measurement. However, Everett didn't change the formalism itself. And the reason the formalism existed was to explain, in some sense, how one could have a quantum system with some "state" and end up with another "state" during measurement, such that one could say anything about this at all. Everett and others kept that formalism, but instead of saying that the "initial" and "final" states were governed by this probabilistic interpretation and a subsequent "selection" caused by the measurement, all the states were realized and the measurement simply resulted in only one being observed.

However, as the formalism was designed to allow the experimenter to prepare a particular "state" with some given (perhaps broad, but limited) range of resulting states, we have a circularity. In order to explain why we ended up with the measured state we do, we throw away the macroscopic "projection postulate", which assigns a particular state to the quantum system given the macroscopic measurement and the probability function (itself based on preparation).
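Just so the pieces being juggled here are explicit, in bare-bones textbook notation (standard formalism only; nothing in this is specific to Everett's own papers):

[code]
\text{Unitary evolution:}\qquad |\psi(t)\rangle \;=\; e^{-iHt/\hbar}\,|\psi(0)\rangle

\text{Born rule:}\qquad P(a_i) \;=\; |\langle a_i|\psi\rangle|^2

\text{Projection postulate:}\qquad |\psi\rangle \;\to\; \frac{\hat{P}_i\,|\psi\rangle}{\lVert \hat{P}_i\,|\psi\rangle \rVert}\quad\text{upon obtaining outcome } a_i

\text{Everett-style branching:}\qquad \Big(\textstyle\sum_i c_i\,|a_i\rangle\Big)\otimes|M_{\mathrm{ready}}\rangle \;\to\; \sum_i c_i\,|a_i\rangle\otimes|M_i\rangle
[/code]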
However, once we do this, we are no longer justified in using the formalism at all. The Born rule was developed empirically, via statistical frequencies of prepared states and subsequent observations. Simplistically, imagine we did not know the rules of poker, but could run the game and observe which hands won and which didn't over a lengthy period of time. We could derive the rules of the game. Everett used a formalism developed via this empirical approach to explain systems both prepared and then subsequently "measured" in this world. His explanation relies on a formalism which holds true regardless of who does the experiment. Given some preparation, we will get a particular set of possible results after some individual has interfered with the system. Which means that, for some reason, whatever "worlds" branch off given any observation of any quantum system are independent of the particular observer but dependent on the preparation of the system in this world, and its measurement in this world.
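The "read the rules off the frequencies" point in miniature (a made-up two-outcome example, not any real experiment): run the same preparation many times and the probability is whatever the relative frequency settles on, which is the direction in which the Born rule was actually pinned down.

[code]
# Toy frequency estimate for a two-outcome "measurement" (invented example).
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # made-up amplitudes for a prepared state c0|0> + c1|1>
p0 = abs(alpha) ** 2                       # Born-rule probability of outcome 0

rng = np.random.default_rng(1)
outcomes = rng.random(100_000) < p0        # simulate repeated runs of the same preparation
print(outcomes.mean())                     # ~0.3: the frequency is what fixes the probability
[/code]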

The solution, to describe quantum mechanics in terms of all states, out of which only one is observed, is fine in and of itself, but here it is derived from, uses, and cannot explain nor derive the very formalism it says requires no explanation. There is no reason that, without some interpretation, we should be able to use a frequency approach to macroscopic preparation and measurement and know in advance which "splitting branch" we will get. So either we need some way of explaining why we have all possible states but only observe one we knew too much about before it existed, or we are no better off. It is akin to not watching a poker game, but observing which hands "win", deriving the rules of the game, and then saying that whichever hand wins is completely random. We prepare a quantum system, we know something about what the macroscopic measurement will tell us about the final state, but we shouldn't know this. Everett explains something derived under the assumption (and which requires this assumption) that prepared states and final states in our macroscopic reality can be used to inform us of a quantum reality Everett denies exists: there is no "macroscopic" measurement, only a particular observed state which is the result of some branch. But then there is no reason to use the formalism to begin with, and every reason for supposing it can't work.

Put as simply as possible, you can't have "right answers" if all you are doing when you run some experiment is splitting branches of reality.

Sure there is - the entire universe.

Which isn't a "closed" quantum system, and especially not if it is constantly and infinitely branching due to interactions among an infinite number of systems.


You can use this if you can speak LaTeX.

I'll learn it now. Thanks!

I don't know, but the source you quote apparently does, since they see no issue with preparing a state. What's the problem with taking some eigenstate and then doing non-measurement things to it, and that counting as your "preparation"?

My source does, actually. And increasingly, this isn't really the approach. Instead of trying to run some experiment on an artificially closed quantum system, modern decoherence approaches seek to explain quantum systems and quantum processes entirely in terms of how and under what circumstances quantum processes "remain" without a "projection", "collapse", or decoherence. Of course, this still doesn't resolve the problem, because although it is at least not inherently contradictory, it is inherently limited.
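For what the decoherence approach amounts to operationally, the usual bare-bones statement (the standard reduced-density-matrix story; my own compression of it): once the environment carries which-state information, the interference terms in the reduced state are suppressed, but nothing in this says which outcome occurs.

[code]
|\Psi\rangle \;=\; \sum_i c_i\,|a_i\rangle\otimes|E_i\rangle,
\qquad
\rho_S \;=\; \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi| \;=\; \sum_{i,j} c_i c_j^{*}\,\langle E_j|E_i\rangle\;|a_i\rangle\langle a_j|
\;\;\xrightarrow{\;\langle E_j|E_i\rangle\,\to\,\delta_{ij}\;}\;\;
\sum_i |c_i|^2\,|a_i\rangle\langle a_i|
[/code]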

I disagree with this criterion, so there. :p

Which is fine. So do many others. But the problem is not solved by a many-worlds approach, because given this prepared "initial state" the "measurer" can still say something about the final "state" which no observer should be able to say. Because (again) the very formalism representing the system's dynamics is only true under the assumption that the system measured is the same one prepared (not some "branch" of it).

If there is no objective reality, then how can one's knowledge be perfect anyway? :shrug:
It can't. But there is "imperfect", and then there is quantum mechanics:

There's no more arbitrariness in quantum mechanics than there is in any other theory.

If I give you the specifications of any other system, and an equation like those which exist in quantum mechanics, how I prepare it doesn't matter. All I need to do is tell you the initial state, including the relevant formalism describing the dynamics of the system in terms of that state, and we can say all the same things. We can't in QM. If there were "no more arbitrariness" in QM, why does it exist? That is, you seem to think it no more than some extension of physics, not really qualitatively different from, e.g., the shift after Maxwell and the inclusion of electromagnetism within physics. Yet we are dealing with physics, the oldest science and the one fundamentally about explaining physical reality. Classical dynamics involve complications, but only epistemological ones. We can observe any classical system with arbitrarily high precision. This is not the case in quantum mechanics. There is an ontological indeterminism.
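The textbook expression usually pointed to for that in-principle (not merely practical) limit:

[code]
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad\text{and more generally}\qquad
\Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|
[/code]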

Nor has there ever been any theory in physics (or actually in any science) in which so many people can say they follow one interpretation out of many (e.g., MWI), and yet fundamentally disagree. There is no other theory which involves so much basic disagreement over just about every single aspect of it. The only thing that comes close is the ontology of spacetime, but as this involves quantum theory as well (hence QED and the later "updates" of QFT), this simply makes the theory that much more problematic.

There are lots of things we don't know about. One related to this thread is consciousness. Whatever scientific papers, monographs, volumes, etc., are used in scientific discourse on the subject of consciousness inevitably involve philosophy, ill-defined terms, and far too little scientific theory. It's too speculative. But we're not dealing with "we just don't understand it yet" when it comes to QM. We're dealing with "we've been doing this for a century and nobody agrees on what exactly it is we're doing, but everybody knows that we've been doing certain things which can't be right for as long as we've had QM."

What other theory is like this? What other theory is so fundamental to a branch of science, yet stands in conflict with the only other equally fundamental branch to that science, in which the theory has so many different approaches (not just interpretations, but the various flavors of QFT and similar modern quantum mechanics as used in experiments and modern theoretical physics)?

I'm using "fired through the screen" as short-hand for the experimental set-up. It doesn't matter whether the particle actually goes through the slits, or jumps all over the universe in such a way that it ends up at the screen anyway.

It does matter if the "particle" exists in any meaningful way such that we can "fire" it. There is no such thing as "particles" at the subatomic level, so how to describe the dynamics of one is vital to understanding how we derived its initial state. And we did this by a combination of guesswork, circular reasoning, inference, and being wrong.
 

LegionOnomaMoi

Veteran Member
Premium Member
I don't see how that quote demosntrates that there are no isolated systems. :shrug:

You can't "prepare" a quantum system via macroscopic devices. You have to prepare something which allows you to get some initial state, but as that preperation is unique to that specific preperation (not the system, even with the initial state), it cannot be isolated. It fundamentally depends on the preparation. Then there's the measurement, or observation of a "final" state which involves understanding the microscopic quantum state through macroscopic devices and the relation of these to system dynamics which explicitly reject the many-worlds interpretation and also the relation of both the dynamics and the final state to the intial state, which never existed to begin with. The only thing we know through (as you said earlier) repeated running of experiments is how likely we are, given some particular initial state, get some final state. But this is again a frequency approach to developing a "generic" system dynamics which allows us to talk about systems we can't see. If we are simply "observing" some branch after measurement, then the formalism developed is useless. And as the initial state isn't really initial, but the result of an experiment (we just don't call it that), which we then apply to another system whose state we don't know, how is this isolated?
It is. It's also everywhere else. :p

You find dualism so repulsive, yet you adopt that which is most contrary to physicalism and realism: counterfactual definiteness.

And apples don't fall if you stick something under them. You know what I meant. :p
Now I was unclear. I meant "what does it mean to fall?" Relativity tells us that what we think of as free fall is about as close to being motionless as is possible. An apple that falls from a tree is more stationary than the tree.


But the qualification was "with photons." We didn't know about photons in the original version of the experiment.

Which was my point about interpretation frameworks. Running experiments and analyzing data is useless on its own. That's what was done with Young. But at the time, a particular interpretation framework resulted in a particular understanding of particular results. This is always true.


I can make my computer behave differently. Does that mean it has no reality?
The qualification was "only". It is only what we make it behave like. Your computer doesn't turn into a chicken when you hit "enter" (if it does, either you really should make some tinfoil hats, or you've been typing on a chicken).

The wavefunction is objectively real. Done.
Great. But in that case there is no "the" wavefunction. Moreover, like all functions it takes input and produces output. That input consists of descriptions of physical (quantum) systems, and so does the output. It doesn't matter if you say the wave function is real, because without being able to say anything about the states of the system which the wavefunction maps onto the same system at some later time t, it's like saying "the quadratic function is objectively real". You can assert it, but it doesn't mean much. And saying that wavefunctions are objectively real tells us nothing about quantum systems, because we also need the variables in the domain and image (or, more generically, the codomain).

Um, yes. The experiment does not show you the same object behaving like a wave and a particle. It shows you that quantum objects can have both behaviours depending on context. Which is what you'd expect.

Nobody expected it, and nobody liked this interpretation much either. I'm going off of elementary school memory, but for illustration: scientists thought they had a new dinosaur, the brontosaurus. It was a relative of the apatosaurus. Then it was discovered that the two were the same. What if, however, instead of saying this, the scientists said "it can be both depending on how we put the bones together?" That's essentially what the "resolution" behind QM did. Take two contradicting components of reality from the framework of classical physics, note how mathematics could resolve macroscopic (and therefore "classic") observations in terms of these classical notions, and come up with this "duality". You seem to follow the "just use the math" approach, but the math comes from understanding quantum reality as both a wave and a particle and neither. Not as "depending on context" because the math doesn't depend on context.

...forming a third description. Which is not contradictory at all, except with outdated notions of what reality is.

Only by incorporating the mathematics used to describe the contradictory and outdated notions, and doing so mainly out of convenience.

Yet the program still works without our semantics attached to it. :D
"Works" is a semantic notion. Just by saying a program works is attaching semantics.


Derived theoretically. Not experimentally. Ever. In fact, the idea that it has a specific mass is problematic.

75% of the universe (3/4 of the forces) isn't bad. :p

It was better before, a century ago when we had accounted for almost everything. Until we realized we hadn't. Again, interpretative framework is unavoidable.

Would you like to explain quantum ontology from the ground up, or would you like me to do it and you can pick holes in it?

The entire modern physics community would love you to explain what is still a mystery.

You are suggesting conspiracy to

If everybody in the field knows it, and the information is available (albeit at times expensive, especially when it comes to the various academic monograph/volume series), it's not a conspiracy.
 

idav

Being
Premium Member
Here is an interesting video which offers an alternative to the many-worlds theory. Aside from why the video says atoms have waves (time dilations or something like that), I like the pictures of what the wave-particles are doing. It shows how the atom has deterministic qualities while getting caught in an indeterministic wave that came from itself. An interesting approach where the atoms are creating the possibilities but not necessarily taking every path as said in MWT.
[youtube]TcWx52AkM4I[/youtube]
Alternative to the many worlds Interpretation of quantum physics - YouTube
 

PolyHedral

Superabacus Mystic
Here is an interesting video which offers an alternative to the many-worlds theory. Aside from why the video says atoms have waves (time dilations or something like that), I like the pictures of what the wave-particles are doing. It shows how the atom has deterministic qualities while getting caught in an indeterministic wave that came from itself. An interesting approach where the atoms are creating the possibilities but not necessarily taking every path as said in MWT.
[youtube]TcWx52AkM4I[/youtube]
Alternative to the many worlds Interpretation of quantum physics - YouTube
This is not only dualistic, but mysticism.
 