I'm assuming "sees" here means "considers"?
Wrong. First, in the realization linked to (Leifer calls this "Wigner's enemy"), the friend was a bit of information that could be reversed or erased using, e.g., weak or partial measurements and something like spontaneous parametric down-conversion.
This is the scenario:
First, recall Bell's reformulation of Bohm's version of EPR: you have some (not necessarily quantum) two-level, bipartite system whose parts become space-like separated. It could be two correlated particles with spin that decayed from a spin-0 particle, or two envelopes with notes saying "Yes" for one and "No" for the other. It doesn't matter. The idea is to have a common source for the "information" sent to two "labs" such that, when the "information" gets there, Alice can open her envelope or measure polarization or whatever. So can Bob.
Now, Bell then assumes that there are parameters λi (e.g., λ1 and λ2) such that, whether or not we can determine what the parameters represent, or whether they can be measured, or just about anything else, we can determine that it is at least possible to explain the correlations between Alice's and Bob's measurements by a local source (the original system that generated the "information" sent in the form of envelopes or what have you).
Then pick a useful relation between the measurement outcomes for your purposes (or one determined empirically). See if it can be explained in terms of these hidden parameters. Even for many quantum systems, there is a way to reproduce the correlations classically. You can show that, in order to have no classical explanation, you must violate an inequality generated, e.g., by a set of assumptions that include object definiteness, which is to say that while we may not know which envelope contains the card with "Yes" vs. "No" or |0> vs. |1>, the system had this property, and the correlations are due to the original, local interaction.
Then Bell shows that using bipartite quantum spin systems one can violate such an inequality. In other words, no such λ's can exist.
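A minimal numerical sketch of this (my own illustration with the standard CHSH settings, not from any of the papers under discussion): any local-hidden-variable assignment bounds the CHSH combination by 2, while the two-spin singlet state gives |S| = 2√2.

```python
# Illustrative sketch: CHSH value for the spin singlet, standard optimal angles.
import numpy as np

def spin_obs(theta):
    """Spin observable along angle theta in the x-z plane (eigenvalues +/-1)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2): the "common source" for Alice and Bob
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi> for settings a (Alice), b (Bob)."""
    return np.real(psi.conj() @ np.kron(spin_obs(a), spin_obs(b)) @ psi)

a0, a1 = 0.0, np.pi / 2          # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4   # Bob's two settings

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ~2.828 > 2: no local hidden parameters can reproduce this
```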
In the Bell-type Wigner's friend, or at least this one (Brukner's is a bit different, and Renner's is so different it doesn't involve Bell-type statistics at all), the friends Charlie and Debbie are the two systems that would correspond to the decayed atoms or envelopes. The measurement settings Alice and Bob pick (x and y) are the local hidden variables. The outcomes A and B are the same as in the Bell set-up, corresponding to Alice's and Bob's measurements, respectively.
Now, you make the assumption that the conditional probability of the outcomes after reversing/erasing the friends' measurements is at least approximately the same as when they make no measurements at all:

P_undoing the friends' measurements(A, B | x, y) ≈ P_no Charlie or Debbie(A, B | x, y)
That is, you assume that you can choose to use Charlie or Debbie's measurement or not, but if you choose not to allow them to measure (experimentally realized by not having the measured photon interact with "Charlie" or "Debbie" via that path), then you should be able to treat this as if they didn't interact with the system. That is, if you don't measure the photon produced via spontaneous parametric down-conversion that takes the C or D route, or rather you erase the path/information such that it is as if you are simply making a standard measurement, then it shouldn't matter that C or D existed as a route at all. You should be able to assign truth values (akin to the object definiteness from Bell experiments) to your own measurements. If your measurement uses information about the Charlie and/or Debbie path, then you should still have (and will have) a definite outcome for that case (x & y both 1), while for other values the choice is made to erase the photon from the SPDC that interacted with the C & D path, measure the ones that didn't, and obtain a definite outcome consistent with this operational procedure.
You can't. It doesn't work.
You can't get disagreements about the actual results (which is why Rovelli is continuing to dig himself into this hole, and I wish he would stop), and it is why Rovelli's relativity analogy breaks down completely. You can't perform both measurements on the same system. In the classical Wigner scenario, you get one result if you ask the friend, and another if you put the friend into a superposition state.
For many physicists, basically all measurements in QM that attempt to determine something like the state of the system in the sense discussed here are contradictions to QM. That's because in QM, evolution is unitary. The projection postulate, Born's rule, collapse, reduction, or even "update" are all non-unitary and ad hoc. They contradict the predictions for the dynamics of all systems in QM. We don't say that, of course (unless we subscribe to a no-collapse interpretation). We say that measurement involves a different process, and sweep under the rug the fact that the probabilities we use when we claim that a measurement outcome is predicted by QM come from an ensemble of measurement degrees of freedom that can't (unlike classical ensembles) be decomposed even in principle into the statistics of single states/measurements.
In short, we have to use a series of ingenious methods and measurement schemes for each different system in order to be able to talk about the probabilities associated with it, but these are determined not by QM (which, again, describes systems via unitary evolution, or in the more general operational approach in terms of CPTP maps, where we likewise replace the states with density operators and the projection-valued measurements with POVMs). So we have "predictions" QM makes that we determine by using QM right up until we extract information. Because we don't have a theory that accounts for measurements, we can't use QM without being able to talk about measurements, and we get a contradiction if we treat the measuring device quantum-mechanically (that's what Wigner's friend is about, except that it is intended to be more drastic), so we simply tack on another type of state evolution to quantum theory.
Put more simply, we can pretend there is no contradiction, and then simply see what the measurement outcome would be if we treated the measurement apparatus quantum mechanically the way we would treat a system in the lab: in terms of the Hilbert ray we'd obtain from the tensor product of the Hilbert spaces and rays corresponding to System ⊗ Apparatus.
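To make that product-space description concrete, here is a toy sketch (state labels and dimensions are mine, purely illustrative): the linear, unitary premeasurement interaction maps |up>|ready> to |up>|reads up> and |down>|ready> to |down>|reads down>, so a superposed system drags the apparatus into an entangled superposition with no definite pointer reading.

```python
# Illustrative sketch of treating the apparatus quantum mechanically.
import numpy as np

up, down   = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # system basis
ready      = np.array([1.0, 0.0])                         # apparatus "ready"
reads_up   = np.array([1.0, 0.0])                         # pointer: "saw up"
reads_down = np.array([0.0, 1.0])                         # pointer: "saw down"

system = (up + down) / np.sqrt(2)        # superposed system to be measured
initial = np.kron(system, ready)         # System (x) Apparatus before coupling

# Unitary evolution is linear, so the interaction carries `initial` to:
after = (np.kron(up, reads_up) + np.kron(down, reads_down)) / np.sqrt(2)

# `after` is entangled: no product decomposition, hence no definite reading.
print(initial)  # [0.707, 0, 0.707, 0]
print(after)    # [0.707, 0, 0, 0.707]: a "cat state" for the pointer
```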
That's Schrödinger's cat and Wigner's friend. QM predicts something never seen. Hence, it contradicts itself.
Or we don't say that and we think about measurement as part of QM that we haven't worked out yet. One way to go about this is to try to think about how the measurement process works, generalize it to an operational framework that can be used without deciding on an interpretation, and then apply it in the development of no-go theorems and the like as well as their experimental realizations.
Depends. Firstly, if one is talking about the Frauchiger-Renner experiment, then it is about self-measurement (an extension of the Deutsch version). If it is the standard EWFS of Brukner, then the actual measurements will disagree, as this version is about obtaining information from the friend that we can later compare (in principle). In the classic Wigner's friend, the only way we wouldn't get a contradiction is if two friends walked out of two labs with both outcomes (or more friends, labs, and outcomes). In the Griffiths version, the contradiction is in the ability to assert that if an event is observed, it happened.
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.
Correct. But why would I assume that erasing something would be the same as it not happening at all? That, to me, is very counter-intuitive, especially in the context of QM.
Firstly, that's trivially true because you can only measure once. Period. If you try to do so continually, you still won't get a contradiction, but you will stop state evolution altogether. But the point is that you cannot get a contradiction from the same measurement.
Not classical physics. Classical statistics. As in probability theory.

You can get what a classical observer would interpret as a contradiction, but why use classical theory? QM is the correct theory and it gives the correct results, up to giving correct correlations.
Yes, if the two friends walk out of the labs and compare, they will have the same results.
If that were the real issue, then one could reasonably expect that the kind of statistics one gets from any sort of EPR-type experiment would produce non-classical correlations. But this isn't true. You can use local resources to produce quantum correlations for the EPR and EPRB set-ups. Yet both cases involve noncommutative observable algebras.
Also, it's mathematical nonsense. Measure theory doesn't require a function space, or most of the other structures that operator theory does.
Finally, nobody cares about that aspect of it in this context for a very specific reason. That isn't the point at all. The noncommutativity was known long before Bell. EPR was nearly 30 years earlier. Even Kolmogorov had noted (but not gone much further into) the difficulties with quantum probabilities and the measure spaces required for probability theory.
The issue was that Bohm had shown it is possible to have object definiteness, determinism, and the kind of explanation that von Neumann's no-go theorem had supposedly shown to be impossible. Bohm provided and then developed a counter-example.
One reason it was found to be distasteful was that it was horribly nonlocal, and the view at the time was that there wasn't any need for this kind of approach: Bohr had already dealt with it, QM was all that was needed, and all the questions were basically answered as much as they could be.
However, although Bohm's reformulation of EPR was so much clearer, experimentally one ran into a problem: you can do this experiment over and over again and it won't matter in the slightest in terms of whether or not e.g., the spin-values measured had definite values prior to measurement. It was simply more or less decreed (for different reasons, depending upon how much one followed Bohr vs. Heisenberg vs. some growing textbook orthodoxy) that one didn't ask.
What Bell did, among other things, was cast the problem into a form that could be tested empirically AND in terms of those aspects of the theory which were present but which we didn't ask about and which couldn't be answered anyway (and in some sense this was correct, as one can model bipartite entanglement correlations using classical statistics).
If you need something more elementary, think of it as being akin to a capability in these same sorts of experimental arrangements that have been discussed and re-discussed for about a century. The basic double-slit or which-path set-ups can be extended to show how one can restore coherence, or, even more basically, that one can set up an experiment to determine which path e.g., a photon takes but then, so long as the information is erased, instead get the interference "pattern". It's sort of a basic, fundamental component of entanglement and the logic underlying quantum theory. It's also a part of the more general issues related to contextuality.
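If a toy calculation helps, here is a minimal sketch of that which-path/eraser logic (states and labels are mine and correspond to no particular experiment): once a marker records the path, the fringes vanish; projecting the marker onto an "erasing" basis state restores them.

```python
# Illustrative which-path marker and quantum eraser on a two-path "photon".
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

def p_plus(phi, erase):
    """P(photon exits the '+' port) given phase phi, with path info marked;
    if erase=True, condition on finding the marker in |+> (path info erased)."""
    # Marker entangled with path: (|0>|0> + e^{i phi} |1>|1>) / sqrt(2)
    state = (np.kron(ket0, ket0)
             + np.exp(1j * phi) * np.kron(ket1, ket1)) / np.sqrt(2)
    if erase:
        proj_marker = np.kron(np.eye(2), np.outer(plus, plus.conj()))
        state = proj_marker @ state
        state /= np.linalg.norm(state)     # renormalize the conditional state
    proj_photon = np.kron(np.outer(plus, plus.conj()), np.eye(2))
    return np.real(state.conj() @ proj_photon @ state)

print(p_plus(0.0, erase=False))        # 0.5 for every phi: no interference
print(p_plus(0.0, erase=True))         # cos^2(phi/2) = 1.0: fringes restored
print(p_plus(np.pi / 2, erase=True))   # 0.5 = cos^2(pi/4)
```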
But no matter. If you wish to know more about the importance of this sort of experimental arrangement, you are no doubt familiar with the delayed-choice and quantum eraser experiments, but perhaps could benefit from reviewing the somewhat more involved combination of these two. Or just skip over it and forget about the experimental details and recall what the point was: to show that measured outcomes can disagree, or more precisely to violate local friendliness and more specifically the Absoluteness of Observed Events.
Wigner's thought experiment is a clear contradiction, as Wigner's friend cannot experience himself in a superposition, and even if this were possible, the superposition that Wigner ascribes has two states (or more, depending on the set-up) but only one outcome. And his friend is in both states. In each of these two, the friend has a different outcome (in principle) that Wigner can't know about unless he asks, collapsing his friend into one state.
Firstly, that's trivially true because you can only measure once. Period. If you try to do so continually, you still won't get a contradiction but you will stop state evolution altogether.
And this is missing the point.
The point of Wigner's friend and EWFSs is that I only don't get a contradiction if I
1) Treat the system as obeying two types of contradicting state evolution and
2) Don't try to perform measurements on systems that include subsystems capable of recording definite outcomes that I won't have access to through standard projective measurements/collapse.
Not classical physics. Classical statistics. As in probability theory.
Wrong. They can't. Recall that the two friends in this case are Schrödinger's cats with memory. But you could just as easily return to Schrödinger's cat and then realize you are asserting that a dead cat and an alive cat would walk out of that lab agreeing that they are dead and alive.
No, it is not. Wigner's friend got a definite result. Wigner is ignorant of that result so his description uses a wave function that is still in a superposition.

The point is that Wigner describes a friend, not a cat. Treating this isolated system quantum mechanically requires treating Wigner's friend in terms of a superposition state of mutually exclusive outcomes (all outcomes, actually). So for a binary yes/no, heads/tails, or 0 and 1 type of outcome pair, Wigner treats his friend as a superposed state of, on the one hand, obtaining "heads" and, on the other, of obtaining "tails".
But his friend could only obtain one of these outcomes. So if the friend's event actually was definite (AOE), then Wigner's description is in contradiction with the friend's AOE.
ALSO, IMPORTANTLY, because the outcome was DEFINITE (the friend DID MEASURE AND GOT AN OUTCOME), Wigner's quantum mechanical treatment predicts probabilities that cannot occur, and the state assignment cannot be correct as it allows for non-zero probabilities for impossible events.
So, for example, the experiment starts out with the friend having not performed the measurement yet, and Wigner likewise describing friend (tensor product) lab as some ray in the product space. Then the friend performs his measurement and obtains "HEADS", or "Spin-up", or "YES". Meanwhile, Wigner is describing a state in which HEADS/TAILS (or Spin-up/Spin-down, YES/NO, etc.) both occur. Let's stick with HEADS/TAILS. So the friend measures and gets "HEADS". Wigner describes his friend as in a superposition state of obtaining "HEADS" with some probability and "TAILS" with some probability, but this is not possible.
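A toy numerical version of this point (labels mine, purely illustrative): once the friend has actually obtained HEADS, her own state assignment gives P(TAILS) = 0, while Wigner's still-unitary description assigns it probability 1/2, a nonzero probability for an event that, per the friend, cannot occur.

```python
# Illustrative: friend's definite outcome vs. Wigner's superposed assignment.
import numpy as np

HEADS, TAILS = np.array([1.0, 0.0]), np.array([0.0, 1.0])

friend_state = HEADS                          # friend: the outcome WAS definite
wigner_state = (HEADS + TAILS) / np.sqrt(2)   # Wigner: unitary evolution only

print(abs(TAILS @ friend_state) ** 2)   # 0.0: impossible event, per the friend
print(abs(TAILS @ wigner_state) ** 2)   # 0.5: Wigner's nonzero probability
```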
Wigner then performs his measurement, which will yield either "HEADS" or "TAILS" but not both. But, according to you, two friends walk out. Which means we have a contradiction, as one will say that the outcome was "HEADS" and the other that it was "TAILS."
Completely false. That's like saying integrals are, essentially, the limit of rectangles under a curve, or that vector spaces are, essentially, triples of real numbers. You've listed an example of measures that isn't even particularly relevant for many (if not most) of the relevant uses. Measures are defined on sets, not vector spaces (and function spaces are no exception). Hence, in probability theory, measure theory is extremely useful, as we can do away with defining so-called "continuous random variables" vs. "discrete" ones, and with however one wishes to treat the embarrassing case of so-called "mixed" distributions or the like in elementary probability theory. We replace these with the full-fledged classical, axiomatic probability theory of Kolmogorov that is fundamentally rooted in measure theory.

Measures are, essentially, the Banach space dual of the collection of continuous functions
I know what operator theory is. It has no relevance to your comment on measure theory, and no substantive relevance in the context of this experiment or ones like it. One could come closer to a relevant statement on measure theory by noting that any theory, quantum or otherwise, which would satisfy the Bell inequalities for some set of measurement outcomes that are imagined to be independent and objective must be described via the joint distribution of independent r.v.s on a probability space (which is built up, foundationally and fundamentally, out of a measure space where the total measure of the set is unity).

That collection of continuous functions is a commutative C* algebra and all commutative C* algebras are of that form. Operator theory, in contrast, is essentially the study of C* algebras in general.
A Hilbert space is a function space. What on earth are you talking about?

Operator theory also does not require a function space, merely a Hilbert space
You also said that a Hilbert space, which is actually essentially a function space (it's perhaps the prime example of one), need not be what it is (namely, a function space).

So, yes, as I said, measure theory is essentially the commutative version of operator theory.
Wrong. Firstly, in the case of the statistical versions, it's the fact that one can't define independent random variables that satisfy the necessary conditions imposed by experiments (Bell's theorem does not need quantum theory, although it would be trivial so far as we know were it not for the violation of the inequality made possible by exploiting a quantum system as a shared resource). And this is related, in turn, to the fact that quantum events can't be embedded into a probability space, because any such space (being a measurable space generated by the sigma-field of subsets of some set Ω) can be decomposed into a Boolean one (where outcomes can, loosely speaking, be interpreted as events having values of 0 or 1, or, alternatively, truth values).

And it is the non-commutativity in operator theory that leads to the violation of Bell's inequalities (which only hold in the commutative theory).
It doesn't have a real problem with spin. And anti-matter interactions are interpretations that were forced upon the failures of QM in a relativistic setting. It's not really fair to take the as-yet non-rigorous interpretation of QFT that grew out of empirical necessity combined with some imaginative reinterpretations of what the outcomes of experiments were (not to mention what physical systems vs. their properties were) and complain about the Bohmian version without specifically pointing to how it fares relative to the ad hockery in the standard cases (e.g., path integral or canonical), not to mention the various hand-waving (e.g., "we'll just pretend we can multiply these distributions that we hope exist..." or "Let's move along from the rigorous Gaussian and Wiener measures and pretend that we can apply such generalizations to the path-integral would-be measures...").

But Bohmian mechanics doesn't generalize well to the relativistic setting, specifically when anti-matter interactions are involved. It also has real problems with spin, for example.
Actually, of course, QM *is* local, just not realist.
It is not, in general, at all easy to determine whether or not an experimental result is in contradiction with quantum theory. This is for several (not necessarily distinct) reasons:

Why is it a contradiction for Wigner's friend to be in a superposition for Wigner?
What would a possible contradiction be when you allow for anti-realism to hold and don't bother with how quantum theory predicts anything? Indeed, how are you allowing for a contradiction to be possible regardless of realism when you haven't discussed how quantum theory can yield outcomes using measurements? Because the contradiction is rooted in the fact that QM is inherently self-contradictory. The collapse postulate and generalizations of it contradict the unitary evolution of quantum systems. Which might be perfectly acceptable, if we had some way of determining when a quantum system might obey the continuous, deterministic evolution described by quantum theory without observers and the rule that we employ ad hoc for observers.

OK, and there is still no contradiction.
None of this makes sense. Are you seriously not aware of how absolutely essential measure theory is in noncommutative mathematics? Do you really think something that doesn't even require operations like commutativity, because it is built up out of set theory, is somehow impossible because you are dealing with a generalization of matrix algebra? Even quantum probability is measure-theoretic. It has to be. You can't even integrate without measure theory. What on earth are you talking about?

Yes, exactly. You can't use measure theory (the commutative C* algebra theory) in analyzing quantum physics (because quantum physics uses the non-commutative C* algebra theory).
If you believe that wave functions are ways to make bets or are inherently subjective, then you are correct: such paradoxes pose no problems for your interpretation. Reality does, naturally. Or the quantum-to-classical transition. Or defining a way to make sense out of particle physics, or physics more generally.

No, it is not. Wigner's friend got a definite result. Wigner is ignorant of that result so his description uses a wave function that is still in a superposition.
If you view Wigner's use of QM as a way of calculating probabilities, then his friend isn't in a superposition. The space of outcomes is described using a superposition of states. But as this doesn't have anything to do with the physical Wigner or anything else physical, it doesn't describe what the "friend is in".

That friend is in a superposition according to Wigner
There was no spin in EPR. That was Bohm's version.

In the EPR, for example, each leg detects the spin according to the correct probabilities as calculated by QM. When they get together, they agree that the correlation is precisely what QM predicts. And that happens even if the two measure at very different times *as long as neither interacts with the results of the other prior to measurement*.
Completely false. That's like saying integrals are, essentially, the limit of rectangles under a curve, or that vector spaces are, essentially, triples of real numbers. You've listed an example of measures that isn't even particularly relevant for many (if not most) of the relevant uses. Measures are defined on sets, not vector spaces (and function spaces are no exception). Hence, in probability theory, measure theory is extremely useful, as we can do away with defining so-called "continuous random variables" vs. "discrete" ones, and with however one wishes to treat the embarrassing case of so-called "mixed" distributions or the like in elementary probability theory. We replace these with the full-fledged classical, axiomatic probability theory of Kolmogorov that is fundamentally rooted in measure theory.
A measurable space is a pair consisting of a set and a set of subsets of this set, where the latter must satisfy the requisite properties (in particular, that it be a σ-algebra, or what probabilists often refer to as a σ-field). A measure space is a measurable space equipped with a special map called a measure.
Now, as with topological spaces, a great many spaces have so much additional structure that, even though they are technically measure spaces, we don't bother describing them as such, any more than we describe them as topological spaces. This is because 1) the additional structure that makes them "special" or worthy of names like Hilbert space or Banach space would make referring to them as measure spaces in general worthless or less than worthless (calling a normed space a "measure space" is to suggest that there is something special about the measure, rather than that "measures are, essentially, the Banach space dual of blah blah blah [trivial example of measure space]"), and 2) such spaces can be equipped with any number of (or uncountably many) different measures without altering their essential structures. Drawing attention to them as "measure spaces", or referring to them in terms of measure theory because they are "the Banach space dual of the collection of continuous functions" or the like, is a singularly unhelpful characterization of what must be some measure space (or can be made into one) without providing the actual measure, the sigma-algebra, or anything else that would warrant "measure theory", at least until you actually need it, e.g., in the properties and proofs of theorems regarding the spectral-theory generalizations in infinite dimensions and with operators that are unbounded (i.e., may be bounded or not, so that proofs must hold for both cases).
Measures are set-theoretical. The sets need not be equipped with a vector space structure, still less a norm.
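A minimal finite example of these definitions (entirely illustrative): a set Ω, a σ-algebra (here the full power set), and a measure, with no vector space or norm anywhere in sight.

```python
# Illustrative finite measure space (Omega, sigma-algebra, mu).
from itertools import chain, combinations

Omega = frozenset({"a", "b", "c"})

# The power set of Omega: the largest sigma-algebra on Omega.
sigma_algebra = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(Omega), r) for r in range(len(Omega) + 1))]

weights = {"a": 0.5, "b": 0.25, "c": 0.25}   # point masses (illustrative)

def mu(A):
    """A measure: nonnegative, mu(empty set) = 0, additive on disjoint sets."""
    return sum(weights[x] for x in A)

assert mu(frozenset()) == 0
assert mu(Omega) == 1.0                      # total mass 1: a probability measure
A, B = frozenset({"a"}), frozenset({"b", "c"})
assert mu(A | B) == mu(A) + mu(B)            # additivity on disjoint sets
print(len(sigma_algebra), mu(Omega))         # 8 subsets, total measure 1.0
```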
It's clear you know the basics. But it is also clear you don't know many of the more advanced topics.

I know what operator theory is.
It has no relevance to your comment on measure theory, and no substantive relevance in the context of this experiment or ones like it. One could come closer to a relevant statement on measure theory by noting that any theory, quantum or otherwise, which would satisfy the Bell inequalities for some set of measurement outcomes that are imagined to be independent and objective must be described via the joint distribution of independent r.v.s on a probability space (which is built up, foundationally and fundamentally, out of a measure space where the total measure of the set is unity).
Operator theory is required for much of quantum theory and is an avenue of research developed in particular for attempts at making QFT rigorous as well as (and at the same time as) making QM axiomatic. It is also vital to QM generally because of the spectral theorem and the fact that we cannot, in general, associate to a given observable "operator" in QM an eigenvector. In the physics literature this is bypassed by using the delta function in some expression such as δ(x − λ) and claiming that these distributions are the eigenvectors of the supposed operator in the infinite-dimensional case, where A is some self-adjoint operator over the space of square-integrable functions defined over a real-valued, bounded interval, e.g., L^2([a,b]).
But this is sloppiness, not even of much heuristic value, and calling them generalized eigenvectors without defining them properly doesn't help. And things get trickier because perhaps the most important operators for QM in this case are unbounded, and therefore cannot even be both self-adjoint and defined on the whole space L^2([a,b]) or any other (but rather only on dense subspaces).
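For reference, the rigorous statement lurking behind that hand-waving is the spectral theorem: eigenvectors are traded for a projection-valued measure E on the spectrum (a sketch; domain and technical hypotheses omitted).

```latex
% Spectral theorem for a self-adjoint operator A (domain issues suppressed):
% E is a projection-valued measure on the spectrum \sigma(A).
A = \int_{\sigma(A)} \lambda \, dE(\lambda),
\qquad
\langle \psi, A \psi \rangle = \int_{\sigma(A)} \lambda \, d\langle \psi, E(\lambda) \psi \rangle .
```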
A Hilbert space is a function space. What on earth are you talking about?
You also said that a Hilbert space, which is actually essentially a function space (it's perhaps the prime example of one), need not be what it is (namely, a function space).
Meanwhile, your description of operator theory made no mention of measures, measurable spaces, sigma-algebras, or anything relevant. You've taken spaces that are sometimes assumed to have canonical measures (e.g., Lebesgue-Stieltjes, or just Lebesgue) and referred to these in terms of measure theory, which makes them less related to measure theory than the real number line.
Finally, and most importantly, non-commutativity is actually relevant in a particular way or at least a particular approach here. But as we need only deal with spin and therefore with matrices defined over the field of complex numbers, we don't really need much operator theory (unless you are also in the habit of calling linear algebra operator theory because of the trivial relationship).
Wrong. Firstly, in the case of the statistical versions, it's the fact that one can't define independent random variables that satisfy the necessary conditions imposed by experiments (Bell's theorem does not need quantum theory, although it would be trivial so far as we know were it not for the violation of the inequality made possible by exploiting a quantum system as a shared resource). And this is related, in turn, to the fact that quantum events can't be embedded into a probability space, because any such space (being a measurable space generated by the sigma-field of subsets of some set Ω) can be decomposed into a Boolean one (where outcomes can, loosely speaking, be interpreted as events having values of 0 or 1, or, alternatively, truth values).
Since you can violate Bell's inequality using an algebra of observables that is not only finite-dimensional but downright simplistic, only in the sense that matrices are non-commutative in general is it true that one finds a straightforward connection here.
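A sketch of just how little machinery is needed (settings chosen for illustration): plain 2×2 complex matrices already fail to commute, and the CHSH operator built from them has largest eigenvalue 2√2 (Tsirelson's bound), past the classical limit of 2.

```python
# Illustrative: noncommuting 2x2 observables and the CHSH operator's spectrum.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(sx @ sz - sz @ sx)     # nonzero commutator: a noncommutative algebra

A0, A1 = sz, sx                         # Alice's observables
B0 = (sz + sx) / np.sqrt(2)             # Bob's observables
B1 = (sz - sx) / np.sqrt(2)

CHSH = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))
print(np.linalg.eigvalsh(CHSH).max())   # 2.828... = 2*sqrt(2) > 2
```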
1) The Bell theorems do not depend upon QM. They depend upon sets of measurements. If quantum theory were replaced by something more fundamental, this wouldn't matter one bit when it comes to Bell's inequalities. Any theory that would explain these measurements (i.e., those of experiments in which Bell's inequality is violated) has to deal with the fact that violations of Bell's imply nonlocality. The whole "realist vs. local" dichotomy is a common misperception even among physicists. It's a myth, and it's poorly defined at that.
2) There is no way to regain locality in QM by assuming an anti-realist view. One has to do more. If one asserts simply that a physical system does not have defined properties until they are measured, one cannot suddenly factorize the space of events necessary to fit the data with a multivariate distribution that assumes nothing about the system to begin with (other than what is required to ensure that the preparation procedure and the subsequent measurements took place in such a way, and at such times, that there is no way for information about the preparation to be transmitted or encoded so as to allow local hidden variables to specify the properties measured).
Again, you can turn Bell's theorem into a game. You can violate the inequality easily with a telephone (signaling), or by cheating, or in any number of ways that correspond to loopholes in the foundations literature. But simply asserting that the measured properties weren't there until they were measured doesn't explain the correlations, and the correlations aren't actually correlations in any strict sense, because they cannot be defined in terms of joint distributions of r.v.s from a probability space, which is what correlations are.
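The classical side of that claim can be checked by brute force (purely illustrative): if outcomes are pre-assigned ±1 values, i.e., random variables on a single probability space, no assignment pushes the CHSH combination past 2, and mixtures, being convex combinations, can do no better.

```python
# Illustrative: every deterministic local strategy satisfies |CHSH| <= 2.
from itertools import product

best = 0
for A0, A1, B0, B1 in product([-1, 1], repeat=4):   # pre-assigned outcomes
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))
print(best)   # 2: the bound for anything describable by a joint distribution
```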
It is not, in general, at all easy to determine whether or not an experimental result is in contradiction with quantum theory. This is for several (not necessarily distinct) reasons:
1) Quantum mechanics makes no predictions. By this I do not mean that either QM or quantum theory more generally should be understood to consist only of the “physical” part (e.g., the unitary evolution of the SE, state vector, observables in the Heisenberg picture, etc.), as the no-collapse proponents would have it. Rather, I mean that it is more akin to probability theory than even statistical mechanics. You must first have some theory you input in order to get predictions as output. This is both theoretically and practically challenging, because it amounts to the fact that we too often rely on classical theories and classical descriptions that we then attempt to “quantize” (at least in practice), when we actually regard the quantum description we derived as the more fundamental one. In terms of experiments, this doesn’t matter so much for many of those that use Bell states and Bell-type experimental paradigms. Most of the time, the physical systems in question need only have spin, and therefore the difficulty is in building accurate enough devices, channels, etc., to obtain accurate enough statistics from a particular physical system (e.g., photons vs. electrons) rather than a theoretically justified representation of the system in the appropriate quantum framework such as QED.
However, even here this difficulty remains. How do we know that a system has spin? Through experiments, as spin (at least quantum mechanical spin) is a very simple quantum property. But if we were to find a contradiction in some experiment that we were able to trace to a prediction about measurements of the spin of some system, this wouldn't contradict QM. It would mean the "quantization" scheme was wrong, and therefore the system was inappropriately described, rather than that the theory itself was contradicted.
2) There is no way to determine whether or not measurements are in contradiction with QM because there is no quantum theory of measurement. I do not mean by this that there is no universal agreement regarding interpretations of QM or solutions to the measurement problem. I mean that one can have any interpretation one pleases and still one cannot go to the lab and tell an experimentalist what counts as a “measurement” in any manner other than one that is ad hoc. This is a central point of Wigner’s original thought-experiment. Schrödinger used his infamous thought experiment about a cat to highlight the contradiction inherent in quantum theory, but it was too early and too easily swept under the rug (principally by Bohr). Wigner’s extension would perhaps have been much better had he not tried to interpret it in terms of consciousness or minds, but it remains an improvement.
The reason Wigner’s thought experiment is an improvement over Schrödinger’s is due to the way we actually use quantum theory. We have nothing other than trial and error and many years over a few generations of intuition to help us determine when we can and can’t use unitary evolution. But knowing that it will work for e.g., fiber optic cables or some similar experimental device, equipment, etc., doesn’t help us to explain why it does in these cases but not for PMTs or some other type of experimental equipment. More basically, it doesn’t allow us to use QM to say what is or isn’t a measurement. Of course, there is a very basic manner in which we have always known what counts as a measurement. When someone goes and looks at a dial, through a telescope, at a readout screen, etc., and sees some measured result. In other words, when we act as observers and “see” an observed result.
No. OUR state isn't collapsed until we look at the result.

So far, this is just Schrödinger's cat. The crucial difference comes after we perform "our" measurement and let the friend out of the lab. She hands us a slip of paper with either spin-up or spin-down, but not both. This "collapses" our system's state.
Then we ask the friend what it was like to be in a superposition state only to “collapse” into having a definite outcome when we opened the door. But, unlike a dead cat, our friend can tell us that this never happened. The friend obtained a definite outcome in the lab. From her perspective, our description of the physics was laughably, ludicrously, and grossly wrong. It is a stark, blatant contradiction with what actually happened.
But we used QM, so where did we go wrong? That's where disagreements start. For some, QM doesn't describe physical states or properties, so there was no contradiction: we updated our state of knowledge appropriately. But on that view, we have no account of how physical systems appear to have definite values, how classical physics can ever emerge from quantum, etc. And quantum states are inherently, absolutely subjective.
3) Measurement outcomes are necessarily statistical. To see why this is a problem when it comes to whether or not experiments contradict quantum mechanical predictions, consider EPR. By this time, Einstein had given up on trying to devise a way to get beyond Heisenberg's indeterminacy principle (as it should be called, and as he later called it). Instead of trying clever thought experiments that he believed would allow measurement, in principle, of both the position and momentum of a quantum system to arbitrary precision, he had found a way to use QM against itself (he thought). By using what would be called (by Schrödinger) entangled systems, he showed even more: The measurement performed to determine the outcome for one observable could actually yield the necessary value for a canonically conjugate observable of the system without even requiring an experiment! Bye-bye Heisenberg's indeterminacy principle, hello incompleteness of QM!
Yes, if you assume a commutative theory, you can derive such a joint distribution. But that is violating QM and the essentially non-commutative aspect it has.

What Bell did was embed all such experiments in terms of the statistics of these measurements. If we assume that there exists some value like "heads" that goes on one note and "tails" that goes on the other, then we can use a joint probability distribution and satisfy Bell's inequality.
If, however, we have a situation like that described first by Boole (before QM) and later derived independently by Bell (based on Bohm's reformulation of EPR), in which the two persons on each planet use a quantum system as a shared resource, then we can violate the inequality required for any theory in which the results obtained by two such experiments on distant planets would be decided the moment the notes were put into the envelopes.
In QM, this corresponds to the idea that atoms or subatomic particles have the measured properties they do independently of whether or not either of the two persons on distant planets actually performs an experiment to measure them (objective facts) AND the ability for the correlations to be actual correlations (that is, defined in terms of a joint distribution). One can't simply say "abandon realism but retain locality" because, by itself, this doesn't do anything. There is no reason that systems outside of any possible causal (local) influence can yield joint measurement values that exceed the maximum allowed by correlations just because the properties measured were supposed to be indefinite.
The point, however, is that what Bell did was formulate, in a theory-independent way, a method of testing theoretical aspects of QM that lacked conceptual clarity. Single-shot experiments are meaningless in QM, but of course the manner in which physical systems are supposed to have states and properties that can be measured via a single experiment is crucial. Bell tied the two together. He provided a way to take thought experiments relying on single-shot experiments to be tested theory-independently and empirically.
You're the one equating function spaces with L2 spaces, not me. A vector space is a function space, trivially, in exactly the same way any of your claims about non-commutative operator algebras being relevant would have to mean you are including trivial cases where the operators are matrices acting on finite-dimensional complex vector spaces, as that's all we need for Bell's theorem, Bell's inequality, and even the empirical realization of Bell tests and Bell states.

Sigh. OK, now that I have a determination of what level of math you are familiar with, I can start to be a bit more explicit.
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.
It is utterly irrelevant what one "can" do to extend measures or even how measures come into play in either noncommutative or commutative spaces (or operator algebras).

Yes, measures are defined on a collection of subsets of some set (usually a sigma-algebra). You are probably only familiar with positive measures, but it is possible to extend
Measures allow for the definition of integration in a more general setting.

No kidding. You can stop back-pedaling. The problem is not the claim that measure theory isn't relevant; it is your claim that:

Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.

This is patently false, absurd nonsense. You have confused an application of measure theory, or a use of it, with what you claim measure theory "essentially" is.
This means that a measure automatically gives a linear functional on the collection of continuous functions. In other words, an element of the dual of the Banach space of continuous functions.
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.
This is why we can identify the collection of measures in that context with the dual of the space of continuous functions.
Ultimately, the point is that measure theory is, essentially, commutative operator theory.
we can identify the collection of measures in that context with the dual of the space of continuous functions
On the contrary, you are now trying to talk about how measure theory is used in a particular context. That's great. It's not what you said. When you want to address the glaringly, obviously incorrect statements you initially made and stop back-pedaling, great. I'm more than happy to agree with statements about measures in the context of normed spaces and linear functionals. I am not about to dismiss measure theory as somehow "essentially" boiling down to what is actually one application of the subject matter to functional analysis.

You are completely missing my point
Von Neumann developed Hilbert space and his algebras to do this. QM isn't based on it. Also, the inequality in question is not based on quantum mechanics.

Since QM is based on operators, it tends to fail the inequalities that are true for measure theory.
Most of the operators involved are 'closed operators' when defined on those dense subspaces
In the sense that, technically, anything involving integration or even summation trivially must. However, QM depends on measure theory. Anything with integrals does. Quantum measure theory is still measure theory. Non-commutative measure theory (basically the same thing) is measure theory. POVMs are measures, projection-valued measures are likewise measures, and QM depends on measure theory. So what?

Bell's inequalities depend on measure theory.
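As a concrete instance of the point above that POVMs are measures (the trine POVM is a textbook construction; the code is my illustrative sketch): three positive effects summing to the identity, assigning a genuine probability distribution to the three outcomes.

```python
# Illustrative: the three-outcome "trine" POVM on a qubit.
import numpy as np

def trine_effect(k):
    """Effect (2/3)|psi_k><psi_k| for three states 120 degrees apart."""
    half_angle = np.pi * k / 3
    v = np.array([np.cos(half_angle), np.sin(half_angle)])
    return (2 / 3) * np.outer(v, v)

effects = [trine_effect(k) for k in range(3)]
print(np.round(sum(effects), 12))            # identity: total "measure" is 1
print([np.linalg.eigvalsh(E).min() for E in effects])  # all >= 0: positivity

rho = np.array([[1.0, 0.0], [0.0, 0.0]])     # an arbitrary illustrative state
probs = [float(np.trace(E @ rho)) for E in effects]
print(probs, sum(probs))                     # [2/3, 1/6, 1/6], summing to 1
```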
It doesn't. At all. Unless you are now going to claim that you need to talk about C*-algebras in order to describe a correlation between two independent random variables or to deal with joint distributions in classical probability theory. Because, again, the inequality doesn't rely on QM except insofar as it was used to inspire the way that classical random variables could yield a value of maximal correlation. Nobody would generally care about this, of course, without the fact that QM violates such an inequality (and generalizations and variations of it).

That means they are essentially dealing with commutative C* algebras (integration with respect to measures gives the dual of such). The crucial aspect of QM is the non-commutative nature of the underlying operator algebra.
Correct. They are NOT joint distributions because such are, in essence, defined from the commutative theory. QM is inherently non-commutative. Probability theory based on measures cannot model this behavior. But states in a Hilbert space and operators on such can and do model things very well.
Since when are random variables (which are necessarily real) commutative C*-algebras, again? Oh, and since when did we not require measures for QM? Are we still dealing with your arbitrary, imaginary distinction of a special measure theory you invented, the one where this:

You cannot do QM with commutative C* algebra theory (i.e., measures and random variables).
is somehow true? Or are you trying again to make sense out of the mathematics here?

Ultimately, the point is that measure theory is, essentially, commutative operator theory.
Are you aware that this is considered one of the most radical interpretations of QM? It is far beyond the sort of indeterminacy or instrumentalism even attributed to Bohr and Heisenberg (mostly incorrectly), let alone relational QM, Healey's pragmatism, operationalist QM, the statistical interpretation (of Ballentine and others), the generalized probabilistic theory interpretations, the entire class of epistemic interpretations, etc.? That there exist basically two approaches to QM that hold this view to be true:

The wave function you use describes your uncertainties.
Exactly. In the commutative theory (integration with respect to a measure), certain inequalities between correlations can be proved. The corresponding inequalities are false in the non-commutative case of operators on a Hilbert space.
Since QM is based on operators, it tends to fail the inequalities that are true for measure theory.
The collection of bounded operators on a Hilbert space is a non-commutative C* algebra. The positive linear functionals correspond to elements of the underlying Hilbert space via A --> <Ax, x>. So the positive linear functionals of norm 1 correspond to normalized states....
Similarly, the collection of continuous functions on a compact Hausdorff space is a commutative C* algebra. The positive linear functionals on this C* algebra correspond to positive Borel measures on the compact space via integration. Those of norm 1 thereby correspond to the probability measures.
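In the simplest finite-dimensional cases (numbers mine, purely illustrative), both correspondences fit in a few lines: integration against a probability measure on a two-point space on the commutative side, and A --> <Ax, x> for a unit vector on the noncommutative side.

```python
# Illustrative: norm-1 positive linear functionals in both settings.
import numpy as np

# (i) Commutative: "functions" on the two-point space {1, 2}.
mu = np.array([0.3, 0.7])                  # a probability measure
integrate = lambda f: float(mu @ f)        # positive, linear, integrate(1) = 1
print(integrate(np.array([2.0, 5.0])))     # 0.3*2 + 0.7*5 = 4.1
print(integrate(np.ones(2)))               # 1.0: a norm-1 functional

# (ii) Noncommutative: bounded operators on C^2, unit vector x.
x = np.array([1, 1j]) / np.sqrt(2)
omega = lambda A: float((x.conj() @ A @ x).real)   # A -> <Ax, x>
print(omega(np.eye(2)))                    # 1.0: a norm-1 positive functional
```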
...
And yes, non-commutativity is essential here. It is crucial that the C* algebra be non-commutative in QM. That is why many ideas from (commutative) measure theory fail in QM.
...
Correct. They are NOT joint distributions because such are, in essence, defined from the commutative theory. QM is inherently non-commutative. Probability theory based on measures cannot model this behavior. But states in a Hilbert space and operators on such can and do model things very well.
You cannot do QM with commutative C* algebra theory (i.e, measures and random variables).
FYI: In addition to misrepresenting the use of C*-algebras in QM and operator algebras more generally, you've confused and conflated two different approaches with different definitions of states, and that actually matters. So your "norm 1" is true, for example, in the standard case but doesn't make sense in the algebraic case formulated using C*-algebras. The discussion of measure is almost entirely wrong. So are the descriptions of Bell's theorem and QM. But much of that I've already covered. What I want to do now is show you a little bit of what the actual algebraic approach involves, in contrast to the Hilbert space approach, as opposed to the convoluted mixing of fundamental notions (with a bunch of unnecessary aspects of functional analysis that are mostly trivial and not relevant thrown in).

Once again, measure theory, random variables, and joint distributions are essentially assuming things are commutative. But QM works in operator theory which is inherently non-commutative. There *are* notions that correspond to 'measures, random variables, and joint distributions', but they are fundamentally different because of the non-commutativity of the operators.
And, if you use operator theory and those notions, QM gives the correct results.
The neural matrix of the brain is a medium that allows both quantum effects as well as consciousness. At this internal neural level, it is very likely they are connected to each other.
Most of the arguments against the connection between mind and the quantum universe are based on what is outside the brain, that the eyes see and reference, while ignoring the obvious internal connections. This is because science is more extroverted and expects the answers to reality to be outside itself, whereas an introverted approach (the inside world of imagination and self-awareness) would make this easier to see, being more self-contained.
The hydrogen proton, which is connected to hydrogen bonding in water, for example, quantum tunnels in entangled pairs. This happens in all aspects of life, including the brain.
Entropy is the key. Entropy is considered a state variable, meaning for any given state of matter, there will be a constant entropy value. What this means is the random and the quantum aspects, that are used to model the microscopic aspects of any given state, need to add to a constant entropy. What appears to be random is actually controlled by the determinism of the constant entropic state.
The entropy of the state forms a deterministic rule, which then causes the needed entanglements between consciousness, chemistry and quantum physics, since these all need to add to a constant. Consciousness works based on the second law. The neurons, via ion pumping, lower ionic entropy, setting a potential with the second law; neurons will need to fire and ionic currents need to disperse. Consciousness and life both need to evolve to higher complexity, driven by the second law. It does so in quantum jumps, into states of constant entropy, with new entanglements appearing upon each steady state, so the state remains constant.
The life sciences still overuse a statistical approach since they leave out the deterministic nature of the water in life. The water helps to mediate the constant entropic state entanglements. Water controls the shapes of things inside the cell, so any given volume of water can form a constant entropic state. Getting the quantum ducks in a row is the nature of life.