
Einstein and "spooky actions"

Reptillian

Hamburgler Extraordinaire
Hi Reptillian,

They were debating the multiple worlds interpretation of QM. I objected to Legion's claim that, unlike in classical mechanics, the QM state (namely the ket or wavefunction) does not represent the state of a physical system.

Ah, yeah I agree that the wavefunction represents the state of the physical system. Both before and after measurement.

The reason we're talking about cards as vector spaces is because I was trying to model picking a card off of a deck as a quantum mechanical operation.

What's the measurement picking out...card color? card suit? number or face? even or odd value?

All I know is that Bell's Inequality means that it makes an observable difference whether your quantum cards have the property you're measuring prior to picking it out.
 

LegionOnomaMoi

Veteran Member
Premium Member
Possibly, but nothing you quoted appears to contradict me.
Did you read my last post? Why is a state vector represented by basis vectors?


If we just wanted to represent each card as "a vector," sure, we could just have a 1D vector in the space [0,52]

There's a reason I used the ' symbol when representing vectors
This card is represented by a vector [a1, a2, a3,...a52]'

Generally speaking, vectors are written vertically. That way we can distinguish between a 1 × 53 matrix (a row) and a 53-dimensional column vector. However, either way [0,52] represents a closed interval, not a vector or matrix, unless it is a vector with only 2 entries/components (and we relax the convention that vectors be written vertically, which is sometimes done; but as the whole problem here is related to what notations mean, let us both try to be precise when we use them).

A vector [0, ...51, 52]' would be a 53-dimensional vector because you included 0, which is an entry in the vector, making 53 total entries. A vector with 1,000 components/entries, all of which are 0's, would be in 1,000-dimensional space.

The framework of quantum mechanics says that the vectors representing our eigenstates (of which there are 52, because there are 52 eigenvalues, i.e. measurable outcomes) should be basis vectors, and thus we need the vectors to be 52D.
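
To make the counting concrete, here is a minimal numpy sketch of that framework; the diagonal form of the operator and the outcome labels 1 through 52 are assumptions chosen purely for illustration:

```python
import numpy as np

# Hypothetical "draw a card" observable: a 52x52 diagonal operator whose
# 52 distinct eigenvalues label the 52 measurable outcomes. The labels
# 1..52 are an arbitrary choice for this illustration.
outcomes = np.arange(1, 53, dtype=float)
A = np.diag(outcomes)

vals, vecs = np.linalg.eig(A)

# 52 distinct eigenvalues, and the eigenvectors (here the standard basis
# of R^52) are linearly independent, so the space must be 52-dimensional.
assert np.allclose(np.sort(vals), outcomes)
assert np.linalg.matrix_rank(vecs) == 52
```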

How would you interpret these (FYI, I got these from "pic"-mining Google Images):

[Image: a ket |ψ⟩ expanded as a sum of basis kets weighted by the inner products ⟨e_i|ψ⟩]

[Image: a bra, an operator, and a ket multiplied together, with the bra's entries carrying asterisks]

[Image: a column vector times its conjugate transpose — the outer product — with asterisks on the conjugated ψ entries]


I don't know about you, but it seems the ket in the first is equal to a "vector" of inner products. In the second, a ket, a bra, and an operator are used and with these "vectors" (treated as a matrix and its transpose, which one can do with vectors) we can multiply a "column vector" (a matrix) by a matrix operator and a vector. Apparently the Ax=b notation and its variants didn't cut it, so now we have to resort to asterisks. In the 3rd, once again we have a matrix and its transpose giving us the expected matrix that an outer product would, but why do the psi symbols apart from the ket one all have asterisks?

The space is C^53 with the equivalence relation that, for any vector A, A = xA for any complex value x.
1) The reason the A's are in caps but not the x is because the x is a vector. The standard notation for a linear transformation is Ax=b, because a matrix A multiplied by a vector x will yield a vector b. It's a function.
2) Using another image search, I found:

[Image: the vector A written in the Cartesian basis, A = A_x e_x + A_y e_y + A_z e_z]


which is apparently on wiki's English mobile site (en.mobile.Wikipedia.org). Those e's are the standard basis vectors for R3. A is the matrix you'd get (a diagonal of 1's with all other entries being 0's) if you treated the columns of some matrix A multiplied by the standard basis vectors.



Do the CS thing and invent your own notation.

I don't have to. It already exists:
“Mathematicians tend to despise Dirac notation, because it can prevent them from making important distinctions, but physicists love it, because they are always forgetting such distinctions exist and the notation liberates them from having to remember.” - D. Mermin

I've quoted that before. Mathematicians have complex analysis and it doesn't require Dirac's notation. I use what mathematicians have used since before Dirac was born.

I don't follow. A linear combination of vectors is, by the definition of what a vector is, also a vector.
I read that as there existing some set of coefficients to represent Ψ in one basis, and there also exists another (implicitly distinct) set to represent it in another basis. Of course that's true.
A vector space is a basis space iff it is made up of a linear combination of vectors s.t. the set of these vectors spans the vector space and is closed under scalar multiplication and vector addition (i.e., the vectors are independent; adding another vector will not take us out of this space, nor will multiplication by a scalar). The only way to know if we have a basis is to see if some collection of vectors satisfies these requirements. No vector (except the null one) can be a basis for any vector space by itself.
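
For concreteness, here is a small numpy sketch (my own illustration, not anything from the posts above): for n vectors in R^n, spanning and linear independence both come down to the rank of the matrix whose columns they form.

```python
import numpy as np

# Three candidate basis vectors for R^3, placed as the columns of B.
B = np.column_stack([[1, 0, 0],
                     [1, 1, 0],
                     [1, 1, 1]]).astype(float)

# For n vectors in R^n, "spans" and "linearly independent" both reduce
# to the matrix having full rank.
print(np.linalg.matrix_rank(B) == 3)   # True: this set is a basis

# A set with a redundant vector fails: the third column below is the
# sum of the first two, so the rank drops.
C = np.column_stack([[1, 0, 0], [0, 1, 0], [1, 1, 0]]).astype(float)
print(np.linalg.matrix_rank(C) == 3)   # False: not a basis of R^3
```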



For deriding a lack of rigor

The point of rigor is not that one must always be precise, as among other things it is far more difficult to teach that way and it is hard to talk informally about experimental designs and/or analyses and keep to such precision. When something is published it's a different story. Dirac actually published a book to supply a formalism that would be less precise than that already existing.

The reason (IMO) that, e.g. the set of polynomials is an infinite dimensional function space is

...because there are infinitely many polynomials.

Wave functions can be thought of as vectors because the field we are building the vectors out of is the square integrable functions.
You do realize that a vector space is, by definition, a field over some set of vectors, right? And thus the above says "Wave functions can be thought of as a vector space because..."

So, how do you specify a function over an infinite space by example?

You aren't using an infinite space, but a finite-dimensional Hilbert space. These exist.

We are dealing with a projective space
Seriously? "Suppose we have a point (x,y) in the Euclidean plane. To represent this same point in the projective plane, we simply add a third coordinate of 1 at the end: (x, y, 1)." from An Introduction to Projective Geometry (that I quote-mined). But as I can't exactly hand you my textbooks on the subject this was the best I can manage.




That's what "projective space" means.

Right. Luckily, the .pdf I linked to also has an html version, where we have a page for the definition: Projective Space. Now, when you want to admit that a point in 3D Hilbert space is a point in 3D P space, and therefore is defined by four coordinates rather than three, let me know.
I think you're quote mining because
...you are confusing terms (among other things) which makes things seem out of context:


The full sentence reads,
The space is projective, so all vectors pointing in the same direction are equivalent and multiplying the final answer by a scalar doesn't make a difference.
It has nothing to do with basis vectors

It has nothing to do with anything
1) Projective Hilbert spaces, along with projective Hilbert metrics, are real mathematical notions and neither are equivalent to Hilbert space.
2) You don't know what basis vectors are.
 

LegionOnomaMoi

Veteran Member
Premium Member
Anyone care to summarize what's being discussed for me so I don't have to read through tens of pages of posts?
Most of it is completely unrelated to the current discussions, but as I started the thread I'm ok with that.
I started the thread so as not to mess up one that Mr. Sprinkles created on QM, because someone was making claims about QM that both Mr. Spinkles and I thought inaccurate. I gave up first, and I don't believe Mr. Sprinkles made any headway either, although I'm not sure.

However, that part is completely unrelated to the current discussion, which has a history that goes back before the dawn of time (and Aslan and a stone table) in which Polyhedral and I have argued over the interpretation of QM formalism and how it relates to the measurement problem.

Recently, however, a 3rd party who's actually a physicist joined in: Mr. Sprinkles.

Ah, yeah I agree that the wavefunction represents the state of the physical system. Both before and after measurement.

As I think Mr. Sprinkles may have given up on me, perhaps I can direct your attention to some key posts which won't require you to wade through page after page. I believe that the best place to begin is where I neither used my own explanations nor quotations peppered with my own comments, but just quotations, all of which make up three sequential posts: 592-594.

Mr. Sprinkles responded here, and the last response (mine) was here.

Those are, I believe, all you need to understand the discussion Mr. Sprinkles had with me. As you agree, perhaps you can take up the mantle of [insert glorious title here] that Mr. Sprinkles wore.

The card game discussion started here, and my first response is two posts down from the link to the thread page (starting at a particular post) I gave in the latest link. My last post was the latest on this issue. I don't think it has anything to do with quantum physics but it is interesting because it's the first time I've been able to use projective geometry other than the reason for which I studied it.

By the way the dimension of a vector space is equal to the number of basis vectors for the space and basis vectors don't have to be orthonormal (that's why we have the Gram-Schmidt orthonormalization procedure)...they just have to have components which are orthogonal.

You may already know this but the context makes something of a mess, so for clarity: that procedure is used to create one function space of φ functions out of a sequence of the spin functions (which must be normalized), giving us a sequence of remainder functions φ that are orthogonal to the spin functions (i.e., we remove the projections of the spin functions). We then use this new sequence to create another sequence of functions ψ, and finally use this normalized function space to give us the function φn out of ψn divided by the square root of the inner product of ψn.

Basically, we add another step.
 

Reptillian

Hamburgler Extraordinaire
As I think Mr. Sprinkles may have given up on me, perhaps I can direct your attention to some key posts which won't require you to wade through page after page. I believe that the best place to begin is where I neither used my own explanations nor quotations peppered with my own comments, but just quotations, all of which make up three sequential posts: 592-594.

Mr. Sprinkles responded here, and the last response (mine) was here.

Those are, I believe, all you need to understand the discussion Mr. Sprinkles had with me. As you agree, perhaps you can take up the mantle of [insert glorious title here] that Mr. Sprinkles wore.

I agree with Spinkles in that one of the axioms of quantum mechanics is that physical systems are described as states...which are linear combinations of basis vectors in a vector space (well a Hilbert space). It's one of the Dirac-von Neumann axioms. Measurements correspond to eigenstates...regardless of whether the system was "actually" in a linear combination of eigenstates prior to measurement or was in that eigenstate all along, the physical system is still described by a vector in a Hilbert space by assumption.
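
As a toy sketch of that axiom (my own illustration, using a 4-dimensional space in place of the 52-dimensional one under discussion): the state is a unit vector, and measuring in an eigenbasis returns an eigenstate with Born-rule probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# A state is a unit vector; C^4 here just to keep the example small
# (the deck example would use C^52).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Measurement in the standard eigenbasis: Born-rule probabilities are
# |<e_i|psi>|^2, and the post-measurement state is the chosen eigenstate.
probs = np.abs(psi) ** 2
outcome = rng.choice(4, p=probs)

post = np.zeros(4, dtype=complex)
post[outcome] = 1.0   # the system is now described by an eigenvector

print(probs.sum())    # ~1.0: the outcome probabilities sum to one
```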

You may already know this but the context makes something of a mess, so for clarity: that procedure is used to create one function space of φ functions out of a sequence of the spin functions (which must be normalized), giving us a sequence of remainder functions φ that are orthogonal to the spin functions (i.e., we remove the projections of the spin functions). We then use this new sequence to create another sequence of functions ψ, and finally use this normalized function space to give us the function φn out of ψn divided by the square root of the inner product of ψn.

Basically, we add another step.

Assuming that your vector space is one of functions, then I'd suppose that is what the Gram-Schmidt procedure does. At its core in general, though, it takes a non-orthonormal basis in some abstract vector space and uses it to construct an orthonormal one.
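
In the finite-dimensional case the procedure is short enough to write out. A minimal numpy sketch, assuming plain R^3 vectors rather than a function space:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent set into an orthonormal one."""
    ortho = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in ortho:
            w = w - np.dot(u, w) * u   # subtract the projection onto u
        ortho.append(w / np.linalg.norm(w))
    return np.array(ortho)

Q = gram_schmidt([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]])
print(np.allclose(Q @ Q.T, np.eye(3)))   # True: rows are orthonormal
```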

(By the way, the asterisks on the psi entries in your above notation represent the complex conjugate of the psi wavefunctions...replace i with negative i in the psi...it's easy as pie...:D If you flip a ket into a bra you take the complex conjugate transpose in matrix notation)
 

PolyHedral

Superabacus Mystic
Did you read my last post? Why is a state vector represented by basis vectors?
Because every vector can be represented by a sum of basis vectors.

However, either way [0,52] represents a closed interval, not a vector or matrix, unless it is a vector with only 2 entries/components (and we are being more flexible about requiring a vector to be horizontal, which is sometimes done, but as the whole problem here is related to what notations mean, let us both try to be precise when we use them).
You may want to reread that post - the reference to [0,52] is not the vector, but the space the vector is defined on.

A vector [0, ...51, 52]' would be a 53 dimensional vector because you included 0, which is an entry in the vector making 53 total entries. A vector with 1,000 components/entries, all of which are 0's, would be in 1,000th-dimensional space.
That was an off-by-one error, and I edited it out of the post. (A 52D vector in projective Hilbert space does have 53 components, though.)

I don't know about you, but it seems the ket in the first is equal to a "vector" of inner products. In the second, a ket, a bra, and an operator are used and with these "vectors" (treated as a matrix and its transpose, which one can do with vectors) we can multiply a "column vector" (a matrix) by a matrix operator and a vector.
1) I don't know why "vector" is in quotes there, since that's a true vector - all the values of c are scalars.
2) The second expression also evaluates to a scalar value.

Apparently the Ax=b notation and its variants didn't cut it, so now we have to resort to asterisks. In the 3rd, once again we have a matrix and its transpose giving us the expected matrix that an outer product would, but why do the psi symbols apart from the ket one all have asterisks?
Are you genuinely asking this question, or expecting me not to know the answer? If the former, you may want to revise bra-ket notation, or possibly even just complex numbers.

1) The reason the A's are in caps but not the x is because the x is a vector. The standard notation for a linear transformation is Ax=b, because a matrix A multiplied by a vector x will yield a vector b. It's a function.
In the sentence you quoted, A was specifically defined to be a vector and x a scalar. (I guess if you wanted, you could interpret scalars as functions on vectors, but vectors are definitely not functions on scalars.) It doesn't matter what the standard usage in another context is.

which is apparently on wiki's English mobile site (en.mobile.Wikipedia.org). Those e's are the standard basis vectors for R3. A is the matrix you'd get (a diagonal of 1's with all other entries being 0's) if you treated the columns of some matrix A multiplied by the standard basis vectors.
Please don't use the same letters to represent more than one object. :cover:

I've quoted that before. Mathematicians have complex analysis and it doesn't require Dirac's notation. I use what mathematicians have used since before Dirac was born.
Dirac's notation does have the advantage of being quite strongly-typed.

A vector space is a basis space iff it is made up of a linear combination of vectors s.t. the set of these vectors spans the vector space and is closed under scalar multiplication and vector addition (i.e., the vectors are independent; adding another vector will not take us out of this space, nor will multiplication by a scalar).
What's a "basis space?" From that definition, it looks as though every vector space is a basis space.

The only way to know if we have a basis is to see if some collection of vectors satisfies these requirements. No vector (except the null one) can be a basis for any vector space by itself.
Every vector space has a basis*, and every set of vectors is a basis of some space. (Although not necessarily the space you originally got the vectors from, or a particularly useful space.)

(*Yes, you need the axiom of choice to prove that, but you can't prove that it's false in ZF!)

...because there are infinitely many polynomials.
There are infinitely many reals, but the reals are one-dimensional.

You do realize that a vector space is, by definition, a field over some set of vectors, right?
Vector spaces are not fields, because they have no multiplicative inverses. They're not even rings, because there is no "multiplication" that is closed on the set of vectors. Vector spaces have underlying fields, but they are not the same as those fields in any meaningful way.

You aren't using an infinite space, but a finite-dimensional Hilbert space. These exist.
I'm using a 52D space which is infinite in extent and dense. How do you suppose I specify a function over that space without variables?

Or are you actually OK with defining a function of two vectors as
[LaTeX image: the function written out explicitly, value by value]
?

Right. Luckily, the .pdf I linked to also has an html version, where we have a page for the definition: Projective Space. Now, when you want to admit that a point in 3D Hilbert space is a point in 3D P space, and therefore is defined by four coordinates rather than three, let me know.
When did I say otherwise? :areyoucra

It has nothing to do with anything
1) Projective Hilbert spaces, along with projective Hilbert metrics, are real mathematical notions and neither are equivalent to Hilbert space.
Projective Hilbert spaces are special cases of Hilbert spaces.
 

idav

Being
Premium Member
The cards in the deck need to be quantumly entangled so that you can't draw the same card twice. Once you draw a card the rest of the cards follow suit.
 

LegionOnomaMoi

Veteran Member
Premium Member
I agree with Spinkles in that one of the axioms of quantum mechanics is that physical systems are described as states
is it wrong to say that QM postulates the existence of a ket which represents the complete state of a physical system, not just our knowledge of it, and this is the standard, most common way of formulating QM?


regardless of whether the system was "actually" in a linear combination of eigenstates prior to measurement or was in that eigenstate all along, the physical system is still described by a vector in a Hilbert space by assumption.
The issue isn't whether the physical system was described by anything, but that it "actually" is in that state it is said to be prior to measurement, contradicting the standard interpretation:
"The orthodox position: The particle wasn't really anywhere. It was the act of measurement that forced the particle to "take a stand" (though how and why it decided on the point C we dare not ask). Jordan said it most starkly: "Observations not only disturb what is to be measured, they produce it...We compel (the particle) to assume a definite position." This view (the so-called Copenhagen interpretation) is associated with Bohr and his followers. Among physicists it has always been the most widely accepted position. Note, however, that if it is correct there is something very peculiar about the act of measurement- something that over half a century of debate has done precious little to illuminate" pp. 3-4



Assuming that your vector space is one of functions

Yes. Only

At its core in general, though, it takes a non-orthonormal basis in some abstract vector space and uses it to construct an orthonormal one.

In this case it does it multiple times. Same procedure (basically), just used to create several orthonormal function spaces out of one another.

(By the way, the asterisks on the psi entries in your above notation represent the complex conjugate of the psi wavefunctions...replace i with negative i in the psi...it's easy as pie...:D If you flip a ket into a bra you take the complex conjugate transpose in matrix notation)

I know. It was a rhetorical question, and a matter of some irritation, as it is yet another notational device that isn't used in complex analysis, where instead we have a vector with a line over it to denote that it is a complex conjugate.
 

LegionOnomaMoi

Veteran Member
Premium Member
You may want to reread that post - the reference to [0,52] is not the vector, but the space the vector is defined on.


I did need to reread it. But it hasn't gotten any better. First we'll look at the definition of a vector space (again) and then what you said:
"A vector space is a set V on which two operations + and · are defined, called vector addition and scalar multiplication. The operation + (vector addition) must satisfy the following conditions...The operation · (scalar multiplication) is defined between real numbers (or scalars) and vectors, and must satisfy the following conditions..." Vector Spaces.In any event, one of the conditions for both is closure. That is, in order to be a vector space neither scalar multiplication nor vector addition can take you out of that space. Here's your 1D vector space:

If we just wanted to represent each card as "a vector," sure, we could just have a 1D vector in the space [0,52]

BOTH vector addition and scalar multiplication will take you out of this space, whatever you meant by "the space [0,52]". Hence, not a vector space.

1) I don't know why "vector" is in quotes there, since that's a true vector - all the values of c are scalars.
2) The second expression also evaluates to a scalar value.

Because the "vector" in the first instance has as its entries not just a list of operations, but one in which every operation (inner product) has an identical notation to that "vector" which it is defining. It's the very notation I dislike, because the notation for this vector makes up its entries as well. As for the second...well unfortunately someone already gave that away.


In the sentence you quoted, A was specifically defined

...badly by a confused notation.


Please don't use the same letters to represent more than one object. :cover:

I DIDN'T. That's what QM is doing (I found the wiki page to check). This is what it says over that picture (emphasis added):

"The vector A can be written using any set of basis vectors and corresponding coordinate system. Informally basis vectors are like "building blocks of a vector", they are added together to make a vector, and the coordinates are the number of basis vectors in each direction. Two useful representations of a vector are simply a linear combination of basis vectors, and column matrices. Using the familiar Cartesian basis, a vector A is written [the image I used]" (Background: Vector spaces)

Where are the matrices in the image? (Wikipedia's entry on Matrix notation)

"Matrices are conventionally identified by bold uppercase letters such as A, B, etc. The entries of matrix A may be denoted as Ai j or ai j , according to the intended use." (source)

This is basic notation. So don't accuse me of improper use of notation just because you didn't know that QM is imprecise and the notations misleading. PLEASE READ A LINEAR ALGEBRA TEXTBOOK (QM/complex analysis is an extension, not a different field).



What's a "basis space?" From that definition, it looks as though every vector space is a basis space.
A vector space need not be formed from a set of linearly independent vectors. You can span some space via linear combinations of a set of vectors that are not all linearly independent. These vectors form a vector space, but there is something other than the trivial solution in the null space. That is, if the equation Ax=0 has only the trivial solution (i.e., x is all 0's), then the columns of A can be treated as basis vectors for some vector space. Alternatively, and less precisely, a basis can't contain redundant vectors.
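
In numpy terms (a small sketch added for concreteness, not from the original post), the trivial-null-space test is a rank check:

```python
import numpy as np

# Columns of A: two vectors in R^3. Ax = 0 has only the trivial
# solution iff the rank equals the number of columns.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# True: the columns are independent, so they can serve as basis
# vectors for the plane they span.
print(np.linalg.matrix_rank(A) == A.shape[1])
```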


Every vector space has a basis*,
Every vector space other than that formed by the 0 vector (which is a vector space) has infinitely many bases. A basis is a subset of some vector space V. To prove there are infinitely many, begin with the standard basis vectors for that space (e.g., if the space is R2, then e1 & e2 are the standard basis vectors). There are an infinite number of scalars we can multiply either or both of these vectors by. Each time, we obtain a new set of vectors that can still be basis vectors, because their linear combinations will still span R2.


every set of vectors is a basis of some space
Wrong. Let v1= [1, 1, 1], v2= [2, 2, 2] and v3 = [a1, a2, a3] (all transposed so that I can write them horizontally)

The third vector may contain whatever coefficients you pick, and you will never, ever, ever get a vector space. Why? Because each vector corresponds to a point in R3, but one vector is a multiple of another. To span R3, we need 3 vectors that are independent, and we don't have that. Also, you still don't seem to understand that vectors form a basis.
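
A quick numerical check of this example; the particular v3 below is an arbitrary choice:

```python
import numpy as np

# v2 is a multiple of v1, so no choice of v3 can make the three
# vectors independent, and the set can never span R^3.
v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([2.0, 2.0, 2.0])
v3 = np.array([4.0, -1.0, 7.0])   # an arbitrary pick for a1, a2, a3

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))   # at most 2, never 3
```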



(*Yes, you need the axiom of choice to prove that
Get a linear algebra textbook. Find out what a "basis" is. You'll get your proof without this ridiculous axiom of choice thing. It's probably in the lecture notes I linked to. It's on Wikipedia. It's trivial.

There are infinitely many reals, but the reals are one-dimensional.
You really can't read the pic you used? First, it proves that polynomial degrees can be mapped to vectors s.t. the dimension of such a vector is equal to the degree.
Second, it does so using functions on the natural numbers. What you've said is that we can form infinitely vectors using the natural numbers, but not the reals. Why? Because you don't know what you are looking at.


Vector spaces are not fields, because they have no multiplicative inverses
"A set V equipped with binary operation of addition and with the operation of multiplication by numbers from the Field K, is called a linear vector space over the field K, if the following conditions are fulfilled:
{standard 8 conditions}
The elements of a linear vector space are usually called vectors" p. 11 of Sharipov's Course of Linear Algebra and Multidimensional Geometry

"over the field"
You do realize that a vector space is, by definition, a field over some set of vectors, right?

Maybe a reference text would be easier, like Oxford's User's Guide to Mathematics. There we go right from scalar fields to vector fields (how else do we understand curl?): "Suppose we are given a vector field F=F(P), i.e., an assignment of a vector F(P) to each point P."

Or maybe the problem is that you don't know what a vector space is. But how will you define tensors?

Vector spaces have underlying fields, but they are not the same as those fields in any meaningful way.

Right. They are defined this way, but because you don't know the difference between a vector and a vector space you don't realize that a collection of vectors (which has a multiplicative inverse) can form a vector space because it is a set and this set can be defined by some field over it for the same reason

I'm using a 52D space which is infinite in extent and dense. How do you suppose I specify a function over that space without variables?

I brought up a deck as an analogy. You wanted to turn this into quantum shuffling. So now we have a 52D space that is infinitely dense and without bounds, yet represents a known deck of cards, such that you can ask me why drawing one can't mean something? You set up a specific example and asked me why I can't imagine 52-superposition states of the person who drew the card. It's because, as I said then, your formula was meaningless and it's gone downhill from there. You can't even give me an example of how we'd represent drawing a card mathematically but you want me to answer that question?

When did I say otherwise
You didn't. You made a mistake. You're covering for a mistake yet again. Or you are correct, and you can show me how QM always uses a Hilbert metric, and why we have something called "Hilbert space" and "Hilbert Geometry".

Projective Hilbert spaces are special cases of Hilbert spaces.
This is the same Hilbert space you do QM in - complex projective function space.
 

Reptillian

Hamburgler Extraordinaire
The cards in the deck need to be quantumly entangled so that you can't draw the same card twice. Once you draw a card the rest of the cards follow suit.

That's a very good point. We'd have to do as you suggest, unless we have a bunch of decks of quantum cards and only draw one card from each deck. (which is what we'd have to do in the statistical/ensemble interpretation of quantum mechanics...since in that interpretation, quantum mathematics only applies to collections of systems all in the same quantum state...not to individual physical systems)

The issue isn't whether the physical system was described by anything, but that it "actually" is in that state it is said to be prior to measurement, contradicting the standard interpretation:

I'll have to agree with you there since it would make an observable experimental difference if it were "actually" in that state prior to measurement according to the standard interpretation.

I know. It was a rhetorical question, and a matter of some irritation, as it is yet another notational device that isn't used in complex analysis, where instead we have a vector with a line over it to denote that it is a complex conjugate.

I remember taking complex analysis and finding the notation irksome at times too (I learned the physics star notation first. I had the same problem with notation in calculus with spherical polar coordinates...phi and theta are different things for physicists and mathematicians...not to mention all the dubious shortcuts physicists take that make mathematicians cringe and fidget in their seats)
 

LegionOnomaMoi

Veteran Member
Premium Member
I remember taking complex analysis and finding the notation irksome at times too (I learned the physics star notation first. I had the same problem with notation in calculus with spherical polar coordinates...phi and theta are different things for physicists and mathematicians...not to mention all the dubious shortcuts physicists take that make mathematicians cringe and fidget in their seats)

If I'm reading the above correctly, it sounds like you started with physics and then learned the mathematics necessary as you went forward. If that's true, I have a question: did you get introduced to, and get used to, vectors (whether in terms of x, y, & z coordinates or forces or other classical mechanics uses), before you were ever exposed to matrices? I ask both from experiences tutoring physics majors and because having...uh...acquired (stolen from family members) a few textbooks and bought two myself, I've found that a university level textbook intended for a full two semesters can be replete with vectors and never even contain the word "matrix".

There are a few professors who have started what I hope will change college curricula (those here who have created several calculus and analysis textbooks completely free and much better than most textbooks, Prof. Gilbert Strang and others who teach a lot of engineers and physicists, to name a few).

Calculus used to be the method for physics, because before it we had nothing much with which to handle things like change, nonlinear functions, etc. And for several centuries, enormous time and effort was devoted to making calculus rigorous. Then, about 100 years after Weierstraß, physics changed radically.

So why do students still take several semesters of calculus (often with vectors and matrices introduced haphazardly or as if they were something to get through as fast as possible), when we have one QM formalism called "matrix mechanics" and matrices and vectors have become at least as important as calculus? Come to think of it, why do university physics textbooks delve into classical mechanics so deeply, only to have multiple terms and notions replaced (often without changing the name) when the section on QM (or a new textbook on QM) comes up?

I'm ranting again. I've been meaning to start a thread on this (inspired by a notion "lies to children" which Polyhedral introduced me to), so that I can at least rant in a coherent way. When does simplification and inaccuracy stop being a stairway to higher levels and start being unnecessary distortions? At least by starting a thread it can be ignored completely rather than going off topic (this thread started as an off-topic issue that was quickly abandoned in favor of other off-topic discussions only to have these be replaced by yet another set).
 

PolyHedral

Superabacus Mystic
BOTH vector addition and scalar multiplication will take you out of this space, whatever you meant by "the space [0,52]". Hence, not a vector space.
Oh, that's true. Do it on Z_52 instead, then. (Interesting that you didn't spot that fractional cards don't make much sense.) If that still doesn't work, then clearly it's not actually as good an idea as you suggested.

Because the "vector" in the first instance has as its entries not just a list of operations, but one in which every operation (inner product) has an identical notation to that "vector" which it is defining. It's the very notation I dislike, because the notation for this vector makes up its entries as well. As for the second...well unfortunately someone already gave that away.
You've never seen a polynomial written like this?
[LaTeX image: a polynomial expressed through its derivatives at 0, p(x) = Σ_n p^(n)(0)/n! · x^n]


Maybe my Lin. Alg. lecturer was just annoyingly obtuse. :shrug:


...badly by a confused notation.
What's badly confused about it? Notations are not wrong because they defy convention.

I DIDN'T. That's what QM is doing (I found the wiki page to check). This is what it says over that picture (emphasis added):

"The vector A can be written using any set of basis vectors and corresponding coordinate system. Informally basis vectors are like "building blocks of a vector", they are added together to make a vector, and the coordinates are the number of basis vectors in each direction. Two useful representations of a vector are simply a linear combination of basis vectors, and column matrices. Using the familiar Cartesian basis, a vector A is written [the image I used]" (Background: Vector spaces)
Accompanying the image, you say:
Those e's are the standard basis vectors for R3. A is the matrix you'd get (a diagonal of 1's with all other entries being 0's) if you treated the columns of some matrix A multiplied by the standard basis vectors.
You make reference to A having multiple columns, and a meaningful diagonal, whereas the A in the image does not. :shrug:

This is basic notation. So don't accuse me of improper use of notation just because you didn't know that QM is imprecise and the notations misleading. PLEASE READ A LINEAR ALGEBRA TEXTBOOK (QM/complex analysis is an extension, not a different field).
As alluded to, I have studied it at university level. It may interest you to know that I aced the exam. Please stop talking down so much.

A vector space need not be formed from a set of linearly independent vectors. You can span some space via linear combinations of a set of vectors that are not all linearly independent. These vectors form a vector space, but there is something other than the trivial solution in the null space. That is, if the equation Ax=0 has only the trivial solution (i.e., x is all 0's), then the columns of A can be treated as basis vectors for some vector space. Alternatively, and less precisely, a basis can't contain redundant vectors.
I was asking specifically about your term "basis space." I think we've gone over the concept of basis set enough already.

Wrong. Let v1= [1, 1, 1], v2= [2, 2, 2] and v3 = [a1, a2, a3] (all transposed so that I can write them horizontally)

The third vector may contain whatever coefficients you pick, and you will never, ever, ever get a vector space. Why? Because each vector corresponds to a point in R3, but one vector is a multiple of another. To span R3, we need 3 vectors that are independent, and we don't have that. Also, you still don't seem to understand that vectors form a basis.
The bold sentence is not as connected to the rest of the paragraph as it looks. Those three vectors will never span R3, but R3 is not the only vector space. Those three vectors will span R2, or R1 very easily.

(They will not form a basis, though, that was my mistake. Assuming, that is, you require all of the elements of a basis to be independent.)

Get a linear algebra textbook. Find out what a "basis" is. You'll get your proof without this ridiculous axiom of choice thing. It's probably in the lecture notes I linked to. It's on Wikipedia. It's trivial.
Not in the infinite-dimensional case.

You really can't read the pic you used?
I wrote it.

First, it proves that polynomial degrees can be mapped to vectors s.t. the dimension of such a vector is equal to the degree.
Second, it does so using functions on the natural numbers. What you've said is that we can form infinitely [many?] vectors using the natural numbers, but not the reals. Why? Because you don't know what you are looking at.
Where did I say that? I said that the reals were one-dimensional, with the intended implication that the size of the set is not what defines dimensionality. I did not say you could not form infinitely many vectors using the reals as the underlying field.

"over the field"

Maybe a reference text would be easier, like Oxford's User's Guide to Mathematics. There we go right from scalar fields to vector fields (how else do we understand curl?): "Suppose we are given a vector field F=F(P), i.e., an assignment of a vector F(P) to each point P."
You've used three different phrasings to describe this so far:

  • "You do realize that a vector space is, by definition, a field over some set of vectors, right?"
  • "a linear vector space over the field K"
  • an assignment of a vector F(P) to each point P.
Are you asserting that all three of these mean the same thing?

Also, what operation corresponds to vector "multiplication?" I honestly can't think of what it would be. :shrug:

Or maybe the problem is that you don't know what a vector space is. But how will you define tensors?
As much as I enjoy talking with you, insulting my intelligence or education is against the rules.

Right. They are defined this way, but because you don't know the difference between a vector and a vector space you don't realize that a collection of vectors (which has a multiplicative inverse) can form a vector space because it is a set and this set can be defined by some field over it for the same reason
Support that bolded bit. What's the multiplication operation on vectors and how does one invert it?

You can't even give me an example of how we'd represent drawing a card mathematically but you want me to answer that question?
I did give an example, which you called meaningless, having misinterpreted it:

Consider the value of the deck in a 52D complex projective space. It is 52D because we have 52 eigenvalues of our "Draw a card" operator, and thus 52 linearly independent eigenvectors. The state of a perfectly shuffled deck is therefore the sum of all 52 vectors, multiplied by 1/2sqrt(13).
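
That normalization does check out, since 1/(2√13) = 1/√52. A small numpy sketch (my own illustration, not from the original post):

```python
import numpy as np

# 1/(2*sqrt(13)) is just 1/sqrt(52), so the sum of all 52 basis
# vectors times that factor is a properly normalized state.
deck = np.ones(52) / (2.0 * np.sqrt(13.0))

print(np.isclose(np.linalg.norm(deck), 1.0))   # True: a unit vector
print(np.allclose(deck**2, 1.0 / 52.0))        # each card: probability 1/52
```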

You didn't. You made a mistake. You're covering for a mistake yet again. Or you are correct, and you can show me how QM always uses a Hilbert metric, and why we have something called "Hilbert space" and "Hilbert Geometry".
I didn't say otherwise, yet I made a mistake? I'm confused.:shrug:
 

LegionOnomaMoi

Veteran Member
Premium Member
As much as I enjoy talking with you, insulting my intelligence or education is against the rules.

I'm going to address the rest as soon as I'm done with this, but I want to make one thing clear: this is an online forum, and thus discussions lack the face-to-face advantage of realizing when one has gone too far or said something that was taken as more than just a "you are wrong" but a "you are stupid". I do not mean to insult your intelligence; I have respected it and said so multiple times (hopefully as many as the times when I've been unnecessarily offensive), and I still respect it. I have no idea what your education is, so I can't insult it, but I can say that I believe whatever you have learned through whatever ways you have is inaccurate in some way or ways. That is not intended to reflect anything about your intelligence, but the human condition.

Ask me about statistical models, neurons, or whether there is evidence in Homer that supports the idea of a Pre-Indo-European stage of PIE in which the language was an active/stative type, and I'll have a lot to say. Ask me about an infinite number of other things, and I'll either have nothing much to say or not know what it is you are talking about.

I think you are missing some basic notions here that may be the result of not dealing with a lot of this particular type of math. I absolutely do not think this is any indicator whatsoever of your intelligence. Nor am I less impressed with the level of knowledge you've expressed in countless posts in so many threads I've lost track. So I apologize that, in my frustration, I have said things which imply what I did not intend them to.

Typically, I either know a topic, or I don't. I don't really ever simply read a little on some research area. So I'll fight to the death when I think I'm right because I've been reading literature on the subject for a long time (or I wouldn't be debating it). I've still been wrong, even in such cases. I am not a physicist, and no matter how much I read physics literature, I know there is a difference between actually doing experiments and just reading about them. Vectors, matrices, etc., on the other hand, play a major role in what I do and I use them constantly. So when I have repeatedly stated things backed by sources and continued to deal with the same issues, it's doubly frustrating because I am not simply dealing with some topic I've studied as a hobby but something which is fairly central to my work.

That's no excuse, however, to write things that imply I don't respect your intelligence and education; on the contrary, I find both impressive. So once again, I apologize.
 

Reptillian

Hamburgler Extraordinaire
If I'm reading the above correctly, it sounds like you started with physics and then learned the mathematics necessary as you went forward. If that's true, I have a question: did you get introduced to, and get used to, vectors (whether in terms of x, y, & z coordinates or forces or other classical mechanics uses), before you were ever exposed to matrices? I ask both from experiences tutoring physics majors and because having...uh...acquired (stolen from family members) a few textbooks and bought two myself, I've found that a university level textbook intended for a full two semesters can be replete with vectors and never even contain the word "matrix".

There are a few professors who have started what I hope will change college curricula (those here who have created several calculus and analysis textbooks completely free and much better than most textbooks, Prof. Gilbert Strang and others who teach a lot of engineers and physicists, to name a few).

Calculus used to be the method for physics, because before it we had nothing much with which to handle things like change, nonlinear functions, etc. And for several centuries, enormous time and effort was devoted to making calculus rigorous. Then, about 100 years after Weierstraß, physics changed radically.

So why do students still take several semesters of calculus (often with vectors and matrices introduced haphazardly or as if they were something to get through as fast as possible), when we have one QM formalism called "matrix mechanics" and matrices and vectors have become at least as important as calculus? Come to think of it, why do university physics textbooks delve into classical mechanics so deeply, only to have multiple terms and notions replaced (often without changing the name) when the section on QM (or a new textbook on QM) comes up?

I'm ranting again. I've been meaning to start a thread on this (inspired by a notion "lies to children" which Polyhedral introduced me to), so that I can at least rant in a coherent way. When does simplification and inaccuracy stop being a stairway to higher levels and start being unnecessary distortions? At least by starting a thread it can be ignored completely rather than going off topic (this thread started as an off-topic issue that was quickly abandoned in favor of other off-topic discussions only to have these be replaced by yet another set).

Yeah, I pretty much just picked up the math as necessary. I came pretty close to having math as a second major as a result. I never had much of an interest in math in school unless it was applicable to a scientific discipline. I view the difference between math and science as the difference between grammar and poetry...guess I'm a poet at heart. I first ran across matrices briefly in the second year of high school algebra and then not again until I took a college course in engineering math. Rotations, stresses, strains, and the like. I definitely learned about vectors well before matrices. I learned statistics almost entirely through physics. I took my first undergraduate level statistics class as a graduate physics student and found that I already knew most of the material...except for the Student t-distribution, the central limit theorem (I had always wondered why Gaussian distributions were so prevalent in nature), and some stuff from Bayesian statistics.

In general college subjects need to include more interdisciplinary studies. Professors tend to wall themselves up into their departments and specialties and few focus on the "big picture".

I'd never heard of quantum physics until my second year of classes when I took Modern Physics. The professor decided to skip the chapter on special relativity so he could cover more quantum stuff and bridge the gap between classical physics and quantum more smoothly.

If you start a new thread, be sure to post the link.
 

LegionOnomaMoi

Veteran Member
Premium Member
Interesting that you didn't spot that fractional cards don't make much sense

Making sense is relative.
clearly it's not actually as good an idea as you suggested.

What idea? The card analogy?
My point was to clarify what "complete knowledge" entails from a pragmatic standpoint (regardless of interpretation).

You've never seen a polynomial written like this?

I've seen many iterated functions of approximations that look like that. More commonly I've seen a good many characteristic equations expressed in a similar form (not to be confused with the characteristic polynomial equation in linear algebra), used to find 0's for some polynomial of nth degree. The problem is that you have the same argument and it's not a variable but 0 and you have no indices which makes interpretation (without further context) somewhat difficult. After all, the actual polynomial isn't given, and thus we can't evaluate any derivative at 0 because we have nothing to evaluate. Also, you don't have = 0 or some similar concrete relation ( = ), yet you have specific arguments. Also, I haven't seen equations like that in courses that are just linear algebra rather than multivariate mathematics (some combination of linear algebra with multivariate calculus).

Most importantly, though, I don't see how it is a response to what I said.

Notations are not wrong because they defy convention.

Notations are conventions. It's a universal language that means scientists (unlike, e.g., classicists) don't have to learn 3 other modern languages. Tables of results, equations of models, proofs and derivations, etc., are universal languages just so long as they are standardized.

Unconventional use of notation defeats the purpose.

whereas the A in the image does not.
That doesn't mean it isn't using it. That's the whole point of notation, including matrices themselves. Behind a matrix is a system:

[Image: a system of linear equations written out in full, the form that a matrix abbreviates]



Each number in a matrix is really multiplying a variable, but as this is understood, by stripping away the variables it becomes easier and clearer to carry out computations and understand the mathematical structure as long as a standard and consistent notation is used.


The wiki pic is doing matrix multiplication using the mother of all linear algebra equations Ax=b. Here, the matrix A is this:


[Image: the 3×3 transformation matrix A]


If we pretend that the left side of the following matrix is the above matrix, we can see clearly how the wiki pic is doing matrix multiplication:

[Image: a matrix multiplying a column of x, y, z coordinates, written out as the corresponding system of equations]



The matrix takes the x, y, and z coordinates and transforms them into a new matrix which, in this case, is a diagonal matrix, only instead of 1's we'd have the actual values that Ax etc. represent.

However, not only do we never see the original transformation matrix (that's not so bad, we all know what e means), instead of a vector with the coordinates we find matrix notation:

"Matrices are conventionally identified by bold uppercase letters such as A, B, etc. The entries of matrix A may be denoted as Aij or aik , according to the intended use." (links in last reply).

Not only that, but what should happen in the last step is a new matrix that combines the columns into another 3 by 3 matrix, indicating the underlying structure. That's what the whole matrix schema is for: so that we need not write out all the variables, and so that the mathematical structure is made clear. Instead, what we find is standard matrix notation for matrix entries misused, the original matrix never shown, and finally a combination of x, y, and z columns forming an improperly aligned vector. That last "vector" is not equal to A, but to a vector in which every entry is a matrix.
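
One fact underneath all of this can be stated in code. A minimal numpy sketch (my own illustration): multiplying a matrix by the k-th standard basis vector picks out its k-th column, which is all the identity in the wiki image relies on.

```python
import numpy as np

# Multiplying a matrix by the k-th standard basis vector returns its
# k-th column; this is what writing A = Ax*ex + Ay*ey + Az*ez uses.
A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])

for k in range(3):
    e_k = np.zeros(3)
    e_k[k] = 1.0
    print(np.allclose(A @ e_k, A[:, k]))   # True for each k
```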



your term "basis space." I think we've gone over the concept of basis set enough already.
No, we haven't:
You may want to reread that post - the reference to [0,52] is not the vector, but the space the vector is defined on

You seem to want to think of vectors as defined on some space, rather than the equivalent of a point in that space with direction (for physicists anyway). If so, then the basis vectors that span a space and are linearly independent exist in a basis space. It's not accurate, but I tried the more precise approach earlier (emphasis added):

it is 52-dimensional (because otherwise there couldn't be 52 distinct basis vectors.)
1) Within 2D Euclidean space, there are infinitely many pairs of vectors that span that space. Think about the properties of a vector space in general: any linear combination spans the space.

You still do not seem to understand how a linear combination defines a vector space:
That's a non-sequitor. A linear combination of vectors is a vector - I never claimed the elements were vector spaces in themselves.

Apart from my own definitions, I used two (and full text for you) here.


Those three vectors will span R2, or R1 very easily.
They can't: Dimension theorem for vector spaces. Each of those vectors exists in R3. To span R2, we'd need closure under the 2 required operations. However, for any plane spanned by these vectors we could use either operation and not be on that plane. We do not have the required closure, and if we are mapping from R3->R2 then we have non-trivial solutions to Ax=b.


Not in the infinite-dimensional case.

1) If any set of basis vectors has 52 elements in it, as this one does, that implies the space is 52-dimensional.

Infinite-dimensional does not mean it extends infinitely and is infinitely dense. Think of R1. It extends infinitely and is infinitely dense, yet is 1D. You insisted we are in 52D. If so, then it isn't an infinite-dimensional case (which would mean the dimensions, not the space, were infinite).




Fields are defined over some set. Please see General vector spaces over a field

Are you asserting that all three of these mean the same thing?
A vector field and a field over a vector space are two different things.

Also, what operation corresponds to vector "multiplication?"
Please see the link on general vector spaces above.


you called meaningless, having misinterpreted it:
Ax=λx. When you can tell me how to get 52 λ's out of that equation using your card scenario, then tell me how meaningful it is.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes you do - you know that the space they're in is a Hilbert one (by context) and that it is 52-dimensional (because otherwise there couldn't be 52 distinct basis vectors.) :p
Consider the value of the deck in a 52D complex projective space. It is 52D because we have 52 eigenvalues of our "Draw a card" operator, and thus 52 linearly independent eigenvectors. The state of a perfectly shuffled deck is therefore the sum of all 52 vectors, multiplied by 1/2sqrt(13).
1) If any set of basis vectors has 52 elements in it, as this one does, that implies the space is 52-dimensional.
2) So? We already know a basis exists.
The reason (IMO) that, e.g. the set of polynomials is an infinite dimensional function space is that this works:
[LaTeX image: a polynomial expressed through its derivatives at 0, p(x) = Σ_n p^(n)(0)/n! · x^n]

If there is anything to be salvaged from this quantum card game, it will necessarily involve some issues with dimensions, spaces, bases, and functions.

There are, it seems to me, a few central problems.
1) What a vector space or subspace is (what conditions must be met to make something such a space)
2) What "infinite-dimensional" means and how it relates to dimensional spaces
3) What eigenvectors and eigenvalues are.

The 2nd is the easiest because it narrows (infinitely, actually) what we have to deal with.
Wave functions can be thought of as vectors because the field we are building the vectors out of is the square integrable functions.
So, how do you specify a function over an infinite space by example?

The answer is easy. You don't do that. Because Hilbert space isn't a function over an infinite space, nor do we think of vectors as built out of square integrable functions; rather, these constitute an infinite number of functions on or over a space. They're a metric-imposing mathematical structure extending Euclidean space.

The right question is, "what is the field of square integrable functions?" The answer:
Take any function f. If f satisfies the following:

∫_{-∞}^{+∞} |f(x)|² dx < ∞


then f qualifies as a square integrable function. Basically, it means that any function defined on the number line qualifies so long as the integral of its squared absolute value is finite. To qualify as a Hilbert space, we need only ensure there is an inner product defined for any two functions f and g, and that we have the necessary metric or measure mu:
⟨f, g⟩ = ∫ f(x) g*(x) dμ(x)


All fascinating stuff, and a very scary way of saying that Hilbert space is similar to the space we seem to move around in (3D Euclidean space), only it has a few extra properties and it is infinite dimensional.
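
As a numerical illustration of those two conditions (a sketch using an assumed Gaussian example and approximating the integrals on a finite grid):

```python
import numpy as np

# Approximate the integrals on a finite grid; exp(-x^2) decays fast
# enough that truncating at |x| = 20 loses essentially nothing.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
f = np.exp(-x**2)
g = np.exp(-(x - 1.0)**2)

norm_sq = np.sum(f**2) * dx   # integral of |f|^2 -- finite, so f is in L^2
inner = np.sum(f * g) * dx    # <f, g> with the ordinary measure dx

print(norm_sq)   # ~ sqrt(pi/2) ≈ 1.2533
print(inner)     # ~ sqrt(pi/2) * exp(-1/2) ≈ 0.7602
```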

Let's say we really were working in a 52-dimensional space. How much larger is that space than a 4-dimensional space? Well, you can get pretty technical here or just realize that both spaces are infinitely dense and extend infinitely in all directions. The main difference is just the number of "directions" in which the space can extend infinitely.

The point, however, is that a 52-dimensional space is not an infinite dimensional space. So any worry we might have about functions dealing with infinite dimensional spaces is needless.

So now we are back in good ol' 52-dimensional space. Specifically, a linear finite Hilbert space. What do we do with it? Well, if we're interested in understanding how to approach this space using vectors, we figure out what vectors in this space would look like. In R2, a point is defined by two coordinates (x, y). The same is true here, only we have 52 coordinates. A vector in R2 corresponds to some point (x, y), but it has a direction, or represents an increment. The differences in physics are usually pretty clear, but in mathematics in general it is rather arbitrary. The temperature of a room at some time t is a point in 1D space. Same with the temperature of the room at some time t + 1 hour. Both are points. However, the increment or change from the first to the second has a direction and magnitude (the temperature change is a certain amount in a certain direction). That gives us a vector.

Once again, it is pretty much exactly the same in 52D. A vector is nothing more than a list of 52 entries that corresponds to a point in 52D space, but with a direction: from the origin to that point. Some vectors, however, are more important than others. Because if I take linear combinations of certain 52D vectors (for each vector, I can multiply it by a scalar, including 0), I can "hit" any point anywhere in the entire infinitely extending and infinitely dense 52D space. And if, by removing any one of these vectors, I can no longer do this, then I have basis vectors for that space.
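A quick numpy sanity check of that spanning claim, with random vectors standing in for whichever basis we pick (purely illustrative):

Code:
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((52, 52))    # 52 random vectors as columns: almost surely a basis
target = rng.standard_normal(52)     # an arbitrary point in 52D space

coeffs = np.linalg.solve(B, target)  # the scalars of the linear combination
print(np.allclose(B @ coeffs, target))  # True: the combination "hits" the point

# Remove one vector and a generic point is out of reach:
residual = np.linalg.lstsq(B[:, :51], target, rcond=None)[1]
print(residual)  # nonzero: 51 vectors no longer span the space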

The only real problem left is this German eigen-whatsit. THE equation in linear algebra is Ax=b. A is a matrix. However, a matrix is not just a grid of numbers but a function (more than that, actually). Specifically, we call it a linear transformation that takes the vector x and transforms it into b. Both vectors, in 52D space, have 52 entries. The matrix A must, therefore, have as many columns as the vector has rows (52). The equation we need to get both an eigenvalue and an eigenvector is Ax=λx. The reason this is a lot more important than it looks is that here the linear transformation maps the vector x onto some multiple of itself. The eigenvalue is the scalar that does to that eigenvector what the matrix A does, and it corresponds to the eigenvector that is mapped to a multiple of itself. Things do get a little more complicated when we have an eigenvalue like i or -i, because then every coordinate has both a "real" and an "imaginary" (complex) part, but as this doesn't actually change anything for a quantum shuffle, we don't have to worry about it here.
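For anyone who wants to see Ax = λx in action, a minimal numpy sketch (the matrix is just a random symmetric example, nothing from the card setup):

Code:
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2  # symmetric, so the eigenvalues are real

lams, vecs = np.linalg.eigh(A)
x, lam = vecs[:, 0], lams[0]
print(np.allclose(A @ x, lam * x))  # True: A maps x to a multiple of itself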
 

PolyHedral

Superabacus Mystic
Making sense is relative.
:D

What idea? The card analogy?
My point was to clarify what "complete knowledge" entails from a pragmatic standpoint (regardless of interpretation).
This one:
I can represent each card as an entry in a single vector in 52-dimensional space. I can do this in a number of ways. I can make all number cards an entry in the vector that has a value corresponding to that number, make aces have values of 1, and make face cards either 11, 12, & 13 (jack, queen, & king respectively) or choose one value for all face cards as in a number of card games.


The problem is that you have the same argument and it's not a variable but 0, and you have no indices, which makes interpretation (without further context) somewhat difficult.
I did not understand that at all. Can you rephrase it? :shrug:

After all, the actual polynomial isn't given, and thus we can't evaluate any derivative at 0 because we have nothing to evaluate. Also, you don't have = 0 or some similar concrete relation ( = ), yet you have specific arguments. Also, I haven't seen equations like that in courses that are just linear algebra rather than multivariate mathematics (some combination of linear algebra with multivariate calculus).
The reason it was useful was because we were dealing with polynomials fit to data, where we did have dp/dx(0). (Or, Δp(0) )
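For instance (with made-up data), numpy gives dp/dx(0) directly from a fitted polynomial:

Code:
import numpy as np

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # hypothetical sample points
ys = np.array([-9.0, -2.0, 1.0, 0.0, -1.0])

p = np.polyfit(xs, ys, 3)    # cubic fit; coefficients, highest degree first
dp = np.polyder(p)           # the derivative polynomial
print(np.polyval(dp, 0.0))   # dp/dx evaluated at x = 0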

Also, you don't have = 0 or some similar concrete relation ( = ), yet you have specific arguments.
I don't see how the two parts of this sentence connect to one another? :shrug:

Most importantly, though, I don't see how it is a response to what I said.
I am pointing out that defining an object in terms of its own values is, while slightly confusing at first glance, a perfectly true statement.

Notations are conventions. They're a universal language, which means scientists (unlike, e.g., classicists) don't have to learn 3 other modern languages. Tables of results, equations of models, proofs and derivations, etc., are universal just so long as they are standardized.

Unconventional use of notation defeats the purpose.
And yet the statement you're objecting to is perfectly formulated and, IMO, quite readily understandable. Just because it is convention for A to represent a matrix doesn't mean using it as a vector is wrong, if that's what it's defined as.


Each number in a matrix is really multiplying a variable, but as this is understood, by stripping away the variables it becomes easier and clearer to carry out computations and understand the mathematical structure as long as a standard and consistent notation is used.
But matrices represent something apart from the systems of equations. This is most obvious because the operations (row exchange, row scaling) that preserve system identity do not preserve matrix identity.
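A short numpy illustration of that point: scaling a row leaves the solution of the system untouched but yields a different matrix (and a different transform):

Code:
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])

A2, b2 = A.copy(), b.copy()
A2[0] *= 10; b2[0] *= 10  # row-scale the first equation

print(np.allclose(np.linalg.solve(A, b), np.linalg.solve(A2, b2)))  # True: same solution
print(np.array_equal(A, A2))  # False: not the same matrix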


Not only that, but what should happen in the last step is a new matrix that combines the columns into another 3 by 3 matrix, indicating the underlying structure. That's what the whole matrix schema is for: so that we need not write out all the variables, and so that the mathematical structure is elucidated. Instead, what we find is standard notation for matrix entries misused, the original matrix never shown, and finally a combination of x, y, and z columns forming an improperly aligned vector. That last "vector" is not equal to A, but to a vector in which every entry is a matrix.
That's because the matrix in the example is not a system of equations at all, but instead a transform acting on the general vector [x,y,z]^T.
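For example (with an arbitrary matrix, just to show the idea), sympy applies such a transform to the general vector symbolically:

Code:
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, 0, 2],
               [0, 1, 0],
               [3, 0, 1]])  # an arbitrary example transform

print(A * sp.Matrix([x, y, z]))  # Matrix([[x + 2*z], [y], [3*x + z]])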

You seem to want to think of vectors as defined on some space, rather than the equivalent of a point in that space with direction (for physicists anyway).

So I used the wrong term to describe the ring [0,52]. :foot:


You still do not seem to understand how a linear combination defines a vector space
Adding together any number of (scaled) vectors will always yield a single, new vector. That's what all your sources say, and it's the definition of a vector. :shrug: (The space as a whole is defined by how many distinct linear combinations it is possible to build.)


They can't: Dimension theorem for vector spaces. Each of those vectors exists in R3. To span R2, we'd need closure under the 2 required operations. However, for any plane spanned by these vectors we could use either operation and not be on that plane. We do not have the required closure, and if we are mapping from R3->R2 then we have non-trivial solutions to Ax=b.
So which linear combination of [1,1,2] and [1,2,1] does not lie on the plane -3x+y+z=0? :shrug:

(I might've done the math on that one wrong...)
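(For what it's worth, a quick numpy check suggests the math was fine:)

Code:
import numpy as np

v1, v2 = np.array([1, 1, 2]), np.array([1, 2, 1])
print(np.cross(v1, v2))  # [-3  1  1]: the normal of the plane -3x + y + z = 0

a, b = 2.5, -7.0  # arbitrary scalars
print(np.dot([-3, 1, 1], a * v1 + b * v2))  # 0.0: every combination lies on the plane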

Infinite-dimensional does not mean it extends infinitely and is infinitely dense. Think of R1. It extends infinitely and is infinitely dense, yet is 1D. You insisted we are in 52D. If so, then it isn't an infinite-dimensional case (which would mean the number of dimensions, not the space, was infinite).
I pointed that out earlier. Either way, the space is infinite in extent. In the case of a wavefunction, it is also infinite-dimensional.

Fields are defined over some set. Please see General vector spaces over a field
A vector field and a field over a vector space are two different things.
Please see the link on general vector spaces above.
You appear to be claiming that F and V<F> (that is, the vector space over F) are the same object, or that V<F> is a field. V<F> cannot be a field, because there is no multiplication with signature V<F> × V<F> -> V<F>.

Ax=λx. When you can tell me how to get 52 λ's out of that equation using your card scenario, then tell me how meaningful it is.
Are you having a problem imagining an operator A such that it has 52 λs? (Which are scalars, and therefore should not be in bold. :p)
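One such operator, as a concrete (hypothetical) sketch: number the cards 1 through 52 and put those numbers on the diagonal.

Code:
import numpy as np

A = np.diag(np.arange(1, 53, dtype=float))    # one distinct eigenvalue per card
print(len(np.unique(np.linalg.eigvalsh(A))))  # 52 lambdas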
 

PolyHedral

Superabacus Mystic
The answer is easy. You don't do that. Because Hilbert space isn't a function over an infinite space, nor do we think of vectors as built out of square integrable functions; rather, these constitute an infinite number of functions on or over a space. They're a metric-imposing mathematical structure extending Euclidean space.
You're either quote-mining, or seriously misreading my posts again. That second sentence refers to the inner product function, not anything to do with square-integrable function space.

Also, the square integrable functions are a field, and thus we can quite easily build a space wherein each vector represents a unique function in that set. (The set produced by multiplying the set of SI functions by itself is also a field, so we could have a vector where each coefficient was a SI function if we wanted to as well. :cool:)

There's also no metric involved here. In fact, AFAIK, there's no notion of distance at all.
 

LegionOnomaMoi

Veteran Member
Premium Member
You're either quote-mining, or seriously misreading my posts again. That second sentence refers to the inner product function, not anything to do with square-integrable function space.

There is a third option. Here's what you stated:


Wave functions can be thought of as vectors because the field we are building the vectors out of is the square integrable functions.

A field is a structure imposed on some mathematical set (or a space consisting of elements, like a vector space). If we "build" vectors out of a field, we are taking elements from a space or a set and using them to build these vectors. The field you described was the field of square integrable functions. That is an infinite-dimensional field constructed out of such functions.

"the square integrable functions" are a type of functions. The field of these is all functions that qualify. To build a vector from the field of these functions means to build vectors out of the infinite-dimensional space of these functions. Hence:

nor do we think of vectors as built out of square integrable functions; rather, these constitute an infinite number of functions on or over a space.


Also, the square integrable functions are a field

Technically they aren't. They can't be, because a field is a set over which some operations/rules are defined and others not. The reason you asked about the multiplicative inverse was because you were aware that a field requires such a thing. That's because fields impose a particular mathematical structure on some set or over some space (a space is a set of sorts). That L^p-type symbolism you've come across (for Hilbert spaces the p would be a 2)? That's the space of measurable (in the Lebesgue/measure-theory sense) functions from R to C. With a 2 instead of a p, it simply refers to the set of all functions that are
1) measurable (over R or some other measure space)
&
2) square integrable, i.e. satisfy the condition given earlier

That's it. The functions themselves are not a field, because they are functions. A field is what you get by taking some set or collection of sets (ad infinitum) and determining that it satisfies these criteria. The reason we don't say these functions are a field is because they need not be. I can impose other structures. Just because you can have a field over the reals doesn't make the reals (strictly speaking) a field.

That's why we refer not to an arbitrary field (there's not really any such thing) but to some arbitrary set, and either stipulate or determine whether that set meets the conditions required of a field. Every Boolean algebra is a field of sets. The only reason (I think) that you come across examples in which the rational numbers are said to be a field is that in that case the set is practically defined by the algebraic structure created by a field. Fields came after, and as generalizations of algebra to algebras (like rings, groups, etc.).

and thus we can quite easily build a space wherein each vector represents a unique function in that set.

A vector isn't a function.


There's also no metric involved here. In fact, AFAIK, there's no notion of distance at all.
You cannot have a space in mathematics without a metric.
 

LegionOnomaMoi

Veteran Member
Premium Member
You're either quote-mining, or seriously misreading my posts again.

This is the second time you have implied I am quote mining or otherwise unable to understand what you have said. When you first posed this question, you asked that I explain using my words (presumably rather than scanned pages or lines copied out of my library or from some internet source). I have tried to limit my use of others' words to links and a quote or 2 to support something I have usually already said, and to find reputable material for you rather than just grabbing a few of my books (which would mean quoting sources that you can't access without owning the book). I have gone out of my way to use quotes never just to support what I have said, but always to give you a source to read that has more than what I quoted. I have done that so you can perhaps review something you haven't studied in some time or perhaps which was explained badly or too informally at some time.

It would be very easy to attack your posts with a barrage of sources without ever leaving a single series in my collection (Springer's Graduate Texts in Mathematics). Widen this to any textbook or reference material published by an academic press and I could fill threads with quotes on any number of statements you have made. But it would be pointless.

I doubt it would help you and it would do no more than show I know just enough to know where to quote mine from (and that I have a lot of books, which just means I can pay money to amazon).

That said, what is the point in trying to explain rather than taking the easy route and simply using actual quotes when you are going to accuse me of doing so anyway?

Rather than insulting my intelligence (which you did before you rightfully pointed out I was being unnecessarily offensive), if you think some statement I make indicates I don't know what I'm talking about, you can ask me to support it and I will.
 

idav

Being
Premium Member
Math and stuff.

:popcorn:

Someone said something about poker. In QM poker, what is the probability of getting 5 aces of spades? Is it really just as likely as any other combo?
 