I actually like the other one better, but let's go with the first, and let's say I'm numbering the cards such that aces are 1s and face cards go from 11 to 13. This one:
I can represent each card as an entry in a single vector in 52-dimensional space, and I can do this in a number of ways. I can make each number card an entry in the vector with a value corresponding to that number, make aces have a value of 1, and make face cards either 11, 12, and 13 (jack, queen, and king respectively) or choose one value for all face cards, as in a number of card games.
There are a couple of ways to do this. One would be just to partition the vector into four subvectors (entries 1-13, 14-26, etc.) and then designate which partition corresponds to which suit.
This is actually not what partitioning would look like. It should just be one vector with horizontal bars separating the entries into 4 partitions, but I don't know how to typeset that, so we'll pretend that what looks like a vector of vectors is really just one set of ( ) partitioned by bars, something like ( -- | -- | -- | -- ).
That would be one way. Another would simply be to have entries 1 through 52 represent variables, with one of 4 scalars in front of each variable (the card, whether represented numerically, by letters, or whatever), like d3 for the 3 of diamonds or sk for the king of spades. Or I could have a vector of ordered pairs:
such that each entry gives the suit and then a number from 1-13 representing ace to king. All of those would be valid vectors in 52-dimensional space.
I did not understand that at all. Can you rephrase it?
Those dots (...) indicate a series (the summation of n terms of a sequence) that ends in a generic formula. It's not a polynomial. It's a formula for curve fitting a polynomial function at a = 0 for an arbitrary degree. It's worse than just generic:
This is using Maclaurin to derive the obvious formula (or theorem), but it shows how the iteration process of taking derivatives does this. We start evaluating at a = 0 (hence Maclaurin) and plug the formula into our function f. For a generic x, the left side shows us what we'd get, while for the actual value 0 we get 1. We take the derivative using the power rule and again get both the generic x and the derivative of f at 0. We take the 2nd derivative and the same thing happens. And thanks to the fact that we have the formula already (and, according to legend, to a little boy named Gauss who summed all the numbers between 1 and 100 in a few seconds), we can prove the theorem.
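For reference, here is the formula I take the discussion to be about, the Maclaurin series (the Taylor series at a = 0); for a polynomial the iteration terminates, because derivatives past the degree vanish:

```latex
f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \cdots + \frac{f^{(n)}(0)}{n!}\,x^n + \cdots
     = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n
```

In particular, for p(x) = c_0 + c_1 x + ... + c_n x^n we get p^(k)(0) = k! c_k, which is why having the polynomial already gives you every term of the series.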
You asked if I'd seen a polynomial written like that. The answer is trivially no, because that isn't a polynomial. Notice that in my example we have a formula that allows us to evaluate each term. In your case, we'd need a polynomial to evaluate in a similar way: we plug it in starting at a = 0 and take derivatives up to n. That last part makes the entire thing unnecessary, because the polynomial gives you everything you need to begin with. Whatever the degree of the polynomial, that's the nth derivative.
The reason it was useful was that we were dealing with polynomials fit to data, where we did have dp/dx(0) (or Δp(0)).
You have to plug the polynomial into your regression formula (or interpolation, curve fitting/sketching, or whatever you are doing). All you have is the series showing the general procedure we'd follow up to any n. But as we have no formula, the entire thing is like opening a model car kit and finding only the instructions, no pieces. You can't use polynomial approximations (Legendre, Hermite, whatever) without a polynomial.
I don't see how the two parts of this sentence connect
You start your evaluation at a = 0. a of what? That is, if I'm using R and just starting a simple polynomial regression, I need something more than reading data into a variable polyN or something and then using the plot() and lm() functions. I need x and y values: my data. In MATLAB, for simple fits all I need to do is enter these and use the polyfit function. Whether I am trying to use regression, fitting a curve using derived parameters, or just finding roots, you've given me nothing with which to do that. You don't have a polynomial, just maybe a procedure for using one.
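To make that concrete, here is a minimal sketch of what an actual fit requires, using NumPy's polyfit as a stand-in for MATLAB's polyfit or an R workflow; the data values are invented for illustration:

```python
import numpy as np

# Invented data: without x and y values like these, there is nothing
# to fit, however the general series formula is written down.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 9.0, 19.0, 33.0])  # sampled from 2x^2 + 1

# Fit a degree-2 polynomial; coefficients come back highest power first.
coeffs = np.polyfit(x, y, deg=2)

# poly1d wraps the coefficients so the fitted polynomial can be evaluated.
p = np.poly1d(coeffs)
```

With the data in hand, coeffs recovers (approximately) [2, 0, 1] and p(5.0) evaluates the fit at a new point; without x and y, polyfit has nothing to work with.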
I am pointing out that defining an object in terms of its own values is, while slightly confusing at first glance, a perfectly true statement.
You didn't do that. At all. You gave some series with no actual polynomial to use, and even then the terms were marked clearly, according to a notation in use for at least a century. In other words, you supplied the wrong context by saying "a polynomial" when that clearly isn't one. You asked:

"You've never seen a polynomial written like this?"

Yet I was still able to read the notation well enough to realize that
1) it was a series
2) you were either providing indices weirdly or an argument of 0
3) that it was for an approximation
All without actually having a polynomial, while being told I did.
"Just because it is convention for A to represent a matrix doesn't mean using it as a vector is wrong, if that's what it's defined as."

Wrong? No. Imprecise and misleading? Yes. Look what you compared it to. I had less info in your example, which was written out as only part of a standard way of describing a series, and I knew what it was anyway. Why? Notation.
??? Are you confusing the identity matrix with something else?

"But matrices represent something apart from the systems of equations. This is most obvious because the operations (row exchange, row scaling) that preserve system identity do not preserve matrix identity."
That's because the matrix in the example is not a system of equations at all, but instead a transform acting on the general vector [x,y,z]^T.
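That distinction can be made concrete with a small numeric sketch (my own example in NumPy, not anything from the thread): a row-scaled system has the same solution set, but the scaled matrix acts differently as a transform.

```python
import numpy as np

# A system A x = b and a row-scaled copy (row 0 doubled on both sides).
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
A2 = A.copy()
b2 = b.copy()
A2[0] *= 2  # row scaling: same system of equations,
b2[0] *= 2  # but a different matrix

# The two systems have the same solution...
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A2, b2)

# ...but as transforms, the two matrices send the same vector to
# different places.
v = np.array([1.0, 0.0])
```

Row exchange and row scaling are exactly the moves that leave the solution set of A x = b alone, which is why they are safe in elimination even though they change A as a map.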
A transform is a matrix.
I am asking about the space, and this:

"it's the definition of a vector."

is wrong. You can build infinitely many linear combinations with two vectors (actually you can do it with one, by some definitions). Nothing about that definition is accurate or close to accurate. A space is spanned by linear combinations, and these concepts combine in ways that have to meet particular conditions.

"The space as a whole is defined by how many distinct linear combinations it is possible to build."
So which linear combination of [1,1,2] and [1,2,1] does not lie on the plane -3x+y+z=0?
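The implied answer is "none": both generators satisfy -3x + y + z = 0, and since the plane condition is linear, every combination a·u + b·w satisfies it too. A quick numeric spot-check (my own sketch, not from the thread):

```python
import numpy as np

# The two generators from the question and the plane's normal vector.
u = np.array([1.0, 1.0, 2.0])
w = np.array([1.0, 2.0, 1.0])
normal = np.array([-3.0, 1.0, 1.0])  # plane -3x + y + z = 0

# Each generator satisfies the plane equation exactly, and the
# equation is linear, so a*u + b*w satisfies it for any a and b.
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.uniform(-10.0, 10.0, size=2)
    assert abs(normal @ (a * u + b * w)) < 1e-9
```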
It's not just that I can span a space, but that I can span it and cannot get out of it by the 2 operations. Also, to answer your question:
Row reduce all you want, but you have 2 free variables to take you out of any plane you wish to span.
You said 52D multiple times. If that's the space we're in, then we aren't in infinite-dimensional space. I pointed that out earlier.
No. One is simply a list of conditions, and the other is the application of these to form an algebraic structure. You appear to be claiming that F and V<F> are the same object, or that V<F> is a field,
from Linear Algebraic Groups (2nd ed.) by T. A. Springer.
As I said in my other recent reply to you here, I'm having a problem understanding why you are saying the things you are about vector spaces, fields, etc. Are you having a problem imagining an operator A such that