You didn't read the whole page at my link? I don't think you did. Had you read this line:
you would have understood that this is what I was referring to. If my reference was unclear, no harm done.
I did read the line, but certain parts of the page were highlighted in your link. However, this was exactly what I was referring to when I spoke of the 200 years between calculus and Weierstrass. Calculus worked, and it became increasingly fundamental. Obviously, textbooks had to teach it, but mathematicians were still actively working on it. I have an older textbook which uses the older, ill-defined version of infinitesimals.
The point is, the mathematical community absolutely did NOT accept these proofs. Even in your link this is clear. They continued to try to prove that if a function was continuous, then it was differentiable. None of these proofs was widely accepted, however. The problem continued to be worked on; that's why so many proofs came out during the 19th century. From Newton onwards, more and more people used calculus, but the mathematical community desperately needed better-defined terms to use in their proofs, and the lack of a well-defined notion of limits (upon which all of calculus rests) meant that all the various proofs offered were on somewhat shaky ground, and mathematicians knew it. That's why the work continued. From my old (and favorite) math textbook by Hubbard and Hubbard (
Vector Calculus, Linear Algebra, and Differential Forms): "Continuity is
the fundamental notion of topology, and it arises throughout calculus also. It took mathematicians 200 years to arrive at a correct definition. (Historically, we have our presentation out of order. It was the search for a usable definition of continuity that led to the correct definition of limits.)"
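For reference, here is the definition mathematicians eventually arrived at, Weierstrass's ε-δ formulation, stated in modern notation:

```latex
% Continuity of f at a point a, in the modern epsilon-delta form
% that Weierstrass's school settled on:
f \text{ is continuous at } a
\iff
\forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
|x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
```

It looks obvious in hindsight, but pinning down "gets arbitrarily close" this precisely is exactly what took those 200 years.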
I just want to say I believe this line means that a wrong result was stated and proved in most of the leading calculus texts between 1806 and 1870. If you don't agree with the author of this line, then I guess there is nothing more for me to say.
I agree that it was; what I don't agree with is the conclusion that this was accepted by the mathematical community. Textbooks are designed to teach students. Students had to learn calculus, so the authors used what was available. For the mathematical community, however, the problem wasn't solved. Newton, Euler, Cauchy, etc., all offered definitions used in their (and others') proofs. But the reason people kept doing so after Newton wasn't that the definitions or the proofs were widely accepted as true; it was that they weren't. They worked often enough, and they were unbelievably useful, but mathematicians weren't satisfied.
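Weierstrass eventually ended the continuity-implies-differentiability efforts for good with his famous counterexample, a function continuous everywhere but differentiable nowhere (stated below with his original 1872 conditions):

```latex
% Weierstrass's 1872 counterexample: continuous everywhere,
% differentiable nowhere.
W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x),
\qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}
```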
Secondly, I quite understand that graph theory is part of combinatorial analysis, but I would say it is a subset of combinatorics.
I thought that was implied by my use of "part of."
Perhaps you don't understand that, in my opinion, the result under discussion belongs to an area that falls under combinatorics but not strictly under graph theory.
What I don't understand is why that is your opinion. Perhaps I don't understand what you mean by "strictly." Do you mean it isn't used only in graphs? Because that's true of most applications of graph theory; in fact, it's true of a lot of combinatorics as well. Consider permutations. They're a basic part of probability and set theory, because combinatorics is. Or consider matrices. Graphs can be represented by matrices (adjacency matrices), but a graph can have multiple adjacency matrices, one for each labeling of its vertices. In fact (making further use of permutations), one can demonstrate that graphs G1 and G2 are isomorphic by permuting the rows and columns of the adjacency matrix of G1: if some such permutation yields an adjacency matrix for G2, then G1 and G2 are isomorphic.
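Here's a minimal brute-force sketch of that test in Python (practical only for small graphs; the function name and example matrices are my own):

```python
from itertools import permutations

def are_isomorphic(A1, A2):
    """Brute-force isomorphism test on 0/1 adjacency matrices."""
    n = len(A1)
    if len(A2) != n:
        return False
    for perm in permutations(range(n)):
        # Relabel G1's vertices by `perm` and compare against A2.
        if all(A1[perm[i]][perm[j]] == A2[i][j]
               for i in range(n) for j in range(n)):
            return True
    return False

# Two labelings of the same 3-vertex path: 0-1-2 versus 1-0-2.
A1 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
A2 = [[0, 1, 1],
      [1, 0, 0],
      [1, 0, 0]]
print(are_isomorphic(A1, A2))  # True
```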
There are many other "theorems" on that site (
this one is supposed to have stood from 1961 to 2002), but I don't understand them, so I didn't quote them.
Abelian groups (actually, groups in general) are part of abstract algebra (the study of algebraic structures; a lot of combinatorics and graph theory comes up here as well). What the paper you refer to did (you can find it in full
here) is something that happens fairly frequently in mathematics. It did not so much prove the theorem wrong as extend (that's somewhat simplistic, but it works) the abelian categories by constructing a type Joos didn't deal with. A lot of the published work in mathematics involves not wholly new theorems or inventions of new branches of mathematics. Rather, mathematicians improve earlier methods, argue about which methods are superior for which applications, etc. Take, for example, a problem I have with a statistical technique used all the time in just about every science. It's in any intro stats course: Pearson's
r. Technically, it applies only to bivariate populations over a continuous range. However, most discrete sets approximate continuity well enough for this test. It is not, I think, a good test for data obtained through Likert-scale measures, as these are not only better thought of as ordinal data, they also involve "fuzzy" concepts. Since Zadeh developed fuzzy logic, mathematicians have continued to work on the best extension of correlation measures like Pearson's correlation coefficient, and several different versions are currently available. It is not clear, however, which one might be better in general, more widely applicable, etc. But none of this proves any of them wrong, nor does it demonstrate that the derivation Pearson offered a century ago is wrong. Rather, as new kinds of data sets he did not work with began to appear, new test statistics appeared as well.
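To make the ordinal-data point concrete, here's a small Python sketch contrasting Pearson's r with the rank-based Spearman's rho (one common alternative for ordinal data, not one of the fuzzy extensions I mentioned) on made-up Likert responses; the data and variable names are purely illustrative:

```python
from scipy import stats

# Hypothetical 5-point Likert responses from seven respondents.
item_a = [1, 2, 2, 3, 4, 5, 5]
item_b = [1, 1, 3, 3, 4, 4, 5]

r, p_r = stats.pearsonr(item_a, item_b)       # treats the codes as interval data
rho, p_rho = stats.spearmanr(item_a, item_b)  # uses only the ranks

print(f"Pearson r    = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```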