
Different truths are not the same

gnostic

The Lost One
idea said:
And lastly, Factual truth.
you missed

5. relative truth.

what is the velocity of that car? 80 MPH - this is only true relative to the road... if you look at the velocity relative to the sun, the velocity is very different...

come to think of it, all truth seems to be relative...

Actually, all of my categories of truth in the OP could fall under "relative truth" (and "subjective truth"), but only logical truth (particularly mathematics) and factual truth (like evidence-based science) could be in the realm of objective truth.
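To put rough numbers on the car example quoted above, here is a minimal sketch in Python. It is my own illustration: the 80 mph figure comes from the example, the ~30 km/s figure for Earth's orbital speed is an approximation I am adding, and direction and the Earth's rotation are ignored.

```python
# Minimal sketch: the same car's speed depends on the reference frame.
# The 30 km/s figure for Earth's orbital speed is an approximation.

MPH_PER_KMS = 2236.94  # 1 km/s is roughly 2236.94 mph

car_speed_road_mph = 80.0                       # speed relative to the road
earth_orbital_speed_mph = 30.0 * MPH_PER_KMS    # Earth's speed around the Sun, ~67,000 mph

# Relative to the Sun, the car's speed is dominated by Earth's orbital motion
# (ignoring direction, the Earth's rotation, etc.).
car_speed_sun_mph = earth_orbital_speed_mph + car_speed_road_mph

print(f"Relative to the road: {car_speed_road_mph:.0f} mph")
print(f"Relative to the Sun:  {car_speed_sun_mph:.0f} mph (approximately)")
```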
 
Last edited:

CarlinKnew

Well-Known Member
I only know of one type of truth, and that is reality, or as close as we can get to understanding it. Our understanding is based on facts. So to respond to each of your examples of "truth":

"Personal" - A personal testimony either recounts facts or it doesn't.
"Religious" - A religious claim is either factual or it isn't.
"Logical" - Logic is a process that usually works to predict what might or might not be a fact.

My main dispute is that you labeled religious claims as a type of truth. Truth describes reality, and religion is not always descriptive of reality (nor are your other types of "truth", of course).
 

A-ManESL

Well-Known Member
The reason this is a poor example to give is that it doesn't support your argument at all.

Perhaps you haven't understood what I am saying yet. My argument is not that it was mathematically true and later proved false. Of course, none of the results mentioned in the link that were considered true and later found false are mathematically true. On its own, I do not consider mathematics subjective, but practically speaking the mathematical results we know of are subject to mathematicians approving of them.
 

filthy tugboat

Active Member
Perhaps you haven't understood what I am saying yet. My argument is not that it was mathematically true and later proved false. Of course, none of the results mentioned in the link that were considered true and later found false are mathematically true. On its own, I do not consider mathematics subjective, but practically speaking the mathematical results we know of are subject to mathematicians approving of them.

Oh, gotcha, so you agree with me when I say that mathematical and logical truths are the only two truths in the OP that are actually true, and that all of the other forms are naturally subjective?
 

gnostic

The Lost One
A-ManESL said:
Perhaps you haven't understood what I am saying yet. My argument is not that it was mathematically true and later proved false. Of course, none of the results mentioned in the link that were considered true and later found false are mathematically true. On its own, I do not consider mathematics subjective, but practically speaking the mathematical results we know of are subject to mathematicians approving of them.

Mathematical truth depends on the results, not the approval of the mathematicians. They have to accept the results, whether they like them or not...otherwise they are nothing more than cheats.

Unless a mathematician is up to "no good", there is no reason why a mathematician shouldn't accept the result...unless, of course, the mathematician has discovered a mistake in the equation or formula used, or a flaw in the calculation.

I don't know where you get this idea that mathematicians need to approve of the results.

If the right equation(s) were used for a given situation, and the calculation was done without error (and double-checked), then it shouldn't be an issue for a mathematician to approve the result(s).

I don't understand why any mathematician couldn't accept results if the maths were properly implemented. Why do you think so?
 

A-ManESL

Well-Known Member
Oh, gotcha, so you agree with me when I say that mathematical and logical truths are the only two truths in the OP that are actually true, and that all of the other forms are naturally subjective?

Firstly, I am glad someone understood what I am saying. All the other forms I know of are naturally subjective. However, as I said before, regarding spirituality and the knowledge obtained from it I don't have any idea. I feel it is a genuine thing, but not knowing enough about it, I would refrain from commenting on it.

@gnostic: What happens sometimes is that mathematicians make mistakes and the flawed results are accepted for a long time because no one goes through the proof carefully to check it. My point was that this may be true of any mathematical result accepted today. See this and this for examples.
 

LegionOnomaMoi

Veteran Member
Premium Member
@gnostic: What happens sometimes is that mathematicians make mistakes and the flawed results are accepted for a long time because no one goes through the proof carefully to check it.
That's not really true. Mathematicians have an advantage here. Most of the "mistakes" you refer to were not mistakes at all. They were things which mathematicians thought were true but couldn't prove yet, or concepts which had yet to be rigorously defined. There were no "mistakes" and "flawed results" which were "accepted for a long time" because no one checked. The use of infinitesimals was around for a long time (and still is, although not as it was), because while calculus was proving more and more effective and useful for understanding the world, no one could formulate a clear, usable (i.e., formal) definition. But no one was just ignoring this or accepting it. They argued about it for years until Weierstrass finally resolved the issue. What mistaken "proof" can you point to which was accepted for a "long time" because no one carefully checked it?
 
Last edited:

A-ManESL

Well-Known Member
I think you have not read the example I have already quoted. There are many other examples in the links. At any rate, I don't think you are grasping the point. It's not about whether the relevant concepts are still being debated. I am pointing out that it is certainly possible that a proof has a flaw or a result is incorrect, and yet it is accepted by the mathematical community. It has happened before and therefore may happen again. Even if a proof has been accepted for one day, it throws open the possibility of it being accepted for two days, and so on. Of course, I understand that the possibility is remote and gets more and more remote, and that we should just accept a proof after sufficient peer review, but all that is subject to what I consider "sufficient peer review". You use the phrase "long time" inside inverted commas. Isn't what you consider a long time subjective?
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
I think you have not read the example I have already quoted.

You mean Euler's formula which you talk about based on the link here:
This is the link that I posted previously, about mathematical results which were thought to be proved earlier (some of them for centuries) and later flaws were found in the proof.

Except Euler's work was not "accepted for centuries" nor anything like that. Graph theory began with him, and everything about it was the subject of debate and discussion. The first proof offered by a mathematician other than Euler was in the late 1700s, shortly after Euler had died. And when Euler first proposed the formula, he didn't offer a proof. There are today things which mathematicians think are probably true, but there is no proof, so there is no way of knowing.

I am pointing out that it is certainly possible that a proof has a flaw or a result is incorrect, and yet it is accepted by the mathematical community.
Yes, but until you offer a single example of a proof which was accepted for centuries, there's no point in using this as an argument. Euler's characteristic is not a good example, because various proofs were offered beginning immediately after Euler.

Even if a proof has been accepted for one day, it throws open the possibility of it being accepted for two days, and so on.

How do you decide when it is accepted? Some proofs take years to be accepted. Look at Cantor's work on the cardinality of infinite sets. However, until you can point to a single proof accepted as true by the mathematical community in the modern era (i.e., when a mathematical community existed), posting links which talk about work done on proofs over time is pretty meaningless. It goes against your point, because it shows the mathematical community unwilling to accept proofs unless they were quite sure.
 

A-ManESL

Well-Known Member
I would say that if a "wrong" proof is appearing in textbooks it has been accepted (but that's my opinion of course).

But okay. I realize that modern-era standards are pretty tight, and that works done previously were not subject to such stringent standards. Hence many results were accepted without that stringency earlier, but that sort of thing isn't probable today. My point still stands on a philosophical level, though: we can't be sure of knowledge obtained through maths, because it is susceptible to human beings' thought processes.

By the way, you are mistaken about the Euler characteristic. The result was published in 1758, and not just before Euler died. It is more a part of polyhedral combinatorics than graph theory. Is there a chance you were thinking about planar graphs?
 

LegionOnomaMoi

Veteran Member
Premium Member
I would say that if a "wrong" proof is appearing in textbooks it has been accepted (but that's my opinion of course).

Which "wrong proof" ? Ampere's? You cite a textbook which states the this proof didn't establish what it was intended to as indicative of its general acceptence? Until Weierstrass, mathematicians continued to argue over exactly what formal definitions were sufficient foundations for differential and integral calculus. And before him, there was no "general acceptance." Mathematicians continued to use integration and differentiation, developing more and more sophisticated methods and applications, but the more they used the techniques and formulae, the more the a lack of a usable formal, clearly defined foundation of limits bothered them.

Hence many results were accepted without that stringency earlier but that sort of thing isn't probable today.
What result? Newton and Leibniz are credited with independently discovering/developing calculus. But they failed to come up with a sufficiently clear formal definition for their formulae. So mathematicians continued to work on the problem for 200 years.

My point still stands on a philosophical level, though: we can't be sure of knowledge obtained through maths, because it is susceptible to human beings' thought processes.
So far you've supported this assertion through a claim concerning certain proofs "accepted by the mathematical community." However, it seems that what you've done is point to times the mathematical community didn't accept formulae, definitions, and proofs.


By the way, you are mistaken about the Euler characteristic. The result was published in 1758, and not just before Euler died.
Euler's proof
"Playing around with various simple polyhedra will show you that Euler's formula always holds true. But if you're a mathematician, this isn't enough. You'll want a proof, a water-tight logical argument that shows you that it really works for all polyhedra, including the ones you'll never have the time to check. Despite the formula's name, it wasn't in fact Euler who came up with the first complete proof. Its history is complex, spanning 200 years and involving some of the greatest names in maths, including René Descartes (1596 - 1650), Euler himself, Adrien-Marie Legendre (1752 - 1833) and Augustin-Louis Cauchy (1789 - 1857). It's interesting to note that all these mathematicians used very different approaches to prove the formula, each striking in its ingenuity and insight. It's Cauchy's proof, though, that I'd like to give you a flavour of here. His method consists of several stages and steps. The first stage involves constructing what is called a network."

So I still don't understand where you see in all this that the "mathematical community" accepted Euler's proof.
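(For concreteness, the formula in question is V − E + F = 2 for any convex polyhedron. Here is a quick numerical check, my own sketch in Python using the standard vertex/edge/face counts for the Platonic solids. It shows what the quote means by "playing around with various simple polyhedra", and also why checking examples is not a proof.)

```python
# Quick numerical check of Euler's polyhedron formula V - E + F = 2
# for a few standard convex polyhedra (counts are the well-known values).
polyhedra = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in polyhedra.items():
    print(f"{name:12s}: V - E + F = {v - e + f}")

# Every line prints 2 -- but checking examples is not a proof, which is
# exactly the point of the quoted passage: a proof must cover all polyhedra.
```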

It is more a part of polyhedral combinatorics than graph theory. Is there a chance you were thinking about planar graphs?

That's because Euler's work was the foundation of graph theory. Graph theory is a part of combinatorial analysis: On the Euler Characteristic.
 

A-ManESL

Well-Known Member
You didn't read the whole page in my link? I think that you did not. Had you read this line:

Between the appearance of Ampere's paper and 1870, the proposition that any (continuous) function is differentiable in general was stated and proved in most of the leading texts of calculus.

you would have understood that this is what I was referring to. If my reference was unclear, no harm done. I just want to say that I believe this line means that a wrong result was stated and proved in most of the leading texts of calculus between 1806 and 1870. If you don't agree with the author of this line, then I guess there is nothing I want to say anymore.

Secondly, I quite understand that graph theory is part of combinatorial analysis, but I would say it is a subset of combinatorics. Perhaps you don't understand that it is my opinion that the result under discussion belongs to an area of combinatorics, but not strictly to graph theory. Anyway, coming to the point, I said "widely accepted" because the link I made earlier contained those words. That site is usually frequented by mathematicians. I had no other reference. There are many other "theorems" on that site (this one is supposed to have lasted from 1961 to 2002), but I don't understand them, hence I didn't quote them.
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
You didn't read the whole page in my link? I think that you did not. Had you read this line:



you would have understood that this is what I was referring to. If my reference was unclear, no harm done.
I did read the line, but certain parts of the page were highlighted on your link. However, this was exactly what I was referring to when I spoke of the 200 years between calculus and Weierstrass. Calculus worked, and it became increasingly more fundamental. Obviously, textbooks had to teach it, but mathematicians were actively working on it. I have an older textbook which uses the older, ill-defined version of infinitesimals.

The point is, the mathematical community absolutely did NOT accept these proofs. Even in your link this is clear. They continued to try to prove that if a function was continuous, then it was differentiable. None of these were widely accepted, however. The problem continued to be worked on. That's why so many proofs came out during the 19th century. From Newton onwards, more and more people used calculus, but the mathematical community desperately needed better-defined terms to use in their proofs, and the lack of a well-defined notion of limits (upon which all of calculus rests) meant that all the various proofs offered were on somewhat shaky ground, and mathematicians knew it. That's why the work continued. From my old (and favorite) math textbook by Hubbard and Hubbard (Vector Calculus, Linear Algebra, and Differential Forms): "Continuity is the fundamental notion of topology, and it arises throughout calculus also. It took mathematicians 200 years to arrive at a correct definition. (Historically, we have our presentation out of order. It was the search for a usable definition of continuity that led to the correct definition of limits.)"


I just want to say that I believe this line means that a wrong result was stated and proved in most of the leading texts of calculus between 1806 and 1870. If you don't agree with the author of this line, then I guess there is nothing I want to say anymore.
I agree that it was; what I don't agree with is the conclusion that this was accepted by the mathematical community. Textbooks are designed to teach students. They had to learn calculus, so the authors used what was available. For the mathematical community, however, the problem wasn't solved. Newton, Euler, Cauchy, etc., all offered definitions used in their (and others') proofs. But the reason people kept doing so after Newton wasn't because the definitions or the proofs were widely accepted as true, but because they weren't. They worked often enough, and they were unbelievably useful, but mathematicians weren't satisfied.


Secondly, I quite understand that graph theory is part of combinatorial analysis, but I would say it is a subset of combinatorics.
I thought that was implied by my use of "part of."

Perhaps you don't understand that it is my opinion that the result under discussion belongs to an area of combinatorics, but not strictly to graph theory.

What I don't understand is why that is your opinion. Perhaps I don't understand what you mean by "strictly." Do you mean it isn't only used in graphs? Because that's true of most applications of graph theory. In fact, it's true of a lot of combinatorics as well. Consider permutations. They're a basic part of probability and set theory, because combinatorics is. Or consider matrices. Graphs can be represented by matrices (adjacency matrices). But a graph can have multiple adjacency matrices. In fact (making further use of permutations), one can demonstrate that graphs G1 and G2 are isomorphic by permuting the adjacency matrix of G1. If this permutation can yield an adjacency matrix for G2, then G1 and G2 are isomorphic.
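To make that concrete, here is a brute-force sketch of the idea, my own toy example in Python with NumPy; the two graphs and the helper function are invented for illustration only.

```python
import numpy as np
from itertools import permutations

# Toy example: two labelings of a path on 3 vertices.
# G1 has edges 0-1 and 1-2; G2 has edges 0-2 and 1-2 (same path, relabeled).
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
A2 = np.array([[0, 0, 1],
               [0, 0, 1],
               [1, 1, 0]])

def isomorphic(A, B):
    """Brute-force check: does some relabeling (permutation) of A's vertices give B?"""
    n = len(A)
    for perm in permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]   # permutation matrix for this relabeling
        if np.array_equal(P @ A @ P.T, B):     # P A P^T is A with vertices permuted
            return True
    return False

print(isomorphic(A1, A2))  # True: the two graphs are isomorphic
```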

There are many other "theorems" on that site (this one is supposed to have lasted from 1961 to 2002), but I don't understand them, hence I didn't quote them.
Abelian groups (actually, groups in general) are part of abstract algebra (the study of algebraic structure; actually, a lot of combinatorics and graphs come up here as well). What the paper you refer to did (you can find it in full here) is something that happens fairly frequently in mathematics. It was not so much to prove the theorem wrong, but rather to extend (that's somewhat simplistic, but it works) the abelian categories by constructing a type Joos didn't deal with. A lot of the published work in mathematics involves not wholly new theorems or inventions of new branches of mathematics. Rather, mathematicians improve earlier methods, argue about which methods are superior for which application, etc. Take, for example, a problem I have with a statistical technique used all the time in just about every science. It's in any intro stats course: Pearson's r. Technically, it applies only to bivariate populations over a continuous range. However, most discrete sets approximate continuity well enough for this test. It is not, I think, a good test for data obtained through Likert-scale measures, as these are not only better thought of as ordinal data, they also involve "fuzzy" concepts. Since Zadeh developed fuzzy logic, mathematicians have continued to work on the best extension of correlation measures like Pearson's correlation coefficient, and several different versions are currently available. It is not clear, however, which one might be better in general, or more widely applicable, etc. But none of this either proves any of them wrong, nor demonstrates that Pearson's proof offered a century ago is wrong either. Rather, as new data sets of a type he did not work with began to appear, new test statistics have as well.
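For reference, here is what Pearson's r looks like when computed directly from its definition, a minimal sketch in Python of my own; the data values are made up.

```python
import math

# Pearson's correlation coefficient from its definition:
# r = cov(x, y) / (std(x) * std(y)).  The data below are invented.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
var_x = sum((a - mean_x) ** 2 for a in x)
var_y = sum((b - mean_y) ** 2 for b in y)

r = cov / math.sqrt(var_x * var_y)
print(f"Pearson's r = {r:.4f}")  # close to 1: strong linear relationship

# Note: r only measures *linear* association and, strictly speaking, assumes
# interval-scale data -- part of the concern raised above about applying it
# to Likert-scale responses.
```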
 

A-ManESL

Well-Known Member
Textbooks are designed to teach students.

I do think there is something wrong if most of them are teaching a wrong thing. I do not think any compromise should be made on this.
 

LegionOnomaMoi

Veteran Member
Premium Member
I do think there is something wrong if most of them are teaching a wrong thing. I do not think any compromise should be made on this.
It's not so much that they were wrong, just incomplete. I'm going to assume you haven't taken calculus because I have no idea if you have or not (or if you took it so long ago you don't remember). If you have (or if you have a good idea what it entails), I apologize for being overly simplistic here.

The problem of curves was known since the Greeks. They could find the area of a square, a triangle, and any shape one could make based on these. But the area under a curve, or the slope of a line at a point? Not so much. And while geometrically this seems fairly useless, it's fundamental for all the sciences. To give just a few simple examples, the normal (bell-curve) distribution, and similar distributions, are essential to statistics. But the populations they represent (SAT scores, income levels, crime rates) are all areas under a curve (where the highest point may represent, say, the people with the most frequently occurring income, and the tails the lowest and highest incomes at either end). And just about every intro to calc textbook will introduce the derivative as both the slope of a line at a point and then velocity. The derivative of a function which describes the position of a falling object will enable you to know its precise speed at a given moment (the velocity) after it is dropped off a building.
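To make that last example concrete, here is a small numerical sketch of my own, using the standard free-fall position function s(t) = ½·g·t², whose derivative is the velocity v(t) = g·t.

```python
# Derivative as velocity: the position of a dropped object is s(t) = 0.5 * g * t**2,
# so its exact velocity is v(t) = g * t.  A finite-difference quotient
# (s(t + h) - s(t)) / h approaches that value as h shrinks.
g = 9.8  # m/s^2, approximate

def s(t):
    return 0.5 * g * t ** 2

t = 2.0            # two seconds after the drop
exact_v = g * t    # 19.6 m/s

for h in (1.0, 0.1, 0.001):
    approx_v = (s(t + h) - s(t)) / h
    print(f"h = {h:6}: difference quotient = {approx_v:.4f} m/s")
print(f"exact derivative (velocity) = {exact_v:.1f} m/s")
```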

Basically, calculus is all over the place. In the early years of calculus, it was used for everything from understanding the movement of planets to basic physics (like velocity). However, as I said before, neither Newton nor Leibniz really set it on firm foundations. The terms they used weren't defined formally enough to use in proofs. But the methods still worked.

So, while mathematicians continued to work on formalizing calculus (among other things), what options did they have? First, new mathematicians couldn't learn what calculus was to work on building a better foundation without books on the subject. Nor could the physical sciences (and the uses of calculus itself) advance unless the new students were learning based on formulae which worked, but which couldn't be adequately proved.

So either calculus textbooks could teach calculus with notions like the infinitesimals or other problematic formulae or terms, which allowed the growth of the physical sciences and mathematics, or just...what?

It's one thing if there were proofs that at some point the mathematical community considered "proved" because no one checked hard enough. It's quite another for problematic techniques and concepts to be taught so that methods which worked could be used to develop other sciences and so that students interested in mathematics could improve these techniques and concepts.
 

A-ManESL

Well-Known Member
Of course I know calculus, and I know what you are talking about (although I wouldn't say that everyone was unconvinced between 1806 and 1870). But this isn't what I am talking about. You make big assumptions when you say that the growth of calculus would have been stymied, or students could not have learned, if this result "every continuous function is differentiable" had not been taught, or had simply been presented as an unproved conjecture. I simply don't agree with that. Since Ampere was a big name, the result probably got off to a good start, and that's probably what brought it into the books.

As an aside, I think all this has got sidetracked. My original point was that there seems to be no philosophical basis to validate mathematical truth, since the means of checking it are susceptible to error. Can you answer the following question with a yes or no?

Is there a possibility that there exist fundamental flaws in the reasoning process of validating any proof which the human brain is not capable of comprehending?

Afterthought: I just found this short article on the internet. Precisely what my point is.
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
Of course I know calculus, and I know what you are talking about (although I wouldn't say that everyone was unconvinced between 1806 and 1870).
Really!? Have you read some of the vindictive papers from French or German mathematicians (not the kind of stuff Newton said of Leibniz, but bashing others' proofs, statements, etc.)? Even up to Bertrand Russell's rather scathing assessment of infinitesimals. Everyone was convinced that calculus in general worked, and that most of the related theorems were more or less accurate, but I can't think of a time in the history of mathematics, nor a subject of mathematics, which is more filled with dissent, probably because of how essential it was (and is).

You make big assumptions when you say that the growth of calculus would have been stymied, or students could not have learned, if this result "every continuous function is differentiable" had not been taught, or had simply been presented as an unproved conjecture. I simply don't agree with that.
Exactly what were textbooks designed to do? The idea is to present an introduction to the state of mathematics at the time, and give students an increasingly greater idea of the mathematical techniques, and a foundation and nuanced grasp in mathematical analysis. I have a pretty old (19th-century) calculus textbook. The difference between it and a modern one is pretty distinct.

How do you think Weierstrass finally settled the issue? He studied math as a hobby, through the very textbooks you're referring to. And yet he was able to finally settle the issue and come up with a precise, usable definition of limits. So yes, any single theorem could be removed and it wouldn't have been an issue. But until there was a firm foundation for calculus, it was much more difficult to settle issues such as whether or not all continuous functions have derivatives. After all, the modern definition of continuity rests on limits.
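As an aside (my own illustration, not something from the textbooks under discussion): the proposition "every continuous function is differentiable" was eventually settled in the negative by Weierstrass's famous counterexample, a function that is continuous everywhere yet differentiable nowhere. A minimal sketch of a partial sum, in Python:

```python
import math

# Partial sum of the Weierstrass function W(x) = sum_{n>=0} a^n * cos(b^n * pi * x),
# here with a = 0.5 and b = 13 (b odd and a*b > 1 + 3*pi/2, Weierstrass's original
# conditions).  The infinite series is continuous everywhere but differentiable
# nowhere; each finite partial sum is still smooth, so this only illustrates
# the construction -- it is not the proof.
def weierstrass_partial(x, a=0.5, b=13, terms=15):
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# Sampling (or plotting) the partial sum shows its characteristic jaggedness.
for x in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"W({x:.1f}) ~= {weierstrass_partial(x):+.4f}")
```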


Since Ampere was a big name, the result probably got off to a good start, and that's probably what brought it into the books.
Or, again, without the modern definition of limits, which allows one to say, for any function f(x), that the function is continuous at a iff the limit of f(x) as x approaches a is f(a). And again, the point is to provide students with the state of the field.
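Spelled out in the standard epsilon-delta form (this is the usual modern definition, not anything specific to this thread):

```latex
% Standard epsilon-delta statement of "f is continuous at a",
% i.e. lim_{x -> a} f(x) = f(a):
\[
  f \text{ is continuous at } a
  \iff
  \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
  \; |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon .
\]
```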


My original point was that there seems to be no philosophical basis to validate mathematical truth, since the means of checking it are susceptible to error. Can you answer the following question with a yes or no?
Well, as stated, your conclusion absolutely does not follow from your premise. But perhaps I'm misunderstanding you. The philosophical basis for mathematical truth exists in the validity of the logic, not in whether it is misused.

Is there a possibility that there exist fundamental flaws in the reasoning process of validating any proof which the human brain is not capable of comprehending?
No, not unless some new branch of mathematics turns up. These proofs all depend on explicitly stated axioms and logical methods. This same logic is accepted as a priori truth; to deny such things as the validity of the induction method, or that in a true conditional with a true antecedent the consequent must be true, is a whole different issue. It's more akin to denying any validity in reasoning. As proofs are written in a logical system which is a product of the human reasoning process, it cannot be true that there is a proof that the human brain cannot comprehend. Of course, this isn't to say that any given human mind will be capable of comprehending a particular proof. I don't speak Chinese, but that doesn't mean someone can say something in Chinese that no one who speaks Chinese can understand. Mathematics is a language, with syntax, "lexemes" of sorts, etc. If someone writes a proof that the human brain can't comprehend, then they are no longer using any recognizable mathematical language.
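To spell out the "true conditional, true antecedent, therefore true consequent" point, here is a trivial truth-table check, my own sketch in Python, nothing specific to the proofs under discussion.

```python
from itertools import product

# Modus ponens as a tautology: ((p -> q) and p) -> q is true
# for every assignment of truth values to p and q.
def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    modus_ponens = implies(implies(p, q) and p, q)
    print(f"p={p!s:5} q={q!s:5}  ((p->q) and p) -> q : {modus_ponens}")

# Every row prints True: denying this rule means abandoning the logic
# in which proofs are written, which is the point made above.
```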
 

A-ManESL

Well-Known Member
Really!? Have you read some of the vindictive papers from French or German mathematicians (not the kind of stuff Newton said of Leibniz, but bashing others' proofs, statements, etc.)? Even up to Bertrand Russell's rather scathing assessment of infinitesimals.

Yes, I have read some of that. Someone named Bishop Berkeley criticised infinitesimals vehemently.

If you still feel that the textbook was doing its job, I just disagree. I don't think that by stating that such-and-such a result is a conjecture a textbook would have failed in its objective at all. If you believe that by presenting a flawed proof to the student the textbook did a better job, that's your opinion.

Secondly I am not talking of the philosophical basis of mathematical truth, but of the process to validate mathematical truth. I think you are not reading what I wrote clearly.

Thirdly I don't know whether you read this one page article I linked previously. Perhaps you would like to go through it and tell me your opinion. Particularly of the ending:

When I read a journal article, I often find mistakes. Whether I can fix them is irrelevant. The literature is unreliable.

How do we recognize mathematical truth? If a theorem has a short complete proof, we can check it. But if the proof is deep, difficult, and already fills 100 journal pages, if no one has the time and energy to fill in the details, if a “complete” proof would be 100,000 pages long, then we rely on the judgments of the bosses in the field. In mathematics, a theorem is true, or it’s not a theorem. But even in mathematics, truth can be political.
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
If you still feel that the textbook was doing its job, I just disagree. I don't think that by stating that such-and-such a result is a conjecture a textbook would have failed in its objective at all. If you believe that by presenting a flawed proof to the student the textbook did a better job, that's your opinion.

Without a definition of limits (or a usable definition of infinitesimals) any textbook would have to say that a great deal of everything in it was conjecture. Without limits, there is no calculus. It's all built on that concept. So how could one write a textbook before a formal usable definition of limits? Say "by the way, everything in this is based on hotly contested definitions, and who knows what might be wrong, but trust us this is a good thing to learn" ?


Secondly I am not talking of the philosophical basis of mathematical truth, but of the process to validate mathematical truth. I think you are not reading what I wrote clearly.

I am. Perhaps I wasn't clear or you didn't understand me. The point is that proofs are written and based in the language of logic. You cannot write a proof without using logic, as this is the mechanism for demonstrating what follows from what. Case in point:

Thirdly I don't know whether you read this one page article I linked previously. Perhaps you would like to go through it and tell me your opinion. Particularly of the ending:
Did you read what your source says about why the proofs can be difficult to prove? "The reason is that many great and important theorems don't actually have proofs. They have sketches of proofs, outlines of arguments, hints and intuitions..." The author's "obvious to the author" reminds me of Fermat stating he had a proof but no room to write it in the margin.

In other words, it's not that these "proofs" can't be proven, but that they aren't proofs. When the proof is not actually a proof, but only approximates one, then one cannot use logical reasoning to validate it unless one rewrites it (if it is true). If it isn't, then one has to prove that it leads to a contradiction, which again is difficult if the proof is merely an "outline" or a "sketch." However, even these are still useful. Fermat offered NO proof for his last theorem, but it was proven.
 

A-ManESL

Well-Known Member
Well, if you are saying that the textbook ought to have said "by the way, this particular result is based on hotly contested definitions, and who knows what might be wrong, but trust us this is a good thing to learn", I am agreeing with that. The source just said that textbooks stated and proved that theorem (I assumed without this addition).

Fermat offered NO proof for his last theorem, but it was proven.

Or has it been proven? That's the point.
It took several years to confirm the correctness of Wiles’ proof of “Fermat’s Last Theorem”. A mistake was found in the original paper, and there still remained questions about the truth of other results used in the proof. There were also arguments about the completeness of Perelman’s proof of the Poincaré conjecture, the first of the Millennium Problems to be solved. How many mathematicians have checked both Wiles’ and Perelman’s proofs?

I certainly don’t claim that there are gaps in Wiles’ or Perelman’s work. I don’t know. We (the mathematical community) believe that the proofs are correct because a political consensus has developed in support of their correctness.

I don't know whether you consider Fermat's last theorem to be proved or not, but if you do (probably), either you have checked it yourself, or you are just accepting the consensus which has developed in its support. In the second case (which I suspect is true), it is not quite a satisfactory and absolutely sure thing for me; maybe you have a contrary opinion. My philosophical issue is that such a truth, which is determined by consensus (in a clique), can never be treated as absolute, and the chance remains (even a microscopic one) that it is wrong.
 
Last edited: