None of it was random, none of it was gibberish. Do you even know what we're talking about?
@viole has claimed that inescapable logical proof comes from a lack of evidence. "I don't know any Jews" is PROOF that those unknown Jews are anything and everything, including being both atheist and theist simultaneously, even though this cannot exist in the real world.
I have been arguing this is not logical. It's the opposite of logic, it cannot be proven, and this so-called logic is easily escaped.
The so-called proof that has been presented is actually only a half-proof. It can be summarized as "If it can't be proven false, it MUST be true." And that's stupid. Everyone knows it's stupid. It is extreme optimism. And it's only useful on rare occasions. But "classical logic" permits these sorts of gross assumptions.
Good so far?
@viole is arguing in favor of "Classical logic" ( but up to this point it's been equivocated as just "logic" ). I am saying, "What you're doing is stupid. Obviously."
@viole is saying "But, but, all these smart people are doing it, and saying it's logic. How can all these smart people be doing something stupid?" And I'm saying "It's stupid to use it in the real world. It's not *actually* logical."
So
@viole asks:
My answer: "What every logical sane intelligent person uses. Natural deduction."
@Sargonski, are you following? I am objecting to "Classical logic". Technically, just one method in Classical logic, and I am proposing a better alternative which is called "Natural Deduction".
In order to do that, I need to show what this is, and I need to show that it is indeed in the category of "logic". So I brought the Wikipedia article which describes it. See below:
en.m.wikipedia.org
In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.
@viole's method is based on axioms and rules, not proof. It's faith-based. My method rejects that; it's evidence-based.
@Sargonski, are you seeing the relevance? There is no *actual* proof for "All the Jews I know are atheists" if I don't know any Jews. The same so-called proof shows that "All the Jews I know are theists and monkeys and goldfish." The only *actual* proof that can be given for the unknown is what is not known about it. But! There's a rule, an axiom, that is being abused, on faith, that an inerrant rule-maker, a math-god, hath ordained, which claims that the unknown magically knows everything.
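The trick can be reproduced in a few lines of Python, which treats a universal claim over an empty collection exactly the way classical logic does. The variable names here are mine, purely for illustration:

```python
# Hypothetical check of the "vacuous truth" trick at the heart of the thread.
# If the list of Jews I know is empty, classical logic counts ANY universal
# claim about its members as True -- even two contradictory claims at once.
jews_i_know = []  # the speaker knows no Jews

all_atheists = all(person == "atheist" for person in jews_i_know)
all_theists = all(person == "theist" for person in jews_i_know)

print(all_atheists, all_theists)  # True True: both "proved" simultaneously
```

Python's `all()` over an empty collection returns `True` by definition, which is the same vacuous-truth rule being argued about here.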
@viole is claiming there is proof? Natural deduction is a proof calculus; "proof theory" is literally in the article I quoted. What I brought is supremely relevant. Not random. Not gibberish.
It's absolutely insane to try to use this non-proof, this axiom, in the real world, and even more so to claim it's proven true. It is a leap of faith. I'm arguing against calling this logic.
This next part of the wiki quote is important, but yes it needs an intro.
Logic, like math, attempts to take English-language statements, propositions, and translate them into symbolic notation. This isn't always easy. In America, these are called "word problems" or "story problems". They're notoriously challenging for students. If the "story" is not accurately translated into math notation, then the answer is going to be wrong even if the math is evaluated correctly. The problem is compounded if the evaluated answer in symbolic notation is mistranslated again back into an English conclusion. The flow looks like this:
Question in English >>>> Symbolic notation >>>> Math/logic evaluation >>>> Symbolic answer >>>> Conclusion in English.
The middle parts, notation, evaluation, answer, may all be perfectly correct and consistent. That's the part most people think of when they think of "doing math" and "getting the right answer". But if the question isn't translated properly, that's a big problem. And if the answer isn't translated back into English properly, that's a big problem. If both the beginning and the end are mistranslated, if none of it is translated properly, that's a huge problem. Get it? That's what's happening here in this thread.
So....
@viole is trying to claim that "All the Jews I know are atheists" should be translated into the logical notation "P ---> Q". This logical concept has several names that are supposed to help translate it into English. But none of those names translate properly. It's usually called "implication" or "If ... then". But that's not *actually* what it means. It's actually an assumption. The accurate, non-ambiguous, technical name for "P ---> Q" is the material conditional. In classical logic, "P ---> Q" evaluates to "True" whenever P is false, purely because of the definition of "P ---> Q". It's a rule, a definition, not a proof of anything. What I'm saying can be seen by analyzing the truth table which defines "P ---> Q". That was the first part of the reply which you called random gibberish, but you simply didn't know what we're talking about.
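For anyone who wants to see it concretely, here is a small Python sketch of that defining truth table. Nothing in it is derived; the four values are simply stipulated, which is exactly the point:

```python
# The material conditional "P -> Q" is DEFINED by this truth table.
# The values are stipulated by rule, not proven from anything.
MATERIAL_CONDITIONAL = {
    (True, True): True,
    (True, False): False,   # the only row that comes out False
    (False, True): True,    # "vacuously" true: P is false
    (False, False): True,   # "vacuously" true: P is false
}

for (p, q), value in MATERIAL_CONDITIONAL.items():
    print(f"P={p!s:5} Q={q!s:5} P->Q={value}")
```

Notice that both rows where P is false come out True regardless of Q, which is the "vacuous truth" being leaned on in this thread.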
So, the question is, does "All the Jews I know are atheists" accurately translate into the logical notation "P ---> Q"? I am arguing, no.
@viole is arguing, yes.
"All the Jews I know are atheists" is not an assumption. An assumption is not certain. This is certain.
"All the Jews I know are atheists" is not "If ... then". "If ... then" communicates causation. There is no causation.
"All the Jews I know are atheists" is not an implication. An implication communicates correlation. There is no correlation.
None of it fits. Technically, in English, the logical notation "P ---> Q" translates into "not P or Q". As strange as that sounds, that's what it is. Again, this was included in the screenshots you called random gibberish. The point is, in order to pound this square peg into the round hole of "P ---> Q", the meaning of the statement MUST be changed. This is evident by taking "All the Jews I know are atheists" and rewriting it as an assumption, as causation, as correlation, as "if ... then", or better yet as "not ... or ...".
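That equivalence is easy to verify mechanically. A quick sketch, checking "not P or Q" against the classical truth table on all four assignments:

```python
# Sanity check: the classical truth table for "P -> Q" agrees with
# "not P or Q" on every assignment -- that IS the literal translation.
table = {
    (True, True): True,
    (True, False): False,
    (False, True): True,
    (False, False): True,
}

for (p, q), value in table.items():
    assert value == ((not p) or q)
print('"P -> Q" and "not P or Q" match on all four rows')
```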
So...
@viole is asking "If it's not P ---> Q, then what is it? What logic are you using, if you're using any?"
@Sargonski, are you following? I am answering "natural deduction", which is a legit form of logic employed by virtually all intelligent, sane, logical people all over the world. Classical logic doesn't do causation, doesn't do correlation, doesn't do relevance, doesn't do fallacies. It's the lowest standard for evaluating "T" or "F". No one actually uses it in the real world. People learn it like people learn Latin. It's a good intro to learning languages. It gives a good foundation, and many of its principles are included in other more useful forms of logic. And, in a lot of ways, it's eeeeeeeeeeeasy. No thinking needed. Ahem. No judgements. And that leads into the next part of the wiki quote.
Classical logic is lacking a lot of things which make it useful in real world situations, as I mentioned. One of those key lacking concepts is a "Judgement". But natural deduction includes this concept. See below: ( remember, this is the wiki article defining the logic I am using, natural deduction )
A judgment is something that is knowable, that is, an object of knowledge. It is evident if one in fact knows it. Thus "it is raining" is a judgment, which is evident for the one who knows that it is actually raining; in this case one may readily find evidence for the judgment by looking outside the window or stepping out of the house. In mathematical logic however, evidence is often not as directly observable, but rather deduced from more basic evident judgments. The process of deduction is what constitutes a proof; in other words, a judgment is evident if one has a proof for it.
The most important judgments in logic are of the form "A is true"
@Sargonski, now, hopefully, all the pieces are coming together. In the logic I am employing, it's not true unless it is IN FACT ACTUALLY true. This is obvious. It's logical. And it's a legit academic version of logic. This is the proper method for a HUMAN for evaluating the "T" or "F" of the statement.
The method being employed by
@viole is more like a robot. It's primitive AI logic. It has no means of assessing a whole host of real-world "deal-breakers" which render a statement false. One of the things it lacks, and natural deduction includes, is the concept of a "Judgement".
"All the Jews I know are atheists" is a JUDGEMENT. This is easily seen because it is claiming knowledge. Reread the definition of a judgement. See that? The very first sentence? BAM! This is a judgement.
"All the Jews I know are atheists" is not "If ... then", it's not an implication. It is certain.
It is a judgement, and a judgement is not true without *actual* proof. Innocent until proven guilty. Not guilty till proven innocent.
So, that gets you caught up a bit. There's a lot of technical language being tossed around here. Trying to unravel the reasons that an educated person, an intelligent person, a logical person would somehow claim "All the Jews I know are atheists AND I don't know any Jews" is proven true, and inescapably true, requires going way way way down the rabbit hole of set theory. That's where the "empty set", the "empty domain", NULL, and subsets come into play. All of what I posted is relevant. None of it is gibberish, unless you don't speak the language of logic and math.
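For the set-theory leg of that rabbit hole, one line of Python shows the machinery: the empty set is a subset of every set, so the empty set of "Jews I know" sits inside any predicate's set you like. The set names are mine, just for illustration:

```python
# The set-theory machinery behind the "inescapable" vacuous proof:
# the empty set is a subset of EVERY set, however incompatible.
empty = set()
print(empty.issubset({"atheists"}))  # True
print(empty.issubset({"theists"}))   # True
print(empty.issubset({"goldfish"}))  # True
```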
Any objections or questions?