Midnight Rain
You are right. me word not gerd today.

Actually, if they repeat, they're not irrational, i.e., they can be written as a fraction.
They're rational.
yoo too?

Yup.
My (made up) short hand for representing the difference between the two would be: 0.000...1. I honestly feel that covers the difference between 1.0 and .999...

The problem, however, is where that last "1" goes in your representation of the distance between 0.999... and the "1". No matter how many 0's you put in front of that 1 to form the number that is the distance between 1 and 0.999..., we find that there is no such distance between the two. So, for example, if you state that the distance between 0.999... and 1 is 0.00001, I can demonstrate that this can't be so, because clearly 1 - 0.99999999 is smaller than 0.00001.

Your short hand is actually a great demonstration that 0.999... = 1. You keep making your "distance" number (i.e., 0.000...1) smaller and smaller, and I keep showing that the number 0.999... is closer to 1 than the "distance" number you provide. No matter how small of a number you give me, saying "here's the difference between 0.999... and 1", I can show that there is no such distance between 0.999... and 1. This is because, in fact, there quite literally isn't any distance between 0.999... and 1, as they are equal.
Seems like one assertion is just an assumption without a way to verify.

The method used to verify involves limits or epsilonics or sequences or one or more of several tools that serve as foundations for the whole of the real number line and any and all operations on it (and more than just the real number line: much more). The problem is that this isn't trivial (well, it actually is, but "trivial" is a relative term). It is, for example, vastly easier to show that 0.999... and 1 are equal than it is to show that one can perform simple arithmetic with real numbers (as, among other things, this requires formally constructing the real numbers, something we teach students after a few semesters of calculus and other college level math).
Still not seeing it as verifiable. Still haven't seen that shown in this thread. I'm not able to debunk it, as I'm claiming it is not (inherently) verifiable. Needs an (inherent) limit put on infinite decimals to (strongly) suggest it must be equal to 1, rather than (forever) unequal to 1.
My son that's getting his math major told me a few months back that he finally learned the proof (I think) for what numbers are and how to add. Something with empty sets and stuff. Apparently, it's more complicated than I thought.
Because it is unverifiable?
I would say because it is not verifiable, to assume equal or to assume infinitely unequal are assumptions.
It is verifiable. You've been given two informal demonstrations. A formal proof would say much the same, but without all the ambiguity and appeals to intuitive (or even counter-intuitive) notions, concepts, etc.
They aren't, actually, but that doesn't matter, because assumptions are a great way to start a proof (using a method of proof called "proof by contradiction"). If you make an assumption, and find that this assumption implies a logical contradiction, then it cannot be true. For example, imagine I assume that there are only finitely many integers (i.e., at some point there is an integer greater than all others). I know that, given any integer, I can always add 1 and get another integer. Call the largest integer that I've assumed to exist n. I can form the integer n+1, which is necessarily greater than n, which by assumption is the greatest integer. Thus this assumption leads to a logical contradiction, and I've proved my assumption is false.
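In symbols, that argument is just (the standard rendering of what was said in words above, nothing more):

\[
\text{Assume } \exists\, n \in \mathbb{Z} \text{ such that } \forall\, m \in \mathbb{Z},\ m \le n .
\]
\[
\text{But } n + 1 \in \mathbb{Z} \text{ and } n + 1 > n, \text{ contradicting the assumption, so no largest integer exists.}
\]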
Likewise, let us denote the number 0.999... by L. Let us assume, as you do, that this number is not equal to 1. It follows then that there must be some difference (or "distance") between L and the number 1 (equivalently, you are assuming that |L-1| > 0, because both numbers are positive and if the absolute value of the difference between L and 1 were 0, it would mean the two numbers are the same).
Now we see if your assumption leads to a logical contradiction. Again, your assumption implies that there must be some difference between the two numbers. So there must be some positive number/value, which we will denote by ε, that separates/is between L and 1. You can pick any value for ε you like (make it as small as you wish) and you CANNOT EVER find a value that separates L and 1 such that 1 - L = ε.
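To see why no such ε can exist, the usual calculation (a sketch of the standard argument, not a quotation from anyone in the thread) runs:

\[
0 \;\le\; 1 - 0.999\ldots \;\le\; 1 - \underbrace{0.99\ldots 9}_{n\ \text{nines}} \;=\; 10^{-n} \qquad \text{for every } n \ge 1 .
\]
\[
\text{Given any } \varepsilon > 0, \text{ choose } n \text{ with } 10^{-n} < \varepsilon; \text{ then } 0 \le 1 - 0.999\ldots < \varepsilon .
\]
\[
\text{A fixed non-negative number smaller than every positive } \varepsilon \text{ can only be } 0, \text{ so } 1 - 0.999\ldots = 0 .
\]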
Now, such demonstrations (even if you were shown them formally) may not convince you, because infinities are often conceptually difficult and lead to unintuitive results. But you rely on this kind of reasoning all of the time, even without knowing it. Numbers like pi are uniquely defined only because we can do this very same thing: show that a unique sequence (3, 3.1, 3.14, 3.141, ...) converges to pi and only pi, the same way 0.999... converges to 1. In fact, the entire real number line is made "complete" and rigorous because every Cauchy sequence of rational numbers converges to a real number. Exponential functions, square roots, logarithms, even the Pythagorean theorem (when sides a and b are 1, making the length of c the square root of 2) require this method.
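The pi example can be written the same way (again, just the standard observation, included for reference):

\[
p_1 = 3,\quad p_2 = 3.1,\quad p_3 = 3.14,\ \ldots \qquad |p_n - \pi| \le 10^{-(n-1)} ,
\]

so the truncations converge to pi, and to nothing else, by exactly the same ε argument used for 0.999... above.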
It is a rational number.
The difference is that .000...1 terminates. It is not an infinite decimal. Both are rational numbers (an irrational number is not merely an infinite decimal, but a non-repeating infinite decimal). With an infinite decimal like 0.999..., the fact that it never terminates means it gets "infinitely" close to 1 such that there is no number ε greater than 0 to satisfy the equation 1 - 0.999... = ε. The only number which satisfies this equation is 0, and indeed 1 - 1 = 1 - 0.999... = 0.
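The same conclusion falls out of the usual infinite-series reading of the notation (standard textbook material, included only for reference):

\[
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1 ,
\]

using the geometric-series formula \(\sum_{k=1}^{\infty} r^{k} = \frac{r}{1-r}\) for |r| < 1.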
There are no irrational numbers here. 0.999... is a rational number. Recall that 1/3 = .333...

Again, nothing here is irrational, and the number 0.000...1 is necessarily finite. That's why the ellipses, the "...", appear before the numeral 1. Again, 0.000...1 is a finite decimal, while 0.999... is not. Neither are irrational numbers, and the reason that the second is equal to 1 is because your number that separates them is finite (a terminating decimal).
Ok. Do you find anything verifiable that concerns numbers like pi or the square root of 2, or even rational numbers like 1/3 (all of which are infinite decimals)? What do you mean by verifiable? Is 2+2=4 verifiable? Or, slightly more seriously, how would you respond to the tortoise's argument to Achilles here: What the Tortoise Said to Achilles
You don't find 1/3 to be rational?
It's not so much the reasoning that you use, but rather the reasoning you depend on in various ways without even knowing it. (Do you believe in the Pythagorean theorem? Do you believe that 1/3 is a single number? Do you believe that pi is a number? How about e? Do you have any faith in probability theory, physics, or the technology you use daily (from GPS and navigation systems to your computer and TV), or do you think they are derived/formulated from faulty assumptions? Do you think logic is subjective?)
I really do not see the demonstrations as verifying the notion that .999... = 1. I will concede that they do for you, but would stipulate that I still see it as based on assumption.
This last assertion (especially with the emphasis) strikes me as restating it is unverifiable.
Again, I assume that .000....1 is the distance between the two. I accept that this is not a rational number, but feel I am able to make sense of it in the way I make sense of .999...
It assumes that at some point (which is irrational) that there is a 1 at the end of infinity (also irrational) that would make for the precise difference between .999... and 1 (logical assumption). I do not find it to be verifiable, but also do not find .999... (or .333... or similar such numerical depictions) to be rational.
So far nothing you are conveying is convincing me that I use this kind of reasoning all the time, without knowing it, but I am willing to consider that possibility (feel very open to it). Partially because philosophically and/or spiritually, I do think I use this kind of reason some of the time, and do so knowingly, but understand it as based on assumptions regarding things that more or less appear irrational, but are seemingly sound/logical. Such as: 'now' is an eternal occurrence. Not sure how I would ever verify that, but also seems (or is) sound in my understanding.
So, I know you state this many times. And I get that for you and likely other seasoned mathematicians it is 'most certainly rational.' I hope you get that for me, it is not. And that however you are using that word, is not making sense to me. How I'm using that word is (along lines of): logically understood, verifiable by some logical proof. Even that definition isn't great cause all you (and others) have to do is say, 'it is logical. We have provided you with the proof to verify this.' So, I go with what my dictionary says 'rational' means and as it has one designated definition for mathematics, I'll provide that: expressible, or containing quantities that are expressible, as a ratio of whole numbers. When expressed as a decimal, a rational number has a finite or recurring expansion.
So, everything up to the last 2 words of the definition are, to me, not occurring with the number .999... because I fully believe (and find logical) that no one (or computer) could ever express the quantity of nines that are intrinsic to that actual number, while I do believe the .999... is very convenient shorthand. I just do not see it as sufficient. The last 2 words of that definition are perhaps all that would matter for another to explain to me that the definition does cover this number in terms of being rational, but it currently doesn't and I'd really appreciate it being explained rather than just told.
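For reference, the standard algebra that connects a recurring expansion to a ratio of whole numbers is the following (it quietly presupposes the limit machinery discussed earlier, so it illustrates the definition rather than giving an independent proof):

\[
x = 0.999\ldots \;\Rightarrow\; 10x = 9.999\ldots \;\Rightarrow\; 10x - x = 9 \;\Rightarrow\; x = \tfrac{9}{9} = 1 ,
\]
\[
y = 0.333\ldots \;\Rightarrow\; 10y - y = 3 \;\Rightarrow\; y = \tfrac{3}{9} = \tfrac{1}{3} .
\]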
I wish I could tie this back to A.I. (topic of this thread) in terms of how I am not understanding it, but am only being told, "it is this." Alas, I don't think I can, but this is for me like someone saying, "who created the universe?" And another person saying "God." And while not a great analogy as there are so many diverse tangents to go from there, just imagine if someone followed that up with, "that doesn't make sense to me, how is that logical?" And the person then responds, "because God is logical. It makes perfect sense." That's how I feel in this thread.
I do get this as an attempt to explain what rational numbers are (and are not), but what I'm getting from this paragraph is:
a) .999... does get infinitely close to 1 (but never reaches it)
b) there is no rational number to fill in the gap (that we know of)
c) we have made up our minds that it must be 0, therefore we are right about it being .999... equaling 1
d) this cannot be disputed. Resistance is futile. You will be assimilated.
(Hey, I did kinda sorta come back to A.I.)
For me, if .999... equals 1, then it is (somehow) necessarily finite; otherwise it is an unverifiable (and therefore irrational) number that is getting infinitely close to 1, but is not equal to 1 (ever). While I already pointed out in a previous post how I do concede of .000...1 as not being rational (by my definition) but that it does go on for infinity and terminates with 1. That does strike me as logically inconsistent (thus irrational) but seems to work wonders in filling that alleged gap between .999... and 1. IMO, the gap arguably doesn't exist, which would seem to indicate (to me) that I'm coming as close as I have to saying it does equal 1. But what I feel I'm stating is that it is irrational to say there is some finite gap to fill between .999... and 1, as that gap to me would also contain some aspect of infinity, hence my .000...1 number. So, I'm essentially saying the gap is 1, while I hear you (and many other mathematicians) say, no, the .999... number equals 1.
By verifiable, I mean can be observed and/or logically expressed as accurate, though in this case, I would go with 'as precise.' So, all the irrational decimal items you bring up (irrational by my understanding) are not verifiable, but limits to the decimal sure as heck help to work further with those numbers so that further math can be done. By limit, I mean terminating the decimal number at some point, based on convenience and/or assumption (that it can rightfully be terminated and still be fairly accurate, but not precise).
I actually do question or have some skepticism around 2+2=4, but mostly find it to be observably accurate and able to be demonstrated to me how 2 items combined with 2 other items would equal to 4 items.
As a fraction, (mostly) yes, as a decimal, no.
Too many questions for me to answer right now. If you feel after reading the rest of my post that asking any one of these questions would help you to help me (understand), then please ask them and we shall see how I respond.
Pretty much... just...... How do you know you are not "A.I."?
Fair enough.
The problem is that even though it was believed that 0.999... and 1 were the same using informal arguments before the epsilon-delta definition of limits was rigorously formulated, it was BECAUSE of logic and DUE TO logic that the equivalence was proved. It is fairly non-trivial to take the explanations present in the thread and translate them into a formal logical proof, as this is at the heart of analysis. So I have to ask, what do you mean by logical proof?
Ok, let me give you another restatement of this but now followed by a proof:
Now, how does one logically prove that, e.g., 0.999... converges to 1? First, here's a definition of convergence:
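In the standard formulation (any analysis text states it in essentially this form): a sequence (a_n) converges to a number L when

\[
\forall\, \varepsilon > 0\ \ \exists\, N \in \mathbb{N}:\ \ n \ge N \implies |a_n - L| < \varepsilon .
\]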
And, perhaps even more importantly, a logical statement of what a limit (i.e., the thing that "something" like a function or sequence converges to) is. That is, a number L is the limit of a function f (similarly for a sequence) if
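The standard ε-δ statement of that condition is:

\[
\lim_{x \to a} f(x) = L \quad\Longleftrightarrow\quad \forall\, \varepsilon > 0\ \ \exists\, \delta > 0:\ \ 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon .
\]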
(I included this part merely to show how strictly definitions, proofs, propositions, etc., in mathematics must adhere to logic.)
Here is a formal proof given in plain language rather than formal (symbolic) logic:
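An argument to the same effect (not a quotation of Hubbard & Hubbard, just the usual one built on the definition of convergence above) runs:

\[
\text{Let } a_n = 1 - 10^{-n} = 0.\underbrace{9\ldots 9}_{n} . \quad \text{Given } \varepsilon > 0, \text{ choose } N \text{ with } 10^{-N} < \varepsilon .
\]
\[
\text{Then } n \ge N \implies |a_n - 1| = 10^{-n} \le 10^{-N} < \varepsilon, \text{ so } a_n \to 1 .
\]
\[
\text{A convergent sequence has exactly one limit, so the number the notation } 0.999\ldots \text{ names is } 1 .
\]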
Even this proof is an attempt by the authors (Hubbard & Hubbard from Vector Calculus, Linear Algebra, and Differential Forms, 3rd Ed.) to render the logical proof into more intuitive language. In a course in formal logic, one learns early on how to prove/derive conclusions logically that have no meaning, because logical proofs are not concerned with soundness but validity (i.e., granting some set of premises are true, then necessarily some conclusion follows).
I stands for 'Intelligence'. I'm clearly disqualified.

And here I thought all along that it was about "Artificial Ignorance".