
Randomness and discreteness

viole

Ontological Naturalist
Premium Member
What is randomness? It seems to me that randomness is easy to define when there are discrete possibilities, and maybe it is difficult or impossible to define when there are not discrete possibilities?
Any thoughts?

There is a rigorous definition of probability that does not depend on the cardinality of the sets involved (e.g. Kolmogorov axiomatic definition).

So, I am not sure why you see a problem with infinite sets. The only thing that can irritate people is that, when infinite sets are involved, if an event has probability = 0, that does not entail that it cannot occur.
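A minimal sketch of that last point (an idealization of mine: floating-point numbers form a finite set, so this only approximates a continuous draw). For a uniform draw on [0,1], any interval of width 2ε around the observed value has probability at most 2ε, so the exact value just observed had probability 0, yet it occurred:

Code:
import random

x = random.random()  # one realization from (approximately) U[0,1)
for eps in (1e-1, 1e-3, 1e-6, 1e-9):
    # P(x - eps <= X <= x + eps) for U[0,1] is at most 2*eps
    print(eps, min(1.0, 2 * eps))
# As eps -> 0 the probability tends to 0: the exact value observed
# had probability 0, and yet it occurred.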

Ciao

- viole
 

LegionOnomaMoi

Veteran Member
Premium Member
There is a rigorous definition of probability that does not depend on the cardinality of the sets involved (e.g. Kolmogorov axiomatic definition).
It actually does. A probability triple (Ω, F, P) is defined on a sigma-algebra F of subsets of Ω. By definition, such an algebra is required to be closed only under countable unions and intersections, not uncountable ones.
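For reference, the standard definition in LaTeX (textbook material, nothing specific to this thread); note that closure is demanded only for countable unions, which is the point at issue:

Code:
\[
(\Omega,\ \mathcal{F},\ P), \qquad \mathcal{F} \subseteq 2^{\Omega}, \qquad P:\mathcal{F}\to[0,1],
\]
\[
\Omega \in \mathcal{F}, \qquad
A \in \mathcal{F} \Rightarrow A^{c} \in \mathcal{F}, \qquad
A_1, A_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_{n=1}^{\infty} A_n \in \mathcal{F},
\]
\[
P(\Omega) = 1, \qquad
P\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr) = \sum_{n=1}^{\infty} P(A_n)
\quad \text{for pairwise disjoint } A_n.
\]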
 

viole

Ontological Naturalist
Premium Member
It actually does. A probability triple (Ω, F, P) is defined on a sigma-algebra F of subsets of Ω. By definition, such an algebra is required to be closed only under countable unions and intersections, not uncountable ones.

True. But omega can be uncountable, if my memories of statistics do not betray me.

Ciao

- viole
 

picnic

Active Member
@LegionOnomaMoi, what do you think about the crux of my idea in the OP?

I'll restate my idea so that it might hopefully be clearer to everybody. Imagine that reality is like a decision tree with a branch each time a quantum particle can give a certain observation. We are like readers of a book that has already been written except that we read in a sequence from root to twig on the decision tree. Our perception of reality cannot extend to neighboring twigs on a different branch. Everything that can exist does exist somewhere on the decision tree. Everything has already happened. Nothing is causing anything else. There are no choices or uncertainties. The only difference between past and future is the branching on the decision tree.

I can imagine a decision tree like that if the possibilities are discrete. What happens to a decision tree when the possibilities are not discrete? Can we add more and more discrete branch points until they become continuous? Is the result something like a tree painted into a wash on a watercolor painting? Or does it stop working when it becomes continuous?

What we are in this model is simply an observer or a reader of reality. We transcend reality, but we are so engrossed in reading that we have forgotten ourselves.
 

ScottySatan

Well-Known Member
Alas it is nowhere near that simple. For one thing, there isn't even one type of randomness. One of the most studied types, especially (and for obvious reasons) in computability theory, is "algorithmic randomness" (with various types/tests of randomness, such as ML-random, falling under this definition). Another type of "randomness" is that of quantum physics (in particular, quantum mechanics). Here "random" is really "non-deterministic", but even this term is usually misunderstood (particularly as applied here). A quantum system prepared such that its state-vector is an eigenvector of the observable applied will always yield a determined result. Moreover, randomness in quantum physics is much like that in statistical mechanics or probability theory more generally (in that, although indeterminism is intrinsic, the set of possible outcomes is generally known and can vary in terms of how many there are or how likely particular ones are). This brings us to probability and "random variables". "Random variable" is a misnomer: random variables are functions, not variables. They are "random" simply because they are described via probability distributions. Thus an idealized fair coin is "random" in that it has two (equally likely) outcomes, but loaded dice are just as "random". Any random variable that is normally distributed cannot have equally likely outcomes, and for any continuous random variable (including those normally distributed) ALL outcomes have probability 0.
Randomness can be stochastic. It can be a measure of uncertainty or entropy (which, from an information-theoretic perspective, are basically the same). It can be unpredictability. It can even be something that is epistemically deterministic and algorithmic (although such processes are better described as pseudo-random; random number generators fall under this category).

In general, though, a random process will not have a set of equally likely outcomes. First, because most random processes can't be described in terms of the probabilities of individual outcomes (non-uniformly distributed continuous random variables have outcomes whose probabilities are all 0 yet are not all "equally likely"; actually, the distinction between discrete and continuous is largely a property of undergraduate-level probability theory before measure-theoretic probability is learned, but even in measure theory distributions whose outcomes all have probability 0 are not typically uniform, meaning they don't have equal outcome probabilities). Second, because probabilistic phenomena in general are not uniform. Third, because this definition can't apply to most phenomena and processes we would like to call random (it is enormously restrictive). Finally, it has little philosophical or pragmatic benefit (it can't be incorporated into a Bayesian interpretation of probability or similar interpretations, and corresponds to a small subset of random distributions for frequentists; the pragmatic issues are related but also include the inapplicability of uniformly distributed random variables for most applied purposes).
I'm used to different fields having different definitions for things. I describe "random" as described in genetics and in most basic statistics lessons. That's why I specifically asked for a mathematician, to make a more generic statement. Is that you? If so, your answer is atypical and we need some clarification.

Your explanation makes sense, but I'm skeptical because you gave a lot of context and still use the word "random" to describe the same thing I've always heard physicists call "uncertainty" when being careful (i.e. they need to watch their language because they're publishing or defending).
Biology teaches that randomness can only have a square-shaped distribution (biologists will chime in and disagree, and I'll counter that they are ignorant of biostatistics.) Maybe it's because we're the only field that really needs that concept as a standard to compare against?
 

LegionOnomaMoi

Veteran Member
Premium Member
I'm used to different fields having different definitions for things. I describe "random" as described in genetics and in most basic statistics lessons. That's why I specifically asked for a mathematician, to make a more generic statement. Is that you? If so, your answer is atypical and we need some clarification.
My “answer” perhaps seems unorthodox because it is really a set of answers, not “an” answer. Or rather, it began with the answer, that there is no single definition of randomness within mathematics or the sciences, and then gave examples of particular answers.

In one of his numerous books on information theory, Exploring Randomness, the eminent Gregory J. Chaitin stated: “There's only one definition of randomness (divided into the finite and infinite case for technical reasons): something is random if it is algorithmically incompressible or irreducible. More precisely, a member of a set of objects is random if it has the highest complexity that is possible within this set. In other words, the random objects in a set are those that have the highest complexity.”

I find this curious, because Chaitin is so much of an authority on this topic that he defined types of randomness himself (see “DEFINITION 1.8” here). True, his interest is algorithmic randomness, but in probability theory (much like in genetics) one form of randomness concerns the distribution in a given sequence, and the tests for whether such a sequence is random involve convergence to equal chances for elements in the sequence. In statistical physics, more or less founded as it is in ergodic theory, we are not surprised to find ergodicity at the heart of randomness:
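A rough illustration of the algorithmic notion (my own sketch; a real-world compressor like zlib is only a crude stand-in for Kolmogorov complexity, which is uncomputable):

Code:
import os
import zlib

periodic = b"01" * 5_000    # a highly regular 10,000-byte string
noisy = os.urandom(10_000)  # 10,000 bytes from the OS entropy source

# The regular string compresses to a tiny fraction of its length; the
# noisy one barely compresses at all -- it is (nearly) incompressible.
print(len(zlib.compress(periodic)))
print(len(zlib.compress(noisy)))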

“The ergodic approach to the foundations of statistical physics seeks to prove that certain systems will tend to behave in random-looking ways no matter what—with a vanishingly small set of exceptions—their initial conditions. In more recent ergodic theory, the randomness of the patterns shown to occur is close to that of what I will later call the probabilistic patterns.”
Strevens, M. (2003). Bigger Than Chaos: Understanding Complexity through Probability. Harvard University Press.

Biology teaches that randomness can only have a square-shaped distribution (biologists will chime in and disagree, and I'll counter that they are ignorant of biostatistics.)
I have a few textbooks, reference texts, and volumes on biostatistics and a bunch of papers, but it isn't my field. However, none of my sources assert this and, more importantly, it practically contradicts the laws of large numbers and the central limit theorem. Bernoulli trials (e.g., coin flips) have Bernoulli distributions (a special case of binomial distributions), which are not uniform except in the fair case p = 1/2, and even if one samples from a uniform distribution the mean of the samples will tend towards a normal distribution (see the sketch below). Thus it would seem that, given this definition, “randomness” in biology is basically non-existent, and the developments in biostatistics and statistical methods in biology since Quetelet, Poisson, Galton, and Pearson, let alone those of the 20th century, are all ignored. In neurobiology, especially computational neuroscience and neuronal models, we almost never find square-shaped distributions. Most of the statistical and data analyses of DNA as well as the evolutionary algorithms I’ve worked on/with or encountered were from fields like machine learning, computational intelligence, or otherwise abstracted from biology and certainly weren’t biostatistics, but here algorithmic randomness featured heavily and uniform distributions played little to no role.
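A minimal simulation of that point (my sketch; the sample size and count are arbitrary): means of samples drawn from a uniform distribution behave like a normal variable, as the central limit theorem predicts.

Code:
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(100_000, 30))  # 100,000 samples of size 30
means = samples.mean(axis=1)

# Uniform(0,1) has mean 1/2 and variance 1/12, so the CLT predicts the
# sample mean is approximately Normal(1/2, 1/(12*30)); sigma ~ 0.0527.
print(means.mean(), means.std())
print(np.mean(np.abs(means - 0.5) < 2 * 0.0527))  # ~0.95, the 2-sigma rule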

Maybe it's because we're the only field that really needs that concept as a standard to compare against?
The concept is absolutely crucial in quantum information theory and is practically a foundation for computability theory, and features heavily elsewhere in the mathematics and sciences.
 

LegionOnomaMoi

Veteran Member
Premium Member
Expanding on the above


My interest and research have concerned rather fundamentally the mathematics and physics of complex systems, so I have a certain bias towards approaches to randomness here. For example, from an elementary text I’ve used:

“Before the discovery of this phenomenon, all studies of random processes and of chaos were usually conducted within the frame of classical theory of probability, which requires one to define a set of random events or a set of random process realizations or a set of other statistical ensembles. After that, probability itself is assigned and studied as a measure on this set, which satisfies Kolmogorov’s axioms [2]. The discovery of deterministic chaos radically changed this situation.

Chaos was found in dynamical systems, which do not contain elements of randomness at all, i.e. they do not have any statistical ensembles. On the contrary, the dynamic of such systems is completely predictable, the trajectory, assigned precisely by its initial conditions, reproduces itself precisely, but nevertheless its behavior is chaotic…the phenomenon of deterministic chaos requires a deeper understanding of randomness, not based on the notion of a statistical ensemble.”
Bolotin, Y., Tur, A., & Yanovsky, V. (2009). Chaos: Concepts, Control and Constructive Use (Understanding Complex Systems). Springer.
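The logistic map is the stock illustration of that passage (a sketch of mine; the initial gap of 1e-9 is arbitrary): the dynamics are completely deterministic, with no statistical ensemble anywhere, yet nearby trajectories separate until they are effectively uncorrelated.

Code:
# Logistic map x -> 4x(1 - x): deterministic, no element of randomness,
# yet sensitive dependence on initial conditions.
x, y = 0.300000000, 0.300000001  # two initial conditions 1e-9 apart
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))  # the gap grows to order 1 within ~40 steps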


I also have Bayesian leanings when it comes to probability theory and statistics. The common notion of randomness in terms of sequences and distributions that one is taught in undergraduate (and, to a certain extent, graduate) probability courses is heavily influenced by the work of Ronald Fisher, who almost single-handedly banished the Bayesian perspective from respectable circles for about half a century or more (others involved include von Mises, Kolmogorov himself, Popper, Neyman & Pearson, and the statistical community’s response in particular to Fisher). Historically, everybody distinguished between chance (and probability) and randomness until almost the 18th century. Bayes was one of the first to identify the two and avoid this distinction, and Bayesians continue to regard randomness (chances) in terms of subjective (a priori) probabilities. For Bayesians, probability is fundamentally a matter of (rational) degrees of belief; see e.g.,

Press, S. J. (2003). Subjective and Objective Bayesian Statistics: Principles, Models, and Applications (2nd Ed.) (Wiley Series in Probability in Statistics). Wiley.
(“probability reflects a degree of randomness")

Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press.
(“In the vast majority of real applications there are no ‘random variables’ (What defines ‘randomness’?) and no ‘true distribution’...”)

Howson, C., & Urbach, P. (2006). Scientific Reasoning: The Bayesian Approach (3rd Ed.). Open Court.
(“'random variable' does not have to refer to a random procedure: there, it was just a way of describing the various possibilities determined by the parameters of some application. Indeed, not only do random variables have nothing necessarily to do with randomness, but they are not variables either”)

D'Agostini, G. (2003). Bayesian Reasoning in Data Analysis: A Critical Introduction. World Scientific.
(“In the subjective approach random variables (or, better, uncertain numbers) assume a more general meaning than that they have in the frequentistic approach: a random number is just any number in respect of which one is in a condition of uncertainty.”)

Hartigan, J. A. (1983). Bayes Theory (Springer Texts in Statistics). Springer.
(I include this text because it is rather short, assumes the student is familiar with measure-theoretic (graduate-level) probability, and is a very nice, concise treatment of Bayesian statistics and probability for those with a background sufficient for a rigorous approach)

In frequentist approaches, probability itself is divided into “probability theory” and the likelihood Fisher defined, which, despite an initially highly negative reaction (in no small part due to the ad hoc method of distinguishing “likelihood” from “probability”), is now almost universal in statistics and found everywhere in applied probability. However, a central component of frequentist probability is “random” experiments, alongside “random sampling.” These two are related concepts, mostly via their idealized nature. "Experiments" are conceived of in terms of a random sample from a set of infinitely many identical experiments. Random sampling more generally is the idea that the method of obtaining samples from a population did not itself introduce biases (i.e., it could be a biased sample, but the sampling itself was not biased). In fact, insofar as “random” can be readily distinguished in frequentist probability theory, it is in terms of bias. Random variables are, as they always are, merely functions, and no more random than they are variables (likewise with random vectors). In probability theory more generally, randomness might be (and has been) said to be yet again a measure of uncertainty characterized by entropy (e.g., “In probability theory, entropy is a measure of the disorder and randomness present in a distribution”; Liu, L., & Yager, R. R. (2008). Classic works of the Dempster-Shafer theory of belief functions: An introduction. In R. R. Yager and L. Liu (Eds.), Classic Works of the Dempster-Shafer Theory of Belief Functions (pp. 1-34). Springer).
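To ground the entropy remark, a sketch of mine (the loaded-die weights are made up): a fair die maximizes Shannon entropy, while a loaded die is "less random" by this measure even though both are random variables.

Code:
from math import log2

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution p."""
    return -sum(q * log2(q) for q in p if q > 0)

fair = [1 / 6] * 6
loaded = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]
print(entropy(fair))    # log2(6) ~ 2.585 bits, the maximum for 6 outcomes
print(entropy(loaded))  # ~1.83 bits: less uncertainty, "less random"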

The problem with the notion of randomness conceived of as simply something with equally likely outcomes (apart from the fact that in statistics and probability “likelihood” generally has a technical meaning distinct from that of probability) is:

1) Usually, it is impossible to actually define all outcomes.
2) Most of the time outcomes are “equally likely” only in that they all have probability 0, for infinitely many non-uniform distributions as well as uniform ones.
3) Random measures are set-theoretic functions on sigma-algebras of a probability triple; they do not correspond to any intuitive notion of randomness, “random sampling”, “random variables”, etc., but to Polish (locally compact) spaces endowed with a mapping M on (the Borel sigma-algebra of) a set E.
4) The problems with “random” in “random sampling”

Concerning 4, consider one of my favorite teaching examples: selecting numbers “at random” from an interval on the real line (say, the interval [0,1]). Define a random variable on this interval using the Dirichlet function (the characteristic function of the rational numbers), i.e., the function is equal to 1 whenever its argument is a rational number and 0 otherwise. There are, of course, infinitely many rational numbers in this interval, and indeed within the interval between any two rational numbers there are infinitely many other rational numbers. So, what can we say about this function’s probability distribution? That is, what is the probability that a “randomly” sampled number from this interval will yield a rational number, given that there are infinitely many rational numbers in the interval? The answer is 0. The probability that any member of the entire set of infinitely many rational numbers will be in a “random sample” from this (or any other) interval is 0, because for any interval the sampled number will be irrational “almost surely” (“a.s.”, the probabilist’s version of “almost everywhere” or “a.e.”).
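The measure-theoretic bookkeeping behind that answer, in LaTeX (standard material: countable additivity plus the fact that singletons have Lebesgue measure zero, with P the uniform measure on [0,1]):

Code:
\[
P(\mathbb{Q} \cap [0,1])
= P\Bigl(\bigcup_{n=1}^{\infty} \{q_n\}\Bigr)
= \sum_{n=1}^{\infty} P(\{q_n\})
= \sum_{n=1}^{\infty} 0
= 0,
\]
% where q_1, q_2, ... enumerates the (countably many) rationals in [0,1].
% Hence a uniformly sampled point of [0,1] is irrational almost surely.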
 

LegionOnomaMoi

Veteran Member
Premium Member
To me, randomness is very important, because randomness appears in quantum mechanics, and that has some implications about reality and religion IMO.
Randomness appears in classical physics everywhere. What makes quantum mechanics (and therefore quantum field theory and therefore particle physics) different is not randomness but determinism. Classical physics is incapable of treating indeterminism in any sense other than epistemic. That is, in classical physics (so far as it was developed) we can, at least in theory, know the state of any system arbitrarily far into the future. We can never actually do this, because we can never obtain the necessary amount of information, but classical physics suggests that given the state of the cosmos at some time t, everything in the universe and the universe itself is determined at any time t+n.

But the opposite of determinism isn't randomness. If quantum mechanics were truly random, then we couldn't rely on the theory to do anything. The outcome of any experiment with quantum systems would be fundamentally and completely uncertain, and therefore pointless. Instead, quantum mechanics is a procedure whereby we prepare specific systems in particular ways so that we can represent both the manner of preparation and the kind of system in a unified mathematical way and so that we can then "measure" (i.e., disturb) the system in a manner dictated by the theory so as to allow for the mathematical representation of measurement to "act on" the mathematical representation of preparation and system to yield predictable outcomes.
The key aspect is predictable. Certain outcomes are vastly more likely, impossible, definite, etc., in quantum mechanics. They are "random" only in that there is always a degree of uncertainty, but this uncertainty can be minute, and is often nothing compared to the uncertainties associated with systems in classical (statistical) mechanics.

It seems to me that randomness is easy to define when there are discrete possibilities
The difference between discrete and continuous probabilities is doubly deceiving. First, there is the problem of countable sets, in particular the rational numbers. Between any two rational numbers, there are infinitely many rational numbers. Yet the rational numbers are a discrete set in the probabilist's sense: they are countable. So are the integers. So are the natural numbers. All these sets are infinite, but they are discrete.
Second, this distinction is artificial. Finite, countably infinite, and uncountably infinite sets are all treated the same in probability theory (at least the probability theory that mathematicians use; obviously if you take a course in probability theory, even as a graduate student, you will find the discrete/continuous distinction if the course doesn't cover or assume knowledge of measure theory).
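To make the countability claim concrete, a sketch of mine (not from the post): an enumeration that reaches every positive rational exactly once, which is what "countable" means despite the rationals being dense.

Code:
from math import gcd

def rationals():
    """Yield every positive rational (p, q) exactly once, by diagonals p + q = n."""
    n = 2
    while True:
        for p in range(1, n):
            q = n - p
            if gcd(p, q) == 1:  # skip duplicates such as 2/4 = 1/2
                yield (p, q)
        n += 1

gen = rationals()
print([next(gen) for _ in range(10)])  # (1, 1), (1, 2), (2, 1), (1, 3), (3, 1), ...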

and maybe it is difficult or impossible to define when there are not discrete possibilities?
They are defined the same way as discrete possibilities, via functions which assign values to algebras of subsets of the entire set of possible (and possibly uncountably infinite) outcomes.

Imagine reality is every possible sequence of 10 coin tosses (kind of like a multiverse).
Branching probabilities in a multiverse don't work like this. They are the realization of orthogonal states of possible configurations of systems in actual "branches" of an ever-branching universe. So, for example, if a spin-1/2 particle could be in "spin up" or "spin down", "down" and "up" are at right angles with one another and the outcome of the experiment will result in two branches of the universe. The problem with this interpretation is that, while it was and is an attempt to simply do without interpretation by taking the mathematics literally, it nonetheless requires an interpretation because the mathematics does not assign probabilities to particular realizations of outcomes. Thus there is no reason for an observer in the multiverse interpretation to "observe" the branch of the multiverse they do.

If our observation is limited to only ONE possible sequence, how do we know if this sequence is a random variable?
Mathematically, because of its similarity to Bernoulli distributions, and in physics because the entire reason for the multiverse interpretation of branching universes is because the theory doesn't work without probabilities. Interpreting them as actualized doesn't change the mathematics.


If the possibilities are discrete, then the multiverse of all possibilities has a finite number of threads of time. When the possibilities are not discrete, then the number becomes infinite. It seems to me that everything works better if reality is discrete.
Quantum mechanics doesn't allow this.
 

picnic

Active Member
Randomness appears in classical physics everywhere. What makes quantum mechanics (and therefore quantum field theory and therefore particle physics) different is not randomness but determinism. Classical physics is incapable of treating indeterminism in any sense other than epistemic. That is, in classical physics (so far as it was developed) we can, at least in theory, know the state of any system arbitrarily far into the future. We can never actually do this, because we can never obtain the necessary amount of information, but classical physics suggests that given the state of the cosmos at some time t, everything in the universe and the universe itself is determined at any time t+n.

But the opposite of determinism isn't randomness. If quantum mechanics were truly random, then we couldn't rely on the theory to do anything. The outcome of any experiment with quantum systems would be fundamentally and completely uncertain, and therefore pointless. Instead, quantum mechanics is a procedure whereby we prepare specific systems in particular ways so that we can represent both the manner of preparation and the kind of system in a unified mathematical way and so that we can then "measure" (i.e., disturb) the system in a manner dictated by the theory so as to allow for the mathematical representation of measurement to "act on" the mathematical representation of preparation and system to yield predictable outcomes.
The key aspect is predictable. Certain outcomes are vastly more likely, impossible, definite, etc., in quantum mechanics. They are "random" only in that there is always a degree of uncertainty, but this uncertainty can be minute, and is often nothing compared to the uncertainties associated with systems in classical (statistical) mechanics.
I think you misunderstood what I said in the OP, because I already know those things. I have noticed that some smart people understand what I say and other smart people think what I say is silly. It doesn't seem to be a function of the person's intelligence or background. I suspect that some people have had similar thoughts already themselves, so when I present my garbled ideas, they can guess what I mean and have a dialogue - even though I can't actually say it myself due to my limited background in physics and math. Math is like the language of ideas. If you don't know math then you can't communicate, and you also can't organize your own thoughts well. That is my problem.

I'll respond to the other stuff in your post later after I have time to think. :)
 

LegionOnomaMoi

Veteran Member
Premium Member
I think you misunderstood what I said in the OP
Probably. I'm good at that.
It doesn't seem to be a function of the person's intelligence or background.
I'm not sure I give much credence to intelligence (it's too often confused with knowledge) but I think background vital here (not necessarily, or even primarily, education, but rather one's worldview as formed by one's background).

I suspect that some people have had similar thoughts already themselves
They have.
If you don't know math then you can't communicate
The philosopher and mathematician Bertrand Russell once said that the subject of mathematics is that "in which we never know what we are talking about, nor whether what we are saying is true."
 

picnic

Active Member
@LegionOnomaMoi , I'll just post some bite size questions or comments.

First off, I think earlier you mentioned Kolmogorov complexity as a measure of randomness? ( https://en.wikipedia.org/wiki/Kolmogorov_complexity ) Maybe I'm remembering wrong.

There is a notion in Kolmogorov complexity of languages to describe sequences more compactly. I imagine there must be some base language that Kolmogorov used to construct more complex languages. I was curious what that base language was. I remember from college that finite state machines were equivalent to some sort of grammar. Kolmogorov must start with something very simple that can be weighed precisely against a sequence. I also wonder if he used discrete or continuous letters in his strings. A finite state machine seems to be inherently discrete. (I guess this is a bit off-topic, but I thought you might know the answers, and I was curious.)
 

LegionOnomaMoi

Veteran Member
Premium Member
@LegionOnomaMoi , I'll just post some bite size questions or comments.

First off, I think earlier you mentioned Kolmogorov complexity as a measure of randomness? ( https://en.wikipedia.org/wiki/Kolmogorov_complexity ) Maybe I'm remembering wrong.

There is a notion in Kolmogorov complexity of languages to describe sequences more compactly. I imagine there must be some base language that Kolmogorov used to construct more complex languages. I was curious what that base language was. I remember from college that finite state machines were equivalent to some sort of grammar. Kolmogorov must start with something very simple that can be weighed precisely against a sequence. I also wonder if he used discrete or continuous letters in his strings. A finite state machine seems to be inherently discrete. (I guess this is a bit off-topic, but I thought you might know the answers, and I was curious.)
Kolmogorov founded modern, axiomatic probability theory. He did this by introducing measure theory (developed principally by Lebesgue) into probability. In probability theory, a randomness measure is a particular sort of probability measure, but "probability measure" here is extremely technical. In measure-theoretic probability, there really isn't any "discrete" or "continuous" distinction. The mostly arbitrary dichotomy is mainly due to the deficiencies of elementary calculus (particularly integration), an insufficiently rigorous algebra of sets (and set theory and topology in general), and finally the failure to appreciate what Lebesgue did by introducing measures in the first place: the advantage of "chopping up" the range of functions rather than the domain (as in, e.g., your standard Riemann integral).
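A toy numerical contrast (my sketch, with the function and grids chosen arbitrarily): the Riemann sum partitions the domain, while a Lebesgue-style sum partitions the range and weights each level by the (estimated) measure of its preimage.

Code:
import numpy as np

f = lambda x: x ** 2
xs = np.linspace(0.0, 1.0, 100_001)
dx = xs[1] - xs[0]
vals = f(xs[:-1])

# Riemann: partition the domain [0,1] and sum f(x_i) * dx.
riemann = np.sum(vals * dx)

# Lebesgue-style: partition the range [0,1] into levels and sum
# y_j * measure({x : y_j <= f(x) < y_j + dy}), estimating the measure
# as the fraction of grid points in the preimage.
ys = np.linspace(0.0, 1.0, 201)
dy = ys[1] - ys[0]
lebesgue = sum(y * np.mean((vals >= y) & (vals < y + dy)) for y in ys[:-1])

print(riemann, lebesgue)  # both approach the exact integral 1/3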

I wasn't referring to Kolmogorov complexity, but you weren't remembering wrong (I simply wasn't sufficiently clear; I'm not entirely sure how to be clear here: measure theory is typically introduced to graduate students of mathematics after several years of calculus and other higher-level mathematics, so reducing it to, and explaining how it is, the foundation of modern probability theory is not exactly easy, at least for me).
 

picnic

Active Member
Branching probabilities in a multiverse don't work like this. They are the realization of orthogonal states of possible configurations of systems in actual "branches" of an ever-branching universe. So, for example, if a spin-1/2 particle could be in "spin up" or "spin down", "down" and "up" are at right angles with one another and the outcome of the experiment will result in two branches of the universe. The problem with this interpretation is that, while it was and is an attempt to simply do without interpretation by taking the mathematics literally, it nonetheless requires an interpretation because the mathematics does not assign probabilities to particular realizations of outcomes. Thus there is no reason for an observer in the multiverse interpretation to "observe" the branch of the multiverse they do.
If I understand your idea correctly, you are saying the concept of probability reappears even if we assume that everything possible happens in reality, because we naturally want to judge if the thread of time that we remember experiencing by tracing from root to twig in the tree of reality is what we should "expect" with this hypothesis? Scientists look at past observations to form hypotheses about the laws of reality and then they test these hypotheses using future observations and statistics. Even a primitive hunter uses probability when he has expectations about the behavior of animals based on past observations.

Maybe the solution is to just say "no" to probability? Probability is a subjective word like beauty that doesn't belong in any hypothesis of reality that could be true. Probability might have a history of shaping decisions when we are uncertain, but it isn't real. Our model of reality should not be built on concepts like "beauty" and "probable". Probability might seem to help at gambling, but it really doesn't. Every possibility happens.

I'm having a hard time thinking about this to be honest, because probability seems to be hard-wired into my brain. Maybe this doesn't even make sense. IDK.
 

Desert Snake

Veteran Member
If I understand your idea correctly, you are saying the concept of probability reappears even if we assume that everything possible happens in reality, because we naturally want to judge if the thread of time that we remember experiencing by tracing from root to twig in the tree of reality is what we should "expect" with this hypothesis? Scientists look at past observations to form hypotheses about the laws of reality and then they test these hypotheses using future observations and statistics. Even a primitive hunter uses probability when he has expectations about the behavior of animals based on past observations.

Maybe the solution is to just say "no" to probability? Probability is a subjective word like beauty that doesn't belong in any hypothesis of reality that could be true. Probability might have a history of shaping decisions when we are uncertain, but it isn't real. Our model of reality should not be built on concepts like "beauty" and "probable". Probability might seem to help at gambling, but it really doesn't. Every possibility happens.

I'm having a hard time thinking about this to be honest, because probability seems to be hard-wired into my brain. Maybe this doesn't even make sense. IDK.
Probability inference is only gained via repetition. Anyone claiming non-repetition probability inference is referring to fantasy or theoretical probability.
 

picnic

Active Member
@LegionOnomaMoi , getting back to the title of this thread, if we do imagine a reality that is a tree of every possibility, does that work better with discrete possibilities? It seems to me that the universe that we observe is finite. If we imagine discrete possibilities then the tree of every possibility is also finite. If we don't have that restriction, then the tree of every possibility becomes infinite. That doesn't seem good to me intuitively.
 

picnic

Active Member
Kolmogorov founded modern, axiomatic probability theory. He did this by introducing measure theory (developed principally by Lebesgue) into probability. In probability theory, a randomness measure is a particular sort of probability measure, but "probability measure" here is extremely technical. In measure-theoretic probability, there really isn't any "discrete" or "continuous" distinction. The mostly arbitrary dichotomy is mainly due to the deficiencies of elementary calculus (particularly integration), an insufficiently rigorous algebra of sets (and set theory and topology in general), and finally the failure to appreciate what Lebesgue did by introducing measures in the first place: the advantage of "chopping up" the range of functions rather than the domain (as in, e.g., your standard Riemann integral).

I wasn't referring to Kolmogorov complexity, but you weren't remembering wrong (I simply wasn't sufficiently clear; I'm not entirely sure how to be here: measure theory is typically introduced to graduate students of mathematics after several years of calculus and other higher-level mathematics, so reducing it to and explaining how it is the foundation of modern probability theory is not exactly easy, at least for me).
Thanks, that gives me some words to google to see if anything makes sense to me. I wish I could go back to college and learn some of these things at my own slow pace - learning for the sake of curiosity instead of learning for a competitive career. A lot of these concepts can't be discussed or imagined meaningfully without a lot of math that I don't know.
 

LegionOnomaMoi

Veteran Member
Premium Member
A lot of these concepts can't be discussed or imagined meaningfully without a lot of math that I don't know.
For what it's worth, as an undergraduate I couldn't fit most of the courses I wanted into my schedule (I was overloading courses every semester to graduate in 3 years rather than 4, and had already made the stupid decision to add a major in ancient Greek & Latin and a minor, so when I discovered mathematics was awesome I was already going to have something like two full semesters' worth of elective credits that didn't count towards graduation). I had never taken trig or precalculus, and didn't even know what calculus was, but a course in statistics and symbolic logic had made me think there might be aspects of math I would enjoy. As I had liked statistics, I figured reading more advanced sources on statistics would be a perfect place to start. I went to the library and got some books.
I became familiar with vectors before I knew what calculus was even about, and quickly encountered terms that were not explained because I was expected to know them. So, for example, when I first read the term "2nd derivative", not having a clue what that meant, I looked it up and found that it meant taking the derivative of the 1st derivative. Of course, I had no clue what the "1st derivative" was.
Basically, I started learning mathematics at one level, proceeded backwards until I reached the level I could understand, and then retraced my steps.
Worse still, I discovered that the standard mathematics curriculum for pre-college and undergraduate students is awful. Take calculus:
"But none of us teach the calculus integral. Instead we teach the Riemann integral. Then, when the necessity of integrating unbounded functions arise, we teach the improper Riemann integral. When the student is more advanced we sheepishly let them know that the integration theory that they have learned is just a moldy 19th century concept that was replaced in all serious studies a full century ago.
We do not apologize for the fact that we have misled them; indeed we likely will not even mention the fact that the improper Riemann integral and the Lebesgue integral are quite distinct; most students accept the mantra that the Lebesgue integral is better and they take it for granted that it includes what they learned. We also do not point out just how awkward and misleading the Riemann theory is: we just drop the subject entirely." (emphasis added)
The Calculus Integral

Students learn a lot of pointless procedures for manipulating algebraic expressions and equations so that they can take calculus without having to understand its foundations, via rote use of precalc skills on practice problems on e.g., limits. Up through calculus, mathematics is all about the mechanical application of rules and computation. Then students encounter courses in linear algebra or similarly abstract mathematics which are no more computationally demanding than basic arithmetic but which are so conceptually difficult and distinct from the mathematics they have practiced throughout their education that many find themselves lost. Students spend hours first approximating areas using summations, then finding exact areas using such sums as they approach infinity, all to learn integration. Alas, this technique almost always fails, and is quickly forgotten: "Indeed, it would be a reasonable bet that most students of the calculus drift eventually into a hazy world of little-remembered lectures and eventually think that this [antidifferentiation] is exactly what an integral is anyway. Certainly it is the only method that they have used to compute integrals" (ibid.).
Set theory and logic (practically the foundations of mathematics) are marginalized while precalculus students spend hours and hours on problems involving synthetic division or matrix multiplication (despite the fact that nothing they learn about matrices can be used until it is supplemented by linear algebra, which will cover matrices from scratch anyway). Topics of actual importance, like statistics and probability, are barely taught before college, if at all, because apparently proving trig identities is regarded as time better spent, even for people who will likely have little or no exposure to mathematics beyond applied mathematics.

Thanks to my frustrations learning, and thanks to subsequent experience teaching, I've spent a lot of time reviewing sources, finding free material, writing tutorials, etc. So, for example, on measure theory and probability you might look at the following free sources:
Measure Theory and Probability: A Basic Course
Probability and Measure Theory

For more basic material, there are free textbooks here.
There's a great set of online material that walks you through problems on numerous subjects here.
On undergraduate level probability, there's Radically Elementary Probability Theory, Probability, Mathematical Statistics, Stochastic Processes, Introduction to Probability, and better still, Probability Theory: The Logic of Science.

I have loads of links to free material on mathematics and more, but as you didn't ask for any, I should probably stop simply listing some. But I'd be happy to point you towards material of the type you wish on topics you are interested in if you want.
 

LegionOnomaMoi

Veteran Member
Premium Member
if we do imagine a reality that is a tree of every possibility, does that work better with discrete possibilities?
I'm not sure what you mean. Generally speaking, it is almost always easier to work with discrete sets (particularly finite sets), particularly when it comes to uniform probability. However, when one asks if something "works" better, one must also consider what one wishes it to work for. Nobody wanted quantum theory (and many tried desperately to explain the results of quantum experiments classically, most famously Einstein). The problem is that classical physics simply doesn't work to explain the dynamics of the atomic and subatomic realms. Heisenberg, who provided the first "complete" quantum mechanics, never sought to actually develop the "matrix mechanics" he so famously did, in part because he didn't even know what matrices were (Born told him, and both of them were so flummoxed they went to David Hilbert for advice). Some work in early quantum electrodynamics (even that by Dirac) was initially abandoned not because it didn't work, but because it seemed undesirable. Later, such undesirable results were found to be necessary.
The question, then, seems to me to be not whether discrete probabilities work better, but what works period. Branching universes and similar relative state interpretations of QM were proposed and are defended by their proponents because they are attempts to explain the mathematics already present in quantum mechanics. What works in terms of branching universes is thus constrained at the very minimum by the probabilities in quantum mechanics.
If you wish to consider a multiverse of branching universes independently of quantum physics, this is certainly possible, but one must ask "what for?" We can imagine that there are exactly ten parallel universes, or that branch universes are formed in triplets every 365 days, etc. But we have no reason to do this. And if such a proposal is intended to be a thought-experiment or conceptual exercise, I confess I don't really understand the merit.
It seems to me that the universe that we observe is finite.
But
1) We can't and don't observe other branches of a multiverse
2) Observation is limited in many ways, from the fact that measurements necessarily yield rational numbers to the fact that our cosmological observations are limited by the universe's expansion and the speed of light. These are limits on observation, but not limits on our ability to understand the cosmos, nor on the cosmos itself.
3) We observe probabilities that are necessarily continuous all the time. In fact, examples of such necessity can be highly intuitive in a certain sense. Consider geometric probability in the form of a dartboard modeled as a unit circle. In order to determine the probability that we will hit any particular region with a dart, we have to realize that any "area" consists of uncountably infinitely many points and thus we can't use the addition rule for probabilities (rather, we require integration).
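A Monte Carlo version of the dartboard (my sketch; the region and sample count are arbitrary): the probability of hitting a region of the unit disk is its area divided by π, an integral over uncountably many points rather than a sum.

Code:
import random

trials, inner = 200_000, 0
for _ in range(trials):
    # Rejection-sample a uniform point on the unit disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            break
    if x * x + y * y <= 0.25:  # region: the concentric disk of radius 1/2
        inner += 1

print(inner / trials)  # ~0.25 = (pi * 0.5**2) / (pi * 1**2)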

If we imagine discrete possibilities then the tree of every possibility is also finite.
Discrete probabilities can be infinite. Discrete means either finite or countably infinite.
If we don't have that restriction, then the tree of every possibility becomes infinite. That doesn't seem good to me intuitively.
It isn't intuitive. But then neither is probability, even when it concerns random (finite) trees. I wish I could hand you:
Random Trees: Interplay Between Combinatorics and Probability
 