
Does randomness exist in nature?

setarcos

The hopeful or the hopeless?
Randomness test - Wikipedia

According to Wikipedia (above), the Diehard Battery of Tests is a well-known suite of statistical tests used to check whether a sequence of numbers behaves as if it were random.
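Here is a minimal sketch of the same idea, far cruder than the Diehard battery: a simple runs test that compares a statistic of the observed sequence with what an ideal random source would produce (the helper function below is illustrative, not part of Diehard).

```python
# A very simple randomness check (a Wald-Wolfowitz runs test on a 0/1
# sequence). It is far cruder than the Diehard battery, but it illustrates
# the same idea: compare a statistic of the observed data with what an
# ideal random source would produce.
import math
import random

def runs_test_z(bits):
    """z-score of the number of runs, under the hypothesis of randomness."""
    ones = sum(bits)
    zeros = len(bits) - ones
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mean = 2 * ones * zeros / (ones + zeros) + 1
    var = (2 * ones * zeros * (2 * ones * zeros - ones - zeros)) / (
        (ones + zeros) ** 2 * (ones + zeros - 1)
    )
    return (runs - mean) / math.sqrt(var)

random_bits = [random.randint(0, 1) for _ in range(10_000)]
alternating_bits = [i % 2 for i in range(10_000)]
print(runs_test_z(random_bits))       # typically within a few units of zero
print(runs_test_z(alternating_bits))  # huge z-score: clearly not random
```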

According to Wikipedia (above), the Einstein-Podolsky-Rosen (EPR) paradox led to "statistically independent" being added to the requirements of the Heisenberg Uncertainty Principle (which derives from the Robertson-Schrödinger inequality). Einstein et al. proposed that a particle of spin 1 might randomly split into two particles whose spins add up to 1. So, though the top is random and the bottom is random, the top plus the bottom always equals one. You can therefore use only tops of particles or only bottoms of particles, but if you mix them, some will be statistically dependent and ruin the results of the Heisenberg Uncertainty Principle.
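A toy arithmetic illustration of that last point (just a sketch, not a model of real EPR physics): each half of a pair looks random on its own, yet the halves are perfectly dependent because every pair is constructed so that its sum is exactly one.

```python
# Each half of a pair looks random on its own, yet the two halves are
# perfectly dependent because the pair is built so its sum is exactly 1.
import random
from fractions import Fraction

pairs = []
for _ in range(5):
    top = Fraction(random.randint(0, 1000), 1000)  # the "top" value, random
    pairs.append((top, 1 - top))                   # the "bottom" is forced by the top

print([float(t) for t, _ in pairs])   # tops: look random
print([float(b) for _, b in pairs])   # bottoms: also look random
print([t + b for t, b in pairs])      # but every sum is exactly 1
```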

Quantum mechanics deals with random complex numbers (complex numbers have real and imaginary parts). The wave function (also called the state function because it determines the state of a particle) of an electron is complex-valued. The probability density is the product of the wave function and its complex conjugate (this is how the magnitude of a complex number is squared). Thus, the product of the complex wave function with its conjugate gives a real probability (of finding the particle at a particular place in space). To reiterate, from a complex wave function we get a real probability.
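A minimal numerical sketch of that point, using an arbitrary toy wave packet (not any particular physical state): multiplying the complex wave function by its complex conjugate yields a real, normalized probability density.

```python
# A toy complex-valued wave packet: multiplying it by its complex conjugate
# gives a real probability density that integrates to one.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * 3 * x)      # Gaussian envelope times a complex phase
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)     # normalize total probability to 1

density = psi * np.conj(psi)                      # psi times its complex conjugate
print(np.max(np.abs(density.imag)))               # 0.0: the density is purely real
print(np.sum(density.real * dx))                  # ~1.0: probabilities add up to one
```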

To do quantum mechanics math, we move from the coordinates we know (x, y, z, time) to an abstract state space (called Hilbert space). Oddly, that space of probability amplitudes has its own dimensions. Also oddly, the dimensions of space in the real world don't correspond to the dimensions of Hilbert space.

A single electron orbiting a single proton produces a Hamiltonian operator (an energy operator, which is like a matrix but filled with mathematical operations, some of them involving complex numbers) with infinitely many rows and columns. This is because each probability density corresponds to a unique position of the electron, and, as it orbits, it can be located in an infinite number of places.

Since working with an infinite operator directly is too difficult, the time-independent Schrödinger equation treats the problem as an eigenvalue equation: the Hamiltonian acting on a state returns that state multiplied by an eigenvalue (one of the allowed energies that solve the equation). This makes the math much simpler.
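Here is a minimal sketch of the eigenvalue idea, using a much simpler system than hydrogen (a particle in a one-dimensional box, discretized on an arbitrary finite grid, with units chosen so that hbar = m = 1): the finite matrix stands in for the infinite-dimensional operator, and its lowest eigenvalues approximate the allowed energies.

```python
# A particle in a 1-D box, discretized on a grid (hbar = m = 1; grid size and
# box length are arbitrary). The matrix stands in for the infinite-dimensional
# Hamiltonian; its lowest eigenvalues approximate the energies E in H psi = E psi.
import numpy as np

n = 500                                 # interior grid points
L = 1.0                                 # box length
dx = L / (n + 1)
diag = np.full(n, 1.0 / dx**2)          # finite-difference form of -(1/2) d^2/dx^2
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

numeric = np.linalg.eigvalsh(H)[:3]                     # three lowest eigenvalues
exact = [(k * np.pi / L) ** 2 / 2 for k in (1, 2, 3)]   # analytic E_k for the box
print(numeric)
print(exact)                                            # the two lists agree closely
```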

Physicists have a shorthand notation for states and their probabilities called Dirac notation. The front part, written < |, is called a bra, and the back part, written | >, is called a ket; together they form a bracket. The states in the bra and ket can be interchanged by taking the complex conjugate (the bra is the conjugate transpose of the ket). But, as in matrix math, the order of the factors must be preserved.
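A minimal sketch of that bookkeeping, with plain NumPy vectors standing in for made-up toy states: swapping the bra and the ket conjugates the inner product.

```python
# Plain NumPy vectors standing in for toy states: swapping the bra and the
# ket conjugates the inner product, <phi|psi> = conj(<psi|phi>).
import numpy as np

psi = np.array([1 + 2j, 3 - 1j]) / np.sqrt(15)   # a made-up normalized "ket"
phi = np.array([2 - 1j, 0 + 1j]) / np.sqrt(6)    # another made-up "ket"

braket = np.vdot(phi, psi)     # np.vdot conjugates its first argument: <phi|psi>
swapped = np.vdot(psi, phi)    # <psi|phi>
print(braket)
print(np.conj(swapped))        # identical to the line above
```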

Quantum mechanics is used to understand the small world (molecules and smaller). But, when applied to the larger world, the random numbers combine in such a way as to make things non-random. Thus, in the big world of Newtonian physics, random motion is sometimes converted into non-random motion.
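A minimal sketch of how many random contributions can average out into something effectively non-random (just the law of large numbers on made-up uniform noise, not a physical simulation):

```python
# The average of many independent random values concentrates tightly
# around a fixed number.
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 10_000, 10_000_000):
    sample_mean = rng.uniform(-1, 1, size=n).mean()
    print(n, sample_mean)    # the mean hugs 0 more and more tightly as n grows
```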

Some data has both random and nonrandom elements.

grainy pixelated picture - Google Search

The link above shows grainy pixelated pictures. That is, pictures that use big square blocks to represent the average color values in each region.

Pixelated pictures have low-frequency noise, so they can be cleaned up somewhat by a filter. But, to really be effective, dither has to be used. Dither is random noise that is added to a signal; the picture is then fixed by averaging the dithered signal. Averaging a dithered signal brings out the original picture in surprising detail; even details that are not captured in the individual pixels emerge.

Dithering doesn't affect the underlying signal. According to Fourier's theorem, any periodic wave can be split into harmonic components; a square wave, for example, can be represented by an infinite series of sine waves. A sine wave's PDF (probability density function) piles up at the two endpoints and is rather low in the middle. As a result, dithering has little effect on the sine components, so it has little effect on the signal as a whole. For digitally sampled signals, the sampling theorem requires at least two samples per cycle of the highest-frequency component (that is, sampling at more than twice that frequency).
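A minimal sketch of the Fourier claim, using the textbook square-wave series (nothing specific to dithering): partial sums of the odd sine harmonics approach a square wave as more terms are included.

```python
# Textbook square-wave Fourier series: partial sums of (4/pi) * sum of
# sin((2k+1)x)/(2k+1) approach a square wave as more terms are included.
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
square = np.sign(np.sin(x))

def partial_sum(n_terms):
    total = np.zeros_like(x)
    for k in range(n_terms):
        m = 2 * k + 1
        total += (4 / np.pi) * np.sin(m * x) / m
    return total

for n in (1, 5, 50):
    print(n, np.mean(np.abs(partial_sum(n) - square)))  # average error shrinks with more harmonics
```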

Dithering (which is noise added to the signal) lowers the effective noise level because the PDF (probability density function) of the dither convolves (a mathematical process) with the PDF of the quantization error. The dither decorrelates the error from the signal and spreads it throughout the frequency spectrum, which lowers the noise within the signal band. Noise outside the band of interest can then be filtered out (for example, components too low-pitched or too high-pitched to hear), removing much of the noise that would otherwise sit in the signal band.
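A minimal sketch of the PDF-convolution point: the density of the sum of two independent quantities is the convolution of their densities. Here two uniform densities (standing in for quantization error and uniform dither; the widths are illustrative) convolve into a triangular density.

```python
# Two uniform densities convolve into a triangular density for their sum.
import numpy as np

dx = 0.001
x = np.arange(-0.5, 0.5, dx)
uniform = np.ones_like(x)                      # uniform density on [-0.5, 0.5)

triangle = np.convolve(uniform, uniform) * dx  # density of the sum, supported on [-1, 1]
print(triangle.max())                          # ~1.0: peak of the triangular density
print(np.sum(triangle) * dx)                   # ~1.0: it still integrates to one
```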

Dither can be used to counteract the effects of quantization and the effects of jitter (caused by timing delays, for example those in a computer as numbers are calculated). The quantizer's input-output characteristic has a distinctive staircase pattern; dither effectively smooths it out.
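A minimal sketch of why dither helps with quantization (the step size, ramp, and trial count below are arbitrary illustrative choices): a slow ramp quantized with a coarse step loses all sub-step detail, but adding random dither before quantizing and averaging many passes recovers the ramp.

```python
# A slow ramp quantized with a coarse step loses all sub-step detail, but
# dithering before quantizing and averaging many passes recovers the ramp.
import numpy as np

rng = np.random.default_rng(0)
step = 1.0                                # coarse quantizer step
signal = np.linspace(0.0, 1.0, 11)        # slow ramp spanning a single step

plain = np.round(signal / step) * step    # undithered quantization: a staircase

trials = 10_000
dithered = np.zeros_like(signal)
for _ in range(trials):
    noise = rng.uniform(-step / 2, step / 2, size=signal.shape)   # uniform dither
    dithered += np.round((signal + noise) / step) * step
dithered /= trials

print(plain)     # detail destroyed: mostly 0.0, then 1.0
print(dithered)  # close to the original ramp values, recovered on average
```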

Large scale dither has to be subtracted out at the end.

FINDING RANDOM NOISE IN NATURE:

Clouds are random, yet they still form patterns, so there is some non-randomness in them. Rushing water is somewhat random, though some of its behaviour is not. For example, in fluid dynamics the flow rate Q (velocity times the cross-sectional area) must remain constant along the flow. And sometimes, at the bottom of waterfalls and spillways, the water leaps higher (called a hydraulic jump), caused by disruptions in laminar flow.
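A minimal numeric sketch of the continuity relation Q = v × A (illustrative numbers only): with a fixed flow rate, halving the cross-sectional area doubles the velocity.

```python
# Continuity: with the volumetric flow rate Q = v * A held fixed, halving
# the cross-sectional area doubles the velocity. Illustrative numbers only.
Q = 2.0          # flow rate, m^3/s
A_wide = 0.5     # wide cross-section, m^2
A_narrow = 0.25  # narrow cross-section, m^2

print(Q / A_wide)    # 4.0 m/s
print(Q / A_narrow)  # 8.0 m/s
```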

Thermodynamics is filled with random events. The mean free path determines how far, on average, one air molecule travels before bumping into another, and the average kinetic energy of those molecules is what we measure as temperature.

Yet there are laws and boundaries that constrain thermodynamics. For example, there is the Ideal Gas Law, which relates temperature, pressure, and volume. Saturated steam tables list the properties of water and steam at given temperatures and pressures.
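A minimal sketch of the Ideal Gas Law, PV = nRT, with illustrative numbers (one mole of gas at room temperature in a 25-litre vessel):

```python
# Ideal Gas Law, P V = n R T, with illustrative numbers.
R = 8.314      # gas constant, J/(mol K)
n = 1.0        # amount of gas, mol
T = 293.15     # temperature, K
V = 0.025      # volume, m^3

P = n * R * T / V
print(P)       # ~97,500 Pa, close to one atmosphere
```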

As I was getting degrees and advanced degrees in physics, and various types of engineering, I realized that there is an infinite amount to know about all of this.
Advanced degrees in physics are nothing to sneeze at, for sure... so could you give me an example of a proof of a physical law, oh say, like any of the laws of thermodynamics?
 

cladking

Well-Known Member
If an objective view of the universe is possible at all, it must by definition be a God’s eye view, indeed.

We know that the act of observation itself has an effect on the phenomenon being observed; we cannot passively look at the world as it would be were we not observing it.

Yes. Everything in four dimensions, the causations, and even the nature of chaos and its effects must all be understood simultaneously.

But there are various means to understand bits and pieces that we call "laws of nature". The problem is most non-scientists mistake theory for reality and their understanding of reality for omniscience. We can't really see reality but rather we catch glimpses of it through experiment much like the ancients caught glimpses through observation and natural logic.

Science, reason, and logic are the only tools we have to understand reality but they are puny tools in relation to the complexity of the subject. We look around and see what we know and the technology of science and it's easy to forget just how little we really know.
 

RestlessSoul

Well-Known Member
:thumbsup:Indeed. Quantum physics has muddied the waters of comprehension somewhat. Here's a thought though: how is it that quantum phenomena hold a seemingly uniformly experienced existence together from our point of view? Since everyone can't observe everything at once, some other factors of observation must be at play. One theory is an existent God who is never not looking at everything at once, and our collapsing of the quantum fields is an experience of its observations.


This reminds me of something an Indian friend of mine heard from his non religiously observant Hindu father; that we are each facets of a consciousness, experiencing life subjectively.
 

RestlessSoul

Well-Known Member
Yes. Everything in four dimensions, the causations, and even the nature of chaos and its effects must all be understood simultaneously.

But there are various means to understand bits and pieces that we call "laws of nature". The problem is most non-scientists mistake theory for reality and their understanding of reality for omniscience. We can't really see reality but rather we catch glimpses of it through experiment much like the ancients caught glimpses through observation and natural logic.

Science, reason, and logic are the only tools we have to understand reality but they are puny tools in relation to the complexity of the subject. We look around and see what we know and the technology of science and it's easy to forget just how little we really know.


And it seems that all the groundbreaking original thinkers, in science, philosophy, and the arts, have had the humility to recognise that indeed, we each see only a little, and always from a necessarily limited perspective.

The pointillist painter Georges Seurat, intrigued - rather than threatened - by the new science of photography, remarked that "We do not see objects; we see the light reflected by objects."
 

Segev Moran

Well-Known Member
I admit that when the idea for this thread, titled "does randomness exist in nature?", came up, I thought it would be easy just to trawl the internet for a few examples of seemingly random events in nature, such as the radioactive decay of certain elements.

Then the question naturally arises, how do we know an event is truly random, or conversely, how do we know an event is truly predetermined?

I did some hasty googling and realised I might be a bit over my head on the second question, so I'm hoping some of our more knowledgeable members will chime in, but what I vaguely gathered is that you would have to have an infinite sample size to determine if something was truly random or predetermined, since even seeming patterns still have a chance of being the outcome of a random event.

Thoughts?
From our (human) POV there is randomness, but it does not really exist if you look from the universal POV.
In the instant the Big Bang initiated, physical rules formed.
These rules "dictate" to our universe what to do next.
Atoms are particles attracted to one another due to "random" events, but these events were bound to happen due to the countless other physical rules that act upon them.
Think what would happen if humans had knowledge of all possible physical forces that act on the universe... including quantum ones and whatever lies beyond them.
Then, in a sense, nothing would be random to us.
The more knowledge we gain, the better we understand what caused an event, and the more easily we can predict the future.
Already today, we use simple AI to predict amazing things.
It might seem that an AI acts "randomly", but it is a network of functions all connected by specific models.
Because we sometimes build very complex networks, it seems that the AI generates random things... but that is not so in actuality. Its output is an "answer" based on trillions of calculations... similar to our brain (although our brains are far more complex than anything we can compute today; it seems that may become possible in a few years, though).
In that sense, even your actions and thoughts are not random ;)
 

robocop (actually)

Well-Known Member
Premium Member
I admit that when the idea for this thread, titled "does randomness exist in nature?", came up, I thought it would be easy just to trawl the internet for a few examples of seemingly random events in nature, such as the radioactive decay of certain elements.

Then the question naturally arises, how do we know an event is truly random, or conversely, how do we know an event is truly predetermined?

I did some hasty googling and realised I might be a bit over my head on the second question, so I'm hoping some of our more knowledgeable members will chime in, but what I vaguely gathered is that you would have to have an infinite sample size to determine if something was truly random or predetermined, since even seeming patterns still have a chance of being the outcome of a random event.

Thoughts?
No, just pseudo-randomness.
 

LegionOnomaMoi

Veteran Member
Premium Member
I admit when the idea for this thread titled, "does randomness exist in nature?"
Here you go:
Free randomness can be amplified

That’s as good a place to start as many (albeit probably not any, but I don’t have unlimited time to determine the optimal starting place given the data), and here’s an accompanying video by one of the authors:

Then the question naturally arises, how do we know an event is truly random, or conversely, how do we know an event is truly predetermined?

Firstly, “predetermined” is not a term that has any place here. It is bad enough that to many people “deterministic” and “random” seem to be opposite notions (they are not). But at the very least, deterministic is relevant here in the sense that 1) it is common terminology among physicists, applied mathematicians, natural (and social and behavioral) scientists, and philosophers of science
and 2) It seems to fit the bill for what you want out of “predetermined”. That is, a deterministic system is one which can appear to be highly erratic, complex, “random” in the naïve sense, etc., yet if we know the law or laws governing the evolution of the system along with the exact initial conditions, then (with highly technical, mostly irrelevant, yet important caveats and partial exceptions) we can in principle know precisely what the state of the system will be at any arbitrary moment in time we choose.
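A minimal sketch of that kind of determinism, using the textbook logistic map rather than anything from the post above: the update rule is completely deterministic, and rerunning it from the same initial condition reproduces the same erratic-looking sequence exactly.

```python
# The textbook logistic map: a deterministic update rule whose orbit looks
# erratic, yet the same law plus the same initial condition reproduces
# exactly the same sequence every time.
def logistic_orbit(x0, r=4.0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run1 = logistic_orbit(0.123456789)
run2 = logistic_orbit(0.123456789)
print(run1[-5:])     # looks irregular
print(run1 == run2)  # True: same law + same initial condition -> same states
```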

You have to be careful, though, with vocabulary here. "Random" is often shorthand for a special kind of function (a random variable) that maps outcomes of a measurable space (a set together with a sigma-algebra, or sigma-field, of its subsets) to the real numbers, where the measure involved differs from the more general requirements for a triple <set, set of subsets, measure> to be a measure space in that it must also make sense in terms of probability theory (so, e.g., the measure of the entire space/set must be 1).
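A minimal discrete sketch of that setup (two fair coin flips, a made-up example): a sample space, a probability measure whose total mass is 1, and a random variable mapping outcomes to real numbers.

```python
# Two fair coin flips: a sample space, a probability measure of total mass 1,
# and a random variable mapping outcomes to the real numbers.
sample_space = ["HH", "HT", "TH", "TT"]
prob = {outcome: 0.25 for outcome in sample_space}             # measure of the whole space is 1
X = {outcome: outcome.count("H") for outcome in sample_space}  # random variable: number of heads

print(sum(prob.values()))                         # 1.0
print(sum(X[w] * prob[w] for w in sample_space))  # E[X] = 1.0
```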

But, of course, even from a probabilist's perspective, probability theory is far, far more than just a subfield of measure theory (which merely supplies the needed rigor for the axiomatic formulation given by Kolmogorov). Measure theory says nothing about conditioning, which is a core topic in statistics and probability theory (as is independence).

Additionally, it is generally agreed that we cannot use classical probability theory in quantum theory the way we do in e.g., statistical mechanics because, among other things:
i) classical probability theory does not allow correlations between random variables stronger than its own bounds permit, yet Bell states and other systems in QM violate those bounds
ii) complex numbers aren't really a part of classical probability: what would otherwise be a standard Fourier transform is usually replaced by a real-valued generating function, the moment generating function. They certainly aren't a part of general probability spaces, maps, random variables, or other core components of mathematical statistics and probability theory. But they are absolutely, utterly, completely essential, in practice and in theory development (and, it would appear, in nature or in reality as well), for quantum theory.
iii) The standard evolution of quantum systems is entirely deterministic, while systems in probability theory that evolve in time (stochastic systems) are not.
Etc.

Additionally, it is misleading to think of e.g., quantum theory as somehow "random" rather than "deterministic", because the reality is much more complicated and arguably worse. It is not simply that quantum systems are inherently random (they are, but that's not actually saying much: every deterministic system can be modeled or represented as a stochastic system, or in a similar manner using intrinsic, fundamental "randomness", and this means nothing). It is not that quantum systems are indeterministic so much as it is that they are indeterminate. That is, things like "position", which are classically pretty much the most basic, determinable properties of a "particle", are not attributes of things like electrons. They are instead properties of measurement events.

It is also vital to understand that even in classical physics, the very assumptions that went into verifying the validity (now known to be a useful approximation in many domains) of classical laws violated a strictly deterministic universe. Put differently, deterministic theories in classical or modern physics can be thought of as saying that under idealized, never (even in theory) realizable conditions, any given system with an initial condition we know precisely and that is perfectly isolated from the rest of the universe will evolve in an entirely predictable manner given infinite computing power to make these predictions.

Nothing in classical or modern physics makes it possible to infer from the possibility that the laws of physics are or were or could have been deterministic that therefore the cosmos/universe is.

Put most simply, if deterministic laws of physics implied a deterministic universe, then there would either be no initial conditions or just one initial condition and most/all of experimental physics and empirical science more generally would rest on false premises.

but what I vaguely gathered is that you would have to have an infinite sample size to determine if something was truly random or predetermined, since even seeming patterns still have a chance of being the outcome of a random event.

An infinite sample size is not enough for what you are referring to. Sampling from processes (i.e., sampling that is relevant to the idea of “random events” in the manner you are talking about) is different from the kind of language used when people generally speak of sample sizes and randomness. Probabilists, statisticians, etc., tend to use words like “experiment” and “event” as mathematical shorthand. An “event” in this case has no time dependence and cannot generally be thought of or understood in terms of time. That’s where stochastic processes come in (the simplest being perhaps Markov chains, especially discrete).
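A minimal sketch of a discrete two-state Markov chain (the states and transition probabilities below are made up): the simplest kind of stochastic process, where each step depends only on the current state.

```python
# A two-state discrete Markov chain with made-up transition probabilities:
# each step depends only on the current state.
import random

P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    return "sunny" if random.random() < P[state]["sunny"] else "rainy"

state, path = "sunny", []
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```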

There are several measures of randomness in the relevant literature (which spans several fields, from computer science to philosophy to econophysics and beyond). "Truly random" is a philosophical, metaphysical notion that first and foremost requires one to take a position on the nature of probability theory in general. After all, if one views probability theory as a calculus for subjective inferences, or as an agent-based decision theory, or something like this, then randomness is related to an individual's uncertainty and it makes no sense to ask whether something is truly random. If one follows many frequentists and their sympathizers and collaborators, like von Mises (and his collective), then "truly random" is just "random" again, but refers more to ideas like the ones you mention having to do with infinite samples, because frequentists posit that their samples are taken from a distribution that is never and can never be observed or exist, but rather is a theoretical device that makes calculations simpler and is generally swept under the rug.

Basically, you need to be a lot clearer about what you mean by “truly random”.
 

LegionOnomaMoi

Veteran Member
Premium Member
Hmm...randomness until there is sufficient data. That's a very interesting point. I believe that you are right, and we can see a demonstration of that when we go from the small world of quantum mechanics to the big world of Newtonian mechanics.
If that were true, then QM would be deterministic.

When you toss an apple into the air, you can use equipment to measure the initial velocity, and you know the acceleration of gravity (at that altitude), so you can precisely predict how high it will go, when it will reach the top, and when it will fall on Newton's head. This is called deterministic. That is, deterministic means precisely known.
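A minimal sketch of the deterministic prediction the quoted example describes (illustrative launch speed, standard gravity, air resistance ignored):

```python
# Newtonian kinematics for the tossed apple: peak time, peak height, and
# time of flight follow exactly from the measured initial speed and g.
g = 9.81       # m/s^2
v0 = 10.0      # measured initial upward speed, m/s

t_peak = v0 / g            # time to reach the top
h_peak = v0**2 / (2 * g)   # maximum height
t_return = 2 * v0 / g      # time to come back to launch height
print(t_peak, h_peak, t_return)
```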
Classical mechanics makes use of the continuum. Specifically, Newtonian mechanics and its extensions involve observables and units (distances, lengths, mass, momentum, time) that we treat in terms of intervals on product spaces of the real numbers (R x R x R x ... x R, n times, or more simply R^n). So we might, e.g., treat a 3-dimensional system with two constraints as having a definite position along e.g. the x coordinate at any time t, where both the x-axis and the time parameter are real-valued and continuous (in the naïve, elementary analysis or calculus sense).
Such systems are deterministic because we are guaranteed that unique solutions exist (with certain limited caveats and partial or possible exceptions) given specific initial conditions. Again, it is the real numbers, and the properties of the continuum that the real numbers possess, that make this true.
But this also means that precisely knowing the position of e.g. an apple in point-particle mechanics, or measuring its mass or the time at which it falls, would require more precision than is or ever will be possible. It requires measurements of increments and intervals that are well "below the Planck scale" (so to speak). And it requires infinite information just to record the initial conditions precisely.
Nor is it possible, even in principle, to ever determine empirically that the topological and other properties we need in order to assert such deterministic laws and properties are meaningful actually hold.
The rational numbers are in a sense the best model of the measurements we can make. They can be scaled infinitely small and can approach any value we wish to arbitrary precision, such that it would seem that all the limit laws from precalculus and the beginning of calculus would hold for rational numbers.
They don't. Most real numbers are non-computable, the set of rational numbers (which is all we can ever measure) is negligible (almost all numbers are irrational), and all measurements ever made could be recorded in a single real number's digits with infinite room to spare. That is what we would need to "precisely" specify properties in classical physics: infinite accuracy, infinite storage, and infinite computational power.
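A minimal sketch of that finite-precision point: the decimal value 0.1 cannot be stored exactly as a binary floating-point number; what the machine actually stores is a nearby rational.

```python
# The float 0.1 is really a nearby rational, not exactly one tenth.
from fractions import Fraction

stored = Fraction(0.1)            # the exact rational the float 0.1 really is
print(stored)                     # 3602879701896397/36028797018963968
print(stored == Fraction(1, 10))  # False: not exactly one tenth
```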

In the small world of quantum mechanics, nothing is precisely known, everything has random elements (though some things are mixtures of random and non-random events). So, if we shrink Newton to a subatomic particle, we won't be able to tell precisely how high the shrunken apple will go, nor when the shrunken apple will hit shrunken Newton's head.
None of this makes sense.
 