
Free Will: Some explanations (and its compatibility with the sciences)

LegionOnomaMoi

Veteran Member
Premium Member
I hope to bypass some of the issues relating to the “free will” debate by focusing on what most (whether they believe in “free will” or not) would agree “free will” necessarily entails: a person’s capacity to make a choice or decision such that they could have made a different choice or decision than the one they did.

Example: A large number of people end up going to college/university. Consider a specific student and a decision about a change of major. We’ll call our student Max, a student of German extraction who is studying classical languages with a minor in philosophy (although it isn’t really important, it so happens that Max, whether for simplicity or out of needless shame, only gave the university his first and last name, rather than including the middle part: Ernst Ludwig). After about two years of studying, certain philosophy courses have made Max really interested in physics, especially theoretical or “foundational” physics.

After considering for some time what changing his program of study would entail (extra semesters, a new set of courses required for the new major, more money, etc.), Max decides “dash it all, one only lives thrice, so I might as well give it the ol’ college try, wot? Righto, then, it’s settled.” (Max is a bit of an Anglophile). And he decides to switch his major and study physics, ignoring the advice of others (including a mentor of his, Phil Jolly, who was particularly opposed).

We have here a decision, or choice, that Max made after considerable thought (making it a “conscious” choice, understanding that the adjective “conscious” hasn’t been defined and is to be understood in the colloquial sense). Sticking to the confines of “free will” we defined above, the question is whether or not Max could have decided differently. We are not concerned with the extent to which his upbringing, friends, the courses he took, the state of the universe several hundred million years ago, etc., influenced his decision. We’ll even grant that there were a great many things, from his upbringing to a passing remark in one of Max’s classes made by Prof. Rosen, over which Max had little (and in some cases no) influence, but which did influence Max.

The question is not whether Max was influenced, but whether or not, no matter how “big” we allow our sphere of possible influences to be (e.g., as “big” as the state/conditions of everything from the origins of the universe up to Max’s decision), we can say that Max had the capacity to decide not to change majors. Put another way, whether Max could have made a decision other than the one he did.

But asking is just the easy part. Answering is another thing altogether. However, we have a starting place, this decision of Max’s along with our question about it, which naturally brings to mind other questions, issues, problems. For example, sticking with what we know about the make-up of human physiology, what about it can we relate to Max’s decision? Most would probably agree that the brain is rather key here. In fact, it’s not only key, but if there is any way in which Max could have made a different decision, then something about the way Max’s brain works must make that a possibility.

Another thought or question which no doubt presents itself here is why we might (or might not) think, whatever the nature of Max’s brain, that there is reason to suppose Max only thinks he made a choice which he could have made differently, when in reality someone with a suitable amount of data (even if it needs to include the state of the entire universe since the big bang) and the right computing device could have told us what decision Max would make a day before, a month before, a billion years before, etc. In other words, Max’s choice couldn’t have been other than what it was, because it is at least in principle possible for us to know what that choice will be before Max actually makes it, which necessarily entails that it was the only choice he could make.


There is a set of related reasons for entertaining the seemingly impossible (in that it is counter to our everyday experience) notion that there is no decision or choice we can make such that a different one was possible. The first is another type of everyday experience: effects and their relationship with time and cause. Simply put, the idea that one thing causes another, and that the cause precedes the other (the effect), is something we experience constantly. Why is Cathy Conduitt crying? Because Uncle Newton, during another rant about gravity, has dropped a filled glass and spilled milk all over the floor she had just cleaned. If asked why she was crying, Cathy would say it was because of the mess made by the spilt milk on the floor. And Newton would proudly announce that he had dropped the glass, causing gravity to take over, and then ramble on about structural integrity and the essence of hardwood flooring relative to melted sand blown into glass.

That’s the type of thing we experience all the time, mainly because we experience “time” as the unfolding of something we intuitively understand as events (which are more or less temporal intervals we conceptualize as “wholes”). When we can see these events or actions, whether a dropped glass or a match lit or a declaration of war, we conceptualize them in terms of how we experience time and the activity which happens as we experience this time; that is, as a linear sequence of intervals/moments/actions/effects/etc., each one and every one resulting from some previous set of effects which came before it and which caused it.

There is another main reason for supposing that Max only thinks he made a decision which, in the end, was the result of his ability to (at least at times and/or to some extent) determine what he does, such that he could have made a different one. To explain this reason, we have to look back in history a bit. Specifically, we have to deal with a Greek by the name of Aristotle who’s been dead for millennia. For centuries, the big issue here (Max’s decision and whether it was inevitable) had a lot to do with language. Aristotle illustrated the issue with a sea battle, which in Greek is one word (ναυμαχία) and which in the Greece of that time was a common enough experience, but which is rather dated now. Instead we’ll go with rain. Like Aristotle, philosophers and others even unto today deal with “truth-bearing” statements called propositions. Thus, “is it raining?” is not truth-bearing, but “it’s raining” is. If I say “don’t go outside, it’s raining” and you go outside to find that there’s not a cloud in the sky nor a drop of water falling from it, then what I said was false. But what if you had asked about the weather report? And what if I had answered “It’s going to rain tomorrow”? This appears to be a proposition, in that although we can’t determine whether or not it’s true right after it is said, we can do so the next day.

And because philosophers are lazy, borderline psychotic, obsessive, and generally useless to society, for centuries reasonable people have tried to keep them confined to universities or similar institutions, so that they could spend hundreds of years arguing about how exactly “it’s going to rain tomorrow” is or isn’t a “truth-bearing” statement (proposition), and whether the answer to this question entails fatalism.

Let’s go back to Max. What if, after a last-ditch attempt to dissuade Max, Phil Jolly had said (just as Max closed the door behind him on the way out of Dr. Jolly’s office) “he’s going to change his major”? Let’s assume this statement to be truth-bearing. It turns out that Jolly was correct here, and Max changed his major. Which seems to mean that when Dr. Jolly predicted this, his statement was true. If it was true when he made it, then necessarily Max had to change his major, otherwise Phil Jolly’s statement would be false. Of course, if it were false, then Max couldn’t change his major, because had he done so, “he’s going to change his major” would have been true. And for a very, very long time, the safely secluded philosophers argued about this while the rest of society did real work.

But, a few centuries ago, things began which would end up changing how philosophers wasted their time: the physical sciences. People like Descartes, Kepler, Fermat, Newton, and others began to develop and apply mathematical formulae to physical phenomena in order to describe and model physical reality, from the movement of planets to why everything is about apples being where they shouldn’t be (the apple which isn’t mentioned in Genesis, the apple which belittles Newton’s work on gravity by reducing it to getting hit on the head, the millions of deaths caused by people who were sure that “an apple a day” would keep the need for medical attention and doctors away, etc.).

They called this “new” approach to figuring out why things were the way they were “science”, derived from the Latin scientia which means “people without common sense”. And they got better and better at it, creating new fields of research where before there was only one, and almost all of this was related to being able to know what was going on and what would be without fondling the innards of sheep (and other messy divination methods).

Even though things didn’t begin with the intent to create a complete set of laws enabling one to (at least in principle) determine how anything and everything would happen, the more cohesive the “natural sciences” became, and the better and more accurate the ever-increasing number of “laws” became at demonstrating how stuff worked, the more it appeared as if everything operated differently than Descartes had thought: mechanics (the laws of motion) was no longer seen as applicable only to non-living systems incapable of agency. Instead the entire universe increasingly seemed to obey deterministic “laws” of physics. Natural philosophers (proto-scientists like Newton and Laplace) and later physicists began to think that there isn’t much of a difference between knowing how to answer those insufferably boring, irritating, and pointless questions of the form “if Alice drives east at a rate of 1000 furlongs per fortnight, and Bob drives north for seven moons at a rate of…” and knowing how to determine what decisions people would make before they made them. Sure, the latter is a lot harder, but if everything operates according to deterministic physical laws, then it is at least possible in principle to calculate the state of any system (like a person) arbitrarily far into the future.

In fact, scientists spent so long obsessed with models which showed how parts worked and how this or that equation enabled one to know how action X would produce result Y that the idea of determinism and naïve causality almost became what science was (or strived to be). For simplicity, we’ll say causation and determinism mean that every system (a brain, person, solar system, ant colony, etc.) can be reduced to physical laws which govern the interactions of its parts, right down to the most fundamental level of parts (the indivisible “atoms”, or later “particles”, that ultimately make up all matter). Additionally, knowing these laws and the state of the parts of a system (as well as whatever relevant external forces are or will be acting upon it) entails the ability to know exactly what will happen to the system. Finally, we can describe the activity of all these parts in a linear, causal way, such that for any arbitrary interval of time, whatever is happening can be explained completely by a series of immediately prior causes consisting of interactions of fundamental parts.
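
To make that picture concrete, here is a minimal sketch (in Python, purely as a toy of my own, not anyone’s actual model; the falling-object “law” and the 0.001-second time step are just illustrative assumptions) of what “knowing the laws and the state entails knowing the future” amounts to computationally: a fixed update rule applied, over and over, to a completely specified state.

    def step(state, dt=0.001, g=-9.81):
        # One application of the "law": constant gravitational acceleration
        # acting on a falling object's (position, velocity) state.
        x, v = state
        return (x + v * dt, v + g * dt)

    def state_at(initial_state, t, dt=0.001):
        # March the deterministic law forward from a complete initial state.
        state = initial_state
        for _ in range(int(round(t / dt))):
            state = step(state, dt)
        return state

    start = (100.0, 0.0)            # dropped from 100 m, initially at rest
    print(state_at(start, 2.0))     # the state two seconds later...
    print(state_at(start, 2.0))     # ...is exactly the same every time we ask

On this view, the only difference between a toy like this and Max’s brain is the number of variables and the complexity of the update rule, which is precisely the assumption the rest of this post questions.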

This would appear to be the place for some statement about how the revolutionary changes in physics, namely relativity and quantum mechanics, changed everything. It’s not. It’s time to go back to Max’s brain.


A common misconception about physics is that before quantum physics, we didn’t just think that everything followed the deterministic causation model outlined immediately above, but had sort of “proven” it. Moreover, whatever relationship the new physics has to the brain, it’s generally believed that either the brain is only trivially governed by quantum mechanics, and thus we’re back at a deterministic causality model, or somehow QM can “save” us from admitting we never actually make decisions which couldn’t be (in principle) perfectly predicted before we made them. This isn’t accurate.

In fact, it’s not just wrong, it overlooks what is (at least as far as consciousness and choice are concerned) a far more important development of the 20th century: complexity. Until well into the 20th century, it was generally believed that the ubiquitous, pervasive complexity of the “curvature” (nonlinearity) intrinsic in nature wasn’t much of an issue anymore, as we finally had a sufficiently formal foundation for the calculus, which is all about such nonlinearities.

Enter chaos: both the theory AND the frenzied, desperate attempts to retain the idea that, well, simple things could always be represented by simple mathematics, and more complicated things just required a quantitatively (not qualitatively) more complex approach. It turned out that “simple” things like a swinging pendulum could exhibit behavior which could not be precisely solved by any “general” (analytic) mathematical model. REALLY complicated systems, with lots of interacting parts, turned out to be capable of behavior that resulted in processes which couldn’t be reduced to the “sum of their parts”, even without getting into the fact that modern particle physics has basically shown reductionism to be dead in the water (even with a generous ontological interpretation of “particles” in modern physics, there isn’t any set that is most fundamental, such that all matter can be understood as made up of these and only these). In fact, everywhere scientists looked they found that non-living things often seemed to “randomly” self-organize, exhibiting properties which were the result of the synchronized activity of the collective, rather than of the component parts.
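
For a sense of what “could not be precisely solved” looks like in practice, here is a minimal sketch of sensitive dependence on initial conditions. It uses the logistic map at the textbook parameter r = 4 (a standard chaotic stand-in; the pendulum itself would need a numerical integrator), not anything specific to the systems discussed above.

    def logistic(x, r=4.0):
        # The logistic map x -> r*x*(1 - x); at r = 4 it is a textbook chaotic system.
        return r * x * (1.0 - x)

    def trajectory(x0, steps=60):
        xs = [x0]
        for _ in range(steps):
            xs.append(logistic(xs[-1]))
        return xs

    a = trajectory(0.200000000)
    b = trajectory(0.200000001)     # differs from a's start by one part in a billion

    for n in (5, 20, 40, 60):
        print(n, round(a[n], 6), round(b[n], 6))
    # By around step 40 the two runs bear no resemblance to one another, even
    # though the rule is perfectly deterministic and both starting states are "known".

The point isn’t randomness; the rule is as deterministic as anything in Newton. The point is that any uncertainty in the initial state, however tiny, eventually swamps the prediction.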

The idea that “the whole is greater than the sum of its parts” is nothing new, but it wasn’t something which the physical sciences were or are equipped to deal with except by abandoning the deterministic, reductionist enterprise. For non-living systems, from clouds to crystals, the emergence of structure, patterns, and properties out of the dynamic activity of constituents created a major problem for causality, a notion which had increasingly become an issue only for philosophers (as scientists could get results just fine without worrying about the nuances of kinds of causation). It wasn’t that things like tornadoes or ant colonies exhibited behavior that made causation irrelevant or even inapplicable in these specific cases (that would take us into quantum physics, or beyond physics altogether). Rather, scientists faced problems such as determining what the “cause” was.

Mathematical models behind such problematic systems are too complicated to be useful here, but a simpler abstraction will suffice: a unit circle. We can use algebra to describe it: x^2 + y^2 = 1. We can graph this circle, and even know what it looks like without actually constructing the graph (the geometric representation of the algebraic equation). But contrast this with the equation (one of them, anyway) of a line: y = mx + b. With lines, if we know the slope and intercept (m and b, respectively), the value of y at any point is completely determined by the value of x. This is not true of points on a unit circle. For any point on that circle, we can just as well describe x in terms of y’s value as the other way around. It’s not that we don’t know the values which give us this circle, it’s that we can’t define one variable as a function of the other except by arbitrary choice.
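
The same point in code (again just a sketch): for the line, y is a genuine function of x, one input in, one determined output out; for the circle, turning either variable into a function of the other forces an arbitrary choice of which variable to solve for and which branch of the square root to keep.

    import math

    def line_y(x, m=2.0, b=1.0):
        # For a line, y is completely determined by x (given m and b).
        return m * x + b

    def circle_y(x, upper=True):
        # For the unit circle x^2 + y^2 = 1, knowing x only pins y down to a
        # pair of values; we must arbitrarily pick a branch, and we could just
        # as well have solved for x in terms of y instead.
        root = math.sqrt(1.0 - x * x)   # assumes -1 <= x <= 1
        return root if upper else -root

    print(line_y(3.0))                                # 7.0, and only 7.0
    print(circle_y(0.6), circle_y(0.6, upper=False))  # 0.8 and -0.8: a choice is forced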

That’s basically the issue with many nonlinear systems: in the equations we have, with variables representing a complete model of the system, any variable can be arbitrarily described as a function (cause) of the others, or as caused by the others.

If that were the only problem, we’d have no issue here. But it seems that biological systems, from cellular activity of some organism to the entire organism, are qualitatively different than other natural systems.


I’ll give two examples:


1) Ant colonies. We know that any individual ant is basically mindless; a completely reactive drone we could simulate on a computer. Ant colonies, on the other hand, can perform incredibly complex tasks extremely effectively (so effectively that an entire subfield of machine learning, swarm intelligence, is dedicated to studying and reproducing this capacity; a toy sketch of that idea appears below, after these two examples). Yet we still don’t know how this works. We know quite well that it isn’t just the sum of the parts, because for one thing, putting down 100 ants, or even more, will just end up with the ants running around in circles until they die. At some unknown point, however, and for some unknown reason, put enough ants together and they synchronize, forming a complex network capable of emergent functional properties which cannot be produced simply by understanding each ant separately.

2) Cells in living tissue, plants, etc., are constantly active. More specifically, regardless of the type of cell or what plant or animal it is a part of, a large part of cellular activity is described as metabolism and repair: the activities which allow the cell to “create” energy for power, to repair itself, etc. This metabolic-repair process is fundamental for cellular function (after all, without “power”, how would the cell do anything?). Pretty much the entire cell is constantly influenced (i.e., in some sense “caused”) by this metabolic-repair process. The question is: what causes this process?

What most people informally term “cause” corresponds fairly well with what philosophers and scientists have termed “efficient” cause since Aristotle. Thus I can talk about how, for example, certain normal human behavior is “caused” (through evolution) by the state of an environment thousands and thousands of years ago (evolutionary psychology). But when someone has a hard time sticking to a diet rather than eating “junk food” filled with sugars and fats and so forth, it isn’t because at the moment they decide to have a chocolate bar rather than a granola bar they are thinking about the conditions of life thousands of years before civilization which is “causing” the craving. That’s not an efficient cause. The efficient causes would be more like the neural signals coming and going between their digestive system and pre-frontal cortex.

The problem with metabolic-repair in cells is that it appears to be closed to efficient causation. In other words, the same parts of the cell which are part of the metabolic-repair process are also influenced by it at the same time (the usual formalization of this closure is sketched just below). It’s not just that we can’t figure out what’s causing what because we can arbitrarily choose (as before), but a more serious problem (so serious that at the moment there exists a mathematical proof that cells cannot be computed, which has caused a rather heated debate for scientists in fields ranging from computational biology to machine learning to mathematics). The “efficient” cause of the metabolic-repair process is cellular activity, but cellular activity which is also the “efficient” cause of metabolic-repair at the same time. Despite the death-grip reductionism has on the sciences, particularly in areas like biology, it has increasingly been at least partly abandoned because it fails: too often, reducing a biological system to its components means you cannot model the system itself, because the behavior is more than just the summed activity of its components.
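
For readers who want the shape of that “closure” claim, the usual formalization (Robert Rosen’s metabolism-repair, or (M,R), systems, which I take to be where the proof alluded to above comes from) looks roughly like this, writing H(X, Y) for the set of mappings from X to Y:

    f : A → B                        (metabolism: environmental inputs A are turned into products B)
    Φ : B → H(A, B)                  (repair: the products are used to (re)produce the metabolism map f)
    β : H(A, B) → H(B, H(A, B))      (replication: (re)produces the repair map Φ, closing the loop)

“Closed to efficient causation” then means that every mapping (efficient cause) in the system is itself produced by the system rather than supplied from outside, and Rosen’s contested argument is that no such system can be fully captured by a computable (simulable) model. Whether that argument holds is exactly what the heated debate mentioned above is about.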
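
And, going back to the first example, here is a minimal sketch of the pheromone-reinforcement loop that swarm-intelligence methods (ant colony optimization, in its simplest “two bridges” form) borrow from colonies. Everything here, the path lengths, the 20 ants per round, the 0.9 evaporation rate, is made up for illustration; no individual “ant” knows which path is shorter, yet the colony-level trail ends up overwhelmingly favoring it.

    import random

    lengths   = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}     # start with no preference at all

    for _ in range(200):                        # 200 rounds of foraging
        trail = dict(pheromone)                 # ants in a round all see the same trail
        total = sum(trail.values())
        for _ in range(20):                     # 20 ants per round, each following a dumb rule:
            path = "short" if random.random() < trail["short"] / total else "long"
            pheromone[path] += 1.0 / lengths[path]   # shorter trips deposit pheromone faster
        for path in pheromone:
            pheromone[path] *= 0.9              # evaporation keeps old trails from locking in

    print(pheromone)    # the "short" entry dominates, though no ant ever compared the paths

Real colonies are vastly more complicated than this, which is rather the point of the example above; the sketch only shows how a collective “preference” can emerge from individually mindless rules.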


For several reasons (the fact that physicists were distracted by QM and relativity, the fact that it took some time before we had the computational power to realize that computational power wasn’t the issue, and the work on increasingly complex yet never adequate mathematical models), it wasn’t until recently that it became clear that limits to classical physics which have nothing to do with QM are behind the failure of certain reductionist attempts (in particular those within biology).


And once again, we are finally back to Max’s brain. When he decides to change his major, the process is like the metabolic-repair described above, only on steroids. Instead of an emergent, irreducible functional property of a cell, we have a system of networks so powerfully synchronized and coordinated with one another that no system we know of begins to compare, not merely in terms of complexity, but in terms of what appear to be violations of physical laws as we understand them. This remains true quite apart from whatever quantum dynamics may be at work in the brain (not to mention the little problem that when it comes to a lot of modern physics, the reason there are constant, never-ending releases in print, internet, and television media on some other-worldly model of physics that is so much better and cooler than the plain ol’ vanilla “standard model” has nothing to do with experimental research; it’s that, even if physicists actually agreed on what the standard model really is, the preference for and creation of other models is either entirely or largely due to the fact that we haven’t much of a clue what the models, standard or not, actually describe).

Here biologists (from those who develop evolutionary algorithms for computational models of modularity to neuroscientists) have an advantage: unlike physicists, whose field concerns separations in spacetime well beyond observation, or who conduct experiments in which we have only symbols to describe whatever is going on at the “quantum” level we can’t see, even biologists who study the origins of life frequently have more “observable” experimental paradigms from which to collect data.

Within neuroscience, functional imaging is primarily what is behind such experiments (functional is the “f” in fMRI), which, like EEG and PET, creates “dynamic” pictures rather than static ones like those produced by structural MRI or X-ray. At the moment, we aren’t anywhere near models of consciousness which aren’t highly theoretical. However, we do have a good deal of data which we can’t seem to explain, prompting everything from descriptions of quantum effects or “quantum-like” neural activity to ignoring these data and focusing on other things, like neural correlates, rather than on how these correlates do what they do.

There are not, therefore, simply two camps in the sciences: the reductionists/determinists vs. those for whom quantum mechanics allows “free will” of some sort. Both positions certainly exist, but:

1) The “deterministic/reductionist” or “classical” camp has an increasing number of increasingly difficult experimental results to explain using its theoretical framework.

2) What empirical evidence does exist supporting non-trivial quantum processes in neurodynamics is slim at best.

3) Even if one accepts that there are non-trivial quantum processes, all that this does is allow one to apply a theoretical interpretation of the formalisms (mathematical equations, symbols, etc.) in quantum mechanics or quantum field theory in which these processes can do something they can’t in some other theoretical interpretation.

4) There are an increasing number of groups across the physical sciences breaking away from classical reductionism and forming new approaches whose explanatory power gains more than it loses by not being reductionist, or at least not being limited to reductionism. There are interdisciplinary journals, conferences, monograph/volume series, and edited volumes which have in common a systems approach (or, more generally, an approach which incorporates, improves, adapts, and implements non-reductionist methods and models used across fields). Then there are the same but for a specific field or research area (like cognitive neuroscience). Not all of those who subscribe to this rather nebulous conglomeration of theoretical backgrounds, methods, techniques, etc., believe that the reductionist program is ontologically flawed (i.e., even though they may model some system like a cell or brain or plant in a way which precludes classical causality, they do this because they believe we currently lack the ability to gain much more from the reductionist approach, while holding that in reality the system can in principle be explained using “classical” reductionist views). However, a large number (perhaps a majority) do.

What, however, does this mean for Max’s decision? Even outside QM theories of consciousness, there isn’t a single cohesive model (either reductionist or not). Those who argue that emergent properties of biological systems (or at least of some such systems) are irreducible don’t all agree on the nature of these properties, let alone how they might be produced. Rather than go into this in depth, then, avoiding the more radical theories is probably best. The cell’s metabolism example is usually thought of in terms of an emergent functional property, in that while it cannot be produced simply by the actions of the cell’s component parts, it is also only a name we give to various processes, one which helps us to explain the state of the cell in a way classical reductionism does not. An only slightly more complicated type of emergence can be useful to describe Max’s decision. Just as there is no physical entity in Max’s brain which represents “course” or “major” or “university” or even “change”, neither does one exist which is “decision” or “decision to change my major”. Instead, the structure of Max’s brain is capable of producing not only functional but conceptual properties/processes, including a reflexive concept which allows Max to understand himself as in some way an agent. We call that “consciousness” or “self-awareness”. These properties are the key: the capacity for emergent concepts arising from irreducibly synchronized neural networks, including a concept of self, creates the necessary ingredients for self-governing agency, or the ability of a system to use functional and conceptual emergence to produce still another property (agency) which is both a product of the system (like the other irreducible, emergent properties) and, at the same time, something which determines it.

Of course, I’m simplifying greatly here, but even were I as technical as possible, there would still be one important little problem: if this is the way the mind works (or anything works), why are there still scientists arguing that the brain and every other system is reducible and deterministic (at least in principle)? Unfortunately, the very reason the reductionist program is increasingly being rejected, or at least added to, is the answer: reductionism succeeded for so long because of the ability of reductionist models to explain everything in terms of parts which could be treated as variables in some mathematical model. What are thought of as the limits of reductionism, and the need for some sort of systems approach which allows for emergent properties, come from the failure of classical models, which means that the alternative models are more schematic, holistic, abstract, and non-reductionist. Which also means that no matter how well they explain things, or how useful they can be for learning things, what they can’t do is show that there is no reductionist model which could explain what’s going on. Strictly speaking, there actually has been a proof of this for almost two decades, as well as subsequent “proofs” of a similar nature, but the main problem with them is similar to the problem plaguing modern physics: when the variables in your model don’t correspond to well-defined properties or processes, but are more abstract or interpretative, it’s hard to “prove” that your model isn’t missing something.

In closing, though, and leaving Max’s decision behind for now, I’d like to point to a very different reason for suspecting that the reductionist approach is at least incomplete, and that the human brain is a system governed by emergent properties which include self-awareness and agency. Namely, the reductionist approach was never exactly formally incorporated into science, but followed from the way in which the first “scientists” approached modeling: they deliberately restricted their models to components of reality which were, or could be considered, in relative isolation and which were inert. And for a time, that approach yielded so much that what had been merely a method, rather than an axiom, became integral to the scientific approach. The experiments which set the scientific endeavor in motion were necessarily reductionist, but in the beginning those like Descartes and others stated explicitly that this reductionism was limited to a rather small group of phenomena. However, while these experimental paradigms increased in sophistication and application, the way reductionism was incorporated into the underlying framework did not. As a result, it sort of just became the framework, or a part of it, without much evidence that it could adequately apply everywhere.

The same sorts of assumptions were behind a decreased interest in physics near the turn of the 20th century, because of a “we pretty much know everything” attitude which, as it turned out, wasn’t just wrong, but amazingly, spectacularly wrong. So wrong that although quantum theory has been around for a century or so, there is still fundamental disagreement about its basic nature, let alone the unbelievable turnabout which ideas like space and time underwent when put into historical perspective (the idea that time is distinct from space not only coincides with our everyday experience, it has a few thousand years of philosophy and then science behind it, yet it doesn’t coincide with physics since at least 1905). So if the reductionist, deterministic causality which sort of “crept” into scientific practice and method until it was suddenly a foundational component had at best as much support for it as did all the concepts which were overturned by relativity and QM, and perhaps much less, why cling to this epistemological approach to science and reality in spite of evidence to the contrary?
 

Ouroboros

Coincidentia oppositorum
Good read. I haven't read all parts, only (maybe) half, here and there, and so far there's nothing I would disagree with. It seems like I've come to realize the same things as you state here. A thought I had: one reason reductionism is popular is its success, as you say, and I think it's also popular because it's just "simpler" or easier than any holistically oriented method. I could be wrong, but we're so attuned to the idea of breaking things down to the nuts and bolts to see how they work (I don't know how many tape players I broke for my dad as a kid, taking them apart). To break free from reductionism is as hard, or maybe harder, than breaking free from traditional religion. However, I'm glad to hear that there are some scientists and fields of science where new thinking is happening. I will make an attempt to finish reading all the parts later, but I'm not sure there's anything I can add (besides agreeing).
 

viole

Ontological Naturalist
Premium Member
I hope to bypass of the issues relating to the “free will” debate by focusing on what most (whether they believe in “free will” or not) would agree that “free will” necessarily entails: a person’s capacity to make a choice or decision such that they could have made a different choice or decision than the one they did.

Example: A large number of people end up going to college/university. Consider a specific student and a decision about a change of major. We’ll call our student Max, a student of German extraction who is studying classical languages with a minor in philosophy (although it isn’t really important, it so happens that Max, whether for simplicity or out of needless shame, only gave the university his first and last name, rather than including the middle part: Ernst Ludwig). After about two years of studying, certain philosophy courses have made Max really interested in physics, especially theoretical or “foundational” physics.

After considering for some time what changing his program of study would entail (extra semesters, a new set of courses required for the new major, more money, etc.) Max decides “dash it all, one only lives thrice, so I might as well give it the ol’ college try, wot? Righto, then, it’s settled.” (Max is a bit of an Anglophile). And he decides to switch his major and study physics, ignoring the advice of others (including a mentor of his, Phil Jolly, who was particularly opposed).

We have here a decision, or choice, that Max made after considerable thought (making it a “conscious” choice, understanding that the adjective “conscious” hasn’t been defined and is to be understood in the colloquial sense). Sticking to the confines of “free will” we defined above, the question is whether or not Max could have decided differently. We are not concerned with the extent to which his upbringing, friends, the courses he took, the state of the universe several hundred millions of years ago, etc., influenced his decision. We’ll even grant that there were a great many things, from his upbringing to a passing remark in one of Max’s classes made by Prof. Rosen, over which Max had little (and in some cases no) influence over, but which did influence Max.

The question is not whether Max was influenced, but whether or not, no matter how “big” we allow our sphere of possible influences to be (e.g., as “big” as the state/conditions of everything from the origins of the universe up to Max’s decision) we can say that Max had the capacity to decide not to change majors. Put another way, Max could have made a decision other than the one he did.

But asking is just the easy part. Answering is another thing altogether. However, we have a starting place, this decision of Max’s along with our question about it, which naturally brings to mind other questions, issues, problems. For example, sticking with what we know about the make-up of human physiology, what about it can we relate to Max’s decision? Most would probably agree that the brain is rather key here. In fact, it’s not only key, but if there is any way in which Max could have made a different decision, then something about the way Max’s brain works must make that a possibility.

Another thought or question which no doubt presents itself here is why or why not we might think (whatever the nature of Max’s brain) there is reason to suppose that Max only thinks he made a choice which he could have made differently, when in reality someone with a suitable amount of data (even if it needs to include the state of the entire universe since the big bang) and the right computing device could have told us what decision Max would make a day before, a month before, a billion years before, etc. In other words, Max’s choice couldn’t have been other than what it was, because it is at least in principle for us to know what that choice will be before Max actually made it, which necessarily entails that it was the only choice he could make.


There are a set of related reasons responsible for entertaining the seemingly impossible (in that it is counter to our everyday experience) notion that there is no decision or choice we can make such that a different one was possible. The first is another type of everyday experiences: effects and their relationship with time and cause. Simply put, the idea that one thing causes another, and that the cause precedes the other (the effect), is something we experience constantly. Why is Cathy Conduitt crying? Because Uncle Newton, during another rant about gravity, has dropped a filled glass and spilled milk all over the floor she had just cleaned. If asked why she was crying, Cathy would say it was because of the mess made by the spilt milk on the floor. And Newton would proudly announce that he had dropped the class, causing gravity to take over, and then ramble on about structural integrity and the essence of hardwood flooring relative to melted sand blown into glass.

That’s the type of thing we experience all the time, mainly because we experience “time” as the unfolding of something we intuitively understand as events (which are more or less temporal intervals we conceptualize as “wholes”). When we can see these events or actions, whether a dropped glass or a match lit or a declaration of war, we conceptualize them in terms of how we experience time and the activity which happens as we experience this time; that is, as a linear sequence of intervals/moments/actions/effects/etc., each one and every one resulting from some previous set of effects which came before it and which caused it.

There is another main reason for supposing that Max only thinks he made a decision which, in the end, was the result of his ability to (at least at times and/or to some extent) determine what he does, such that he could have made a different one. To explain this reason, we have to look back in history a bit. Specifically, we have to deal with a Greek by the name of Aristotle who’s been dead for millennia. For centuries, the big issue here (Max’s decision and whether it was inevitable) had a lot to do with language. Aristotle illustrated the issue with a sea battle, which in Greek is one word (ναυμαχία) and which in Greece of that time was a common enough experience, but which rather dated now. Instead we’ll go with rain. Like Aristotle, philosophers and others even unto today deal with “truth-bearing” statements called propositions. Thus, “is it raining?” is not truth-bearing, but “it’s raining” is. If I say “don’t go outside, it’s raining” and you go outside to find that there’s not a cloud in the sky nor a drop of water falling from it, then what I said was false. But what if you had asked about the weather report? And what if I had answered “It’s going to rain tomorrow”? This appears to be a proposition, in that although we can’t determine whether or not it’s true right after it is said, we can do so the next day.

And because philosophers are lazy, borderline psychotic, obsessive, and generally useless to society, for centuries reasonable people have tried to keep them confined to universities or similar institutions, so that they could spend hundreds of years arguing about how exactly “it’s going to rain tomorrow” is or isn’t a “truth-bearing” statement (proposition), and whether the answer to this question entails fatalism.

Let’s go back to Max. What if, after a last ditch attempt dissuade Max, Phil Jolly had said (just as Max closed the door behind him on the way out of Dr. Jolly’s office) “he’s going to change his major”, let’s assume this statement to be truth-bearing. It turns out that Jolly was correct here, and Max changed his major. Which seems to mean that when Dr. Jolly predicted this, his statement was true. If it was true when he made it, then necessarily Max had to change his major, otherwise Phil Jolly’s statement would be false. Of course, if it were false, then Max couldn’t change his major, because then “he’s going to change his major” would have been true. And for a very, very long time, the safely secluded philosophers argued about this while the rest of society did real work.

But, a few centuries ago, things began which would end up changing how philosophers wasted their time: the physical sciences. People like Descarted, Kepler, Fermat, Newton, and others began to develop and apply mathematical formulae to physical phenomena in order to describe and model physical reality, from the movement of planets to why everything is about apples being where they shouldn’t (the apple which isn’t mentioned in Genesis, the apple which belittles Newton’s work on gravity by reducing it to getting hit on the head, the millions of deaths caused by people who were sure that “an apple a day” would keep the need for medical attention and doctors away, etc.).

They called this “new” approach to figuring out why things were the way they were “science”, derived from the Latin scientia which means “people without common sense”. And they got better and better at it, creating new fields of research where before there was only one, and almost all of this was related to being able to know what was going on and what would be without fondling the innards of sheep (and other messy divination methods).

Even though things didn’t begin with the intent to create a complete set of laws enabling one to (at least in principle) determine how anything and everything would happen, the more cohesive the “natural sciences” became, and the better and more accurate the ever-increasing number of “laws” became at demonstrating how stuff worked, the more it appeared as if everything operated differently than Descartes had thought: mechanics (or laws of motion) wasn’t just seen as applicable only to non-living systems incapable of agency. Instead the entire universe increasingly seemed to obey deterministic “laws” of physic. Natural philosophers (proto-scientists like Newton and Laplace) and later physicists began to think that there isn’t much of a difference between knowing how to answer those insufferably boring, irritating, and pointless questions of the form “if Alice drives east at a rate of 1000 furlongs per fortnight, and Bob drives north for seven moons at a rate of…” and knowing how to determine what decisions people would make before they made them. Sure, the latter is a lot harder, but if everything operates according to deterministic physical laws, then it is at least possible to calculate in principle the state of any system (like a person) arbitrarily into the future.

In fact, scientists spent so long obsessed with models which showed how parts worked and how this equation enabled one to know how X action would produce Y result that the idea of determinism and naïve causality almost became what science was (or strived to be). For simplicity, we’ll say causation and determinism mean that every system (a brain, person, solar system, ant colony, etc.) can be reduced to physical laws which govern the interactions of its parts, right down to the most fundamental level of parts (the indivisible “atoms” or later “particles” that ultimately made up all matter). Additionally, knowing these laws and the state of the parts of a system (as well as whatever relevant external forces are or will act upon it) entails the ability to know exactly what will happen to the system. Finally, we can describe the activity of all these parts in a linear, causal way, such that for any arbitrary interval of time, whatever is happening can be explained completely by a series of immediately prior causes consisting of interactions of fundamental parts.

This would appear to be the place where some statement about the revolutionary changes in physics, namely relativity and quantum mechanics, changed everything. It’s not. It’s time to go back to Max’s brain.


A common misconception about physics is that before quantum physics, we didn’t just think that everything followed this deterministic causation model outlined immediately above, but had sort of “proven” it. Moreover, whatever relationship the new physics has to the brain, it’s generally believed that either the brain is only trivially governed by quantum mechanics, and thus we’re back at a deterministic causality model, or somehow QM can “save” us from admitting we never actually make decisions which couldn’t be (in principle) perfectly predicted before we made them. This isn’t inaccurate.

In fact, it’s not just wrong, it overlooks what is (at least as far as consciousness and choice are concerned) a far more important development of the 20th century: complexity. Until the 20th century, and actually rather late in the 20th century, it was generally believed that the ubiquitous, pervasive, and relative complexity of “curvature” intrinsic in nature wasn’t so much of an issue anymore, as we finally had a sufficiently formal foundation for the calculus, which is all about nonlinearities.

Enter chaos: both the theory AND the frenzied, desperate attempts to retain this idea that, well, simple things could always be represented by simple mathematics, and more complicated things just required a quantitatively (not qualitatively) more complex approach. It turned out that “simple” things like a pendulum swinging could exhibit behavior which could not be precisely solved by any “general” (analytic) mathematical model. REALLY complicated systems, with lots of interacting parts, turned out to be capable of behavior that resulted in processes which couldn’t be reduced to the “sum of their parts” even without getting into the fact that modern particle physics has basically shown reductionism to be dead in the water (even with a generous ontological interpretation of “particles” in modern physics, there aren’t any set that is the most fundamental such that all matter can be understood as made up of these and only these). In fact, everywhere scientists looked they found that non-living things often seemed to “randomly” self-organize, exhibiting properties which were the result of the synchronized activity of the collective, rather than the component parts.

The idea that “the whole is greater than the sum of its parts” is nothing new, but it wasn’t something which the physical sciences were or are equipped to deal with except by abandoning the deterministic, reductionist enterprise. For non-living systems, from clouds to crystals, the emergence of structure, patterns, and properties out of the dynamic activity of constituents created a major problem for causality, or the ability which had increasingly become only an issue for philosophers (as scientists could get results just find without worrying about the nuances of kinds of causation). It wasn’t that things like tornadoes or ant colonies exhibited behavior that made causation irrelevant or even inapplicable in these specific cases (that would bring us to quantum physics beyond physics). Rather, scientists faced problems such as determining what the “cause” was.

Mathematical models behind such problematic systems are too complicated to be useful here, but a simpler abstraction will suffice: a unit circle. We can use algebra to describe: x2 +y2= 1. We can graph this circle, and even know what it looks like without actually constructing the graph (the geometric representation of the algebraic equation). But contrast this with the equation (one of them, anyway) of a line: y=mx +b. With lines, if we know the slope and intercept (m & b, respectively) the value of y at any point is completely determined by the value of x. This is not true of points on a unit circle. For any point of on that circle, we can determine x by y’s value, and vice versa. It’s not that we don’t know the values which give us this circle, it’s that we can’t define one as a function of the other except by arbitrary choice.

That’s basically the issue with many nonlinear systems: the equations we have, with variables representing a complete model of the system, can be arbitrarily described as the function (cause) of others, or can be caused by others.

If that were the only problem, we’d have no issue here. But it seems that biological systems, from cellular activity of some organism to the entire organism, are qualitatively different than other natural systems.


I’ll give two examples:


1) Ant colonies. We know that any individual ant is basically mindless; a completely reactive drone we could simulate on a computer. Ant colonies, on the other hand, can perform incredibly complex tasks extremely effectively (so effectively that an entire subfield of machine learning, swarm intelligence, is dedicated to studying and reproducing this capacity). Yet we still don’t know how this works. We know quite well that it isn’t just the sum of the parts, because for one thing putting 100 ants down or even more will just end up with the ants running around in circles until they die. At some unknown point, however, and for some unknown reason, put enough of the ants together, and they synchronize, forming a complex network capable of emergent functional properties which cannot be produced simply by understanding each ant separately.

2) Cells in living tissue, plants, etc., are constantly active. More specifically, regardless of the type of cell or what plant or animal it is a part of, a large part of cellular activity is described as metabolism and repair. In other words, the activities which allow the cell to “create” energy for power, to repair itself, etc. This metabolic-repair is fundamental for cellular function (after all, without “power”, how would the cell do anything?). Pretty much the entire cell is constantly influences (i.e., in some sense “caused”) by this metabolic-repair process. The question is what causes this process? What most people informally term “cause” corresponds fairly well with what philosophers and scientists have termed “efficient” cause since Aristotle. Thus I can talk about how, for example, certain normal human behavior is “caused” (through evolution) by the state of an environment thousands and thousands of years ago (evolutionary psychology). But when someone has a hard time sticking to a diet rather than eating “junk food” filled with sugars and fats and so forth, it isn’t because at the moment they decide to have a chocolate bar rather than a granola bar they are thinking about the conditions of life thousands of years before civilization which is “causing” the craving. It’s not an efficient cause. The efficient causes would be more things like the neural signals coming and going from their digestive system and pre-frontal cortex. The problem with metabolic-repair in cells is that it appears to be closed to efficient causation. In other words, the same parts of the cell which are part of the metabolic-repair process are also influenced by it at the same time. It’s not just that we can’t figure out what’s causing what because we can arbitrarily choose (as before), but a more serious problem (so serious that at the moment it there exists a mathematical proof that cells cannot be computed, which has caused a rather heated debate for scientists in fields ranging from computational biology to machine learning to mathematicians). The “efficient” cause of the metabolic-repair process is cellular activity, but cellular activity which is also the “efficient” cause of metabolic-repair at the same time. Despite the death-grip reductionism has on the sciences, particularly in areas like biology, it has increasingly become at least partly abandoned because it fails: too often reducing a biological system to its components means you cannot model the system itself, because the behavior is more than just the summed activity of its components.


For several reasons (the fact that physicists were distracted by QM and relativity, the fact that it took some time before we had the computational power to realize that computational power wasn’t the issue, and the work on increasingly complex yet never adequate mathematical models), it wasn’t until recently that limits to classical physics which don’t have anything to do with QM are behind the failure of certain reductionist attempts (in particular those within biology).


And once again, we are finally back to Max’s brain. When he decides to change his major, the process is like the metabolic-repair described above, only on steroids. Instead of an emergent, irreducible functional property of a cell, we have a system of so powerfully synchronized networks coordinated with one another that no system we know of begins to compare in terms not merely of complexity, but of what appear to be violations of physical laws as we understand them. This remains true quite apart from whatever quantum dynamics which may be at work in the brain (not to mention the little problem that when it comes to a lot of modern physics, the reason there are constant, never-ending releases in print, internet, and television media on some other-wordly model of physics that is so much better and cooler than the plain ol’ vanilla “standard model” has nothing to do with experimental research; it’s because even if physicists actually agreed on what the standard model really is, the preference for other models the creation of other models is either entirely or largely due to the fact that we haven’t much of a clue what the models, standard or no, actually describe).

Here biologists (from those who develop evolutionary algorithms for computational models of modularity to neuroscientists) have an advantage: unlike physicists whose field concerns separation in spacetime that is well beyond observation or who conduct experiments in which we have only symbols to describe whatever is going on at the “quantum” level which we can’t see, even biologists who study the origins of life frequently have more “observable” experimental paradigms to collect data.

Within neuroscience, primarily functional imaging is behind such experiments (functional if the “f” in fMRI which, like EEG and PET, create “dynamic” pictures rather than static ones like those produced by MRI or X-ray. At the moment, we aren’t anywhere near models of consciousness which aren’t highly theoretical. However, we do have a good deal of data which we can’t seem to explain, causing everything from descriptions of quantum effects to “quantum-like” neural activity to ignoring these data and focusing on other things like neural correlates rather than how these correlates do what they do.

There are not, therefore, two camps in the sciences: the reductionists/determinists vs. the quantum mechanics allows “free will” of some sort. There are certainly both positions, but

1) The “deterministic/reductionist” or “classical” camp has an increasing number of increasingly difficult experimental results to explain using their theoretical framework

2) What empirical evidence does exist which supports non-trivial quantum processes in neurodynamics is slim at best.

3) Even if one accepts that there are non-trivial quantum processes, all that this does is allow one to apply a theoretical interpretation of the formalisms (mathematical equations, symbols, etc.) in quantum mechanics or quantum field theory in which these processes can do something they can’t in some other theoretical interpretation.

4) There are an increasing number of groups across the physical sciences breaking away from classical reductionism and forming new approaches which are superior in their explanatory power in more ways than they are deficient because their approach is not reductionist, or at least not limited to reductionism. There are interdisciplinary journals, conferences, monograph/volume series, and edited volumes which have in common a systems approach (or, more generally, an approach which incorporates, improves, adapts, and implements the methods and models used across fields which are non-reductionist). Then there are the same but for a specific field or research area (like cognitive neuroscience). Not all of those who subscribe to this rather nebulous conglomeration of theoretical backgrounds, methods, and techniques, etc., believe that the reductionist program is ontologically flawed (i.e., even though they may model some system like a cell or brain or plant in a way which precludes causality, they do this because they believe we lack the ability at this point to continue to gain much from the reductionist approach, but that in reality can in principle be explained using “classical” reductionist views). However, a large number (perhaps a majority) do.

What, however, does this mean for Max’s decision? Even outside QM theories of consciousness, there isn’t a single cohesive model (either reductionist or not). Those who argue that emergent properties of biological systems (or at least some such systems) are irreducible don’t all agree on the nature of these properties, let alone how they might be produced. Rather go into this in depth, then, avoiding the more radical theories is probably best. The cell example and metabolism is usually thought of in terms of an emergent functional property, in that while it cannot be produced simply by the actions of the cells component parts, it also only the name we give various processes which helps us to explain the state of the cell in a way classical reductionism does not. An only slightly more complicated type of emergence can be useful to describe Max’s decision. Just as there is no physical entity in Max’s brain which represents “course” or “major” or “university” or even “change”, neither does one exist which is “decision” or “decision to change my major”. Instead, the structure of Max’s brain is capable of producing not only functional, but conceptual properties/processes, including a reflexive concept which allows Max to understand himself as in some way an agent. We call that “consciousness” or “self-awareness”. These properties are the key: the capacity for emergent concepts from irreducibly synchronized neural networks, including a concept of self, create the necessary ingredients for self-governing agency, or the ability of a system to use functional and conceptual emergence to produce still another property (agency) which is both a product of the system (like the other irreducible, emergent properties), but which at the same time determines it.

Of course, I’m simplifying greatly here, but even were I as technical as possible, there would still be one important little problem: if this is the way the mind works (or anything works), why are there still scientists arguing that the brain and every other system is reducible and deterministic (at least in principle)? Unfortunately, the very reason that the reductionist program is increasingly being rejected, or at least added to, is the answer: reductionism succeeded for so long because of the ability of reductionist models to explain everything in terms of parts which could be treated as variables in some mathematical model. What are thought of as the limits of reductionism, and the need for some sort of systems approach which allows for emergent properties, come from the failure of classical models, which means that the alternative models are more schematic, holistic, abstract, and non-reductionist. Which also means that no matter how well they explain things, or how useful they can be for learning things, what they can’t do is show that there is no reductionist model which could explain what’s going on. Strictly speaking, there actually has been a proof of this for almost two decades, as well as subsequent “proofs” of a similar nature, but the main problem with them is similar to the problem plaguing modern physics: when the variables in your model don’t correspond to well-defined properties or processes, but are more abstract or interpretative, it’s hard to “prove” that your model isn’t missing something.

In closing, though, and leaving Max’s decision behind for now, I’d like to point to a very different reason for suspecting that the reductionist approach is at least incomplete, and that the human brain is a system governed by emergent properties which include self-awareness and agency. Namely, the “reductionist” assumption was never exactly formally incorporated into science, but followed from the way in which the first “scientists” approached modeling: they deliberately restricted their models to components of reality which were, or could be considered, in relative isolation and which were inert. And for a time, that approach yielded so much that what had been merely a method, rather than an axiom, became integral to the scientific approach. The experiments which set the scientific endeavor in motion were necessarily reductionist, but in the beginning those like Descartes stated explicitly that this reductionism was limited to a rather small group of phenomena. However, while these experimental paradigms increased in sophistication and application, the way reductionism was incorporated into the underlying framework did not. As a result, it sort of just became the framework, or a part of it, without much evidence that it could adequately apply everywhere.

The same sorts of assumptions were behind a decreased interest in physics near the turn of the century, because of a “we pretty much know everything” attitude which, as it turned out, wasn’t just wrong, but amazingly, spectacularly wrong. So wrong that although quantum theory has been around for a century or so, there is still fundamental disagreement about its basic nature, to say nothing of the unbelievable turnabout that ideas like spacetime represent when put into historical perspective (the idea that time is distinct from space not only coincides with our everyday experience but has a few thousand years of philosophy and then science behind it, yet it hasn’t coincided with physics since at least 1905). So if the reductionist, deterministic causality which sort of “crept” into scientific practice and method until it was suddenly a foundational component had at best as much support as all the concepts which were overturned by relativity and QM, and perhaps much less, why cling to this epistemological approach to science and reality in spite of evidence to the contrary?

That the whole is more than the sum of its parts seems pretty obvious to me. For starters, we have to add (or subtract, if we identify information with entropy) additional information to the system specifying how those parts are assembled and interfaced. In other words: there are many possible wholes, but only a few of them work.

But apart from that, I think I am an irreducible reductionist.

That does not entail that I use reductionist models all the time. For instance, I do not open a book of analytical mechanics when I play roulette. I am much safer using probability theory, even though I am aware that a roulette wheel is pretty deterministic.

Actually, that is what I think of free will. Freedom is the only viable model, even if it is, at the very core, not true.

An interesting problem is connected to free will and moral responsibility.

Suppose that I exercise my free will and kill a man. To escape justice, I make a perfect copy of my brain (and body) and kill myself. Is my clone still guilty of murder?

Ciao

- viole
 

Ouroboros

Coincidentia oppositorum
...And what if I had answered “It’s going to rain tomorrow”? This appears to be a proposition, in that although we can’t determine whether or not it’s true right after it is said, we can do so the next day.
Reminds me of something that I thought was funny a few weeks ago. We were checking the weather on our smart phones. It said "Rain, 12 PM, 100%". Meaning 100% probability that it would rain at noon. Noon comes around. No rain. Checking the app again, and it says, "Rain, 12 PM, 20%". So the 100% chance (a statement of absolute guaranteed probability in my opinion) was now only 20%. It's not 100% unless that time actually comes. Until then, it can at best only be 99.999...% I guess they're just rounding it up. :D
 

LegionOnomaMoi

Veteran Member
Premium Member
I wanted to add something that is little discussed outside of philosophical, metaphysical, and some scientific literature but deeply relevant to any discussion of free will (as well as determinism, causality, change, randomness, etc.). It is widely believed that the universe is indeterministic (and there is an ever-growing mountain of empirical support for this conclusion). As covered above, the sense in which this must be true doesn’t come just from quantum theory, but from everything from emergence and other seeming violations of classical causality in complex systems to the mere fact that living systems in general and brains in particular have resisted our attempts at reductionist, deterministic explanations.

But, as is often argued here and elsewhere, isn’t the only alternative to determinism “randomness” of a sort that is equally incompatible with free will (for, it is claimed, this “randomness” means that the outcome of events is left to chance, not choice)?

To understand some less operational aspects of quantum mechanics, randomness, and free will it is necessary to know a few things about probability. Although philosophical arguments about what it means for something to be “likely” or “probable” in the modern sense go back to the early modern period (and especially Laplace), probability theory was only developed into a rigorous, cohesive mathematical framework in the 20th century (not long after calculus, already a cohesive framework for a couple of centuries, was finally rigorously formulated, and around the time that the recently achieved pinnacle of formal logic was dealt a death blow by Gödel). Thus philosophical as well as mathematical work on probability was able to receive a kind of treatment that it had until then resisted.

Andrey Kolmogorov developed modern probability theory. Although he was hardly the first to put forward a “frequentist”-type interpretation of probability (see below), this interpretation was part of what made his mathematical formulation not just rigorous but accepted by statisticians and researchers. In fact, not long before Kolmogorov published his Foundations of the Theory of Probability (Grundbegriffe der Wahrscheinlichkeitsrechnung) in 1933, one of the most influential statisticians had published the second edition of one of the most influential books on research methods: Fisher’s Statistical Methods for Research Workers. In it, and in the first edition of his 1935 The Design of Experiments, Fisher basically founded modern (null hypothesis significance) testing, a collection of (very related) methods used in countless studies across numerous disparate sciences (climate science, sociology, neuroscience, medical sciences, business, etc.) and indeed in virtually all quantitative research.
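For anyone who hasn’t run into it, here is a bare-bones sketch in Python of what a Fisher-style null hypothesis significance test amounts to in practice (the coin-flip numbers are invented purely for illustration): assume a “null” model, then ask how improbable a result at least as extreme as the one observed would be if that model were true.

from math import comb

# A minimal null hypothesis significance test (illustrative numbers only):
# is a coin fair, given 60 heads in 100 flips?
n, k, p0 = 100, 60, 0.5

def binom_pmf(i, n, p):
    # probability of exactly i heads in n flips of a coin with P(heads) = p
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Two-sided p-value: probability, under the null (p0 = 0.5), of a result at
# least as far from the expected 50 heads as the observed 60.
p_value = sum(binom_pmf(i, n, p0) for i in range(n + 1)
              if abs(i - n * p0) >= abs(k - n * p0))
print(f"p = {p_value:.4f}")   # about 0.057, just above the conventional 0.05 cutoff

Whether p of roughly 0.057 counts as evidence against fairness is settled only by convention (the 0.05 cutoff), which is part of why the interpretation of probability behind these methods matters.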

But what is the frequentist interpretation of probability that is at least very closely related to Kolmogorov’s axiomatic probability theory and is the foundation for so much scientific research? The frequentist position holds that we should say a fair coin has 50/50 chances of yielding heads or tails, even when it is unlikely that any given sequence of tosses will actually be split evenly between heads and tails, because if we view probability in terms of the frequency with which we would obtain a given outcome, the frequency with which we would tend to get heads is equal to that of tails (likewise, the frequency with which we would draw an ace from a deck of cards tends to 4/52 = 1/13, the frequency with which we would roll a 6 with a die tends to 1/6, etc.). Put concisely, probability is a limiting ratio of the outcomes of identically (if idealized) repeated events.
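A quick simulation makes the limiting-ratio idea concrete (this is only an illustration in Python; the trial counts below are arbitrary):

import random

# A frequentist sanity check (simulation only; trial counts are arbitrary):
# relative frequencies drift toward the idealized limiting ratios as repetitions grow.
random.seed(42)

def relative_frequency(event, trials):
    return sum(event() for _ in range(trials)) / trials

def heads():
    return random.random() < 0.5        # a fair coin

def ace():
    return random.randrange(52) < 4     # 4 aces in a 52-card deck

def six():
    return random.randrange(6) == 5     # rolling a 6

for n in (100, 10_000, 1_000_000):
    print(n,
          round(relative_frequency(heads, n), 4),   # tends to 0.5
          round(relative_frequency(ace, n), 4),     # tends to 1/13, about 0.0769
          round(relative_frequency(six, n), 4))     # tends to 1/6, about 0.1667

No finite run is guaranteed to hit the ratios exactly, but the relative frequencies drift toward them as the number of repetitions grows; that tendency is what the frequentist takes probability to be.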

But even frequentists have remained largely unhappy with the “version” of their position adopted in practice (i.e., by researchers), while most philosophers and many physicists and other scientists have dismissed it as logically flawed and even unscientific. For one thing, both the frequentist position(s) and (at least many) subjectivist position(s) on probability regard the question of the probability of any single event as meaningless or non-existent. For another, there is one area of science in which modern probability theory fails mathematically at a fundamental level: quantum physics. In QM, probabilities are never calculated directly. In probability theory, probabilities are calculated from a sample space Ω (and a particular type of collection of subsets of Ω which correspond to possible “events”), combined with a probability measure P. The important part of this is that they are calculated directly. In QM, the “events” are vectors or rays in an abstract (usually infinite-dimensional) complex mathematical space called Hilbert space. The important part of this is that probabilities in quantum mechanics are never calculated directly and, even more important, that given two possible and mutually exclusive “events”, the probability that one or the other will happen needn’t equal 1, nor need the probabilities of either equal .5. Likewise, in quantum logic the truth value of “A or not A” isn’t 1 and the truth value of “both A and not A” isn’t 0.
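To make the contrast concrete, the standard Born rule (this is textbook quantum mechanics, not anything specific to the interpretations discussed below) gives the probability of an outcome a_i for a system in state |ψ⟩ as

P(a_i \mid \psi) = \lvert \langle a_i \mid \psi \rangle \rvert^{2},

and if |ψ⟩ is an equal superposition of two orthogonal states, |ψ⟩ = (|A⟩ + |B⟩)/√2, then for an outcome x measured in some other basis

P(x) = \tfrac{1}{2}\,\lvert \langle x \mid A \rangle + \langle x \mid B \rangle \rvert^{2} = \tfrac{1}{2}\bigl(\lvert\langle x \mid A\rangle\rvert^{2} + \lvert\langle x \mid B\rangle\rvert^{2}\bigr) + \operatorname{Re}\bigl(\langle A \mid x\rangle\langle x \mid B\rangle\bigr).

The cross (“interference”) term at the end has no counterpart in Kolmogorov’s additive measure: quantum probabilities come from squared amplitudes, never directly.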

The orthodox interpretation of QM holds that quantum systems are mathematical entities we use to predict the outcome of experiments and is agnostic towards, or in contradiction with, any ontological interpretation and perhaps realism itself. A very early alternative interpretation, which has since become a set of interpretations, is to understand quantum probabilities (the basis for indeterminism) along the lines of a subjective interpretation of probability. Subjective interpretations of probability, briefly and simply put, are epistemic: the statement that a coin has a 50% chance of yielding heads is equated with a rational degree of belief (certainty) that the coin is fair, and more generally probabilities are equated with degrees of rational certainty in outcomes. Once again, probabilities in quantum theory allow for an entirely new way to think about subjective probability, just as they required an alternative to probability theory to calculate probabilities: "When the province of physical theory was extended to encompass microscopic phenomena through the creation of quantum mechanics, the concept of consciousness came to the fore again: it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness." (from Eugene Wigner’s papers “Remarks on the Mind-Body Question” and “The Place of Consciousness in Modern Physics”). This interpretation of quantum physics (the “conscious collapse” of the wavefunction) is based on the work of the great John von Neumann and Eugene Wigner. But it isn’t the only interpretation which incorporates, or sees as necessarily incorporated already, the conscious observer in quantum theory:

“A many minds theory, as I understand it, is a theory which takes completely at face value the account which unitary quantum mechanics gives of the physical world and its evolution over time. In particular, it allows that, just as in special relativity there is a fundamental democracy of Lorentz frames, so in quantum mechanics there is a fundamental democracy of vector bases in Hilbert space. In short, it has no truck with the idea that the laws of physics prescribe an objectively preferred basis. For a many minds theorist, the appearance of there being a preferred basis, like the appearance of state vector reduction, is to be regarded as an illusion. And both illusions can be explained by appealing to a theory about the way in which conscious mentality relates to the physical world as unitary quantum mechanics describes it.” (Lockwood, M. (1996). ‘Many Minds’ Interpretations of Quantum Mechanics. British Journal for the Philosophy of Science, 159-188.)

Thus in both collapse and no-collapse interpretations of quantum theory we find consciousness is what makes reality possible, and it does so by forcing a particular possible state to occur (collapse) or through the selection of a particular state from a subjectively (but not objectively) distinct set (no collapse). Let’s return, though, to probability. It provides a measure of uncertainty, whether it actually “is” this or not. In most scientific applications, subjective probability provides a mathematically rigorous framework for a rational agent to update her or his beliefs given new information. But in an indeterministic universe governed by probabilities that are realized through observation, this view can be (and is) somewhat inverted. It is the universe/reality/Nature/the cosmos which is “uncertain”. Indeterminism is this uncertainty, and the conscious “observer” can subjectively determine objectively indeterministic states. In short, consciousness can determine an indeterministic reality through free will.
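As a concrete (and deliberately minimal) sketch in Python of the “rational agent updating beliefs” picture, here is Bayes’ rule applied to the question of whether a coin is fair; the hypotheses, priors, and data are all invented for illustration:

from fractions import Fraction

# Minimal sketch of subjective (Bayesian) probability as belief-updating:
# an agent revises the credence that a coin is fair after each flip.
# Hypotheses, priors, and data are invented purely for illustration.
priors = {"fair": Fraction(1, 2), "biased_towards_heads": Fraction(1, 2)}
p_heads = {"fair": Fraction(1, 2), "biased_towards_heads": Fraction(3, 4)}

def update(beliefs, saw_heads):
    # Bayes' rule: posterior is proportional to prior times likelihood of the data.
    likelihood = p_heads if saw_heads else {h: 1 - p for h, p in p_heads.items()}
    unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: weight / total for h, weight in unnormalized.items()}

beliefs = priors
for flip in [True, True, True, False, True]:   # a short, mostly-heads run
    beliefs = update(beliefs, flip)

print({h: round(float(p), 3) for h, p in beliefs.items()})
# After these five flips, credence has shifted towards the biased hypothesis.

Each flip shifts credence between the two hypotheses; the sketch says nothing about consciousness, it just shows what “updating a degree of belief” means operationally.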

Of course, we don’t need quantum theory here. It’s often said (accurately enough) that classical physics is deterministic and thus makes free will impossible. Determinism would indeed make free will impossible, but the determinism of classical physics doesn’t actually establish determinism, even in the classical realm and even without quantum mechanics. One of the reasons for the various interpretations of quantum mechanics is its entirely deterministic (formal) nature despite describing indeterministic phenomena. The point is that just because classical physics, as far as it worked and to the extent it is worth treating as anything other than a flawed theory we keep around for convenience, describes the dynamics of systems deterministically doesn’t mean the systems are deterministic. In fact, what is perhaps the most central component of classical physics, the “law of gravity”, was a clear violation of classical causality in a way that even quantum nonlocality isn’t: it allowed the faster-than-light (instantaneous) causal influence of one body on another through an immaterial “force”. Things became worse with the development of electromagnetism, which seemed even to its “inventor” (Maxwell) to pose problems for determinism, reductionism, and/or causality (for more modern critiques, see e.g., Frisch, M. (2005). Inconsistency, Asymmetry, and Non-Locality: A Philosophical Investigation of Classical Electrodynamics. Oxford University Press.). And when “chaos theory” emerged, it was realized that complex systems posed a variety of problems for the classical interpretation of classical physics. The fact is that most systems we describe using classical physics require statistical mechanics (an “invention” whose reception was so hostile when Boltzmann first introduced it to the scientific community that it is often said to have driven him into the depression that ended in his suicide). Complex interactions in complex systems allow a degree of uncertainty in future states as determined by local interactions. Free will can thus again be related to subjective probability, in which the organization of the brain’s state is probabilistically determined through choices (among other influences) and especially the “mind”. The mind can, for example, be viewed as a collective process produced by, but not reducible to, the physics governing neurons and neuronal networks, one that determines brain states the way mathematical operators do in QM or ensemble statistical descriptions do in classical physics (there are far superior approaches to consciousness, but this is as simple as I can get while including the probabilistic component of consciousness).
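The “chaos” point is easy to see in miniature. The logistic map below is completely deterministic, yet two initial conditions differing by one part in a billion end up nowhere near each other within a few dozen steps. This is the standard textbook illustration (in Python, with an arbitrary parameter value), not a claim about neurons:

# Deterministic but practically unpredictable: the logistic map x -> r*x*(1-x)
# iterated for two initial conditions that differ by one part in a billion.
r = 3.9                                  # a parameter value in the chaotic regime
x, y = 0.200000000, 0.200000001

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")

In practice this is one reason even “deterministic” classical systems have to be treated statistically: no measurement is precise enough to pin down which trajectory we are actually on.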

Randomness itself is a measure of disorder in classical physics, and free will can thus be viewed as the emergence of order within the brain through emergent processes that select particular states out of the statistical possibilities (i.e., allow for “choice”). It is the subjective statistical mechanics of the brain. Admittedly, this is mostly a metaphorical explanation and/or a conceptual simplification, but the vastly more complicated version wouldn’t differ in how indeterminism and randomness can and should be understood (especially in this case) via a kind of “ontological” subjective probability.
 

LegionOnomaMoi

Veteran Member
Premium Member
Reminds me of something that I thought was funny a few weeks ago. We were checking the weather on our smart phones. It said "Rain, 12 PM, 100%". Meaning 100% probability that it would rain at noon. Noon comes around. No rain. Checking the app again, and it says, "Rain, 12 PM, 20%". So the 100% chance (a statement of absolute guaranteed probability in my opinion) was now only 20%. It's not 100% unless that time actually comes. Until then, it can at best only be 99.999...% I guess they're just rounding it up. :D
That's a bit weird. Maybe it was saying that 12PM was 100%. ;)
I shouldn't do this, but I never listen to my own advice (I refuse to take orders from a ******* like that). In (rigorous) probability, there is a shorthand/abbreviation "a.s." meaning "almost surely" (it's the probability theory version of "almost everywhere" in modern analysis). Put simply, probability 1 doesn't mean it will happen, and probability 0 doesn't mean it won't. In fact, for some probability distributions the probability of any single event is 0 (in elementary probability theory, where we have continuous variables and continuous distributions, the realization of any particular value of a normally distributed variable has probability 0). My new favorite illustration of this kind of "paradox" in probability is the rational numbers. Imagine you could pick any real number from the interval [0,1] at random. We know that in that interval there are infinitely many rational numbers, and that between any two rational numbers there are infinitely many others (the rationals are "dense" in any interval on the real number line). It would seem, then, that there are no gaps, as we can get arbitrarily close to any number in this or any other interval using only rational numbers. So what is the probability that, if you could "reach into" the interval [0,1] and pick a random number, that number would be one of the infinitely many, infinitely dense rationals? The probability is 0. Despite literally infinitely many possible ways to "reach in" and pick a rational number, it so happens that the rational numbers make up such a small "portion" of this or any other interval (they are negligible in a measure-theoretic sense) that the probability of picking one is exactly 0.
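For anyone who wants the standard argument behind “negligible in a measure-theoretic sense”: list the rationals in [0,1] as q_1, q_2, q_3, … and cover q_n with an interval of length ε/2^n. Then

\lambda\bigl(\mathbb{Q}\cap[0,1]\bigr) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon \quad\text{for every } \varepsilon > 0, \qquad\text{so}\qquad \lambda\bigl(\mathbb{Q}\cap[0,1]\bigr) = 0 \;\text{ and }\; P\bigl(X \in \mathbb{Q}\bigr) = 0 \;\text{ for } X \sim \mathrm{Uniform}[0,1].

Since ε was arbitrary, the rationals fit inside covers of arbitrarily small total length, and probability 0 under the uniform distribution is exactly what that means.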
 

LegionOnomaMoi

Veteran Member
Premium Member
"At the rather basic level of life, and perhaps even in chemistry, there is no reduction: perhaps the simplest proof of this is that while the bases of DNA each obey the laws of physics, the juxtaposition of bases in the nucleotides is physically contingent, so the information content of DNA and the way it serves to encode instructions for constructing proteins is not governed merely by the laws of physics."
Simons, P. (2002). Candidate General Ontologies for Situating Quantum Field Theory. In Kuhlmann, M., Lyre, H., Wayne, A. (Eds.). Ontological Aspects of Quantum Field Theory. World Scientific.
 

Milton Platt

Well-Known Member
"At the rather basic level of life, and perhaps even in chemistry, there is no reduction: perhaps the simplest proof of this is that while the bases of DNA each obey the laws of physics, the juxtaposition of bases in the nucleotides is physically contingent, so the information content of DNA and the way it serves to encode instructions for constructing proteins is not governed merely by the laws of physics."
Simons, P. (2002). Candidate General Ontologies for Situating Quantum Field Theory. In Kuhlmann, M., Lyre, H., Wayne, A. (Eds.). Ontological Aspects of Quantum Field Theory. World Scientific.


Not given to philosophical discussions much, since they don't seem to solve much. But I will put my two cents in concerning free will. I think it can be argued that in the literal sense, we do not have free will. All of our actions and decisions are informed, directly and subtly, by our environment, including people individually, society as a whole, and our natural surroundings as a whole. Not to mention the historical accumulation of the effects of all of these on our thinking. So our decisions are really based upon an accumulation of these influences. So free will in the absolute sense does not seem possible. How would you ever make your mind free from all of these influences?
 

LegionOnomaMoi

Veteran Member
Premium Member
i think science has proved that all thoughts are evolved from pond scum and are beyond the control of the individual organism
I think that most of those (if not all) who espouse what "science" has "proved" are not scientists, do not have any graduate or post-graduate experience in scientific research, are not aware of what "proof" or "prove" mean, and/or harbor horrid misconceptions concerning The Scientific Method. I have a feeling that at least one difference between the basis for what I think and what you think here is that one of us has worked as a scientist and researcher for years and has served as a consultant for other scientists in research methods, while the other doesn't have much if any experience in scientific research. I could be idiotically and completely wrong here, of course, but there is also the matter of mathematical proficiency, as this fundamentally relates to the nature of "proof".
 