I hope to bypass the issues relating to the “free will” debate by focusing on what most people (whether they believe in “free will” or not) would agree “free will” necessarily entails: a person’s capacity to make a choice or decision such that they could have made a different choice or decision than the one they did.
Example: A large number of people end up going to college/university. Consider a specific student and a decision about a change of major. We’ll call our student Max, a student of German extraction who is studying classical languages with a minor in philosophy (although it isn’t really important, it so happens that Max, whether for simplicity or out of needless shame, only gave the university his first and last name, rather than including the middle part: Ernst Ludwig). After about two years of studying, certain philosophy courses have made Max really interested in physics, especially theoretical or “foundational” physics.
After considering for some time what changing his program of study would entail (extra semesters, a new set of courses required for the new major, more money, etc.), Max decides “dash it all, one only lives thrice, so I might as well give it the ol’ college try, wot? Righto, then, it’s settled.” (Max is a bit of an Anglophile). And so he resolves to switch his major and study physics, ignoring the advice of others (including a mentor of his, Phil Jolly, who was particularly opposed).
We have here a decision, or choice, that Max made after considerable thought (making it a “conscious” choice, understanding that the adjective “conscious” hasn’t been defined and is to be understood in the colloquial sense). Sticking to the confines of “free will” as defined above, the question is whether or not Max could have decided differently. We are not concerned with the extent to which his upbringing, friends, the courses he took, the state of the universe several hundred million years ago, etc., influenced his decision. We’ll even grant that there were a great many things, from his upbringing to a passing remark made by Prof. Rosen in one of Max’s classes, over which Max had little (and in some cases no) influence, but which did influence Max.
The question is not whether Max was influenced, but whether or not, no matter how “big” we allow our sphere of possible influences to be (e.g., as “big” as the state/conditions of everything from the origins of the universe up to Max’s decision), we can say that Max had the capacity to decide not to change majors. Put another way, it is whether Max could have made a decision other than the one he did.
But asking is the easy part; answering is another thing altogether. However, we have a starting place: this decision of Max’s, along with our question about it, which naturally brings to mind other questions, issues, and problems. For example, sticking with what we know about the make-up of human physiology, what about it can we relate to Max’s decision? Most would probably agree that the brain is rather key here. In fact, it’s not only key: if there is any way in which Max could have made a different decision, then something about the way Max’s brain works must make that a possibility.
Another thought or question which no doubt presents itself here is whether (whatever the nature of Max’s brain) there is reason to suppose that Max only thinks he made a choice which he could have made differently, when in reality someone with a suitable amount of data (even if it needs to include the state of the entire universe since the big bang) and the right computing device could have told us what decision Max would make a day before, a month before, a billion years before, etc. In other words, Max’s choice couldn’t have been other than what it was, because it is at least in principle possible for us to know what that choice will be before Max actually makes it, which necessarily entails that it was the only choice he could make.
There is a set of related reasons for entertaining the seemingly impossible (in that it is counter to our everyday experience) notion that there is no decision or choice we can make such that a different one was possible. The first is another type of everyday experience: effects and their relationship with time and cause. Simply put, the idea that one thing causes another, and that the cause precedes the other (the effect), is something we experience constantly. Why is Cathy Conduitt crying? Because Uncle Newton, during another rant about gravity, has dropped a filled glass and spilled milk all over the floor she had just cleaned. If asked why she was crying, Cathy would say it was because of the mess made by the spilt milk on the floor. And Newton would proudly announce that he had dropped the glass, causing gravity to take over, and then ramble on about structural integrity and the essence of hardwood flooring relative to melted sand blown into glass.
That’s the type of thing we experience all the time, mainly because we experience “time” as the unfolding of something we intuitively understand as events (which are more or less temporal intervals we conceptualize as “wholes”). When we see these events or actions, whether a dropped glass or a lit match or a declaration of war, we conceptualize them in terms of how we experience time and the activity which happens as we experience this time; that is, as a linear sequence of intervals/moments/actions/effects/etc., each and every one resulting from some previous set of effects which came before it and which caused it.
There is another main reason for supposing that Max only thinks he made a decision which, in the end, was the result of his ability to (at least at times and/or to some extent) determine what he does, such that he could have made a different one. To explain this reason, we have to look back in history a bit. Specifically, we have to deal with a Greek by the name of Aristotle who’s been dead for millennia. For centuries, the big issue here (Max’s decision and whether it was inevitable) had a lot to do with language. Aristotle illustrated the issue with a sea battle, which in Greek is one word (ναυμαχία) and which in the Greece of his time was a common enough experience, but which is rather dated now. Instead we’ll go with rain. Like Aristotle, philosophers and others even unto today deal with “truth-bearing” statements called propositions. Thus, “is it raining?” is not truth-bearing, but “it’s raining” is. If I say “don’t go outside, it’s raining” and you go outside to find that there’s not a cloud in the sky nor a drop of water falling from it, then what I said was false. But what if you had asked about the weather report? And what if I had answered “it’s going to rain tomorrow”? This appears to be a proposition, in that although we can’t determine whether or not it’s true right after it is said, we can do so the next day.
And because philosophers are lazy, borderline psychotic, obsessive, and generally useless to society, for centuries reasonable people have tried to keep them confined to universities or similar institutions, so that they could spend hundreds of years arguing about how exactly “it’s going to rain tomorrow” is or isn’t a “truth-bearing” statement (proposition), and whether the answer to this question entails fatalism.
Let’s go back to Max. What if, after a last-ditch attempt to dissuade Max, Phil Jolly had said (just as Max closed the door behind him on the way out of Dr. Jolly’s office) “he’s going to change his major”? Let’s assume this statement to be truth-bearing. It turns out that Jolly was correct here, and Max changed his major. Which seems to mean that when Dr. Jolly predicted this, his statement was true. If it was true when he made it, then necessarily Max had to change his major, otherwise Phil Jolly’s statement would be false. Of course, if it was false when he made it, then Max couldn’t have changed his major, because had Max changed it, “he’s going to change his major” would have been true. And for a very, very long time, the safely secluded philosophers argued about this while the rest of society did real work.
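For those who like their fatalism laid out bare, here is one standard reconstruction of the argument (the notation is mine, not Aristotle’s; read □ as “it could not have been otherwise”):

```latex
% One standard reconstruction of the fatalist argument (notation mine,
% not Aristotle's). Let S = "Max will change his major," asserted at t_0,
% C = "Max changes his major," and \Box mean "it could not be otherwise."
\begin{align*}
&(1)\quad S \lor \lnot S                       && \text{bivalence: the prediction is true or false at } t_0\\
&(2)\quad S \rightarrow \Box C                 && \text{if already true, the change is inevitable}\\
&(3)\quad \lnot S \rightarrow \Box \lnot C     && \text{if already false, the change is impossible}\\
&(4)\quad \Box C \lor \Box \lnot C             && \text{from (1)--(3): either way, no open alternatives}
\end{align*}
% The contested moves are (2) and (3): they trade the harmless
% \Box(S \rightarrow C) for the far stronger S \rightarrow \Box C.
```

The usual diagnosis is that steps (2) and (3) commit a scope fallacy: from the truth of Jolly’s prediction it follows only that Max does change his major, not that he had to.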
But, a few centuries ago, things began which would end up changing how philosophers wasted their time: the physical sciences. People like Descartes, Kepler, Fermat, Newton, and others began to develop and apply mathematical formulae to physical phenomena in order to describe and model physical reality, from the movement of planets to why everything is about apples being where they shouldn’t be (the apple which isn’t mentioned in Genesis, the apple which belittles Newton’s work on gravity by reducing it to getting hit on the head, the millions of deaths caused by people who were sure that “an apple a day” would keep the need for medical attention and doctors away, etc.).
They called this “new” approach to figuring out why things were the way they were “science”, derived from the Latin scientia which means “people without common sense”. And they got better and better at it, creating new fields of research where before there was only one, and almost all of this was related to being able to know what was going on and what would be without fondling the innards of sheep (and other messy divination methods).
Even though things didn’t begin with the intent to create a complete set of laws enabling one to (at least in principle) determine how anything and everything would happen, the more cohesive the “natural sciences” became, and the better and more accurate the ever-increasing number of “laws” became at demonstrating how stuff worked, the more it appeared as if everything operated differently than Descartes had thought: mechanics (the laws of motion) was no longer seen as applicable only to non-living systems incapable of agency. Instead the entire universe increasingly seemed to obey deterministic “laws” of physics. Natural philosophers (proto-scientists like Newton and Laplace) and later physicists began to think that there isn’t much of a difference between knowing how to answer those insufferably boring, irritating, and pointless questions of the form “if Alice drives east at a rate of 1000 furlongs per fortnight, and Bob drives north for seven moons at a rate of…” and knowing how to determine what decisions people would make before they made them. Sure, the latter is a lot harder, but if everything operates according to deterministic physical laws, then it is at least possible, in principle, to calculate the state of any system (like a person) arbitrarily far into the future.
In fact, scientists spent so long obsessed with models which showed how parts worked and how this or that equation enabled one to know how X action would produce Y result that the idea of determinism and naïve causality almost became what science was (or strove to be). For simplicity, we’ll say causation and determinism mean that every system (a brain, person, solar system, ant colony, etc.) can be reduced to physical laws which govern the interactions of its parts, right down to the most fundamental level of parts (the indivisible “atoms” or later “particles” that ultimately made up all matter). Additionally, knowing these laws and the state of the parts of a system (as well as whatever relevant external forces are or will be acting upon it) entails the ability to know exactly what will happen to the system. Finally, we can describe the activity of all these parts in a linear, causal way, such that for any arbitrary interval of time, whatever is happening can be explained completely by a series of immediately prior causes consisting of interactions of fundamental parts.
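As a caricature of this picture (a minimal sketch; the “law” and the state here are invented for illustration, not taken from physics), determinism amounts to the claim that prediction is nothing but iteration:

```python
# A toy deterministic universe: a fixed rule plus an initial state
# fixes every future state. The "law" here is invented for illustration.

def law(state: tuple[float, float]) -> tuple[float, float]:
    """One time-step of a made-up deterministic 'law of motion'."""
    position, velocity = state
    return (position + 0.1 * velocity, velocity - 0.1 * position)

def predict(initial: tuple[float, float], steps: int) -> tuple[float, float]:
    """Knowing the law and the initial state, the future is mere computation."""
    state = initial
    for _ in range(steps):
        state = law(state)
    return state

# Run it twice from the same initial state: the "future" is identical,
# which is exactly what the deterministic picture claims about any system.
print(predict((1.0, 0.0), 1000))
print(predict((1.0, 0.0), 1000))
```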
This would appear to be the place where some statement about the revolutionary changes in physics, namely relativity and quantum mechanics, changed everything. It’s not. It’s time to go back to Max’s brain.
A common misconception about physics is that before quantum physics, we didn’t just think that everything followed the deterministic causation model outlined immediately above, but had sort of “proven” it. Moreover, whatever relationship the new physics has to the brain, it’s generally believed that either the brain is only trivially governed by quantum mechanics, and thus we’re back at a deterministic causality model, or somehow QM can “save” us from admitting we never actually make decisions which couldn’t be (in principle) perfectly predicted before we made them. This isn’t accurate.
In fact, it’s not just wrong; it overlooks what is (at least as far as consciousness and choice are concerned) a far more important development of the 20th century: complexity. Until rather late in the 20th century, it was generally believed that the ubiquitous, pervasive “curvature” (nonlinearity) intrinsic in nature wasn’t so much of an issue anymore, as we finally had a sufficiently formal foundation for the calculus, which is all about handling nonlinearities.
Enter chaos: both the theory AND the frenzied, desperate attempts to retain the idea that, well, simple things could always be represented by simple mathematics, and more complicated things just required a quantitatively (not qualitatively) more complex approach. It turned out that “simple” things like a swinging pendulum could exhibit behavior which could not be precisely solved by any “general” (analytic) mathematical model. REALLY complicated systems, with lots of interacting parts, turned out to be capable of behavior that resulted in processes which couldn’t be reduced to the “sum of their parts”, even without getting into the fact that modern particle physics has basically shown reductionism to be dead in the water (even with a generous ontological interpretation of “particles” in modern physics, there is no set of particles so fundamental that all matter can be understood as made up of these and only these). In fact, everywhere scientists looked they found that non-living things often seemed to “randomly” self-organize, exhibiting properties which were the result of the synchronized activity of the collective, rather than of the component parts.
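The pendulum’s equations are too messy to reproduce here, so as a stand-in here is the simplest chaotic system I know of, the logistic map (a sketch; the map is standard, the parameter choices are mine): a one-line deterministic rule whose trajectories from nearly identical starting points become completely unrelated.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), a standard toy chaotic system (not the pendulum,
# whose equations are messier, but the moral is the same).

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000)
b = logistic_trajectory(0.20000001)   # differs only in the 8th decimal place

# Perfectly deterministic, yet after roughly 30 steps the two runs
# are unrelated: the tiny initial difference doubles at every step.
for step in (0, 10, 30, 50):
    print(step, a[step], b[step])
```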
The idea that “the whole is greater than the sum of its parts” is nothing new, but it wasn’t something which the physical sciences were or are equipped to deal with except by abandoning the deterministic, reductionist enterprise. For non-living systems, from clouds to crystals, the emergence of structure, patterns, and properties out of the dynamic activity of constituents created a major problem for causality, a notion which had increasingly become an issue only for philosophers (as scientists could get results just fine without worrying about the nuances of kinds of causation). It wasn’t that things like tornadoes or ant colonies exhibited behavior that made causation irrelevant or even inapplicable in these specific cases (that would bring us to quantum physics, and beyond physics). Rather, scientists faced problems such as determining what the “cause” was.
The mathematical models behind such problematic systems are too complicated to be useful here, but a simpler abstraction will suffice: a unit circle. We can describe it with algebra: x² + y² = 1. We can graph this circle, and even know what it looks like without actually constructing the graph (the geometric representation of the algebraic equation). But contrast this with the equation (one of them, anyway) of a line: y = mx + b. With lines, if we know the slope and intercept (m & b, respectively), the value of y at any point is completely determined by the value of x. This is not true of points on a unit circle. For any point on that circle, we can determine x by y’s value, and vice versa. It’s not that we don’t know the values which give us this circle; it’s that we can’t define one variable as a function of the other except by arbitrary choice.
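Spelled out (standard algebra, nothing assumed): the line determines y uniquely from x, while solving the circle for either variable forces a choice of branch, and neither direction is privileged:

```latex
% Line: y is a genuine function of x (one output per input).
y = mx + b
% Unit circle: solving x^2 + y^2 = 1 for either variable requires an
% arbitrary choice of sign, and each direction works exactly as well:
y = \pm\sqrt{1 - x^2}, \qquad x = \pm\sqrt{1 - y^2}
```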
That’s basically the issue with many nonlinear systems: in the equations we have, whose variables together constitute a complete model of the system, any variable can arbitrarily be described as a function (effect) of the others, or just as well as their cause.
If that were the only problem, we’d have no issue here. But it seems that biological systems, from the cellular activity of an organism to the entire organism, are qualitatively different from other natural systems.
I’ll give two examples:
1) Ant colonies. We know that any individual ant is basically mindless: a completely reactive drone we could simulate on a computer. Ant colonies, on the other hand, can perform incredibly complex tasks extremely effectively (so effectively that an entire subfield of machine learning, swarm intelligence, is dedicated to studying and reproducing this capacity). Yet we still don’t know how this works. We know quite well that it isn’t just the sum of the parts, because for one thing putting down 100 ants, or even more, will just end up with the ants running around in circles until they die. At some unknown point, however, and for some unknown reason, put enough ants together and they synchronize, forming a complex network capable of emergent functional properties which cannot be produced simply by understanding each ant separately (a toy sketch of this threshold behavior follows this list).
2) Cells in living tissue, plants, etc., are constantly active. More specifically, regardless of the type of cell or what plant or animal it is a part of, a large part of cellular activity is described as metabolism and repair; in other words, the activities which allow the cell to “create” energy for power, to repair itself, etc. This metabolic-repair is fundamental for cellular function (after all, without “power”, how would the cell do anything?). Pretty much the entire cell is constantly influenced (i.e., in some sense “caused”) by this metabolic-repair process. The question is: what causes this process? What most people informally term “cause” corresponds fairly well with what philosophers and scientists have termed “efficient” cause since Aristotle. Thus I can talk about how, for example, certain normal human behavior is “caused” (through evolution) by the state of an environment thousands and thousands of years ago (evolutionary psychology). But when someone has a hard time sticking to a diet rather than eating “junk food” filled with sugars and fats and so forth, it isn’t because at the moment they decide to have a chocolate bar rather than a granola bar they are thinking about the conditions of life thousands of years before civilization, which is “causing” the craving. That’s not an efficient cause. The efficient causes would be more like the neural signals coming and going between their digestive system and prefrontal cortex. The problem with metabolic-repair in cells is that it appears to be closed to efficient causation. In other words, the same parts of the cell which are part of the metabolic-repair process are also influenced by it at the same time. It’s not just that we can’t figure out what’s causing what because we can arbitrarily choose (as before), but a more serious problem (so serious that there exists a mathematical proof that cells cannot be computed, which has caused a rather heated debate among scientists in fields ranging from computational biology to machine learning to mathematics). The “efficient” cause of the metabolic-repair process is cellular activity, but cellular activity which is also the “efficient” cause of metabolic-repair at the same time (a toy illustration of this circularity also follows below). Despite the death-grip reductionism has on the sciences, particularly in areas like biology, it has increasingly been at least partly abandoned because it fails: too often, reducing a biological system to its components means you cannot model the system itself, because the behavior is more than just the summed activity of its components.
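First, the ants. What follows is a toy version of the classic “double bridge” choice model (the model shape is standard in the swarm-intelligence literature; the parameters and code are my own invention, a sketch rather than a model of real ants). Each ant picks one of two branches with probability weighted by the pheromone already deposited there, and pheromone evaporates every round. With few ants, evaporation wins and no collective choice emerges; with enough ants, positive feedback tends to lock nearly the whole colony onto one branch.

```python
import random

# Toy "double bridge": each ant chooses branch A or B with probability
# proportional to (k + pheromone)^h, deposits pheromone on its choice,
# and all pheromone partially evaporates each round. All parameter
# values here are invented for illustration.

def run_colony(n_ants: int, rounds: int = 200, k: float = 5.0,
               h: float = 2.0, evaporation: float = 0.5) -> float:
    phero = {"A": 0.0, "B": 0.0}
    for _ in range(rounds):
        for _ in range(n_ants):
            wa = (k + phero["A"]) ** h
            wb = (k + phero["B"]) ** h
            branch = "A" if random.random() < wa / (wa + wb) else "B"
            phero[branch] += 1.0          # reinforcement
        for b in phero:
            phero[b] *= evaporation        # evaporation fights reinforcement
    total = phero["A"] + phero["B"]
    return max(phero.values()) / total if total else 0.5

# Fraction of pheromone on the "winning" branch: near 0.5 means no
# collective choice; near 1.0 means the colony has synchronized.
for n in (2, 10, 200):
    print(n, round(run_colony(n), 2))
```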
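Second, the cell. Here is a deliberately crude cartoon of causal closure (my own toy, emphatically not the formal system behind the proof mentioned above): a “metabolism” pool maintained by “repair” machinery which is itself manufactured by that same metabolism. Asking which one is the efficient cause is asking the wrong question; delete either and the whole loop dies.

```python
# A cartoon of metabolic-repair closure: metabolism M manufactures the
# repair machinery R, while R is what maintains M. Rate constants are
# invented; this only makes the circularity visible, nothing more.

def step(m: float, r: float) -> tuple[float, float]:
    decay, make = 0.2, 0.2
    return (m - decay * m + make * r,      # R maintains M...
            r - decay * r + make * m)      # ...while M produces R

def run(m: float, r: float, knockout: str = "", steps: int = 100):
    for _ in range(steps):
        m, r = step(m, r)
        if knockout == "M": m = 0.0        # remove metabolism entirely
        if knockout == "R": r = 0.0        # remove repair entirely
    return round(m, 3), round(r, 3)

print(run(1.0, 1.0))                # intact loop sustains itself: (1.0, 1.0)
print(run(1.0, 1.0, knockout="M"))  # knock out either component...
print(run(1.0, 1.0, knockout="R"))  # ...and both collapse to (0.0, 0.0)
```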
For several reasons (the fact that physicists were distracted by QM and relativity, the fact that it took some time before we had the computational power to realize that computational power wasn’t the issue, and the work on increasingly complex yet never adequate mathematical models), it wasn’t until recently that we recognized that limits to classical physics which don’t have anything to do with QM are behind the failure of certain reductionist attempts (in particular those within biology).
And once again, we are finally back to Max’s brain. When he decides to change his major, the process is like the metabolic-repair described above, only on steroids. Instead of an emergent, irreducible functional property of a cell, we have a system of networks so powerfully synchronized and coordinated with one another that no system we know of begins to compare, not merely in terms of complexity, but in terms of what appear to be violations of physical laws as we understand them. This remains true quite apart from whatever quantum dynamics may be at work in the brain (not to mention the little problem that, when it comes to a lot of modern physics, the reason there are constant, never-ending releases in print, internet, and television media on some other-worldly model of physics that is so much better and cooler than the plain ol’ vanilla “standard model” has nothing to do with experimental research; even if physicists actually agreed on what the standard model really is, the preference for, and creation of, other models is either entirely or largely due to the fact that we haven’t much of a clue what the models, standard or not, actually describe).
Here biologists (from those who develop evolutionary algorithms for computational models of modularity to neuroscientists) have an advantage: unlike physicists, whose field concerns separations in spacetime well beyond observation, or who conduct experiments in which we have only symbols to describe whatever is going on at the “quantum” level we can’t see, even biologists who study the origins of life frequently have more “observable” experimental paradigms with which to collect data.
Within neuroscience, such experiments rely primarily on functional imaging (“functional” is the “f” in fMRI), which, like EEG and PET, creates “dynamic” pictures rather than static ones like those produced by MRI or X-ray. At the moment, we aren’t anywhere near models of consciousness which aren’t highly theoretical. However, we do have a good deal of data which we can’t seem to explain, prompting everything from descriptions of quantum effects or “quantum-like” neural activity to ignoring these data and focusing on other things, like neural correlates, rather than on how these correlates do what they do.
There are not, therefore, two camps in the sciences: the reductionists/determinists vs. those for whom quantum mechanics allows “free will” of some sort. Both positions certainly exist, but:
1) The “deterministic/reductionist” or “classical” camp has an increasing number of increasingly difficult experimental results to explain using its theoretical framework.
2) What empirical evidence does exist supporting non-trivial quantum processes in neurodynamics is slim at best.
3) Even if one accepts that there are non-trivial quantum processes, all that this does is allow one to apply a theoretical interpretation of the formalisms (mathematical equations, symbols, etc.) in quantum mechanics or quantum field theory in which these processes can do something they can’t in some other theoretical interpretation.
4) There are an increasing number of groups across the physical sciences breaking away from classical reductionism and forming new approaches which are superior in their explanatory power in more ways than they are deficient because their approach is not reductionist, or at least not limited to reductionism. There are interdisciplinary journals, conferences, monograph/volume series, and edited volumes which have in common a systems approach (or, more generally, an approach which incorporates, improves, adapts, and implements the non-reductionist methods and models used across fields). Then there are the same but for a specific field or research area (like cognitive neuroscience). Not all of those who subscribe to this rather nebulous conglomeration of theoretical backgrounds, methods, techniques, etc., believe that the reductionist program is ontologically flawed (i.e., even though they may model some system like a cell or brain or plant in a way which precludes causality, they do this because they believe we currently lack the ability to gain much more from the reductionist approach, not because they deny that the system can in reality, at least in principle, be explained using “classical” reductionist views). However, a large number (perhaps a majority) do.
What, however, does this mean for Max’s decision? Even outside QM theories of consciousness, there isn’t a single cohesive model (reductionist or not). Those who argue that emergent properties of biological systems (or at least some such systems) are irreducible don’t all agree on the nature of these properties, let alone how they might be produced. Rather than go into this in depth, then, avoiding the more radical theories is probably best. The cell metabolism example is usually thought of in terms of an emergent functional property, in that while it cannot be produced simply by the actions of the cell’s component parts, it is also only the name we give to various processes, one which helps us to explain the state of the cell in a way classical reductionism does not. An only slightly more complicated type of emergence can be used to describe Max’s decision. Just as there is no physical entity in Max’s brain which represents “course” or “major” or “university” or even “change”, neither does one exist which is “decision” or “decision to change my major”. Instead, the structure of Max’s brain is capable of producing not only functional but conceptual properties/processes, including a reflexive concept which allows Max to understand himself as in some way an agent. We call that “consciousness” or “self-awareness”. These properties are the key: the capacity for emergent concepts arising from irreducibly synchronized neural networks, including a concept of self, creates the necessary ingredients for self-governing agency, or the ability of a system to use functional and conceptual emergence to produce still another property (agency) which is both a product of the system (like the other irreducible, emergent properties) and, at the same time, a determinant of it.
Of course, I’m simplifying greatly here, but even were I as technical as possible, there would still be one important little problem: if this is the way the mind works (or anything works), why are there still scientists arguing that the brain and every other system is reducible and deterministic (at least in principle)? Unfortunately, the very reason that the reductionist program is increasingly being rejected, or at least added to, is the answer: reductionism succeeded for so long because of the ability of reductionist models to explain everything in terms of parts which could be treated as variables in some mathematical model. What are thought of as the limits of reductionism, and the need for some sort of systems approach which allows for emergent properties, come from the failure of classical models, which means that the alternative models are more schematic, holistic, abstract, and non-reductionist. Which also means that no matter how well they explain things, or how useful they can be for learning things, what they can’t do is show that there is no reductionist model which could explain what’s going on. Strictly speaking, there actually has been a proof of this for almost two decades, as well as subsequent “proofs” of a similar nature, but the main problem with them is similar to the problem plaguing modern physics: when the variables in your model don’t correspond to well-defined properties or processes, but are more abstract or interpretative, it’s hard to “prove” that your model isn’t missing something.
In closing, though, and leaving Max’s decision behind for now, I’d like to point to a very different reason for suspecting that the reductionist approach is at least incomplete, and that the human brain is a system governed by emergent properties which include self-awareness and agency. Namely, the “reductionist” program was never exactly formally incorporated into science, but followed from the way in which the first “scientists” approached modeling: they deliberately restricted their models to components of reality which were, or could be treated as, in relative isolation and inert. And for a time, that approach yielded so much that what had been merely a method, rather than an axiom, became integral to the scientific approach. The experiments which set the scientific endeavor in motion were necessarily reductionist, but in the beginning those like Descartes stated explicitly that this reductionism was limited to a rather small group of phenomena. However, while these experimental paradigms increased in sophistication and application, the way reductionism was incorporated into the underlying framework did not receive the same scrutiny. As a result, it sort of just became the framework, or a part of it, without much evidence that it could adequately apply everywhere.
The same sorts of assumptions were behind a decreased interest in physics near the turn of the century, thanks to a “we pretty much know everything” attitude which, as it turned out, wasn’t just wrong, but amazingly, spectacularly wrong. So wrong that although quantum theory has been around for a century or so, there is still fundamental disagreement about its basic nature, let alone about the unbelievable turnabout which ideas like spacetime represent when put into historical perspective (the idea that time is distinct from space not only coincides with our everyday experience, and has a few thousand years of philosophy and then science behind it, but hasn’t coincided with physics since at least 1905). So if the reductionist, deterministic causality which sort of “crept” into scientific practice and method until it was suddenly a foundational component had at best as much support as did all the concepts which were overturned by relativity and QM, and perhaps much less, why cling to this epistemological approach to science and reality in spite of evidence to the contrary?