Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.
No non-entanglement effect propagates faster than c. I don't really see how you can deny that one.

1) Technically, the central issue concerns superluminal signals, but more than one specialist has written about the "vague" ways in which signaling is defined, which make debating the possibilities of CTCs and whether they violate SR difficult (and tend to create study after study about the hypothesized properties which may or may not exist in what is a hypothesized constraint to begin with).
The issue of CTC in SR is irrelevant if the question is already resolved in GR, because GR is a strict superset of SR.

2) Not only is there an issue with defining what constitutes a "signal", but also how, why, and in what way CTCs might (or might not) violate SR. See, e.g., Closed Timelike Curves via Postselection: Theory and Experimental Test of Consistency:
Abstract: "Closed timelike curves (CTCs) are trajectories in spacetime that effectively travel backwards in time: a test particle following a CTC can interact with its former self in the past. A widely accepted quantum theory of CTCs was proposed by Deutsch. Here we analyze an alternative quantum formulation of CTCs based on teleportation and postselection, and show that it is inequivalent to Deutsch’s. The predictions or retrodictions of our theory can be simulated experimentally: we report the results of an experiment illustrating how in our particular theory the “grandfather paradox” is resolved."
(I chose this one because there is a free copy available)
There's also Richard Feynman's (IIRC) view - there is no reality other than experiment. Assuming that one can't simplify the formalism, then that formalism is reality for all intents and purposes, so long as it produces the right results.

3) This concerns relativistic models, not necessarily QM (where we get into what "entanglement" actually is or is not). The problem here is well-known because the problem is the unknowns. Most of QM is formalism, and the relationship between this formalism and reality is unknown (see, e.g., Plotnitsky's paper "On physical and mathematical causality in quantum mechanics" in the journal Physica E Vol. 42).
The only actual specifics I've seen you quote on this is Rosen's modelling, and other people quoting it. I'm yet to be convinced that Rosen is actually doing the logic correctly - his modelling bears no apparent resemblance to chemistry, which is what underlies biology.

4) In contrast to CTCs and all the unknowns, biological systems lend themselves to a great deal more actual empirical study. And here we have a problem:
The amount of research in biology alone (apart from the above, see e.g., "From exact sciences to life phenomena: Following Schrödinger and Turing on Programs, Life and Causality" published in Information and Computation vol. 207) represents a rather serious problem for reductionism, but when we get to neurobiology and consciousness, the "leaders" of the movement towards fundamentally indeterministic systems and mechanisms in the brain which allow for a form of "free will" are primarily physicists.
I'm being serious. You're abusing the language if you try to have mutating objects in maths without being explicit about it. Maths has no sense of time. (For instance, as suggested: one can have an object that varies as time passes by defining it as a series of immutable objects, parametrized by time.)

I can't tell if you are joking or seriously asserting the above.
There's no such thing as a "fuzzy" function in standard logic. (Feel free to use a fuzzy logic, but that is also clearly defined.) By using the language of mathematics and logic, you are automatically unfuzzying it.

f is not just a notational device used to describe an abstract mapping, but to represent a literal function: metabolism. Unlike QM, cellular processes can (in general) be observed. The problem is they are extremely difficult to model. This is because (unlike programs run on computers), they are fuzzy systems with functions that aren't well-defined and don't seem to fit into linear causal models.
Abstractions, e.g. your "functional process," do not exist in and of themselves. They are emergent phenomena of their components - take away the components, and the abstraction no longer happens. As an example, it does not make sense to speak of "wet" without a medium that is wet.

The material "parts" create a functional process or processes which are independent of any of them but are produced and affected by all of them.
Hence why I defined it in a way that matches up with how we normally think of causality.

He (actually they, in that this began with Rosen but hardly stopped there, instead largely creating a new framework for biology) did not mean that. Causality can certainly be "defined mathematically". I can define it as 3. Or 42. The issue is the relationship between the operations and symbolism and reality. The problem with well-defined functions in this case is the fuzzy boundaries (providing they exist at all) between the domain and image.
I object because it is vague. You cannot be vague in mathematics - that defeats the point.

Here's the problem: you've objected to the use of ill-defined functions and so forth in the mathematical description, because (I think this is why) they are incompatible with computation theory.
If your "formal description" is on the level of the one you quoted above, no wonder you're having problems with mathematically analysing it.

The mathematical formalism you object to is the way it is not because of Mikulecky's poor quantitative reasoning abilities, but for the same reason we find similar functions, schematics, and so forth elsewhere in systems biology, neuroscience, etc.: we can either make the mathematical descriptions such that a computer can deal with them, which means we lose most of what's going on in our "model", or we can depict and describe as formally as possible what so far we have no idea how to model using a computer and which may (according to one's interpretation of various proofs) be impossible to run on any Turing-equivalent machine.
Multiple-universe quantum mechanics (where the only truly real object is the universe wavefunction; all else is abstraction and bookkeeping) restores locality by making the faster-than-light effects vanish. In this version, your measuring of the entangled particle just establishes what universe you're in; it doesn't do anything to the particle itself. Nothing travels faster than light, and so causality is restored.

You're going to need to be more specific, as extra dimensions, multiverse theories, and so forth are not only diverse, but also do exactly what I said: deal with space-like nonlocality by constructing some alternative model, not by "restoring" locality.
No non-entanglement effect propagates faster than c. I don't really see how you can deny that one.
The issue of CTC in SR is irrelevant if the question is already resolved in GR, because GR is a strict superset of SR.
I'm being serious. You're abusing the language if you try to have mutating objects in maths without being explicit about it. Maths has no sense of time. (For instance, as suggested: one can have an object that varies as time passes by defining it as a series of immutable objects, parametrized by time.)
There's no such thing as a "fuzzy" function in standard logic. (Feel free to use a fuzzy logic, but that is also clearly defined.) By using the language of mathematics and logic, you are automatically unfuzzying it.
Also, if f is actually describing the metabolism, then one can't really model it as inputting molecules and outputting other molecules. That ignores not only the temporal component, but also the state of the organism.
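The point in the quote above, that fuzzy logic is itself clearly defined, can be sketched in a few lines. The "tall" predicate and the 160-200 cm thresholds below are made-up illustrations, not anything from this thread:

```python
# A crisp predicate vs. a fuzzy membership function (Zadeh-style fuzzy
# logic). "tall" and its thresholds are hypothetical examples.

def crisp_tall(height_cm: float) -> bool:
    """Classical logic: membership is simply True or False."""
    return height_cm >= 180.0

def fuzzy_tall(height_cm: float) -> float:
    """Fuzzy logic: membership is a degree in [0, 1], here rising
    linearly between 160 cm and 200 cm."""
    if height_cm <= 160.0:
        return 0.0
    if height_cm >= 200.0:
        return 1.0
    return (height_cm - 160.0) / 40.0

# Standard fuzzy connectives (min/max is one common choice):
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)
```

The "fuzziness" lives entirely in the precisely defined membership degrees; nothing about the formalism itself is vague.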
Abstractions, e.g. your "functional process," do not exist in and of themselves. They are emergent phenomena of their components - take away the components, and the abstraction no longer happens. As an example, it does not make sense to speak of "wet" without a medium that is wet.
I object because it is vague. You cannot be vague in mathematics - that defeats the point.
If your "formal description" is on the level of the one you quoted above, no wonder you're having problems with mathematically analysing it.
Multiple-universe quantum mechanics (where the only truly real object is the universe wavefunction; all else is abstraction and bookkeeping) restores locality by making the faster-than-light effects vanish. In this version, your measuring of the entangled particle just establishes what universe you're in; it doesn't do anything to the particle itself. Nothing travels faster than light, and so causality is restored.
A number of academic conferences, from one held at Cambridge University in 2001 to another at the same place (different college, same university) in 2005, but in particular one held at Stanford in 2003 resulted in the publication of a volume which shares the name of the 2003 conference: "Universe or Multiverse?". The book (edited by Bernard Carr) consists of a number of papers written by various physicists, cosmologists, etc., who were involved at these conferences, and was published by Cambridge University Press in 2007. The first paper is an introductory paper on the subject and an outline of the volume (this is standard practice) written by Carr (again, standard, as edited volumes usually contain this type of contribution by the editor or editors).
In this introduction to the volume, Carr notes the following:
"Despite the growing popularity of the multiverse proposal, it must be admitted that many physicists remain deeply uncomfortable with it. The reason is clear: the idea is highly speculative and, from both a cosmological and a particle physics perspective, the reality of a multiverse is currently untestable. Indeed, it may always remain so, in the sense that astronomers may never be able to observe the other universes with telescopes and particle physicists may never be able to observe the extra dimensions with their accelerators...
For these reasons, some physicists do not regard these ideas as coming under the purview of science at all. Since our confidence in them is based on faith and aesthetic considerations (for example mathematical beauty) rather than experimental data, they regard them as having more in common with religion than science. This view has been expressed forcefully by commentators such as Sheldon Glashow, Martin Gardner and George Ellis, with widely differing metaphysical outlooks. Indeed, Paul Davies regards the concept of a multiverse as just as metaphysical as that of a Creator who fine-tuned a single universe for our existence. At the very least the notion of the multiverse requires us to extend our idea of what constitutes legitimate science."
It's still your will. And do you need an outside force to make yourself move? lol no.
Well, yes, GR suggests that CTCs are possible... but if we are using GR, any result from SR is completely irrelevant because they've been obsoleted by GR. Since AFAIK GR is consistent with both CTCs and superluminal signalling, there doesn't seem to be a problem.

Additionally, this:
seems completely backward. The problematic "paradoxes" and issues with causality result from what CTCs entail given SR, and the reason anybody is discussing the issue at all is because GR suggests CTCs are possible.
What is Dirac's theory trying to describe? From what I've heard of QED, you don't need forward-propagating anything to explain anything.

[Dirac]
Life itself is a localized violation of the 2nd law of thermo - the law only applies invariably to closed systems. (Excepting the Poincaré recurrence theorem, but that's irrelevant on timescales we care about.) Since the future and past are defined in terms of the 2nd law of thermo, then if you make the second law not work then linear time similarly stops working. However, one can always infer a single chain of causality by the global behaviour of the entire universe. (Which is by definition a closed system.)

All of QM concerns activity at a sufficiently "small" spacelike region, and although it is possible to use QM equations instead of their classical counterparts, it's generally considered both inconvenient and unnecessary. However, although this "unnecessary" used to include molecular processes in biological systems, the sufficiently small levels of analysis at which violations of the 2nd law of thermodynamics occur seem to include relevant processes in biological systems.
The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of values, indexed by a single real variable.

Apart from the fact that it wasn't my language (just in case that "you're" was directed specifically at me rather than the general use), I don't know if I understand what your problem is, precisely. After all, you linked to a wiki page on programming, not the philosophy of mathematics or even mathematical theory. And it certainly isn't required of mathematical models and metamodels.
No, I have never come across any sort of objects that could be described as vague. (As opposed to objects that have properties that have no specific value but are known to belong to some set. Those are fine.)

So you've never come across the term "vague objects" in works on mathematical logic? How about relational systems theory?
So the domain is the set of all series of states of the components of the cell, and the outputs are transformations of cell components? (i.e. mappings from some cell component config to another one)

f isn't the model. The entirety of the function f and its domain and image are the model. Nor is it "inputting" molecules and "outputting" others. Its inputs are the processes of the components within a biosystem like a cell which "produce" it. Its outputs are the effects this feature has on the components of the cell.
Remember, mathematical objects are not produced because there is no time in which to do the production. They are also not real in any physical sense, except perhaps the "bottom" one, so saying that there is some irreducible "functional" component to a biological system is simply nonsense. It's like saying there's some functional component to a computer that means that I can't build a CPU by scratching silicon wafers into the right configurations.

What you object to is that this function f is produced by what it is producing. And that certainly flies in the face of much of mathematics and computer science.
But I have no reason to believe that the function f accurately describes anything about how a cell or other bio-machinery actually works. You seem to have simply defined an arbitrary function out of thin air. Now, I am a biology layman, so perhaps it is intuitively obvious that the function is accurate, but that sounds very unlikely.

As you say, it certainly isn't typical (and it absolutely isn't "well-defined") for some function to exist and operate in the way f does here. But guess what? That's why biology isn't like a computer, why it's so unbelievably difficult to model biological processes without losing far too much through approximations, and why reductionism hasn't succeeded here.
Upward or downward causation isn't a thing at all, because you're mixing up abstractions in ways that fundamentally don't make sense. Only things which exist on the same level of abstraction influence each other - things lower down influence the components of things further up, and only via that do they influence the more abstract things. e.g. It is impossible for me to influence single molecules of ATP... yet, the cells that make up me do it billions of times.

This is true (well, true enough). I don't see the point though, as that's the idea: emergent phenomena which don't obey either strict upward causation or downward.
What on earth would ontological vagueness even mean? That the universe itself is not well-defined? That shoots science itself in the foot; well done.

You can. Not only that, the issue of whether this vagueness also reflects an ontological vagueness is something which continues to be argued.
I'll let them know once I've finished helping the physicists and chemists invent advanced nanotechnology.

Yes, that's why biologists are having trouble: they are all incapable of doing math. You could make a killing by showing thousands of scientists in a diverse range of fields how to properly use "maths" such that it reaches the "level" you would prefer. They may object, however, as that approach was tried for decades (and still is), but the problem is that it doesn't work. I mean, seriously, we aren't talking about social "scientists", but real ones who are trained in maths & modelling.
Quarks. That is all.

Well... See in particular the quoted section:
Well, yes, GR suggests that CTCs are possible... but if we are using GR, any result from SR is completely irrelevant because they've been obsoleted by GR. Since AFAIK GR is consistent with both CTCs and superluminal signalling, there doesn't seem to be a problem.
FYI, I ignored the first half or so of that post because you appeared to be talking about how to define causality in the context of CTCs existing, and I already know that causality is no longer a coherent option with CTCs - but CTCs don't appear to exist, so causality works fine for the moment.
What is Dirac's theory trying to describe? From what I've heard of QED, you don't need forward-propagating anything to explain anything.
Life itself is a localized violation of the 2nd law of thermo - the law only applies invariably to closed systems.
Since the future and past are defined in terms of the 2nd law of thermo, then if you make the second law not work then linear time similarly stops working. However, one can always infer a single chain of causality by the global behaviour of the entire universe. (Which is by definition a closed system.)
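The arrow-of-time claim in the quote above can be illustrated with the standard Ehrenfest urn model, a textbook toy for the 2nd law: the particle count and step count below are arbitrary choices, not anything from this thread.

```python
import math
import random

# Toy illustration: in a closed system entropy climbs from a low-entropy
# start, and that gradient is what orients "past" vs. "future".
# Two connected boxes of gas; a randomly chosen particle hops sides.

random.seed(0)  # fixed seed so the run is reproducible

def shannon_entropy(p: float) -> float:
    """Entropy (in bits) of the two-outcome distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

N = 1000
left = N  # all particles start in the left box: a low-entropy state
initial_entropy = shannon_entropy(left / N)

for _ in range(20000):
    # pick a particle uniformly at random; it hops to the other box
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1

final_entropy = shannon_entropy(left / N)
# The "arrow of time" in this toy model is just the direction in which
# final_entropy exceeds initial_entropy.
```

Run forward, the entropy of the box distribution climbs toward its one-bit maximum; a local subsystem can still move against the gradient, which is the "open system" caveat in the quote.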
The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of values, indexed by a single real variable.
(Also, if you doubt the validity of any programming concept in this context, you'll find that computer programs are merely a different way to write mathematics.)
The problem is your original specification of what the domain and image sets were involved things being added and removed from those sets - this is impossible without a value to measure the time in which to do the adding and removing.
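The "series of immutable values indexed by a single real variable" suggestion above reads naturally as code. A minimal sketch: `CellState` and its fields are invented placeholders, not a real biological model.

```python
from dataclasses import dataclass

# A "changing" object modelled mathematically: a family of immutable
# values indexed by time, rather than one value that mutates.

@dataclass(frozen=True)  # frozen=True forbids mutation, like a mathematical object
class CellState:
    atp_count: int
    volume: float

def state_at(t: float) -> CellState:
    """A time-varying 'object' as a pure function t -> immutable value.
    The formulas are arbitrary toy dependence on t."""
    return CellState(atp_count=int(1000 + 10 * t), volume=1.0 + 0.01 * t)

s0 = state_at(0.0)
s5 = state_at(5.0)
# s0 is untouched by 'time passing'; s5 is simply a different value.
```

Nothing in the family ever changes; "change" is just the difference between the values at two indices.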
So the domain is the set of all series of states of the components of the cell, and the outputs are transformations of cell components? (i.e. mappings from some cell component config to another one)
While this is unambiguous and time-invariant, it doesn't seem that useful.
Remember, mathematical objects are not produced because there is no time in which to do the production.
It's like saying there's some functional component to a computer that means that I can't build a CPU by scratching silicon wafers into the right configurations.
But I have no reason to believe that the function f accurately describes anything about how a cell or other bio-machinery actually works. You seem to have simply defined an arbitrary function out of thin air. Now, I am a biology layman, so perhaps it is intuitively obvious that the function is accurate, but that sounds very unlikely.
Upward or downward causation isn't a thing at all, because you're mixing up abstractions in ways that fundamentally don't make sense
What on earth would ontological vagueness even mean?
But isn't it rather odd that all this mathematical modelling works on systems like metamaterials and nano-medicine, and yet apparently does not and cannot work on systems that are... just the same stuff but more of it?
That's not the issue. At issue is what it is that we aren't observing.

Believing in things that fundamentally cannot be observed directly is part and parcel of modern particle physics.
When a highly accurate and tested theory throws a curveball, e.g. most of the universe is unobservable, unless you have a very good evidence-based reason to think otherwise, it's generally more rational to believe it.
(Before you bring up string theory or something, nobody has been able to perform an experiment that differentiates string theory from standard model QM, so the standard model wins because it's the least complex explanation of currently existing evidence.)
...Is it? Causality depends on the idea that events are well-ordered. I don't see how that's tied into locality, assuming that CTCs don't exist.

I don't see how you can assert this, especially given your initial request that I frame the dynamics of biological systems in terms of light cones. These do, of course, exist in GTR (albeit in a quite different way). However, classical causation is local, whether in some Minkowski space or something which can be an approximation of it (e.g., Euclidean space).
I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.

This in and of itself, however, doesn't pose much of a problem because we are generally interested in local regions of spacetime and the laws which govern and/or influence the dynamics within that region.
1) Variance of what? "Invariant" is an adjective. Also, I imagine that the critical quantity, the spacetime interval, is invariant under whatever transformation GR uses to translate between reference frames.

So the fact that we cannot assert within the GTR framework that frame transformations will not involve variance is irrelevant, because for each and every spacelike or timelike region the invariance of transformations in SR holds.
We still have the partial ordering of events by spacetime interval, though, even though there is no absolute time function.

So we go from classical physics with a single, invariant and global time, to SR and many global time functions, to GTR with none.
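The "partial ordering of events by spacetime interval" mentioned in the quote is concrete enough to compute. A minimal SR sketch, with units chosen so that c = 1; the sample events are arbitrary.

```python
# Events are (t, x, y, z) tuples; the squared Minkowski interval is the
# frame-invariant quantity that underwrites the causal order.

def interval_sq(a, b):
    """Squared Minkowski interval between two events (c = 1)."""
    dt = b[0] - a[0]
    dx, dy, dz = b[1] - a[1], b[2] - a[2], b[3] - a[3]
    return dt * dt - (dx * dx + dy * dy + dz * dz)

def causally_precedes(a, b):
    """True iff b lies in the causal future of a: timelike or lightlike
    separation (interval_sq >= 0) with b later in coordinate time.
    Spacelike pairs come out incomparable in both directions, which is
    exactly why this order is partial rather than total."""
    return interval_sq(a, b) >= 0 and b[0] > a[0]

origin = (0, 0, 0, 0)
inside_cone = (2, 1, 0, 0)   # timelike separated from origin
outside_cone = (1, 5, 0, 0)  # spacelike separated from origin
```

Spacelike-separated events are simply unordered, which is the precise sense in which GTR/SR can drop a global time while keeping a causal structure.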
Quantum entanglement does not let you transmit any sort of signal. Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.

At the moment, however, there is no general agreement among physicists/cosmologists regarding how to reconcile the theoretical work of EPR and Bell and the experimental work of those like Gisin with a model of causality which incorporates both relativity and QM.
This sounds absurd. Why should backwards-propagating waves be necessary to prevent the particle interacting with its own field? From the brief mention I've found on Google, they aren't - Dirac's formulation is just another way of writing a system that does propagate in one direction.

[silliness]
(That's because only the entire universe is a perfectly closed system.)

It only applies invariably to the entire universe.
An "upper bound", i.e. a set including everything that caused the event, but also including lots of things which didn't have any effect on it, is the contents of the event's past lightcone.

If atemporal processes (those which violate the 2nd law) are determining the state of a biological system in a non-trivial way, then how do you decide what is causing what? In other words, if both forward and backward forces are at play, then (as above with Dirac's work) the state at time t is causing future states and being caused by them if we make t constant, and if we look at the states of the system over some interval of time we can see the resulting state, but we cannot know what caused it.
...Your model of an object which inherently performs processes does not include time? You're going to have to explain that in more detail than just "too much lost information."

The philosophy of mathematics is an entirely different issue, but it is enough to note here that the models in systems biology (and elsewhere) do not always include time at all for a good reason: reduction results in too much lost information.
I'm not sure about that. The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation. (And since all models of computation are executable on a Universal Turing machine, there is lots of fun to be had.)

The converse, however, doesn't hold. You cannot "write" all of mathematics with computer programs.
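For readers unfamiliar with the proofs-as-programs (Curry-Howard) correspondence both posts refer to, here is the smallest possible illustration: a total function of type A → B plays the role of a proof of the implication A ⇒ B. This is only a sketch of the idea, not a claim that it covers all of mathematics.

```python
from typing import Callable, TypeVar

# Curry-Howard in miniature: types as propositions, programs as proofs.

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """From proofs of A => B and B => C, construct a proof of A => C:
    function composition proves the transitivity of implication."""
    return lambda a: g(f(a))

def modus_ponens(proof_a: A, impl: Callable[[A], B]) -> B:
    """Modus ponens is just function application."""
    return impl(proof_a)
```

Which fragments of mathematics this correspondence actually reaches is precisely what the two posters are disputing.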
Time exists on the lower level of interacting molecules. It can't just disappear when you move up abstractions.

That's because there is no way to incorporate time:
I don't think that's a coherent distinction.

Basically, although linear approximations are often the best choice, often enough (and particularly with biological systems) an abstract model is better than a formal approximation.
Well, yes, but none of that explains why a function from "the set of all series of states of the components of the cell" to "transformations of cell components (i.e. mappings from some cell component config to another one)" is a useful thing to consider.

It wouldn't be if life behaved like a computer. It doesn't. There are certainly numerous researchers working to develop better and better ways to deal with complexity.
...And? What do we get out of that?

We're dealing with mathematical models of complex systems, not arithmetic or linear programming.
So if I magically pop atoms into existence with the exact same position/momentum functions as they would have in a working cell, why don't I get a cell? I get a computer if I do the same thing with silicon atoms.

No, it's like saying you can't break biological systems down like this.
...Isn't that obvious? The amino acids interact with each other, after all. That's obvious even in something like the n-body problem.

You can search the literature if you wish: "Biological organisms show emergent properties that arise from interactions both among their components and with external factors. For example, the properties of a protein are not equivalent to the sum of the properties of each amino acid."
From Mazzocchi's "Complexity in biology: Exceeding the limits of reductionism and determinism using complexity theory" EMBO reports Vol. 9 No. 1 (2008)
This seems to be straight out contradicting itself. Just because something emerges from the combination of more concrete things doesn't mean it's "irreducible."

"An emergence is strong when, contrary to what happens in nominal emergence, emergent properties have some irreducible causal power on the underlying entities. In this context, macro causal powers have effects on both macro and micro-levels, and macro to micro effects are termed downward causation."
We know abstractions leak.

Depends on who you ask, but it begins with uncommon common sense:
We have some idea. After all, they have units attached.

Just like our words correspond to conceptual abstractions rather than individual instantiations, so too does mathematics often enough represent a (perhaps quantifiably) vague entity. Mathematical physics, statistical physics, computational biology, etc., are filled with notations standing in for things like lung capacity, frequency of neural spikes, intracellular translation instantiation, and on and on. With QM, we aren't actually sure what the notations are supposed to represent.
So how come there isn't a suggestion that any other sort of nanotech can't be computed or modelled mathematically?

Not really. Just look at the nanoscience or bioengineering literature. You'll find a systems perspective all throughout.
That's not the issue. At issue is what it is that we aren't observing.
We know what the wavefunction is, just not why it is - it's defined in terms of maths. You could probably get a definition entirely in terms of real numbers if you wanted.

Again, it's what we should believe in. There are fundamental disagreements about the proper interpretation of the formalisms of GTR and QM (hence the various unified theories). You weren't happy with f as a notational device for a cellular function in part because it wasn't well-defined. Neither is the wavefunction. We can observe cellular activities: when we call f the processes which are part of cellular metabolism, those processes are part of this function. With the wavefunction, we can't observe what it is supposed to represent, so we guess.
It's a set of equations describing evolution of an object.

There are even some rather fundamental disagreements about what the "standard model" actually is.
It depends upon two things (in so far as a "classical" model of causality, or a model of causality period, exists at all): temporal locality ("events [which] are well-ordered") and spatial locality. It doesn't matter if my kicking a ball happens in a "well-ordered" way if I miss and don't make contact.

...Is it? Causality depends on the idea that events are well-ordered.
I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.
1) Variance of what?
2) How can regions be timelike or spacelike?
We still have the partial ordering of events by spacetime interval
Quantum entanglement does not let you transmit any sort of signal.
Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.
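The no-signalling claim in the quote can be checked numerically from the standard QM predictions for the spin-1/2 singlet state; "Alice" and "Bob" are the usual illustrative names, and theta is the angle between their measurement axes.

```python
import math

# Textbook singlet-state joint probabilities for measurements along axes
# separated by angle theta: same outcomes with probability sin^2(theta/2),
# opposite outcomes with probability cos^2(theta/2), split evenly.

def joint_probs(theta: float) -> dict:
    s2 = math.sin(theta / 2) ** 2
    c2 = math.cos(theta / 2) ** 2
    return {("+", "+"): s2 / 2, ("-", "-"): s2 / 2,
            ("+", "-"): c2 / 2, ("-", "+"): c2 / 2}

def alice_marginal_plus(theta: float) -> float:
    """P(Alice sees +), summing over Bob's possible outcomes."""
    p = joint_probs(theta)
    return p[("+", "+")] + p[("+", "-")]

# Whatever angle Bob picks, Alice's own statistics stay 50/50, so no
# information can be sent by varying Bob's measurement setting.
```

The correlations only show up when the two outcome records are brought together and compared, which requires an ordinary (subluminal) channel.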
This sounds absurd. Why should backwards-propagating waves be necessary to prevent the particle interacting with its own field?
From the brief mention I've found on Google, they aren't - Dirac's formulation is just another way of writing a system that does propagate in one direction.
An "upper bound", i.e. a set including everything that caused the event, but also including lots of things which didn't have any effect on it, is the contents of the event's past lightcone.
Your model
I'm not sure about that.
The problem is that this is a statement about formal languages, but despite the formal nature of mathematics, there is a difference between formal expressions and mathematical formalism. And your link to the wiki page isn't saying that "math is programming" or anything really approaching that. It's an old but still-developing account of the way symbolic languages used in formal logic relate to programming languages. It isn't "programs = math" or "everything in mathematics is reducible to a program". It's not even related to some major (sub)branches of mathematics at all (see here).

The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation.
Time exists on the lower level of interacting molecules. It can't just disappear when you move up abstractions.
Well, yes, but none of that explains why a function from "the set of all series of states of the components of the cell" to "transformations of cell components (i.e. mappings from some cell component config to another one)" is a useful thing to consider.
The amino acids interact with each other, after all. That's obvious even in something like the n-body problem.
This seems to be straight out contradicting itself. Just because something emerges from the combination of more concrete things doesn't mean its "irreducible."
No, separable. Things that can't be explained by the parts, but only through the activity of interaction itself. If the parts work together and I can explain the system and its activity simply by the activity of the parts, that's a reducible system. If the interaction of the parts itself produces something separable from the parts which can only be explained through the interaction activity itself, rather than the summation of the activity of the parts, the system is not reducible.

In fact, the only things which could be irreducible in that way would be things severable from the components.
So how come there isn't a suggestion that any other sort of nanotech can't be computed or modelled mathematically?
We know what the wavefunction is, just not why it is - it's defined in terms of maths.
I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.
Newtonian answers are irrelevant if they contradict SR.
This sounds absurd... Dirac's formulation is just another way of writing a system that does propagate in one direction.
Quantum entanglement does not let you transmit any sort of signal.
Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.
We know what the wavefunction is, just not why it is - it's defined in terms of maths. You could probably get a definition entirely in terms of real numbers if you wanted.
Quantum entanglement does not let you transmit any sort of signal.
The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of value, indexed by a single real variable.
(Also, if you doubt the validity of any programming concept in this context, you'll find that computer programs are merely a different way to write mathematics.)
...Your model of an object which inherently performs processes does not include time? You're going to have to explain that in more detail than just "too much lost information."
I'm not sure about that. The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation. (And since all models of computation are executable on a Universal Turing machine, there is lots of fun to be had. )
Not too many of us suggest we have no free will in our everyday life.
However it does get kind of mystical when we wonder about what is happening, after a bout of idle mind, upstream from the underlying causes of our next thought.
Any thoughts anybody?
Where have you looked? I ask because (among other things) of the introduction to the edited volume Free Will & Consciousness: How They Might Work (Oxford University Press, 2010). In the intro paper, which is written by the volume's editors (Roy F. Baumeister, Alfred R. Mele, & Kathleen D. Vohs), is the following:

I have never seen a definition of free will that I found credible, myself.
Most look like interesting sci-fi concepts, that certainly would create wildly different worlds from our own. Some are essentially meaningless. Very few even have any relation to either freedom or will.