
Free will?

PolyHedral

Superabacus Mystic
1) Technically, the central issue concerns superluminal signals, but more than one specialist has written about the "vague" ways in which signaling is defined, which make debating the possibilities of CTCs and whether they violate SR difficult (and tend to create study after study about the hypothesized properties which may or may not exist in what is a hypothesized constraint to begin with).
No non-entanglement effect propagates faster than c. I don't really see how you can deny that one.

2) Not only is there an issue with defining what constitutes a "signal", but also with how, why, and in what way CTCs might (or might not) violate SR. See, e.g., Closed Timelike Curves via Postselection: Theory and Experimental Test of Consistency:

Abstract: "Closed timelike curves (CTCs) are trajectories in spacetime that effectively travel backwards in time: a test particle following a CTC can interact with its former self in the past. A widely accepted quantum theory of CTCs was proposed by Deutsch. Here we analyze an alternative quantum formulation of CTCs based on teleportation and postselection, and show that it is inequivalent to Deutsch’s. The predictions or retrodictions of our theory can be simulated experimentally: we report the results of an experiment illustrating how in our particular theory the “grandfather paradox” is resolved."

(I chose this one because there is a free copy available)
The issue of CTC in SR is irrelevant if the question is already resolved in GR, because GR is a strict superset of SR.

3) This concerns relativistic models, not necessarily QM (where we get into what "entanglement" actually is or is not). The problem here is well-known because the problem is the unknowns. Most of QM is formalism, and the relationship between this formalism and reality is unknown (see e.g., Plotnitsky's paper "On physical and mathematical causality in quantum mechanics" in the journal Physica E Vol. 42)
There's also Richard Feynman's (IIRC) view - there is no reality other than experiment. Assuming one can't simplify the formalism, that formalism is reality for all intents and purposes, so long as it produces the right results.

4) In contrast to CTCs and all the unknowns, biological systems lend themselves to a great deal more actual empirical study. And here we have a problem:

The amount of research in biology alone (apart from the above, see e.g., "From exact sciences to life phenomena: Following Schrödinger and Turing on Programs, Life and Causality" published in Information and Computation vol. 207) represents a rather serious problem for reductionism, but when we get to neurobiology and consciousness, the "leaders" of the movement towards fundamentally indeterministic systems and mechanisms in the brain which allow for a form of "free will" are primarily physicists.
The only actual specifics I've seen you quote on this is Rosen's modelling, and other people quoting it. I'm yet to be convinced that Rosen is actually doing the logic correctly - his modelling bears no apparent resemblance to chemistry, which is what underlies biology.

I can't tell if you are joking or seriously asserting the above.
I'm being serious. You're abusing the language if you try to have mutating objects in maths without being explicit about it. Maths has no sense of time. (For instance, as suggested: one can have an object that varies as time passes by defining it as a series of immutable objects, parametrized by time.)
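To make that concrete, a minimal Python sketch of the idea (the CellState fields and the toy dynamics are invented purely for illustration): the "changing object" is just a fixed mapping from time to immutable values.

[code]
from dataclasses import dataclass
from typing import Callable

# An "object that changes over time", written without any mutation:
# each instant gets its own immutable value, and "the object" is just
# a function from time to those values.

@dataclass(frozen=True)   # frozen = immutable
class CellState:
    atp: float
    glucose: float

# The whole history is itself a single, unchanging mathematical object:
# a mapping t -> CellState.
History = Callable[[float], CellState]

def toy_history(t: float) -> CellState:
    # purely illustrative dynamics
    return CellState(atp=1.0 + 0.1 * t, glucose=max(0.0, 5.0 - 0.5 * t))

# "Reading the object at time t" is just function application;
# nothing is ever modified in place.
print(toy_history(0.0))
print(toy_history(4.0))
[/code]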


f is not just a notational device used to describe an abstract mapping; it represents a literal function: metabolism. Unlike QM, cellular processes can (in general) be observed. The problem is that they are extremely difficult to model. This is because (unlike programs run on computers), they are fuzzy systems with functions that aren't well-defined and don't seem to fit into linear causal models.
There's no such thing as a "fuzzy" function in standard logic. (Feel free to use a fuzzy logic, but that is also clearly defined.) By using the language of mathematics and logic, you are automatically unfuzzying it.
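For what it's worth, here's a tiny Python sketch of what "clearly defined" fuzzy logic looks like (the warm() membership function and its thresholds are made up): the predicate is vague in the everyday sense, but the machinery computing membership degrees is perfectly precise.

[code]
# A fuzzy predicate is still a well-defined function: it maps each
# element to a membership degree in [0, 1].

def warm(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (toy membership function)."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15   # linear ramp between 10 and 25 degrees C

# Standard Zadeh connectives.
def fuzzy_and(a: float, b: float) -> float: return min(a, b)
def fuzzy_or(a: float, b: float) -> float:  return max(a, b)
def fuzzy_not(a: float) -> float:           return 1.0 - a

print(warm(18))                                  # ~0.53
print(fuzzy_and(warm(18), fuzzy_not(warm(30))))  # 0.0
[/code]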

Also, if f is actually describing the metabolism, then one can't really model it as inputting molecules and outputting other molecules. That ignores not only the temporal component, but also the state of the organism.

The material "parts" create a functional process or processes which are independent of any of them but are produced and affected by all of them.
Abstractions, e.g. your "functional process," do not exist in and of themselves. They are emergent phenomena of their components - take away the components, and the abstraction no longer happens. As an example, it does not make sense to speak of "wet" without a medium that is wet.

He (actually they, in that this began with Rosen but hardly stopped there, instead largely creating a new framework for biology) did not mean that. Causality can certainly be "defined mathematically". I can define it as 3. Or 42. The issue is the relationship between the operations and symbolism and reality. The problem with well-defined functions in this case is the fuzzy boundaries (providing they exist at all) between the domain and image.
Hence why I defined it in a way that matches up with how we normally think of causality. :shrug:

(Also, the domain and range can overlap. But there's no "not-quite-subset" operator in normal set theory. :p)

Here's the problem: you've objected to the use of ill-defined functions and so forth in the mathematical description, because (I think this is why) they are incompatible with computation theory.
I object because it is vague. You cannot be vague in mathematics - that defeats the point.

The mathematical formalism you object to is the way it is not because of Mikulecky's poor quantitative reasoning abilities, but for the same reason we find similar functions, schematics, and so forth elsewhere in systems biology, neuroscience, etc.: we can either make the mathematical descriptions such that a computer can deal with them, which means we lose most of what's going on in our "model", or we can depict and describe as formally as possible what so far we have no idea how to model using a computer and which may (according to one's interpretation of various proofs) be impossible to run on any Turing-equivalent machine.
If your "formal description" is on the level of the one you quoted above, no wonder you're having problems with mathematically analysing it. :shrug:

You're going to need to be more specific, as extra dimensions, multiverse theories, and so forth are not only diverse, but also do exactly what I said: deal with space-like nonlocality by constructing some alternative model, not by "restoring" locality.
Multiple-universe quantum mechanics (where the only truly real object is the universe wavefunction; all else is abstraction and bookkeeping) restores locality by making the faster-than-light effects vanish. In this version, your measuring of the entangled particle just establishes what universe you're in; it doesn't do anything to the particle itself. Nothing travels faster than light, and so causality is restored.
 

LegionOnomaMoi

Veteran Member
Premium Member
No non-entanglement effect propagates faster than c. I don't really see how you can deny that one.

The issue of CTC in SR is irrelevant if the question is already resolved in GR, because GR is a strict superset of SR.

It isn't "resolved in GR" because the reason for suspecting that CTCs are possible comes from GR:

"Although time travel is usally taken to be the stuff of science fiction, Einstein's theory of general relativity admits the possibility of closed timelike curves (CTCs)'."

That's the opening line of "Closed Timelike Curves via Postselection: Theory and Experimental Test of Consistency" which I linked to both on the journal's website (the journal being Physical Review Letters, one of the journals published by the American Physical Society), and also on an MIT server which had the entire paper.

Actually, a fair number of studies on the possibility or non-possibility of CTCs begin that way. For example, in "Localized Closed Timelike Curves Can Perfectly Distinguish Quantum States" (same journal) we have "The theory of general relativity points to the possible existence of closed timelike curves (CTCs)."

Switching journals, we find in a recent (2012) paper in Foundations of Physics (42) "Perfect state distinguishability and computational speedups with postselected closed timelike curves" the opening "Einstein’s field equations for general relativity predict the existence of closed timelike curves (CTCs) in certain exotic spacetime geometries [1–3], but the bizarre consequences lead many physicists to doubt that such “time machines” could exist. Closed timelike curves, if they existed, would allow particles to interact with their former selves, suggesting the possibility of grandfather-like paradoxes in both classical and quantum theories."

I could go on and on here, but obviously opening lines are rather meaningless if the finding is consistently that CTCs are an impossibility. However, the reason I chose the last link and quote immediately above (apart from the fact that you don't need to pay to get access) is that it begins to make clear what the debate is about. It has less to do with the speed of light and more to do with paradoxes. Weinstein's paper Superluminal Signaling and Relativity (Synthese 148; 2006) is clearer here, in that the focus is clarity itself (or rather the lack thereof):

"Though Einstein and others realized that the theory also implied that one could not accelerate massive objects up to or past the speed of light (due to the increase of mass with velocity), it was noted early on that
it might be possible to have massive objects which always moved faster than light (Sommerfeld’s ‘tachyon’).
Nonetheless, it is commonly asserted that special relativity rules out the possibility of sending signals faster than light, of ‘superluminal signaling’. However, it is well-known that there are physical phenomena perfectly compatible with special relativity in which ‘something’ travels faster than light. Thus accounts of such phenomena are usually accompanied by disclaimers explaining why the phenomenon in question cannot be used to send signals." pp. 381-382

In other words, what is possible in GR because it is general would in SR result in a number of theoretical problems, such as the classic "twin" paradox and (more relevant) the issue of causality. From the same paper quoted immediately above: "Though standard relativistic quantum field theory nominally incorporates ‘causality’ by requiring that spacelike observables commute, it is by no means obvious that this requirement is appropriate for, and even consistent with, equations of motion which exhibit superluminal propagation of disturbances. In other words, it is largely an open question how to quantize relativistic media of the sort discussed above, media which permit superluminal signaling given appropriate initial conditions." (p. 392).

So to respond to your implicit question ("I don't really see how you can deny that one") the answer is: because whatever personal views individual physicists have, as far as the literature is concerned this is an open question. In fact, for some it appears to be an answered question:

"For many years physicists believed that the existence of closed timelike curve (CTC) was only a theoretical possibility rather than a feasibility. A closed time like curve typically connects back on itself, for example, in the presence of a spacetime wormhole that could link a future spacetime point with a past spacetime point. However, there have been criticisms to the existence of CTCs and the “grandfather paradox” is one. But Deutsch proposed a computational model of quantum systems in the presence of CTCs and resolved this paradox by presenting a method for finding self-consistent solutions of CTC interactions in quantum theory" from Pati, Chakrabarty, & Agrawal "Quantum States, Entanglement and Closed Timelike Curves" (from the American Institute of Physics Conference Proceedings, 1384, 2011).


Additionally, this:

The issue of CTC in SR is irrelevant if the question is already resolved in GR, because GR is a strict superset of SR.

seems completely backward. The problematic "paradoxes" and issues with causality result from what CTCs entail given SR, and the reason anybody is discussing the issue at all is because GR suggests CTCs are possible.

And finally, superluminal causation is likewise an unresolved issue (emphasis added):
"A second sense in which Dirac's theory is nonlocal is that the theory allows for superluminal causal propagation. On the one hand, the present acceleration of a charge is determined by future fields according to (4.4). On the other hand, an accelerated charge produces a retarded radiation field which affects the total electromagnetic field along the forward light cone of the charge. The combination of the backward-causal effect of an external field on a charge and the forward-causal influence of a charge on the total field can result in causal propagation between spacelike separated events. If the radiation field due to a charge q 1at τ 1is nonzero where its forward light cone intersects the worldline of a charge q 2 , then the acceleration of q 2at τ2 will be affected by the field due to q 1 , even when the two charges are spacelike separated. Again, one could make this point in terms of interventions into an otherwise closed system: If q 1were accelerated by an external force, then the motion of a spacelike separated charge q 2would be different from what it is without the intervention. In principle (if τ 0were not so extremely small) the causal connection between spacelike separated events could be exploited to send superluminal signals. By measuring the acceleration of q 2an experimenter could find out whether the spacelike separated charge q 1was accelerated or not, and therefore it should in principle be possible to transmit information superluminally in Dirac's theory.
Dirac's theory is a relativistic theory because it is Lorentz-invariant. Thus, contrary to what is sometimes said, the Lorentz-invariant structure of a theory alone does not prohibit the superluminal propagation of causal processes. In a Lorentz-invariant theory the speed of light is an invariant but need not be an upper limit on the propagation of signals. Moreover, as Tim Maudlin (1994) has shown, we can conceive of laws of transmission between spacelike separated events that could not allow us to pick out a particular Lorentz frame as privileged, which would be in violation of the principle of relativity. Maudlin's discussion focuses on the possibility of 'tachionic' signals that directly connect spacelike separated events. Dirac's theory provides us with an alternative mechanism for superluminal signaling: a combination of subluminal forward- and backward-causal processes." Sect. 4.3

Frisch, M. (2005). Inconsistency, Asymmetry, and NonLocality: A Philosophical Investigation of Classical Electrodynamics (Oxford University Press).

And leaving pure relativity issues for a moment, there is (apart from entanglement) a less contentious component of a pseudo-superluminal effect in biosystems. All of QM concerns activity at a sufficiently "small" spacelike region, and although it is possible to use QM equations instead of their classical counterparts, it's generally considered both inconvenient and unnecessary. However, although this "unnecessary" used to include molecular processes in biological systems, the sufficiently small levels of analysis at which violations of the 2nd law of thermodynamics occur seem to include relevant processes in biological systems. In other words, to borrow a section header from Bellac's paper "The role of probabilities in physics" (Progress in Biophysics and Molecular Biology vol. 110; 2012), "Time's arrow is blurred in small (e.g. biological) systems". Even Hollowood and Shore's "The refractive index of curved spacetime: The fate of causality in QED" (Nuclear Physics B 795; 2008), which sought to preserve causality against the violations of superluminal velocities entailed in Kramers-Kronig, only "succeeded" (i.e., they said they did) at the macroscopic level. In their work to resolve "the outstanding problem" of "how to reconcile the prediction of a superluminal phase velocity at low frequency with causality", they found that "[r]emarkably, the resolution involves the violation of analyticity calling into question micro-causality in curved spacetime."

However, I'll address more of this, and how it is related to biological systems (including the new framework of "biological relativity"), in a later post.
 

LegionOnomaMoi

Veteran Member
Premium Member
I'm being serious. You're abusing the language if you try to have mutating objects in maths without being explicit about it. Maths has no sense of time. (For instance, as suggested: one can have an object that varies as time passes by defining it as a series of immutable objects, parametrized by time.)

Apart from the fact that it wasn't my language (just in case that "you're" was directed specifically at me rather than at the general use), I don't know if I understand what your problem is, precisely. After all, you linked to a wiki page on programming, not the philosophy of mathematics or even mathematical theory. And it certainly isn't required of mathematical models and metamodels.


There's no such thing as a "fuzzy" function in standard logic. (Feel free to use a fuzzy logic, but that is also clearly defined.) By using the language of mathematics and logic, you are automatically unfuzzying it.

So you've never come across the term "vague objects" in works on mathematical logic? How about relational systems theory? And your objection is, again, the problem with modelling biological systems (and often nonlinear systems in general). It would be great if everything was reducible to first-order logic. It isn't.

Also, if f is actually describing the metabolism, then one can't really model it as inputting molecules and outputting other molecules. That ignores not only the temporal component, but also the state of the organism.

f isn't the model. The entirety of the function f and its domain and image are the model. Nor is it "inputting" molecules and "outputting" others. Its inputs are the processes of the components within a biosystem like a cell which "produce" it. Its outputs are the effects this feature has on the components of the cell. What you object to is that this function f is produced by what it is producing. And that certainly flies in the face of much of mathematics and computer science. As you say, it certainly isn't typical (and it absolutely isn't "well-defined") for some function to exist and operate in the way f does here. But guess what? That's why biology isn't like a computer, why it's so unbelievably difficult to model biological processes without losing far too much through approximations, and why reductionism hasn't succeeded here.

Abstractions, e.g. your "functional process," do not exist in and of themselves. They are emergent phenomena of their components - take away the components, and the abstraction no longer happens. As an example, it does not make sense to speak of "wet" without a medium that is wet.

This is true (well, true enough). I don't see the point though, as that's the idea: emergent phenomena which don't obey either strict upward causation or downward.


I object because it is vague. You cannot be vague in mathematics - that defeats the point.

You can. Not only that, the issue of whether this vagueness also reflects an ontological vagueness is something which continues to be argued. Finally, if you look at a book on modelling theory, you will find that as the point is different, so too are the ways in which formal languages and operations are used. It need not even be specific to biology, as you'll find objections to your statement above in works like Henderson-Sellers' On the Mathematics of Modelling, Metamodelling, Ontologies and Modelling Languages (a volume from the edited series SpringerBriefs in Computer Science).


If your "formal description" is on the level of the one you quoted above, no wonder you're having problems with mathematically analysing it. :shrug:

Yes, that's why biologists are having trouble. They all are incapable of doing math. You could make a killing by showing thousands of scientists in a diverse range of fields how to properly use "maths" such that it reaches the "level" you would prefer. They may object, however, as they did that for decades (and it is still done), but the problem is that it doesn't work. However, as you seem convinced it can, you could make a fortune by showing how. I mean, seriously, we aren't talking about social "scientists", but real ones who are trained in maths & modelling. :D

Multiple-universe quantum mechanics (where the only truly real object is the universe wavefunction; all else is abstraction and bookkeeping) restores locality by making the faster-than-light effects vanish. In this version, your measuring of the entangled particle just establishes what universe you're in; it doesn't do anything to the particle itself. Nothing travels faster than light, and so causality is restored.

See in particular the quoted section:

A number of academic conferences, from one held at Cambridge University in 2001 to another at the same place (different college, same university) in 2005, but in particular one held at Stanford in 2003 resulted in the publication of a volume which shares the name of the 2003 conference: "Universe or Multiverse?". The book (edited by Bernard Carr) consists of a number of papers written by various physicists, cosmologists, etc., who were involved at these conferences, and was published by Cambridge University Press in 2007. The first paper is an introductory paper on the subject and an outline of the volume (this is standard practice) written by Carr (again, standard, as edited volumes usually contain this type of contribution by the editor or editors).

In this introduction to the volume, Carr notes the following:

"Despite the growing popularity of the multiverse proposal, it must be admitted that many physicists remain deeply uncomfortable with it. The reason is clear: the idea is highly speculative and, from both a cosmological and a particle physics perspective, the reality of a multiverse is currently untestable. Indeed, it may always remain so, in the sense that astronomers may never be able to observe the other universes with telescopes a and particle physicists may never be able to observe the extra dimensions with their accelerators...
For these reasons, some physicists do not regard these ideas as coming under the purvey of science at all. Since our confidence in them is based on faith and aesthetic considerations (for example mathematical beauty) rather than experimental data, they regard them as having more in common with religion than science. This view has been expressed forcefully by commentators such as Sheldon Glashowm Martin Gardner and George Ellis, with widely differing metaphysical outlooks. Indeed, Paul Davies regards the concept of a multiverse as just as metaphysical as that of a Creator who fine-tuned a single universe for our existence. At the very least the notion of the multiverse requires us to extend our idea of what constitutes legitimate science.
 
It's still your will. And do you need an outside force to make yourself move? lol no.

Has anyone considered that will might not be controlled? Because, it seems, the will only controls itself. What controls this controlling? Also, the planets move, and the whole universe moves, and we say that we move because of our will. Is it also that the planets move because of their free will? We may be living, but are we more or less important, even if living, than all the objects in the universe that we consider as non-living?
 

PolyHedral

Superabacus Mystic
Additionally, this:
seems completely backward. The problematic "paradoxes" and issues with causality result from what CTCs entail given SR, and the reason anybody is discussing the issue at all is because GR suggests CTCs are possible.
Well, yes, GR suggests that CTCs are possible... but if we are using GR, any result from SR is completely irrelevant because it has been obsoleted by GR. Since AFAIK GR is consistent with both CTCs and superluminal signalling, there doesn't seem to be a problem.

FYI, I ignored the first half or so of that post because you appeared to be talking about how to define causality in the context of CTCs existing, and I already know that causality is no longer a coherent option with CTCs - but CTCs don't appear to exist, so causality works fine for the moment.

What is Dirac's theory trying to describe? From what I've heard of QED, you don't need forward-propagating anything to explain anything.

All of QM concerns activity at a sufficiently "small" spacelike region, and although it is possible to use QM equations instead of their classical counterparts, it's generally considered both inconvenient and unnecessary. However, although this "unnecessary" used to include molecular processes in biological systems, the sufficiently small levels of analysis at which violations of the 2nd law of thermodynamics occur seem to include relevant processes in biological systems.
Life itself is a localized violation of the 2nd law of thermo - the law only applies invariably to closed systems. (Excepting the Poincaré recurrence theorem, but that's irrelevant on timescales we care about.) Since the future and past are defined in terms of the 2nd law of thermo, then if you make the second law not work then linear time similarly stops working. However, one can always infer a single chain of causality by the global behaviour of the entire universe. (Which is by definition a closed system.)

Apart from the fact that it wasn't my language (just in case that "you're" was directed specifically at me rather than at the general use), I don't know if I understand what your problem is, precisely. After all, you linked to a wiki page on programming, not the philosophy of mathematics or even mathematical theory. And it certainly isn't required of mathematical models and metamodels.
The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of values, indexed by a single real variable.

(Also, if you doubt the validity of any programming concept in this context, you'll find that computer programs are merely a different way to write mathematics.)

The problem is that your original specification of the domain and image sets involved things being added to and removed from those sets - this is impossible without a value to measure the time in which to do the adding and removing.

So you've never come across the term "vague objects" in works on mathematical logic? How about relational systems theory?
No, I have never come across any sort of objects that could be described as vague. (As opposed to objects that have properties that have no specific value but are known to belong to some set. Those are fine.)

f isn't the model. The entirety of the function f and its domain and image are the model. Nor is it "inputting" molecules and "outputting" others. Its inputs are the processes of the components within a biosystem like a cell which "produce" it. Its outputs are the effects this feature has on the components of the cell.
So the domain is the set of all series of states of the components of the cell, and the outputs are transformations of cell components? (i.e. mappings from some cell component config to another one)
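To pin down the type I'm paraphrasing, a quick Python sketch (the names and the toy rule inside f are invented purely for illustration, not anyone's biological model): the domain is histories of component states, the image is state-to-state transformations.

[code]
from typing import Callable, Sequence

# Concretely, the type being described:
#   domain: series of states of the cell's components
#   image:  transformations of cell components (state -> state maps)

CellComponents = dict   # e.g. {"ATP": 2.1, "glucose": 4.7} concentrations
Transformation = Callable[[CellComponents], CellComponents]
MetabolismMap = Callable[[Sequence[CellComponents]], Transformation]

def f(history: Sequence[CellComponents]) -> Transformation:
    # Toy rule: the more ATP there has been on average, the faster the
    # returned transformation consumes glucose.
    mean_atp = sum(s.get("ATP", 0.0) for s in history) / max(len(history), 1)
    def transform(state: CellComponents) -> CellComponents:
        out = dict(state)
        out["glucose"] = max(0.0, out.get("glucose", 0.0) - 0.1 * mean_atp)
        return out
    return transform

step = f([{"ATP": 1.0, "glucose": 5.0}, {"ATP": 3.0, "glucose": 4.0}])
print(step({"ATP": 3.0, "glucose": 4.0}))   # glucose reduced by 0.2
[/code]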

While this is unambiguous and time-invariant, it doesn't seem that useful. :shrug:

What you object to is that this function f is produced by what it is producing. And that certainly flies in the face of much of mathematics and computer science.
Remember, mathematical objects are not produced because there is no time in which to do the production. ;) They are also not real in any physical sense, except perhaps the "bottom" one, so saying that there is some irreducible "functional" component to a biological system is simply nonsense. It's like saying there's some functional component to a computer that means that I can't build a CPU by scratching silicon wafers into the right configurations.

As you say, it certainly isn't typical (and it absolutely isn't "well-defined") for some function to exist and operate in the way f does here. But guess what? That's why biology isn't like a computer, why it's so unbelievably difficult to model biological processes without losing far too much through approximations, and why reductionism hasn't succeeded here.
But I have no reason to believe that the function f accurately describes anything about how a cell or other bio-machinery actually works. You seem to have simply defined an arbitrary function out of thin air. Now, I am a biology layman, so perhaps it is intuitively obvious that the function is accurate, but that sounds very unlikely.

This is true (well, true enough). I don't see the point though, as that's the idea: emergent phenomena which don't obey either strict upward causation or downward.
Upward or downward causation isn't a thing at all, because you're mixing up abstractions in ways that fundamentally don't make sense. Only things which exist on the same level of abstraction influence each other - things lower down influence the components of things further up, and only via that do they influence the more abstract things. e.g. It is impossible for me to influence single molecules of ATP... yet, the cells that make up me do it billions of times.

You can. Not only that, the issue of whether this vagueness also reflects an ontological vagueness is something which continues to be argued.
What on earth would ontological vagueness even mean? That the universe itself is not well-defined? That shoots science itself in the foot; well done.

Yes, that's why biologists are having trouble. They all are incapable of doing math. You could make a killing by showing thousands of scientists in a diverse range of fields how to properly use "maths" such that it reaches the "level" you would prefer. They may object, however, as they did that for decades (and it is still done), but the problem is that it doesn't work. However, as you seem convinced it can, you could make a fortune by showing how. I mean, seriously, we aren't talking about social "scientists", but real ones who are trained in maths & modelling. :D
I'll let them know once I've finished helping the physicists and chemists invent advanced nanotechnology. :D

But isn't it rather odd that all this mathematical modelling works on systems like metamaterials and nano-medicine, and yet apparently does not and cannot work on systems that are... just the same stuff but more of it? :p

See in particular the quoted section:
Quarks. That is all. Well...

Believing in things that fundamentally cannot be observed directly is part and parcel of modern particle physics. When a highly accurate and tested theory throws a curveball, e.g. most of the universe is unobservable, unless you have a very good evidence-based reason to think otherwise, it's generally more rational to believe it. After all, it is highly accurate in all other respects. (Before you bring up string theory or something, nobody has been able to perform an experiment that differentiates string theory from standard model QM, so the standard model wins because it's the least complex explanation of currently existing evidence.)
 

LegionOnomaMoi

Veteran Member
Premium Member
[post redacted temporarily]

The response I had written was unnecessarily curt (at times, downright rude) and did more to hamper, rather than contribute to, the discussion. It was, shall we say, one of those days for me. So, my apologies to PolyHedral both for the response (if he read it) and for the delay caused by not writing what I should have to begin with.
 

LegionOnomaMoi

Veteran Member
Premium Member
Well, yes, GR suggests that CTCs are possible... but if we are using GR, any result from SR is completely irrelevant because it has been obsoleted by GR. Since AFAIK GR is consistent with both CTCs and superluminal signalling, there doesn't seem to be a problem.

I don't see how you can assert this, especially given your initial request that I frame the dynamics of biological systems in terms of light cones. These do, of course, exist in GTR (albeit in a quite different way). However, classical causation is local, whether in some Minkowski space or something which can be an approximation of it (e.g., Euclidean space). Just because GTR is an extension of sorts of SR does not mean that SR is necessarily irrelevant or that it is to GTR what classical mechanics is to quantum. For example, Lorentz transformations (or, looking at the issue a bit more globally, Lorentz invariance) are rather fundamental in SR. However, no global Lorentz transformations exist within the GTR framework. This in and of itself, however, doesn't pose much of a problem because we are generally interested in local regions of spacetime and the laws which govern and/or influence the dynamics within that region. So the fact that we cannot assert within the GTR framework that frame transformations will not involve variance is irrelevant, because for each and every spacelike region or timelike region the invariance of transformations in SR holds.

However, the conflict arises because the GTR entails no privileged frames and thus no global "time" (or no global time function). Minkowski geometry (needed for spacetime configuration and metrics in SR) admits relativized "flattened" hyperplanes. SR, therefore, may entail that any metric is an arbitrary one, but the nature of Minkowski geometry ensures any state can be treated as a flat, spacelike hyperplane. There is no such guarantee with the curvature entailed in the GTR. So we go from classical physics with a single, invariant and global time, to SR and many global time functions, to GTR with none.
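To put the local-versus-global point in the usual notation (a sketch, using the (+,-,-,-) sign convention): in GTR the interval is built from a position-dependent metric, and only in a local inertial frame at a point p does it reduce to the flat Minkowski form that SR's transformations presuppose.

[code]
ds^2 = g_{\mu\nu}(x)\, dx^\mu dx^\nu
     \;\longrightarrow\;
     \eta_{\mu\nu}\, dx^\mu dx^\nu + O\!\left(|x - x_p|^2\right)
     \quad \text{(in a local inertial frame at } p\text{)},
\qquad
\eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1).
[/code]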



FYI, I ignored the first half or so of that post because you appeared to be talking about how to define causality in the context of CTCs existing, and I already know that causality is no longer a coherent option with CTCs - but CTCs don't appear to exist, so causality works fine for the moment.

I wasn't talking about that, nor is it necessarily the case that causality cannot exist with CTCs (or that it does exist without them). Relativity, both the STR and the GTR, conflict in general with the non-locality of QM, and there are various ways in which these conflicts can be resolved (by, for example, noting that superluminal signals/travel is not, strictly speaking, a violation of relativity, in that the true foundation upon which relativity is built is the invariance of the speed of light). At the moment, however, there is no general agreement among physicists/cosmologists regarding how to reconcile the theoretical work of EPR and Bell and the experimental work of those like Gisin with a model of causality which incorporates both relativity and QM.


What is Dirac's theory trying to describe? From what I've heard of QED, you don't need forward-propagating anything to explain anything.

You don't with the Maxwell (or Maxwell-Lorentz) approach. However, Dirac's work, and the resulting reformulation of the Maxwell-Lorentz equation into Dirac's (or the Lorentz-Dirac) equation, fixed the problem inherent in the earlier approach, which had failed to account for the motion of a charge caused by its own radiation. Dirac fixed this by deriving an equation of motion for a point charge. However, the only way to do this was through a global assumption, so while Dirac's equation doesn't involve nonlocality explicitly, the only way to use it to describe state changes through time is via a global assumption: at any time t the equation relates acceleration at that moment to any other time after t (or, to put it another way, to all other times after t). Whatever forces are acting on the mass of a charged particle at any time t only partially determine the acceleration at time t, because all nonzero forces on that worldline (the worldline of the charged particle) are also affecting/determining acceleration (i.e., forces at time t + n are affecting acceleration at time t). According to this model, therefore, there is no finite time interval which includes all the causes of the charge's current acceleration. And as an accelerated charge creates a field which affects the total electromagnetic field along the charge's light cone, we then have backward causation on the charge's acceleration thanks to fields acting on it in the future, and forward causation thanks to the effect of the charge on the total field.
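For reference, the equation being described is the Lorentz-Dirac equation, which (roughly; conventions vary) in Gaussian units with metric signature (-,+,+,+) is usually written:

[code]
m\,\frac{du^\mu}{d\tau}
  = F^\mu_{\text{ext}}
  + \frac{2e^2}{3c^3}\left(\frac{d^2 u^\mu}{d\tau^2}
  - \frac{1}{c^2}\,\frac{du^\nu}{d\tau}\frac{du_\nu}{d\tau}\,u^\mu\right)
[/code]

The third-derivative (Schott) term is what, once runaway solutions are excluded, ties the acceleration at a given proper time to the external force at all later times, i.e. the "pre-acceleration" described above.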


Life itself is a localized violation of the 2nd law of thermo - the law only applies invariably to closed systems.

It only applies invariably to the entire universe.

Since the future and past are defined in terms of the 2nd law of thermo, then if you make the second law not work then linear time similarly stops working. However, one can always infer a single chain of causality by the global behaviour of the entire universe. (Which is by definition a closed system.)

How? If atemporal processes (those which violate the 2nd law) are determining the state of a biological system in a non-trivial way, then how do you decide what is causing what? In other words, if both forward and backward forces are at play, then (as above with Dirac's work) the state at time t is causing future states and being caused by them if we make t constant, and if we look at the states of the system over some interval of time we can see the resulting state, but we cannot know what caused it.

The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of values, indexed by a single real variable.

The philosophy of mathematics is an entirely different issue, but it is enough to note here that the models in systems biology (and elsewhere) do not always include time at all for a good reason: reduction results in too much lost information.

(Also, if you doubt the validity of any programming concept in this context, you'll find that computer programs are merely a different way to write mathematics.)

The converse, however, doesn't hold. You cannot "write" all of mathematics with computer programs.
 

LegionOnomaMoi

Veteran Member
Premium Member
(cont. from above)

The problem is that your original specification of the domain and image sets involved things being added to and removed from those sets - this is impossible without a value to measure the time in which to do the adding and removing.

That's because there is no way to incorporate time:
"One of the major theoretical outcomes of multilevel modelling is that causation in biological systems runs in both directions: upwards from the genome and downwards from all other levels.There are feedforward and feedback loops between the different levels. Developing the mathematical and computational tools to deal with these multiple causation loops is itself a major challenge. The mathematics that naturally suits one level may be very different from that for another level. Connecting levels is not therefore trivial. Nor are the problems simply mathematical and computational." from Noble's "Biophysics and systems biology" Phil. Trans. R. Soc. A 2010 368, 1125-1139

"emergent phenomena that occur at the level of the organism cannot be fully explained by theories that describe events at the level of cells or macromolecules. However, there are also various general features and characteristics— described in this article— that permeate all levels of organization and that allow the study of biological systems in the framework of complexity science."

From Mazzocchi's "Complexity in biology: Exceeding the limits of reductionism and determinism using complexity theory" EMBO reports Vol. 9 No. 1 (2008)

Basically, although linear approximations are often the best choice, often enough (and particularly with biological systems) an abstract model is better than a formal approximation.
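Just to give a flavour of what "causation running in both directions" means in the simplest possible terms, here is a deliberately oversimplified Python caricature (my own toy, not from Noble's paper or anyone's actual model): a lower-level quantity x and a higher-level quantity y each appear in the other's update rule, so neither level's trajectory is fixed without the other.

[code]
# Toy two-level feedback loop (illustration only):
#   x: a "lower-level" quantity (say, expression rate of some gene)
#   y: a "higher-level" quantity (say, a whole-cell property)
# x drives y (upward), and y feeds back on x (downward).

def simulate(steps: int = 50, dt: float = 0.1):
    x, y = 1.0, 0.0
    for _ in range(steps):
        dx = -0.5 * x + 0.3 * y   # downward influence: y shapes x
        dy =  0.4 * x - 0.6 * y   # upward influence: x shapes y
        x, y = x + dt * dx, y + dt * dy
    return x, y

print(simulate())
[/code]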


So the domain is the set of all series of states of the components of the cell, and the outputs are transformations of cell components? (i.e. mappings from some cell component config to another one)

While this is unambiguous and time-invariant, it doesn't seem that useful. :shrug:

It wouldn't be if life behaved like a computer. It doesn't. There are certainly numerous researchers working to develop better and better ways to deal with complexity. For example, we have Chen, Nagl, and Clack's study "A formalism for multi-level emergent behaviours in designed component-based systems and agent-based simulations" published in the edited volume From System Complexity to Emergent Properties (from the edited series Understanding Complex systems; Springer, 2009). They begin with "Emergent behaviours and functions in decentralised multi-component systems are of fundamental interest to both Complex Systems engineers and scientists. In both complex system engineering and agent-based modelling, there is a need to understand how certain interactions between designed or modelled components can group together to perform higher level functions; there is also a corresponding need to determine the lower level mechanisms underlying undesired behaviours.
There currently exists no theory of emergent behaviours in designed or modelled multi-component systems that allows specific behaviours to be described. Although general definitions of emergence in such systems exist, they only serve to distinguish emergent behaviours from non-emergent behaviours and do not address the practical issue of specifying these behaviours for empirical investigation."

Remember, mathematical objects are not produced because there is no time in which to do the production.

We're dealing with mathematical models of complex systems, not arithmetic or linear programming. From Lin's General Systems Theory: A Mathematical Approach (IFSR International Series on Systems Science and Engineering vol 12; 2002):
"It can be seen that the concept of systems is a higher-level abstraction of mathematical structures. For example, n-tuple relations, networks, abstract automatic machines, algebraic systems, topological spaces, vector spaces, algebras, fuzzy sets and fuzzy relations, manifolds, metric spaces, normed spaces, Frechet spaces, Banach spaces, Banach algebras, normed rings, Hilbert spaces, semigroups, Riesz spaces, semiordered spaces, and systems of axioms and formal languages are all systems. From these examples it is natural to consider the problem: Find properties of networks, automatic machines, topological spaces, vector spaces, algebras, fuzzy structures, systems of axioms, etc., such that in general systems theory they have the same appearance" p. 359.

All models are wrong, so the question becomes whether or not a reductionist approach is better. Using schemata which are intuitively clearer or which provide an informal but structural model is sometimes worth the sacrifice of precision, because the alternative (unambiguous, rigorous formalism) can only be achieved through poor approximation.

It's like saying there's some functional component to a computer that means that I can't build a CPU by scratching silicon wafers into the right configurations.

No, it's like saying you can't break biological systems down like this.


But I have no reason to believe that the function f accurately describes anything about how a cell or other bio-machinery actually works. You seem to have simply defined an arbitrary function out of thin air. Now, I am a biology layman, so perhaps it is intuitively obvious that the function is accurate, but that sounds very unlikely.

You can search the literature if you wish: "Biological organisms show emergent properties that arise from interactions both among their components and with external factors. For example, the properties of a protein are not equivalent to the sum of the properties of each amino acid."
From Mazzocchi's "Complexity in biology: Exceeding the limits of reductionism and determinism using complexity theory" EMBO reports Vol. 9 No. 1 (2008)


Upward or downward causation isn't a thing at all, because you're mixing up abstractions in ways that fundamentally don't make sense

Or you are projecting hierarchical abstractions within computer science and programming onto fields which use different nomenclature. From Varenne's introductory paper to the volume From System Complexity to Emergent Properties entitled "Models and Simulations in the Historical Emergence of the Science of Complexity" p. 17:

"An emergence is strong when, contrary to what happens in nominal emergence, emergent properties have some irreducible causal power on the underlying entities. In this context, ”macro causal powers have effects on both macro and micro-levels, and macro to micro effects are termed downward causation.”

From Noble's "A theory of biological relativity: No privileged level of causation" (Interface Focus 2012 2, 55-64)

"Reference to emergence leads me to a fundamental point about the limits of reductionism. An important motivation towards reductionism is that of reducing complexity. The idea is that if a phenomenon is too complex to understand at level X then go down to level Y and see, first, whether the interactions at level Y are easier to understand and theorize about, then, second, see whether from that understanding one can automatically understand level X. If indeed all that is important at level X were to be entirely derivable from a theory at level Y, then we would have a case of what I would call ‘weak emergence’, meaning that descriptions at level X can then be seen to be a kind of shorthand for a more detailed explanatory analysis at level Y. ‘Strong emergence’ could then be defined as cases where this does not work, as we found with the heart rhythm model described above. They would be precisely those cases where what would be merely contingent at level Y is systematic at level X. I am arguing that, if level Y is the genome, then we already know that ‘weak emergence’ does not work. There is ‘strong emergence’ because contingency beyond what is in the genome, i.e. in its environment, also determines what happens.
This kind of limit to reductionism is not restricted to biology. Spontaneous symmetry breaking in particle physics is a comparable case." p. 61
 

LegionOnomaMoi

Veteran Member
Premium Member
(cont. from above)

What on earth would ontological vagueness even mean?

Depends on who you ask, but it begins with uncommon common sense: "Reasons in favour of ontological vagueness are not hard to find. Firstly, and most obviously, common sense seems to speak in its favour. Tables, chairs, clouds, mountains, landscapes, persons, cats and dogs all seem to be vague – in many cases we cannot determinately specify exact spatial or temporal boundaries for such things, and the most obvious explanation for this is that they simply lack sharp boundaries. As with most material objects, there will sometimes be no fact of the matter as to what something’s exact spatial parts are and no fact of the matter as to exactly when it came into and went out of existence." p. 106 from Hyde's Vagueness, Logic, & Ontology (a volume from the edited series Ashgate New Critical Thinking in Philosophy; 2008).

Just like our words correspond to conceptual abstractions rather than individual instantiations, so too does mathematics often enough represent a (perhaps quantifiably) vague entity. Mathematical physics, statistical physics, computational biology, etc., are filled with notations standing in for things like lung capacity, frequency of neural spikes, intracellular translation instantiation, and on and on. With QM, we aren't actually sure what the notations are supposed to represent.


But isn't it rather odd that all this mathematical modelling works on systems like metamaterials and nano-medicine, and yet apparently does not and cannot work on systems that are... just the same stuff but more of it? :p

Not really. Just look at the nanoscience or bioengineering literature. You'll find a systems perspective all throughout. Take, for example, Vera & Wolkenhauer's "A System Biology Approach to Understand Functional Activity of Cell Communications Systems" in the Methods in Nano Cell Biology Vol. 90. They not only (like others) adopt a systems biology approach, but note that whatever one's approach, reductionist or not, "the simplification made in the model relates to the technological impossibility to characterize every biochemical process in the pathway, in detail" (p. 406).

Additionally, a lot of the work in nanoscience is in its infancy, and as our understanding and abilities grow, so too do the challenges, including creating stable functional microstructures through directed self-assembly, increasing our capacity to manipulate processes within natural cells, altering and increasing growth of natural tissue, etc. Most work is either theoretical or involves altering pre-formed biosystems (for current and future status, see e.g., Nanotechnology Research Directions for Societal Needs in 2020 from the series Science Policy Reports; Springer, 2011).



Believing in things that fundamentally cannot be observed directly is part and parcel of modern particle physics.
That's not the issue. At issue is what it is that we aren't observing.


When a highly accurate and tested theory throws a curveball, e.g. most of the universe is unobservable, unless you have a very good evidence-based reason to think otherwise, it's generally more rational to believe it

Again, it's what we should believe in. There are fundamental disagreements about the proper interpretation of the formalisms of GTR and QM (hence the various unified theories). You weren't happy with f as a notational device for a cellular function in part because it wasn't well-defined. Neither is the wavefunction. We can see cellular activities such that when we call f the processes which are part of cellular metabolism, they are part of this function. With the wavefunction, we can't observe what it is supposed to represent, so we guess.

(Before you bring up string theory or something, nobody has been able to perform an experiment that differentiates string theory from standard model QM, so the standard model wins because it's the least complex explanation of currently existing evidence.)

There are even some rather fundamental disagreements about what the "standard model" actually is.
 

idea

Question Everything
Free will is a question of the ultimate initial cause - what was the first cause? (free will = internally caused, vs. others' will means you do it through outward causes) I think everything - including us - is eternal, with no first cause. There was never a time when nothing existed. Something (energy, or matter, or strings, or something) has always existed - without cause - or you could say self-caused - and part of the eternal something is within us - the matter that makes up our bodies is eternal; perhaps it was once a tree, or once in a star - but it is eternal, changing forms, but not blinking in and out of existence. Everything that is eternal - without cause - without beginning - is its own cause. Some of the things we do through influence of those around us, but some of the things that we do come from our eternal being, and that is the real root of our free will.
 

PolyHedral

Superabacus Mystic
I don't see how you can assert this, especially given your initial request that I frame the dynamics of biological systems in terms of light cones. These do, of course, exist in GTR (albeit in a quite different way). However, classical causation is local, whether in some Minkowski space or something which can be an approximation of it (e.g., Euclidean space).
...Is it? Causality depends on the idea that events are well-ordered. I don't see how that's tied into locality, assuming that CTCs don't exist.

This in and of itself, however, doesn't pose much of a problem because we are generally interested in local regions of spacetime and the laws which govern and/or influence the dynamics within that region.
I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.

So the fact that we cannot assert within the GTR framework that frame transformations will not involve variance is irrelevant, because for each and every spacelike region or timelike region the invariance of transformations in SR holds.
1) Variance of what? "Invariant" is an adjective. Also, I imagine that the critical quantity, the spacetime interval, is invariant under whatever transformation GR uses to translate between reference frames.
2) How can regions be timelike or spacelike? Those adjectives only apply to 1d intervals.

So we go from classical physics with a single, invariant and global time, to SR and many global time functions, to GTR with none.
We still have the partial ordering of events by spacetime interval, though, even though there is no absolute time function.
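For reference, the invariant being appealed to here is the spacetime interval between two events (sketched in the (+,-,-,-) convention); its sign is exactly what "timelike", "lightlike", and "spacelike" label, and only timelike- or lightlike-separated events get a frame-independent order.

[code]
s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2
\quad
\begin{cases}
  s^2 > 0 & \text{timelike separation (invariant temporal order)}\\
  s^2 = 0 & \text{lightlike separation}\\
  s^2 < 0 & \text{spacelike separation (no invariant order)}
\end{cases}
[/code]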

At the moment, however, there is no general agreement among physicists/cosmologists regarding how to reconcile the theoretical work of EPR and Bell and the experimental work of those like Gisin with a model of causality which incorporates both relativity and QM.
Quantum entanglement does not let you transmit any sort of signal. Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.
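That no-signalling claim is easy to check numerically; here's a small NumPy sketch (a standard textbook construction, written from scratch for illustration): whatever basis the remote party measures in, the local reduced density matrix stays I/2, so no usable information arrives.

[code]
import numpy as np

def partial_trace_B(rho):
    # Trace out the second qubit of a two-qubit density matrix.
    return np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

print(partial_trace_B(rho))        # Alice's local state: I/2

# Bob measures his qubit along an arbitrary angle theta; his outcome
# is not communicated to Alice.
theta = 0.73
b0 = np.array([np.cos(theta), np.sin(theta)])
b1 = np.array([-np.sin(theta), np.cos(theta)])
P0 = np.kron(np.eye(2), np.outer(b0, b0))
P1 = np.kron(np.eye(2), np.outer(b1, b1))
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print(partial_trace_B(rho_after))  # still I/2: Alice sees no change
[/code]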

[silliness]
This sounds absurd. Why should backwards-propagating waves be necessary to prevent the particle interacting with its own field? From the brief mention I've found on Google, they aren't - Dirac's formulation is just another way of writing a system that does propagate in one direction.

It only applies invariably to the entire universe.
(That's because only the entire universe is a perfectly closed system. )

If atemporal processes (those which violate the 2nd law) are determining the state of a biological system in a non-trivial way, then how do you decide what is causing what? In other words, if both forward and backward forces are at play, then (as above with Dirac's work) the state at time t is causing future states and being caused by them if we make t constant, and if we look at the states of the system over some interval of time we can see the resulting state, but we cannot know what caused it.
An "upper bound", i.e. a set including everything that caused the event, but also including lots of things which didn't have any effect on it, is the contents of the event's past lightcone.

I'm aware that this isn't a very useful answer, but it is one. ;)
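As a concrete version of that "upper bound": a quick Python check (flat spacetime, units with c = 1; the sample events are arbitrary) of whether an event q lies inside or on the past light cone of an event p, i.e. whether q could have influenced p at all.

[code]
def in_past_lightcone(p, q, c=1.0):
    """p, q = (t, x, y, z). True if q is in the causal past of p."""
    dt = p[0] - q[0]
    dist2 = sum((pi - qi) ** 2 for pi, qi in zip(p[1:], q[1:]))
    return dt > 0 and (c * dt) ** 2 >= dist2   # earlier, and inside/on the cone

p = (10.0, 0.0, 0.0, 0.0)
print(in_past_lightcone(p, (3.0, 2.0, 0.0, 0.0)))   # True:  dt = 7 >= distance 2
print(in_past_lightcone(p, (9.5, 4.0, 0.0, 0.0)))   # False: too far away to matter
[/code]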


The philosophy of mathematics is an entirely different issue, but it is enough to note here that the models in systems biology (and elsewhere) do not always include time at all for a good reason: reduction results in too much lost information.
...Your model of an object which inherently performs processes does not include time? You're going to have to explain that in more detail than just "too much lost information."

The converse, however, doesn't hold. You cannot "write" all of mathematics with computer programs.
I'm not sure about that. The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation. (And since all models of computation are executable on a Universal Turing machine, there is lots of fun to be had. )

That's because there is no way to incorporate time:
Time exists on the lower level of interacting molecules. It can't just disappear when you move up abstractions.

Basically, although linear approximations are often the best choice, often enough (and particularly with biological systems) an abstract model is better than a formal approximation.
I don't think that's a coherent distinction.
It wouldn't be if life behaved like a computer. It doesn't. There are certainly numerous researchers working to develop better and better ways to deal with complexity.
Well, yes, but none of that explains why a function from "the set of all series of states of the components of the cell" to "transformations of cell components (i.e. mappings from some cell component config to another one)" is a useful thing to consider.
We're dealing with mathematical models of complex systems, not arithmetic or linear programming.
...And? What do we get out of that? :p

No, it's like saying you can't break biological systems down like this.
So if I magically pop atoms into existence with the exact same position/momentum functions as they would have in a working cell, why don't I get a cell? I get a computer if I do the same thing with silicon atoms.

You can search the literature if you wish: "Biological organisms show emergent properties that arise from interactions both among their components and with external factors. For example, the properties of a protein are not equivalent to the sum of the properties of each amino acid."
From Mazzocchi's "Complexity in biology: Exceeding the limits of reductionism and determinism using complexity theory" EMBO reports Vol. 9 No. 1 (2008)
...Isn't that obvious? The amino acids interact with each other, after all. That's obvious even in something like the n-body problem.

"An emergence is strong when, contrary to what happens in nominal emergence, emergent properties have some irreducible causal power on the underlying entities. In this context, ”macro causal powers have effects on both macro and micro-levels, and macro to micro effects are termed downward causation.”
This seems to be straight out contradicting itself. Just because something emerges from the combination of more concrete things doesn't mean it's "irreducible."

In fact, the only things which could be irreducible in that way would be things severable from the components. If any such thing existed, putting all the components together in the same way should produce a result different from what we'd expect from reality. Examples such as the n-body problem suggest that's not true.
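
To make the n-body point concrete, here is a minimal sketch (the masses, initial conditions, and units are made up purely for illustration): each body's acceleration depends on where the others are, so the resulting trajectory is a property of the interacting system rather than of any body taken alone.

```python
# Minimal 3-body sketch (arbitrary units, made-up masses and initial conditions):
# each body's acceleration depends on where the others are, so the trajectory
# belongs to the interacting system, not to any body taken in isolation.
import numpy as np

G = 1.0

def accelerations(pos, mass):
    """Pairwise Newtonian gravity; pos is an (n, 3) array of positions."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def step(pos, vel, mass, dt):
    """One velocity-Verlet step."""
    a = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * a
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

mass = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
vel = np.array([[0.0, 0.3, 0.0], [0.0, -0.3, 0.0], [0.3, 0.0, 0.0]])

for _ in range(1000):
    pos, vel = step(pos, vel, mass, dt=0.01)
```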

Depends on the who you ask, but it begins with uncommon common sense:
We know abstractions leak.

Just like our words correspond to conceptual abstractions rather than individual instantiations, so too does mathematics often enough represent a (perhaps quantifiably) vague entity. Mathematical physics, statistical physics, computational biology, etc., are filled with notations standing in for things like lung capacity, frequency of neural spikes, intracellular translation instantiation, and on and on. With QM, we aren't actually sure what the notations are supposed to represent.
We have some idea. After all, they have units attached.

Not really. Just look at the nanoscience or bioengineering literature. You'll find a systems perspective all throughout.
So how come there isn't a suggestion that any other sort of nanotech can't be computed or modelled mathematically?

That's not the issue. At issue is what it is that we aren't observing.
Again, it's a question of what we should believe in. There are fundamental disagreements about the proper interpretation of the formalisms of GTR and QM (hence the various unified theories). You weren't happy with f as a notational device for a cellular function in part because it wasn't well-defined. Neither is the wavefunction. At least with the cell we can observe the activities themselves, so that when we let f stand for the processes which make up cellular metabolism, we know what the function is supposed to cover. With the wavefunction, we can't observe what it is supposed to represent, so we guess.
We know what the wavefunction is, just not why it is - it's defined in terms of maths. You could probably get a definition entirely in terms of real numbers if you wanted.

There are even some rather fundamental disagreements about what the "standard model" actually is.
It's a set of equations describing the evolution of an object, but that's not important right now
 

LegionOnomaMoi

Veteran Member
Premium Member
...Is it? Causality depends on the idea that events are well-ordered.
It depends upon two things (in so far as a "classical" model of causality, or a model of causality period, exists at all): temporal locality ("events [which] are well-ordered") and spatial locality. It doesn't matter if my kicking a ball happens in a "well-ordered" way if I miss and don't make contact.

The reason light cones provide a possible model of causation is because of the way in which they work in SR. Specifically, they provide a possible constraint on causation specific to whatever is supposed to "cause" anything (italics in original):

"Suppose all causal processes are constrained to propagate at or below the speed of light. Then any event would be causally influenced only by events in its past light cone and would only influence events in its future light cone. Furthermore, common causes for two events must lie in the overlap of their past light cones. In such a world, space-like separated events can only be causally implicated because of common causes. If A and B are space-like separated and A would not have occurred had B not occurred then there must also be some event in A’s past light cone which would not have occurred had B not occurred, an event which may serve as the common cause of A and B.
We are now in a position to derive a sufficient condition for superluminal influences. If no causal influences are superluminal then it cannot be the case for space-like separated A and B that A would not have occurred had B not occurred and everything in A's past light cone been the same. By keeping everything in the past light cone the same we keep all causal influences the same in a causally local theory. The non-occurrence of B could not preclude A by having an influence on it. Nor could the non-occurrence of B be a reliable indication of the non-occurrence of A by being an effect of A since it does not lie in A's future light cone. Taking the contrapositive of the conditional given above yields our sufficient condition: given a pair of space-like separated events A and B, if A would not have occurred had B not occurred even though everything in A's past light cone was the same then there must be superluminal influences." p. 118
from Tim Maudlin's Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics (Third Edition; Wiley-Blackwell; 2011).
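
For concreteness, here is a toy sketch of the light-cone bookkeeping in the quoted passage, assuming Minkowski coordinates with c = 1 and the (-,+,+,+) signature; the event coordinates are invented purely for illustration.

```python
# A toy check of the light-cone bookkeeping in the quoted passage
# (Minkowski coordinates, c = 1, signature (-,+,+,+); events are made up).
def interval(a, b):
    """eta(a - b, a - b) for events a = (t, x, y, z) and b."""
    dt = a[0] - b[0]
    dx, dy, dz = a[1] - b[1], a[2] - b[2], a[3] - b[3]
    return -dt**2 + dx**2 + dy**2 + dz**2

def spacelike_separated(a, b):
    return interval(a, b) > 0

def in_past_light_cone(event, of_event):
    """True if `event` lies in (or on) the past light cone of `of_event`."""
    return interval(event, of_event) <= 0 and event[0] <= of_event[0]

A = (5.0, 0.0, 0.0, 0.0)
B = (5.0, 3.0, 0.0, 0.0)   # same t-coordinate as A, spatially separated
C = (0.0, 1.0, 0.0, 0.0)   # early enough to sit in both past cones

print(spacelike_separated(A, B))                            # True
print(in_past_light_cone(C, A), in_past_light_cone(C, B))   # True True: a common-cause candidate
```

On Maudlin's condition, a correlation between spacelike separated events with no such candidate in the overlap of their past light cones is exactly the signature of superluminal influence.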

And even if we ignore SR, by using GR we're simply defining locality using the curvature of space in GR instead of relying on Minkowski geometry (emphasis added):
"In Newtonian physics, causality is enforced by the relentless forward march of time. In special relativity things are more restrictive;, not only must you move forward in time, but the speed of light provides a limit on how swiftly you may move through space (you must stay within your forward light cone). In General Relativity, this becomes a strictly local notion, as globally the curvature of spacetime might "tilt" light cones from one place to another. It becomes possible for light cones to be sufficiently distorted that an observer can move on a forward-directed path that is everywhere timelike and yet intersects itself at a point in its "past"." p. 80 from Sean Carrol's Spacetime and Geometry: An Introduction to General Relativity (Addison-Wesley; 2004)

In other words, the whole point behind relating causation and light cones is the "special" part of special relativity: every "actor" whose "act" can be a "cause" is theoretically restricted to that "actor's" specific light cone as it exists only in special relativity. The problem the GTR creates is its effect on these light cones: they are distorted or curved enough (in theory) that the whole point of using them in a causal model defeats itself and you end up with CTCs and paradoxes. The way out of this situation is to basically treat GR as if it were SR: restrict causality via locality and ignore global light cones entirely. In other words, instead of relying directly on Minkowski geometry and its lightlike geodesics, we ignore the "global"-ness of the GTR and use a Lorentzian manifold, take a convex neighborhood within it, and construct a causal curve as a subset of that neighborhood. What this does, though, is turn GR into SR for all intents and purposes, because locally the convex neighborhood yields the same type of straight "lines" that geodesics are in Minkowski space.



The bigger problem is that none of this seems to work very well, as we have arguments (e.g., Weinstein's, which I cited in a previous post) that relativity doesn't restrict superluminal signaling, alongside arguments that we can either abandon relativistic causality entirely or reframe it to be consistent with QM (or the other way around), as we can't derive the former using the latter (which Popescu & Rohrlich showed in the paper "Quantum nonlocality as an axiom", published in the journal Foundations of Physics all the way back in 1994).

I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.

Because they aren't. GTR doesn't make SR irrelevant. And since you asked that causation be framed in terms of light cones, you are referring to SR and not GR, which doesn't have light cones.


1) Variance of what?

Frames of reference.

2) How can regions be timelike or spacelike?

?? In Minkowski geometry, you use a Minkowski metric. That's what makes it "spacetime". Let η be a Minkowski metric: vectors x such that η(x,x) > 0 are "spacelike", while vectors y such that η(y,y) < 0 are "timelike" and those with η(y,y) = 0 are "lightlike". This is sort of the whole basis for light cones...
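
A minimal sketch of that classification (signature (-,+,+,+), c = 1; the sample vectors are arbitrary):

```python
# Minimal sketch of the classification above (signature (-,+,+,+), c = 1).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric

def classify(x):
    s = x @ eta @ x                     # eta(x, x)
    if s > 0:
        return "spacelike"
    if s < 0:
        return "timelike"
    return "lightlike"

print(classify(np.array([0.0, 1.0, 0.0, 0.0])))   # spacelike
print(classify(np.array([1.0, 0.0, 0.0, 0.0])))   # timelike
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))   # lightlike
```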

We still have the partial ordering of events by spacetime interval

Unless, as seems to be the case, we have nonlocal causation:
"Any attempt to avoid superluminal causation, then, is faced with a dilemma. If stochastic laws are proposed one is impaled on the perfect correlations. This danger can be avoided by deterministic laws, but then one is
gored by Bell&#8217;s inequality. Standard quantum mechanics embraces the first horn, Bohm&#8217;s theory the second." from p. 126 of Quantum Non-Locality and Relativity (full citation above).

Quantum entanglement does not let you transmit any sort of signal.

It's not supposed to. But again, that leads us back to what a signal is, and whether or not this restriction is a product of anything more than trying to cling to relativistic causality. In fact, such constraints are placed on interpretations of QM in various ways. In part this is simply to preserve causality, but it is also motivated by theoretical constraints. However, understanding what those constraints are, and how they may or may not allow nonlocal causation, is a matter of much contention.

Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.

"Now in the quantum world, as well, we have two physical principles that come close to contradicting each other. One is the principle of causality: relativistic causality, also called &#8220;no signalling&#8221;, is the principle that no signal can travel faster than light. The other principle is nonlocality. Quantum mechanics is nonlocal, indeed twice nonlocal: it is nonlocal in two inequivalent ways. There is the nonlocality of the
Aharonov-Bohm and related effects, and there is the nonlocality implicit in quantum correlations that violate Bell&#8217;s inequality." from Rohrlich's contribution to the edited volume Probablity in Physics (Springer, 2012).

In theory (or theories), neither nonlocal condition violates the limit of the speed of light. But how? And what exactly do the commonly cited explanations mean?

"The fact that the operators commute does show that measurements made on one side do not affect the long-term statistics on the other, i.e. that no signals can be sent from one region to the other by manipulating polarizers on one side. We acknowledged this point in chapter 4, but still have found that superluminal influences are required to explain the correlations between the two regions. So citing the ETCRs shows nothing about the need for superluminal causation in quantum field theory.
The reason that quantum field theory may seem to be evidently compatible with relativity is that the non-local influences in orthodox quantum theory are carried by wave collapse, and wave collapse is commonly ignored in physics texts. But since we have been adamantly sweeping the dirt out from under rugs, we must now face squarely the question of how various interpretations of quantum theory, those with and without collapse of the wavefunction, can fare in the relativistic domain." p. 178-79 from Quantum Non-Locality and Relativity.
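
For what that acknowledged point amounts to in the standard formalism, here is a small numerical sketch: whatever unitary is applied locally to one half of an entangled pair, the reduced statistics on the other side are unchanged. (This illustrates the textbook no-signalling bookkeeping only; it does not settle the interpretive dispute about wave collapse discussed above.)

```python
# Illustration of the acknowledged point in the standard formalism only:
# local operations on one half of an entangled pair leave the other side's
# reduced statistics unchanged (so no signal is carried this way).
import numpy as np

def reduced_density_matrix_A(psi):
    """Trace out qubit B from a two-qubit state vector psi (length 4, order |AB>)."""
    m = psi.reshape(2, 2)      # m[a, b] = amplitude of |a>_A |b>_B
    return m @ m.conj().T      # rho_A = Tr_B |psi><psi|

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

theta = 0.7                    # arbitrary "polarizer setting" on B's side
U_B = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], dtype=complex)
after = np.kron(np.eye(2), U_B) @ bell

print(reduced_density_matrix_A(bell))    # 0.5 * identity
print(reduced_density_matrix_A(after))   # still 0.5 * identity
```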
 

LegionOnomaMoi

Veteran Member
Premium Member
It simply isn't the case that superluminal signaling is impossible using quantum states, at least in theory. In fact, the main reason that we get nonlocality and preserve causality is... well, we want to. So we describe "wave collapse" and similar effects so that causality is preserved. Actual measurements, difficult as they are (in various ways), however, aren't so supportive. For example, Hollowood & Shore's paper "The refractive index of curved spacetime: The fate of causality in QED" (Nuclear Physics B 795; 2008) is an attempt to deal with the Kramers-Kronig relations and the issue of causality, but although they claim to preserve causality to some extent, the curvature of space and the violation of the Kramers-Kronig dispersion relation violate micro-causality. This is the same type of violation that Stapp and others have said is involved in neurophysiological dynamics and the "mind". So the issue arises even without getting into the physics of "biological relativity" (see e.g., "A theory of biological relativity: no privileged level of causation" in Interface Focus 2; 2012, a journal published by the Royal Society). In a similar way, M. Le Bellac argues in "The role of probability in physics" (Progress in Biophysics & Molecular Biology 110; 2012) not so much that superluminal limits are violated but that "time" itself (or the arrow of time) is sufficiently "blurred" that circular causality is ensured.

This sounds absurd. Why should backwards-propagating waves be necessary to prevent the particle interacting with its own field?

I don't know, because I didn't say anything about "preventing" anything. The Lorentz-Dirac equation addressed the problem of deriving an equation of motion from Maxwell's equations for a point charge by using "renormalization". However, the resulting equation allows (or forces) solutions in which the acceleration at any time t depends on the force and acceleration at all later times.

From the brief mention I've found on Google, they aren't - Dirac's formulation is just another way of writing a system that does propagate in one direction.

The equation itself is local (as I already said). The problem is that it is local via a global assumption, and therefore solutions relate the state of the systems at time t to all other times after t (at that moment t):

"A second sense in which Dirac’s theory is nonlocal is that the theory allows for superluminal causal propagation. On the one hand, the present acceleration of a charge is determined by future fields according to (4.4). On the other hand, an accelerated charge produces a retarded radiation field which affects the total electromagnetic field along the forward light cone of the charge. The combination of the backward-causal effect of an external field on a charge and the forwardcausal influence of a charge on the total field can result in causal propagation between spacelike separated events. If the radiation field due to a charge q 1 at t 1 is nonzero where its forward light cone intersects the worldline of a charge q 2 , then the acceleration of q 2 at t 2 will be affected by the field due to q 1 , even when the two charges are spacelike separated. Again, one could make this point in terms of interventions into an otherwise closed system: If q 1 were accelerated by an external force, then the motion of a spacelike separated charge q 2 would be different from what it is without the intervention. In principle (if t 0 were not so extremely small) the causal connection between spacelike separated events could be exploited to send superluminal signals. By measuring the acceleration of q 2 an experimenter could find out whether the spacelike separated charge q 1 was accelerated or not, and therefore it should in principle be possible to transmit information superluminally in Dirac’s theory."
Frisch, M. Inconsistency, Asymmetry, and Non-Locality (Oxford University Press, 2005) p 92.

The equation is "local" but the assumptions in it allow forces which are timelike seperated from t to have causal effects at t.

An "upper bound", i.e. a set including everything that caused the event, but also including lots of things which didn't have any effect on it, is the contents of the event's past lightcone.

That's the problem. The "light cone" idea breaks down here (if it works at all).


Your model

You keep saying "your". It isn't "mine."
I'm not sure about that.

So a program can classify problems by their computational complexity, prove that they are uncomputable or undecidable when they are, and use the infinite set of integers both as an index set for infinite series and for proofs by induction? To say that mathematics can be reduced to computer programs is to ignore almost all of mathematics. First, because a great deal of mathematics isn't about logical processes but about equations which can be used in computations yet which, on their own, are simply strings of variables, useless for computing purposes. Second, there are entire mathematical disciplines (numerical analysis, for example) which deal with the approximations required by computations and the errors involved, because while humans can conceptualize infinite series, computers cannot.
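
A trivial illustration of the numerical-analysis point (the series and the tail bound are standard results; the cutoff N is arbitrary): a computer can only ever sum finitely many terms, and the discipline is largely about bounding what the truncation throws away.

```python
# A computer can only sum finitely many terms of sum(1/n^2) = pi^2/6;
# numerical analysis is partly about bounding what the truncation discards.
import math

def partial_sum(N):
    return sum(1.0 / n**2 for n in range(1, N + 1))

N = 10_000
approx = partial_sum(N)
exact = math.pi**2 / 6
tail_bound = 1.0 / N           # sum over n > N of 1/n^2 < integral from N to inf of dx/x^2 = 1/N

print(approx, exact, exact - approx, tail_bound)   # the true error sits below the analytic bound
```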


The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation.
The problem is that this is a statement about formal languages, but despite the formal nature of mathematics, there is a difference between formal expressions and mathematical formalism. And your link to the wiki page isn't saying that "math is programming" or anything really approaching that. It's an old but still-developing account of how the symbolic languages used in formal logic relate to programming languages. It isn't "programs = math" or "everything in mathematics is reducible to a program". It's not even related to some major (sub)branches of mathematics at all (see here).


Time exists on the lower level of interacting molecules. It can't just disappear when you move up abstractions.

See the paper on biological relativity cited above.

Well, yes, but none of that explains why a function from "the set of all series of states of the components of the cell" to "transformations of cell components (i.e. mappings from some cell component config to another one)" is a useful thing to consider.

Because it actually describes what's going on, but what's going on can't be computed. Either the model is wrong (and so far no one has shown this to be the case, despite several attempts), or biological systems aren't Turing equivalent.


The amino acids interact with each other, after all. That's obvious even in something like the n-body problem.

Think about the statement again for a moment: "Biological organisms show emergent properties that arise from interactions both among their components and with external factors. For example, the properties of a protein are not equivalent to the sum of the properties of each amino acid."

If the properties are not equivalent to what makes up the whole, in that properties not inherent in any of the parts "emerge", how is the principle of reduction maintained?

This seems to be straight out contradicting itself. Just because something emerges from the combination of more concrete things doesn't mean it's "irreducible."

That's exactly what it means. That's the fundamental precept of reduction: I can explain everything by explaining the activity of the individual parts, thus reducing the system to its parts without losing any explanatory power. When something "emerges" that isn't explainable except through the interactions of parts, it isn't reducible. You can take apart a computer and it won't work, sure. But nothing happens which isn't explainable by summing the activity of the parts. That's what reducible means.

In fact, the only things which could be irreducible in that way would be things severable from the components.
No, separable. Things that can't be explained by the parts, but only through the activity of interaction itself. If the parts work together and I can explain the system and its activity simply by the activity of the parts, that's a reducible system. If the interaction of the parts itself produces something separable from the parts which can only be explained through the interaction activity itself, rather than the summation of the activity of the parts, the system is not reducible.

So how come there isn't a suggestion that any other sort of nanotech can't be computed or modelled mathematically?

There is. Specifically, the model you object to so much is a serious issue for the creation of artificial biosystems, because it means these are non-computable. What we do know how to do is use biological material and manipulate it. We don't compute it.


We know what the wavefunction is, just not why it is - it's defined in terms of maths.

I can create any function and call it a "wavefunction", but the physics community would laugh, ignore me, or (at best) explain how idiotic that would be of me. The reason is that the wavefunction only exists thanks to observations, theory, and experimentation with physical reality. It is there to model in some way what's going on with that reality. If it were defined only in terms of "maths", then I could redefine it at will, just like I can any arbitrary function.
 

LegionOnomaMoi

Veteran Member
Premium Member
Unfortunately, my lack of patience (and endurance?), combined with the length of my last reply and the time it took to quote sources and attempt to cite them well enough, not to mention simply writing the response (which I'm sure is riddled with grammatical and spelling errors, and hopefully few others), resulted in a steady decrease in the quality and comprehensiveness of my treatment of certain issues. I hope to resolve that here. In particular, I wish to address the issue of superluminal transmission, including why it is so often repeated that nonlocality doesn't entail superluminal transmission of information, research which suggests that such transmission is indeed possible, and how even Maxwell's equations (or the "updated" versions thanks to the work of Lorentz and Dirac) entail nonlocal causation.

I'm not sure what this has to do with the point I was trying to make, which is that SR's answers are irrelevant if they contradict GR, just as Newtonian answers are irrelevant if they contradict SR.

I addressed (hopefully in an adequate way) the problems with thinking that "SR's answers are irrelevant if they contradict GR". Namely:

1) the entirety of the "light cone" constraint is situated squarely within the STR, not the GTR,
2) using GTR instead means abandoning the "light cone" in favor of the (needlessly more complicated) method of locally applying the principles of GR to essentially "imitate" the light cones of the STR,
3) and finally, GR does not make SR irrelevant, particularly with respect to the issue of causation and superluminal transmission. This is because SR deals with individual frames of reference. Causality is all about one "thing" acting on (or interacting with) another. With the STR, each and every "thing" (particle or person) has a particular frame of reference, special to that "thing", which (if superluminal causation is impossible) provides a causal constraint (the light cone). GR does not.


However, I said nothing in my response to the other assertion:
Newtonian answers are irrelevant if they contradict SR.


Assuming "Newtonian" here means "classical mechanics" (it usually does and I use the terms interchangably, even though technically it wasn't until Maxwell at the very least that classical mechanics became, well, classical mechanics), the idea that classical mechanics is "irrelevant" if some aspect of it contradicts, conflicts, or is inconsistent with relativity or QM is not necessarily true. This is important mainly because the reason superluminal transmission is considered impossible is related to the reason classical mechanics is considered relevant only as an approximation of modern physics, including relativistic "newtonian" or classical mechanics and quantum.

Allori & Zangi published a paper "On the Classical Limit of Quantum Mechanics" (Foundations of Physics 39; 2009) which addresses the relationship between classical mechanics and both QM and relativity. There focus is mainly on the former, but their discussion of motion makes relativity an issue as well. Simplistically, they address the open "problem of the emergence of classical mechanics from quantum mechanics" and suggest one specific resolution in the hopes that it will enable a more general one. Instead of a wave function as it is typically formulated (at least from a computational physics point of view), they suggest a replacement: a "plane wave". In reality, this "plane wave" is more or less a very spread out or "large" wavefunction, or more accurately the summation of wave packets. One issue, however, is that this effectively amounts to nonlocal causation (at least in theory) which is simply masked by describing it in terms of a local "neighborhood" of some "relevent" component of the entire "plane wave" after collapse. In a sense, then, the reason locality is preserved is mathematical sleight of the hand: by "expanding" the wavefunction into their "local plane wave", what would be nonlocal becomes local simply by describing it as such via some shuffling of coordinates.

The issue of the relation between classical mechanics and relativistic classical mechanics and QED, QFT, and/or QM, doesn't end (or begin) with the above paper. In particular, there is the issue I outlined in my last several responses with what should be a theory or model consistent with classical and relativistic causality: the Maxwell-Lorentz-Dirac equations. With that in mind:

This sounds absurd...Dirac's formulation is just another way of writing a system that does propagate in one direction.

That even "updated" equations (i.e. QED) from classical mechanics causality does sound absurd. Yet it is precisely the fact that it sounds absurd which has created such a problem (one without a generally agreed upon solution, at the moment). Selleri's paper "Superluminal signals and the Resolution of the Causal Paradox" (Foundations of Physics 36; 2006) is one example of an attempt to deal with the "problems" of possible superluminal signals. However, while most such work is restricted to interpretations of QM in some field theory, Selleri's study also deals with the fact that "[f]rom the theoretical point of view it has been shown that solutions of Maxwell's equations exist representing electromagnetic waves propagating with arbitrarily large group velocities". This is one of "two independent developments" which "make the existence of superluminal signals (SLS) probable".

Even more noteworthy is that Selleri's solution doesn't involve showing that SLSs are impossible, but rather that perhaps the main reason for thinking so (a particular interpretation of EPR and Bell) is incorrect, and even that "entanglement" itself in QFT is flawed. This highlights a rather central theme from Einstein onwards: what quickly becomes almost axiomatic within modern physics results not from experimental evidence, or even mathematical argument, but from a desire to avoid classical conceptions of causality and determinism, and from the mathematical manipulations used to do so. In an unusually blunt and straightforward declaration for "dealing" with the ways in which modern physics entails superluminal signals, Russo begins his paper "Conditions for the generation of causal paradoxes from superluminal signals" (published in Electronic Journal of Theoretical Physics 8; 2005) with the following: "In the framework of the special theory of relativity (STR), recent theoretical and experimental evidencies of superluminal motions lead to unacceptable causal loop paradoxes". He follows with the equally explicit statement of purpose: "The aim of this paper is to show the connection between the superluminal signal existence and the causality violation by means of a simple mathematical derivation".

Notice that the aim is not to show that SLSs are in some way falsified by or at least in conflict with experimental or even theoretical findings (in fact, the opening line admits the converse: they are entailed by such evidence). Instead, the attempt is to prove that such signals would violate causality (at least as it is understood classically, i.e., causality is necessarily local). But how can we use math to prove that theory and experiments don't entail a particular conclusion? Certainly, there are ways in which a mathematical proof can do exactly this, but in the context of physics (or the sciences in general), a mathematical proof can't falsify theoretical and experimental findings unless it deals with these directly (as a simple example, it can be easily shown at times that some study's findings were due to the improper use of certain statistical techniques). In this case, theory and observations were ignored, and the math deals only with what actual evidence supposedly entails: a philosophically objectionable result.

The result is directly related to this claim:
Quantum entanglement does not let you transmit any sort of signal.

Like most academic societies, the American Institute of Physics holds conferences and accepts papers from contributors for consideration (after review) in the publication of the conference proceedings. Vol. 263 of the AIP Conference Proceedings contains J. G. Cramer's paper "Reverse Causation and the Transactional Interpretation of Quantum Mechanics". Of particular interest is the second section, in which Cramer applies the transactional interpretation of quantum mechanics (TI) to EPR/Bell/etc., superluminal communication, and "backwards-in-time communication and reverse causation".

Almost as important (for the purposes of this discussion) as Cramer's findings is his description of their context (emphasis added):
"Because of its use of advanced waves and time symmetry, reverse causation lies just below the surface in the transactional interpretation. Only by carefully pairing off advanced and retarded waves in a transaction can the retrocausal effects of the advanced waves be avoided. Quantum nonlocality, first highlighted in the famous EPR paper and now generally acknowledged to be implicit in the quantum formalism, also seems to imply the possibilities of superluminal communication and reverse causation.
Over the years, however, a number of authors have presented “proofs” that superluminal observer-to-observer communication is impossible within the standard quantum formalism. The TI, which closely follows the standard formalism, is neutral on the issue of superluminal communication, but could provide an explanation of such effects if present. More recently, it has been pointed out that the “proofs” ruling out superluminal effects are tautological, with assumptions implying conclusion. Standard Bose-Einstein symmetrization has been used as a counter-example shown to be inconsistent with the assumptions of the “proof”."
 

LegionOnomaMoi

Veteran Member
Premium Member
(cont. from above)

Here again we find that the "constraints" are less a matter of evidence and more a matter of "faith", or at least a very strong resistance to yielding classical conceptions of causality. Yet as Cramer and many others show, there is a notable lack of "hard" support (including theoretical) behind assertions like these:
Regardless of how you want to interpret what entanglement is actually doing to the particles, (and if you follow the maths, you'll see quite clearly that nothing non-relativistic is going on) it's impossible to transmit any sort of non-random information through the entanglement, and so causality is preserved.
We know what the wavefunction is, just not why it is - it's defined in terms of maths. You could probably get a definition entirely in terms of real numbers if you wanted.

The latter claim is addressed in a single sentence: "While the wave function has proved to be extremely useful mathematical object for applications of quantum mechanics, its meaning remains a matter of heated debate." p. 21.

The former claim is far more central (and contentious), but it is also addressed (emphasis added):
"From the point of view of the Transactional Interpretation, the nonlocal connection between detection events at the Transmitter and Receiver stations arises because the detection transactions for the two entangled photons must share a &#8220;two-way handshake&#8221; at the LiIO3 crystal, a condition that can only be realized when the summed energies and momenta of the two photons equal that of the pump-laser photon that created them. Moreover, if a photon is detected when the Transmitter detector is in the &#8216;1&#8221; position where the slits are imaged, the matching advanced-wave confirmation in the lower arm can pass only through one slit, so no 2-slit interference is possible. However, if a photon is detected when the Transmitter detector is in the &#8220;0&#8221; position illuminated by both slits, the advanced-wave confirmation in the lower arm can pass through both slits and 2-slit interference is present. No barrier to retrocausation is apparent in this analysis, and the paradox remains." p. 25

An even more explicit (if longer and more technical) study demonstrating it is not the case that
Quantum entanglement does not let you transmit any sort of signal.

is found in Garber's "Explaining quantum entanglement as interactions mediated by a new superluminal field" (Physics Essays 24; 2011). While Garber agrees with others that the repeated attempts to retain a superluminal "constraint" and quantum entanglement fail, and indeed quotes Moffat's statement to this effect (that a "Minkowskian metric with one light cone is not adequate to explain the physics of quantum entanglement. The standard classical description of space-time must be extended when quantum mechanical systems are measured"), the solution is fairly novel. Like others, Garber begins with the recognition that relativity only really requires that the speed of light be constant. Here, however, most similarities between Garber's study and others stop, as Garber's solution is to posit a theoretical new "superluminal field" or "S-field", for which there is no evidence other than the inability to reconcile quantum entanglement and nonlocality with relativistic causality without violating "proven aspects of relativistic quantum theories" (via, e.g., the introduction of a preferred reference frame). So instead of quietly ignoring certain very inconvenient problems with standard QFT, which retains the nonlocality of QM while forbidding superluminal signals, Garber posits an extra "something" which allows superluminal information transfer, unlike those who rely on "light signals" or "gravitational waves" to explain superluminal causation.

An example of another, simpler (and perhaps more realistic from an experimental and theoretical standpoint) approach is that found in Jensen's "On using Greenberger-Horne-Zeilinger three-particle states for superluminal communication" (found in AIP Conference Proceedings vol. 1208). Here the entangled states of photons are argued to allow sender and receiver to transmit information nonlocally, and therefore faster than the speed of light.

In summary:

1) It is by no means the case that superluminal or nonlocal causation has been shown to be impossible. Quite the contrary.

2) Other studies have been published, and will almost certainly continue to be, which argue that superluminal transmission or nonlocal causation is impossible. However, it must be understood that a central component of these "proofs" is certain assumptions which are not supported by either theoretical or experimental results. Rather, they are aesthetically and/or philosophically motivated.

3) What "entanglement" means is so much of an issue that certain physicists argue all of QM must be either considered incomplete or re-formulated.

4) The relationship between SR and GR is not that of the latter making any finding of the former irrelevant, and in fact the relationship between classical mechanics and modern physics is still a matter of some debate.
 

LegionOnomaMoi

Veteran Member
Premium Member
I just received a book I purchased which has been on my "wish list" for a while, and was immediately reminded of this:

The reason I linked to a page on programming, rather than mathematics, is because mutable structures do not exist in mathematics - every structure is unvarying, because there is nothing for it to vary with. In order to have a value vary with time, you actually need to define it as a series of values, indexed by a single real variable.

(Also, if you doubt the validity of any programming concept in this context, you'll find that computer programs are merely a different way to write mathematics.)

...Your model of an object which inherently performs processes does not include time? You're going to have to explain that in more detail than just "too much lost information."

I'm not sure about that. The page on proofs-as-programs correspondence seems to suggest that all types of logic have a corresponding model of computation. (And since all models of computation are executable on a Universal Turing machine, there is lots of fun to be had. )

Why was I immediately reminded of this discussion on the relationship between math, models, computing, and programming (he asks rhetorically, referring to himself in the 3rd person in the way only a self-important, pompous, self-centered a**/"donkey" would)? Because of the way the book begins, including the title of chapter 1, "What is a computable model?", and the heading of section 1.1, "Mathematical models".

So, without further ado, from Raymond Turner's Computable Models (Springer-Verlag, 2009):

"The term mathematical model is often used to mean a model of a system built from mathematical structures and their associated notions. Such models are employed throughout engineering, the natural sciences, and the social ones. Typical examples range from models constructed from sets, number systems of various kinds, algebraic structures, especially categories, topological spaces through to probabilistic and statistical models, etc. [followed by 2 examples, specifically "the model of a particle in a potential field and the second is the Black–Scholes partial differential equation for a derivative price"]

The exact meaning of the terms involved need not detain us; our only concern is that they illustrate how mathematical notions (in these cases differential equations) are used to model natural or artificial phenomena. This very general notion of modeling partly illustrates how we intend to use the term mathematical modeling.

However, our primary use of the term is closer to that found in logic and set theory where sets, relations, and functions (conceived of as sets of tuples) are the basic building blocks for the construction of mathematical models of axiomatic systems. While this kind of modeling may be seen as a special case of the more general notion, it is distinguished by the central role it gives to sets." (italics in original; pp. 1-2)

So, there's one definition of mathematical models, but this raises the question: what about computable models, or the relationship between mathematical models and computer programs/programming?

First, the difference between mathematical and programming models (emphasis added):

"While our models will not be set-theoretic, neither will they be programming models, where the latter consists solely of programs written in some programming language. There is a crucial distinction between mathematical models and programming ones. For while it is true that the process of programming results in the construction of models from programs and data types, and so fits our desiderata for being computable, they are not, by themselves, mathematical models. In isolation, a programming language is just that, i.e., a language. And without some mathematical interpretation of its constructs, aside from the formal nature of its grammar, it has no mathematical content. And neither do the programs written in it. Such content has to be imposed upon it via a semantic account (of some kind) and it is this that renders it amenable to mathematical analysis." (p. 2).

The author continues with further distinctions and clarifications, such as data types, turning to the notion of a theory of data types (TDT) and its relationship to computable (rather than either mathematical or programming) models. Which brings us to what is central to these, indeed a defining characteristic (italics in original):

"The natural numbers not only form a very basic data type (or several), but they come equipped with the paradigm notion of computability for relations and functions. Consequently, a better mathematical characterization insists that a TDT has a model in the natural numbers in which its types and its basic operations and relations, and in particular its notion of equality, are Turing computable. More exactly, any basic operations must be Turing computable and any basic relation and any type are to be recursively enumerable. Put more succinctly, it must have a recursive model." (pp. 5-6)

And finally, the relationship (and distinctions) between programs, programming models, computable models, and mathematical models (emphases added):

"Many set-theoretic models have no recursive interpretation. However, it is often possible to replace the set-theoretic model with a computable counterpart. Of course, such a replacement does not entail mathematical equivalence. But computable models have the advantage that they can be implemented in the sense that they have a recursive interpretation. Indeed, more directly, a computable model could be coded in a version of Prolog, i.e., one with the appropriate type structure. Such interpretations offer the possibility of building a prototype implementation of the mathematical model.

In contrast, the formal and conceptual relationships between set-theoretic models and actual implementations are often obscure and/or complicated. Mostly one can only implement an approximation or simulation of the mathematical model. Both theoretically and practically, this is far from satisfactory. Computable models are implementable specifications, and so there is a precise and direct connection between the specification and the implementation." (pp. 7-8)
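
As a toy illustration of the quoted idea (my own made-up example, not Turner's): a data type whose elements, basic operations, and equality are all Turing computable, sketched here as Peano naturals.

```python
# Toy data type (not Turner's example): Peano naturals coded as nested tuples,
# with Turing-computable basic operations and equality.
ZERO = ()

def succ(n):
    return (n,)

def add(m, n):
    """Addition by structural recursion on the second argument."""
    return m if n == ZERO else succ(add(m, n[0]))

def equal(m, n):
    """Equality: both zero, or both successors of equal naturals."""
    if m == ZERO or n == ZERO:
        return m == n
    return equal(m[0], n[0])

two = succ(succ(ZERO))
three = succ(two)
print(equal(add(two, three), succ(succ(succ(two)))))   # True: 2 + 3 == 5
```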
 

LuisDantas

Aura of atheification
Premium Member
Not too many of us suggest we have no free will in our everyday life.

However it does get kind of mystical when we wonder about what is happening, after a bout of idle mind, upstream from the underlying causes of our next thought.

Any thoughts anybody?
:human:


I have never seen a definition of free will that I found credible, myself. Most look like interesting sci-fi concepts that would certainly create wildly different worlds from our own. Some are essentially meaningless. Very few even have any relation to either freedom or will.
 

LegionOnomaMoi

Veteran Member
Premium Member
I have never seen a definition of free will that I found credible, myself.
Where have you looked? I ask because (among other things) of the introduction to the edited volume Free Will & Consciousness: How They Might Work (Oxford University Press, 2010). In the intro paper, which is written by the volume's editors (Roy F. Baumeister, Alfred R. Mele, & Kathleen D. Vohs), is the following:

"Free will and consciousness may seem so familiar to you as to need no introduction. Even so, it has been argued that free will is an illusion and that consciousness does little work. One of us (Alfred Mele) began a recent book defending the thesis that scientific arguments for the nonexistence of free will are unpersuasive and that conscious intentions and decisions do make a difference, by quoting the following e-mail message; it fortuitously arrived out of the blue while he was writing the book:

[they follow the above with the email, but it is sufficient here to say that it was someone who had watched a DVD by Wolinsky and was disturbed by what appeared to be scientific "proof" that free will doesn't exist]

This is not an isolated incident. The belief that scientists have shown that there is no such thing as free will is disturbing, and two of us (Roy Baumeister and Kathleen Vohs) have produced evidence that lowering people's subjective probability that they have free will increases misbehavior - for example, lying, cheating, stealing, and socially aggressive conduct (Baumeister, Masicampo, & DeWall, 2009; Vohs & Schooler, 2008).

One thing we worry about is that most people who read in newspapers or hear on DVDs that scientists have shown that free will is a myth might understand the expression "free will" very differently than those scientists do. Consider the following from neuroscientist P. Read Montague:

Free will is the idea that we make choices and have thoughts independent of anything remotely resembling a physical process. Free will is the close cousin to the idea of the soul - the concept that 'you', your thoughts and feelings, derive from an entity that is separate and distinct from the physical mechanisms that make up your body. From this perspective, your choices are not caused by physical events, but instead emerge wholly formed from somewhere indescribable and outside the purview of physical descriptions. This implies that free will cannot have evolved by natural selection, as that would place it directly in a stream of causally connected events. (2008, p. 584)
If, instead of reading that scientists have demonstrated that there is no "free will," people were to read that there is scientific proof that we have no magical, nonphysical, nonevolved powers, would they respond with despair? We doubt it, and there is evidence that most people do not understand free will in the way Read Montague does. Rather, the layperson's belief in free will is closer to the legal concepts, namely that the person could do something different from what he or she eventually does and is mentally competent to know the difference between right and wrong (e.g., Baumeister, 2008; Nahmias, Morris, Nadelhoffer, & Turner, 2006; Nahmias, in press; Paulhus & Margesson, 1994)."

(pp. 1-2).

In other words, not only is the idea of "free will" usually used in a very ill-defined way, and/or in a religious context, but there is an important distinction between how the term is used in academic literature compared to general use. Perhaps even more important is the way certain studies about the relationship between "choice" and the brain are reported in mainstream media: the reports are invariably inaccurate and sometimes outright distortions.

Most look like interesting sci-fi concepts, that certainly would create wildly different worlds from our own. Some are essentially meaningless. Very few even have any relation to either freedom or will.

Could you provide some examples (i.e., quotes or paraphrases from some of the sources in which you've seen these types of definitions)? Thanks.
 

PolyHedral

Superabacus Mystic
Legion, can you explain how functional processes happen in a simpler example? Such as the n-body gravitational problem?

(because reading thousands of words either way is getting tiresome, and I suspect it's not actually getting us anywhere useful.)
 