
How do you exactly define 'free will'?

JerryL

Well-Known Member
Let us go back to the beginning of our discussion.

I accept not everything I do is a matter of choice. Some things occur as a reaction. Like a calculator. You push keys and get a result. Now noting the mind can work in this fashion doesn't mean it only works in this fashion.

It doesn't *mean* that it works in "this fashion", but it does.

In this case "this fashion" means that the outcome is the inevitable result of the initial conditions.

A logic gate could have one value or another; but it will always have the value that the conditions insist it must have. A rock could move or stay still, but it will always do whichever one the conditions dictate. Your brain can be in a nigh-infinite number of states; but it will always be in the state the conditions dictate it must be in. Your soul can do whatever, but will always do whatever the conditions dictate.

Or there's a random element.
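JerryL's logic-gate point can be put concretely: a gate is a pure function of its inputs, so the same conditions always yield the same output. A minimal sketch (illustrative only, not from the thread):

```python
def nand(a: int, b: int) -> int:
    """A NAND gate: the output is fully determined by the input conditions."""
    return 0 if (a and b) else 1

# Same conditions, same result, every time:
truth_table = [nand(a, b) for a in (0, 1) for b in (0, 1)]
print(truth_table)  # [1, 1, 1, 0]
```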
 

Nakosis

Non-Binary Physicalist
Premium Member
It doesn't *mean* that it works in "this fashion", but it does.

In this case "this fashion" means that the outcome is the inevitable result of the initial conditions.

A logic gate could have one value or another; but it will always have the value that the conditions insist it must have. A rock could move or stay still, but it will always do whichever one the conditions dictate. Your brain can be in a nigh-infinite number of states; but it will always be in the state the conditions dictate it must be in. Your soul can do whatever, but will always do whatever the conditions dictate.

Or there's a random element.

Logic gates also have an indeterminate state. This basically means they have been set to ignore input conditions.

The human brain can at least do this. Some event occurs which would normally cause anger. I can choose to ignore those inputs that would normally cause me to act in anger. I can also choose not to ignore those inputs.

So the brain can choose to set for itself certain conditions before a decision is made. Until the brain chooses to set those conditions the result is not determined. Consciousness allows an individual to control/oversee their responses. The human brain is very flexible and recursive. You cannot give it a set of inputs and determine what the output will be. And randomness or indeterminate internal processes may affect our will.

The question as I see it is whether we can control our wants and desires and by doing so alter our will. The resultant decision is an internal process which the brain has some control over. Because of that, input does not determine output.

Like a calculator that can choose to ignore the 2 key.
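Nakosis's "calculator that ignores the 2 key" can be sketched as a device that sets a condition on itself (an input mask) before processing keys. The class and names below are invented for illustration:

```python
class MaskingCalculator:
    """A toy calculator that can gate its own inputs before processing them."""

    def __init__(self):
        self.ignored = set()   # self-imposed conditions
        self.buffer = ""

    def ignore(self, key: str):
        self.ignored.add(key)  # the device sets a condition on itself

    def press(self, key: str):
        if key not in self.ignored:
            self.buffer += key

calc = MaskingCalculator()
calc.ignore("2")
for key in "123":
    calc.press(key)
print(calc.buffer)  # "13": the '2' input had no effect
```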
 

Thief

Rogue Theologian
Then no one can be trusted.
I've had people I barely know tell me they trust me, just by looking at me. I'll politely nod my head while thinking to myself they are idiots. Not because I would intend them harm but because they have no good reason for that trust.

and you now suppose an example of poor performance in judgment is a rebuttal to my post?!
THAT is a line of reasoning NOT to be trusted!
 

LegionOnomaMoi

Veteran Member
Premium Member
But more to the point: your post says nothing about free will.
I'll start with the most relevant part:
Were you correct that the brain is basically a calculator, then it would be not only equivalent to a finite-state machine but necessarily deterministic. That you are wrong doesn't mean we have free will, but it does rule out an argument against it.

So you've never heard of fuzzy logic nor looked at an artificial neural net.
The first logic text I bought, after taking an intro to symbolic/mathematical logic class as an undergrad, was Merrie Bergmann's An Introduction to Many-Valued and Fuzzy Logic: Semantics, Algebras, and Derivation Systems, and my first neural network text (Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory) had a foreword by Zadeh and a chapter devoted to fuzzy sets and fuzzy neural networks. My "album" here has scanned & cropped images from that book from an explanation of ANNs in a post I wrote maybe two years ago (I quoted it in a response to you, albeit hidden under a spoiler button so as not to detract from the main point). By the time I started grad school I was convinced that fuzzy set theory was the way to go via its incorporation in probability, automata, statistics, Likert-like data analysis, Bayesian inference, expert systems, support vector machines, cluster analysis, genetic algorithms, etc.
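For readers unfamiliar with the fuzzy sets mentioned here: membership is a degree in [0, 1] rather than a crisp yes/no. A minimal sketch using the standard triangular membership function (a common textbook form, not taken from Bergmann's book):

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership in a triangular fuzzy set rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A fuzzy "warm" peaking at 25 degrees: 20 degrees is warm to degree 0.5,
# not simply warm or not-warm.
print(triangular(20.0, 15.0, 25.0, 35.0))  # 0.5
print(triangular(25.0, 15.0, 25.0, 35.0))  # 1.0
```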

I've since become far less convinced that there is some single tool one should always reach for. Quite apart from the general computational complexity (increased time interval for learning, additional techniques to select the appropriate number of fuzzy rules, etc.) there is the more general issue of dimensionality. Nor are fuzzy sets the only extensions of the reals:
[image: comparison of complex-valued and fuzzy classifiers]

The complex-valued classifiers all outperformed the fuzzy classifier in this case. Also, fuzzy sets aren't particularly extendable to physical instantiations of neural networks (see e.g.,
Perelman, Y., & Ginosar, R. (2008). The Neuroprocessor: An Integrated Interface to Biological Neural Networks. Springer &
Rasche, C. (2005). The making of a neuromorphic visual system. Springer; see also
Kozma, R., Pino, R. E., & Pazienza, G. E. (2012). Advances in neuromorphic memristor science and applications (Springer Series in Cognitive and Neural Systems Vol. 4) for a thorough, more general treatment of a specific kind of neuromorphic technology).


Indeed: you've never looked at failure-tolerant insect AI.
That would be most swarm intelligence algorithms (I've kept up with ICSI since the first international conference, and the same with SEMCCO). The above is just some terms put together; one might find something close to it actually used (Wedde, H. F., Farooq, M., & Zhang, Y. (2004). BeeHive: An efficient fault-tolerant routing algorithm inspired by honey bee behavior. In Ant colony optimization and swarm intelligence (pp. 83-94). Springer). Meanwhile, those of us who actually work with such systems keep up with more general trends both in terms of computational intelligence paradigms and soft computing/machine learning. Natural Computing Series puts out monographs on every manner of bio-inspired computing as well as more theoretical, mathematical, or computational analyses of these (as opposed to application).

Your statements aren't just irrelevant to what I said, they're pretty irrelevant for anybody working in or interested in AI/computational intelligence or machine learning, as the first is so general as to be useless and the second is fairly meaningless.
 

LegionOnomaMoi

Veteran Member
Premium Member
Ignoring that that study has been refuted:
It hasn't. In fact it is part of an ongoing debate largely characterized by those who produce supposed models of (M,R)-systems and subsequent criticisms of the ways in which these computable models or expressions of closure to efficient causation are inaccurate or exploit certain ambiguities to make more grandiose claims than they actually support:
"Efforts to mathematically disprove Rosen's contention that an organism cannot have simulable models have not resolved the question. Louie (2007) has been highly critical of some of the arguments (Chu and Ho, 2006), and, as we have discussed in Section 3, there are problems also with some of the others. Other supposed contradictions can be attributed to the use of loose definitions in place of Rosen's very precise ones. As noted above, for example, the definition of computability used by Mossio et al. (2009) does not require termination of the program in a finite number of steps. Their definition of computability is widely accepted, but a more serious problem is their representation of Rosen's scheme with an incorrect set of equations. Similarly Wells (2006) replaced Rosen's precise definition of a mechanism by a vague one based on everyday ideas of what a machine is, and used it to claim that Rosen's conclusions were mistaken."
Luz Cárdenas, M., Letelier, J. C., Gutierrez, C., Cornish-Bowden, A., & Soto-Andrade, J. (2010). Closure to efficient causation, computability and artificial life. Journal of theoretical biology, 263(1), 79-92.

I didn't say the brain was a computer. In fact, in what you quoted I didn't mention a brain at all.

What I have said is that a brain makes choices like (IOW: under the same definition of "choice") a calculator.
There is no relevant difference between a calculator and a computer. They are both equivalent in that they are reducible to Turing machines. Calculators compute, and computers calculate. Your comparison of the brain to a calculator is just harder to understand (in terms of anything remotely similar, as computers can be made to seem far more like minds than calculators). However, it's less bizarre than the idea that calculators make choices, I suppose.
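The equivalence claim can be made concrete: calculator and computer alike reduce to the same formal object, a transition function from (state, input) to state. A toy four-function "calculator" as such a machine (an illustrative sketch, not anyone's argument in the thread):

```python
def apply_op(acc, pending, op):
    """Combine the accumulator with the pending operand."""
    if op == "+":
        return acc + pending
    if op == "-":
        return acc - pending
    return pending  # no pending operator: the operand becomes the accumulator

def calc_step(state, key):
    """One deterministic step: (state, input) -> state, like any computer."""
    acc, pending, op = state
    if key.isdigit():
        return (acc, pending * 10 + int(key), op)
    if key in "+-":
        return (apply_op(acc, pending, op), 0, key)
    if key == "=":
        return (apply_op(acc, pending, op), 0, None)
    return state

state = (0, 0, None)
for key in "12+30=":
    state = calc_step(state, key)
print(state[0])  # 42
```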
 

JerryL

Well-Known Member
So the brain can choose to set for itself certain conditions before a decision is made. Until the brain chooses to set those conditions the result is not determined.

That "choice" to "set conditions" is, itself, the result of conditions. So you merely move the goalposts (turtles all the way down).

A computer can "choose" which section of code to run. Until it chooses that, the results of the code are not determined.

Consciousness allows an individual to control/oversee their responses. The human brain is very flexible and recursive. You cannot give it a set of inputs and determine what the output will be. And randomness or indeterminate internal processes may affect our will.
Either there's randomness (which is not will), or there is not randomness (which means you could determine output if you had an accurate model and enough information). Now "enough information" may be beyond the realm of realism; but that's not really the point.


Like a calculator that can choose to ignore the 2 key.
Like your brain can choose to ignore getting hit on the knee with a hammer?

Don't confuse "complex" with "fundamentally different". That would, again, be pushing the metaphor too far.
 

JerryL

Well-Known Member
I'll start with the most relevant part:
Were you correct that the brain is basically a calculator, then it would be not only equivalent to a finite-state machine but necessarily deterministic. That you are wrong doesn't mean we have free will, but it does rule out an argument against it.
Am I seriously the only one of us two who knows what a metaphor / simile is and how to use one?

Me: A blimp flies by being lighter than the fluid (air) it's in; whereas a plane is like a bird.
You: If a plane were like a bird it would lay eggs!

And I'm not using the calculator analogy as part of my proof. I'm using the calculator analogy to address a particular set of equivocation fallacies.

Your statements aren't just irrelevant to what I said,
I know that feeling.

It hasn't. In fact it is part of an ongoing debate
Your earlier citation disagreed with that claim. More to the point: you've just admitted that it's not established (being debated).


There is no relevant difference between a calculator and a computer.
Of course there are. Calculators don't have alphanumeric keyboards (usually), nor do they run Steam!

You seriously cannot manage a metaphor, but can make assertions like "no relevant differences"!?!
 

LegionOnomaMoi

Veteran Member
Premium Member
Am I seriously the only one of us two who knows what a metaphor / simile is and how to use one?

This particular metaphor is so well-known and has posed such problems that if you survey the literature you can find repeated appeals from various fields to abandon such naïve analogies:

Inadequacies of the computer metaphor

"Metaphors for the mind or the brain go through fashions, usually based on the prominent technology of the day...The computer metaphor has in recent decades been so all-pervasive that its tenets have ceased to be made explicit. It is all the more dangerous when it is taken for granted, and left out of any debate. These assumptions have affected the directions taken in connectionist research, which could be (but rarely are) fitted into a different metaphor."
(source)

The following is interesting not just as another example of the worse-than-pointless and misleading analogies between brains and computers (or calculators), but also because it draws upon cognitive linguistics and the importance of metaphors in the cognitive sciences (and in particular in cognitive linguistics):

Randall, W. L. (2007). From Computer to Compost Rethinking Our Metaphors for Memory. Theory & psychology, 17(5), 611-633.

“while the “weak” AI program can claim some spectacular successes…this has come at the price of giving up on the strong AI objective of producing a truly artificial, conscious, intelligence. The latter research program, while certainly possible in principle, seems at the moment to have ground to a screeching halt. One of the possible explanations for the failure of strong AI is precisely that its attempts at “reverse engineering” the brain are too confidently based on the idea that the brain is analogous to a machine (to be precise, to an electronic computer), as opposed to an organic product of blind evolution."
Boudry, M., & Pigliucci, M. (2013). The mismeasure of machine: Synthetic biology and the trouble with engineering metaphors. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 660-668. (emphases added)

And so on..

Drawing the analogy you do illuminates nothing, misleads in multiple ways, and mischaracterizes much. It isn't just wrong, but is an inaccurate comparison that enables one (having assumed there is any merit to such nonsense) to more easily make the leap to the idea of humans (and many other species) as being simply machines that process input and produce output.

Me: A blimp flies by being lighter than the fluid (air) it's in; whereas a plane is like a bird.
You: If a plane were like a bird it would lay eggs!

You are the one suggesting I misunderstand basic, elementary aspects of what I do, and in addition that I am inadequately familiar with your meaningless nonsense about "failure tolerant insect AI", which is at best a confused use of terminology and at worst an indication of quote-mining sources you don't understand. I don't really care which, as the point is you responded to a post with utterly meaningless rejoinders far more trivial than my objection to your metaphor as worse than useless.

And I'm using the calculator analogy as part of my proof.
A proof requires the capacity for formal representation. I've asked you before to represent your arguments formally in order to indicate not just that you actually have an argument, but that you are even capable of formulating a real proof. So I'll be less constraining: feel free to use natural language providing it is capable of easy relation to real, formal proofs (i.e., the use of phrases such as "there exists" or "for all" or "if and only if", etc.).


I'm using the calculator analogy to address a particular set of equivocation fallacies.

You're misusing it, and you haven't demonstrated that you actually understand what that fallacy really is any more than you understand neural networks or logic.


Your earlier citation disagreed with that claim.
Wrong.


More to the point: you've just admitted that it's not established (being debated).

It's established, certainly. However, what the fact that it is established entails is under debate. The nuances of such disagreements are irrelevant here as your little calculator analogy still fails and necessarily fails.

Of course there are. Calculators don't have alphanumeric keyboards (usually), nor do they run Steam!

Mathematically and logically they are identical and are treated as such in the computer sciences from a computational/computability framework (which is practically computer science itself). What matters is Turing equivalence.

You seriously cannot manage a metaphor, but can make assertions like "no relevant differences"!?!

I can manage the metaphor just fine. It's just uninformed and wrong; even those who would make similar metaphors, and who know what they are talking about, wouldn't do so as you have.
 

LegionOnomaMoi

Veteran Member
Premium Member

That "choice" to "set conditions" is, itself, the result of conditions.


It's the result of a functional process that is irreducible and closed to efficient causation. Even so "simple" a system as a sandpile doesn't fit into your little explanation:

"Granular media are neither completely solid-like nor completely liquid-like in their behaviour – they pack like solids, but flow like liquids. They can, like liquids, take the shape of their containing vessel, but unlike liquids, they can also adopt a variety of shapes when they are freestanding. This leads to the everyday phenomenon of the angle of repose, which is the angle that a sandpile makes with the horizontal...in the intervening range of angles, the sandpile manifests bistability, in that it can either be at rest or have avalanches flowing down it. This avalanche flow is such that all the motion occurs in a relatively narrow boundary layer, so that granular flow is strongly non-Newtonian...
The athermal nature of granular media implies in turn that granular configurations cannot relax spontaneously in the absence of external perturbations. This leads typically to the generation of a large number of metastable configurations; it also results in hysteresis, since the sandpile carries forward a memory of its initial conditions. Bistability at the angle of repose is yet another consequence, since the manner in which the sandpile was formed determines whether avalanche motion will, or will not, occur at a given angle.
The above taken together, suggest that sandpiles show complexity; that is, the occurrence and relative stability of a large number of metastable configurational states govern their behaviour."
Mehta, A. (2007). Granular physics. Cambridge University Press.

The final configuration is logically consistent with physical laws but is not itself a consequence of only laws of physics. To a certain extent, sandpiles (like crystalline structures) self-determine their configuration states when they are subjected to external pressures, temperatures, etc., that force them to reconfigure.
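The memory/metastability point can be illustrated with the standard Bak-Tang-Wiesenfeld toppling rule (a common sandpile model; this sketch is mine, not from Mehta's book): piles built differently relax to different stable configurations, so the final state carries a memory of how the pile was formed.

```python
def relax(grid, threshold=4):
    """Topple any site holding >= threshold grains until the pile is stable.

    Each topple removes 4 grains and passes one to each in-bounds neighbour
    (grains toppled off the edge are lost).
    """
    n = len(grid)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= threshold:
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
                    changed = True
    return grid

# The same 8 grains, deposited in two different ways:
a = relax([[8, 0], [0, 0]])   # all dropped on one site
b = relax([[2, 2], [2, 2]])   # spread evenly
print(a, b)  # a != b: the stable state remembers the pile's history
```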

So you merely move the goalposts (turtles all the way down).
Alternatively, one can stop playing word games and defining effects as necessarily effects, contradicting both determinism as a philosophy and classical and modern physics.

A computer can "chose" which section of code to run. Until it chooses that, the results of the code are not determined.
It can't. Computers possess no ability to define how their states change based on their previous internal states and some given input, such that the act of information processing is represented by state changes and governed by these. A simpler way of putting this is that computers can store memory and can manipulate meaningless input according to programmed rules. Living systems respond to input by altering not only their states but how their states will change, in one and the same way. For living systems with brains, the state changes (which are continuous) are determined by an emergent function which takes as inputs correlations in and among neural networks and at the same time determines how these networks change given the same input. Systems biologists call this a functional process, complex systems specialists might call it an emergent property, but insofar as it relates to determinism and free will the best term is circular causality (and/or closure to efficient causation).
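The contrast Legion draws can be caricatured in code: a fixed-rule machine versus a system whose inputs also rewrite its future response rule. This is only a schematic stand-in for "circular causality" (real neural systems are continuous), and all names are invented:

```python
def fixed_machine(x, gain=2):
    """A fixed-rule machine: the rule never changes."""
    return gain * x

class SelfTuning:
    """A system whose response rule is itself altered by the inputs it processes."""

    def __init__(self):
        self.gain = 2         # part of the state AND part of the rule

    def step(self, x):
        out = self.gain * x
        self.gain += x        # the input also rewrites the future rule
        return out

s = SelfTuning()
print([fixed_machine(1), fixed_machine(1)])  # [2, 2]: same input, same output
print([s.step(1), s.step(1)])                # [2, 3]: same input, different output
```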

Either there's randomness (which is not will), or there is not randomness (which means you could determine output if you had an accurate model and enough information).
1) There is no single formal definition of random, let alone a common parlance definition.
2) The absence of randomness absolutely does not necessarily mean you could determine output. All of quantum physics rests upon this. Were it deterministic, it wouldn't be quantum physics, it would be classical (only we'd simply call it physics). Were it random, we couldn't use it. It is probabilistic, and in such a way that observers determine outcomes (so much so that Stapp based his quantum theory of mind largely on this).
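The "probabilistic, neither determined nor mere noise" distinction can be sketched: individual trials are undetermined, yet the long-run statistics are lawlike, in the manner of Born-rule probabilities. The probability value below is illustrative only:

```python
import random

rng = random.Random(0)  # seeded only so the sketch is reproducible
p_up = 0.36             # an illustrative fixed probability, not from any experiment

# Each individual "measurement" is undetermined...
trials = ["up" if rng.random() < p_up else "down" for _ in range(100_000)]

# ...yet the ensemble obeys a stable law:
frequency = trials.count("up") / len(trials)
print(frequency)  # close to 0.36 over many trials
```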
 

JerryL

Well-Known Member
This particular metaphor is so well-known and has posed such problems that if you survey the literature you can find repeated appeals from various fields to abandon such naïve analogies:
This will come as a shock. Brace yourself.

Just because someone else has made a metaphor between a brain and a computer, doesn't mean I'm making the *same* metaphor.

So. Straw man. You are hacking a metaphor fundamentally different from my own. (which, you may recall, was never a proof but an illustration to avoid equivocation... I mentioned so in my last post).

I can manage the metaphor just fine. It's just uninformed and wrong; even those who would make similar metaphors, and who know what they are talking about, wouldn't do so as you have.
Then you are trolling by pretending to not.


1) There is no single formal definition of random, let alone a common parlance definition.
I defined it by inference in my original claim.

The common theme in your responses is that you seem to have difficulty with abstractions, inference, metaphor, thought experiment, etc.

2) There absence of randomness absolutely does not necessarily mean you could determine output. All of quantum physics rests upon this. Were it deterministic, it wouldn't be quantum physics it would be classical (only we'd simply call it physics). Were it random we couldn't use it. It is probabilistic, and in such a way that observers determine outcomes (so much so that Stapp based his quantum theory of mind largely on this).
So deterministic.
 

LegionOnomaMoi

Veteran Member
Premium Member
This will come as a shock. Brace yourself.

Just because someone else has made a metaphor between a brain and a computer, doesn't mean I'm making the *same* metaphor.
1) My apologies for assuming you had the familiarity with the cognitive sciences a freshman who'd had a single course on the subject would have had.
2) The fact that your metaphor is distinct from that which pervaded these fields doesn't make it any less useless, misleading, pointless, and (apparently) indicative of a rather thorough ignorance of any relevant subjects, research, or fields here.

So. Straw man. You are hacking a metaphor fundamentally different from my own.
No. The fact that your metaphor lacked even the sophistication and nuances of those far superior doesn't make your own any less bereft of value and complete in its utter incapacity to demonstrate anything other than a fundamental misunderstanding of the brain, computers, computability, and logic. Essentially, I assumed your metaphor was more sophisticated than it was, yet was wrong. It turns out that it wasn't at all sophisticated or informed, but this only makes it less of an argument.

(which, you may recall, was never a proof but an illustration to avoid equivocation... I mentioned so in my last post).

You've never offered anything remotely resembling a proof.


I defined by inference in my original claim.

This is logic. The fact that you can dream up irrelevant definitions for logical systems that don't exist in order to speak of proofs you don't offer which would have to be constructed in formal systems that don't exist makes your definitions the equivalent of Humpty Dumpty's definition of "wabe" as the grass around a sundial. You simply have no idea what you are talking about, but remain convinced that because you define things into existence and fail to meet any basic requirements such that your definitions could be described as valid (still less sound) that this is a substantive argument rather than the nonsense word play of the Jabberwocky.

I don't care if your familiarity with logic is so limited you don't understand what inference is, this doesn't license your idiomatic definitions which serve as axioms and as proofs at once, thereby rendering them neither evidence nor components of either.

The common theme in your responses is that you seem to have difficulty with abstractions, inference, metaphor, thought experiment, etc.

That's one possibility. Of course, as you haven't actually demonstrated any familiarity with any of the above, indicated you don't know what you are talking about, indicated that you are content to quote-mine jargon you don't understand in order to make claims, and failed to offer any indications you are familiar with the basics, I won't hold my breath waiting for you to demonstrate a knowledge of the basics (rather than repeatedly indicating you lack any such foundations).


So deterministic.
Case in point. Oh, and that proof you have made so many references to is still missing, as is any indication that you are capable of formulating one.
 

Willamena

Just me
Premium Member
Randomness stands as much in contrast to free will as determination. In both cases, there was no choice by you.

Random = free will is a straw man.
 

LegionOnomaMoi

Veteran Member
Premium Member
Randomness stands as much in contrast to free will as determination. In both cases, there was no choice by you.

Random = free will is a straw man.
I wouldn't call it a straw man. It's just wrong. Randomness is at least as antithetical to "free will" as determinism. However, the notion that these are the only two options is quite misplaced.
 

JerryL

Well-Known Member

1) My apologies for assuming you had the familiarity with the cognitive sciences a freshman who'd had a single course on the subject would have had. (don't want to imply you are being dishonest; but didn't you a few days ago say that I lacked a basic grasp of language, logic, math, etc? That seems rather contrary to your claim above)
2) The fact that your metaphor is distinct from that which pervaded these fields doesn't make it any less useless, misleading, pointless, and (apparently) indicative of a rather thorough ignorance of any relevant subjects, research, or fields here.
1) Congratulations: you have a functional grasp of passive-aggressive. Now go learn why it's a bad thing.
2) That's a subject change. You hacked a straw man.

No. The fact that your metaphor lacked even the sophistication and nuances of those far superior doesn't make your own any less bereft of value and complete in its utter incapacity to demonstrate anything other than a fundamental misunderstanding of the brain, computers, computability, and logic. Essentially, I assumed your metaphor was more sophisticated than it was, yet was wrong. It turns out that it wasn't at all sophisticated or informed, but this only makes it less of an argument.

And I'm not using the calculator analogy as part of my proof. I'm using the calculator analogy to address a particular set of equivocation fallacies.
You are hacking a metaphor fundamentally different from my own. (which, you may recall, was never a proof but an illustration to avoid equivocation... I mentioned so in my last post).
JerryL said:
I didn't say the brain was a computer. In fact, in what you quoted I didn't mention a brain at all.
What I have said is that a brain makes choices like (IOW: under the same definition of "choice") a calculator.
JerryL said:
That's not what I said. What I said was that the lack of freedom was easier to see in a computer (because the mechanisms of choice can be illustrated more clearly)
JerryL said:
If decisions were, say, made by metaphysical souls; I'd be hard pressed to show *how* decisions come about. But in the end I don't care about the mechanism because the results tell me if they are deterministic or not.
And so on and so forth.


I admit that I forgot one word in one sentence 2 posts ago (since corrected); but you should have been able to get past that based on context. This seems to reinforce my earlier point that it's an area of difficulty for you.
 

LegionOnomaMoi

Veteran Member
Premium Member
2) That's a subject change. You hacked a straw man.

Here's the problem: you made the metaphor. Regardless of why you did, it's a bad metaphor and as you weren't the only one discussing the brain in terms of calculators or computers I showed why it was a bad metaphor. You claim I address a straw-man because your calculator nonsense was part of a "proof". You offer no proof. You've given no indication that you know how (and your gaffe with mathematical notation you incorrectly copied from an obscure physics paper suggests otherwise, as does your use of "logic"), but have repeatedly claimed that this was the reason for your metaphor. If I have hacked a straw-man, it's because your actual argument doesn't exist. There's no proof you've given such that I could address your metaphor in the context of said proof.

Sorry. Took me a minute to find your quote contradicting this claim:
1) My apologies for assuming you had the familiarity with the cognitive sciences a freshman who'd had a single course on the subject would have had.
You don't know logic, language, philosophy, or physics.

Where, in my contradicting claim, did I mention the cognitive sciences that a freshman with a single course would know? I didn't. But I much appreciate your demonstration that the logical notion of contradiction escapes your grasp.
 

JerryL

Well-Known Member
Here's the problem: you made the metaphor. Regardless of why you did, it's a bad metaphor and as you weren't the only one discussing the brain in terms of calculators or computers I showed why it was a bad metaphor.

No. You showed why someone else's metaphor was a bad metaphor.

Metaphor: A butterfly is like a bee in that they both fly
Different Metaphor: A butterfly is like a bee in that they both sting.

Actually those are similes, but the moral is the same.

You claim I address a straw-man because your calculator nonsense was part of a "proof".
No. I claim you committed a straw man fallacy because you attacked a metaphor that was not mine and declared victory.

You've never addressed *my* metaphor; but there's no point in addressing it either as it's not a part of my assertion. It was put up to clarify noun-use. It's good as long as it does that, and bad if it does not. But since it was never to you in the first place....


Where, in my contradicting claim, did I mention the cognitive sciences that a freshman with a single course would know? I didn't. But I much appreciate your demonstration that the logical notion of contradiction escapes your grasp.
This troll is too much to resist. I'll address it:

In your universe perhaps people who don't understand (your words) "language" or "logic" can have completed first-year college with an elective in cognitive sciences.

Perhaps in your universe it's reasonable to assume someone who has shown they "don't know logic, language, philosophy, or physics" must nonetheless know "cognitive sciences".

But we both know that's not true. We both know your passive-aggressive, and thereby childish, comment was intended as condescension. Be a man. Own up to your insults!
 

LegionOnomaMoi

Veteran Member
Premium Member
No. You showed why someone else's metaphor was a bad metaphor.
As you show, that doesn't necessarily negate my point, its utility, or its accuracy. However, as this is tangential, I will wrap spoilers around my response so that they don't detract from the flow, but you can still read why your "different metaphor" argument falls apart in general:
Metaphor: A butterfly is like a bee in that they both fly
Different Metaphor: A butterfly is like a bee in that they both sting.
For entire classes of metaphors, these two different similes can be useless. For example, one might use the first simile to argue that a caterpillar in a cocoon ceases to exist because the butterfly that emerges is totally different, more like a bee than a caterpillar. One might use the second to argue that Muhammad Ali's "Float like a butterfly, sting like a bee. The hands can't hit what the eyes can't see." is redundant because he could have just said "float and sting like a butterfly". Or one could use the simile to indicate inaccurate reasoning by using an inaccurate comparison: as butterflies don't sting, comparing them to bees would seem to be a poor analogy, but that's simply because one is focusing on a single property rather than noting that they both fly.
Metaphors compare things. Generally speaking, when making comparisons even seemingly very different uses can be classed as the same (you might enjoy Metaphors We Live By). Cognitive linguists especially have classified types of metaphors fundamental to cognition. That's largely irrelevant here, as your particular comparison makes my criticisms relevant and renders your argument flawed.

No. I claim you committed a straw man fallacy because you attacked a metaphor that was not mine and declared victory.
That's basically what I said, but no matter. In point of fact my criticisms of the metaphor (thanks to the nature of metaphors) extend to your usages:
So I program the computer. I don't make the choice for the computer. The computer does that. The computer meaning the hardware and software working together.
Like your brain.

If you go back to my post on the nature of the computer vs. the brain (not the quotations, but my explanation), you'll find I address one failure of the computer-brain metaphor on structural grounds. Computers have processors vs. memory (and hardware vs. software), but living systems process information via state changes. This is why it is not true that the description you quoted is "like a brain."

Computers are an example where we can see that in action (choice but no freedom).

Only we can't. Computers make no choices nor decisions. I went over this in some detail in the same post referred to above.

That's not what I said. What I said was that the lack of freedom was easier to see in a computer (because the mechanisms of choice can be illustrated more clearly)

It is easier to make an inaccurate metaphor of the type I've been criticizing, which you make (again) immediately above and have denied making in some sense or another. It isn't easier to see because there are no mechanisms of choice.


You've never addressed *my* metaphor
I did above, and I'm still waiting for this proof you kept talking about.

but there's no point in addressing it either as it's not a part of my assertion.
A post or two ago it was still part of your proof (whatever "it" was, given the multiple comparisons you've made between brains and machines).

It was put up to clarify noun-use.
Sort of like when you looked up free will in a dictionary you mistook for the OED but still found "free will" defined as a noun, not an adjective and a noun?


In your universe perhaps people who don't understand (your words) "language" or "logic" can have completed first-year college with an elective in cognitive sciences.

Having actually taught such courses, yes (using "understand language" to mean, as I did, understand its nature or how it works such that one doesn't treat free will as an adjective plus a noun).

Perhaps in your universe it's reasonable to assume that someone who has shown they (your words) "don't know logic, language, philosophy, or physics" must none-the-less know "cognitive sciences."

Fallacy, I'm afraid. Nowhere did I indicate that because of what you don't know you "must none-the-less know" intro cognitive science material (or anything else).

We both know your passive-aggressive, and thereby childish, comment was intended as condescension. Be a man. Own up to your insults!
I am trying to. I was criticizing you for something very different in the second quote, and you are conflating distinct criticisms of your ability to produce something relevant and not riddled with errors of various kinds on this topic. Your lack of familiarity with the topics I mentioned previously is distinct from a lack of basic familiarity with intro level cognitive science, but all are relevant here.
 

JerryL

Well-Known Member
That's basically what I said, but no matter. In point of fact my criticisms of the metaphor (thanks to the nature of metaphors) extends to your usages:

Ahh. If only repeating your assertion could make it become true...

Though actually, the more I think about this, the more I realize it's not a metaphor at all (which really makes your counter look silly). It's a direct claim regarding the consequence of a chosen definition.

Wow. And you wasted all that time on believing it was a metaphor.

Only we can't. Computers make no choices nor decisions. I went over this in some detail in the same post referred to above.
That depends on the definition of "choice" and "decision". Under the definition the earlier poster was using, they most certainly can.

Similarly, under other definitions your brain makes no choices nor decisions (nor soul, nor whatever else you want to imagine is the controlling mechanism behind thought).

But please feel free to keep equivocating. Lord knows I'm not going to stop you.

So back to the topic. Any definition of "free" which requires "non-deterministic" is incompatible with any definition of "choice" or "will" which asserts non-randomness.

A definition of "choice" which allows for it to be the inevitable result of the state of the universe would apply to calculators.
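As an aside, the calculator point can be made concrete. Here is a hypothetical sketch (the function names and the use of Python's `random` module are my own illustration, not part of either poster's argument) contrasting a "choice" that is the inevitable result of its inputs with one that has a random element:

```python
import random

def calculator_choice(a, b):
    # Deterministic "choice": the outcome is the inevitable
    # result of the initial conditions (the inputs).
    return a + b

def random_choice(options, seed=None):
    # Adds a random element: from the caller's point of view,
    # the same inputs need not yield the same outcome
    # (unless a seed fixes the conditions).
    rng = random.Random(seed)
    return rng.choice(options)

# Same conditions, same inevitable result:
assert calculator_choice(2, 3) == calculator_choice(2, 3)
```

Under a definition of "choice" that only requires selecting an output from given inputs, both functions "choose"; neither is "free" in the non-deterministic sense unless randomness is allowed to count.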
 