
If you believe in free will, respond to these two objections

not nom

Well-Known Member
The term "step debugger" and the idea of running a complex ANN "step by step" have nothing to do with ANNs, nor is it possible to use a step debugger or to go through the program step by step, and anyone with even a BASIC familiarity with neural network programming would know that.

blah blah blah. why not spend some time making an argument, instead of argument by authority? and every time I say "nah", you simply repeat the exercise, with more text, and still no argument.

Instead of bothering to check up on what I was talking about, you posted your flippant response and then made it worse by expanding on it, continuing to apply concepts from programming in general which don't apply here.

oh, but they do.

ANNs do not use explicit algorithms which allow one to run the program step by step. Every time a complex ANN is run, the input neurons communicate with perhaps multiple hidden layers of neurons until the information reaches the output neurons and a final response. Each run results in an adjusting of weights according to the initial complex nonlinear algorithms. However, the code never specifies how the weights are adjusted after or during each run, nor is it possible to "stop" the program and "see" what connections led to this or that weight change or this or that final output.

"hidden" layers? you mean they get set up, then stimuli pass through them and they adjust their weights, according to what other nodes are up to. and that then comes up with things we cannot understand.

but that doesn't mean you cannot run it step by step, just that it doesn't really help. because it's so many operations, you could spend your life "watching" the program, and be none the wiser. but since you know how you set it up, and since you can know the deterministic behaviour of individual operations, you can assume a deterministic whole follows from it. unless you wanna get esoteric about it, or need grants or something.

make sure you do. Because you continue to apply traditional programming logic to a field designed specifically to avoid that "step-by-step" process,

bah, strawman. that you keep pounding it is your thing, I don't "keep applying" that, I never did in the first place. I do know how neural nets work, basically, but I also know that they're made up of variables which hold one value at a time, and that unless you take entropy from input, it's deterministic.

and our inability to know the system trajectory in some cases, or to know how the result was derived, has nothing to do with programmers being unable to "step through" the code.

more importantly, it has nothing to do with whether it's deterministic or not.

And while your debugger approach is standard just about everywhere else in programming, if you knew what you were talking about, you'd know it doesn't work here, and you wouldn't have made reference to step debugging.

oh. my. *******. god.

you're like a dog with a bone or something.

still waiting for the argument, and how neural nets don't run on deterministic machines, which have a specific state at every given moment. though you kinda gave that up anyway, so I guess this'll just peter out.

Again, this only demonstrates that you are completely unfamiliar with artificial neural network programming, but rather than retract the rude comments you made you'd prefer to just dig yourself a deeper grave.

*yawn*

First, an actual "Turing machine" requires an infinite length of tape, and one of the points (or results) was to strike another blow against Hilbert's dream (which Turing did using Cantor's diagonal proof method). But more importantly, "Turing machines" outside of the theoretical concept and even within Turing's paper use formal and linear (which does not exclude loops) logic. ANNs are fundamentally different and deliberately so.

yeah, but HOW so? for someone constantly dissing me for lack of knowledge, you kinda exhibit none of your own. explain to me how they're fundamentally different, instead of telling me "if I had a clue, I would know". I mean, what's your point in posting in the first place, then? everybody has to take it at face value and that's that? why don't you get a blog, that would be better suited.

Unlike other programs, even extremely complex ones, ANNs are designed to "write" their own code in a sense.

lol. dude, I do know the basics of a neural net. and yes, "in a sense" is the keyword here haha.

They adapt to input in highly complex ways making it impossible for the programmer to always know how or why certain changes in weights resulted or why the output was what it was, and also to run through these changes "step-by-step" to find the answer.

that's just because it's too much information for a programmer to see -- it's basically just tables of numbers, after all, neuron weights etc. -- not because it's not a step by step process.

you cannot run an algorithm on a CPU that fetches data and instructions in deterministic fashion and have it magically turn non-deterministic, and I am tired of your fluff by now. I never said it's determinable for us, I said the fact that it isn't doesn't mean it's not deterministic -- you weren't debating that

This isn't saying that they are indeterministic (although again, that has been suggested), but it does mean that your mocking comments about how it's just a matter of using a step-by-step debugging approach means you don't understand how ANNs work.

you're really desperate for that, huh? no, it just means I correctly identified them as deterministic -- do you seriously think I was suggesting one should step-by-step debug a neural net to "understand why it does this or that" -- ?? :/

Only that isn't what I said:

If they say "under these circumstances it is deterministic, but under these we can't assert that it is" (which is exactly what they say) then yes, that does equal "we have found even just the slightest indication that it isn't." The whole reason to bring up determinism of this type of ANN was to note how under certain conditions it is deterministic, and under others that can't be said. If it can't be said, then they can't say that for a reason.

then gimme that reason, and cut the filler.

Given your mocking "step-by-step" solution to the problem faced by experts in mathematics, computer science, cognitive science, etc., you'd think that either they'd have figured out all they had to do was use debugging techniques everyone else has for the past several decades, or it is relevant and you don't know what you are talking about.

again, you run for the strawman. at least you're not denying those advanced neural nets run on pretty much standard CPUs. so subtracting all the huffing and puffing, thanks for pretty much confirming all I said.
 

PolyHedral

Superabacus Mystic
However, the code never specifies how the weights are adjusted after or during each run, nor is it possible to "stop" the program and "see" what connections led to this or that weight change or this or that final output.
How is that possible? If the code is not adjusting the weights, what is? If code (or hardware) is not keeping track of connections and their weights, what is?

And while your debugger approach is standard just about everywhere else in programming, if you knew what you were talking about, you'd know it doesn't work here, and you wouldn't have made reference to step debugging.
It theoretically does work here, but humans are too impatient.


First, an actual "Turing machine" requires an infinite length of tape...
An arbitrarily long length of tape. It'd only need an infinite length if it ran for an infinite time, which is impossible.

ANNs are fundamentally different and deliberately so.
They are fundamentally identical, actually. ANNs are still Turing-computable.

They adapt to input in highly complex ways making it impossible for the programmer to always know how or why certain changes in weights resulted or why the output was what it was, and also to run through these changes "step-by-step" to find the answer.
Again, how is this possible? If the simulation of the ANN is not keeping track of this, what is?
 

LegionOnomaMoi

Veteran Member
Premium Member
blah blah blah. why not spend some time making an argument, instead of argument by authority? and everytime I say "nah", you simply repeat the exercise, with more text, and still no argument.



oh, but they do.



"hidden" layers? you mean they get set up, then stimuli pass through them and they adjust their weights, according to what other nodes are up to. and that then comes up with things we cannot understand.

The algorithms specify how weights shift given input. In a trained ANN, the weights shift based on the response the trainer gives. Simplistically, the program selects which weights "helped" it return the "right" result, or which "prevented" it from returning the "wrong" results. But this isn't binary, nor exact. The weights are summed and are the product of layers of hierarchy. Each hidden neuron receives multiple inputs, and the total summation (in general) decides whether or not it "fires." Which means there is no method for determining which specific inputs tended to cause the neuron to fire. By the time the various neurons have all fired or not, and the output neurons return their result, there often isn't a way to determine, given a specific neuron among a hidden layer, which input weights caused it to tend to fire.
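The summed-inputs behavior described here can be sketched in a few lines of Python (a toy illustration with made-up names, not any real library's API):

```python
# Toy artificial neuron: it fires when the weighted sum of its inputs
# crosses a threshold. All names here are illustrative.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the summed, weighted input reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three inputs with mixed excitatory (positive) and inhibitory (negative) weights.
fired = neuron_fires([1.0, 1.0, 1.0], [0.6, 0.7, -0.2])  # sum = 1.1, so it fires

# The credit-assignment problem in miniature: the *sum* crossed the
# threshold, so no single weight can be singled out as "the" cause.
```

Note that only the total appears in the firing condition, which is exactly why the question "which weight made it fire?" has no clean answer.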

but that doesn't mean you cannot run it step by step, just that it doesn't really help.
It does. Because unlike most programs, ANNs do not follow standard linear programming logic. Where does your expertise on these systems come from? Within 5 feet of where I sit, there are 6 books on the subject. I have access to multiple journals and many other books. I have cited some, and can cite more. What are you relying on for your assertions on the operations of ANNs?


I don't "keep applying" that, I never did in the first place. I do know how neural nets work
Well great! Tracking the trajectory of ANNs has baffled leaders in the field for years, yet YOU have the solution with basic programming concepts.

it's deterministic.
Even granting that, despite the suggestion of some in the field, that's not the point. My objection was to your rude, arrogant, and ignorant response about "step debugging" and so forth which has ZERO application here. But feel free to refer to your sources to show me that I'm wrong.


still waiting for the argument, and how neural nets don't run on deterministic machines, which have a specific state at every given moment. though you kinda gave that up anyway, so I guess this'll just peter out.
1) non-deterministic systems can run on deterministic "machines."
2) Having a specified state at every moment doesn't mean anything. Only whether this state was uniquely determined by the previous one.
3) Again, it was your rude, flippant, and ignorant mocking comment about "step debugging" which I object to, and rather than retract it, you continue to act as if you have ANY idea what you are talking about.

yeah, but HOW so?
Take a look at Turing's paper. A major finding was "to show that the Hilbert Entscheidungsproblem can have no solution." I found a copy for you here: On computable numbers, with an application to the Entscheidungsproblem - A. M. Turing, 1936.



lol. dude, I do know the basics of a neural net
Despite the fact they aren't called that.

that's just because it's too much information for a programmer to see -- it's basically just tables of numbers, after all, neuron weights etc. -- not because it's not a step by step process.

WRONG. I've given you quotations from academic specialists working in this field who disagree. What have you offered other than sarcasm? There is no "step-by-step" process because every input feeds into multiple neurons, which are also connected to multiple neurons, and each connection is weighted separately. For each neuron, the weights are summed and it fires or it doesn't. The system then finds which neurons tended to give the right response (this is simplistic, and generally describes trainable ANNs, but it still works), and changes the weights attached to those neurons. However, each neuron fires as a result of SUMMED weights, meaning that distinguishing which weights resulted in the neuron firing is difficult or impossible. And that's a simple ANN. The whole point is to mimic massive parallelism.

Again, where are you getting your info on neural networks and the work involved in tracking the trajectories of complex ANNs?

I never said it's determinable for us, I said the fact that it isn't doesn't mean it's not deterministic -- you weren't debating that

Again, some have argued that these networks ARE indeterministic. But what I objected to was your rude sarcastic remarks which mocked my post and in doing so showed you know NOTHING about ANNs, but rather than retract your statement you persist.

do you seriously think I was suggesting one should step-by-step debug a neural net to "understand why it does this or that" -- ?? :/
Let's see:
"programmer, this is step debugger. step debugger, this is programmer.

I'll leave you two alone now, it seems you have a lot to catch up on..."
that's not what a step debugger is. a step debugger lets you run the program step by step, while inspecting its state. you can watch it grab input, and how *exactly* it reacts to it and why.

Why talk about "step debuggers" or running the program "step-by-step" or knowing "how exactly it reacts" and so forth when that is just completely inaccurate?


then gimme that reason, and cut the filler.

The point is, they don't know. It's that complex.



again, you run for the strawman. at least you're not denying those advanced neural nets run on pretty much standard CPUs. so subtracting all the huffing and puffing, thanks for pretty much confirming all I said

You use the wrong terms, the wrong approach, you have yet to refer to anything which even remotely suggests you know what you are talking about, but if that confirms things in your mind, well, more power to delusion. In the meantime, if you are interested, I can provide you with paper after paper, as well as plenty of books on the subject which you so clearly know nothing about.
 

LegionOnomaMoi

Veteran Member
Premium Member
How is that possible? If the code is not adjusting the weights, what is? If code (or hardware) is not keeping track of connections and their weights, what is?
The initial algorithm(s). They determine a general method for the network to adjust its weights, but for even a simple network, after a few trials it becomes difficult or impossible to track weight changes. The key is that these networks are based off of actual neurons. For example, a given neuron in a given layer for a single trial may have a great many inputs, some inhibitory and some excitatory. The summed result determines whether the neuron fires. If it does, it then sends out multiple messages to other neurons. Let's say this is a supervised neural network. After the output, the programmer basically tells the network "yes, that was the right result" or not. The first problem is determining which neurons were MOST responsible for the correct response, as different neurons in the immediately preceding layer (barring more complex feedback architecture) contribute differently to the final outcome. Factor in the weights, and the problem becomes exponentially complicated.
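A minimal sketch of the kind of feedback-driven update being described (a delta-rule toy assumed purely for illustration, not the exact scheme under discussion):

```python
# Delta-rule sketch: the algorithm specifies *how* weights move after
# feedback, but after a few trials the final numbers no longer say
# which weight "helped" on which trial. Toy code, illustrative names.

def step(x):
    """Simple threshold activation."""
    return 1.0 if x >= 0.5 else 0.0

def train_once(weights, inputs, target, rate=0.1):
    """Nudge every weight in proportion to its input and the output error."""
    output = step(sum(w * i for w, i in zip(weights, inputs)))
    error = target - output
    return [w + rate * error * i for w, i in zip(weights, inputs)]

weights = [0.2, 0.4, 0.1]
trials = [([1, 0, 1], 1.0), ([0, 1, 1], 0.0), ([1, 1, 0], 1.0)]
for inputs, target in trials:
    weights = train_once(weights, inputs, target)
# Each final weight now blends corrections from different trials.
```

The rule itself is trivially mechanical; the interpretive problem is that the final weight vector is a superposition of corrections from every trial.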


It theoretically does work here, but humans are too impatient.
It doesn't. At all. Because the programs don't proceed in a step-by-step fashion.


An arbitrarily long length of tape. It'd only need an infinite length if it ran for an infinite time, which is impossible.

A central finding of Turing's paper required an infinitely long tape.

They are fundamentally identical, actually. ANNs are still Turing-computable.
They aren't. Because Turing machines are still step-by-step machines, rather than "massively" parallel.
 

CarlinKnew

Well-Known Member
No, I'm not saying they violate the laws of physics, merely that they cannot be determined completely by the laws of physics. That is, because physics acts on neurons at the local level, but the emergent structure cannot be reduced to local behavior, the structure is not determined by physics. Of course, this opens the questions of "what does determine the structure" and "how, if the individual neurons obey completely the laws of physics, can the emergent behavior not be determined by an entity like Laplace's demon?" I don't know, nor does anyone else. I've read arguments for non-deterministic self-organization based on quantum mechanics, as well as refutations of such views. Then there are arguments based on a level of complexity we are incapable (at the moment) of understanding.
Yes, I share the questions you put in quotes here. This subject is a fascination of mine that I am pursuing, but I'm not yet as familiar with it as you are. Although it all comes down to the fundamental question of, "What does this imply about free will?" If the structure can't be determined, if it's not just a problem of a lack of information/understanding, doesn't this necessitate an element of randomness? How else can something be undetermined?
 

LegionOnomaMoi

Veteran Member
Premium Member
"What does this imply about free will?" If the structure can't be determined, if it's not just a problem of a lack of information/understanding, doesn't this necessitate an element of randomness? How else can something be undetermined?
Randomness, or individual will, undetermined because only the mind (or the mind and other entities) "determine" it? A constrained undetermined system. Or not. Just do me a favor and if you figure it out, make sure to publish it.
 

PolyHedral

Superabacus Mystic
The initial algorithm(s). They determine a general method for the network to adjust its weights, but for even a simple network, after a few trials it becomes difficult or impossible to track weight changes.
But algorithms cannot be random. Given an initial weight, and the algorithm, (and the other information needed to compute the algorithm) it is plain to see what the newly computed weight will be. By induction, the weights can be predicted arbitrarily far into the future.
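This induction argument can be made concrete with a toy, assumed update rule: run it twice from the same initial weights over the same stimuli and the trajectories agree step for step:

```python
# Determinism check: the same update rule, from the same starting
# weights, over the same stimuli, yields the identical trajectory.
# The update rule here is a made-up toy, chosen only for illustration.

def update(weights, stimulus):
    """One deterministic weight adjustment toward the stimulus."""
    return [w + 0.05 * (stimulus - w) for w in weights]

def run(initial, stimuli):
    weights = list(initial)
    history = [list(weights)]
    for s in stimuli:
        weights = update(weights, s)
        history.append(list(weights))
    return history

stimuli = [1.0, 0.0, 1.0, 1.0, 0.0]
first = run([0.5, 0.25], stimuli)
second = run([0.5, 0.25], stimuli)
identical = first == second  # no entropy in, none out
```

The same holds by induction for any number of steps: each state is a pure function of the previous one, so predictability in principle is never lost, however impractical it becomes.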

The first problem is determining which neurons were MOST responsible for the correct response, as different neurons in the immediately preceding layer (barring more complex feedback architecture) contribute differently to the final outcome. Factor in the weights, and the problem becomes exponentially complicated.
Yes, but it is still computable. It will remain computable in principle regardless of the complexity of the network.

It doesn't. At all. Because the programs don't proceed in a step-by-step fashion.
Certainly in a software simulation of a neural network, they have to; they are programs. In hardware, I am relatively sure that there will still be a well-defined ordering of events, it will just be harder to track.

A central finding of Turing's paper required an infinitely long tape.
Only in the event that the machine runs for an infinitely long time, which is impossible as mentioned.

They aren't. Because Turing machines are still step-by-step machines, rather than "massively" parallel.
They can't be computed by Turing machines? So how do software simulations of neural nets work? Modern CPUs are equivalent to Turing machines.
 

idav

Being
Premium Member
Randomness, or individual will, undetermined because only the mind (or the mind and other entities) "determine" it? A constrained undetermined system. Or not. Just do me a favor and if you figure it out, make sure to publish it.
Some fascinating stuff that you posted. Did you say thoughts in neurons are non-linear? Would that mean they don't go by one straight path but multiple lines? How would it violate a causal chain? Maybe I misunderstood.
 

CarlinKnew

Well-Known Member
Randomness, or individual will, undetermined because only the mind (or the mind and other entities) "determine" it? A constrained undetermined system. Or not. Just do me a favor and if you figure it out, make sure to publish it.
Yes, and we both know that randomness isn't desirable in terms of free will. I just don't see how a previous state of mind (along with stimuli) wouldn't determine a future state of mind, without an aspect of randomness. I don't see how a cause could lead to one of multiple possible effects, without an aspect of randomness.
 

LegionOnomaMoi

Veteran Member
Premium Member
But algorithms cannot be random. Given an initial weight, and the algorithm, (and the other information needed to compute the algorithm) it is plain to see what the newly computed weight will be. By induction, the weights can be predicted arbitrarily far into the future.

But you are missing the biggest piece. Say I'm designing a supervised ANN with two output neurons, one hidden layer composed of half a dozen neurons, and an input layer composed of roughly the same amount (this is a very simple set-up). I want it to recognize Roman letters. Not just letters like those one would see typed, but hand-written as well, or letters which come close to the shape they are supposed to have but were written by a child. The initial weights and the algorithms only provide a starting place. As soon as I begin to "train" the network, some ANNs begin to alter in unpredictable ways (which doesn't make them non-deterministic). For example, say I input something that looks like the letter "H." Initially, let's say certain "paths" to the output neurons result in one output neuron firing (a "yes") and another not. At this point, everything is pretty straightforward, but even now I would have to do a fair amount of work to figure out how the network would respond to that input.

This is where things start to get more complex. I tell the network that the output which fired "yes" was correct. The network uses the algorithm to determine which neurons led it to the correct answer. But this isn't a binary process. It isn't that some neurons were right and others were wrong, or that some weights were right and others were wrong. Just that certain weights were involved, to varying degrees (some were weak, others strong, and everywhere in between). So the network uses the algorithms to adjust the weights across the network based on the weights that led it to the correct response and those that did not. Again, however, some weights that were involved in the correct response didn't actually "help" and vice versa. Now I show it a square. Theoretically, the network will tend to have more connections between input neurons and the hidden layer with lower weights, so it is more likely that the output neurons won't fire. But once again, a lot of the adjustment the network made was out of error. Whatever the case, I continue to tell the network when an output neuron fires correctly, and it continues to adjust based on my responses.

One part of the complexity of even this simple network is the input. Squares have a lot of straight lines, like an "H" or an "L" or an "N," but not so much an "n" and certainly not an "O." So weights which tend to lead a neuron to fire for an "H" might also do the same for a square, but not for an "O." It takes a lot of training for the neuron to "remember" the patterns of weights which allowed it to distinguish between an "H" and a square but also recognize an "O." It doesn't take a lot of time before the network has changed its weights in unpredictable ways.

Say I give the network an "H," an "A," and a "V." The network's memory (the weights) will be attuned to that type of shape, and with a fair amount of work I can predict, if I know the EXACT shape of the letters I show (because we aren't talking merely about typed letters), exactly how the weights will change. If I then show the network an "O," most likely both output neurons will fail to fire. I then tell the network they are both wrong. This is tricky, because a lot of the weights which led to the wrong answer were correct before, and the network remembers this. This is where certain parameters in the algorithm become really important: if the network adjusts too fast, then showing it an "O" here is basically like going back to ground zero.

Instead, the network will probably try to select weights which contributed less to the correct answers before, along with some which contributed less to wrong answers. Now it won't be as good at recognizing an "A," but will be better prepared for a "g" or a "C."

Again, this is a simple network, but it is a dynamical system, so small perturbations can mean a lot. Make an "H" a little more curvy, or an "O" a little more like a square, and the whole network adjusts its memory.
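The "adjusts too fast" point can be shown in miniature (a hypothetical one-weight sketch, not a real letter recognizer):

```python
# One weight pulled toward each training target in turn. With a small
# learning rate the weight keeps most of its earlier "memory"; with a
# large one, a single contradictory example nearly erases it.
# Hypothetical sketch; numbers chosen only for illustration.

def trained_weight(rate, targets, w=0.0):
    for t in targets:
        w += rate * (t - w)  # pull the weight toward the current target
    return w

# Ten "H"-like examples (target 1.0), then one "O"-like example (target 0.0).
slow = trained_weight(0.1, [1.0] * 10 + [0.0])
fast = trained_weight(0.9, [1.0] * 10 + [0.0])
# slow stays well above 0.5; fast drops close to ground zero.
```

The learning-rate parameter is exactly the trade-off described: retain the old memory and learn slowly, or adapt quickly and risk wiping it out.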

Yes, but it is still computable. It will remain computable in principle regardless of the complexity of the network.
One issue is that the algorithm doesn't specify HOW the weights will change given stimuli 1, 2, 3,...n, only how it adapts, remembers, and learns. Given a sufficiently complex ANN, predicting the evolution of the network is a bit like predicting the weather. Even knowing how it will change from stimulus 15 to 16 and why becomes hard or impossible. One of the big areas of work in this field is coming up with tools which allow the programmer to figure out how certain answers were achieved, not just predicting future ones (although the two issues are very related).

Certainly in a software simulation of a neural network, they have to; they are programs. In hardware, I am relatively sure that there will still be a well-defined ordering of events, it will just be harder to track.


A basic artificial neuron has a threshold value (some nonlinear function) and is connected to other neurons by a series of weights. The neuron fires (perhaps sending information to a series of neurons, perhaps to an output neuron) if the weights cause it to reach its threshold (given different set-ups, like feedforward vs. backpropagation, this may alter connections only upward, or also backward). Now, I can write the program so that it tells me whether or not each neuron fired and what the weights were, but tracking isn't the issue. Interpreting is much more difficult. Knowing that a neuron fired is pretty useless. The eventual output relies on the network's memory, or certain weight patterns, and I can only know how the weight pattern changes from one state to another after it has done so. Tracking the firing of each neuron won't enable me to do that.

Also, it is important to note that the functions which determine whether a neuron fires and (sometimes) what that means are nonlinear functions in n-dimensional space. That means the behavior of a given neuron is complex enough on its own. Determining how a series of multi-dimensional nonlinear functions interact in response to a stimulus, resulting in a final pattern of weight distributions, is a nightmare, and proceeding "step-by-step" either defeats the whole process (it becomes way too slow) and/or is useless, because I can't tell, given that neurons 3, 5, 6, and 7 fired on hidden layer x but the others didn't, what that really means.

The output is the pattern as a whole, and tracking it often just provides you with useless information. Determining what caused the network to respond the way it did involves a series of differential equations, not tracking.
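For concreteness, the nonlinear threshold functions in question are often something like the logistic sigmoid, and a toy calculation shows why "neuron 5 fired" is uninformative on its own (illustrative numbers, no real network):

```python
import math

# A common nonlinear activation: the logistic sigmoid.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Two very different weight contributions can yield the same summed
# input, and therefore the same activation -- knowing the neuron fired
# tells you nothing about which inputs were responsible.
a = sigmoid(0.9 + 0.1 - 0.5)   # sum = 0.5
b = sigmoid(2.0 - 3.0 + 1.5)   # sum = 0.5, from different contributions
same = abs(a - b) < 1e-12
```

The firing record collapses many distinct causal stories into one number, which is the tracking-vs-interpreting gap being described.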

Only in the event that the machine runs for an infinitely long time, which is impossible as mentioned.

The point of Turing's original paper was to determine whether a procedure was possible which could decide the truth value of any mathematical proposition. The proof (which showed that this was impossible) relied on infinity. But yes, one can build a Turing machine with finite tape. It's been done.

They can't be computed by Turing machines? So how do software simulations of neural nets work? Modern CPUs are equivalent to Turing machines.

Modern CPUs aren't exactly equivalent to Turing machines, but I think I see your point. With some exceptions, most neural networks only simulate the "massive parallelism" of the brain. However, increasingly a number of researchers aren't running ANNs on a CPU, but are implementing ANN algorithms as truly parallel ANNs. In other words, they are using parallel hardware (for a simple and straightforward example, see http://cactuscode.org/media/news/ncur20/Cactus_WesleySmith06.pdf).

However, even when only simulating parallelism, the linear nature of computing may allow one to run ANN algorithms on an actual Turing machine (tape and all), but the problem is the same as running it on a CPU. Being able to "track" changes simply doesn't give you the information you need to understand where the system is heading.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes, and we both know that randomness isn't desirable in terms of free will. I just don't see how a previous state of mind (along with stimuli) wouldn't determine a future state of mind, without an aspect of randomness. I don't see how a cause could lead to one of multiple possible effects, without an aspect of randomness.
Neither do I, both for logical reasons (i.e., some of the more sophisticated modern variations of Aristotle's sea-battle argument for fatalism) and given what we know about the nature of the physical world.

The simple answer is that it is "random" only in that it is impossible to predict (ontologically impossible), because "free will" ultimately determines the next state given the total possibility space. Of course, not only is this almost a "free will in the gaps" solution, it also doesn't resolve certain fatalist arguments. Aristotle's has been beaten to death, but there are some pretty convincing versions. For the sake of anyone unfamiliar with the basic structure of the argument, it runs as follows:

Tomorrow there either will or will not be a sea-battle. If I say today "tomorrow there will be a sea-battle" I am uttering (under one interpretation) a proposition which by definition has a truth-value. Therefore, even though I don't know its truth, it is either true or false when I say it. If there is a sea-battle tomorrow, it is true. If not, it is false. But if, for the sake of argument, it is false, then there cannot be a sea-battle tomorrow.

Aristotle's argument (from de interpretatione) is I think sufficiently addressed. Other variations are less easy to judge, and there are still plenty of philosophers who propose that a certain type of fatalism is logically necessary. Which would mean that decisions too are fated, even in a system which is non-causally indeterministic in terms of physics.

But I think there is some merit to the arguments which revolve around "randomness" being defined by the capacity to make a decision via "free will." That is, the issue of "free will" entailing "randomness" and therefore a lack of responsibility is potentially negated by understanding 1) the restricted probability space given any decision and 2) that by non-causally indeterministic we mean the "mind" links the initial conditions to the resulting decision causally, but causally in the sense that it has the capacity to choose among various decisions, and by choosing one the outcome is only trivially a matter of causation. The decision is an outcome with a cause, but that cause is my "free will" to choose among a number of options, and while it isn't predictable, it isn't "random" because I determine it.

But again, while I'm usually pretty swayed against the fatalist arguments made from logic (Susan Haack has some good rejoinders), the mechanisms for a system which is ontologically non-deterministic are still unknown and those who propose solutions rely heavily on speculation.
 

LegionOnomaMoi

Veteran Member
Premium Member
Some fascinating stuff that you posted. Did you say thoughts in neurons are non-linear? Would that mean they don't go by one straight path but multiple lines? How would it violate a causal chain? Maybe I misunderstood.

By nonlinear I mean (simplistically) a function of the type f(x)=something where that function won't graph as a straight line. In other words, something other than y=mx+b. However, this becomes more complex in higher dimensions. I think statistics is probably the easiest way to explain this. Imagine a class of 100 students with a midterm and a final. Every student has a score on both tests (let the score on the midterm be X, and the score on the final be Y). If there were a truly linear relationship between the scores of the students on both tests, then given any score on one test, you could perfectly predict the score on the other. Technically, the X and Y scores are vectors in 100-dimensional space. But if you plot the scores on a graph, with the scores of the final on the vertical axis and the scores on the midterm on the X axis, each dot (representing a student's score on both tests) would fall on a straight line.

Nonlinear functions are harder to work with. There is no function for a neuron which can be plotted as a straight line. Here is one particularly famous neural model: Hodgkin-Huxley. Once you leave the realm of lines and linear functions, you enter the realm of differential equations and integration, which are (in a sense) really complicated ways of treating curves like lines.
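The difference can be checked numerically: for a linear function, two sample points pin down every other value, while for a nonlinear one they don't (toy functions assumed for illustration, not the Hodgkin-Huxley model itself):

```python
import math

def line(x, m=0.5, b=10.0):
    """A linear function, y = m*x + b."""
    return m * x + b

def logistic(x):
    """A nonlinear (sigmoid-shaped) function on a 0-100 scale."""
    return 100.0 / (1.0 + math.exp(-0.1 * (x - 50.0)))

def fit_and_predict(f, x1, x2, x_new):
    """Fit a straight line through two points of f, then extrapolate."""
    m = (f(x2) - f(x1)) / (x2 - x1)
    b = f(x1) - m * x1
    return m * x_new + b

linear_error = abs(fit_and_predict(line, 20, 40, 80) - line(80))
nonlinear_error = abs(fit_and_predict(logistic, 20, 40, 80) - logistic(80))
# linear_error is essentially zero; nonlinear_error is large.
```

This is the midterm/final point in code: straight-line extrapolation is exact only when the underlying relationship really is a line.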
 

Thief

Rogue Theologian
Gender is a difference which may have caused variance.


I suspect it was because God setup the conditions so they would act exactly as they did.

Not much point that warning is given if no choice can be made.
It was indeed a stand back and see how it goes....situation.


Yes, by choice. It's a choice their brain makes to receive whatever pleasure it gets from the action. I think the mistake is in thinking they could have chosen other behavior than they did.

Choice is not indicative of freewill?
Without freewill....wouldn't the animal stand still?...unable to choose?


We often manipulate people by motivating them through their desires. They are free to choose to act according to those desires.

Manipulation is as it sounds.
The garden event is a story of manipulation...the body of Man.
It is also a story of choice...and freewill.

If God had put up an impassible barrier around the Tree of Knowledge then Adam and Eve would not have been free to act on those desires.

Last sentence....true.
But the lack of opportunity, is not a definition for the lack of freewill.
 