
On the origin and function of minds

LegionOnomaMoi

Veteran Member
Premium Member
Equations are not algorithms?

No. Most of the time, they are quite easily translated into algorithms and the difference (as you say) is notational. Algorithms are pretty much defined by a stepwise notation. They need not use actual code (logic, pseudo-code, etc., are all fine), but the stepwise part is fundamental. The only other fundamental component is "well-defined" (although what this means can be an issue), and as an equation is not necessarily translatable into any series of well-defined steps, the two are not equivalent quite apart from the notational difference.

It doesn't really matter if you can predict it ahead of time; computable numbers are those which are generated by computable functions, of which pi is one. :shrug:

Computable numbers are not just those generated by computable functions. Or rather, the fact that a number is computable requires treating it in a particular way. Pi is a computable number because there exists a finite number of well-defined steps which can in principle output Pi given an infinite amount of time. However, that's simply a matter of how numbers are approached, and is more an issue of convenience. There is all the difference in the world between computing rational numbers (even infinite decimals) and computing Pi. The latter requires a specific algorithm, while the former requires only general algorithms not specific to any particular number or operations on these numbers. Moreover, approached differently, Pi becomes at least intractable and (depending on one's definition) even incomputable. Treat the terms of Pi as steps and things change. Unlike 1/3 or other rational numbers with infinite repeating decimals, there is no way you can know (even approximately) what will follow the nth term until you compute n + 1.
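To make the contrast concrete, here's a quick Python sketch of my own (nothing taken from the posts above; I've used Machin's formula, just one standard choice): every digit of 1/3 follows from a trivial general rule, while each further digit of Pi only shows up after an algorithm specific to Pi has actually been run far enough.

```python
# My own toy illustration: digits of 1/3 vs. digits of Pi.

def third_digits(n):
    # Every decimal digit of 1/3 is known in advance: it is always 3.
    return "0." + "3" * n

def arctan_inv(x, unity):
    # Fixed-point arctan(1/x) via the Gregory series:
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
    xsq = x * x
    power = unity // x            # unity / x^(2k+1), starting at k = 0
    total = power
    k = 0
    while power:
        k += 1
        power //= xsq
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
    return total

def pi_digits(n_digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239),
    # computed with ten guard digits and then truncated.
    unity = 10 ** (n_digits + 10)
    pi_fixed = 16 * arctan_inv(5, unity) - 4 * arctan_inv(239, unity)
    s = str(pi_fixed)[: n_digits + 1]
    return s[0] + "." + s[1:]

print(third_digits(20))  # 0.33333333333333333333 -- no computation needed
print(pi_digits(20))     # 3.14159265358979323846 -- each digit costs work
```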

According to whom?

I'd have thought it obvious pretty fast that an algorithm specifying the response doesn't work.

Hindsight is 20/20. It took computer scientists and cog. sci. researchers a while to get there.

Hofstadter and Yudkowsky seem to have a pretty good idea.

When your theory relies on something like "strange loops" or "emergence" or any other form of "...and then stuff happens and we get consciousness/awareness" it isn't much better than god of the gaps. There are tons of theories about how the brain goes from the neural interactions we know something about to conscious thought and conceptual processing, but they all rely on ill-defined notions.



Formal languages predate computer science. Depending on whether one cares about published versions, they either go back at least as far as Frege and possibly as far as Leibniz.

I hear Siri is pretty good at understanding. :p
That's because Apple is God.

They are; that would be how being able to speak multiple languages works.

Being able to speak multiple languages has nothing to do with that. In fact, if you study other languages (especially those which are not IE languages), this becomes more obvious. Take "there's" constructions (there's Paul with his new car, there's a cat on your house, there's a new way to compute Pi, etc.). There is no single "type" of these constructions (we have deictic and existential among others), nor are they readily understood in contrast to other impersonal constructions in English (here's that car vs. there's that car; it's useless vs. there's no point; etc.).

Most importantly, though, a divide between syntax and lexicon fails utterly here. The correct/grammatical/allowable forms here cannot be accounted for by any syntax or rules unless they include those specific to what can follow "there is/there's". Which means that the rules are specific to a particular lexical combination and cannot be generalized, which in turn means that the divide between rules and lexicon fails. The fact that German has "es gibt" and French has both "il est" and "c'est" doesn't change things.

AFAIK, the rules which we use to understand text have absolutely nothing to do with the words themselves

They have everything to do with the words themselves. There are general rules, sure. But most of what goes into parsing speech (and minimizing online processing) has to do with schemata which are specific to certain combinations of words or defined by structures specific to the use of certain words. You don't understand "all of a sudden" or "it takes one to know one" because you know "rules" and the words, otherwise I could say "some of a sudden/most of a sudden" or "it took one to have known one". Likewise, the fact that the comparative is usually found in constructions like "X is better/faster/stronger/etc. than Y" doesn't have anything much to do with the vast number of ways one specific but different use occurs: "the higher you climb, the harder you fall" "the more you take time to relax, the more productive you'll end up being" "the X-er, the Y-er". The rules governing these parallel "idiomatic" uses of the comparative are specific to them (they cannot be generalized). So we have rules for what is "allowable" here, but only once it is clear that we have parallel uses of "the + comparative".

That's language. Schemata which range from the very abstract (subject/object, noun, etc.) to relatively general/abstract ("there's" constructions, "the X-er, the Y-er"), to prefabs ("all of a sudden", "once in a while", "pick and choose" "in point of fact", etc.), idioms ("birds of a feather..." "kick the bucket", etc.) and "word + preposition" ("have to", "blind to/blind from", "here for X", "wish for/wish to", etc.), to lexemes.

"Curious green ideas sleep furiously" makes perfect sense.... unless you are aware that ideas are not a thing that can sleep.

But "all of a sudden" doesn't make sense given the words or typical rules. It's idiomatic. And "pull strings" makes perfect sense, only "He pulls the strings around here" has nothing to do with actual strings. I can build an argument or a church, fight an urge or an opponent in an MMA compeition. The wind blows, but I blow my nose. "Jack kicked the ball" is nothing like "Jack kicked the bucket." And so on. Rules simply fail to do much with language unless you incorporate the rules specific to words and certain combinations of words.

So the confusion is because two identical-looking pointers refer to different things?

It's because rules and words are not two separate things. Certain word senses license certain combinations ("I wish to leave" vs. "I wish for world peace", "I give to charity" vs. "I give you candy" vs. "I give for the good of others"), certain words only go together because they do ("all of a sudden" etc.), and certain constructions are governed by rules specific to them.

And not only are rules specific to particular words, word combinations, and constructions, but novel uses are readily understood if they fit certain patterns or metaphors. If you hear a server say "Table 9 wants their check" you know that the server is referring to the people at the table (metonymy).
(Also, I disagree: I think human ability to understand language is the cutting-edge pattern matching algorithm.)

As you said, we understand the meanings of words. Which means we don't just match patterns.

Oops. I meant to say, ask the stock market analysts. :p
Anyway, I would say that if someone were to write a self-evaluating stock market bot, then it would count in every way as "intelligent," despite not having a body or even interacting with normal 3D space.
Intelligent and having a "mind" are far from the same thing. Intelligence does not require awareness, understanding, conceptual representation, etc.


I have to leave so I'll address the rest later.
 

LegionOnomaMoi

Veteran Member
Premium Member
...What? I completely fail to understand how you could possibly arrive at "understanding requires interaction with the world." The reverse, that interacting with the world requires understanding and abstraction, is a lot more plausible. (But also somewhat obvious?)

The "somewhat obvious" reversal doesn't exist for the vast majority of life throughout the history of this planet. It happens that certain species have some capacity to understand concepts in some way.

The question is how can meaningless, automated responses to stimuli (which is what most life and what computers are capable of) become conscious responses to concepts? My dog understands that "food" corresponds with certain things and with certain actions. She does because experience has taught her that a certain range of auditory input corresponds with things like her food bowl, table scraps, eating, etc. Those who argue that Searle's Chinese room argument is invalid or flawed often do so because they argue that meaningless symbols can become meaningful through an interaction with one's environment. Keep feeding a machine symbols it processes syntactically and that's all it will ever do. It will never "understand" what it is processing. One argument about the missing link is that the ability to "learn" (to react adaptively according to computational intelligence paradigms) is simply not enough. Learning through interaction with "things" allows a system to gain some abstract concept of "thing" (the basis for nouns) based on experience with actual, physical objects. Same with actions (verbs) and experience observing and engaging in activity.

As far as I can tell, (since that paper is a response to something I haven't read) it appears to be arguing against spherical cows. Organisms are made of atoms, so shouldn't the proof of organisms being uncomputable refer to the behaviour of atoms and molecules?

The basis for Rosen's argument is metabolic processes in organisms (specifically metabolic closure). However, since his work the comparison between metabolic closure in organisms and quantum properties hasn't gone unnoticed. From "Bridging the Gap: Does Closure to Efficient Causation Entail Quantum-Like Attributes?"

“although specific processes occurring within the living machinery have been well characterized by the tools of classical physics and chemistry, life itself does not admit a clear cut definition in mechanistic terms (Woese 2004). Questions like what it takes to be alive and what kind of process brings matter from a non-living into a living state remain an insoluble conundrum so far (Brooks 2001; Peretó 2005). Not surprisingly, mathematical modeling of the living process in its totality has proved to be a hard problem (Rosen 2000). That is because the analytic process of fragmenting the system in its elementary fractions is not suitable for the task at hand, mainly due to the structural complexity of the system, the intertwined pattern of bottom-up as well as top-down interactions across different spatial and temporal scales, the presence of ‘circular’ self-referential causal loops, the anticipatory and inherently teleological nature of life, and the awkward capacity of autonomy and self-maintenance (Boogerd et al. 2007). As a consequence, the fabrication of artificial life from non-living material constituents, despite extraordinary efforts (Szostak et al. 2001; Rasmussen et al. 2004), remains an elusive chimera (Rosen 2000; Louie 2010a)…It is my conjecture that quantum and living systems are related in non-trivial ways, and that the comprehension of this link would be of crucial benefit not only for theoretical biology, but also for quantum theory.”

Impossibly fast... for an architecture and hardware we don't understand very well? Do we know how the brain works or not? :shrug:

From Manrubia, Susanna C.; Mikhailov, Alexander S.; Zanette, Damian H. (2004). Emergence of Dynamical Order: Synchronization Phenomena in Complex Systems. World Scientific Publishing Co., p. 312:
"The biological “hardware” on which the brain is based is extremely slow. A typical interval between the spikes of an individual neuron is about 50 ms and the time needed to propagate a signal from one neuron to another is not much shorter than such an interval. This corresponds to a characteristic frequency of merely 100 Hz. Recalling that modern digital computers should operate at a frequency of 10^9 Hz and yet are not able to reproduce its main functions, we are led to conclude that the brain should work in a way fundamentally different from digital information processing.

Simple estimates indicate that spiking in populations of neurons must be synchronized in order to yield the known brain operations. “Humans can recognize and classify complex (visual) scenes within 400-500 ms. In a simple reaction time experiment, responses are given by pressing or releasing a button. Since movement of the finger alone takes about 200-300 ms, this leaves less than 200 ms to make the decision and classify the visual scene” [Gerstner (2001)]. This means that, within the time during which the decision has been made, a single neuron could have fired only 4 or 5 times! The perception of a visual scene involves a concerted action of a population of neurons. We see that exchange of information between them should take place within such a short time that only a few spikes are generated by each neuron. Therefore, information cannot be encoded only in the rates of firing and the phases (that is, the precise moments of firing) are important. In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated."
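Spelling out the arithmetic behind that estimate (just restating the quoted figures, nothing new):

```latex
\text{decision window} \approx 450\,\text{ms} - 250\,\text{ms} \approx 200\,\text{ms},
\qquad
\frac{200\,\text{ms}}{\sim 50\,\text{ms per spike}} \approx 4\ \text{spikes per neuron}
```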

Basically all of quantum mechanics revolves around the behaviour and conservation of information, e.g. the black hole information paradox, the holographic principle. For any given quantum state, a finite number of bits would describe it absolutely. (Even if there are so fantastically many of them that they're uncountable in practice.)

First, there is no agreed upon model of quantum mechanics (in fact, there isn't even much of an agreement on what the so-called "Copenhagen Interpretation" actually is). A fundamental area of contention is the relationship between mathematical models and reality. Take the wave function:

"Quantum mechanics was discovered roughly a century ago. In spite of its long
history, the interpretation of the wave function remains an open question."
Nakahara & Ohmi's Quantum Computing: From Linear Algebra to Physical Realizations

Second, things like "information" or "description" are not exactly well-defined terms (and when they are, there is not exactly an agreed upon definition and certainly not a consensus that such things have any meaning as a component of physical reality rather than as a conceptual framework). To say that bits can "describe" anything absolutely relies on a rather cavalier use of the term "describe." After all, if bits were all it took, nobody would really care much about quantum computing:
"The fundamental problem with the classical strong Church–Turing Thesis is that it appears that classical physics is not powerful enough to efficiently simulate quantum physics" from Kaye, Laflamme, & Mosca (2007). An Introduction to Quantum Computing.



But a qubit doesn't have infinitely many possible states; it only has two.

1) The act of observation collapses the wave function such that it takes on one of two states. However, this means that in order to make qubits correspond to something like "bits" or "information units" we force it to do so in a very specific way.
2) The big (hopefully) advantage of quantum computers will be the entanglement of quantum states, which is neither binary nor reducible to binary encoding.

It's wavefunction, a physically unobservable quantity,

It's a mathematical entity. The relationship between the wavefunction as a formal, symbolic notation and physical reality is unknown.


Except for the fact that the universe can be treated as a computer.
According to whom? And based upon what?
 

LegionOnomaMoi

Veteran Member
Premium Member
The complexity can be broken down.

And this assertion is based on...?

Either neurons are aware or we aren't aware either.

You brought up ant colonies. As I said, if you take 100 army ants and place them down, they do nothing except die. Take the same type of ant, but half a million of them, and something radically different happens: collective intelligence.

Neurons aren't aware of anything. If they were, we'd only need one.

Cells have a job to do and the ego is merely the collective of that awareness that neurons must have.

The "ego" is merely an outdated unscientific Freudian concept which has no place in science.
Ants are aware of their environment and respond accordingly.
Unless of course there aren't enough for swarm intelligence. In which case all they do is die. Why? Because a single ant isn't capable of any complex behavior. It can move around until it dies. A colony is capable of much, much, more. Why? Because you can't always break down systems into components. Certainly not with neurons.
 

idav

Being
Premium Member
And this assertion is based on...?
Evolution

You brought up ant colonies. As I said, if you take 100 army ants and place them down, they do nothing except die. Take the same type of ant, but half a million of them, and something radically different happens: collective intelligence.
I agree.
Neurons aren't aware of anything. If they were, we'd only need one.
But the point, as seen above, is that it takes the collective to even have intelligence. Aware at a basic level, but then self-aware as the collective grows in capacity.


The "ego" is merely an outdated unscientific Freudian concept which has no place in science.
I'm not trying to argue whether psychology is a real science, but the ego, the self-aware part of us, the I, is real at least to us and it is testable.
Unless of course there aren't enough for swarm intelligence. In which case all they do is die. Why? Because a single ant isn't capable of any complex behavior. It can move around until it dies. A colony is capable of much, much, more. Why? Because you can't always break down systems into components. Certainly not with neurons.

Sure and one cell or neuron wouldn't live long by itself either. Our cells need the others. They wouldn't be very intelligent on their own either. They do their own job well enough but it is the collective that makes us aware of ourselves even without being aware of what our cells are aware of.
 

LegionOnomaMoi

Veteran Member
Premium Member
Evolution

Evolution says nothing about complexity or reduction which would justify your claim.


I agree.

But the point, as seen above, is that it takes the collective to even have intelligence. Aware at a basic level, but then self-aware as the collective grows in capacity.
The point of "collective intelligence" or "emergence" or similar notions is that the individual components cannot be reduced (or "broken down"). Neurons are not aware of anything. Only via their collective action can consciousness/awareness emerge.

I'm not trying to argue whether psychology is a real science, but the ego, the self-aware part of us, the I, is real at least to us and it is testable.

How does one test the "ego"?
 

idav

Being
Premium Member
Evolution says nothing about complexity or reduction which would justify your claim.
Everything can be broken down to the chemical, molecular or atomic level. We can pinpoint how all of these things evolved to the point of complexity we see today, from chemical evolution to biological evolution, etc.

The point of "collective intelligence" or "emergence" or similar notions is that the individual components cannot be reduced (or "broken down"). Neurons are not aware of anything. Only via their collective action can consciousness/awareness emerge.

I don't see any other way for it to work. One or many neurons see the light before you actualize it.
How does one test the "ego"?
Testing animals acting selfishly, there is a biological imperative for it.
 

LegionOnomaMoi

Veteran Member
Premium Member
Everything can be broken down to the chemical, molecular or atomic level.

1) There is no consensus concerning the nature of the atomic level. We don't know how it might be broken down.
2) Breaking down "thought" (and similar neural functions) into individual neurons is meaningless. It's like breaking down words into letters. You can look at individual neural behavior, but it won't tell you anything much.

We can pinpoint how all of these things evolved to the point of complexity we see today, from chemical evolution to biological evolution, etc.
We can't.


I don't see any other way for it to work.
If all it took was jamming together component parts, then the number of ants wouldn't matter qualitatively. Yet it does. 100 ants run around and die. A larger number suddenly engages in complex, collective intelligence. If we could simply reduce such systems to component parts, they wouldn't be complex in the way they are.


Testing animals acting selfishly, there is a biological imperative for it.
What does that have to do with the "ego"?
 

Me Myself

Back to my username
Philosophical zombie.

The materialistic things we associate with consciousness are not consciousness itself.
 

idav

Being
Premium Member
1) There is no consensus concerning the nature of the atomic level. We don't know how it might be broken down.
2) Breaking down "thought" (and similar neural functions) into individual neurons is meaningless. It's like breaking down words into letters. You can look at individual neural behavior, but it won't tell you anything much.
We know the function of our cells and the chemical reactions necessary. It can be and is broken down. Of course anything as complex as the human mind will take more time to break down.
We can't.

Don't act as if we know nothing about the evolution of the universe and chemicals and biological life.

If all it took was jamming together component parts, then the number of ants wouldn't matter qualitatively. Yet it does. 100 ants run around and die. A larger number suddenly engages in complex, collective intelligence. If we could simply reduce such systems to component parts, they wouldn't be complex in the way they are.
No doubt. Yes it takes team effort even for humans. Just cause one of us would die if we were left without society doesn't take from our intelligence. Of course anything as a collective is much more effective if they are efficient and actually cooperate.

What does that have to do with the "ego"?
Ego is the I, the self-awareness, and selfishness is a result of it. We show this with just about everything we do, as do most organisms that are aware of themselves.
 

LegionOnomaMoi

Veteran Member
Premium Member
We know the function of our cells and the chemical reactions necessary. It can be and is broken down. Of course anything as complex as the human mind will take more time to break down.

Broken down how? I can break down the word "neuron" into letters. But if I just give you "n" then this reduction is pointless. What is the point of breaking down neural processes when the actions of individual neurons are as meaningless as pointing out that a word uses a particular letter?


Don't act as if we know nothing about the evolution of the universe and chemicals and biological life.
What we know has nothing to do with the definitions of complexity or the idea that it can be "broken down."


No doubt. Yes it takes team effort even for humans. Just cause one of us would die if we were left without society doesn't take from our intelligence. Of course anything as a collective is much more effective if they are efficient and actually cooperate.

You are fundamentally misunderstanding "collective". The whole point is that the collective ability is only possible through the collective. You can't reduce it to the component parts. Looking at neurons or individual ants gives you nothing.


Ego is the I, the self-awareness, and selfishness is a result of it. We show this with just about everything we do, as do most organisms that are aware of themselves.
You are using the definition to define itself.
 

Thief

Rogue Theologian
Philosophical zombie.

The materialistic things we associate with consciousness are not consciousness itself.

A frubal for this.

I see our abilities to interact with this world as the result of having a body.
And the body was made to produce a unique spirit.

Together all at once, your body ages and will fail.
Your spirit is maturing as you learn.

If it's all chemistry...only the grave awaits you.
It's all dust.

If you are a developing spirit...your thoughts and feelings will go on.
 

Looncall

Well-Known Member
1) There is no consensus concerning the nature of the atomic level. We don't know how it might be broken down.
2) Breaking down "thought" (and similar neural functions) into individual neurons is meaningless. It's like breaking down words into letters. You can look at individual neural behavior, but it won't tell you anything much.


We can't.



If all it took was jamming together component parts, then the number of ants wouldn't matter qualitatively. Yet it does. 100 ants run around and die. A larger number suddenly engages in complex, collective intelligence. If we could simply reduce such systems to component parts, they wouldn't be complex in the way they are.



What does that have to do with the "ego"?


You do still need the individual ants. The behaviour of the collection of ants does depend on the properties of the individual ants. There is nothing mystical about complex systems that have numerous parts. The properties of the whole depend on the properties of the parts and how the parts are organized to form the whole.

You do need to understand the neurons to understand what the brain does. You cannot sneak souls in through the back door just by appealing to complexity.
 

LegionOnomaMoi

Veteran Member
Premium Member
You do still need the individual ants. The behaviour of the collection of ants does depend on the properties of the individual ants. There is nothing mystical about complex systems that have numerous parts. The properties of the whole depend on the properties of the parts and how the parts are organized to form the whole.

This is all true. But it doesn't get us much closer to understanding the system. More importantly, you cannot understand the whole simply by understanding the parts, which was my point. This was what I was objecting to:
Like only the collective of the ant colony makes it seem like they are doing things that are intelligent, yet a single ant is aware in itself with its own intelligence.

The whole point of complex systems which exhibit emergent properties is that they are irreducible. You cannot understand the system only by understanding how the component parts work. You need to understand what you bolded above (the "organized to form a whole" part).
You do need to understand the neurons to understand what the brain does. You cannot sneak souls in through the back door just by appealing to complexity.
Who said anything about souls? And in fact I specifically talked about the importance of understanding neurons, including in my response to your last post:
A main area of disagreement concerns neural firing (how neurons "communicate"). Most people know that neurons "fire": they generate an electrical signal called an action potential. However, most undergrad psychology textbooks, from intro textbooks to those specifically on neuroscience, talk about these signals as "all or nothing spikes" such that one gets the impression these are the basic units of information in the way bits are for a computer. However, this is incredibly inaccurate. Rather, something about the timing of this firing is really the "basic unit".

The problem is identifying what the "something" is. Much of the debate focuses on whether neurons use rate coding, temporal coding, both but primarily one of the two (and in that case when and why), or both more or less equally (and then again when and why).

From Manrubia, Susanna C.; Mikhailov, Alexander S.; Zanette, Damian H. (2004). Emergence of Dynamical Order: Synchronization Phenomena in Complex Systems. World Scientific Publishing Co., p. 312:
"The biological “hardware” on which the brain is based is extremely slow. A typical interval between the spikes of an individual neuron is about 50 ms and the time needed to propagate a signal from one neuron to another is not much shorter than such an interval. This corresponds to a characteristic frequency of merely 100 Hz. Recalling that modern digital computers should operate at a frequency of 10^9 Hz and yet are not able to reproduce its main functions, we are led to conclude that the brain should work in a way fundamentally different from digital information processing.

Simple estimates indicate that spiking in populations of neurons must be synchronized in order to yield the known brain operations. “Humans can recognize and classify complex (visual) scenes within 400-500 ms. In a simple reaction time experiment, responses are given by pressing or releasing a button. Since movement of the finger alone takes about 200-300 ms, this leaves less than 200 ms to make the decision and classify the visual scene” [Gerstner (2001)]. This means that, within the time during which the decision has been made, a single neuron could have fired only 4 or 5 times! The perception of a visual scene involves a concerted action of a population of neurons. We see that exchange of information between them should take place within such a short time that only a few spikes are generated by each neuron. Therefore, information cannot be encoded only in the rates of firing and the phases (that is, the precise moments of firing) are important. In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated."
 

Copernicus

Industrial Strength Linguist
I wish I had more time to reply to some of these great posts, but my replies tend to be too wordy. By the time I'm ready to post something, the conversation has shifted to other topics. I can't seem to get my army of ants mobilized and in formation quick enough. :) But carry on. I'm enjoying the read.
 

PolyHedral

Superabacus Mystic
Algorithms are pretty much defined by a stepwise notation
But look at my translation of the equation - it doesn't specify any ordering or stepwise process. Although any given implementation of map will probably do the calculation in the order of the list, it doesn’t have to match its spec – in fact, a commonly used variant launches a thread for each item and so doesn’t do the calculation in any deterministic order at all. Reduce similarly has no limits on ordering, because addition is commutative. For all I care, reduce could calculate all the prime-numbered entries and then work backwards through the composites - it’d still do what I wanted.
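For concreteness, here's a minimal Python version of the kind of map/reduce translation I mean (a toy sketch of my own, not the actual code from before): nothing in it pins down the order in which the individual terms get evaluated.

```python
# My own toy restatement of the map/reduce point: the spec says *what*
# to combine, not the order in which to combine it.
from functools import reduce
from operator import add
from concurrent.futures import ThreadPoolExecutor

xs = range(1, 101)
square = lambda x: x * x

# Sequential: map, then reduce, walking the list in order.
sequential = reduce(add, map(square, xs))

# Threaded map: terms may be computed concurrently, in no fixed order;
# because addition is commutative and associative the result is identical.
with ThreadPoolExecutor() as pool:
    threaded = reduce(add, pool.map(square, xs))

assert sequential == threaded == 338350   # sum of squares 1..100
```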

The latter requires a specific algorithm, while the former requires only general algorithms not specific to any particular number or operations on these numbers.
The latter requires a specific algorithm because we want the specific value of pi. If we just wanted any value, even a transcendental one, there would probably be some formula schema which gives us those. (Although we might run into problems because the set of transcendentals is uncountable.)

There is no way you can know (even approximately) what will follow the nth term until you compute n + 1.
Sure I can. There are many formulas that present pi as a series of predictable terms, e.g. the Leibniz formula.
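For reference, the Leibniz series in question (a standard identity, included here only so the "predictable terms" are visible):

```latex
\frac{\pi}{4} \;=\; \sum_{k=0}^{\infty} \frac{(-1)^{k}}{2k+1} \;=\; 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
```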

According to whom?
Sorry, I meant to say that an equation is a syntax tree, but that should be obvious from the structure and the fact that operator precedence is a thing. It is a tree of values depending on one another, with the syntactical atoms being the variables and numerals.

Hindsight is 20/20. It took computer scientists and cog. sci. researchers a while to get there.
The point I was trying to convey is that your methods to emulate humans must be very abstract, because human behaviour is very abstract. I would think that your algorithms would have to deal with very abstract concepts, (algorithms to build techniques to write procedures...) and the process to "expand" them into concrete actions would be quite long. (Consider multiplying very large matrices. Simply writing "AB" can be considered as one high-level action that wraps the many hundreds/thousands of computations involved into a single packet.)

However, there's no reason to think that conventional software can't do that.

There are tons of theories about how the brain goes from the neural interactions we know something about to conscious thought and conceptual processing, but they all rely on ill-defined notions.
I didn't see anything in GEB that was particularly ill-defined. Like I said, it is very similar to some models of object-orientation. (Although I don't think I've seen any one language implement all of the features one would need.)

That's because Apple is God.
So you're a member of the cult of Jobs? :p

["There's" syntax]
I personally don't see what you mean, since those examples make perfect sense to me. "There's" is a contraction of "there is", which is (AFAIK) synonymous with "there exists [now]." Specifically in your example of "it's useless" vs. "there's no point," I don't see an issue. The phrases are constructed that way because useless is an adjective, and a point is a noun.

They have everything to do with the words themselves. There are general rules, sure. [...] So we have rules for what is "allowable" here, but only once it is clear that we have parallel uses of "the + comparative".
Oh, I think I see what you mean now. You mean the schema for "the X-er, the Y-er" isn't derivable from any more basic rules of syntax. However, I think all that’s happened there is that you now have several layers of rules – those which control how the elements of language work, i.e. the syntax, and an extra layer of less general rules which are the exceptions. (for instance, that “the [adjective]-er, the [different adjective]-er” is a valid clause/sentence.)

That's language. Schemata which range from the very abstract to relatively general/abstract, to prefabs, idioms, and "word + preposition", to lexemes.
[…] Rules simply fail to do much with language unless you incorporate the rules specific to words and certain combinations of words.
Schemata are rules. I fail to see a problem.

It's because rules and words are not two separate things.
But they absolutely are. We do have rules that govern individual words, but those aren’t the same thing. (And can be ignored and/or generated on the fly with far greater ease than the more fundamental rules of syntax, etc.) I’m picky about this because, as mentioned, it’s impossible to derive what the rules are from the words, without seeing the words used, so they’re very much distinguishable.

Intelligent and having a "mind" are far from the same thing. Intelligence does not require awareness, understanding, conceptual representation, etc.
A stock trader is very much aware of the state of the market, and represents such abstractions as risk and liquidity.
The difference happens to be that the human can tell you it does these things.

The "somewhat obvious" reversal doesn't exist for the vast majority of life throughout the history of this planet. It happens that certain species have some capacity to understand concepts in some way.
Perhaps the statement should be “is required to be as effective at life as we are.” We are so powerful, in terms of survival, because we built the infrastructure needed for memetic evolution (and the orders-of-magnitude speed increase that provides over genetic evolution) and then augmented that with a drive to write new memes, i.e. curiosity, and that strategy turned out to be evolution’s game-breaker – there are precious few circumstances where a non-intelligent animal can beat a human.
IOW, to live on a level capable of competing with humans, you need to be able to abstract. I can’t think of any reason that the reverse could apply – you can build abstractions about chess just as much as you can about 3D space.

The question is how can meaningless, automated responses to stimuli (which is what most life and what computers are capable of) become conscious responses to concepts?
What belies a conscious response that cannot be reproduced through chaotic, adapting “automation?” IMO, a p-zombie doesn’t exist, because consciousness is not a special property – to be able to appear conscious, you need to actually be doing the modelling that qualifies you as conscious.
One argument about the missing link is that the ability to "learn" (to react adaptively according to computational intelligence paradigms) is simply not enough. Learning through interaction with "things" allows a system to gain some abstract concept of "thing" (the basis for nouns) based on experience with actual, physical objects. Same with actions (verbs) and experience observing and engaging in activity.
That seems to arbitrarily declare that our environment of 3D space is somehow more important than any other. Internet streams are just as much “things” as chairs.
The basis for Rosen's argument is metabolic processes in organisms (specifically metabolic closure). However, since his work the comparison between metabolic closure in organisms and quantum properties hasn't gone unnoticed. From "Bridging the Gap: Does Closure to Efficient Causation Entail Quantum-Like Attributes?"
I’ve read around Rosen’s argument, and as far as I can understand it (not very) I don’t like it. He seems to be using a mix of vitalism and Platonism to argue that “organisation” is some nebulous thing disconnected from the physical elements that make up a system.

….the presence of ‘circular’ self-referential causal loops…
Weren’t you complaining earlier that you didn’t like Hofstadter’s use of this concept?

In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated."
Doesn’t this contradict what you said earlier, or am I misremembering? You mentioned that information is conveyed by the rate, not the timing.

First, there is no agreed upon model of quantum mechanics (in fact, there isn't even much of an agreement on what the so-called "Copenhagen Interpretation" actually is).
There is: the equations. If you want to make the relativistic paperwork disappear, then you can use the path integral.
Epistemology? What’s that? :p
 

PolyHedral

Superabacus Mystic
(cont.)
Note: This and the previous post were edited for space, but I still didn't squeeze in under 10k.
A fundamental area of contention is the relationship between mathematical models and reality. Take the wave function:

"Quantum mechanics was discovered roughly a century ago. In spite of its long
history, the interpretation of the wave function remains an open question."
Nakahara & Ohmi's Quantum Computing: From Linear Algebra to Physical Realizations
They can shut up and calculate. :p More specifically, look at what I said carefully: I didn’t say anything about how the wavefunction relates to reality, I said it was physically unmeasurable, and it is. Measuring the state makes the whole thing vanish and produce a base state. (With a sideways glance and an innocent whistle that says, “It’s always been like that. Honest, guv’nor.”)

"The fundamental problem with the classical strong Church–Turing Thesis is that it appears that classical physics is not powerful enough to efficiently simulate quantum physics" from Kaye, Laflamme, & Mosca (2007). An Introduction to Quantum Computing.
This is false. A Turing machine can perfectly well simulate a quantum computer; in fact, all quantum algorithms fall inside PSPACE. (However, the polynomial is impracticably large for any real-life machine to simulate a non-trivial quantum system.)
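For concreteness, here's a deliberately naive state-vector simulator (my own sketch, nothing beyond plain NumPy): it simulates qubits exactly on classical hardware, and it also shows exactly where the impractical cost comes from, since the state array doubles with every added qubit.

```python
# Toy classical simulation of qubits (my own sketch). Exact, but the state
# vector has 2**n complex entries, so memory and work double per qubit.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_gate(state, gate, target, n_qubits):
    # Kronecker-build the full 2^n x 2^n operator, then apply it.
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |000>
for q in range(n):
    state = apply_gate(state, H, q, n)

print(np.round(state.real, 3))  # eight equal amplitudes of 1/sqrt(8) ~ 0.354
print(state.size)               # 2**3 = 8 entries; 50 qubits would need 2**50
```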


1) The act of observation collapses the wave function such that it takes on one of two states. However, this means that in order to make qubits correspond to something like "bits" or "information units" we force it to do so in a very specific way.
Not really. The bits are naturally… well, bits. They fall to one of two states when they get measured. The only new trick is being able to do operators on the wavefunction instead of on the measurable state.
2) The big (hopefully) advantage of quantum computers will be the entanglement of quantum states, which is neither binary nor reducible to binary encoding.
See above.

According to whom? And based upon what?
The Dirac equation (which is more accurate than Schrodinger’s) is a set of PDEs. I don’t know if a solution exists in all situations, but since no mention is made of solutions not existing, I assume so. Since they exist, they can be computed. Et voila: you’ve just solved the universe. (And now the key question: what did you run the calculation on? :p)

EDIT: Also, missed this.
The biological “hardware” on which the brain is based is extremely slow. A typical interval between the spikes of an individual neuron is about 50 ms and the time needed to propagate a signal from one neuron to another is not much shorter than such an interval. This corresponds to a characteristic frequency of merely 100 Hz. Recalling that modern digital computers should operate at a frequency of 10^9 Hz and yet are not able to reproduce its main functions, we are led to conclude that the brain should work in a way fundamentally different from digital information processing.

That should say, "the brain should work in a way fundamentally different from current algorithms." To expand what we have currently to all we possibly have is a category error.
 

idav

Being
Premium Member
The whole point of complex systems which exhibit emergent properties is that they are irreducible. You cannot understand the system only by understanding how the component parts work. You need to understand what you bolded above (the "organized to form a whole" part).

Who said anything about souls? And in fact I specifically talked about the importance of understanding neurons, including in my response to your last post:
The whole idea of thought boils down to something that is saved on one or several thousand neurons. It is data, is it not? If it isn't data, is it a sequence of proteins doing the coding? When you remember a picture are you remembering/recalling data? Understanding and interpreting it is another thing and computers are starting to be able to do that as well.
 

LegionOnomaMoi

Veteran Member
Premium Member
But look at my translation of the equation - it doesn't specify any ordering or stepwise process.

Because it is inherent to the formal language one uses. Whether one writes an algorithm in python, C#, or even pseudo-code, either the notation itself specifies a finite, step-like procedure, or the computer will read it as such (either through a compiler or through some interpreter).

Although any given implementation of map will probably do the calculation in the order of the list, it doesn’t have to match its spec – in fact, a commonly used variant launches a thread for each item and so doesn’t do the calculation in any deterministic order at all.
Algorithms can be recursive, they can loop, and so on, but this doesn't change the fact that they are a notational device to show the finite states they will (or can) generate. This goes back to a time before computers existed. Algorithms grew out of formal languages and are fundamental to computer science/computation theory because they consist of these well-defined steps. Computers do not tolerate ambiguity, and algorithms are a conceptual, abstract notion designed to emphasize the need for specificity/well-defined procedures. In a very real way, algorithms and an algorithmic approach are ways for humans to turn a problem which can be formulated and understood by a human in perhaps a few words ("turn the following verbs into participial form") into procedures which can be understood by a computer (either directly, if written in code, or easily enough by adapting pseudo-code into some specific code).
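A toy Python sketch of that exact contrast (my own, and deliberately naive about English morphology): the one-line human instruction has to become an explicit, well-defined sequence of cases before a machine can act on it.

```python
# "Turn the following verbs into participial form" -- a human states it in one
# breath; the machine needs every case spelled out. (My own naive sketch; real
# English morphology has far more exceptions than these rules cover.)
def present_participle(verb):
    if verb.endswith("ie"):                       # die -> dying
        return verb[:-2] + "ying"
    if verb.endswith("e") and verb != "be":       # make -> making
        return verb[:-1] + "ing"
    if (len(verb) >= 3 and verb[-1] not in "aeiouwxy"
            and verb[-2] in "aeiou" and verb[-3] not in "aeiou"):
        return verb + verb[-1] + "ing"            # run -> running
    return verb + "ing"                           # walk -> walking

print([present_participle(v) for v in ["walk", "make", "run", "die"]])
# ['walking', 'making', 'running', 'dying']
```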

Equations don't just predate algorithms, they are fundamentally different (which is not to say that they do not share properties, but then natural language shares properties with algorithms). Equations can have infinitely many solutions, no solutions, be probabilistic, describe a physical system in ways which would require much more for some simulation on a computer, and so forth. After all, equations like:

[equation image]

and

[equation image]

are both extremely useful in mathematics as they are, yet are either impossible or extremely difficult to turn into algorithms.

The latter requires a specific algorithm because we want the specific value of pi. If we just wanted any value, even a transcendent one, there would probably be some formula schema which gives us those. (Although we might run into problems because the set of transcendentals is uncountable.)

Sure I can. There are many formulas that present pi as a series of predictable terms, e.g. the Leibniz formula.
These two comments are related and either you are joking or you misunderstood me or I you. The Leibniz formula is the basic definition of pi divided by the constant (i.e. 4). The errors can be computed, but like any series representation or formula for pi, the Leibniz formula (used as an approximation or to actually compute pi) suffers from the same issues, because as far as we know the terms of Pi are ML-random. We can get Pi to a specific term or approximate to a specific length, but unlike 1/3, following terms cannot be predicted until they are computed (or at best until an approximation is, which just saves time).


I didn't see anything in GEB that was particularly ill-defined.
"strange loops" are well-defined? If they were, we'd have stong AI. Like emergence, strange loops and so forth only well-defined until it matters. "The brain follows the following formulae, and then X stuff happens and hey presto we have consciousness." Well, great. But "well-defined" means that whatever "X stuff" is we can make a computer do it. 30+ years since Hofstadter and we have nothing remotely close to an artificial "mind". Why? Because "strange loops" aren't well-defined. And in his book which focuses on the loops themselves, he says so quite explicitly.

So you're a member of the cult of Jobs? :p

Heck no. I'm a heathen, hunted and pursued by "hip" people (probably the wrong term to use, but as I'm not "hip" I wouldn't know the right term) in Cambridge sporting iPhones and carrying iMacs who tell me I'm going to hell (which is kind of like a cross between Tron and pac-man, in which demons shaped like apples devour you for eternity).

I personally don't see what you mean, since those examples make perfect sense to me. "There's" is a contraction of "there is", which is (AFAIK) synonymous with "there exists [now]."

"There's Johnny at the door"
* "There exists Johnny at the door"
If this were simply an issue of "rules" apart from words then we have some problems (most examples adapted/taken from Lakoff, 1987):

1a: There's a Japanese executive in the waiting room.
1b: A Japanese executive is in the waiting room

These mean pretty much the same. However,

2a: There's a Japanese executive in our company
*2b: A Japanese executive is in our company

do not. The "transformations" on the syntax are not different, but one "works" (synonymy is maintained) and not the other.

Even better:

1c: There's a receptionist in the waiting room
vs.
*1d: A receptionist is in the waiting room

Again, the meaning isn't the same, although this time the syntactic "transformations" are identical to the first pair (1a and 1b), where the syntactic manipulations didn't change the meaning. Additionally, if
"There's" is a contraction of "there is", which is (AFAIK) synonymous with "there exists [now]"
and if words and syntax are separate components, then why can't I say things like
**"There will go Jack to the ballgame"
**"There can't come any muggers here" but I can say
"There goes Jack to the ballgame"

Not only does "there/there's" specify rules specific to these constructions, but these rules also differe depending on different "senses" of "there/there's" constructions. The existential use in "There's laws against that behavior" has the same surface structure as "There's students against that wall". However, treating these as constractions of "there is" fails:
**There is laws against that behavior
**There is students against that wall

Additionally, these represent two different types of "there" constructions. Similar "surface structures" are grammatical in one but not the other:

"There's Jack over there"
vs.
**"There's laws over there"

This inseparability of various word senses, idioms, formulaic phrases (prefabs), collocations, etc. and syntax/rules pervades language. Take the English "passive": "I taught the student" becomes "the student was taught by me", a simple rule for understanding and generating grammatical sentences...until you stop using textbook examples:

"Jack weighs 150 pounds"
** "150 pounds is weighed by Jack."

or the reverse:

** "Jack walked a cane."
"Jack walked with a cane."

*"This bed was slept in by Jack"
"This bed was slept in by George Washington"

Oh, I think I see what you mean now. You mean the schema for "the X-er, the Y-er" isn't derivable from any more basic rules of syntax. However, I think all that’s happened there is that you now have several layers of rules – those which control how the elements of language work, i.e. the syntax, and an extra layer of less general rules which are the exceptions.
That would be great, if it weren't for the fact that such exceptions comprise most of language.
 

LegionOnomaMoi

Veteran Member
Premium Member
Schemata are rules. I fail to see a problem.
They are more than that, at least in that the "rules" involve both general metaphors and metonymy which aren't formal and can't be treated formally (e.g., why one can build an argument, build a building, construct a building, build a construct, construct a construct, and so on, but win a battle/argument/debate/fight, etc.).

Parts of speech are schemata. And for a long time (and still by some linguists) these were treated as just notations for what operations could be performed. The problem is that tests to determine what is or isn't a noun or verb fail even in a single language (and trying to decide what, if anything, qualifies as a noun, verb, or adjective in another language is frequently hopeless). This is true even of English, where one has to frequently and arbitrarily determine that e.g., X is both a noun and a verb or Y is an adjective but can be used like a noun:
"Jack elbowed me"
"Jack punched me"
"Jack helped me"

All verbs, right? After all, they take verbal endings. But why do these make them verbs unless we assume they do? And once we do, what happens when they behave like nouns?

"Jack's punch caught me in the chin"
"Jack's elbow caught me in the chin"

Moreover, schemata don't separate words from the constructions in which they occur, which means there is no real divide between rules (syntax) and words (lexicon). The same verb can allow different schematic "frames" which can't be divided from syntax:

1a :Jack's elbow caught my eye.
1b :Jack's elbow caught my attention
can mean the same thing or something very different (1a can mean "Jack elbowed me in the eye"). Yet

2a :Jack's punch caught my chin.
2b :Jack's punch caught my attention

cannot mean the same thing. Notice that all four examples are identical in structure and each doublet differs by only one word (and the difference between any 2 of the 4 is at most two words). If rules were so separate from words, why can't I replace 2a with "Jack's help caught my chin" but I can replace 2b with "Jack's help caught my attention/eye"?

(And can be ignored and/or generated on the fly with far greater ease than the more fundamental rules of syntax, etc.)

Only what you wish to "ignore" constitutes perhaps more than half of speech. Rules specific to words, formulaic/prefabricated combinations, idioms, and similar "exceptions" are estimated to account for anywhere between 1/3 to over 1/2 of language (all the sources I wanted to link to cost money if you don't have access, so this paper was the best I could do).

I’m picky about this because, as mentioned, it’s impossible to derive what the rules are from the words, without seeing the words used, so they’re very much distinguishable.

This isn't strictly true. After all, I know that the word "weigh" obeys certain rules verbs do and not others. But perhaps "distinguishable" is not the best choice of words. Inseparable is better. The point is that the goal of generative linguistics was to be able to use rules to manipulate words like meaningless units as much as possible, only it turns out that this doesn't get you anywhere. There are too many exceptions to a set of too few really "general" rules.

A stock trader is very much aware of the state of the market, and represents such abstractions as risk and liquidity.
The difference happens to be that the human can tell you it does these things.
That difference is all that matters when it comes to AI/computational intelligence.


Perhaps the statement should be “is required to be as effective at life as we are.” We are so powerful, in terms of survival, because we built the infrastructure needed for memetic evolution (and the orders-of-magnitude speed increase that provides over genetic evolution) and then augmented that with a drive to write new memes, i.e. curiosity, and that strategy turned out to be evolution’s game-breaker – there are precious few circumstances where a non-intelligent animal can beat a human.
We are the only members of our genus to survive. Even Neanderthals died out, and we aren't entirely sure that our intellect was superior. In fact, we aren't even really sure how to disentangle the genus and its evolutionary trajectories; see e.g. the discussions in Patterns in Prehistory: Humankind’s First Three Million Years by Wenke & Olszewski (OUP, 5th ed., 2007). All we really know is that for however many tens of thousands of years we've been around, we were far less likely to make it than ants or bacteria. Sharks, ants, bacteria, and various other life forms are vastly more successful in terms of survival.

That seems to arbitrarily declare that our environment of 3D space is somehow more important than any other. Internet streams are just as much “things” as chairs.

They aren't. We are used to the concept of "nouns" so we take the idea of equating abstract and concrete things for granted. But this world is a 3D world (I don't think non-Euclidean 4D geometries are really relevant here), for a computer or us. Our brains are uniquely suited for the type of thinking we do (abstract, conceptual processing). Other than the starting assumptions back during Turing's day (which even he questioned later), we don't have much reason to think that everything can be reduced to finite state machines.

I’ve read around Rosen’s argument, and as far as I can understand it (not very) I don’t like it. He seems to be using a mix of vitalism and Platonism to argue that “organisation” is some nebulous thing disconnected from the physical elements that make up a system.

Not disconnected. Irreducible.

Weren’t you complaining earlier that you didn’t like Hofstadter’s use of this concept?
I don't object to the use, just the idea that recursion and vaguely defined "strange loops" somehow mean much of anything. In particular, I dislike the assumption, too often treated as fact, that this recursive behavior is no different from that within computer programming, because there is every evidence that it is worlds apart. "Self-referential causal loops" mean that the neurons within strongly coupled networks both cause the network and are caused by it at the same time. We can't break them down into components, which means that they are fundamentally different from Turing machines.

Doesn’t this contradict what you said earlier, or am I misremembering? You mentioned that information is conveyed by the rate, not the timing.

What I said was that there is a disagreement over rate vs. temporal encoding for individual neurons. However, most of the time the behavior of individual neurons is meaningless, because the correlations between neurons are what matter.
 

LegionOnomaMoi

Veteran Member
Premium Member
cont.

(cont.)
Note: This and the previous post were edited for space, but I still didn't squeeze in under 10k.
I know the feeling.

They can shut up and calculate. :p More specifically, look at what I said carefully: I didn’t say anything about how the wavefunction relates to reality, I said it was physically unmeasurable, and it is.

The problem was this statement:
There is only a finite amount of information associated with any given system.
which you defended with a link to quantum information:
The problem is quantum information theory says nothing about "finite information within any given system". The term "information" here (and elsewhere) has a meaning which doesn't correspond to any inherent, physical structure of reality. IOW, the fact that quantum information theory combines quantum mechanics and computer science doesn't say anything about some ontological truth about "information" within a system.


Yet you linked to a wiki page saying the exact same thing earlier:
From that page: "In general, quantum mechanics does not allow us to read out the state of a quantum system with arbitrary precision. The existence of Bell correlations between quantum systems cannot be converted into classical information. It is only possible to transform quantum information between quantum systems of sufficient information capacity."

A Turing machine can perfectly well simulate a quantum computer
You seem to be conflating quantum information and computing with quantum mechanics/systems. The former uses the latter. They aren't the same. Moreover, although it is perhaps true that if quantum computing is possible (in the way hoped) it will simply be "faster", the "faster" here is a qualitative difference, such that "this speed advantage is so significant that many researchers believe that no conceivable amount of progress in classical computation would be able to overcome the gap between the power of a classical computer and the power of a quantum computer" (from Nielsen and Chuang's Quantum Computation and Quantum Information, Cambridge University Press, 2000). Additionally, it may be that there are certain protocols and procedures which involve entanglement of qubits (e.g. superdense coding) for which no classical counterparts exist even in principle.

Quantum computers are not quantum reality. They are ways we can manipulate (hopefully, for the most part) quantum mechanics for superior computing power.

Not really. The bits are naturally… well, bits. They fall to one of two states when they get measured.
Entanglement allows qubits to behave quite differently than binary code, at least in theory. Of course, in reality, most of quantum computing is all just "in theory."
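To illustrate the point (a standard textbook construction, sketched by me in NumPy rather than taken from any of the sources quoted here): two entangled qubits have a joint state that cannot be written as two separate one-qubit states, so it cannot be read off as a pair of independent bits.

```python
# My own sketch of the standard construction: a Bell state built from a
# Hadamard and a CNOT, whose joint state does not factor into two
# independent single-qubit states.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)      # |00>
state = CNOT @ (np.kron(H, np.eye(2)) @ state)     # H on qubit 0, then CNOT
print(np.round(state.real, 3))                     # [0.707 0. 0. 0.707]

# A product state a (x) b, reshaped into a 2x2 amplitude matrix, has rank 1.
# This state has rank 2, so no pair of separate qubit states reproduces it.
print(np.linalg.matrix_rank(state.reshape(2, 2)))  # 2 -> entangled
```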

The only new trick is being able to do operators on the wavefunction instead of on the measurable state.
Not really. "The new trick" is the increase in gates and the entanglement of qubits prior to collapse.


The Dirac equation (which is more accurate than Schrodinger’s) is a set of PDEs. I don’t know if a solution exists in all situations, but since no mention is made of solutions not existing, I assume so.
1) The Dirac equation is again a mathematical formalism which corresponds to physical reality in an as yet unknown way.
2) Even if a function has a Jacobian matrix, that doesn't make it differentiable, and even if it is, that is simply a linear transformation. Such linearization is an approximation of nonlinearity (which is most of reality). In particular, the Dirac equation (or the Lorentz-Dirac equation) is such an approximation.
3) The solutions are constructed. Because of the nature of experimental measurements, QM formalism (including the use of Dirac's equation) involves a projection onto normal space of the system which allows for multiple interpretations (which is intuitively obvious, as it typically also involves multiple solutions).


Since they exist, they can be computed
What do the solutions mean (especially when the equation yields two)?

That should say, "the brain should work in a way fundamentally different from current algorithms." To expand what we have currently to all we possibly have is a category error.
First, to be clear on the numbers in that quote: the characteristic frequency is 100 Hz for neurons versus 10^9 Hz for computers. Second, the "should" there is not an American usage. It means "seems" (more or less). But when you have unbelievably "slow hardware" (brains) which easily perform tasks that something operating literally around a million+ times faster can't, chances are the mechanisms are fundamentally different.
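For the record, the ratio behind that "million+" figure, using those numbers (so the gap is really about ten million to one):

```latex
\frac{10^{9}\,\text{Hz (computer)}}{10^{2}\,\text{Hz (neuron)}} = 10^{7}
```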
 