
On the origin and function of minds

Copernicus

Industrial Strength Linguist
I wish he was. And maybe it's just that I'm working literally only ~2 miles away from where Chomsky does (and Pinker works in the same building), but Chomsky's rejection of the behaviorist account of language, along with his insistence that language and the brain need to be studied together (which, actually, came later, although apparently even some of his students don't realize that), put him alongside Shannon & Weaver, Simon & Newell, Miller, and a few others as the "founders" of cognitive science. When you step out of the elevator on the floor I work on, there is a glass-encased display with pictures, plaques, and other memorabilia to honor the great achievements of those giants who founded cognitive science. Among other things (my favorite is a blow-up of Miller's paper which begins "My problem is that I have been persecuted by an integer"), there is an original, 1957 copy of Syntactic Structures. I've had to teach about his "foundational work," and when I was an undergrad my intro to cognitive science textbook also described his work as among those which created the field.
I have great respect for the East Coast enclave, but it does tend to have a "not invented here" attitude towards new ideas until they suddenly become "invented here". Lakoff learned his linguistics from Chomsky, but he was located essentially where you are--not too far from ground zero. Ultimately, he became an apostate and was sort of "excommunicated" by the establishment there. (McCawley and Postal once mused that Chomsky had become the "Pope" of linguistics, and they wished that he would have students who did more yelling and less worshiping.)

Syntactic Structures was the clearest work that Chomsky ever wrote. It might amuse you to know that Fillmore once used it as a text for an intermediate course in syntax. It was the best syntax course I ever took.

That, I think, was a major blow for the development of the field. The other (related) problem was the number of computer scientists vs. psychologists. Classical cognitive science wasn't just a friend of computer science; they were lovers.
I do see that as something of an eastern perspective. I think that you are seeing it from the perspective of what I would call "formal linguistics." The more formal approach to generative grammar has always had good sex with computer science, but not everyone favors the missionary position. :)

Apart from the influence computational theory had, there was also the fact that psychology and behaviorism were almost one and the same and had been for a few decades. Once Chomsky's paper obliterated Skinner's Verbal Behavior, and the work of others (particularly Tolman, Ritchie, and Kalish) took down the rest of the behaviorist view of the brain, telling psychologists that linguistic behavior was important wasn't likely to go over very well. It still hasn't for many. But I'm hoping I'm right about the way things are changing.
One of the big problem areas is language acquisition, which Chomsky's approach was never able to account for in very insightful ways. It was really pretty obvious that he got things wrong, because intuitions of well-formedness don't match up with stages of acquisition. Or take language loss. You can describe what goes on in aphasics as the acquisition of new phonological rules. I once asked Chomsky why he thought that was. Why should a conk on the head cause you to add rules to the "grammar"? You would expect brain damage to cause rule loss. Naturally, he didn't have an answer and didn't see the relevance of the question. There is an answer, but it really suggests a very different direction from the one he and his generative school had been going in.

That's something I never knew. I didn't realize that Chomsky and the rest saw themselves as reviving anything.
Well, Chomsky and Halle did think that they were reviving Edward Sapir's concept of the psychological phoneme (although they weren't). That is why they named their seminal work on phonology The Sound Pattern of English. That was a take on Sapir's famous paper "The Sound Pattern of Language." What Chomsky and Halle were doing was to look for precursors to a psychological perspective on language in order to give their work some historical leverage. Linguists in the structuralist period had basically suppressed work that had made reference to the psychology of language, so their "revival" was part of the ideological battle to overturn approaches based on structuralism.

The formalism (and even the tree diagrams) pre-dates Chomsky. And if memory serves, the innateness of grammar wasn't paired with his theory about infants until later (albeit still early). But I have never found the "poverty of stimulus" argument convincing, particularly when there is (at least now) a good deal of evidence that it is wrong. And if speech were really the result of some combo of predicate calculus and combinatorial algorithms, the students I've taught or tutored would be doing a whole lot better in math.
I actually attended some lectures by Chomsky's mentor, the structuralist linguist, Zellig Harris, when I was teaching at Columbia. I came to understand where Chomsky got his linguistic chutzpah from. ;)

You just had to mention again that you knew Fillmore. As if I weren't already jealous enough.
He was one of my mentors when I was an undergrad and grad student, but I knew most of the major figures in the field in the 1970s. It was an exciting time to be a linguist.
 

LegionOnomaMoi

Veteran Member
Premium Member
I have great respect for the East Coast enclave, but it does tend to have a "not invented here" attitude towards new ideas until they suddenly become "invented here".

And once "invented here", it was never anywhere else before.

As I understand it, however, Chomsky was and is more...um...certain of his own brilliance and abilities, as well as of the error of all those who dare to question them? Part of this impression comes from those I've spoken with who have taken classes and/or worked with him and who will take the long way around to say "arrogant" (and always in something akin to a whisper, as if Chomsky could sense those speaking his name in vain), but eventually get there. There was also a certain professor at Cornell whom I know of and who not only disagreed with much of TGG, but also with Chomsky's attitude. Little of his opinion on either subject exists in print, but there is at least one article ("The Impact of Transformational Grammar upon Stylistics and Literary Analysis," Linguistics, 1971) in which some of it is expressed, albeit relegated to a footnote: "Chomsky (1965) has moved a long way to meet the objections of his critics, and his tone is considerably less positive than in Chomsky (1961). The latter piece shows Chomsky as a resourceful but ungracious controversialist who rarely if ever admits to error. No doubt it is precisely because of this attitude that other linguists take an unholy, or at least unscholarly, joy in baiting the transformationalists. Had Chomsky admitted, for example, in his 1964 dispute with Fred W. Householder, that he had on occasion used the term 'simplicity' in its common or garden sense, I for one would not have felt such a surge of Schadenfreude in reading Erica C. Garcia's crushing demonstration that this was so."

Lakoff learned his linguistics from Chomsky, but he was located essentially where you are--not too far from ground zero. Ultimately, he became an apostate and was sort of "excommunicated" by the establishment there. (McCawley and Postal once mused that Chomsky had become the "Pope" of linguistics, and they wished that he would have students who did more yelling and less worshiping.)

I know there is a book on the "linguistic wars" (I believe the author coined the phrase, but I could be wrong), and I have been meaning to purchase it for several years now. As I understand it, Lakoff was not the only one "excommunicated".

Syntactic Structures was the clearest work that Chomsky ever wrote. It might amuse you to know that Fillmore once used it as a text for an intermediate course in syntax. It was the best syntax course I ever took.

Amusing and interesting. But how much of the quality of the course was due to the book, as opposed to the instructor? I never read Syntactic Structures. I did, however, read The Case for Case. Chomsky's later evolutions of generative grammar seemed to have made much of his early work irrelevant, but despite being outdated, Fillmore's early approaches to case grammar were still quite worth reading.

I do see that as something of an eastern perspective. I think that you are seeing it from the perspective of what I would call "formal linguistics." The more formal approach to generative grammar has always had good sex with computer science, but not everyone favors the missionary position. :)

It's not exactly the eastern perspective (and, just for the record, I'm not disagreeing out of spite just because you so clearly outdid my use of metaphor). I changed my plan for grad school and added a minor (too late to add a third major) in cognitive science after my IE studies led me to Lakoff, Langacker, Fillmore, and a number of later linguists whose work I thought superior to various generative grammars. This meant reading much more about cognition, neuroscience, and other areas within or related to cognitive science apart from linguistics. It was rather disheartening to find such a disparity in how non-linguists approached thought, concepts, "symbols", and thus to some extent language. I had assumed (given the name) that "cognitive linguistics" would be the framework accepted by at least most cognitive scientists. I was so disappointed that I made a research project out of the issue in order to figure out what happened and why.

The formalist/functionalist distinction in linguistics, even in the early 90s, wasn't really much of a concern in psychology, neuroscience, computer science, and other fields in which researchers focused on cognition. In her two-volume Mind as Machine: A History of Cognitive Science, Margaret Boden (who isn't exactly "east coast US" but does have a connection to it) hardly discusses linguistics at all, but when she does, Chomsky is "The One". And when she turns to developments after Chomsky, apparently only LFG, GPSG, and HPSG make the cut, and the chapter ends with "linguistics eclipsed" by NLP. This she treats as separate from linguistics. Instead, it is all programmers and psychologists. We do find "The idea that the concepts named by words may be based in concepts derived from the body isn't new to cognitive science. For instance, George Lakoff and Mark Johnson suggested a quarter-century ago that abstract concepts...may be metaphorical extensions of body-grounded concepts..." But not only is Lakoff's groundbreaking 1987 work ignored entirely, she goes on to 1) treat embodied cognition as having developed in psychology and 2) mention that "even earlier" connections between language and bodily experience were around before Lakoff and Johnson. I wish I could say her treatment was abnormal, but whether it is the first chapter of an intro textbook or a book like hers specifically on the history of cognitive science, such a description is typical.

Nor is it restricted to such works. For example, in the 1987 book The Elements of Artificial Intelligence by Steven Tanimoto (from the University of Washington), the only reference to Fillmore is in the chapter on NLP, where his The Case for Case is cited. Winograd, Woods, Schank, Pinker, even Searle are cited here too and elsewhere, sometimes more than once, yet nothing from Lakoff and only this single reference to Fillmore. Nilsson's 1998 Artificial Intelligence: A New Synthesis mentions Lakoff but not Fillmore, and while most of the chapters on language concern formal logic (it's an intro textbook), chap. 24 has a section on NLP as well as speech acts, semantics, and similar topics in computational linguistics. However, despite the fact that Nilsson was educated (and later taught) at Stanford, we find "The foundational work on language syntax and parsing is that of Chomsky 1965". In Mark Steedman's contribution to the edited volume Artificial Intelligence (1996), "Natural Language Processing", neither Fillmore nor Lakoff is mentioned (although Chomsky is cited several times). Fodor's 1998 Concepts: Where Cognitive Science Went Wrong has an entire chapter on "The Demise of Definitions, Part I: The Linguist's Tale" in which apparently "linguists" means Pinker and Jackendoff.

The list goes on.

But what's even worse is how quickly other people became the important "embodied cognition" names. Embodied Minds in Action by Hanna & Maiese (2009) doesn't mention or cite Lakoff at all. Neither does Jeannerod's Motor Cognition: What Actions Tell the Self. For most cognitive scientists, somehow people like Lawrence Barsalou, Ray Gibbs Jr., Antonio Damasio, and more recently Linda L. Chao, Friedemann Pulvermuller, Olaf Hauk, etc., all pretty much developed embodied cognition apart from the work of Lakoff. Most of the time, if he's cited, it's for Gallese & Lakoff (2005). There are plenty of notable exceptions, but alas they are just that: exceptions.

On the other hand, for books/papers which deal with linguistics or at least focus more on linguistics, Lakoff is given proper credit constantly.

For some reason, those who work in cognitive science but do not have a background in linguistics would like to pretend that the only work on language which matters comes from computer science or psychology (Chomsky and Jackendoff are about the only regular exceptions). Again, there are exceptions, but the various "next generation" algorithms along with the now well-established use of neuroimaging have so far prevented most cognitive scientists from dealing with linguistic theory outside of generative linguistics, and even those who believe that cognition is domain-general too often "forget" that embodied cognition was around before fMRI studies became widespread.





Well, Chomsky and Halle did think that they were reviving Edward Sapir's concept of the psychological phoneme (although they weren't). That is why they named their seminal work on phonology The Sound Pattern of English. That was a take on Sapir's famous paper "The Sound Pattern of Language."

Ah. I read Sapir's book Language but that was it (and it was mainly because I inherited it). And if I ever came across the connection before, I must have forgotten. That is interesting. Thank you.




I actually attended some lectures by Chomsky's mentor, the structuralist linguist, Zellig Harris, when I was teaching at Columbia. I came to understand where Chomsky got his linguistic chutzpah from. ;)

Interestingly enough, the professor whose footnote I quoted at the beginning of this post also mentioned Harris in a 1961 review article in Language: "It is significant to find Einar Haugen calling attention recently to [Zellig] Harris's acknowledged inability to express himself in clear and simple English."

Somehow, I don't find many of Chomsky's students accused of similar attitudes.
 

PolyHedral

Superabacus Mystic
In other words, computing the nth digit of Pi has been increasingly easy even before the series expansion you use above. However, Pi itself is never computed, and thus (as I said) by standard basic definitions of computability Pi is at best intractable and therefore not Turing computable.
But you didn't ask to compute Pi exactly. You said we needed an algorithm capable of "computing Pi to any desirable term." Keeping in mind that there is no last term, I provided one.

(Also, since computers are just as capable of symbolic manipulation as we are, one can compute Pi exactly; you merely end up with an expression involving infinite lists.)
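
To make the "infinite lists" point concrete, here is a lazy digit stream in the same spirit, sketched in Python (a standard unbounded spigot in the style of Gibbons; just an illustration, not anything specific to this discussion):
Code:
from itertools import islice

# An unbounded spigot for the decimal digits of pi (after Gibbons, 2006):
# digits come out as a never-ending stream; "all of pi" is never held at once.
def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:
            q, r, t, k, n, l = q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l) // (t*l), l + 2

print(list(islice(pi_digits(), 15)))  # 3, 1, 4, 1, 5, 9, 2, 6, 5, ...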

Then it was the "isomorphic" which confused me. But given that at the moment this is what I do (study the neural processes involved in conceptual processing so that we can create models), I'm well aware of the cutting-edge tools, methods, and research in soft computing, neural modeling (NEURON is, after all, free), and machine intelligence. Which means I am aware that although the view that "strong AI" can be developed and "run" using boolean based architecture still exists, it is increasingly unpopular. Almost a century of progress combined with utter failure to get closer to something which has more the ability to "learn" the way a slug does (procedurally-based purely reactive processing) will do that.
"We've failed so far, therefore it can't be done?" :shrug:


Computational linguistics is now largely independent of the attempt to understand human language, as linguists increasingly adopted models outside of the generative program (which was always related to algorithms capable of generating language).
If it's not studying human language, and it's not studying Chomsky's machine-languages, what's it studying? :shrug:

What started with language and "cognitive linguistics" has increasingly become the main view of cognitive science. The director of my lab is an old school cognitive scientist who is one of the most ardent critics of embodied cognition and similar views I know of. But once you've spent 50 years with an opinion on how the mind works, it becomes that much harder to accept an almost completely opposed view. Yet this view has done nothing but increase in terms of research, proponents, journals, books, etc.
Minds-in-general certainly don't need bodies. You only have to look at the stock trader analysts to see that. (And the other half of the coin is almost self-evident: of course a mind that evolved to control a humanoid avatar is based around... controlling a humanoid body.)

This still is problematic. Programming languages, like the computers they run on, are "isomorphic" in that the mapping is direct (just like the cardinal numbers and the rational numbers). What you are talking about is not just whether a computer can do what the brain does, but whether the "algorithms" (whatever that might be) which the brain uses/is are such that they can be re-written in computer code, just like something written in Java can be written in C++ or Perl or even MATLAB.
IMO, this is self-evidently true, because all programming is synonymous with the lambda calculus, and the lambda calculus is synonymous with mathematics. Unless you want to posit that the brain cannot be mathematically described - in which case you need to rewrite all of the ontology behind physics - it can be computed.
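
As a minimal illustration of the "programming is the lambda calculus" claim (a sketch only; nothing here is a claim about brains): Church numerals encode arithmetic as nothing but function application.
Code:
# Church numerals in Python: numbers represented purely as functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5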

So, a model of physics which isn't complete and which has diverging views? Unless the guys and gals at CERN found a lot more than the Higgs, the problem of a reconciliation between GTR and QM still exists. And there is still no agreed-upon model of "particle" physics (at the very least, I would be happy if we could finally settle on how many dimensions we need in our model of reality).
The corrections GTR would make to QM are hardly relevant, since the brain is very small, light, and slow. (We can already do relativistic QED. The only bit missing is gravity, and that's not going to have a noticeable effect on the brain.)

But finite "information" is more or less meaningless. It rests on the method used to define information, information units, and how these exist in biological/natural systems. So far, nothing from this has actual empirical support in the way needed to state that the brain (or most things) can be run on a computer.
See here.

And for years, the Shannon-Weaver, Turing, Church, etc., view held: only the algorithms matter. And that failed. Then there was "well, maybe the algorithms need only generate a discrete process of adaption." So far, that's had no more success in getting beyond the learning ability of slugs than the previous one. And the more work is done in computational neuroscience, computational biology, and theoretical biology (among others), the more specialists think the architecture is of fundamental importance.
It is a fundamental property of the mathematics underlying computation that hardware architecture does not matter, because architecture can be faked.
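
A toy illustration of "architecture can be faked" (the three-instruction machine below is invented purely for this sketch): one architecture emulates another simply by interpreting its instructions.
Code:
# A toy register-machine interpreter: the host (CPython here) "fakes" a
# different architecture by interpreting its instructions one at a time.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":                      # increment a register
            registers[args[0]] += 1
        elif op == "dec":                    # decrement a register
            registers[args[0]] -= 1
        elif op == "jnz":                    # jump if register is non-zero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Add r1 into r0 by repeated increment/decrement (assumes r1 > 0).
program = [("inc", "r0"), ("dec", "r1"), ("jnz", "r1", 0)]
print(run(program, {"r0": 3, "r1": 4}))  # {'r0': 7, 'r1': 0}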
 

LegionOnomaMoi

Veteran Member
Premium Member
But you didn't ask to compute Pi exactly. You said we needed an algorithm capable of "computing Pi to any desirable term." Keeping in mind that there is no last term, I provided one.

Technically, you provided an equation, but I see your point and you are right. It was my fault because I started to rewrite what I had written but didn't change it all. Hence the thing about "finite steps," which originally had been about "terms," and apparently I changed that but not the rest. The point was intended to be about the difference between a computable function and a computable number (which is why I included the exception about Pi as a number being computable). After all, intractability wouldn't be an issue for finite terms, but only a "solvable" problem which required infinite time (at least as far as a number is concerned). It's also why ML-randomness comes into play. There's no way of knowing what term will come (as far as we know) except by computing, and thus if we think of the terms as states in the program determined by an algorithm, you can't predict what will "happen" (what the output will be) until you get there.

(Also, since computers are just as capable of symbolic manipulation as we are, one can compute Pi exactly; you merely end up with an expression involving infinite lists.)

The second part is certainly true, hence intractability, but computers have nowhere near our capacity to "manipulate symbols" in some ways, and are vastly superior in others.


"We've failed so far, therefore it can't be done?" :shrug:
No. Clearly it can be, as we do it. The issue is the approach. The algorithmic approach of classical AI (or just AI, as computational intelligence is often used to denote the modern approach which has superseded AI), in which the algorithm specified the response, rather than provided a basis for adaption, isn't used at all. But everything since has simply improved our ability to do what computers were always doing: manipulation of symbols which have no meaning to what is manipulating them, only to us.

It's not just failure, but the nature of the failure and of our progress. My cell phone is not just superior (in terms of speed, storage, memory, etc.) to ENIAC, it's superior to the computers I had when I was growing up. Where once we had only cellular automata, evolutionary computation is a field unto itself. Swarm intelligence, fuzzy systems, wavelets, classification & clustering, etc., have not only vastly increased the complexity of computer models and computational paradigms independently, they are almost always combined with one another.

Yet despite these leaps and bounds, we might as well still have punch cards fed to computers the size of cars and slower than modern mp3 players when it comes to "strong AI". And as much as our ability to deal with complexity has increased, it has not increased as quickly as our ability to understand how much we can't yet deal with (the more we know, the more we realize how much more complex things are). Moreover, we have really no idea what to do to get where we want to be, because we don't understand how brains (even non-human brains) can encode, represent, and manipulate concepts.

In other words, it isn't just a matter of failure, or really failure at all. It's massive success, with the small problem of the success not resulting in what we wanted, or even getting us closer. When the field keeps advancing, especially advancing as fast as computational paradigms, architectures, and hardware have, but none of it gets you closer to where you want to be in a certain area, then in all likelihood you are going about it wrong.

If it's not studying human language, and it's not studying Chomsky's machine-languages, what's it studying? :shrug:
Chomsky didn't create machine languages, so I don't know what that means. But much of the work is dedicated to getting programs to appear as if they understand language. It's sophisticated pattern matching which uses specialized methods for language: databases which are specifically designed to provide common collocations, constructions, phrases, and so forth, such that flexible algorithms can parse various forms of "go" (going, go, goes) and see whether the word immediately following, or perhaps the word after that, is something which humans use to indicate purpose or the "immediate future" ("I'm going to want dinner later", "I'm going to be the best there is", etc.) rather than the verb of motion ("He's going to the store"). However, as even the most sophisticated programs are no closer than SHRDLU to understanding the meaning behind the words parsed, a sentence like "I'll go later" is problematic. Should the program match it with the sense/frame of "go" which is purpose/intention/immediate future, or with the verb of motion?
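
To make that kind of shallow pattern matching concrete, here is a toy sketch (the cue lists and rules are invented for illustration, not taken from any real NLP system):
Code:
import re

# Toy disambiguation of "go": intention/future ("going to want") vs. motion
# ("going to the store"). The cue word lists below are made up for illustration.
def sense_of_go(sentence):
    m = re.search(r"\b(?:go|goes|going)\s+to\s+(\w+)", sentence.lower())
    if not m:
        return "unknown"
    next_word = m.group(1)
    if next_word in {"want", "be", "eat", "see", "buy"}:     # bare verb -> intention
        return "intention/future"
    if next_word in {"the", "a", "school", "work", "town"}:  # determiner/place -> motion
        return "motion"
    return "unknown"

print(sense_of_go("I'm going to want dinner later"))  # intention/future
print(sense_of_go("He's going to the store"))         # motion
print(sense_of_go("I'll go later"))                   # unknown: no cue to match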

The problem is that humans simply don't process words by applying a series of algorithms. The "rules" which we use to create and understand examples of speech or text are not separate from the words themselves. Treating them like symbols which can just be manipulated doesn't work because most of what actual speech is can't be separated from the "lexicon". There are too many "senses" to most words which govern how they can be combined with other words. So I can "lie down" and "sit down", but I can "lie for you" but not "sit for you", and I can "sit/seat you" but not "lie you". Because humans understand the words, they don't have to use massive databases and cutting-edge pattern-matching algorithms to realize that there is all the difference in the world between "He kicked the bucket" and "He kicked the ball."

Minds-in-general certainly don't need bodies. You only have to look at the stock trader analysts to see that.
??
(And the other half of the coin is almost self-evident: of course a mind that evolved to control a humanoid avatar is based around... controlling a humanoid body.)
The issue isn't that it is "based around controlling". Why would the fact that we have human bodies mean that we metaphorically extend more basic, bodily meanings to abstract ones? One reason could be that understanding requires interaction with the world. That a system needs to have some sort of "experience" of actions to process verbs.
IMO, this is self-evidently true, because all programming is synonymous with the lambda calculus, and the lambda calculus is synonymous with mathematics. Unless you want to posit that the brain cannot be mathematically described - in which case you need to rewrite all of the ontology behind physics - it can be computed.

What model of physics has shown that computability theory extends to the universe? Physicists who believe this are just guessing. There is nothing within physics or computational theory, accepted by either physicists or computer scientists (or any other specialists in related fields), which establishes that anything which occurs in the real world can be simulated on a computer (not merely approximated). In fact, there is currently a proof within (theoretical) biology that organisms are noncomputable. If you were correct, no one would be arguing about this.

The corrections GTR would make to QM are hardly relevant, since the brain is very small, light, and slow.
The brain appears to be very slow, yet computes things impossibly fast. So impossibly that there are a number of theorists who believe only quantum mind theories can explain it.


So... you are referencing a theoretical system which concerns theoretical mechanisms which (so it is hoped) will be capable of solving noncomputable problems as a method of saying something about information? From Nakahara & Ohmi's Quantum Computing: From Linear Algebra to Physical Realizations: "Quantum mechanics is founded on several postulates, which cannot be proven theoretically. They are justified only through an empirical fact that they are consistent with all the known experimental results. The choice of the postulates depends heavily on authors’ taste." (p. 29). As almost all of the work within quantum computing is theoretical, and involves a fair amount of disagreement, how does this say anything about "finite information"?

Also, I said "finite information is more or less meaningless" and you link to quantum computing, which relies on qubits: theoretical "bits" which are vectors in a 2-dimensional complex coordinate system defined on the basis of the physical realization of a (theoretical) processor? Quantum information theory simply uses (or, for the most part, might use) the infinitely-many different possible states of a qubit along with the collapse of the state vector to extract what is equivalent to a classical bit. It says nothing about "information" within systems at all which has anything relevant to the computability of the "mind" or much of anything in nature. The reason quantum cryptography has seen more work than any other area within quantum information theory or quantum computing has to do with the impossibility of anybody except the receiver (who collapses the state vectors of the encrypted message, determing what it will be) "decrypting" messages. The information is packaged, and in fact defined by, the computation.
It is a fundamental property of the mathematics underlying computation that hardware architecture does not matter, because architecture can be faked.
It is a fundamental property of mathematics underlying computation that any architecture using these mathematical principles is equivalent to any other. It is an assumption that this says anything about the human mind.
 

PolyHedral

Superabacus Mystic
Technically, you provided an equation, but I see your point and you are right.
Equations are not algorithms? I don't see how that particular notation does not mean the same as...
Code:
# Ramanujan's series for 1/pi; each additional term adds roughly eight correct digits.
from functools import reduce
from math import factorial as fac, sqrt
import operator

n = 3  # number of series terms (k = 0 .. n-1)
pi_approx = 1 / (2*sqrt(2)/9801 *
        reduce(operator.add,
            map(
                lambda k: fac(4*k) * (1103 + 26390*k) / (fac(k)**4 * 396**(4*k)),
                range(0, n)  # the k = 0 term carries most of the sum
            )
        )
    )
print(pi_approx)
which happens to be the Python formulation. (With the right definitions imported.)

The point was intended to be about the difference between a computable function and a computable number (which is why I included the exception about Pi as a number being computable). [...] There's no way of knowing what term will come (as far as we know) except by computing, and thus if we think of the terms as states in the program determined by an algorithm, you can't predict what will "happen" (what the output will be) until you get there.
It doesn't really matter if you can predict it ahead of time; computable numbers are those which are generated by computable functions, of which pi is one. :shrug:

The second part is certainly true, hence intractability, but computers have nowhere near our capacity to "manipulate symbols" in some ways, and are vastly superior in others.
Math is just a syntax tree. It's not that hard. (The trick being, of course, determining which manipulations are useful. However, all of the game-solving techniques can pop up there too, because you can build a tree out of your options.)
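
A toy version of "math is just a syntax tree" (the rewrite rules are invented for illustration): expressions as nested tuples, simplified by structural rewriting.
Code:
# Expressions as nested tuples: ("+", a, b), ("*", a, b), numbers, or variable names.
# One recursive rewrite pass is already "symbol manipulation on a syntax tree".
def simplify(expr):
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = simplify(a), simplify(b)
    if op == "*" and (a == 0 or b == 0):
        return 0            # x * 0 -> 0
    if op == "*" and a == 1:
        return b            # 1 * x -> x
    if op == "+" and a == 0:
        return b            # 0 + x -> x
    return (op, a, b)

print(simplify(("+", 0, ("*", 1, "x"))))  # x
print(simplify(("*", ("+", "y", 3), 0)))  # 0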

No. Clearly it can be, as we do it. The issue is the approach. The algorithmic approach of classical AI (or just AI, as computational intelligence is often used to denote the modern approach which has superseded AI), in which the algorithm specified the response, rather than provided a basis for adaption, isn't used at all. But everything since has simply improved our ability to do what computers were always doing: manipulation of symbols which have no meaning to what is manipulating them, only to us.
I'd have thought it obvious pretty fast that an algorithm specifying the response doesn't work. You need an algorithm to specify the method used to choose the technique for finding a response from a set of possibilities... or something like that. :p

It's not just failure, but the nature of the failure and of our progress. My cell phone is not just superior (in terms of speed, storage, memory, etc.) to ENIAC, it's superior to the computers I had when I was growing up. Where once we had only cellular automata, evolutionary computation is a field unto itself. Swarm intelligence, fuzzy systems, wavelets, classification & clustering, etc., have not only vastly increased the complexity of computer models and computational paradigms independently, they are almost always combined with one another.

Yet despite these leaps and bounds, we might as well still have punch cards fed to computers the size of cars and slower than modern mp3 players when it comes to "strong AI". And as much as our ability to deal with complexity has increased, it has not increased as quickly as our ability to understand how much we can't yet deal with (the more we know, the more we realize how much more complex things are). Moreover, we have really no idea what to do to get where we want to be, because we don't understand how brains (even non-human brains) can encode, represent, and manipulate concepts.
Hofstadter and Yudkowsky seem to have a pretty good idea. Hofstadter's theory of prototyped symbols is a pretty good one, IMO, but you might not like it - it looks suspiciously like object-oriented programming. :p

Chomsky didn't create machine languages, so I don't know what that means.
These things: Chomsky hierarchy - Wikipedia, the free encyclopedia
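
A standard illustration of what the hierarchy ranks (a sketch, not anything from the thread): the language a^n b^n is context-free but not regular, so a pure regular expression can only check the shape, while a counter (or a pushdown stack) checks that the counts actually match.
Code:
import re

# a^n b^n (n >= 1) is context-free but not regular: no finite-state pattern can
# verify matching counts for arbitrary n, but a simple counter can.
def is_anbn(s):
    m = re.fullmatch(r"(a*)(b*)", s)  # regular "shape" check only: a's then b's
    return bool(m) and len(s) > 0 and len(m.group(1)) == len(m.group(2))

print(is_anbn("aaabbb"))  # True
print(is_anbn("aaabb"))   # False
print(is_anbn("abab"))    # False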

It's sophisticated pattern matching which uses specialized methods for language: databases which are specifically designed to provide common collocations, constructions, phrases, and so forth, such that flexible algorithms can parse various forms of "go" (going, go, goes) and see whether the word immediately following, or perhaps the word after that, is something which humans use to indicate purpose or the "immediate future" ("I'm going to want dinner later", "I'm going to be the best there is", etc.) rather than the verb of motion ("He's going to the store"). However, as even the most sophisticated programs are no closer than SHRDLU to understanding the meaning behind the words parsed, a sentence like "I'll go later" is problematic. Should the program match it with the sense/frame of "go" which is purpose/intention/immediate future, or with the verb of motion?
I hear Siri is pretty good at understanding. :p

The problem is that humans simply don't process words by applying a series of algorithms. The "rules" which we use to create and understand examples of speech or text are not separate from the words themselves.
They are; that would be how being able to speak multiple languages works. AFAIK, the rules which we use to understand text have absolutely nothing to do with the words themselves, and the reason they're so impossible for AI to work out is because they're related to what words mean. "Colorless green ideas sleep furiously" makes perfect sense... unless you are aware that ideas are not a thing that can sleep.

Treating them like symbols which can just be manipulated doesn't work because most of what actual speech is can't be separated from the "lexicon". There are too many "senses" to most words which govern how they can be combined with other words. So I can "lie down" and "sit down", but I can "lie for you" but not "sit for you", and I can "sit/seat you" but not "lie you". Because humans understand the words, they don't have to use massive databases and cutting-edge pattern-matching algorithms to realize that there is all the difference in the world between "He kicked the bucket" and "He kicked the ball."
So the confusion is because two identical-looking pointers refer to different things? (Also, I disagree: I think human ability to understand language is the cutting-edge pattern matching algorithm.)

Oops. I mean to say, ask the stock market analysts. :p
Anyway, I would say that if someone were to write a self-evaluating stock market bot, then it would count in every way as "intelligent," despite not having a body or even interacting with normal 3D space.

That a system needs to have some sort of "experience" of actions to process verbs.
...What? I completely fail to understand how you could possibly arrive at "understanding requires interaction with the world." The reverse, that interacting with the world requires understanding and abstraction, is a lot more plausible. (But also somewhat obvious?)

In fact, there is currently a proof within (theoretical) biology that organisms are noncomputable. If you were correct, no one would be arguing about this.
As far as I can tell, (since that paper is a response to something I haven't read) it appears to be arguing against spherical cows. Organisms are made of atoms, so shouldn't the proof of organisms being uncomputable refer to the behaviour of atoms and molecules?

The brain appears to be very slow, yet computes things impossibly fast.
Impossibly fast... for an architecture and hardware we don't understand very well? Do we know how the brain works or not? :shrug:

As almost all of the work within quantum computing is theoretical, and involves a fair amount of disagreement, how does this say anything about "finite information"?
Basically all of quantum mechanics revolves around the behaviour and conservation of information, e.g. the black hole information paradox, the holographic principle. For any given quantum state, a finite number of bits would describe it absolutely. (Even if there are so fantastically many of them that they're uncountable in practice.)

Quantum information theory simply uses (or, for the most part, might use) the infinitely many different possible states of a qubit along with the collapse of the state vector to extract what is equivalent to a classical bit. It says nothing about "information" within systems at all that is relevant to the computability of the "mind" or much of anything in nature.
But a qubit doesn't have infinitely many possible states; it only has two. Its wavefunction, a physically unobservable quantity, has an infinite number of states (given infinite time...) but I'm fairly confident that it's impossible to put the wavefunction into a configuration that's uncomputable.

The reason quantum cryptography has seen more work than any other area within quantum information theory or quantum computing has to do with the impossibility of anybody except the receiver (who collapses the state vectors of the encrypted message, determining what it will be) "decrypting" messages. The information is packaged in, and in fact defined by, the computation.
Quantum crypto is just like any other in terms of computation; the only difference is that the no-cloning theorem screws over Eve.

It is a fundamental property of mathematics underlying computation that any architecture using these mathematical principles is equivalent to any other. It is an assumption that this says anything about the human mind.
Except for the fact that the universe can be treated as a computer. The only way around that is if closed time-like curves or similar structures are possible, and those are not apparent in the brain.
 

Copernicus

Industrial Strength Linguist
...Part of this impression comes from those I've spoken with who have taken classes and/or worked with him and who will take the long way around to say "arrogant"...
I honestly think that that attitude reflects more the polemical tradition that he came from. In person, he comes off as quite mild and pleasant, but he is quite combative in the face of criticism. I witnessed a debate between him and Jerry Katz at Columbia in which Katz took the position that Chomsky was basically a neo-Platonist. Katz quoted some things Chomsky said, and Chomsky claimed he had never said those things when his turn at the podium came. Katz shouted "I'll bet you $5!" Chomsky nodded. Katz, sitting behind him, began furiously paging through Chomsky's book. He then jumped up and interrupted Chomsky to point at the passage, much to the amusement of the audience. At the reception afterwards, both Jerry and Chomsky were present. I asked Jerry if Chomsky had given him his $5. Jerry looked a bit sad and said "I don't know. He won't talk to me." :) But that was the way Chomsky's first generation always interacted with him. I heard that Postal and Katz came to class dressed in motorcycle drag. Later generations became more worshipful.

There was also a certain professor at Cornell whom I know of and who not only disagreed with much of TGG, but also with Chomsky's attitude. Little of his opinion on either subject exists in print, but there is at least one article ("The Impact of Transformational Grammar upon Stylistics and Literary Analysis," Linguistics, 1971) in which some of it is expressed, albeit relegated to a footnote: "Chomsky (1965) has moved a long way to meet the objections of his critics, and his tone is considerably less positive than in Chomsky (1961)...
You are speaking of the very famous Charles Hockett, whom my wife had as a teacher when she was at Cornell. Alas, I never met him, but I studied Slavic linguistics from one of his students. Hockett was also quite arrogant and vocal in his dismissal of TG. It was a fun time.

...The latter piece shows Chomsky as a resourceful but ungracious controversialist who rarely if ever admits to error. No doubt it is precisely because of this attitude that other linguists take an unholy, or at least unscholarly, joy in baiting the transformationalists. Had Chomsky admitted, for example, in his 1964 dispute with Fred W. Householder, that he had on occasion used the term 'simplicity' in its common or garden sense, I for one would not have felt such a surge of Schadenfreude in reading Erica C. Garcia's crushing demonstration that this was so."
TBH, I saw a lot of that in academia from those days. It wasn't just Chomsky. There were a lot of big egos. Lakoff, too, could get quite pugnacious, but my favorite people were Charles Fillmore and the late Jim McCawley. They were on the opposite side of the scale--not very pretentious.

I know there is a book on the "linguistic wars" (I believe the author coined the phrase, but I could be wrong), and I have been meaning to purchase it for several years now. As I understand it, Lakoff was not the only one "excommunicated".
You are talking about Fritz Newmeyer, who was here at the University of Washington before he retired. Ironically, Fritz belonged to the Generative Semantics camp when I first met him. UW itself became a strongly partisan formalist school, although it now has an HPSGer. I disagree with some of Fritz's views about the linguistic wars, but I always remained on the Generative Semantics side of the divide.

Amusing and interesting. But how much of the quality of the course was due to the book, as opposed to the instructor? I never read Syntactic Structures. I did, however, read The Case for Case. Chomsky's later evolutions of generative grammar seemed to have made much of his early work irrelevant, but despite being outdated, Fillmore's early approaches to case grammar were still quite worth reading.
Charles Fillmore was hands-down the best teacher I ever had, and that opinion is echoed by just about everyone else who has had him as a teacher. Just to give you an idea--when I was a teaching assistant of his, he realized once that he had to give one of his students a D. He turned to me and asked "What have I done wrong?" You didn't meet many teachers with that kind of attitude, but I put it down to his years of teaching in Japan. :)

It's not exactly the eastern perspective (and, just for the record, I'm not disagreeing out of spite just because you so clearly outdid my use of metaphor). I changed my plan for grad school and added a minor (too late to add a third major) in cognitive science after my IE studies led me to Lakoff, Langacker, Fillmore, and a number of later linguists whose work I thought superior to various generative grammars. This meant reading much more about cognition, neuroscience, and other areas within or related to cognitive science apart from linguistics. It was rather disheartening to find such a disparity in how non-linguists approached thought, concepts, "symbols", and thus to some extent language. I had assumed (given the name) that "cognitive linguistics" would be the framework accepted by at least most cognitive scientists. I was so disappointed that I made a research project out of the issue in order to figure out what happened and why.
Having been something of an alien in the East Coast enclave, I am not at all surprised by this. Linguistics is a very esoteric field of study, and people who were not part of it find it very difficult to penetrate. There really are a lot of things about language that people do not know that they don't know. One of the things that irked a lot of linguists the most was that Noam Chomsky came to be seen by the public as the "Albert Einstein" of linguistics. There is some truth to that, in that he revolutionized the field. However, he got most of the attention, and his students, consequently, got most of the jobs. That's why, when it was extremely hard to get an academic job in the 1970s (and I was a lucky exception), most who got jobs were formalists from his school, even though his brand of transformationalism was becoming marginalized in terms of the volume of articles published.

The formalist/functionalist distinction in linguistics, even in the early 90s, wasn't really much of a concern in psychology, neuroscience, computer science, and other fields in which researchers focused on cognition...
By the 90s, linguistics itself had been marginalized as an academic subject. Always in competition with other departments for students, linguistics departments were reduced or eliminated at several schools. It is nowhere near as popular as it was when transformationalism was at its peak. Nowadays, most of the funding comes in for studies that are computer-related. That is too bad, but I think part of the problem has been Chomsky's dominance.
 

Copernicus

Industrial Strength Linguist
Continued...
...Margaret Boden...when she turns to developments after Chomsky, apparently only LFG, GPSG, and HPSG make the cut, and the chapter ends with "linguistics eclipsed" by NLP. This she treats as separate from linguistics. Instead, it is all programmers and psychologists...
She is seeing linguistics from the perspective of someone outside of the linguistics community, but I somewhat agree with the perspective that NLP has eclipsed theoretical linguistics. That has more to do with funding sources than academic merit, however. LFG, GPSG, and HPSG are part of what is loosely termed "formal linguistics." I think that the formalist approach is more coherent to outsiders (and many insiders), so I am not very surprised that she pays little attention to cognitive linguistics. Cognitivists represent a minority view nowadays. There was a time when I think one could say that Generative Semantics dominated, but that faded in the late 1970s and early 1980s.

We do find "The idea that the concepts named by words may be based in concepts derived from the body isn't new to cognitive science. For instance, George Lakoff and Mark Johnson suggested a quarter-century ago that abstract concepts...may be be metaphorical extensions of body-grounded concepts..." But not only is Lakoff's groundbreaking 1987 work ignored entirely, she goes on to 1) treat embodied cognition as having developed in psychology and 2) mentions that "even earlier" connections between language and bodily experience were around before Lakoff and Johnson. I wish I could say her treatment was abnormal, but whether it is the first chapter of an intro textbook or a book like hers specifically on the history of cognitive science, such a description is typical.
Let's not forget what a profound effect Eleanor Rosch had on Lakoff's thinking when he was writing Women, Fire, and Dangerous Things. He named a lot of precursors that led him to his "experientialist" framework. What Lakoff did was to add linguistic theory to the very eclectic field that came to be known as "cognitive science." I consider him one of the founders of cognitive science for that contribution, but by no means the founder. The problem is that people always tend to think of "Chomsky" first when they think of modern linguistic theory, and outsiders really know very little of the intellectual trends that split Chomsky and his erstwhile "generative semantics" progeny apart. In my opinion, Fritz Newmeyer presents a view skewed in favor of formalists in his "linguistics wars" narrative. There are other perspectives on why linguistic formalism ultimately pushed the less formalist approaches aside. I think that it had more to do with academic politics than what inspired working linguists. "Frankenstein" was always a clumsy monster, and it wasn't hard to put him down with torches and pitchforks. Ultimately, what mattered was the lack of a coherent alternative to competence theory.

Nor is it restricted to such works. For example, in the 1987 book The Elements of Artificial Intelligence by Steven Tanimoto (from the University of Washington), the only reference to Fillmore is in the chapter on NLP, where his The Case for Case is cited...
Not surprising, considering that Joe Emonds dominated UW linguistics at that point. UW has always been a fairly orthodox formalist school, even though the university is on the West Coast.

Winograd, Woods, Schank, Pinker, even Searle are cited here too and elsewhere, sometimes more than once, yet nothing from Lakoff and only this single reference to Fillmore. Nilsson's 1998 Artificial Intelligence: A New Synthesis mentions Lakoff but not Fillmore, and while most of the chapters on language concern formal logic (it's an intro textbook), chap. 24 has a section on NLP as well as speech acts, semantics, and similar topics in computational linguistics. However, despite the fact that Nilsson was educated (and later taught) at Stanford, we find "The foundational work on language syntax and parsing is that of Chomsky 1965". In Mark Steedman's contribution to the edited volume Artificial Intelligence (1996), "Natural Language Processing", neither Fillmore nor Lakoff is mentioned (although Chomsky is cited several times). Fodor's 1998 Concepts: Where Cognitive Science Went Wrong has an entire chapter on "The Demise of Definitions, Part I: The Linguist's Tale" in which apparently "linguists" means Pinker and Jackendoff.
There is a lot of history there that you are missing, I think. Fillmore was never a generative semanticist, but lots of linguists tended to put him in that intellectual bucket. Jim McCawley had a series of interchanges with Jackendoff and other "Chomskyites" back in the 1970s. The title of a great 1971 paper was "Interpretive Semantics Meets Frankenstein," wherein he detailed the relative coherence of the pro-Chomsky clique as opposed to the very loosely-defined school called "generative semantics." For one thing, there was a somewhat formalist branch of the opposition that we called "Abstract Syntax." Linguists of that flavor (e.g. Perlmutter, Postal, Katz, Ross) tended to build arguments that drove the level of deep structure (in the classical transformational paradigm) to more abstract extremes. Jim McCawley and George Lakoff tended to start from the position that "deep structure" was semantic representation or "natural logical representation"--a view that ultimately failed when they could not adequately account for presuppositions and speech act theory. Jim once told me that I could become rich if I were able to incorporate presuppositions into "natural logical representation."

Anyway, I enjoy reminiscing, so I have let the thread drift well beyond the intent of the OP. Thanks for the opportunity to blather on about these things.
 

LegionOnomaMoi

Veteran Member
Premium Member
I honestly think that that attitude reflects more the polemical tradition that he came from. In person, he comes off as quite mild and pleasant, but he is quite combative in the face of criticism.

That's good to know. A lot of what I've heard about Chomsky, I've also heard about Pinker, only after I had already met him. He never came off as anything but polite, helpful, often funny, etc. So I was surprised that he had come off so differently to others. Coming off as arrogant/pugnacious when critiqued but generally much friendlier is understandable (especially considering that Chomsky began a linguistic tradition which required condemning the behaviorist and statistical approaches, followed by the battle between him and some of the next generation of linguists).


You are speaking of the very famous Charles Hockett, whom my wife had as a teacher when she was at Cornell.

Actually I was speaking of Gordon Messing, who is not really known at all. I knew him, and happened to know what he felt about Chomsky, but on the other hand he was hardly the most open-minded person in the world (I don't know what his colleagues thought of him, though).

Ironically, Fritz belonged to the Generative Semantics camp when I first met him.
Really? Another neat tidbit I didn't know anything about.

Charles Fillmore was hands-down the best teacher I ever had, and that opinion is echoed by just about everyone else who has had him as a teacher. Just to give you an idea--when I was a teaching assistant of his, he realized once that he had to give one of his students a D. He turned to me and asked "What have I done wrong?" You didn't meet many teachers with that kind of attitude, but I put it down to his years of teaching in Japan. :)

I'm not surprised at all. Some people's personalities seem to come through in their writings, at least occasionally, whether through their use of humor or in how they talk about or refer to their own work (Langacker, for example, at least once used his own earlier work as an exemplar of the wrong approach within mainstream linguistics). Fillmore seemed to be quite the teacher, and concerned with getting it right rather than being right.

Having been something of an alien in the East Coast enclave, I am not at all surprised by this. Linguistics is a very esoteric field of study, and people who were not part of it find it very difficult to penetrate. There really are a lot of things about language that people do not know that they don't know. One of the things that irked a lot of linguists the most was that Noam Chomsky came to be seen by the public as the "Albert Einstein" of linguistics. There is some truth to that, in that he revolutionized the field. However, he got most of the attention, and his students, consequently, got most of the jobs.

It's certainly a lot easier to become the "titan" of the field when you become most influential in deciding who gets to continue to define the field and who doesn't. As someone who early on came to the outgrowth of the generative semanticists and other long ignored views, and only later (and with much chagrin) back to studying work within the generative tradition, I often felt like Chomsky had started linguists on their way to the right path, only to ensure they stayed off it as much as possible for the next several decades.


By the 90s, linguistics itself had been marginalized as an academic subject.

That explains a great deal.

Nowadays, most of the funding comes in for studies that are computer-related. That is too bad, but I think part of the problem has been Chomsky's dominance.
Computer-related or for cognitive psychology, which is also well-funded. But I agree that the change seems to have come after the failure of the generative approach to work for computational linguistics and after some changes within psychology. I just wish that the changes within computer science and cognitive science had resulted in less reliance on generative linguistics along with a greater acceptance of the cognitive linguistic framework, rather than just less reliance on linguistics. To some extent, I think that is happening now, but it should have happened some time ago.

I think that the formalist approach is more coherent to outsiders (and many insiders), so I am not very surprised that she pays little attention to cognitive linguistics.

It seems like it is more than this, though, because there are more and more people within psychology who rely (directly or indirectly) on early work by those who became cognitive linguists. Embodied cognition is an increasingly accepted view within psychology. But I think that the approach to cognition within psychology and linguistics has been defined by formalism and computer science ever since the behaviorist view (the mind as a black box) died out. Thus, when computer scientists stopped using generative models to get computers to "parse" language and began relying on flexible evolutionary & neural net algorithms, a huge pillar of support for formalist linguistics gave way. But with (what appears to be) support for embodied cognition from neuroimaging, a different view of language is emerging within psychology. However, too often when researchers agree with cog. ling. views they either don't realize it or don't acknowledge their dependence on linguistic research.

There is a lot of history there that you are missing, I think.

I don't doubt it. But I went looking for the history at one point, studying the history of linguistics from before it existed as a field through Chomsky and the following generations. I have no doubt (especially given the number of things I've learned from you) that I missed a lot, whether it is recorded or not. I also think, however, that what I said about current views of linguistics and the evolution of linguistics within cognitive science reflects actual history rather than just my own views. It's only because I started with linguistics before I did cognitive science, and went looking for what was missing, that I believe I know some of the "missing" history left out of the various textbooks/accounts from non-linguists I've read. Alas, though, I wasn't there. And I don't work with linguists, so virtually all I know about the history of linguistics comes from reading (whether "classic" works or accounts of the history of linguistic theory).

Anyway, I enjoy reminiscing, so I have let the thread drift well beyond the intent of the OP. Thanks for the opportunity to blather on about these things.

Thank you! I've greatly appreciated hearing what happened from someone who was there and not in a "pre-packaged" version (e.g., a book or paper).
 

Looncall

Well-Known Member
As a chemist, I find this discussion fascinating. To an outsider, all this sounds more like some kind of politics than any kind of scholarship.

How come these studies are not more regulated by the results of experimentation? It almost sounds like they are arguing about something that is imaginary.
 

LegionOnomaMoi

Veteran Member
Premium Member
As a chemist, I find this discussion fascinating. To an outsider, all this sounds more like some kind of politics than any kind of scholarship.

How come these studies are not more regulated by the results of experimentation? It almost sounds like they are arguing about something that is imaginary.

Copernicus was kind enough to allow this thread to get a bit off topic. His area of expertise is linguistics. Although I wish to focus on the cognitive basis and mechanisms underlying language, my current area of research is cognitive neuropsychology, and much of what I know of linguistics comes from (more or less) reading journals, monographs, etc., in various linguistic subfields/disciplines on my own. So a lot of the latest posts here were less about the nature of the field than about its history, specifically those parts which were most dominated by academic politics. He knows the history of the field and the players from personal experience, and he very helpfully educated me on such topics.

As Kuhn, Irving Janis, Feyerabend, and numerous others have all pointed out, the history of science demonstrates that whether we are talking physics or psychology, or any scientific discipline, there will always be politics, culture, and other factors which influence academia in ways that have little to do with evidence. I don't study much chemistry outside of the chemical properties of neurons, neuronal signal propagation, etc. But even here, where we are pretty much in what is sometimes called "hard science", there is plenty of disagreement. A main area of disagreement concerns neural firing (how neurons "communicate"). Most people know that neurons "fire": they generate an electrical signal called an action potential. However, most undergrad psychology textbooks, from intro textbooks to those specifically on neuroscience, talk about these signals as "all or nothing spikes" such that one gets the impression these are the basic units of information in the way bits are for a computer. However, this is incredibly inaccurate. Rather, something about the timing of this firing is really the "basic unit".

The problem is identifying what the "something" is. Much of the debate focuses on whether neurons use rate coding, temporal coding, both but primarily one of the two (and in that case when and why), or both more or less equally (and then again when and why).

This is a comparatively basic, biological issue at the cellular level, not like identifying areas of the brain most involved in language or semantics or certain types of memory. Just how neurons pass "information" using action potentials. Yet here there are disagreements.
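
To give a concrete (if oversimplified) sense of what the two camps are arguing about, here is a minimal sketch in Python. The spike times are made up, not real data; the point is only to show what a rate-code readout versus a temporal-code readout of the very same spike train would look like:

import numpy as np

# Hypothetical spike times (in milliseconds) over a one-second window.
spike_times = np.array([12.0, 48.0, 51.0, 53.0, 210.0, 640.0, 642.0, 645.0, 990.0])

# Rate coding: only the number of spikes per unit time matters.
window_s = 1.0
firing_rate = len(spike_times) / window_s   # spikes per second
print(f"rate-code readout: {firing_rate:.0f} Hz")

# Temporal coding: the precise timing (e.g., inter-spike intervals, bursts)
# carries information that a simple rate readout throws away.
isis = np.diff(spike_times)                 # inter-spike intervals (ms)
burst_events = int(np.sum(isis < 5.0))      # count of very short intervals
print(f"temporal-code readout: ISIs = {np.round(isis, 1)}, burst-like events = {burst_events}")

Two trains with identical rates can have completely different timing structure, which is why the choice of readout is not a detail but the heart of the disagreement.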

All it takes for "politics" to become an issue is some area of uncertainty and people who have built something of a reputation defending and promoting one point of view. I can't imagine that chemistry as a field has no, or even few, areas in which this is an issue. Were all of chemistry so straightforward, it seems to me that it wouldn't need many people to advance our knowledge.

Linguistics is no different, except that as language is vastly more complex than most complex systems (hence our inability to produce computer programs capable of appearing as if they understand speech) there are far more areas of uncertainty than in many other areas of research. With that uncertainty comes different theories, methods, approaches, assumptions, and people defending a subset of these.
 
Last edited:

idav

Being
Premium Member
Most people know that neurons "fire": they generate an electrical signal called an action potential. However, most undergrad psychology textbooks, from intro textbooks to those specifically on neuroscience, talk about these signals as "all or nothing spikes" such that one gets the impression these are the basic units of information in the way bits are for a computer. However, this is incredibly inaccurate. Rather, something about the timing of this firing is really the "basic unit".

The problem is identifying what the "something" is. Much of the debate focuses on whether neurons use rate coding, temporal coding, both but primarily one of the two (and in that case when and why), or both more or less equally (and then again when and why).

This is a comparatively basic, biological issue at the cellular level, not like identifying areas of the brain most involved in language or semantics or certain types of memory. Just how neurons pass "information" using action potentials. Yet here there are disagreements.
Do you think that how a neuron fires is the difference that gives us consciousness? I don't doubt that the neuron is quite complex but one neuron does the job of one computer within a network. There are dozens of ways to communicate and translate. Once "communication" is occurring, when is it aware of this communication? Is a neuron aware it is communicating? Because I'm pretty sure a computer is aware it is communicating; in fact, the computers in a network are constantly verifying they exist with every bit that is sent and confirmed as sent. The bits within a neuron are still just different frequencies (potentials) that communicate something different, and yes, the order would be important, as with any program; otherwise it would fail.
 

LegionOnomaMoi

Veteran Member
Premium Member
Do you think that how a neuron fires is the difference that gives us consciousness?
Difference between what? I think that how a neuron fires is important as far as consciousness is concerned.

I don't doubt that the neuron is quite complex but one neuron does the job of one computer within a network.

It doesn't. Computers, whether in a network or not, are state machines. We're still dealing with discrete bits and discrete states. At times, a single neuron does appear to send information which does not depend on other neurons. That is, at times the firing of a single neuron is somewhat like a bit in that although no program ever stores a variable or something in one bit, that bit does store meaningful information (0 or 1). However, usually this is not the case. A single neuron conveys meaningful information via correlations with the activity of other neurons. At times, these synchronized signals across neural populations seem to defy physics (neural networks appear to synchronize nonlocally and nearly instantaneously). Computer networks can be broken down into component parts just like a single computer. Neurons do not work like this.
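
To illustrate what I mean, here is a toy example I'm making up (it is not a model of real neurons): imagine a "stimulus" that two units encode only in their joint activity. Each unit fires on about half of the trials no matter what the stimulus is, so reading either one alone tells you nothing; only the relationship between the two carries the message.

import random

def encode(stimulus):
    """Toy XOR-like code: each unit alone is uninformative about the stimulus."""
    a = random.randint(0, 1)   # unit A fires (1) or not (0) at random
    b = a ^ stimulus           # unit B agrees with A only when stimulus == 0
    return a, b

def decode(a, b):
    # The stimulus is recoverable only from the pair, never from a or b alone.
    return a ^ b

for stim in (0, 1, 1, 0):
    a, b = encode(stim)
    print(f"stimulus={stim}  unitA={a}  unitB={b}  decoded={decode(a, b)}")

Take either unit's activity away and the decoded value is pure noise, which is the sense in which the "bit" here is the correlation itself rather than any single neuron.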

Is a neuron aware it is communicating? Because I'm pretty sure a computer is aware it is communicating; in fact, the computers in a network are constantly verifying they exist with every bit that is sent and confirmed as sent.
Computers are not aware of anything, nor are neurons. Computers cannot verify they exist because they have no idea what "existing" is. They can do what a single cell does and react purely without awareness. How aware are you of the chemical reactions going on to regulate your heartbeat, your sweat glands, your antibodies, and so on? All of those automated responses, from the cellular level up, which happen every second of every day in a human body without the person's knowledge, are far more complex than computers or computer networks. Yet there is no awareness.

The bits within a neuron are still just different frequencies (potentials) that communicate something different, and yes, the order would be important, as with any program; otherwise it would fail.

There are no bits within a neuron. In fact, the very idea of anything in the brain being a bit depends more on a bad analogy than on anything useful. Bits are "units" of information which are manipulated by logic gates. This allows algorithms to use these discrete units of storage for programs, turning computers into finite state machines in which the distribution of 0's and 1's is constantly updated based on input and a number of predefined, determined, and specific responses given by the algorithms.

Brains don't have bits. They have no minimum unit of information, unless you define "minimum" as a changeable entity depending on whether or not a single neuron is sending meaningful information (and even then the nature of this "bit" changes), or the minimum unit is the correlated activity of a few neurons, or it is the result of irreducible networks.
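
For contrast, here is what a finite state machine looks like in the strict sense I mean above. It is a deliberately trivial, hypothetical example (a turnstile); the point is just that everything in it is a fixed, enumerable state or a predefined transition rule, which is exactly what nobody can write down for neurons:

# A turnstile as a finite state machine: every state and transition is
# discrete, enumerable, and fixed in advance.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(inputs, state="locked"):
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]   # each step is just a table lookup
        print(f"input={symbol:<4} -> state={state}")
    return state

run(["coin", "push", "push", "coin"])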
 

idav

Being
Premium Member
Difference between what? I think that how a neuron fires is important as far as consciousness is concerned.
The brain can't be the universe's only answer to consciousness. I actually doubt it.
It doesn't. Computers, whether in a network or not, are state machines. We're still dealing with discrete bits and discrete states.
A neuron is a machine.
At times, a single neuron does appear to send information which does not depend on other neurons.
It is true there is awesome redundancy so that one neuron could go and the system would still function. This doesn't take away from the fact that neurons correlate to put across what it's trying to accomplish. A thought depends on many neurons.
That is, at times the firing of a single neuron is somewhat like a bit in that although no program ever stores a variable or something in one bit, that bit does store meaningful information (0 or 1). However, usually this is not the case. A single neuron conveys meaningful information via correlations with the activity of other neurons. At times, these synchronized signals across neural populations seem to defy physics (neural networks appear to synchronize nonlocally and nearly instantaneously). Computer networks can be broken down into component parts just like a single computer. Neurons do not work like this.
Redundancy aside, many computers could be used to convey a single component within a network. We replicate the ability without having to go to that extreme, and it would be unnecessarily complex to make a system like that. The computer has gotten past that with the ability to have multiple cores within a CPU.

Computers are not aware of anything, nor are neurons. Computers cannot verify they exist because they have no idea what "existing" is. They can do what a single cell does and react purely without awareness. How aware are you of the chemical reactions going on to regulate your heartbeat, your sweat glands, your antibodies, and so on? All of those automated responses, from the cellular level up, which happen every second of every day in a human body without the person's knowledge, are far more complex than computers or computer networks. Yet there is no awareness.
I do consider the complexity and function of our molecules and chemicals and their responses when asking these questions. What it seems like you're saying is that it is the complexity that makes neurons really know anything. The problem I have with that is that it is the neurons that give us a mind; therefore it isn't too far-fetched to say that these neurons are aware for us.

We start to see intelligence when the complexity makes an organism seem to have a will. Like only the collective of the ant colony makes it seem like they are doing things that are intelligent, yet a single ant is aware in itself with its own intelligence.

We can't know that a neuron isn't aware, but the evidence shows that it is communicating and learning for us, making their collective an "I".

There are no bits within a neuron. In fact, the very idea of anything in the brain being a bit depends more on a bad analogy than on anything useful. Bits are "units" of information which are manipulated by logic gates. This allows algorithms to use these discrete units of storage for programs, turning computers into finite state machines in which the distribution of 0's and 1's is constantly updated based on input and a number of predefined, determined, and specific responses given by the algorithms.
The neuron puts out its own algorithm and is predefined based on updated inputs etc.... The analogy can go far, but you jump to the part where it ends. :)
Brains don't have bits. They have no minimum unit of information, unless you define "minimum" as a changeable entity depending on whether or not a single neuron is sending meaningful information (and even then the nature of this "bit" changes), or the minimum unit is the correlated activity of a few neurons, or it is the result of irreducible networks.
I know that brains work in a more complicated way than they have to. This is due to the nature of evolution and how undesigned it was. That certainly doesn't mean it is the only way to have a mind. Give machines a break... give them a hundred thousand years of evolution and see what happens. :D
 

LegionOnomaMoi

Veteran Member
Premium Member
The brain can't be the universe's only answer to consciousness. I actually doubt it.

1) Why would the universe need an answer to consciousness?
2) Assuming that the above makes sense, why would brains be insufficient?

A neuron is a machine.

I said "state machine." I suppose you could conceptualize just about anything as a machine, but finite state machines have rather specific definitions.

It is true there is awesome redundancy so that one neuron could go and the system would still function.

What I said has nothing to do with that. In fact, it's almost the opposite. Most of the time, a single neuron doesn't really convey anything, as the "message" is the correlated firings of 2 or more neurons. In other words, take away a neuron, and there is no information. The "bit" (minimum information unit) is the correlations themselves, which means that a single neuron by itself means nothing. You can think of it somewhat like letters in a word: the word "neuron" only means "neuron" if you have all the letters together. If you just have an "n" and/or an "r" then you lose all meaning. The same is true for neurons most of the time. Only the combination/correlations between at least a few neurons conveys anything meaningful. Take one away, and it's like removing a bunch of letters from a word such that it doesn't spell anything.
This doesn't take away from the fact that neurons correlate to put across what it's trying to accomplish. A thought depends on many neurons.

The issue isn't that thoughts take several neurons. To illustrate, using words as an example again, imagine a sentence. It takes a bunch of words, put in a certain order, with each word inflected properly, and so forth. But the individual words have meanings. The individual letters, or sounds, do not. The same is true for thoughts. You can look at the brain and see that this and that neural population is involved in processing verbs or something. However, you can't break thoughts down to individual neurons, only neural populations of varying sizes.

Redundancy aside, many computers could be used to convey a single component within a network.
What I was referring to has more in common with a lack of redundancy. When we look for something like a "bit" in the brain, often enough it's the firing of several neurons combined. Take one away and there goes the bit.


What it seems like you're saying is that it is the complexity that makes neurons really know anything.
Neurons don't know anything.



Like only the collective of the ant colony makes it seem like they are doing things that are intelligent, yet a single ant is aware in itself with its own intelligence.

This is a fundamental mischaracterization of swarm intelligence, including ants. Remove a certain number of ants, and you have no intelligent behavior at all:

Nigel Franks, in his paper "Army Ants: A collective intelligence", writes "if 100 army ants are placed on a flat surface, they will walk around in never decreasing circles until they die of exhaustion. In extremely high numbers, however, it is a different story. A colony of 500,000 Eciton army ants can form a nest of their own bodies that will regulate temperature accurately within limits of plus or minus 10 C. In a single day, the colony can raid 200 m through the dim depths of the tropical rain forest, all the while maintaining a steady compass bearing. The ants can form super-efficient teams for the purpose of transporting large items of prey."

We can't know that a neuron isn't aware, but the evidence shows that it is communicating and learning for us, making their collective an "I".

Neither learning nor communication implies awareness. Ants aren't aware, either as a colony or just individual ants.

The neuron puts out its own algorithm and is predefined based on updated inputs etc....

Then you know more about neurons than all the neuroscientists on the planet, because nobody can do more than assume that neurons are governed by algorithms: it doesn't appear that they are, and if they are, nobody knows what these algorithms might be.
 
Last edited:

idav

Being
Premium Member
Neither learning nor communication implies awareness. Ants aren't aware, either as a colony or just individual ants.



Then you know more about neurons than all the neuroscientists on the planet, because nobody can do more than assume that neurons are governed by algorithms: it doesn't appear that they are, and if they are, nobody knows what these algorithms might be.

The complexity can be broken down. Either neurons are aware or we aren't aware either. Cells have a job to do, and the ego is merely the collective of that awareness that neurons must have. When someone sees something, it is the neurons seeing a decoded message; we aren't actually seeing anything, rather a representation. An organism that is aware of its environment can use any sense it is born with. Ants are aware of their environment and respond accordingly.
 

PolyHedral

Superabacus Mystic
The complexity can be broken down. Either neurons are aware or we aren't aware either. Cells have a job to do, and the ego is merely the collective of that awareness that neurons must have. When someone sees something, it is the neurons seeing a decoded message; we aren't actually seeing anything, rather a representation. An organism that is aware of its environment can use any sense it is born with. Ants are aware of their environment and respond accordingly.
We are the neurons.
 

JohnLeo

Member
My question is why there should ever be such a thing as a disembodied mind. Most people take it for granted that minds can exist independently of the bodies that they are evolved to control, but what possible purpose could a mind without a body have? Our gods are idealized versions of ourselves--our minds. People often argue that a kind of supermind might better explain the origin of the universe, because a mind would have to exist prior to the existence of physical reality. It's just that our own minds clearly evolved to serve our physical conditions, not vice versa. So why would a mind like ours exist independently of a body? Does that really make sense?
It may not make any sense, but we have 150-plus years of evidence, resulting from the work of top-notch scientists in many fields, which indicates that that is indeed the case.
 