
Getting from cause and effect to awareness

idav

Being
Premium Member
If you define awareness as conscious knowledge of one's environment, yes. If you define awareness as the capacity to react to one's environment, no.
Push a button and the machine reacts.

Consciousness requires conceptual processing. Concepts, however, are never singular and are always abstract. To understand the idea of tree requires understanding an abstract entity. Humans do this by representing "tree" in multiple patterns constantly active and changing in the brain. Turn this into a reliable, stable storage and you kill this ability.

Machines can be programmed to work conceptually. I don't see why having more efficient storage and recall should be a problem. The brain does it because it has to. Machines don't have to; it's called redundancy.
 

LegionOnomaMoi

Veteran Member
Premium Member
Machines can be programmed to work conceptually.
They can't. If they could I'd know about it. It would be the single most important development since the first computers were built.

I don't see why having more efficient storage and recall should be a problem
The brain is the most efficient system of understanding there is, by far. It so vastly outstrips any other system that comparisons are laughable. Massively parallel processing, artificial neural networks, swarm intelligence, and other attempts to make computers better at learning do so by working against what computers were designed to do: follow very specific procedures and give very specific outputs. To learn, we've had to make computers "soft" and have them be able to make mistakes. You cannot learn without error tolerances, mistakes, reconditioning, etc. That's why living systems, even without brains, are so much better than computers. They're perfectly designed for adaptation. They have no hardware, just wetware that is being re-written by contact with the environment, i.e., by learning (unconscious learning, but learning). We have to make computers fake that kind of adaptability. And we can do it for very simple systems that don't have brains, like ant colonies or plants.

We aren't even close to the kind of understanding brains have, because brains are able to learn associatively. Because of the massive parallelism in brains, and the way data is encoded in constant overlap with other data and is constantly re-encoded (it's always active), brains can relate stimuli to abstract notions (concepts) and relate these to each other. Computers can't do this. They aren't designed to. They're giant calculators.

The brain does it because it has to. Machines don't have to; it's called redundancy.
The reason the brain has to is because it is necessary for thought.
 

idav

Being
Premium Member
The reason the brain has to is because it is necessary for thought.
Can you back this claim up? You're claiming that an imperfect form of memory, that of the brain, can conceptualize better than a system that has better memory and is not as prone to mistakes, like the memory of a machine. If we told a machine to go out into a parking lot and find our car, it would have no problem, while the human would be wandering the aisles trying to remember where it might be. Just give the machine the proper tools so that it can do spatial recognition, to make the fight fair.
 

LegionOnomaMoi

Veteran Member
Premium Member
Can you back this claim up? You're claiming that an imperfect form of memory, that of the brain, can conceptualize better than a system that has better memory and is not as prone to mistakes
When we teach computers to learn we make sure they can make mistakes. The problem is that concepts are not "crisp", even when they correspond to physical objects. Two computers need not be of the same color, trees can have leaves or needles, dogs can be the size of wolves or the size of housecats, etc. The world is filled with vagueness and ambiguity. Computers do not, as a rule, tolerate any ambiguity. PolyHedral is fond of using objects to show that computers can learn about actual objects. In Java, I might define a class of objects "trees" that have properties like green, leaves, a given height range, etc. Then every single tree must have only those exact properties I specify. It's why object-oriented languages aren't especially well-suited for object recognition software. If I try to get a program to recognize faces I can either specify exactly what every face I want it to recognize looks like (and the faces cannot be seen from even a slightly different angle or this won't work), or I can allow it some room for error. I have to be very careful though, because if I allow too much error it might not recognize the difference between my face and Brad Pitt's face (this happens to me all the time, of course), but if I make it too exact it won't recognize my face shown with just the hint of a shadow, or a bruise, or at ever-so-slight an angle.
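To make that concrete, here is a minimal Java sketch of the kind of rigid definition I'm describing. The class, its fields, and the numbers are all invented for illustration; no real recognition library works from code this naive, but the brittleness is the same in kind:

// A "tree" defined by exact properties, the only kind of definition the
// machine gets for free. (Class, fields, and values are invented.)
public class RigidTreeDemo {

    static class Tree {
        String foliage;   // e.g. "leaves" or "needles" - one exact value, no "roughly"
        String color;     // e.g. "green"
        double heightM;   // exact height in metres

        Tree(String foliage, String color, double heightM) {
            this.foliage = foliage;
            this.color = color;
            this.heightM = heightM;
        }

        // Exact matching: every property must be identical.
        boolean sameAs(Tree other) {
            return foliage.equals(other.foliage)
                && color.equals(other.color)
                && heightM == other.heightM;
        }
    }

    public static void main(String[] args) {
        Tree stored    = new Tree("leaves", "green", 10.0);
        Tree seenToday = new Tree("leaves", "green", 10.2); // same tree, grew a little

        // Prints false: a 20 cm difference breaks exact recognition,
        // even though any person would call it the same tree.
        System.out.println(stored.sameAs(seenToday));
    }
}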

You don't even need to get to the way brains work to see how necessary mistakes are. Let's say that I flip a coin many times. I tell you the sequence of H's vs. T's. Which would most people think more likely? This sequence:

HHHTHTTHHTHTHTTHHHTHTHHHTHTHHTHTH

or this:

TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
?

I can write a program to calculate the answer to this question exactly. It's very easy. The answer is that all sequences of any n flips are equally likely, no matter what n is. If n is 50, then 50 heads is as likely as 25 heads and 25 tails in some particular order. What is much, much harder, however, is getting a computer to answer "which sequence looks more like what we'd expect?" We expect some heads and some tails. We don't expect to flip a coin 50 times and get tails (or heads) every single time, or nearly every time. That's why the first sequence looks more likely to us. We recognize it as a pattern of heads and tails and we don't pay attention to the fact that it is actually a very specific sequence. By ignoring extraneous details, or perceiving them imperfectly, we're actually better off. We see patterns so much better because we abstract away from the specifics and generalize. Unfortunately, this means error. Some generalizations will be wrong. Sometimes details do need to be remembered.

But in order to get computers just to recognize patterns (not even to understand them, just to correctly identify, e.g., the face of a dog vs. a cat vs. something else) we force them to be able to err. Classification algorithms necessarily place values on data corresponding to two or more classes. But the values exist in ranges. We make the computer capable of error. Dimensionality reduction techniques are all over the place in machine learning and elsewhere. They amount to throwing out data and trying to preserve or highlight what's most vital. We do it without thinking. Computers require some of the most sophisticated programming just to reproduce what mice do naturally.
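The "easy" calculation really is this easy. A minimal Java sketch, assuming only a fair coin (the second sequence is the all-tails run above, trimmed to the same length so the comparison is fair):

// The exact probability of any specific sequence of fair-coin flips
// depends only on its length, not on how "random" it happens to look.
public class CoinSequenceDemo {

    static double probabilityOf(String sequence) {
        return Math.pow(0.5, sequence.length()); // each flip halves the probability
    }

    public static void main(String[] args) {
        String mixed   = "HHHTHTTHHTHTHTTHHHTHTHHHTHTHHTHTH";
        String allTail = "T".repeat(mixed.length()); // same length, all tails

        // Both lines print the same number: the sequences are equally likely.
        System.out.println(probabilityOf(mixed));
        System.out.println(probabilityOf(allTail));
    }
}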

If we told a machine to go out into a parking lot and find our car, it would have no problem

It would. Because there is no such thing as "car", because even if we told it to recognize "our car" it would have enormous difficulty recognizing the same car from two different places, and because it doesn't store coordinates in a relative frame but an absolute one. Let's say you park your car at the mall and then go inside a shop for a few hours. You don't have to keep track of every step you take so that, when you want to go home, you can retrace exactly every single step. You can have an idea that the car is on the 3rd level towards the middle and find it by starting at the left or the right entrance on whatever level you please. When you turn, you don't expect the world to turn with you. Computers do. They are naturally oriented toward precision. Run a program backwards to get to the first step and it will re-take every single step.

In order to program a computer simply to remember where your car is, you have to do amazingly complicated programming just to make the computer able to recognize your car from different angles, and even harder programming (nearly impossible) to have it remember where the car is relative to itself. Modern navigation systems bypass this by using permanent markers and GPS. The problem is that this only works for relatively unchanging maps, like streets. Computers can't remember "back that way" or other directions that are useful when navigating unknown terrain. They must follow explicit steps or they suck at it. Getting a computer to remember where your car is means spelling out in painstaking detail every step to get to your car. You don't have to do that, because you can remember generalities.
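A toy sketch of what that kind of machine "memory" amounts to: a dead-reckoning replay of exact displacements from one specific start. The coordinates and steps are invented for illustration; this is not any real navigation system:

// Dead-reckoning memory: the machine stores the exact displacements it took
// from one specific starting point and can only replay them.
public class ParkingMemoryDemo {

    static int[] replay(int[] start, int[][] steps) {
        int x = start[0], y = start[1];
        for (int[] step : steps) {   // re-take every single step, in order
            x += step[0];
            y += step[1];
        }
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        // Exact steps recorded on the way in (metres east, metres north).
        int[][] recordedSteps = { {0, 20}, {15, 0}, {0, 5} };

        int[] leftEntrance  = {0, 0};   // where the steps were recorded
        int[] rightEntrance = {40, 0};  // a perfectly ordinary alternative start

        // Replaying from the original start lands on the car: [15, 25].
        System.out.println(java.util.Arrays.toString(replay(leftEntrance, recordedSteps)));

        // Replaying from the other entrance lands 40 metres away: [55, 25].
        // The "memory" was never "third level, towards the middle" -
        // only a list of exact displacements tied to one starting point.
        System.out.println(java.util.Arrays.toString(replay(rightEntrance, recordedSteps)));
    }
}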

Just give the machine the proper tools so that it can do spatial recognition, to make the fight fair.
We don't know how to give it those tools. They involve abstracting away to generalities, forgetting particular details, and error. Computers were designed for precision. Making them reproduce what is general is incredibly hard.
 

idav

Being
Premium Member
When we teach computers to learn we make sure they can make mistakes. The problem is that concepts are not "crisp", even when they correspond to physical objects. Two computers need not be of the same color, trees can have leaves or needles, dogs can be the size of wolves or the size of housecats, etc. The world is filled with vagueness and ambiguity. Computers do not, as a rule, tolerate any ambiguity. PolyHedral is fond of using objects to show that computers can learn about actual objects. In Java, I might define a class of objects "trees" that have properties like green, leaves, a given height range, etc. Then every single tree must have only those exact properties I specify. It's why object-oriented languages aren't especially well-suited for object recognition software. If I try to get a program to recognize faces I can either specify exactly what every face I want it to recognize looks like (and the faces cannot be seen from even a slightly different angle or this won't work), or I can allow it some room for error. I have to be very careful though, because if I allow too much error it might not recognize the difference between my face and Brad Pitt's face (this happens to me all the time, of course), but if I make it too exact it won't recognize my face shown with just the hint of a shadow, or a bruise, or at ever-so-slight an angle.
When humans are babies we already start learning how faces are supposed to look and so on. Of course it is difficult to get a computer to do that, but it needs at least the same exposure as a baby learning language and faces.
You don't even need to get to the way brains work to see how necessary mistakes are. Let's say that I flip a coin many times. I tell you the sequence of H's vs. T's. Which would most people think more likely? This sequence:

HHHTHTTHHTHTHTTHHHTHTHHHTHTHHTHTH

or this:

TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
?

I can write a program to calculate the answer to this question exactly. It's very easy. The answer is that all sequences of any n flips are equally likely, no matter what n is. If n is 50, then 50 heads is as likely as 25 heads and 25 tails in some particular order. What is much, much harder, however, is getting a computer to answer "which sequence looks more like what we'd expect?" We expect some heads and some tails. We don't expect to flip a coin 50 times and get tails (or heads) every single time, or nearly every time. That's why the first sequence looks more likely to us. We recognize it as a pattern of heads and tails and we don't pay attention to the fact that it is actually a very specific sequence. By ignoring extraneous details, or perceiving them imperfectly, we're actually better off. We see patterns so much better because we abstract away from the specifics and generalize. Unfortunately, this means error. Some generalizations will be wrong. Sometimes details do need to be remembered.

But in order to get computers just to recognize patterns (not even to understand them, just to correctly identify, e.g., the face of a dog vs. a cat vs. something else) we force them to be able to err. Classification algorithms necessarily place values on data corresponding to two or more classes. But the values exist in ranges. We make the computer capable of error. Dimensionality reduction techniques are all over the place in machine learning and elsewhere. They amount to throwing out data and trying to preserve or highlight what's most vital. We do it without thinking. Computers require some of the most sophisticated programming just to reproduce what mice do naturally.



It would. Because there is no such thing as "car", because even if we told it to recognize "our car" it would have enormous difficulty recognizing the same car from two different places, and because it doesn't store coordinates in a relative frame but an absolute one. Let's say you park your car at the mall and then go inside a shop for a few hours. You don't have to keep track of every step you take so that, when you want to go home, you can retrace exactly every single step. You can have an idea that the car is on the 3rd level towards the middle and find it by starting at the left or the right entrance on whatever level you please. When you turn, you don't expect the world to turn with you. Computers do. They are naturally oriented toward precision. Run a program backwards to get to the first step and it will re-take every single step.

In order to program a computer simply to remember where your car is, you have to do amazingly complicated programming just to make the computer able to recognize your car from different angles, and even harder programming (nearly impossible) to have it remember where the car is relative to itself. Modern navigation systems bypass this by using permanent markers and GPS. The problem is that this only works for relatively unchanging maps, like streets. Computers can't remember "back that way" or other directions that are useful when navigating unknown terrain. They must follow explicit steps or they suck at it. Getting a computer to remember where your car is means spelling out in painstaking detail every step to get to your car. You don't have to do that, because you can remember generalities.
I don't think any of that means that a computer would be better off with generalities. It is unnecessary for a computer to act like a fallible human in order to be aware. Certainly consciousness should be possible with more precise memory techniques. We don't say a person with a photographic memory is not cognitive because they are too much like a machine.

We can be very precise and tell the machine the exact color, structure, make, model, and position, and it will find the car faster than most humans because humans get confused with all those details.

We don't know how to give it those tools. They involve abstracting away to generalities, forgetting particular details, and error. Computers were designed for precision. Making them reproduce what is general is incredibly hard.
We know what it takes and it is painstaking, as you mentioned. Incredibly hard, sure, but not impossible.

I still don't see why having fuzzy memory makes a person more cognitive than having photographic/precise type memory. It just means the human has to try harder to keep a memory in place and it takes even more effort to keep the memory accurate.
 

LegionOnomaMoi

Veteran Member
Premium Member
When humans are babies we already start learning how faces are supposed to look and so on.
Humans can do this at birth.

Of course it is difficult to get a computer to do that, but it needs at least the same exposure as a baby learning language and faces.
You can expose a computer to millions of faces in a few hours. In fact, machine learning generally involves thousands and thousands of trials to learn. To teach a computer to learn to play checkers requires thousands upon thousands of games before it can play. It requires much more to play well. I don't know if anybody has ever bothered to get a computer to learn to play chess well. It's too hard.

I don't think any of that means that a computer would be better off with generalities.
That's because you take generalities for granted. Every single word is a generality. Even a word for a physical entity like a face doesn't correspond to it in the exact way a computer needs. When I define an object in a computer program it is represented precisely by one particular value. If I define tree, then there is only that one tree. That's what exactness, what precision does. It makes learning impossible. It makes recognition impossible.

It is unnecessary for a computer to act like a fallible human in order to be aware.
Have you ever tried programming a computer to learn anything? To recognize anything?

Certainly consciousness should be possible with more precise memory techniques.
Why? Consciousness involves remembering concepts. Concepts are not precise. Remembering concepts requires remembering imprecision. That's why when we try to get computers to learn, we make them imprecise.
We don't say a person with a photographic memory is not cognitive because they are too much like a machine.

People who really do have eidetic memory have severe problems. Nobody actually has a photographic memory, but people who can easily remember images without training have a great deal of trouble because they can't tune out unnecessary information.

We can be very precise and tell the machine the exact color, structure, make, model, and position, and it will find the car faster than most humans because humans get confused with all those details
The exact structure changes depending upon one's angle. We can certainly specify a very precise route for a computer to follow; it's just an absolute waste of time, because it will only work from one specific location to one other specific location. Any slight change requires an entirely new program.

We know what it takes
We don't know. Computers do not deal with abstracts. It is incredibly hard to get them to imitate simple abstractions. To get them to actually deal with generalities is impossible.
I still don't see why having fuzzy memory makes a person more cognitive than having photographic/precise type memory
You can't be "more cognitive". But if you mean "have superior cognitive abilities" then the answer is that memory precision becomes very problematic very quickly. We think of a good memory in terms of being able to recall lines from a movie or book or poem exactly, or being able to remember the first 60 digits of pi because we memorized it in 6th grade during a boring math class or being able to remember an entire functional map of the brain. We don't think of it as remembering every pixel in an image or the exact shape of every character in a book. That kind of memory is problematic because instead of remembering "face" one remembers billions of very specific values. Those values change ever-so-slightly constantly (a face will never look exactly the same in one picture as in another). In order for me to remember faces I have to remember details unique to a face similar to the way a computer does. It's still better than a computer but it means I generally have trouble recognizing faces. Computers cannot abstract away from "Andrew's face in this photograph" to "Andrew's face". They can only deal with specifics and exact values. When we try to "fuzzify" things by allowing for error and mistakes, they can do better but then they are terrible at it. That's our choice: specify exactly what must be done (no learning) or getting a computer to learn like an insect can.

It just means the human has to try harder to keep a memory in place and it takes even more effort to keep the memory accurate.
Humans can remember concepts. It means that I don't have to keep every instance of "face" in my mind. I have the concept face. Computers can't learn concepts.
 

idav

Being
Premium Member
Humans can do this at birth.
No, human babies have to learn to recognize faces. Their sight is all messed up and they have trouble even recognizing human faces or parents' faces until they have time to soak it up. They also have trouble differentiating dialects, but all those problems go away fairly quickly, to the point that mom is easily recognized and their dialect is recognized, while other, less common faces and accents become foreign.

You can expose a computer to millions of faces in a few hours. In fact, machine learning generally involves thousands and thousands of trials to learn. To teach a computer to learn to play checkers requires thousands upon thousands of games before it can play. It requires much more to play well. I don't know if anybody has ever bothered to get a computer to learn to play chess well. It's too hard.
That was IBM's first challenge in the '90s with Deep Blue. Yes, extremely difficult, but we got the computer to beat the world chess champion.

That's because you take generalities for granted. Every single word is a generality. Even a word for a physical entity like a face doesn't correspond to it in the exact way a computer needs. When I define an object in a computer program it is represented precisely by one particular value. If I define tree, then there is only that one tree. That's what exactness, what precision does. It makes learning impossible. It makes recognition impossible.
It doesn't make learning impossible, it makes learning precise.

Have you ever tried programming a computer to learn anything? To recognize anything?
I'm aware of the issues, my field is computer science.

Why? Consciousness involves remembering concepts. Concepts are not precise. Remembering concepts requires remembering imprecision. That's why when we try to get computers to learn, we make them imprecise.
I call BS.:D

People who really do have eidetic memory have severe problems. Nobody actually has a photographic memory, but people who can easily remember images without training have a great deal of trouble because they can't tune out unnecessary information.
This is correct but they aren't any less cognizant just from having more precise memory. If anything they are able to remember concepts better. I understand the issue, though: imagination is what makes us better at handling our ambiguous world of concepts. A computer lacking the ability to extrapolate is the issue, because a human can think for themselves and problem-solve while the machine needs to be told exactly what to do, and any ambiguity means failure. That said, memory is not the issue, whether precise or ambiguous.
The exact structure changes depending upon one's angle. We can certainly specify a very precise route for a computer to follow; it's just an absolute waste of time, because it will only work from one specific location to one other specific location. Any slight change requires an entirely new program.
Then that program didn't account for enough exceptions.


We don't know. Computers do not deal with abstracts. It is incredibly hard to get them to imitate simple abstractions. To get them to actually deal with generalities is impossible.

Not impossible, just time consuming because the programmer has to account for every variable known and unknown. Humans had billions of years to get "programmed", so we already account for variables; in fact, we can even account for variables that may not even exist in reality.

You can't be "more cognitive". But if you mean "have superior cognitive abilities" then the answer is that memory precision becomes very problematic very quickly. We think of a good memory in terms of being able to recall lines from a movie or book or poem exactly, or being able to remember the first 60 digits of pi because we memorized it in 6th grade during a boring math class or being able to remember an entire functional map of the brain. We don't think of it as remembering every pixel in an image or the exact shape of every character in a book. That kind of memory is problematic because instead of remembering "face" one remembers billions of very specific values. Those values change ever-so-slightly constantly (a face will never look exactly the same in one picture as in another). In order for me to remember faces I have to remember details unique to a face similar to the way a computer does. It's still better than a computer but it means I generally have trouble recognizing faces. Computers cannot abstract away from "Andrew's face in this photograph" to "Andrew's face". They can only deal with specifics and exact values. When we try to "fuzzify" things by allowing for error and mistakes, they can do better but then they are terrible at it. That's our choice: specify exactly what must be done (no learning) or getting a computer to learn like an insect can.
Software has been getting dang good at facial and voice recognition, better than humans even. I'm even having trouble doing some of the newer CAPTCHAs that computers are getting better at recognizing.

We've been using the technology to try and catch terrorists with recognition software, and the most recent problem I've seen is trying to spell these guys' names consistently across the various systems we have tracking such things. Trying to teach a human to spell is hard enough.
Humans can remember concepts. It means that I don't have to keep every instance of "face" in my mind. I have the concept face. Computers can't learn concepts.
When we think car, our brain goes through all sorts of instances using that ambiguity you're so fond of. It gets the job done, but it is overkill big time. I should be able to tell someone "my car" and there should be no ambiguity.
 

LegionOnomaMoi

Veteran Member
Premium Member
Can you back this claim up? You're claiming that an imperfect form of memory, that of the brain, can conceptualize better than a system that has better memory and is not as prone to mistakes, like the memory of a machine.
I think it is vital to look at how what seems to you to be less capable than a machine is really vastly superior. For example, how is it that humans can effortlessly classify "nearly infinite" numbers of objects? Because they store these in nonlocal patterns of active networks that are constantly re-wiring and linking with others. Here are some instructive or landmark papers you might think about looking over:

Freeman, W. J. (2003). A neurobiological theory of meaning in perception Part I: Information and meaning in nonconvergent and nonlocal brain dynamics. International Journal of Bifurcation and Chaos, 13(09), 2493-2511.

Freeman, W. J. (2003). A neurobiological theory of meaning in perception Part II: Spatial Patterns of phase in gamma EEGs from primary sensory cortices reveal the dynamics of mesoscopic wave packets. International Journal of Bifurcation and Chaos, 13(09), 2513-2535.


Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338(6213), 334-337.

Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual review of neuroscience, 18(1), 555-586.
 

idav

Being
Premium Member
I think it is vital to look at how what seems to you to be less capable than a machine is really vastly superior. For example, how is it that humans can effortlessly classify "nearly infinite" numbers of objects? Because they store these in nonlocal patterns of active networks that are constantly re-wiring and linking with others. Here are some instructive or landmark papers you might think about looking over:

Freeman, W. J. (2003). A neurobiological theory of meaning in perception Part I: Information and meaning in nonconvergent and nonlocal brain dynamics. International Journal of Bifurcation and Chaos, 13(09), 2493-2511.

Freeman, W. J. (2003). A neurobiological theory of meaning in perception Part II: Spatial Patterns of phase in gamma EEGs from primary sensory cortices reveal the dynamics of mesoscopic wave packets. International Journal of Bifurcation and Chaos, 13(09), 2513-2535.


Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338(6213), 334-337.

Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual review of neuroscience, 18(1), 555-586.
I will look through some of those links as I have time. That first link is interesting; it gets into the root of our argument. I'm of the mind that information is enough, but the proposal in that link is that we need a theory of meaning rather than information. The thing about meaning is that it is largely subjective. Two humans can read different meanings into the same sentence. The same would be true for a computer, but it would obviously be more prone to mistakes, since a computer has far less experience than a human regarding communication. To me that doesn't really take away from the ability to perceive the environment. Humans don't need language or intelligence to be aware so neither does a computer. The human can be in a vegetative state for all I know, but awareness and experience are still occurring and don't require massive memory or calculation.
 

LegionOnomaMoi

Veteran Member
Premium Member
No, human babies have to learn to recognize faces.
We've known for decades this isn't true. This study in the '90s already reports on previous studies showing 4-day-old infants recognizing their mothers' faces. More recent studies (e.g., Bulf, H., & Turati, C. (2010). The role of rigid motion in newborns' face recognition. Visual Cognition, 18(4), 504-512.) have investigated how day-old infants recognize faces.

Their sight is all messed up and they have trouble even recognizing human faces or parents' faces until they have time to soak it up
They have trouble compared to us. They are better than any computer.

They also have trouble differentiating dialects

They don't. They need to learn language. No computer is capable of this.

but all those problems go away fairly quickly, to the point that mom is easily recognized
That happens on the first day.

and their dialect is recognized, while other, less common faces and accents become foreign.
The reason foreign languages start to sound foreign is that children lose the ability to differentiate particular sounds. They forget. For example, many speakers of Asian languages have trouble differentiating liquids like the consonants "r" and "l". English speakers can't recognize glottal stops. Infants and very young children are capable of learning any language because they can hear the full range of sounds. But this makes it harder to learn. As they stop being able to distinguish certain sounds, they get better. Poorer memory, less clarity, and, hey presto, better learning.

That was IBM's first challenge in the '90s with Deep Blue. Yes, extremely difficult, but we got the computer to beat the world chess champion.

We didn't teach it to. We programmed it to. That's the difference. Chess has specific rules. It's possible to program a computer to follow specific procedures exactly. That's not learning.
It doesn't make learning impossible, it makes learning precise.

Precision makes learning impossible at that level. You cannot classify anything with the level of precision a computer uses. You can only do so by forcing it to be imprecise.


I'm aware of the issues, my field is computer science.
Can you name a single learning algorithm that has the kind of precision you are talking about? Do you know what principal component analysis is, what perceptron learning is, how ANNs work, how genetic or evolutionary algorithms work, or how machine learning in general works? Because if so, this can get a lot easier very quickly by just being technical and talking about parameter values, the important differential equations, eigenvalues, etc.
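To be technical for a moment, here is one of the algorithms I just named, perceptron learning, stripped down to a sketch that learns the AND function. The point to notice is the update rule: the weights change only when the machine gets an example wrong. Error is the engine of the learning:

// Minimal perceptron learning the AND of two inputs.
// The weights are updated only on a mistake (error-driven learning).
public class PerceptronDemo {

    public static void main(String[] args) {
        double[][] inputs = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        int[] labels      = {  0,      0,      0,      1    }; // logical AND

        double[] w = {0.0, 0.0};
        double bias = 0.0, learningRate = 0.1;

        for (int epoch = 0; epoch < 20; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                int predicted = (w[0] * inputs[i][0] + w[1] * inputs[i][1] + bias) > 0 ? 1 : 0;
                int error = labels[i] - predicted;     // non-zero only when wrong
                w[0] += learningRate * error * inputs[i][0];
                w[1] += learningRate * error * inputs[i][1];
                bias += learningRate * error;
            }
        }

        // After training, the perceptron reproduces AND: 0, 0, 0, 1.
        for (double[] x : inputs) {
            int out = (w[0] * x[0] + w[1] * x[1] + bias) > 0 ? 1 : 0;
            System.out.println(x[0] + " AND " + x[1] + " -> " + out);
        }
    }
}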


I call BS.:D

Name a precise concept (one that requires no abstraction and can be easily translated into a mathematical model).


This is correct but they aren't any less cognizant just from having more precise memory.
They are less cognizant. They cannot remember the important things because they cannot pay attention to the important stuff. They pay attention to everything. It's overload.

If anything they are able to remember concepts better.
There's no direct relation between visual memory and conceptual representation.

A computer lacking the ability to extrapolate is the issue
To abstract from the specifics and generalize requires being able to weed out the unimportant stuff. We do that automatically. Computers are not designed to and are awful at it.


Then that program didn't account for enough exceptions.
It's impossible to do this.


Not impossible, just time consuming because the programmer has to account for every variable known and unknown

You've heard of P vs. NP?

Software has been getting dang good at facial and voice recognition, better than humans even
This is completely untrue.

I'm even having trouble doing some of the newer CAPTCHAs that computers are getting better at recognizing.
Computers are stumped by simple CAPTCHAs. That's the point. If computers were better, CAPTCHAs would be completely useless. The entire point is a task that humans can do and computers can't.

We've been using the technology to try and catch terrorists with recognition software
I know. I was at a conference not that long ago with people who did this for a living. Humans vastly outstrip computers in every form of recognition. So do mice. The problem is that we can't get humans (or, alas, mice) to run through enormous amounts of data non-stop. So we get computers to do the grunt work and rule things out.

Trying to teach a human to spell is hard enough.
Trying to teach a computer to spell is impossible. And speaking of spelling, if you program as a computer scientist, why is it that we don't have programming languages where you can misspell a word or forget a semicolon and the program will run perfectly anyway, because the computer is able to "get" what you meant the way you or I can when we misspell words?

When we think car, our brain goes through all sorts of instances using that ambiguity you're so fond of
It doesn't. The concept of car is inherently ambiguous. There is no "car"; there are lots of instantiations of "car". Computers only deal in absolutes. Each car is treated separately because computers can't deal with the concept of "car".
 

LegionOnomaMoi

Veteran Member
Premium Member
The same would be true for a computer, but it would obviously be more prone to mistakes, since a computer has far less experience than a human regarding communication
It has vastly more. A typical computer parses more communication in a few days than most humans in a lifetime. A computer programmed to parse communications does more in a few hours than a human in a lifetime.

Humans don't need language or intelligence to be aware so neither does a computer.
Language is essential to conscious awareness.
 

PolyHedral

Superabacus Mystic
They can't. If they could I'd know about it. It would be the single most important development since the first computers were built.
I disagree, because I think you're arbitrarily declaring that, and ignoring that every single entity within an OOP computer system represents a concept. Human consideration of concrete objects is quite clearly made in terms of instances of classes, and that thinking can be modelled trivially in a programming model. More advanced systems can even model meta-thought, including the notion of "concepts." :p
 

LegionOnomaMoi

Veteran Member
Premium Member
I disagree, because I think you're arbitrarily declaring that, and ignoring that every single entity within an OOP computer system represents a concept.
That's because you equate objects in code with the words they resemble. All words are abstractions. We can't mathematically represent abstractions. We can only fake it and fake it poorly. Were OOP at all useful here it would be used in object recognition in robotics and advanced machine learning. It isn't. Mathematical techniques like PCA (see Murase and Nayar's work in the International Journal of Computer Vision) or statistical shape analysis are, along with custom environments like ORCC.
See e.g.,
Shape Analysis and Classification: Theory and Practice (Image Processing Series)
2D Object Detection and Recognition: Models, Algorithms and Networks
Advances in Object Recognition Systems
Shape Classification Using the Inner-Distance
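For concreteness, here is the bare skeleton of the PCA/eigenspace idea behind appearance-based recognition: centre the data, find the direction of greatest variance, project everything onto it, and match by nearest projection. The "images" below are four invented numbers each, and only the first principal component is extracted (by power iteration); real systems like Murase and Nayar's use thousands of pixels and many components:

// Minimal PCA sketch for appearance-based matching (toy, invented data).
public class PcaSketch {

    public static void main(String[] args) {
        // Toy "views" of two objects, as 4-pixel images.
        double[][] views = {
            {0.9, 0.8, 0.1, 0.2},   // object A, view 1
            {0.8, 0.9, 0.2, 0.1},   // object A, view 2
            {0.1, 0.2, 0.9, 0.8},   // object B, view 1
            {0.2, 0.1, 0.8, 0.9},   // object B, view 2
        };
        String[] labels = {"A", "A", "B", "B"};
        double[] newView = {0.85, 0.75, 0.15, 0.25};  // unseen view of object A

        double[] mean = mean(views);
        double[][] centered = center(views, mean);
        double[] pc = firstPrincipalComponent(centered);

        // Project everything onto the single principal axis and pick the nearest.
        double target = dot(subtract(newView, mean), pc);
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < views.length; i++) {
            double d = Math.abs(dot(centered[i], pc) - target);
            if (d < bestDist) { bestDist = d; best = labels[i]; }
        }
        System.out.println("Recognized as object " + best);  // prints A
    }

    static double[] mean(double[][] xs) {
        double[] m = new double[xs[0].length];
        for (double[] x : xs)
            for (int j = 0; j < m.length; j++) m[j] += x[j] / xs.length;
        return m;
    }

    static double[][] center(double[][] xs, double[] m) {
        double[][] c = new double[xs.length][];
        for (int i = 0; i < xs.length; i++) c[i] = subtract(xs[i], m);
        return c;
    }

    static double[] subtract(double[] a, double[] b) {
        double[] r = new double[a.length];
        for (int j = 0; j < a.length; j++) r[j] = a[j] - b[j];
        return r;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int j = 0; j < a.length; j++) s += a[j] * b[j];
        return s;
    }

    // Power iteration on the (unnormalized) covariance structure X^T X.
    static double[] firstPrincipalComponent(double[][] centered) {
        int d = centered[0].length;
        double[] v = new double[d];
        v[0] = 1.0;  // simple starting direction (not orthogonal to this data)
        for (int iter = 0; iter < 100; iter++) {
            double[] next = new double[d];
            for (double[] row : centered) {
                double proj = dot(row, v);
                for (int j = 0; j < d; j++) next[j] += proj * row[j];
            }
            double norm = Math.sqrt(dot(next, next));
            for (int j = 0; j < d; j++) v[j] = next[j] / norm;
        }
        return v;
    }
}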

Human consideration of concrete objects
Depends on the ability to abstract away from them. Object classes are completely specified, unvarying, and absolute.
is quite clearly made in terms of instances of classes
Classes were made with human object recognition in mind. However, that was 40 years ago and it didn't amount to much.
More advanced systems can even model meta-thought, including the notion of "concepts." :p
Can you name an example?
 

PolyHedral

Superabacus Mystic
That's because you equate objects in code with the words they resemble.
No, I equate objects in code with the properties they hold to. :D The difference is subtle, but vastly important.

All words are abstractions. We can't mathematically represent abstractions. We can only fake it and fake it poorly. Were OOP at all useful here it would be used in object recognition in robotics and advanced machine learning. It isn't.
Any sort of mathematical representation is an abstraction - all mathematical objects are abstract, and defined in terms of properties, not what they "really" are.

Depends on the ability to abstract away from them. Object classes are completely specified, unvarying, and absolute.
Until you get into code generation, reflection, and mixin systems. :D

Can you name an example?
System.Type.
 

LegionOnomaMoi

Veteran Member
Premium Member
No, I equate objects in code with the properties they hold to. :D The difference is subtle, but vastly important.
Vitally important. The problem is that the properties computer objects hold are incredibly restrictive and incapable of being extended to actual objects.


Any sort of mathematical representation is an abstraction
No. Any mathematical representation is an abstraction to US. To a computer, it's a procedure.


My bad. I meant an example of an actual system/program that was trained to understand concepts. Not a programming environment, system, or language which can easily be mistaken by humans for something resembling conceptual processing because it uses words that correspond to "object" and "class" and mimics the way we describe understanding.
 

PolyHedral

Superabacus Mystic
Vitally important. The problem is that the properties computer objects hold are incredibly restrictive and incapable of being extended to actual object.
So what about, say, arithmetic on the range [-2^63, 2^63-1] is "incapable of being extended to actual object"? (Assuming that integers are actual objects in themselves.)

No. Any mathematical representation is an abstraction to US. To a computer, it's a procedure.
...which can be unpacked and analyzed into, e.g. data structures, pre-conditions, post-conditions, etc. In a framework with reflection capabilities, there are no black boxes apart from the ones that underpin the framework. That's equally true in the brain, and is actually a major problem in philosophy. Introspection can't deconstruct certain concepts into component bits either.

My bad. I meant an example of an actual system/program that was trained to understand concepts.
Why is System.Type not a concept? It implements a reasonable approximation of the properties which classes-in-general have.

Not a programming environment, system, or language which can easily be mistaken by humans for something resembling conceptual processing because it uses words that correspond to "object" and "class" and mimics the way we describe understanding.
It doesn't resemble conceptual processing because it uses the words we use - it performs conceptual processing because it implements the same logic we use.
 

LegionOnomaMoi

Veteran Member
Premium Member
So what about, say, arithmetic on the range [-2^63, 2^63-1] is "incapable of being extended to actual object"? (Assuming that integers are actual objects in themselves.)

Arithmetic isn't an object, unless you mean the concept itself, which computers can't comprehend (because they can't comprehend).


...which can be unpacked and analyzed
The procedures are the lowest level. They are what the computer does: syntactic processing. They cannot be unpacked, analyzed, or anything else, because that's what computers do: implement math procedurally. It's all they do.

That's equally true in the brain
There is no evidence whatsoever that brains follow any computable algorithms and there is evidence (and indeed proofs) that they don't.
, and is actually a major problem in philosophy.
Citation? I'm not sure what you are referring to here.


Why is System.Type not a concept?
You didn't say whether a programming system or environment can be a concept but rather:
More advanced systems can even model meta-thought, including the notion of "concepts."
So show me a constructed model using System.Type that models the notion of concepts.

It implements a reasonable approximation of the properties which classes-in-general have.
Show me a functional example, not a mistaken comparison between linguistic referents in code and their conceptual objects, or an assumption of logical equivalence between syntactic processing and conceptual processing. Show me a meaningful example and we can go from there.
 

LegionOnomaMoi

Veteran Member
Premium Member
it performs conceptual processing because it implements the same logic we use.
We don't implement logic to understand concepts. Logical parsing is entirely syntactical and devoid of meaning. Necessarily so, as this is the entire point of formal languages: stripping meaning away.
 

PolyHedral

Superabacus Mystic
The procedures are the lowest level. They are what the computer does: syntactic processing. They cannot be unpacked, analyzed, or anything else, because that's what computers do: implement math procedurally. It's all they do.
Opcodes are the lowest level, and even they can be represented within the computer system by statements about how the computer's state transforms. Everything higher up than that can be represented "natively."

This includes structures that represent (that is, have the properties of) notions such as general predicates over types, syntax trees [i.e. the thing that governs understanding of math], transformations of objects [including where the "object" is a representation of the computer's own state], and property-based collections.

I understand the very idea of understanding to be this: you understand an object when you have a comprehensive (not necessarily complete) knowledge of the object's properties. You understand arithmetic on integers when you know that, e.g. adding two integers always produces an integer. This can be represented in the computer system, which means the computer understands arithmetic. It may not know the meta-fact that it knows arithmetic, but it does. (However, the meta-knowledge can be arranged as well, via reflection. :D)
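System.Type is a .NET construct; Java's java.lang.Class plays the analogous role. A minimal sketch of the kind of meta-representation being described (the Tree class here is invented for illustration):

// A class's own description available as a first-class object at runtime
// (Java's java.lang.Class, playing the role System.Type plays in .NET).
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionSketch {

    static class Tree {
        String foliage = "leaves";
        double heightM = 10.0;

        double grow(double metres) { return heightM += metres; }
    }

    public static void main(String[] args) {
        Class<?> type = Tree.class;   // the machine's description of "Tree"

        System.out.println("Type name: " + type.getName());
        for (Field f : type.getDeclaredFields())
            System.out.println("  field:  " + f.getType().getSimpleName() + " " + f.getName());
        for (Method m : type.getDeclaredMethods())
            System.out.println("  method: " + m.getName());
    }
}

Whether having that runtime description available is the same thing as having the concept is, of course, exactly what is in dispute here.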

Citation? I'm not sure what you are referring to here.
Mostly, the whole idea occurred to me debating the nature of qualia. I know how to represent the notion of redness in mathematical terms, but not because I have some way to introspect and understand what I mean when I say "qualia" - I had to work backwards and explicitly construct a concept that behaved in the same way as qualia intuitively behaved. IOW, in exactly the same way any other mathematical concept is formalized.

You didn't say whether a programming system or environment can be a concept but rather:
So show me a constructed model using System.Type that models the notion of concepts.
System.Type is a model, in the maths sense. It behaves according to a set of properties which are shared by the intuitive notion of a "class of things."

Show me a functional example, not a mistaken comparison between linguistic referents in code and their conceptual objects, and we can go from there.
If I have a C++ program in front of me, do I have to understand it to be able to write an assembly program that will do the same thing?
If I have a set of English instructions in front of me, do I have to understand it to be able to write a C++ program that will do the same thing?
 