Can you back this claim up? You're claiming that an imperfect form of memory, the brain's, can conceptualize better than a system that has better memory and is not as prone to mistakes.
When we teach computers to learn, we make sure they can make mistakes. The problem is that concepts are not "crisp," even when they correspond to physical objects. Two computers need not be the same color, trees can have leaves or needles, dogs can be the size of wolves or the size of housecats, and so on. The world is filled with vagueness and ambiguity. Computers do not, as a rule, tolerate any ambiguity. Polyhedral is fond of using objects to show that computers can learn about actual objects. In Java, I might define a class of objects "trees" that have properties like green, leaves, a given height range, etc. Then every single tree must have exactly the properties I specify. That's why object-oriented languages aren't especially well-suited for object recognition software. If I try to get a program to recognize faces, I can either specify exactly what every face I want it to recognize looks like (and then the faces cannot be seen from even a slightly different angle or it won't work), or I can allow it some room for error. I have to be very careful, though: if I allow too much error it might not recognize the difference between my face and Brad Pitt's face (this happens to me all the time, of course), but if I make it too exact it won't recognize my face with just the hint of a shadow, or a bruise, or an ever-so-slight change of angle.
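To make that concrete, here is a minimal Java sketch. The Tree class, the toy "face" feature vectors, and the tolerance values are all hypothetical illustrations, not a real recognition system: the class nails down exactly what counts as a tree, and the face comparison only works at all once we hand it some room for error.

```java
// Hypothetical sketch: rigid class definitions vs. error-tolerant matching.
public class RigidVsFuzzy {

    // A class pins down exactly which properties count as "tree".
    // Anything outside these fields and ranges simply isn't a Tree here.
    static class Tree {
        String color = "green";
        boolean hasLeaves = true;
        double heightMeters;

        Tree(double heightMeters) {
            if (heightMeters < 2.0 || heightMeters > 40.0) {
                throw new IllegalArgumentException("not a tree by our definition");
            }
            this.heightMeters = heightMeters;
        }
    }

    // Toy "face" as a feature vector. Matching needs a tolerance:
    // too small and a shadow breaks it, too large and two different people collide.
    static boolean sameFace(double[] a, double[] b, double tolerance) {
        double sumSq = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sumSq += d * d;
        }
        return Math.sqrt(sumSq) <= tolerance;
    }

    public static void main(String[] args) {
        Tree oak = new Tree(12.0); // fits the rigid definition; a 1.5 m bonsai would be rejected
        System.out.println("Tree of height " + oak.heightMeters + "m fits the rigid definition");

        double[] myFace       = {0.62, 0.35, 0.80};
        double[] myFaceShadow = {0.60, 0.37, 0.78}; // same face, slightly different lighting
        double[] otherFace    = {0.30, 0.70, 0.55};

        System.out.println(sameFace(myFace, myFaceShadow, 0.01)); // false: too strict, a shadow breaks it
        System.out.println(sameFace(myFace, myFaceShadow, 0.10)); // true: some room for error
        System.out.println(sameFace(myFace, otherFace, 2.0));     // true: too loose, wrong person matches
    }
}
```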
You don't even need to get to the way brains work to see how necessary mistakes are. Let's say that I flip a coin many times. I tell you the sequence of H's vs. T's. Which would most people think more likely? This sequence:
HHHTHTTHHTHTHTTHHHTHTHHHTHTHHTHTH
or this:
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
?
I can write a program to calculate the answer to this question exactly. It's very easy: every particular sequence of n flips is equally likely, no matter what n is. If n is 50, then 50 heads is as likely as 25 heads and 25 tails in some particular order. What is much, much harder, however, is getting a computer to answer "which sequence looks more like what we'd expect?" We expect some heads and some tails. We don't expect to flip a coin 50 times and get 50 heads, or almost 50 heads. That's why the first sequence looks more likely to us.

We recognize it as a pattern of heads and tails and we don't pay attention to the fact that it is actually one very specific sequence. By ignoring, or seeing imperfectly, extraneous details we're actually better off. We see patterns so much better because we abstract away from the specifics and generalize. Unfortunately, this means error. Some generalizations will be wrong. Sometimes details do need to be remembered. But just to get computers to recognize patterns (not even to understand them, just to correctly identify, e.g., the face of a dog vs. a cat vs. something else), we force them to be able to err. Classification algorithms necessarily place values on data corresponding to two or more classes, but the values exist in ranges: we make the computer capable of error. Dimensionality reduction techniques are all over the place in machine learning and elsewhere. They amount to throwing out data while trying to preserve or highlight what's most vital. We do it without thinking. Computers require some of the most sophisticated programming just to reproduce what mice do naturally.
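Here is a short Java sketch of the coin-flip point, assuming both sequences above are the same length (33 flips). The exact question is trivial, since every particular sequence of n fair flips has probability (1/2)^n; the "looks more likely" judgment needs a lossy summary, here just the head count, that throws the specific order away.

```java
// Hypothetical sketch: exact probability vs. a lossy "typicality" summary.
public class CoinSequences {

    // Exact probability of one particular sequence of fair flips: (1/2)^n.
    static double exactProbability(String seq) {
        return Math.pow(0.5, seq.length());
    }

    // A crude summary: how far the head count is from n/2. This discards
    // the specific order, which is exactly the kind of detail people ignore.
    static double distanceFromExpectedHeads(String seq) {
        long heads = seq.chars().filter(c -> c == 'H').count();
        return Math.abs(heads - seq.length() / 2.0);
    }

    public static void main(String[] args) {
        String mixed    = "HHHTHTTHHTHTHTTHHHTHTHHHTHTHHTHTH";
        String allTails = "TTTTTTTTTTT".repeat(3); // 33 tails, same length as the mixed sequence

        // Identical outputs: the exact calculation cannot tell the two apart.
        System.out.println(exactProbability(mixed));
        System.out.println(exactProbability(allTails));

        // Only the lossy summary captures why one sequence "looks" more plausible.
        System.out.println(distanceFromExpectedHeads(mixed));    // small: near the expected mix
        System.out.println(distanceFromExpectedHeads(allTails)); // large: nothing but tails
    }
}
```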
If we told a machine to go out into a parking lot and find our car, it would have no problem.
It would. Because there is no such thing as "car"; because even if we told it to recognize "our car" it would have enormous difficulty recognizing the same car from two different vantage points; and because it doesn't store coordinates in a relative frame but in an absolute one. Let's say you park your car at the mall and then go inside a shop for a few hours. You don't have to keep track of every step you take so that, when you want to go home, you can retrace exactly every single step. You can have an idea that the car is on the 3rd level towards the middle and find it by starting at the left or the right entrance on whatever level you please. When you turn, you don't expect the world to turn with you. Computers do. They are naturally oriented toward precision. Run a program backwards to get to the first step and it will re-take every single step.

In order to program a computer simply to remember where your car is, you have to do amazingly complicated programming just to make it able to recognize your car from different angles, and even harder programming (nearly impossible) to have it remember where the car is relative to itself. Modern navigation systems bypass this by using permanent markers and GPS. The problem is that this only works for relatively unchanging maps, like streets. Computers can't remember "back that way" or the other rough directions that are useful when navigating unknown terrain. They must follow explicit steps or they fail. Getting one to remember where your car is means spelling out in painstaking detail every step to get to your car. You don't have to do that, because you can remember generalities.
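Here is a rough Java sketch of that contrast; the step list and the RoughMemory landmark are made up for illustration. The machine-style memory is a literal list of every step that must be replayed in reverse, while the human-style memory is just "3rd level, towards the middle," which any search strategy can start from.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: exact step-by-step memory vs. a coarse landmark.
public class FindTheCar {

    // Machine style: record every single step taken on the way in.
    static List<String> walkToShop() {
        List<String> steps = new ArrayList<>();
        Collections.addAll(steps,
                "forward 10m", "turn left", "forward 40m", "up ramp",
                "forward 25m", "turn right", "enter mall");
        return steps;
    }

    // To get back, the literal strategy is to replay the whole list in reverse
    // (a real system would also have to invert each action: left becomes right, etc.).
    static List<String> retraceExactly(List<String> steps) {
        List<String> back = new ArrayList<>(steps);
        Collections.reverse(back);
        return back;
    }

    // Human style: keep only a coarse landmark and improvise a search from it.
    static class RoughMemory {
        final int level;
        final String area;
        RoughMemory(int level, String area) {
            this.level = level;
            this.area = area;
        }
    }

    public static void main(String[] args) {
        List<String> exactPath = walkToShop();
        System.out.println("Machine retraces: " + retraceExactly(exactPath));

        RoughMemory memory = new RoughMemory(3, "towards the middle");
        System.out.println("Human remembers: level " + memory.level + ", "
                + memory.area + " -- start from either entrance and look around");
    }
}
```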
Just give the machine the proper tools so that it can do spatial recognition, to make the fight fair.
We don't know how to give it those tools. They involve abstracting away to generalities, forgetting particular details, and error. Computers were designed for precision. Making them reproduce what is general is incredibly hard.