There are many ongoing projects developing ways to solve NP-complete problems in feasible time. There has been limited success with DNA computing, and so far a complete lack of success with quantum computing (by the way, if you haven't heard of quantum phenomena, you should read about them; by far the most fascinating thing I've ever heard of). Will computers evolve? I would say no. Why do I say this? In all of our experience, intelligence has never been created, even by ourselves (intelligent agents). How, then, could a non-intelligent being create intelligence? Furthermore, it is a claim of a higher order still that a non-intelligent, non-conscious entity could create both intelligence and consciousness. I can't imagine any possible scenario for this. Still, anything is possible. It's also possible that a Coke can will fall from the sky and land in my back yard out of nowhere, but I don't expect that it ever will.
My only problem with this is that we have only finite experience out of a possibly infinite range of experiences. We could develop a computer that would be self-conscious (it certainly wouldn't be binary). At least, that is what I would contend. I see no reason why a non-biological matrix would necessarily be unable to hold consciousness. How could a non-intelligent being create intelligence? What are you referring to? A computer making itself conscious without help? Obviously impossible. A future computer with wildly different foundations, programmed in such a way as to enable it to reprogram itself, in the same way we do in shaping our own consciousness? Perhaps. I'm not saying it will definitely happen; I'm saying I can't see a reason to mark it off as impossible just yet. To do so is to close avenues of thought and experiment rashly, without exploring them fully.
Amongst the general public, I would wager that a vast majority would say otherwise.
Schools today teach that computers are not intelligent. They teach that computers only take in input and give an output, exactly as they were programmed to.
Aha. That is more evidence for creation than I ever could have put forth on my own.
Not really. Genes are self-perpetuating. Each individual gene works with the whole of the DNA to build an efficient gene-transport and gene-reproduction vehicle, which can perpetuate the DNA further. If a gene fails to reproduce itself consistently (for instance, by reducing the chance of survival of the gene-vehicle, or organism), it disappears completely over time.
The DNA we are programmed with was not programmed by a creator, but by the "guiding hand" of billions of years of randomness and natural selection arising from randomness.
The first "organisms" replicated in order to reproduce copies of their defining proteins. The molecules that started this were self-replicating protein strands that automatically, as a function of their structure, reproduced themselves as they encountered the necessary substances. These formed because of the nature of Earth's environment at the time. They weren't anything special, except that they led to more complex replicators. The protein strands that replicated better than others would eventually have many more copies floating around than the poor replicators. Over the hundreds of millions of years after the first self-replicating protein strands appeared, mistakes in replication among the excellent replicators led to new ways of finding chemical building blocks (food). Some started to disassemble other protein strands for their building blocks. The ones that were better at not being disassembled were able to replicate better (simply because they still existed). And so on. Of course we don't know the whole picture, but how could we? Just because we don't know everything doesn't mean we need to resort to magic or God.
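The core of that argument, that better replicators simply come to outnumber worse ones with no designer involved, can be sketched as a toy simulation. To be clear, everything below (the names, the rates, the carrying-capacity mechanism) is invented for illustration; this is not a model of real prebiotic chemistry:

```python
# Toy model: each "strand" type has a replication rate (new copies per step).
# Limited "food" (the carrying capacity) forces the types into competition.
def simulate(rates, steps, carrying_capacity=10_000):
    counts = {name: 10 for name in rates}           # start with 10 copies of each
    for _ in range(steps):
        for name, rate in rates.items():
            counts[name] = int(counts[name] * rate)  # each type replicates
        total = sum(counts.values())
        if total > carrying_capacity:                # food runs short: scale down
            counts = {n: max(1, c * carrying_capacity // total)
                      for n, c in counts.items()}
    return counts

final = simulate({"poor": 1.1, "decent": 1.3, "excellent": 1.5}, steps=50)
print(final)  # the "excellent" replicator vastly outnumbers the others
```

No strand "intends" anything here; the skew in the final counts falls out of nothing but differential replication plus scarcity.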
No, not really. For one, that intelligence must come from somewhere. We have no idea exactly how our brain works. You are studying neuroscience, so I would wager that you know more about how it functions than I do. By all accounts that I have studied, there are many mysteries (e.g., where are memories stored? How are they lost, if that really is the case?).
We know a lot more about our brains than you'd think. We don't know everything, but you can liken the present state of neuroscience to the state of world exploration at the discovery of the Americas. We knew some of the coastline, but nothing of the interior. As we explored it further, we learned more. We are exploring the coastline of the brain, and even venturing into the interior (not literally, it's just a metaphor, obviously) and we will eventually learn as much about it as we know about our other organs, like the heart and the kidney. At one point, we knew that blood circulated, but we didn't know it was the heart that did it.
Long-term memories are formed and consolidated through the hippocampus, by the way. You have an indexing system in your brain that labels memories according to their contents (this is vastly simplified so that we can discuss it easily). If the label is damaged or deteriorates (which happens when you don't activate the neurons controlling that label often enough), you won't be able to access the memory. You simply can't find it in the index.
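The index analogy can be caricatured in a few lines. This is purely an analogy; the names and data structures below are invented, and nothing about it reflects how neurons actually store anything:

```python
# Rough analogy only: the memory survives, but losing its index entry
# makes it unreachable.
store = {0: "riding the first bike", 1: "graduation day"}  # the memories themselves
index = {"first-bike": 0, "graduation": 1}                 # labels pointing at them

def recall(label):
    key = index.get(label)
    return store[key] if key is not None else None

print(recall("graduation"))   # retrievable while the label is intact
del index["graduation"]       # the label deteriorates from disuse
print(recall("graduation"))   # None: still sitting in `store`, but unfindable
```

The point of the analogy is that "forgetting" here is a lookup failure, not the destruction of the stored item.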
Would you like to talk on MSN or AIM? You seem an interesting fellow, and I could probably discuss this better in an IM medium than here.
You are likening the discovery of fire (an intelligent being discovering a new way to use his environment) to an unintelligent being creating intelligence.
No, I am not. I am likening the discovery of fire to the discovery of ways of creating machine intelligence (not an unintelligent being creating intelligence, no, but a sentient being creating a sentient being/machine). I do not propose that a computer could cause sentience to occur in itself without our direct design, or the design of systems that would promote the evolution of machine intelligence.
This seems to be a very vague explanation. I notice that scientists fall into this habit when they don't exactly understand how something works. Perhaps I am way off base here. Do you have a neutral source I could look at with the gritty details?
Anything by Doug Hofstadter is good. I would recommend "The Mind's I", but before reading that, "Up From Dragons" is great. Go to your library and look for either of those. The books shelved in the same section are likely to be excellent on the subject as well.
The explanation is vague, by the way, because I would need to write a book to explain it adequately, or enough books to fill libraries in order to describe it perfectly (something that is still impossible anyway).
What do you even mean by democratic? Where would you suggest I research this?
"I Am a Strange Loop" is good for this, if I recall correctly.
Basically, rather than having a controller or a little person in the mind, a "conductor", you have a democracy of neurons, which can sway in many directions to make different choices depending on the weight attached to each neuron and its role in the neural nets it belongs to. The only potential candidate for a "conductor" is the prefrontal lobes, which are rather necessary to consciousness as we know it, but even then the prefrontal lobes act as a neuronal democracy, not as a hierarchical master of consciousness or of the brain.
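That "democracy of neurons" idea can be caricatured as a weighted vote. The function, inputs, and weights below are all made up for illustration; real neurons are incomparably more complicated, but the structural point survives: the outcome emerges from the votes, and no single unit is in charge:

```python
# Toy "neuronal democracy": no conductor, just weighted votes.
def decide(inputs, weights, threshold=0.0):
    vote = sum(i * w for i, w in zip(inputs, weights))  # tally the weighted votes
    return "act" if vote > threshold else "don't act"

# Three "neurons" weighing in on the same decision:
inputs  = [1.0, 1.0, 1.0]
weights = [0.6, -0.2, 0.1]        # one strongly for, one against, one mildly for
print(decide(inputs, weights))    # "act": the weighted majority sways the outcome
```

Shift the weights (say, strengthen the dissenting unit) and the same inputs produce the opposite decision, which is the sense in which the system "sways in many directions" without any master controller.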