How do you know the DNA cell is more complex than a space shuttle? What's the argument for that?
That's actually a good question (well, of course it is, but I mean an especially good one), because it hits on a number of related yet vital points.
The first is "How do you know that...DNA...is more complex than...[insert system]?"
The second, naturally, is how we determine the degree to which anything is complex (i.e., how do we define complexity?).
The third is: given that a fair number of complexity metrics exist, which, if any, are the most suitable here?
The problem, however, is that complexity has been wedded to Shannon's information theory and his adaptation of entropy from physics to information. Algorithmic and similar computational complexity metrics are more sophisticated, but because they are built around constructs like algorithms and computers, they are ideal for systems in which the number of elements, generally idealized as bits for a computer itself, determines how complex the system is via the possible configurations of those elements. More simply, all elements of the system are treated equally.
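To make that "all elements are treated equally" point concrete, here's a minimal sketch (my own toy example, not anyone's published metric) of Shannon entropy applied to a DNA string. The sequence `gene` is just an arbitrary illustrative string, not a real gene:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol. Note that every element is
    treated identically: only symbol frequencies matter, not the order
    of symbols or what any of them actually do in a living system."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# An arbitrary "coding" fragment and a scrambled copy of it score
# identically, because scrambling preserves the symbol frequencies:
gene = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"
scrambled = "".join(sorted(gene))  # biologically meaningless reordering
print(shannon_entropy(gene) == shannon_entropy(scrambled))  # True
```

The metric cannot distinguish a functional sequence from a shuffled one, which is exactly the problem when it's used as a measure of biological complexity.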
This fails completely as any useful measure of complexity for living systems.
For one thing, computing the genetic complexity of humans using such a method yields a value that is much lower than the values for many subsystems of the human body that are themselves governed by DNA. The brain, for example, would be massively more complex than the entire person. As this is nonsense, clearly better approaches are required.
Human genome: 3 billion base pairs. Xbox One Main SoC: 5 billion transistors.
Although estimates vary (going as high as 1 trillion), the human brain has around ~100 billion neurons. Far more importantly, these neurons vary in the number of possible connections to other neurons. Pyramidal neurons can have as many as 100,000 dendritic connections to other neurons, and 10,000 in cortical regions is a pretty low number. What determines, or "codes", the neurobiological processes in the brain? DNA. So we wind up with a relatively low number for the genetic complexity of the entire human system, and a vastly larger number using the same metric for a part of that system.
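A rough back-of-envelope calculation using the figures above makes the mismatch obvious. All of these numbers are loose estimates (the bits-per-base-pair figure assumes the naive 4-nucleotides-equals-2-bits encoding):

```python
# Naive bit-counting: 4 nucleotides -> 2 bits per base pair
base_pairs = 3_000_000_000          # human genome, ~3 billion bp
genome_bits = base_pairs * 2        # 6 billion bits

# Brain connectivity, using the low-end cortical figure above
neurons = 100_000_000_000           # ~100 billion (estimates vary widely)
synapses_per_neuron = 10_000        # a deliberately conservative number
total_synapses = neurons * synapses_per_neuron

print(genome_bits)                   # 6,000,000,000
print(total_synapses)                # 1,000,000,000,000,000
print(total_synapses // genome_bits) # connections outnumber genome bits
```

On this crude count, the brain's connections outnumber the genome's bits by five orders of magnitude, even though the genome "codes" the brain. That is the nonsense result.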
By the way, talking about the space shuttle, NASA has used Genetic Algorithms to design parts.
"Design" parts? Gene expression programming, genetic algorithms, evolutionary algorithms, fitness functions, etc., are all fantastic for a wide-range of problems ranging from optimization to NLP.
However, they are idealized models that are most useful when they are least similar to evolutionary processes. We determine parameters such as selection methods, reproduction operators, and indeed the fitness function (with its parameters) itself!
That is why the use of such algorithms in applications that have nothing to do with evolution far exceeds their use as models within biology. In fact, fitness functions within computational intelligence/soft computing/AI/etc. are defined differently than they are in mainstream biology.
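To illustrate just how much of a genetic algorithm is hand-designed, here's a minimal toy GA (my own sketch, not NASA's method or any particular library) that evolves a bitstring toward an arbitrary target. Every "evolutionary" ingredient is a choice the programmer made:

```python
import random

random.seed(42)
TARGET = 25  # toy goal: a 40-bit string whose bits sum to 25

def fitness(genome):                        # WE define the fitness function
    return -abs(sum(genome) - TARGET)

def tournament_select(pop, k=3):            # WE pick the selection method
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                        # WE choose single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):              # WE set the mutation operator/rate
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(50)]
for _ in range(100):
    pop = [mutate(crossover(tournament_select(pop), tournament_select(pop)))
           for _ in range(len(pop))]

best = max(pop, key=fitness)
print(sum(best))  # typically at or very near 25
```

The selection method, the operators, their rates, the encoding, the population size, and above all the fitness function are all specified in advance by a designer, which is precisely what does not happen in biological evolution.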
I'm not sure how it can be said that evolutionary theory can't work when the same principle can be used successfully to design parts for spaceships?
The problem with your analogy is that it suggests someone designed the algorithms. It's akin to saying that the ways in which living systems would respond to their environments were coded in advance into the environments and the living systems themselves, which is the vast, "fine-tuned", "specified complexity" nonsense of ID. I don't think that's your point (or your belief), but I could be wrong. Either way, I would argue that because evolutionary processes are not "designed", there are very severe limits we must place on the ways in which we use computational models and adaptive algorithms based upon living systems (whether artificial neural networks or genetic algorithms), and on the systems themselves.