The reason the so-called hard problem of consciousness is a useful example is that, if you credit the philosophical argument, it really isn't a question of "we don't have an answer yet". The argument is not that consciousness is merely something science hasn't explained yet, but that it is something which fundamentally could not have an explanation in the way science usually proceeds. For example, here is what Chalmers argued in one of his early papers:
"Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Here "function" is not used in the narrow teleological sense of something that a system is designed to do, but in the broader sense of any causal role in the production of behavior that a system might perform.)
Throughout the higher-level sciences, reductive explanation works in just this way. To explain the gene, for instance, we needed to specify the mechanism that stores and transmits hereditary information from one generation to the next. It turns out that DNA performs this function; once we explain how the function is performed, we have explained the gene. To explain life, we ultimately need to explain how a system can reproduce, adapt to its environment, metabolize, and so on. All of these are questions about the performance of functions, and so are well-suited to reductive explanation. The same holds for most problems in cognitive science. To explain learning, we need to explain the way in which a system's behavioral capacities are modified in light of environmental information, and the way in which new information can be brought to bear in adapting a system's actions to its environment. If we show how a neural or computational mechanism does the job, we have explained learning. We can say the same for other cognitive phenomena, such as perception, memory, and language. Sometimes the relevant functions need to be characterized quite subtly, but it is clear that insofar as cognitive science explains these phenomena at all, it does so by explaining the performance of functions.
When it comes to conscious experience, this sort of explanation fails. What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience - perceptual discrimination, categorization, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open.
There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says "I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene", then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says "I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced", they are not making a conceptual mistake. This is a nontrivial further question.
This further question is the key question in the problem of consciousness. Why doesn't all this information-processing go on "in the dark", free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery. There is an explanatory gap (a term due to Levine 1983) between the functions and experience, and we need an explanatory bridge to cross it."
- Chalmers, "Facing Up to the Problem of Consciousness"
I can't speak to how popular the various theories in philosophy of mind or cognitive science are (the last time I discussed this subject with someone, they told me that reductive physicalism is not as popular as it used to be, but I don't know), but I can say that those who consider it not to be an issue are generally eliminative about consciousness as a qualitative phenomenon. That is, they simply dismiss the need for an explanation of experience in the sense Chalmers describes.
What is interesting about this argument is that it is intuitive rather than logical or empirical. Those who believe that consciousness raises a problem for physicalism, or for normal scientific methods, take the subjective, qualitative element of experience as a given, as the most immediate fact of the matter we could possibly be aware of, and take it seriously. It could not have a more basic explanation because it is, for us, itself something ultimate. Eliminative or reductive materialism dismisses this not by explaining the phenomenon but by simply declaring that our intuition about the immediacy of experience is not sufficient justification to take it seriously.