
What If Consciousness Comes First?

Milton Platt

Well-Known Member
This is, to me, an interesting Psychology Today piece. It discusses how pure consciousness may be a mandatory part of existence.

Despite the success of neuroscience in establishing a wide range of correlations between brain processes and conscious experience, there is at least one question about the relationship between the brain and consciousness that continues to appear unanswerable, even in principle. This is the question of why we have conscious experience at all.


The problem is that there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience. That is, there could be brains that react as though sad but that don’t feel sadness, brains that can discriminate between wavelengths of light but that don’t see red or yellow or blue or any other color, brains that direct their bodies to eat certain foods but that don’t taste them. So why is there nevertheless something that it’s like to be us?
...
The issue is that physical properties are by their nature relational, dispositional properties. That is, they describe the way that something is related to other things and/or has the disposition to affect or be affected by those other things. Most notably, physical properties describe the way that something affects an outside observer of that thing. But there is something going on in conscious experience that goes beyond how that conscious experience affects people looking at it from the outside. For this reason, the “what it’s like” to be a conscious mind can’t be described in the purely relational, dispositional terms accessible to science. There’s just no way to get there from here.

This explanatory gap is what is now commonly referred to as the “hard problem” of consciousness...

If the universe is to actually exist, its properties can’t be exclusively relational/dispositional. Something in the universe has to have some kind of quality in and of itself to give all the other relational/dispositional properties any meaning. Something has to get the ball rolling.

That something (at least in our universe) is consciousness.

No brain or nervous system, no consciousness.
 

atanu

Member
Premium Member
Well, one of the points of a p-zombie is that it would have *exactly* the same physical reactions as a conscious person. So, whatever happens 'internally', the p-zombie will report having qualia, it will report every 'experience' in exactly the same way as a conscious person.

So, to me that *is* evidence that the p-zombie is conscious. That is, in fact, sufficient evidence for such, as you already admitted exactly the same physical evidence would show a conscious person to be conscious.

In other words, the p-zombie actually is conscious and actually does have qualia and those qualia are exactly the same as those of the conscious person. And, of course, that means the whole concept of a p-zombie is incoherent.

And how, precisely, do you know your inner experiences *cannot* be functionalized? If we can correlate your brain states with your internal states, how is that *not* functionalizing them?

Hello sir. I have not brought up the zombie argument. If you insist that you indeed are a zombie, then I think we have nothing more to discuss.
...

For the record, I will recapitulate the key point I raised. We are conscious. But eliminativists assert that there is no unitive conscious self; they claim it is an illusion created by neuronal activity and the like. The first question that has been asked of Dennett is: "Illusion to WHOM?" To be under an illusion, you require a conscious self.

OTOH, apart from the Dennett and Churchland duo, very few scientists or philosophers assert that the hard problem of consciousness is solved. No scientist or philosopher other than Dennett and the Churchlands claims that consciousness itself is an illusion. Dennett discards the very consciousness with which he observes brain states, and so reaches absurd conclusions.

In an earlier post I summarized the issue; let me repeat that here:

A quale does not differ from the sensation itself, but it does differ from the observable behaviours. For example, suppose xyz is in pain and I see him wincing and groaning. I still have no idea of his subjective sensation of pain. Two aspects of consciousness elude functionalisation: the self (consciousness itself) with its subjective experiences, and mental causation. Let me illustrate this with an example.

To use correlation data of ‘partly visible and partly subjective first-party experience’ versus ‘measurable brain state’ (say, the correlation of ‘pain’ with stimulation of the ‘x-centre’ in the brain), we have to functionalise ‘pain’. An example is given below:
  • Observation: xyz has his x-centre in brain stimulated at t.
  • Established Correlation data: x-centre stimulation (in humans) is caused by tissue damage and it in turn causes winces and groans.
  • Functional definition of pain: To be in pain, by definition, is to be in a state which is caused by tissue damage and which in turn causes winces and groans.
  • Prediction: Therefore, xyz must be wincing and groaning due to pain at t.
The third line, the functional definition of pain, does not represent empirical/factual information about pain; it gives us the meaning of “pain”. In this way we can predict xyz's pain from physical/behavioural information alone, and it also answers why sensations accompany the brain’s workings. Here I assume that we are able to functionalise the behavioural aspects in a foolproof manner, incorporating all aspects that matter. Yet sensations, or qualia, resist functional reduction, and there is still no glimmer of an explanation of them in the above. Groans and winces are observable effects, but the inner sense of pain and its intensity are not functionally definable.
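The inference scheme above can be sketched in code. This is a purely illustrative toy (all the names are hypothetical, not anyone's actual proposal); its point is that a functional/causal definition of pain mentions only causes and effects, never the felt quality:

```python
# Toy sketch of the functional definition of pain described above:
# "pain" is defined entirely by its causal role (caused by tissue damage;
# causes wincing and groaning), with no reference to any inner sensation.

from dataclasses import dataclass

@dataclass
class BrainState:
    caused_by_tissue_damage: bool
    causes_wincing_and_groaning: bool

def is_pain(state: BrainState) -> bool:
    """Functional definition: pain = the state with this causal profile."""
    return state.caused_by_tissue_damage and state.causes_wincing_and_groaning

# Observation: xyz's x-centre is stimulated by tissue damage at t, and that
# stimulation produces winces and groans.
xyz_state = BrainState(caused_by_tissue_damage=True,
                       causes_wincing_and_groaning=True)

# Prediction: by the functional definition, xyz is in pain at t.
print(is_pain(xyz_state))  # True

# Note what is missing: nothing in BrainState represents the felt quality
# (the quale) of pain -- exactly the residue the post says resists
# functional reduction.
```

The sketch makes the post's point concrete: the prediction goes through using physical/behavioural facts alone, yet no field of `BrainState` stands for what the pain feels like.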

We know that in the brain, ions move across membranes and cause electrical activity, which can be measured. That in turn causes the neuron to activate its metabolic machinery and release various neurotransmitters, which move across the synaptic cleft and activate other neurons. So where in all of this does the thought occur? Where is our thought? Where is our experience of the world? When we say we see something, feel something, think something, where in all of that is it really happening? And if you give a person a drug, or if a person meditates, how do you ultimately link that back to what’s going on in the brain itself, and how reductionistic can we ultimately be?
 

atanu

Member
Premium Member
The vanishing of "I am" from a human body at death tells us that consciousness was endowed to the human temporarily by the Absolute Conscious Being. Right, please?
Regards

God cannot be proven from the empirical observation of the absence of consciousness in a dead body. But at a minimum, the absence of consciousness in a dead body's brain indicates that consciousness is not an intrinsic property of the body-brain.

To that the materialist replies that consciousness is born of material processes in the brain, as though we knew what processes cause life and consciousness.
 

atanu

Member
Premium Member
So we are now just quibbling over the definition of consciousness. I can accept yours if you wish.

The OP talks about ‘conscious experience’. Do you think that learning, avoidance, pain, etc. do not qualify to be categorised under consciousness?

One may say that a thermostat (or any process-control machine, including a computer or AI) is also conscious. But that is not true, since these machines simply reflect the machine designer’s consciousness.
 

Polymath257

Think & Care
Staff member
Premium Member
Hello sir. I have not brought up the zombie argument. If you insist that you indeed are a zombie, then I think we have nothing more to discuss.
...

I am not insisting that I am a zombie. I am wondering whether or not I am. How can I tell? How can you?

For the record, I will recapitulate the key point I raised. We are conscious. But eliminativists assert that there is no unitive conscious self; they claim it is an illusion created by neuronal activity and the like. The first question that has been asked of Dennett is: "Illusion to WHOM?" To be under an illusion, you require a conscious self.

Consciousness is a neural information process in the brain. Because it isn't 'aware' of discontinuities, it interprets itself as continuous. The same thing happens in other brain processes, such as with the blind spot (our field of vision seems continuous, but there is actually a rather large blind area we are usually just not aware of).

OTOH, apart from the Dennett and Churchland duo, very few scientists or philosophers assert that the hard problem of consciousness is solved. No scientist or philosopher other than Dennett and the Churchlands claims that consciousness itself is an illusion. Dennett discards the very consciousness with which he observes brain states, and so reaches absurd conclusions.

Once again, I have yet to see a good description of what the hard problem of consciousness *is*. As far as I can see, it hasn't been solved because it simply doesn't exist at all.

You seem to confuse being alive with being conscious. Then you confuse the ability to learn with an aspect of consciousness. I don't see *either* as necessarily implying an 'internal mental state', because life alone doesn't imply the existence of a mind. Simple reactivity is not the same as consciousness.

So, at least part of the problem here seems to be definitional. What, precisely, do you mean when you say something is conscious? You have previously claimed that robots cannot ever be conscious. Why not? You have claimed that insects, for example, are conscious. Why so? What do you see as the relevant difference between the two (robots and insects)?

I see consciousness as a type of information processing that includes knowledge of 'self'. But, for example, I do not consider the immune system to be a type of consciousness even though it deals with 'self' versus 'non-self'.
 

Polymath257

Think & Care
Staff member
Premium Member
The OP talks about ‘conscious experience’. Do you think that learning, avoidance, pain, etc. do not qualify to be categorised under consciousness?

No, I do not think that mere avoidance of pain implies that there is a conscious mental state.

The same for learning.

One may say that a thermostat (or any process-control machine, including a computer or AI) is also conscious. But that is not true, since these machines simply reflect the machine designer’s consciousness.

Hmmm... I'm not sure why reflecting consciousness implies they are not conscious. They seem to have reactivity, change of behavior based on environment, and so on.

Please give more detail why a computer program that learns is not conscious.
 

Milton Platt

Well-Known Member
The OP talks about ‘conscious experience’. Do you think that learning, avoidance, pain, etc. do not qualify to be categorised under consciousness?

One may say that a thermostat (or any process-control machine, including a computer or AI) is also conscious. But that is not true, since these machines simply reflect the machine designer’s consciousness.

There are many ways to define consciousness. I don't know that just being able to respond to external stimuli is consciousness under the normative usage of the word. I tend to think of consciousness as the ability to be self aware and have an awareness of surroundings, not just react through a chemical process. But as I indicated, for purposes of this thread, I can entertain other definitions.

I do not consider machines conscious in the normative use of the word.
 

Polymath257

Think & Care
Staff member
Premium Member
There are many ways to define consciousness. I don't know that just being able to respond to external stimuli is consciousness under the normative usage of the word. I tend to think of consciousness as the ability to be self aware and have an awareness of surroundings, not just react through a chemical process. But as I indicated, for purposes of this thread, I can entertain other definitions.

I do not consider machines conscious in the normative use of the word.


Is that limited to current machines? Or is it a blanket statement about all possible machines?
 

Polymath257

Think & Care
Staff member
Premium Member
Current machines. I broke my crystal ball and am no longer able to predict the future :D


I have to admit that I consider some of the more advanced robots to be borderline cases. Their behavior is complex enough and responds accurately to a variety of different environments in ways that include learning.

We seem to be at the level of 'simple insect' now in terms of machine cognitive abilities.
 

Milton Platt

Well-Known Member
I have to admit that I consider some of the more advanced robots to be borderline cases. Their behavior is complex enough and responds accurately to a variety of different environments in ways that include learning.

We seem to be at the level of 'simple insect' now in terms of machine cognitive abilities.
I find current AI remarkable, but still quite far from what I would consider consciousness. If robots approach human-like levels of self-awareness, many moral and ethical conundrums will arise.

For your perusal:

A single-celled organism capable of learning
 

Polymath257

Think & Care
Staff member
Premium Member
I find current AI remarkable, but still quite far from what I would consider consciousness. If robots approach human-like levels of self-awareness, many moral and ethical conundrums will arise.

Oh, I agree. And I think we are quite far from that at this point. But I have the impression we are at the 'programmed insect' level, where there is a lot of flexibility of behavior alongside a lot of programmed functions. So those who consider insects to be conscious may have to consider some robots to be so for consistency's sake.


Yes, I had seen that one. Slime molds are strange 'creatures' anyway. They can be single celled at some stages of their life cycle and multicellular during others. As such, they are a fascinating case study of the development of intercellular communication.

I would not call them conscious, however.
 

Milton Platt

Well-Known Member
Oh, I agree. And I think we are quite far from that at this point. But I have the impression we are at the 'programmed insect' level, where there is a lot of flexibility of behavior alongside a lot of programmed functions. So those who consider insects to be conscious may have to consider some robots to be so for consistency's sake.



Yes, I had seen that one. Slime molds are strange 'creatures' anyway. They can be single celled at some stages of their life cycle and multicellular during others. As such, they are a fascinating case study of the development of intercellular communication.

I would not call them conscious, however.

I thought the slime mold story was a good pointer to the fact that the word consciousness is ill-defined and a moving target. Defining consciousness is a bit like trying to nail jello to the wall.
 

Polymath257

Think & Care
Staff member
Premium Member
Here is a recent article. We now have *maps* of how different words affect the brain. This means we can know (some of) what a person is reading by looking at their brain scans.

A map of the brain can tell what you're reading about: Through brain imaging, scientists open another door to our inner thoughts and narratives

Here is a similar situation where we can tell what a person is seeing from a brain scan:

'Mind-reading' brain-decoding tech

And here is one where we can identify the emotion someone is experiencing from a brain scan:

Scientists identify emotions based on brain activity

And here is one (old) that recreates what someone sees (their internal images):

Scientists use brain imaging to reveal the movies in our mind

And these are just the tip of the iceberg of how much we can map internal states to brain states. We can, in some situations, literally read someone's mind.
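The studies linked above generally share one basic idea: learn a mapping from brain-activity patterns to stimulus labels from labelled scans, then apply it to new scans. Here is a deliberately tiny sketch of that idea using a nearest-centroid classifier; the "activity vectors" and emotion labels are invented, and real studies use fMRI voxel patterns with far more sophisticated models:

```python
# Toy "brain decoding": learn the average activity pattern (centroid) for
# each stimulus class from labelled scans, then classify a new scan by
# whichever centroid it is closest to. All data here is invented.

import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labelled_scans):
    """labelled_scans: {label: [activity vectors]} -> {label: centroid}."""
    return {label: centroid(vs) for label, vs in labelled_scans.items()}

def decode(model, scan):
    """Return the label whose centroid is nearest to the new scan."""
    return min(model, key=lambda label: distance(model[label], scan))

# Invented training data: 3-dimensional "activity patterns" per emotion.
training = {
    "fear": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "joy":  [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
model = train(training)

# A new scan resembling the "joy" pattern is decoded as "joy".
print(decode(model, [0.15, 0.85, 0.8]))  # joy
```

Whatever one makes of the hard problem, this is the kind of correlational mapping the linked studies demonstrate: it predicts the reported state from brain data, which is exactly the point being debated in this thread.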

Isn't this a demonstration that the hard problem isn't really a problem at all?
 

Ben Dhyan

Veteran Member
I have to admit that I consider some of the more advanced robots to be borderline cases. Their behavior is complex enough and responds accurately to a variety of different environments in ways that include learning.
What if AI eventually learns that the universe is conscious, and becomes a devoted Christian? :)
 

Truly Enlightened

Well-Known Member
Everyone knows that our consciousness is the conceptual manifestation of an emergent property (something greater than the sum of its parts) that is based on our direct interaction with our environment. Think about it: if our senses were 100% more acute, don't you think this would affect our conscious perception of reality as well? We are talking about an emergent property of millions of years of evolution and natural trial and error. We are also talking about the fine-tuning of mutation as the mechanism for change, and its direct connection with the environment, our ancestors, and other species. Remember, we started out as a cell, and after billions of years at a 99.9% failure rate, we are here. We can't shortcut this natural system of evolution.

I certainly believe we could create a computer algorithm in an artificial lifeform that can simulate feeling pain, joy, fear, and other human emotions. But I think it would be impossible to create any algorithm that would allow an artificial life to actually feel these things. I think it is impossible for artificial life to be consciously aware of being conscious. What would be its internal dialogue, or level of introspection? How would it know whether its programs are continually running, or just taking a long time? No matter how many neural connections we establish, an AI's basic units are not biological units. Consciousness itself is not physical. It is an emergent, zero-dimensional property of a functioning human organism that is directly dependent on two things: arousal and awareness. Both are critical.

Arousal is regulated by the brain stem, which also regulates our sleep/wake cycles and our respiration and heart rates. Awareness is thought to reside somewhere in the cerebral cortex (the matter on the outer layer of the brain). Studies conducted have confirmed these connections.

Researchers analyzed 36 patients with brainstem lesions, of which 12 led to coma and 24 did not. Mapping their injuries revealed that a small "coma-specific" area of the brainstem (the rostral dorsolateral pontine tegmentum) was specifically associated with coma: 10 of the 12 coma-inducing brainstem lesions involved this area, and just one of the 24 control lesions did. Using the wiring diagram of a healthy human brain, researchers could then find which other parts of the brain were connected to these coma-causing lesions. Two areas in the cortex turned out to be connected to the coma-specific region of the brainstem: one in the ventral anterior insula (AI), the other in the pregenual anterior cingulate cortex (pACC). The normal functioning of this brainstem-cortex network is what we call consciousness, or the "consciousness network". In patients with disorders of consciousness and coma, this network was functioning through a different subset of pathways. Using a special type of MRI scan, scientists have confirmed this newly identified consciousness network, and we can now clearly see pathway disruptions in patients with impaired consciousness. This opens the door to understanding visual and auditory hallucinations, impaired speech, and movement disorders. Maybe we can even start stimulating patients in a vegetative state? But research takes time, and we don't want to make claims that we can't explain.

If consciousness could exist before the physical brain develops, what would be the mechanism for its experience (qualia)? How could consciousness itself be aware of being conscious? At what point does consciousness become subjective: six weeks old, six months old? And how does it make the transition to becoming "I"? I may not be certain that an objective reality or a universal consciousness exists, but I do know that our human consciousness is subjectively real, because it can't be tested without the subject.
 

Bear Wild

Well-Known Member

So, at least part of the problem here seems to be definitional. What, precisely, do you mean when you say something is conscious? You have previously claimed that robots cannot ever be conscious. Why not? You have claimed that insects, for example, are conscious. Why so? What do you see as the relevant difference between the two (robots and insects)?

I see consciousness as a type of information processing that includes knowledge of 'self'. But, for example, I do not consider the immune system to be a type of consciousness even though it deals with 'self' versus 'non-self'.

The difference between robots and insects is the way they have developed. The insect nervous system, where we would like to place consciousness, developed through the insect's interaction with its environment. The evolutionary selection process creates an intimate relationship between the insect and its world, as it does for all living things. This continuous sensory interaction creates a very different relationship than that of a robot, which is created without the selection pressures of its environment. We can attach all kinds of sensors, but we do not know how to integrate them to the same degree that the natural selection process would. How can we make them intimately connected to an environment? Our direct and intimate sensory/affective connection to our environment creates the neurologic patterns that create our consciousness. If humans (and many other animals) are kept in isolation with sensory deprivation, they become dysfunctional, and solitary confinement with severe sensory deprivation shows how severe this is. I suspect a robot or artificial intelligence would not react the same way. Thus there is a difference that I am not sure we can program sufficiently to reproduce, at least not at this time.
 