
Can people comprehend assertions without believing them?

Whose view is closer to the truth?

• Descartes: 3 votes (27.3%)
• Spinoza: 4 votes (36.4%)
• Something else: 4 votes (36.4%)
• Total voters: 11

Milton Platt

Well-Known Member
"Is there a difference between believing and merely understanding an idea? Descartes thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea... if one wishes to know the truth, then one should not believe an assertion until one finds evidence to justify doing so... One may entertain any hypothesis, but one may only believe those hypotheses that are supported by the facts.

According to Spinoza, the act of understanding is the act of believing. As such, people are incapable of withholding their acceptance of that which they understand. They may indeed change their minds after accepting the assertions they comprehend, but they cannot stop their minds from being changed by contact with those assertions. [He believed] that (a) the acceptance of an idea is part of the automatic comprehension of that idea and (b) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance."

(From: You Can't Not Believe Everything You Read - Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone & How Mental Systems Believe - Daniel T. Gilbert)


Whose view do you agree with? Do we withhold judgement until we choose to accept or reject an idea, or do we accept an idea until we choose to reject it? Are we affected by everything we read and hear, since it is impossible to have no belief about any concept that we can understand?

If Spinoza is correct, do you believe that this has significant consequences for our beliefs (especially as we are living in the 'information age')?

What do you think?

[I believe Spinoza has it more correct, but won't go into details yet]
I'm saying you might only believe it momentarily before you 'correct' yourself. On the other hand, you might believe it long-term if you don't.

“[Dan Gilbert] proposed that you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it. The initial attempt to believe is an automatic operation of System 1, which involves the construction of the best possible interpretation of the situation. Even a nonsensical statement, Gilbert argues, will evoke initial belief.

Try his example: “whitefish eat candy.” You probably were aware of vague impressions of fish and candy as an automatic process of associative memory searched for links between the two ideas that would make sense of the nonsense.

Gilbert sees unbelieving as an operation of System 2, and he reported an elegant experiment to make his point. The participants saw nonsensical assertions, such as “a dinca is a flame,” followed after a few seconds by a single word, “true” or “false.” They were later tested for their memory of which sentences had been labeled “true.” In one condition of the experiment subjects were required to hold digits in memory during the task. The disruption of System 2 had a selective effect: it made it difficult for people to “unbelieve” false sentences. In a later test of memory, the depleted participants ended up thinking that many of the false sentences were true.

The moral is significant: when System 2 is otherwise engaged, we will believe almost anything. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy. Indeed, there is evidence that people are more likely to be influenced by empty persuasive messages, such as commercials, when they are tired and depleted.”

Kahneman, Daniel. “Thinking, Fast and Slow.”
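To make the logic of that experiment concrete, here is a toy simulation of the Spinozan model it is taken to support. This is only a sketch: the probabilities and names are invented for illustration, not Gilbert's actual design or data.

```python
import random

def simulate(trials=10_000, p_step2=0.95, p_step2_under_load=0.40, seed=1):
    """Toy Spinozan model (illustrative numbers only). Step 1 tags every
    comprehended assertion as true; an assertion labelled 'false' then
    needs an effortful step 2 to be relabelled, and cognitive load can
    interrupt that step. Assertions labelled 'true' never need step 2,
    so the model predicts an asymmetry: load inflates false->true
    recall errors but not the reverse."""
    rng = random.Random(seed)
    for condition, p in (("no load", p_step2), ("under load", p_step2_under_load)):
        # An error occurs exactly when step 2 fails to run for a false item:
        # the belief then stays at its step-1 value of 'true'.
        false_errors = sum(not (rng.random() < p) for _ in range(trials))
        true_errors = 0  # 'true' items are never relabelled, so no errors
        print(f"{condition}: false recalled as true {false_errors / trials:.0%}, "
              f"true recalled as false {true_errors / trials:.0%}")

simulate()
```

The asymmetry falls out of the model's structure: only false assertions need the effortful second step, so only they suffer when that step is disrupted.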




You probably overstate the degree to which you do this (not a criticism, I do too).

You correct yourself sometimes, when you have the knowledge; often we lack the knowledge to correct ourselves, and we aren't doing so even if we think we are.

Take breaking news on a terrorist attack, for example: the number of attackers, reports of explosions, and all sorts of minor details are frequently wrong, but I can guarantee that you will remember some of them as being true.

Also, if you read an article about a topic or place you know very well, you will often notice many errors in the reporting. Two minutes later you will read another article about something you have no idea about, but you don't read it as if it is full of errors like the last one. You also have nothing to replace the false information with: you don't go from belief to absence of belief, you go from belief to replacement belief.


So if you understand that something is a lie, does that mean you believe the lie???
 

Jumi

Well-Known Member
I'm saying you might only believe it momentarily before you 'correct' yourself. On the other hand, you might believe it long-term if you don't.
Or we might not. Observing our mental processes at work is not simple, I'll grant you that, but how have they observed this "default is belief"? I'm not going to accept on faith something that doesn't seem to match reality.

“[Dan Gilbert] proposed that you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it. The initial attempt to believe is an automatic operation of System 1, which involves the construction of the best possible interpretation of the situation. Even a nonsensical statement, Gilbert argues, will evoke initial belief.
How was this observed? What was the sample size, how was the sampling done, and how many participants didn't fit Gilbert's proposition? How was the experiment performed, and how many replications have been done by different research groups?

Try his example: “whitefish eat candy.” You probably were aware of vague impressions of fish and candy as an automatic process of associative memory searched for links between the two ideas that would make sense of the nonsense.
So having the basics of the words' definitions down equals belief? I think this is, to be blunt, ridiculous. It seems that this belief is quite different from boolean (true/false) logic. It seems what he means by belief is something quite different from regular belief. How does he define it?

You probably overstate the degree to which you do this (not a criticism, I do too).
Probably not.

You correct yourself sometimes, when you have the knowledge; often we lack the knowledge to correct ourselves, and we aren't doing so even if we think we are.
It seems rather automatic to me to disbelieve known untruths when we have knowledge; belief in 1+1=3 doesn't even enter when I write it or see it written by anyone. There is no correcting process, just the understanding of what the symbols mean.

Unless the proposed process is completely unconscious, we can say that it's not a <1s belief but a misdescription of a mechanism. If it took ~1s, we could observe it in meditation; as it is, it must be significantly faster to exist at all.

Take breaking news on a terrorist attack, for example: the number of attackers, reports of explosions, and all sorts of minor details are frequently wrong, but I can guarantee that you will remember some of them as being true.
Only if I relied on breaking news. I may be different from most, in that on such events I go to several different sources.

Also, if you read an article about a topic or place you know very well, you will often notice many errors in the reporting. Two minutes later you will read another article about something you have no idea about, but you don't read it as if it is full of errors like the last one. You also have nothing to replace the false information with: you don't go from belief to absence of belief, you go from belief to replacement belief.
Even if we don't know the topic or place, we can readily withhold judgment. It's actually easier to learn complex sets of information when our default is questioning.

When I was taking a course that included cosmology, I questioned everything presented as fact, to the point that it took me hours to get through a few pages. So no, I'm not "exaggerating the degree" to which I do this. Google didn't exist in those days, so I wrote question marks that I could only get partially verified by going to the top science library. Many of those question marks remain unanswered, hence I don't subscribe to any cosmological theories at the moment.
 
Or we might not. Observing our mental processes at work is not simple, I'll grant you that, but how have they observed this "default is belief"? I'm not going to accept on faith something that doesn't seem to match reality.

I actually think it does match reality.

This is the abstract of one of the papers showing that unacceptance is cognitively effortful, whereas acceptance seems not to be:

Spinoza suggested that all information is accepted during comprehension and that false information is then unaccepted. Subjects were presented with true and false linguistic propositions and, on some trials, their processing of that information was interrupted. As Spinoza's model predicted, interruption increased the likelihood that subjects would consider false propositions true but not vice versa (Study 1). This was so even when the proposition was iconic and when its veracity was revealed before its comprehension (Study 2). In fact, merely comprehending a false proposition increased the likelihood that subjects would later consider it true (Study 3). The results suggest that both true and false information are initially represented as true and that people are not easily able to alter this method of representation.

How was this observed? What was the sample size, how was the sampling done, and how many participants didn't fit Gilbert's proposition? How was the experiment performed, and how many replications have been done by different research groups?

There are several papers, each with several trials. The methodology is not easy to sum up.

So having the basics of the words' definitions down equals belief? I think this is, to be blunt, ridiculous. It seems that this belief is quite different from boolean (true/false) logic. It seems what he means by belief is something quite different from regular belief. How does he define it?

Belief is holding something to be true; either the brain does or it doesn't.


Daniel Kahneman is widely considered to be the most important psychologist of the past 50 years for his work on heuristics and biases. It is unlikely that he would include a 'ridiculous' idea in a book summarising his life's work. That doesn't prove it is true, but it does seem to rule out its being 'ridiculous'.

We aren't rational machines; what suited our ancestors is not what would suit someone trying to create a purely rational person.

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

The labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters.
When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations. The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away.

Excerpt From: Kahneman, Daniel. “Thinking, Fast and Slow.”

Comprehension is system 1; evaluation is system 2.

The experiment by Gilbert showed you can make people more likely to believe by placing them under cognitive load, leaving them less able to 'activate' system 2.

Probably not.

So you believe that if you watched 100 hours of tv news, you wouldn't pick up any incorrect beliefs as a result of this?

Only if I relied on breaking news. I may be different from most, in that on such events I go to several different sources.

You are describing something different here. This isn't about whether it is possible to be highly sceptical, but about whether it is possible not to be affected by exposure to incorrect information, even if you later make a great effort to correct it.

I don't watch breaking news neutrally; I watch it with the expectation that most information is likely untrue. I frequently find myself remembering details that were not true, even long after the event.

I'm sceptical of our ability to always be sceptical, as it requires constant effort which is not always possible.


Even if we don't know the topic or place, we can readily withhold judgment. It's actually easier to learn complex sets of information when our default is questioning.

You are describing system 2, the conscious overriding of system 1. You can withhold judgement, but it is the second step, not the first.
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
Numerous modern scientific studies (including the ones in the OP) have provided evidence in favour of this position. If you can access a journal database or two, they aren't too difficult to find.

Alternatively, Daniel Gilbert's studies are discussed in 'Thinking, Fast and Slow' by Daniel Kahneman, which you might have read.
So when I say to you something like "what you suggest in the OP is nonsense", you initially believe it? :D
 
So when I say to you something like "what you suggest in the OP is nonsense", you initially believe it? :D

It might well be that I can do nothing other than that :D

Given that the idea is supported by two of the most influential psychologists of recent times, it probably wouldn't be too difficult to reject the claim that it is 'nonsense', though.
 

Acim

Revelation all the time
"Is there a difference between believing and merely understanding an idea? Descartes thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea... if one wishes to know the truth, then one should not believe an assertion until one finds evidence to justify doing so... One may entertain any hypothesis, but one may only believe those hypotheses that are supported by the facts.

According to Spinoza, the act of understanding is the act of believing. As such, people are incapable of withholding their acceptance of that which they understand. They may indeed change their minds after accepting the assertions they comprehend, but they cannot stop their minds from being changed by contact with those assertions. [He believed] that (a) the acceptance of an idea is part of the automatic comprehension of that idea and (b) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance."

(From: You Can't Not Believe Everything You Read - Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone & How Mental Systems Believe - Daniel T. Gilbert)


Whose view do you agree with? Do we withhold judgement until we choose to accept or reject an idea, or do we accept an idea until we choose to reject it? Are we affected by everything we read and hear, since it is impossible to have no belief about any concept that we can understand?

If Spinoza is correct, do you believe that this has significant consequences for our beliefs (especially as we are living in the 'information age')?

What do you think?

[I believe Spinoza has it more correct, but won't go into details yet]

I think beliefs for the person hearing the message are chosen, even while understanding is to some degree automatic. So, I agree with Descartes.

In trying to understand a message I am hearing or reading, I act as if it is true for the person conveying it, but am skeptical (almost constantly) about whether it actually applies to me. Depending on context, I may be actively contentious about the message. Thus I am able to understand it while denying any belief in it myself.

I initially thought I agreed with Spinoza because I do agree that people are incapable of withholding acceptance of that which they understand. But I see that as acceptance for the person conveying the message. If I think of how art works, I see it working that way. Like a song that is telling a story. I'm not accepting that as true for myself. I'm accepting it as true for the lyricists (as a tale) and understanding the message. Depending on the message, I may find things that immediately relate to me/my life, but am waiting to see where it is going, what it is concluding with, and not accepting the belief(s) as something that is true for me. Movie narratives would be another example. Perhaps a better example. I accept that for the story to work it needs to be set in a galaxy far, far away, but the moment I heard (or read) that, and each moment after that, if someone were to ask me, "do you believe this actually occurred in a galaxy far far away," I'd probably say no. But I'm willing to go along for the ride, and to accept it as true for the narrative.

The same goes for academic and non-fiction works. There's a premise or axiom stated. I may contest it (to myself) but am willing to read on, because I understand it is true for that person/paper and is what the rest of the 'narrative' will be based on to reach the conclusion(s) it will inevitably make. All of which I may be skeptical of, and end up rejecting. But I understood the ideas/assertions along the way.
 

Willamena

Just me
Premium Member
"Is there a difference between believing and merely understanding an idea?
...
Whose view do you agree with?
Descartes, as depicted above, seems to be talking about beliefs in the form of statements that might be analysed before they are accepted as knowledge. Still, analysis aside, acceptance is the act of believing, and believing happens in the moment. Spinoza, as depicted above, seems to be talking about believing.

Do we withhold judgement until we choose to accept or reject an idea, or do we accept an idea until we choose to reject it?
Until and unless there is sufficient information that a truth value gets assigned, judgement can fairly be said to be withheld. As soon as something (the idea) is accepted as true, it is believed.

No choice is involved; it's just definition.

Are we affected by everything we read/hear as it is impossible to have no belief about any concept that we can understand?
We must grasp what the concept or idea is before we can believe. Until we have something (an idea, concept or proposition) to possibly believe, there is nothing to attach a truth value to.

If we attach the truth value "false" to it, then we don't believe it, though we understand it. If the scenario is that no truth value gets assigned (void), then the whole issue of belief has been avoided.
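That picture maps naturally onto a tri-state value: believed, understood-but-disbelieved, or no truth value assigned at all. A minimal sketch, with naming that is my own illustration rather than anything from the thread or the papers:

```python
from typing import Optional

def describe(assigned: Optional[bool]) -> str:
    """Tri-state belief: None means no truth value has been assigned yet
    (the whole issue of belief is 'avoided'), True means believed, and
    False means understood but not believed."""
    if assigned is None:
        return "judgement withheld: nothing yet carries a truth value"
    return "believed" if assigned else "understood, but not believed"

for verdict in (None, True, False):
    print(repr(verdict), "->", describe(verdict))
```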

If Spinoza is correct, do you believe that this has significant consequences for our beliefs (especially as we are living in the 'information age')?
Don't know.
 

Willamena

Just me
Premium Member
If the default is to believe something unless we overrule it, does exposure to lots of information, much of it false, simply fill our heads with lots of false beliefs?

As such, more information (noise) can actually make us less intelligent.

Think of watching breaking news on a terror attack. 80% will be false information, but most people will end up believing much of what they heard.

If we want to be truly informed, should we not take steps to limit our exposure to poor-quality information (such as the mainstream media)?
Easier just to understand that what we believe as true isn't true because we believe it.
 

Willamena

Just me
Premium Member
I'm saying you might only believe it momentarily before you 'correct' yourself. On the other hand you might believe it long term if you don't.
That's so, if there's no reason to doubt. But that's convention, not integral process.
 

Jumi

Well-Known Member
I actually think it does match reality.
That much is certain.

This is the abstract of one of the papers showing that unacceptance is cognitively effortful, whereas acceptance seems not to be:
I could read the whole paper and see how repeatable its claims were.

There are several papers, each with several trials. Methodology is not easy to sum up.
That's no problem if the papers are available for free.

Belief is holding something to be true; either the brain does or it doesn't.
Agreed. Either we believe something or we don't. I actually thought you had a different view.

Daniel Kahneman is widely considered to be the most important psychologist of the past 50 years for his work on heuristics and biases. It is unlikely that he would include a 'ridiculous' idea in a book summarising his life's work. That doesn't prove it is true, but it does seem to rule out its being 'ridiculous'.
I have to admit to not being a fan of the soft sciences, especially economics. 'Ridiculous' depends on how he interpreted the data and how repeatable it was. A workmate at my previous place of employment was a recognized specialist in his field, and he said he often dismissed conclusions made in his field as ridiculous. Mind you, he understood that these people were top minds in the field. Similarly, I understand Kahneman is a top mind in his field.

We aren't rational machines; what suited our ancestors is not what would suit someone trying to create a purely rational person.
Yet what he or you are proposing is very much a mechanical system where statements always pass as true into our minds.

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
Apparently the proposed system 1 is fast enough that no one can observe it on their own. I wonder how they observed it in the first place.

The experiment by Gilbert showed you can make people more likely to believe by placing them under cognitive load, leaving them less able to 'activate' system 2.
Therefore they are basically studying trance states. As for overloading a system, I can think of alternative explanations other than it revealing a system below the surface. From what I know of trance states, there are alternative states in which an unquestioning mind is a given. We can change them through things such as lucid dreaming.

So you believe that if you watched 100 hours of tv news, you wouldn't pick up any incorrect beliefs as a result of this?
Depends. If there is reason to accept the sources, then picking up incorrect beliefs is a given. Say, a specialist doctor talking about a condition I have might lead to an incorrect belief.

I watch, or more accurately listen to, stuff that is contrary to my beliefs. I would say most of the stuff I watch these days is something I don't believe in. Call it entertainment value.

You are describing something different here. This isn't about whether it is possible to be highly sceptical, but about whether it is possible not to be affected by exposure to incorrect information, even if you later make a great effort to correct it.
And I believe much of it is effortless, unless it happens in a trance state.

I'm sceptical of our ability to always be sceptical, as it requires constant effort which is not always possible.
We had a cult active where I live that took people to saunas for extended periods and otherwise tired them, to make them more programmable. It's an exceptional state where beliefs bypass our natural filter.

There is another cult still operating here that makes seeming miracles happen using technology and bypasses the filters of its followers, who become unquestioning.
 
That's no problem if the papers are available for free.

They can be found online for free, google the titles and you should find them.

Also, you can get almost any paywalled journal article very easily; unfortunately I'm not allowed to tell you how (pesky forum rules)...
 

Jumi

Well-Known Member
What are the titles of the research articles that go into System 1 and System 2?

I found this from a NYtimes book review:

At this point, the skeptical reader might wonder how seriously to take all this talk of System 1 and System 2. Are they actually a pair of little agents in our head, each with its distinctive personality? Not really, says Kahneman. Rather, they are “useful fictions” — useful because they help explain the quirks of the human mind.

If this "useful fiction" is an accurate presentation of Kahneman's view, then I can agree with it. It's a useful model, but what it's not is something that is observable in mechanical and instantaneous formation of belief with every encountered claim.
 

Jumi

Well-Known Member
I did find Judgment under Uncertainty: Heuristics and Biases. I gave it a quick read; it was interesting, so I might read it fully later. It doesn't go into system 1 and 2, though.
 
I have to admit to not being a fan of the soft sciences, especially economics. 'Ridiculous' depends on how he interpreted the data and how repeatable it was. A workmate at my previous place of employment was a recognized specialist in his field, and he said he often dismissed conclusions made in his field as ridiculous. Mind you, he understood that these people were top minds in the field. Similarly, I understand Kahneman is a top mind in his field.

What has economics got to do with it? Neither of them is an economist, although Kahneman did win a "Nobel" Prize in economics because his psychological theories explained aspects of economic decision-making better than conventional economics did.

Whether it is 'soft science' is also debatable. For example:

“These defects in reasoning have been cataloged and investigated by a powerful research tradition represented by a school called the Society of Judgment and Decision Making (the only academic and professional society of which I am a member, and proudly so; its gatherings are the only ones where I do not have tension in my shoulders or anger fits). It is associated with the school of research started by Daniel Kahneman, Amos Tversky, and their friends, such as Robyn Dawes and Paul Slovic. It is mostly composed of empirical psychologists and cognitive scientists whose methodology hews strictly to running very precise, controlled experiments (physics-style) on humans and making catalogs of how people react with minimal theorizing. ”

Excerpt From: Nassim Nicholas Taleb. “The Black Swan.”


Yet what he or you are proposing is very much a mechanical system where statements always pass as true into our minds.

A variety of evidence suggests that people have a tendency to believe what they should not. For example, repeated exposure to assertions for which there is no evidence increases the likelihood that people will believe those assertions... Once such beliefs are formed, people have considerable difficulty undoing them... Moreover, several studies have suggested that under some circumstances people will believe assertions that are explicitly labeled as false... If people are capable of withholding their acceptance of that which they comprehend, then the presentation of explicitly false information would provide a propitious opportunity to do so. Yet, people do not always seem to exercise that option.
(You can't not believe everything you read - Gilbert)


Apparently the proposed system 1 is fast enough that no one can observe it on their own. I wonder how they observed it in the first place.

Experiments that can be repeated and that demonstrate patterns and regularities. When 'system 2' was disabled by putting people under cognitive load (i.e., occupying system 2 with another task), people increasingly misattributed false statements as true. This wasn't simple confusion, because they didn't increasingly misattribute true statements as false.

Gilbert et al. (1990) attempted to circumvent these problems by presenting subjects with assertions whose veracity they could not assess because one word of the assertion was in a foreign language (e.g., "A monishna is a star"). After reading each assertion, subjects were sometimes told that the assertion was true or that it was false. On some trials, subjects were interrupted by a tone-detection task just a few milliseconds after being told of the assertion's veracity. At the end of the experiment, subjects were asked to recall whether each assertion had been labeled as true or as false.

The Spinozan hypothesis predicted that interruption (a) would prevent subjects from unbelieving the assertions that they automatically accepted on comprehension and would thus cause subjects to report that false assertions were true, but (b) would not cause subjects to report that true assertions were false. This asymmetry did, in fact, emerge and is not easily explained by the Cartesian hypothesis.
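Here is a small sketch of how the two hypotheses diverge in their predictions for interrupted trials. Each branch is a deliberate oversimplification of my own (a coin-flip for the undecided Cartesian state, total blocking of step 2 for the interrupted Spinozan state), not the papers' fitted model:

```python
import random

def recalled_as_true(model, rng):
    """Recall after an interrupted trial. Cartesian: comprehension left
    the belief undecided, so the subject guesses (symmetric errors).
    Spinozan: comprehension was acceptance, so the subject is stuck at
    'true' regardless of the label (asymmetric errors)."""
    if model == "cartesian":
        return rng.random() < 0.5  # undecided -> coin-flip at recall
    return True                    # spinozan: stuck at initial acceptance

def error_rates(model, trials=10_000, seed=2):
    rng = random.Random(seed)
    rates = {}
    for labelled_true in (True, False):
        errors = sum(recalled_as_true(model, rng) != labelled_true
                     for _ in range(trials))
        rates["true->false" if labelled_true else "false->true"] = errors / trials
    return rates

for model in ("cartesian", "spinozan"):
    print(model, error_rates(model))
```

On these assumptions the Cartesian model predicts errors in both directions (roughly 50% each), while the Spinozan model predicts only false-to-true errors, which is the asymmetry Gilbert et al. reported.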

Therefore they are basically studying trance states. As for overloading a system, I can think of alternative explanations other than it revealing a system below the surface.

Cognitive load means things like 'do the task while remembering this phone number', distracting the 'thoughtful' part of the brain to see what effect this has on the instinctive part.

And I believe much of it is effortless,

It is, on topics that we know about, because the new information contradicts existing information and is quickly 'falsified'. On unknown topics we don't have the ability to do this, and repeated experiments show that we are affected simply by being exposed to such information.

We had a cult active where I live that took people to saunas for extended periods and otherwise tired them, to make them more programmable.

Tiring people out makes 'system 2' less effective, and thus makes people more open to believing. The cult implicitly understands this.

If this "useful fiction" is an accurate presentation of Kahneman's view, then I can agree with it. It's a useful model, but what it's not is something that is observable in mechanical and instantaneous formation of belief with every encountered claim.

System 1 and 2 are used to describe a wide range of phenomena under the topic of 'heuristics and biases'; they are not specific to the OP. They are fictions in the sense that there is no actual, objectively existing system 1 and 2: they are a loose collection of similar but ultimately independent aspects of cognition. They are not fictions in the sense that each individual component is unobservable through experimentation. The fiction is their unity.

While experiments can only give a window into the workings of the mind, if they identify patterns in cognition that are consistently repeated, then it is fair to identify them as resulting from an underlying mechanism at work: in this case, our predisposition to view statements as true and only later reject them as false.

Another experiment showed that the brain takes less time to confirm a statement as true than to reject it as false, which adds weight to this.

Overall, the idea that we comprehend statements neutrally doesn't seem to be supported by a wide range of experimental data, and the findings fit into the larger picture of system 1 and 2 that Kahneman created out of his research on heuristics and biases.

Descartes' approach is based on how people think we should think, but we are repeatedly shown not to think in the ways people believe we should, and to be far less 'rational' than we like to give ourselves credit for.

I feel acknowledging this is important, as the myth of rationality causes all kinds of negative effects on society.
 
So if you understand that something is a lie, does that mean you believe the lie???

Perhaps for a fraction of a second; often we pretty much instantaneously falsify statements, though.

Until and unless there is sufficient information that a truth value gets assigned, judgement can fairly be said to be withheld. As soon as something (the idea) is accepted as true, it is believed.

No choice is involved; it's just definition.

But one of the experiments showed that even when people are told a statement is false, it can readily be believed to be true.

This does not happen the other way round.

We must grasp what the concept or idea is before we can believe. Until we have something (an idea, concept or proposition) to possibly believe, there is nothing to attach a truth value to.

If we attach the truth value "false" to it, then we don't believe it, though we understand it. If the scenario is that no truth value gets assigned (void), then the whole issue of belief has been avoided.

This is saying that comprehending and believing are separate 'actions', rather than comprehending being acceptance until 'corrected'.

It becomes very hard to 'prove' whether comprehension is acceptance, or whether after comprehension we assign a value of true unless we 'correct' our understanding. It doesn't seem to be the case that we go comprehension → neutral → true/false/unknown; it seems to be either comprehension/acceptance → potential correction, or comprehension → acceptance → potential correction.

Don't know.

Given that people are exposed to all sorts of information, much of it false, and that we will inevitably end up believing a fair proportion of it, the more poor-quality information we are exposed to, the greater our misunderstanding of the world.

The faster-paced information is, the less likely it is to be true. For example, weekly news will be more accurate than daily news, and daily news will be more accurate than instantaneous news.

The only way to avoid this is to limit one's exposure to poor-quality information sources; we can't rely on our 'rationality' to self-correct everything.
 