
Do Concepts Exist Apart from Physical Processes in the Brain?

LegionOnomaMoi

Veteran Member
Premium Member
The part in bold is the issue, as it means you're probably assuming that a concept has the physical property of existing; i.e., at a guess, that it is a solid whole that cannot be reduced to neurological patterns.
That's the opposite of what I was saying. Also, just an FYI: neurological refers to the clinical (structural) study of the brain. It's a source of common confusion (especially given that there are clinical neuroscientists and neurologists who work in the cognitive sciences), but I thought you might wish to know.


Materialism is a form of a priori reasoning, as is the belief that ideas can be reduced to material processes of the brain. It is an assumption rather than a conclusion which can be scientifically verified.
I don't think materialism can be scientifically verified (i.e., no matter how many processes we are able to explain without reference to anything outside the "material" world, one can always posit that something non-material exists), but I agree about the brain. The problem is the nature of these material processes and how far we are from being able to make the qualitative shift between the kind of statistical learning that machines and most living systems are capable of and conceptual representation/processing. Not that this is evidence that physical processes do not underlie consciousness, concepts, etc. It isn't. It's just annoying for those of us who look back not just on our own work but on the past ~60 years and see continual promises of answers yield more questions.
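To make "statistical learning" concrete, here is a minimal sketch, assuming nothing beyond a textbook Hebbian rule (the numbers, names, and stimulus are invented purely for illustration, in Python):

    # Hypothetical toy Hebbian learner: co-activity strengthens a connection.
    # Everything here (names, rates, stimulus) is an illustrative assumption.
    import random

    weights = [0.0, 0.0, 0.0]
    eta = 0.1  # learning rate
    for _ in range(1000):
        x = [random.choice([0, 1]) for _ in range(3)]  # random binary "stimulus"
        y = 1 if x[0] and x[1] else 0                  # response driven by units 0 and 1
        for i in range(3):
            weights[i] += eta * x[i] * y               # Hebb: co-active units strengthen
    print(weights)  # weights 0 and 1 grow fastest; weight 2 lags behind

The system comes to track a statistical regularity in its input without representing anything like a concept of that regularity, which is roughly the kind of non-conceptual learning meant here.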
 

LegionOnomaMoi

Veteran Member
Premium Member
I only quoted this paragraph to show the context of the sentence, "Nothing is stored." Where did you learn this?
I don't think I can give you a specific source (I suppose it was material I read near the end of my undergrad). I do recall that two of the most important sources I read then (both from the same series, MIT's Computational Neuroscience) were Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems and Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. But really the place I learned it was in the lab, using neuroimaging technologies (mostly fMRI) to study how the brain works. Reading the literature is obviously required, but one can't really gain a full appreciation of it or of its subject matter without studying first-hand. Why do you ask?
 

ImaTroll

Member
Subjectively, from the standpoint of our conscious awareness of thinking, it is easy to see thoughts as disembodied concepts. That is, it is easy to see them as conceptual products of physical processes in the brain; conceptual products that are somehow and to some extent separate from those physical processes.
Mental concepts do not exist apart from the physical brain, especially god concepts.
 

Brickjectivity

Veteran Member
Staff member
Premium Member
When the most successful theory of all time tells you that a thing can be both A & ~A and that everything is made up of nonlocal, indiscrete components (there are no "particles", just things that are wave-like but whose wave-like properties become infinitesimal as the size of the system exceeds its de Broglie wavelength), there's reason to think deeply about what it means to describe something as "material" or "physical". And as this theory didn't develop like the (mostly) progressive successes of classical physics but was born out of the catastrophic realization that the entire framework of physics was fundamentally flawed, are we really so justified in relying on that framework except where it failed so spectacularly before?
Cool paragraph.

I will restate the opinion that I have stated numerous times now. Numbers and order exist regardless of whether they are being counted or whether someone is checking order. Therefore mental 'concepts' of numbers might not exist, but the numbers themselves, or the order they represent, exist as a potential. Call it the potential to be thought about or the potential to be counted.
 

Excaljnur

Green String
I've been thinking a lot about this question over the past few days (it's disrupted other planned activity, for which I blame Sunstone, but luckily foremost among what I'd planned to finish was commentary on some reading material I now have a scapegoat for). Neuroscientists tend to have a reputation for being the most hard-core reductivists around, as many of them come from backgrounds in psychology and approach the biological sciences as a necessary evil (only a minority go beyond the very basics), one that need only be understood to the point at which psychological phenomena can be descriptively reduced to biological processes. However, there are a lot of exceptions, although a great many of these aren't neuroscientists so much as specialists from other fields who have used their expertise in tandem with neurologists, psychiatrists, neuroscientists, and others whose study is centered on the brain. Then there are those of us who are interested in closing the gap between computational neuroscience and cognitive neuroscience, which is to say constructing explanatory models of cognitive processes through neurophysiological processes.

I started down my path to insanity (well, another kind of insanity than the one I already had) by trying to show that, at the very least, we could rule out quantum theories of consciousness (and not via reliance on the quantum-to-classical transition). This was a mistake, as on the road to quantum physics lies madness. I usually describe my position as that of a physicalist, but one of the problems with modern physics is that many physical systems are mathematical entities. It's not just that the fundamental constituents of all matter are (in the standard view) probability functions. It's not even the ontological indeterminacy. It's the ways in which the classical world we experience can be recovered through our forcing reality to exist as one thing vs. another. This is a central epistemic and philosophical basis (sometimes explicitly stated, as with Stapp's quantum consciousness theory) for quantum theories of mind: we seem to decide to do things in ways that nothing in classical physics can explain but that are built into quantum theory. To me, this is only a good starting point if one can use it as a foundation for identifying how quantum processes give rise to, or even merely contribute to, what little we know of the physics of consciousness, conceptual processing, & cognition in general. It hasn't, and until someone supporting such theories can offer this, I think it is actually preventing us (as quantum physics has in general) from looking anew at "classical physics".

It is often somehow assumed that classical physics is only incomplete when it comes to the atomic/sub-atomic scale (or to mechanics at the scales astrophysicists deal with, if one doesn't view special relativity as classical physics). That is, it's sort of tacitly taken for granted that if we don't need quantum physics or relativistic physics, we can rely on classical physics. In reality, classical physics at its best wasn't a very good tool for understanding most physical phenomena, and it was virtually bankrupt at explaining the dynamics of living systems. And while modern physics & chemistry have made tremendous strides in our ability to model, explain, predict, and in general understand even complex systems that aren't living, the glaring absence of similar progress in biology has motivated a great many to propose that we don't understand the classical realm as well as we thought.

Classical physics arose out of natural philosophy as classical mechanics in order to explain why inanimate things moved the way they did. It continued to do mainly this right up through the origins of quantum physics. Take a system as complex as you'd like, such as the climate, and despite all the nonlinearities, highly complex interactions between and among "networks", etc., it's still all about how forces act on inanimate objects to make them move in particular ways. Living systems (even single-celled organisms) are qualitatively different. They are animate, for one thing, and classical physics was developed to explain the inanimate. Our models of their dynamics involve constant appeals to processes that we use to explain the dynamics of the "parts" of the system, only somehow these parts are also causing the functional processes.

Concepts, however they relate to the physical brain (and not just those of humans), are perhaps the greatest challenge to the fundamental ideas that drove the development of classical physics for centuries. Concepts involved in awareness, the sense of "I/me" that is self-awareness/consciousness, even those of desires, appear to be fundamentally different kinds of "forces" from those that motivated most of physics up to and including today: ethereal, seemingly non-corporeal abstractions that somehow act upon a physical system causally. Moreover, the brain is in some ways the most complex system known (trivially speaking, it clearly must not be, as the body contains the brain and much else, and is therefore necessarily more complex). We can not only model to a high degree of accuracy the kind of "learning" or reaction to the environment that most living systems are capable of, but many of the methods used in AI, soft computing, computational intelligence, etc., are based on how living systems of this type "learn" (non-conceptually). We are not remotely close to understanding or creating models of living systems capable of conceptual processing. So we have a physics that is particularly inadequate when it comes to understanding even the simplest life, which we wish to use to understand something that runs counter to the foundations of classical physics and in most ways of physics itself. Meanwhile, just what it means to be "physical" has become increasingly less clear.

When the most successful theory of all time tells you that a thing can be both A & ~A and that everything is made up of nonlocal, indiscrete components (there are no "particles", just things that are wave-like but whose wave-like properties become infinitesimal as the size of the system exceeds its de Broglie wavelength), there's reason to think deeply about what it means to describe something as "material" or "physical". And as this theory didn't develop like the (mostly) progressive successes of classical physics but was born out of the catastrophic realization that the entire framework of physics was fundamentally flawed, are we really so justified in relying on that framework except where it failed so spectacularly before?

I can see why this would keep you up all night. Have you looked into the 10 Dimensions Theory? Well, despite the fact that it is a theory and uncertain, merely a possible model of the universe(s), it does reveal a critical insight. The way in which we view the world, the way in which causal relationships are measured and observed, may be limited by our point of reference. While this doesn't seem too astounding, think of it in the context of how the general theory of relativity was first criticized as a radical proposition. The theory stipulated that from one perspective, in real-time, an object moving in space took 5 seconds to travel from point A to point B, which was measured from Perspective 1 as 10 meters. However, from Perspective 2, the distance traveled in those 5 seconds from point A to point B was 20 meters. Of course, this mistake can only be honestly made in a 2 dimensional worldview. Although the theory expanded to explain time distortions, which can now be accounted for using relativistic kinematics (essentially making the equations more complicated), the same insight is applied now for the 10 Dimensions theory. The insight is that we may lack a dimension of measurement to understand the unexplained problems that arise in quantum theory like wave-particle contradictions and entanglement, to name just two.

My purpose for explaining all this is to reach the point that our understanding of the universe should be described as 'incomplete' as an alternative to 'incorrect'. For this reason of incompleteness, we are perceiving contradictions because our understanding is limited. The reason why I am stressing the distinction between incorrectness and incompleteness is because of its significance. If we are incorrect, then our methods for understanding the universe and everything in it (scientific method, logic, capacity to reason, etc) would be flawed. This is a hazardous idea because of the conclusion (that I believe you came to) that we must doubt conclusions that we rationally came to since they are based on methods embedded with flaws. However, if we are incomplete in our answers and explanations, then our methods for understanding the universe are not flawed; rather, we lack certain methods or more developed methods. This conclusion, I believe, is more satisfying simply because it is not obstructively skeptical. Coming to the conclusion that we have an incorrect understanding leaves us with self-doubt. Coming to the conclusion that we have an incomplete understanding leaves us almost with a sense of hope, that answers are not as far out of reach, since we can continue to build upon previous advancements rather than scrapping what we have and starting anew.

So I am saying that maybe someday, in the far or near future, quantum and classical worldviews will be reconciled. I admit this is very optimistic, but doubting classical physics and rendering it inapplicable is, in my opinion, akin to subscribing to Hume's global skepticism (literally doubting everything because of a lack of a logical foundation, despite perceived evidence of functionality).

So where does this leave us? It leaves us as conscious creatures with the capacity to reason (which is to say, capable of both good and bad reasoning). This is why I spend my free time reading contemporary literature on psychology, neuroscience, philosophy, biology, etc.; really, anything that I believe will give me a greater understanding of why we think the way we do, because it may perhaps lead to an insight that will allow me to view current knowledge and theories through another perspective (albeit not another physical dimension).

All this being said, I have spent the greater part of the last month making models that describe the physicality of our thoughts, only to run into areas that require more experimentation and are still largely or to some degree unknown. I still have lots of work to do!


So if I may attempt an answer to your question, it would be: yes, and no. We are justified to the extent that we do not rest satisfied with using these models without trying to improve them when we know they are flawed. I believe the human capacity for curiosity is taking care of this dilemma, and hopefully in the near future (hopefully while I'm still alive) there will be substantial advancements.
 

Excaljnur

Green String
I don't think I can give you a specific source (I suppose it was material I read near the end of my undergrad). I do recall that two of the most important sources I read then (both from the same series, MIT's Computational Neuroscience) were Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems and Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. But really the place I learned it was in the lab, using neuroimaging technologies (mostly fMRI) to study how the brain works. Reading the literature is obviously required, but one can't really gain a full appreciation of it or of its subject matter without studying first-hand. Why do you ask?
I've never come across anyone who has claimed that. I've always learned that the brain's two main functions were, generally, to process (encode and retrieve) and store stimuli, but I never encountered an explanation for how the "storage" was done, or whether it could even be called storage at all. So to read that nothing is stored at all kind of makes intuitive sense, because storage in the context of brain mechanisms is, maybe not impossible, but at the very least difficult to conceptualize.
 

LegionOnomaMoi

Veteran Member
Premium Member
I've never come across anyone who has claimed that. I've always learned that the brain's two main functions were, generally, to process (encode and retrieve) and store stimuli, but I never encountered an explanation for how the "storage" was done, or whether it could even be called storage at all. So to read that nothing is stored at all kind of makes intuitive sense, because storage in the context of brain mechanisms is, maybe not impossible, but at the very least difficult to conceptualize.

I have a document I keep adding to every time I run across someone in the cognitive sciences who writes, in some form of specialist literature, on how flawed the "computer metaphor" is, because for many reasons people (beginning with "the experts" who founded what I call "classic" cognitive science) are inclined to think of the brain in ways akin to the computer. One difference among the many is perhaps most important, though: there is no "processor" in the brain and no "memory" (by which I mean no place where encoded information just sits, like the 0's and 1's in a computer). In a computer, all forms of storage are localized and static, in that bits have physical locations that do not change and values that remain static until some command changes them. In the brain, nothing is static. All neurons are always active, and information is represented by patterns of ever-changing rates & timings of spike trains, changes in the individual weights a neuron gives to its different connections to other neurons, synchronization within and among neuronal populations, and so on. But it's not just that everything is always changing; it's that how we represent information (concepts, memories, etc.) is in part also how we process it. There's no distinction between "processor" and "hard drive". And to emphasize how deeply rooted the computer metaphor is, those cognitive psychologists/neuroscientists/etc. who believe that cognition is embodied (i.e., that abstract concepts and notions are largely metaphorical extensions of bodily/perceptual experiences and concepts) will point to neuroimaging data showing that exposure to, e.g., images of tools activates "motor programs". This is a fancy and (IMO) bad way of describing the fact that we rely on sensorimotor brain regions to represent and process information that is seemingly completely unrelated to sensorimotor functions.
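As a toy illustration of the "nothing static" point (my own sketch, not a model from any source cited here; all parameters are arbitrary), compare a stored bit with a leaky integrate-and-fire neuron, where the represented "value" exists only as ongoing spiking that vanishes once the driving activity stops:

    # Hypothetical leaky integrate-and-fire neuron (arbitrary parameters).
    # The "value" is carried by an ongoing firing rate, not by a static bit.
    dt, tau = 0.001, 0.02                  # time step (s), membrane time constant (s)
    v, v_thresh, v_reset = 0.0, 1.0, 0.0
    spikes = []
    for step in range(2000):               # simulate 2 seconds
        t = step * dt
        drive = 1.5 if t < 1.0 else 0.0    # input current during the first second only
        v += (dt / tau) * (drive - v)      # leaky integration toward the input
        if v >= v_thresh:                  # threshold crossing -> spike, then reset
            spikes.append(t)
            v = v_reset
    early = sum(1 for t in spikes if t < 1.0)
    print(f"spikes while driven: {early}, after input removed: {len(spikes) - early}")

Once the drive stops, the "stored" value is simply gone; any persistence would have to come from a network continually re-generating the activity, which is far closer to the picture above than to a hard drive.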

Also, what's especially interesting (to me, anyway) is that those who tend to study higher-level cognition (cognitive neuroscience) tend to use such "computer terms" more than computational neuroscientists who are using computers to build computer models of neuronal networks, neurons, etc.
 

Excaljnur

Green String
I have a document I keep adding to every time I run across someone in the cognitive sciences who writes, in some form of specialist literature, on how flawed the "computer metaphor" is, because for many reasons people (beginning with "the experts" who founded what I call "classic" cognitive science) are inclined to think of the brain in ways akin to the computer. One difference among the many is perhaps most important, though: there is no "processor" in the brain and no "memory" (by which I mean no place where encoded information just sits, like the 0's and 1's in a computer). In a computer, all forms of storage are localized and static, in that bits have physical locations that do not change and values that remain static until some command changes them. In the brain, nothing is static. All neurons are always active, and information is represented by patterns of ever-changing rates & timings of spike trains, changes in the individual weights a neuron gives to its different connections to other neurons, synchronization within and among neuronal populations, and so on. But it's not just that everything is always changing; it's that how we represent information (concepts, memories, etc.) is in part also how we process it. There's no distinction between "processor" and "hard drive". And to emphasize how deeply rooted the computer metaphor is, those cognitive psychologists/neuroscientists/etc. who believe that cognition is embodied (i.e., that abstract concepts and notions are largely metaphorical extensions of bodily/perceptual experiences and concepts) will point to neuroimaging data showing that exposure to, e.g., images of tools activates "motor programs". This is a fancy and (IMO) bad way of describing the fact that we rely on sensorimotor brain regions to represent and process information that is seemingly completely unrelated to sensorimotor functions.

Also, what's especially interesting (to me, anyway) is that those who tend to study higher-level cognition (cognitive neuroscience) tend to use such "computer terms" more than computational neuroscientists who are using computers to build computer models of neuronal networks, neurons, etc.
Wow. That is very interesting stuff. I will keep it in mind.
 

LegionOnomaMoi

Veteran Member
Premium Member
I can see why this would keep you up all night. Have you looked into the 10 Dimensions Theory?
Not specifically 10, in that I view this number as more of an answer required by a formulation of string theory that has mostly been superseded, and because of the nature of these dimensions (which are wholly unlike the 4D spacetime of either special or general relativity). These dimensions are far more mathematical, as one can see in, e.g., the equating of 10-dimensional superstring theory with a 4D gauge theory. Then there's the fact that the preference for 10 dimensions rather than an equally consistent 26 is, I think, one of economy. Finally, there's the nature of the dimensions and the space to consider. For example, we describe quantum systems only as they exist in a space that extends infinitely along infinitely many "directions", called Hilbert space. The 4D spacetime of special relativity is geometrically different from that of general relativity (which is actually a rather big problem, as the latter allows for causal paradoxes such as closed timelike curves, or CTCs). There's a pretty good monograph, Bars & Terning (2010), Extra Dimensions in Space and Time (Multiversal Journeys), that goes over various proposals and is important in how thoroughly it addresses the ways different models "split" the dimensions (e.g., two-time physics isn't ever 2D) and the various alterations on the more standard models beyond the "standard models". I came into physics more through mathematics than anything else, so I got used to thinking about 1,000-dimensional spaces and more before I knew more than the basics of QM. To me, the ontology of spacetime is a more daunting question than posited extra dimensions that resolve mathematical difficulties in cosmology & theoretical physics.


While this doesn't seem too astounding, think of it in the context of how the general theory of relativity was first criticized as a radical proposition. The theory stipulated that from one perspective, in real-time, an object moving in space took 5 seconds to travel from point A to point B, which was measured from Perspective 1 as 10 meters. However, from Perspective 2, the distance traveled in those 5 seconds from point A to point B was 20 meters.
Actually these "paradoxes" were introduced 10 years earlier by Einstein's 1905 paper founding special relativity (and with it length contraction and time dilation). Things get even more troublesome in general relativity, even though we are still dealing with 4-dimensions.
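For reference, the special-relativistic formulas behind those 1905 "paradoxes" are easy to state (standard textbook forms, nothing specific to this thread):

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t' = \gamma \, \Delta t \ \text{(time dilation)}, \qquad L' = L/\gamma \ \text{(length contraction)}

At v = 0.6c, \gamma = 1.25: a moving clock runs slow by 25% and a moving rod is measured 20% shorter. The frame-dependent distances in the example above are effects of this kind.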


The insight is that we may lack a dimension of measurement to understand the unexplained problems that arise in quantum theory like wave-particle contradictions and entanglement, to name just two.
I don't know if I'd call the wave-like nature of all matter a contradiction (wave-particle duality is a misnomer; in reality QM posits that everything is wave-like but becomes increasingly localized the larger the system is). It's counterintuitive, yes, but it's only a problem if it doesn't reflect reality (also, I'm not sure I follow you regarding how nonlocality, entanglement, etc., are addressed by physics beyond the standard model in ways that "resolve" these).
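To put rough numbers on that localization (standard back-of-the-envelope values): the de Broglie wavelength is

    \lambda = \frac{h}{p} = \frac{h}{mv}

For an electron (m \approx 9.11 \times 10^{-31} kg) at 10^6 m/s, \lambda \approx 7 \times 10^{-10} m, comparable to atomic spacing, so its wave-like behavior is readily observed. For a 0.145 kg baseball at 40 m/s, \lambda \approx 1.1 \times 10^{-34} m, absurdly small next to the ball itself, so its wave-like character is unobservable in practice.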

My purpose for explaining all this is to reach the point that our understanding of the universe should be described as 'incomplete' as an alternative to 'incorrect'.
Good point. Although one could argue that incomplete knowledge entails incorrect knowledge, or at least that it can.


If we are incorrect, then our methods for understanding the universe and everything in it (scientific method, logic, capacity to reason, etc) would be flawed.
Many a scientific theory has been held that turned out to be not incomplete but incorrect. Certainly, before another 1905 paper by Einstein, which really sparked the development of quantum physics, it was thought that classical physics was incomplete. However, it was thought barely incomplete (so barely that the general view was that work in physics was basically done). It turned out that it was incomplete because most of classical physics was fundamentally incorrect.

This, though, is not something I see as a problem for logic or scientific methods. There's an oft-quoted dictum: "all models are wrong, but some are useful." It would be one thing if inaccuracies, failures, and being incorrect prevented us from making progress. It's another to recognize that we'll be wrong most of the time to some degree, but that this doesn't prevent us from making firm statements about the nature of the world/cosmos.

I understand, I think, where you are coming from (and certainly your concern), but I think the failure of classical physics was one of the most important wake-up calls, without which the modern sciences wouldn't really be possible. Two different experiments continually confirmed mutually exclusive results, until we were forced to realize that the entire theoretical framework was flawed, with the experiments and their findings closely related to and motivated by theory. Now we know this is so.

This is a hazardous idea because of the conclusion (that I believe you came to) that we must doubt conclusions that we rationally came to since they are based on methods embedded with flaws.

What we realized, and one of the most important results of the failure of classical physics, was that all hypotheses are theory-laden. That is, when we develop hypotheses we wish to test, we do so relying on theory to generate them, theory to inform how we set up our experiments, and theory to interpret the results. The idea that there is The Scientific Method (always an idealization) consisting of
1) Develop hypothesis
2) Try to disprove hypothesis repeatedly
3) If repeatedly confirmed, hypothesis becomes theory

is alien to the sciences and has been for a century. If we don't recognize the inter-relationship between theory, methods, hypotheses, and interpretations, we run the risk of repeating the catastrophic failure of physics during the early 20th century. In any given field, there are numerous disagreements over aspects of theories in that field, or over the theories themselves. One of the reasons that a theory I mentioned in an earlier post, embodied cognition, has persisted alongside an incompatible theory for ~30 years is the ways in which methods and findings are theory-laden.
 

Excaljnur

Green String
Clearly you have the extensive math and physics background that I don't. I was riding on whatever knowledge I could muster (which, as it seems you noticed, was limited) to arrive at my main point. Though you did bring up a point that I agree is important: the inter-relationship between theory, methods, hypotheses, and interpretations. The difficulty that I see often with opposing theories is that they are both based on evidence of some sort, good or bad. However, when you have two theories, both based on good evidence, the issue may become the distinction between 'best evidence' versus 'most evidence'. This would be in the area of interpretation, that is, who believes their methods to have been the most accurate and who can argue their theory is more representative of the "correct" interpretations.

I think we must also recognize that the truth is not discovered in interpretations of results per se, but rather in the results themselves. Suppose a study was conducted and the data were verifiable and trusted; multiple interpretations can then be seen as attempted explanations of the truth. Only once an interpretation is proven correct or incorrect by subsequent research can that interpretation be regarded as true or false. But by that point, it ceases to be an interpretation. Truth-seeking is then the motivation for further research: to reveal that a specific model, or a component of it, is the true model. This process after results are published does allow people to continue promoting a theory that will eventually be disproven, especially since results are published with a favored theory in mind. But I think that is because it is easier to understand raw data in the context of a specific theory, whether or not it is widely accepted. I don't think methods and findings will ever cease to be theory-laden, because research is always conducted with a theory in mind, and since an uncertainty in the theory most likely generated the hypothesis.

What we realized, and one of the most important results of the failure of classical physics, was that all hypotheses are theory-laden. That is, when we develop hypotheses we wish to test, we do so relying on theory to generate them, theory to inform how we set up our experiments, and theory to interpret the results.
I think you said that here.

While theories are often misleading, and interpreting data with one or some theories in mind is easier, I also believe it is necessary. Ideally, we may often picture the disinterested scientist with a neutral position conducting research and reporting data, but I don't think that can ever be the case. A theory seems to be analogous to the "concept" discussed in older posts of this thread. There is individual evidence and interpretation within the theory, but to address all the individual interpretations and pieces of evidence in regular discussion, or in the analysis section of a study, we need to speak in categorical terms, or theory, for the sake of brevity and digestion. But a theory in this sense becomes more abstract, to the point where generalizations are made and assumed to be true according to interpretations, correct or not, that are used to support a favored theory. It is simple and intuitive to use an untested but expected assumption to support a favored theory if it makes it stronger; before you know it, an extensive theory has been built largely on untested assumptions. This natural tendency, I believe, is the most significant danger of not recognizing the inter-relationship you mentioned.
 

Laika

Well-Known Member
Premium Member
That's the opposite of what I was saying. Also, just an FYI: neurological refers to the clinical (structural) study of the brain. It's a source of common confusion (especially given that there are clinical neuroscientists and neurologists who work in the cognitive sciences), but I thought you might wish to know.



I don't think materialism can be scientifically verified (i.e., no matter how many processes we are able to explain without reference to anything outside the "material" world, one can always posit that something non-material exists), but I agree about the brain. The problem is the nature of these material processes and how far we are from being able to make the qualitative shift between the kind of statistical learning that machines and most living systems are capable of and conceptual representation/processing. Not that this is evidence that physical processes do not underlie consciousness, concepts, etc. It isn't. It's just annoying for those of us who look back not just on our own work but on the past ~60 years and see continual promises of answers yield more questions.

Thanks. I've only heard "neurological" from a couple of sources (watching House M.D. mainly), so that was helpful. Yeah, materialism cannot be scientifically verified; that is its main problem. Its advantage is that it assumes that all questions can be answered: if everything is material or physical, it is therefore observable and able to be studied. Dialectical Materialism has an extremely difficult conception of causality, because it is based on internal contradiction and takes a long time to grasp, but again it assumes answers are possible, which (more than any other reason) is why I use it, as that can be very empowering. I have only a very vague notion of what's going on scientifically, but it does seem that a lot of explanations people have had are unraveling (quantum mechanics comes to mind). Would I be right in thinking that?
 

LegionOnomaMoi

Veteran Member
Premium Member
watching House M.D. mainly
I really enjoyed that show. Perhaps it has something to do with House being based on Sherlock Holmes (after all, perhaps my favorite show, the BBC's Sherlock, is not only set in modern times but actually concerns Sherlock Holmes).

Its advantage is that it assumes that all questions can be answered: if everything is material or physical, it is therefore observable and able to be studied
There's a great deal to be said for such an assumption. At worst, it turns out that one can't explain something, in which case one is at least in a position to assert reasons for why this is so; at best, one is able to explain "everything". The one problem (as we've seen from historical analyses of the sciences by those like Conant and particularly Kuhn) is that the more rigorous one's formulation of epistemic justification is made, the more one may find the foundations swept out from under one's feet (as in the case of Hilbert and some of the main goals of logical positivism, or in physics). Marxist materialism rightly criticized the epistemology Hegel propounded (this formalist approach was to come to a rather crashing halt thanks to Turing and Gödel, while philosophers of science and logicians such as Quine, Putnam, Popper, etc., tried to rebuild a structure for empirical inquiry that did not depend upon the hope that all of arithmetic, and by extension mathematics, could be axiomatized and with it the then-queen of the sciences become itself the logic of science). However, Marx's denial of formalism had its price too. He took for granted a notion so alien to human culture that it arose only once: the belief that a systematic investigation into the nature of the cosmos was both possible and desirable and could (and should) be conducted within the appropriate framework. Before early modern "science" developed in Europe, the closest humans ever came to science was probably among the Greeks, where the formal framework/logic was worked out as nowhere else. However, it wasn't applied to the systematic study of natural phenomena.

The problem is that explanations of natural phenomena come to us naturally. We are predisposed to see cause where none exists and patterns where there are none. It is primarily the formalism that Hegel (among others) espoused which allows empiricism to exist as a successful program. Yet Marx and Engels could no more realize this, given their historical context, than Hegel could anticipate the fall of logical positivism.

Nor could any of them foresee the different formalisms now used as bases for epistemic justification that are entirely compatible with the empiricism of Marx et al.: game theory, Bayesian inference/reasoning (or subjective probability), non-classical logics, etc.
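As one concrete instance of those formalisms, an entire Bayesian update fits in a few lines (a minimal sketch; the prior and likelihoods are invented for illustration):

    # Toy Bayesian update; all probabilities are illustrative assumptions.
    prior = 0.5              # P(H): initial credence in a hypothesis
    p_e_given_h = 0.8        # P(E|H): chance of the evidence if H is true
    p_e_given_not_h = 0.3    # P(E|~H): chance of the evidence if H is false

    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h  # total probability
    posterior = prior * p_e_given_h / p_e                      # Bayes' theorem
    print(f"P(H|E) = {posterior:.3f}")  # credence rises from 0.5 to ~0.727

Epistemic justification here is a matter of coherent updating rather than axiomatic foundations, which is what makes it compatible with the empiricism in question.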

Dialectical Materialism has an extremely difficult conception of causality
Any conception of causality that wasn't extremely difficult would, I think, be a very poor one. Causality is extremely subtle, as it not only involves the various categories Aristotle recognized and more, but common models (such as counterfactual causation) do not hold for much of modern physics and we remain without any consensus as to how to interpret the causal connections when Bell's inequality is violated.
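For concreteness, the inequality in question can be stated in one line (the CHSH form):

    S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2 \ \text{(local realism)}, \qquad |S| \le 2\sqrt{2} \approx 2.83 \ \text{(quantum mechanics)}

Experiments find the larger value, and the unsettled question is precisely what causal story to tell about correlations that no local hidden-variable model can produce.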

Would I be right in thinking that?
I've no idea. Only questions that lead to more questions. :)
 

LegionOnomaMoi

Veteran Member
Premium Member
Clearly you have the extensive math and physics background that I don't.
Well I have wasted a lot of time studying things no sane person would wish to. I would like to think some kind of knowledge emerged from such frivolous activities.

The difficulty that I see often with opposing theories is that they are both based on evidence of some sort, good or bad. However, when you have two theories, both based on good evidence, the issue may become the distinction between 'best evidence' versus 'most evidence'.

I recently shared a wake-up call I received early on regarding this, which I have quoted below:
Very early in my graduate career, the head of my lab ran a weekly graduate seminar (some post-docs were there as well) largely devoted to criticizing a widespread theory in neuroscience—embodied cognition—that he (and many others) fundamentally disagreed with. Like any good advisor/lab director, he sought to indoctrinate his graduate students, and there was one in particular with whom he had particular success. At one point during the seminar, this graduate student scathingly criticized a peer-reviewed study supporting embodied cognition, but surprisingly the lab director actually reined her in. He said that while the study was flawed, it was an improvement on previous studies by the same author and similar researchers, and that it addressed many of the “flaws” we had covered in studies in previous weeks. Here comes the interesting/important part. He went on to say that it could be that future studies would continue to find evidence for embodied cognition and would do so without any flaws. In that case, he stated, we would have to realize that the methods used by neuroscientists were inadequate.


Now, as anybody who has taken even high school science classes can tell you, The Scientific Method (TSM) says that if you continually confirm some hypothesis, then you accept it as theory (at least until it is falsified). So why was this distinguished professor, with an academic pedigree few could match, so blatantly rejecting the basis for TSM? I asked him how we could determine when the methods were the problem vs. the theory (that is, given many experiments in some field of science that all support the same theory, how can we determine whether findings reflect reality or poor methods)? He replied with something to the effect of “when you have a really good reason for thinking that certain evidence should exist but you don’t find it, it’s because your methods are wrong.”


This would be in the area of interpretation, that is, who believes their methods to have been the most accurate and who can argue their theory is more representative of the "correct" interpretations.

Suppose a study was conducted and the data were verifiable and trusted; multiple interpretations can then be seen as attempted explanations of the truth. Only once an interpretation is proven correct or incorrect by subsequent research can that interpretation be regarded as true or false. But by that point, it ceases to be an interpretation.
I like that. Yes, I think I basically agree. The problem (at least at times) is knowing when that point is reached.

Ideally, we may often picture the disinterested scientist with a neutral position conducting research and reporting data, but I don't think that can ever be the case.
I agree. Despite occasional appearances to the contrary, scientists are human too. We are all subject to biases, and I think often the best we can do is hope to recognize these.
 

Bunyip

pro scapegoat
I challenge you to explain why there is essentially a connection between 'abstract' and 'non-physical'.
Abstract MEANS non-physical. Abstract is defined as 'existing as a thought or idea, but not having a concrete or physical existence.'
 